Academic literature on the topic 'Memory management'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Memory management.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Memory management"
Raghuvanshi, Durgesh. "Memory Management in Operating System." International Journal of Trend in Scientific Research and Development Volume-2, Issue-5 (August 31, 2018): 2346–47. http://dx.doi.org/10.31142/ijtsrd18342.
Bharti, Bhavna, and Prof Avinash Sharma. "Memory Management in BigData: A Perpective View." International Journal of Trend in Scientific Research and Development Volume-2, Issue-4 (June 30, 2018): 1993–98. http://dx.doi.org/10.31142/ijtsrd14436.
Karunarathna, Harshinie, Jennifer Brown, Riccardo Briganti, Dabravka Pockrajac, Ivan Haigh, Christian Beck, Pushpa Dissanayake, Diego Pedrozo-Acuna, Kathryn Sparrow, and Mathew Wadey. "MODELLING MEMORY OF COASTAL FLOOD SYSTEMS." Coastal Engineering Proceedings 1, no. 34 (October 28, 2014): 19. http://dx.doi.org/10.9753/icce.v34.management.19.
Nilakanta, Sree, L. L. Miller, and Dan Zhu. "Organizational Memory Management." Journal of Database Management 17, no. 1 (January 2006): 85–94. http://dx.doi.org/10.4018/jdm.2006010106.
Milligan, Peter. "Distributed memory management." Microprocessing and Microprogramming 38, no. 1-5 (September 1993): 97–98. http://dx.doi.org/10.1016/0165-6074(93)90131-4.
Hung, Ting-Wei, Yen-Hao Chen, and Yi-Yu Liu. "Memory management for dual-addressing memory architecture." IEICE Electronics Express 10, no. 15 (2013): 20130467. http://dx.doi.org/10.1587/elex.10.20130467.
JOHNSON, THEODORE, and TIMOTHY A. DAVIS. "PARALLEL BUDDY MEMORY MANAGEMENT." Parallel Processing Letters 02, no. 04 (December 1992): 391–98. http://dx.doi.org/10.1142/s0129626492000544.
Perez, Gilberto, and Isabel Ramos. "Understanding Organizational Memory from the Integrated Management Systems (ERP)." Journal of Information Systems and Technology Management 10, no. 3 (December 30, 2013): 541–60. http://dx.doi.org/10.4301/s1807-17752013000300005.
Milenkovic, M. "Microprocessor memory management units." IEEE Micro 10, no. 2 (April 1990): 70–85. http://dx.doi.org/10.1109/40.52948.
Aiken, Alexander, Manuel Fähndrich, and Raph Levien. "Better static memory management." ACM SIGPLAN Notices 30, no. 6 (June 1995): 174–85. http://dx.doi.org/10.1145/223428.207137.
Dissertations / Theses on the topic "Memory management"
Panthulu, Pradeep. "Intelligent Memory Management Heuristics." Thesis, University of North Texas, 2003. https://digital.library.unt.edu/ark:/67531/metadc4399/.
Mårtensson, Henrik. "Memory Management of Manycore Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-107879.
Stojanovic, Marta. "Automatic memory management in Java." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ65392.pdf.
Full textSánchez, Navarro Francisco Jesús. "Smart memory management through locality analysis." Doctoral thesis, Universitat Politècnica de Catalunya, 2001. http://hdl.handle.net/10803/5965.
Full textHemos desarrollado un análisis estático de localidad de datos. Este análisis está basado en los vectores de reuso y contiene los tres típicos pasos: reuso, volumen y análisis de interferencias. Comparado con trabajos previos, tanto el análisis de volúmenes como el de interferencias ha sido mejorado utilizando información de profiling así como un análisis de interferencias más preciso. El analizador de localidad de datos propuesto ha sido incluido como un paso más en un compilador de investigación. Los resultados demuestran que, para aplicaciones numéricas, el análisis es muy preciso y el overhead de cálculo es bajo. Este análisis es la base para todas las otras partes de la tesis. Además, para algunas propuestas en la última parte de la tesis, hemos usado un análisis de localidad de datos basado en las ecuaciones de fallos de cache. Este análisis, aunque requiere más tiempo de cálculo, es más preciso y más apropiado para cachés asociativas por conjuntos. El uso de dos análisis de localidad diferentes también demuestra que las propuestas arquitectónicas de esta tesis son independientes del análisis de localidad particular utilizado.
Después de mostrar la precisión del análisis, lo hemos utilizado para estudiar el comportamiento de localidad exhibido por los programas SPECfp95. Este tipo de análisis es necesario antes de proponer alguna nueva técnica ya que ayuda al investigador a entender porqué ocurren los fallos de caché. Se muestra que con el análisis propuesto se puede estudiar de forma muy precisa la localidad de un programa y detectar donde estan los "puntos negros" así como la razón de estos fallos en cache. Este estudio del comportamiento de localidad de diferentes programas es la base y motivación para las diferentes técnicas propuestas en esta tesis para mejorar el rendimiento de la memoria.
Así, usando el análisis de localidad de datos y basándonos en los resultados obtenidos después de analizar el comportamiento de localidad de un conjunto de programas, proponemos utilizar este análisis con el objetivo de guiar tres técnicas diferentes: (i) manejo de caches multimódulo, (ii) prebúsqueda software para bucles con planificación módulo, y (iii) planificación de instrucciones de arquitecturas VLIW clusterizadas.
El primer uso del análisis de localidad propuesto es el manejo de una novedosa organización de caché. Esta caché soporta bypass y/o está compuesta por diferentes módulos, cada uno orientado a explotar un tipo particular de localidad. La mayor diferencia de esta caché con respecto propuestas previas es que la decisión de "cachear" o no, o en qué módulo un nuevo bloque es almacenado, está controlado por algunos bits en las instrucciones de memoria ("pistas" de localidad). Estas "pistas" (hints) son fijadas en tiempo de compilación utilizando el análisis de localidad propuesto. Así, la complejidad del manejo de esta caché se mantiene bajo ya que no requiere ningún hardware adicional. Los resultados demuestran que cachés más pequeñas con un manejo más inteligente pueden funcionar tan bien (o mejor) que cachés convencionales más grandes.
Hemos utilizado también el análisis de localidad para estudiar la interacción entre la segmentación software y la prebúsqueda software. La segmentación software es una técnica muy efectiva para la planificación de código en bucles (principalmente en aplicaciones numéricas en procesadores VLIW). El esquema más popular de prebúsqueda software se llama planificación módulo. Muchos trabajos sobre planificación módulo se pueden encontrar en la literatura, pero casi todos ellos consideran una suposición crítica: consideran un comportamiento optimista de la cache (en otras palabras, usan siempre la latencia de acierto cuando planifican instrucciones de memoria). Así, los resultados que presentan ignoran los efectos del bloqueo debido a dependencias con instrucciones de memoria. En esta parte de la tesis mostramos que esta suposición puede llevar a planificaciones cuyo rendimiento es bastante más bajo cuando se considera una memoria real. Nosotros proponemos un algoritmo para planificar instrucciones de memoria en bucles con planificación módulo. Hemos estudiado diferentes estrategias de prebúsqueda software y finalmente hemos propuesto un algoritmo que realiza prebúsqueda basándose en el análisis de localidad y en la forma del grafo de dependencias del bucle. Los resultados obtenidos demuestran que el esquema propuesto mejora el rendimiento de las otras heurísticas ya que obtiene un mejor compromiso entre tiempo de cálculo y de bloqueo.
Finalmente, el último uso del análisis de localidad estudiado en esta tesis es para guiar un planificador de instrucciones para arquitecturas VLIW clusterizadas. Las arquitecturas clusterizadas están siendo una tendencia común en el diseño de procesadores empotrados/DSP. Típicamente, el núcleo de estos procesadores está basado en un diseño VLIW el cual particiona tanto el banco de registros como las unidades funcionales. En este trabajo vamos un paso más allá y también hacemos la partición de la memoria caché. En este caso, tanto las comunicaciones entre registros como entre memorias han de ser consideradas. Nosotros proponemos un algoritmo que realiza la partición del grafo así como la planificación de instrucciones en un único paso en lugar de hacerlo secuencialmente, lo cual se demuestra que es más efectivo. Este algoritmo es mejorado añadiendo una análisis basado en las ecuaciones de fallos de cache con el objetivo de guiar en la planificación de las instrucciones de memoria para reducir no solo comunicaciones entre registros, sino también fallos de cache.
Cache memories were incorporated in microprocessors early on and remain the most common solution to the gap between processor and memory speeds. However, many studies point out that cache storage capacity is often wasted, which has a direct impact on processor performance. Although a cache is designed to exploit different types of locality, all memory references are handled in the same way, ignoring their particular locality behavior. This restricted use of per-access locality information can limit the effectiveness of the cache. In this thesis we show how a data locality analysis can help the researcher understand where and why cache misses occur, and we then propose different techniques that make use of this information to improve cache performance. In these techniques, locality information obtained by the locality analyzer is passed from the compiler to the hardware through the ISA to guide the management of memory accesses.
We have developed a static data locality analysis. This analysis is based on reuse vectors and performs the three typical steps: reuse, volume, and interference analysis. Compared with previous works, both the volume and interference analyses have been improved by using profile information as well as a more precise interference analysis. The proposed data locality analyzer has been inserted as another pass in a research compiler. Results show that, for numerical applications, the analysis is very accurate and its computing overhead is low. This analysis is the basis for all other parts of the thesis. In addition, for some proposals in the last part of the thesis we have used a data locality analysis based on cache miss equations. This analysis, although more time consuming, is more accurate and more appropriate for set-associative caches. The use of two different locality analyzers also shows that the architectural proposals of this thesis are independent of the particular locality analysis employed.
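To make the reuse-vector idea concrete, here is a small illustrative sketch (not the thesis's implementation; the function name and the restriction to 2x2 integer access matrices are ours): a temporal self-reuse vector of an affine array reference is a nonzero iteration-space direction lying in the null space of its access matrix, i.e. a direction along which the reference touches the same element again.

```python
def self_reuse_vector(access):
    """Temporal self-reuse direction for a 2x2 integer access matrix:
    a nonzero iteration-space vector r with access @ r == 0 (the same
    array element is touched at iterations I and I + r), or None when
    the access is injective and there is no temporal self-reuse."""
    (a, b), (c, d) = access
    if a * d - b * c != 0:
        return None              # nonsingular matrix: every iteration is new
    if (a, b) != (0, 0):
        return (-b, a)           # orthogonal to row 1; row 2 is dependent
    if (c, d) != (0, 0):
        return (-d, c)
    return (1, 0)                # constant reference: reuse everywhere

# A[i][i] in a loop nest (i, j) has access matrix rows (1,0) and (1,0):
# the same element is reused along the inner loop, direction (0, 1).
print(self_reuse_vector(((1, 0), (1, 0))))   # -> (0, 1)
# A[i][j] touches a fresh element every iteration: no temporal reuse.
print(self_reuse_vector(((1, 0), (0, 1))))   # -> None
```

A full analyzer would extend this to group reuse between distinct references, and feed the resulting reuse vectors into the volume and interference steps.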
After showing the accuracy of the analysis, we have used it to study the locality behavior exhibited by the SPECfp95 programs. This kind of analysis is necessary before proposing any new technique, since it helps the researcher understand why cache misses occur. We show that with the proposed analysis we can study the locality of a program very accurately and detect where the hot spots are, as well as the reason for the misses. This study of the locality behavior of different programs is the basis and motivation for the techniques proposed in this thesis to improve memory performance.
Thus, building on the data locality analysis and on the results obtained from studying the locality behavior of a set of programs, we use this analysis to guide three different techniques: (i) management of multi-module caches, (ii) software prefetching for modulo scheduled loops, and (iii) instruction scheduling for clustered VLIW architectures.
The first use of the proposed data locality analysis is to manage a novel cache organization. This cache supports bypassing and/or is composed of different modules, each one oriented to exploiting a particular type of locality. The main difference from previous proposals is that the decision of whether to cache a block at all, and in which module a newly fetched block is allocated, is controlled by some bits in the memory instructions (locality hints). These hints are set at compile time using the proposed locality analysis, so the management complexity of this cache stays low: no additional hardware is required. Results show that smaller caches with smart management can perform as well as (or better than) larger conventional caches.
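As a toy illustration of the hint-driven management described above (the two-bit hint encoding, the class name, and the LRU modules are invented for this sketch and are not taken from the thesis):

```python
# Hypothetical sketch: two small cache modules (one aimed at temporal,
# one at spatial locality) plus a bypass path. The hint attached to each
# memory access stands in for the locality bits the compiler would set.
from collections import OrderedDict

BYPASS, TEMPORAL, SPATIAL = 0, 1, 2  # invented hint values

class HintedCache:
    def __init__(self, temporal_lines=4, spatial_lines=4):
        self.modules = {TEMPORAL: OrderedDict(), SPATIAL: OrderedDict()}
        self.capacity = {TEMPORAL: temporal_lines, SPATIAL: spatial_lines}

    def access(self, line_addr, hint):
        for mod in self.modules.values():    # lookup hits in any module
            if line_addr in mod:
                mod.move_to_end(line_addr)   # refresh LRU position
                return "hit"
        if hint == BYPASS:
            return "miss (bypassed)"         # low-locality data: no pollution
        mod = self.modules[hint]
        if len(mod) >= self.capacity[hint]:
            mod.popitem(last=False)          # evict the LRU line
        mod[line_addr] = True
        return "miss (allocated)"

cache = HintedCache()
cache.access(0x10, TEMPORAL)           # cold miss, allocated
print(cache.access(0x10, TEMPORAL))    # -> hit
print(cache.access(0x20, BYPASS))      # -> miss (bypassed)
```

The point of the design is visible even at this scale: the hardware stays a plain lookup plus LRU, while all placement intelligence lives in the compile-time hints.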
We have also used the locality analysis to study the interaction between software pipelining and software prefetching. Software pipelining has been shown to be a very effective scheduling technique for loops (mainly in numerical applications for VLIW processors). The most popular scheme for software pipelining is called modulo scheduling. Many works on modulo scheduling can be found in the literature, but almost all of them make a critical assumption: they assume an optimistic behavior of the cache (in other words, they use the hit latency when a memory instruction is scheduled). Thus, the results they present ignore the effect of stalls due to dependences on memory instructions. In this part of the thesis we show that this assumption can lead to schedules whose performance is rather low when a real memory is considered. We therefore propose an algorithm to schedule memory instructions in modulo scheduled loops. We have studied different software prefetching strategies and finally propose an algorithm that performs prefetching based on the locality analysis and the shape of the loop dependence graph. The results obtained show that the proposed scheme outperforms other heuristic approaches, since it achieves a better trade-off between compute and stall time.
Finally, the last use of the locality analysis studied in this thesis is to guide an instruction scheduler for a clustered VLIW architecture. Clustered architectures are becoming a common trend in the design of embedded/DSP processors. Typically, the core of these processors is based on a VLIW design which partitions both the register file and the functional units. In this work we go a step further and also partition the cache memory. Then, both inter-register and inter-memory communications have to be taken into account. We propose an algorithm that performs both graph partitioning and instruction scheduling in a single step instead of doing them sequentially, which is shown to be more effective. This algorithm is improved by adding an analysis based on the cache miss equations, in order to guide the scheduling of memory instructions across clusters with the aim of reducing not only inter-register communications but also cache misses.
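A rough sketch of the combined partitioning-and-scheduling idea (the greedy cost model, all names, and the two-cluster setup are ours, standing in for the thesis's actual algorithm and its cache-miss-equation guidance):

```python
# Hypothetical sketch: place each instruction, in program order, on the
# cluster where it incurs the fewest inter-cluster communications with
# its already-placed predecessors, with a per-cluster miss estimate as
# an extra cost term (a crude stand-in for miss-equation guidance).

def assign_clusters(deps, miss_cost, n_clusters=2):
    """deps: {instr: [predecessors]} in program order.
    miss_cost: {instr: {cluster: estimated miss penalty}}.
    Returns {instr: cluster}."""
    placement = {}
    for instr, preds in deps.items():
        best, best_cost = 0, float("inf")
        for cluster in range(n_clusters):
            # one communication per predecessor living on another cluster
            comms = sum(1 for p in preds if placement.get(p, cluster) != cluster)
            cost = comms + miss_cost.get(instr, {}).get(cluster, 0)
            if cost < best_cost:
                best, best_cost = cluster, cost
        placement[instr] = best
    return placement

deps = {"ld1": [], "ld2": [], "add": ["ld1", "ld2"], "st": ["add"]}
miss_cost = {"ld1": {0: 0, 1: 2}, "ld2": {0: 0, 1: 2}}  # data near cluster 0
print(assign_clusters(deps, miss_cost))  # all four land on cluster 0
```

Doing placement and scheduling in one pass, as here, lets the memory-side cost steer register-side decisions at the same time, which is the effect the thesis reports as more effective than sequential phases.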
Hanai, Ryo. "Memory management for real-time applications." 京都大学 (Kyoto University), 2007. http://hdl.handle.net/2433/135980.
Wilk, Daniel. "Hierarchical application-oriented physical memory management." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29419.pdf.
Mohapatra, Dushmanta. "Coordinated memory management in virtualized environments." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54454.
Feeley, Michael Joseph. "Global memory management for workstation networks." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6997.
Yang, Shufan. "Memory interconnect management on a chip multiprocessor." Thesis, University of Manchester, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520682.
Österlund, Erik. "Automatic memory management system for automatic parallelization." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-13693.
Books on the topic "Memory management"
Baker, Henry G., ed. Memory Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60368-9.
Bekkers, Yves, and Jacques Cohen, eds. Memory Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/bfb0017181.
American Institute of Certified Public Accountants, Information Technology Division, ed. Memory management. New York: American Institute of Certified Public Accountants, 1993.
Neuberger, Marc J., ed. C++ memory management. New York: Windcrest/McGraw-Hill, 1994.
Dorfman, Len. C++ memory management. New York: Windcrest/McGraw-Hill, 1994.
Dorfman, Len. C memory management techniques. Blue Ridge Summit, PA: Windcrest/McGraw Hill, 1993.
Plattner, Hasso, and Alexander Zeier. In-Memory Data Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29575-1.
Plattner, Hasso, and Alexander Zeier. In-Memory Data Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19363-7.
Kokosa, Konrad. Pro .NET Memory Management. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4027-4.
Full textBook chapters on the topic "Memory management"
Lister, A. M., and R. D. Eager. "Memory Management." In Fundamentals of Operating Systems, 51–75. New York, NY: Springer New York, 1993. http://dx.doi.org/10.1007/978-1-4757-2252-9_5.
Kävrestad, Joakim. "Memory Management." In Fundamentals of Digital Forensics, 175–78. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96319-8_15.
Aycock, John. "Memory Management." In Retrogame Archeology, 17–35. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30004-7_2.
Villela, Roger. "Memory Management." In Introducing Mechanisms and APIs for Memory Management, 1–12. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5416-5_1.
Lee, Keith. "Memory Management." In Pro Objective-C, 53–74. Berkeley, CA: Apress, 2013. http://dx.doi.org/10.1007/978-1-4302-5051-7_4.
Lazzarini, Victor. "Memory Management." In Computer Music Instruments II, 95–103. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13712-0_8.
Gabbrielli, Maurizio, and Simone Martini. "Memory Management." In Undergraduate Topics in Computer Science, 91–118. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84882-914-5_5.
Halvorsen, Ole Henry, and Douglas Clarke. "Memory Management." In OS X and iOS Kernel Programming, 99–117. Berkeley, CA: Apress, 2011. http://dx.doi.org/10.1007/978-1-4302-3537-8_6.
Olsson, Mikael. "Memory Management." In C Quick Syntax Reference, 63–66. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-6500-9_19.
Dalrymple, Mark, and Scott Knaster. "Memory Management." In Learn Objective-C on the Mac, 161–77. Berkeley, CA: Apress, 2009. http://dx.doi.org/10.1007/978-1-4302-1816-6_9.
Full textConference papers on the topic "Memory management"
Jones, Richard. "Dynamic memory management." In the 2007 International Lisp Conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1622123.1622126.
Egger, Bernhard, Jaejin Lee, and Heonshik Shin. "Scratchpad memory management for portable systems with a memory management unit." In the 6th ACM & IEEE International conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1176887.1176933.
Berger, Emery. "Session details: Explicit memory management." In ISMM07: International Symposium on Memory Management. New York, NY, USA: ACM, 2007. http://dx.doi.org/10.1145/3259088.
Jain, Jignesh, Meet Satra, Pratik Kumar Jain, and Era Johri. "Memory management in android." In 2017 International Conference on Big Data Analytics and Computational Intelligence (ICBDAC). IEEE, 2017. http://dx.doi.org/10.1109/icbdaci.2017.8070844.
Aiken, Alexander, Manuel Fähndrich, and Raph Levien. "Better static memory management." In the ACM SIGPLAN 1995 conference. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/207110.207137.
Koca, Caglar, and Ozgur B. Akan. "Quantum Memory Management Systems." In NANOCOM' 15: ACM The Second Annual International Conference on Nanoscale Computing and Communication. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2800795.2800810.
Gay, David, Rob Ennals, and Eric Brewer. "Safe manual memory management." In the 6th international symposium. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1296907.1296911.
"Session 9: Memory Management." In 2015 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS). IEEE, 2015. http://dx.doi.org/10.1109/rtas.2015.7108451.
Li, Zhijuan. "Session details: Memory management." In LCTES03: ACM SIGPLAN Symposium on Languages, Compilers, and Tools for Embedded Systems. New York, NY, USA: ACM, 2003. http://dx.doi.org/10.1145/3244029.
Shrivastava, Aviral. "Session details: Memory management." In ESWEEK'12: Eighth Embedded System Week. New York, NY, USA: ACM, 2012. http://dx.doi.org/10.1145/3250281.
Full textReports on the topic "Memory management"
Carr, R. Virtual Memory Management. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1453979.
Black, David L., Anoop Gupta, and Wolf-Dietrich Weber. Competitive Management of Distributed Shared Memory. Fort Belvoir, VA: Defense Technical Information Center, January 1988. http://dx.doi.org/10.21236/ada207322.
McCullough, Daryl, and Tanya Korelsky. EmmA, An Evolution Memory Management Assistant. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada373403.
Ralston, Lynda M., Andrew M. Bardos, William R. Beaudet, and Lee M. Burberry. Database Management Using Optical Associative Memory. Fort Belvoir, VA: Defense Technical Information Center, February 1998. http://dx.doi.org/10.21236/ada342499.
Terauchi, Tachio, and Alex Aiken. Memory Management with Use-Counted Regions. Fort Belvoir, VA: Defense Technical Information Center, March 2004. http://dx.doi.org/10.21236/ada603317.
Bartlett, Roscoe Ainsworth. Teuchos C++ memory management classes, idioms, and related topics, the complete reference: a comprehensive strategy for safe and efficient memory management in C++ for high performance computing. Office of Scientific and Technical Information (OSTI), May 2010. http://dx.doi.org/10.2172/992335.
Awad, Amro, Simon David Hammond, Gwendolyn Renae Voskuilen, and Robert J. Hoekstra. Samba: A Detailed Memory Management Unit (MMU) for the SST Simulation Framework. Office of Scientific and Technical Information (OSTI), December 2016. http://dx.doi.org/10.2172/1528796.
Su, Chun-Yi. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems. Office of Scientific and Technical Information (OSTI), December 2014. http://dx.doi.org/10.2172/1179104.
Vineyard, Craig Michael, and Stephen Joseph Verzi. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing. Office of Scientific and Technical Information (OSTI), September 2017. http://dx.doi.org/10.2172/1396076.
Brown, J. C., and A. A. Mirin. MICA, a facility to achieve portability for message-passing and dynamic memory management in FORTRAN. Office of Scientific and Technical Information (OSTI), January 1994. http://dx.doi.org/10.2172/10123838.