Academic literature on the topic 'Cache memory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cache memory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Cache memory"

1

Verbeek, N. A. M. "Food cache recovery by Northwestern Crows (Corvus caurinus)." Canadian Journal of Zoology 75, no. 8 (August 1, 1997): 1351–56. http://dx.doi.org/10.1139/z97-760.

Abstract:
This field study examined experimentally whether Northwestern Crows (Corvus caurinus) used random search or memory to relocate food caches. The crows cached food items in the ground, one per cache, and covered the cache before leaving the site. Most caches were recovered within 24 h. The crows found caches made by me 1 m from their own caches significantly less often than they found their own caches. Replacing the covering of the cache with material other than the crows used did not significantly affect recovery success, but the crows found significantly fewer of their caches when the latter were experimentally moved 15 cm. Adding a 25 cm long stick to the site 15 cm from the cache significantly decreased a crow's ability to relocate its cache, but not when it was placed 30 cm away. A 50 cm long stick placed 15 or 30 cm away had the same negative effect on a crow's ability to relocate its cache, but not when it was placed 45 cm away. When memory is used, recovery success can be as high as 99%; when random search is used, it can be as low as 6%.
2

Bednekoff, Peter A., and Russell P. Balda. "Social Caching and Observational Spatial Memory in Pinyon Jays." Behaviour 133, no. 11-12 (1996): 807–26. http://dx.doi.org/10.1163/156853996x00251.

Abstract:
In the wild, pinyon jays (Gymnorhinus cyanocephalus) live in large, integrated flocks and cache tens of thousands of seeds per year. This study explored social aspects of caching and recovering by pinyon jays. In Experiment 1, birds cached in a large experimental room under three conditions: alone, paired with a dominant, and paired with a subordinate. In all cases, birds recovered caches alone seven days later. Individuals ate more seeds before caching when alone than when paired and started caching sooner when subordinate than when dominant. Pinyon jays accurately returned to sites containing their own caches but not to sites containing caches made by partner birds. However, they went to areas containing partner caches sooner than would be expected, indicating memory for the general areas containing caches made by other pinyon jays. In Experiments 2 and 3 birds were placed closer to each other and allowed to recover one or two days after caching. In Experiment 2, both free-flying and caged observers found caches with accuracies above chance. Cachers made significantly fewer errors than observers. During Experiment 3, caged observers saw the cachers recover some seeds one day after they were cached. On the next day cachers and observers were separately allowed to visit all cache sites. Both cachers and observers performed accurately and did not differ in accuracy. Neither group discriminated between extant and depleted caches. Observational spatial memory in pinyon jays may allow economical cache robbery by wild pinyon jays under some circumstances, particularly shortly after caches are created.
3

Drach, N., A. Gefflaut, P. Joubert, and A. Seznec. "About Cache Associativity in Low-Cost Shared Memory Multi-Microprocessors." Parallel Processing Letters 05, no. 03 (September 1995): 475–87. http://dx.doi.org/10.1142/s0129626495000436.

Abstract:
Sizes of on-chip caches on current commercial microprocessors range from 16 Kbytes to 36 Kbytes. These microprocessors can be used directly in the design of a low-cost, single-bus, shared-memory multiprocessor without any second-level cache. In this paper, we explore the viability of such a multi-microprocessor. Simulation results clearly establish that the performance of such a system will be quite poor if the on-chip caches are direct-mapped. On the other hand, when the on-chip caches are partially associative, the achieved level of performance is quite promising. In particular, two recently proposed innovative cache structures, the skewed-associative cache organization and the semi-unified cache organization, are shown to work well.
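
To make the skewing idea concrete, here is a minimal Python sketch of how a skewed-associative cache indexes each way with a different hash of the block address, so blocks that conflict in one way tend to map apart in another; the XOR-based skewing function and the sizes are illustrative assumptions, not the exact design from the paper.

```python
# Sketch of skewed-associative indexing: each way hashes the block
# address differently. The skewing function below is illustrative.

NUM_SETS = 256   # sets per way (power of two); assumed for illustration
NUM_WAYS = 2

def skewed_index(block_addr: int, way: int) -> int:
    """Map a block address to a set index, differently for each way."""
    low = block_addr % NUM_SETS
    high = (block_addr // NUM_SETS) % NUM_SETS
    # Way 0 uses the plain low bits; way 1 XORs in higher address bits,
    # skewing the conflict pattern between the two ways.
    return low if way == 0 else low ^ high

# Two addresses that collide in way 0 usually map apart in way 1:
a, b = 0x1234, 0x5634   # same low bits, different high bits
print([hex(skewed_index(a, w)) for w in range(NUM_WAYS)])  # ['0x34', '0x26']
print([hex(skewed_index(b, w)) for w in range(NUM_WAYS)])  # ['0x34', '0x62']
```
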
4

Zhu, Wei, and Xiaoyang Zeng. "Decision Tree-Based Adaptive Reconfigurable Cache Scheme." Algorithms 14, no. 6 (June 1, 2021): 176. http://dx.doi.org/10.3390/a14060176.

Abstract:
Applications have different preferences for caches, sometimes even across different running phases. Caches with fixed parameters may compromise system performance. To solve this problem, we propose a real-time adaptive reconfigurable cache based on the decision tree algorithm, which can optimize the average memory access time of the cache without modifying the cache coherence protocol. By monitoring the application's running state, the cache associativity is periodically tuned to the optimum, as determined by the decision tree model. This paper implements the proposed decision tree-based adaptive reconfigurable cache in the GEM5 simulator and designs the key modules in Verilog HDL. The simulation results show that the proposed cache reduces the average memory access time compared with other adaptive algorithms.
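
As a rough sketch of the tuning loop described above: once per interval, monitored counters are fed to a decision-tree model that selects the associativity for the next interval. The feature names, thresholds, and candidate associativities below are invented for illustration and do not reproduce the paper's trained model.

```python
# Hedged sketch of periodic, decision-tree-driven cache reconfiguration.
# The rule below is a toy stand-in for a trained decision tree.

def decide_associativity(miss_rate: float, reuse_distance: float) -> int:
    """Pick the associativity for the next interval (toy model)."""
    if miss_rate < 0.02:
        return 2      # few misses: fewer ways save energy and latency
    if reuse_distance > 8.0:
        return 8      # far-apart reuse: more ways absorb conflict misses
    return 4

# Replay hypothetical per-interval counter samples:
samples = [{"miss_rate": 0.01, "reuse_distance": 3.0},
           {"miss_rate": 0.06, "reuse_distance": 12.0},
           {"miss_rate": 0.04, "reuse_distance": 5.0}]
for s in samples:
    ways = decide_associativity(s["miss_rate"], s["reuse_distance"])
    print(f"next interval: reconfigure to {ways}-way")
```
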
5

Wang, Ming Qian, Jie Tao Diao, Nan Li, Xi Wang, and Kai Bu. "A Study on Reconfiguring On-Chip Cache with Non-Volatile Memory." Applied Mechanics and Materials 644-650 (September 2014): 3421–25. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.3421.

Abstract:
NVM has become a promising technology for partly replacing SRAM as on-chip cache and reducing the gap between the core and the cache. To exploit the advantages of both NVM and SRAM, we propose a Hybrid Cache that constructs the on-chip cache hierarchy from both technologies. As shown in the article, the performance and power consumption of the Hybrid Cache have a large advantage over caches based on a single technology. In addition, we present several other methods that can optimize the performance of the hybrid cache.
6

Prihozhy, A. A. "Simulation of direct mapped, k-way and fully associative cache on all pairs shortest paths algorithms." System Analysis and Applied Information Science, no. 4 (December 30, 2019): 10–18. http://dx.doi.org/10.21122/2309-4923-2019-4-10-18.

Abstract:
A cache is an intermediate level between the fast CPU and the slow main memory. It aims to store copies of frequently used data and to reduce the access time to the main memory. Caches are capable of exploiting temporal and spatial locality during program execution. When the processor accesses memory, the cache behavior depends on whether the data is in the cache: a cache hit occurs if it is, and a cache miss occurs otherwise. In the latter case, the cache may have to evict other data. Misses produce processor stalls and slow down the computation. The replacement policy chooses the data to evict, trying to predict future accesses to memory. The hit and miss rates depend on the cache type: direct-mapped, set-associative, or fully associative. The least-recently-used replacement policy serves the sets. The miss rate also depends strongly on the executed algorithm. The all-pairs shortest-paths algorithms solve many practical problems, and it is important to know which algorithm and which cache type match best. This paper presents a technique for simulating direct-mapped, k-way associative, and fully associative caches during algorithm execution, to measure the frequency of read-data-to-cache and write-data-to-memory operations. We have measured the frequencies versus the cache size, the data block size, the amount of processed data, the type of cache, and the type of algorithm. After comparing the basic and blocked Floyd-Warshall algorithms, we conclude that the blocked algorithm localizes data accesses well within one block, but it does not localize data dependencies among blocks. The direct-mapped cache loses significantly to the associative cache; its performance can be improved by appropriate mapping of virtual addresses to physical locations.
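
The simulation technique lends itself to a compact illustration. The Python sketch below covers all three organizations with one model: direct-mapped (ways = 1), k-way, and fully associative (ways = total number of blocks), with LRU replacement inside each set. The geometry and the toy trace are assumptions for demonstration, not the paper's experimental setup.

```python
# Minimal trace-driven cache simulator: one model covers direct-mapped
# (ways=1), k-way, and fully associative (ways = total blocks) caches,
# with per-set LRU replacement. Sizes and trace are illustrative.

from collections import OrderedDict

class Cache:
    def __init__(self, size_bytes, block_bytes, ways):
        self.block = block_bytes
        self.ways = ways
        self.num_sets = size_bytes // (block_bytes * ways)
        self.sets = [OrderedDict() for _ in range(self.num_sets)]
        self.hits = self.misses = 0

    def access(self, addr):
        blk = addr // self.block
        tag, s = blk // self.num_sets, self.sets[blk % self.num_sets]
        if tag in s:
            self.hits += 1
            s.move_to_end(tag)           # refresh LRU position
        else:
            self.misses += 1
            if len(s) >= self.ways:
                s.popitem(last=False)    # evict least recently used
            s[tag] = True

# A toy trace: a small hot working set reused between streaming accesses.
hot = [i * 64 for i in range(8)]
trace = []
for a in (i * 64 for i in range(256, 2048)):   # one-shot stream
    trace.append(a)
    trace.extend(hot)                          # hot data touched constantly

for ways, name in [(1, "direct-mapped"), (4, "4-way"), (256, "fully assoc.")]:
    c = Cache(size_bytes=16384, block_bytes=64, ways=ways)
    for a in trace:
        c.access(a)
    print(f"{name:14s} miss rate: {c.misses / len(trace):.2%}")
```
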
7

Mutanga, Alfred. "A SystemC Cache Simulator for a Multiprocessor Shared Memory System." International Letters of Social and Humanistic Sciences 13 (October 2013): 75–87. http://dx.doi.org/10.18052/www.scipress.com/ilshs.13.75.

Abstract:
In this research we built a SystemC Level-1 data cache system in a distributed shared-memory architecture, with each processor having its own local cache. Using a set of Fast Fourier Transform and random trace files, we evaluated the performance of the caches, based on the number of cache hits/misses, under snooping and directory-based cache coherence protocols. A series of experiments was carried out, with the results showing that the directory-based MOESI cache coherence protocol has a performance edge over the snooping Valid-Invalid cache coherence protocol.
8

Clayton, N. S., D. P. Griffiths, N. J. Emery, and A. Dickinson. "Elements of episodic-like memory in animals." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 356, no. 1413 (September 29, 2001): 1483–91. http://dx.doi.org/10.1098/rstb.2001.0947.

Abstract:
A number of psychologists have suggested that episodic memory is a uniquely human phenomenon and, until recently, there was little evidence that animals could recall a unique past experience and respond appropriately. Experiments on food-caching memory in scrub jays question this assumption. On the basis of a single caching episode, scrub jays can remember when and where they cached a variety of foods that differ in the rate at which they degrade, in a way that is inexplicable by relative familiarity. They can update their memory of the contents of a cache depending on whether or not they have emptied the cache site, and can also remember where another bird has hidden caches, suggesting that they encode rich representations of the caching event. They make temporal generalizations about when perishable items should degrade and also remember the relative time since caching when the same food is cached in distinct sites at different times. These results show that jays form integrated memories for the location, content and time of caching. This memory capability fulfils Tulving's behavioural criteria for episodic memory and is thus termed ‘episodic-like’. We suggest that several features of episodic memory may not be unique to humans.
9

Aasaraai, Kaveh, and Andreas Moshovos. "NCOR: An FPGA-Friendly Nonblocking Data Cache for Soft Processors with Runahead Execution." International Journal of Reconfigurable Computing 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/915178.

Abstract:
Soft processors often use data caches to reduce the gap between processor and main memory speeds. To achieve high efficiency, simple blocking caches are used. Such caches are not appropriate for processor designs such as Runahead and out-of-order execution, which require non-blocking caches to tolerate main memory latencies, extract memory-level parallelism, and improve performance. However, conventional non-blocking cache designs are expensive and slow on FPGAs, as they use content-addressable memories (CAMs). This work proposes NCOR, an FPGA-friendly non-blocking cache that exploits the key properties of Runahead execution. NCOR does not require CAMs and utilizes smart cache controllers. A 4 KB NCOR operates at 329 MHz on Stratix III FPGAs while using only 270 logic elements. A 32 KB NCOR operates at 278 MHz and uses 269 logic elements.
10

Shukur, Hanan, Subhi Zeebaree, Rizgar Zebari, Omar Ahmed, Lailan Haji, and Dildar Abdulqader. "Cache Coherence Protocols in Distributed Systems." Journal of Applied Science and Technology Trends 1, no. 3 (June 24, 2020): 92–97. http://dx.doi.org/10.38094/jastt1329.

Abstract:
Distributed systems' performance is affected significantly by cache coherence protocols due to their role in maintaining data consistency. Cache coherence protocols also have the major task of keeping the caches in a multiprocessor environment coherent with one another. Moreover, the overall performance of a distributed shared-memory multiprocessor system is influenced by the type of cache coherence protocol used. The major challenge of shared-memory devices is to maintain cache coherence. Therefore, in past years many contributions have been presented to address cache issues and to improve the performance of distributed systems. This paper systematically reviews a number of methods used for cache coherence protocols in distributed systems.

Dissertations / Theses on the topic "Cache memory"

1

Sehat, Kamiar. "Evaluation of caches and cache coherency." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335240.

2

Brewer, Jeffery R. "Reconfigurable cache memory /." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1885437651&sid=8&Fmt=2&clientId=1509&RQT=309&VName=PQD.

3

Brewer, Jeffery Ramon. "Reconfigurable Cache Memory." OpenSIUC, 2009. https://opensiuc.lib.siu.edu/theses/48.

Abstract:
As chip designers continue to push the performance of microprocessors to higher levels, the energy demand grows. The need for integrated chips that provide energy savings without degrading performance is paramount. The cache memory typically occupies over fifty percent of today's microprocessor chip and consumes a significant percentage of the total power. Therefore, by designing a reconfigurable cache that is able to dynamically adjust to a smaller cache size without a significant degradation in performance, we are able to realize power conservation. Tournament caching is a reconfigurable method that tracks the current performance of the cache and compares it to possible smaller or larger cache sizes [1]. The results in this thesis show that a reconfigurable cache memory implemented with a configuration mechanism like tournament caching would take advantage of associativity and cache size while providing energy conservation.
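
To make the tournament mechanism concrete, here is a hedged sketch of the resizing decision: the cache compares its own miss count over an interval with estimated miss counts for a smaller and a larger configuration (real designs derive these from auxiliary or shadow tag arrays) and adopts the winner. The function name, tolerance factor, and numbers are illustrative assumptions.

```python
# Hedged sketch of a tournament-style cache resizing decision.

def tournament_step(misses_now, misses_smaller, misses_larger,
                    size_kb, tolerance=1.05):
    """Return the cache size (KB) to use in the next interval."""
    # Downsize when the smaller configuration is nearly as good:
    # similar performance for less power.
    if misses_smaller <= misses_now * tolerance:
        return size_kb // 2
    # Upsize when the larger configuration wins clearly.
    if misses_larger < misses_now / tolerance:
        return size_kb * 2
    return size_kb

print(tournament_step(1000, 1020, 990, size_kb=32))   # -> 16: downsize
print(tournament_step(1000, 2400, 700, size_kb=32))   # -> 64: upsize
```
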
4

Gieske, Edmund Joseph. "Critical Words Cache Memory." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1208368190.

5

Huang, Cheng-Chieh. "Optimizing cache utilization in modern cache hierarchies." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/19571.

Abstract:
The memory wall is one of the major performance bottlenecks in modern computer systems. SRAM caches have been used successfully to bridge the performance gap between the processor and the memory. However, an SRAM cache's latency grows with its size, so simply enlarging the caches could have a negative impact on performance. To solve this problem, modern processors employ multiple levels of caches, each of a different size, forming the so-called memory hierarchy. The processor looks up data level by level, from the highest level (the L1 cache) down to the lowest level (main memory). Such a design effectively avoids the negative performance impact of simply using one large cache. However, because SRAM has lower storage density than other volatile storage, the size of an SRAM cache is restricted by the available on-chip area. With modern applications requiring more and more memory, researchers continue to look at techniques for increasing the effective cache capacity. In general, researchers approach this problem from two angles: maximizing the utilization of current SRAM caches, or exploiting new technology to support larger capacities in cache hierarchies. The first part of this thesis focuses on maximizing the utilization of existing SRAM caches. In our first work, we observe that not all words belonging to a cache block are accessed around the same time. In fact, a subset of words is consistently accessed sooner than the others. We call this subset the critical words. In our study, we found that these critical words can be predicted using the access footprint. Based on this observation, we propose the critical-words-only cache (co-cache). Unlike a conventional cache, which stores all words belonging to a block, the co-cache stores only the words we predict to be critical. In this work, we convert an L2 cache into a co-cache and use the L1's access-footprint information to predict critical words. Our experiments show that the co-cache can outperform a conventional L2 cache on workloads whose working-set sizes are greater than the L2 cache size. To handle workloads whose working-set sizes fit in the conventional L2, we propose the adaptive co-cache (aco-cache), which allows the co-cache to be configured back into a conventional cache. The second part of this thesis focuses on efficiently enabling a large-capacity on-chip cache. In the near future, 3D stacking technology will allow us to stack one or more DRAM chips onto the processor. The total size of these chips is expected to be on the order of hundreds of megabytes or even a few gigabytes. Recent works have proposed using this space as an on-chip DRAM cache. However, the tags of the DRAM cache create a classic space/time trade-off. On the one hand, we would like the latency of a tag access to be small, as it contributes to both hit and miss latencies; accordingly, we would like to store these tags in a faster medium such as SRAM. On the other hand, with hundreds of megabytes of die-stacked DRAM cache, the space overhead of the tags would be huge. For example, it would cost around 12 MB of SRAM to store all the tags of a 256 MB DRAM cache (with conventional 64 B blocks). Clearly this is too large, considering that some current chip multiprocessors have an L3 that is smaller. Prior works have proposed storing these tags along with the data in the stacked DRAM array (tags-in-DRAM). However, this scheme increases the access latency of the DRAM cache.
To optimize access latency in the DRAM cache, we propose the aggressive tag cache (ATCache). Similar to a conventional cache, the ATCache caches recently accessed tags to exploit temporal locality; it exploits spatial locality by prefetching tags from nearby cache sets. In addition, we address the high miss latency and cache pollution caused by excessive prefetching. To reduce this overhead, we propose cost-effective prefetching, a combination of dynamic prefetching-granularity tuning and hit-prefetching, to throttle the number of sets prefetched. Our proposed ATCache (which consumes 0.4% of the overall tag size) can satisfy over 60% of DRAM cache tag accesses on average. The last work proposed in this thesis is a DRAM-cache-aware (DCA) DRAM controller. In this work, we first address the challenge of scheduling requests in the DRAM cache. While many recent works have built their techniques on a tags-in-DRAM scheme, storing these tags in the DRAM array increases the complexity of a DRAM cache request: in contrast to a conventional request to DRAM main memory, a request to the DRAM cache now translates into multiple DRAM cache accesses (tag and data). In this work, we address the challenge of scheduling these DRAM cache accesses. We start by exploring whether a conventional DRAM controller works well in this scenario. We introduce two potential designs and study their limitations, and from this study we derive a set of design principles that an ideal DRAM cache controller must satisfy. We then propose a DRAM-cache-aware (DCA) DRAM controller based on these design principles. Our experimental results show that DCA can outperform the baseline by over 14%.
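
The 12 MB tag-array figure follows from the cache geometry; here is the back-of-envelope arithmetic, where the roughly three bytes of tag and state per block is an assumption chosen to match the quoted number.

```python
# Back-of-envelope check of the SRAM tag overhead quoted above.

cache_bytes = 256 * 2**20              # 256 MB DRAM cache
block_bytes = 64                       # conventional 64 B blocks
blocks = cache_bytes // block_bytes    # 4,194,304 blocks
entry_bytes = 3                        # ~tag + valid/dirty state (assumed)
print(f"{blocks * entry_bytes / 2**20:.1f} MB of SRAM for tags")  # 12.0 MB
```
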
6

Gieske, Edmund J. "B+ Tree Cache Memory Performance." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1092344402.

7

Van Vleet, Taylor. "Dynamic cache-line sizes." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/6899.

8

Srinivasan, James Richard. "Improving cache utilisation." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609508.

9

Ramaswamy, Subramanian. "Active management of Cache resources." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24663.

Abstract:
This dissertation addresses two sets of challenges facing processor design as the industry enters the deep sub-micron region of semiconductor design. The first set of challenges relates to the memory bottleneck. As the focus shifts from scaling processor frequency to scaling the number of cores, performance growth demands increasing die area. Scaling the number of cores also places a concurrent area demand in the form of larger caches. While on-chip caches occupy 50-60% of area and consume 20-30% of the energy expended on-chip, their performance and energy efficiencies are less than 15% and 1%, respectively, for a range of benchmarks. The second set of challenges is posed by transistor leakage and process variation (inter-die and intra-die) at future technology nodes. Leakage power is anticipated to increase exponentially and to sharply lower defect-free yield with successive technology generations. For performance scaling to continue, cache efficiencies have to improve significantly. This thesis proposes and evaluates a broad family of such improvements. The dissertation first contributes a model for cache efficiencies and finds them to be extremely low: performance efficiencies less than 15% and energy efficiencies on the order of 1%. Studying the sources of inefficiency leads to a framework for efficiency improvement based on two interrelated strategies. The approach for improving energy efficiency primarily relies on sizing the cache to match the application's memory footprint during a program phase while powering down all remaining cache sets. Importantly, the sized cache is fully functional, with no references to inactive sets. Improving performance efficiency primarily relies on cache shaping, i.e., changing the placement function and thereby the manner in which memory shares the cache. Sizing and shaping are applied at different phases of the design cycle: i) post-manufacturing and offline, ii) at compile time, and iii) at run time. This thesis proposes and explores techniques at each phase, collectively realizing a repertoire of techniques for future memory-system designers. The techniques use a combination of hardware-software approaches and are demonstrated to provide substantive improvements with modest overheads.
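
As one way to picture the sizing strategy: when part of the cache is powered down, the index function can simply fold every block address onto the sets that remain active, so the sized cache stays fully functional and never touches an inactive set. The power-of-two masking below is an illustrative assumption, not the dissertation's actual placement function.

```python
# Hedged sketch of cache "sizing": fold placement onto the active sets.

TOTAL_SETS = 1024   # sets in the full-size cache (assumed)

def set_index(block_addr: int, active_sets: int) -> int:
    """Place a block using only the currently powered-on sets."""
    assert 0 < active_sets <= TOTAL_SETS
    assert active_sets & (active_sets - 1) == 0   # power of two
    return block_addr % active_sets   # never references an inactive set

print(set_index(0x1ABCD, 1024))   # full-size placement -> 973
print(set_index(0x1ABCD, 256))    # after downsizing to a quarter -> 205
```
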
10

Kumar, Krishna. "Visible synchronization-based cache coherence." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0001/MQ44885.pdf.


Books on the topic "Cache memory"

1

Handy, Jim. The cache memory book. Boston: Academic Press, 1993.

2

Jacob, Bruce. Memory systems: Cache, DRAM, disk. Burlington, MA: Morgan Kaufmann Publishers, 2008.

3

Texas Instruments. Cache memory management: Data book. [S.l.]: Texas Instruments, 1990.

4

Balasubramonian, Rajeev. Multi-core cache hierarchies. San Rafael, Calif.: Morgan & Claypool, 2011.

5

Tomašević, Milo, and Veljko Milutinović, eds. The Cache-coherence problem in shared-memory multiprocessors: Hardware solutions. Los Alamitos, Calif.: IEEE Computer Society Press, 1993.

6

Sorin, Daniel J. A primer on memory consistency and cache coherence. San Rafael, Calif.: Morgan & Claypool, 2011.

7

Motorola, ed. MC88200 cache/memory management unit user's manual. 2nd ed. Englewood Cliffs: Prentice Hall, 1990.

8

Motorola, Inc., ed. MC88200 cache/memory management unit user's manual. 2nd ed. Englewood Cliffs, N.J.: Prentice Hall, 1990.

9

Nicol, David. Massively parallel algorithms for trace-driven cache simulations. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1991.

10

Shmueli, Oded. Data sufficiency for queries on cache. Palo Alto, CA: Hewlett-Packard Laboratories, Technical Publications Department, 1996.


Book chapters on the topic "Cache memory"

1

Jaulent, P., L. Baticle, and P. Pillot. "Cache Memory." In 68020 68030 Microprocessors and their Coprocessors, 114–28. London: Macmillan Education UK, 1988. http://dx.doi.org/10.1007/978-1-349-10178-8_6.

2

Lutsyk, Petro, Jonas Oberhauser, and Wolfgang J. Paul. "Cache Memory Systems." In A Pipelined Multi-Core Machine with Operating System Support, 217–42. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43243-0_6.

3

Kumar, Piyush. "Cache Oblivious Algorithms." In Algorithms for Memory Hierarchies, 193–212. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36574-5_9.

4

Plattner, Hasso. "Aggregate Cache." In A Course in In-Memory Data Management, 191–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-55270-0_28.

5

Ahmed, Jameel, Mohammed Yakoob Siyal, Shaheryar Najam, and Zohaib Najam. "Multiprocessors and Cache Memory." In Fuzzy Logic Based Power-Efficient Real-Time Multi-Core System, 1–15. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3120-5_1.

6

Sardashti, Somayeh, Angelos Arelakis, Per Stenström, and David A. Wood. "Cache/Memory Link Compression." In A Primer on Compression in the Memory Hierarchy, 45–51. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-031-01751-3_5.

7

Sardashti, Somayeh, Angelos Arelakis, Per Stenström, and David A. Wood. "Cache Compression." In A Primer on Compression in the Memory Hierarchy, 21–32. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-031-01751-3_3.

8

Saldanha, Craig, and Mikko H. Lipasti. "Power-Efficient Cache Coherence." In High Performance Memory Systems, 63–78. New York, NY: Springer New York, 2004. http://dx.doi.org/10.1007/978-1-4419-8987-1_5.

9

Kowarschik, Markus, and Christian Weiß. "An Overview of Cache Optimization Techniques and Cache-Aware Numerical Algorithms." In Algorithms for Memory Hierarchies, 213–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36574-5_10.

10

Gjessing, Stein, David B. Gustavson, James R. Goodman, David V. James, and Ernst H. Kristiansen. "The SCI Cache Coherence Protocol." In Scalable Shared Memory Multiprocessors, 219–37. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3604-8_12.


Conference papers on the topic "Cache memory"

1

Cheng, L., and A. A. Sawchuk. "Optical solutions for cache memories in parallel computers." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1993. http://dx.doi.org/10.1364/oam.1993.mzz.1.

Abstract:
A cache is a high-speed memory located between processors and main memory to fill the speed gap between them. In a loosely coupled parallel computer, each processor has its own cache memory. The processor accesses information from its cache, which stores information obtained from the main memory through an interconnection network. A major challenge in this system is to keep the data in all the caches consistent with that in main memory; this is referred to as the cache coherence problem. One solution is to have a bus between the caches and to supply each cache with a controller which listens to the bus and updates the cache whenever a change is made in main memory. This approach is complicated and is limited by the bandwidth of the bus [1]. In a tightly coupled parallel computer, the cache memory can be shared by all processors and there is no cache coherence problem. However, with conventional VLSI implementation, only one processor can access the cache memory at a given time, and performance degrades dramatically [2]. We present optical solutions to the above cache problems. We describe an optical bus which updates the multiple caches in loosely coupled parallel computers to eliminate the bandwidth limitation. Optical or optoelectronic cache memory is proposed as a shared cache in tightly coupled parallel computers to allow parallel access. We examine potential architectures and devices for both cases.
2

Murta, Cristina Duarte, and Virgílio A. F. Almeida. "Cache na WWW: Limitações e Potencial." In International Symposium on Computer Architecture and High Performance Computing. Sociedade Brasileira de Computação, 1999. http://dx.doi.org/10.5753/sbac-pad.1999.19806.

Abstract:
WWW caching systems are essential for Web performance and scalability. Web cache workloads and goals show important differences when compared with traditional caching systems, such as memory caches. This article points out and discusses these differences. To illustrate our discussion, we present a characterization of ten workloads from Web caches. The analysis focuses on the parameters of response size and access pattern. We discuss the influence of the characterized parameters on cache performance according to relevant metrics for the WWW, and we show directions for the design of Web cache replacement policies.
3

Zawodny and Kogge. "Cache-In-Memory." In Innovative Architecture for Future Generation High-Performance Processors and Systems IWIA-01. IEEE, 2001. http://dx.doi.org/10.1109/iwia.2001.955191.

4

Krause, Arthur, Francis Moreira, Valéria Girelli, and Philippe Olivier Navaux. "Poluição de Cache e Thrashing em Aplicações Paralelas de Alto Desempenho." In XX Simpósio em Sistemas Computacionais de Alto Desempenho. Sociedade Brasileira de Computação, 2019. http://dx.doi.org/10.5753/wscad.2019.8683.

Abstract:
As processors evolve, the performance of computer systems becomes increasingly limited by memory access time. Caches are employed to get around this problem, but intelligent management of the data stored in them is necessary to prevent problems such as pollution and thrashing from degrading their performance. In this work, an analysis of cache pollution and thrashing in high-performance parallel applications is presented. The results show that caches with greater associativity suffer more from these problems. Up to 28% of cache misses in the L1 cache could be avoided with a smarter replacement policy, up to 62% in the L2 cache, and up to 98% in the LLC.
5

Shang, Xiaojing, Ming Ling, Shan Shen, Tianxiang Shao, and Jun Yang. "RRS cache." In MEMSYS '19: The International Symposium on Memory Systems. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357526.3357535.

6

Hamkalo, José Luis, and Bruno Cernuschi-Frías. "A Taxonomy for Cache Memory Misses." In International Symposium on Computer Architecture and High Performance Computing. Sociedade Brasileira de Computação, 1999. http://dx.doi.org/10.5753/sbac-pad.1999.19773.

Abstract:
One way to understand the causes of cache memory misses is to use a classification for them. Usually, statistical models such as the 3C model are used to make the classification. In the present work, a new definition of the 3C model's compulsory, capacity, and conflict misses is given. The corresponding operational definitions, based on LRU stack distances, are also given. The proposed model is called the deterministic 3C model, or D3C. The D3C model classifies memory references individually, forming a taxonomy, so it is possible to analyze when a memory reference belongs to a given category, and to study the passage of a given memory reference from one category to another when some cache parameter is modified. The D3C model does not present anomalies such as negative conflict miss rates, as the 3C model does. Several memory access patterns are theoretically analyzed under the 3C and D3C models, showing that the results given by the D3C model are intuitive and easy to interpret. The 3C model underestimates conflict misses and overestimates capacity misses when compared with the D3C model. This difference comes from the references that hit in the cache under study but miss in a fully associative cache of the same size with LRU replacement. The number of these references was measured using SPEC95 benchmarks in trace-driven simulations. High percentages are obtained for these references for usual cache configurations, and therefore these references play an important role in the cache statistics.
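
The operational definitions can be sketched in a few lines of Python: each reference is classified by its distance in a global LRU stack. In this sketch every reference is assumed to have missed in the cache under study (hits are simply not passed in), and the rules are a plausible reading of the model, not the paper's exact formulation.

```python
# Hedged sketch of D3C-style miss classification via LRU stack distance.

def classify_miss(block, lru_stack, cache_blocks):
    """Classify one miss of the studied cache; update the LRU stack."""
    if block not in lru_stack:
        kind = "compulsory"    # first-ever reference to this block
    elif lru_stack.index(block) < cache_blocks:
        kind = "conflict"      # same-size fully associative LRU cache would hit
    else:
        kind = "capacity"      # even fully associative LRU would miss
    if block in lru_stack:
        lru_stack.remove(block)
    lru_stack.insert(0, block)  # block becomes most recently used
    return kind

stack = []
for blk in [0, 1, 2, 3, 0, 8, 0]:   # toy miss stream
    print(blk, classify_miss(blk, stack, cache_blocks=2))
```
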
7

Siddique, Nafiul Alam, and Abdel-Hameed A. Badawy. "SprBlk cache." In MEMSYS 2017: The International Symposium on Memory Systems, 2017. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3132402.3132441.

8

Backes, Luna, and Daniel A. Jiménez. "The impact of cache inclusion policies on cache management techniques." In MEMSYS '19: The International Symposium on Memory Systems. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357526.3357547.

9

Shidal, Jonathan, Ari J. Spilo, Paul T. Scheid, Ron K. Cytron, and Krishna M. Kavi. "Recycling trash in cache." In ISMM '15: International Symposium on Memory Management. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2754169.2754183.

10

Shin, Seunghee, Sihong Kim, and Yan Solihin. "Dense Footprint Cache." In MEMSYS '16: The Second International Symposium on Memory Systems. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2989081.2989096.


Reports on the topic "Cache memory"

1

Chiarulli, Donald M., and Steven P. Levitan. Optoelectronic Cache Memory System Architecture. Fort Belvoir, VA: Defense Technical Information Center, December 1999. http://dx.doi.org/10.21236/ada371774.

2

Bianchini, Ricardo, Mark E. Crovella, Leonidas Kontothanassiss, and Thomas J. LeBlanc. Memory Contention in Scalable Cache-Coherent Multiprocessors. Fort Belvoir, VA: Defense Technical Information Center, April 1993. http://dx.doi.org/10.21236/ada272946.

3

Hill, Mark D. Aspects of Cache Memory and Instruction Buffer Performance. Fort Belvoir, VA: Defense Technical Information Center, November 1987. http://dx.doi.org/10.21236/ada604007.

4

Nayyar, Raman. Performance Analysis of a Hierarchical, Cache-Coherent, Shared Memory Based, Multi-processor System. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6579.

5

Marchetti, M., L. I. Kontothanassis, R. Bianchini, and M. L. Scott. Using Simple Page Placement Policies to Reduce the Cost of Cache Fills in Coherent Shared-Memory Systems. Fort Belvoir, VA: Defense Technical Information Center, September 1994. http://dx.doi.org/10.21236/ada289887.

6

Kumar, Prem. Instrumentation to Characterize Cache-Memory Buffers and Regenerators for Optically-Digital Communication and Processing at the Quantum Limit. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada387445.

7

Learn, Mark Walter. Mitigation of cache memory using an embedded hard-core PPC440 processor in a Virtex-5 Field Programmable Gate Array. Office of Scientific and Technical Information (OSTI), February 2010. http://dx.doi.org/10.2172/984165.

8

Dubnicki, Cezary. The Effects of Block Size on the Performance of Coherent Caches in Shared-Memory Multiprocessors. Fort Belvoir, VA: Defense Technical Information Center, May 1993. http://dx.doi.org/10.21236/ada272838.
