Theses on the topic "Cache codée"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the 22 best theses for your research on the topic "Cache codée".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organise your bibliography correctly.
Parrinello, Emanuele. "Fundamental Limits of Shared-Cache Networks". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS491.
Full text
In the context of communication networks, the emergence of predictable content has brought to the fore the use of caching as a fundamental ingredient for handling the exponential growth in data volumes. This thesis aims at providing the fundamental limits of shared-cache networks where the communication to users is aided by a small set of caches. Our shared-cache model not only captures heterogeneous wireless cellular networks, but can also represent a model for users requesting multiple files simultaneously, and it can be used as a simple yet effective way to deal with the so-called subpacketization bottleneck of coded caching. Furthermore, we will also see how our techniques developed for caching networks can find application in the context of heterogeneous coded distributed computing.
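As background for the coded-caching setting studied in this and several later theses, the classic delivery phase of the basic coded caching scheme can be sketched as follows. This is a minimal illustration under standard assumptions, not the thesis's shared-cache scheme: each file is split into subfiles indexed by t-subsets of users, and the server sends one XOR-coded multicast per (t+1)-subset of users.

```python
from itertools import combinations

def coded_caching_delivery(K, t, demands):
    """Delivery phase of the basic coded caching scheme (sketch).
    Each file is split into C(K, t) subfiles, one per t-subset of users;
    user k caches every subfile whose index set contains k.
    For each (t+1)-subset S the server multicasts the XOR of the subfiles
    (demands[k], S minus {k}) for k in S: every user in S already caches
    all terms except its own, so it can decode its missing subfile."""
    transmissions = []
    for S in combinations(range(K), t + 1):
        parts = [(demands[k], tuple(u for u in S if u != k)) for k in S]
        transmissions.append(parts)
    return transmissions

# K = 3 users, each caching a fraction t/K = 1/3 of every file:
# C(3, 2) = 3 multicast transmissions serve all three distinct demands.
tx = coded_caching_delivery(3, 1, demands=[0, 1, 2])
```

Each element of `tx` lists the subfiles XOR-ed together in one transmission; the multicast gain is that t+1 users are served per transmission instead of one.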
Zhao, Hui. "High performance cache-aided downlink systems : novel algorithms and analysis". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS366.
Full text
The thesis first addresses the worst-user bottleneck of wireless coded caching, which is known to severely diminish cache-aided multicasting gains. We present a novel scheme, called aggregated coded caching, which can fully recover the coded caching gains by capitalizing on the shared side information brought about by the effectively unavoidable file-size constraint. The thesis then transitions to scenarios with multi-antenna transmitters. In particular, we consider the multi-antenna cache-aided multi-user scenario, where the multi-antenna transmitter delivers coded caching streams and can thus serve multiple users at a time with a reduced number of radio frequency (RF) chains. By doing so, coded caching can assist a simple analog beamformer (only a single RF chain), thus incurring considerable power and hardware savings. Finally, after removing the RF-chain limitation, the thesis studies the performance of the vector coded caching technique and reveals that, under several realistic assumptions, it can achieve a multiplicative sum-rate boost over the optimized cacheless multi-antenna counterpart. In particular, for a given downlink MIMO system already optimized to exploit both multiplexing and beamforming gains, our analysis answers a simple question: what is the multiplicative throughput boost obtained from introducing reasonably-sized receiver-side caches?
Brunero, Federico. "Unearthing the Impact of Structure in Data and in Topology for Caching and Computing Networks". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS368.pdf.
Full text
Caching has proven to be an excellent expedient for reducing the traffic load in data networks. An information-theoretic study of caching, known as coded caching, represented a key breakthrough in understanding how memory can be effectively transformed into data rates. Coded caching also revealed the deep connection between caching and computing networks, which similarly need novel algorithmic solutions to reduce the traffic load. Despite the vast literature, some fundamental limitations remain whose resolution is critical. For instance, it is well known that the coding gain ensured by coded caching is not only merely linear in the overall caching resources, but also turns out to be the Achilles' heel of the technique in most practical settings. This thesis aims at improving and deepening the understanding of the key role that structure plays, in data or in topology, for caching and computing networks. First, we explore the fundamental limits of caching under information-theoretic models that impose structure in data, by which we mean that we assume to know in advance which data are of interest to whom. Secondly, we investigate the impressive ramifications of having structure in network topology. Throughout the manuscript, we also show how the results in caching can be employed in the context of distributed computing.
Beg, Azam Muhammad. "Improving instruction fetch rate with code pattern cache for superscalar architecture". Diss., Mississippi State : Mississippi State University, 2005. http://library.msstate.edu/etd/show.asp?etd=etd-06202005-103032.
Full text
Palki, Anand B. "CACHE OPTIMIZATION AND PERFORMANCE EVALUATION OF A STRUCTURED CFD CODE - GHOST". UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_theses/363.
Full text
Gupta, Saurabh. "PERFORMANCE EVALUATION AND OPTIMIZATION OF THE UNSTRUCTURED CFD CODE UNCLE". UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_theses/360.
Full text
Seyr, Luciana. "Manejo do solo e ensacamento do cacho em pomar de bananeira 'Nanicão'". Universidade Estadual de Londrina. Centro de Ciências Agrárias. Programa de Pós-Graduação em Agronomia, 2011. http://www.bibliotecadigital.uel.br/document/?code=vtls000166653.
Texto completoBrazil is the fourth largest producer of bananas, with an annual production of 6.99 million tons. Banana is a fruit of great economic and social importance, since it is grown from North to South of the country, generating jobs, income and food for millions of Brazilians, throughout the year. It is the third most produced fruit of the state of Paraná, with an area of 9,900 ha. Most of the Brazilian's production is destined for the domestic market, since it is the second most consumed fruit in the country, and also due to the low quality of most of the product. Such a poor quality is due to the lack of technology of the conditions in which it is grown, from the planting to the harvest. A technology which has already been used in other crops, but it is still not well known among banana producers, is the use of cover crops for soil protection against erosion. This management is particularly important for the implementation of the banana crops, because until the beginning of production, follows a period of about 13 months in which the ground is bare, exposed to erosion. Another important technology for the quality of fruit is the bagging of bunches soon after its formation, protecting until the harvest. In spite of this technique has proven to have advantages in other conditions, for the state of Paraná there is no data concerning the use of bagging bunches. Thus, the work has been divided into two subprojects, both held in the Northern of Paraná. The objective of the first was to evaluate the effects of the use of green manure on the establishment of a banana crop. The second was to evaluate the effect of bagging bunches of bananas, and its cost to the growers.
Kristipati, Pavan K. "Performance optimization of a structured CFD code GHOST on commodity cluster architectures /". Lexington, Ky. : [University of Kentucky Libraries], 2008. http://hdl.handle.net/10225/976.
Full text
Title from document title page (viewed on February 3, 2009). Document formatted into pages; contains: xi, 144 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 139-143).
Malik, Adeel. "Stochastic Coded Caching Networks : a Study of Cache-Load Imbalance and Random User Activity". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS045.pdf.
Full text
In this thesis, we elevate coded caching from its purely information-theoretic framework to a stochastic setting where the stochasticity of the network originates from heterogeneity in users' request behavior. Our results highlight that stochasticity in cache-aided networks can cause the gains of coded caching to vanish. We determine the exact extent of the cache-load imbalance bottleneck of coded caching in stochastic networks, which had never been explored before. Our work provides techniques to mitigate the impact of this bottleneck both for the scenario where user-to-cache-state associations are restricted by proximity constraints between users and helper nodes (i.e., the shared-cache setting) and for the scenario where user-to-cache-state association strategies are considered as a design parameter (i.e., the subpacketization-constrained setting).
Dias, Wanderson Roger Azevedo. "Arquitetura pdccm em hardware para compressão/descompressão de instruções em sistemas embarcados". Universidade Federal do Amazonas, 2009. http://tede.ufam.edu.br/handle/tede/2950.
Full text
Fundação de Amparo à Pesquisa do Estado do Amazonas
In the development of embedded systems, several factors must be taken into account, such as physical size, weight, mobility, energy consumption, memory, cooling, security requirements, and reliability, all allied to reduced cost and ease of use. However, as systems become more heterogeneous, their development grows more complex. There are several techniques to optimize execution time and power consumption in embedded systems. One of these techniques is code compression; however, most existing proposals focus on decompression and assume that the code is compressed at compilation time. This work therefore proposes a specific architecture, with a hardware prototype (using VHDL and FPGAs), for the code compression/decompression process. We propose a technique called PDCCM (Processor Decompressor Cache Compressor Memory). Results are obtained via simulation and prototyping, using programs from the MiBench benchmark suite in the analysis. A compression method called MIC (Middle Instruction Compression) was also proposed and compared with the traditional Huffman compression method. In the PDCCM architecture, the MIC method showed better performance than the Huffman method for several of the analyzed MiBench programs, which are widely used in embedded systems: it used 26% fewer FPGA logic elements, achieved a 71% higher clock frequency (in MHz), and compressed instructions 36% better than Huffman, besides allowing compression/decompression at run time.
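For context on the Huffman baseline that the MIC method is compared against, here is a minimal sketch of building a Huffman codebook over an instruction stream. This is illustrative only; the thesis targets a hardware implementation, and the symbol names below are made up.

```python
import heapq
from collections import Counter

def huffman_codebook(instructions):
    """Build a Huffman codebook over an instruction stream:
    frequent symbols get shorter codewords."""
    freq = Counter(instructions)
    if len(freq) == 1:  # degenerate single-symbol stream
        return {next(iter(freq)): "0"}
    # heap entries: (weight, unique tiebreaker, {symbol: codeword-so-far})
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)   # two lightest subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (n1 + n2, i, merged))
        i += 1
    return heap[0][2]

stream = ["mov"] * 6 + ["add"] * 3 + ["jmp"]
book = huffman_codebook(stream)
# the most frequent opcode receives the shortest codeword
```

Decompression hardware then walks the prefix-free code bit by bit, which is part of why dictionary/index schemes like MIC can be cheaper to decode.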
Patterson, Jason Robert Carey. "VGO : very global optimizer". Thesis, Queensland University of Technology, 2001.
Search full text
Carrascal Manzanares, Carlos. "Parallélisation d’un code éléments finis spectraux. Application au contrôle non destructif par ultrasons". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS586.
Full text
The subject of this thesis is to study numerous ways to optimize the computation time of the high-order spectral finite element method (SFEM). The goal is to improve performance on easily accessible architectures, namely SIMD multicore processors and graphics processors. As the computational kernels are limited by memory accesses (indicating a low arithmetic intensity), most of the optimizations presented aim to reduce and accelerate memory accesses. Improved matrix and vector indexing, a combination of loop transformations, task parallelism (multithreading), and data parallelism (SIMD instructions) are transformations aimed at optimal use of the cache, intensive use of registers, and multicore SIMD parallelization. The results are convincing: the proposed optimizations increase performance (between 6x and 11x) and speed up the computation (between 9x and 16x). The SIMDized implementation is up to 4x better than the vectorized implementation. The GPU implementation is between two and three times faster than the CPU one, given that an NVLink high-speed connection would allow correct masking of memory transfers. The proposed transformations form a methodology for optimizing compute-intensive codes on common architectures and making the most of the possibilities offered by multithreading and SIMD instructions.
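The cache-oriented loop transformations mentioned above can be illustrated with a minimal sketch of loop tiling (cache blocking). The thesis works on C/SIMD kernels; this Python version only shows the tiling structure, in which computation proceeds over tile-sized sub-blocks so the working set stays cache-resident.

```python
def tiled_matmul(A, B, n, tile=4):
    """Cache-blocked (tiled) n-by-n matrix multiply: the i/k/j loops are
    split into tile-sized blocks so each sub-block of A, B, and C is
    reused while it is still hot in cache."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                # work on one (tile x tile) sub-problem at a time
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]  # keep A[i][k] in a register
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C

C = tiled_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]], n=2, tile=1)
```

The transformation changes only the iteration order, not the result, which is what makes it safe to combine with SIMD vectorization of the innermost loop.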
Dridi, Noura. "Estimation aveugle de chaînes de Markov cachées simples et doubles : Application au décodage de codes graphiques". Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0022.
Full text
Since its birth, barcode technology has been widely investigated for automatic identification. When read, a barcode can be degraded by blur caused by bad focus and/or camera movement. The goal of this thesis is the optimisation of the receiver for 1D and 2D barcodes using hidden and double Markov models and blind statistical estimation approaches. The first phase of our work consists of modelling the original image and the observed one using a hidden Markov model. Then, new algorithms for joint blur estimation and symbol detection are proposed, which take into account the non-stationarity of the hidden Markov process. Moreover, a method to select the most relevant blur model is proposed, based on a model selection criterion; the method is also used to estimate the blur length. Finally, a new algorithm based on the double Markov chain is proposed to deal with digital communication through a long-memory channel. Estimation of such a channel is not possible using classical detection algorithms based on maximum likelihood, due to their prohibitive complexity. A new algorithm giving a good trade-off between complexity and performance is provided.
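The symbol-detection side of such receivers builds on standard hidden-Markov machinery. As a reference point, a plain Viterbi decoder (MAP sequence detection for a stationary chain, not the thesis's non-stationary joint blur/symbol estimator) looks like this; all probabilities below are illustrative.

```python
def viterbi(obs_logprob, trans_logprob, init_logprob):
    """Viterbi decoding of a hidden Markov chain (sketch).
    obs_logprob[t][s]  : log P(observation at time t | hidden state s)
    trans_logprob[s][r]: log P(next state r | current state s)
    init_logprob[s]    : log P(initial state s)
    Returns the most likely hidden state sequence."""
    T, S = len(obs_logprob), len(init_logprob)
    score = [init_logprob[s] + obs_logprob[0][s] for s in range(S)]
    back = []  # back[t-1][r] = best predecessor of state r at time t
    for t in range(1, T):
        prev, score = score, []
        back.append([])
        for r in range(S):
            s_best = max(range(S), key=lambda s: prev[s] + trans_logprob[s][r])
            back[-1].append(s_best)
            score.append(prev[s_best] + trans_logprob[s_best][r] + obs_logprob[t][r])
    # backtrack from the best final state
    path = [max(range(S), key=lambda s: score[s])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return list(reversed(path))

# two hidden states; observations strongly suggest the sequence 0, 0, 1
obs = [[0.0, -10.0], [0.0, -10.0], [-10.0, 0.0]]
trans = [[-0.1, -2.3], [-2.3, -0.1]]   # sticky chain (log-probs)
init = [0.0, -100.0]                   # start almost surely in state 0
path = viterbi(obs, trans, init)
```

The joint algorithms in the thesis extend this trellis search with time-varying transition statistics and an unknown blur kernel.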
Liu, Chun-Cheng and 劉俊成. "Enhanced Heterogeneous Code Cache Management Scheme for Dynamic Binary Translation". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/78064859901464682041.
Full text
National Tsing Hua University
Department of Computer Science
98
Recently, dynamic binary translation (DBT) has gained much attention on embedded systems. However, memory in embedded systems is often limited. This leads to code re-translation overhead and causes significant performance degradation. To reduce this overhead, the Heterogeneous Code Cache (HCC) was proposed, splitting the code cache between scratchpad memory (SPM) and main memory to avoid re-translation of code fragments. Although HCC is effective in handling applications with large working sets, it ignores the execution frequencies of program fragments: frequently executed fragments can end up stored in main memory, causing performance loss. To address this problem, this thesis proposes an enhanced Heterogeneous Code Cache management scheme that takes program behavior into account. Experimental results show that the proposed scheme improves the SPM access ratio from 49.48% to 95.06%, which yields a 42.68% performance improvement over the management scheme proposed in previous work.
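The frequency-aware placement idea can be sketched as a greedy policy: the hottest translated fragments go to the small, fast SPM until it is full, and the rest spill to the code cache in main memory. This is an illustration of the concept only, not the exact policy in the thesis.

```python
def place_fragments(fragments, spm_size):
    """Frequency-aware heterogeneous code-cache placement (sketch).
    fragments: list of (name, size_in_bytes, exec_count) tuples.
    Returns (fragments placed in SPM, fragments spilled to main memory)."""
    spm, main_mem, used = [], [], 0
    # consider the most frequently executed fragments first
    for name, size, count in sorted(fragments, key=lambda f: -f[2]):
        if used + size <= spm_size:
            spm.append(name)
            used += size
        else:
            main_mem.append(name)
    return spm, main_mem

frags = [("loop", 64, 9000), ("init", 128, 1),
         ("hot2", 32, 5000), ("cold", 64, 3)]
spm, mem = place_fragments(frags, spm_size=100)
```

With a 100-byte SPM, the two hot fragments (96 bytes together) fit in SPM and the cold ones spill, which is exactly the outcome a frequency-blind policy cannot guarantee.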
Wueng, Meng-Chun. "Design of Code Caches in Active RMI". 2003. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0009-0112200611341549.
Full text
Wueng, Meng-Chun and 翁孟君. "Design of Code Caches in Active RMI". Thesis, 2003. http://ndltd.ncl.edu.tw/handle/29692126895466665053.
Texto completo元智大學
資訊工程學系
91
In distributed computing, Remote Method Invocation (RMI) provides an easy and transparent programming interface that simplifies the design of distributed applications. However, in the end-to-end network model, when a large burst of client requests reaches the server, the server may become a centralized bottleneck if its workload is heavy. Another problem is that network traffic becomes congested along the paths between the clients and the server. As a result, not only do clients wait a long time for responses, but all network services along those paths are affected. Furthermore, even in normal situations, RMI services are fragile when server or network failures occur. Although introducing a multi-tier design can relieve these problems, clients still need to be aware of the explicitly added middle tiers, which further increases the complexity of RPC application design. Active networks provide a new network infrastructure in which intermediate active routers offer extra computing power. This thesis discusses how to improve RMI application performance on active networks and solve the foregoing problems, and proposes ActiveRMI, an active RMI running on the ANTS active-network architecture to improve Java RMI. Three advantages are achieved. First, the workload of the remote servers is shared with intermediate active routers. Second, packet transmission is localized between the clients and nearby intermediate active routers; the total amount of transmitted network packets is thus reduced, and the service response time is shortened. We implement the code cache in ANTS on FreeBSD, and RMI application performance is evaluated with test programs. Although the experimental results are preliminary, remote RMI services can indeed be migrated to nearby active routers.
Therefore, the workload of the remote servers is alleviated, and user response time is improved by approximately 4%. Many issues still need further study; however, the dynamic service deployment feature of active networks does improve the performance of network services and is worth exploring in the future.
Liu, Chia-Lun and 劉家倫. "Dynamic Binary Translation for Multi-Threaded Programs with Shared Code Cache". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/40583090410769099911.
Full text
National Chiao Tung University
Institute of Computer Science and Engineering
101
We present a process-level ARM-to-x86/64 dynamic binary translator, based on mc2llvm, that can efficiently emulate multi-threaded binaries. The difficulty in translating multi-threaded binaries is the synchronization overhead incurred by the translator, which has a great impact on performance. We identify the performance bottleneck of this synchronization and address it by (1) shortening lock sections where possible, (2) using concurrent data structures, and (3) using thread-private memory. In addition, we add trace compilation to mc2llvm to speed up emulation; code generation for traces is done by dedicated threads in our system. In our experiments, our system is 8.8x faster than QEMU when emulating benchmarks with 8 guest threads.
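The three synchronization fixes can be illustrated with a small sketch (class and method names are illustrative, not mc2llvm's API): a thread-private lookup map keeps the hot path lock-free, translation happens outside the lock, and the shared map's critical sections stay short.

```python
import threading

class SharedCodeCache:
    """Sketch of a shared translation cache for a multi-threaded DBT."""
    def __init__(self):
        self._shared = {}              # guest pc -> translated code
        self._lock = threading.Lock()
        self._tls = threading.local()  # per-thread private cache

    def lookup_or_translate(self, pc, translate):
        private = getattr(self._tls, "cache", None)
        if private is None:
            private = self._tls.cache = {}
        if pc in private:              # lock-free fast path (thread-private)
            return private[pc]
        with self._lock:               # short critical section: read only
            code = self._shared.get(pc)
        if code is None:
            code = translate(pc)       # slow translation done outside the lock
            with self._lock:           # short critical section: first writer wins
                code = self._shared.setdefault(pc, code)
        private[pc] = code
        return code

cache = SharedCodeCache()
translations = []
def translate(pc):
    translations.append(pc)
    return ("code", pc)
block = cache.lookup_or_translate(0x1000, translate)
```

Repeated lookups of the same guest address hit the thread-private map and never touch the lock, which is the point of fixes (1) and (3).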
Ku, Chang-Jung and 顧長榮. "Designing a Power-aware Embedded System with Code Compression and Linked Cache". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/62903343890650401477.
Full text
Chaoyang University of Technology
Master's Program, Department of Computer Science and Information Engineering
94
In designing an embedded system, three issues have to be considered carefully: hardware cost, system performance, and power consumption. We present an embedded system with a cache design that takes performance and power consumption into account based on the frequency with which instructions are executed. We use the locality of running programs to optimize memory usage, system performance, and power consumption; that is, we compress infrequently executed code to save memory space, and compress (encode) frequently executed code to reduce power consumption and maximize performance. According to the locality of executed programs, 90% of execution time is spent in 10% of the static object code. As a result, we compress 90% of the static object code to obtain our main compression ratio and reduce memory usage. Performance and power consumption, however, depend on the execution process, so we compress the 10% of frequently executed object code to improve both, by reducing the number of memory accesses. We encode frequently executed instructions as shorter code words and then pack consecutive code words into a pseudo-instruction. Once the decompression engine fetches one pseudo-instruction, it can extract multiple instructions; memory accesses are thus efficiently reduced thanks to spatial locality. Our simulation results show that our method with one 256-instruction reference table does not increase the compression ratio, while power consumption is reduced by about 33.08% compared with a pre-cache scheme that compresses all instructions; with one 512-instruction reference table, power consumption is reduced by 39.58%. According to the simulation results, the proposed methods based on instruction execution frequencies yield low power consumption, improved performance, and reduced memory.
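The packing step described above, where several short code words are fetched as one pseudo-instruction, can be sketched as follows (bit widths are illustrative):

```python
def pack_codewords(codewords, word_bits=32):
    """Concatenate variable-length code words (bit strings) and split the
    stream into fixed-width pseudo-instructions, so that fetching one
    pseudo-instruction yields several compressed instructions."""
    bits = "".join(codewords)
    bits += "0" * (-len(bits) % word_bits)  # pad the final word with zeros
    return [bits[i:i + word_bits] for i in range(0, len(bits), word_bits)]

# three code words packed into two 4-bit pseudo-instructions
words = pack_codewords(["101", "01", "1"], word_bits=4)
```

One memory fetch now delivers multiple decoded instructions, which is where the access-count (and hence power) reduction comes from.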
Li, Chong-Jian and 李重建. "An Energy-Efficient Code Compression Scheme For Embedded Cache by Address Translation". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/64206670660898124806.
Full text
National Chung Cheng University
Institute of Computer Science and Information Engineering
89
In portable products, as functionality and operating speed keep increasing, power dissipation grows accordingly. The cache accounts for the greater part of a processor's power dissipation, so we present a new low-power cache architecture to reduce it. We present two contributions in this work. The first is separate-dictionary code compression: this scheme uses two dictionaries to compress cache and memory instructions individually, in order to reduce power dissipation and obtain a good compression ratio. The second is a low-power cache architecture that uses address translation combined with code compression, replacing the tag array with an address translator; this architecture reduces power dissipation on both cache hits and cache misses. In addition, we combine a dictionary-based compression scheme with our low-power cache architecture. The compression method increases code density and cache hit ratio, and the power cost of instruction decompression is minimized. Furthermore, since the instructions in the cache are duplicates of those in memory, and the processor does not access main memory when an instruction is in the cache, we can further compact the instruction space in main memory via the address translator by eliminating these duplicated instructions. The experimental results show that this cache architecture efficiently reduces power dissipation, and that main memory size can be reduced by the cache size.
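A generic dictionary-based code compression step, the family of schemes this thesis builds on, can be sketched as follows (the instruction names and tagging format are illustrative):

```python
from collections import Counter

def dictionary_compress(instructions, dict_size):
    """Store the dict_size most frequent instruction words once in a
    dictionary; replace each occurrence in the code stream by its short
    index, leaving rarer instructions as tagged literals."""
    table = [w for w, _ in Counter(instructions).most_common(dict_size)]
    index = {w: i for i, w in enumerate(table)}
    stream = [("idx", index[w]) if w in index else ("lit", w)
              for w in instructions]
    return table, stream

code = ["mov", "mov", "mov", "add", "add", "sub"]
table, stream = dictionary_compress(code, dict_size=2)
```

Decompression is a single table lookup per index, which is why dictionary schemes suit low-power hardware; the thesis's twist is to keep separate dictionaries for the cache and for main memory.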
Chien, Chia-Hung and 簡嘉宏. "A Separate Code Cache Model for a Parallel Multi-Core System Emulator Based on QEMU". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/09894241610457505087.
Full text
National Tsing Hua University
Department of Computer Science
99
QEMU is a fast processor emulator that adopts dynamic binary translation techniques to achieve high emulation efficiency. With QEMU, various operating systems and programs built for one ISA can run on a machine with a different ISA. However, the current design of QEMU is only suitable for single-core processor emulation: when executing a multi-threaded application on a multi-core machine, QEMU emulates the application serially and cannot take advantage of the parallelism available in the application and the underlying hardware. In this work, we propose a novel design for a multi-threaded QEMU, called P-QEMU, which can effectively deploy multiple simulated virtual CPUs on the underlying multi-core machine. The main idea of the design is to add a Separate Code Cache model to the execution flow of QEMU. To evaluate the design, we emulate an ARM11 MPCore by running P-QEMU on a quad-core x86 i7 system, using SPLASH-2, PARSEC, and CoreMark as benchmarks. The experimental results show that P-QEMU is, on average, 3.79 times faster than QEMU and scales on the quad-core i7 system for the SPLASH-2 benchmark suite.
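The separate-code-cache idea can be sketched as follows (names and structure are illustrative, not P-QEMU's actual implementation): each simulated vCPU runs in its own host thread with a private code cache, so translation on one vCPU never synchronizes with the others.

```python
import threading

def run_vcpus(traces, translate_block):
    """Run one host thread per simulated vCPU; each thread keeps its own
    separate code cache, so the translate/execute path needs no lock."""
    def vcpu(trace):
        cache = {}                    # this vCPU's separate code cache
        for pc in trace:
            if pc not in cache:       # translate each block only once
                cache[pc] = translate_block(pc)
            cache[pc]()               # execute the translated block
    threads = [threading.Thread(target=vcpu, args=(t,)) for t in traces]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

executed, translated = [], []         # list.append is atomic in CPython
def translate_block(pc):
    translated.append(pc)
    return lambda: executed.append(pc)

# vCPU 0 runs blocks 1, 2, 1 (block 1 re-executes from cache); vCPU 1 runs 3
run_vcpus([[1, 2, 1], [3]], translate_block)
```

The trade-off, relative to a shared cache, is that a block executed by several vCPUs is translated once per vCPU; the thesis's model weighs that duplication against lock-free parallel execution.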
Suresha, *. "Caching Techniques For Dynamic Web Servers". Thesis, 2006. https://etd.iisc.ac.in/handle/2005/438.
Full text
Suresha, *. "Caching Techniques For Dynamic Web Servers". Thesis, 2006. http://hdl.handle.net/2005/438.
Full text