Theses on the topic "Cache codée"

Consult the 22 best theses for your research on the topic "Cache codée".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Parrinello, Emanuele. "Fundamental Limits of Shared-Cache Networks". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS491.

Full text
Abstract
In the context of communication networks, the emergence of predictable content has brought to the fore the use of caching as a fundamental ingredient for handling the exponential growth in data volumes. This thesis aims at providing the fundamental limits of shared-cache networks where the communication to users is aided by a small set of caches. Our shared-cache model not only captures heterogeneous wireless cellular networks, but can also represent a model for users requesting multiple files simultaneously, and it can be used as a simple yet effective way to deal with the so-called subpacketization bottleneck of coded caching. Furthermore, we also see how the techniques developed for caching networks find application in the context of heterogeneous coded distributed computing.
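The single-shared-link coded caching idea at the heart of this line of work can be illustrated with a toy sketch (not taken from the thesis; the file contents and two-user setup are invented for illustration): two users cache complementary halves of every file, and one XOR multicast then serves both distinct requests at once.

```python
# Toy coded caching with 2 users and 2 files (Maddah-Ali/Niesen-style sketch).
# Placement: each file is split in two halves; user 0 caches the first half of
# every file, user 1 caches the second half of every file.
files = {"A": (b"A1", b"A2"), "B": (b"B1", b"B2")}

cache = {
    0: {name: halves[0] for name, halves in files.items()},  # user 0: first halves
    1: {name: halves[1] for name, halves in files.items()},  # user 1: second halves
}

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

# Delivery: user 0 wants "A" (missing its second half), user 1 wants "B"
# (missing its first half). One multicast XOR serves both requests at once.
multicast = xor(files["A"][1], files["B"][0])

# Each user XORs out the part it already caches to recover its missing half.
user0_missing = xor(multicast, cache[0]["B"])   # recovers A2
user1_missing = xor(multicast, cache[1]["A"])   # recovers B1

user0_file = cache[0]["A"] + user0_missing      # full file A
user1_file = user1_missing + cache[1]["B"]      # full file B
```

One coded transmission replaces two uncoded ones here; the thesis studies how far this gain survives when several users must share each cache.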
APA, Harvard, Vancouver, ISO, and other citation styles
2

Zhao, Hui. "High performance cache-aided downlink systems : novel algorithms and analysis". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS366.

Full text
Abstract
The thesis first addresses the worst-user bottleneck of wireless coded caching, which is known to severely diminish cache-aided multicasting gains. We present a novel scheme, called aggregated coded caching, which can fully recover the coded caching gains by capitalizing on the shared side information brought about by the effectively unavoidable file-size constraint. The thesis then transitions to scenarios where the transmitters have multi-antenna arrays. In particular, we consider the multi-antenna cache-aided multi-user scenario, where the multi-antenna transmitter delivers coded caching streams, thus being able to serve multiple users at a time with a reduced number of radio frequency (RF) chains. By doing so, coded caching can assist a simple analog beamformer (only a single RF chain), yielding considerable power and hardware savings. Finally, after removing the RF-chain limitation, the thesis studies the performance of the vector coded caching technique and reveals that this technique can achieve, under several realistic assumptions, a multiplicative sum-rate boost over the optimized cacheless multi-antenna counterpart. In particular, for a given downlink MIMO system already optimized to exploit both multiplexing and beamforming gains, our analysis answers a simple question: What is the multiplicative throughput boost obtained from introducing reasonably-sized receiver-side caches?
3

Brunero, Federico. "Unearthing the Impact of Structure in Data and in Topology for Caching and Computing Networks". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS368.pdf.

Full text
Abstract
Caching has been shown to be an excellent expedient for reducing the traffic load in data networks. An information-theoretic study of caching, known as coded caching, represented a key breakthrough in understanding how memory can be effectively transformed into data rates. Coded caching also revealed the deep connection between caching and computing networks, which show the same need for novel algorithmic solutions to reduce the traffic load. Despite the vast literature, there remain some fundamental limitations whose resolution is critical. For instance, it is well known that the coding gain ensured by coded caching is merely linear in the overall caching resources, and that this gain turns out to be the Achilles heel of the technique in most practical settings. This thesis aims at improving and deepening the understanding of the key role that structure, whether in data or in topology, plays for caching and computing networks. First, we explore the fundamental limits of caching under some information-theoretic models that impose structure in data, meaning that we assume we know in advance which data are of interest to whom. Second, we investigate the impressive ramifications of having structure in network topology. Throughout the manuscript, we also show how the results in caching can be employed in the context of distributed computing.
4

Beg, Azam Muhammad. "Improving instruction fetch rate with code pattern cache for superscalar architecture". Diss., Mississippi State : Mississippi State University, 2005. http://library.msstate.edu/etd/show.asp?etd=etd-06202005-103032.

Full text
5

Palki, Anand B. "CACHE OPTIMIZATION AND PERFORMANCE EVALUATION OF A STRUCTURED CFD CODE - GHOST". UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_theses/363.

Full text
Abstract
This research focuses on evaluating and enhancing the performance of an in-house, structured, 2D CFD code, GHOST, on modern commodity clusters. The basic philosophy of this work is to optimize the cache performance of the code by splitting the grid into smaller blocks and carrying out the required calculations on these smaller blocks, which in turn leads to enhanced code performance on commodity clusters. Accordingly, this work presents a discussion and detailed description of two techniques for data-access optimization: external and internal blocking. These techniques have been tested on steady, unsteady, laminar, and turbulent test cases, and the results are presented. The critical hardware parameters that influenced code performance were identified, and a detailed study investigating the effect of these parameters on code performance was conducted. The modified version of the code was also ported to current state-of-the-art architectures with successful results.
6

Gupta, Saurabh. "PERFORMANCE EVALUATION AND OPTIMIZATION OF THE UNSTRUCTURED CFD CODE UNCLE". UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_theses/360.

Full text
Abstract
Numerous advancements in the field of computational sciences have made CFD a viable solution to modern-day fluid dynamics problems. Progress in computer performance allows us to solve complex flow fields in practical CPU time, and commodity clusters are gaining popularity as a computational research platform for various CFD communities. This research focuses on evaluating and enhancing the performance of an in-house, unstructured, 3D CFD code on modern commodity clusters. The fundamental idea is to tune the code to optimize the cache behavior of each node on commodity clusters to achieve enhanced code performance. Accordingly, this work presents a discussion of the various available techniques for data-access optimization and a detailed description of those that yielded improved code performance. These techniques were tested on various steady, unsteady, laminar, and turbulent test cases, and the results are presented. The critical hardware parameters that influenced code performance were identified, and a detailed study investigating the effect of these parameters on code performance was conducted. The successful single-node improvements were also tested on parallel platforms, and the modified version of the code was ported to different hardware architectures with successful results. Loop blocking is established as a predictor of code performance.
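The loop blocking mentioned above can be sketched as follows (an illustrative Python toy on a matrix transpose, not code from UNCLE; the array and block sizes are arbitrary): the traversal is restructured into tiles so that each tile stays cache-resident while it is worked on.

```python
# Loop blocking (tiling) sketch: traverse a 2D array in BLOCK x BLOCK tiles so
# each tile stays cache-resident while it is being processed. Shown here on a
# matrix transpose; CFD codes apply the same idea to flux and update loops.
N, BLOCK = 8, 4

a = [[i * N + j for j in range(N)] for i in range(N)]
t = [[0] * N for _ in range(N)]

for ii in range(0, N, BLOCK):                      # loop over tile origins
    for jj in range(0, N, BLOCK):
        for i in range(ii, min(ii + BLOCK, N)):    # work inside one tile
            for j in range(jj, min(jj + BLOCK, N)):
                t[j][i] = a[i][j]
```

The outer two loops pick a tile; the inner two loops touch only that tile's elements, so both the reads from `a` and the writes to `t` hit a small, reusable working set.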
7

Seyr, Luciana. "Manejo do solo e ensacamento do cacho em pomar de bananeira 'Nanicão'". Universidade Estadual de Londrina. Centro de Ciências Agrárias. Programa de Pós-Graduação em Agronomia, 2011. http://www.bibliotecadigital.uel.br/document/?code=vtls000166653.

Full text
Abstract
Brazil is the fourth largest producer of bananas, with an annual production of 6.99 million tons. The banana is a fruit of great economic and social importance, since it is grown from the North to the South of the country, generating jobs, income and food for millions of Brazilians throughout the year. It is the third most produced fruit in the state of Paraná, with an area of 9,900 ha. Most of the Brazilian production is destined for the domestic market, since the banana is the second most consumed fruit in the country, and also because of the low quality of most of the product. This poor quality reflects the lack of technology in the conditions under which the fruit is grown, from planting to harvest. A technology already used in other crops, but still not well known among banana producers, is the use of cover crops to protect the soil against erosion. This management is particularly important when establishing the banana orchard, because until production begins there is a period of about 13 months in which the ground is bare and exposed to erosion. Another technology important for fruit quality is the bagging of bunches soon after their formation, which protects them until harvest. Although this technique has proven advantages under other conditions, there are no data on bunch bagging for the state of Paraná. Thus, the work was divided into two subprojects, both conducted in northern Paraná. The objective of the first was to evaluate the effects of winter green manure on the establishment of a banana orchard; the second was to evaluate the effect of bagging banana bunches and its cost to growers.
8

Kristipati, Pavan K. "Performance optimization of a structured CFD code GHOST on commodity cluster architectures /". Lexington, Ky. : [University of Kentucky Libraries], 2008. http://hdl.handle.net/10225/976.

Full text
Abstract
Thesis (M.S.)--University of Kentucky, 2008.
Title from document title page (viewed on February 3, 2009). Document formatted into pages; contains: xi, 144 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 139-143).
9

Malik, Adeel. "Stochastic Coded Caching Networks : a Study of Cache-Load Imbalance and Random User Activity". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS045.pdf.

Full text
Abstract
In this thesis, we elevate coded caching from its purely information-theoretic framework to a stochastic setting where the stochasticity of the networks originates from the heterogeneity in users' request behaviors. Our results highlight that stochasticity in cache-aided networks can make the gains of coded caching vanish. We determine the exact extent of the cache-load imbalance bottleneck of coded caching in stochastic networks, which had never been explored before. Our work provides techniques to mitigate the impact of this bottleneck both for the scenario where user-to-cache state associations are restricted by proximity constraints between users and helper nodes (i.e., the shared-cache setting) and for the scenario where user-to-cache state association strategies are treated as a design parameter (i.e., the subpacketization-constrained setting).
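The cache-load imbalance studied here is easy to reproduce in a toy simulation (hypothetical parameters, not the thesis's model): when users associate with caches uniformly at random, the most-loaded cache dictates the delivery time, so imbalance erodes the coded gain.

```python
# Toy illustration of cache-load imbalance: K users each pick one of L cache
# states uniformly at random. A shared-cache delivery scheme needs roughly
# max(loads) transmission rounds, so one overloaded cache dilutes the gain.
import random

random.seed(0)
K, L = 24, 6

loads = [0] * L
for _ in range(K):
    loads[random.randrange(L)] += 1

balanced_max = -(-K // L)      # ceil(K / L): the best possible maximum load
actual_max = max(loads)

print("loads:", loads, "| balanced max:", balanced_max, "| actual max:", actual_max)
```

With perfectly balanced associations the maximum load would be `ceil(K/L)`; random associations typically exceed it, which is exactly the gap the thesis quantifies and mitigates.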
10

Dias, Wanderson Roger Azevedo. "Arquitetura pdccm em hardware para compressão/descompressão de instruções em sistemas embarcados". Universidade Federal do Amazonas, 2009. http://tede.ufam.edu.br/handle/tede/2950.

Full text
Abstract
Fundação de Amparo à Pesquisa do Estado do Amazonas
In the development of embedded systems, several factors must be taken into account, such as physical size, weight, mobility, energy consumption, memory, cooling, security requirements and reliability, all allied to reduced cost and ease of use. However, as systems become more heterogeneous, their development grows more complex. There are several techniques to optimize execution time and power usage in embedded systems. One of these techniques is code compression; however, most existing proposals focus on decompression and assume that the code is compressed at compilation time. This work therefore proposes a specific architecture, with a hardware prototype (using VHDL and FPGAs), for the code compression/decompression process. The proposed technique is called PDCCM (Processor Decompressor Cache Compressor Memory). Results are obtained via simulation and prototyping, using programs from the MiBench benchmark. A compression method called MIC (Middle Instruction Compression) is also proposed and compared with the traditional Huffman compression method. In the PDCCM architecture, the MIC method outperformed Huffman on several of the analyzed MiBench programs that are widely used in embedded systems, using 26% fewer FPGA logic elements, reaching a 71% higher clock frequency and achieving 36% better instruction compression, while also allowing compression/decompression at run time.
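As a rough illustration of the Huffman baseline this work compares against (a generic textbook sketch over mock opcode names, not the PDCCM or MIC implementation):

```python
# Minimal Huffman coding over a stream of (mock) instruction opcodes, the
# classical baseline that dictionary-based instruction compression schemes
# such as MIC are measured against.
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix-free code (symbol -> bitstring) from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # heap items: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

stream = ["LD", "ADD", "LD", "ST", "LD", "ADD", "BR", "LD"]
codes = huffman_codes(stream)
encoded = "".join(codes[op] for op in stream)

# Frequent opcodes ("LD") receive shorter codewords than rare ones ("ST", "BR").
```

The encoded stream here takes 14 bits instead of 16 with a fixed 2-bit code; real schemes must also weigh the decoder's hardware cost, which is where MIC claims its advantage.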
11

Patterson, Jason Robert Carey. "VGO : very global optimizer". Thesis, Queensland University of Technology, 2001.

Search full text
12

Carrascal, Manzanares Carlos. "Parallélisation d’un code éléments finis spectraux. Application au contrôle non destructif par ultrasons". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS586.

Full text
Abstract
The subject of this thesis is to study various ways to optimize the computation time of the high-order spectral finite element method (SFEM). The goal is to improve performance on easily accessible architectures, namely SIMD multicore processors and graphics processors. As the computational kernels are limited by memory accesses (indicating low arithmetic intensity), most of the optimizations presented aim at reducing and accelerating memory accesses. Improved matrix and vector indexing, a combination of loop transformations, task parallelism (multithreading) and data parallelism (SIMD instructions) are transformations aimed at optimal use of the cache, intensive use of registers, and multicore SIMD parallelization. The results are convincing: the proposed optimizations increase performance (between 6x and 11x) and speed up the computation (between 9x and 16x). The implementation coded explicitly with SIMD instructions performs up to 4x better than the auto-vectorized implementation. The GPU implementation is between two and three times faster than the CPU one, and a high-speed NVLink connection would allow better masking of memory transfers. The transformations proposed in this thesis form a methodology for optimizing compute-intensive codes on common architectures and for making the most of the possibilities offered by multithreading and SIMD instructions.
13

Dridi, Noura. "Estimation aveugle de chaînes de Markov cachées simples et doubles : Application au décodage de codes graphiques". Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0022.

Full text
Abstract
Since its inception, barcode technology has been widely exploited for automatic identification in industry. However, reading performance is limited by blur caused by poor focus and/or camera movement. The goal of this thesis is to optimize the reading of 1D and 2D barcodes using simple and double hidden Markov models and blind statistical estimation approaches. The first phase of our work consists of modelling the original image and the observed one using a hidden Markov model. New algorithms for joint blur estimation and symbol detection are then proposed, which take into account the non-stationarity of the hidden Markov process. Moreover, a method is proposed to select the most relevant blur model, based on a model selection criterion; the method is also used to estimate the blur length. Finally, a new algorithm based on the double Markov chain is proposed to deal with digital communication through a long-memory channel. Estimating such a channel with classical maximum-likelihood detection algorithms is infeasible due to prohibitive complexity; a new algorithm offering a good trade-off between complexity and performance is provided.
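Symbol detection in a hidden Markov chain is classically done with Viterbi dynamic programming; the sketch below is the generic textbook algorithm on an invented two-state example, not the thesis's non-stationary variant.

```python
# Textbook Viterbi decoding for a hidden Markov chain: recovers the most
# likely hidden state sequence from noisy observations. The thesis extends
# this kind of detection to non-stationary and double Markov chains.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            prev, p = max(
                ((r, best[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda x: x[1],
            )
            best[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # backtrack from the best final state
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Tiny binary example: sticky transitions, each state emits its own value
# with probability 0.9 (all numbers invented for illustration).
states = [0, 1]
start_p = {0: 0.5, 1: 0.5}
trans_p = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}
emit_p = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}

decoded = viterbi([0, 0, 1, 1, 0], states, start_p, trans_p, emit_p)
```

For a barcode channel the emission model would capture the blur; the non-stationary case in the thesis lets `trans_p` and `emit_p` vary with `t`.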
14

Liu, Chun-Cheng and 劉俊成. "Enhanced Heterogeneous Code Cache Management Scheme for Dynamic Binary Translation". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/78064859901464682041.

Full text
Abstract
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 98 (2009/10)
Recently, DBT has gained much attention on embedded systems. However, the memory resource in embedded systems is often limited, which leads to the overhead of code re-translation and causes significant performance degradation. To reduce this overhead, the Heterogeneous Code Cache (HCC) was proposed to split the code cache between the SPM and main memory to avoid re-translation of code fragments. Although HCC is effective in handling applications with large working sets, it ignores the execution frequencies of program fragments: frequently executed fragments can end up stored in main memory, causing performance loss. To address this problem, this thesis proposes an enhanced Heterogeneous Code Cache management scheme that considers program behavior. Experimental results show that the proposed scheme improves the SPM access ratio from 49.48% to 95.06%, which leads to a 42.68% performance improvement compared with the management scheme proposed in previous work.
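The frequency-aware placement idea can be sketched as a greedy policy (hypothetical fragment names, sizes and counts; the actual HCC scheme is more involved): hot fragments go to the SPM first, and the remainder spills to main memory.

```python
# Hypothetical sketch of frequency-aware heterogeneous code cache placement:
# translated code fragments are placed into fast scratchpad memory (SPM) in
# order of execution frequency; whatever does not fit spills to main memory.
def place_fragments(fragments, spm_capacity):
    """fragments: list of (name, size, exec_count). Returns (spm, dram) name lists."""
    spm, dram, used = [], [], 0
    for name, size, count in sorted(fragments, key=lambda f: f[2], reverse=True):
        if used + size <= spm_capacity:
            spm.append(name)   # hot fragment fits in fast SPM
            used += size
        else:
            dram.append(name)  # spills to slower main memory
    return spm, dram

fragments = [
    ("loop_body",  128, 90_000),  # hot: executed constantly
    ("init",       256,      1),  # cold: runs once
    ("helper",     128,  5_000),
    ("error_path",  64,      3),
]
spm, dram = place_fragments(fragments, spm_capacity=256)
```

The point of the greedy order is that the SPM access ratio is driven by execution counts, not fragment counts: here two small hot fragments in SPM cover almost all executions.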
15

Wueng, Meng-Chun. "Design of Code Caches in Active RMI". 2003. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0009-0112200611341549.

Full text
16

Wueng, Meng-Chun and 翁孟君. "Design of Code Caches in Active RMI". Thesis, 2003. http://ndltd.ncl.edu.tw/handle/29692126895466665053.

Full text
Abstract
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
Academic year 91 (2002/03)
In distributed computing, Remote Method Invocation (RMI) provides an easy and transparent programming interface that simplifies the design of distributed applications. However, under the end-to-end network model, when a large burst of client requests reaches the server, a heavily loaded server can become a centralized bottleneck, and network traffic becomes congested along the paths between the clients and the server. As a result, not only do clients wait a long time for responses, but all network services along those paths are affected. Moreover, even under normal conditions, RMI services are fragile when server or network failures occur. Although extending the system to a multi-tier design can relieve these problems, clients must still be explicitly aware of the added middle tiers, which further increases the complexity of RMI application design. Active networks provide a new network infrastructure in which intermediate active routers contribute extra computing power. This thesis discusses how to improve the performance of RMI applications on active networks and how to solve the foregoing problems, and proposes ActiveRMI, an active RMI running on the ANTS active-network architecture that improves Java RMI. Three advantages are achieved. First, the workload of the remote servers is shared with intermediate active routers. Second, packet transmission is localized between the clients and nearby intermediate active routers, so the total amount of transmitted network packets is reduced. Third, the service response time is shortened. We implement the code cache in ANTS on FreeBSD and evaluate RMI application performance with test programs. Although the experimental results are preliminary, remote RMI services can indeed be migrated to proximate active routers; the workload of the remote servers is thereby alleviated, and user response time improves by approximately 4%. Many issues remain open, but the dynamic service deployment offered by active networks does improve the performance of network services and is worth exploring further.
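The core idea of the abstract can be sketched in a few lines: an intermediate router caches migrated service code and answers nearby clients directly, so repeated invocations never reach the origin server. This is a minimal illustrative sketch, not the thesis's actual ANTS implementation; the class names (`OriginServer`, `ActiveRouter`) and the fetch-counting are assumptions for demonstration only.

```python
class OriginServer:
    """Stands in for the remote RMI server; counts how often code is shipped."""
    def __init__(self):
        self.code_fetches = 0

    def fetch_code(self, service):
        # Ship the service implementation (here, a plain function) downstream.
        self.code_fetches += 1
        return {"square": lambda x: x * x, "double": lambda x: 2 * x}[service]


class ActiveRouter:
    """Caches migrated service code; cache hits never reach the server."""
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}                       # service name -> executable code

    def invoke(self, service, arg):
        code = self.cache.get(service)
        if code is None:                      # miss: migrate the code once
            code = self.origin.fetch_code(service)
            self.cache[service] = code
        return code(arg)                      # served locally thereafter


server = OriginServer()
router = ActiveRouter(server)
results = [router.invoke("square", n) for n in range(5)]
print(results, server.code_fetches)           # five invocations, one migration
```

The localization effect the abstract describes shows up in the counter: five invocations trigger only one code transfer from the origin server, with all later requests served at the router.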
APA, Harvard, Vancouver, ISO, etc. styles
17

Liu, Chia-Lun y 劉家倫. "Dynamic Binary Translation for Multi-Threaded Programs with Shared Code Cache". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/40583090410769099911.

Full text
Abstract
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
Academic year 101 (2012–2013)
We present a process-level ARM-to-x86/x86-64 dynamic binary translator, based on mc2llvm, that can efficiently emulate multi-threaded binaries. The difficulty in translating multi-threaded binaries is the synchronization overhead incurred by the translator, which has a great impact on performance. We identify the synchronization bottleneck and address it by (1) shortening the locked sections as much as possible, (2) using concurrent data structures, and (3) using thread-private memory. In addition, we add trace compilation to mc2llvm to speed up emulation; code generation for traces is done by dedicated threads in our system. In our experiments, our system is 8.8x faster than QEMU when emulating benchmarks with 8 guest threads.
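Two of the three techniques the abstract lists can be sketched together: a thread-private lookup table gives a lock-free fast path, and translation is done outside the lock so the critical section shrinks to a single dictionary update. This is an illustrative sketch, not mc2llvm's code; `translate` merely stands in for real binary translation.

```python
import threading

shared_cache = {}                 # guest PC -> "translated" host code
shared_lock = threading.Lock()
local = threading.local()         # thread-private memory: per-thread table

def translate(pc):
    return f"host_code_for_{pc}"  # placeholder for real binary translation

def lookup(pc):
    # Fast path: thread-private table, no lock at all.
    table = getattr(local, "table", None)
    if table is None:
        table = local.table = {}
    code = table.get(pc)
    if code is not None:
        return code
    # Slow path: translate outside the lock, hold it only to publish.
    code = shared_cache.get(pc)
    if code is None:
        new_code = translate(pc)
        with shared_lock:         # short critical section: one dict update
            code = shared_cache.setdefault(pc, new_code)
    table[pc] = code
    return code

threads = [threading.Thread(target=lambda: [lookup(pc) for pc in range(100)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared_cache))          # each guest PC published exactly once
```

`setdefault` under the lock resolves the race where two threads translate the same block concurrently: one copy wins, the other is discarded, and no thread ever blocks while translating.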
APA, Harvard, Vancouver, ISO, etc. styles
18

Ku, Chang-Jung y 顧長榮. "Designing a Power-aware Embedded System with Code Compression and Linked Cache". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/62903343890650401477.

Full text
Abstract
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Computer Science and Information Engineering
Academic year 94 (2005–2006)
In designing an embedded system, three issues have to be considered carefully: hardware cost, system performance, and power consumption. We present an embedded system with a cache design that balances performance and power consumption based on the frequency with which instructions are executed. We exploit the locality of running programs to optimize memory usage, system performance, and power consumption; that is, we compress infrequently executed code to save memory space, and encode frequently executed code to reduce power consumption and maximize performance. By the locality principle, roughly 90% of execution time is spent in 10% of the static object code. Accordingly, we compress the remaining 90% of the static object code to obtain the main compression ratio and reduce memory usage. Performance and power consumption, however, depend on the execution path, so we compress the 10% of frequently executed object code to improve both, by reducing the number of memory accesses. We encode the frequently executed instructions as shorter codewords and pack consecutive codewords into one pseudo-instruction; once the decompression engine fetches a pseudo-instruction, it can extract multiple instructions, so memory accesses are efficiently reduced thanks to spatial locality. Our simulation results show that, with a 256-instruction reference table, our method does not increase the compression ratio, and power consumption can be reduced by about 33.08% compared with a pre-cache scheme that compresses all instructions; with a 512-instruction reference table, power consumption is reduced by 39.58%. In summary, the proposed methods, based on the execution frequencies of instructions, yield low power consumption, improved performance, and reduced memory usage.
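The encoding step described above can be sketched as follows: the most frequently executed instructions get short codewords from a fixed-size reference table, and runs of consecutive codewords are packed into one pseudo-instruction so a single fetch yields several instructions. The table size, packing factor, and instruction names here are illustrative assumptions, not the thesis's actual parameters.

```python
from collections import Counter

def build_table(program, size):
    # Reference table: the `size` most frequently occurring instructions.
    return [ins for ins, _ in Counter(program).most_common(size)]

def encode(program, table, pack=4):
    index = {ins: i for i, ins in enumerate(table)}
    out, run = [], []

    def flush():
        if run:
            out.append(("PSEUDO", tuple(run)))   # several codewords, one fetch
            run.clear()

    for ins in program:
        if ins in index:
            run.append(index[ins])               # frequent: short codeword
            if len(run) == pack:
                flush()
        else:
            flush()
            out.append(("RAW", ins))             # infrequent: left as-is
    flush()
    return out

program = ["add", "ld", "add", "st", "add", "ld", "jmp_rare", "add", "ld"]
table = build_table(program, size=3)             # add, ld, st make the table
packed = encode(program, table)
print(packed)
```

The spatial-locality benefit is visible in the output: the first pseudo-instruction carries four codewords, so one memory access delivers four instructions, while the rare `jmp_rare` stays uncompressed and merely breaks the run.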
APA, Harvard, Vancouver, ISO, etc. styles
19

Li, Chong-Jian y 李重建. "An Energy-Efficient Code Compression Scheme For Embedded Cache by Address Translation". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/64206670660898124806.

Full text
Abstract
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
Academic year 89 (2000–2001)
In portable products, as functionality and operating speeds increase, power dissipation grows accordingly. Since the cache consumes a large share of a processor's power, we present a new low-power cache architecture to reduce cache power dissipation. This thesis presents two techniques. The first is a separate-dictionary code compression scheme, which uses two dictionaries to compress cache and memory instructions individually, in order to reduce power dissipation and obtain a good compression ratio. The second is a low-power cache architecture that uses address translation combined with code compression: the tag array is replaced by an address-translation mechanism, which reduces power dissipation on both cache hits and cache misses. In addition, we combine a dictionary-based compression scheme with this low-power cache architecture; the compression increases code density and the cache hit ratio while keeping the power spent on instruction decompression minimal. Furthermore, the instructions in the cache duplicate those in memory: when an instruction is in the cache, the processor does not access main memory. Exploiting this property of our cache architecture and compression method, we can further compact the instruction space in main memory through the Address Translator, eliminating the duplicated instructions via an additional address translation and thereby reducing main memory size. The experimental results show that this cache architecture efficiently reduces power dissipation, and that main memory size can be reduced by approximately the cache size.
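The tag-replacement idea above can be sketched in miniature: instead of comparing stored tags on every access, a small address-translation table maps a block address directly to its cache line, and a miss updates the table. The line count, replacement policy, and names here are illustrative assumptions, not the thesis's actual design.

```python
class TranslatedCache:
    """A tiny cache whose tag array is replaced by an address-translation table."""
    def __init__(self, num_lines=4):
        self.lines = [None] * num_lines       # cached blocks
        self.xlate = {}                       # block address -> line index
        self.next_victim = 0                  # round-robin replacement

    def access(self, block_addr, memory):
        line = self.xlate.get(block_addr)     # one table lookup, no tag compare
        if line is not None:
            return self.lines[line], True     # hit: main memory untouched
        # Miss: pick a victim line and update the translation table.
        victim = self.next_victim
        self.next_victim = (victim + 1) % len(self.lines)
        for addr, idx in list(self.xlate.items()):
            if idx == victim:                 # drop the stale translation
                del self.xlate[addr]
        self.lines[victim] = memory[block_addr]
        self.xlate[block_addr] = victim
        return self.lines[victim], False


memory = {a: f"block{a}" for a in range(10)}
cache = TranslatedCache()
hits = [cache.access(a, memory)[1] for a in [0, 1, 0, 2, 0, 1]]
print(hits)
```

Because a hit resolves through the translation table alone, neither the tag array (removed) nor main memory is touched on the fast path, which is where the power saving in the abstract comes from.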
APA, Harvard, Vancouver, ISO, etc. styles
20

Chien, Chia-Hung y 簡嘉宏. "A Separate Code Cache Model for a Parallel Multi-Core System Emulator Based on QEMU". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/09894241610457505087.

Full text
Abstract
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 99 (2010–2011)
QEMU is a fast processor emulator that adopts dynamic binary translation techniques to achieve high emulation efficiency. With QEMU, operating systems and programs built for one ISA can run on a machine with a different ISA. However, the current design of QEMU is only suitable for single-core processor emulation: when executing a multi-threaded application on a multi-core machine, QEMU emulates the application serially and cannot exploit the parallelism available in the application and the underlying hardware. In this work, we propose a novel multi-threaded QEMU design, called P-QEMU, which can effectively deploy multiple simulated virtual CPUs on the underlying multi-core machine. The main idea of the design is to add a Separate Code Cache model to the execution flow of QEMU. To evaluate the design, we emulate an ARM11 MPCore by running P-QEMU on a quad-core x86 i7 system, using SPLASH-2, PARSEC, and CoreMark as benchmarks. The experimental results show that P-QEMU is, on average, 3.79 times faster than QEMU and scales on the quad-core i7 system for the SPLASH-2 benchmark suite.
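The Separate Code Cache idea can be illustrated with a toy model: each emulated vCPU thread keeps its own translation cache, so concurrent vCPUs never contend on one shared structure the way a single-cache design would. This is a sketch under assumed names, not P-QEMU's implementation; the trade-off (hot blocks translated once per core) is visible in the per-vCPU counters.

```python
import threading

def translate(pc):
    return f"host_code_{pc}"      # placeholder for real binary translation

class VCPU(threading.Thread):
    """One emulated core with its own private code cache: no locking needed."""
    def __init__(self, trace):
        super().__init__()
        self.trace = trace        # sequence of guest PCs this vCPU executes
        self.cache = {}           # separate code cache, private to this vCPU
        self.translations = 0

    def run(self):
        for pc in self.trace:
            if pc not in self.cache:          # translate only on a cold miss
                self.cache[pc] = translate(pc)
                self.translations += 1
            # ... execute self.cache[pc] here in a real emulator ...

vcpus = [VCPU([0, 1, 2, 0, 1, 2] * 10) for _ in range(4)]
for v in vcpus:
    v.start()
for v in vcpus:
    v.join()
print([v.translations for v in vcpus])        # 3 cold translations per vCPU
```

Each vCPU translates its three hot blocks once and then runs lock-free; the duplicated translation work is the price paid for eliminating synchronization on the cache.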
APA, Harvard, Vancouver, ISO, etc. styles
21

Suresha, *. "Caching Techniques For Dynamic Web Servers". Thesis, 2006. https://etd.iisc.ac.in/handle/2005/438.

Full text
Abstract
Websites are shifting from a static model to a dynamic model in order to deliver dynamic, interactive, and personalized experiences to their users. However, dynamic content generation comes at a cost: each request requires computation as well as communication across multiple components within the website and across the Internet. Dynamic pages are constructed on the fly, on demand; due to their construction overheads and non-cacheability, they result in substantially increased user response times, server load, and bandwidth consumption compared with static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites. A variety of strategies have been proposed to address these issues; many perform well in their individual contexts but have not been analyzed in an integrated fashion. In our work, we study a carefully chosen combination of these approaches and analyze their behavior. Specifically, we consider solutions based on the recently proposed fragment-caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page-construction overheads by integrating fragment caching with techniques such as proxy-based caching of dynamic content, page pre-generation, and caching of program code. We first present a dynamic proxy-caching technique that combines the benefits of proxy-based and server-side caching without suffering from their individual limitations; it concentrates on reducing the bandwidth consumed by dynamic web pages. 
We then present mechanisms for reducing dynamic page-construction times: during normal loading, through a hybrid of fragment caching and page pre-generation that utilizes the excess capacity with which web servers are typically provisioned to handle peak loads; during peak loading, by integrating fragment caching and code caching, optionally augmented with page pre-generation. In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages, with the goal of reduced bandwidth consumption from the web-infrastructure perspective and reduced page-construction times from the user perspective.
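The fragment-caching technique the abstract builds on can be sketched simply: a dynamic page is assembled from independently cached fragments, so shared fragments are generated once and reused across users while only the personalized part is built per request. Class names, TTL, and fragment keys are illustrative assumptions, not the thesis's design.

```python
import time

class FragmentCache:
    """Caches page fragments individually, with a freshness window per entry."""
    def __init__(self, ttl=60.0):
        self.store = {}           # fragment key -> (html, timestamp)
        self.ttl = ttl
        self.regenerated = 0

    def get(self, key, generate):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and now - entry[1] < self.ttl:
            return entry[0]                   # fresh fragment: cache hit
        html = generate()                     # missing or stale: regenerate
        self.regenerated += 1
        self.store[key] = (html, now)
        return html


def render_page(cache, user):
    # Shared fragments are cached across users; only the personalized
    # fragment is generated per request.
    header = cache.get("header", lambda: "<header>Site</header>")
    news = cache.get("news", lambda: "<ul>today's items</ul>")
    greeting = f"<p>Hello, {user}</p>"        # non-cacheable, per-user part
    return header + news + greeting


cache = FragmentCache()
pages = [render_page(cache, u) for u in ("alice", "bob", "carol")]
print(cache.regenerated)                      # shared fragments built once
```

Three personalized pages cost only two fragment generations, which is the bandwidth- and construction-time saving the thesis exploits; page pre-generation would go one step further and fill the cache before requests arrive.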
APA, Harvard, Vancouver, ISO, etc. styles
22

Suresha, *. "Caching Techniques For Dynamic Web Servers". Thesis, 2006. http://hdl.handle.net/2005/438.

Full text
Abstract
Websites are shifting from a static model to a dynamic model in order to deliver dynamic, interactive, and personalized experiences to their users. However, dynamic content generation comes at a cost: each request requires computation as well as communication across multiple components within the website and across the Internet. Dynamic pages are constructed on the fly, on demand; due to their construction overheads and non-cacheability, they result in substantially increased user response times, server load, and bandwidth consumption compared with static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites. A variety of strategies have been proposed to address these issues; many perform well in their individual contexts but have not been analyzed in an integrated fashion. In our work, we study a carefully chosen combination of these approaches and analyze their behavior. Specifically, we consider solutions based on the recently proposed fragment-caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page-construction overheads by integrating fragment caching with techniques such as proxy-based caching of dynamic content, page pre-generation, and caching of program code. We first present a dynamic proxy-caching technique that combines the benefits of proxy-based and server-side caching without suffering from their individual limitations; it concentrates on reducing the bandwidth consumed by dynamic web pages. 
We then present mechanisms for reducing dynamic page-construction times: during normal loading, through a hybrid of fragment caching and page pre-generation that utilizes the excess capacity with which web servers are typically provisioned to handle peak loads; during peak loading, by integrating fragment caching and code caching, optionally augmented with page pre-generation. In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages, with the goal of reduced bandwidth consumption from the web-infrastructure perspective and reduced page-construction times from the user perspective.
APA, Harvard, Vancouver, ISO, etc. styles