Selected scientific literature on the topic "Cache partagé"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Cache partagé".
Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.
Journal articles on the topic "Cache partagé"
Buyck, Jennifer, and Olivier Perrier. "De la fête comme projet de territoire. Réflexions liminaires autour de «La ferme du Bonheur»". Géo-Regards 9, no. 1 (2016): 43–60. http://dx.doi.org/10.33055/georegards.2016.009.01.43.
Genin, Christine. "Lire Claude Simon lisant Proust". Tangence, no. 112 (May 23, 2017): 109–31. http://dx.doi.org/10.7202/1039909ar.
Loehr, Joël. "Le jeu déréglé du burlesque : du Roman comique (1651) à Molloy (1951)". Quêtes littéraires, no. 13 (December 30, 2023): 96–106. http://dx.doi.org/10.31743/ql.16862.
Voirol, Jérémie. "Récit ethnographique d’une expérience partagée de la fête de San Juan/Inti Raymi à Otavalo (Andes équatoriennes)". Ethnologies 35, no. 1 (September 9, 2014): 51–74. http://dx.doi.org/10.7202/1026451ar.
Ngangop, Joseph. "représentation du marronnage et du maquis, ou la mémoire reconstruite : Au seuil d’un nouveau cri de Bertène Juminer et de Demain est encore loin de Victor Bouadjio". Voix Plurielles 19, no. 2.2 (November 26, 2022): 688–703. http://dx.doi.org/10.26522/vp.v19i2.4127.
Hajok, Alicja, and Lidia Miladi. "Émergence de sens multiples dans le discours : sur l’exemple des structures lexico-syntaxiques des slogans". Roczniki Humanistyczne 71, no. 6 (August 31, 2023): 113–29. http://dx.doi.org/10.18290/rh23716.7.
Dusaillant-Fernandes, Valérie. "Écrire sur l’être vulnérable : déconstruire le cliché de la « bulle » autistique chez Laurent Demoulin et Élisabeth de Fontenay". Australian Journal of French Studies 57, no. 3 (December 1, 2020): 293–306. http://dx.doi.org/10.3828/ajfs.2020.26.
Marchand, Suzanne. "Cachez ce sang que je ne saurais voir. Les menstruations au Québec (1900-1950)". Études 10 (January 22, 2013): 69–80. http://dx.doi.org/10.7202/1013541ar.
Wadbled, Nathanaël. "Fantasmer en laissant le corps dans le placard. Normalisation de la jouissance et impossibilité du rapport sexuel dans Royal Opera de Lionel Soukaz". Voix Plurielles 15, no. 2 (December 9, 2018): 83–95. http://dx.doi.org/10.26522/vp.v15i2.2076.
Giesing, Cornelia Bernhardette. "“Le loup dans la bergerie”: Narrations et identités des Bijaa, sujets conquéreurs de l’ancien royaume de Kasa en Sénégambie. Hommage à Stephan Bühnen (1950-2015)". Varia Historia 36, no. 71 (August 2020): 361–93. http://dx.doi.org/10.1590/0104-87752020000200005.
Theses / dissertations on the topic "Cache partagé"
Liu, Hao. "Protocoles scalables de cohérence des caches pour processeurs manycore à espace d'adressage partagé visant la basse consommation". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066059/document.
The TSAR architecture (Tera-Scale ARchitecture), developed jointly by Lip6, Bull, and CEA-LETI, is a CC-NUMA manycore architecture that scales up to 1024 cores. The DHCCP cache coherence protocol in the TSAR architecture is a global directory protocol that uses the write-through policy in the L1 cache for scalability purposes, but this write policy causes a high power consumption that we want to reduce. Currently the biggest semiconductor companies, such as Intel or AMD, use the MESI and MOESI protocols in their multi-core processors. These protocols use the write-back policy to reduce the high power consumption due to writes. However, the complexity of implementation and the sharp increase in coherence traffic as the number of processors grows limit the scalability of these protocols beyond a few dozen cores. In this thesis, we propose a new cache coherence protocol that uses a hybrid method to process write requests in the private L1 cache: for exclusive lines, the L1 cache controller chooses the write-back policy in order to modify the lines locally and eliminate the write traffic for exclusive lines; for shared lines, the L1 cache controller uses the write-through policy to simplify the protocol and guarantee scalability. We also optimized the current solution to the TLB coherence problem in the TSAR architecture. The new method, called CC-TLB, not only improves performance but also reduces energy consumption. Finally, this thesis introduces a new micro cache between the core and the L1 cache, which reduces the number of accesses to the instruction cache in order to save energy.
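The hybrid write policy described in this abstract can be illustrated with a toy model. This is a sketch under our own simplifying assumptions (the class name, the dictionary-backed memory, and the traffic counter are illustrative, not part of the thesis): stores to exclusive lines are written back lazily at eviction, while stores to shared lines are written through immediately.

```python
from enum import Enum

class State(Enum):
    INVALID = 0
    SHARED = 1
    EXCLUSIVE = 2

class ToyL1Cache:
    """Toy model of a hybrid write policy: write-back for exclusive
    lines, write-through for shared lines (illustrative sketch only)."""

    def __init__(self, memory):
        self.memory = memory        # backing store: dict addr -> value
        self.lines = {}             # addr -> [state, value, dirty]
        self.write_traffic = 0      # words sent toward memory/L2

    def load(self, addr, shared=False):
        state = State.SHARED if shared else State.EXCLUSIVE
        self.lines[addr] = [state, self.memory[addr], False]

    def store(self, addr, value):
        state, _, _ = self.lines[addr]
        if state is State.EXCLUSIVE:
            # write-back: update locally, defer the memory update to eviction
            self.lines[addr] = [state, value, True]
        else:
            # write-through: propagate immediately, keeping sharers simple
            self.lines[addr] = [state, value, False]
            self.memory[addr] = value
            self.write_traffic += 1

    def evict(self, addr):
        state, value, dirty = self.lines.pop(addr)
        if dirty:
            self.memory[addr] = value
            self.write_traffic += 1
```

With this model, ten stores to an exclusive line cost one write-back at eviction, while ten stores to a shared line cost ten write-through messages, which is the traffic asymmetry the thesis exploits.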
Parrinello, Emanuele. "Fundamental Limits of Shared-Cache Networks". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS491.
In the context of communication networks, the emergence of predictable content has brought to the fore the use of caching as a fundamental ingredient for handling the exponential growth in data volumes. This thesis aims at providing the fundamental limits of shared-cache networks, where the communication to users is aided by a small set of caches. Our shared-cache model not only captures heterogeneous wireless cellular networks, but can also represent a model for users requesting multiple files simultaneously, and it can be used as a simple yet effective way to deal with the so-called subpacketization bottleneck of coded caching. Furthermore, we will also see how our techniques developed for caching networks can find application in the context of heterogeneous coded distributed computing.
Malik, Adeel. "Stochastic Coded Caching Networks : a Study of Cache-Load Imbalance and Random User Activity". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS045.pdf.
In this thesis, we elevate coded caching from its purely information-theoretic framework to a stochastic setting where the stochasticity of the network originates from the heterogeneity in users’ request behaviors. Our results highlight that stochasticity in cache-aided networks can lead to the vanishing of the gains of coded caching. We determine the exact extent of the cache-load imbalance bottleneck of coded caching in stochastic networks, which had never been explored before. Our work provides techniques to mitigate the impact of this bottleneck for the scenario where the user-to-cache state associations are restricted by proximity constraints between users and helper nodes (i.e., the shared-cache setting), as well as for the scenario where user-to-cache state association strategies are treated as a design parameter (i.e., the subpacketization-constrained setting).
Gao, Yang. "Contrôleur de cache générique pour une architecture manycore massivement parallèle à mémoire partagée cohérente". Paris 6, 2011. http://www.theses.fr/2011PA066296.
Hardy, Damien. "Analyse pire cas pour processeur multi-cœurs disposant de caches partagés". PhD thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00557058.
Texto completo da fonteHardy, Damien. "Analyse pire cas pour processeur multi-coeurs disposant de caches partagés". Rennes 1, 2010. http://www.theses.fr/2010REN1S143.
Texto completo da fonteHard real-time systems are subject to timing constraints and failure to respect them can cause economic, ecological or human disasters. The validation process which guarantees the safety of such software, by ensuring the respect of these constraints in all situations including the worst case, is based on the knowledge of the worst case execution time of each task. However, determining the worst case execution time is a difficult problem for modern architectures because of complex hardware mechanisms that could cause significant execution time variability. This document focuses on the analysis of the worst case timing behavior of cache hierarchies, to determine their contribution to the worst case execution time. Several approaches are proposed to predict and improve the worst case execution time of tasks running on multicore processors with a cache hierarchy in which some cache levels are shared between cores
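Worst-case cache analyses of this kind are commonly built on abstract interpretation over block ages in an LRU set. As a minimal sketch (the function name and the dict-of-ages representation are our own assumptions, not taken from the thesis), the join operator of the classical "must" analysis keeps only the blocks guaranteed to be cached on every incoming control-flow path, at their oldest possible age:

```python
def must_join(state_a, state_b):
    """Join for 'must' cache analysis at a control-flow merge: a block
    is guaranteed in cache only if present in both abstract states, and
    its age is the pessimistic maximum of the two ages."""
    return {blk: max(state_a[blk], state_b[blk])
            for blk in state_a.keys() & state_b.keys()}
```

For example, joining `{"a": 0, "b": 1}` with `{"a": 2, "c": 0}` keeps only block `"a"`, at age 2: only `"a"` can be classified as a guaranteed hit after the merge.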
Wan, Kai. "Limites fondamentales de stockage pour les réseaux de diffusion de liens partagés et les réseaux de combinaison". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS217/document.
In this thesis, we investigated the coded caching problem by building the connection between coded caching with uncoded placement and index coding, and by leveraging index coding results to characterize the fundamental limits of the coded caching problem. We mainly analysed the caching problem in the shared-link broadcast model and in combination networks. In the first part of this thesis, for cache-aided shared-link broadcast networks, we considered the constraint that content is placed uncoded within the caches. When the cache contents are uncoded and the user demands are revealed, the caching problem can be connected to an index coding problem. We derived fundamental limits for the caching problem by using tools from the index coding problem. A novel index coding achievable scheme was first derived based on distributed source coding. This inner bound was proved to be strictly better than the widely used “composite (index) coding” inner bound by leveraging the ignored correlation among composites and non-unique decoding. For the centralized caching problem, an outer bound under the constraint of uncoded cache placement is proposed based on the “acyclic index coding outer bound”. This outer bound is proved to be achieved by the cMAN scheme when the number of files is not less than the number of users, and by the proposed novel index coding achievable scheme otherwise. For the decentralized caching problem, this thesis proposes an outer bound under the constraint that each user stores bits uniformly and independently at random. This outer bound is achieved by dMAN when the number of files is not less than the number of users, and by our proposed novel index coding inner bound otherwise. In the second part of this thesis, we considered the centralized caching problem in two-hop relay networks, where the server communicates with cache-aided users through intermediate relays.
Because of the hardness of analysing general networks, we mainly considered a well-known class of symmetric relay networks, combination networks, with H relays and (H choose r) users, where each user is connected to a different r-subset of relays. We aimed to minimize the maximum link load in the worst case. We derived outer and inner bounds in this thesis. For the outer bound, the straightforward approach is to consider a cut of x relays; the total load transmitted to these x relays can be outer bounded by the outer bound for the shared-link model with (x choose r) users. We used this strategy to extend the outer bounds for the shared-link model and the acyclic index coding outer bound to combination networks. In this thesis, we also tightened the extended acyclic index coding outer bound in combination networks by further leveraging the network topology and the joint entropy of the various random variables. For the achievable schemes, there are two approaches: separation and non-separation. In the separation approach, we use cMAN cache placement and multicast message generation independent of the network topology, and then deliver the cMAN multicast messages based on the network topology. In the non-separation approach, we design the placement and/or the multicast messages based on the network topology. We proposed four delivery schemes under the separation approach. Under the non-separation approach, for any uncoded cache placement, we proposed a delivery scheme that generates multicast messages based on the network topology. Moreover, we also extended our results to more general models, such as combination networks with cache-aided relays and users, and caching systems in more general relay networks. Optimality results were given under some constraints, and numerical evaluations showed that our proposed schemes outperform the state of the art.
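The cMAN (Maddah-Ali–Niesen) scheme mentioned in these abstracts can be made concrete with the smallest shared-link example. In this sketch (the byte-string subpackets and variable names are purely illustrative), two users each cache half of both files, so a single XOR multicast serves both demands at once:

```python
def xor(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two files, each split into two equal subpackets
A = [b"A1", b"A2"]
B = [b"B1", b"B2"]

# cMAN-style placement for K=2 users with cache size M=1 file:
cache1 = {"A": A[0], "B": B[0]}   # user 1 stores the first half of each file
cache2 = {"A": A[1], "B": B[1]}   # user 2 stores the second half of each file

# Demands: user 1 wants file A, user 2 wants file B.
# One coded multicast message serves both users simultaneously:
msg = xor(A[1], B[0])

# Each user decodes its missing subpacket using its own cache contents
A2_at_user1 = xor(msg, cache1["B"])   # recovers A[1]
B1_at_user2 = xor(msg, cache2["A"])   # recovers B[0]
```

The uncoded alternative would need two unicast transmissions (A[1] and B[0]); the coded multicast halves the delivery load, which is the gain whose fragility the stochastic and combination-network analyses above investigate.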
Dumas, Julie. "Représentation dynamique de la liste des copies pour le passage à l'échelle des protocoles de cohérence de cache". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM093/document.
Cache coherence protocol scalability, long a problem for parallel architectures, is also a problem for on-chip architectures, following the emergence of manycore architectures. There are two protocol classes: snooping and directory-based. Protocols based on snooping, which send coherence information to all caches, generate a lot of messages, of which few are useful. On the other hand, directory-based protocols send messages only to the caches that need them. The most obvious implementation uses a full bit vector whose size depends only on the number of cores. This bit vector represents the sharing set. To scale, a coherence protocol must produce a reasonable number of messages and limit the hardware resources used by coherence, in particular for the sharing set. To evaluate and compare protocols and their sharing sets, we first propose a method based on trace injection into a high-level cache model. This method enables a very fast architectural exploration of cache coherence protocols. We also propose a new dynamic sharing set for cache coherence protocols, which is scalable. With 64 cores, 93% of cache blocks are shared by up to 8 cores. Furthermore, the operating system tends to place communicating tasks close to each other. Our dynamic sharing set takes advantage of these two observations by using a bit vector for a subset of copies together with a linked list. The bit vector corresponds to a rectangle which stores the exact sharing set. The position and shape of this rectangle evolve over the application's lifetime. Several algorithms for coherent rectangle placement are proposed and evaluated. Finally, we make a comparison with sharing sets from the state of the art.
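The rectangle-plus-list representation described above can be sketched as follows. This is a deliberately simplified model under our own assumptions (the rectangle stays anchored at the first sharer, whereas the thesis lets its position and shape evolve, and a Python list stands in for the hardware linked list):

```python
class RectangleSharingSet:
    """Simplified dynamic sharing set: an exact bit vector over a
    rectangle of the 2D core grid, plus an overflow list for copies
    held by cores outside the rectangle (illustrative sketch only)."""

    def __init__(self):
        self.rect = None       # (row_min, row_max, col_min, col_max)
        self.inside = set()    # sharers covered by the rectangle's bit vector
        self.overflow = []     # linked-list stand-in for outside sharers

    def add(self, row, col):
        if self.rect is None:
            # First copy anchors the rectangle
            self.rect = (row, row, col, col)
            self.inside.add((row, col))
            return
        r0, r1, c0, c1 = self.rect
        if r0 <= row <= r1 and c0 <= col <= c1:
            self.inside.add((row, col))     # cheap: one bit in the vector
        else:
            self.overflow.append((row, col))  # costly: one list element

    def sharers(self):
        """Exact set of cores holding a copy of the block."""
        return self.inside | set(self.overflow)
```

Invalidations then walk `sharers()`; the design pays a compact bit vector for the (typically clustered) nearby sharers and falls back to the list only for outliers, matching the observation that most blocks are shared by few, nearby cores.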
Busseuil, Rémi. "Exploration d'architecture d'accélérateurs à mémoire distribuée". Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20218/document.
Although the accelerator market is dominated by heterogeneous MultiProcessor Systems-on-Chip (MPSoC), i.e. chips with different specialized processors, a growing interest is placed on another type of MPSoC, composed of an array of identical processors. Even if these processors achieve a lower performance-to-power ratio, the better flexibility and programmability of these homogeneous MPSoCs allow an easier adaptation to the load and offer a wider space of configurations. In this context, this thesis describes the development of a scalable homogeneous MPSoC – i.e. with linear performance scaling – together with different kinds of adaptive mechanisms and programming models on top of it. This architecture is based on an array of MicroBlaze-like processors, each having its own memory, connected through a 2D NoC. A modular RTOS was built on top of it. Thanks to a complex communication stack, different adaptive mechanisms were implemented: a "redirected data" task migration mechanism, reducing the impact of migration for data-flow applications, and a "remote execution" mechanism. Instead of migrating the instruction code from one memory to another, the latter migrates only the execution, keeping the code in its initial memory. The experiments show faster reactivity but lower performance for this mechanism compared to migration. This development naturally led to the creation of a shared memory programming model. To achieve this, a scalable hardware/software memory consistency and cache coherency mechanism was built, through the development of a PThread library. Experiments show the advantage of using a NoC-based homogeneous MPSoC with a standard programming model.
Book chapters on the topic "Cache partagé"
Heusch, Carlos. "3 - Écrire dans le secret : la tradition médiévale du savoir caché". In Le partage du secret, 78. Armand Colin, 2013. http://dx.doi.org/10.3917/arco.darb.2013.01.0078.
Bernot, Gabriel. "Le potentiel caché des personnes autistes sans langage". In Construction et partage du monde interne, 121. ERES, 2018. http://dx.doi.org/10.3917/eres.amy.2018.02.0121.
Texto completo da fonteAslanov, Cyril. "De l’intertexte au paratexte". In Marina Tsvetaeva et l'Europe, 45–56. Editions des archives contemporaines, 2021. http://dx.doi.org/10.17184/eac.3362.