Follow this link to see other types of publications on the topic: Wireless caching.

Theses on the topic "Wireless caching"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the source type:

See the top 19 dissertations (master's or doctoral theses) for research on the topic "Wireless caching".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract (summary) of the work, if it is included in the metadata.

Browse theses from many scientific fields and compile a correct bibliography.

1

Chupisanyarote, Sanpetch. "Content Caching in Opportunistic Wireless Networks". Thesis, KTH, Kommunikationsnät, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91875.

Full text
Abstract:
Wireless networks have become popular in the last two decades and their use has since grown significantly. There is a concern that the existing resources of centralized networks may not sufficiently serve the enormous demand of customers. This thesis proposes one solution that has the potential to improve the network. We introduce decentralized networks, particularly wireless ad-hoc networks, where users communicate and exchange information only with their neighbors. Thus, our main focus is to enhance the performance of data dissemination in wireless ad-hoc networks. In this thesis, we first examine a content distribution concept in which nodes only focus on downloading and sharing the contents that are of their own interest. We call it private content and it is stored in a private cache. Then, we design and implement a relay-request caching strategy, where a node will generously help to fetch contents that another node asks for, although the contents are not of its interest. The node is not interested in these contents but fetches them on behalf of others; they are considered public contents. These public contents are stored in a public cache. We also propose three public caching options for optimizing network resources: relay request on demand, hop-limit, and greedy relay request. The proposed strategies are implemented in the OMNeT++ simulator and evaluated on mobility traces from Legion Studio. We also compare our novel caching strategy with an optimal channel choice strategy. The results are analyzed and they show that the use of a public cache in the relay request strategy can enhance the performance marginally while overhead increases significantly.
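The private/public cache split described in this abstract can be pictured with a short sketch. The class, method names, and FIFO-style eviction below are illustrative assumptions for exposition, not details taken from the thesis (which evaluates its relay-request variants in OMNeT++):

```python
# Illustrative sketch (not from the thesis): a node with separate private and
# public caches, as in the relay-request strategy summarized above.
from collections import OrderedDict

class Node:
    def __init__(self, interests, private_size=50, public_size=20):
        self.interests = set(interests)          # content IDs this node wants
        self.private_cache = OrderedDict()       # content of own interest
        self.public_cache = OrderedDict()        # content relayed for others
        self.private_size = private_size
        self.public_size = public_size

    def lookup(self, content_id):
        """Serve a neighbor's request from either cache, if possible."""
        return (self.private_cache.get(content_id)
                or self.public_cache.get(content_id))

    def store(self, content_id, data):
        """Keep own-interest content privately; relay-fetched content publicly."""
        if content_id in self.interests:
            cache, limit = self.private_cache, self.private_size
        else:
            cache, limit = self.public_cache, self.public_size
        cache[content_id] = data
        cache.move_to_end(content_id)
        if len(cache) > limit:                   # evict the oldest entry
            cache.popitem(last=False)
```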
2

Xu, Ji. "Data caching in wireless mobile networks /". View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20XU.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 57-60). Also available in electronic version. Access restricted to campus users.
3

AlHassoun, Yousef. "TOWARDS EFFICIENT CODED CACHING FOR WIRELESS NETWORKS". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1575632062797432.

Full text
4

Wang, Jerry Chun-Ping, Computer Science & Engineering, Faculty of Engineering, UNSW. "Wireless network caching scheme for a cost saving wireless data access". Awarded by: University of New South Wales. School of Computer Science & Engineering, 2006. http://handle.unsw.edu.au/1959.4/25521.

Full text
Abstract:
Recent widespread use of computer and wireless communication technologies has increased the demand for data services via wireless channels. However, providing a high data rate in a wireless system is expensive due to many technical and physical limitations. Unlike voice service, data service can tolerate delays and allows burst transfer of information; thus, an alternative approach had to be formulated. This approach is known as the "Infostation." An infostation is an inexpensive, high-speed wireless disseminator that features discontinuous coverage and a high radio transmission rate by using many short-distance, high-bandwidth local wireless stations in a large terrain. As opposed to ubiquitous networks, each infostation provides independent wireless connectivity at a relatively shorter distance compared to a traditional cellular network. However, due to the discontinuous nature of the infostation network, there is no data service available between stations, and the clients become completely disconnected from the outside world. During the disconnected period, the clients have to access information locally. Thus, the need for a good wireless network caching scheme has arisen. In this dissertation, we explore the use of the infostation model for disseminating and caching data. Our initial approach focuses on large datasets that exhibit hierarchical structure. In order to facilitate information delivery, we exploit the hierarchical nature of the file structure, then propose generic content scheduling and cache management strategies for infostations. We examine the performance of our proposed strategies with the network simulator QualNet. Our simulation results demonstrate the improvement in the rate of successful data access, thus alleviating excessive waiting overheads during disconnected periods. Moreover, our technique allows infostations to be combined with traditional cellular networks and avoids accessing data via scarce and expensive wireless channels for the purpose of cost reduction.
5

Hui, Chui Ying. "Broadcast algorithms and caching strategies for mobile transaction processing". HKBU Institutional Repository, 2007. http://repository.hkbu.edu.hk/etd_ra/781.

Full text
6

Sengupta, Avik. "Fundamentals of Cache Aided Wireless Networks". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73583.

Full text
Abstract:
Caching at the network edge has emerged as a viable solution for alleviating the severe capacity crunch in content-centric next generation 5G wireless networks by leveraging localized content storage and delivery. Caching generally works in two phases, namely (i) a storage phase, where parts of popular content are pre-fetched and stored in caches at the network edge during times of low network load, and (ii) a delivery phase, where content is distributed to users at times of high network load by leveraging the locally stored content. Cache-aided networks therefore have the potential to leverage storage at the network edge to increase bandwidth efficiency. In this dissertation we ask the following question: what are the theoretical and practical guarantees offered by cache-aided networks for reliable content distribution while minimizing transmission rates and increasing network efficiency? We furnish an answer to this question by identifying fundamental Shannon-type limits for cache-aided systems. To this end, we first consider a cache-aided network where the cache storage phase is assisted by a central server and users can demand multiple files at each transmission interval. To service these demands, we consider two delivery models - (i) centralized content delivery where demands are serviced by the central server; and (ii) device-to-device-assisted distributed delivery where demands are satisfied by leveraging the collective content of user caches. For such cache-aided networks, we develop a new technique for characterizing information theoretic lower bounds on the fundamental storage-rate trade-off. Furthermore, using the new lower bounds, we establish the optimal storage-rate trade-off to within a constant multiplicative gap and show that, for the case of multiple demands per user, treating each set of demands independently is order-optimal. To address the concerns of privacy in multicast content delivery over such cache-aided networks, we introduce the problem of caching with secure delivery. We propose schemes which achieve information theoretic security in cache-aided networks and show that the achievable rate is within a constant multiplicative factor of the information theoretic optimal secure rate. We then extend our theoretical analysis to the wireless domain by studying a cloud and cache-aided wireless network from a perspective of low-latency content distribution. To this end, we define a new performance metric, namely normalized delivery time, or NDT, which captures the worst-case delivery latency. We propose achievable schemes with an aim to minimize the NDT and derive information theoretic lower bounds which show that the proposed schemes achieve optimality to within a constant multiplicative factor of 2 for all values of problem parameters. Finally, we consider the problem of caching and content distribution in a multi-small-cell heterogeneous network from a reinforcement learning perspective for the case when the popularity of content is unknown. We propose a novel topology-aware learning-aided collaborative caching algorithm and show that collaboration among multiple small cells for cache-aided content delivery outperforms local caching in most network topologies of practical interest. The results presented in this dissertation show definitively that cache-aided systems help appreciably increase network efficiency and are a viable solution for the ever-evolving capacity demands in the wireless communications landscape.
Ph. D.
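For readers unfamiliar with the storage-rate trade-off this abstract refers to, a minimal sketch of the well-known Maddah-Ali and Niesen baseline rates (a standard reference point, not the dissertation's own bounds) is:

```python
# Hedged sketch: the classical uncoded and Maddah-Ali--Niesen coded caching
# delivery rates, often used to illustrate the storage-rate trade-off.
def uncoded_rate(K, N, M):
    """Conventional caching: each of K users stores an M/N fraction of every file."""
    return K * (1 - M / N)

def coded_caching_rate(K, N, M):
    """Centralized coded caching rate for t = K*M/N (Maddah-Ali & Niesen)."""
    t = K * M / N
    return K * (1 - M / N) / (1 + t)

if __name__ == "__main__":
    K, N = 10, 10
    for M in (0, 1, 2, 5, 10):
        print(f"M={M:2d}  uncoded={uncoded_rate(K, N, M):5.2f}  "
              f"coded={coded_caching_rate(K, N, M):5.2f}")
```

At M = 0 the two expressions coincide, while for growing cache size the coded rate falls roughly as 1/(1 + KM/N); this multiplicative "global caching gain" is the kind of effect that storage-rate lower bounds of the sort described above are benchmarked against.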
7

Hosny, Sameh Shawky Ibrahim. "MOBILITY AND CONTENT TRADING IN DEVICE-TO-DEVICE CACHING NETWORKS". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1480629254438794.

Full text
8

Ghorbel, Asma. "Limites Fondamentales De Stockage Dans Les Réseaux Sans Fil". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC031/document.

Full text
Abstract:
Caching, i.e. storing popular contents at caches available at end users, has received significant interest as a technique to reduce the peak traffic in wireless networks. In particular, coded caching proposed by Maddah-Ali and Niesen has been considered a promising approach to achieve a constant delivery time as the dimension grows. However, several limitations prevent its application in practical wireless systems. Throughout the thesis, we address the limitations of classical coded caching in various wireless channels. Then, we propose novel delivery schemes that opportunistically exploit the underlying wireless channels while partly preserving the promising gain of coded caching. In the first part of the thesis, we study the achievable rate region of the erasure broadcast channel with cache and state feedback. We propose an achievable scheme and prove its optimality for special cases of interest. These results are generalized to the multi-antenna broadcast channel with state feedback. In the second part, we study content delivery over asymmetric block-fading broadcast channels, where the channel quality varies across users and time. Assuming that user requests arrive dynamically, we design an online scheme based on a queuing structure and prove that it maximizes the alpha-fair utility among all schemes restricted to decentralized placement. In the last part, we study opportunistic scheduling over the asymmetric fading broadcast channel and aim to design a scalable delivery scheme while ensuring fairness among users. We propose a simple threshold-based scheduling policy of linear complexity that requires only one bit of feedback from each user.
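A minimal sketch of the threshold-based, one-bit-feedback scheduling idea mentioned at the end of this abstract; the threshold value, fading model, and fallback rule below are assumptions made purely for illustration:

```python
# Hedged sketch of threshold-based opportunistic scheduling with one-bit feedback.
import random

def one_bit_feedback(channel_gains, threshold):
    """Each user reports 1 iff its instantaneous channel gain exceeds the threshold."""
    return [1 if g >= threshold else 0 for g in channel_gains]

def schedule(feedback_bits):
    """Serve the users that reported a good channel; fall back to all users
    if nobody did, so the slot is never wasted (an assumed tie-breaking rule)."""
    good = [k for k, bit in enumerate(feedback_bits) if bit]
    return good if good else list(range(len(feedback_bits)))

if __name__ == "__main__":
    gains = [random.expovariate(1.0) for _ in range(4)]   # i.i.d. fading draws
    bits = one_bit_feedback(gains, threshold=1.0)
    print("gains:", [round(g, 2) for g in gains], "-> serve users", schedule(bits))
```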
9

Bayat, Mozhgan [Verfasser], Giuseppe [Akademischer Betreuer] Caire, Giuseppe [Gutachter] Caire, Olav [Gutachter] Tirkkonen and Mari [Gutachter] Kobayashi. "Coded caching over realistic and scalable wireless networks / Mozhgan Bayat ; Gutachter: Giuseppe Caire, Olav Tirkkonen, Mari Kobayashi ; Betreuer: Giuseppe Caire". Berlin : Technische Universität Berlin, 2020. http://d-nb.info/1223981703/34.

Full text
10

ElAzzouni, Sherif. "Algorithm Design for Low Latency Communication in Wireless Networks". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587049831134061.

Full text
11

Kakar, Jaber Ahmad [Verfasser], Aydin [Gutachter] Sezgin, Mikael [Gutachter] Skoglund and Mérouane [Gutachter] Debbah. "Interference management in wireless caching and distributed computing / Jaber Ahmad Kakar ; Gutachter: Aydin Sezgin, Mikael Skoglund, Mérouane Debbah ; Fakultät für Elektrotechnik und Informationstechnik". Bochum : Ruhr-Universität Bochum, 2021. http://d-nb.info/1232496235/34.

Full text
12

Yen, Chen-Tai, and 顏辰泰. "A Study on Online Cooperative Caching in Wireless Networks". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/kxbdqr.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electronics
Academic year 106
In this thesis, we study a delay-aware cooperative online content caching mechanism in wireless networks with limited caching space in each base station (BS) and unknown popularity of the contents. An online algorithm is proposed for cooperative caching in multi-BS coordinated systems that does not require any knowledge of the content popularities. When a new request from a user arrives at a BS, the BS responds to the request by direct delivery if the requested content is currently cached in its storage space; otherwise, the BS sends a message to the Mobility Management Entity (MME) to initiate the online content caching mechanism. The proposed cooperative online content caching algorithm (COCA) then decides in which BS the requested content should be cached, with consideration of three important factors: the residual cache storage space in each BS, the number of coordinated connections each BS maintains with other BSs, and the number of served users in each BS. In addition, due to the limited storage space in the cache, the proposed COCA algorithm eliminates the least recently used (LRU) contents to free up space.
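A rough sketch of the selection-plus-LRU step summarized above. The weights and the exact scoring rule are assumptions made for illustration, since the abstract only names the three factors the algorithm considers:

```python
# Illustrative sketch of the BS-selection step described above; weights and
# scoring (with unnormalized scales) are assumptions, not the thesis's rule.
from collections import OrderedDict

class BaseStation:
    def __init__(self, capacity, coordinated_links, served_users):
        self.cache = OrderedDict()            # content_id -> size, LRU order
        self.capacity = capacity
        self.coordinated_links = coordinated_links
        self.served_users = served_users

    def residual_space(self):
        return self.capacity - sum(self.cache.values())

    def hit(self, content_id):
        """Record a cache hit so the item becomes most-recently-used."""
        if content_id in self.cache:
            self.cache.move_to_end(content_id)
            return True
        return False

    def admit(self, content_id, size):
        """Cache a content item, evicting least-recently-used items if needed."""
        while self.residual_space() < size and self.cache:
            self.cache.popitem(last=False)    # LRU eviction
        self.cache[content_id] = size

def choose_caching_bs(bss, w=(0.5, 0.3, 0.2)):
    """Pick the BS that should cache a missed content, favouring free space
    and many coordination links, and penalizing heavily loaded BSs."""
    def score(bs):
        return (w[0] * bs.residual_space() / bs.capacity
                + w[1] * bs.coordinated_links
                - w[2] * bs.served_users)
    return max(bss, key=score)
```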
13

Hsu, Yi-Chen, and 許益晨. "Transmission and Caching Strategies for Wireless P2P-UEP Streaming". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/21582957639443731522.

Full text
Abstract:
Master's thesis
National Pingtung University of Science and Technology
Department of Management Information Systems
Academic year 99
Driven by the popularity of multimedia information and the wide deployment of wireless/mobile networks, mobile video streaming has become one of the major trends in network applications. The emerging peer-to-peer (P2P) overlay networks provide a promising, scalable solution for video streaming in the wired Internet. However, in wireless network environments, the implementation of P2P is still challenged by limited and asymmetric network bandwidth, high packet loss rates, and constrained mobile device resources. In this paper, we focus on the data transport issues and propose a mobile-P2P data transport framework for supporting a news-on-demand (NOD) service in such an environment. Our framework consists of three parts. First, UEP (unequal error protection)-packed UDP is utilized for data transmission to fit the best-effort transport scenario of the Internet while still flexibly providing adaptive video quality. Second, a cache management control (CMC) is devised for improving data sharing among peers. Finally, a multiple stop-and-wait (MSW) protocol is proposed to support scalable, adaptive multi-source download. The simulation results show that our scheme is able to shorten the startup delay and ensure high continuity of video playback at each peer. The system can also quickly self-adapt to network churn and integrate heterogeneous peers to leverage data sharing efficiency.
14

Akon, Mursalin. "Hit and Bandwidth Optimal Caching for Wireless Data Access Networks". Thesis, 2011. http://hdl.handle.net/10012/5701.

Full text
Abstract:
For many data access applications, the availability of the most updated information is a fundamental and rigid requirement. In spite of many technological improvements, wireless channels (or bandwidth) are the scarcest, and hence most expensive, resources in wireless networks. Data access from remote sites heavily depends on these expensive resources. Due to affordable smart mobile devices and the tremendous popularity of various Internet-based services, demand for data from these mobile devices is growing very fast. In many cases, it is becoming impossible for wireless data service providers to satisfy the demand for data using the current network infrastructures. An efficient caching scheme at the client side can ease the problem by reducing the amount of data transferred over the wireless channels. However, an update event makes the associated cached data objects obsolete and useless for the applications. Frequencies of data update, as well as data access, play essential roles in cache access and replacement policies. Intuitively, frequently accessed and infrequently updated objects should be given higher preference while being preserved in the cache. However, modeling this intuition is challenging, particularly in a network environment where updates are injected by both the server and the clients, distributed all over the network. In this thesis, we strive to make three inter-related contributions. Firstly, we propose two enhanced cache access policies. The access policies ensure strong consistency of the cached data objects through proactive or reactive interactions with the data server. At the same time, these policies collect information about access and update frequencies of hosted objects to facilitate efficient deployment of the cache replacement policy. Secondly, we design a replacement policy which plays the decision-maker role when there is a new object to accommodate in a fully occupied cache. The statistical information collected by the access policies enables the decision-making process. This process is modeled around the idea of preserving frequently accessed but less frequently updated objects in the cache. Thirdly, we analytically show that a cache management scheme with the proposed replacement policy, bundled with any of the cache access policies, guarantees an optimal amount of data transmission by increasing the number of effective hits in the cache system. Results from both analysis and our extensive simulations demonstrate that the proposed policies outperform the popular Least Frequently Used (LFU) policy in terms of both effective hits and bandwidth consumption. Moreover, our flexible system model makes the proposed policies equally applicable to applications for the existing 3G, as well as upcoming LTE, LTE Advanced and WiMAX wireless data access networks.
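The replacement intuition described here (keep objects that are read often but invalidated rarely) can be sketched as follows; the access/(1 + update) score is an illustrative assumption rather than the thesis's exact policy:

```python
# Hedged sketch: evict the object with the lowest access-to-update ratio,
# instead of plain LFU. Scoring and invalidation handling are assumptions.
class FrequencyAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}        # object_id -> cached value
        self.accesses = {}    # object_id -> access count
        self.updates = {}     # object_id -> update count

    def _score(self, oid):
        return self.accesses.get(oid, 0) / (1 + self.updates.get(oid, 0))

    def get(self, oid):
        if oid in self.data:
            self.accesses[oid] = self.accesses.get(oid, 0) + 1
            return self.data[oid]
        return None

    def invalidate(self, oid):
        """Server-side update: the cached copy becomes stale and is dropped."""
        self.updates[oid] = self.updates.get(oid, 0) + 1
        self.data.pop(oid, None)

    def put(self, oid, value):
        if oid not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=self._score)   # evict lowest score
            del self.data[victim]
        self.data[oid] = value
```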
15

Zhang, Yongqiang. "Computation Offloading and Service Caching in Heterogeneous MEC Wireless Networks". Thesis, 2021. http://hdl.handle.net/10754/668910.

Full text
Abstract:
Mobile edge computing (MEC) can dramatically promote the computation capability and prolong the lifetime of mobile users by offloading computation-intensive tasks to the edge cloud. In this thesis, a spatially random two-tier heterogeneous network (HetNet) is modelled to feature random node distribution, where the small-cell base stations (SBSs) and the macro base stations (MBSs) are cascaded with resource-limited servers and resource-unlimited servers, respectively. Only a certain type of application service and a finite number of offloaded tasks can be cached and processed in the resource-limited edge server. For that setup, we investigate the performance of two offloading strategies corresponding to integrated access and backhaul (IAB)-enabled MEC networks and traditional cellular MEC networks. By using tools from stochastic geometry and queuing theory, we derive the average delay for the two different strategies, in order to better understand the influence of IAB on MEC networks. Simulation results are provided to verify the derived expressions and to reveal various system-level insights.
16

Tseng, Yung-Chih, and 曾勇智. "A neighbor caching mechanism for fast handoff in IEEE 802.11 wireless network". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/09997524414866393290.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
Academic year 94
Multimedia is a delay-sensitive Internet application. Moreover, because a mobile station moves all the time, handoff processes are inevitable in a wireless network environment. Since handoff tends to interrupt communication, the main goal of this thesis is to reduce the interruption time and arrival delay of multimedia applications on the Internet. We propose a Neighbor Graph plus Cache mechanism to reduce probing latency. The Neighbor Cache Graph (NCG) stores the channel and MAC-address information of neighboring APs gathered during Neighbor Graph establishment; the signal strength is then classified into levels 1 to 6, which can be used in the next handoff to perform fast handoff. As a result, the handoff latency is greatly reduced. We used OPNET Modeler 11.0 as the simulation tool for verification. In the simulation, we propose three scenarios for analysis and discussion. The simulation results show the improvement in probing latency achieved by the Neighbor Cache Graph method compared with conventional Layer 2 handoff methods while VoIP applications are running in their environments.
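A small sketch of the neighbor-cache lookup this abstract describes; the RSSI level boundaries, handoff trigger level, and data layout are assumptions made for the example:

```python
# Illustrative sketch of a neighbor-cache lookup for fast handoff, in the
# spirit of the mechanism summarized above (thresholds are assumed values).
NEIGHBOR_CACHE = {
    # current AP BSSID -> neighbors learned earlier: (neighbor BSSID, channel)
    "ap-1": [("ap-2", 6), ("ap-3", 11)],
}

def rssi_level(rssi_dbm):
    """Quantize signal strength into levels 1 (weak) .. 6 (strong)."""
    bounds = (-85, -78, -71, -64, -57)                 # assumed thresholds
    return 1 + sum(rssi_dbm > b for b in bounds)

def channels_to_probe(current_ap, current_rssi_dbm, all_channels=range(1, 12)):
    """Probe only cached neighbor channels once the signal drops low enough;
    otherwise stay, and fall back to a full scan on a cache miss."""
    if rssi_level(current_rssi_dbm) > 2:
        return []                                      # no handoff needed yet
    neighbors = NEIGHBOR_CACHE.get(current_ap)
    return sorted({ch for _, ch in neighbors}) if neighbors else list(all_channels)

print(channels_to_probe("ap-1", -80))   # -> [6, 11]
```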
17

WU, Shao-Kang, and 吳紹康. "An Agent Mechanism and Caching Algorithm for Real-time Multimedia Delivery over 3G Wireless Networks". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/38383194639055912148.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Institute of Computer Science
Academic year 93
It is known that RTP (Real-time Transport Protocol) is a communication protocol that can be used to transmit multimedia, with feedback control about network status offered by RTCP (Real-time Transport Control Protocol). In this article, we design a real-time video streaming control mechanism for QoS in a 3G communication system, and we propose a transcoding-enabled caching algorithm to share the transcoding load of the streaming server. We adopt a Double Feedback Streaming Agent (DFSA) in the transport data flow to transmit feedback control more efficiently over the network's under-utilized bandwidth. The DFSA mechanism is one way to use RTP in a 3G environment while keeping its properties; furthermore, the DFSA is adapted separately to the problems that occur in the 3G core network and on the radio link. We take advantage of a proxy cache mechanism placed at the edge of the network to reduce traffic and delay. This mechanism obtains the current client's connection speed and device capability from the information offered by the DFSA. By implementing the caching algorithm, we obtain the new transcoded media bit-rate and cache it to decrease the system load, so that we can achieve higher and more stable quality of service. For our architecture, we use NS2 to analyze the loss rate and jitter in the RTCP reports. We address related issues of transmitting streaming media in the current 3G environment and draw conclusions.
18

Dias, André Filipe Pinheiro. "QoE over-the-top multimedia over wireless networks". Master's thesis, 2018. http://hdl.handle.net/10773/29108.

Full text
Abstract:
One of the goals of an operator is to improve the Quality of Experience (QoE) of a client in networks where Over-the-top (OTT) content is being delivered. The appearance of services like YouTube, Netflix or Twitch, where in the first case more than 300 hours of video are uploaded to the platform every minute, creates problems for the existing managed data networks, as well as challenges in solving them. Video traffic accounts for 75% of all data transmitted on the Internet. Thus, not only has the Internet become the de facto video transmission path, but overall data traffic also continues to increase exponentially, driven by the desire to consume more content. This thesis presents two model proposals and an architecture that aim to improve the users' quality of experience by predicting in advance the amount of video that can be prefetched, as a way to optimize delivery efficiency where quality of service cannot be guaranteed. Prefetching is done at the cache server closest to the client. For that, an Analytic Hierarchy Process (AHP) is used, in which, through a subjective method of attribute comparison and the application of a weighted function to the measured quality of service metrics, the prefetch amount is obtained. Besides this method, artificial intelligence techniques are also considered. With neural networks, the behavior of OTT networks is self-learned from more than 14,000 hours of video consumption under different quality conditions, in order to estimate the experienced quality and maximize it without degrading normal service delivery. Finally, both methods are evaluated and a proof of concept is carried out with users on a high-speed train.
Master's in Computer and Telematics Engineering
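The AHP-based prefetch decision summarized in the abstract can be sketched as a weighted scoring of QoS metrics; the weights, metric names, and the linear mapping to a prefetch amount below are illustrative assumptions, not values from the dissertation:

```python
# Hedged sketch of an AHP-style weighted QoS score mapped to a prefetch amount.
def prefetch_seconds(qos, weights, max_prefetch=60.0):
    """qos and weights map metric name -> value in [0, 1], where a higher QoS
    value means worse expected delivery conditions; returns seconds to prefetch."""
    score = sum(weights[m] * qos[m] for m in weights)        # weighted QoS score
    return max_prefetch * min(1.0, score)

if __name__ == "__main__":
    weights = {"loss": 0.5, "delay": 0.3, "jitter": 0.2}     # AHP-derived (example)
    qos = {"loss": 0.4, "delay": 0.7, "jitter": 0.2}         # normalized measurements
    print(f"prefetch {prefetch_seconds(qos, weights):.1f} s at the edge cache")
```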
19

(10214267), Chih-Hua Chang. "Optimal Network Coding Under Some Less-Restrictive Network Models". Thesis, 2021.

Search for the full text
Abstract:
Network coding is a critical technique when designing next-generation network systems, since its use can significantly improve the throughput and performance (delay/reliability) of the system. In the traditional design paradigm without network coding, different information flows are transported much like commodity flows, in that the flows are kept separate while being forwarded in the network. However, network coding allows nodes in the network to not only forward packets but also process the incoming information messages, with the goal of improving the throughput, reducing delay, or increasing reliability. Specifically, network coding is a critical tool when designing absolute Shannon-capacity-achieving schemes for various broadcasting and multi-casting applications. In this thesis, we study optimal network coding schemes for some applications with less restrictive network models. A common component of the models/approaches is how to use network coding to take advantage of a broadcast communication channel.

In the first part of the thesis, we consider a system of one server transmitting K information flows, one for each of K users (destinations), through a broadcast packet erasure channel with ACK/NACK. The capacity region of 1-to-K broadcast packet erasure channels with ACK/NACK is known for some scenarios, e.g., K<=3, etc. However, existing achievability schemes with network coding either require knowing the target rate in advance, and/or have a complicated description of the achievable rate region for which it is difficult to prove whether it matches the capacity or not. In this part, we propose a new network coding protocol with the following features: (i) its achievable rate region is identical to the capacity region for all the scenarios in which the capacity is known; (ii) its achievable rate region is much more tractable and has been used to derive new capacity rate vectors; (iii) it employs sequential encoding that naturally handles dynamic packet arrivals; (iv) it automatically adapts to unknown packet arrival rates; (v) it is based on GF(q) with q>=K. Numerically, for K=4, it admits an average control overhead of 1.1% (assuming each packet has 1000 bytes), an average encoding memory usage of 48.5 packets, and an average per-packet delay of 513.6 time slots, when operating at 95% of the capacity.

In the second part, we focus on the coded caching system of one server and K users, where each user k has cache memory size M_k and demands a file among the N files currently stored at the server. The coded caching system consists of two phases. Phase 1, the placement phase: each user accesses the N files and fills its cache memory during off-peak hours. Phase 2, the delivery phase: during peak hours, each user submits his/her own file request and the server broadcasts a set of packets simultaneously to the K users with the goal of successfully delivering the desired packets to each user. Due to the high complexity of the coded caching problem with heterogeneous file sizes and heterogeneous cache memory sizes for arbitrary N and K, prior works focus on solving for the optimal worst-case rate with homogeneous file sizes and mostly on designing order-optimal coded caching schemes with user-homogeneous file popularity that attain the lower bound within a constant factor. In this part, we derive the average rate capacity for the microscopic 2-user/2-file (N=K=2) coded caching problem with heterogeneous file sizes, cache memory sizes, and user-dependent heterogeneous file popularity. The study sheds further insight on the complexity and optimal scheme design of the general coded caching problem with full heterogeneity.

In the third part, we further study the coded caching system of one server, K = 2 users, and N >= 2 files, and focus on the user-dependent file popularity of the two users. In order to approach the exactly optimal uniform average rate of the system, we simplify the file demand popularity to binary outputs, i.e., each user either has no interest (with probability 0) or positive uniform interest (with a constant probability) in each of the N files. Under this model, the file popularity of each user is characterized by his/her file demand set of positive interest in the N files. Specifically, we analyze the case of two users (K = 2). We show exact capacity results for one overlapped file of the two file demand sets for arbitrary N, and for two overlapped files of the two file demand sets for N = 3. To investigate the performance for larger numbers of overlapped files, we also present the average rate capacity under the constraint of selfish and uncoded prefetching, with explicit prefetching schemes that achieve those capacities. All the results allow for arbitrary (and not necessarily identical) user cache capacities and numbers of files in each file demand set.
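A textbook two-user illustration of the broadcast coding opportunity that schemes like the one in the thesis's first part exploit (the thesis itself works over GF(q) with sequential encoding); everything below is a simplified, assumption-laden example rather than the proposed protocol:

```python
# If user 1 is missing packet a (which user 2 overheard) and user 2 is missing
# packet b (which user 1 overheard), broadcasting a XOR b serves both users in
# a single transmission; each user XORs out the packet it already holds.
def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

a = b"packet-for-user-1"
b = b"packet-for-user-2"
coded = xor_packets(a, b)

recovered_by_user1 = xor_packets(coded, b)   # user 1 already overheard b
recovered_by_user2 = xor_packets(coded, a)   # user 2 already overheard a
assert recovered_by_user1 == a and recovered_by_user2 == b
```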