Doctoral dissertations on the topic "Codes for low-latency streaming"
Listed below are 17 doctoral dissertations on the topic "Codes for low-latency streaming".
Goel, Ashvin. "Operating system support for low-latency streaming". Full text open access, 2003. http://content.ohsu.edu/u?/etd,194.
Tay, Kah Keng. "Low-latency network coding for streaming video multicast". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46523.
Includes bibliographical references (p. 95-98).
Network coding has been successfully employed to increase throughput for data transfers. However, coding inherently introduces packet inter-dependencies and adds decoding delays which increase latency. This makes it difficult to apply network coding to real-time video streaming where packets have tight arrival deadlines. This thesis presents FLOSS, a wireless protocol for streaming video multicast. At the core of FLOSS is a novel network code. This code maximizes the decoding opportunities at every receiver, and at the same time minimizes redundancy and decoding latency. Instead of sending packets plainly to a single receiver, a sender mixes in packets that are immediately beneficial to other receivers. This simple technique not only allows us to achieve the coding benefits of increased throughput, it also decreases delivery latency, unlike other network coding approaches. FLOSS performs coding over a rolling window of packets from a video flow, and determines with feedback the optimal set of packet transmissions needed to get video across in a timely and reliable manner. A second important characteristic of FLOSS is its ability to perform both inter- and intra-flow network coding at the same time. Our technique extends easily to support multiple video streams, enabling us to effectively and transparently apply network coding and opportunistic routing to video multicast in a wireless mesh. We devise VSSIM*, an improved video quality metric based on [46]. Our metric addresses a significant limitation of prior art and allows us to evaluate video with streaming errors like skipped and repeated frames. We have implemented FLOSS using Click [22]. Through experiments on a 12-node testbed, we demonstrate that our protocol outperforms both a protocol that does not use network coding and one that does so naively. We show that the improvement in video quality comes from increased throughput, decreased latency and opportunistic receptions from our scheme.
by Kah Keng Tay.
M.Eng.
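As a rough illustration of the rolling-window coding idea described in this abstract, the sketch below greedily XORs packets from the window so that every receiver misses at most one of them and can decode immediately. All names (`window`, `receiver_state`, `pick_coded_packet`) are hypothetical; this is not the actual FLOSS code, which additionally uses receiver feedback and delivery deadlines.

```python
# Illustrative sketch only: greedy opportunistic coding over a rolling window.
from functools import reduce

def xor_packets(packets):
    """Byte-wise XOR of packets, padded to a common length."""
    length = max(len(p) for p in packets)
    padded = [p.ljust(length, b"\x00") for p in packets]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded)

def pick_coded_packet(window, receiver_state):
    """Choose packets from the rolling window so that every receiver is
    missing at most one of them, making the XOR immediately decodable.

    window:         dict seq -> payload (packets still within their deadline)
    receiver_state: dict receiver -> set of seqs that receiver already holds"""
    chosen = []
    for seq in sorted(window):
        trial = chosen + [seq]
        if all(sum(s not in have for s in trial) <= 1
               for have in receiver_state.values()):
            chosen = trial
    if not chosen:
        return [], None
    return chosen, xor_packets([window[s] for s in chosen])
```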
Tafleen, Sana. "Fault Tolerance Strategies for Low-Latency Live Video Streaming". Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13420002.
This paper describes the effect of failures on various video QoS metrics such as delay, packet loss, and recovery time. An SDN network has been used to guarantee reliability and efficient data transmission. Many failures can occur within the SDN mesh network or between the non-SDN and the SDN network. There is a need for both reliable and low-latency transmission of live video streams, especially in situations such as public safety or public gathering events, where everyone is trying to use the limited network at the same time. That leads to oversubscription and network outages, and computing devices may fail. Existing mechanisms built into TCP/IP and video streaming protocols, and fault tolerance strategies (such as buffering), are inadequate given the low-latency and reliability requirements of live streaming, especially with the limited bandwidth and computational power of mobile or edge devices. The objective of this paper is to develop an efficient source-side fault tolerance strategy that produces high-quality video with low latency and low data loss. To recover data lost during failures, a buffering approach is used to store chunks in a buffer and retransmit the lost frames requested by the receiver.
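A minimal sketch of the source-side buffering and retransmission idea mentioned at the end of this abstract, under the assumption of a simple sequence-numbered chunk stream with receiver NACKs (class and method names are illustrative, not from the thesis):

```python
from collections import OrderedDict

class RetransmissionBuffer:
    """Keeps recently sent chunks so that frames reported lost by the receiver
    can be retransmitted from the source side."""
    def __init__(self, capacity=256):
        self.capacity = capacity      # number of recent chunks retained
        self.chunks = OrderedDict()   # seq -> chunk bytes

    def record(self, seq, chunk):
        """Store every chunk as it is sent; evict the oldest beyond capacity."""
        self.chunks[seq] = chunk
        while len(self.chunks) > self.capacity:
            self.chunks.popitem(last=False)

    def on_nack(self, seq):
        """Return the chunk to retransmit, or None if it has already aged out
        of the buffer (too late to be useful for live playback)."""
        return self.chunks.get(seq)
```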
Ben, Yahia Mariem. "Low latency video streaming solutions based on HTTP/2". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0136/document.
Adaptive video streaming techniques enable the delivery of content that is encoded at various levels of quality and split into temporal segments. Before downloading a segment, the client runs an adaptation algorithm to determine the level of quality that best matches the network resources. For immersive video streaming, this adaptation mechanism should also consider the head movement of a user watching the 360° video in order to maximize the quality of the viewed portion. However, this adaptation may suffer from errors, which impact the end user's quality of experience. In this case, an HTTP/1 client must wait for the download of the next segment to choose a suitable quality. In this thesis, we propose to use the HTTP/2 protocol instead to address this problem. First, we focus on live video streaming. We design a strategy that discards video frames when the bandwidth is highly variable, so as to avoid rebuffering events and the accumulation of delays. The client requests each video frame in a dedicated HTTP/2 stream, which makes it possible to control the delivery of frames by leveraging HTTP/2 features at the level of that stream. In addition, we use the priority and stream-reset features of HTTP/2 to optimize the delivery of immersive videos. We propose a strategy that benefits from the improvement of the prediction of the user's head movements over time. The results show that HTTP/2 makes it possible to optimize the use of network resources and to adapt to the latencies required by each service.
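The frame-discarding strategy can be pictured with a small decision function: each video frame is requested on its own HTTP/2 stream, and a frame whose download cannot finish before its playout deadline is cancelled (via a stream reset) instead of stalling playback. This is only a sketch of the decision logic under an assumed throughput estimate, not the protocol machinery from the thesis.

```python
import time

def schedule_frame(frame_size_bytes, deadline, bandwidth_bps, now=None):
    """Return 'request' if the frame can plausibly arrive before its playout
    deadline at the estimated throughput, otherwise 'discard' (the client
    would reset that frame's HTTP/2 stream and skip the frame)."""
    now = time.time() if now is None else now
    transfer_time = 8.0 * frame_size_bytes / max(bandwidth_bps, 1.0)
    return "request" if now + transfer_time <= deadline else "discard"
```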
Tideström, Jakob. "Investigation into low latency live video streaming performance of WebRTC". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249446.
Since WebRTC is intended for peer-to-peer real-time communication, it has the ability to stream video with low latency. This thesis exploits that ability to stream live video in a client-server scenario. With a setup comprising a local sender, a server, and a client, a static video file is streamed as live video. The performance is compared with how the contemporary live-streaming technologies HTTP Live Streaming and Dynamic Adaptive Streaming over HTTP stream the same content. The conclusion is that WebRTC achieves lower latency than both of the other technologies, but without a relatively large amount of fine-tuning the quality of the stream deteriorates.
Bhat, Amit. "Low-latency Estimates for Window-Aggregate Queries over Data Streams". PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/161.
Gazi, Orhan. "Parallelized Architectures For Low Latency Turbo Structures". PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608110/index.pdf.
Sonono, Tofik. "Interoperable Retransmission Protocols with Low Latency and Constrained Delay : A Performance Evaluation of RIST and SRT". Thesis, KTH, Kommunikationssystem, CoS, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254897.
In the media industry, there is a demand for equipment that offers a degree of interoperability. The reason is that someone who buys products from a particular vendor does not want to be locked into that vendor's "ecosystem" for years to come. Since a studio rarely upgrades its entire production chain at once, interoperability makes it possible to buy equipment from other vendors when something in the production line needs upgrading. This leads to a more competitive market and provides incentives for new, innovative solutions. This thesis evaluates solutions developed to promote interoperability and compares them with an existing proprietary solution. Reliable Internet Stream Transport (RIST) and Secure Reliable Transport (SRT) are two protocols developed for precisely this purpose. The challenge in evaluating these protocols is obtaining results in a lab environment that reflect how the protocols are used in practice. This has been done with the help of a program developed in this thesis, with which the testing could be automated. The results show potential in both RIST and SRT. In some scenarios, SRT is even better than the proprietary solution. The protocols show somewhat buggy behaviour in certain instances, such as occasionally ceasing to work and not being able to return to normal operation without manual intervention. All in all, however, in most of the cases tested in this thesis the protocols are an acceptable alternative to the proprietary solution they were compared with.
Lai, Hsu-Te, and 賴旭德. "Low Latency and Efficient Packet Scheduling for Streaming Applications". Thesis, 2003. http://ndltd.ncl.edu.tw/handle/30454995901658632809.
國立中央大學
資訊工程研究所
91
Adequate bandwidth allocation and strict delay requirements are critical for real-time applications. Packet scheduling algorithms such as Class Based Queueing (CBQ) and Nested Deficit Round Robin (Nested-DRR) are designed to ensure the bandwidth reservation function. However, they might cause unsteady packet latencies and introduce extra application handling overhead, such as allocating a large buffer for playing the media stream. High and unstable packet latency might jeopardize the corresponding quality of service, since real-time applications prefer low playback latency. Existing scheduling algorithms that keep packet latency stable require knowing the details of individual flows. GPS (Generalized Processor Sharing)-like algorithms do not consider the real behavior of a stream: a real stream is not perfectly smooth after being forwarded by routers, and GPS-like algorithms introduce extra delay on such a stream. This thesis presents an algorithm called LLEPS, which provides low-latency and efficient packet scheduling for streaming applications.
Huang, Ting-Chun, and 黃亭鈞. "Realizing Low Latency Real-Time Video Streaming Service with TCP". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/34440272109505633081.
國立臺灣海洋大學
資訊工程學系
103
Most real-time video streams are delivered using UDP. Compared with TCP, UDP does not suffer from the head-of-line blocking effect, and therefore its performance does not drop dramatically due to packet losses. However, UDP does not offer a reliable packet delivery service, and it may not work in certain network setups involving traffic shaping, firewalls, and NAT. Researchers have attempted to solve the aforementioned problem using SCTP. However, the performance of SCTP for real-time video streaming is not clear, and it is not built into most off-the-shelf operating systems, including both desktop and mobile OSes. As a result, it may not be a good choice for demanding real-time multimedia streaming applications such as cloud gaming and video surveillance. Based on this observation, we propose a real-time video streaming protocol design based on TCP, called the multiple-flow TCP model. In this model, we leverage concurrent TCP flows to deliver multimedia streams. In addition to gaining the benefit of reliable packet delivery, the performance drop caused by packet losses can be mitigated, thereby improving the overall throughput. Our evaluation shows that the multiple-flow TCP model has performance similar to UDP while offering the benefits of TCP and SCTP. We further conduct user studies to understand real user experiences with the proposed model. They also show that the multiple-flow TCP model can perform better than TCP and SCTP in terms of real-timeliness and video quality.
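A rough sketch of the multiple-flow idea: frames are striped over several concurrent TCP connections with a small header for reordering, so that a loss stalling one connection does not block frames travelling on the others. Host, port, and framing details are assumptions for illustration, not the thesis's wire format.

```python
import socket
import struct

def open_flows(host, port, n_flows=4):
    """Open several concurrent TCP connections to the same streaming server."""
    return [socket.create_connection((host, port)) for _ in range(n_flows)]

def send_frame(flows, frame_id, payload):
    """Send each frame on one flow (round-robin) with a length + id header so
    the receiver can reassemble frames arriving on different connections."""
    sock = flows[frame_id % len(flows)]
    header = struct.pack("!IQ", len(payload), frame_id)
    sock.sendall(header + payload)
```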
Li, Chun-Hsiao, and 李純孝. "A low video latency feedback mechanism for SVC streaming in WiMedia MAC". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/77666207085930133827.
國立交通大學
電子工程系所
97
Scalable Video Coding (SVC) is a video compression technology that provides flexible bitstream extraction according to the requirements of the application and the network bandwidth. WiMedia Ultra-WideBand is a wireless communication technique characterized by low power and high data rate, making it suitable for Wireless Personal Area Networks (WPANs) and home networks. Through cross-layer design, wireless multimedia transmission over short distances becomes not only possible but also more flexible and efficient. In this thesis, we propose a cross-layer architecture, including a feedback scheme on the Medium Access Control (MAC) layer of the transmitter, that takes advantage of the characteristics of SVC and WiMedia. By monitoring bandwidth fluctuation at the MAC layer, we can estimate the available bandwidth of the next period and dynamically adjust the feedback extraction bitrate, taking into account the video latency in the decoder buffer, in order to reduce the impact of variations in network bandwidth and channel conditions on real-time video communication. Finally, with the proposed architecture and mechanism, we simulate the video latency, bandwidth utilization, and buffer condition at the system level, with SVC over WiMedia as our application.
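The feedback loop can be summarised as choosing the highest SVC extraction bitrate that the bandwidth predicted by the MAC layer, and the latency already queued in the decoder buffer, can tolerate. The sketch below is an assumed simplification; the thesis's actual estimator and buffer model may differ.

```python
def choose_extraction_bitrate(layer_bitrates_bps, predicted_bw_bps,
                              buffer_occupancy_s, target_latency_s):
    """Pick the highest cumulative SVC layer bitrate that fits the predicted
    bandwidth, leaving headroom to drain the decoder buffer whenever it
    already holds more video than the latency target allows."""
    if buffer_occupancy_s <= target_latency_s:
        headroom = 1.0
    else:
        headroom = max(0.5, target_latency_s / buffer_occupancy_s)
    budget = predicted_bw_bps * headroom
    feasible = [r for r in sorted(layer_bitrates_bps) if r <= budget]
    return feasible[-1] if feasible else min(layer_bitrates_bps)
```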
Chen, Zhi-Zhan, and 陳志展. "A Low Video Latency Feedback Scheme for SVC Streaming in WIMAX MAC". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/48363563832984271771.
國立交通大學
電子工程系所
97
Scalable video coding (SVC) is a video compression technique that provides flexible bitstream extraction according to device types and network bandwidth, and WiMAX is a new wireless communication technique. Combining the two techniques to transmit multimedia information is an important future application. Our research goal is to reduce the impact of variations in network bandwidth and channel conditions on real-time video communication based on these newly developed techniques. In this thesis, we propose a cross-layer architecture between the SVC system and the WiMAX MAC, and we also propose a mechanism to decide the bitrate of the extracted SVC bitstream. According to the latest bandwidth information, this mechanism decides a proper bitrate to fit the bandwidth of a coming period and then feeds this bitrate back to the SVC extractor, so that the extractor can extract an SVC bitstream of the proper size to transmit. Finally, with the proposed architecture and mechanism, we achieve good performance in video latency, bandwidth utilization, and buffer condition during real-time video transport.
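The bandwidth-feedback step described here could, for example, be driven by a simple exponentially weighted moving average of the bandwidth measured by the MAC layer, with the estimate fed back to the SVC extractor; this is an assumed illustration, not the estimator used in the thesis.

```python
class BandwidthPredictor:
    """EWMA estimate of the bandwidth available in the coming period, updated
    from MAC-layer measurements; the result is fed back to the SVC extractor
    to pick the bitrate of the extracted bitstream."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate_bps = None

    def update(self, measured_bps):
        if self.estimate_bps is None:
            self.estimate_bps = float(measured_bps)
        else:
            self.estimate_bps = (self.alpha * measured_bps
                                 + (1.0 - self.alpha) * self.estimate_bps)
        return self.estimate_bps
```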
HSU, CHI YAO, and 許智堯. "Design and Implementation of a Multiple Video Streaming System for Low Latency P2P Architecture". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/55670359228488949456.
國立清華大學
資訊工程學系
100
With the advancement of technology, more and more network applications have been implemented to satisfy users' requirements. The online meeting system is one of these applications. It provides a network platform for attendees to interact with each other, and viewers can watch the meeting as well. However, when the number of viewers grows, the server needs more output bandwidth to support the stream transfer; thus, sharing video in an online meeting system can be a huge burden on the server. In this thesis, we propose a P2P-based live broadcasting system for the online meeting system. By using P2P technology for live broadcasting, we can reduce the bandwidth consumption, decrease the server's load, and make large-scale video broadcasting an inexpensive service. With this system, the video streams in the meeting room are transferred together to each viewer, and users can watch these video streams on the web page. The system also provides a rescue mechanism to handle the case in which peers receive an insufficient input stream due to network congestion, ensuring that the system maintains high performance. In addition, we implement this system and design experiments to test its performance. During the experiments, we simulate environments that trigger the rescue mechanism. The experimental results are recorded and analyzed to provide further suggestions for the service.
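The rescue mechanism can be pictured as a simple trigger on the receiving peer: when the inbound stream rate cannot sustain playback and the local buffer is nearly empty, the peer asks the server (or another peer) for the missing chunks. Thresholds and names below are illustrative assumptions, not the thesis's exact rules.

```python
def needs_rescue(received_bps, playback_bps, buffered_seconds,
                 min_buffer_seconds=2.0):
    """Trigger a rescue when the inbound rate cannot sustain playback and the
    local buffer is close to running dry."""
    return received_bps < playback_bps and buffered_seconds < min_buffer_seconds

def build_rescue_request(missing_chunk_ids, rescue_source):
    """Ask the rescue source (the server or a peer with spare upload capacity)
    for the chunks that the regular parents failed to deliver in time."""
    return {"to": rescue_source, "chunks": sorted(missing_chunk_ids)}
```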
Lu, Ya Cheng, and 呂亞正. "Novel Architecture Design of Low-Latency Decoders and Two-Tier Coding for Turbo Codes". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/49352439643214940710.
長庚大學
電機工程學系
98
The transmission of information from the source to its destination has to be done in such a way that the content and quality of the received information are as close as possible to those of the transmitted information. Thus, there has been an increasing demand for efficient and reliable communication systems. The major concern of a communication system is to minimize the error probability at the receiver end by making efficient use of power and bandwidth resources, while keeping the system complexity reasonable to implement. In 1993, a breakthrough in error control coding was the invention of turbo codes by Berrou et al., which allow communication systems to operate close to the Shannon limit. Turbo coding is based on the combination of two recursive systematic convolutional (RSC) codes, an interleaver, and soft-input soft-output (SISO) iterative decoding using the MAP algorithm. However, the complex computation and long latency of turbo decoding make turbo codes impractical not only in hardware implementation but also in some applications. This dissertation investigates new iterative decoding techniques applied to turbo coding schemes. These techniques are used to approach channel capacity with lower complexity and latency. After reviewing the fundamentals of coding for error control in communication systems and the important concepts of turbo coding, we present three low-latency decoding architectures. First, we propose a systolic sliding window (SSW) scheme and VLSI architecture based on the Log-MAP algorithm. The proposed low-latency SSW scheme reduces the decoding delay of a MAP decoder from 2N to 2L (N and L are the lengths of the frame and the sliding window, respectively). Simulation results and further implementation issues, such as metric normalization and data length, are also explained. Second, we propose a concurrent decoding algorithm which reduces the decoding delay per iteration from 4N to 2N. Based on the concurrent algorithm, we design the architecture of the concurrent turbo decoder (CTD) using only a single MAP decoder. The CTD approximately halves the decoding latency while offering comparable BER performance. Furthermore, we propose two parallel turbo decoder (PTD) schemes based on the concurrent algorithm, which can perform decoding computations for all component codes concurrently. Additionally, because the decoding processes corresponding to different component codes run concurrently, the (de)interleaving delay is eliminated. Thus, with a K-level parallel scheme (K is the number of CTDs), the PTD obtains an iterative delay reduction by a factor of 1/(2K), whereas that of other existing parallel turbo decoding architectures is only 1/K. Finally, to improve the BER performance of general turbo codes, we propose a two-tier turbo coding scheme and a modified MAP algorithm. According to the simulation results, the new coding scheme achieves about 1.0 dB of additional coding gain compared to the general turbo decoding scheme at a BER of 10^-6 with a frame length of 8192. Compared with other related approaches, the proposed turbo coding architectures have lower complexity, lower latency, and a more regular structure, and are therefore well suited to VLSI implementation.
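The latency figures quoted in this abstract can be collected into a short summary (N: frame length, L: sliding-window length, K: number of concurrent component decoders); the notation below is ours, not the dissertation's:

```latex
\begin{align*}
  \text{SSW MAP decoder delay:}                 \quad & 2N \;\longrightarrow\; 2L \\
  \text{Concurrent decoding, delay per iteration:} \quad & 4N \;\longrightarrow\; 2N \\
  \text{$K$-level PTD iterative delay factor:}  \quad & \tfrac{1}{2K}
     \quad \text{(vs.\ } \tfrac{1}{K} \text{ for conventional parallel decoders)}
\end{align*}
```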
Chen, Chih-Yen, and 陳致諺. "A Low-Latency and High-Scalability Real-time Streaming System for E-learning Purpose using WebRTC". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/jd37f8.
國立中央大學
資訊工程學系
103
With the rapid development of network communication technology and the growth of hardware capacity in recent years, the content which users can share over the network has become more abundant. Multimedia applications have gone deeply into people's lives, including education. E-learning seems to be an effective way for students to acquire knowledge, but studies have shown that e-learning also comes with some issues, such as the difficulty of guaranteeing learning efficiency when students study by themselves. In order to guarantee learning efficiency, in this thesis we combine WebRTC, content delivery networks, and a WebRTC-supported open-source streaming server, and propose an online real-time classroom. In this scenario, students log in to the website and attend the class at the scheduled time, and their problems can be solved by having a real-time video discussion with the teacher. In addition to providing a good e-learning model for students, this system also addresses the shortage of teachers, allowing one teacher to teach thousands of students at a given time. We believe that students can obtain a good learning experience and learn in a more effective way through their learning processes.
Chen, Wei-zhi, and 陳韋志. "Enhanced Harmonic Proportional Bandwidth Allocation Strategy with High Utilization and Low Latency Properties for Streaming Servers". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/46602360241271405617.
國立成功大學
資訊工程學系碩博士班
95
More and more people like watching multimedia files over the network. Given clients' different demands or different service charges, streaming servers must provide different levels of quality of service (QoS) to different clients rather than providing the same service quality to all. Real-time transcoding technology is useful in providing such differentiated service: streaming servers can choose to degrade the quality of service instead of rejecting clients' requests when there is a burst of requests. Even with differentiated service in the streaming server, it is possible that a client who requests the service early gets served while a client who requests it late does not, because most bandwidth allocation strategies allocate bandwidth to clients without any reservation. Additionally, it is impossible to predict when a client will access the service, so to avoid this problem the server usually limits the bandwidth used in each time interval; for example, the server allocates fixed bandwidth over a fixed time for clients who request the service during that period, to guarantee them service at any time. But one problem remains for this method: not every client uses the service completely. Some may finish the service early due to factors such as user preference or losing the connection with the server. This releases bandwidth and decreases the total bandwidth utilization of the streaming server. This paper proposes an enhanced harmonic proportional bandwidth allocation strategy to reuse the released bandwidth. Our strategy uses one part of the released bandwidth (i.e., β) to raise the quality of service of clients and thereby increase total bandwidth utilization, and uses the other part (i.e., 1-β) to reduce the delay time of clients by serving more clients who would not be allowed to be served originally, where β is between 0 and 1, inclusive. In the simulation, we discuss which value of β is better when simultaneously considering the two factors of total bandwidth utilization and client delay time. Furthermore, the simulation validates that the proposed strategy obtains high utilization and low latency for streaming servers regardless of whether the probability of clients leaving the service is high or low, even under different request scheduling policies.
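The β split of released bandwidth can be sketched as below: a fraction β of the bandwidth freed by departing clients upgrades active sessions, and the remaining 1-β admits waiting clients so their delay shrinks. Field names and the greedy order are illustrative assumptions, not the harmonic proportional details of the thesis.

```python
def redistribute_released_bandwidth(released_bps, beta,
                                    active_sessions, waiting_queue):
    """Split released bandwidth: beta goes to quality upgrades of active
    sessions, (1 - beta) goes to admitting waiting clients. 0 <= beta <= 1."""
    upgrade_pool = beta * released_bps
    admit_pool = (1.0 - beta) * released_bps

    # Raise active sessions toward their full-quality bitrate.
    for session in active_sessions:
        want = max(session["full_bps"] - session["current_bps"], 0.0)
        grant = min(want, upgrade_pool)
        session["current_bps"] += grant
        upgrade_pool -= grant

    # Admit waiting clients at their minimum acceptable bitrate.
    admitted = []
    while waiting_queue and waiting_queue[0]["min_bps"] <= admit_pool:
        client = waiting_queue.pop(0)
        admit_pool -= client["min_bps"]
        admitted.append(client)
    return admitted
```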
CHEN, ZE-LIN, and 陳澤霖. "Implementation and Design of Low-Switching Latency Buffering Mechanism for Adaptive Streaming in Cloud-based Multimedia Transcoding System". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/s32vc8.
國立中正大學
通訊工程研究所
105
In recent years, intelligent terminals and the internet have been used widely, and the demand for watching video over the internet is increasing. To serve this demand, multimedia streaming technologies have been developed, and multimedia streaming services and applications, such as live TV and long-distance e-learning, are being deployed rapidly. Thus, multimedia streaming is one of the currently popular technologies. HTTP (Hypertext Transfer Protocol) is one of today's main application protocols. For adaptive streaming over HTTP, many companies have developed their own adaptive streaming solutions, such as Adobe, Microsoft, Apple, and MPEG-DASH (Dynamic Adaptive Streaming over HTTP) of the ISO standard. The client can dynamically select an adequate bit rate for multimedia streaming according to the network conditions or the restrictions imposed by users. In this thesis, MPEG-DASH and HTTP are used in our system. Based on our previous research results, we propose a mechanism called the Dual-Playout Buffer (DPB), which maintains the fluency of video streaming according to the user's network environment. By using two buffers to manage different qualities of video, we can switch between buffers to reduce the reaction time of quality switching according to the conditions of the user's network.
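A minimal sketch of the dual-buffer idea, assuming hypothetical names: segments of the currently played quality and of a candidate quality are buffered side by side, so switching quality only changes which buffer the player drains rather than waiting for new segments to download.

```python
from collections import deque

class DualPlayoutBuffer:
    """Two playout buffers: one for the quality being played, one prefetching
    a candidate quality so a switch can take effect with little delay."""
    def __init__(self):
        self.playing = deque()   # segments of the active quality
        self.standby = deque()   # prefetched segments of the other quality

    def push_playing(self, segment):
        self.playing.append(segment)

    def push_standby(self, segment):
        self.standby.append(segment)

    def switch_quality(self):
        """Switch playback to the standby quality; its segments are already
        buffered, so no extra download is needed at the switch point."""
        self.playing, self.standby = self.standby, deque()

    def next_segment(self):
        return self.playing.popleft() if self.playing else None
```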