A ready-made bibliography on the topic "360 VIDEO STREAMING"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


Browse lists of current articles, books, theses, conference abstracts, and other scholarly sources on the topic "360 VIDEO STREAMING".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the annotation of the work online, if the relevant parameters are available in its metadata.

Journal articles on the topic "360 VIDEO STREAMING"

1

Bradis, Nikolai Valerievich. "360 Video Protection and Streaming". International Journal of Advanced Trends in Computer Science and Engineering 8, no. 6 (15.12.2019): 3289–96. http://dx.doi.org/10.30534/ijatcse/2019/99862019.

2

Jeong, JongBeom, Dongmin Jang, Jangwoo Son and Eun-Seok Ryu. "3DoF+ 360 Video Location-Based Asymmetric Down-Sampling for View Synthesis to Immersive VR Video Streaming". Sensors 18, no. 9 (18.09.2018): 3148. http://dx.doi.org/10.3390/s18093148.

Abstract:
Recently, with the increasing demand for virtual reality (VR), experiencing immersive content with VR has become easier. However, a tremendous amount of computation and bandwidth is required when processing 360 videos. Moreover, additional information such as the depth of the video is required to enjoy stereoscopic 360 content. Therefore, this paper proposes an efficient method of streaming high-quality 360 videos. To reduce the bandwidth when streaming and synthesizing 3DoF+ 360 videos, which support limited movements of the user, a suitable down-sampling ratio and quantization parameter are derived from an analysis of the bitrate versus peak signal-to-noise ratio curve. High-Efficiency Video Coding (HEVC) is used to encode and decode the 360 videos, and the view synthesizer produces the intermediate view, providing the user with an immersive experience.
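
The parameter selection this abstract describes boils down to picking, from measured rate-distortion points, the (down-sampling ratio, QP) pair that maximizes quality within a bandwidth budget. A minimal Python sketch of that selection step follows; the function name, data layout and all numbers are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of R-D-driven parameter selection, not the authors' code.
# Given measured (bitrate, PSNR) points per (down-sampling ratio, QP) pair,
# pick the best-quality configuration that fits the bandwidth budget.

def select_config(measurements, budget_kbps):
    """measurements: dict mapping (ds_ratio, qp) -> (bitrate_kbps, psnr_db)."""
    feasible = {cfg: (rate, psnr) for cfg, (rate, psnr) in measurements.items()
                if rate <= budget_kbps}
    if not feasible:
        return None  # nothing fits; the caller must lower the quality target
    # Maximise PSNR; break ties by preferring the lower bitrate.
    return max(feasible, key=lambda cfg: (feasible[cfg][1], -feasible[cfg][0]))

# Hypothetical measurements for one 3DoF+ view (values invented for illustration).
rd_points = {
    (1.0, 22): (48000, 41.2), (1.0, 27): (26000, 38.9),
    (0.5, 22): (19000, 37.5), (0.5, 27): (11000, 35.8),
}
print(select_config(rd_points, budget_kbps=25000))  # -> (0.5, 22)
```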
3

Nguyen, Anh, and Zhisheng Yan. "Enhancing 360 Video Streaming through Salient Content in Head-Mounted Displays". Sensors 23, no. 8 (15.04.2023): 4016. http://dx.doi.org/10.3390/s23084016.

Abstract:
Predicting where users will look inside head-mounted displays (HMDs) and fetching only the relevant content is an effective approach for streaming bulky 360 videos over bandwidth-constrained networks. Despite previous efforts, anticipating users’ fast and sudden head movements is still difficult because there is a lack of clear understanding of the unique visual attention in 360 videos that dictates the users’ head movement in HMDs. This in turn reduces the effectiveness of streaming systems and degrades the users’ Quality of Experience. To address this issue, we propose to extract salient cues unique in the 360 video content to capture the attentive behavior of HMD users. Empowered by the newly discovered saliency features, we devise a head-movement prediction algorithm to accurately predict users’ head orientations in the near future. A 360 video streaming framework that takes full advantage of the head movement predictor is proposed to enhance the quality of delivered 360 videos. Practical trace-driven results show that the proposed saliency-based 360 video streaming system reduces the stall duration by 65% and the stall count by 46%, while saving 31% more bandwidth than state-of-the-art approaches.
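
The core idea here, predicting near-future head orientation by biasing motion extrapolation toward salient content, can be illustrated with a toy predictor. This sketch assumes simple linear extrapolation and a fixed blending weight; the paper's actual saliency features and prediction algorithm are more sophisticated.

```python
import math

def predict_yaw(yaw_history, salient_yaws, horizon_s, dt_s, pull=0.3):
    """Extrapolate head yaw and pull it toward the nearest salient direction.

    yaw_history: recent yaw samples in degrees, sampled every dt_s seconds.
    salient_yaws: yaw angles (degrees) of salient regions in the current frame.
    pull: 0..1 blending weight toward saliency (a free parameter here).
    """
    velocity = (yaw_history[-1] - yaw_history[-2]) / dt_s      # deg/s
    extrapolated = yaw_history[-1] + velocity * horizon_s

    def ang_diff(a, b):  # signed angular difference on the circle
        return (a - b + 180.0) % 360.0 - 180.0

    nearest = min(salient_yaws, key=lambda s: abs(ang_diff(s, extrapolated)))
    return (extrapolated + pull * ang_diff(nearest, extrapolated)) % 360.0

print(predict_yaw([10.0, 14.0], salient_yaws=[60.0, 300.0],
                  horizon_s=1.0, dt_s=0.1))  # -> 55.8
```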
4

Fan, Ching-Ling, Wen-Chih Lo, Yu-Tung Pai and Cheng-Hsin Hsu. "A Survey on 360° Video Streaming". ACM Computing Surveys 52, no. 4 (18.09.2019): 1–36. http://dx.doi.org/10.1145/3329119.

5

Wong, En Sing, Nur Haliza Abdul Wahab, Faisal Saeed and Nouf Alharbi. "360-Degree Video Bandwidth Reduction: Technique and Approaches Comprehensive Review". Applied Sciences 12, no. 15 (28.07.2022): 7581. http://dx.doi.org/10.3390/app12157581.

Abstract:
Recently, the use of 360-degree videos has spread across various sectors such as education, real estate, medicine and entertainment. The development of the virtual world "Metaverse" demands a Virtual Reality (VR) environment with high immersion and a smooth user experience. However, real-time streaming faces various challenges arising from the nature of high-resolution 360-degree videos, such as high bandwidth requirements, high computing power and low delay tolerance. To overcome these challenges, streaming methods such as Dynamic Adaptive Streaming over HTTP (DASH), tiling, viewport-adaptive streaming and Machine Learning (ML) are discussed. Moreover, the advantages of the developing 5G and 6G networks, Mobile Edge Computing (MEC) and caching, and Information-Centric Networking (ICN) approaches for optimizing 360-degree video streaming are elaborated. All of these methods strive to improve the Quality of Experience (QoE) and Quality of Service (QoS) of VR services. Next, the challenges faced in QoE modeling and the existing objective and subjective QoE assessment methods for 360-degree video are presented. Lastly, potential future research that utilizes and substantially improves the existing methods is discussed. With the efforts of various research studies and industries and the gradual development of networks in recent years, a highly immersive virtual world, the "Metaverse", conducive to everyday working, learning and socializing, is around the corner.
6

Garcia, Henrique D., Mylène C. Q. Farias, Ravi Prakash and Marcelo M. Carvalho. "Statistical characterization of tile decoding time of HEVC-encoded 360° video". Electronic Imaging 2020, no. 9 (26.01.2020): 285–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-285.

Abstract:
In this paper, we present a statistical characterization of the tile decoding time of 360° videos encoded via HEVC, considering different tiling patterns and quality levels (i.e., bitrates). In particular, we present results for probability density function estimation of tile decoding time based on a series of experiments carried out over a set of 360° videos with different spatial and temporal characteristics. Additionally, we investigate the extent to which tile decoding time is correlated with tile bitrate (at chunk level), so that DASH-based video streaming can make use of such information to infer tile decoding time. The results of this work may help in the design of queueing or control theory-based adaptive bitrate (ABR) algorithms for 360° video streaming.
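
For readers who want to reproduce this kind of characterization on their own traces, the two measurements the abstract mentions, a density fit for tile decoding time and its chunk-level correlation with bitrate, look roughly like this in Python. The data are synthetic stand-ins, and the lognormal family is an illustrative choice, not one taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in data: per-chunk tile bitrates (kbps) and decoding times (ms).
# In the paper these come from instrumented HEVC decoding experiments.
bitrate = rng.uniform(500, 8000, size=200)
decode_ms = 2.0 + 0.004 * bitrate + rng.normal(0, 1.5, size=200)

# Probability-density characterisation: fit a lognormal (illustrative choice).
shape, loc, scale = stats.lognorm.fit(decode_ms)
print(f"lognormal fit: shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")

# How strongly is decoding time correlated with bitrate at chunk level?
r, p = stats.pearsonr(bitrate, decode_ms)
print(f"Pearson r = {r:.3f} (p = {p:.2e})")
```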
7

Nguyen, Dien, Tuan Le, Sangsoon Lee and Eun-Seok Ryu. "SHVC Tile-Based 360-Degree Video Streaming for Mobile VR: PC Offloading Over mmWave". Sensors 18, no. 11 (1.11.2018): 3728. http://dx.doi.org/10.3390/s18113728.

Abstract:
360-degree video streaming for high-quality virtual reality (VR) is challenging for current wireless systems because of the huge bandwidth it requires. However, millimeter wave (mmWave) communications in the 60 GHz band have gained considerable interest from industry and academia because they promise gigabit wireless connectivity in a huge unlicensed bandwidth (i.e., up to 7 GHz). This massive unlicensed bandwidth offers great potential for addressing the demands of 360-degree video streaming. This paper investigates the problem of 360-degree video streaming for mobile VR using SHVC, the scalable extension of the High-Efficiency Video Coding (HEVC) standard, and PC offloading over 60 GHz networks. We present a conceptual architecture based on advanced tiled SHVC and mmWave communications. This architecture comprises two main parts: (1) tile-based SHVC for 360-degree video streaming and optimized parallel decoding; and (2) a Personal Computer (PC) offloading mechanism for transmitting uncompressed video (viewport only). The experimental results show that our tile extractor method reduces the bandwidth required for 360-degree video streaming by more than 47%, and the tile partitioning mechanism improves decoding time by up to 25%. The PC offloading mechanism was also successful in offloading 360-degree decoded (or viewport-only) video to mobile devices using mmWave communication and the proposed transmission schemes.
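
The tile-extraction step of tiled streaming, deciding which tiles intersect the current viewport so that only they receive the enhancement layer, can be sketched as follows. Uniform column tiling of an equirectangular frame is an assumption made for illustration; the paper's partitioning is more elaborate.

```python
def viewport_tiles(yaw_deg, hfov_deg, n_cols):
    """Which tile columns of an equirectangular frame intersect the viewport?
    A sketch of the extraction step in tiled (SHVC) streaming: only these
    columns need the enhancement layer; the rest can stay at the base layer."""
    col_width = 360.0 / n_cols
    left = (yaw_deg - hfov_deg / 2.0) % 360.0   # left edge of the viewport
    cols, covered = set(), 0.0
    while covered < hfov_deg:                    # walk across the viewport
        cols.add(int(((left + covered) % 360.0) // col_width))
        covered += col_width
    cols.add(int(((left + hfov_deg) % 360.0) // col_width))  # right edge
    return sorted(cols)

print(viewport_tiles(yaw_deg=0, hfov_deg=100, n_cols=8))  # -> [0, 1, 6, 7]
```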
8

Chen, Xiaolei, Di Wu and Ishfaq Ahmad. "Optimized viewport-adaptive 360-degree video streaming". CAAI Transactions on Intelligence Technology 6, no. 3 (3.03.2021): 347–59. http://dx.doi.org/10.1049/cit2.12011.

9

Podborski, Dimitri, Emmanuel Thomas, Miska M. Hannuksela, Sejin Oh, Thomas Stockhammer and Stefan Pham. "360-Degree Video Streaming with MPEG-DASH". SMPTE Motion Imaging Journal 127, no. 7 (August 2018): 20–27. http://dx.doi.org/10.5594/jmi.2018.2838779.

10

Peng, Shuai, Jialu Hu, Han Xiao, Shujie Yang and Changqiao Xu. "Viewport-Driven Adaptive 360° Live Streaming Optimization Framework". Journal of Networking and Network Applications 1, no. 4 (January 2022): 139–49. http://dx.doi.org/10.33969/j-nana.2021.010401.

Abstract:
Virtual reality (VR) video streaming and 360° panoramic video have received extensive attention in recent years, as they can bring users an immersive experience. However, the ultra-high bandwidth and ultra-low latency requirements of virtual reality video or 360° panoramic video also put tremendous pressure on the carrying capacity of the current network. In fact, since the user's field of view (a.k.a. viewport) is limited when watching a panoramic video and users can only watch about 20–30% of the video content, it is not necessary to transmit all of the high-resolution content to the user. Therefore, predicting the user's future viewing viewport is crucial for selective streaming and further bitrate decisions. Combined with a tile-based adaptive bitrate (ABR) algorithm for panoramic video, video content within the user's viewport can be transmitted at a higher resolution, while areas outside the viewport can be transmitted at a lower resolution. This paper proposes a viewport-driven adaptive 360° live streaming optimization framework, which combines viewport prediction and an ABR algorithm to optimize the transmission of live 360° panoramic video. However, existing viewport prediction suffers from low prediction accuracy and does not support real-time operation. Exploiting the advantages of convolutional neural networks (CNN) in image processing and long short-term memory (LSTM) in temporal sequence processing, we propose an online-updated viewport prediction model called LiveCL, which mainly utilizes a CNN to extract the spatial characteristics of video frames and an LSTM to learn the temporal characteristics of the user's viewport trajectories. With the help of the viewport prediction and ABR algorithms, unnecessary bandwidth consumption can be effectively reduced. The main contributions of this work include: (1) a framework for 360° video transmission is proposed; (2) an online real-time viewport prediction model called LiveCL is proposed to optimize 360° video transmission combined with a novel ABR algorithm, and it outperforms existing models. Based on a public 360° video dataset, the tile accuracy, recall, precision, and frame accuracy of LiveCL are better than those of the latest models. Combined with related adaptive bitrate algorithms, the proposed viewport prediction model can reduce the transmission bandwidth by about 50%.
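
The CNN+LSTM structure that LiveCL's abstract describes, per-frame spatial features fused with past viewport coordinates in a recurrent model, can be outlined in a few lines of PyTorch. Layer sizes, inputs and the two-value (yaw, pitch) output are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class ViewportPredictor(nn.Module):
    """Toy CNN+LSTM in the spirit of LiveCL: CNN features from each frame's
    content map, fused with past viewport coordinates by an LSTM.
    Layer sizes are arbitrary; this is not the authors' architecture."""

    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B*T, 32, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=32 + 2, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 2)  # predicted (yaw, pitch), normalised

    def forward(self, frames, viewports):
        # frames: (B, T, 3, H, W); viewports: (B, T, 2) past (yaw, pitch)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1).view(b, t, 32)
        out, _ = self.lstm(torch.cat([feats, viewports], dim=-1))
        return self.head(out[:, -1])  # next-step viewport estimate

model = ViewportPredictor()
pred = model(torch.randn(4, 8, 3, 64, 128), torch.randn(4, 8, 2))
print(pred.shape)  # torch.Size([4, 2])
```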

Dissertations and theses on the topic "360 VIDEO STREAMING"

1

Kattadige, Chamara Manoj Madarasinghe. "Network and Content Intelligence for 360 Degree Video Streaming Optimization". Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/29904.

Abstract:
In recent years, 360° videos, a.k.a. spherical frames, became popular among users, creating an immersive streaming experience. Along with the advances in smartphone and Head Mounted Display (HMD) technology, many content providers have facilitated hosting and streaming of 360° videos in both on-demand and live streaming modes. Therefore, many different applications have already arisen leveraging these immersive videos, especially to give viewers an impression of presence in a digital environment. For example, with 360° videos, it is now possible to connect people in a remote meeting in an interactive way, which essentially increases the productivity of the meeting. Also, creating interactive learning materials using 360° videos will help deliver learning outcomes effectively. However, streaming 360° videos is not an easy task due to several reasons. First, 360° video frames are 4–6 times larger than normal video frames to achieve the same quality as a normal video. Therefore, delivering these videos demands higher bandwidth in the network. Second, processing relatively larger frames requires more computational resources at the end devices, particularly for end-user devices with limited resources. This will impact not only the delivery of 360° videos but also many other applications running on shared resources. Third, these videos need to be streamed with very low latency due to their interactive nature. Inability to satisfy these requirements can result in poor Quality of Experience (QoE) for the user. For example, insufficient bandwidth incurs frequent rebuffering and poor video quality. Also, inadequate computational capacity can cause faster battery drain and unnecessary heating of the device, causing discomfort to the user. Motion or cybersickness will be prevalent if there is unnecessary delay in streaming. These circumstances will hinder providing immersive streaming experiences to the communities that need them most, especially those who do not have enough network resources. To address the above challenges, we believe that enhancements to the three main components of the video streaming pipeline, the server, the network and the client, are essential. Starting with the network, it is beneficial for network providers to identify 360° video flows as early as possible and understand their behaviour in the network to effectively allocate sufficient resources for this video delivery without compromising the quality of other services. Content servers, at one end of this streaming pipeline, require efficient 360° video frame processing mechanisms to support adaptive video streaming mechanisms such as ABR (Adaptive Bit Rate) based streaming and VP-aware streaming, a streaming paradigm unique to 360° videos that selects only the part of the larger video frame that falls within the user-visible region. At the other end, the client can be combined with edge-assisted streaming to deliver 360° video content with reduced latency and higher quality. Following the above optimization strategies, in this thesis we first propose a mechanism named 360NorVic to extract 360° video flows from encrypted video traffic and analyze their traffic characteristics. We propose Machine Learning (ML) models to classify 360° and normal videos under different scenarios such as offline, near real-time, VP-aware streaming and Mobile Network Operator (MNO) level streaming.
Having extracted 360° video traffic traces at both packet and flow level with high accuracy, we analyze and explain the differences between 360° and normal video patterns in the encrypted traffic domain, which is beneficial for effective resource optimization to enhance 360° video delivery. Second, we present a WGAN (Wasserstein Generative Adversarial Network) based data generation mechanism (namely VideoTrain++) to synthesize encrypted network video traffic from minimal data. Leveraging synthetic data, we show improved performance in 360° video traffic analysis, especially in the ML-based classification in 360NorVic. Third, we propose an effective 360° video frame partitioning mechanism (namely VASTile) at the server side to support VP-aware 360° video streaming with dynamic tiles (or variable tiles) of different sizes and locations on the frame. VASTile takes a visual attention map of the video frames as input and applies a computational-geometry approach to generate a non-overlapping tile configuration that covers the video frames adaptively to the visual attention. We present VASTile as a scalable approach for video frame processing at the servers and a method to reduce bandwidth consumption in network data transmission. Finally, by applying VASTile to the individual user VP at the client side and utilizing the cache storage of Multi-Access Edge Computing (MEC) servers, we propose OpCASH, a mechanism to personalize 360° video streaming with dynamic tiles with edge assistance. By proposing an ILP-based solution to effectively select cached variable tiles from MEC servers that might not be identical to the VP tiles requested by the user, but still effectively cover the same VP region, OpCASH maximizes cache utilization and reduces the number of requests to the content servers in the congested core network. With this approach, we demonstrate gains in latency and bandwidth saving and an improvement in video quality in personalized 360° video streaming.
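
The 360NorVic-style classification task, telling 360° flows from normal video flows using flow-level features of encrypted traffic, maps naturally onto an off-the-shelf classifier. The following sketch uses invented features and synthetic data purely to show the pipeline shape; it is not the thesis code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Stand-in flow-level features (e.g., mean packet size, burst rate, chunk
# inter-arrival time). 360NorVic derives such features from encrypted
# traffic; the feature set and data here are invented for illustration.
n = 1000
X_360 = rng.normal(loc=[1200, 30, 0.8], scale=[150, 8, 0.2], size=(n, 3))
X_reg = rng.normal(loc=[900, 18, 1.6], scale=[150, 8, 0.4], size=(n, 3))
X = np.vstack([X_360, X_reg])
y = np.array([1] * n + [0] * n)  # 1 = 360° video flow, 0 = regular video

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```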
2

Corbillon, Xavier. "Enable the next generation of interactive video streaming". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0103/document.

Abstract:
Omnidirectional videos, also denoted as spherical videos or 360° videos, are videos with pixels recorded from a given viewpoint in every direction of space. A user watching such omnidirectional content with a Head Mounted Display (HMD) can select the portion of the video to display, usually denoted as the viewport, by moving her head. To feel highly immersed inside the content, a user needs to see the viewport at 4K resolution and a 90 Hz frame rate. With traditional streaming technologies, providing such quality would require a data rate of more than 100 Mbit/s, which is far too high compared to the median Internet access bandwidth. In this dissertation, I present my contributions to enabling the streaming of highly immersive omnidirectional videos on the Internet. We can distinguish six contributions: a viewport-adaptive streaming architecture proposal reusing a part of existing technologies; an extension of this architecture for videos with six degrees of freedom; two theoretical studies of videos with non-homogeneous spatial quality; an open-source software for handling 360° videos; and a dataset of users' head movements recorded while watching 360° videos.
3

Almquist, Mathias, and Viktor Almquist. "Analysis of 360° Video Viewing Behaviour". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-144405.

Abstract:
In this thesis we study users' viewing motions when watching 360° videos in order to provide information that can be used to optimize future view-dependent streaming protocols. More specifically, we develop an application that plays a sequence of 360° videos on an Oculus Rift Head Mounted Display and records the orientation and rotation velocity of the headset during playback. The application is used during an extensive user study in order to collect more than 21 hours of viewing data which is then analysed to expose viewing patterns, useful for optimizing 360° streaming protocols.
4

Almquist, Mathias, and Viktor Almquist. "Analysis of 360° Video Viewing Behaviours". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-144907.

Abstract:
In this thesis we study users' viewing motions when watching 360° videos in order to provide information that can be used to optimize future view-dependent streaming protocols. More specifically, we develop an application that plays a sequence of 360° videos on an Oculus Rift Head Mounted Display and records the orientation and rotation velocity of the headset during playback. The application is used during an extensive user study in order to collect more than 21 hours of viewing data which is then analysed to expose viewing patterns, useful for optimizing 360° streaming protocols.
5

Lindskog, Eric. "Developing an emulator for 360° video: intended for algorithm development". Thesis, Linköpings universitet, Databas och informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171369.

Abstract:
Streaming 360° video has become more commonplace, with content delivery services such as YouTube supporting it. By its nature, 360° video requires more bandwidth, as only a fraction of the image is actually in view while the user expects the same "in view" quality as with a regular video. Several studies and much work have been devoted to mitigating this higher demand for bandwidth. One solution is advanced algorithms that take into account the direction the user is looking when fetching the video from the server; e.g., by fetching content that is not in the user's view at a lower quality or by not fetching this data at all. Developing these algorithms is a time-consuming process, especially in the later stages, where tweaking one parameter might require the video to be re-encoded, taking up time that could otherwise be spent on getting results and continued iteration on the algorithm. The viewer should also be considered, as the best experience might not correlate with the mathematically best solution calculated by the algorithm. This thesis presents a modular emulator that allows for easy implementation of fetching algorithms that make use of state-of-the-art techniques. It intends to reduce the time it takes to iterate over an algorithm by removing the need to set up a server and encode the video at all of the desired quality levels whenever a parameter change would require it. It also makes it easy to include the viewer in the process so that subjective performance is taken into consideration. The emulator is evaluated through the implementation and evaluation of two algorithms, one serving as a baseline for the second, which is based on an algorithm developed by another group of researchers. These algorithms are tested on two different types of 360° videos, under four different network conditions and with two values for the maximum buffer size. The results from the evaluation of the two algorithms suggest that the emulator functions as intended from a technical point of view, and as such fulfills its purpose. There is, however, future work that would further prove the emulator's performance in replicating real scenarios, and a few examples are suggested.
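
The emulator's central abstraction, a pluggable fetching algorithm invoked once per fetch decision, might look like the following sketch. The interface names and the baseline policy are assumptions for illustration, not the thesis' actual API.

```python
from abc import ABC, abstractmethod

class FetchAlgorithm(ABC):
    """Plug-in point for the kind of modular emulator the thesis describes:
    the emulator calls the algorithm once per fetch decision. Interface
    names here are assumptions, not the thesis API."""

    @abstractmethod
    def next_request(self, viewport_tile, buffer_s, bandwidth_bps, qualities):
        """Return {tile_id: quality_index} for the next segment."""

class ViewportOnlyBaseline(FetchAlgorithm):
    """Baseline: highest quality for the viewed tile, lowest elsewhere."""

    def __init__(self, n_tiles):
        self.n_tiles = n_tiles

    def next_request(self, viewport_tile, buffer_s, bandwidth_bps, qualities):
        top, bottom = len(qualities) - 1, 0
        return {t: (top if t == viewport_tile else bottom)
                for t in range(self.n_tiles)}

algo = ViewportOnlyBaseline(n_tiles=8)
print(algo.next_request(viewport_tile=3, buffer_s=2.0,
                        bandwidth_bps=20e6, qualities=[1e6, 4e6, 12e6]))
```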
6

Mittal, Ashutosh. "Novel Approach to Optimize Bandwidth Consumption for Video Streaming using Eye Tracking". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-212061.

Abstract:
Recent leaps in eye tracking technology have made it possible to adopt eye tracking as a cheap, reliable and efficient addition to human-computer interaction technologies. This thesis looks into the possibility of utilizing it for client-aware video streaming. People increasingly consume high-quality video content on wireless network devices, so there is a need to optimize bandwidth consumption for efficient delivery of such high-resolution content, both for 2D and 360° videos. This work proposes SEEN (Smart Eye-tracking Enabled Networking), a novel approach to streaming video content using real-time eye tracking information. It uses HEVC video tiling techniques to display high and low qualities in the same video frame, depending on where the user is looking. The viability of the proposed approach is validated through extensive user testing conducted on a Quality of Experience (QoE) testbed, which was also developed as part of this thesis. Test results show significant bandwidth savings of up to 71% for 2D videos on standard 4K screens, and up to 83% for 360° videos on Virtual Reality (VR) headsets, at acceptable QoE ratings. A comparative study of viewport tracking and eye tracking for VR headsets is also included in the thesis in order to further advocate the necessity of eye tracking. This research was conducted in collaboration with Ericsson, Tobii and KTH under the umbrella project SEEN: Smart Eye-tracking Enabled Networking.
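
The heart of SEEN's approach, mapping a gaze point onto a tile grid and requesting high quality only where the user is looking, can be shown in a few lines. The grid size and quality indices here are illustrative, not the values used in the thesis.

```python
def tile_qualities(gaze_xy, frame_wh, grid=(4, 4), hi=0, lo=2):
    """Map a gaze point to a tile grid: the fixated tile gets the
    high-quality layer, the rest the low one. Grid size and quality
    indices are illustrative assumptions."""
    gx, gy = gaze_xy
    w, h = frame_wh
    cols, rows = grid
    col = min(int(gx / w * cols), cols - 1)
    row = min(int(gy / h * rows), rows - 1)
    return {(r, c): (hi if (r, c) == (row, col) else lo)
            for r in range(rows) for c in range(cols)}

# Gaze near the top-left of a 3840x2160 frame:
q = tile_qualities((500, 300), (3840, 2160))
print(q[(0, 0)], q[(3, 3)])  # 0 (high) and 2 (low)
```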
7

Timoncini, Riccardo. "Streaming audio e video nei sistemi Peer-To-Peer TV: il caso Sopcast P2PTV". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3670/.

Abstract:
This thesis addresses the topic of live streaming in P2P systems, with particular reference to Sopcast, a P2PTV application. It gives a historical account of the birth of streaming and its development, and describes the characteristics, the communication protocol and the most widespread models for P2P live streaming. It also discusses how quality of service is guaranteed and how the performance of a P2PTV service is evaluated.
8

Yang, Cheng-Yu, and 楊正宇. "Visual attention guided 360-degree video streaming". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/c4yada.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Electrical Engineering (academic year 107).
In recent years, with the development of multimedia video, smartphones and Virtual Reality (VR) headsets have become ubiquitous, and the video content we watch every day is gradually diversifying. For example, 360-degree video is popular, and many YouTube and Facebook users upload 360-degree videos to report their travels and to broadcast live events. To offer an immersive experience, storage and transmission bandwidth must be taken into account: the huge data volume of 360-degree videos makes efficient transmission and storage a challenge. In a limited-bandwidth network, the playback of 360-degree video suffers from problems such as frozen frames or poor quality in the demanded viewport, which degrade the quality of the user experience. Therefore, efficient compression and low-latency transmission of 360-degree images and videos are important. Based on human visual characteristics, this work proposes techniques for 360-degree image coding and 360-degree video streaming. In the proposed image coding technique, the saliency map is used to modify the distortion during the RDO (rate-distortion optimization) process, while in the proposed video streaming technique it is used to predict the ROI (region of interest). The experimental results show that a bitrate reduction of up to 14.71% is achieved by the proposed image coding technique. For 360-degree video streaming, this work allocates more resources to the ROIs during the rate control process to ensure a high quality of the viewport demanded by the user. Considering the variance of network bandwidth, MPEG-DASH is adopted and the proposed 360-degree video streaming technique is implemented on top of it. Both subjective and objective experiments indicate the superiority of the proposed technique over the anchor scheme.
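
The saliency-modified RDO that the abstract mentions amounts to weighting the distortion term of the rate-distortion cost by block saliency. A toy version with an assumed linear weighting follows; the thesis' exact weighting function is not given here.

```python
def rd_cost(distortion, rate, lagrange_lambda, saliency, alpha=1.0):
    """Saliency-modulated rate-distortion cost, a sketch of the idea in the
    abstract: errors in salient blocks are penalised more, so RDO spends
    bits where viewers are likely to look. The linear weighting is an
    assumption for illustration.

    saliency: block saliency in [0, 1]; alpha: strength of the modulation.
    """
    weight = 1.0 + alpha * saliency          # w(s) >= 1 for salient blocks
    return weight * distortion + lagrange_lambda * rate

# Same distortion/rate, different saliency -> different RDO preference:
print(rd_cost(100.0, 2000, 0.05, saliency=0.9))  # 290.0
print(rd_cost(100.0, 2000, 0.05, saliency=0.1))  # 210.0
```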
9

Purwar, Abhinav. "FOV Prediction for 360 Video Streaming in Virtual Reality". Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16427.

Abstract:
The transmission and viewing of 360-degree videos on VR devices (HMDs) introduce numerous technical difficulties. 360-degree videos are omnidirectional spherical videos with approximately 4–6 times the resolution of normal 2D videos of the same quality. Because a 360-degree video is spherical, it provides an omnidirectional perspective of the scene, while the human Field of View (FOV) is only about 90–114 degrees, so only roughly a quarter of the content is viewed at any time. Moreover, VR devices need to respond to the user's head movements within 11 ms to show the next Field of View (FOV). In this work I study the problem of predicting, in advance, the FOV of clients watching 360° video on VR devices. Most existing solutions first determine the user's current orientation and FOV, and then request high-quality video for the FOV region and low quality for the rest, which induces a period of poor user experience, i.e., a latency before the high quality is adapted. I develop a solution that predicts the FOV in advance by jointly using content-related historical data and sensor data. The sensor-related features come from the VR device's orientation sensors, such as the magnetometer and accelerometer, while the historical data comprise the viewing orientations of previous users of the same video. Based on the historical and sensor data, I train the system and validate design alternatives, which helps identify the better design for reducing view latency and the bandwidth required for 360 video. The advantages of this solution are (i) lower bandwidth usage, (ii) low latency and (iii) short running time.
10

Lo, Wen-Chih, and 羅文志. "Edge-Assisted 360-degree Video Streaming for Head-Mounted Virtual Reality". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/849u53.

Abstract:
Master's thesis, National Tsing Hua University, Department of Computer Science (academic year 106).
Over the past years, 360° video streaming has been getting increasingly popular. Watching these videos with Head-Mounted Displays (HMDs), also known as VR headsets, gives a better immersive experience than using traditional planar monitors. However, several open challenges keep state-of-the-art technology away from a truly immersive viewing experience, including high bandwidth consumption, long turnaround latency and heterogeneous HMD devices. In this thesis, we propose an edge-assisted 360° video streaming system, which leverages edge networks to perform viewport rendering. We formulate an optimization problem to determine which HMD clients should be served without overloading the edge devices. We design an algorithm to solve this problem, and implement a real testbed to prove the concept. The resulting edge-assisted 360° video streaming system is evaluated through extensive experiments with an open-source 360° viewing dataset. With the assistance of edge devices, we can reduce the bandwidth usage and computation workload on HMD devices when serving viewers, while lower network latency is also guaranteed. The results show that, compared to current 360° video streaming platforms like YouTube, our edge-assisted rendering platform can: (i) save up to 62% in bandwidth consumption, (ii) achieve higher viewing video quality at a given bitrate, and (iii) reduce the computation workload of lightweight HMDs. Our proposed system and the viewing dataset are open-sourced and can be leveraged by researchers and engineers to further improve 360° video streaming.
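
The serving decision this abstract formulates, choosing which HMD clients the edge renders for without overloading it, is a knapsack-style problem. A greedy benefit-per-cost heuristic is a common stand-in and is sketched below; the thesis solves a formal optimization problem rather than this heuristic.

```python
def admit_clients(clients, capacity):
    """Greedy sketch of the serving decision: admit HMD clients to edge-side
    viewport rendering without overloading the edge. Each client has a
    rendering cost and an estimated benefit (e.g., bandwidth saved). Greedy
    selection by benefit/cost is an illustrative heuristic only.

    clients: list of (client_id, cost, benefit); capacity: edge budget.
    """
    chosen, used = [], 0.0
    for cid, cost, benefit in sorted(clients, key=lambda c: c[2] / c[1],
                                     reverse=True):
        if used + cost <= capacity:
            chosen.append(cid)
            used += cost
    return chosen  # clients not chosen fall back to client-side rendering

print(admit_clients([("hmd1", 4, 10), ("hmd2", 3, 9), ("hmd3", 5, 6)], 8))
# -> ['hmd2', 'hmd1']
```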

Book chapters on the topic "360 VIDEO STREAMING"

1

Curcio, Igor D. D., Dmitrii Monakhov, Ari Hourunranta and Emre Baris Aksu. "Tile Priorities in Adaptive 360-Degree Video Streaming". In Lecture Notes in Computer Science, 212–23. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54407-2_18.

2

Sekine, Arisa, and Masaki Bandai. "Tile Quality Selection Method in 360-Degree Tile-Based Video Streaming". In Advances in Intelligent Systems and Computing, 535–44. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44038-1_49.

3

Zou, Wenjie, and Fuzheng Yang. "Measuring Quality of Experience of Novel 360-Degree Streaming Video During Stalling". In Communications and Networking, 417–24. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78130-3_43.

4

Li, Yaru, Li Yu, Chunyu Lin, Yao Zhao and Moncef Gabbouj. "Convolutional Neural Network Based Inter-Frame Enhancement for 360-Degree Video Streaming". In Advances in Multimedia Information Processing – PCM 2018, 57–66. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00767-6_6.

5

Li, Yunqiao, Yiling Xu, Shaowei Xie, Liangji Ma and Jun Sun. "Two-Layer FoV Prediction Model for Viewport Dependent Streaming of 360-Degree Videos". In Communications and Networking, 501–9. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-06161-6_49.

6

Skondras, Emmanouil, Konstantina Siountri, Angelos Michalas and Dimitrios D. Vergados. "Personalized Real-Time Virtual Tours in Places With Cultural Interest". In Destination Management and Marketing, 802–20. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2469-5.ch044.

Abstract:
Virtual tours using drones enhance the experience users perceive of a place of cultural interest. Drones equipped with 360° cameras perform real-time video streaming of the cultural sites. The user's preferences about each monument type should be considered in order to decide the appropriate flying route for the drone. This article describes a scheme for supporting personalized real-time virtual tours at sites of cultural interest using drones. The user preferences are modeled using the MPEG-21 and MPEG-7 standards, while Web Ontology Language (OWL) ontologies are used for metadata structure and semantics. The Metadata-Aware Analytic Network Process (MANP) algorithm is proposed in order to weigh the user preferences for each monument type. Subsequently, the Trapezoidal Fuzzy TOPSIS for Heritage Route Selection (TFT-HRS) algorithm ranks the candidate heritage routes. Finally, after each virtual tour, the user preference metadata are updated so that the scheme continuously learns the user's preferences.
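
TOPSIS-style route ranking, the final step of the scheme, orders candidates by closeness to an ideal solution. The sketch below implements plain (crisp) TOPSIS with numpy for illustration; the article's TFT-HRS works on trapezoidal fuzzy numbers with MANP-derived weights.

```python
import numpy as np

def topsis(scores, weights):
    """Crisp TOPSIS ranking of candidate routes (rows) over criteria
    (columns, all benefit-type here). Only the ranking principle is
    illustrated; the paper's fuzzy variant is more involved."""
    m = scores / np.linalg.norm(scores, axis=0)      # vector normalisation
    v = m * weights                                  # weighted matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)       # best/worst per criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness

# Three candidate heritage routes scored on (coverage of preferred monument
# types, expected video quality, battery margin) - invented numbers.
routes = np.array([[0.8, 0.6, 0.7],
                   [0.6, 0.9, 0.5],
                   [0.9, 0.5, 0.4]])
order, c = topsis(routes, weights=np.array([0.5, 0.3, 0.2]))
print(order, np.round(c, 3))
```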
7

Rowe, Neil C. "Critical Issues in Content Repurposing for Small Devices". In Encyclopedia of Multimedia Technology and Networking, Second Edition, 293–98. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch040.

Abstract:
Content repurposing is the reorganizing of data for presentation on different display hardware (Singh, 2004). It has become particularly important recently with the growth of handheld devices such as "personal digital assistants" (PDAs), sophisticated telephones, and other small specialized devices. Unfortunately, such devices pose serious problems for multimedia delivery. With their small screens (240 by 320 for a basic Palm PDA), one cannot display much information (like most of a Web page); with their low bandwidths, one cannot display video and audio transmissions from a server ("streaming") with much quality; and with their small storage capabilities, large media files cannot be stored for later playback. Furthermore, new devices and old ones with new characteristics have been appearing at a high rate, so software vendors are having difficulty keeping pace. So some real-time, systematic, and automated planning could be helpful in figuring out how to show desired data, especially multimedia, on a broad range of devices.

Conference papers on the topic "360 VIDEO STREAMING"

1

Lu, Yiyun, Yifei Zhu and Zhi Wang. "Personalized 360-Degree Video Streaming". In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548047.

2

Son, Jangwoo, Dongmin Jang and Eun-Seok Ryu. "Implementing 360 video tiled streaming system". In MMSys '18: 9th ACM Multimedia Systems Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3204949.3208119.

3

Chen, Xianda, Tianxiang Tan and Guohong Cao. "Popularity-Aware 360-Degree Video Streaming". In IEEE INFOCOM 2021 - IEEE Conference on Computer Communications. IEEE, 2021. http://dx.doi.org/10.1109/infocom42981.2021.9488856.

4

Liu, Xing, Qingyang Xiao, Vijay Gopalakrishnan, Bo Han, Feng Qian and Matteo Varvello. "360° Innovations for Panoramic Video Streaming". In HotNets-XVI: The 16th ACM Workshop on Hot Topics in Networks. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3152434.3152443.

5

Nguyen, Duc V., Hoang Van Trung, Hoang Le Dieu Huong, Truong Thu Huong, Nam Pham Ngoc and Truong Cong Thang. "Scalable 360 Video Streaming using HTTP/2". In 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2019. http://dx.doi.org/10.1109/mmsp.2019.8901805.

6

Silva, Rodrigo M. A., Bruno Feijó, Pablo B. Gomes, Thiago Frensh and Daniel Monteiro. "Real time 360° video stitching and streaming". In SIGGRAPH '16: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2945078.2945148.

7

Chopra, Lovish, Sarthak Chakraborty, Abhijit Mondal and Sandip Chakraborty. "PARIMA: Viewport Adaptive 360-Degree Video Streaming". In WWW '21: The Web Conference 2021. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3442381.3450070.

8

Seo, Bong-Seok, Eunyoung Jeong, ChangJong Hyun, Dongho You and Dong Ho Kim. "360-Degree Video Streaming Using Stitching Information". In 2019 IEEE International Conference on Consumer Electronics (ICCE). IEEE, 2019. http://dx.doi.org/10.1109/icce.2019.8661926.

9

Nasrabadi, Afshin Taghavi, Anahita Mahzari, Joseph D. Beshay and Ravi Prakash. "Adaptive 360-degree video streaming using layered video coding". In 2017 IEEE Virtual Reality (VR). IEEE, 2017. http://dx.doi.org/10.1109/vr.2017.7892319.

10

Nasrabadi, Afshin Taghavi, Anahita Mahzari, Joseph D. Beshay and Ravi Prakash. "Adaptive 360-Degree Video Streaming using Scalable Video Coding". In MM '17: ACM Multimedia Conference. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3123266.3123414.
