
Journal articles on the topic "360 VIDEO STREAMING"


Below are the top 50 scholarly journal articles on the topic "360 VIDEO STREAMING".


1

Bradis, Nikolai Valerievich. "360 Video Protection and Streaming". International Journal of Advanced Trends in Computer Science and Engineering 8, no. 6 (December 15, 2019): 3289–96. http://dx.doi.org/10.30534/ijatcse/2019/99862019.
2

Jeong, JongBeom, Dongmin Jang, Jangwoo Son, and Eun-Seok Ryu. "3DoF+ 360 Video Location-Based Asymmetric Down-Sampling for View Synthesis to Immersive VR Video Streaming". Sensors 18, no. 9 (September 18, 2018): 3148. http://dx.doi.org/10.3390/s18093148.

Abstract:
Recently, with the increasing demand for virtual reality (VR), experiencing immersive content with VR has become easier. However, a tremendous amount of computation and bandwidth is required when processing 360 videos. Moreover, additional information such as the depth of the video is required to enjoy stereoscopic 360 content. Therefore, this paper proposes an efficient method of streaming high-quality 360 videos. To reduce the bandwidth when streaming and synthesizing 3DoF+ 360 videos, which support limited movements of the user, a proper down-sampling ratio and quantization parameter are selected from an analysis of the bitrate versus peak signal-to-noise ratio curve. High-Efficiency Video Coding (HEVC) is used to encode and decode the 360 videos, and the view synthesizer produces the video of an intermediate view, providing the user with an immersive experience.
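The kind of operating-point selection this abstract describes can be illustrated with a short sketch: given measured (down-sampling ratio, QP, bitrate, PSNR) samples, pick the point with the best quality that fits a bitrate budget. The rate-distortion table, function name, and budget below are invented for illustration, not taken from the paper.

# Illustrative sketch only: choosing a (down-sampling ratio, QP) operating
# point from bitrate/PSNR measurements. All values are hypothetical.
RD_POINTS = [
    # (down-sampling ratio, QP, bitrate in Mbps, PSNR in dB)
    (1.0, 22, 48.0, 42.1),
    (1.0, 27, 30.5, 39.8),
    (0.5, 22, 21.0, 38.6),
    (0.5, 27, 13.2, 36.9),
    (0.25, 22, 9.8, 34.0),
]

def pick_operating_point(points, budget_mbps):
    """Return the point with the highest PSNR that fits the bitrate budget."""
    feasible = [p for p in points if p[2] <= budget_mbps]
    if not feasible:
        return min(points, key=lambda p: p[2])  # fall back to the cheapest point
    return max(feasible, key=lambda p: p[3])

ratio, qp, rate, psnr = pick_operating_point(RD_POINTS, budget_mbps=25.0)
print(f"ratio={ratio}, QP={qp}, {rate} Mbps, {psnr} dB")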
3

Nguyen, Anh, and Zhisheng Yan. "Enhancing 360 Video Streaming through Salient Content in Head-Mounted Displays". Sensors 23, no. 8 (April 15, 2023): 4016. http://dx.doi.org/10.3390/s23084016.

Abstract:
Predicting where users will look inside head-mounted displays (HMDs) and fetching only the relevant content is an effective approach for streaming bulky 360 videos over bandwidth-constrained networks. Despite previous efforts, anticipating users’ fast and sudden head movements is still difficult because there is a lack of clear understanding of the unique visual attention in 360 videos that dictates the users’ head movement in HMDs. This in turn reduces the effectiveness of streaming systems and degrades the users’ Quality of Experience. To address this issue, we propose to extract salient cues unique in the 360 video content to capture the attentive behavior of HMD users. Empowered by the newly discovered saliency features, we devise a head-movement prediction algorithm to accurately predict users’ head orientations in the near future. A 360 video streaming framework that takes full advantage of the head movement predictor is proposed to enhance the quality of delivered 360 videos. Practical trace-driven results show that the proposed saliency-based 360 video streaming system reduces the stall duration by 65% and the stall count by 46%, while saving 31% more bandwidth than state-of-the-art approaches.
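As a toy illustration of combining motion extrapolation with content saliency (not the authors' algorithm), the sketch below extrapolates recent yaw angles and pulls the prediction toward the most salient direction; the blending weight, saliency map, and angles are all assumptions.

def predict_yaw(history, saliency_by_yaw, alpha=0.7):
    """history: recent yaw angles (deg); saliency_by_yaw: {yaw_center: score}."""
    velocity = history[-1] - history[-2]          # simple linear extrapolation
    motion_pred = (history[-1] + velocity) % 360
    salient_yaw = max(saliency_by_yaw, key=saliency_by_yaw.get)
    # blend the two predictions along the shortest arc on the circle
    diff = (salient_yaw - motion_pred + 180) % 360 - 180
    return (motion_pred + (1 - alpha) * diff) % 360

history = [88.0, 95.0, 104.0]                     # user turning right
saliency = {0: 0.1, 90: 0.3, 180: 0.2, 270: 0.9}  # strong hotspot at 270 deg
print(round(predict_yaw(history, saliency), 1))   # pulled partway toward 270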
4

Fan, Ching-Ling, Wen-Chih Lo, Yu-Tung Pai, and Cheng-Hsin Hsu. "A Survey on 360° Video Streaming". ACM Computing Surveys 52, no. 4 (September 18, 2019): 1–36. http://dx.doi.org/10.1145/3329119.
5

Wong, En Sing, Nur Haliza Abdul Wahab, Faisal Saeed, and Nouf Alharbi. "360-Degree Video Bandwidth Reduction: Technique and Approaches Comprehensive Review". Applied Sciences 12, no. 15 (July 28, 2022): 7581. http://dx.doi.org/10.3390/app12157581.

Abstract:
Recently, the usage of 360-degree videos has prevailed in various sectors such as education, real estate, medical, entertainment and more. The development of the virtual world "Metaverse" has created demand for a Virtual Reality (VR) environment with high immersion and a smooth user experience. However, various challenges arise in providing real-time streaming due to the nature of high-resolution 360-degree videos, such as high bandwidth requirements, high computing power and low delay tolerance. To overcome these challenges, streaming methods such as Dynamic Adaptive Streaming over HTTP (DASH), Tiling, Viewport-Adaptive and Machine Learning (ML) are discussed. Moreover, the benefits of the development of 5G and 6G networks, Mobile Edge Computing (MEC) and Caching, and the Information-Centric Network (ICN) approaches to optimize 360-degree video streaming are elaborated. All of these methods strive to improve the Quality of Experience (QoE) and Quality of Service (QoS) of VR services. Next, the challenges faced in QoE modeling and the existing objective and subjective QoE assessment methods for 360-degree video are presented. Lastly, potential future research that utilizes and substantially improves the existing methods is discussed. With the efforts of various research studies and industries and the gradual development of the network in recent years, a highly immersive virtual world, the "Metaverse", conducive to daily working, learning and socializing, is around the corner.
6

Garcia, Henrique D., Mylène C. Q. Farias, Ravi Prakash, and Marcelo M. Carvalho. "Statistical characterization of tile decoding time of HEVC-encoded 360° video". Electronic Imaging 2020, no. 9 (January 26, 2020): 285–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-285.

Abstract:
In this paper, we present a statistical characterization of the tile decoding time of 360° videos encoded via HEVC, considering different tiling patterns and quality levels (i.e., bitrates). In particular, we present results for probability density function estimation of tile decoding time based on a series of experiments carried out over a set of 360° videos with different spatial and temporal characteristics. Additionally, we investigate the extent to which tile decoding time is correlated with tile bitrate (at the chunk level), so that DASH-based video streaming can make use of such information to infer tile decoding time. The results of this work may help in the design of queueing- or control-theory-based adaptive bitrate (ABR) algorithms for 360° video streaming.
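The correlation analysis the abstract mentions boils down to a correlation coefficient over per-chunk measurements; a minimal sketch with fabricated sample data follows.

from math import sqrt

bitrate_kbps = [800, 1200, 1600, 2400, 3200, 4000]   # per-tile chunk bitrates
decode_ms    = [2.1, 2.9, 3.4, 4.8, 6.1, 7.2]        # measured decode times

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"Pearson r = {pearson(bitrate_kbps, decode_ms):.3f}")
# r close to 1 would let an ABR controller infer decode time from bitrate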
7

Nguyen, Dien, Tuan Le, Sangsoon Lee, and Eun-Seok Ryu. "SHVC Tile-Based 360-Degree Video Streaming for Mobile VR: PC Offloading Over mmWave". Sensors 18, no. 11 (November 1, 2018): 3728. http://dx.doi.org/10.3390/s18113728.

Abstract:
360-degree video streaming for high-quality virtual reality (VR) is challenging for current wireless systems because of the huge bandwidth it requires. However, millimeter wave (mmWave) communications in the 60 GHz band have gained considerable interest from industry and academia because they promise gigabit wireless connectivity in a huge unlicensed bandwidth (i.e., up to 7 GHz). This massive unlicensed bandwidth offers great potential for addressing the demand for 360-degree video streaming. This paper investigates the problem of 360-degree video streaming for mobile VR using SHVC, the scalable extension of the High-Efficiency Video Coding (HEVC) standard, and PC offloading over 60 GHz networks. We present a conceptual architecture based on advanced tiled SHVC and mmWave communications. This architecture comprises two main parts: (1) tile-based SHVC for 360-degree video streaming and optimized parallel decoding; and (2) a Personal Computer (PC) offloading mechanism for transmitting uncompressed video (viewport only). The experimental results show that our tiled extractor method reduces the bandwidth required for 360-degree video streaming by more than 47%, and the tile partitioning mechanism improved decoding time by up to 25%. The PC offloading mechanism was also successful in offloading 360-degree decoded (or viewport-only) video to mobile devices using mmWave communication and the proposed transmission schemes.
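A back-of-the-envelope sketch of why tile-based viewport streaming saves bandwidth: only tiles overlapping the viewport are fetched in high quality. The 6x4 equirectangular tile grid and 110° field of view below are assumptions, not the paper's configuration.

COLS, ROWS, FOV_DEG = 6, 4, 110                  # assumed tile grid and FoV

def viewport_tiles(yaw_deg, pitch_deg):
    """Tiles that may overlap the viewport on an equirectangular grid."""
    tiles = []
    for c in range(COLS):
        for r in range(ROWS):
            tc_yaw = (c + 0.5) * 360 / COLS      # tile center, degrees
            tc_pitch = -90 + (r + 0.5) * 180 / ROWS
            dyaw = abs((tc_yaw - yaw_deg + 180) % 360 - 180)
            dpitch = abs(tc_pitch - pitch_deg)
            if (dyaw <= FOV_DEG / 2 + 180 / COLS
                    and dpitch <= FOV_DEG / 2 + 90 / ROWS):
                tiles.append((c, r))
    return tiles

sel = viewport_tiles(yaw_deg=90, pitch_deg=0)
print(len(sel), "of", COLS * ROWS, "tiles fetched in high quality")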
8

Chen, Xiaolei, Di Wu, and Ishfaq Ahmad. "Optimized viewport-adaptive 360-degree video streaming". CAAI Transactions on Intelligence Technology 6, no. 3 (March 3, 2021): 347–59. http://dx.doi.org/10.1049/cit2.12011.
9

Podborski, Dimitri, Emmanuel Thomas, Miska M. Hannuksela, Sejin Oh, Thomas Stockhammer, and Stefan Pham. "360-Degree Video Streaming with MPEG-DASH". SMPTE Motion Imaging Journal 127, no. 7 (August 2018): 20–27. http://dx.doi.org/10.5594/jmi.2018.2838779.
10

Peng, Shuai, Jialu Hu, Han Xiao, Shujie Yang, and Changqiao Xu. "Viewport-Driven Adaptive 360° Live Streaming Optimization Framework". Journal of Networking and Network Applications 1, no. 4 (January 2022): 139–49. http://dx.doi.org/10.33969/j-nana.2021.010401.

Abstract:
Virtual reality (VR) video streaming and 360° panoramic video have received extensive attention in recent years, as they can bring users an immersive experience. However, the ultra-high bandwidth and ultra-low latency requirements of virtual reality video or 360° panoramic video also put tremendous pressure on the carrying capacity of the current network. In fact, since the user's field of view (a.k.a. viewport) is limited when watching a panoramic video and users can only watch about 20–30% of the video content, it is not necessary to transmit all of the high-resolution content to the user. Therefore, predicting the user's future viewing viewport can be crucial for selective streaming and further bitrate decisions. Combined with a tile-based adaptive bitrate (ABR) algorithm for panoramic video, video content within the user's viewport can be transmitted at a higher resolution, while areas outside the viewport can be transmitted at a lower resolution. This paper proposes a viewport-driven adaptive 360° live streaming optimization framework, which combines viewport prediction and an ABR algorithm to optimize the transmission of live 360° panoramic video. However, existing viewport prediction often suffers from low prediction accuracy and does not support real-time operation. Drawing on the strengths of convolutional neural networks (CNNs) in image processing and long short-term memory (LSTM) in temporal sequence processing, we propose an online-updated viewport prediction model called LiveCL, which mainly utilizes a CNN to extract the spatial characteristics of video frames and an LSTM to learn the temporal characteristics of the user's viewport trajectories. With the help of the viewport prediction and ABR algorithm, unnecessary bandwidth consumption can be effectively reduced. The main contributions of this work include: (1) a framework for 360° video transmission; and (2) an online real-time viewport prediction model called LiveCL that optimizes 360° video transmission combined with a novel ABR algorithm and outperforms existing models. On a public 360° video dataset, the tile accuracy, recall, precision, and frame accuracy of LiveCL are better than those of the latest models. Combined with related adaptive bitrate algorithms, the proposed viewport prediction model can reduce the transmission bandwidth by about 50%.
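The CNN-plus-LSTM pattern that LiveCL exemplifies, where a CNN distills per-frame spatial features and an LSTM models the temporal viewport trajectory, can be sketched in a few lines of PyTorch. This is a generic illustration with invented layer sizes and input shapes, not the LiveCL architecture.

# Minimal sketch of a CNN+LSTM viewport predictor (hypothetical sizes).
import torch
import torch.nn as nn

class ViewportPredictor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-frame spatial features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),               # -> (32, 4, 4) per frame
        )
        self.lstm = nn.LSTM(32 * 4 * 4 + 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # next (yaw, pitch)

    def forward(self, frames, gaze):
        # frames: (B, T, 3, H, W); gaze: (B, T, 2) past viewport angles
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(torch.cat([feats, gaze], dim=-1))
        return self.head(out[:, -1])               # prediction from last step

pred = ViewportPredictor()(torch.rand(2, 8, 3, 64, 128), torch.rand(2, 8, 2))
print(pred.shape)  # torch.Size([2, 2])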
11

Shafi, Rabia, Wan Shuai, and Muhammad Usman Younus. "360-Degree Video Streaming: A Survey of the State of the Art". Symmetry 12, no. 9 (September 10, 2020): 1491. http://dx.doi.org/10.3390/sym12091491.

Abstract:
360-degree video streaming is expected to grow as the next disruptive innovation due to its ultra-high network bandwidth (60–100 Mbps for 6K streaming), ultra-high storage capacity, and ultra-high computation requirements. Video consumers are more interested in the immersive experience than in conventional broadband television. The visible area (known as the user's viewport) of the video is displayed through a Head-Mounted Display (HMD) at a very high frame rate and high resolution. Delivering whole 360-degree frames in ultra-high resolution to the end user puts significant pressure on service providers. This paper surveys 360-degree video streaming by focusing on different paradigms from capture to display. It overviews different projection, compression, and streaming techniques that incorporate either the visual features or the spherical characteristics of 360-degree video. Next, the latest ongoing standardization efforts for an enhanced degree-of-freedom immersive experience are presented. Furthermore, several 360-degree audio technologies and a wide range of immersive applications are discussed. Finally, some significant research challenges and implications in the immersive multimedia environment are presented and explained in detail.
12

Kim, Hyun-Wook, and Sung-Hyun Yang. "Region of interest–based segmented tiled adaptive streaming using head-mounted display tracking sensing data". International Journal of Distributed Sensor Networks 15, no. 12 (December 2019): 155014771989453. http://dx.doi.org/10.1177/1550147719894533.

Abstract:
To support 360 virtual reality video streaming services, high resolutions of over 8K and network streaming technology that guarantees consistent quality of service are required. To this end, we propose 360 virtual reality video player technology and a streaming protocol based on MPEG Dynamic Adaptive Streaming over HTTP Spatial Representation Description to support the player. The player renders the downsized video as the base layer, which has a quarter of the resolution of the original video, and high-quality video tiles consisting of tiles obtained from the tiled-encoded high-quality video (over 16K resolution) as the enhanced layer. Furthermore, we implemented the system and conducted experiments to measure the network bandwidth for 16K video streaming and switching latency arising from changes in the viewport. From the results, we confirmed that the player has a switching latency of less than 1000 ms and a maximum network download bandwidth requirement of 100 Mbps.
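The request pattern described here, a quarter-resolution base layer fetched for every segment plus high-quality tiles only for the current viewport, can be sketched as follows; the server URL and path template are hypothetical, not from the paper or the MPEG-DASH SRD specification.

BASE_URL = "https://example.com/vr"              # hypothetical server

def request_urls(segment, viewport_tiles, quality="16k"):
    urls = [f"{BASE_URL}/base/seg{segment}.mp4"]             # base layer, always
    urls += [f"{BASE_URL}/tiles/{quality}/t{c}_{r}/seg{segment}.mp4"
             for c, r in viewport_tiles]                     # viewport tiles only
    return urls

for u in request_urls(segment=42, viewport_tiles=[(2, 1), (3, 1)]):
    print(u)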
13

van Kasteren, Anouk, Kjell Brunnström, John Hedlund, and Chris Snijders. "Quality of Experience Assessment of 360-degree video". Electronic Imaging 2020, no. 11 (January 26, 2020): 91–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.11.hvei-091.

Abstract:
The research domain on the Quality of Experience (QoE) of 2D video streaming is well established. However, a new video format is emerging and gaining popularity and availability: VR 360-degree video. The processing and transmission of 360-degree videos bring new challenges such as large bandwidth requirements and the occurrence of different distortions. The viewing experience is also substantially different from 2D video: it offers more interactive freedom in the viewing angle but can also be more demanding and cause cybersickness. Further research on the QoE of 360-videos specifically is thus required. The goal of this study is to complement earlier research by Tran, Ngoc, Pham, Jung, and Thang (2017) testing the effects of quality degradation, freezing, and content on the QoE of 360-videos. Data will be gathered through subjective tests where participants watch degraded versions of 360-videos through an HMD. After each video they will answer questions regarding their quality perception, experience, perceptual load, and cybersickness. Results of the first part show overall rather low QoE ratings, which decrease even further as quality is degraded and freezing events are added. Cybersickness was found not to be an issue.
14

Curcio, Igor D. D., Henri Toukomaa, and Deepa Naik. "360-Degree Video Streaming and Its Subjective Quality". SMPTE Motion Imaging Journal 127, no. 7 (August 2018): 28–38. http://dx.doi.org/10.5594/jmi.2018.2838818.
15

Jeppsson, Mattis, Håvard N. Espeland, Tomas Kupka, Ragnar Langseth, Andreas Petlund, Qiaoqiao Peng, Chuansong Xue et al. "Efficient Live and On-Demand Tiled HEVC 360 VR Video Streaming". International Journal of Semantic Computing 13, no. 03 (September 2019): 367–91. http://dx.doi.org/10.1142/s1793351x19400166.

Abstract:
360° panorama video displayed through Virtual Reality (VR) glasses or large screens offers immersive user experiences, but as such technology becomes commonplace, the need for efficient streaming methods of such high-bitrate videos arises. In this respect, the attention that 360° panorama video has received lately is huge. Many methods have already been proposed, and in this paper, we shed more light on the different trade-offs in order to save bandwidth while preserving the video quality in the user's field-of-view (FoV). Using 360° VR content delivered to a Gear VR head-mounted display with a Samsung Galaxy S7 and to a Huawei Q22 set-top-box, we have tested various tiling schemes analyzing the tile layout, the tiling and encoding overheads, mechanisms for faster quality switching beyond the DASH segment boundaries and quality selection configurations. In this paper, we present an efficient end-to-end design and real-world implementation of such a 360° streaming system. Furthermore, in addition to researching an on-demand system, we also go beyond the existing on-demand solutions and present a live streaming system which strikes a trade-off between bandwidth usage and the video quality in the user's FoV. We have created an architecture that combines RTP and DASH, and our system multiplexes a single HEVC hardware decoder to provide faster quality switching than at the traditional GOP boundaries. We demonstrate the performance and illustrate the trade-offs through real-world experiments where we can report comparable bandwidth savings to existing on-demand approaches, but with faster quality switches when the FoV changes.
16

Ha, Van Kha Ly, Rifai Chai, and Hung T. Nguyen. "A Telepresence Wheelchair with 360-Degree Vision Using WebRTC". Applied Sciences 10, no. 1 (January 3, 2020): 369. http://dx.doi.org/10.3390/app10010369.

Abstract:
This paper presents an innovative approach to developing an advanced 360-degree vision telepresence wheelchair for healthcare applications. The study aims at providing a wide field of view surrounding the wheelchair to enable safe wheelchair navigation and efficient assistance for wheelchair users. A dual-fisheye camera is mounted in front of the wheelchair to capture images, which can then be streamed over the Internet. A web real-time communication (WebRTC) protocol was implemented to provide efficient video and data streaming. An estimation model based on artificial neural networks was developed to evaluate the quality of experience (QoE) of video streaming. Experimental results confirmed that the proposed telepresence wheelchair system was able to stream a 360-degree video surrounding the wheelchair smoothly in real time. The average streaming rate of the entire 360-degree video was 25.83 frames per second (fps), and the average peak signal-to-noise ratio (PSNR) was 29.06 dB. Simulation results of the proposed QoE estimation scheme provided a prediction accuracy of 94%. Furthermore, the results showed that the designed system could be controlled remotely via the wireless Internet to follow the desired path with high accuracy. The overall results demonstrate the effectiveness of our proposed approach for the 360-degree vision telepresence wheelchair for assistive technology applications.
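The PSNR figure reported above is the standard metric; a self-contained sketch for 8-bit frames held in numpy arrays follows (the random frames are placeholders for real captures).

import numpy as np

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (360, 640, 3), dtype=np.uint8)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, noisy):.2f} dB")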
17

Jiang, Xiaolan, Si Ahmed Naas, Yi-Han Chiang, Stephan Sigg, and Yusheng Ji. "SVP: Sinusoidal Viewport Prediction for 360-Degree Video Streaming". IEEE Access 8 (2020): 164471–81. http://dx.doi.org/10.1109/access.2020.3022062.
18

Jeong, Jong-Beom, Soonbin Lee, Dongmin Jang, and Eun-Seok Ryu. "Towards 3DoF+ 360 Video Streaming System for Immersive Media". IEEE Access 7 (2019): 136399–408. http://dx.doi.org/10.1109/access.2019.2942771.
19

Nguyen, Hung, Thu Ngan Dao, Ngoc Son Pham, Tran Long Dang, Trung Dung Nguyen, and Thu Huong Truong. "An Accurate Viewport Estimation Method for 360 Video Streaming using Deep Learning". EAI Endorsed Transactions on Industrial Networks and Intelligent Systems 9, no. 4 (September 21, 2022): e2. http://dx.doi.org/10.4108/eetinis.v9i4.2218.

Abstract:
Nowadays, Virtual Reality is becoming more and more popular, and 360 video is a very important part of such systems. 360 video transmission over the Internet faces many difficulties due to its large size. Therefore, to reduce the network bandwidth requirement of 360-degree video, Viewport Adaptive Streaming (VAS) was proposed. An important issue in VAS is how to estimate the user's future viewing direction. In this paper, we propose an algorithm called GLVP (GRU-LSTM-based Viewport Prediction) to estimate the typical view for the VAS system. The results show that our method can improve viewport estimation by 9.5% to nearly 20% compared with other methods.
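Viewport-estimation quality of the kind GLVP reports is commonly scored by comparing the predicted tile set against the tiles the user actually viewed; a minimal sketch follows (the tile sets are made up, and this is not necessarily the paper's exact protocol).

def tile_scores(predicted, actual):
    tp = len(predicted & actual)                  # correctly predicted tiles
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

predicted = {(2, 1), (3, 1), (4, 1)}
actual = {(3, 1), (4, 1), (4, 2)}
p, r = tile_scores(predicted, actual)
print(f"precision={p:.2f} recall={r:.2f}")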
20

Li, David, Ruofei Du, Adharsh Babu, Camelia D. Brumar, and Amitabh Varshney. "A Log-Rectilinear Transformation for Foveated 360-degree Video Streaming". IEEE Transactions on Visualization and Computer Graphics 27, no. 5 (May 2021): 2638–47. http://dx.doi.org/10.1109/tvcg.2021.3067762.
21

Nguyen, D. V., Huyen T. T. Tran, and Truong Cong Thang. "A client-based adaptation framework for 360-degree video streaming". Journal of Visual Communication and Image Representation 59 (February 2019): 231–43. http://dx.doi.org/10.1016/j.jvcir.2019.01.012.
22

Cho, Sooyoung, Daeyeol Kim, Changhyung Kim, Kyoung-Yoon Jeong, and Chae-Bong Sohn. "360-Degree Video Traffic Reduction Using Cloud Streaming in Mobile". Wireless Personal Communications 105, no. 2 (September 17, 2018): 635–54. http://dx.doi.org/10.1007/s11277-018-5984-y.
23

De Cicco, Luca, Saverio Mascolo, Vittorio Palmisano, and Giuseppe Ribezzo. "Reducing the network bandwidth requirements for 360° immersive video streaming". Internet Technology Letters 2, no. 4 (June 23, 2019): e118. http://dx.doi.org/10.1002/itl2.118.
24

NGUYEN, Duc V., Huyen T. T. TRAN, and Truong Cong THANG. "Adaptive Tiling Selection for Viewport Adaptive Streaming of 360-degree Video". IEICE Transactions on Information and Systems E102.D, no. 1 (January 1, 2019): 48–51. http://dx.doi.org/10.1587/transinf.2018mul0001.
25

Zhang, Xue, Gene Cheung, Yao Zhao, Patrick Le Callet, Chunyu Lin, and Jack Z. G. Tan. "Graph Learning Based Head Movement Prediction for Interactive 360 Video Streaming". IEEE Transactions on Image Processing 30 (2021): 4622–36. http://dx.doi.org/10.1109/tip.2021.3073283.
26

Nguyen, Duc, Nguyen Viet Hung, Nguyen Tien Phong, Truong Thu Huong, and Truong Cong Thang. "Scalable Multicast for Live 360-Degree Video Streaming Over Mobile Networks". IEEE Access 10 (2022): 38802–12. http://dx.doi.org/10.1109/access.2022.3165657.
27

Zhou, Chao, Zhenhua Li, Joe Osgood, and Yao Liu. "On the Effectiveness of Offset Projections for 360-Degree Video Streaming". ACM Transactions on Multimedia Computing, Communications, and Applications 14, no. 3s (August 9, 2018): 1–24. http://dx.doi.org/10.1145/3209660.
28

Yaqoob, Abid, Ting Bi, and Gabriel-Miro Muntean. "A Survey on Adaptive 360° Video Streaming: Solutions, Challenges and Opportunities". IEEE Communications Surveys & Tutorials 22, no. 4 (2020): 2801–38. http://dx.doi.org/10.1109/comst.2020.3006999.
29

van Kasteren, Anouk, Kjell Brunnström, John Hedlund, and Chris Snijders. "Quality of experience of 360 video – subjective and eye-tracking assessment of encoding and freezing distortions". Multimedia Tools and Applications 81, no. 7 (February 14, 2022): 9771–802. http://dx.doi.org/10.1007/s11042-022-12065-1.

Abstract:
The research domain on the Quality of Experience (QoE) of 2D video streaming is well established. However, a new video format is emerging and gaining popularity and availability: VR 360-degree video. The processing and transmission of 360-degree videos bring new challenges such as large bandwidth requirements and the occurrence of different distortions. The viewing experience is also substantially different from 2D video: it offers more interactive freedom in the viewing angle but can also be more demanding and cause cybersickness. The first goal of this article is to complement earlier research by Tran et al. (2017) testing the effects of quality degradation, freezing, and content on the QoE of 360-videos. The second goal is to test the contribution of visual attention as an influence factor in the QoE assessment. Data was gathered through subjective tests where participants watched degraded versions of 360-videos through a Head-Mounted Display with integrated eye-tracking sensors. After each video they answered questions regarding their quality perception, experience, perceptual load, and cybersickness. Our results showed that the participants rated the overall QoE rather low, and the ratings decreased with added degradations and freezing events. Cybersickness was found not to be an issue. The effects of the manipulations on visual attention were minimal. Attention was mainly directed by content, but also by surprising elements. The addition of eye-tracking metrics did not further explain individual differences in subjective ratings. Nevertheless, it was found that looking at moving objects increased the negative effect of freezing events and made participants less sensitive to quality distortions. More research is needed to conclude whether visual attention is an influence factor on the QoE in 360-video.
30

Мороз, В., and А. Щербаков. "Research and development of an automated video surveillance system to perform special functions". КОМП’ЮТЕРНО-ІНТЕГРОВАНІ ТЕХНОЛОГІЇ: ОСВІТА, НАУКА, ВИРОБНИЦТВО, no. 37 (December 28, 2019): 89–96. http://dx.doi.org/10.36910/6775-2524-0560-2019-37-13.

Abstract:
The paper presents a complex algorithm for creating an automated system for recording and displaying information from aircraft, with observation in an interactive operator control mode. An architecture is proposed for encrypted transmission of streaming video from several cameras on an aircraft, with in-flight video stabilization and projection onto a virtual reality helmet with a 360-degree perspective.
31

Kim, San, Jihyeok Yun, Bokyun Jo, Ji Ho Kim, Ho Gyun Chae, and Doug Young Suh. "View Direction Adaptive 360 Degree Video Streaming System Based on Projected Area". Journal of Computer and Communications 06, no. 01 (2018): 203–12. http://dx.doi.org/10.4236/jcc.2018.61020.
32

Chiariotti, Federico. "A survey on 360-degree video: Coding, quality of experience and streaming". Computer Communications 177 (September 2021): 133–55. http://dx.doi.org/10.1016/j.comcom.2021.06.029.
33

Le, Tuan Thanh, Dien Van Nguyen, and Eun-Seok Ryu. "Computing Offloading Over mmWave for Mobile VR: Make 360 Video Streaming Alive". IEEE Access 6 (2018): 66576–89. http://dx.doi.org/10.1109/access.2018.2878519.
34

Nguyen, Duc V., Huyen T. T. Tran, Anh T. Pham, and Truong Cong Thang. "An Optimal Tile-Based Approach for Viewport-Adaptive 360-Degree Video Streaming". IEEE Journal on Emerging and Selected Topics in Circuits and Systems 9, no. 1 (March 2019): 29–42. http://dx.doi.org/10.1109/jetcas.2019.2899488.
35

de la Fuente, Yago Sanchez, Gurdeep Singh Bhullar, Robert Skupin, Cornelius Hellge, and Thomas Schierl. "Delay Impact on MPEG OMAF’s Tile-Based Viewport-Dependent 360° Video Streaming". IEEE Journal on Emerging and Selected Topics in Circuits and Systems 9, no. 1 (March 2019): 18–28. http://dx.doi.org/10.1109/jetcas.2019.2899516.
36

Wu, Chenglei, Ruixiao Zhang, Zhi Wang, and Lifeng Sun. "A Spherical Convolution Approach for Learning Long Term Viewport Prediction in 360 Immersive Video". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (June 23, 2020): 14003–40. http://dx.doi.org/10.1609/aaai.v34i01.7377.

Abstract:
Viewport prediction for 360 video forecasts a viewer's viewport when he/she watches a 360 video with a head-mounted display, which benefits many VR/AR applications such as 360 video streaming and mobile cloud VR. Existing studies based on planar convolutional neural networks (CNNs) suffer from the image distortion and splits caused by the sphere-to-plane projection. In this paper, we start by proposing a spherical convolution based feature extraction network to distill spatial-temporal 360 information. We provide a solution for training such a network without a dedicated 360 image or video classification dataset. We differ from previous methods, which base their predictions on image pixel-level information, and propose a semantic content and preference based viewport prediction scheme. We adopt a recurrent neural network (RNN) to extract a user's personal preference for 360 video content from minutes of embedded viewing histories. We utilize this semantic preference as spatial attention to help the network find the "interested" regions in a future video. We further design a tailored mixture density network (MDN) based viewport prediction scheme, including viewport modeling, a tailored loss function, etc., to improve efficiency and accuracy. Our extensive experiments demonstrate the rationality and performance of our method, which outperforms state-of-the-art methods, especially in long-term prediction.
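A mixture density network outputs mixture parameters rather than a single point; as a sketch under assumed shapes, the snippet below reduces invented mixture parameters to a point prediction and a per-dimension uncertainty. It is not the paper's tailored MDN.

import numpy as np

# Hypothetical mixture parameters as an MDN might emit for one time step:
weights = np.array([0.6, 0.3, 0.1])              # component weights (sum to 1)
means = np.array([[100.0, -5.0],                 # per-component (yaw, pitch)
                  [140.0, 10.0],
                  [250.0, 0.0]])
sigmas = np.array([8.0, 15.0, 30.0])             # per-component std in degrees

point = weights @ means                          # mixture mean = point estimate
# Total mixture variance per dimension: sum_i w_i * (sigma_i^2 + (mu_i - mu)^2)
var = weights @ (sigmas[:, None] ** 2 + (means - point) ** 2)
print("predicted (yaw, pitch):", point, "std (deg):", np.sqrt(var))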
37

Wang, Yimeng, Mridul Agarwal, Tian Lan, and Vaneet Aggarwal. "Learning-Based Online QoE Optimization in Multi-Agent Video Streaming". Algorithms 15, no. 7 (June 28, 2022): 227. http://dx.doi.org/10.3390/a15070227.

Abstract:
Video streaming has become a major usage scenario for the Internet. The growing popularity of new applications, such as 4K and 360-degree videos, mandates that network resources must be carefully apportioned among different users in order to achieve the optimal Quality of Experience (QoE) and fairness objectives. This results in a challenging online optimization problem, as networks grow increasingly complex and the relevant QoE objectives are often nonlinear functions. Recently, data-driven approaches, deep Reinforcement Learning (RL) in particular, have been successfully applied to network optimization problems by modeling them as Markov decision processes. However, existing RL algorithms involving multiple agents fail to address nonlinear objective functions on different agents’ rewards. To this end, we leverage MAPG-finite, a policy gradient algorithm designed for multi-agent learning problems with nonlinear objectives. It allows us to optimize bandwidth distributions among multiple agents and to maximize QoE and fairness objectives on video streaming rewards. Implementing the proposed algorithm, we compare the MAPG-finite strategy with a number of baselines, including static, adaptive, and single-agent learning policies. The numerical results show that MAPG-finite significantly outperforms the baseline strategies with respect to different objective functions and in various settings, including both constant and adaptive bitrate videos. Specifically, our MAPG-finite algorithm maximizes QoE by 15.27% and maximizes fairness by 22.47% compared to the standard SARSA algorithm for a 2000 KB/s link.
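As a contrast to the learned policy, a toy baseline for apportioning bandwidth among streams is greedy marginal-utility allocation with a log utility, a common proportional-fairness surrogate; this sketch is purely illustrative and unrelated to MAPG-finite's internals.

import math

def allocate(total_kbps, n_streams, step=100):
    alloc = [step] * n_streams                   # start everyone above zero
    remaining = total_kbps - step * n_streams
    while remaining >= step:
        # give the next increment to the stream with the largest utility gain
        gains = [math.log(a + step) - math.log(a) for a in alloc]
        best = max(range(n_streams), key=gains.__getitem__)
        alloc[best] += step
        remaining -= step
    return alloc

print(allocate(total_kbps=2000, n_streams=3))    # tends toward equal shares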
38

Vatanen, Anna, Heidi Spets, Maarit Siromaa, Mirka Rauniomaa, and Tiina Keisanen. "Experiences in Collecting 360° Video Data and Collaborating Remotely in Virtual Reality". QuiViRR: Qualitative Video Research Reports 3 (September 1, 2022): a0005. http://dx.doi.org/10.54337/ojs.quivirr.v3.2022.a0005.

Abstract:
This paper reports on a pilot project called Remote Research and Collaboration Using VR and 360° Video (RReCo) that was carried out in late Spring 2021 at the University of Oulu, Finland. The project explored new ways of collecting, viewing and analysing video data for the purposes of engaging in remote, collaborative research on social interaction and activity. Here we share our experiences in collecting different types of video data, especially 360°, and relate those to our user experiences in analysing the data together in virtual reality. Our remote multisite data sessions were organised using software for immersive qualitative analytics, virtual reality and live streaming. In this paper, we also reflect on the similarities and differences between our data sets, especially with a view to how awareness of different technical setups may help in making informed choices, and thereby increase the reliability of research on social interaction.
39

Yaqoob, Abid, Mohammed Amine Togou, and Gabriel-Miro Muntean. "Dynamic Viewport Selection-Based Prioritized Bitrate Adaptation for Tile-Based 360° Video Streaming". IEEE Access 10 (2022): 29377–92. http://dx.doi.org/10.1109/access.2022.3157339.
40

Nguyen, Thanh Cong, and Ji-Hoon Yun. "Predictive Tile Selection for 360-Degree VR Video Streaming in Bandwidth-Limited Networks". IEEE Communications Letters 22, no. 9 (September 2018): 1858–61. http://dx.doi.org/10.1109/lcomm.2018.2848915.
41

Jiang, Zhiqian, Xu Zhang, Wei Huang, Hao Chen, Yiling Xu, Jenq-Neng Hwang, Zhan Ma, and Jun Sun. "A Hierarchical Buffer Management Approach to Rate Adaptation for 360-Degree Video Streaming". IEEE Transactions on Vehicular Technology 69, no. 2 (February 2020): 2157–70. http://dx.doi.org/10.1109/tvt.2019.2960866.
42

Zhang, Xiaoyi, Xinjue Hu, Ling Zhong, Shervin Shirmohammadi, and Lin Zhang. "Cooperative Tile-Based 360° Panoramic Streaming in Heterogeneous Networks Using Scalable Video Coding". IEEE Transactions on Circuits and Systems for Video Technology 30, no. 1 (January 2020): 217–31. http://dx.doi.org/10.1109/tcsvt.2018.2886805.
43

Chen, Xiaolei, Baoning Cao, and Ishfaq Ahmad. "Lightweight Neural Network-Based Viewport Prediction for Live VR Streaming in Wireless Video Sensor Network". Mobile Information Systems 2021 (November 9, 2021): 1–12. http://dx.doi.org/10.1155/2021/8501990.

Abstract:
Live virtual reality (VR) streaming (a.k.a. 360-degree video streaming) has become increasingly popular because of the rapid growth of head-mounted displays and 5G network deployment. However, the huge bandwidth and energy required to deliver live VR frames in a wireless video sensor network (WVSN) become bottlenecks, making it impossible for the application to be deployed more widely. To solve the bandwidth and energy challenges, VR video viewport prediction has been proposed as a feasible solution. However, existing works mainly focus on bandwidth usage and prediction accuracy and ignore the resource consumption of the server. In this study, we propose a lightweight neural network-based viewport prediction method for live VR streaming in WVSN to overcome these problems. In particular, we (1) use a compressed channel lightweight network (C-GhostNet) to reduce the parameters of the whole model and (2) use an improved gated recurrent unit module (GRU-ECA) together with C-GhostNet to process the video data and head movement data separately to improve the prediction accuracy. To evaluate the performance of our method, we conducted extensive experiments using an open VR user dataset. The experimental results demonstrate that our method achieves significant server resource savings, real-time performance, and high prediction accuracy, while achieving low bandwidth usage and low energy consumption in WVSN, which meets the requirements of live VR streaming.
44

Chuang, Shu-Min, Chia-Sheng Chen, and Eric Hsiao-Kuang Wu. "The Implementation of Interactive VR Application and Caching Strategy Design on Mobile Edge Computing (MEC)". Electronics 12, no. 12 (June 16, 2023): 2700. http://dx.doi.org/10.3390/electronics12122700.

Abstract:
Virtual reality (VR) and augmented reality (AR) have been proposed as revolutionary applications for the next generation, especially in education. Many VR applications have been designed to promote learning via virtual environments and 360° video. However, due to the strict requirements of end-to-end latency and network bandwidth, numerous VR applications using 360° video streaming may not achieve a high-quality experience. To address this issue, we propose relying on tile-based 360° video streaming and the caching capacity in Mobile Edge Computing (MEC) to predict the field of view (FoV) in the head-mounted device, then deliver the required tiles. Prefetching tiles in MEC can save the bandwidth of the backend link and support multiple users. Smart caching decisions may reduce the memory at the edge and compensate for the FoV prediction error. For instance, caching whole tiles at each small cell has a higher storage cost compared to caching one small cell that covers multiple users. In this paper, we define a tile selection, caching, and FoV coverage model as the Tile Selection and Caching Problem and propose a heuristic algorithm to solve it. Using a dataset of real users’ head movements, we compare our algorithm to the Least Recently Used (LRU) and Least Frequently Used (LFU) caching policies. The results show that our proposed approach improves FoV coverage by 30% and reduces caching costs by 25% compared to LFU and LRU.
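The LRU policy used above as a comparison baseline can be written compactly; the capacity and request trace below are illustrative, not from the paper's evaluation.

from collections import OrderedDict

class LRUTileCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()              # tile_id -> payload

    def get(self, tile_id):
        if tile_id not in self._store:
            return None
        self._store.move_to_end(tile_id)         # mark as most recently used
        return self._store[tile_id]

    def put(self, tile_id, payload):
        self._store[tile_id] = payload
        self._store.move_to_end(tile_id)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)      # evict least recently used

cache, hits, trace = LRUTileCache(3), 0, ["t1", "t2", "t1", "t3", "t4", "t2"]
for t in trace:
    if cache.get(t) is None:
        cache.put(t, b"...")                     # miss: fetch from origin
    else:
        hits += 1
print(f"hit ratio = {hits / len(trace):.2f}")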
45

Lee, Dongwon, Minji Choi, and Joohyun Lee. "Prediction of Head Movement in 360-Degree Videos Using Attention Model". Sensors 21, no. 11 (May 25, 2021): 3678. http://dx.doi.org/10.3390/s21113678.

Abstract:
In this paper, we propose a prediction algorithm, a combination of Long Short-Term Memory (LSTM) and an attention model, based on machine learning models to predict the vision coordinates when watching 360-degree videos in a Virtual Reality (VR) or Augmented Reality (AR) system. Predicting the vision coordinates during video streaming is important when the network condition is degraded. However, traditional prediction models such as Moving Average (MA) and Autoregressive Moving Average (ARMA) are linear, so they cannot capture nonlinear relationships. Therefore, machine learning models based on deep learning have recently been used for nonlinear predictions. We use the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural network methods, which originate from Recurrent Neural Networks (RNNs), to predict the head position in 360-degree videos, and we add an attention model to the LSTM to obtain more accurate results. We also compare the performance of the proposed model with other machine learning models such as Multi-Layer Perceptron (MLP) and RNN using the root mean squared error (RMSE) of predicted and real coordinates. We demonstrate that our model can predict the vision coordinates more accurately than the other models on various videos.
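The RMSE evaluation mentioned above needs care with angular coordinates: yaw errors should be wrapped so that 359° versus 1° counts as 2°, not 358°. A minimal sketch with invented coordinates (the wrap convention is an assumption, not necessarily the paper's):

from math import sqrt

def yaw_rmse(pred, real):
    # wrap each error to [-180, 180) before squaring
    errs = [((p - r + 180) % 360 - 180) for p, r in zip(pred, real)]
    return sqrt(sum(e * e for e in errs) / len(errs))

pred = [10.0, 355.0, 180.0, 90.0]
real = [12.0, 2.0, 175.0, 80.0]
print(f"RMSE = {yaw_rmse(pred, real):.2f} deg")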
46

Li, Jie, Ling Han, Cong Zhang, Qiyue Li, and Weitao Li. "Adaptive Panoramic Video Multicast Streaming with Limited FoV Feedback". Complexity 2020 (December 18, 2020): 1–14. http://dx.doi.org/10.1155/2020/8832715.

Abstract:
Virtual reality (VR) provides an immersive 360-degree viewing experience and has been widely used in many areas. However, the transmission of panoramic video usually places a large demand on bandwidth; thus, it is difficult to ensure a reliable quality of experience (QoE) under a limited bandwidth. In this paper, we propose a field-of-view (FoV) prediction methodology based on limited FoV feedback that can fuse the heat map and FoV information to generate a user view. The former is obtained through saliency detection, while the latter is extracted from some user perspectives randomly, and it contains the FoV information of all users. Then, we design a QoE-driven panoramic video streaming system with a client/server (C/S) architecture, in which the server performs rate adaptation based on the bandwidth and the predicted FoV. We then formulate it as a nonlinear integer programming (NLP) problem and propose an optimal algorithm that combines the Karush–Kuhn–Tucker (KKT) conditions with the branch-and-bound method to solve this problem. Finally, we evaluate our system in a simulation environment, and the results show that the system performs better than the baseline.
47

Park, Sohee, Arani Bhattacharya, Zhibo Yang, Samir R. Das, and Dimitris Samaras. "Mosaic: Advancing User Quality of Experience in 360-Degree Video Streaming With Machine Learning". IEEE Transactions on Network and Service Management 18, no. 1 (March 2021): 1000–1015. http://dx.doi.org/10.1109/tnsm.2021.3053183.
48

Li, Jie, Ransheng Feng, Wei Sun, Zhi Liu, and Qiyue Li. "QoE-Driven Coupled Uplink and Downlink Rate Adaptation for 360-Degree Video Live Streaming". IEEE Communications Letters 24, no. 4 (April 2020): 863–67. http://dx.doi.org/10.1109/lcomm.2020.2966193.
49

Zare, Alireza, Maryam Homayouni, Alireza Aminlou, Miska M. Hannuksela, and Moncef Gabbouj. "6K and 8K Effective Resolution with 4K HEVC Decoding Capability for 360 Video Streaming". ACM Transactions on Multimedia Computing, Communications, and Applications 15, no. 2s (August 12, 2019): 1–22. http://dx.doi.org/10.1145/3335053.
50

Zou, Junni, Chenglin Li, Chengming Liu, Qin Yang, Hongkai Xiong, and Eckehard Steinbach. "Probabilistic Tile Visibility-Based Server-Side Rate Adaptation for Adaptive 360-Degree Video Streaming". IEEE Journal of Selected Topics in Signal Processing 14, no. 1 (January 2020): 161–76. http://dx.doi.org/10.1109/jstsp.2019.2956716.