Journal articles on the topic 'Video Quality Measure'


Consult the top 50 journal articles for your research on the topic 'Video Quality Measure.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Barkowsky, Marcus, Jens Bialkowski, Björn Eskofier, Roland Bitto, and André Kaup. "Temporal Trajectory Aware Video Quality Measure." IEEE Journal of Selected Topics in Signal Processing 3, no. 2 (April 2009): 266–79. http://dx.doi.org/10.1109/jstsp.2009.2015375.

2

Hekstra, A. P., J. G. Beerends, D. Ledermann, F. E. de Caluwe, S. Kohler, R. H. Koenen, S. Rihs, M. Ehrsam, and D. Schlauss. "PVQM – A perceptual video quality measure." Signal Processing: Image Communication 17, no. 10 (November 2002): 781–98. http://dx.doi.org/10.1016/s0923-5965(02)00056-5.

3

Arndt, Sebastian, Jan-Niklas Antons, Robert Schleicher, Sebastian Moller, and Gabriel Curio. "Using Electroencephalography to Measure Perceived Video Quality." IEEE Journal of Selected Topics in Signal Processing 8, no. 3 (June 2014): 366–76. http://dx.doi.org/10.1109/jstsp.2014.2313026.

4

Dumic, Emil, and Anamaria Bjelopera. "No-Reference Objective Video Quality Measure for Frame Freezing Degradation." Sensors 19, no. 21 (October 26, 2019): 4655. http://dx.doi.org/10.3390/s19214655.

Abstract:
In this paper we present a novel no-reference video quality measure, NR-FFM (no-reference frame-freezing measure), designed to estimate quality degradations caused by frame freezing of streamed video. The performance of the measure was evaluated using 40 degraded video sequences from the Laboratory for Image and Video Engineering (LIVE) mobile database. The proposed quality measure can be used in different scenarios, such as mobile video transmission, either by itself or in combination with other quality measures. These two types of applications are presented and studied together with considerations on relevant normalization issues. The results showed promising correlation between the user-assigned quality and the estimated quality scores.
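As a rough illustration of the general idea behind pixel-domain freeze detection (a minimal sketch, not the paper's NR-FFM algorithm; the threshold is an assumed, content-dependent value), consecutive frames that are nearly identical can be flagged as frozen:

```python
import numpy as np

def freeze_ratio(frames, threshold=0.5):
    """Fraction of frame transitions flagged as frozen.

    frames: sequence of grayscale frames (H x W arrays).
    threshold: mean absolute inter-frame difference below which a transition
    is treated as a repeated (frozen) frame; this value is an assumption,
    not taken from the paper.
    """
    frozen = 0
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.mean(np.abs(curr.astype(np.float64) - prev.astype(np.float64)))
        if diff < threshold:
            frozen += 1
    return frozen / max(len(frames) - 1, 1)
```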
5

Li, Gang, Ainiwaer Aizimaiti, and Yan Liu. "Quaternion Model of Fast Video Quality Assessment Based on Structural Similarity Normalization." Applied Mechanics and Materials 380-384 (August 2013): 3982–85. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3982.

Abstract:
Video quality evaluation methods have been widely studied because of an increasing need in a variety of video processing applications, such as compression, analysis, communication, enhancement and restoration. Quaternion models are also widely used to measure image or video quality. In this paper, we propose a new quaternion model which mainly describes the contour features, surface features and temporal information of the video. We use structural similarity comparison to normalize the four quaternion parts separately, because each part of the quaternion uses a different metric. Structural similarity comparison is also used to measure the difference between reference videos and distorted videos. The results of experiments show that the new method has good correlation with perceived video quality when tested on the Video Quality Experts Group (VQEG) Phase I FR-TV test data set.
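For reference, the structural similarity comparison used for normalization builds on the standard SSIM index, which for two image patches $x$ and $y$ is commonly defined as

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)},$$

where $\mu_x,\mu_y$ are local means, $\sigma_x^2,\sigma_y^2$ local variances, $\sigma_{xy}$ the covariance, and $C_1,C_2$ small stabilizing constants; how the four quaternion parts are normalized with it is specific to the paper.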
6

Leszczuk, Mikołaj, Lucjan Janowski, Jakub Nawała, and Atanas Boev. "Objective Video Quality Assessment Method for Face Recognition Tasks." Electronics 11, no. 8 (April 7, 2022): 1167. http://dx.doi.org/10.3390/electronics11081167.

Abstract:
Nowadays, there are many metrics for overall Quality of Experience (QoE), both those with Full Reference (FR), such as Peak Signal-to-Noise Ratio (PSNR) or Structural Similarity (SSIM), and those with No Reference (NR), such as Video Quality Indicators (VQI), which are successfully used in video processing systems to evaluate videos whose quality is degraded by different processing scenarios. However, they are not suitable for video sequences used for recognition tasks (Target Recognition Videos, TRV). Therefore, correctly estimating the performance of the video processing pipeline in both manual and Computer Vision (CV) recognition tasks is still a major research challenge. There is a need for objective methods to evaluate video quality for recognition tasks. In response to this need, we show in this paper that it is possible to develop the new concept of an objective model for evaluating video quality for face recognition tasks. The model is trained, tested and validated on a representative set of image sequences. The set of degradation scenarios is based on the model of a digital camera and how the luminous flux reflected from the scene eventually becomes a digital image. The resulting degraded images are evaluated using a CV library for face recognition as well as VQI. The measured accuracy of a model, expressed as the value of the F-measure parameter, is 0.87.
7

Alsrehin, Nawaf O., and Ahmad F. Klaib. "VMQ: an algorithm for measuring the Video Motion Quality." Bulletin of Electrical Engineering and Informatics 8, no. 1 (March 1, 2019): 231–38. http://dx.doi.org/10.11591/eei.v8i1.1418.

Abstract:
This paper proposes a new full-reference algorithm, called Video Motion Quality (VMQ), which evaluates the relative motion quality of the distorted video generated from the reference video based on all the frames from both videos. VMQ uses any frame-based metric to compare frames from the original and distorted videos. It uses the time stamp of each frame to measure the intersection values. VMQ combines the comparison values with the intersection values in an aggregation function to produce the final result. To explore the efficiency of the VMQ, we used a set of raw, uncompressed videos to generate a new set of encoded videos. These encoded videos were then used to generate a new set of distorted videos which have the same video bit rate and frame size but a reduced frame rate. To evaluate the VMQ, we applied it to compare the encoded videos with the distorted videos and recorded the results. The initial evaluation results showed trends compatible with most of the subjective evaluation results.
8

Moore, Peter Thomas, Neil O’Hare, Kevin P. Walsh, Neil Ward, and Niamh Conlon. "Objective video quality measure for application to tele-echocardiography." Medical & Biological Engineering & Computing 46, no. 8 (July 10, 2008): 807–13. http://dx.doi.org/10.1007/s11517-008-0364-5.

9

Wu, Xuanyi, Irene Cheng, Zhenkun Zhou, and Anup Basu. "RAVA: Region-Based Average Video Quality Assessment." Sensors 21, no. 16 (August 15, 2021): 5489. http://dx.doi.org/10.3390/s21165489.

Abstract:
Video has become the most popular medium of communication over the past decade, with nearly 90 percent of the bandwidth on the Internet being used for video transmission. Thus, evaluating the quality of an acquired or compressed video has become increasingly important. The goal of video quality assessment (VQA) is to measure the quality of a video clip as perceived by a human observer. Since manually rating every video clip to evaluate quality is infeasible, researchers have attempted to develop various quantitative metrics that estimate the perceptual quality of video. In this paper, we propose a new region-based average video quality assessment (RAVA) technique extending image quality assessment (IQA) metrics. In our experiments, we extend two full-reference (FR) image quality metrics to measure the feasibility of the proposed RAVA technique. Results on three different datasets show that our RAVA method is practical in predicting objective video scores.
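The abstract does not spell out how the regional scores are pooled; as a rough sketch of the generic recipe (extending a full-reference image metric to video by scoring fixed regions of each frame and averaging), the following uses PSNR purely as a stand-in metric, with the grid size and plain averaging as assumptions:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    # Full-reference image metric used here only as a placeholder.
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

def region_average_score(ref_frames, dist_frames, grid=(4, 4)):
    """Average an image metric over a grid of regions and over all frames."""
    scores = []
    for ref, dist in zip(ref_frames, dist_frames):
        h, w = ref.shape[:2]
        for i in range(grid[0]):
            for j in range(grid[1]):
                ys = slice(i * h // grid[0], (i + 1) * h // grid[0])
                xs = slice(j * w // grid[1], (j + 1) * w // grid[1])
                scores.append(psnr(ref[ys, xs], dist[ys, xs]))
    return float(np.mean(scores))
```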
10

Hasan, Md Mehedi, Md Ariful Islam, Sejuti Rahman, Michael R. Frater, and John F. Arnold. "No-Reference Quality Assessment of Transmitted Stereoscopic Videos Based on Human Visual System." Applied Sciences 12, no. 19 (October 7, 2022): 10090. http://dx.doi.org/10.3390/app121910090.

Abstract:
Provisioning the stereoscopic 3D (S3D) video transmission services of admissible quality in a wireless environment is an immense challenge for video service providers. Unlike for 2D videos, a widely accepted No-reference objective model for assessing transmitted 3D videos that explores the Human Visual System (HVS) appropriately has not been developed yet. Distortions perceived in 2D and 3D videos are significantly different due to the sophisticated manner in which the HVS handles the dissimilarities between the two different views. In real-time video transmission, viewers only have the distorted or receiver end content of the original video acquired through the communication medium. In this paper, we propose a No-reference quality assessment method that can estimate the quality of a stereoscopic 3D video based on HVS. By evaluating perceptual aspects and correlations of visual binocular impacts in a stereoscopic movie, the approach creates a way for the objective quality measure to assess impairments similarly to a human observer who would experience the similar material. Firstly, the disparity is measured and quantified by the region-based similarity matching algorithm, and then, the magnitude of the edge difference is calculated to delimit the visually perceptible areas of an image. Finally, an objective metric is approximated by extracting these significant perceptual image features. Experimental analysis with standard S3D video datasets demonstrates the lower computational complexity for the video decoder and comparison with the state-of-the-art algorithms shows the efficiency of the proposed approach for 3D video transmission at different quantization (QP 26 and QP 32) and loss rate (1% and 3% packet loss) parameters along with the perceptual distortion features.
11

Álvarez, Alberto, Laura Pozueco, Sergio Cabrero, Xabiel G. Pañeda, Roberto García, David Melendi, and Gabriel Díaz. "A Framework to Measure and Estimate Video Quality in SVC Real-Time Adaptive Systems." International Journal of Business Data Communications and Networking 10, no. 1 (January 2014): 47–64. http://dx.doi.org/10.4018/ijbdcn.2014010103.

Abstract:
Effectively adapting the content to network conditions in real-time is an important matter in best-effort networks like the Internet. Scalable Video Coding (SVC) is an interesting alternative to implement such systems. However, some problems of the performance evaluation of SVC based adaptive systems have not been solved. The authors review the main efforts directed to measure video quality on SVC related systems and discuss the limitations of each one. This paper elaborates a framework to measure video quality metrics in real adaptive SVC based streams. An estimation method for full reference video quality metrics is proposed. This method reduces reference information required and it is able to provide real-time accurate results simply using metadata regarding the video quality of the reference layers. The video quality of several streams that have been generated using a real-time adaptive system is first measured with the elaborated framework and then estimated with the proposed method.
12

Scholler, Simon, Sebastian Bosse, Matthias Sebastian Treder, Benjamin Blankertz, Gabriel Curio, Klaus-Robert Muller, and Thomas Wiegand. "Toward a Direct Measure of Video Quality Perception Using EEG." IEEE Transactions on Image Processing 21, no. 5 (May 2012): 2619–29. http://dx.doi.org/10.1109/tip.2012.2187672.

13

YANG, J. X., D. M. TAN, and H. R. WU. "AN IMPAIRMENT METRIC FOR VIDEO TEMPORAL FLUCTUATION MEASURE." International Journal of Image and Graphics 11, no. 02 (April 2011): 251–64. http://dx.doi.org/10.1142/s0219467811004081.

Abstract:
Temporal fluctuations are often observed in digitally compressed videos. However, it is difficult to accurately measure these fluctuation intensities with the traditional peak signal-to-noise ratio (PSNR) since the PSNR only provides a generic quality measure. Although specialized metrics have been proposed for temporal fluctuation measurement, e.g., the sum of squared differences (SSD) and the motion compensated SSD (MCSSD), these first difference based algorithms may falsely treat smooth continuous change of pixel values as temporal fluctuations. To overcome this problem, a motion estimated mean scaled absolute second difference (MEMSASD) is proposed here. The performance of the MEMSASD is examined using a number of video sequences with varying degrees of temporal fluctuations, generated by an H.264/AVC compliant codec using standard test video sequences. Compared with the PSNR and the SSD, the behavior of the MCSSD and the proposed metric provide better reflections of temporal fluctuation intensities as perceived by the human visual system (HVS), in terms of the Pearson correlation coefficient. The MEMSASD metric has an advantage over MCSSD in that it avoids misclassification of temporal fluctuations of pixels with smooth continuous change along the temporal axis.
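The distinction the abstract draws (a first temporal difference flags smooth monotonic luminance changes, while a second difference does not) can be seen on a toy pixel trace; this is illustrative only, and the scaling and motion compensation of the actual MEMSASD metric are omitted:

```python
import numpy as np

smooth_fade = 10.0 * np.arange(8)                         # pixel brightens smoothly over time
flicker = np.array([0, 10, 0, 10, 0, 10, 0, 10], float)   # genuine temporal fluctuation

first_diff = lambda x: np.sum(np.diff(x) ** 2)       # SSD-style (first difference) energy
second_diff = lambda x: np.sum(np.diff(x, n=2) ** 2) # second-difference energy

print(first_diff(smooth_fade), second_diff(smooth_fade))  # 700.0 0.0    -> fade wrongly flagged by SSD only
print(first_diff(flicker), second_diff(flicker))          # 700.0 2400.0 -> flicker flagged by both
```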
14

Hameed, Abdul, Benjamin Balas, and Rui Dai. "Thin-slice vision: inference of confidence measure from perceptual video quality." Journal of Electronic Imaging 25, no. 6 (December 21, 2016): 060501. http://dx.doi.org/10.1117/1.jei.25.6.060501.

15

Narwaria, Manish, Matthieu Perreira Da Silva, and Patrick Le Callet. "HDR-VQM: An objective quality measure for high dynamic range video." Signal Processing: Image Communication 35 (July 2015): 46–60. http://dx.doi.org/10.1016/j.image.2015.04.009.

16

R.Bulli Babu, Dr, SK Shahid Afridi, and S. Satya Vasavi. "A new enhancement to avoid video distortion in wireless multihop networks." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 326. http://dx.doi.org/10.14419/ijet.v7i2.7.10608.

Abstract:
The routing protocols originally designed for wireless networks are largely application oriented. Today, with the growth in wireless network usage, the main problem is video traffic, and maintaining good video quality is essential: users expect high-quality video delivered smoothly to their devices. Video quality degrades for two reasons: 1) compression introduces distortion at the source, and 2) interference and errors on the wireless channels introduce further distortion. In this paper we therefore work on reducing the distortion that occurs in video traffic, considering a wireless network in which the application flows carry video. Reducing this distortion at the clients is difficult, and it cannot be minimized using link-quality-based routing metrics alone. We first construct an analytical framework to assess the impact of routing choices on video distortion, and then use it to design a routing metric that reduces distortion. Our experimental results show that the proposed protocol reduces video distortion and the degradation of the user's experience.
17

Laureshyn, Aliaksei, and Mikael Nilsson. "How Accurately Can We Measure from Video? Practical Considerations and Enhancements of the Camera Calibration Procedure." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 43 (June 8, 2018): 24–33. http://dx.doi.org/10.1177/0361198118774194.

Abstract:
The accuracy of position measurements from videos depends greatly on the quality of the camera calibration model parameters. This paper investigates how such factors as camera height and the selection of calibration points affect the quality of the final calibration model. A series of controlled experiments were performed in traffic or similar-to-traffic environments, in which the accuracy of measurements from videos was compared with measurements taken with other tools. To enhance the calibration process, a multi-camera approach is suggested that utilizes the information about “common points” – points seen on several cameras but with unknown world coordinates. The performed tests showed that calibration quality can greatly benefit from this approach. The paper is addressed primarily to traffic researchers developing their own video-based tools for road user observations.
18

Tyukhtyaev, Dmitry. "Researching Video Conference Services on IEEE 802.11x Wireless Networks." NBI Technologies, no. 4 (December 2021): 13–18. http://dx.doi.org/10.15688/nbit.jvolsu.2021.4.2.

Abstract:
The purpose of the study was to determine the dependence of the quality of video conferencing services on the characteristics of wireless communication channels and the number of users in a given network. The characteristics of the signal strength in a wireless network, measured in decibels (dB) were described in this article. The article discusses subjective and objective methods for assessing video. The PSNR and VQM metrics and the MSU Video Quality Measurement Tool software, created by the computer graphics laboratory of the Moscow State University, were used as an objective method for assessing video. For the subjective method, the DSCQS method was used. The PSNR (peak signal to noise ratio) metric is one of the most commonly used metrics. PSNR measures the peak signal-to-noise ratio between the original signal and the signal at the output of the system. PSNR does not measure all video-specific parameters, as the fidelity of the image is constantly changing depending on the visual complexity of the image, the available bit rate and even the compression method. The Video Quality Measurement (VQM) metric is described in Recommendation ITU-R BT.1683. The test results show that VQM has a high correlation with subjective methods for assessing video quality and claims to become the standard in the field of objective quality assessment.
19

CH, Subrahmanyam, Venkata Rao D, and Usha Rani N. "Low bit Rate Video Quality Analysis Using NRDPF-VQA Algorithm." International Journal of Electrical and Computer Engineering (IJECE) 5, no. 1 (February 1, 2015): 71. http://dx.doi.org/10.11591/ijece.v5i1.pp71-77.

Abstract:
In this work, we propose the NRDPF-VQA (No-Reference Distortion Patch Features Video Quality Assessment) model, which aims to measure the quality of H.264/AVC (Advanced Video Coding) video. The proposed method takes advantage of contrast changes in the video quality caused by luminance changes. The proposed quality metric was tested using the LIVE video database. The experimental results compare the performance of the new index with other NR-VQA models that require training, on the LIVE, CSIQ, and VQEG HDTV video databases. The values are compared against human scores expressed as DMOS.
20

Al-Naji, Ali, and Javaan Chahl. "Contactless Cardiac Activity Detection Based on Head Motion Magnification." International Journal of Image and Graphics 17, no. 01 (January 2017): 1750001. http://dx.doi.org/10.1142/s0219467817500012.

Abstract:
The aim of this study is to remotely measure cardiac activity (heart pulse, total cycle length and pulse width) from videos based on a head motion at different positions of the head (front, back and side). As the head motion resulting from the cardiac cycle of blood from the heart to the head via the carotid arteries is not visible to the naked eye and to preserve the signal strength in the video, we used wavelet decomposition and a Chebychev filter to develop a standard Eulerian video magnification in terms of noise removal and execution time. We used both magnification systems to measure cardiac activity and statistically compare the results using Bland–Altman method. Also, we proposed a new video quality system based on fuzzy interface system to select which magnification system has better magnification quality and gives better results for the heart pulse rate. The experimental results on several videos captured from 10 healthy subjects show that the proposed contactless system of heart pulse has an accuracy of 98.3% when magnified video based on the developing magnification system was used and an accuracy of 97.4% when magnified video based on Eulerian magnification system was used instead. The proposed system has low computational complexity, making it suitable for advancing health care applications, mobile health applications and telemedicine.
21

Chen, Shouning, Baoyu Zheng, and Yujuan Zhao. "Hierarchical Objective Quality Assessment for CS Video in WMSN." International Journal of Distributed Sensor Networks 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/237565.

Abstract:
Compressive sensing (CS) is a sub-Nyquist sampling approach that still enables exact reconstruction, which makes it applicable to WMSN. In this paper, based on the characteristics of CS video in WMSN, we propose a hierarchical objective CS video quality assessment (HOCSVQA) approach that derives a CS video quality index (CSVQI) from three levels: the measurement level, the stream level, and the packet level. This approach not only keeps the convenience and real-time character of objective video assessment, but also reflects the QoE to a certain extent, because its coefficients are regressed from subjective video assessment experiments. A set of subjective CS video quality assessment experiments and a further set of verification experiments were designed and carried out. The CSVQI produced by the proposed model maintained a high correlation with the data from the verification experiments under a statistical correlation measure.
22

Martínez-Rach, Miguel O., Pablo Piñol, Otoniel M. López, Manuel Perez Malumbres, José Oliver, and Carlos Tavares Calafate. "On the Performance of Video Quality Assessment Metrics under Different Compression and Packet Loss Scenarios." Scientific World Journal 2014 (2014): 1–18. http://dx.doi.org/10.1155/2014/743604.

Abstract:
When comparing the performance of video coding approaches, evaluating different commercial video encoders, or measuring the perceived video quality in a wireless environment, Rate/distortion analysis is commonly used, where distortion is usually measured in terms of PSNR values. However, PSNR does not always capture the distortion perceived by a human being. As a consequence, significant efforts have focused on defining an objective video quality metric that is able to assess quality in the same way as a human does. We perform a study of some available objective quality assessment metrics in order to evaluate their behavior in two different scenarios. First, we deal with video sequences compressed by different encoders at different bitrates in order to properly measure the video quality degradation associated with the encoding system. In addition, we evaluate the behavior of the quality metrics when measuring video distortions produced by packet losses in mobile ad hoc network scenarios with variable degrees of network congestion and node mobility. Our purpose is to determine if the analyzed metrics can replace the PSNR while comparing, designing, and evaluating video codec proposals, and, in particular, under video delivery scenarios characterized by bursty and frequent packet losses, such as wireless multihop environments.
23

Kerentseva, Nina, and Aleksandr Trofimov. "Analytical Overview of Metrics Used to Assess the Quality of Multimedia Information." NBI Technologies, no. 3 (November 2022): 27–31. http://dx.doi.org/10.15688/nbit.jvolsu.2022.3.5.

Abstract:
Data quality is an indicator that characterizes any transmitted information that can be measured. The very word “measure” implies the evaluation of these data, whose quality can be described and quantified. A metric, in turn, is essentially an objective assessment made during testing, which makes it possible to determine the data distortions that occur during transmission, encoding, digitization, compression, and decoding of video data. The article considers metrics such as PSNR and VQM, and analyzes the ITU-R-BT.500-8.11 standard. A brief overview of the MSU Video Quality Measurement Tool is presented, as well as a simulated video conferencing image acquisition using the popular H.264 codec. The formulas for calculating the PSNR metric, which is defined through the mean square error (MSE) and estimates the loss of image quality by comparing the received video with the uploaded (reference) video, are presented. The VQM (Video Quality Metric) metric is also considered; it evaluates the distortion in the transmitted video caused by the passage of network packets through the cable line of the transmission system (decoding or coding errors). Methods for subjective and objective evaluation of images in video conferencing are also described, and the advantages and disadvantages of each evaluation method are considered.
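The PSNR formulas referred to here are the standard ones: for a reference frame $I$ and a received frame $K$ of size $M\times N$ with peak value $\mathrm{MAX}_I$ (255 for 8-bit video),

$$\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-K(i,j)\bigr)^{2},\qquad \mathrm{PSNR}=10\log_{10}\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}\ \text{dB}.$$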
24

Liang, Mei Yuan, Xin Chen Zhang, and Qian Chen. "Real-Time Video Quality Assessment Methods Based on No-Reference for Video Conference System." Applied Mechanics and Materials 263-266 (December 2012): 198–201. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.198.

Abstract:
After an in-depth investigation of performance measures for video conferencing and of video quality assessment methods, an overall video quality assessment (VQA) scheme for video conference systems is proposed. Since no reference video source is available, a novel No-Reference (NR) VQA method based on both coding-layer information and packet-layer transmission performance is introduced. The experimental results show that the proposed method obtains a dependable score that is close to the subjective assessment.
25

Wu, Yadong, Hongying Zhang, and Ran Duan. "Total Variation Based Perceptual Image Quality Assessment Modeling." Journal of Applied Mathematics 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/294870.

Abstract:
Visual quality measure is one of the fundamental and important issues to numerous applications of image and video processing. In this paper, based on the assumption that human visual system is sensitive to image structures (edges) and image local luminance (light stimulation), we propose a new perceptual image quality assessment (PIQA) measure based on total variation (TV) model (TVPIQA) in spatial domain. The proposed measure compares TVs between a distorted image and its reference image to represent the loss of image structural information. Because of the good performance of TV model in describing edges, the proposed TVPIQA measure can illustrate image structure information very well. In addition, the energy of enclosed regions in a difference image between the reference image and its distorted image is used to measure the missing luminance information which is sensitive to human visual system. Finally, we validate the performance of TVPIQA measure with Cornell-A57, IVC, TID2008, and CSIQ databases and show that TVPIQA measure outperforms recent state-of-the-art image quality assessment measures.
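The total-variation comparison described above can be illustrated with a small sketch: a generic discrete TV and a simple relative-difference comparison, since the paper's exact TVPIQA pooling is not specified in the abstract:

```python
import numpy as np

def total_variation(img):
    """Anisotropic discrete total variation of a grayscale image."""
    img = img.astype(np.float64)
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def tv_structure_loss(reference, distorted):
    """Relative change in edge (structural) energy between the two images."""
    tv_ref = total_variation(reference)
    return abs(tv_ref - total_variation(distorted)) / (tv_ref + 1e-12)
```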
26

Patel, Manish, Mit M. Patel, and Robert T. Cristel. "Quality and Reliability of YouTube for Patient Information on Neurotoxins." Facial Plastic Surgery 36, no. 06 (December 2020): 773–77. http://dx.doi.org/10.1055/s-0040-1719100.

Abstract:
YouTube is a common source of medical information for patients. This is the first study to assess the reliability and educational value of YouTube videos on neurotoxin procedures. YouTube.com was searched on June 15, 2020 using the keyword “Botox” or “neurotoxin.” A total of 100 videos were reviewed. Sixty-one videos met the inclusion criteria and were included in the final analysis. Video characteristics were noted, and a score was assigned to each video using the Journal of the American Medical Association (JAMA) benchmark criteria and the Global Quality Score (GQS) to measure source reliability and educational value, respectively. The 61 videos that met the inclusion criteria had an average length of 589 seconds (9 minutes and 49 seconds), 210,673 views, 5,295 likes, 318 dislikes, and 478 comments. A total of 30 videos (49%) were posted with the intention to educate patients, while 31 videos (51%) were posted with the intention to detail a personal experience with neurotoxin. Patient education videos were significantly more reliable (P_JAMA < 0.001) and had more educational value (P_GQS < 0.001) but were less popular than personal-experience videos. Personal-experience videos posted by patients had higher popularity and more likes and comments, yet lower scores on reliability and educational value. Patients will continue to seek educational material online, and clinicians should use this information to help educate patients with standardized and accurate information about their treatment.
27

Lin, Liqun, Jing Yang, Zheng Wang, Liping Zhou, Weiling Chen, and Yiwen Xu. "Compressed Video Quality Index Based on Saliency-Aware Artifact Detection." Sensors 21, no. 19 (September 26, 2021): 6429. http://dx.doi.org/10.3390/s21196429.

Abstract:
Video coding technology reduces the storage and transmission bandwidth required by video services by lowering the bitrate of the video stream. However, the compressed video signals may involve perceivable information loss, especially when the video is overcompressed. In such cases, viewers can observe visually annoying artifacts, namely Perceivable Encoding Artifacts (PEAs), which degrade their perceived video quality. To monitor and measure these PEAs (including blurring, blocking, ringing and color bleeding), we propose an objective video quality metric named Saliency-Aware Artifact Measurement (SAAM) that uses no reference information. The SAAM metric first applies video saliency detection to extract regions of interest and further splits these regions into a finite number of image patches. For each image patch, a data-driven model is used to estimate the intensities of the PEAs. Finally, these intensities are fused into an overall metric using Support Vector Regression (SVR). In the experiments, we compared the SAAM metric with other popular video quality metrics on four publicly available databases: LIVE, CSIQ, IVP and FERIT-RTRK. The results reveal the promising quality prediction performance of the SAAM metric, which is superior to most of the popular compressed video quality evaluation models.
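The final fusion step, regressing the per-artifact intensities onto a single quality score with Support Vector Regression, follows the usual scikit-learn pattern; the feature values and scores below are made up for illustration and do not reproduce the trained SAAM model:

```python
import numpy as np
from sklearn.svm import SVR

# One row per training video: [blurring, blocking, ringing, color_bleeding] intensities
X_train = np.array([[0.1, 0.2, 0.05, 0.00],
                    [0.6, 0.4, 0.30, 0.20],
                    [0.3, 0.1, 0.10, 0.10]])
y_train = np.array([4.2, 1.8, 3.1])  # corresponding subjective quality scores

model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_train, y_train)
score = model.predict([[0.2, 0.2, 0.10, 0.05]])  # predicted overall quality
```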
28

Kowalczyk, Paweł, Jacek Izydorczyk, and Marcin Szelest. "Evaluation Methodology for Object Detection and Tracking in Bounding Box Based Perception Modules." Electronics 11, no. 8 (April 8, 2022): 1182. http://dx.doi.org/10.3390/electronics11081182.

Abstract:
The aim of this work is to formulate a new metric to be used in the automotive industry for evaluating software that detects vehicles in video data. To achieve this goal, we have formulated a new concept for measuring the degree of matching between rectangles for industrial use. We propose a new measure based on three sub-measures focused on the area of the rectangle, its shape, and its distance. These sub-measures are merged into a General similarity measure to avoid the poor adaptability of the Jaccard index to practical recognition issues. Additionally, we create a method for calculating detection quality over a sequence of video frames that summarizes the local quality and adds information about possible late detection. Experiments with real and artificial data have confirmed that we have created flexible tools that can reduce the time needed to evaluate detection software efficiently, and that provide more detailed information about the quality of detection than the Jaccard index. Their use can significantly speed up data analysis and capture the weaknesses and limitations of the detection system under consideration. Our detection quality assessment method can be of interest to all engineers involved in machine recognition of video data.
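For context, the Jaccard index (intersection over union) whose practical limitations the General similarity measure is meant to address is computed for two axis-aligned boxes as follows; this is the standard baseline, not the authors' new metric:

```python
def jaccard_index(box_a, box_b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```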
29

Jiang, Dan, Xin Chen Zhang, and Mei Yan Liang. "The Research on Mobile Video Quality Analysis and Evaluation System for TD Network." Applied Mechanics and Materials 333-335 (July 2013): 799–802. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.799.

Abstract:
Mobile web video applications have developed rapidly, and evaluating video service quality and analyzing network performance have become urgent needs. In this paper, we propose a new framework and method to assess the subjective quality of mobile web video based on a no-reference (NR) model. By examining the structure of the TD mobile network and the web video application, the paper builds a testing process for the mobile video-on-demand service and a fault-location method based on the network structure and the video quality evaluation results. The actual test results from a TD network show that the proposed approach can measure web video quality and find the bottleneck location in the transmission network. The evaluation of video-on-demand quality is in accordance with the subjective impression of the user. The benchmark comparison testing method proved practical and feasible in real tests, thus providing readily available tools for video service quality assessment.
30

Ben Amor, M., M. C. Larabi, F. Kammoun, and N. Masmoudi. "A no reference quality metric to measure the blocking artefacts for video sequences." Imaging Science Journal 64, no. 7 (October 2, 2016): 408–17. http://dx.doi.org/10.1080/13682199.2016.1236066.

31

Jofri, Muhamad Hanif, Ida Aryanie Bahrudin, Noor Zuraidin Mohd Safar, Juliana Mohamed, and Abdul Halim Omar. "User Quality of Experience (QoE) Satisfaction for Video Content Selection (VCS) Framework in Smartphone Devices." Baghdad Science Journal 18, no. 4(Suppl.) (December 20, 2021): 1387. http://dx.doi.org/10.21123/bsj.2021.18.4(suppl.).1387.

Abstract:
Video streaming is widely available nowadays. Moreover, since the pandemic hit across the globe, many people have stayed home and used streaming services for news, education, and entertainment. However, during a streaming session, users' Quality of Experience (QoE) is often unsatisfactory because of the video content selection on smartphone devices, and users are irritated by unpredictable video quality formats displayed on their devices. In this paper, we propose a framework video selection scheme that aims to increase users' QoE satisfaction. We use a video content selection algorithm to map the video selection that best satisfies the user with respect to streaming quality. Video Content Selection (VCS) is classified into groups of video attributes. The level of VCS streaming gradually decreases down to the lowest video selection that users will still accept, depending on video quality. To evaluate satisfaction, we used the Mean Opinion Score (MOS) to measure user acceptance of video streaming quality. The final results show that, by altering the video attributes, the proposed algorithm provides a video selection that satisfies the user.
32

Chang, Wen-Chung. "Automated quality inspection of camera zooming with real-time vision." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 232, no. 12 (January 17, 2017): 2236–41. http://dx.doi.org/10.1177/0954405416683973.

Abstract:
Industrial automated production technologies have been the research focus of many recent studies, comprising the two research streams of automated assembly and automated product testing. Camera lens-shake detection is an effective way to measure the quality of video cameras during zooming. Conventional testing methods involve time-consuming manual operation procedures. This study proposes a novel automated camera lens-shake detection method, in which real-time visual tracking of two arbitrary features is used to measure and analyze camera zooming quality. The camera lens-shake detection approach can be used to screen out video cameras for the purpose of quality control. It can be effectively employed to replace conventional testing methods and enhance efficiency and stability of product manufacturing.
33

Ramadhona, Della Vya, M. Rusni Eka Putra, and Muhammad Suhdy. "PENGEMBANGAN MEDIA VIDEO PADA MATERI TEKNIK DASAR BOLA BASKET SISWA SMK YAYASAN BUDI UTOMO LUBUKLINGGAU." Jurnal Perspektif Pendidikan 16, no. 1 (June 9, 2022): 103–11. http://dx.doi.org/10.31540/jpp.v16i1.1623.

Abstract:
This study aims to develop instructional video media on the basic techniques of basketball and to determine the quality of the video in terms of validity and practicality. This is development research based on a modified ADDIE model carried out in three of the four stages: Analysis, Design, and Development. The instruments used to measure the quality of the learning videos include validation sheets and practicality questionnaires. The product of this research is a video on the basic techniques of basketball for class X students of the SMK Budi Utomo Foundation. The results showed that: (1) in terms of validity, the LKS design fell into the very valid category at 81.3%; (2) in terms of practicality, the LKS was rated Very Practical in the one-to-one test and Very Practical in the small-group test with a percentage of 95.4%.
34

Hadi, Wildan Jameel, Suhad Malallah Kadhem, and Ayad Rodhan Abbas. "Fast discrimination of fake video manipulation." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 3 (June 1, 2022): 2582. http://dx.doi.org/10.11591/ijece.v12i3.pp2582-2587.

Abstract:
Deepfakes have become possible using artificial intelligence techniques, replacing one person’s face with another person’s face (primarily a public figure), making the latter do or say things he would not have done. Therefore, contributing to a solution for video credibility has become a critical goal that we will address in this paper. Our work exploits the visible artifacts (blur inconsistencies) which are generated by the manipulation process. We analyze focus quality and its ability to detect these artifacts. Focus measure operators in this paper include image Laplacian and image gradient groups, which are very fast to compute and do not need a large dataset for training. The results showed that i) the Laplacian group operators, as a value, may be lower or higher in the fake video than its value in the real video, depending on the quality of the fake video, so we cannot use them for deepfake detection and ii) the gradient-based measure (GRA7) decreases its value in the fake video in all cases, whether the fake video is of high or low quality and can help detect deepfake.
35

Suriyawansa, Kushnara, Nuwan Kodagoda, Lochandaka Ranathunga, and Nor Aniza Binti Abdullah. "An Approach to Measure the Pedagogy in Slides with Voice-Over Type Instructional Videos." Electronic Journal of e-Learning 20, no. 4 (November 15, 2022): 483–97. http://dx.doi.org/10.34190/ejel.20.4.2314.

Abstract:
: E-learning provides a more suitable learning environment for educators and learners. Hence, the e-learning community has rapidly increased over the years. At present, e-learning environments use several materials to transfer knowledge to learners. Out of various types of materials, instructional videos have become the prominent source of information in popular e-learning environments. Instructional videos in e-learning can be categorised into many types based on the content presentation method. Voice-over slides are popular instructional videos at present. Since the instructional materials are a key component of an e-learning environment, maintaining the required quality of these materials is vital. Using proper pedagogy in instructional materials mainly enhances the quality. There are many pedagogical principles proposed over the years, and these principles address different features of all instructional materials. Therefore, it is necessary to derive the applicable pedagogical principles based on the features available in the instructional video to analyse the pedagogy in an instructional video. The investigation related to this paper aims to derive an approach to analysing and quantifying the pedagogy in slides with voice-over type instructional videos. In this study, a thorough literature review first identified the pedagogical principles applicable to slides with voice-over type instructional videos. Next, the prominence of the identified pedagogical principles was derived from a Likert-scale survey conducted by several Information Technology undergraduates. The survey response assigned a rank to each identified pedagogical principle. The rank was derived by the mean score obtained for each pedagogical principle. Next, one-way ANOVA tests were performed for the mean score of pedagogical principles to derive their prominence levels. The ANOVA tests were conducted by adding adjacent pedagogical principles with a non-identical rank until the ANOVA results claimed a significant difference in variances. A new level was defined when the ANOVA results showed a significant difference. This study proposed a pedagogical score calculation formula using these levels (derived from the statistical analysis) to derive a qualitative measure of the use of pedagogy in slides with voice-over type instructional videos. The pedagogical evaluators and content developers can use the proposed scoring method to quantify and compare the pedagogy in slides with voice-over instructional videos.
36

Nuutinen, Mikko, Toni Virtanen, and Jukka Häkkinen. "Performance measure of image and video quality assessment algorithms: subjective root-mean-square error." Journal of Electronic Imaging 25, no. 2 (March 28, 2016): 023012. http://dx.doi.org/10.1117/1.jei.25.2.023012.

37

Pakpahan, Nurmi F. D. B. "QUICKLY UNDERSTANDING VIDEO LEARNING BASE ON ENVIRONMENTAL SCIENCE TO IMPROVE ABILITY OF SENSITIVITY TO THE SITUATION OF CIVIL ENGINEERING STUDENTS." International Journal of Innovative Research in Advanced Engineering 8, no. 12 (December 31, 2021): 359–64. http://dx.doi.org/10.26562/ijirae.2021.v0812.006.

Abstract:
It takes the material in the form of global environmental issues such as human relationships with the environment, ecological principles, natural resources, environmental pollution, environmental ethics, and environmental impact analysis (AMDAL) (environmental impact assessment). This research aims to develop videos for the learning of civil engineering students to increase sensitivity to the environment. This research is development research with 4D model design, which is carried out up to the 3rd stage of the four existing steps: define, design, develop, and disseminate. The research object is a video-based learning medium in environmental science courses, and the research subjects are three validators and 30 students who program Environmental Science courses. Research instruments consist of 1). Validation sheet to measure the feasibility of video-based media containing 15 statement items with preferred Likert scale scores of 1 to 5 and score ranges of 1-75; 2). Test of learning results to measure the feasibility of video-based media after learning containing 20 questions with a score of 0 or 1 and a score range of 0-100. The results showed: first, video-based learning media used in learning activities in Environmental Science courses are categorized as very feasible. The assessment results of the three validators showed that the average value of video-based learning media eligibility of 4.73 with very proper wording. Details of aspects of the assessment include the presentation of material with an average score of 4.78, narrative quality of 4.67, writing the balance of 4.78, quality of music (back sound) of 4.56, the harmony of the colour of the average score of 4.89. Second, after applying video-based learning media in Environmental Science courses, student learning outcomes show "excellent" criteria. The average score of 30 students is 80.17, indicating that the video-based learning media in Environmental Science courses is feasible.
38

Rajagopalan, Pradeep, and Sanjay Kumar Gengaiyan. "PRIVACY INFORMATION PROTECTION IN AN ENCRYPTED COMPRESSED H.264 VIDEO BITSTREAM." International Journal of Students' Research in Technology & Management 3, no. 4 (September 27, 2015): 343–45. http://dx.doi.org/10.18510/ijsrtm.2015.349.

Abstract:
The paper presents the encryption of compressed video bit streams and the hiding of privacy information to protect videos during transmission or cloud storage. Digital video sometimes needs to be stored and processed in an encrypted format to maintain security and privacy. Here, data hiding directly in the encrypted version of an H.264/AVC video stream is approached, which includes the following three parts. By analyzing the properties of the H.264/AVC codec, the code words of intra prediction modes, the code words of motion vector differences, and the code words of residual coefficients are encrypted with stream ciphers. Then, a data hider may embed additional data in the encrypted domain by using a wrapping technique, without knowing the original video content. The results show that the methods used provide better performance in terms of computational efficiency, data security, and video quality after decryption. Parameters such as RMSE, PSNR, and CC are evaluated to measure its efficiency.
39

Kim, Hyun-Wook, and Sung-Hyun Yang. "Region of interest–based segmented tiled adaptive streaming using head-mounted display tracking sensing data." International Journal of Distributed Sensor Networks 15, no. 12 (December 2019): 155014771989453. http://dx.doi.org/10.1177/1550147719894533.

Abstract:
To support 360 virtual reality video streaming services, high resolutions of over 8K and network streaming technology that guarantees consistent quality of service are required. To this end, we propose 360 virtual reality video player technology and a streaming protocol based on MPEG Dynamic Adaptive Streaming over HTTP Spatial Representation Description to support the player. The player renders the downsized video as the base layer, which has a quarter of the resolution of the original video, and high-quality video tiles consisting of tiles obtained from the tiled-encoded high-quality video (over 16K resolution) as the enhanced layer. Furthermore, we implemented the system and conducted experiments to measure the network bandwidth for 16K video streaming and switching latency arising from changes in the viewport. From the results, we confirmed that the player has a switching latency of less than 1000 ms and a maximum network download bandwidth requirement of 100 Mbps.
40

Angela, Angela, Fandi Halim, and Chatrine Sylvia. "Pengukuran Pengalaman Pengguna Aplikasi Platform Pembelajaran dan Konferensi Video Menggunakan Framework UEQ+." JURNAL MEDIA INFORMATIKA BUDIDARMA 6, no. 2 (April 25, 2022): 1238. http://dx.doi.org/10.30865/mib.v6i2.3878.

Abstract:
This study aims to measure and evaluate the user experience of the Microsoft Teams application as a learning and video conferencing platform using the UEQ+ framework, an extension of the UEQ method. With the UEQ+ framework, questionnaires can be designed by customizing the user experience variables to the application being measured, so the research results are expected to be more accurate and relevant. The user experience scales measured in this questionnaire include efficiency, perspicuity, dependability, trust, usefulness, intuitive use, trustworthiness of content, quality of content, and clarity. After the questionnaires were distributed, 149 responses were obtained and processed using the Data Analysis Tools provided by UEQ+. In conclusion, respondents have a positive impression of Microsoft Teams as a video conferencing application and learning platform. The most important scales representing the quality of Microsoft Teams are usefulness, clarity, trustworthiness of content, and quality of content.
41

White, Michael D., Kristy Latour, Martina Giordano, Tavis Taylor, and Nitin Agarwal. "Reliability and quality of online patient education videos for lateral lumbar interbody fusion." Journal of Neurosurgery: Spine 33, no. 5 (November 2020): 652–57. http://dx.doi.org/10.3171/2020.4.spine191539.

Abstract:
OBJECTIVE There is an increasing trend among patients and their families to seek medical knowledge on the internet. Patients undergoing surgical interventions, including lateral lumbar interbody fusion (LLIF), often rely on online videos as a first source of knowledge to familiarize themselves with the procedure. In this study the authors sought to investigate the reliability and quality of LLIF-related online videos. METHODS In December 2018, the authors searched the YouTube platform using 3 search terms: lateral lumbar interbody fusion, LLIF surgery, and LLIF. The relevance-based ranking search option was used, and results from the first 3 pages were investigated. Only videos from universities, hospitals, and academic associations were included for final evaluation. By means of the DISCERN instrument, a validated measure of reliability and quality for online patient education resources, 3 authors of the present study independently evaluated the quality of information. RESULTS In total, 296 videos were identified by using the 3 search terms. Ten videos met inclusion criteria and were further evaluated. The average (± SD) DISCERN video quality assessment score for these 10 videos was 3.42 ± 0.16. Two videos (20%) had an average score above 4, corresponding to a high-quality source of information. Of the remaining 8 videos, 6 (60%) scored moderately, in the range of 3–4, indicating that the publication is reliable but important information is missing. The final 2 videos (20%) had a low average score (2 or below), indicating that they are unlikely to be of any benefit and should not be used. Videos with intraoperative clips were significantly more popular, as indicated by the numbers of likes and views (p = 0.01). There was no correlation between video popularity and DISCERN score (p = 0.104). In August 2019, the total number of views for the 10 videos in the final analysis was 537,785. CONCLUSIONS The findings of this study demonstrate that patients who seek to access information about LLIF by using the YouTube platform will be presented with an overall moderate quality of educational content on this procedure. Moreover, compared with videos that provide patient information on treatments used in other medical fields, videos providing information on LLIF surgery are still exiguous. In view of the increasing trend to seek medical knowledge on the YouTube platform, and in order to support and optimize patient education on LLIF surgery, the authors encourage academic neurosurgery institutions in the United States and worldwide to implement the release of reliable video educational content.
42

Abdul Rahman, Farah Diyana, Dimitris Agrafiotis, and Ahmad Imran Ibrahim. "Edge Dissimilarity Reduced-Reference Quality Metric with Low Overhead Bitrate." Indonesian Journal of Electrical Engineering and Computer Science 10, no. 2 (May 1, 2018): 631. http://dx.doi.org/10.11591/ijeecs.v10.i2.pp631-640.

Abstract:
In multimedia transmission, it is important to rely on an objective quality metric which accurately represents the subjective quality of processed images and video sequences. Reduced-reference metrics make use of side information that is transmitted to the receiver for estimating the quality of the received sequence with low complexity. In this paper, an Edge-based Dissimilarity Reduced-Reference video quality metric with a low overhead bitrate is proposed. The metric is computed by finding the dissimilarity between the edge information of the original and distorted sequences. Edge degradation can be detected in this manner, as perceived video quality is highly associated with edge structure. Due to the high overhead of using the Soergel distance, it is pertinent to find a way to reduce the overhead while maintaining the edge information that conveys the quality measure of the sequences. The effects of different edge detection operators, video resolutions and file compressors are investigated. The aim of this paper is to significantly reduce the bitrate required to transmit the side-information overhead of the reduced-reference video quality metric. The results show that the side information extracted using the Sobel edge detector remained consistent under spatial and temporal down-sampling.
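The Soergel distance mentioned as the source of the overhead is usually defined, for non-negative feature vectors $x$ and $y$ (here, edge-map descriptors), as

$$d_{\mathrm{Soergel}}(x,y)=\frac{\sum_{i}\lvert x_i-y_i\rvert}{\sum_{i}\max(x_i,y_i)},$$

which requires the full descriptor at the receiver and thus motivates reducing the transmitted side information.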
43

Zeng, Li, and Keke Guo. "Virtual Reality Software and Data Processing Algorithms Packaged Online for Videos." Mobile Information Systems 2022 (July 4, 2022): 1–6. http://dx.doi.org/10.1155/2022/2148742.

Abstract:
To address virtual reality and data processing for online video packaging, a transmission scheme uses HEVC tiles to partition the video and then applies MP4Box to package it and generate a DASH video stream. A method is proposed to process the same panoramic video at different quality levels. By designing a new index to measure the complexity of a coding tree unit, the method predicts the coding tree unit depth from the complexity index and the spatial correlation of the video, skipping unnecessary traversal ranges and achieving fast partitioning of coding units. Experimental results show that, compared with the latest HM16.20 reference model, the proposed algorithm reduces coding time by 37.25% while the BD-rate increases by only 0.74%, with almost no loss of video image quality.
44

Garcia-Pineda, Miguel, Jaume Segura-Garcia, and Santiago Felici-Castell. "Estimation techniques to measure subjective quality on live video streaming in Cloud Mobile Media services." Computer Communications 118 (March 2018): 27–39. http://dx.doi.org/10.1016/j.comcom.2017.08.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Salama and Saatchi. "Evaluation of Wirelessly Transmitted Video Quality Using a Modular Fuzzy Logic System." Technologies 7, no. 3 (September 14, 2019): 67. http://dx.doi.org/10.3390/technologies7030067.

Full text
Abstract:
Video transmission over wireless computer networks is increasingly popular as new applications emerge and wireless networks become more widespread and reliable. The ability to quantify the quality of a video transmitted over a wireless computer network is important for determining and improving network performance. The process requires analysing the images making up the video in terms of noise and associated distortion, as well as traffic parameters represented by packet delay, jitter, and loss. In this study a modular fuzzy-logic-based system was developed to quantify the quality of video transmission over a wireless computer network. Peak signal-to-noise ratio, structural similarity index, and image difference were used to represent the user's quality of experience (QoE), while packet delay, jitter, and percentage packet loss ratio were used to represent traffic-related quality of service (QoS). An overall measure of the video quality was obtained by combining QoE and QoS values. Systematic sampling was used to reduce the number of images processed, and a novel scheme was devised whereby the images were partitioned to localize distortions more sensitively. To further validate the developed system, a subjective test was conducted in which 25 participants graded the quality of the received video. The image partitioning significantly improved the video quality evaluation. The subjective test results correlated with the developed fuzzy logic approach. The video quality assessment developed in this study was compared against a method that uses spatial efficient entropic differencing, and consistent results were observed. The study indicated that the developed fuzzy logic approaches could accurately determine the quality of a wirelessly transmitted video.
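A minimal sketch of a fuzzy combination step of the kind described, assuming normalized QoE and QoS inputs, triangular membership functions, and a three-rule base with centroid defuzzification; the study's actual membership functions and rule base are not reproduced.

```python
# Hedged sketch: combine a QoE score and a QoS score into one crisp quality grade
# with a tiny Mamdani-style fuzzy system. Breakpoints and rules are placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_grade(qoe, qos):
    """qoe, qos in [0, 1]; returns a crisp quality grade in [0, 1]."""
    # membership of the inputs in {low, high}
    qoe_low, qoe_high = tri(qoe, -0.5, 0.0, 1.0), tri(qoe, 0.0, 1.0, 1.5)
    qos_low, qos_high = tri(qos, -0.5, 0.0, 1.0), tri(qos, 0.0, 1.0, 1.5)
    # rule base (min acts as AND): both high -> good, mixed -> fair, both low -> poor
    good = min(qoe_high, qos_high)
    fair = max(min(qoe_high, qos_low), min(qoe_low, qos_high))
    poor = min(qoe_low, qos_low)
    # centroid defuzzification over singleton outputs at 1.0 / 0.5 / 0.0
    weights = np.array([good, fair, poor])
    centers = np.array([1.0, 0.5, 0.0])
    return float((weights * centers).sum() / (weights.sum() + 1e-9))

print(fuzzy_grade(qoe=0.8, qos=0.6))
```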
APA, Harvard, Vancouver, ISO, and other styles
46

Al Jameel, Mohammed, Triantafyllos Kanakis, Scott Turner, Ali Al-Sherbaz, and Wesam S. Bhaya. "A Reinforcement Learning-Based Routing for Real-Time Multimedia Traffic Transmission over Software-Defined Networking." Electronics 11, no. 15 (August 5, 2022): 2441. http://dx.doi.org/10.3390/electronics11152441.

Full text
Abstract:
Recently, consumption of video streaming services has grown massively and is foreseen to increase even further. This tremendous traffic has negatively impacted both the network's quality of service, owing to congestion, and end-to-end customer satisfaction as represented by the quality of experience, especially during evening peak hours. This paper introduces an intelligent multimedia framework that aims to optimise the network's quality of service and users' quality of experience through the integration of Software-Defined Networking and Reinforcement Learning, which enables exploring, learning, and exploiting potential paths for video streaming flows. Moreover, an objective study was conducted to assess video streaming in various realistic network environments under low and high traffic loads, using two quality of experience metrics: video multimethod assessment fusion and the structural similarity index measure. The experimental results validate the effectiveness of the proposed strategy, which demonstrated better viewing quality by achieving higher customer quality of experience, higher throughput, and lower data loss compared with existing solutions.
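A toy sketch of the learning component described here: tabular Q-learning that explores and exploits next hops on a small graph for a streaming flow. The topology, reward shaping, and hyperparameters are illustrative assumptions, not the paper's SDN-integrated implementation.

```python
# Hedged sketch: epsilon-greedy Q-learning of next hops from source "A" to
# destination "D" over a tiny graph with per-link rewards (higher = better link).
import random

links = {
    "A": {"B": 0.6, "C": 0.9},
    "B": {"D": 0.7},
    "C": {"D": 0.8},
    "D": {},                      # destination has no outgoing links
}
ALPHA, GAMMA, EPS, EPISODES = 0.5, 0.9, 0.2, 500
Q = {u: {v: 0.0 for v in nbrs} for u, nbrs in links.items()}

for _ in range(EPISODES):
    node = "A"
    while node != "D":
        nbrs = links[node]
        # explore with probability EPS, otherwise exploit the best-known next hop
        nxt = random.choice(list(nbrs)) if random.random() < EPS else max(Q[node], key=Q[node].get)
        reward = nbrs[nxt] + (1.0 if nxt == "D" else 0.0)   # bonus for reaching the destination
        future = max(Q[nxt].values()) if Q[nxt] else 0.0
        Q[node][nxt] += ALPHA * (reward + GAMMA * future - Q[node][nxt])
        node = nxt

# greedy path after learning
path, node = ["A"], "A"
while node != "D":
    node = max(Q[node], key=Q[node].get)
    path.append(node)
print(path)
```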
APA, Harvard, Vancouver, ISO, and other styles
47

Sader, Nicholas, Abhaya V. Kulkarni, Matthew E. Eagles, Salim Ahmed, Jenna E. Koschnitzky, and Jay Riva-Cambrin. "The quality of YouTube videos on endoscopic third ventriculostomy and endoscopic third ventriculostomy with choroid plexus cauterization procedures available to families of patients with pediatric hydrocephalus." Journal of Neurosurgery: Pediatrics 25, no. 6 (June 2020): 607–14. http://dx.doi.org/10.3171/2019.12.peds19523.

Full text
Abstract:
OBJECTIVE: YouTube has become an important information source for pediatric neurosurgical patients and their families. The goal of this study was to determine whether the informative quality of videos of endoscopic third ventriculostomy (ETV) and endoscopic third ventriculostomy with choroid plexus cauterization (ETV + CPC) is associated with metrics of popularity.
METHODS: This cross-sectional study used comprehensive search terms to identify videos pertaining to ETV and ETV + CPC presented on the first 3 pages of search results on YouTube. Two pediatric neurosurgeons, 1 neurosurgery resident, and 2 patient families independently reviewed the selected videos. Videos were assessed for overall informational quality by using a validated 5-point Global Quality Score (GQS) and compared to online metrics of popularity and engagement such as views, likes, likes/views ratio, comments/views ratio, and likes/dislikes ratio. Weighted kappa scores were used to measure agreement between video reviewers.
RESULTS: A total of 58 videos (47 on ETV, 7 on ETV + CPC, 4 on both) of 120 videos assessed met the inclusion criteria. Video styles included "technical" (62%), "lecture" (24%), "patient testimonial" (4%), and "other" (10%). In terms of GQS, substantial agreement was seen between surgeons (kappa 0.67 [95% CI 0.55, 0.80]) and excellent agreement was found between each surgeon and the neurosurgical resident (0.77 [95% CI 0.66, 0.88] and 0.89 [95% CI 0.82, 0.97]). Only fair to moderate agreement was seen between professionals and patient families, with weighted kappa scores ranging from 0.07 to 0.56. Academic lectures were more likely to be rated good or excellent (64% vs 0%, p < 0.001) than surgical procedure and testimonial video types. There were significant associations between a better GQS and more likes (p = 0.01), views (p = 0.02), and the likes/dislikes ratio (p = 0.016). The likes/views ratio (p = 0.31) and comments/views ratio (p = 0.35) were not associated with GQS. The number of likes (p = 0.02), views (p = 0.03), and the likes/dislikes ratio (p = 0.015) were significantly associated with video style (highest for lecture-style videos).
CONCLUSIONS: Medical professionals tended to agree when assessing the overall quality of YouTube videos, but this agreement was not as strongly seen when compared to parental ratings. The online metrics of likes, views, and likes/dislikes ratio appear to predict quality. Neurosurgeons seeking to increase their online footprint via YouTube would be well advised to focus more on the academic lecture style because these were universally better rated.
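For illustration, a weighted kappa between two raters' 5-point GQS ratings can be computed as in the brief sketch below; the ratings are fabricated placeholders and the linear weighting is an assumption, not necessarily the scheme used in the study.

```python
# Hedged sketch: linearly weighted Cohen's kappa between two raters' GQS ratings.
from sklearn.metrics import cohen_kappa_score

surgeon_1 = [5, 4, 3, 3, 2, 4, 5, 1, 3, 4]   # hypothetical 5-point GQS ratings
surgeon_2 = [5, 4, 3, 2, 2, 4, 4, 1, 3, 5]

kappa = cohen_kappa_score(surgeon_1, surgeon_2, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```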
APA, Harvard, Vancouver, ISO, and other styles
48

Gunanandhini, S., M. Kalamani, M. Bhagavathipriya, and S. Guru prasath. "Wavelet based Video Compression techniques for Industrial monitoring applications." Journal of Physics: Conference Series 2272, no. 1 (July 1, 2022): 012019. http://dx.doi.org/10.1088/1742-6596/2272/1/012019.

Full text
Abstract:
This paper focuses on industrial machine monitoring and vehicle camera transmission systems. Video compression is widely used in many industrial applications, such as continuous machine monitoring, which consumes considerable storage because every detected motion of the machine is captured; video coding is therefore highly recommended so that the video can be compressed without losing the actual content. The wavelet transform delivers superior localization in both the frequency and time domains, and the results show better performance than the discrete cosine transform. A comparative analysis of video compression using the Haar and orthogonal (Daubechies) wavelets has been carried out. Redundant discrete wavelet transform coefficients are reduced using a quantization technique. The aim is to attain minimum error while keeping the peak signal-to-noise ratio high and the image quality within an acceptable range. Using PSNR as the measure of quality, this paper shows that the Daubechies wavelet provides better video quality than the Haar wavelet. Performance is evaluated in terms of compression ratio, PSNR, MSE, and SSIM.
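A small sketch of the comparison described, assuming a one-level 2-D DWT with PyWavelets, uniform quantization of the subbands, and PSNR/SSIM computed with scikit-image; the wavelet choices, quantization step, and test frame are illustrative, not the paper's exact pipeline.

```python
# Hedged sketch: Haar vs. Daubechies ('db4') DWT compression of a frame, with
# uniform coefficient quantization as the lossy step and PSNR/SSIM as quality measures.
import numpy as np
import pywt
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compress_reconstruct(frame, wavelet, q_step=8.0):
    cA, (cH, cV, cD) = pywt.dwt2(frame, wavelet)
    quant = [np.round(c / q_step) * q_step for c in (cA, cH, cV, cD)]  # lossy quantization
    return pywt.idwt2((quant[0], tuple(quant[1:])), wavelet)

frame = np.random.rand(256, 256) * 255.0          # stand-in for a video frame
for wavelet in ("haar", "db4"):
    rec = compress_reconstruct(frame, wavelet)[:256, :256]
    psnr = peak_signal_noise_ratio(frame, rec, data_range=255)
    ssim = structural_similarity(frame, rec, data_range=255)
    print(f"{wavelet}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```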
APA, Harvard, Vancouver, ISO, and other styles
49

Shin, Yongje, Hyunseok Choi, Youngju Nam, and Euisin Lee. "Video Packet Distribution Scheme for Multimedia Streaming Services in VANETs." Sensors 21, no. 21 (November 5, 2021): 7368. http://dx.doi.org/10.3390/s21217368.

Full text
Abstract:
By leveraging the development of mobile communication technologies and the increased capabilities of mobile devices, mobile multimedia services have gained prominence for supporting high-quality video streaming services. In vehicular ad-hoc networks (VANETs), high-quality video streaming services are focused on providing safety and infotainment applications to vehicles on the roads. Video streaming data require elastic and continuous video packet distribution to vehicles to present interactive real-time views of meaningful scenarios on the road. However, the high mobility of vehicles is one of the fundamental challenges for video streaming services in VANETs. Nevertheless, previous studies neither dealt with suitable data caching for supporting the mobility of vehicles nor provided appropriate seamless packet forwarding for ensuring the quality of service (QoS) and quality of experience (QoE) of real-time video streaming services. To address this problem, this paper proposes a video packet distribution scheme named Clone, which integrates vehicle-to-vehicle and vehicle-to-infrastructure communications to disseminate video packets for video streaming services in VANETs. First, an indicator called current network quality information (CNQI) is defined to measure each node's data-forwarding capability toward its neighbor nodes in terms of data delivery ratio and delay. Based on the CNQI value of each node and the trajectory of the destination vehicle, access points called clones are selected to cache video data packets from data sources. Subsequently, packet distribution optimization is conducted to determine the number of video packets to cache in each clone. Finally, data delivery synchronization is established to support seamless streaming data delivery from a clone to the destination vehicle. The experimental results show that the proposed scheme achieves high-quality video streaming services in terms of QoS and QoE compared with existing schemes.
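A hedged sketch of one possible form of the CNQI indicator and of clone selection, combining delivery ratio and normalized delay with assumed weights; the paper's actual formulation, packet distribution optimization, and synchronization steps are not reproduced.

```python
# Hedged sketch: score candidate roadside nodes by a simple CNQI-like indicator
# (reward delivery ratio, penalize delay) and pick the top nodes as "clones".
candidates = [
    # node id, packet delivery ratio (0..1), mean forwarding delay in ms (hypothetical)
    ("RSU-1", 0.95, 40.0),
    ("RSU-2", 0.80, 25.0),
    ("RSU-3", 0.90, 60.0),
]

def cnqi(delivery_ratio, delay_ms, w_ratio=0.6, w_delay=0.4, max_delay=100.0):
    """Higher is better: weighted delivery ratio plus weighted (inverted) normalized delay."""
    return w_ratio * delivery_ratio + w_delay * (1.0 - min(delay_ms / max_delay, 1.0))

scored = sorted(candidates, key=lambda n: cnqi(n[1], n[2]), reverse=True)
clones = [node for node, *_ in scored[:2]]      # cache video packets at the top-2 nodes
print(clones)
```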
APA, Harvard, Vancouver, ISO, and other styles
50

Adenowo, A. A., and L. F. Oderinu. "PERFORMANCE ANALYSIS OF ENHANCED VIDEO ENCRYPTION ALGORITHM." Engineering and Technology Research Journal 5, no. 2 (September 20, 2020): 67–75. http://dx.doi.org/10.47545/etrj.2020.5.2.066.

Full text
Abstract:
The prevalence of the internet, as well as of low-cost mobile computing devices, makes video the preferred option for information archival and transmission. The generation and use of digital videos is growing geometrically, at a scale that is difficult to quantify. This growth has brought security and privacy issues such as unauthorized access, piracy, hacking, and other digital attacks every year. Measures restricting unauthorized access can be adopted to protect multimedia information but do not guarantee the physical security of the information. Cryptography, a stronger and more secure approach, is therefore required. Thus, this paper presents a performance evaluation of the Enhanced Novel Selective Video Encryption Algorithm, an AES-based video encryption algorithm. Properties of the I-frame are used to randomly generate the encryption key, and a jump factor determines which of the remaining P- and B-frames are selected for encryption. The results show that the more frames are selected for encryption/decryption, the higher the encryption/decryption time and the larger the encrypted video size. The PSNR value of this algorithm is around 40 dB, which indicates that the quality of the decrypted video is as high as that of the original video. The performance of this algorithm does not diminish with increasing video size or number of frames. Hence, the algorithm is scalable, fast, and highly secure for video encryption.
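A hedged sketch of selective frame encryption in this spirit, assuming a SHA-256 key derived from I-frame bytes, AES-CTR from pycryptodome, and a simple jump-factor rule for choosing frames; this is not the exact algorithm evaluated in the paper.

```python
# Hedged sketch: encrypt the I-frame and every jump_factor-th following frame with AES-CTR,
# deriving the key from the I-frame content. Key derivation and jump rule are assumptions.
from hashlib import sha256
from Crypto.Cipher import AES

def derive_key(i_frame_bytes):
    """256-bit key from I-frame content (assumption: plain SHA-256 digest)."""
    return sha256(i_frame_bytes).digest()

def encrypt_selected(frames, jump_factor=3):
    """Encrypt frame 0 (the I-frame) and every jump_factor-th later frame; leave the rest clear."""
    key = derive_key(frames[0])
    out = []
    for idx, frame in enumerate(frames):
        if idx % jump_factor == 0:
            cipher = AES.new(key, AES.MODE_CTR, nonce=idx.to_bytes(8, "big"))
            out.append(cipher.encrypt(frame))
        else:
            out.append(frame)               # left unencrypted (selective encryption)
    return out

frames = [bytes([i]) * 1024 for i in range(10)]   # stand-in frame payloads
encrypted = encrypt_selected(frames)
print(sum(f != e for f, e in zip(frames, encrypted)), "frames encrypted")
```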
APA, Harvard, Vancouver, ISO, and other styles