A selection of scholarly literature on the topic "Encoded video stream"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Encoded video stream."

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Encoded video stream"

1

Al-Tamimi, Abdel-Karim, Raj Jain, and Chakchai So-In. "High-Definition Video Streams Analysis, Modeling, and Prediction." Advances in Multimedia 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/539396.

Full text of the source
Abstract:
High-definition video streams' unique statistical characteristics and their high bandwidth requirements are considered a challenge in both network scheduling and resource allocation. In this paper, we introduce an innovative way to model and predict high-definition (HD) video traces encoded with the H.264/AVC standard. Our results are based on our compilation of over 50 HD video traces. We show that our model, simplified seasonal ARIMA (SAM), provides an accurate representation of HD videos and significant improvements in prediction accuracy. Such accuracy is vital for better dynamic resource allocation for video traffic. In addition, we provide a statistical analysis of HD videos, including both factor and cluster analysis, to support a better understanding of video stream workload characteristics and their impact on network traffic. We discuss our methodology to collect and encode our collection of HD video traces. Our video collection, results, and tools are available to the research community.
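The modeling idea can be sketched in a few lines. This is a toy illustration only, not the authors' SAM model: SAM is a simplified seasonal ARIMA whose season matches the GOP length, while the sketch below combines a seasonal-naive predictor with an AR(1) correction on synthetic data. GOP length, frame sizes, and noise parameters are all invented.

```python
import numpy as np

# Synthetic frame-size trace: a periodic GOP pattern (one large I-frame,
# smaller P/B frames) plus autocorrelated size fluctuations.
rng = np.random.default_rng(0)
gop, n = 12, 240
pattern = np.array([40_000.0] + [8_000.0] * (gop - 1))

noise = np.empty(n)
noise[0] = rng.normal(0.0, 500.0)
for t in range(1, n):
    noise[t] = 0.8 * noise[t - 1] + rng.normal(0.0, 300.0)
trace = np.tile(pattern, n // gop) + noise

# Seasonal-naive prediction: next frame size = size one GOP earlier.
pred = trace[:-gop]
err = trace[gop:] - pred

# AR(1) correction on the seasonal residual.
phi = np.corrcoef(err[:-1], err[1:])[0, 1]
corrected = pred[1:] + phi * err[:-1]

rmse_naive = np.sqrt(np.mean(err[1:] ** 2))
rmse_sam = np.sqrt(np.mean((trace[gop + 1:] - corrected) ** 2))
```

On this synthetic trace the seasonal-plus-AR correction reduces the prediction RMSE relative to the seasonal-naive baseline, which is the kind of accuracy gain the paper reports for its SAM model.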
2

Reljin, Irini, and Branimir Reljin. "Fractal and multifractal analyses of compressed video sequences." Facta universitatis - series: Electronics and Energetics 16, no. 3 (2003): 401–14. http://dx.doi.org/10.2298/fuee0303401r.

Abstract:
The paper considers compressed video streams from the fractal and multifractal (MF) points of view. Video traces in H.263 and MPEG-4 formats, generated at the Technical University Berlin and publicly available, were investigated. It was shown that all compressed videos exhibit a fractal (long-range dependent) nature and that higher compression ratios provoke greater variability of the encoded video stream. This conclusion is confirmed by the MF spectra of the frame-size video traces. By analyzing individual frames and their MF spectra, the additive nature is likewise confirmed.
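A quick way to see what "long-range dependence" means operationally is the aggregated-variance test: for short-range-dependent data, the variance of block means decays like 1/m with block size m (slope -1 on a log-log plot), while long-range-dependent traces decay more slowly. The sketch below is a generic diagnostic, not the multifractal analysis used in the paper, and is demonstrated on i.i.d. data where the slope should be close to -1.

```python
import numpy as np

def aggregated_variance_slope(x, scales=(1, 2, 4, 8, 16, 32)):
    """Slope of log Var(block means) vs log m; ~ -1 for short-range
    dependence, noticeably above -1 for long-range-dependent traces."""
    logs_m, logs_v = [], []
    for m in scales:
        n = len(x) // m
        means = x[: n * m].reshape(n, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    return np.polyfit(logs_m, logs_v, 1)[0]

rng = np.random.default_rng(1)
iid = rng.normal(size=65536)          # no memory: slope should be near -1
slope = aggregated_variance_slope(iid)
```

Applied to a compressed-video frame-size trace of the kind the paper studies, the slope would sit visibly above -1, signaling the fractal behaviour the authors report.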
3

Grois, Dan, Evgeny Kaminsky, and Ofer Hadar. "Efficient Real-Time Video-in-Video Insertion into a Pre-Encoded Video Stream." ISRN Signal Processing 2011 (February 14, 2011): 1–11. http://dx.doi.org/10.5402/2011/975462.

Abstract:
This work concerns the development and implementation of an efficient method and system for fast real-time Video-in-Video (ViV) insertion, enabling a video sequence to be inserted efficiently into a predefined location within a pre-encoded video stream. The proposed method and system are based on dividing the video insertion process into two steps. The first step (the Video-in-Video Constrained Format (ViVCF) encoder) modifies the conventional H.264/AVC video encoder to support the visual content insertion Constrained Format (CF), including the generation of isolated regions without using Flexible Macroblock Ordering (FMO) slicing, and to support fast real-time insertion of overlays. Although the first step is computationally intensive, it needs to be performed only once, even if different overlays have to be inserted (e.g., for different users). The second step, which performs the ViV insertion (the ViVCF inserter), is relatively simple (operating mostly in the bit domain) and is carried out separately for each overlay. The performance of the presented method and system is demonstrated and compared with the H.264/AVC reference software (JM 12); according to our experimental results, the bit-rate overhead is significantly low, while there is substantially no degradation in PSNR quality.
4

Stankowski, Jakub, Damian Karwowski, Tomasz Grajek, Krzysztof Wegner, Jakub Siast, Krzysztof Klimaszewski, Olgierd Stankiewicz, and Marek Domański. "Analysis of Compressed Data Stream Content in HEVC Video Encoder." International Journal of Electronics and Telecommunications 61, no. 2 (June 1, 2015): 121–27. http://dx.doi.org/10.1515/eletel-2015-0015.

Abstract:
In this paper, a detailed analysis of the content of the bitstream produced by the HEVC video encoder is presented. With the use of the HM 10.0 reference software, the following statistics were investigated: 1) the amount of data in the encoded stream related to individual frame types, 2) the relationship between the value of the QP and the size of the bitstream at the output of the encoder, and 3) the contribution of individual types of data to I and B frames. The above-mentioned aspects have been thoroughly explored for a wide range of target bitrates. The obtained results became the basis for guidelines that allow for efficient bitrate control in the HEVC encoder.
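The QP-to-bitrate relationship the paper measures has a well-known back-of-the-envelope form: in HEVC, as in H.264/AVC, the quantization step size doubles every 6 QP units, approximately Qstep = 2^((QP - 4) / 6), so raising QP by 6 roughly halves the encoder's output bitrate. The formula below is that standard approximation, not a figure taken from the paper.

```python
def qstep(qp: int) -> float:
    """Approximate HEVC/H.264 quantization step size for a given QP.
    Doubles every 6 QP units; equals 1.0 at QP = 4."""
    return 2 ** ((qp - 4) / 6)

ratio = qstep(28) / qstep(22)   # +6 QP: quantization step doubles,
                                # bitrate roughly halves
```

This exponential relationship is why rate-control guidelines of the kind the paper derives are usually expressed per QP unit rather than per bit.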
5

Politis, Ilias, Michail Tsagkaropoulos, Thomas Pliakas, and Tasos Dagiuklas. "Distortion Optimized Packet Scheduling and Prioritization of Multiple Video Streams over 802.11e Networks." Advances in Multimedia 2007 (2007): 1–11. http://dx.doi.org/10.1155/2007/76846.

Abstract:
This paper presents a generic framework for minimizing the video distortion of multiple video streams transmitted over 802.11e wireless networks, including intelligent packet scheduling and channel access differentiation mechanisms. A distortion prediction model designed to capture the multireferenced frame coding of H.264/AVC encoded videos is used to predetermine the distortion importance of each video packet in all streams. Two intelligent scheduling algorithms are proposed: "even-loss distribution," where each video sender experiences the same loss, and "greedy-loss distribution" packet scheduling, where selected packets are dropped across all streams, ensuring that the most significant video stream in terms of picture context and quality characteristics experiences minimum losses. The proposed model has been verified with actual distortion measurements and found more accurate than the "additive distortion" model, which omits the correlation among lost frames. The paper includes analytical and simulation results from the comparison of both schemes, and from their comparison to the simplified additive model, for different video sequences and channel conditions.
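The "greedy-loss distribution" policy lends itself to a compact sketch: when k packets must be dropped across all streams, drop the k packets whose loss is predicted to add the least distortion, so the most important stream is protected. The packet list and distortion values below are invented for illustration; the paper derives per-packet distortion from an H.264/AVC-aware prediction model.

```python
def greedy_drop(packets, k):
    """packets: list of (stream_id, distortion_if_lost).
    Drop the k least-harmful packets; return the survivors in order."""
    order = sorted(range(len(packets)), key=lambda i: packets[i][1])
    dropped = set(order[:k])
    return [p for i, p in enumerate(packets) if i not in dropped]

# Stream "A" carries the most distortion-sensitive packets, so under
# greedy-loss scheduling it loses nothing here.
pkts = [("A", 9.0), ("A", 7.5), ("B", 1.2), ("B", 0.8), ("C", 3.1)]
kept = greedy_drop(pkts, 2)
```

The contrasting "even-loss" policy would instead spread the two drops across senders regardless of per-packet distortion.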
6

Yang, Fu Zheng, Jia Run Song, and Shu Ai Wan. "A No-Reference Quality Assessment System for Video Streaming over RTP." Advanced Materials Research 179-180 (January 2011): 243–48. http://dx.doi.org/10.4028/www.scientific.net/amr.179-180.243.

Abstract:
In this paper, a no-reference system for quality assessment of video streaming over RTP is proposed for monitoring the quality of networked video. The proposed system is composed of four modules, where the quality assessment module utilizes information extracted from the bit-stream by the modules of RTP header analysis, frame header analysis, and display buffer simulation. Taking an MPEG-4 encoded video stream over RTP as an example, the process of video quality assessment using the proposed system is described. The proposed system is notable for its high efficiency, requiring neither the original video nor video decoding, and is therefore well suited for real-time networked video applications.
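The "RTP header analysis" module relies on fields every RTP packet carries in its 12-byte fixed header (RFC 3550): the 16-bit sequence number reveals packet loss, and the 32-bit timestamp groups packets into frames. The parser below is a generic sketch of that step, not the paper's implementation.

```python
import struct

def parse_rtp(header: bytes):
    """Parse the RTP fixed header (RFC 3550, first 12 bytes)."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", header[:12])
    return {
        "version": b0 >> 6,
        "marker": (b1 >> 7) & 1,       # often set on a frame's last packet
        "payload_type": b1 & 0x7F,
        "seq": seq,                    # gaps here indicate packet loss
        "timestamp": ts,               # same value = same video frame
        "ssrc": ssrc,
    }

# Hypothetical packet: version 2, payload type 96, seq 1000.
pkt = struct.pack("!BBHII", 0x80, 0x60, 1000, 90_000, 0xDEADBEEF)
info = parse_rtp(pkt)
```

A monitoring system of the kind proposed tracks `seq` gaps and `timestamp` changes per SSRC to estimate frame-level loss without ever decoding the video.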
7

Wang, Ke, Xuejing Li, Jianhua Yang, Jun Wu, and Ruifeng Li. "Temporal action detection based on two-stream You Only Look Once network for elderly care service robot." International Journal of Advanced Robotic Systems 18, no. 4 (July 1, 2021): 172988142110383. http://dx.doi.org/10.1177/17298814211038342.

Abstract:
Human action segmentation and recognition from a continuous untrimmed sensor data stream is a challenging issue known as temporal action detection. This article presents a two-stream You Only Look Once (YOLO)-based network method that fuses video and skeleton streams captured by a Kinect sensor; our data encoding method turns spatiotemporal action detection into a one-dimensional object detection problem in a constantly augmented feature space. The proposed approach extracts spatial-temporal three-dimensional convolutional neural network features from the video stream and view-invariant features from the skeleton stream, respectively. These two streams are encoded into three-dimensional feature spaces, which are represented as red, green, and blue images for subsequent network input. The proposed two-stream YOLO-based networks fuse video and skeleton information in the processing pipeline through two fusion strategies, boxes-fusion or layers-fusion. We test the temporal action detection performance of the two-stream YOLO network on our data set, High-Speed Interplanetary Tug/Cocoon Vehicles-v1, which contains seven activities in the home environment, and achieve a particularly high mean average precision. We also test our model on the public data set PKU-MMD, which contains 51 activities, and our method performs well on this data set too. To show that our method can work efficiently on robots, we transplanted it to a robotic platform and ran an online fall-detection experiment.
8

Hamza, Ahmed M., Mohamed Abdelazim, Abdelrahman Abdelazim, and Djamel Ait-Boudaoud. "HEVC Rate-Distortion Optimization with Source Modeling." Electronic Imaging 2021, no. 10 (January 18, 2021): 259–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.10.ipas-259.

Abstract:
The rate-distortion adaptive mechanisms of MPEG-HEVC (High Efficiency Video Coding) and its derivatives are an incremental improvement in the software reference encoder, providing a selective Lagrangian parameter choice which varies by encoding mode (intra or inter) and picture reference level. Since this weighting factor (and the balanced cost functions it impacts) is crucial to the RD optimization process, affecting several encoder decisions and both the coding efficiency and quality of the encoded stream, we investigate an improvement based on modern reinforcement learning methods. We develop a neural-based agent that learns a real-valued control policy to maximize rate savings by input signal pattern, mapping pixel intensity values from the picture at the coding tree unit level to the appropriate weighting parameter. Our testing on the reference software yields coding efficiency improvements across different video sequences, in multiple classes of video.
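The mechanism being tuned is the Lagrangian mode decision: for each block, the encoder picks the mode minimizing J = D + λ·R. The λ formula below, λ ≈ 0.85 · 2^((QP - 12)/3), is the commonly cited base weighting of the HM reference encoder, not a value from the paper; the mode candidates are invented numbers for illustration. The paper's contribution is replacing the fixed λ choice with a learned, content-dependent one.

```python
def hevc_lambda(qp: int) -> float:
    """Commonly cited HM reference-encoder base Lagrangian weighting."""
    return 0.85 * 2 ** ((qp - 12) / 3.0)

def best_mode(candidates, lam):
    """candidates: list of (name, distortion, rate_bits).
    Pick the mode with the lowest Lagrangian cost J = D + lam * R."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

# Hypothetical per-mode (distortion, rate) trade-offs for one block.
modes = [("intra", 120.0, 96), ("inter", 180.0, 24), ("skip", 2000.0, 2)]
choice = best_mode(modes, hevc_lambda(27))
```

At low QP (small λ) the low-distortion intra mode would win; at this QP the cheaper inter mode does, which is exactly the balance the weighting factor controls.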
9

Mohammed, Dhrgham Hani, and Laith Ali Abdul-Rahaim. "A Proposed of Multimedia Compression System Using Three - Dimensional Transformation." Webology 18, SI05 (October 30, 2021): 816–31. http://dx.doi.org/10.14704/web/v18si05/web18264.

Abstract:
Video compression has become especially important nowadays with the increase of data transmitted over transmission channels; the size of the videos must be reduced without affecting their quality. This is done by cutting the video stream into frames of specific lengths and converting them into a three-dimensional matrix. The proposed compression scheme uses the traditional red-green-blue color space representation and applies a three-dimensional discrete Fourier transform (3D-DFT) or three-dimensional discrete wavelet transform (3D-DWT) to the signal matrix after the video stream has been converted into three-dimensional matrices. The resulting transform coefficients are encoded using the EZW encoder algorithm. The performance of the proposed video compression system is tested against three main criteria: compression ratio (CR), peak signal-to-noise ratio (PSNR), and processing time (PT). Experiments showed high compression efficiency for videos using the proposed technique at the required bit rate. The 3D discrete wavelet transform offers a high frame rate with natural spatial resolution and scalability in visual and spatial resolution, besides quality and other advantages over current conventional systems in complexity, power consumption, throughput, latency, and storage requirements. All proposed systems were implemented using MATLAB R2020b.
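The transform stage described above can be sketched compactly: stack frames into a 3-D array, apply a 3-D DFT, keep only the largest coefficients, and invert. The EZW coefficient coding and the 3D-DWT variant are omitted here, and the 5% retention ratio is an arbitrary choice for illustration, not a figure from the paper.

```python
import numpy as np

# Tiny synthetic "video": 8 frames of 16x16 pixels with a moving
# pattern plus a little sensor noise.
f = np.arange(8)[:, None, None]          # frame (time) axis
y = np.arange(16)[None, :, None]
x = np.arange(16)[None, None, :]
rng = np.random.default_rng(2)
video = (np.sin(2 * np.pi * (x / 16 + f / 8))
         + 0.5 * np.cos(2 * np.pi * y / 16)
         + 0.01 * rng.normal(size=(8, 16, 16)))

# 3-D DFT, keep the top 5% of coefficients by magnitude, invert.
coeffs = np.fft.fftn(video)
keep = int(0.05 * coeffs.size)
thresh = np.sort(np.abs(coeffs).ravel())[-keep]
recon = np.fft.ifftn(np.where(np.abs(coeffs) >= thresh, coeffs, 0)).real
mse = np.mean((video - recon) ** 2)
```

Because smooth motion concentrates energy in few 3-D frequency coefficients, discarding 95% of them changes the reconstruction very little; that energy compaction is what the EZW stage then exploits for bitstream coding.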
10

Yamagiwa, Shinichi, and Yuma Ichinomiya. "Stream-Based Visually Lossless Data Compression Applying Variable Bit-Length ADPCM Encoding." Sensors 21, no. 13 (July 5, 2021): 4602. http://dx.doi.org/10.3390/s21134602.

Abstract:
Video applications have become one of the major services in the engineering field; they are implemented by server-client systems connected via the Internet, broadcasting services for mobile devices such as smartphones, and surveillance cameras for security. The majority of current video encoding mechanisms that reduce the data rate are lossy compression methods, such as the MPEG formats. However, when we consider special needs for high-speed communication, such as display applications and high-accuracy object detection from the video stream, we need an encoding mechanism without any loss of pixel information, called visually lossless compression. This paper focuses on Adaptive Differential Pulse Code Modulation (ADPCM), which encodes a data stream into a constant bit length per data element. However, conventional ADPCM has no mechanism to control the encoding bit length dynamically. We propose a novel ADPCM that provides a variable bit-length control mechanism, called ADPCM-VBL, for encoding and decoding. Furthermore, since we expect the data encoded by ADPCM to maintain low entropy, we reduce the amount of data by applying lossless data compression. Combining ADPCM-VBL with lossless compression, this paper proposes a video transfer system that autonomously controls throughput in the communication data path. Through evaluations focusing on encoding performance and image quality, we confirm that the proposed mechanisms work effectively for applications that need visually lossless compression, encoding the video stream at low latency.
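A minimal ADPCM-style codec illustrates the baseline the paper starts from: each sample is coded as a small fixed-bit-length index of the quantized prediction error, and the step size adapts to the signal so that encoder and decoder stay in lock-step. The paper's actual contribution, varying the bit length itself (ADPCM-VBL), is deliberately omitted; everything below is a generic sketch with invented parameters.

```python
def adpcm_encode(samples, bits=4):
    step, pred, out = 4.0, 0.0, []
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    for s in samples:
        code = max(lo, min(hi, round((s - pred) / step)))
        out.append(code)
        pred += code * step                 # mirror the decoder's state
        step = max(1.0, step * (1.3 if abs(code) > hi // 2 else 0.9))
    return out

def adpcm_decode(codes, bits=4):
    step, pred, out = 4.0, 0.0, []
    hi = (1 << (bits - 1)) - 1
    for code in codes:
        pred += code * step
        out.append(pred)
        step = max(1.0, step * (1.3 if abs(code) > hi // 2 else 0.9))
    return out

signal = [10.0, 14.0, 20.0, 23.0, 22.0, 18.0, 12.0, 9.0]
decoded = adpcm_decode(adpcm_encode(signal))
```

Because the step-size update depends only on the transmitted codes, the decoder reproduces the encoder's predictor exactly, and the reconstruction error stays bounded by the quantization step.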

Dissertations and theses on the topic "Encoded video stream"

1

Allouche, Mohamed. "Video tracking for marketing applications." Electronic thesis or dissertation, Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS033.

Abstract:
The last decades have seen video production and consumption rise significantly: TV/cinematography, social networking, digital marketing, and video surveillance incrementally and cumulatively turned video content into the predilection type of data to be exchanged, stored, and processed. It is thus commonly considered that 80% of Internet traffic is video, and intensive and holistic efforts to devise lossy video compression solutions are carried out to reach the trade-off between video data size and visual quality. Under this framework, marketing videos are still dominated by paid content (that is, content created by an advertiser that pays an announcer to distribute that content). Yet, organic video content is slowly but surely advancing. In a nutshell, the term organic content refers to content whose creation and/or distribution is not paid. In most cases, it is user-created content with implicit advertising value, or advertising content distributed by a user on a social network. In practice, such content is directly produced by user devices in compressed format (e.g., AVC - Advanced Video Coding, HEVC - High Efficiency Video Coding, or VVC - Versatile Video Coding) and is often shared by other users, on the same or on different social networks, thus creating a virtual distribution chain that is studied by marketing experts. Such an application can be modeled by at least two different scientific, methodological, and technical frameworks, namely blockchain and video fingerprinting. On the one hand, should we first consider the distribution issues, blockchain seems an appealing solution, as it makes provisions for a secure, decentralized, and transparent way to track changes of any digital asset.
While blockchain has already proved its effectiveness in a large variety of content distribution applications, its multimedia-related applications remain scarce and raise conceptual contradictions between the strictly limited computing/storage resources available in blockchain and the large amount of data representing video content, as well as the complex operations video processing requires. On the other hand, should we first consider the multimedia content issues, each step of distribution can be considered a near-duplication operation. Thus, the tracking of organic video can be ensured by video fingerprinting, which regroups research efforts devoted to identifying duplicated and/or replicated versions of a given video sequence in a reference video dataset. While tracking video content in the uncompressed domain is a rich research field, compressed-domain video fingerprinting is still underexplored. The present thesis studies the possibility of tracking advertising compressed video content, in the context of its uncontrolled, spontaneous propagation in a distributed network:
• video tracking by means of blockchain-based solutions, despite the large amount of data and the computational requirements of video applications, a priori incompatible with today's blockchain solutions;
• effective compressed-domain video fingerprinting, even though video compression is supposed to remove the very visual redundancy that allows video content to be retrieved;
• applicative synergies between the blockchain and fingerprinting frameworks.
The main results consist in the conception, specification, and implementation of:
• COLLATE - an on-Chain Off-chain Load baLancing ArchiTecturE, making it possible for the intimately constrained computing, storage, and software resources of any blockchain to be abstractly extended by general-purpose computing resources;
• COMMON - Compressed dOMain Marketing videO fiNgerprinting, demonstrating the possibility of modeling compressed-domain video fingerprints within a deep learning framework;
• BIDDING - BlockchaIn-baseD viDeo fINgerprintinG, an end-to-end processing pipeline coupling compressed-domain video fingerprinting to the blockchain load-balancing solution.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
2

Hu, Wan-Hsun, and 胡萬勳. "A Modified H.264 Encoder to Stream Video for Narrowband Networks." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/46330012096890207087.

Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 97 (ROC calendar)
Public and home safety are important in human life. For the sake of security, a surveillance system needs as many monitors as possible to watch specific regions. However, networks usually do not have sufficient bandwidth to return all video streams, especially narrowband networks. To solve this issue, we propose a modified H.264 encoder, which uses as little bandwidth as possible to transfer multiple video streams. The key concept is to compress all input video streams into one. Because we only need to know what happens in the specific regions, it is not imperative to store high-quality video; the video should be small rather than precise. We shrink four observed videos to one-quarter size and combine them into one, as long as the quality remains distinct and smooth enough. This way, the needed bandwidth is reduced while the quality remains acceptable. Still, each video has a different frame rate, so it is impossible to get samples from all regions at every sampling instant. We modify the original encoder to improve encoding time when some regions provide no sample. Finally, we encode the combined image using the modified H.264 encoder. For evaluation, we compare the bandwidth needed by our system's output with that of a traditional IP camera that performs no scaling and combining on the input files, and the encoding time of the modified encoder with that of the original encoder. As confirmed by performance evaluations, the proposed modified H.264 encoder achieves, at limited hardware cost, excellent performance in terms of transmission bandwidth and encoding time.
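The pre-encoding step the thesis describes, shrinking four camera frames to quarter size and tiling them into a single frame so that one H.264 stream carries all four views, can be sketched as below. Simple 2x2 pixel averaging stands in for the downscaler; the thesis does not specify this particular filter, and the frame contents are synthetic.

```python
import numpy as np

def shrink_half(frame):
    """Downscale a grayscale frame to half width and half height
    by 2x2 block averaging (quarter of the original area)."""
    h, w = frame.shape
    return (frame[: h // 2 * 2, : w // 2 * 2]
            .reshape(h // 2, 2, w // 2, 2)
            .mean(axis=(1, 3)))

def mosaic(f1, f2, f3, f4):
    """Tile four quarter-size views into one full-size frame."""
    top = np.hstack([shrink_half(f1), shrink_half(f2)])
    bottom = np.hstack([shrink_half(f3), shrink_half(f4)])
    return np.vstack([top, bottom])

# Four synthetic 480x640 camera views with distinct gray levels.
frames = [np.full((480, 640), v, dtype=float) for v in (10, 20, 30, 40)]
combined = mosaic(*frames)       # same 480x640 size as a single input
```

The combined frame then goes through a single (modified) H.264 encode, which is where the bandwidth saving over four independent streams comes from.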

Book chapters on the topic "Encoded video stream"

1

Zhu, Kanghua, Yongfang Wang, Jian Wu, Yun Zhu, and Wei Zhang. "Content Oriented Video Quality Prediction for HEVC Encoded Stream." In Communications in Computer and Information Science, 338–48. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4211-9_33.

2

Salama, Paul, Ness B. Shroff, and Edward J. Delp. "Error Concealment in Encoded Video Streams." In Signal Recovery Techniques for Image and Video Compression and Transmission, 199–233. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4757-6514-4_7.

3

Yu, Tong, and Nicolas Padoy. "Encode the Unseen: Predictive Video Hashing for Scalable Mid-stream Retrieval." In Computer Vision – ACCV 2020, 427–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69541-5_26.

4

Abbate, Maurizio, Ciro D'Elia, and Paola Mariano. "A Low Complexity Motion Segmentation Based on Semantic Representation of Encoded Video Streams." In Image Analysis and Processing – ICIAP 2011, 209–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24088-1_22.

5

Lal, Chhagan, Vijay Laxmi, and M. S. Gaur. "A Rate Adaptation Scheme to Support QoS for H.264/SVC Encoded Video Streams over MANETs." In Advanced Communication and Networking, 86–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23312-8_11.

6

"Encoded Output Delivered as a Bit Stream." In A Practical Guide to Video and Audio Compression, 263–76. Routledge, 2005. http://dx.doi.org/10.4324/9780080488066-19.

7

Cycon, H. "Mobile Serverless Video Communication." In Encyclopedia of Mobile Computing and Commerce, 589–95. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-002-8.ch098.

Abstract:
The main new feature, scalability, addresses schemes for delivery of video to diverse clients over heterogeneous networks, particularly in scenarios where the downstream conditions are not known in advance. The basic idea is that one encoded stream can serve networks with varying bandwidths or clients with different display resolutions or systems with different storage resources, which is an obvious advantage in heterogeneous networks prevalent in mobile applications.
8

Lawrence, Linju, and R. Shreelekshmi. "Chained Digital Signature for the Improved Video Integrity Verification." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2021. http://dx.doi.org/10.3233/faia210284.

Abstract:
The recorded videos from surveillance cameras can be used as potential evidence in forensic applications. These videos can easily be manipulated or tampered with using video editing tools without leaving visible clues; hence, integrity verification is essential before using the videos as evidence. Existing methods mostly depend on analysis of the video data stream and the video container for tampering detection. This paper discusses an active video integrity verification method using Elliptic Curve Cryptography and blockchain. The method uses the Elliptic Curve Digital Signature Algorithm to calculate digital signatures for the video content and the previous block. The digital signature of the encoded video segment (video content of a predetermined size) and that of the previous block are kept in each block to form an unbreakable chain. Our method does not consider any coding or compression artifacts of the video file, can be used on any video type, and has been tested on publicly available standard videos of varying sizes and types. The proposed integrity verification scheme has better detection capabilities for different types of alterations, such as insertion, copy-paste, and deletion, and can detect any type of forgery. The method is faster and more resistant to brute-force and collision attacks than existing recent blockchain-based methods.
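The chaining idea can be sketched in a few lines: each block stores a digest computed over its video segment together with the previous block's digest, so altering any segment breaks every later link. The paper signs these digests with ECDSA; the sketch below substitutes plain SHA-256 hashing to stay dependency-free, and the segment bytes are placeholders.

```python
import hashlib

def build_chain(segments):
    """One digest per video segment, each bound to its predecessor."""
    chain, prev = [], b"\x00" * 32          # genesis link
    for seg in segments:
        digest = hashlib.sha256(prev + seg).digest()
        chain.append(digest)
        prev = digest
    return chain

def verify(segments, chain):
    """Recompute the chain and compare; any edit breaks the match."""
    return chain == build_chain(segments)

segs = [b"seg-0 video bytes", b"seg-1 video bytes", b"seg-2 video bytes"]
chain = build_chain(segs)
ok = verify(segs, chain)                          # untouched: passes
tampered = verify([segs[0], b"edited!", segs[2]], chain)  # fails
```

With real ECDSA signatures instead of bare hashes, a forger would additionally need the signer's private key, not just the ability to recompute digests.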
9

Fleury, Martin, and Laith Al-Jobouri. "Techniques and Tools for Adaptive Video Streaming." In Intelligent Multimedia Technologies for Networking Applications, 65–101. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2833-5.ch004.

Abstract:
Adaptive video streaming is becoming increasingly necessary as quality expectations rise, while congestion persists and the extension of the Internet to mobile access creates new sources of packet loss. This chapter considers several techniques for adaptive video streaming, including live HTTP streaming, bitrate transcoding, scalable video coding, and rate controllers. It also includes additional case studies of congestion control over the wired Internet using fuzzy logic, statistical multiplexing to adapt constant bitrate streams to the bandwidth capacity, and adaptive error correction for the mobile Internet. To guide the reader, the chapter makes a number of comparisons between the main techniques, for example explaining why pre-encoded video may currently be better streamed using adaptive simulcast than by transcoding or scalable video coding.
10

Koumaras, Harilaos, Charalampos Skianis, and Anastasios Kourtis. "Analysis and Modeling of H.264 Unconstrained VBR Video Traffic." In Innovations in Mobile Multimedia Communications and Applications, 227–43. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-60960-563-6.ch016.

Abstract:
In future communication networks, video is expected to represent a large portion of the total traffic, given that variable bit rate (VBR) coded video streams in particular are becoming increasingly popular. Consequently, traffic modeling and characterization of such video services is essential for efficient traffic control and resource management. Besides providing insight into video coding mechanisms, traffic models can be used as a tool for the allocation of network resources, the design of efficient networks for streaming services, and the assurance of specific QoS characteristics to end users. The H.264/AVC standard, proposed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), is expected to dominate upcoming multimedia services, since it outperforms earlier coding standards in many respects. This article presents both a frame-level and a layer-level (i.e., I, P, and B frames) analysis of H.264 encoded sources. Analysis of the data suggests that the video traffic can be considered a stationary stochastic process with an autocorrelation function of exponentially fast decay and a marginal frame-size distribution of approximately Gamma form. Finally, based on the statistical analysis, an efficient model of H.264 video traffic is proposed.
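The two statistical properties named in the abstract, a skewed Gamma-like marginal and an exponentially decaying autocorrelation, can be reproduced together by an AR(1) recursion driven by positive Gamma innovations. This is a generic traffic-model sketch; the parameters are illustrative and not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(3)
phi, n = 0.9, 100_000
e = rng.gamma(shape=0.5, scale=1.0, size=n)   # positive, skewed innovations
x = np.empty(n)
x[0] = e[0] / (1 - phi)                       # start near the process mean
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]              # AR(1): acf(k) = phi**k

def acf(series, lag):
    d = series - series.mean()
    return float((d[:-lag] * d[lag:]).mean() / (d * d).mean())

r1, r2 = acf(x, 1), acf(x, 2)                 # expect ~phi and ~phi**2
```

The measured autocorrelation at lags 1 and 2 matches the geometric decay phi^k, i.e., an exponentially fast decay of the kind the chapter reports for H.264 frame-size traces, and the samples stay positive and right-skewed like frame sizes.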
APA, Harvard, Vancouver, ISO und andere Zitierweisen
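The Gamma-shaped marginal distribution mentioned in the abstract is typically fitted by the method of moments. The sketch below is illustrative only (it fits synthetic frame sizes, not the paper's traces), with arbitrary shape/scale values assumed for the demonstration.

```python
# Sketch of a method-of-moments fit for a Gamma frame-size model.
# The synthetic trace stands in for real H.264 frame sizes.
import random

def fit_gamma_moments(sizes):
    """Estimate Gamma(shape k, scale theta) from sample mean and variance:
    k = mean^2 / var, theta = var / mean."""
    n = len(sizes)
    mean = sum(sizes) / n
    var = sum((s - mean) ** 2 for s in sizes) / n
    k = mean ** 2 / var   # shape parameter
    theta = var / mean    # scale parameter
    return k, theta

random.seed(7)
# synthetic "frame sizes" in bytes, drawn from Gamma(4.0, 2500.0)
frames = [random.gammavariate(4.0, 2500.0) for _ in range(20000)]
k, theta = fit_gamma_moments(frames)
# with 20000 samples the estimates land close to the true (4.0, 2500.0)
```

A full traffic model would additionally impose the exponentially decaying autocorrelation on the generated sequence, e.g. via an autoregressive driving process.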

Conference papers on the topic "Encoded video stream"

1

Grzelka, Adam, Adrian Dziembowski, Dawid Mieloch and Marek Domański. „The Study of the Video Encoder Efficiency in Decoder-side Depth Estimation Applications“. In WSCG'2022 - 30th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2022. Západočeská univerzita, 2022. http://dx.doi.org/10.24132/csrn.3201.31.

Full text of the source
Annotation:
The paper presents a study of the impact of lossy compression on depth estimation and virtual view quality. Two scenarios were considered: the approach based on the coder-agnostic ISO/IEC 23090-12 MPEG Immersive Video standard, and a more general approach based on simulcast video coding. Commonly used compression techniques were tested: VVC (MPEG-I Part 3 / H.266), HEVC (MPEG-H Part 2 / H.265), AVC (MPEG-4 Part 10 / H.264), MPEG-2 (MPEG-2 Part 2 / H.262), AV1 (AOMedia Video 1), and VP9. The quality of virtual views generated from the encoded streams was assessed with the IV-PSNR metric, which is adapted to synthesized images. The results are presented as a relationship between virtual view quality and the quality of the decoded real views. The main conclusion from the experiments is that encoding quality and virtual view quality are encoder-dependent; therefore, the video encoder should be chosen carefully to achieve the best quality in decoder-side depth estimation.
APA, Harvard, Vancouver, ISO and other citation styles
2

Kaminsky, Evgeny, Dan Grois and Ofer Hadar. „Efficient real-time Video-in-Video insertion into a pre-encoded video stream for the H.264/AVC“. In 2010 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2010. http://dx.doi.org/10.1109/ist.2010.5548511.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
3

Meddeb, Marwa, Marco Cagnazzo and Beatrice Pesquet-Popescu. „ROI-based rate control using tiles for an HEVC encoded video stream over a lossy network“. In 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/icip.2015.7351028.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
4

Mu, Mu, Roswitha Gostner, Andreas Mauthe, Gareth Tyson and Francisco Garcia. „Visibility of individual packet loss on H.264 encoded video stream: a user study on the impact of packet loss on perceived video quality“. In IS&T/SPIE Electronic Imaging, edited by Reza Rejaie and Ketan D. Mayer-Patel. SPIE, 2009. http://dx.doi.org/10.1117/12.815538.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
5

Razavi, R., M. Fleury and M. Ghanbari. „Unequal protection of encoded video streams in Bluetooth EDR“. In Packet Video 2007. IEEE, 2007. http://dx.doi.org/10.1109/packet.2007.4397049.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
6

Antsiferova, Anastasia, Alexander Yakovenko, Nickolay Safonov, Dmitriy Kulikov, Alexander Gushin and Dmitriy Vatolin. „Applying Objective Quality Metrics to Video-Codec Comparisons: Choosing the Best Metric for Subjective Quality Estimation“. In 31st International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-199-210.

Full text of the source
Annotation:
Quality assessment is essential for creating and comparing video-compression algorithms. Despite the development of many new quality-assessment methods, well-known and generally accepted codec comparisons mainly employ classical metrics such as PSNR, SSIM, and VMAF. These metrics come in different variations: temporal pooling techniques, color-component summations, and versions. In this paper, we present comparison results for generally accepted video-quality metrics to determine which ones are most relevant to video-codec comparisons. For the evaluation, we used videos compressed by codecs of different standards at three bitrates, and subjective scores were collected for these videos. The evaluation dataset consists of 789 encoded streams and 320,294 subjective scores. VMAF calculated over all Y, U, and V color components showed the best correlation with subjective quality, and we also showed that using smaller weighting coefficients for the U and V components leads to a better correlation with subjective quality.
APA, Harvard, Vancouver, ISO and other citation styles
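The color-component weighting the abstract describes amounts to a weighted mean of per-plane scores. The sketch below shows the arithmetic only; the 4:1:1 split and the sample scores are hypothetical, not the weights the authors evaluated.

```python
# Hedged sketch: pooling per-plane quality scores with smaller chroma
# weights, as suggested by the comparison. Weights are illustrative.

def weighted_quality(score_y, score_u, score_v, w_y=4.0, w_u=1.0, w_v=1.0):
    """Weighted mean of Y/U/V quality scores (e.g. per-plane VMAF or PSNR)."""
    total = w_y + w_u + w_v
    return (w_y * score_y + w_u * score_u + w_v * score_v) / total

# luma dominates the pooled score: (4*80 + 92 + 90) / 6
print(round(weighted_quality(80.0, 92.0, 90.0), 2))
```

Shrinking `w_u` and `w_v` pulls the pooled score toward the luma score, which matches the finding that down-weighting chroma correlates better with subjective quality.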
7

Rehbein, Gustavo, Eduardo Costa, Guilherme Corrêa, Cristiano Santos and Marcelo Porto. „A Machine-Learning-Driven Fast Video-based Point Cloud Compression (V-PCC)“. In Proceedings of the Brazilian Symposium on Multimedia and the Web, 20–27. Sociedade Brasileira de Computação - SBC, 2024. http://dx.doi.org/10.5753/webmedia.2024.242069.

Full text of the source
Annotation:
In recent years, 3D point cloud content has gained attention due to its application possibilities, such as multimedia systems; virtual, augmented, and mixed reality through the mapping and visualization of environments and/or 3D objects; real-time immersive communications; and autonomous driving systems. However, raw point clouds demand a large amount of data for their representation, and compression is mandatory to allow efficient transmission and storage. The MPEG group proposed the Video-based Point Cloud Compression (V-PCC) standard, a dynamic point cloud encoder that projects the cloud into 2D space and compresses the projections with conventional video encoders. However, V-PCC has a high computational cost, demanding fast implementations for real-time processing and, especially, for mobile-device applications. In this paper, a machine-learning-based fast implementation of V-PCC is proposed, whose main approach is the use of trained decision trees to speed up the block-partitioning process during point cloud compression. The results show that the proposed fast V-PCC solution achieves an encoding time reduction of 42.73% for the geometry video sub-stream and 55.3% for the attribute video sub-stream, with minimal impact on bitrate and objective quality.
APA, Harvard, Vancouver, ISO and other citation styles
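The idea of using a trained decision tree to prune the block-partitioning search can be sketched as follows. The hand-written two-level tree, feature names, and thresholds below are hypothetical stand-ins for a learned model, not the classifiers trained in the paper.

```python
# Toy stand-in for a trained decision tree that decides whether to skip
# testing smaller partitions for a block. Features and thresholds are
# illustrative, not learned values.

def skip_further_split(variance, gradient_mean):
    """Return True when the (toy) tree predicts the block is smooth
    enough that evaluating smaller partitions is unlikely to pay off."""
    if variance < 50.0:                 # low-texture block
        return gradient_mean < 4.0      # and no strong edges -> skip split
    return False                        # textured block: keep searching

# smooth background block -> deeper partitioning is skipped
print(skip_further_split(variance=12.0, gradient_mean=1.5))  # True
```

Every skipped subtree removes a full rate-distortion evaluation, which is where the reported 42-55% encoding-time reductions come from.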
8

Xin, Jun, Ming-Ting Sun and Kangwook Chun. „Bit-allocation for transcoding pre-encoded video streams“. In Electronic Imaging 2002, edited by C. C. Jay Kuo. SPIE, 2002. http://dx.doi.org/10.1117/12.453054.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
9

Cen, Nan, Zhangyu Guan and Tommaso Melodia. „Joint decoding of independently encoded compressive multi-view video streams“. In 2013 Picture Coding Symposium (PCS). IEEE, 2013. http://dx.doi.org/10.1109/pcs.2013.6737753.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
10

Hong Zhou, Jingli Zhou and Xiaojian Xia. „The motion vector reuse algorithm to improve dual-stream video encoder“. In 2008 9th International Conference on Signal Processing (ICSP 2008). IEEE, 2008. http://dx.doi.org/10.1109/icosp.2008.4697366.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles

Reports of organizations on the topic "Encoded video stream"

1

Chen, Yongzhou, Ammar Tahir and Radhika Mittal. Controlling Congestion via In-Network Content Adaptation. Illinois Center for Transportation, September 2022. http://dx.doi.org/10.36501/0197-9191/22-018.

Full text of the source
Annotation:
Realizing that it is inherently difficult to precisely match the sending rates at the end host with the available capacity on dynamic cellular links, we built a system, Octopus, that sends real-time data streams over cellular networks using an imprecise controller (one that errs on the side of overestimating network capacity) and then drops appropriate packets in the cellular-network buffers to match the actual capacity. We designed parameterized primitives for implementing the packet-dropping logic, which applications at the end host can configure to express various content-adaptation policies. The Octopus transport encodes the app-specified parameters in packet header fields, which routers can parse to execute the desired dropping behavior. Our evaluation shows how real-time applications involving standard and volumetric video can be designed to exploit Octopus for various requirements and achieve performance 1.5 to 18 times better than state-of-the-art schemes.
APA, Harvard, Vancouver, ISO and other citation styles
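The mechanism the abstract describes, app-chosen parameters carried in packet headers and acted on by an in-network queue, can be sketched in a few lines. The one-byte priority field, the wire layout, and the drop-lowest-priority policy below are hypothetical illustrations, not the actual Octopus format or primitives.

```python
# Hedged sketch of in-network content adaptation in the spirit of
# Octopus: the sender tags each packet with an app-chosen drop priority
# in a header field (higher = more important), and an overloaded queue
# evicts the lowest-priority packet first. Layout and policy are
# hypothetical.
import struct

HDR = struct.Struct("!BI")  # 1-byte drop priority, 4-byte sequence number

def make_packet(priority, seq, payload):
    return HDR.pack(priority, seq) + payload

def enqueue(queue, packet, capacity):
    """Append a packet; if the buffer overflows, drop the one whose
    header carries the lowest priority."""
    queue.append(packet)
    if len(queue) > capacity:
        lowest = min(queue, key=lambda p: HDR.unpack(p[:HDR.size])[0])
        queue.remove(lowest)
    return queue

q = []
for prio, seq in [(2, 0), (0, 1), (2, 2), (1, 3)]:
    enqueue(q, make_packet(prio, seq, b"frame"), capacity=3)
# once the 3-slot buffer overflows, the priority-0 packet (seq 1) is evicted
```

The point of the design is that the endhost only sets the per-packet parameters; the drop decision itself happens at the router where the actual capacity is known.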