Scholarly literature on the topic "Semantic video coding"
Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles
Consult the thematic lists of journal articles, books, theses, conference proceedings, and other scholarly sources on the topic "Semantic video coding".
Journal articles on the topic "Semantic video coding"
Essel, Daniel Danso, Ben-Bright Benuwa, and Benjamin Ghansah. "Video Semantic Analysis." International Journal of Computer Vision and Image Processing 11, no. 2 (April 2021): 1–21. http://dx.doi.org/10.4018/ijcvip.2021040101.
Chen, Sovann, Supavadee Aramvith, and Yoshikazu Miyanaga. "Learning-Based Rate Control for High Efficiency Video Coding." Sensors 23, no. 7 (March 30, 2023): 3607. http://dx.doi.org/10.3390/s23073607.
Antoszczyszyn, P. M., J. M. Hannah, and P. M. Grant. "Reliable tracking of facial features in semantic-based video coding." IEE Proceedings - Vision, Image, and Signal Processing 145, no. 4 (1998): 257. http://dx.doi.org/10.1049/ip-vis:19982153.
Nomura, Yoshihiko, Ryutaro Matsuda, Ryota Sakamoto, Tokuhiro Sugiura, Hirokazu Matsui, and Norihiko Kato. "2301 Low Bit-Rate Semantic Coding Technology for Lecture Video." Proceedings of the JSME Annual Meeting 2005.7 (2005): 89–90. http://dx.doi.org/10.1299/jsmemecjo.2005.7.0_89.
Benuwa, Ben-Bright, Yongzhao Zhan, Benjamin Ghansah, Ernest K. Ansah, and Andriana Sarkodie. "Sparsity Based Locality-Sensitive Discriminative Dictionary Learning for Video Semantic Analysis." Mathematical Problems in Engineering 2018 (August 5, 2018): 1–11. http://dx.doi.org/10.1155/2018/9312563.
Pimentel-Niño, M. A., Paresh Saxena, and M. A. Vazquez-Castro. "Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness." Scientific World Journal 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/394956.
Guo, Jia, Xiangyang Gong, Wendong Wang, Xirong Que, and Jingyu Liu. "SASRT: Semantic-Aware Super-Resolution Transmission for Adaptive Video Streaming over Wireless Multimedia Sensor Networks." Sensors 19, no. 14 (July 15, 2019): 3121. http://dx.doi.org/10.3390/s19143121.
Stivaktakis, Radamanthys, Grigorios Tsagkatakis, and Panagiotis Tsakalides. "Semantic Predictive Coding with Arbitrated Generative Adversarial Networks." Machine Learning and Knowledge Extraction 2, no. 3 (August 25, 2020): 307–26. http://dx.doi.org/10.3390/make2030017.
Herranz, Luis. "Integrating semantic analysis and scalable video coding for efficient content-based adaptation." Multimedia Systems 13, no. 2 (June 30, 2007): 103–18. http://dx.doi.org/10.1007/s00530-007-0090-0.
Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing." Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.
Theses on the topic "Semantic video coding"
Al-Qayedi, Ali. "Internet video-conferencing using model-based image coding with agent technology." Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298836.
Mitrica, Iulia. "Video compression of airplane cockpit screens content." Electronic thesis or dissertation, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT042.
Texte intégralThis thesis addresses the problem of encoding the video of airplane cockpits.The cockpit of modern airliners consists in one or more screens displaying the status of the plane instruments (e.g., the plane location as reported by the GPS, the fuel level as read by the sensors in the tanks, etc.,) often superimposed over natural images (e.g., navigation maps, outdoor cameras, etc.).Plane sensors are usually inaccessible due to security reasons, so recording the cockpit is often the only way to log vital plane data in the event of, e.g., an accident.Constraints on the recording storage available on-board require the cockpit video to be coded at low to very low bitrates, whereas safety reasons require the textual information to remain intelligible after decoding. In addition, constraints on the power envelope of avionic devices limit the cockpit recording subsystem complexity.Over the years, a number of schemes for coding images or videos with mixed computer-generated and natural contents have been proposed. Text and other computer generated graphics yield high-frequency components in the transformed domain. Therefore, the loss due to compression may hinder the readability of the video and thus its usefulness. For example, the recently standardized Screen Content Coding (SCC) extension of the H.265/HEVC standard includes tools designed explicitly for screen contents compression. Our experiments show however that artifacts persist at the low bitrates targeted by our application, prompting for schemes where the video is not encoded in the pixel domain.This thesis proposes methods for low complexity screen coding where text and graphical primitives are encoded in terms of their semantics rather than as blocks of pixels.At the encoder side, characters are detected and read using a convolutional neural network.Detected characters are then removed from screen via pixel inpainting, yielding a smoother residual video with fewer high frequencies. 
The residual video is encoded with a standard video codec and is transmitted to the receiver side together with text and graphics semantics as side information.At the decoder side, text and graphics are synthesized using the decoded semantics and superimposed over the residual video, eventually recovering the original frame. Our experiments show that an AVC/H.264 encoder retrofitted with our method has better rate-distortion performance than H.265/HEVC and approaches that of its SCC extension.If the complexity constraints allow inter-frame prediction, we also exploit the fact that co-located characters in neighbor frames are strongly correlated.Namely, the misclassified symbols are recovered using a proposed method based on low-complexity model of transitional probabilities for characters and graphics. Concerning character recognition, the error rate drops up to 18 times in the easiest cases and at least 1.5 times in the most difficult sequences despite complex occlusions.By exploiting temporal redundancy, our scheme further improves in rate-distortion terms and enables quasi-errorless character decoding. Experiments with real cockpit video footage show large rate-distortion gains for the proposed method with respect to video compression standards
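The encode/decode pipeline described in this abstract (detect on-screen characters, inpaint them out, code the smoother residual with a standard codec, and redraw the text at the decoder from the transmitted semantics) can be illustrated with a toy round-trip. All names and data structures below are hypothetical stand-ins for illustration only, not the author's implementation; the CNN detector and pixel inpainting are replaced by trivial placeholders:

```python
from dataclasses import dataclass

@dataclass
class Glyph:
    """Side information for one detected character: symbol and screen position."""
    char: str
    x: int
    y: int

def detect_glyphs(frame):
    # Placeholder for the CNN-based character detector described in the thesis.
    return [Glyph(ch, x=8 * i, y=0) for i, ch in enumerate(frame["text"])]

def inpaint(frame, glyphs):
    # Placeholder for pixel inpainting: remove detected characters,
    # leaving a smoother residual frame with fewer high frequencies.
    residual = dict(frame)
    residual["text"] = ""
    return residual

def encode(frame):
    glyphs = detect_glyphs(frame)
    residual = inpaint(frame, glyphs)
    # In the real scheme, `residual` would go to a standard codec
    # (e.g. AVC/H.264) while glyph semantics travel as side information.
    return residual, glyphs

def decode(residual, glyphs):
    # Resynthesize the text from the decoded semantics and superimpose it
    # over the residual frame, recovering the original content.
    frame = dict(residual)
    frame["text"] = "".join(g.char for g in sorted(glyphs, key=lambda g: g.x))
    return frame

frame = {"pixels": "...", "text": "ALT 35000"}
residual, semantics = encode(frame)
restored = decode(residual, semantics)
```

The key design point the sketch captures is that text is never pushed through the lossy pixel codec: it round-trips losslessly as semantics, so legibility survives even very low residual bitrates.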
Book chapters on the topic "Semantic video coding"
Mezaris, Vasileios, Nikolaos Thomos, Nikolaos V. Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Analysis of Video for Content-Adaptive Coding and Transmission." In Advances in Semantic Media Adaptation and Personalization, 221–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-76361_11.
Lin, Yu-Tzu, and Chia-Hu Chang. "User-aware Video Coding Based on Semantic Video Understanding and Enhancing." In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16498.
Thomas-Kerr, Joseph, Ian Burnett, and Christian Ritz. "What Are You Trying to Say? Format-Independent Semantic-Aware Streaming and Delivery." In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16763.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics." In Multimedia Technologies, 1441–55. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-953-3.ch105.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics." In Digital Multimedia Perception and Design, 1–20. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-860-4.ch001.
Texte intégralActes de conférences sur le sujet "Semantic video coding"
Décombas, M., F. Capman, E. Renan, F. Dufaux et B. Pesquet-Popescu. « Seam carving for semantic video coding ». Dans SPIE Optical Engineering + Applications, sous la direction de Andrew G. Tescher. SPIE, 2011. http://dx.doi.org/10.1117/12.895317.
Texte intégralSilva, Michel M., Mario F. M. Campos et Erickson R. Nascimento. « Semantic Hyperlapse : a Sparse Coding-based and Multi-Importance Approach for First-Person Videos ». Dans XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8302.
Texte intégralZhu, Chen, Guo Lu, Rong Xie et Li Song. « Perceptual Video Coding Based on Semantic-Guided Texture Detection and Synthesis ». Dans 2022 Picture Coding Symposium (PCS). IEEE, 2022. http://dx.doi.org/10.1109/pcs56426.2022.10018028.
Texte intégralSilva, Michel M., Mario F. M. Campos et Erickson R. Nascimento. « Semantic Hyperlapse : a Sparse Coding-based and Multi-Importance Approach for First-Person Videos ». Dans Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/ctd.2020.11364.
Texte intégralYang, Jianping, Jie Zhang et Xiangjun Chen. « Semantic-preload video model based on VOP coding ». Dans 2012 International Conference on Graphic and Image Processing, sous la direction de Zeng Zhu. SPIE, 2013. http://dx.doi.org/10.1117/12.2012827.
Texte intégralGalteri, Leonardo, Marco Bertini, Lorenzo Seidenari, Tiberio Uricchio et Alberto Del Bimbo. « Increasing Video Perceptual Quality with GANs and Semantic Coding ». Dans MM '20 : The 28th ACM International Conference on Multimedia. New York, NY, USA : ACM, 2020. http://dx.doi.org/10.1145/3394171.3413508.
Texte intégralXie, Guangqi, Xin Li, Shiqi Lin, Zhibo Chen, Li Zhang, Kai Zhang et Yue Li. « Hierarchical Reinforcement Learning Based Video Semantic Coding for Segmentation ». Dans 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2022. http://dx.doi.org/10.1109/vcip56404.2022.10008806.
Texte intégralHu, Yujie, Youmin Xu, Jianhui Chang et Jian Zhang. « Semantic Neural Rendering-based Video Coding : Towards Ultra-Low Bitrate Video Conferencing ». Dans 2022 Data Compression Conference (DCC). IEEE, 2022. http://dx.doi.org/10.1109/dcc52660.2022.00067.
Texte intégralZhang Liang, Wen Xiangming, Wang Bo et Zheng Wei. « A concept-based approach to video semantic analysis and coding ». Dans 2010 2nd International Conference on Information Science and Engineering (ICISE). IEEE, 2010. http://dx.doi.org/10.1109/icise.2010.5689076.
Texte intégralMezaris, Vasileios, Nikolaos Boulgouris et Ioannis Kompatsiaris. « Knowledge-Assisted Video Analysis for Content-Adaptive Coding and Transmission ». Dans 2006 First International Workshop on Semantic Media Adaptation and Personalization (SMAP'06). IEEE, 2006. http://dx.doi.org/10.1109/smap.2006.22.