A selection of scientific literature on the topic "Semantic video coding"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Semantic video coding".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Semantic video coding"
Essel, Daniel Danso, Ben-Bright Benuwa, and Benjamin Ghansah. "Video Semantic Analysis." International Journal of Computer Vision and Image Processing 11, no. 2 (April 2021): 1–21. http://dx.doi.org/10.4018/ijcvip.2021040101.
Chen, Sovann, Supavadee Aramvith, and Yoshikazu Miyanaga. "Learning-Based Rate Control for High Efficiency Video Coding." Sensors 23, no. 7 (March 30, 2023): 3607. http://dx.doi.org/10.3390/s23073607.
Antoszczyszyn, P. M., J. M. Hannah, and P. M. Grant. "Reliable tracking of facial features in semantic-based video coding." IEE Proceedings - Vision, Image, and Signal Processing 145, no. 4 (1998): 257. http://dx.doi.org/10.1049/ip-vis:19982153.
NOMURA, Yoshihiko, Ryutaro MATSUDA, Ryota Sakamoto, Tokuhiro SUGIURA, Hirokazu Matsui, and Norihiko KATO. "2301 Low Bit-Rate Semantic Coding Technology for Lecture Video." Proceedings of the JSME annual meeting 2005.7 (2005): 89–90. http://dx.doi.org/10.1299/jsmemecjo.2005.7.0_89.
Benuwa, Ben-Bright, Yongzhao Zhan, Benjamin Ghansah, Ernest K. Ansah, and Andriana Sarkodie. "Sparsity Based Locality-Sensitive Discriminative Dictionary Learning for Video Semantic Analysis." Mathematical Problems in Engineering 2018 (August 5, 2018): 1–11. http://dx.doi.org/10.1155/2018/9312563.
Pimentel-Niño, M. A., Paresh Saxena, and M. A. Vazquez-Castro. "Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness." Scientific World Journal 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/394956.
Guo, Jia, Xiangyang Gong, Wendong Wang, Xirong Que, and Jingyu Liu. "SASRT: Semantic-Aware Super-Resolution Transmission for Adaptive Video Streaming over Wireless Multimedia Sensor Networks." Sensors 19, no. 14 (July 15, 2019): 3121. http://dx.doi.org/10.3390/s19143121.
Stivaktakis, Radamanthys, Grigorios Tsagkatakis, and Panagiotis Tsakalides. "Semantic Predictive Coding with Arbitrated Generative Adversarial Networks." Machine Learning and Knowledge Extraction 2, no. 3 (August 25, 2020): 307–26. http://dx.doi.org/10.3390/make2030017.
Herranz, Luis. "Integrating semantic analysis and scalable video coding for efficient content-based adaptation." Multimedia Systems 13, no. 2 (June 30, 2007): 103–18. http://dx.doi.org/10.1007/s00530-007-0090-0.
Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing." Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.
Dissertations on the topic "Semantic video coding"
Al-Qayedi, Ali. "Internet video-conferencing using model-based image coding with agent technology." Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298836.
Mitrica, Iulia. "Video compression of airplane cockpit screens content." Electronic thesis or dissertation, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT042.
Der volle Inhalt der QuelleThis thesis addresses the problem of encoding the video of airplane cockpits.The cockpit of modern airliners consists in one or more screens displaying the status of the plane instruments (e.g., the plane location as reported by the GPS, the fuel level as read by the sensors in the tanks, etc.,) often superimposed over natural images (e.g., navigation maps, outdoor cameras, etc.).Plane sensors are usually inaccessible due to security reasons, so recording the cockpit is often the only way to log vital plane data in the event of, e.g., an accident.Constraints on the recording storage available on-board require the cockpit video to be coded at low to very low bitrates, whereas safety reasons require the textual information to remain intelligible after decoding. In addition, constraints on the power envelope of avionic devices limit the cockpit recording subsystem complexity.Over the years, a number of schemes for coding images or videos with mixed computer-generated and natural contents have been proposed. Text and other computer generated graphics yield high-frequency components in the transformed domain. Therefore, the loss due to compression may hinder the readability of the video and thus its usefulness. For example, the recently standardized Screen Content Coding (SCC) extension of the H.265/HEVC standard includes tools designed explicitly for screen contents compression. Our experiments show however that artifacts persist at the low bitrates targeted by our application, prompting for schemes where the video is not encoded in the pixel domain.This thesis proposes methods for low complexity screen coding where text and graphical primitives are encoded in terms of their semantics rather than as blocks of pixels.At the encoder side, characters are detected and read using a convolutional neural network.Detected characters are then removed from screen via pixel inpainting, yielding a smoother residual video with fewer high frequencies. The residual video is encoded with a standard video codec and is transmitted to the receiver side together with text and graphics semantics as side information.At the decoder side, text and graphics are synthesized using the decoded semantics and superimposed over the residual video, eventually recovering the original frame. Our experiments show that an AVC/H.264 encoder retrofitted with our method has better rate-distortion performance than H.265/HEVC and approaches that of its SCC extension.If the complexity constraints allow inter-frame prediction, we also exploit the fact that co-located characters in neighbor frames are strongly correlated.Namely, the misclassified symbols are recovered using a proposed method based on low-complexity model of transitional probabilities for characters and graphics. Concerning character recognition, the error rate drops up to 18 times in the easiest cases and at least 1.5 times in the most difficult sequences despite complex occlusions.By exploiting temporal redundancy, our scheme further improves in rate-distortion terms and enables quasi-errorless character decoding. Experiments with real cockpit video footage show large rate-distortion gains for the proposed method with respect to video compression standards
Book chapters on the topic "Semantic video coding"
Mezaris, Vasileios, Nikolaos Thomos, Nikolaos V. Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Analysis of Video for Content-Adaptive Coding and Transmission." In Advances in Semantic Media Adaptation and Personalization, 221–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-76361_11.
Lin, Yu-Tzu, and Chia-Hu Chang. "User-aware Video Coding Based on Semantic Video Understanding and Enhancing." In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16498.
Thomas-Kerr, Joseph, Ian Burnett, and Christian Ritz. "What Are You Trying to Say? Format-Independent Semantic-Aware Streaming and Delivery." In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16763.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics." In Multimedia Technologies, 1441–55. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-953-3.ch105.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics." In Digital Multimedia Perception and Design, 1–20. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-860-4.ch001.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Semantic video coding"
Décombas, M., F. Capman, E. Renan, F. Dufaux, and B. Pesquet-Popescu. "Seam carving for semantic video coding." In SPIE Optical Engineering + Applications, edited by Andrew G. Tescher. SPIE, 2011. http://dx.doi.org/10.1117/12.895317.
Silva, Michel M., Mario F. M. Campos, and Erickson R. Nascimento. "Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos." In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8302.
Zhu, Chen, Guo Lu, Rong Xie, and Li Song. "Perceptual Video Coding Based on Semantic-Guided Texture Detection and Synthesis." In 2022 Picture Coding Symposium (PCS). IEEE, 2022. http://dx.doi.org/10.1109/pcs56426.2022.10018028.
Silva, Michel M., Mario F. M. Campos, and Erickson R. Nascimento. "Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos." In Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/ctd.2020.11364.
Yang, Jianping, Jie Zhang, and Xiangjun Chen. "Semantic-preload video model based on VOP coding." In 2012 International Conference on Graphic and Image Processing, edited by Zeng Zhu. SPIE, 2013. http://dx.doi.org/10.1117/12.2012827.
Galteri, Leonardo, Marco Bertini, Lorenzo Seidenari, Tiberio Uricchio, and Alberto Del Bimbo. "Increasing Video Perceptual Quality with GANs and Semantic Coding." In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413508.
Xie, Guangqi, Xin Li, Shiqi Lin, Zhibo Chen, Li Zhang, Kai Zhang, and Yue Li. "Hierarchical Reinforcement Learning Based Video Semantic Coding for Segmentation." In 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2022. http://dx.doi.org/10.1109/vcip56404.2022.10008806.
Hu, Yujie, Youmin Xu, Jianhui Chang, and Jian Zhang. "Semantic Neural Rendering-based Video Coding: Towards Ultra-Low Bitrate Video Conferencing." In 2022 Data Compression Conference (DCC). IEEE, 2022. http://dx.doi.org/10.1109/dcc52660.2022.00067.
Zhang Liang, Wen Xiangming, Wang Bo, and Zheng Wei. "A concept-based approach to video semantic analysis and coding." In 2010 2nd International Conference on Information Science and Engineering (ICISE). IEEE, 2010. http://dx.doi.org/10.1109/icise.2010.5689076.
Mezaris, Vasileios, Nikolaos Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Video Analysis for Content-Adaptive Coding and Transmission." In 2006 First International Workshop on Semantic Media Adaptation and Personalization (SMAP'06). IEEE, 2006. http://dx.doi.org/10.1109/smap.2006.22.