A selection of scholarly literature on the topic "Semantic video coding"
Consult the lists of current articles, books, theses, conference papers, and other scholarly sources on the topic "Semantic video coding".
Journal articles on the topic "Semantic video coding"
Essel, Daniel Danso, Ben-Bright Benuwa, and Benjamin Ghansah. "Video Semantic Analysis." International Journal of Computer Vision and Image Processing 11, no. 2 (April 2021): 1–21. http://dx.doi.org/10.4018/ijcvip.2021040101.
Chen, Sovann, Supavadee Aramvith, and Yoshikazu Miyanaga. "Learning-Based Rate Control for High Efficiency Video Coding." Sensors 23, no. 7 (March 30, 2023): 3607. http://dx.doi.org/10.3390/s23073607.
Antoszczyszyn, P. M., J. M. Hannah, and P. M. Grant. "Reliable tracking of facial features in semantic-based video coding." IEE Proceedings - Vision, Image, and Signal Processing 145, no. 4 (1998): 257. http://dx.doi.org/10.1049/ip-vis:19982153.
Nomura, Yoshihiko, Ryutaro Matsuda, Ryota Sakamoto, Tokuhiro Sugiura, Hirokazu Matsui, and Norihiko Kato. "2301 Low Bit-Rate Semantic Coding Technology for Lecture Video." Proceedings of the JSME annual meeting 2005.7 (2005): 89–90. http://dx.doi.org/10.1299/jsmemecjo.2005.7.0_89.
Benuwa, Ben-Bright, Yongzhao Zhan, Benjamin Ghansah, Ernest K. Ansah, and Andriana Sarkodie. "Sparsity Based Locality-Sensitive Discriminative Dictionary Learning for Video Semantic Analysis." Mathematical Problems in Engineering 2018 (August 5, 2018): 1–11. http://dx.doi.org/10.1155/2018/9312563.
Pimentel-Niño, M. A., Paresh Saxena, and M. A. Vazquez-Castro. "Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness." Scientific World Journal 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/394956.
Guo, Jia, Xiangyang Gong, Wendong Wang, Xirong Que, and Jingyu Liu. "SASRT: Semantic-Aware Super-Resolution Transmission for Adaptive Video Streaming over Wireless Multimedia Sensor Networks." Sensors 19, no. 14 (July 15, 2019): 3121. http://dx.doi.org/10.3390/s19143121.
Stivaktakis, Radamanthys, Grigorios Tsagkatakis, and Panagiotis Tsakalides. "Semantic Predictive Coding with Arbitrated Generative Adversarial Networks." Machine Learning and Knowledge Extraction 2, no. 3 (August 25, 2020): 307–26. http://dx.doi.org/10.3390/make2030017.
Herranz, Luis. "Integrating semantic analysis and scalable video coding for efficient content-based adaptation." Multimedia Systems 13, no. 2 (June 30, 2007): 103–18. http://dx.doi.org/10.1007/s00530-007-0090-0.
Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing." Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.
Повний текст джерелаДисертації з теми "Semantic video coding"
Al-Qayedi, Ali. "Internet video-conferencing using model-based image coding with agent technology." Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298836.
Повний текст джерелаMitrica, Iulia. "Video compression of airplane cockpit screens content." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT042.
This thesis addresses the problem of encoding the video of airplane cockpits. The cockpit of modern airliners consists of one or more screens displaying the status of the plane's instruments (e.g., the plane's location as reported by the GPS, the fuel level as read by the sensors in the tanks), often superimposed over natural images (e.g., navigation maps, outdoor cameras). Plane sensors are usually inaccessible for security reasons, so recording the cockpit is often the only way to log vital plane data in the event of, e.g., an accident. Constraints on the recording storage available on board require the cockpit video to be coded at low to very low bitrates, whereas safety considerations require the textual information to remain intelligible after decoding. In addition, constraints on the power envelope of avionic devices limit the complexity of the cockpit recording subsystem.

Over the years, a number of schemes for coding images or videos with mixed computer-generated and natural content have been proposed. Text and other computer-generated graphics yield high-frequency components in the transform domain, so compression loss may hinder the readability of the video and thus its usefulness. For example, the recently standardized Screen Content Coding (SCC) extension of the H.265/HEVC standard includes tools designed explicitly for screen-content compression. Our experiments show, however, that artifacts persist at the low bitrates targeted by our application, motivating schemes in which the video is not encoded in the pixel domain.

This thesis proposes methods for low-complexity screen coding in which text and graphical primitives are encoded in terms of their semantics rather than as blocks of pixels. At the encoder side, characters are detected and read using a convolutional neural network. Detected characters are then removed from the screen via pixel inpainting, yielding a smoother residual video with fewer high frequencies.
The residual video is encoded with a standard video codec and transmitted to the receiver together with the text and graphics semantics as side information. At the decoder side, text and graphics are synthesized from the decoded semantics and superimposed over the residual video, eventually recovering the original frame. Our experiments show that an AVC/H.264 encoder retrofitted with our method has better rate-distortion performance than H.265/HEVC and approaches that of its SCC extension. If the complexity constraints allow inter-frame prediction, we also exploit the fact that co-located characters in neighboring frames are strongly correlated: misclassified symbols are recovered using a proposed method based on a low-complexity model of transition probabilities for characters and graphics. Concerning character recognition, the error rate drops by a factor of up to 18 in the easiest cases and at least 1.5 in the most difficult sequences, despite complex occlusions. By exploiting temporal redundancy, our scheme further improves in rate-distortion terms and enables quasi-errorless character decoding. Experiments with real cockpit video footage show large rate-distortion gains for the proposed method with respect to video compression standards.
Book chapters on the topic "Semantic video coding"
Mezaris, Vasileios, Nikolaos Thomos, Nikolaos V. Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Analysis of Video for Content-Adaptive Coding and Transmission." In Advances in Semantic Media Adaptation and Personalization, 221–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-76361_11.
Lin, Yu-Tzu, and Chia-Hu Chang. "User-aware Video Coding Based on Semantic Video Understanding and Enhancing." In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16498.
Thomas-Kerr, Joseph, Ian Burnett, and Christian Ritz. "What Are You Trying to Say? Format-Independent Semantic-Aware Streaming and Delivery." In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16763.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics." In Multimedia Technologies, 1441–55. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-953-3.ch105.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics." In Digital Multimedia Perception and Design, 1–20. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-860-4.ch001.
Повний текст джерелаТези доповідей конференцій з теми "Semantic video coding"
Décombas, M., F. Capman, E. Renan, F. Dufaux, and B. Pesquet-Popescu. "Seam carving for semantic video coding." In SPIE Optical Engineering + Applications, edited by Andrew G. Tescher. SPIE, 2011. http://dx.doi.org/10.1117/12.895317.
Silva, Michel M., Mario F. M. Campos, and Erickson R. Nascimento. "Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos." In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8302.
Zhu, Chen, Guo Lu, Rong Xie, and Li Song. "Perceptual Video Coding Based on Semantic-Guided Texture Detection and Synthesis." In 2022 Picture Coding Symposium (PCS). IEEE, 2022. http://dx.doi.org/10.1109/pcs56426.2022.10018028.
Silva, Michel M., Mario F. M. Campos, and Erickson R. Nascimento. "Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos." In Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/ctd.2020.11364.
Yang, Jianping, Jie Zhang, and Xiangjun Chen. "Semantic-preload video model based on VOP coding." In 2012 International Conference on Graphic and Image Processing, edited by Zeng Zhu. SPIE, 2013. http://dx.doi.org/10.1117/12.2012827.
Galteri, Leonardo, Marco Bertini, Lorenzo Seidenari, Tiberio Uricchio, and Alberto Del Bimbo. "Increasing Video Perceptual Quality with GANs and Semantic Coding." In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413508.
Xie, Guangqi, Xin Li, Shiqi Lin, Zhibo Chen, Li Zhang, Kai Zhang, and Yue Li. "Hierarchical Reinforcement Learning Based Video Semantic Coding for Segmentation." In 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2022. http://dx.doi.org/10.1109/vcip56404.2022.10008806.
Hu, Yujie, Youmin Xu, Jianhui Chang, and Jian Zhang. "Semantic Neural Rendering-based Video Coding: Towards Ultra-Low Bitrate Video Conferencing." In 2022 Data Compression Conference (DCC). IEEE, 2022. http://dx.doi.org/10.1109/dcc52660.2022.00067.
Zhang, Liang, Xiangming Wen, Bo Wang, and Wei Zheng. "A concept-based approach to video semantic analysis and coding." In 2010 2nd International Conference on Information Science and Engineering (ICISE). IEEE, 2010. http://dx.doi.org/10.1109/icise.2010.5689076.
Mezaris, Vasileios, Nikolaos Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Video Analysis for Content-Adaptive Coding and Transmission." In 2006 First International Workshop on Semantic Media Adaptation and Personalization (SMAP'06). IEEE, 2006. http://dx.doi.org/10.1109/smap.2006.22.