Ready bibliography on the topic "Semantic video coding"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Semantic video coding".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant parameters are available in the metadata.
Journal articles on the topic "Semantic video coding"
Essel, Daniel Danso, Ben-Bright Benuwa, and Benjamin Ghansah. "Video Semantic Analysis". International Journal of Computer Vision and Image Processing 11, no. 2 (April 2021): 1–21. http://dx.doi.org/10.4018/ijcvip.2021040101.
Chen, Sovann, Supavadee Aramvith, and Yoshikazu Miyanaga. "Learning-Based Rate Control for High Efficiency Video Coding". Sensors 23, no. 7 (March 30, 2023): 3607. http://dx.doi.org/10.3390/s23073607.
Antoszczyszyn, P. M., J. M. Hannah, and P. M. Grant. "Reliable tracking of facial features in semantic-based video coding". IEE Proceedings - Vision, Image, and Signal Processing 145, no. 4 (1998): 257. http://dx.doi.org/10.1049/ip-vis:19982153.
NOMURA, Yoshihiko, Ryutaro MATSUDA, Ryota Sakamoto, Tokuhiro SUGIURA, Hirokazu Matsui, and Norihiko KATO. "2301 Low Bit-Rate Semantic Coding Technology for Lecture Video". Proceedings of the JSME annual meeting 2005.7 (2005): 89–90. http://dx.doi.org/10.1299/jsmemecjo.2005.7.0_89.
Benuwa, Ben-Bright, Yongzhao Zhan, Benjamin Ghansah, Ernest K. Ansah, and Andriana Sarkodie. "Sparsity Based Locality-Sensitive Discriminative Dictionary Learning for Video Semantic Analysis". Mathematical Problems in Engineering 2018 (August 5, 2018): 1–11. http://dx.doi.org/10.1155/2018/9312563.
Pimentel-Niño, M. A., Paresh Saxena, and M. A. Vazquez-Castro. "Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness". Scientific World Journal 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/394956.
Guo, Jia, Xiangyang Gong, Wendong Wang, Xirong Que, and Jingyu Liu. "SASRT: Semantic-Aware Super-Resolution Transmission for Adaptive Video Streaming over Wireless Multimedia Sensor Networks". Sensors 19, no. 14 (July 15, 2019): 3121. http://dx.doi.org/10.3390/s19143121.
Stivaktakis, Radamanthys, Grigorios Tsagkatakis, and Panagiotis Tsakalides. "Semantic Predictive Coding with Arbitrated Generative Adversarial Networks". Machine Learning and Knowledge Extraction 2, no. 3 (August 25, 2020): 307–26. http://dx.doi.org/10.3390/make2030017.
Herranz, Luis. "Integrating semantic analysis and scalable video coding for efficient content-based adaptation". Multimedia Systems 13, no. 2 (June 30, 2007): 103–18. http://dx.doi.org/10.1007/s00530-007-0090-0.
Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing". Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.
Doctoral dissertations on the topic "Semantic video coding"
Al-Qayedi, Ali. "Internet video-conferencing using model-based image coding with agent technology". Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298836.
Pełny tekst źródłaMitrica, Iulia. "Video compression of airplane cockpit screens content". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT042.
Pełny tekst źródłaThis thesis addresses the problem of encoding the video of airplane cockpits.The cockpit of modern airliners consists in one or more screens displaying the status of the plane instruments (e.g., the plane location as reported by the GPS, the fuel level as read by the sensors in the tanks, etc.,) often superimposed over natural images (e.g., navigation maps, outdoor cameras, etc.).Plane sensors are usually inaccessible due to security reasons, so recording the cockpit is often the only way to log vital plane data in the event of, e.g., an accident.Constraints on the recording storage available on-board require the cockpit video to be coded at low to very low bitrates, whereas safety reasons require the textual information to remain intelligible after decoding. In addition, constraints on the power envelope of avionic devices limit the cockpit recording subsystem complexity.Over the years, a number of schemes for coding images or videos with mixed computer-generated and natural contents have been proposed. Text and other computer generated graphics yield high-frequency components in the transformed domain. Therefore, the loss due to compression may hinder the readability of the video and thus its usefulness. For example, the recently standardized Screen Content Coding (SCC) extension of the H.265/HEVC standard includes tools designed explicitly for screen contents compression. Our experiments show however that artifacts persist at the low bitrates targeted by our application, prompting for schemes where the video is not encoded in the pixel domain.This thesis proposes methods for low complexity screen coding where text and graphical primitives are encoded in terms of their semantics rather than as blocks of pixels.At the encoder side, characters are detected and read using a convolutional neural network.Detected characters are then removed from screen via pixel inpainting, yielding a smoother residual video with fewer high frequencies. 
The residual video is encoded with a standard video codec and transmitted to the receiver together with the text and graphics semantics as side information. At the decoder side, text and graphics are synthesized using the decoded semantics and superimposed over the residual video, recovering the original frame. Our experiments show that an AVC/H.264 encoder retrofitted with our method has better rate-distortion performance than H.265/HEVC and approaches that of its SCC extension. If the complexity constraints allow inter-frame prediction, we also exploit the fact that co-located characters in neighboring frames are strongly correlated: misclassified symbols are recovered using a proposed method based on a low-complexity model of transition probabilities for characters and graphics. Concerning character recognition, the error rate drops by a factor of up to 18 in the easiest cases and at least 1.5 in the most difficult sequences, despite complex occlusions. By exploiting temporal redundancy, our scheme further improves in rate-distortion terms and enables quasi-errorless character decoding. Experiments with real cockpit video footage show large rate-distortion gains for the proposed method with respect to video compression standards.
Book chapters on the topic "Semantic video coding"
Mezaris, Vasileios, Nikolaos Thomos, Nikolaos V. Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Analysis of Video for Content-Adaptive Coding and Transmission". In Advances in Semantic Media Adaptation and Personalization, 221–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-76361_11.
Lin, Yu-Tzu, and Chia-Hu Chang. "User-aware Video Coding Based on Semantic Video Understanding and Enhancing". In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16498.
Thomas-Kerr, Joseph, Ian Burnett, and Christian Ritz. "What Are You Trying to Say? Format-Independent Semantic-Aware Streaming and Delivery". In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16763.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics". In Multimedia Technologies, 1441–55. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-953-3.ch105.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics". In Digital Multimedia Perception and Design, 1–20. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-860-4.ch001.
Conference abstracts on the topic "Semantic video coding"
Décombas, M., F. Capman, E. Renan, F. Dufaux, and B. Pesquet-Popescu. "Seam carving for semantic video coding". In SPIE Optical Engineering + Applications, edited by Andrew G. Tescher. SPIE, 2011. http://dx.doi.org/10.1117/12.895317.
Silva, Michel M., Mario F. M. Campos, and Erickson R. Nascimento. "Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos". In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8302.
Zhu, Chen, Guo Lu, Rong Xie, and Li Song. "Perceptual Video Coding Based on Semantic-Guided Texture Detection and Synthesis". In 2022 Picture Coding Symposium (PCS). IEEE, 2022. http://dx.doi.org/10.1109/pcs56426.2022.10018028.
Silva, Michel M., Mario F. M. Campos, and Erickson R. Nascimento. "Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos". In Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/ctd.2020.11364.
Yang, Jianping, Jie Zhang, and Xiangjun Chen. "Semantic-preload video model based on VOP coding". In 2012 International Conference on Graphic and Image Processing, edited by Zeng Zhu. SPIE, 2013. http://dx.doi.org/10.1117/12.2012827.
Galteri, Leonardo, Marco Bertini, Lorenzo Seidenari, Tiberio Uricchio, and Alberto Del Bimbo. "Increasing Video Perceptual Quality with GANs and Semantic Coding". In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413508.
Xie, Guangqi, Xin Li, Shiqi Lin, Zhibo Chen, Li Zhang, Kai Zhang, and Yue Li. "Hierarchical Reinforcement Learning Based Video Semantic Coding for Segmentation". In 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2022. http://dx.doi.org/10.1109/vcip56404.2022.10008806.
Hu, Yujie, Youmin Xu, Jianhui Chang, and Jian Zhang. "Semantic Neural Rendering-based Video Coding: Towards Ultra-Low Bitrate Video Conferencing". In 2022 Data Compression Conference (DCC). IEEE, 2022. http://dx.doi.org/10.1109/dcc52660.2022.00067.
Zhang Liang, Wen Xiangming, Wang Bo, and Zheng Wei. "A concept-based approach to video semantic analysis and coding". In 2010 2nd International Conference on Information Science and Engineering (ICISE). IEEE, 2010. http://dx.doi.org/10.1109/icise.2010.5689076.
Mezaris, Vasileios, Nikolaos Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Video Analysis for Content-Adaptive Coding and Transmission". In 2006 First International Workshop on Semantic Media Adaptation and Personalization (SMAP'06). IEEE, 2006. http://dx.doi.org/10.1109/smap.2006.22.