Selected scientific literature on the topic "Semantic video coding"
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Semantic video coding".
Journal articles on the topic "Semantic video coding"
Essel, Daniel Danso, Ben-Bright Benuwa, and Benjamin Ghansah. "Video Semantic Analysis". International Journal of Computer Vision and Image Processing 11, no. 2 (April 2021): 1–21. http://dx.doi.org/10.4018/ijcvip.2021040101.
Chen, Sovann, Supavadee Aramvith, and Yoshikazu Miyanaga. "Learning-Based Rate Control for High Efficiency Video Coding". Sensors 23, no. 7 (March 30, 2023): 3607. http://dx.doi.org/10.3390/s23073607.
Antoszczyszyn, P. M., J. M. Hannah, and P. M. Grant. "Reliable tracking of facial features in semantic-based video coding". IEE Proceedings - Vision, Image, and Signal Processing 145, no. 4 (1998): 257. http://dx.doi.org/10.1049/ip-vis:19982153.
Nomura, Yoshihiko, Ryutaro Matsuda, Ryota Sakamoto, Tokuhiro Sugiura, Hirokazu Matsui, and Norihiko Kato. "2301 Low Bit-Rate Semantic Coding Technology for Lecture Video". Proceedings of the JSME annual meeting 2005.7 (2005): 89–90. http://dx.doi.org/10.1299/jsmemecjo.2005.7.0_89.
Benuwa, Ben-Bright, Yongzhao Zhan, Benjamin Ghansah, Ernest K. Ansah, and Andriana Sarkodie. "Sparsity Based Locality-Sensitive Discriminative Dictionary Learning for Video Semantic Analysis". Mathematical Problems in Engineering 2018 (August 5, 2018): 1–11. http://dx.doi.org/10.1155/2018/9312563.
Pimentel-Niño, M. A., Paresh Saxena, and M. A. Vazquez-Castro. "Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness". Scientific World Journal 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/394956.
Guo, Jia, Xiangyang Gong, Wendong Wang, Xirong Que, and Jingyu Liu. "SASRT: Semantic-Aware Super-Resolution Transmission for Adaptive Video Streaming over Wireless Multimedia Sensor Networks". Sensors 19, no. 14 (July 15, 2019): 3121. http://dx.doi.org/10.3390/s19143121.
Stivaktakis, Radamanthys, Grigorios Tsagkatakis, and Panagiotis Tsakalides. "Semantic Predictive Coding with Arbitrated Generative Adversarial Networks". Machine Learning and Knowledge Extraction 2, no. 3 (August 25, 2020): 307–26. http://dx.doi.org/10.3390/make2030017.
Herranz, Luis. "Integrating semantic analysis and scalable video coding for efficient content-based adaptation". Multimedia Systems 13, no. 2 (June 30, 2007): 103–18. http://dx.doi.org/10.1007/s00530-007-0090-0.
Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing". Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.
Theses / dissertations on the topic "Semantic video coding"
Al-Qayedi, Ali. "Internet video-conferencing using model-based image coding with agent technology". Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298836.
Mitrica, Iulia. "Video compression of airplane cockpit screens content". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT042.
Texto completo da fonteThis thesis addresses the problem of encoding the video of airplane cockpits.The cockpit of modern airliners consists in one or more screens displaying the status of the plane instruments (e.g., the plane location as reported by the GPS, the fuel level as read by the sensors in the tanks, etc.,) often superimposed over natural images (e.g., navigation maps, outdoor cameras, etc.).Plane sensors are usually inaccessible due to security reasons, so recording the cockpit is often the only way to log vital plane data in the event of, e.g., an accident.Constraints on the recording storage available on-board require the cockpit video to be coded at low to very low bitrates, whereas safety reasons require the textual information to remain intelligible after decoding. In addition, constraints on the power envelope of avionic devices limit the cockpit recording subsystem complexity.Over the years, a number of schemes for coding images or videos with mixed computer-generated and natural contents have been proposed. Text and other computer generated graphics yield high-frequency components in the transformed domain. Therefore, the loss due to compression may hinder the readability of the video and thus its usefulness. For example, the recently standardized Screen Content Coding (SCC) extension of the H.265/HEVC standard includes tools designed explicitly for screen contents compression. Our experiments show however that artifacts persist at the low bitrates targeted by our application, prompting for schemes where the video is not encoded in the pixel domain.This thesis proposes methods for low complexity screen coding where text and graphical primitives are encoded in terms of their semantics rather than as blocks of pixels.At the encoder side, characters are detected and read using a convolutional neural network.Detected characters are then removed from screen via pixel inpainting, yielding a smoother residual video with fewer high frequencies. 
The residual video is encoded with a standard video codec and is transmitted to the receiver side together with text and graphics semantics as side information.At the decoder side, text and graphics are synthesized using the decoded semantics and superimposed over the residual video, eventually recovering the original frame. Our experiments show that an AVC/H.264 encoder retrofitted with our method has better rate-distortion performance than H.265/HEVC and approaches that of its SCC extension.If the complexity constraints allow inter-frame prediction, we also exploit the fact that co-located characters in neighbor frames are strongly correlated.Namely, the misclassified symbols are recovered using a proposed method based on low-complexity model of transitional probabilities for characters and graphics. Concerning character recognition, the error rate drops up to 18 times in the easiest cases and at least 1.5 times in the most difficult sequences despite complex occlusions.By exploiting temporal redundancy, our scheme further improves in rate-distortion terms and enables quasi-errorless character decoding. Experiments with real cockpit video footage show large rate-distortion gains for the proposed method with respect to video compression standards
Book chapters on the topic "Semantic video coding"
Mezaris, Vasileios, Nikolaos Thomos, Nikolaos V. Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Analysis of Video for Content-Adaptive Coding and Transmission". In Advances in Semantic Media Adaptation and Personalization, 221–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-76361_11.
Lin, Yu-Tzu, and Chia-Hu Chang. "User-aware Video Coding Based on Semantic Video Understanding and Enhancing". In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16498.
Thomas-Kerr, Joseph, Ian Burnett, and Christian Ritz. "What Are You Trying to Say? Format-Independent Semantic-Aware Streaming and Delivery". In Recent Advances on Video Coding. InTech, 2011. http://dx.doi.org/10.5772/16763.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics". In Multimedia Technologies, 1441–55. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-953-3.ch105.
Cavallaro, Andrea, and Stefan Winkler. "Perceptual Semantics". In Digital Multimedia Perception and Design, 1–20. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-860-4.ch001.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Semantic video coding"
Décombas, M., F. Capman, E. Renan, F. Dufaux, and B. Pesquet-Popescu. "Seam carving for semantic video coding". In SPIE Optical Engineering + Applications, edited by Andrew G. Tescher. SPIE, 2011. http://dx.doi.org/10.1117/12.895317.
Silva, Michel M., Mario F. M. Campos, and Erickson R. Nascimento. "Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos". In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8302.
Zhu, Chen, Guo Lu, Rong Xie, and Li Song. "Perceptual Video Coding Based on Semantic-Guided Texture Detection and Synthesis". In 2022 Picture Coding Symposium (PCS). IEEE, 2022. http://dx.doi.org/10.1109/pcs56426.2022.10018028.
Silva, Michel M., Mario F. M. Campos, and Erickson R. Nascimento. "Semantic Hyperlapse: a Sparse Coding-based and Multi-Importance Approach for First-Person Videos". In Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/ctd.2020.11364.
Yang, Jianping, Jie Zhang, and Xiangjun Chen. "Semantic-preload video model based on VOP coding". In 2012 International Conference on Graphic and Image Processing, edited by Zeng Zhu. SPIE, 2013. http://dx.doi.org/10.1117/12.2012827.
Galteri, Leonardo, Marco Bertini, Lorenzo Seidenari, Tiberio Uricchio, and Alberto Del Bimbo. "Increasing Video Perceptual Quality with GANs and Semantic Coding". In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413508.
Xie, Guangqi, Xin Li, Shiqi Lin, Zhibo Chen, Li Zhang, Kai Zhang, and Yue Li. "Hierarchical Reinforcement Learning Based Video Semantic Coding for Segmentation". In 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2022. http://dx.doi.org/10.1109/vcip56404.2022.10008806.
Hu, Yujie, Youmin Xu, Jianhui Chang, and Jian Zhang. "Semantic Neural Rendering-based Video Coding: Towards Ultra-Low Bitrate Video Conferencing". In 2022 Data Compression Conference (DCC). IEEE, 2022. http://dx.doi.org/10.1109/dcc52660.2022.00067.
Zhang Liang, Wen Xiangming, Wang Bo, and Zheng Wei. "A concept-based approach to video semantic analysis and coding". In 2010 2nd International Conference on Information Science and Engineering (ICISE). IEEE, 2010. http://dx.doi.org/10.1109/icise.2010.5689076.
Mezaris, Vasileios, Nikolaos Boulgouris, and Ioannis Kompatsiaris. "Knowledge-Assisted Video Analysis for Content-Adaptive Coding and Transmission". In 2006 First International Workshop on Semantic Media Adaptation and Personalization (SMAP'06). IEEE, 2006. http://dx.doi.org/10.1109/smap.2006.22.