A ready-made bibliography on the topic "Motion captioning"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
Browse the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Motion captioning".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in ".pdf" format and read its abstract online, provided the relevant details are available in the work's metadata.
Journal articles on the topic "Motion captioning"
Iwamura, Kiyohiko, Jun Younes Louhi Kasahara, Alessandro Moro, Atsushi Yamashita, and Hajime Asama. "Image Captioning Using Motion-CNN with Object Detection". Sensors 21, no. 4 (February 10, 2021): 1270. http://dx.doi.org/10.3390/s21041270.
Chen, Shaoxiang, and Yu-Gang Jiang. "Motion Guided Spatial Attention for Video Captioning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8191–98. http://dx.doi.org/10.1609/aaai.v33i01.33018191.
Zhao, Hong, Lan Guo, ZhiWen Chen, and HouZe Zheng. "Research on Video Captioning Based on Multifeature Fusion". Computational Intelligence and Neuroscience 2022 (April 28, 2022): 1–14. http://dx.doi.org/10.1155/2022/1204909.
Qi, Mengshi, Yunhong Wang, Annan Li, and Jiebo Luo. "Sports Video Captioning via Attentive Motion Representation and Group Relationship Modeling". IEEE Transactions on Circuits and Systems for Video Technology 30, no. 8 (August 2020): 2617–33. http://dx.doi.org/10.1109/tcsvt.2019.2921655.
Ahmed, Shakil, A. F. M. Saifuddin Saif, Md Imtiaz Hanif, Md Mostofa Nurannabi Shakil, Md Mostofa Jaman, Md Mazid Ul Haque, Siam Bin Shawkat, et al. "Att-BiL-SL: Attention-Based Bi-LSTM and Sequential LSTM for Describing Video in the Textual Formation". Applied Sciences 12, no. 1 (December 29, 2021): 317. http://dx.doi.org/10.3390/app12010317.
Jiang, Wenhui, Yibo Cheng, Linxin Liu, Yuming Fang, Yuxin Peng, and Yang Liu. "Comprehensive Visual Grounding for Video Description". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2552–60. http://dx.doi.org/10.1609/aaai.v38i3.28032.
Kim, Heechan, and Soowon Lee. "A Video Captioning Method Based on Multi-Representation Switching for Sustainable Computing". Sustainability 13, no. 4 (February 19, 2021): 2250. http://dx.doi.org/10.3390/su13042250.
Charmatz, Marc. "Magistrate denies motion to dismiss in cases against Harvard and MIT on web content captioning". Disability Compliance for Higher Education 21, no. 10 (April 20, 2016): 1–3. http://dx.doi.org/10.1002/dhe.30174.
Chen, Jin, Xiaofeng Ji, and Xinxiao Wu. "Adaptive Image-to-Video Scene Graph Generation via Knowledge Reasoning and Adversarial Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 276–84. http://dx.doi.org/10.1609/aaai.v36i1.19903.
Yang, Jiaji, Esyin Chew, and Pengcheng Liu. "Service humanoid robotics: a novel interactive system based on bionic-companionship framework". PeerJ Computer Science 7 (August 13, 2021): e674. http://dx.doi.org/10.7717/peerj-cs.674.
Pełny tekst źródłaRozprawy doktorskie na temat "Motion captioning"
Radouane, Karim. "Mécanisme d’attention pour le sous-titrage du mouvement humain : Vers une segmentation sémantique et analyse du mouvement interprétables". Electronic Thesis or Diss., IMT Mines Alès, 2024. http://www.theses.fr/2024EMAL0002.
Captioning tasks mainly focus on images or videos, and seldom on human poses. Yet, poses concisely describe human activities. Beyond text generation quality, we consider the motion caption task as an intermediate step to solve other derived tasks. In this holistic approach, our experiments are centered on the unsupervised learning of semantic motion segmentation and interpretability. We first conduct an extensive literature review of recent methods for human pose estimation, as a central prerequisite for pose-based captioning. Then, we take an interest in pose-representation learning, with an emphasis on the use of spatiotemporal graph-based learning, which we apply and evaluate on a real-world application (protective behavior detection). As a result, we won the AffectMove challenge. Next, we delve into the core of our contributions in motion captioning, where: (i) We design local recurrent attention for synchronous text generation with motion. Each motion and its caption are decomposed into primitives and corresponding sub-captions. We also propose specific metrics to evaluate the synchronous mapping between motion and language segments. (ii) We initiate the construction of a motion-language dataset to enable supervised segmentation. (iii) We design an interpretable architecture with a transparent reasoning process through spatiotemporal attention, showing state-of-the-art results on the two reference datasets, KIT-ML and HumanML3D. Effective tools are proposed for interpretability evaluation and illustration. Finally, we conduct a thorough analysis of potential applications: unsupervised action segmentation, sign language translation, and the impact in other scenarios.
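Contribution (i) above is the most algorithmic part of this abstract, and a small sketch may help picture it: a pose sequence is encoded, split into primitives, and each primitive is pooled by attention into one context vector from which a decoder would emit a sub-caption, keeping text and motion synchronized. Everything below (PyTorch, the module name, the fixed-length window, all dimensions) is an illustrative assumption, not code from the thesis:

```python
# A minimal sketch of attention over motion primitives, assuming PyTorch.
# Module names, the fixed window size, and all dimensions are hypothetical;
# this is not the architecture proposed in the thesis itself.
import torch
import torch.nn as nn

class LocalMotionAttention(nn.Module):
    """Encodes a pose sequence, splits it into fixed-length primitives,
    and pools each primitive into one context vector via attention."""

    def __init__(self, pose_dim=64, hidden=128, window=16):
        super().__init__()
        self.window = window
        self.encoder = nn.GRU(pose_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)  # per-frame attention logits

    def forward(self, poses):  # poses: (batch, frames, pose_dim)
        feats, _ = self.encoder(poses)  # (batch, frames, hidden)
        b, t, h = feats.shape
        n = t // self.window  # number of motion primitives
        chunks = feats[:, : n * self.window].reshape(b, n, self.window, h)
        weights = torch.softmax(self.score(chunks).squeeze(-1), dim=-1)
        # One context vector per primitive; a text decoder would generate
        # one sub-caption from each, keeping text and motion in sync.
        return (weights.unsqueeze(-1) * chunks).sum(dim=2)  # (b, n, hidden)

model = LocalMotionAttention()
contexts = model(torch.randn(2, 64, 64))  # 2 clips, 64 frames, 64-D poses
print(contexts.shape)  # torch.Size([2, 4, 128]) -> 4 sub-captions per clip
```

In the thesis the segmentation is learned rather than fixed-length, and dedicated metrics score the motion-to-language alignment; this sketch only illustrates the per-primitive pooling step.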
Books on the topic "Motion captioning"
Sahlin, Ingrid. Tal och undertexter i textade svenska TV-program: Probleminventering och förslag till en analysmodell. Göteborg: Acta Universitatis Gothoburgensis, 2001.
Robson, Gary D. Closed Captioning Handbook. Taylor & Francis Group, 2004.
Robson, Gary D. Closed Captioning Handbook. Taylor & Francis Group, 2016.
Robson, Gary D. The Closed Captioning Handbook. Focal Press, 2004.
Fox, Wendy. Can Integrated Titles Improve the Viewing Experience? Saint Philip Street Press, 2020.
Diaz-Cintas, Jorge, Pilar Orero, and Aline Remael, eds. Media for All: Subtitling for the Deaf, Audio Description, and Sign Language (Approaches to Translation Studies 30). Rodopi, 2007.
Znajdź pełny tekst źródłaCzęści książek na temat "Motion captioning"
Hai-Jew, Shalin. "Image on the Street Is . . ." In Advances in Media, Entertainment, and the Arts, 1–45. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9821-3.ch001.
Conference papers on the topic "Motion captioning"
Iwamura, Kiyohiko, Jun Younes Louhi Kasahara, Alessandro Moro, Atsushi Yamashita, and Hajime Asama. "Potential of Incorporating Motion Estimation for Image Captioning". In 2021 IEEE/SICE International Symposium on System Integration (SII). IEEE, 2021. http://dx.doi.org/10.1109/ieeeconf49454.2021.9382725.
Chen, Shaoxiang, and Yu-Gang Jiang. "Motion Guided Region Message Passing for Video Captioning". In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00157.
Bosch Ruiz, Marc, Christopher M. Gifford, Agata Ciesielski, Scott Almes, Rachel Ellison, and Gordon Christie. "Captioning of full motion video from unmanned aerial platforms". In Geospatial Informatics IX, edited by Kannappan Palaniappan, Gunasekaran Seetharaman, and Peter J. Doucette. SPIE, 2019. http://dx.doi.org/10.1117/12.2518163.
Hu, Yimin, Guorui Yu, Yuejie Zhang, Rui Feng, Tao Zhang, Xuequan Lu, and Shang Gao. "Motion-Aware Video Paragraph Captioning via Exploring Object-Centered Internal Knowledge". In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10096625.
Qi, Mengshi, Yunhong Wang, Annan Li, and Jiebo Luo. "Sports Video Captioning by Attentive Motion Representation based Hierarchical Recurrent Neural Networks". In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3265845.3265851.
Mori, Yuki, Tsubasa Hirakawa, Takayoshi Yamashita, and Hironobu Fujiyoshi. "Image Captioning for Near-Future Events from Vehicle Camera Images and Motion Information". In 2021 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2021. http://dx.doi.org/10.1109/iv48863.2021.9575562.
Kaushik, Prashant, Vikas Saxena, and Amarjeet Prajapati. "A Novel Method for Sequence Generation for Video Captioning by Estimating the Objects Motion in Temporal Domain". In 2024 2nd International Conference on Disruptive Technologies (ICDT). IEEE, 2024. http://dx.doi.org/10.1109/icdt61202.2024.10489570.
Pełny tekst źródła