Selection of scientific literature on the topic "Motion captioning"
Create a citation in APA, MLA, Chicago, Harvard, and other styles
Contents
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Motion captioning."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and a bibliographic reference to the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online annotation, if the relevant parameters are available in the metadata.
Journal articles on the topic "Motion captioning"
Iwamura, Kiyohiko, Jun Younes Louhi Kasahara, Alessandro Moro, Atsushi Yamashita, and Hajime Asama. "Image Captioning Using Motion-CNN with Object Detection." Sensors 21, no. 4 (February 10, 2021): 1270. http://dx.doi.org/10.3390/s21041270.
Chen, Shaoxiang, and Yu-Gang Jiang. "Motion Guided Spatial Attention for Video Captioning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8191–98. http://dx.doi.org/10.1609/aaai.v33i01.33018191.
Zhao, Hong, Lan Guo, ZhiWen Chen, and HouZe Zheng. "Research on Video Captioning Based on Multifeature Fusion." Computational Intelligence and Neuroscience 2022 (April 28, 2022): 1–14. http://dx.doi.org/10.1155/2022/1204909.
Qi, Mengshi, Yunhong Wang, Annan Li, and Jiebo Luo. "Sports Video Captioning via Attentive Motion Representation and Group Relationship Modeling." IEEE Transactions on Circuits and Systems for Video Technology 30, no. 8 (August 2020): 2617–33. http://dx.doi.org/10.1109/tcsvt.2019.2921655.
Ahmed, Shakil, A. F. M. Saifuddin Saif, Md Imtiaz Hanif, Md Mostofa Nurannabi Shakil, Md Mostofa Jaman, Md Mazid Ul Haque, Siam Bin Shawkat, et al. "Att-BiL-SL: Attention-Based Bi-LSTM and Sequential LSTM for Describing Video in the Textual Formation." Applied Sciences 12, no. 1 (December 29, 2021): 317. http://dx.doi.org/10.3390/app12010317.
Jiang, Wenhui, Yibo Cheng, Linxin Liu, Yuming Fang, Yuxin Peng, and Yang Liu. "Comprehensive Visual Grounding for Video Description." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2552–60. http://dx.doi.org/10.1609/aaai.v38i3.28032.
Kim, Heechan, and Soowon Lee. "A Video Captioning Method Based on Multi-Representation Switching for Sustainable Computing." Sustainability 13, no. 4 (February 19, 2021): 2250. http://dx.doi.org/10.3390/su13042250.
Charmatz, Marc. "Magistrate denies motion to dismiss in cases against Harvard and MIT on web content captioning." Disability Compliance for Higher Education 21, no. 10 (April 20, 2016): 1–3. http://dx.doi.org/10.1002/dhe.30174.
Chen, Jin, Xiaofeng Ji, and Xinxiao Wu. "Adaptive Image-to-Video Scene Graph Generation via Knowledge Reasoning and Adversarial Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 276–84. http://dx.doi.org/10.1609/aaai.v36i1.19903.
Yang, Jiaji, Esyin Chew, and Pengcheng Liu. "Service humanoid robotics: a novel interactive system based on bionic-companionship framework." PeerJ Computer Science 7 (August 13, 2021): e674. http://dx.doi.org/10.7717/peerj-cs.674.
Dissertations on the topic "Motion captioning"
Radouane, Karim. "Mécanisme d'attention pour le sous-titrage du mouvement humain : Vers une segmentation sémantique et analyse du mouvement interprétables." Electronic thesis or dissertation, IMT Mines Alès, 2024. http://www.theses.fr/2024EMAL0002.
Captioning tasks mainly focus on images or videos, and seldom on human poses. Yet poses concisely describe human activities. Beyond text-generation quality, we treat the motion captioning task as an intermediate step toward solving other derived tasks. In this holistic approach, our experiments center on the unsupervised learning of semantic motion segmentation and on interpretability. We first conduct an extensive literature review of recent methods for human pose estimation, a central prerequisite for pose-based captioning. We then turn to pose-representation learning, with an emphasis on spatiotemporal graph-based learning, which we apply and evaluate on a real-world application (protective behavior detection); this work won the AffectMove challenge. Next, we present the core of our contributions to motion captioning: (i) we design local recurrent attention for text generation synchronized with motion, decomposing each motion and its caption into primitives and corresponding sub-captions, and propose specific metrics to evaluate the synchronous mapping between motion and language segments; (ii) we initiate the construction of a motion-language dataset to enable supervised segmentation; (iii) we design an interpretable architecture with a transparent reasoning process based on spatiotemporal attention, achieving state-of-the-art results on the two reference datasets, KIT-ML and HumanML3D. Effective tools are proposed for interpretability evaluation and illustration. Finally, we conduct a thorough analysis of potential applications: unsupervised action segmentation, sign language translation, and impact in other scenarios.
Books on the topic "Motion captioning"
Sahlin, Ingrid. Tal och undertexter i textade svenska TV-program: Probleminventering och förslag till en analysmodell. Göteborg: Acta Universitatis Gothoburgensis, 2001.
Robson, Gary D. Closed Captioning Handbook. Taylor & Francis Group, 2004.
Robson, Gary D. Closed Captioning Handbook. Taylor & Francis Group, 2016.
Robson, Gary D. The Closed Captioning Handbook. Focal Press, 2004.
Fox, Wendy. Can Integrated Titles Improve the Viewing Experience? Saint Philip Street Press, 2020.
Diaz-Cintas, Jorge, Pilar Orero, and Aline Remael, eds. Media for All: Subtitling for the Deaf, Audio Description, and Sign Language (Approaches to Translation Studies 30). Rodopi, 2007.
Book chapters on the topic "Motion captioning"
Hai-Jew, Shalin. "Image on the Street Is . . ." In Advances in Media, Entertainment, and the Arts, 1–45. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9821-3.ch001.
Conference papers on the topic "Motion captioning"
Iwamura, Kiyohiko, Jun Younes Louhi Kasahara, Alessandro Moro, Atsushi Yamashita, and Hajime Asama. "Potential of Incorporating Motion Estimation for Image Captioning." In 2021 IEEE/SICE International Symposium on System Integration (SII). IEEE, 2021. http://dx.doi.org/10.1109/ieeeconf49454.2021.9382725.
Chen, Shaoxiang, and Yu-Gang Jiang. "Motion Guided Region Message Passing for Video Captioning." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00157.
Bosch Ruiz, Marc, Christopher M. Gifford, Agata Ciesielski, Scott Almes, Rachel Ellison, and Gordon Christie. "Captioning of full motion video from unmanned aerial platforms." In Geospatial Informatics IX, edited by Kannappan Palaniappan, Gunasekaran Seetharaman, and Peter J. Doucette. SPIE, 2019. http://dx.doi.org/10.1117/12.2518163.
Hu, Yimin, Guorui Yu, Yuejie Zhang, Rui Feng, Tao Zhang, Xuequan Lu, and Shang Gao. "Motion-Aware Video Paragraph Captioning via Exploring Object-Centered Internal Knowledge." In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10096625.
Qi, Mengshi, Yunhong Wang, Annan Li, and Jiebo Luo. "Sports Video Captioning by Attentive Motion Representation based Hierarchical Recurrent Neural Networks." In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3265845.3265851.
Mori, Yuki, Tsubasa Hirakawa, Takayoshi Yamashita, and Hironobu Fujiyoshi. "Image Captioning for Near-Future Events from Vehicle Camera Images and Motion Information." In 2021 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2021. http://dx.doi.org/10.1109/iv48863.2021.9575562.
Kaushik, Prashant, Vikas Saxena, and Amarjeet Prajapati. "A Novel Method for Sequence Generation for Video Captioning by Estimating the Objects Motion in Temporal Domain." In 2024 2nd International Conference on Disruptive Technologies (ICDT). IEEE, 2024. http://dx.doi.org/10.1109/icdt61202.2024.10489570.