A selection of scholarly literature on the topic "Deep Video Representations"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Deep Video Representations."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant parameters are available in the metadata.
Journal articles on the topic "Deep Video Representations"
Feichtenhofer, Christoph, Axel Pinz, Richard P. Wildes, and Andrew Zisserman. "Deep Insights into Convolutional Networks for Video Recognition." International Journal of Computer Vision 128, no. 2 (October 29, 2019): 420–37. http://dx.doi.org/10.1007/s11263-019-01225-w.
Pandeya, Yagya Raj, Bhuwan Bhattarai, and Joonwhoan Lee. "Deep-Learning-Based Multimodal Emotion Classification for Music Videos." Sensors 21, no. 14 (July 20, 2021): 4927. http://dx.doi.org/10.3390/s21144927.
Ljubešić, Nikola. "'Deep Lexicography' – Fad or Opportunity?" Rasprave Instituta za hrvatski jezik i jezikoslovlje 46, no. 2 (October 30, 2020): 839–52. http://dx.doi.org/10.31724/rihjj.46.2.21.
Kumar, Vidit, Vikas Tripathi, and Bhaskar Pant. "Learning Unsupervised Visual Representations Using 3D Convolutional Autoencoder with Temporal Contrastive Modeling for Video Retrieval." International Journal of Mathematical, Engineering and Management Sciences 7, no. 2 (March 14, 2022): 272–87. http://dx.doi.org/10.33889/ijmems.2022.7.2.018.
Vihlman, Mikko, and Arto Visala. "Optical Flow in Deep Visual Tracking." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12112–19. http://dx.doi.org/10.1609/aaai.v34i07.6890.
Rouast, Philipp V., and Marc T. P. Adam. "Learning Deep Representations for Video-Based Intake Gesture Detection." IEEE Journal of Biomedical and Health Informatics 24, no. 6 (June 2020): 1727–37. http://dx.doi.org/10.1109/jbhi.2019.2942845.
Li, Jialu, Aishwarya Padmakumar, Gaurav Sukhatme, and Mohit Bansal. "VLN-Video: Utilizing Driving Videos for Outdoor Vision-and-Language Navigation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18517–26. http://dx.doi.org/10.1609/aaai.v38i17.29813.
Hu, Yueyue, Shiliang Sun, Xin Xu, and Jing Zhao. "Multi-View Deep Attention Network for Reinforcement Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13811–12. http://dx.doi.org/10.1609/aaai.v34i10.7177.
Dong, Zhen, Chenchen Jing, Mingtao Pei, and Yunde Jia. "Deep CNN Based Binary Hash Video Representations for Face Retrieval." Pattern Recognition 81 (September 2018): 357–69. http://dx.doi.org/10.1016/j.patcog.2018.04.014.
Psallidas, Theodoros, and Evaggelos Spyrou. "Video Summarization Based on Feature Fusion and Data Augmentation." Computers 12, no. 9 (September 15, 2023): 186. http://dx.doi.org/10.3390/computers12090186.
Dissertations on the topic "Deep Video Representations"
Yang, Yang. „Learning Hierarchical Representations for Video Analysis Using Deep Learning“. Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5892.
Der volle Inhalt der QuellePh.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
Sudhakaran, Swathikiran. "Deep Neural Architectures for Video Representation Learning." Doctoral thesis, University of Trento, 2019. https://hdl.handle.net/11572/369191. Full text: http://eprints-phd.biblio.unitn.it/3731/1/swathi_thesis_rev1.pdf.
Sun, Shuyang. "Designing Motion Representation in Videos." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/19724.
Mazari, Ahmed. "Apprentissage profond pour la reconnaissance d'actions en vidéos" [Deep learning for action recognition in videos]. Doctoral thesis, Sorbonne Université, 2020. http://www.theses.fr/2020SORUS171.
Der volle Inhalt der QuelleNowadays, video contents are ubiquitous through the popular use of internet and smartphones, as well as social media. Many daily life applications such as video surveillance and video captioning, as well as scene understanding require sophisticated technologies to process video data. It becomes of crucial importance to develop automatic means to analyze and to interpret the large amount of available video data. In this thesis, we are interested in video action recognition, i.e. the problem of assigning action categories to sequences of videos. This can be seen as a key ingredient to build the next generation of vision systems. It is tackled with AI frameworks, mainly with ML and Deep ConvNets. Current ConvNets are increasingly deeper, data-hungrier and this makes their success tributary of the abundance of labeled training data. ConvNets also rely on (max or average) pooling which reduces dimensionality of output layers (and hence attenuates their sensitivity to the availability of labeled data); however, this process may dilute the information of upstream convolutional layers and thereby affect the discrimination power of the trained video representations, especially when the learned action categories are fine-grained
"Video2Vec: Learning Semantic Spatio-Temporal Embedding for Video Representations." Master's thesis, 2016. http://hdl.handle.net/2286/R.I.40765.
Khanuja, Gagandeep Singh. "A Study of Real Time Search in Flood Scenes from UAV Videos Using Deep Learning Techniques." Thesis, 2019.
Souček, Tomáš. "Detekce střihů a vyhledávání známých scén ve videu s pomocí metod hlubokého učení" [Shot detection and known-scene retrieval in video using deep learning methods]. Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-434967.
Books on the topic "Deep Video Representations"
Aguayo, Angela J. Documentary Resistance. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190676216.001.0001.
Anderson, Crystal S. Soul in Seoul. University Press of Mississippi, 2020. http://dx.doi.org/10.14325/mississippi/9781496830098.001.0001.
Book chapters on the topic "Deep Video Representations"
Loban, Rhett. "Designing to Produce Deep Representations." In Embedding Culture into Video Games and Game Design, 140–52. London: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003276289-10.
Yao, Yuan, Zhiyuan Liu, Yankai Lin, and Maosong Sun. "Cross-Modal Representation Learning." In Representation Learning for Natural Language Processing, 211–40. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_7.
Mao, Feng, Xiang Wu, Hui Xue, and Rong Zhang. "Hierarchical Video Frame Sequence Representation with Deep Convolutional Graph Network." In Lecture Notes in Computer Science, 262–70. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11018-5_24.
Becerra-Riera, Fabiola, Annette Morales-González, and Heydi Méndez-Vázquez. "Exploring Local Deep Representations for Facial Gender Classification in Videos." In Progress in Artificial Intelligence and Pattern Recognition, 104–12. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01132-1_12.
Zhao, Kemeng, Liangrui Peng, Ning Ding, Gang Yao, Pei Tang, and Shengjin Wang. "Deep Representation Learning for License Plate Recognition in Low Quality Video Images." In Advances in Visual Computing, 202–14. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-47966-3_16.
Chen, Yixiong, Chunhui Zhang, Li Liu, Cheng Feng, Changfeng Dong, Yongfang Luo, and Xiang Wan. "USCL: Pretraining Deep Ultrasound Image Diagnosis Model Through Video Contrastive Representation Learning." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 627–37. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87237-3_60.
Dhurgadevi, M., D. Vimal Kumar, R. Senthilkumar, and K. Gunasekaran. "Detection of Video Anomaly in Public With Deep Learning Algorithm." In Advances in Psychology, Mental Health, and Behavioral Studies, 81–95. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-4143-8.ch004.
Asma, Stephen T. "Drama in the Diorama: The Confederation & Art and Science." In Stuffed Animals & Pickled Heads, 240–88. New York, NY: Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195130508.003.0007.
Verma, Gyanendra K. "Emotions Modelling in 3D Space." In Multimodal Affective Computing: Affective Information Representation, Modelling, and Analysis, 128–47. Bentham Science Publishers, 2023. http://dx.doi.org/10.2174/9789815124453123010013.
Nandal, Priyanka. "Motion Imitation for Monocular Videos." In Examining the Impact of Deep Learning and IoT on Multi-Industry Applications, 118–35. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7511-6.ch008.
Conference papers on the topic "Deep Video Representations"
Morere, Olivier, Hanlin Goh, Antoine Veillard, Vijay Chandrasekhar, and Jie Lin. "Co-regularized Deep Representations for Video Summarization." In 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/icip.2015.7351387.
Yu, Feiwu, Xinxiao Wu, Yuchao Sun, and Lixin Duan. "Exploiting Images for Video Recognition with Hierarchical Generative Adversarial Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/154.
Pernici, Federico, Federico Bartoli, Matteo Bruni, and Alberto Del Bimbo. "Memory Based Online Learning of Deep Representations from Video Streams." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00247.
Jung, Ilchae, Minji Kim, Eunhyeok Park, and Bohyung Han. "Online Hybrid Lightweight Representations Learning: Its Application to Visual Tracking." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/140.
Garcia-Gonzalez, Jorge, Rafael M. Luque-Baena, Juan M. Ortiz-de-Lazcano-Lobato, and Ezequiel Lopez-Rubio. "Moving Object Detection in Noisy Video Sequences Using Deep Convolutional Disentangled Representations." In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897305.
Parchami, Mostafa, Saman Bashbaghi, Eric Granger, and Saif Sayed. "Using Deep Autoencoders to Learn Robust Domain-Invariant Representations for Still-to-Video Face Recognition." In 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2017. http://dx.doi.org/10.1109/avss.2017.8078553.
Bueno-Benito, Elena, Biel Tura, and Mariella Dimiccoli. "Leveraging Triplet Loss for Unsupervised Action Segmentation." In LatinX in AI at Computer Vision and Pattern Recognition Conference 2023. Journal of LatinX in AI Research, 2023. http://dx.doi.org/10.52591/lxai202306185.
Kich, Victor Augusto, Junior Costa de Jesus, Ricardo Bedin Grando, Alisson Henrique Kolling, Gabriel Vinícius Heisler, and Rodrigo da Silva Guerra. "Deep Reinforcement Learning Using a Low-Dimensional Observation Filter for Visual Complex Video Game Playing." In Anais Estendidos do Simpósio Brasileiro de Games e Entretenimento Digital. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/sbgames_estendido.2021.19659.
Fan, Tingyu, Linyao Gao, Yiling Xu, Zhu Li, and Dong Wang. "D-DPCC: Deep Dynamic Point Cloud Compression via 3D Motion Prediction." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/126.
Li, Yang, Kan Li, and Xinxin Wang. "Deeply-Supervised CNN Model for Action Recognition with Trainable Feature Aggregation." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/112.