Journal articles on the topic "Deep Video Representations"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles for your research on the topic "Deep Video Representations".
Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a ".pdf" file and read the abstract of the work online, if the relevant parameters are available in the metadata.
Browse journal articles on a wide variety of disciplines and compose your bibliography correctly.
Feichtenhofer, Christoph, Axel Pinz, Richard P. Wildes, and Andrew Zisserman. "Deep Insights into Convolutional Networks for Video Recognition". International Journal of Computer Vision 128, no. 2 (29.10.2019): 420–37. http://dx.doi.org/10.1007/s11263-019-01225-w.
Pandeya, Yagya Raj, Bhuwan Bhattarai, and Joonwhoan Lee. "Deep-Learning-Based Multimodal Emotion Classification for Music Videos". Sensors 21, no. 14 (20.07.2021): 4927. http://dx.doi.org/10.3390/s21144927.
Ljubešić, Nikola. "‟Deep lexicography” – Fad or Opportunity?" Rasprave Instituta za hrvatski jezik i jezikoslovlje 46, no. 2 (30.10.2020): 839–52. http://dx.doi.org/10.31724/rihjj.46.2.21.
Kumar, Vidit, Vikas Tripathi, and Bhaskar Pant. "Learning Unsupervised Visual Representations using 3D Convolutional Autoencoder with Temporal Contrastive Modeling for Video Retrieval". International Journal of Mathematical, Engineering and Management Sciences 7, no. 2 (14.03.2022): 272–87. http://dx.doi.org/10.33889/ijmems.2022.7.2.018.
Vihlman, Mikko, and Arto Visala. "Optical Flow in Deep Visual Tracking". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (3.04.2020): 12112–19. http://dx.doi.org/10.1609/aaai.v34i07.6890.
Rouast, Philipp V., and Marc T. P. Adam. "Learning Deep Representations for Video-Based Intake Gesture Detection". IEEE Journal of Biomedical and Health Informatics 24, no. 6 (June 2020): 1727–37. http://dx.doi.org/10.1109/jbhi.2019.2942845.
Li, Jialu, Aishwarya Padmakumar, Gaurav Sukhatme, and Mohit Bansal. "VLN-Video: Utilizing Driving Videos for Outdoor Vision-and-Language Navigation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (24.03.2024): 18517–26. http://dx.doi.org/10.1609/aaai.v38i17.29813.
Hu, Yueyue, Shiliang Sun, Xin Xu, and Jing Zhao. "Multi-View Deep Attention Network for Reinforcement Learning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (3.04.2020): 13811–12. http://dx.doi.org/10.1609/aaai.v34i10.7177.
Dong, Zhen, Chenchen Jing, Mingtao Pei, and Yunde Jia. "Deep CNN based binary hash video representations for face retrieval". Pattern Recognition 81 (September 2018): 357–69. http://dx.doi.org/10.1016/j.patcog.2018.04.014.
Psallidas, Theodoros, and Evaggelos Spyrou. "Video Summarization Based on Feature Fusion and Data Augmentation". Computers 12, no. 9 (15.09.2023): 186. http://dx.doi.org/10.3390/computers12090186.
Liu, Shangdong, Puming Cao, Yujian Feng, Yimu Ji, Jiayuan Chen, Xuedong Xie, and Longji Wu. "NRVC: Neural Representation for Video Compression with Implicit Multiscale Fusion Network". Entropy 25, no. 8 (4.08.2023): 1167. http://dx.doi.org/10.3390/e25081167.
Pan, Haixia, Jiahua Lan, Hongqiang Wang, Yanan Li, Meng Zhang, Mojie Ma, Dongdong Zhang, and Xiaoran Zhao. "UWV-Yolox: A Deep Learning Model for Underwater Video Object Detection". Sensors 23, no. 10 (18.05.2023): 4859. http://dx.doi.org/10.3390/s23104859.
Gad, Gad, Eyad Gad, Korhan Cengiz, Zubair Fadlullah, and Bassem Mokhtar. "Deep Learning-Based Context-Aware Video Content Analysis on IoT Devices". Electronics 11, no. 11 (4.06.2022): 1785. http://dx.doi.org/10.3390/electronics11111785.
Lin, Jie, Ling-Yu Duan, Shiqi Wang, Yan Bai, Yihang Lou, Vijay Chandrasekhar, Tiejun Huang, Alex Kot, and Wen Gao. "HNIP: Compact Deep Invariant Representations for Video Matching, Localization, and Retrieval". IEEE Transactions on Multimedia 19, no. 9 (September 2017): 1968–83. http://dx.doi.org/10.1109/tmm.2017.2713410.
Zhang, Huijun, Ling Feng, Ningyun Li, Zhanyu Jin, and Lei Cao. "Video-Based Stress Detection through Deep Learning". Sensors 20, no. 19 (28.09.2020): 5552. http://dx.doi.org/10.3390/s20195552.
Jiang, Pin, and Yahong Han. "Reasoning with Heterogeneous Graph Alignment for Video Question Answering". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (3.04.2020): 11109–16. http://dx.doi.org/10.1609/aaai.v34i07.6767.
Mumtaz, Nadia, Naveed Ejaz, Suliman Aladhadh, Shabana Habib, and Mi Young Lee. "Deep Multi-Scale Features Fusion for Effective Violence Detection and Control Charts Visualization". Sensors 22, no. 23 (1.12.2022): 9383. http://dx.doi.org/10.3390/s22239383.
Wu, Lin, Yang Wang, Ling Shao, and Meng Wang. "3-D PersonVLAD: Learning Deep Global Representations for Video-Based Person Reidentification". IEEE Transactions on Neural Networks and Learning Systems 30, no. 11 (November 2019): 3347–59. http://dx.doi.org/10.1109/tnnls.2019.2891244.
Meshchaninov, Viacheslav Pavlovich, Ivan Andreevich Molodetskikh, Dmitriy Sergeevich Vatolin, and Alexey Gennadievich Voloboy. "Combining contrastive and supervised learning for video super-resolution detection". Keldysh Institute Preprints, no. 80 (2022): 1–13. http://dx.doi.org/10.20948/prepr-2022-80.
Huang, Shaonian, Dongjun Huang, and Xinmin Zhou. "Learning Multimodal Deep Representations for Crowd Anomaly Event Detection". Mathematical Problems in Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/6323942.
Kumar, Vidit, Vikas Tripathi, Bhaskar Pant, Sultan S. Alshamrani, Ankur Dumka, Anita Gehlot, Rajesh Singh, Mamoon Rashid, Abdullah Alshehri, and Ahmed Saeed AlGhamdi. "Hybrid Spatiotemporal Contrastive Representation Learning for Content-Based Surgical Video Retrieval". Electronics 11, no. 9 (24.04.2022): 1353. http://dx.doi.org/10.3390/electronics11091353.
Xu, Ming, Xiaosheng Yu, Dongyue Chen, Chengdong Wu, and Yang Jiang. "An Efficient Anomaly Detection System for Crowded Scenes Using Variational Autoencoders". Applied Sciences 9, no. 16 (14.08.2019): 3337. http://dx.doi.org/10.3390/app9163337.
Bohunicky, Kyle Matthew. "Dear Punchy". Animal Crossing Special Issue 13, no. 22 (16.02.2021): 39–58. http://dx.doi.org/10.7202/1075262ar.
Rezaei, Fariba, and Mehran Yazdi. "A New Semantic and Statistical Distance-Based Anomaly Detection in Crowd Video Surveillance". Wireless Communications and Mobile Computing 2021 (15.05.2021): 1–9. http://dx.doi.org/10.1155/2021/5513582.
Dong, Wenkai, Zhaoxiang Zhang, and Tieniu Tan. "Attention-Aware Sampling via Deep Reinforcement Learning for Action Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 8247–54. http://dx.doi.org/10.1609/aaai.v33i01.33018247.
Nida, Nudrat, Muhammad Haroon Yousaf, Aun Irtaza, and Sergio A. Velastin. "Instructor Activity Recognition through Deep Spatiotemporal Features and Feedforward Extreme Learning Machines". Mathematical Problems in Engineering 2019 (30.04.2019): 1–13. http://dx.doi.org/10.1155/2019/2474865.
He, Dongliang, Zhichao Zhou, Chuang Gan, Fu Li, Xiao Liu, Yandong Li, Limin Wang, and Shilei Wen. "StNet: Local and Global Spatial-Temporal Modeling for Action Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 8401–8. http://dx.doi.org/10.1609/aaai.v33i01.33018401.
Swinney, Carolyn J., and John C. Woods. "Unmanned Aerial Vehicle Operating Mode Classification Using Deep Residual Learning Feature Extraction". Aerospace 8, no. 3 (16.03.2021): 79. http://dx.doi.org/10.3390/aerospace8030079.
Zhao, Hu, Yanyun Shen, Zhipan Wang, and Qingling Zhang. "MFACNet: A Multi-Frame Feature Aggregating and Inter-Feature Correlation Framework for Multi-Object Tracking in Satellite Videos". Remote Sensing 16, no. 9 (30.04.2024): 1604. http://dx.doi.org/10.3390/rs16091604.
Singh, Kulvinder, et al. "Enhancing Multimodal Information Retrieval Through Integrating Data Mining and Deep Learning Techniques". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (30.10.2023): 560–69. http://dx.doi.org/10.17762/ijritcc.v11i9.8844.
Govender, Divina, and Jules-Raymond Tapamo. "Spatio-Temporal Scale Coded Bag-of-Words". Sensors 20, no. 21 (9.11.2020): 6380. http://dx.doi.org/10.3390/s20216380.
Huang, Haofeng, Wenhan Yang, Lingyu Duan, and Jiaying Liu. "Seeing Dark Videos via Self-Learned Bottleneck Neural Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (24.03.2024): 2321–29. http://dx.doi.org/10.1609/aaai.v38i3.28006.
Dhar, Moloy. "Object Detection using Deep Learning Approach". International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (30.06.2022): 2963–69. http://dx.doi.org/10.22214/ijraset.2022.44417.
Mishra, Vaishnavi. "Synthetic Media Analysis Using Deep Learning". International Journal of Scientific Research in Engineering and Management 08, no. 05 (7.05.2024): 1–5. http://dx.doi.org/10.55041/ijsrem32494.
Thakur, Amey. "Generative Adversarial Networks". International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (31.08.2021): 2307–25. http://dx.doi.org/10.22214/ijraset.2021.37723.
Wang, Bokun, Caiqian Yang, and Yaojing Chen. "Detection Anomaly in Video Based on Deep Support Vector Data Description". Computational Intelligence and Neuroscience 2022 (4.05.2022): 1–6. http://dx.doi.org/10.1155/2022/5362093.
Chen, Shuang, Zengcai Wang, and Wenxin Chen. "Driver Drowsiness Estimation Based on Factorized Bilinear Feature Fusion and a Long-Short-Term Recurrent Convolutional Network". Information 12, no. 1 (22.12.2020): 3. http://dx.doi.org/10.3390/info12010003.
Rezaei, Behnaz, Yiorgos Christakis, Bryan Ho, Kevin Thomas, Kelley Erb, Sarah Ostadabbas, and Shyamal Patel. "Target-Specific Action Classification for Automated Assessment of Human Motor Behavior from Video". Sensors 19, no. 19 (1.10.2019): 4266. http://dx.doi.org/10.3390/s19194266.
Bourai, Nour, Hayet Farida Merouani, and Akila Djebbar. "Advanced Image Compression Techniques for Medical Applications: Survey". All Sciences Abstracts 1, no. 1 (16.04.2023): 1. http://dx.doi.org/10.59287/as-abstracts.444.
Mai Magdy, Fahima A. Maghraby, and Mohamed Waleed Fakhr. "A 4D Convolutional Neural Networks for Video Violence Detection". Journal of Advanced Research in Applied Sciences and Engineering Technology 36, no. 1 (24.12.2023): 16–25. http://dx.doi.org/10.37934/araset.36.1.1625.
Choi, Jinsoo, and Tae-Hyun Oh. "Joint Video Super-Resolution and Frame Interpolation via Permutation Invariance". Sensors 23, no. 5 (24.02.2023): 2529. http://dx.doi.org/10.3390/s23052529.
Kulkarni, Shrinivasrao B., Abhishek Kuppelur, Akash Shetty, Shashank Bidarakatti, and Taranath Sangresakoppa. "Analysis of Physiotherapy Practices using Deep Learning". International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (30.04.2024): 5084–89. http://dx.doi.org/10.22214/ijraset.2024.61194.
Liu, Daizong, Dongdong Yu, Changhu Wang, and Pan Zhou. "F2Net: Learning to Focus on the Foreground for Unsupervised Video Object Segmentation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (18.05.2021): 2109–17. http://dx.doi.org/10.1609/aaai.v35i3.16308.
Sun, Zheng, Andrew W. Sumsion, Shad A. Torrie, and Dah-Jye Lee. "Learning Facial Motion Representation with a Lightweight Encoder for Identity Verification". Electronics 11, no. 13 (22.06.2022): 1946. http://dx.doi.org/10.3390/electronics11131946.
Wagner, Travis L., and Ashley Blewer. "“The Word Real Is No Longer Real”: Deepfakes, Gender, and the Challenges of AI-Altered Video". Open Information Science 3, no. 1 (1.01.2019): 32–46. http://dx.doi.org/10.1515/opis-2019-0003.
Sharif, Md Haidar, Lei Jiao, and Christian W. Omlin. "CNN-ViT Supported Weakly-Supervised Video Segment Level Anomaly Detection". Sensors 23, no. 18 (7.09.2023): 7734. http://dx.doi.org/10.3390/s23187734.
Jeon, DaeHyeon, and Min-Suk Kim. "Deep-Learning-Based Sequence Causal Long-Term Recurrent Convolutional Network for Data Fusion Using Video Data". Electronics 12, no. 5 (24.02.2023): 1115. http://dx.doi.org/10.3390/electronics12051115.
Wu, Sijie, Kai Zhang, Shaoyi Li, and Jie Yan. "Learning to Track Aircraft in Infrared Imagery". Remote Sensing 12, no. 23 (6.12.2020): 3995. http://dx.doi.org/10.3390/rs12233995.
Kong, Weiqi. "Research Advanced in Multimodal Emotion Recognition Based on Deep Learning". Highlights in Science, Engineering and Technology 85 (13.03.2024): 602–8. http://dx.doi.org/10.54097/p3yprn36.
Tøttrup, Daniel, Stinus Lykke Skovgaard, Jonas le Fevre Sejersen, and Rui Pimentel de Figueiredo. "A Fast and Accurate Approach to Multiple-Vehicle Localization and Tracking from Monocular Aerial Images". Journal of Imaging 7, no. 12 (8.12.2021): 270. http://dx.doi.org/10.3390/jimaging7120270.