Journal articles on the topic "Deep Video Representations"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Deep Video Representations".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read the abstract of the work online, if it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Feichtenhofer, Christoph, Axel Pinz, Richard P. Wildes, and Andrew Zisserman. "Deep Insights into Convolutional Networks for Video Recognition". International Journal of Computer Vision 128, no. 2 (October 29, 2019): 420–37. http://dx.doi.org/10.1007/s11263-019-01225-w.
Pandeya, Yagya Raj, Bhuwan Bhattarai, and Joonwhoan Lee. "Deep-Learning-Based Multimodal Emotion Classification for Music Videos". Sensors 21, no. 14 (July 20, 2021): 4927. http://dx.doi.org/10.3390/s21144927.
Ljubešić, Nikola. "“Deep lexicography” – Fad or Opportunity?" Rasprave Instituta za hrvatski jezik i jezikoslovlje 46, no. 2 (October 30, 2020): 839–52. http://dx.doi.org/10.31724/rihjj.46.2.21.
Kumar, Vidit, Vikas Tripathi, and Bhaskar Pant. "Learning Unsupervised Visual Representations using 3D Convolutional Autoencoder with Temporal Contrastive Modeling for Video Retrieval". International Journal of Mathematical, Engineering and Management Sciences 7, no. 2 (March 14, 2022): 272–87. http://dx.doi.org/10.33889/ijmems.2022.7.2.018.
Vihlman, Mikko, and Arto Visala. "Optical Flow in Deep Visual Tracking". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12112–19. http://dx.doi.org/10.1609/aaai.v34i07.6890.
Rouast, Philipp V., and Marc T. P. Adam. "Learning Deep Representations for Video-Based Intake Gesture Detection". IEEE Journal of Biomedical and Health Informatics 24, no. 6 (June 2020): 1727–37. http://dx.doi.org/10.1109/jbhi.2019.2942845.
Li, Jialu, Aishwarya Padmakumar, Gaurav Sukhatme, and Mohit Bansal. "VLN-Video: Utilizing Driving Videos for Outdoor Vision-and-Language Navigation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18517–26. http://dx.doi.org/10.1609/aaai.v38i17.29813.
Hu, Yueyue, Shiliang Sun, Xin Xu, and Jing Zhao. "Multi-View Deep Attention Network for Reinforcement Learning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13811–12. http://dx.doi.org/10.1609/aaai.v34i10.7177.
Dong, Zhen, Chenchen Jing, Mingtao Pei, and Yunde Jia. "Deep CNN based binary hash video representations for face retrieval". Pattern Recognition 81 (September 2018): 357–69. http://dx.doi.org/10.1016/j.patcog.2018.04.014.
Psallidas, Theodoros, and Evaggelos Spyrou. "Video Summarization Based on Feature Fusion and Data Augmentation". Computers 12, no. 9 (September 15, 2023): 186. http://dx.doi.org/10.3390/computers12090186.
Liu, Shangdong, Puming Cao, Yujian Feng, Yimu Ji, Jiayuan Chen, Xuedong Xie, and Longji Wu. "NRVC: Neural Representation for Video Compression with Implicit Multiscale Fusion Network". Entropy 25, no. 8 (August 4, 2023): 1167. http://dx.doi.org/10.3390/e25081167.
Pan, Haixia, Jiahua Lan, Hongqiang Wang, Yanan Li, Meng Zhang, Mojie Ma, Dongdong Zhang, and Xiaoran Zhao. "UWV-Yolox: A Deep Learning Model for Underwater Video Object Detection". Sensors 23, no. 10 (May 18, 2023): 4859. http://dx.doi.org/10.3390/s23104859.
Gad, Gad, Eyad Gad, Korhan Cengiz, Zubair Fadlullah, and Bassem Mokhtar. "Deep Learning-Based Context-Aware Video Content Analysis on IoT Devices". Electronics 11, no. 11 (June 4, 2022): 1785. http://dx.doi.org/10.3390/electronics11111785.
Lin, Jie, Ling-Yu Duan, Shiqi Wang, Yan Bai, Yihang Lou, Vijay Chandrasekhar, Tiejun Huang, Alex Kot, and Wen Gao. "HNIP: Compact Deep Invariant Representations for Video Matching, Localization, and Retrieval". IEEE Transactions on Multimedia 19, no. 9 (September 2017): 1968–83. http://dx.doi.org/10.1109/tmm.2017.2713410.
Zhang, Huijun, Ling Feng, Ningyun Li, Zhanyu Jin, and Lei Cao. "Video-Based Stress Detection through Deep Learning". Sensors 20, no. 19 (September 28, 2020): 5552. http://dx.doi.org/10.3390/s20195552.
Jiang, Pin, and Yahong Han. "Reasoning with Heterogeneous Graph Alignment for Video Question Answering". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11109–16. http://dx.doi.org/10.1609/aaai.v34i07.6767.
Mumtaz, Nadia, Naveed Ejaz, Suliman Aladhadh, Shabana Habib, and Mi Young Lee. "Deep Multi-Scale Features Fusion for Effective Violence Detection and Control Charts Visualization". Sensors 22, no. 23 (December 1, 2022): 9383. http://dx.doi.org/10.3390/s22239383.
Wu, Lin, Yang Wang, Ling Shao, and Meng Wang. "3-D PersonVLAD: Learning Deep Global Representations for Video-Based Person Reidentification". IEEE Transactions on Neural Networks and Learning Systems 30, no. 11 (November 2019): 3347–59. http://dx.doi.org/10.1109/tnnls.2019.2891244.
Meshchaninov, Viacheslav Pavlovich, Ivan Andreevich Molodetskikh, Dmitriy Sergeevich Vatolin, and Alexey Gennadievich Voloboy. "Combining contrastive and supervised learning for video super-resolution detection". Keldysh Institute Preprints, no. 80 (2022): 1–13. http://dx.doi.org/10.20948/prepr-2022-80.
Huang, Shaonian, Dongjun Huang, and Xinmin Zhou. "Learning Multimodal Deep Representations for Crowd Anomaly Event Detection". Mathematical Problems in Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/6323942.
Kumar, Vidit, Vikas Tripathi, Bhaskar Pant, Sultan S. Alshamrani, Ankur Dumka, Anita Gehlot, Rajesh Singh, Mamoon Rashid, Abdullah Alshehri, and Ahmed Saeed AlGhamdi. "Hybrid Spatiotemporal Contrastive Representation Learning for Content-Based Surgical Video Retrieval". Electronics 11, no. 9 (April 24, 2022): 1353. http://dx.doi.org/10.3390/electronics11091353.
Xu, Ming, Xiaosheng Yu, Dongyue Chen, Chengdong Wu, and Yang Jiang. "An Efficient Anomaly Detection System for Crowded Scenes Using Variational Autoencoders". Applied Sciences 9, no. 16 (August 14, 2019): 3337. http://dx.doi.org/10.3390/app9163337.
Bohunicky, Kyle Matthew. "Dear Punchy". Animal Crossing Special Issue 13, no. 22 (February 16, 2021): 39–58. http://dx.doi.org/10.7202/1075262ar.
Rezaei, Fariba, and Mehran Yazdi. "A New Semantic and Statistical Distance-Based Anomaly Detection in Crowd Video Surveillance". Wireless Communications and Mobile Computing 2021 (May 15, 2021): 1–9. http://dx.doi.org/10.1155/2021/5513582.
Dong, Wenkai, Zhaoxiang Zhang, and Tieniu Tan. "Attention-Aware Sampling via Deep Reinforcement Learning for Action Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8247–54. http://dx.doi.org/10.1609/aaai.v33i01.33018247.
Nida, Nudrat, Muhammad Haroon Yousaf, Aun Irtaza, and Sergio A. Velastin. "Instructor Activity Recognition through Deep Spatiotemporal Features and Feedforward Extreme Learning Machines". Mathematical Problems in Engineering 2019 (April 30, 2019): 1–13. http://dx.doi.org/10.1155/2019/2474865.
He, Dongliang, Zhichao Zhou, Chuang Gan, Fu Li, Xiao Liu, Yandong Li, Limin Wang, and Shilei Wen. "StNet: Local and Global Spatial-Temporal Modeling for Action Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8401–8. http://dx.doi.org/10.1609/aaai.v33i01.33018401.
Swinney, Carolyn J., and John C. Woods. "Unmanned Aerial Vehicle Operating Mode Classification Using Deep Residual Learning Feature Extraction". Aerospace 8, no. 3 (March 16, 2021): 79. http://dx.doi.org/10.3390/aerospace8030079.
Zhao, Hu, Yanyun Shen, Zhipan Wang, and Qingling Zhang. "MFACNet: A Multi-Frame Feature Aggregating and Inter-Feature Correlation Framework for Multi-Object Tracking in Satellite Videos". Remote Sensing 16, no. 9 (April 30, 2024): 1604. http://dx.doi.org/10.3390/rs16091604.
Kulvinder Singh, et al. "Enhancing Multimodal Information Retrieval Through Integrating Data Mining and Deep Learning Techniques". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 560–69. http://dx.doi.org/10.17762/ijritcc.v11i9.8844.
Govender, Divina, and Jules-Raymond Tapamo. "Spatio-Temporal Scale Coded Bag-of-Words". Sensors 20, no. 21 (November 9, 2020): 6380. http://dx.doi.org/10.3390/s20216380.
Huang, Haofeng, Wenhan Yang, Lingyu Duan, and Jiaying Liu. "Seeing Dark Videos via Self-Learned Bottleneck Neural Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2321–29. http://dx.doi.org/10.1609/aaai.v38i3.28006.
Dhar, Moloy. "Object Detection using Deep Learning Approach". International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 2963–69. http://dx.doi.org/10.22214/ijraset.2022.44417.
Mishra, Vaishnavi. "Synthetic Media Analysis Using Deep Learning". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 7, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32494.
Thakur, Amey. "Generative Adversarial Networks". International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2307–25. http://dx.doi.org/10.22214/ijraset.2021.37723.
Wang, Bokun, Caiqian Yang, and Yaojing Chen. "Detection Anomaly in Video Based on Deep Support Vector Data Description". Computational Intelligence and Neuroscience 2022 (May 4, 2022): 1–6. http://dx.doi.org/10.1155/2022/5362093.
Chen, Shuang, Zengcai Wang, and Wenxin Chen. "Driver Drowsiness Estimation Based on Factorized Bilinear Feature Fusion and a Long-Short-Term Recurrent Convolutional Network". Information 12, no. 1 (December 22, 2020): 3. http://dx.doi.org/10.3390/info12010003.
Rezaei, Behnaz, Yiorgos Christakis, Bryan Ho, Kevin Thomas, Kelley Erb, Sarah Ostadabbas, and Shyamal Patel. "Target-Specific Action Classification for Automated Assessment of Human Motor Behavior from Video". Sensors 19, no. 19 (October 1, 2019): 4266. http://dx.doi.org/10.3390/s19194266.
Bourai, Nour, Hayet Farida Merouani, and Akila Djebbar. "Advanced Image Compression Techniques for Medical Applications: Survey". All Sciences Abstracts 1, no. 1 (April 16, 2023): 1. http://dx.doi.org/10.59287/as-abstracts.444.
Mai Magdy, Fahima A. Maghraby, and Mohamed Waleed Fakhr. "A 4D Convolutional Neural Networks for Video Violence Detection". Journal of Advanced Research in Applied Sciences and Engineering Technology 36, no. 1 (December 24, 2023): 16–25. http://dx.doi.org/10.37934/araset.36.1.1625.
Choi, Jinsoo, and Tae-Hyun Oh. "Joint Video Super-Resolution and Frame Interpolation via Permutation Invariance". Sensors 23, no. 5 (February 24, 2023): 2529. http://dx.doi.org/10.3390/s23052529.
Kulkarni, Shrinivasrao B., Abhishek Kuppelur, Akash Shetty, Shashank Bidarakatti, and Taranath Sangresakoppa. "Analysis of Physiotherapy Practices using Deep Learning". International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 5084–89. http://dx.doi.org/10.22214/ijraset.2024.61194.
Liu, Daizong, Dongdong Yu, Changhu Wang, and Pan Zhou. "F2Net: Learning to Focus on the Foreground for Unsupervised Video Object Segmentation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2109–17. http://dx.doi.org/10.1609/aaai.v35i3.16308.
Sun, Zheng, Andrew W. Sumsion, Shad A. Torrie, and Dah-Jye Lee. "Learning Facial Motion Representation with a Lightweight Encoder for Identity Verification". Electronics 11, no. 13 (June 22, 2022): 1946. http://dx.doi.org/10.3390/electronics11131946.
Wagner, Travis L., and Ashley Blewer. "“The Word Real Is No Longer Real”: Deepfakes, Gender, and the Challenges of AI-Altered Video". Open Information Science 3, no. 1 (January 1, 2019): 32–46. http://dx.doi.org/10.1515/opis-2019-0003.
Sharif, Md Haidar, Lei Jiao, and Christian W. Omlin. "CNN-ViT Supported Weakly-Supervised Video Segment Level Anomaly Detection". Sensors 23, no. 18 (September 7, 2023): 7734. http://dx.doi.org/10.3390/s23187734.
Jeon, DaeHyeon, and Min-Suk Kim. "Deep-Learning-Based Sequence Causal Long-Term Recurrent Convolutional Network for Data Fusion Using Video Data". Electronics 12, no. 5 (February 24, 2023): 1115. http://dx.doi.org/10.3390/electronics12051115.
Wu, Sijie, Kai Zhang, Shaoyi Li, and Jie Yan. "Learning to Track Aircraft in Infrared Imagery". Remote Sensing 12, no. 23 (December 6, 2020): 3995. http://dx.doi.org/10.3390/rs12233995.
Kong, Weiqi. "Research Advanced in Multimodal Emotion Recognition Based on Deep Learning". Highlights in Science, Engineering and Technology 85 (March 13, 2024): 602–8. http://dx.doi.org/10.54097/p3yprn36.
Tøttrup, Daniel, Stinus Lykke Skovgaard, Jonas le Fevre Sejersen, and Rui Pimentel de Figueiredo. "A Fast and Accurate Approach to Multiple-Vehicle Localization and Tracking from Monocular Aerial Images". Journal of Imaging 7, no. 12 (December 8, 2021): 270. http://dx.doi.org/10.3390/jimaging7120270.