Journal articles on the topic "Video frame"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 scholarly journal articles on the topic "Video frame".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, provided these details are available in the work's metadata.
Browse journal articles from many disciplines and compile an accurate bibliography.
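Each entry below also carries a DOI, so the same bibliographic data can be retrieved programmatically. As a minimal illustrative sketch (not a feature of this page), assuming the publicly documented DOI content-negotiation service at https://doi.org, the following Python snippet requests a formatted reference for one of the DOIs listed below; the example DOI and the "apa" style name are chosen purely for illustration.

import urllib.request

def format_citation(doi: str, style: str = "apa") -> str:
    # Ask the DOI resolver for a ready-made citation via content negotiation.
    request = urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8").strip()

if __name__ == "__main__":
    # DOI taken from the Liu et al. (2011) entry below; other CSL style names
    # such as "modern-language-association" or "chicago-author-date" can be
    # substituted where the resolver supports them.
    print(format_citation("10.1142/s0219649211002961"))

The same request can be issued from the command line with curl and the Accept header shown above, which is a quick way to check a single reference before adding it to a bibliography.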
Liu, Dianting, Mei-Ling Shyu, Chao Chen, and Shu-Ching Chen. "Within and Between Shot Information Utilisation in Video Key Frame Extraction". Journal of Information & Knowledge Management 10, no. 03 (September 2011): 247–59. http://dx.doi.org/10.1142/s0219649211002961.
Gong, Tao, Kai Chen, Xinjiang Wang, Qi Chu, Feng Zhu, Dahua Lin, Nenghai Yu, and Huamin Feng. "Temporal ROI Align for Video Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1442–50. http://dx.doi.org/10.1609/aaai.v35i2.16234.
Alsrehin, Nawaf O., and Ahmad F. Klaib. "VMQ: an algorithm for measuring the Video Motion Quality". Bulletin of Electrical Engineering and Informatics 8, no. 1 (March 1, 2019): 231–38. http://dx.doi.org/10.11591/eei.v8i1.1418.
Park, Sunghyun, Kangyeol Kim, Junsoo Lee, Jaegul Choo, Joonseok Lee, Sookyung Kim, and Edward Choi. "Vid-ODE: Continuous-Time Video Generation with Neural Ordinary Differential Equation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2412–22. http://dx.doi.org/10.1609/aaai.v35i3.16342.
Chang, Yuchou, and Hong Lin. "Irrelevant frame removal for scene analysis using video hyperclique pattern and spectrum analysis". Journal of Advanced Computer Science & Technology 5, no. 1 (February 6, 2016): 1. http://dx.doi.org/10.14419/jacst.v5i1.4035.
Li, WenLin, DeYu Qi, ChangJian Zhang, Jing Guo, and JiaJun Yao. "Video Summarization Based on Mutual Information and Entropy Sliding Window Method". Entropy 22, no. 11 (November 12, 2020): 1285. http://dx.doi.org/10.3390/e22111285.
Li, Xin, QiLin Li, Dawei Yin, Lijun Zhang, and Dezhong Peng. "Unsupervised Video Summarization Based on An Encoder-Decoder Architecture". Journal of Physics: Conference Series 2258, no. 1 (April 1, 2022): 012067. http://dx.doi.org/10.1088/1742-6596/2258/1/012067.
Mahum, Rabbia, Aun Irtaza, Saeed Ur Rehman, Talha Meraj, and Hafiz Tayyab Rauf. "A Player-Specific Framework for Cricket Highlights Generation Using Deep Convolutional Neural Networks". Electronics 12, no. 1 (December 24, 2022): 65. http://dx.doi.org/10.3390/electronics12010065.
Wang, Yifan, Hao Wang, Kaijie Wang, and Wei Zhang. "Cloud Gaming Video Coding Optimization Based on Camera Motion-Guided Reference Frame Enhancement". Applied Sciences 12, no. 17 (August 25, 2022): 8504. http://dx.doi.org/10.3390/app12178504.
Kawin, Bruce. "Video Frame Enlargments". Film Quarterly 61, no. 3 (2008): 52–57. http://dx.doi.org/10.1525/fq.2008.61.3.52.
Sun, Fan, and Xuedong Tian. "Lecture Video Automatic Summarization System Based on DBNet and Kalman Filtering". Mathematical Problems in Engineering 2022 (August 31, 2022): 1–10. http://dx.doi.org/10.1155/2022/5303503.
He, Fei, Naiyu Gao, Qiaozhe Li, Senyao Du, Xin Zhao, and Kaiqi Huang. "Temporal Context Enhanced Feature Aggregation for Video Object Detection". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10941–48. http://dx.doi.org/10.1609/aaai.v34i07.6727.
SURAJ, M. G., D. S. GURU, and S. MANJUNATH. "RECOGNITION OF POSTAL CODES FROM FINGERSPELLING VIDEO SEQUENCE". International Journal of Image and Graphics 11, no. 01 (January 2011): 21–41. http://dx.doi.org/10.1142/s021946781100397x.
Kim, Jeongmin, and Yong Ju Jung. "Multi-Stage Network for Event-Based Video Deblurring with Residual Hint Attention". Sensors 23, no. 6 (March 7, 2023): 2880. http://dx.doi.org/10.3390/s23062880.
Li, Xinjie, and Huijuan Xu. "MEID: Mixture-of-Experts with Internal Distillation for Long-Tailed Video Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1451–59. http://dx.doi.org/10.1609/aaai.v37i2.25230.
Al Bdour, Nashat. "Encryption of Dynamic Areas of Images in Video based on Certain Geometric and Color Shapes". WSEAS TRANSACTIONS ON INFORMATION SCIENCE AND APPLICATIONS 20 (March 29, 2023): 109–18. http://dx.doi.org/10.37394/23209.2023.20.13.
Zhou, Yuanding, Baopu Li, Zhihui Wang, and Haojie Li. "Integrating Temporal and Spatial Attention for Video Action Recognition". Security and Communication Networks 2022 (April 26, 2022): 1–8. http://dx.doi.org/10.1155/2022/5094801.
Sinulingga, Hagai R., and Seong G. Kong. "Key-Frame Extraction for Reducing Human Effort in Object Detection Training for Video Surveillance". Electronics 12, no. 13 (July 5, 2023): 2956. http://dx.doi.org/10.3390/electronics12132956.
Guo, Quanmin, Hanlei Wang, and Jianhua Yang. "Night Vision Anti-Halation Method Based on Infrared and Visible Video Fusion". Sensors 22, no. 19 (October 2, 2022): 7494. http://dx.doi.org/10.3390/s22197494.
Guo, Xiaoping. "Intelligent Sports Video Classification Based on Deep Neural Network (DNN) Algorithm and Transfer Learning". Computational Intelligence and Neuroscience 2021 (November 24, 2021): 1–9. http://dx.doi.org/10.1155/2021/1825273.
Li, Dengshan, Rujing Wang, Peng Chen, Chengjun Xie, Qiong Zhou, and Xiufang Jia. "Visual Feature Learning on Video Object and Human Action Detection: A Systematic Review". Micromachines 13, no. 1 (December 31, 2021): 72. http://dx.doi.org/10.3390/mi13010072.
Liu, Yu-Lun, Yi-Tung Liao, Yen-Yu Lin, and Yung-Yu Chuang. "Deep Video Frame Interpolation Using Cyclic Frame Generation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8794–802. http://dx.doi.org/10.1609/aaai.v33i01.33018794.
Mielke, Maja, Peter Aerts, Chris Van Ginneken, Sam Van Wassenbergh, and Falk Mielke. "Progressive tracking: a novel procedure to facilitate manual digitization of videos". Biology Open 9, no. 11 (November 4, 2020): bio055962. http://dx.doi.org/10.1242/bio.055962.
Gill, Harsimranjit Singh, Tarandip Singh, Baldeep Kaur, Gurjot Singh Gaba, Mehedi Masud, and Mohammed Baz. "A Metaheuristic Approach to Secure Multimedia Big Data for IoT-Based Smart City Applications". Wireless Communications and Mobile Computing 2021 (October 4, 2021): 1–10. http://dx.doi.org/10.1155/2021/7147940.
Yang, Yixin, Zhiqang Xiang, and Jianbo Li. "Research on Low Frame Rate Video Compression Algorithm in the Context of New Media". Security and Communication Networks 2021 (September 27, 2021): 1–10. http://dx.doi.org/10.1155/2021/7494750.
Bhuvaneshwari, T., N. Ramadevi, and E. Kalpana. "Face Quality Detection in a Video Frame". International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (August 31, 2023): 2206–11. http://dx.doi.org/10.22214/ijraset.2023.55559.
Alfian, Alfiansyah Imanda Putra, Rusydi Umar, and Abdul Fadlil. "Penerapan Metode Localization Tampering dan Hashing untuk Deteksi Rekayasa Video Digital". Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 5, no. 2 (April 29, 2021): 400–406. http://dx.doi.org/10.29207/resti.v5i2.3015.
Yao, Ping. "Key Frame Extraction Method of Music and Dance Video Based on Multicore Learning Feature Fusion". Scientific Programming 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/9735392.
Saqib, Shazia, and Syed Kazmi. "Video Summarization for Sign Languages Using the Median of Entropy of Mean Frames Method". Entropy 20, no. 10 (September 29, 2018): 748. http://dx.doi.org/10.3390/e20100748.
Yan, Bo, Chuming Lin, and Weimin Tan. "Frame and Feature-Context Video Super-Resolution". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5597–604. http://dx.doi.org/10.1609/aaai.v33i01.33015597.
Wu, Wei Qiang, Lei Wang, Qin Yu Zhang, and Chang Jian Zhang. "The RTP Encapsulation Based on Frame Type Method for AVS Video". Applied Mechanics and Materials 263-266 (December 2012): 1803–8. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.1803.
Li, Dengshan, Rujing Wang, Chengjun Xie, Liu Liu, Jie Zhang, Rui Li, Fangyuan Wang, Man Zhou, and Wancai Liu. "A Recognition Method for Rice Plant Diseases and Pests Video Detection Based on Deep Convolutional Neural Network". Sensors 20, no. 3 (January 21, 2020): 578. http://dx.doi.org/10.3390/s20030578.
Lee, Ki-Sun, Eunyoung Lee, Bareun Choi, and Sung-Bom Pyun. "Automatic Pharyngeal Phase Recognition in Untrimmed Videofluoroscopic Swallowing Study Using Transfer Learning with Deep Convolutional Neural Networks". Diagnostics 11, no. 2 (February 13, 2021): 300. http://dx.doi.org/10.3390/diagnostics11020300.
Le, Trung-Nghia, Tam V. Nguyen, Quoc-Cuong Tran, Lam Nguyen, Trung-Hieu Hoang, Minh-Quan Le, and Minh-Triet Tran. "Interactive Video Object Mask Annotation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 16067–70. http://dx.doi.org/10.1609/aaai.v35i18.18014.
Chen, Yongjie, and Tieru Wu. "SATVSR: Scenario Adaptive Transformer for Cross Scenarios Video Super-Resolution". Journal of Physics: Conference Series 2456, no. 1 (March 1, 2023): 012028. http://dx.doi.org/10.1088/1742-6596/2456/1/012028.
HOSUR, PRABHUDEV, and ROLANDO CARRASCO. "ENHANCED FRAME-BASED VIDEO CODING TO SUPPORT CONTENT-BASED FUNCTIONALITIES". International Journal of Computational Intelligence and Applications 06, no. 02 (June 2006): 161–75. http://dx.doi.org/10.1142/s1469026806001939.
Qu, Zhong, and Teng Fei Gao. "An Improved Algorithm of Keyframe Extraction for Video Summarization". Advanced Materials Research 225-226 (April 2011): 807–11. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.807.
Li, Qian, Rangding Wang, and Dawen Xu. "An Inter-Frame Forgery Detection Algorithm for Surveillance Video". Information 9, no. 12 (November 28, 2018): 301. http://dx.doi.org/10.3390/info9120301.
Lv, Changhai, Junfeng Li, and Jian Tian. "Key Frame Extraction for Sports Training Based on Improved Deep Learning". Scientific Programming 2021 (September 1, 2021): 1–8. http://dx.doi.org/10.1155/2021/1016574.
Sun, Yunyun, Peng Li, Zhaohui Jiang, and Sujun Hu. "Feature fusion and clustering for key frame extraction". Mathematical Biosciences and Engineering 18, no. 6 (2021): 9294–311. http://dx.doi.org/10.3934/mbe.2021457.
Desai, Padmashree, C. Sujatha, Saumyajit Chakraborty, Saurav Ansuman, Sanika Bhandari, and Sharan Kardiguddi. "Next frame prediction using ConvLSTM". Journal of Physics: Conference Series 2161, no. 1 (January 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2161/1/012024.
Zhang, Xiao-Yu, Haichao Shi, Changsheng Li, Kai Zheng, Xiaobin Zhu, and Lixin Duan. "Learning Transferable Self-Attentive Representations for Action Recognition in Untrimmed Videos with Weak Supervision". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9227–34. http://dx.doi.org/10.1609/aaai.v33i01.33019227.
Mashtalir, Sergii, and Olena Mikhnova. "Key Frame Extraction from Video". International Journal of Computer Vision and Image Processing 4, no. 2 (July 2014): 68–79. http://dx.doi.org/10.4018/ijcvip.2014040105.
Suin, Maitreya, and A. N. Rajagopalan. "An Efficient Framework for Dense Video Captioning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12039–46. http://dx.doi.org/10.1609/aaai.v34i07.6881.
Liang, Buyun, Na Li, Zheng He, Zhongyuan Wang, Youming Fu, and Tao Lu. "News Video Summarization Combining SURF and Color Histogram Features". Entropy 23, no. 8 (July 30, 2021): 982. http://dx.doi.org/10.3390/e23080982.
Ren, Honge, Walid Atwa, Haosu Zhang, Shafiq Muhammad, and Mahmoud Emam. "Frame Duplication Forgery Detection and Localization Algorithm Based on the Improved Levenshtein Distance". Scientific Programming 2021 (March 31, 2021): 1–10. http://dx.doi.org/10.1155/2021/5595850.
Kumar, Vikas, Tanupriya Choudhury, Suresh Chandra Satapathy, Ravi Tomar, and Archit Aggarwal. "Video super resolution using convolutional neural network and image fusion techniques". International Journal of Knowledge-based and Intelligent Engineering Systems 24, no. 4 (January 18, 2021): 279–87. http://dx.doi.org/10.3233/kes-190037.
Gowda, Shreyank N., Marcus Rohrbach, and Laura Sevilla-Lara. "SMART Frame Selection for Action Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1451–59. http://dx.doi.org/10.1609/aaai.v35i2.16235.