Ready-made bibliography on the topic "FAKE VIDEOS"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of up-to-date articles, books, dissertations, abstracts, and other scholarly sources on the topic "FAKE VIDEOS".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, provided the relevant details are available in the work's metadata.
Journal articles on the topic "FAKE VIDEOS"
Abidin, Muhammad Indra, Ingrid Nurtanio, and Andani Achmad. "Deepfake Detection in Videos Using Long Short-Term Memory and CNN ResNext." ILKOM Jurnal Ilmiah 14, no. 3 (December 19, 2022): 178–85. http://dx.doi.org/10.33096/ilkom.v14i3.1254.178-185.
López-Gil, Juan-Miguel, Rosa Gil, and Roberto García. "Do Deepfakes Adequately Display Emotions? A Study on Deepfake Facial Emotion Expression." Computational Intelligence and Neuroscience 2022 (October 18, 2022): 1–12. http://dx.doi.org/10.1155/2022/1332122.
Arunkumar, P. M., Yalamanchili Sangeetha, P. Vishnu Raja, and S. N. Sangeetha. "Deep Learning for Forgery Face Detection Using Fuzzy Fisher Capsule Dual Graph." Information Technology and Control 51, no. 3 (September 23, 2022): 563–74. http://dx.doi.org/10.5755/j01.itc.51.3.31510.
Wang, Shuting (Ada), Min-Seok Pang, and Paul Pavlou. "Seeing Is Believing? How Including a Video in Fake News Influences Users’ Reporting of Fake News to Social Media Platforms." MIS Quarterly 45, no. 3 (September 1, 2022): 1323–54. http://dx.doi.org/10.25300/misq/2022/16296.
Deng, Liwei, Hongfei Suo, and Dongjie Li. "Deepfake Video Detection Based on EfficientNet-V2 Network." Computational Intelligence and Neuroscience 2022 (April 15, 2022): 1–13. http://dx.doi.org/10.1155/2022/3441549.
Shahar, Hadas, and Hagit Hel-Or. "Fake Video Detection Using Facial Color." Color and Imaging Conference 2020, no. 28 (November 4, 2020): 175–80. http://dx.doi.org/10.2352/issn.2169-2629.2020.28.27.
Lin, Yih-Kai, and Hao-Lun Sun. "Few-Shot Training GAN for Face Forgery Classification and Segmentation Based on the Fine-Tune Approach." Electronics 12, no. 6 (March 16, 2023): 1417. http://dx.doi.org/10.3390/electronics12061417.
Liang, Xiaoyun, Zhaohong Li, Zhonghao Li, and Zhenzhen Zhang. "Fake Bitrate Detection of HEVC Videos Based on Prediction Process." Symmetry 11, no. 7 (July 15, 2019): 918. http://dx.doi.org/10.3390/sym11070918.
Pei, Pengfei, Xianfeng Zhao, Jinchuan Li, Yun Cao, and Xuyuan Lai. "Vision Transformer-Based Video Hashing Retrieval for Tracing the Source of Fake Videos." Security and Communication Networks 2023 (June 28, 2023): 1–16. http://dx.doi.org/10.1155/2023/5349392.
Das, Rashmiranjan, Gaurav Negi, and Alan F. Smeaton. "Detecting Deepfake Videos Using Euler Video Magnification." Electronic Imaging 2021, no. 4 (January 18, 2021): 272-1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-272.
Doctoral dissertations on the topic "FAKE VIDEOS"
Zou, Weiwen. "Face recognition from video." HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1431.
LI, Songyu. "A New Hands-free Face to Face Video Communication Method: Profile based frontal face video reconstruction." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-152457.
Liu, Yiran. "Consistent and Accurate Face Tracking and Recognition in Videos." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1588598739996101.
Cheng, Xin. "Nonrigid face alignment for unknown subject in video." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65338/1/Xin_Cheng_Thesis.pdf.
Jin, Yonghua. "A video human face tracker." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0032/MQ62226.pdf.
Arandjelović, Ognjen. "Automatic face recognition from video." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613375.
Omizo, Ryan Masaaki. "Facing Vernacular Video." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339184415.
Hadid, A. (Abdenour). "Learning and recognizing faces: from still images to video sequences." Doctoral thesis, University of Oulu, 2005. http://urn.fi/urn:isbn:9514277597.
Fernando, Warnakulasuriya Anil Chandana. "Video processing in the compressed domain." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326724.
Wibowo, Moh Edi. "Towards pose-robust face recognition on video." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/77836/1/Moh%20Edi_Wibowo_Thesis.pdf.
Pełny tekst źródłaKsiążki na temat "FAKE VIDEOS"
Mezaris, Vasileios, Lyndon Nixon, Symeon Papadopoulos, and Denis Teyssou, eds. Video Verification in the Fake News Era. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0.
National Film Board of Canada, ed. Face to face video guide: Video resources for race relations training and education. Montréal: National Film Board of Canada, 1993.
Ji, Qiang, Thomas B. Moeslund, Gang Hua, and Kamal Nasrollahi, eds. Face and Facial Expression Recognition from Real World Videos. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7.
Bai, Xiang, Yi Fang, Yangqing Jia, Meina Kan, Shiguang Shan, Chunhua Shen, Jingdong Wang, et al., eds. Video Analytics. Face and Facial Expression Recognition. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12177-8.
Screening the face. Houndmills, Basingstoke, Hampshire: Palgrave Macmillan, 2012.
Prager, Alex. Face in the crowd. Washington, DC: Corcoran Gallery of Art, 2013.
Nasrollahi, Kamal, Cosimo Distante, Gang Hua, Andrea Cavallaro, Thomas B. Moeslund, Sebastiano Battiato, and Qiang Ji, eds. Video Analytics. Face and Facial Expression Recognition and Audience Measurement. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56687-0.
Noll, Katherine. Scholastic's Pokémon hall of fame. New York: Scholastic, 2004.
Levy, Frederick. 15 Minutes of Fame. New York: Penguin Group USA, Inc., 2008.
Kuritsyn, Viacheslav, Nailia Allakhverdieva, Marat Gelʹman, and Iuliia Sorokina. Litso nevesty: Sovremennoe iskusstvo Kazakhstana = Face of the bride: contemporary art of Kazakhstan. Permʹ: Muzeĭ sovremennogo iskusstva PERMM, 2012.
Book chapters on the topic "FAKE VIDEOS"
Roy, Ritaban, Indu Joshi, Abhijit Das, and Antitza Dantcheva. "3D CNN Architectures and Attention Mechanisms for Deepfake Detection." In Handbook of Digital Face Manipulation and Detection, 213–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_10.
Hedge, Amrita Shivanand, M. N. Vinutha, Kona Supriya, S. Nagasundari, and Prasad B. Honnavalli. "CLH: Approach for Detecting Deep Fake Videos." In Communications in Computer and Information Science, 539–51. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-8059-5_33.
Hernandez-Ortega, Javier, Ruben Tolosana, Julian Fierrez, and Aythami Morales. "DeepFakes Detection Based on Heart Rate Estimation: Single- and Multi-frame." In Handbook of Digital Face Manipulation and Detection, 255–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_12.
Markatopoulou, Foteini, Markos Zampoglou, Evlampios Apostolidis, Symeon Papadopoulos, Vasileios Mezaris, Ioannis Patras, and Ioannis Kompatsiaris. "Finding Semantically Related Videos in Closed Collections." In Video Verification in the Fake News Era, 127–59. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_5.
Kordopatis-Zilos, Giorgos, Symeon Papadopoulos, Ioannis Patras, and Ioannis Kompatsiaris. "Finding Near-Duplicate Videos in Large-Scale Collections." In Video Verification in the Fake News Era, 91–126. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_4.
Singh, Aadya, Abey Alex George, Pankaj Gupta, and Lakshmi Gadhikar. "ShallowFake-Detection of Fake Videos Using Deep Learning." In Conference Proceedings of ICDLAIR2019, 170–78. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67187-7_19.
Papadopoulou, Olga, Markos Zampoglou, Symeon Papadopoulos, and Ioannis Kompatsiaris. "Verification of Web Videos Through Analysis of Their Online Context." In Video Verification in the Fake News Era, 191–221. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_7.
Long, Chengjiang, Arslan Basharat, and Anthony Hoogs. "Video Frame Deletion and Duplication." In Multimedia Forensics, 333–62. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_13.
Boccignone, Giuseppe, Sathya Bursic, Vittorio Cuculo, Alessandro D’Amelio, Giuliano Grossi, Raffaella Lanzarotti, and Sabrina Patania. "DeepFakes Have No Heart: A Simple rPPG-Based Method to Reveal Fake Videos." In Image Analysis and Processing – ICIAP 2022, 186–95. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06430-2_16.
Bao, Heng, Lirui Deng, Jiazhi Guan, Liang Zhang, and Xunxun Chen. "Improving Deepfake Video Detection with Comprehensive Self-consistency Learning." In Communications in Computer and Information Science, 151–61. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8285-9_11.
Conference papers on the topic "FAKE VIDEOS"
Shang, Jiacheng, and Jie Wu. "Protecting Real-time Video Chat against Fake Facial Videos Generated by Face Reenactment." In 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2020. http://dx.doi.org/10.1109/icdcs47774.2020.00082.
Liu, Zhenguang, Sifan Wu, Chejian Xu, Xiang Wang, Lei Zhu, Shuang Wu, and Fuli Feng. "Copy Motion From One to Another: Fake Motion Video Generation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/171.
Zhang, Daichi, Chenyu Li, Fanzhao Lin, Dan Zeng, and Shiming Ge. "Detecting Deepfake Videos with Temporal Dropout 3DCNN." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/178.
Celebi, Naciye, Qingzhong Liu, and Muhammed Karatoprak. "A Survey of Deep Fake Detection for Trial Courts." In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120919.
Agarwal, Shruti, Hany Farid, Ohad Fried, and Maneesh Agrawala. "Detecting Deep-Fake Videos from Phoneme-Viseme Mismatches." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2020. http://dx.doi.org/10.1109/cvprw50498.2020.00338.
Agarwal, Shruti, Hany Farid, Tarek El-Gaaly, and Ser-Nam Lim. "Detecting Deep-Fake Videos from Appearance and Behavior." In 2020 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 2020. http://dx.doi.org/10.1109/wifs49906.2020.9360904.
Agarwal, Shruti, and Hany Farid. "Detecting Deep-Fake Videos from Aural and Oral Dynamics." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00109.
Gerstner, Candice R., and Hany Farid. "Detecting Real-Time Deep-Fake Videos Using Active Illumination." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2022. http://dx.doi.org/10.1109/cvprw56347.2022.00015.
Chauhan, Ruby, Renu Popli, and Isha Kansal. "A Comprehensive Review on Fake Images/Videos Detection Techniques." In 2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). IEEE, 2022. http://dx.doi.org/10.1109/icrito56286.2022.9964871.
Mira, Fahad. "Deep Learning Technique for Recognition of Deep Fake Videos." In 2023 IEEE IAS Global Conference on Emerging Technologies (GlobConET). IEEE, 2023. http://dx.doi.org/10.1109/globconet56651.2023.10150143.
Organizational reports on the topic "FAKE VIDEOS"
Grother, Patrick J., George W. Quinn, and Mei Lee Ngan. Face in video evaluation (FIVE) face recognition of non-cooperative subjects. Gaithersburg, MD: National Institute of Standards and Technology, March 2017. http://dx.doi.org/10.6028/nist.ir.8173.
Pełny tekst źródłaChen, Yi-Chen, Vishal M. Patel, Sumit Shekhar, Rama Chellappa i P. Jonathon Phillips. Video-based face recognition via joint sparse representation. Gaithersburg, MD: National Institute of Standards and Technology, 2013. http://dx.doi.org/10.6028/nist.ir.7906.
Pełny tekst źródłaLee, Yooyoung, P. Jonathon Phillips, James J. Filliben, J. Ross Beveridge i Hao Zhang. Identifying face quality and factor measures for video. National Institute of Standards and Technology, maj 2014. http://dx.doi.org/10.6028/nist.ir.8004.
Pełny tekst źródłaТарасова, Олена Юріївна, i Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.
Pełny tekst źródłaDrury, J., S. Arias, T. Au-Yeung, D. Barr, L. Bell, T. Butler, H. Carter i in. Public behaviour in response to perceived hostile threats: an evidence base and guide for practitioners and policymakers. University of Sussex, 2023. http://dx.doi.org/10.20919/vjvt7448.
Pełny tekst źródłaNeural correlates of face familiarity in institutionalised children and links to attachment disordered behaviour. ACAMH, marzec 2023. http://dx.doi.org/10.13056/acamh.23409.
Pełny tekst źródłaCybervictimization in adolescence and its association with subsequent suicidal ideation/attempt beyond face‐to‐face victimization: a longitudinal population‐based study – video Q & A. ACAMH, wrzesień 2020. http://dx.doi.org/10.13056/acamh.13319.