Ready-made bibliography on the topic "FAKE VIDEOS"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "FAKE VIDEOS".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in ".pdf" format and read its abstract online, if the relevant parameters are available in the metadata.

Journal articles on the topic "FAKE VIDEOS"

1

Abidin, Muhammad Indra, Ingrid Nurtanio and Andani Achmad. "Deepfake Detection in Videos Using Long Short-Term Memory and CNN ResNext". ILKOM Jurnal Ilmiah 14, no. 3 (December 19, 2022): 178–85. http://dx.doi.org/10.33096/ilkom.v14i3.1254.178-185.

Abstract:
Deep-fake video is a video synthesis technique that replaces a person's face in a video with someone else's face. Because deep-fake technology has been used to manipulate information, it is necessary to detect deep-fakes in videos. This paper aimed to detect deep-fakes in videos using the ResNext Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) algorithms. The video data was divided into four types, namely videos with 10, 20, 40, and 60 frames. Face detection was used to crop each image to 100 x 100 pixels, and the pictures were then processed using ResNext CNN and LSTM. A confusion matrix was employed to measure the performance of the ResNext CNN-LSTM algorithm; the indicators used were accuracy, precision, and recall. The results of data classification showed that the highest accuracy value was 90% for data with 40 and 60 frames, while data with 10 frames had the lowest accuracy at only 52%. ResNext CNN-LSTM was able to detect deep-fakes in videos well even though the size of the image was small.
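The confusion-matrix evaluation the abstract describes (accuracy, precision, recall over real/fake labels) can be sketched in plain Python; the labels below are illustrative, not the paper's data:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, precision and recall for binary labels (1 = fake, 0 = real)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Illustrative ground truth and detector output for ten videos
truth = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
preds = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
acc, prec, rec = confusion_metrics(truth, preds)
```

The same three indicators are what the paper reports per frame-count setting (e.g., 90% accuracy at 40 and 60 frames).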
2

López-Gil, Juan-Miguel, Rosa Gil and Roberto García. "Do Deepfakes Adequately Display Emotions? A Study on Deepfake Facial Emotion Expression". Computational Intelligence and Neuroscience 2022 (October 18, 2022): 1–12. http://dx.doi.org/10.1155/2022/1332122.

Abstract:
Recent technological advancements in Artificial Intelligence make it easy to create deepfakes and hyper-realistic videos, in which images and video clips are processed to create fake videos that appear authentic. Many of them are based on swapping faces without the consent of the person whose appearance and voice are used. As emotions are inherent in human communication, studying how deepfakes transfer emotional expressions from original to fakes is relevant. In this work, we conduct an in-depth study on facial emotional expression in deepfakes using a well-known face swap-based deepfake database. Firstly, we extracted the photograms from their videos. Then, we analyzed the emotional expression in the original and faked versions of video recordings for all performers in the database. Results show that emotional expressions are not adequately transferred between original recordings and the deepfakes created from them. High variability in emotions and performers detected between original and fake recordings indicates that performer emotion expressiveness should be considered for better deepfake generation or detection.
3

Arunkumar, P. M., Yalamanchili Sangeetha, P. Vishnu Raja and S. N. Sangeetha. "Deep Learning for Forgery Face Detection Using Fuzzy Fisher Capsule Dual Graph". Information Technology and Control 51, no. 3 (September 23, 2022): 563–74. http://dx.doi.org/10.5755/j01.itc.51.3.31510.

Abstract:
In digital manipulation, creating fake images/videos or swapping a person's face into another's images/videos using a deep learning algorithm is termed deep fake. Fake pornography is particularly harmful, as is the inclusion of fake content in hoaxes, fake news, and financial fraud. Deep learning is an effective tool for the detection of deep fake images and videos. With the advancement of Generative Adversarial Networks (GAN) in deep learning, deep fakes have become widespread on social media platforms. This may threaten the public, so detection of deep fake images/videos is needed. Many research works have addressed the detection of forged images/videos, but those methods are inefficient at detecting new threats or newly created forgery images or videos, and their time consumption is high. Therefore, this paper focuses on the detection of different types of fake images or videos using Fuzzy Fisher face with Capsule dual graph (FFF-CDG). The datasets used in this work are FFHQ, 100K-Faces, DFFD, VGG-Face2, and Wild Deepfake. On the FFHQ dataset, the existing and proposed systems obtained accuracies of 81.5%, 89.32%, 91.35%, and 95.82%, respectively.
4

Wang, Shuting (Ada), Min-Seok Pang and Paul Pavlou. "Seeing Is Believing? How Including a Video in Fake News Influences Users' Reporting of Fake News to Social Media Platforms". MIS Quarterly 45, no. 3 (September 1, 2022): 1323–54. http://dx.doi.org/10.25300/misq/2022/16296.

Abstract:
Social media platforms, such as Facebook, Instagram, and Twitter, are combating the spread of fake news by developing systems that allow their users to report fake news. However, it remains unclear whether these reporting systems that harness the "wisdom of the crowd" are effective. Notably, concerns have been raised that the popularity of videos may hamper users' reporting of fake news. The persuasive power of videos may render fake news more deceptive and less likely to be reported in practice. However, this is neither theoretically nor empirically straightforward, as videos not only affect users' ability to detect fake news but also impact their willingness to report and their engagement (i.e., likes, shares, and comments), which would further spread fake news. Using a unique dataset from a leading social media platform, we empirically examine how including a video in a fake news post affects the number of users reporting the post to the platform. Our results indicate that including a video significantly increases the number of users reporting the fake news post to the social media platform. Additionally, we find that the sentiment intensity of the fake news text content, especially when the sentiment is positive, attenuates the effect of including a video. Randomized experiments and a set of mediation analyses are included to uncover the underlying mechanisms. We contribute to the information systems literature by examining how social media platforms can leverage their users to report fake news, and how different formats (e.g., videos and text) of fake news interact to influence users' reporting behavior. Social media platforms that seek to leverage the "wisdom of the crowd" to combat the proliferation of fake news should consider both the popularity of videos and the role of text sentiment in fake news to adjust their strategies.
5

Deng, Liwei, Hongfei Suo and Dongjie Li. "Deepfake Video Detection Based on EfficientNet-V2 Network". Computational Intelligence and Neuroscience 2022 (April 15, 2022): 1–13. http://dx.doi.org/10.1155/2022/3441549.

Abstract:
As technology advances and society evolves, deep learning is becoming easier to operate. Many unscrupulous people are using deep learning technology to create fake pictures and fake videos that seriously endanger the stability of the country and society. Examples include faking politicians to make inappropriate statements, using face-swapping technology to spread false information, and creating fake videos to obtain money. In view of this social problem, based on the original fake face detection system, this paper proposes using a new network of EfficientNet-V2 to distinguish the authenticity of pictures and videos. Moreover, our method was used to deal with two current mainstream large-scale fake face datasets, and EfficientNet-V2 highlighted the superior performance of the new network by comparing the existing detection network with the actual training and testing results. Finally, based on improving the accuracy of the detection system in distinguishing real and fake faces, the actual pictures and videos are detected, and an excellent visualization effect is achieved.
6

Shahar, Hadas and Hagit Hel-Or. "Fake Video Detection Using Facial Color". Color and Imaging Conference 2020, no. 28 (November 4, 2020): 175–80. http://dx.doi.org/10.2352/issn.2169-2629.2020.28.27.

Abstract:
The field of image forgery is widely studied, and with the recent introduction of deep-network-based image synthesis, detecting fake image sequences has become more challenging. Specifically, detecting spoofing attacks is of grave importance. In this study we exploit minute changes in the facial color of human faces in videos to distinguish real from fake videos. Even when idle, human skin color changes with sub-dermal blood flow, and these changes are enhanced under stress and emotion. We show that facial color extracted along a video sequence can serve as a feature for training deep neural networks to successfully distinguish fake from real face sequences.
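The per-frame facial color feature described here can be sketched as follows; the tiny synthetic "frames" and the fixed rectangular face region are stand-ins for real face-detector output, not the authors' pipeline:

```python
def mean_region_color(frame, region):
    """Average (R, G, B) over a rectangular face region of one frame."""
    top, left, bottom, right = region
    pixels = [frame[y][x] for y in range(top, bottom) for x in range(left, right)]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def facial_color_signal(frames, region):
    """One mean-color sample per frame -> temporal color signal for a classifier."""
    return [mean_region_color(f, region) for f in frames]

# Two synthetic 2x2-pixel frames; the "face" region is the whole frame here.
frames = [
    [[(100, 80, 70), (102, 82, 72)], [(98, 78, 68), (100, 80, 70)]],
    [[(104, 84, 74), (106, 86, 76)], [(102, 82, 72), (104, 84, 74)]],
]
signal = facial_color_signal(frames, (0, 0, 2, 2))
```

The resulting temporal signal is the kind of feature the abstract proposes feeding to a deep network to separate real from fake face sequences.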
7

Lin, Yih-Kai and Hao-Lun Sun. "Few-Shot Training GAN for Face Forgery Classification and Segmentation Based on the Fine-Tune Approach". Electronics 12, no. 6 (March 16, 2023): 1417. http://dx.doi.org/10.3390/electronics12061417.

Abstract:
There are many techniques for faking videos that can alter the face in a video to look like another person. This type of fake video has caused a number of information security crises. Many deep learning-based detection methods have been developed for these forgery methods. These detection methods require a large amount of training data and thus cannot develop detectors quickly when new forgery methods emerge. In addition, traditional forgery detection refers to a classifier that outputs real or fake versions of the input images. If the detector can output a prediction of the fake area, i.e., a segmentation version of forgery detection, it will be a great help for forensic work. Thus, in this paper, we propose a GAN-based deep learning approach that allows detection of forged regions using a smaller number of training samples. The generator part of the proposed architecture is used to synthesize predicted segmentation which indicates the fakeness of each pixel. To solve the classification problem, a threshold on the percentage of fake pixels is used to decide whether the input image is fake. For detecting fake videos, frames of the video are extracted and it is detected whether they are fake. If the percentage of fake frames is higher than a given threshold, the video is classified as fake. Compared with other papers, the experimental results show that our method has better classification and segmentation.
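The video-level decision rule the abstract describes, labeling a video fake when the fraction of fake frames exceeds a threshold, is simple to sketch; the frame flags and the 0.5 threshold below are illustrative, not the paper's tuned values:

```python
def classify_video(frame_is_fake, threshold=0.5):
    """Label the whole video fake when the fake-frame fraction exceeds the threshold."""
    fraction = sum(frame_is_fake) / len(frame_is_fake)
    return "fake" if fraction > threshold else "real"

frames_a = [1, 1, 1, 0, 1]   # 80% of frames flagged fake by a per-frame detector
frames_b = [0, 0, 1, 0, 0]   # 20% of frames flagged fake
```

In the paper the per-frame flag itself comes from thresholding the percentage of fake pixels in the predicted segmentation map.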
8

Liang, Xiaoyun, Zhaohong Li, Zhonghao Li and Zhenzhen Zhang. "Fake Bitrate Detection of HEVC Videos Based on Prediction Process". Symmetry 11, no. 7 (July 15, 2019): 918. http://dx.doi.org/10.3390/sym11070918.

Abstract:
To fraudulently boost click-through rates, some merchants recompress low-bitrate video to a higher bitrate without improving the video quality. This behavior deceives viewers and wastes network resources, so a stable algorithm that detects fake-bitrate videos is urgently needed. High-Efficiency Video Coding (HEVC) is a popular video coding standard worldwide. Hence, in this paper, a robust algorithm is proposed to detect HEVC fake-bitrate videos. Firstly, five effective feature sets are extracted from the prediction process of HEVC, including Coding Unit of I-picture/P-picture partitioning modes, Prediction Unit of I-picture/P-picture partitioning modes, and Intra Prediction Modes of I-picture. Secondly, feature concatenation is adopted to enhance the expressiveness and improve the effectiveness of the features. Finally, five single feature sets and three concatenated feature sets are separately sent to a support vector machine for modeling and testing. The performance of the proposed algorithm is compared with state-of-the-art algorithms on HEVC videos of various resolutions and fake bitrates. The results show that the proposed algorithm not only better detects HEVC fake-bitrate videos but also has strong robustness against frame deletion, copy-paste, and shifted Group of Pictures structure attacks.
9

Pei, Pengfei, Xianfeng Zhao, Jinchuan Li, Yun Cao and Xuyuan Lai. "Vision Transformer-Based Video Hashing Retrieval for Tracing the Source of Fake Videos". Security and Communication Networks 2023 (June 28, 2023): 1–16. http://dx.doi.org/10.1155/2023/5349392.

Abstract:
With the increasing negative impact of fake videos on individuals and society, it is crucial to detect different types of forgeries. Existing forgery detection methods often output a probability value, which lacks interpretability and reliability. In this paper, we propose a source-tracing-based solution to find the original real video of a fake video, which can provide more reliable results in practical situations. However, directly applying retrieval methods to traceability tasks is infeasible since traceability tasks require finding the unique source video from a large number of real videos, while retrieval methods are typically used to find similar videos. In addition, training an effective hashing center to distinguish similar real videos is challenging. To address the above issues, we introduce a novel loss function, hash triplet loss, to capture fine-grained features with subtle differences. Extensive experiments show that our method outperforms state-of-the-art methods on multiple datasets of object removal (video inpainting), object addition (video splicing), and object swapping (face swapping), demonstrating excellent robustness and cross-dataset performance. The effectiveness of the hash triplet loss for nondifferentiable optimization problems is validated through experiments in similar video scenes.
10

Das, Rashmiranjan, Gaurav Negi and Alan F. Smeaton. "Detecting Deepfake Videos Using Euler Video Magnification". Electronic Imaging 2021, no. 4 (January 18, 2021): 272–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-272.

Abstract:
Recent advances in artificial intelligence make it progressively harder to distinguish between genuine and counterfeit media, especially images and videos. One recent development is the rise of deepfake videos, based on manipulating videos using advanced machine learning techniques. This involves replacing the face of an individual in a source video with the face of a second person in the destination video. Deepfakes are becoming increasingly seamless and simpler to compute. Combined with the reach and speed of social media, deepfakes could easily fool individuals when depicting someone saying things that never happened, and thus could persuade people into believing fictional scenarios, creating distress, and spreading fake news. In this paper, we examine a technique for possible identification of deepfake videos. We use Euler video magnification, which applies spatial decomposition and temporal filtering on video data to highlight and magnify hidden features like skin pulsation and subtle motions. Our approach uses features extracted from the Euler technique to train three models to classify counterfeit and unaltered videos, and we compare the results with existing techniques.
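Euler video magnification amplifies a chosen temporal frequency band of each pixel signal. A minimal 1-D sketch of that idea, using the difference of two moving averages as a crude band-pass filter (all window sizes and the amplification factor are illustrative, not the paper's parameters):

```python
def moving_average(signal, window):
    """Simple causal moving average; pads the start by repeating the first value."""
    padded = [signal[0]] * (window - 1) + list(signal)
    return [sum(padded[i:i + window]) / window for i in range(len(signal))]

def magnify_band(signal, fast_win=2, slow_win=6, alpha=10.0):
    """Amplify the band captured by the difference of a fast and a slow moving average."""
    fast = moving_average(signal, fast_win)
    slow = moving_average(signal, slow_win)
    band = [f - s for f, s in zip(fast, slow)]
    return [x + alpha * b for x, b in zip(signal, band)]

# A flat pixel intensity trace with one subtle pulse, e.g. from sub-dermal blood flow
pixel = [100.0] * 8 + [100.5] * 4 + [100.0] * 8
amplified = magnify_band(pixel)
```

After amplification the subtle pulse deviates far more from the baseline, which is what makes such hidden motions usable as classifier features.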
Doctoral dissertations on the topic "FAKE VIDEOS"

1

Zou, Weiwen. "Face recognition from video". HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1431.

2

LI, Songyu. "A New Hands-free Face to Face Video Communication Method : Profile based frontal face video reconstruction". Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-152457.

Abstract:
This thesis proposes a method to reconstruct a frontal facial video based on encoding done with the facial profile of another video sequence. The reconstructed facial video will have similar facial expression changes as the changes in the profile video. First, the profiles for both the reference video and the test video are captured by edge detection. Then, asymmetrical principal component analysis is used to model the correspondence between the profile and the frontal face. This allows encoding from a profile and decoding of the frontal face of another video. Another solution is to use dynamic time warping to match the profiles and select the best-matching corresponding frontal face frame for reconstruction. With this method, we can reconstruct the test frontal video so that it has similar changes in facial expression as the reference video. To improve the quality of the resulting video, Local Linear Embedding is used to give the result video a smoother transition between frames.
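The dynamic-time-warping alternative mentioned in the abstract can be sketched with the classic DTW cost matrix; reducing each profile to a single scalar per frame is a simplification of real profile curves, and the sequences below are made up for illustration:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping cost between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

# Pick the reference profile whose warped distance to the test profile is smallest.
test_profile = [1.0, 2.0, 3.0, 2.0]
references = {"frame_a": [1.0, 2.0, 2.0, 3.0, 2.0], "frame_b": [5.0, 5.0, 4.0, 4.0]}
best = min(references, key=lambda k: dtw_distance(test_profile, references[k]))
```

DTW tolerates the differing speeds at which two faces move through the same expression, which is why it suits this frame-matching step.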
3

Liu, Yiran. "Consistent and Accurate Face Tracking and Recognition in Videos". Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1588598739996101.
4

Cheng, Xin. "Nonrigid face alignment for unknown subject in video". Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65338/1/Xin_Cheng_Thesis.pdf.

Abstract:
Non-rigid face alignment is a very important task in a large range of applications, but existing tracking-based non-rigid face alignment methods are either inaccurate or require a person-specific model. This dissertation develops simultaneous alignment algorithms that overcome these constraints and provide alignment with high accuracy, efficiency, and robustness to varying image conditions, while requiring only a generic model.
5

Jin, Yonghua. "A video human face tracker". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0032/MQ62226.pdf.
6

Arandjelović, Ognjen. "Automatic face recognition from video". Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613375.
7

Omizo, Ryan Masaaki. "Facing Vernacular Video". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339184415.
8

Hadid, A. (Abdenour). "Learning and recognizing faces: from still images to video sequences". Doctoral thesis, University of Oulu, 2005. http://urn.fi/urn:isbn:9514277597.

Abstract:
Automatic face recognition is a challenging problem which has received much attention in recent years due to its many applications in different fields such as law enforcement, security, and human-machine interaction. To date, no technique provides a robust solution for all situations and applications. From still gray images to face sequences (passing through color images), this thesis provides new algorithms to learn, detect and recognize faces. It also analyzes some emerging directions such as the integration of facial dynamics in the recognition process. To recognize faces, the thesis proposes a new approach based on Local Binary Patterns (LBP) which consists of dividing the facial image into small regions from which LBP features are extracted and concatenated into a single feature histogram efficiently representing the face image. Face recognition is then performed using a nearest-neighbor classifier in the computed feature space with Chi-square as the dissimilarity metric. Extensive experiments clearly show the superiority of the proposed method over state-of-the-art algorithms on FERET tests. To detect faces, another LBP-based representation, suitable for low-resolution images, is derived. Using the new representation, a second-degree polynomial kernel SVM classifier is trained to detect frontal faces in complex gray-scale images. Experimental results using several complex images show that the proposed approach performs favorably compared to state-of-the-art methods. Additionally, experiments with detecting and recognizing low-resolution faces are carried out to demonstrate that the same facial representation can be efficiently used for both the detection and recognition of faces in low-resolution images. To detect faces when the color cue is available, the thesis proposes an approach based on a robust model of skin color, called a skin locus, which is used to extract the skin-like regions. After orientation normalization and based on verifying a set of criteria (face symmetry, presence of some facial features, variance of pixel intensities and connected component arrangement), only facial regions are selected. To learn and visualize faces in video sequences, the recently proposed algorithms for unsupervised learning and dimensionality reduction (LLE and ISOMAP), as well as well-known ones (PCA, SOM etc.), are considered and investigated. Some extensions are proposed and a new approach for selecting face models from video sequences is developed. The approach is based on representing the face manifold in a low-dimensional space using the Locally Linear Embedding (LLE) algorithm and then performing K-means clustering. To analyze the emerging direction in face recognition which consists of combining facial shape and dynamic personal characteristics for enhancing face recognition performance, the thesis considers two factors (face sequence length and image quality) and studies their effects on the performance of video-based systems which attempt to use a spatio-temporal representation instead of a still-image-based one. The extensive experimental results show that motion information enhances automatic recognition, but not in a systematic way as in the human visual system. Finally, some key findings of the thesis are considered and used for building a system for access control based on detecting and recognizing faces.
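The LBP representation described in this abstract can be sketched for a single pixel neighborhood; real systems compute a code at every pixel of every region and concatenate regional histograms, and the tiny image and histograms below are illustrative only:

```python
def lbp_code(image, y, x):
    """8-neighbor local binary pattern code for pixel (y, x), clockwise from top-left."""
    center = image[y][x]
    neighbors = [
        image[y - 1][x - 1], image[y - 1][x], image[y - 1][x + 1],
        image[y][x + 1], image[y + 1][x + 1], image[y + 1][x],
        image[y + 1][x - 1], image[y][x - 1],
    ]
    # Each neighbor >= center contributes one bit to the 8-bit code.
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= center)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square dissimilarity between two histograms, as used for LBP matching."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

img = [
    [10, 20, 30],
    [40, 25, 15],
    [35, 50,  5],
]
code = lbp_code(img, 1, 1)
```

Recognition then amounts to a nearest-neighbor search over concatenated LBP histograms under the chi-square metric.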
9

Fernando, Warnakulasuriya Anil Chandana. "Video processing in the compressed domain". Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326724.
10

Wibowo, Moh Edi. "Towards pose-robust face recognition on video". Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/77836/1/Moh%20Edi_Wibowo_Thesis.pdf.

Abstract:
This thesis investigates face recognition in video under the presence of large pose variations. It proposes a solution that performs simultaneous detection of facial landmarks and head poses across large pose variations, employs discriminative modelling of feature distributions of faces with varying poses, and applies fusion of multiple classifiers to pose-mismatch recognition. Experiments on several benchmark datasets have demonstrated that improved performance is achieved using the proposed solution.
Books on the topic "FAKE VIDEOS"

1

Mezaris, Vasileios, Lyndon Nixon, Symeon Papadopoulos and Denis Teyssou, eds. Video Verification in the Fake News Era. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0.
2

National Film Board of Canada, ed. Face to face video guide: Video resources for race relations training and education. Montréal: National Film Board of Canada, 1993.
3

Ji, Qiang, Thomas B. Moeslund, Gang Hua and Kamal Nasrollahi, eds. Face and Facial Expression Recognition from Real World Videos. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7.
4

Bai, Xiang, Yi Fang, Yangqing Jia, Meina Kan, Shiguang Shan, Chunhua Shen, Jingdong Wang et al., eds. Video Analytics. Face and Facial Expression Recognition. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12177-8.
5

Screening the face. Houndmills, Basingstoke, Hampshire: Palgrave Macmillan, 2012.
6

Prager, Alex. Face in the crowd. Washington, DC: Corcoran Gallery of Art, 2013.
7

Nasrollahi, Kamal, Cosimo Distante, Gang Hua, Andrea Cavallaro, Thomas B. Moeslund, Sebastiano Battiato and Qiang Ji, eds. Video Analytics. Face and Facial Expression Recognition and Audience Measurement. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56687-0.
8

Noll, Katherine. Scholastic's Pokémon hall of fame. New York: Scholastic, 2004.
9

Levy, Frederick. 15 Minutes of Fame. New York: Penguin Group USA, Inc., 2008.
10

Kurit︠s︡yn, Vi︠a︡cheslav, Naili︠a︡ Allakhverdieva, Marat Gelʹman and Iulii︠a︡ Sorokina. Lit︠s︡o nevesty: Sovremennoe iskusstvo Kazakhstana = Face of the bride : contemporary art of Kazakhstan. Permʹ: Muzeĭ sovremennogo iskusstva PERMM, 2012.
Book chapters on the topic "FAKE VIDEOS"

1

Roy, Ritaban, Indu Joshi, Abhijit Das and Antitza Dantcheva. "3D CNN Architectures and Attention Mechanisms for Deepfake Detection". In Handbook of Digital Face Manipulation and Detection, 213–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_10.

Abstract:
Manipulated images and videos have become increasingly realistic due to the tremendous progress of deep convolutional neural networks (CNNs). While technically intriguing, such progress raises a number of social concerns related to the advent and spread of fake information and fake news. Such concerns necessitate the introduction of robust and reliable methods for fake image and video detection. Toward this end, in this work we study the ability of state-of-the-art video CNNs, including 3D ResNet, 3D ResNeXt, and I3D, in detecting manipulated videos. In addition, and toward a more robust detection, we investigate the effectiveness of attention mechanisms in this context. Such mechanisms are introduced in CNN architectures in order to ensure that robust features are being learnt. We test two attention mechanisms, namely SE-block and Non-local networks. We present related experimental results on videos tampered by four manipulation techniques, as included in the FaceForensics++ dataset. We investigate three scenarios, where the networks are trained to detect (a) all manipulated videos, (b) each manipulation technique individually, as well as (c) the veracity of videos pertaining to manipulation techniques not included in the train set.
2

Hedge, Amrita Shivanand, M. N. Vinutha, Kona Supriya, S. Nagasundari and Prasad B. Honnavalli. "CLH: Approach for Detecting Deep Fake Videos". In Communications in Computer and Information Science, 539–51. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-8059-5_33.
3

Hernandez-Ortega, Javier, Ruben Tolosana, Julian Fierrez and Aythami Morales. "DeepFakes Detection Based on Heart Rate Estimation: Single- and Multi-frame". In Handbook of Digital Face Manipulation and Detection, 255–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_12.

Abstract:
This chapter describes a DeepFake detection framework based on physiological measurement. In particular, we consider information related to the heart rate using remote photoplethysmography (rPPG). rPPG methods analyze video sequences looking for subtle color changes in the human skin, revealing the presence of human blood under the tissues. This chapter explores to what extent rPPG is useful for the detection of DeepFake videos. We analyze the recent fake detector named DeepFakesON-Phys that is based on a Convolutional Attention Network (CAN), which extracts spatial and temporal information from video frames, analyzing and combining both sources to better detect fake videos. DeepFakesON-Phys has been experimentally evaluated using the latest public databases in the field: Celeb-DF v2 and DFDC. The results achieved for DeepFake detection based on a single frame are over 98% AUC (Area Under the Curve) on both databases, proving the success of fake detectors based on physiological measurement to detect the latest DeepFake videos. In this chapter, we also propose and study heuristical and statistical approaches for performing continuous DeepFake detection by combining scores from consecutive frames with low latency and high accuracy (100% on the Celeb-DF v2 evaluation dataset). We show that combining scores extracted from short-time video sequences can improve the discrimination power of DeepFakesON-Phys.
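The continuous-detection idea in this abstract, fusing per-frame fake scores over a short window of consecutive frames, can be sketched as a sliding-window average compared against a decision threshold. This is a generic illustration of score fusion, not the chapter's actual heuristic, and the scores, window size, and 0.5 threshold are made up:

```python
def windowed_scores(frame_scores, window=3):
    """Average per-frame fakeness scores over a sliding window for low-latency decisions."""
    return [
        sum(frame_scores[i:i + window]) / window
        for i in range(len(frame_scores) - window + 1)
    ]

def any_fake(frame_scores, window=3, threshold=0.5):
    """Flag the clip as fake if any windowed score exceeds the threshold."""
    return any(s > threshold for s in windowed_scores(frame_scores, window))

real_clip = [0.1, 0.2, 0.1, 0.3, 0.2]   # per-frame scores from a single-frame detector
fake_clip = [0.2, 0.7, 0.9, 0.8, 0.3]
```

Averaging over a few frames suppresses single-frame noise while keeping latency to a handful of frames, which is the trade-off the chapter studies.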
4

Markatopoulou, Foteini, Markos Zampoglou, Evlampios Apostolidis, Symeon Papadopoulos, Vasileios Mezaris, Ioannis Patras and Ioannis Kompatsiaris. "Finding Semantically Related Videos in Closed Collections". In Video Verification in the Fake News Era, 127–59. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_5.
5

Kordopatis-Zilos, Giorgos, Symeon Papadopoulos, Ioannis Patras and Ioannis Kompatsiaris. "Finding Near-Duplicate Videos in Large-Scale Collections". In Video Verification in the Fake News Era, 91–126. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_4.
6

Singh, Aadya, Abey Alex George, Pankaj Gupta and Lakshmi Gadhikar. "ShallowFake-Detection of Fake Videos Using Deep Learning". In Conference Proceedings of ICDLAIR2019, 170–78. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67187-7_19.
7

Papadopoulou, Olga, Markos Zampoglou, Symeon Papadopoulos and Ioannis Kompatsiaris. "Verification of Web Videos Through Analysis of Their Online Context". In Video Verification in the Fake News Era, 191–221. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_7.
8

Long, Chengjiang, Arslan Basharat and Anthony Hoogs. "Video Frame Deletion and Duplication". In Multimedia Forensics, 333–62. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_13.

Abstract:
AbstractVideos can be manipulated in a number of different ways, including object addition or removal, deep fake videos, temporal removal or duplication of parts of the video, etc. In this chapter, we provide an overview of the previous work related to video frame deletion and duplication and dive into the details of two deep-learning-based approaches for detecting and localizing frame deletion (Chengjiang et al. 2017) and duplication (Chengjiang et al. 2019) manipulations.
Style APA, Harvard, Vancouver, ISO itp.
9

Boccignone, Giuseppe, Sathya Bursic, Vittorio Cuculo, Alessandro D’Amelio, Giuliano Grossi, Raffaella Lanzarotti and Sabrina Patania. "DeepFakes Have No Heart: A Simple rPPG-Based Method to Reveal Fake Videos". In Image Analysis and Processing – ICIAP 2022, 186–95. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06430-2_16.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Bao, Heng, Lirui Deng, Jiazhi Guan, Liang Zhang and Xunxun Chen. "Improving Deepfake Video Detection with Comprehensive Self-consistency Learning". In Communications in Computer and Information Science, 151–61. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8285-9_11.

Full text available
Abstract:
Deepfake videos created by generative models have recently become a serious societal problem, as they are hardly distinguishable by human eyes, and have attracted considerable academic attention. Previous research has addressed this problem with various schemes for extracting visual artifacts of non-pristine frames or discrepancies between real and fake videos; patch-based approaches have shown promise but are mostly used for frame-level prediction. In this paper, we propose a method that leverages comprehensive consistency learning over both spatial and temporal relations with patch-based feature extraction. Extensive experiments on multiple datasets demonstrate the effectiveness and robustness of our approach, which combines all consistency cues together.
Styles: APA, Harvard, Vancouver, ISO, etc.
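The patch-based consistency cue described in the abstract above can be illustrated with a toy similarity map: split a frame into patches and compare patch features pairwise; spliced regions tend to break the uniformity of the map. The function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def patch_consistency(frame, patch=8):
    """Compute a pairwise cosine-similarity matrix between flattened
    image patches; low off-diagonal similarity hints at inconsistent
    (possibly spliced) regions. A toy illustration of consistency cues,
    not the learned features used in the paper."""
    h, w = frame.shape[:2]
    patches = [
        frame[i:i + patch, j:j + patch].ravel()
        for i in range(0, h - patch + 1, patch)
        for j in range(0, w - patch + 1, patch)
    ]
    feats = np.stack(patches).astype(float)
    # L2-normalize each patch vector so the dot product is cosine similarity.
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
    return feats @ feats.T  # shape: (n_patches, n_patches)
```

On a perfectly uniform frame every entry of the map is close to 1; a pasted-in region with different statistics lowers its rows and columns.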

Conference abstracts on the topic "FAKE VIDEOS"

1

Shang, Jiacheng, and Jie Wu. "Protecting Real-time Video Chat against Fake Facial Videos Generated by Face Reenactment". In 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2020. http://dx.doi.org/10.1109/icdcs47774.2020.00082.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Liu, Zhenguang, Sifan Wu, Chejian Xu, Xiang Wang, Lei Zhu, Shuang Wu and Fuli Feng. "Copy Motion From One to Another: Fake Motion Video Generation". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/171.

Full text available
Abstract:
One compelling application of artificial intelligence is to generate a video of a target person performing arbitrary desired motion (from a source person). While state-of-the-art methods are able to synthesize a video demonstrating similar broad-stroke motion details, they generally lack texture details. A pertinent manifestation appears as distorted faces, feet, and hands, and such flaws are perceived very sensitively by human observers. Furthermore, current methods typically employ GANs with an L2 loss to assess the authenticity of the generated videos, inherently requiring a large number of training samples to learn the texture details for adequate video generation. In this work, we tackle these challenges from three aspects: 1) We disentangle each video frame into foreground (the person) and background, focusing on generating the foreground to reduce the underlying dimension of the network output. 2) We propose a theoretically motivated Gromov-Wasserstein loss that facilitates learning the mapping from a pose to a foreground image. 3) To enhance texture details, we encode facial features with geometric guidance and employ local GANs to refine the face, feet, and hands. Extensive experiments show that our method is able to generate realistic target person videos, faithfully copying complex motions from a source person. Our code and datasets are released at https://github.com/Sifann/FakeMotion.
Styles: APA, Harvard, Vancouver, ISO, etc.
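The foreground/background disentangling step described in the abstract above can be sketched as mask-based compositing: generate only the person, then recombine with the untouched background. This is a toy illustration of the decomposition, not the paper's network; the names below are assumptions.

```python
import numpy as np

def compose_frame(foreground, background, mask):
    """Alpha-composite a generated foreground (the person) over the
    original background using a soft mask in [0, 1]. Illustrates why
    generating only the foreground shrinks the output the network
    must learn: the background pixels pass through unchanged."""
    alpha = np.clip(mask, 0.0, 1.0)[..., None]  # broadcast over channels
    return alpha * foreground + (1.0 - alpha) * background
```

A mask of 1 keeps the generated person, a mask of 0 keeps the original background, and intermediate values blend the two at the silhouette boundary.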
3

Zhang, Daichi, Chenyu Li, Fanzhao Lin, Dan Zeng and Shiming Ge. "Detecting Deepfake Videos with Temporal Dropout 3DCNN". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/178.

Full text available
Abstract:
While the abuse of deepfake technology has had a serious impact on human society, the detection of deepfake videos remains very challenging due to their highly photorealistic synthesis on each frame. To address this, the paper leverages possible inconsistency cues among video frames and proposes a Temporal Dropout 3-Dimensional Convolutional Neural Network (TD-3DCNN) to detect deepfake videos. In this approach, fixed-length frame volumes sampled from a video are fed into a 3-Dimensional Convolutional Neural Network (3DCNN) to extract features across different scales and identify whether they are real or fake. In particular, a temporal dropout operation is introduced to randomly sample frames in each batch. It serves as a simple yet effective data augmentation that enhances representation and generalization ability, avoiding model overfitting and improving detection accuracy. In this way, the resulting video-level classifier is accurate and effective at identifying deepfake videos. Extensive experiments on benchmarks including Celeb-DF(v2) and DFDC clearly demonstrate the effectiveness and generalization capacity of our approach.
Styles: APA, Harvard, Vancouver, ISO, etc.
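The temporal dropout operation described above amounts to randomly sampling a fixed-length, time-ordered subset of frames from each video per batch. A minimal sketch of that sampling step, with parameter names that are assumptions rather than the paper's:

```python
import random

def temporal_dropout(frames, volume_len=16, seed=None):
    """Randomly pick `volume_len` frame indices, keep temporal order,
    and return the corresponding frames. Acts as a cheap augmentation:
    each epoch sees a different temporal slice of the same video."""
    rng = random.Random(seed)
    if len(frames) <= volume_len:
        return list(frames)
    indices = sorted(rng.sample(range(len(frames)), volume_len))
    return [frames[i] for i in indices]
```

Because the sampled indices are re-drawn every batch, the classifier cannot rely on any fixed frame positions, which is the overfitting-avoidance effect the abstract describes.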
4

Celebi, Naciye, Qingzhong Liu and Muhammed Karatoprak. "A Survey of Deep Fake Detection for Trial Courts". In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120919.

Full text available
Abstract:
Recently, image manipulation has grown rapidly due to the advancement of sophisticated image editing tools. DeepFake is a recent surge of fake imagery and videos generated using neural networks. DeepFake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. Generative Adversarial Networks (GANs) have been extensively used for creating realistic images without access to the original images. It has therefore become essential to detect fake videos to avoid the spread of false information. This paper presents a survey of methods used to detect DeepFakes and of datasets available for detecting DeepFakes in the literature to date. We present extensive discussions and research trends related to DeepFake technologies.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Agarwal, Shruti, Hany Farid, Ohad Fried and Maneesh Agrawala. "Detecting Deep-Fake Videos from Phoneme-Viseme Mismatches". In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2020. http://dx.doi.org/10.1109/cvprw50498.2020.00338.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Agarwal, Shruti, Hany Farid, Tarek El-Gaaly and Ser-Nam Lim. "Detecting Deep-Fake Videos from Appearance and Behavior". In 2020 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 2020. http://dx.doi.org/10.1109/wifs49906.2020.9360904.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Agarwal, Shruti, and Hany Farid. "Detecting Deep-Fake Videos from Aural and Oral Dynamics". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00109.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Gerstner, Candice R., and Hany Farid. "Detecting Real-Time Deep-Fake Videos Using Active Illumination". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2022. http://dx.doi.org/10.1109/cvprw56347.2022.00015.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Chauhan, Ruby, Renu Popli and Isha Kansal. "A Comprehensive Review on Fake Images/Videos Detection Techniques". In 2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). IEEE, 2022. http://dx.doi.org/10.1109/icrito56286.2022.9964871.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Mira, Fahad. "Deep Learning Technique for Recognition of Deep Fake Videos". In 2023 IEEE IAS Global Conference on Emerging Technologies (GlobConET). IEEE, 2023. http://dx.doi.org/10.1109/globconet56651.2023.10150143.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.

Organizational reports on the topic "FAKE VIDEOS"

1

Grother, Patrick J., George W. Quinn and Mei Lee Ngan. Face in video evaluation (FIVE) face recognition of non-cooperative subjects. Gaithersburg, MD: National Institute of Standards and Technology, March 2017. http://dx.doi.org/10.6028/nist.ir.8173.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Chen, Yi-Chen, Vishal M. Patel, Sumit Shekhar, Rama Chellappa and P. Jonathon Phillips. Video-based face recognition via joint sparse representation. Gaithersburg, MD: National Institute of Standards and Technology, 2013. http://dx.doi.org/10.6028/nist.ir.7906.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Lee, Yooyoung, P. Jonathon Phillips, James J. Filliben, J. Ross Beveridge and Hao Zhang. Identifying face quality and factor measures for video. National Institute of Standards and Technology, May 2014. http://dx.doi.org/10.6028/nist.ir.8004.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Тарасова, Олена Юріївна, and Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.

Full text available
Abstract:
Facial recognition technology is named one of the main trends of recent years. It has a wide range of applications, such as access control, biometrics, video surveillance and many other interactive human-machine systems. Facial landmarks can be described as key characteristics of the human face. Commonly found landmarks are, for example, the eyes, nose or mouth corners. Analyzing these key points is useful for a variety of computer vision use cases, including biometrics, face tracking, and emotion detection. Different methods produce different facial landmarks. Some methods use only basic facial landmarks, while others bring out more detail. We use 68-point facial markup, which is a common format for many datasets. Cloud computing creates all the necessary conditions for the successful implementation of even the most complex tasks. We created a web application using the Django framework, the Python language, and the OpenCv and Dlib libraries to recognize faces in images. The purpose of our work is to create a software system for face recognition in photos and for identifying wrinkles on the face. An algorithm for determining the presence, location and geometric characteristics of various types of wrinkles on the face is implemented.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Drury, J., S. Arias, T. Au-Yeung, D. Barr, L. Bell, T. Butler, H. Carter et al. Public behaviour in response to perceived hostile threats: an evidence base and guide for practitioners and policymakers. University of Sussex, 2023. http://dx.doi.org/10.20919/vjvt7448.

Full text available
Abstract:
Background: Public behaviour and the new hostile threats • Civil contingencies planning and preparedness for hostile threats requires accurate and up to date knowledge about how the public might behave in relation to such incidents. Inaccurate understandings of public behaviour can lead to dangerous and counterproductive practices and policies. • There is consistent evidence across both hostile threats and other kinds of emergencies and disasters that significant numbers of those affected give each other support, cooperate, and otherwise interact socially within the incident itself. • In emergency incidents, competition among those affected occurs in only limited situations, and loss of behavioural control is rare. • Spontaneous cooperation among the public in emergency incidents, based on either social capital or emergent social identity, is a crucial part of civil contingencies planning. • There has been relatively little research on public behaviour in response to the new hostile threats of the past ten years, however. • The programme of work summarized in this briefing document came about in response to a wave of false alarm flight incidents in the 2010s, linked to the new hostile threats (i.e., marauding terrorist attacks). • By using a combination of archive data for incidents in Great Britain 2010-2019, interviews, video data analysis, and controlled experiments using virtual reality technology, we were able to examine experiences, measure behaviour, and test hypotheses about underlying psychological mechanisms in both false alarms and public interventions against a hostile threat. Re-visiting the relationship between false alarms and crowd disasters • The Bethnal Green tube disaster of 1943, in which 173 people died, has historically been used to suggest that (mis)perceived hostile threats can lead to uncontrolled ‘stampedes’.
• Re-analysis of witness statements suggests that public fears of German bombs were realistic rather than unreasonable, and that flight behaviour was socially structured rather than uncontrolled. • Evidence for a causal link between the flight of the crowd and the fatal crowd collapse is weak at best. • Altogether, the analysis suggests the importance of examining people’s beliefs about context to understand when they might interpret ambiguous signals as a hostile threat, and that the concepts of norms and relationships offer better ways to explain such incidents than ‘mass panic’. Why false alarms occur • The wider context of terrorist threat provides a framing for the public’s perception of signals as evidence of hostile threats. In particular, the magnitude of recent psychologically relevant terrorist attacks predicts the likelihood of false alarm flight incidents. • False alarms in Great Britain are more likely to occur in those towns and cities that have seen genuine terrorist incidents. • False alarms in Great Britain are more likely to occur in the types of location where terrorist attacks happen, such as shopping areas, transport hubs, and other crowded places. • The urgent or flight behaviour of other people (including the emergency services) influences public perceptions that there is a hostile threat, particularly in situations of greater ambiguity, and particularly when these other people are ingroup. • High profile tweets suggesting a hostile threat, including from the police, have been associated with the size and scale of false alarm responses. • In most cases, it is a combination of factors – context, others’ behaviour, communications – that leads people to flee. A false alarm tends not to be sudden or impulsive, and often follows an initial phase of discounting threat – as with many genuine emergencies.
How the public behave in false alarm flight incidents • Even in those false alarm incidents where there is urgent flight, there are also other behaviours than running, including ignoring the ‘threat’, and walking away. • Injuries occur but recorded injuries are relatively uncommon. • Hiding is a common behaviour. In our evidence, this was facilitated by orders from police and offers from staff in shops and other premises. • Supportive behaviours are common, including informational and emotional support. • Members of the public often cooperate with the emergency services and comply with their orders but also question instructions when the rationale is unclear. • Pushing, trampling and other competitive behaviour can occur, but only in restricted situations and briefly. • At the Oxford Street Black Friday 2017 false alarm, rather than an overall sense of unity across the crowd, camaraderie existed only in pockets. This was likely due to the lack of a sense of common fate or reference point across the incident; the fragmented experience would have hindered the development of a shared social identity across the crowd. • Large and high profile false alarm incidents may be associated with significant levels of distress and even humiliation among those members of the public affected, both at the time and in the aftermath, as the rest of society reflects and comments on the incident. Public behaviour in response to visible marauding attackers • Spontaneous, coordinated public responses to marauding bladed attacks have been observed on a number of occasions. • Close examination of marauding bladed attacks suggests that members of the public engage in a wide variety of behaviours, not just flight. • Members of the public responding to marauding bladed attacks adopt a variety of complementary roles. These may include defending, communicating, first aid, recruiting others, marshalling, negotiating, risk assessment, and evidence gathering.
Recommendations for practitioners and policymakers • Embed the psychology of public behaviour in emergencies in your training and guidance. • Continue to inform the public and promote public awareness where there is an increased threat. • Build long-term relations with the public to achieve trust and influence in emergency preparedness. • Use a unifying language and supportive forms of communication to enhance unity both within the crowd and between the crowd and the authorities. • Authorities and responders should take a reflexive approach to their responses to possible hostile threats, by reflecting upon how their actions might be perceived by the public and impact (positively and negatively) upon public behaviour. • To give emotional support, prioritize informative and actionable risk and crisis communication over emotional reassurances. • Provide first aid kits in transport infrastructures to enable some members of the public to act more effectively as zero responders.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Neural correlates of face familiarity in institutionalised children and links to attachment disordered behaviour. ACAMH, March 2023. http://dx.doi.org/10.13056/acamh.23409.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Cybervictimization in adolescence and its association with subsequent suicidal ideation/attempt beyond face‐to‐face victimization: a longitudinal population‐based study – video Q & A. ACAMH, September 2020. http://dx.doi.org/10.13056/acamh.13319.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.