A selection of scholarly literature on the topic "DETECTING DEEPFAKES"

Format your source in the APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "DETECTING DEEPFAKES".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "DETECTING DEEPFAKES"

1

Mai, Kimberly T., Sergi Bray, Toby Davies, and Lewis D. Griffin. "Warning: Humans cannot reliably detect speech deepfakes." PLOS ONE 18, no. 8 (August 2, 2023): e0285333. http://dx.doi.org/10.1371/journal.pone.0285333.

Full text source
Abstract:
Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest security threats arising from progress in artificial intelligence due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand if language affects detection performance and decision-making rationale. We found that detection capability is unreliable. Listeners only correctly spotted the deepfakes 73% of the time, and there was no difference in detectability between the two languages. Increasing listener awareness by providing examples of speech deepfakes only improves results slightly. As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.
APA, Harvard, Vancouver, ISO, and other styles
2

Dobber, Tom, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese. "Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?" International Journal of Press/Politics 26, no. 1 (July 25, 2020): 69–91. http://dx.doi.org/10.1177/1940161220944364.

Full text source
Abstract:
Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes, by enabling malicious political actors to tailor deepfakes to susceptibilities of the receiver. In this study, we have constructed a political deepfake (video and audio), and study its effects on political attitudes in an online experiment (N = 278). We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.
APA, Harvard, Vancouver, ISO, and other styles
3

Vinogradova, Ekaterina. "The malicious use of political deepfakes and attempts to neutralize them in Latin America." Latinskaia Amerika, no. 5 (2023): 35. http://dx.doi.org/10.31857/s0044748x0025404-3.

Full text source
Abstract:
Deepfake technology has revolutionized the field of artificial intelligence and communication processes, creating a real threat of misinformation of target audiences on digital platforms. The malicious use of political deepfakes has become widespread between 2017 and 2023. The political leaders of Argentina, Brazil, Colombia and Mexico were attacked with elements of doxing. Fake videos that used the politicians' faces undermined their reputations, diminishing the trust of the electorate, and became an advanced tool for manipulating public opinion. A series of political deepfakes has raised an issue for the countries of the Latin American region to develop timely legal regulation of this threat. The purpose of this study is to identify the threats from the uncontrolled use of political deepfake in Latin America. According to this purpose, the author solves the following tasks: analyzes political deepfakes; identifies the main threats from the use of deepfake technology; examines the legislative features of their use in Latin America. The article describes the main detectors and programs for detecting malicious deepfakes, as well as introduces a scientific definition of political deepfake.
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Preeti, Khyati Chaudhary, Gopal Chaudhary, Manju Khari, and Bharat Rawal. "A Machine Learning Approach to Detecting Deepfake Videos: An Investigation of Feature Extraction Techniques." Journal of Cybersecurity and Information Management 9, no. 2 (2022): 42–50. http://dx.doi.org/10.54216/jcim.090204.

Full text source
Abstract:
Deepfake videos are a growing concern today as they can be used to spread misinformation and manipulate public opinion. In this paper, we investigate the use of different feature extraction techniques for detecting deepfake videos using machine learning algorithms. We explore three feature extraction techniques, including facial landmarks detection, optical flow, and frequency analysis, and evaluate their effectiveness in detecting deepfake videos. We compare the performance of different machine learning algorithms and analyze their ability to detect deepfakes using the extracted features. Our experimental results show that the combination of facial landmarks detection and frequency analysis provides the best performance in detecting deepfake videos, with an accuracy of over 95%. Our findings suggest that machine learning algorithms can be a powerful tool in detecting deepfake videos, and feature extraction techniques play a crucial role in achieving high accuracy.
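The frequency-analysis feature mentioned in this abstract can be illustrated with a short, hedged sketch (an illustrative stand-in, not the authors' exact pipeline): one common frequency feature for deepfake detection is the azimuthally averaged log power spectrum of a face crop, since GAN upsampling tends to distort the high-frequency tail.

```python
import numpy as np

def radial_power_spectrum(image, n_bins=32):
    """Azimuthally averaged log power spectrum of a grayscale image.

    A common frequency-analysis feature for deepfake detection:
    GAN upsampling tends to distort the high-frequency tail.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.log1p(np.abs(f) ** 2)
    h, w = image.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx)
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    # Mean power within each radial bin -> fixed-length feature vector.
    return np.array([power[bins == b].mean() for b in range(n_bins)])

rng = np.random.default_rng(0)
face = rng.random((64, 64))          # stand-in for a grayscale face crop
vec = radial_power_spectrum(face)
print(vec.shape)                     # (32,)
```

The resulting fixed-length vector could then be fed to any classifier, alongside landmark-based features, as the abstract describes.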
APA, Harvard, Vancouver, ISO, and other styles
5

Das, Rashmiranjan, Gaurav Negi, and Alan F. Smeaton. "Detecting Deepfake Videos Using Euler Video Magnification." Electronic Imaging 2021, no. 4 (January 18, 2021): 272–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-272.

Full text source
Abstract:
Recent advances in artificial intelligence make it progressively hard to distinguish between genuine and counterfeit media, especially images and videos. One recent development is the rise of deepfake videos, based on manipulating videos using advanced machine learning techniques. This involves replacing the face of an individual from a source video with the face of a second person, in the destination video. This idea is becoming progressively refined as deepfakes are getting progressively seamless and simpler to compute. Combined with the outreach and speed of social media, deepfakes could easily fool individuals when depicting someone saying things that never happened and thus could persuade people into believing fictional scenarios, creating distress, and spreading fake news. In this paper, we examine a technique for possible identification of deepfake videos. We use Euler video magnification which applies spatial decomposition and temporal filtering on video data to highlight and magnify hidden features like skin pulsation and subtle motions. Our approach uses features extracted from the Euler technique to train three models to classify counterfeit and unaltered videos and compare the results with existing techniques.
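The temporal-filtering half of Euler video magnification can be sketched in simplified form (the full method also applies a spatial Gaussian/Laplacian pyramid, omitted here); the synthetic skin-pixel time series and band limits below are illustrative assumptions:

```python
import numpy as np

def temporal_bandpass(signal, fps, low_hz, high_hz):
    """Keep only temporal frequencies in [low_hz, high_hz] via FFT masking.

    This is the temporal-filtering step of Euler video magnification;
    the full technique also performs a spatial pyramid decomposition.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    spectrum = np.fft.rfft(signal)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return np.fft.irfft(spectrum * mask, n=n)

fps = 30
t = np.arange(300) / fps
# Synthetic skin-pixel intensity: slow drift + ~1.2 Hz pulse + noise.
pulse = 0.05 * np.sin(2 * np.pi * 1.2 * t)
signal = (0.5 + 0.3 * t / t[-1] + pulse
          + 0.01 * np.random.default_rng(1).standard_normal(len(t)))
recovered = temporal_bandpass(signal, fps, 0.8, 2.0)  # heart-rate band
amplified = signal + 20.0 * recovered                 # magnify the hidden pulsation
```

Features derived from the magnified signal (e.g., pulse regularity) are the kind of input the paper's classifiers would consume.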
APA, Harvard, Vancouver, ISO, and other styles
6

Raza, Ali, Kashif Munir, and Mubarak Almutairi. "A Novel Deep Learning Approach for Deepfake Image Detection." Applied Sciences 12, no. 19 (September 29, 2022): 9820. http://dx.doi.org/10.3390/app12199820.

Full text source
Abstract:
Deepfake is utilized in synthetic media to generate fake visual and audio content based on a person’s existing media. The deepfake replaces a person’s face and voice with fake media to make it realistic-looking. Fake media content generation is unethical and a threat to the community. Nowadays, deepfakes are highly misused in cybercrimes for identity theft, cyber extortion, fake news, financial fraud, celebrity fake obscenity videos for blackmailing, and many more. According to a recent Sensity report, over 96% of the deepfakes are of obscene content, with most victims being from the United Kingdom, United States, Canada, India, and South Korea. In 2019, cybercriminals generated fake audio content of a chief executive officer to call his organization and ask them to transfer $243,000 to their bank account. Deepfake crimes are rising daily. Deepfake media detection is a big challenge and has high demand in digital forensics. An advanced research approach must be built to protect the victims from blackmailing by detecting deepfake content. The primary aim of our research study is to detect deepfake media using an efficient framework. A novel deepfake predictor (DFP) approach based on a hybrid of VGG16 and convolutional neural network architecture is proposed in this study. The deepfake dataset based on real and fake faces is utilized for building neural network techniques. The Xception, NAS-Net, Mobile Net, and VGG16 are the transfer learning techniques employed in comparison. The proposed DFP approach achieved 95% precision and 94% accuracy for deepfake detection. Our novel proposed DFP approach outperformed transfer learning techniques and other state-of-the-art studies. Our novel research approach helps cybersecurity professionals overcome deepfake-related cybercrimes by accurately detecting the deepfake content and saving the deepfake victims from blackmailing.
APA, Harvard, Vancouver, ISO, and other styles
7

Jameel, Wildan J., Suhad M. Kadhem, and Ayad R. Abbas. "Detecting Deepfakes with Deep Learning and Gabor Filters." ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 10, no. 1 (March 18, 2022): 18–22. http://dx.doi.org/10.14500/aro.10917.

Full text source
Abstract:
The proliferation of many editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes are committed to fabricating and falsifying facts by making a person do actions or say words that he never did or said. Developing an algorithm for deepfake detection is therefore very important to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of input data frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue color information. The purpose of this paper is to give the reader a deeper view of (1) enhancing the efficiency of distinguishing fake facial images from real facial images by developing a novel model based on deep learning and Gabor filters and (2) how deep learning (CNN), if combined with forensic tools (Gabor filters), contributes to the detection of deepfakes. Our experiment shows that the training accuracy reaches about 98.06% and the validation accuracy 97.50%. Compared with state-of-the-art methods, the proposed model has higher efficiency.
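A bank of 16 real Gabor kernels at evenly spaced orientations, as described above, can be generated directly from the Gabor formula. This is a hedged sketch: kernel size, wavelength, and envelope parameters below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# A bank of 16 orientations, matching the count in the paper's setup.
bank = [gabor_kernel(theta=k * np.pi / 16) for k in range(16)]

# Texture response of one image patch: correlate each kernel at the centre.
patch = np.random.default_rng(2).random((15, 15))
responses = np.array([(patch * k).sum() for k in bank])  # one scalar per orientation
print(responses.shape)  # (16,)
```

In the paper's pipeline, the 16 filtered frames (rather than RGB channels) would be stacked and passed to the binary CNN.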
APA, Harvard, Vancouver, ISO, and other styles
8

Giudice, Oliver, Luca Guarnera, and Sebastiano Battiato. "Fighting Deepfakes by Detecting GAN DCT Anomalies." Journal of Imaging 7, no. 8 (July 30, 2021): 128. http://dx.doi.org/10.3390/jimaging7080128.

Full text source
Abstract:
To properly contrast the Deepfake phenomenon, the need to design new Deepfake detection algorithms arises; the misuse of this formidable A.I. technology brings serious consequences in the private life of every involved person. The state of the art proliferates with solutions that use deep neural networks to detect fake multimedia content, but unfortunately these algorithms appear to be neither generalizable nor explainable. However, traces left by Generative Adversarial Network (GAN) engines during the creation of the Deepfakes can be detected by analyzing ad-hoc frequencies. For this reason, in this paper we propose a new pipeline able to detect the so-called GAN Specific Frequencies (GSF) representing a unique fingerprint of the different generative architectures. By employing the Discrete Cosine Transform (DCT), anomalous frequencies were detected. The β statistics inferred from the AC coefficients distribution have been the key to recognizing GAN-engine generated data. Robustness tests were also carried out to demonstrate the effectiveness of the technique using different attacks on images such as JPEG compression, mirroring, rotation, scaling, and addition of random sized rectangles. Experiments demonstrated that the method is innovative, exceeds the state of the art and also gives many insights in terms of explainability.
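The DCT statistics described above can be sketched as follows. This is a simplified stand-in that fits a plain Laplacian scale to each AC coefficient across 8×8 blocks, whereas the paper infers β statistics from the AC coefficient distribution of GAN-generated images; block size and the fit are assumptions for illustration.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (JPEG-style)."""
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    d[0] /= np.sqrt(2)
    return d * np.sqrt(2 / n)

def ac_beta_statistics(image, n=8):
    """Estimate the Laplacian scale (beta) of each AC coefficient over all
    8x8 DCT blocks; GAN-generated images tend to shift these statistics."""
    D = dct_matrix(n)
    h, w = image.shape
    blocks = (image[: h - h % n, : w - w % n]
              .reshape(h // n, n, w // n, n).swapaxes(1, 2))
    coeffs = D @ blocks @ D.T            # 2D DCT of every block
    flat = coeffs.reshape(-1, n * n)
    # Maximum-likelihood beta for a zero-mean Laplacian: mean absolute value.
    betas = np.abs(flat).mean(axis=0)
    return betas[1:]                     # drop the DC coefficient

img = np.random.default_rng(3).random((64, 64))
betas = ac_beta_statistics(img)
print(betas.shape)  # (63,)
```

Comparing such β vectors between camera images and images from different GAN engines is the kind of fingerprinting the pipeline builds on.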
APA, Harvard, Vancouver, ISO, and other styles
9

Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (April 13, 2022): 3926. http://dx.doi.org/10.3390/app12083926.

Full text source
Abstract:
Fake media, generated by methods such as deepfakes, have become indistinguishable from real media, but their detection has not improved at the same pace. Furthermore, the absence of interpretability on deepfake detection models makes their reliability questionable. In this paper, we present a human perception level of interpretability for deepfake audio detection. Based on their characteristics, we implement several explainable artificial intelligence (XAI) methods used for image classification on an audio-related task. In addition, by examining the human cognitive process of XAI on image classification, we suggest the use of a corresponding data format for providing interpretability. Using this novel concept, a fresh interpretation using attribution scores can be provided.
APA, Harvard, Vancouver, ISO, and other styles
10

Gadgilwar, Jitesh, Kunal Rahangdale, Om Jaiswal, Parag Asare, Pratik Adekar, and Prof Leela Bitla. "Exploring Deepfakes - Creation Techniques, Detection Strategies, and Emerging Challenges: A Survey." International Journal for Research in Applied Science and Engineering Technology 11, no. 3 (March 31, 2023): 1491–95. http://dx.doi.org/10.22214/ijraset.2023.49681.

Full text source
Abstract:
Deep learning, integrated with Artificial Intelligence algorithms, has brought about numerous beneficial practical technologies. However, it also brings up a problem that the world is facing today. Despite its innumerable suitable applications, it poses a danger to public personal privacy, democracy, and corporate credibility. One such use that has emerged is deepfake, which has caused chaos on the internet. Deepfake manipulates an individual's image and video, creating problems in differentiating the original from the fake. This requires a solution in today's period to counter and automatically detect such media. This study aims to explore the techniques for deepfake creation and detection, using various methods for algorithm analysis and image analysis to find the root of deepfake creation. This study examines image, audio, and ML algorithms to extract a possible sign to analyze deepfake. The research compares the performance of these methods in detecting deepfakes generated using different techniques and datasets. As deepfake is a rapidly evolving technology, we need avant-garde techniques to counter and detect its presence accurately.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "DETECTING DEEPFAKES"

1

Hasanaj, Enis, Albert Aveler, and William Söder. "Cooperative edge deepfake detection." Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.

Full text source
Abstract:
Deepfakes are an emerging problem in social media, and for celebrities and political profiles it can be devastating to their reputation if the technology ends up in the wrong hands. Creating deepfakes is becoming increasingly easy. Attempts have been made at detecting whether a face in an image is real or not, but training these machine learning models can be a very time-consuming process. This research proposes a solution for training deepfake detection models cooperatively on the edge. This is done in order to evaluate if the training process, among other things, can be made more efficient with this approach. The feasibility of edge training is evaluated by training machine learning models on several different types of iPhone devices. The models are trained using the YOLOv2 object detection system. To test if the YOLOv2 object detection system is able to distinguish between real and fake human faces in images, several models are trained on a computer. Each model is trained with either a different number of iterations or a different subset of data, since these metrics have been identified as important to the performance of the models. The performance of the models is evaluated by measuring the accuracy in detecting deepfakes. Additionally, the deepfake detection models trained on a computer are ensembled using the bagging ensemble method. This is done in order to evaluate the feasibility of cooperatively training a deepfake detection model by combining several models. Results show that the proposed solution is not feasible due to the time the training process takes on each mobile device. Additionally, each trained model is about 200 MB, and the size of the ensemble model grows linearly with each model added to the ensemble. This can cause the ensemble model to grow to several hundred gigabytes in size.
APA, Harvard, Vancouver, ISO, and other styles
2

Gardner, Angelica. "Stronger Together? An Ensemble of CNNs for Deepfakes Detection." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97643.

Full text source
Abstract:
Deepfakes technology is a face swap technique that enables anyone to replace faces in a video, with highly realistic results. Despite its usefulness, if used maliciously, this technique can have a significant impact on society, for instance, through the spreading of fake news or cyberbullying. This makes the ability of deepfakes detection a problem of utmost importance. In this paper, I tackle the problem of deepfakes detection by identifying deepfakes forgeries in video sequences. Inspired by the state-of-the-art, I study the ensembling of different machine learning solutions built on convolutional neural networks (CNNs) and use these models as objects for comparison between ensemble and single model performances. Existing work in the research field of deepfakes detection suggests that escalated challenges posed by modern deepfake videos make it increasingly difficult for detection methods. I evaluate that claim by testing the detection performance of four single CNN models as well as six stacked ensembles on three modern deepfakes datasets. I compare various ensemble approaches to combine single models and in what way their predictions should be incorporated into the ensemble output. The results I found were that the best approach for deepfakes detection is to create an ensemble, although the choice of ensemble approach plays a crucial role in the detection performance. The final proposed solution is an ensemble of all available single models which uses the concept of soft (weighted) voting to combine its base-learners’ predictions. Results show that this proposed solution significantly improved deepfakes detection performance and substantially outperformed all single models.
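The soft (weighted) voting used by the proposed ensemble can be sketched in a few lines; the model probabilities and weights below are made up purely for illustration.

```python
import numpy as np

def soft_vote(probabilities, weights=None):
    """Weighted soft voting: average per-model class probabilities,
    then pick the argmax class."""
    probabilities = np.asarray(probabilities, dtype=float)  # (n_models, n_classes)
    if weights is None:
        weights = np.ones(len(probabilities))
    weights = np.asarray(weights, dtype=float)
    avg = weights @ probabilities / weights.sum()
    return avg, int(np.argmax(avg))

# Three CNN base-learners score one video as [P(real), P(fake)].
preds = [[0.60, 0.40],
         [0.45, 0.55],
         [0.20, 0.80]]
avg, label = soft_vote(preds, weights=[1.0, 1.0, 2.0])
print(label)  # 1 -> "fake"
```

Unlike hard (majority) voting, soft voting lets a confident base-learner outweigh two lukewarm ones, which is why weighted soft voting often wins in stacked ensembles.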
APA, Harvard, Vancouver, ISO, and other styles
3

Emir, Alkazhami. "Facial Identity Embeddings for Deepfake Detection in Videos." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170587.

Full text source
Abstract:
Forged videos of swapped faces, so-called deepfakes, have gained a lot of attention in recent years. Methods for automated detection of this type of manipulation are also seeing rapid progress in their development. The purpose of this thesis work is to evaluate the possibility and effectiveness of using deep embeddings from facial recognition networks as a basis for detection of such deepfakes. In addition, the thesis aims to answer whether or not the identity embeddings contain information that can be used for detection when analyzed over time, and if it is suitable to include information about the person's head pose in this analysis. To answer these questions, three classifiers are created with the intent to answer one question each. Their performances are compared with each other and it is shown that identity embeddings are suitable as a basis for deepfake detection. Temporal analysis of the embeddings also seems effective, at least for deepfake methods that only work on a frame-by-frame basis. Including information about head poses in the videos is shown not to improve a classifier like this.
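A minimal sketch of the temporal-analysis idea, under the assumption that a face-recognition network yields one identity embedding per frame; the random vectors below merely simulate such embeddings, and the consistency score is an illustrative proxy, not the thesis's classifiers.

```python
import numpy as np

def identity_consistency(embeddings):
    """Cosine similarity of each frame's identity embedding to the mean
    identity; swaps that flicker frame-to-frame lower the minimum."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    mean_id = e.mean(axis=0)
    mean_id /= np.linalg.norm(mean_id)
    sims = e @ mean_id
    return sims.min(), sims.mean()

rng = np.random.default_rng(4)
base = rng.standard_normal(128)                         # one identity
real = base + 0.05 * rng.standard_normal((50, 128))     # stable identity over 50 frames
fake = np.vstack([real[:25],                            # identity jump halfway through
                  -base + 0.05 * rng.standard_normal((25, 128))])
print(identity_consistency(real)[0] > identity_consistency(fake)[0])  # True
```

A threshold on such a consistency score (or a classifier over the per-frame similarity series) is one way temporal embedding analysis can flag frame-by-frame face swaps.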
APA, Harvard, Vancouver, ISO, and other styles
4

GUARNERA, LUCA. "Discovering Fingerprints for Deepfake Detection and Multimedia-Enhanced Forensic Investigations." Doctoral thesis, Università degli studi di Catania, 2021. http://hdl.handle.net/20.500.11769/539620.

Full text source
Abstract:
Forensic Science, which concerns the application of technical and scientific methods to justice, investigation and evidence discovery, has evolved over the years to the birth of several fields such as Multimedia Forensics, which involves the analysis of digital images, video and audio contents. Multimedia data was (and still is) altered using common editing tools such as Photoshop and GIMP. Rapid advances in Deep Learning have opened up the possibility of creating sophisticated algorithms capable of manipulating images, video and audio in a “simple” manner, causing the emergence of a powerful yet frightening new phenomenon called deepfake: synthetic multimedia data created and/or altered using generative models. A great discovery made by forensic researchers over the years concerns the possibility of extracting a unique fingerprint that can determine the devices and software used to create the data itself. Unfortunately, extracting these traces turns out to be a complicated task. A fingerprint can be extracted not only from multimedia data, in order to determine the devices used in the acquisition phase, the social networks where the file was uploaded, or, more recently, the generative models used to create deepfakes; in general, such a trace can also be extracted from evidence recovered at a crime scene, such as shells or projectiles, to determine the model of gun that fired them (Forensic Firearms Ballistics Comparison). Forensic Analysis of Handwritten Documents is another field of Forensic Science that can determine the authors of a manuscript by extracting a fingerprint defined by a careful analysis of the text style in the document. Developing new algorithms for Deepfake Detection, Forensic Firearms Ballistics Comparison, and Forensic Handwritten Document Analysis was the main focus of this Ph.D. thesis.
These three macro areas of Forensic Science have a common element, namely a unique fingerprint present in the data itself that can be extracted in order to solve the various tasks. Therefore, for each of these topics a preliminary analysis will be performed and new detection techniques will be presented obtaining promising results in all these domains.
APA, Harvard, Vancouver, ISO, and other styles
5

SONI, ANKIT. "DETECTING DEEPFAKES USING HYBRID CNN-RNN MODEL." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19168.

Full text source
Abstract:
We are living in the world of digital media and are connected to various types of digital media content present in the form of images and videos. Our lives are surrounded by digital content, and thus the originality of content is very important. In recent times, there has been a huge emergence of deep learning-based tools that are used to create believable manipulated media known as Deepfakes. These are realistic fake media that can threaten reputation and privacy and can even prove to be a serious threat to public security. They can even be used to create political distress, spread fake terrorism, or blackmail anyone. With growing technology, the tampered media being generated are so realistic that they can even fool the human eye. Hence, we need better deepfake detection algorithms to detect deepfakes efficiently. The proposed system is based on a combination of a CNN followed by an RNN. The CNN model deployed here is SE-ResNeXt-101. The system uses the SE-ResNeXt-101 model to extract feature vectors from the videos, and these feature vectors are then used to train the RNN model, an LSTM, to classify videos as Real or Deepfake. We evaluate our method on a dataset made by collecting a huge number of videos from various distributed sources. We demonstrate how a simple architecture can be used to attain competitive results.
APA, Harvard, Vancouver, ISO, and other styles
6

RASOOL, AALE. "DETECTING DEEPFAKES WITH MULTI-MODEL NEURAL NETWORKS: A TRANSFER LEARNING APPROACH." Thesis, 2023. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19993.

Full text source
Abstract:
The prevalence of deepfake technology has led to serious worries about the veracity and dependability of visual media. To reduce any harm brought on by the malicious use of this technology, it is essential to identify deepfakes. By using the Vision Transformer (ViT) model for classification and the InceptionResNetV2 architecture for feature extraction, we offer a novel approach to deepfake detection in this thesis. The highly discriminative features are extracted from the input photos using the InceptionResNetV2 network, which has been pre-trained on a substantial dataset. The Vision Transformer model then receives these features and uses the self-attention mechanism to identify long-range relationships and categorize the pictures as deepfakes or real. We use transfer learning techniques to improve the performance of the deepfake detection system. The InceptionResNetV2 model is fine-tuned using a deepfake-specific dataset, which allows the pre-trained weights to adapt to the task at hand, enabling the extraction of meaningful and discriminative deepfake features. The refined features are then fed into the ViT model for classification. Extensive experiments are conducted to evaluate the performance of our proposed approach using various deepfake datasets. The results demonstrate the effectiveness of the InceptionResNetV2 and ViT combination, achieving high accuracy and robustness in deepfake detection across different types of manipulations, including face swapping and facial re-enactment. Additionally, the utilization of transfer learning significantly reduces the training time and computational resources required to train the deepfake detection system. This research's outcomes contribute to advancing deepfake detection techniques by leveraging state-of-the-art architectures for feature extraction and classification.
The fusion of InceptionResNetV2 and ViT, along with the implementation of transfer learning, offers a powerful and efficient solution for accurate deepfake detection, thereby safeguarding the integrity and trustworthiness of visual media in an era of increasing digital manipulation.
APA, Harvard, Vancouver, ISO, and other styles
7

Chang, Ching-Tang, and 張景棠. "Detecting Deepfake Videos with CNN and Image Partitioning." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5394052%22.&searchmode=basic.

Full text source
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 107 (2018)
AI-generated images are becoming increasingly similar to real photographs. When generated images are used in inappropriate cases, they can cause damage to people's rights and benefits, and such doubtful images raise legal problems. The issue of detecting digital forgery has existed for many years. However, the fake images enabled by advances in science and technology are more difficult to distinguish. Therefore, this thesis uses deep learning technology to detect controversial face-manipulation images. We propose to segment the image block by block and use a CNN to train the features of each block separately. Finally, each feature is voted on in an ensemble model to detect forged images. Specifically, we recognize Faceswap, DeepFakes, and Face2Face with the dataset provided by FaceForensics++. Nowadays, classifiers require not only high accuracy but also robustness across different datasets. Therefore, we train on some data to test whether the model is robust on other data. We collected digital forgeries generated by different methods on video-sharing platforms to test the generalization of our model in detecting these forgeries.
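The block-by-block voting scheme can be sketched as follows; the per-block "classifier" here is a toy variance rule standing in for the thesis's trained CNNs, and the block size and threshold are assumptions for illustration.

```python
import numpy as np

def block_votes(image, block=32):
    """Split an image into non-overlapping blocks; a per-block classifier
    votes, and the majority decides real vs. fake. The stand-in
    'classifier' flags blocks with unusually low variance (a toy rule)."""
    h, w = image.shape
    votes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            votes.append(1 if patch.var() < 0.01 else 0)  # 1 = "fake" vote
    return int(np.mean(votes) >= 0.5)   # majority vote over all blocks

rng = np.random.default_rng(5)
natural = rng.random((128, 128))                       # high-variance texture
smooth = 0.5 + 0.001 * rng.standard_normal((128, 128)) # suspiciously flat
print(block_votes(natural), block_votes(smooth))  # 0 1
```

Replacing the variance rule with a per-block CNN score and the majority vote with an ensemble model gives the structure the thesis describes.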
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "DETECTING DEEPFAKES"

1

Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. Taylor & Francis Group, 2022.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. Taylor & Francis Group, 2022.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. Taylor & Francis Group, 2022.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. CRC Press LLC, 2022.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. CRC Press, 2022.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Busch, Christoph, Christian Rathgeb, Ruben Vera-Rodriguez, and Ruben Tolosana. Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks. Springer International Publishing AG, 2021.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
7

Busch, Christoph, Christian Rathgeb, Ruben Vera-Rodriguez, and Ruben Tolosana. Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks. Springer International Publishing AG, 2021.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Abdul-Majeed, Ghassan H., Adriana Burlea-Schiopoiu, Parul Aggarwal, and Ahmed J. Obaid. Handbook of Research on Advanced Practical Approaches to Deepfake Detection and Applications. IGI Global, 2022.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Abdul-Majeed, Ghassan H., Adriana Burlea-Schiopoiu, Parul Aggarwal, and Ahmed J. Obaid. Handbook of Research on Advanced Practical Approaches to Deepfake Detection and Applications. IGI Global, 2022.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Abdul-Majeed, Ghassan H., Adriana Burlea-Schiopoiu, Parul Aggarwal, and Ahmed J. Obaid. Handbook of Research on Advanced Practical Approaches to Deepfake Detection and Applications. IGI Global, 2022.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "DETECTING DEEPFAKES"

1

Korshunov, Pavel, and Sébastien Marcel. "The Threat of Deepfakes to Computer and Human Visions." In Handbook of Digital Face Manipulation and Detection, 97–115. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_5.

Abstract: Deepfake videos, where a person's face is automatically swapped with the face of someone else, are becoming easier to generate, with increasingly realistic results. Concern is growing over the impact of widespread deepfake videos on societal trust in video recordings. In this chapter, we demonstrate how dangerous deepfakes are to both human and computer vision by showing how well these videos can fool face recognition algorithms and naïve human subjects. We also show how well state-of-the-art deepfake detection algorithms can detect deepfakes and whether they can outperform humans.
2

Zobaed, Sm, Fazle Rabby, Istiaq Hossain, Ekram Hossain, Sazib Hasan, Asif Karim, and Khan Md. Hasib. "DeepFakes: Detecting Forged and Synthetic Media Content Using Machine Learning." In Advanced Sciences and Technologies for Security Applications, 177–201. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88040-8_7.

3

Li, Yuezun, Pu Sun, Honggang Qi, and Siwei Lyu. "Toward the Creation and Obstruction of DeepFakes." In Handbook of Digital Face Manipulation and Detection, 71–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_4.

Abstract: AI-synthesized face-swapping videos, commonly known as DeepFakes, are an emerging problem threatening the trustworthiness of online information. The need to develop and evaluate DeepFake detection algorithms calls for large-scale datasets. However, current DeepFake datasets suffer from low visual quality and do not resemble the DeepFake videos circulated on the Internet. We present a new large-scale, challenging DeepFake video dataset, Celeb-DF, which contains 5,639 high-quality DeepFake videos of celebrities generated using an improved synthesis process. We conduct a comprehensive evaluation of DeepFake detection methods and datasets to demonstrate the escalated level of challenge posed by Celeb-DF. We then introduce Landmark Breaker, the first dedicated method for disrupting facial landmark extraction, and apply it to obstruct the generation of DeepFake videos. The experiments are conducted on three state-of-the-art facial landmark extractors using our Celeb-DF dataset.
4

Hernandez-Ortega, Javier, Ruben Tolosana, Julian Fierrez, and Aythami Morales. "DeepFakes Detection Based on Heart Rate Estimation: Single- and Multi-frame." In Handbook of Digital Face Manipulation and Detection, 255–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_12.

Abstract: This chapter describes a DeepFake detection framework based on physiological measurement. In particular, we consider information related to heart rate using remote photoplethysmography (rPPG). rPPG methods analyze video sequences looking for subtle color changes in the human skin that reveal the presence of blood under the tissues. This chapter explores to what extent rPPG is useful for the detection of DeepFake videos. We analyze the recent fake detector DeepFakesON-Phys, which is based on a Convolutional Attention Network (CAN) that extracts spatial and temporal information from video frames, analyzing and combining both sources to better detect fake videos. DeepFakesON-Phys has been experimentally evaluated on the latest public databases in the field: Celeb-DF v2 and DFDC. The results achieved for DeepFake detection based on a single frame are over 98% AUC (Area Under the Curve) on both databases, proving the success of fake detectors based on physiological measurement against the latest DeepFake videos. We also propose and study heuristic and statistical approaches for performing continuous DeepFake detection by combining scores from consecutive frames with low latency and high accuracy (100% on the Celeb-DF v2 evaluation dataset). We show that combining scores extracted from short video sequences can improve the discrimination power of DeepFakesON-Phys.
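The continuous-detection idea described in this abstract, fusing per-frame fake scores over a short window to trade a little latency for accuracy, can be sketched as follows. This is an illustrative toy, not the authors' DeepFakesON-Phys code; the function name and the simple mean-plus-threshold fusion rule are assumptions.

```python
# Toy sketch of continuous deepfake detection by combining per-frame
# scores, in the spirit of the DeepFakesON-Phys chapter. All names and
# the mean-plus-threshold rule are illustrative assumptions.
from statistics import mean

def combine_scores(frame_scores, window=8, threshold=0.5):
    """Average per-frame fake scores over a sliding window and
    flag each window whose mean exceeds the threshold."""
    flags = []
    for start in range(len(frame_scores) - window + 1):
        segment = frame_scores[start:start + window]
        flags.append(mean(segment) > threshold)
    return flags

# A single-frame outlier (the 0.2 here) is smoothed away by the window.
scores = [0.9, 0.8, 0.2, 0.9, 0.85, 0.9, 0.8, 0.9, 0.95, 0.9]
print(combine_scores(scores))  # prints [True, True, True]
```

A larger window improves robustness to noisy per-frame scores at the cost of detection latency, which is the trade-off the chapter studies.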
5

Lyu, Siwei. "DeepFake Detection." In Multimedia Forensics, 313–31. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_12.

Abstract: One particularly disconcerting form of disinformation is impersonating audio/video backed by advanced AI technologies, in particular deep neural networks (DNNs). These media forgeries are commonly known as DeepFakes. AI-based tools are making it easier and faster than ever to create compelling fakes that are challenging to spot. While there are interesting and creative applications of this technology, it can also be weaponized to cause negative consequences. In this chapter, we survey the state-of-the-art DeepFake detection methods.
6

Hao, Hanxiang, Emily R. Bartusiak, David Güera, Daniel Mas Montserrat, Sriram Baireddy, Ziyue Xiang, Sri Kalyan Yarlagadda, et al. "Deepfake Detection Using Multiple Data Modalities." In Handbook of Digital Face Manipulation and Detection, 235–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_11.

Abstract: Falsified media threatens key areas of our society, ranging from politics to journalism to economics. Simple and inexpensive tools available today enable easy, credible manipulation of multimedia assets. Some even utilize advanced artificial intelligence concepts to manipulate media, resulting in videos known as deepfakes. Social media platforms and their "echo chamber" effect propagate fabricated digital content at scale, sometimes with dire consequences in real-world situations. However, ensuring semantic consistency across falsified media assets of different modalities is still very challenging for current deepfake tools. Cross-modal analysis (e.g., video-based and audio-based analysis) therefore gives forensic analysts an opportunity to identify inconsistencies with higher accuracy. In this chapter, we introduce several approaches to detect deepfakes that leverage different data modalities, including video and audio. We show that the presented methods achieve accurate detection on various large-scale datasets.
7

Raturi, Sonali, Amit Kumar Mishra, and Srabanti Maji. "Fake News Detection Using Machine Learning." In DeepFakes, 121–33. New York: CRC Press, 2022. http://dx.doi.org/10.1201/9781003231493-10.

8

Rastogi, Shreya, Amit Kumar Mishra, and Loveleen Gaur. "Detection of DeepFakes Using Local Features and Convolutional Neural Network." In DeepFakes, 73–89. New York: CRC Press, 2022. http://dx.doi.org/10.1201/9781003231493-6.

9

Bhilare, Omkar, Rahul Singh, Vedant Paranjape, Sravan Chittupalli, Shraddha Suratkar, and Faruk Kazi. "DEEPFAKE CLI: Accelerated Deepfake Detection Using FPGAs." In Parallel and Distributed Computing, Applications and Technologies, 45–56. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-29927-8_4.

10

Jiang, Liming, Wayne Wu, Chen Qian, and Chen Change Loy. "DeepFakes Detection: the Dataset and Challenge." In Handbook of Digital Face Manipulation and Detection, 303–29. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_14.

Abstract: Recent years have witnessed exciting progress in automatic face swapping and editing. Many techniques have been proposed, facilitating the rapid development of creative content creation. The emergence and easy accessibility of such techniques, however, also raise unprecedented ethical and moral issues. To this end, academia and industry have proposed several effective forgery detection methods. Nonetheless, challenges remain. (1) Current face manipulation advances can produce high-fidelity fake videos, rendering forgery detection challenging. (2) The generalization capability of most existing detection models is poor, particularly in real-world scenarios where the media sources and distortions are unknown. The primary difficulty in overcoming these challenges is the lack of amenable datasets for real-world face forgery detection. Most existing datasets are either small, of low quality, or overly artificial. Meanwhile, the large distribution gap between training data and actual test videos also leads to weak generalization ability. In this chapter, we present our ongoing effort to construct DeeperForensics-1.0, a large-scale forgery detection dataset, to address the challenges above. We discuss approaches to ensure the quality and diversity of the dataset. We also describe the observations obtained from organizing the DeeperForensics Challenge 2020, a real-world face forgery detection competition based on DeeperForensics-1.0. Specifically, we summarize the winning solutions and provide some discussion of potential research directions.

Conference papers on the topic "DETECTING DEEPFAKES"

1

Celebi, Naciye, Qingzhong Liu, and Muhammed Karatoprak. "A Survey of Deep Fake Detection for Trial Courts." In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120919.

Abstract: Recently, image manipulation has grown rapidly due to the advancement of sophisticated image editing tools. DeepFake refers to a recent surge of fake imagery and video generated with neural networks. DeepFake algorithms can create fake images and videos that humans cannot distinguish from authentic ones, and generative adversarial networks (GANs) have been extensively used to create realistic images without access to the originals. It has therefore become essential to detect fake videos to avoid the spread of false information. This paper presents a survey of methods used to detect DeepFakes and of the datasets available for DeepFake detection in the literature to date. We present extensive discussion and research trends related to DeepFake technologies.
2

Kumar, Akash, Arnav Bhavsar, and Rajesh Verma. "Detecting Deepfakes with Metric Learning." In 2020 8th International Workshop on Biometrics and Forensics (IWBF). IEEE, 2020. http://dx.doi.org/10.1109/iwbf49977.2020.9107962.

3

Dheeraj, J. C., Krutant Nandakumar, A. V. Aditya, B. S. Chethan, and G. C. R. Kartheek. "Detecting Deepfakes Using Deep Learning." In 2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT). IEEE, 2021. http://dx.doi.org/10.1109/rteict52294.2021.9573740.

4

Lacerda, Gustavo Cunha, and Raimundo Claudio da Silva Vasconcelos. "A Machine Learning Approach for DeepFake Detection." In Anais Estendidos da Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/sibgrapi.est.2022.23272.

Abstract: With the spread of DeepFake techniques, this technology has become quite accessible and good enough that there is concern about its malicious use. Faced with this problem, detecting forged faces is of utmost importance to ensure security and avoid socio-political problems, on both a global and a private scale. This paper presents a solution for the detection of DeepFakes using convolutional neural networks and Celeb-DF, a dataset developed for this purpose. The results show that, with an overall accuracy of 95% in classifying these images, the proposed model is close to the state of the art, with the possibility of adjustment for better results on the manipulation techniques that arise in the future.
5

Shiohara, Kaede, and Toshihiko Yamasaki. "Detecting Deepfakes with Self-Blended Images." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01816.

6

Mallet, Jacob, Rushit Dave, Naeem Seliya, and Mounika Vanamala. "Using Deep Learning to Detecting Deepfakes." In 2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI). IEEE, 2022. http://dx.doi.org/10.1109/iscmi56532.2022.10068449.

7

Khichi, Manish, and Rajesh Kumar Yadav. "Analyzing the Methods for Detecting Deepfakes." In 2021 3rd International Conference on Advances in Computing, Communication Control and Networking (ICAC3N). IEEE, 2021. http://dx.doi.org/10.1109/icac3n53548.2021.9725773.

8

Malik, Yushaa Shafqat, Nosheen Sabahat, and Muhammad Osama Moazzam. "Image Animations on Driving Videos with DeepFakes and Detecting DeepFakes Generated Animations." In 2020 IEEE 23rd International Multitopic Conference (INMIC). IEEE, 2020. http://dx.doi.org/10.1109/inmic50486.2020.9318064.

9

Hosler, Brian, Davide Salvi, Anthony Murray, Fabio Antonacci, Paolo Bestagini, Stefano Tubaro, and Matthew C. Stamm. "Do Deepfakes Feel Emotions? A Semantic Approach to Detecting Deepfakes Via Emotional Inconsistencies." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00112.

10

He, Yang, Ning Yu, Margret Keuper, and Mario Fritz. "Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/349.

Abstract: The rapid advances in deep generative models over the past years have led to highly realistic media, known as deepfakes, that are commonly indistinguishable from real content to human eyes. These advances make assessing the authenticity of visual data increasingly difficult and pose a misinformation threat to the trustworthiness of visual content in general. Although recent work has shown strong detection accuracy on such deepfakes, the success largely relies on identifying frequency artifacts in the generated images, which will not yield a sustainable detection approach as generative models continue evolving and closing the gap to real images. To overcome this issue, we propose a novel fake detection approach designed to re-synthesize testing images and extract visual cues for detection. The re-synthesis procedure is flexible, allowing us to incorporate a series of visual tasks; we adopt super-resolution, denoising, and colorization as the re-synthesis. We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios involving multiple generators over the CelebA-HQ, FFHQ, and LSUN datasets. Source code is available at https://github.com/SSAW14/BeyondtheSpectrum.
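The core intuition behind the re-synthesis approach in this abstract, that real content survives a restoration step with a small residual while synthesized content does not, can be illustrated with a deliberately simplified sketch. The 1-D mean filter below stands in for the learned super-resolution/denoising models used in the paper; every name here is a hypothetical illustration, not the authors' implementation.

```python
# Deliberately simplified illustration of re-synthesis-based detection:
# re-synthesize the input with a restoration step, then use the size of
# the reconstruction residual as a cue. A naive 1-D mean filter stands
# in for a learned restoration network; all names are illustrative.

def mean_filter(signal, k=3):
    """Stand-in re-synthesis model: local averaging over a window of k."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def residual_score(signal):
    """Mean absolute residual between the input and its re-synthesis."""
    resynth = mean_filter(signal)
    return sum(abs(a - b) for a, b in zip(signal, resynth)) / len(signal)

smooth = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]  # reconstructs perfectly
jagged = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # leaves a large residual
print(residual_score(smooth) < residual_score(jagged))  # prints True
```

In the actual method the residuals (and other visual cues from super-resolution, denoising, and colorization) feed a learned classifier rather than a fixed comparison like this one.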