
Journal articles on the topic "DETECTING DEEPFAKES"

Consult the top 50 journal articles for your research on the topic "DETECTING DEEPFAKES".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference for the chosen source automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such information is available in the publication's metadata.

Browse journal articles across a wide range of disciplines and compile an accurate bibliography.

1

Mai, Kimberly T., Sergi Bray, Toby Davies, and Lewis D. Griffin. "Warning: Humans cannot reliably detect speech deepfakes." PLOS ONE 18, no. 8 (August 2, 2023): e0285333. http://dx.doi.org/10.1371/journal.pone.0285333.

Abstract:
Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest security threats arising from progress in artificial intelligence due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand if language affects detection performance and decision-making rationale. We found that detection capability is unreliable. Listeners only correctly spotted the deepfakes 73% of the time, and there was no difference in detectability between the two languages. Increasing listener awareness by providing examples of speech deepfakes only improves results slightly. As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.
2

Dobber, Tom, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese. "Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?" International Journal of Press/Politics 26, no. 1 (July 25, 2020): 69–91. http://dx.doi.org/10.1177/1940161220944364.

Abstract:
Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes by enabling malicious political actors to tailor deepfakes to the susceptibilities of the receiver. In this study, we constructed a political deepfake (video and audio) and studied its effects on political attitudes in an online experiment (N = 278). We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician's party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.
3

Vinogradova, Ekaterina. "The malicious use of political deepfakes and attempts to neutralize them in Latin America." Latinskaia Amerika, no. 5 (2023): 35. http://dx.doi.org/10.31857/s0044748x0025404-3.

Abstract:
Deepfake technology has revolutionized the field of artificial intelligence and communication, creating a real threat of disinformation aimed at target audiences on digital platforms. The malicious use of political deepfakes became widespread between 2017 and 2023. Political leaders in Argentina, Brazil, Colombia and Mexico were attacked with elements of doxing. Fake videos that used the politicians' faces undermined their reputations, diminished the electorate's trust, and became an advanced tool for manipulating public opinion. A series of political deepfakes has confronted the countries of the Latin American region with the task of developing timely legal regulation of this threat. The purpose of this study is to identify the threats posed by the uncontrolled use of political deepfakes in Latin America. In line with this purpose, the author analyzes political deepfakes, identifies the main threats arising from the use of deepfake technology, and examines the legislative particulars of its use in Latin America. The article describes the main detectors and programs for detecting malicious deepfakes and introduces a scientific definition of the political deepfake.
4

Singh, Preeti, Khyati Chaudhary, Gopal Chaudhary, Manju Khari, and Bharat Rawal. "A Machine Learning Approach to Detecting Deepfake Videos: An Investigation of Feature Extraction Techniques." Journal of Cybersecurity and Information Management 9, no. 2 (2022): 42–50. http://dx.doi.org/10.54216/jcim.090204.

Abstract:
Deepfake videos are a growing concern today as they can be used to spread misinformation and manipulate public opinion. In this paper, we investigate the use of different feature extraction techniques for detecting deepfake videos using machine learning algorithms. We explore three feature extraction techniques, namely facial landmark detection, optical flow, and frequency analysis, and evaluate their effectiveness in detecting deepfake videos. We compare the performance of different machine learning algorithms and analyze their ability to detect deepfakes using the extracted features. Our experimental results show that the combination of facial landmark detection and frequency analysis provides the best performance in detecting deepfake videos, with an accuracy of over 95%. Our findings suggest that machine learning algorithms can be a powerful tool in detecting deepfake videos, and feature extraction techniques play a crucial role in achieving high accuracy.
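
For intuition, the frequency-analysis features mentioned above can be pictured with a small sketch: an azimuthally averaged FFT power spectrum per face crop, fed to a scikit-learn classifier. This is an illustrative assumption, not the authors' implementation, and the function names are hypothetical.

```python
# Hypothetical sketch of frequency-analysis features for deepfake detection;
# not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def radial_power_spectrum(gray, n_bins=64):
    """Azimuthally averaged log power spectrum of a grayscale face crop."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.log1p(np.abs(f) ** 2)
    h, w = gray.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    return sums / counts  # mean spectral energy per radial frequency band

def train_detector(faces, labels):
    """faces: list of 2-D arrays; labels: 1 = deepfake, 0 = real."""
    X = np.stack([radial_power_spectrum(np.asarray(f, float)) for f in faces])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```
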
5

Das, Rashmiranjan, Gaurav Negi, and Alan F. Smeaton. "Detecting Deepfake Videos Using Euler Video Magnification." Electronic Imaging 2021, no. 4 (January 18, 2021): 272–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-272.

Abstract:
Recent advances in artificial intelligence make it increasingly hard to distinguish between genuine and counterfeit media, especially images and videos. One recent development is the rise of deepfake videos, which manipulate videos using advanced machine learning techniques by replacing the face of an individual in a source video with the face of a second person in the destination video. The idea is becoming ever more refined as deepfakes grow more seamless and simpler to compute. Combined with the reach and speed of social media, deepfakes could easily fool individuals by depicting someone saying things that never happened, and thus could persuade people into believing fictional scenarios, creating distress, and spreading fake news. In this paper, we examine a technique for the possible identification of deepfake videos. We use Euler video magnification, which applies spatial decomposition and temporal filtering to video data to highlight and magnify hidden features such as skin pulsation and subtle motions. Our approach uses features extracted with the Euler technique to train three models to classify counterfeit and unaltered videos, and we compare the results with existing techniques.
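
For intuition, a minimal sketch of the temporal-filtering step of Euler video magnification might look as follows. It omits the spatial (pyramid) decomposition, uses assumed band limits around typical pulse frequencies, and is not the paper's code.

```python
# Simplified temporal filtering step of Euler video magnification
# (no spatial pyramid); a sketch, not the paper's implementation.
import numpy as np
from scipy.signal import butter, filtfilt

def temporal_bandpass(frames, fps, low_hz=0.8, high_hz=3.0, order=2):
    """frames: array (T, H, W) of grayscale frames. Returns the
    band-passed component (e.g., skin pulsation around 48-180 bpm)."""
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=fps)
    return filtfilt(b, a, frames.astype(float), axis=0)

def magnify(frames, fps, alpha=20.0):
    """Add the amplified band-passed signal back onto the video;
    the magnified signal can then feed a real/fake classifier."""
    return frames + alpha * temporal_bandpass(frames, fps)
```
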
6

Raza, Ali, Kashif Munir, and Mubarak Almutairi. "A Novel Deep Learning Approach for Deepfake Image Detection." Applied Sciences 12, no. 19 (September 29, 2022): 9820. http://dx.doi.org/10.3390/app12199820.

Abstract:
Deepfake technology is used in synthetic media to generate fake visual and audio content based on a person's existing media, replacing the person's face and voice to make the fake look realistic. Generating fake media content is unethical and a threat to the community. Nowadays, deepfakes are heavily misused in cybercrime for identity theft, cyber extortion, fake news, financial fraud, blackmail with fake celebrity obscene videos, and much more. According to a recent Sensity report, over 96% of deepfakes contain obscene content, with most victims being from the United Kingdom, United States, Canada, India, and South Korea. In 2019, cybercriminals generated fake audio of a chief executive officer to call his organization and request a transfer of $243,000 to their bank account. Deepfake crimes are rising daily. Deepfake media detection is a major challenge and is in high demand in digital forensics, and an advanced research approach must be built to protect victims from blackmail by detecting deepfake content. The primary aim of our research is to detect deepfake media using an efficient framework. This study proposes a novel deepfake predictor (DFP) approach based on a hybrid of VGG16 and a convolutional neural network architecture. A deepfake dataset of real and fake faces is used to build the neural network techniques, with Xception, NASNet, MobileNet, and VGG16 employed as transfer learning baselines for comparison. The proposed DFP approach achieved 95% precision and 94% accuracy for deepfake detection, outperforming the transfer learning techniques and other state-of-the-art studies. Our approach helps cybersecurity professionals counter deepfake-related cybercrimes by accurately detecting deepfake content and saving deepfake victims from blackmail.
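
One plausible reading of such a VGG16-plus-CNN hybrid, sketched in Keras, is shown below. The layer sizes and the frozen backbone are assumptions, not the paper's exact DFP architecture.

```python
# A plausible reading of the VGG16 + CNN hybrid ("DFP") described above;
# layer sizes are assumptions, not the paper's exact architecture.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep pretrained convolutional features fixed

model = models.Sequential([
    base,
    layers.Conv2D(64, 3, padding="same", activation="relu"),  # extra CNN block
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # real (0) vs deepfake (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```
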
7

Jameel, Wildan J., Suhad M. Kadhem, and Ayad R. Abbas. "Detecting Deepfakes with Deep Learning and Gabor Filters." ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 10, no. 1 (March 18, 2022): 18–22. http://dx.doi.org/10.14500/aro.10917.

Abstract:
The proliferation of editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do actions or say words that they never did or said, so developing an algorithm for deepfake detection is very important to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of input data frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using red-green-blue color information. The purpose of this paper is to give the reader a deeper view of (1) enhancing the efficiency of distinguishing fake facial images from real ones by developing a novel model based on deep learning and Gabor filters, and (2) how deep learning (a CNN), combined with a forensic tool (Gabor filters), contributes to the detection of deepfakes. Our experiments show that training accuracy reaches about 98.06%, with 97.50% validation accuracy. Compared with state-of-the-art methods, the proposed model has higher efficiency.
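
A sketch of such a 16-orientation Gabor front end, assuming OpenCV and illustrative filter parameters (the paper's exact kernel settings are not given):

```python
# Sketch of a 16-orientation Gabor filter bank whose responses feed a
# binary CNN; parameter values are illustrative, not the paper's.
import cv2
import numpy as np

def gabor_responses(gray, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    """Stack of 16 Gabor responses capturing fine texture details."""
    thetas = np.arange(16) * np.pi / 16  # 16 evenly spaced orientations
    responses = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, 0)
        responses.append(cv2.filter2D(gray.astype(np.float32),
                                      cv2.CV_32F, kernel))
    return np.stack(responses, axis=-1)  # (H, W, 16) tensor for the CNN
```
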
8

Giudice, Oliver, Luca Guarnera, and Sebastiano Battiato. "Fighting Deepfakes by Detecting GAN DCT Anomalies." Journal of Imaging 7, no. 8 (July 30, 2021): 128. http://dx.doi.org/10.3390/jimaging7080128.

Abstract:
To properly counter the deepfake phenomenon, new deepfake detection algorithms must be designed: the misuse of this formidable A.I. technology brings serious consequences to the private life of every person involved. The state of the art proliferates with solutions that use deep neural networks to detect fake multimedia content, but unfortunately these algorithms appear to be neither generalizable nor explainable. However, traces left by Generative Adversarial Network (GAN) engines during the creation of deepfakes can be detected by analyzing ad hoc frequencies. For this reason, in this paper we propose a new pipeline able to detect the so-called GAN Specific Frequencies (GSF), which represent a unique fingerprint of the different generative architectures. Anomalous frequencies are detected by employing the Discrete Cosine Transform (DCT), and the β statistics inferred from the distribution of the AC coefficients are the key to recognizing GAN-generated data. Robustness tests were also carried out to demonstrate the effectiveness of the technique under different attacks on the images, such as JPEG compression, mirroring, rotation, scaling, and the addition of random-sized rectangles. Experiments demonstrate that the method is innovative, exceeds the state of the art, and also gives many insights in terms of explainability.
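
For intuition, here is a simplified sketch of per-frequency AC statistics over 8x8 block DCTs. The scale estimate below stands in for the paper's β statistics (a Laplacian-family shape/scale fit) and is an assumption, not the authors' exact estimator.

```python
# Sketch of per-frequency AC statistics from 8x8 block DCTs; the scale
# estimate approximates the paper's beta statistics. Illustrative only.
import numpy as np
from scipy.fft import dctn

def ac_statistics(gray):
    """Estimate a Laplacian scale parameter for each of the 63 AC
    frequencies over all 8x8 blocks of a grayscale image."""
    h, w = (d - d % 8 for d in gray.shape)          # crop to 8x8 multiples
    blocks = gray[:h, :w].reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    coeffs = np.stack([dctn(b, norm="ortho")
                       for b in blocks.reshape(-1, 8, 8)])
    stats = []
    for k in range(1, 64):                          # skip the DC term (k = 0)
        ac = coeffs[:, k // 8, k % 8]
        stats.append(np.mean(np.abs(ac - np.median(ac))))  # Laplacian scale MLE
    return np.asarray(stats)                        # 63-dim generator fingerprint
```
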
9

Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (April 13, 2022): 3926. http://dx.doi.org/10.3390/app12083926.

Abstract:
Fake media generated by methods such as deepfakes have become indistinguishable from real media, but their detection has not improved at the same pace. Furthermore, the absence of interpretability in deepfake detection models makes their reliability questionable. In this paper, we present a human-perception level of interpretability for deepfake audio detection. Based on their characteristics, we implement several explainable artificial intelligence (XAI) methods used for image classification on an audio-related task. In addition, by examining the human cognitive process of XAI in image classification, we suggest the use of a corresponding data format for providing interpretability. Using this novel concept, a fresh interpretation using attribution scores can be provided.
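
As a flavor of such attribution methods, a minimal gradient-saliency sketch over a spectrogram input is shown below; the model is a placeholder, not the paper's network, and gradient saliency is just one of many XAI options.

```python
# Minimal gradient-saliency attribution on a spectrogram input, in the
# spirit of the XAI methods discussed; the model is a placeholder.
import torch

def saliency_map(model, spectrogram):
    """spectrogram: tensor (1, 1, F, T). Returns |d score / d input|,
    highlighting time-frequency regions driving the fake/real decision."""
    model.eval()
    x = spectrogram.clone().requires_grad_(True)
    score = model(x)[0, 0]       # scalar logit for the "fake" class
    score.backward()
    return x.grad.abs().squeeze()  # same (F, T) shape as the input
```
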
10

Gadgilwar, Jitesh, Kunal Rahangdale, Om Jaiswal, Parag Asare, Pratik Adekar, and Prof Leela Bitla. "Exploring Deepfakes - Creation Techniques, Detection Strategies, and Emerging Challenges: A Survey." International Journal for Research in Applied Science and Engineering Technology 11, no. 3 (March 31, 2023): 1491–95. http://dx.doi.org/10.22214/ijraset.2023.49681.

Abstract:
Deep learning, integrated with artificial intelligence algorithms, has brought about numerous beneficial practical technologies, but it also raises a problem the world faces today: despite its innumerable suitable applications, it poses a danger to personal privacy, democracy, and corporate credibility. One such application is the deepfake, which has caused chaos on the internet. Deepfakes manipulate an individual's images and videos, creating problems in differentiating the original from the fake, which today calls for a solution that can counter and automatically detect such media. This study aims to explore the techniques for deepfake creation and detection, using various methods of algorithm analysis and image analysis to find the root of deepfake creation. It examines image, audio, and machine learning algorithms to extract possible signs for analyzing deepfakes, and compares the performance of these methods in detecting deepfakes generated using different techniques and datasets. As deepfake technology evolves rapidly, we need avant-garde techniques to counter it and detect its presence accurately.
11

Dobrobaba, M. B. "Deepfakes as a Threat to Human Rights." Lex Russica 75, no. 11 (November 14, 2022): 112–19. http://dx.doi.org/10.17803/1729-5920.2022.192.11.112-119.

Abstract:
The paper is devoted to deepfake technologies, a new threat to human rights arising amid digitalization. The author shows that the use of such technologies is a tool that can have both positive and negative effects. In particular, the use of deepfakes entails a threat to privacy and violations of the honor and dignity of citizens. The legislator is therefore faced with the task of developing and implementing a set of measures whose application will minimize the possibility of citizens' rights being violated by deepfake technologies. It is proposed to direct the efforts of the state toward: creating automated tools for detecting deepfakes that can determine the date, time, and physical origin of their contents and that, given signs of potential danger, can stop the placement and distribution of deepfakes; developing fact-checking services and tools; regulatory consolidation of requirements for the mandatory labeling of deepfake content in social networks and messengers; and forming a regulatory framework providing for responsibility for the creation and distribution of deepfake materials that can damage business reputation or humiliate the honor and dignity of a person. From the standpoint of copyright, it is proposed to consider deepfakes through the prism of derivative works, in which the use of the original work without the consent of its copyright holder is illegal. Taking into account the threats deepfake technologies pose to human rights, the application of this set of measures will minimize the possibility of citizens' rights being violated by means of these technologies.
12

Salvi, Davide, Honggu Liu, Sara Mandelli, Paolo Bestagini, Wenbo Zhou, Weiming Zhang, and Stefano Tubaro. "A Robust Approach to Multimodal Deepfake Detection." Journal of Imaging 9, no. 6 (June 19, 2023): 122. http://dx.doi.org/10.3390/jimaging9060122.

Abstract:
The widespread use of deep learning techniques for creating realistic synthetic media, commonly known as deepfakes, poses a significant threat to individuals, organizations, and society. As the malicious use of these data could lead to unpleasant situations, it is becoming crucial to distinguish between authentic and fake media. Nonetheless, though deepfake generation systems can create convincing images and audio, they may struggle to maintain consistency across different data modalities, such as producing a realistic video sequence where both visual frames and speech are fake yet consistent with each other. Moreover, these systems may not accurately reproduce semantically and temporally accurate aspects. All these elements can be exploited to perform a robust detection of fake content. In this paper, we propose a novel approach for detecting deepfake video sequences by leveraging data multimodality. Our method extracts audio-visual features from the input video over time and analyzes them using time-aware neural networks. We exploit both the video and audio modalities to leverage the inconsistencies between and within them, enhancing the final detection performance. The peculiarity of the proposed method is that we never train on multimodal deepfake data, but on disjoint monomodal datasets which contain visual-only or audio-only deepfakes. This frees us from needing multimodal datasets during training, which is desirable given their scarcity in the literature. Moreover, at test time, it allows us to evaluate the robustness of our proposed detector on unseen multimodal deepfakes. We test different fusion techniques between data modalities and investigate which one leads to more robust predictions by the developed detectors. Our results indicate that a multimodal approach is more effective than a monomodal one, even if trained on disjoint monomodal datasets.
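
A hedged sketch of one possible audio-visual fusion detector with a time-aware (GRU) classifier is shown below; the dimensions and the late-fusion choice are assumptions, not the paper's exact design.

```python
# Sketch of late fusion between audio and visual feature streams with a
# time-aware (GRU) classifier; dimensions are assumptions.
import torch
import torch.nn as nn

class LateFusionDetector(nn.Module):
    def __init__(self, a_dim=128, v_dim=256, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(a_dim + v_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, audio_feats, video_feats):
        # audio_feats: (B, T, a_dim), video_feats: (B, T, v_dim),
        # already aligned over time before fusion
        x = torch.cat([audio_feats, video_feats], dim=-1)
        _, h = self.rnn(x)        # h: (num_layers, B, hidden)
        return self.head(h[-1])   # one real/fake logit per video
```
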
13

Tursman, Eleanor. "Detecting deepfakes using crowd consensus." XRDS: Crossroads, The ACM Magazine for Students 27, no. 1 (September 4, 2020): 22–25. http://dx.doi.org/10.1145/3416061.

14

Mateen, Marium, and Narmeen Zakaria Bawany. "Deep Learning Approach for Detecting Audio Deepfakes in Urdu." NUML International Journal of Engineering and Computing 2, no. 1 (July 26, 2023): 1–11. http://dx.doi.org/10.52015/nijec.v2i1.37.

Abstract:
The application of deep learning algorithms to speech synthesis has led to the widespread generation of audio deepfakes, which are becoming a real threat to voice interfaces. Audio deepfakes are fake audio recordings that are difficult to differentiate from real recordings because they use AI-based techniques to clone human voices. When prominent speakers, celebrities, and politicians are the target of audio deepfakes, this technology can potentially undermine public confidence and trust. It is therefore essential to create efficient methods and technologies to identify and stop the creation and spread of audio deepfakes. Several machine learning and deep learning techniques have been developed recently to detect audio deepfakes and address the critical issue of the widespread circulation of fake audio. However, most such solutions have been trained on English datasets, raising concerns about their accuracy and trustworthiness for other languages. The primary objective of this research is to develop a deep learning model for detecting audio deepfakes in Urdu. For this purpose, the model is trained on an Urdu-language audio dataset prepared from both real and fake audio: real Urdu audio clips were collected first, and deepfakes were then generated from them with the help of the Real-Time Voice Cloning tool. Our deep-learning-based model is built to detect audio deepfakes produced using imitation and synthesis techniques. When tested and evaluated, our model obtained an accuracy of 91 percent.
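
The paper does not spell out its audio front end; as one plausible sketch, MFCC statistics (a common choice for audio deepfake detection) could be extracted like this:

```python
# Sketch of an MFCC front end for audio deepfake detection; MFCCs are
# assumed here, not confirmed by the paper.
import librosa
import numpy as np

def mfcc_features(path, sr=16000, n_mfcc=40):
    """Load a clip and return a fixed-size feature vector: the mean and
    standard deviation of each MFCC over time."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, T)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```

The resulting vectors can then be fed to any binary classifier trained on real versus cloned Urdu clips.
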
15

Choi, Nakhoon, and Heeyoul Kim. "DDS: Deepfake Detection System through Collective Intelligence and Deep-Learning Model in Blockchain Environment." Applied Sciences 13, no. 4 (February 7, 2023): 2122. http://dx.doi.org/10.3390/app13042122.

Abstract:
With the spread of mobile devices and improvements in the mobile service environment, the use of various Internet content providers (ICPs), including content services such as YouTube and video hosting services, has increased significantly. Video content shared on ICPs is used for information delivery and issue checking thanks to its accessibility. However, if the content registered and shared on an ICP is manipulated through deepfakes and maliciously distributed to mount political attacks or cause social problems, it can have a very large negative effect. This study proposes a deepfake detection system that detects manipulated video content distributed through video hosting services while ensuring the transparency and objectivity of the detecting party. The detection method of the proposed system is configured through a blockchain and does not depend on a single ICP, establishing a cooperative system among multiple ICPs and achieving consensus for the common purpose of deepfake detection. In the proposed system, the deep-learning model for detecting deepfakes is run independently by each ICP, and the results are ensembled through an integrated vote. Furthermore, this study proposes a method that supplements the objectivity of the integrated vote and the neutrality of the deep-learning model by folding collective-intelligence-based voting into the integrated voting process through the participation of ICP users, while ensuring high accuracy at the same time. Through the proposed system, the accuracy of the deep-learning model is supplemented by utilizing collective intelligence in the blockchain environment, and the creation of a consortium contract environment for common goals between companies with conflicting interests is illustrated.
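
A toy version of the integrated vote, combining per-ICP model scores with user voting, might look as follows; the weights and threshold are assumptions, and the on-chain consensus machinery is omitted.

```python
# Toy version of the integrated vote: each ICP's model and the user
# crowd contribute weighted "fake" probabilities. Weights are assumed.
def integrated_vote(icp_scores, crowd_score, icp_weight=0.8, threshold=0.5):
    """icp_scores: list of per-ICP model probabilities that a video is
    fake; crowd_score: fraction of participating users voting 'fake'."""
    model_avg = sum(icp_scores) / len(icp_scores)
    combined = icp_weight * model_avg + (1 - icp_weight) * crowd_score
    return combined >= threshold, combined

# e.g., three ICP detectors plus collective intelligence:
is_fake, score = integrated_vote([0.91, 0.84, 0.77], crowd_score=0.65)
```
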
16

Wan, Da, Manchun Cai, Shufan Peng, Wenkai Qin, and Lanting Li. "Deepfake Detection Algorithm Based on Dual-Branch Data Augmentation and Modified Attention Mechanism." Applied Sciences 13, no. 14 (July 18, 2023): 8313. http://dx.doi.org/10.3390/app13148313.

Abstract:
Mainstream deepfake detection algorithms generally fail to fully extract forgery traces and have low accuracy when detecting forged images with natural corruptions or human damage. On this basis, a new algorithm based on an adversarial dual-branch data augmentation framework and a modified attention mechanism is proposed in this paper to improve the robustness of detection models. First, this paper combines the traditional random-sampling augmentation method with the adversarial-sample idea to enhance and expand the forged images during data preprocessing, obtaining training samples with diversity and uniform hardness. Meanwhile, a modified attention mechanism is added to the ResNet50 model; the improved model serves as the backbone, effectively increasing the weight of forgery traces in the multi-scale feature maps. The Jensen–Shannon divergence loss and cosine annealing algorithms are introduced into the training process to improve the model's accuracy and convergence speed. The proposed algorithm is validated on standard and corrupted datasets. The experiments show that it significantly improves effectiveness and robustness, with accuracies 4.16%, 7.37%, and 3.87% higher than the baseline model on DeepFakes, FaceSwap, and FaceShifter, respectively. Most importantly, its detection performance on the corrupted datasets DeepFakes-C, FaceSwap-C, and FaceShifter-C is much higher than that of mainstream methods.
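
A sketch of the Jensen–Shannon consistency term and cosine annealing mentioned above is given below, with a plain ResNet50 standing in for the modified-attention backbone; this is an assumed reading of the recipe, not the authors' code.

```python
# Sketch: JS-divergence consistency between predictions on clean and
# augmented views, plus cosine annealing. Backbone is a stand-in.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def js_divergence(logits_a, logits_b):
    """Symmetric Jensen-Shannon divergence between two prediction sets."""
    p, q = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)
    m = 0.5 * (p + q)
    return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                  + F.kl_div(m.log(), q, reduction="batchmean"))

model = resnet50(num_classes=2)   # stand-in for the modified-attention model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

def training_step(x_clean, x_aug, labels):
    """Supervised loss plus a consistency penalty between the branches;
    call scheduler.step() once per epoch."""
    logits_c, logits_a = model(x_clean), model(x_aug)
    loss = (F.cross_entropy(logits_c, labels)
            + 0.5 * js_divergence(logits_c, logits_a))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss
```
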
17

Frick, Raphael Antonius, Sascha Zmudzinski, and Martin Steinebach. "Detecting Deepfakes with Haralick’s Texture Properties." Electronic Imaging 2021, no. 4 (January 18, 2021): 271–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-271.

Abstract:
In recent years, the detection of deepfake videos has become a major topic in the field of digital media forensics, as the number of such videos circulating on the internet has risen drastically. Content providers, such as Facebook and Amazon, have become aware of this new threat of spreading misinformation on the Internet. In this work, a novel forgery detection method based on the texture analysis known from image classification and segmentation is proposed. In the experimental results, its performance is shown to be comparable to related work.
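
A sketch of Haralick-style texture features via a gray-level co-occurrence matrix (GLCM), assuming scikit-image (0.19+ naming); the exact property set the paper uses may differ.

```python
# Sketch of Haralick-style texture features from a GLCM; property
# selection is an assumption, not the paper's exact feature set.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(gray_u8):
    """gray_u8: 2-D uint8 face crop. Returns contrast, correlation,
    energy, and homogeneity over four co-occurrence angles."""
    glcm = graycomatrix(gray_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```
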
18

Taeb, Maryam, and Hongmei Chi. "Comparison of Deepfake Detection Techniques through Deep Learning." Journal of Cybersecurity and Privacy 2, no. 1 (March 4, 2022): 89–106. http://dx.doi.org/10.3390/jcp2010007.

Abstract:
Deepfakes are realistic-looking fake media generated by deep-learning algorithms that iterate through large datasets until they have learned how to solve the given problem (i.e., swap faces or objects in video and digital content). The massive generation of such content and modification technologies is rapidly affecting the quality of public discourse and the safeguarding of human rights. Deepfakes are being widely used as a malicious source of misinformation in court proceedings, seeking to sway a court's decision. Because digital evidence is critical to the outcome of many legal cases, detecting deepfake media is extremely important and in high demand in digital forensics. As such, it is important to build a classifier that can accurately distinguish between authentic and disguised media, especially in facial-recognition systems, where it can also be used for identity protection. In this work, we compare common, state-of-the-art face-detection classifiers such as a custom CNN, VGG19, and DenseNet-121 using an augmented real and fake face-detection dataset. Data augmentation is used to boost performance and reduce computational resources. Our preliminary results indicate that VGG19 has the best performance, with the highest accuracy of 95% when compared with the other analyzed models.
19

Amatika, Faith. "The Regulation of Deepfakes in Kenya." Journal of Intellectual Property and Information Technology Law (JIPIT) 2, no. 1 (September 15, 2022): 145–86. http://dx.doi.org/10.52907/jipit.v2i1.208.

Abstract:
‘Truth has become elusive.’ ‘We are entering an age of information apocalypse.’ ‘Seeing is no longer believing unless you saw it live.’ These and similar statements characterise most discussions in the present highly digital age. With the borderless nature of the Internet, it is possible to share videos, photos, and information with countless people, provided one has a reliable internet connection and a smart gadget, for instance a mobile phone. Technological advancements have also made it possible for tech-savvy individuals to write computer programs that swap faces with those of celebrities, politicians, et cetera. Yet even more sophisticated technology uses Artificial Intelligence (AI) methods to create videos and photos that are not easily distinguishable from real ones. ‘Deepfakes’ has become a buzzword. Along this line, this paper posits that there is wide-scale misinformation due to deepfakes and assesses the regulation of deepfakes in Kenya to curb that misinformation. It recommends pragmatic ways to train forensic experts and to create awareness among members of the public for detecting deepfakes, hence curbing their negative effects.
20

Arshed, Muhammad Asad, Ayed Alwadain, Rao Faizan Ali, Shahzad Mumtaz, Muhammad Ibrahim, and Amgad Muneer. "Unmasking Deception: Empowering Deepfake Detection with Vision Transformer Network." Mathematics 11, no. 17 (August 29, 2023): 3710. http://dx.doi.org/10.3390/math11173710.

Abstract:
With the development of image-generating technologies, significant progress has been made in the field of facial manipulation techniques. These techniques allow people to easily modify media, such as videos and images, by substituting the identity or facial expression of one person with the face of another. This has significantly increased the availability and accessibility of such tools and of the manipulated content termed ‘deepfakes’. An accurate method for detecting fake images is needed in time to prevent their misuse and manipulation. This paper examines the capability of the Vision Transformer (ViT) to extract global features for detecting deepfake images effectively. After conducting comprehensive experiments, our method demonstrates a high level of effectiveness, achieving detection accuracy, precision, recall, and F1 rates of 99.5 to 100% on both the original and mixed data sets. To the best of our knowledge, this study is a novel research endeavor incorporating real-world applications, specifically the examination of Snapchat-filtered images.
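
A minimal ViT-based binary classifier, sketched with the timm library as an assumption; the paper's exact ViT variant, preprocessing, and training setup may differ.

```python
# Minimal ViT binary classifier via timm; an illustrative assumption,
# not the paper's exact model or training procedure.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=2)

def predict(batch):
    """batch: (B, 3, 224, 224), ImageNet-normalized face crops."""
    model.eval()
    with torch.no_grad():
        return model(batch).softmax(dim=1)[:, 1]  # probability of "deepfake"
```
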
21

Yasrab, Robail, Wanqi Jiang, and Adnan Riaz. "Fighting Deepfakes Using Body Language Analysis." Forecasting 3, no. 2 (April 28, 2021): 303–21. http://dx.doi.org/10.3390/forecast3020020.

Abstract:
Recent improvements in deepfake creation have made deepfake videos more realistic. Moreover, open-source software has made deepfake creation more accessible, which lowers the barrier to entry. This could pose a threat to people's privacy: there is a potential danger if deepfake creation techniques are used by people with ulterior motives to produce deepfake videos of world leaders to disrupt the order of countries and the world. Therefore, research into the automatic detection of deepfaked media is essential for public security. In this work, we propose a deepfake detection method using upper-body language analysis. Specifically, a many-to-one LSTM network was designed and trained as a classification model for deepfake detection. Different models were trained by varying the hyperparameters to build a final model with benchmark accuracy. We achieved 94.39% accuracy on the deepfake test set. The experimental results show that upper-body language can effectively detect deepfakes.
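
A many-to-one LSTM matching the abstract's description in spirit is sketched below; the per-frame keypoint feature size and layer counts are assumptions.

```python
# Many-to-one LSTM over upper-body pose sequences; feature sizes are
# assumptions, not the paper's exact hyperparameters.
import torch
import torch.nn as nn

class BodyLanguageLSTM(nn.Module):
    def __init__(self, n_keypoint_feats=50, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_keypoint_feats, hidden, num_layers=layers,
                            batch_first=True, dropout=0.3)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, pose_seq):
        # pose_seq: (B, T, n_keypoint_feats), per-frame upper-body keypoints
        _, (h_n, _) = self.lstm(pose_seq)
        return self.fc(h_n[-1])  # single real/fake logit per sequence
```
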
22

Tran, Van-Nhan, Suk-Hwan Lee, Hoanh-Su Le, and Ki-Ryong Kwon. "High Performance DeepFake Video Detection on CNN-Based with Attention Target-Specific Regions and Manual Distillation Extraction." Applied Sciences 11, no. 16 (August 20, 2021): 7678. http://dx.doi.org/10.3390/app11167678.

Abstract:
Deep learning models that can produce and synthesize hyper-realistic videos are known as DeepFakes, and the growth of forged data has prompted concerns about its malevolent use. Detecting forged videos is a crucial subject in the field of digital media. Nowadays, most detection models are based on deep neural networks and vision transformers, with the SOTA model built on an EfficientNet-B7 backbone; because of their excessively large backbones, these models have the intrinsic drawback of being too heavy. In our research, a high-performance DeepFake detection model for manipulated video is proposed, ensuring the accuracy of the model while keeping an appropriate weight. We inherited content from previous research on distillation methodology, but our proposal takes a different approach, with manual distillation extraction, target-specific region extraction, data augmentation, frame and multi-region ensembling, a CNN-based model, and flexible classification with a dynamic threshold. Our proposal can reduce the overfitting problem, a common and particularly important problem affecting the quality of many models. To analyze the quality of our model, we performed tests on two datasets: on the DeepFake Detection Challenge (DFDC) dataset, our model obtains an AUC of 0.958 and an F1-score of 0.9243, compared with the SOTA model's AUC of 0.972 and F1-score of 0.906; on the smaller Celeb-DF v2 dataset, it obtains an AUC of 0.978 and an F1-score of 0.9628.
23

Le, Vincent. "The Deepfakes to Come: A Turing Cop’s Nightmare." Identities: Journal for Politics, Gender and Culture 17, no. 2-3 (December 30, 2020): 8–18. http://dx.doi.org/10.51151/identities.v17i2-3.468.

Abstract:
In 1950, Turing proposed to answer the question “can machines think” by staging an “imitation game” where a hidden computer attempts to mislead a human interrogator into believing it is human. While the cybercrime of bots defrauding people by posing as Nigerian princes and lascivious e-girls indicates humans have been losing the Turing test for some time, this paper focuses on “deepfakes,” artificial neural nets generating realistic audio-visual simulations of public figures, as a variation on the imitation game. Deepfakes blur the lines between fact and fiction, making it possible for the mere fiction of a nuclear apocalypse to make itself real. Seeing oneself becoming another, doing and saying strange things as if demonically possessed, triggers a disillusionment of our sense of self as human cloning and sinister doppelgängers become a reality that’s open-source and free. Along with electronic club music, illicit drugs, movies like Ex Machina and the coming sex robots, the primarily pornographic deepfakes are how the aliens invade by hijacking human drives in the pursuit of a machinic desire. Contrary to the popular impression that deepfakes exemplify the post-truth phenomenon of fake news, they mark an anarchic, massively distributed anti-fascist resistance network capable of sabotaging centralized, authoritarian institutions’ hegemonic narratives. That the only realistic “solutions” for detecting deepfakes have been to build better machines capable of exposing them ultimately suggests that human judgment is soon to be discarded into the dustbin of history. From now on, only a machine can win the Turing test against another machine.
24

Frick, Raphael Antonius, Sascha Zmudzinski, and Martin Steinebach. "Detecting “DeepFakes” in H.264 Video Data Using Compression Ghost Artifacts." Electronic Imaging 2020, no. 4 (January 26, 2020): 116–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.4.mwsf-116.

Abstract:
In recent years, the number of forged videos circulating on the Internet has increased immensely, and the software and services used to create such forgeries have become more and more accessible to the public. In this regard, the risk of malicious use of forged videos has risen. This work proposes an approach, based on the ghost effect known from image forensics, for detecting forgeries in videos in which faces have been replaced or facial expressions altered. The experimental results show that the proposed approach is able to identify forgery in high-quality encoded video content.
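
The ghost idea can be illustrated with JPEG recompression for simplicity (the paper itself works on H.264 data); the sketch below is an assumption about the general technique, not the authors' method.

```python
# Sketch of a compression-ghost probe: re-encode a frame at several
# qualities and look for an anomalously low residual, hinting at a
# prior compression of a pasted face region. Illustrative only.
import cv2
import numpy as np

def ghost_curve(frame_bgr, qualities=range(50, 100, 5)):
    """Mean squared residual between the frame and its re-encodings."""
    curve = []
    for q in qualities:
        ok, buf = cv2.imencode(".jpg", frame_bgr,
                               [int(cv2.IMWRITE_JPEG_QUALITY), q])
        recoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)
        curve.append(np.mean((frame_bgr.astype(float) - recoded) ** 2))
    return np.array(curve)  # local minima suggest earlier compression steps
```
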
25

Saxena, Akash, Dharmendra Yadav, Manish Gupta, Sunil Phulre, Tripti Arjariya, Varshali Jaiswal, and Rakesh Kumar Bhujade. "Detecting Deepfakes: A Novel Framework Employing XceptionNet-Based Convolutional Neural Networks." Traitement du Signal 40, no. 3 (June 28, 2023): 835–46. http://dx.doi.org/10.18280/ts.400301.

26

Abu-Ein, Ashraf A., Obaida M. Al-Hazaimeh, Alaa M. Dawood, and Andraws I. Swidan. "Analysis of the current state of deepfake techniques-creation and detection methods." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 3 (October 7, 2022): 1659. http://dx.doi.org/10.11591/ijeecs.v28.i3.pp1659-1667.

Abstract:
Deep learning has effectively solved complicated challenges ranging from big data analytics to human-level control and computer vision. However, it has also been used to produce software that threatens privacy, democracy, and national security. Deepfake is one of these new applications backed by deep learning. Fake images and videos created by deepfake algorithms can be difficult for people to tell apart from real ones, which necessitates the development of tools that automatically detect and evaluate the quality of digital visual media. This paper provides an overview of the algorithms and datasets used to build deepfakes, as well as the approaches presented to detect deepfakes to date. By reviewing the background of deepfake methods, the paper gives a complete overview of deepfake approaches and promotes the creation of new, more robust strategies to deal with increasingly complex deepfakes.
27

Godulla, Alexander, Christian P. Hoffmann, and Daniel Seibert. "Dealing with deepfakes – an interdisciplinary examination of the state of research and implications for communication studies." Studies in Communication and Media 10, no. 1 (2021): 72–96. http://dx.doi.org/10.5771/2192-4007-2021-1-72.

Abstract:
Using artificial intelligence, it is becoming increasingly easy to create highly realistic but fake video content, so-called deepfakes. As a result, it is no longer always possible to distinguish real from machine-generated recordings with the naked eye. Despite the novelty of this phenomenon, regulators and industry players have started to address the risks associated with deepfakes. Yet research on deepfakes is still in its infancy. This paper presents findings from a systematic review of English-language deepfake research to identify salient discussions. We find that, to date, deepfake research is driven by computer science and law, with studies focusing on deepfake detection and regulation. While a number of studies address the potential of deepfakes for political disinformation, few have examined user perceptions of and reactions to deepfakes. Other notable research topics include challenges to journalistic practices and pornographic applications of deepfakes. We identify research gaps and derive implications for future communication studies research.
28

Shahzad, Hina Fatima, Furqan Rustam, Emmanuel Soriano Flores, Juan Luís Vidal Mazón, Isabel de la Torre Diez, and Imran Ashraf. "A Review of Image Processing Techniques for Deepfakes." Sensors 22, no. 12 (June 16, 2022): 4556. http://dx.doi.org/10.3390/s22124556.

Abstract:
Deep learning is used to address a wide range of challenging issues including large data analysis, image processing, object detection, and autonomous control. In the same way, deep learning techniques are also used to develop software and techniques that pose a danger to privacy, democracy, and national security. Fake content in the form of images and videos using digital manipulation with artificial intelligence (AI) approaches has become widespread during the past few years. Deepfakes, in the form of audio, images, and videos, have become a major concern. Complemented by artificial intelligence, deepfakes swap the face of one person with that of another and generate hyper-realistic videos. Amplified by the speed of social media, deepfakes can immediately reach millions of people and can be very dangerous as tools for fake news, hoaxes, and fraud. Besides well-known movie stars, politicians have been victims of deepfakes in the past, especially US presidents Barack Obama and Donald Trump; however, the public at large can also be the target of deepfakes. To overcome the challenge of deepfake identification and mitigate its impact, large efforts have been carried out to devise novel methods to detect face manipulation. This study also discusses how to counter the threats from deepfake technology and alleviate its impact. The outcomes suggest that, despite being a serious threat to society, business, and political institutions, deepfakes can be combated through appropriate policies, regulation, individual actions, training, and education. In addition, the evolution of technology is desired for deepfake identification, content authentication, and deepfake prevention. Different studies have performed deepfake detection using machine learning and deep learning techniques such as support vector machines, random forests, multilayer perceptrons, k-nearest neighbors, convolutional neural networks with and without long short-term memory, and other similar models. This study aims to highlight recent research in deepfake image and video detection, such as deepfake creation, various detection algorithms on self-made datasets, and existing benchmark datasets.
29

Vinay, A., Paras S. Khurana, T. B. Sudarshan, S. Natarajan, Vivek Nagesh, Vishruth Lakshminarayanan, and Niput Bhat. "AFMB-Net." Tehnički glasnik 16, no. 4 (September 26, 2022): 503–8. http://dx.doi.org/10.31803/tg-20220403080215.

Abstract:
With advances in deepfake-generating technology, it is getting increasingly difficult to detect deepfakes. Deepfakes can be used for many malpractices, such as blackmail and political or social-media manipulation. These can lead to widespread misinformation and can be harmful to the reputation of an individual or an institution. It has become important to identify deepfakes effectively; while many machine learning techniques exist to identify them, these methods cannot keep up with the rapidly improving GAN technology used to generate deepfakes. Our project aims to identify deepfakes successfully using machine learning along with heart rate analysis. The heart rate identified by our model is unique to each individual and cannot be spoofed or imitated by a GAN, and the approach is therefore robust against improving GAN technology. To solve the deepfake detection problem, we employ various machine learning models along with heart rate analysis to detect deepfakes.
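
A simplified sketch of the heart-rate (rPPG) signal such methods rely on, estimated from the green channel of face crops; the band limits and ROI handling are illustrative assumptions.

```python
# Sketch of a green-channel rPPG heart-rate estimate from face crops;
# a simplification of heart-rate analysis, not the paper's pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(roi_frames, fps):
    """roi_frames: (T, H, W, 3) RGB face crops. Returns beats per minute."""
    green = roi_frames[..., 1].mean(axis=(1, 2))       # per-frame mean green
    b, a = butter(2, [0.7, 4.0], btype="band", fs=fps)  # 42-240 bpm band
    pulse = filtfilt(b, a, green - green.mean())
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]            # dominant frequency
```

A GAN that renders convincing faces still has no reason to reproduce a physiologically plausible pulse, which is what makes this signal attractive for detection.
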
30

Jiang, Jianguo, Boquan Li, Baole Wei, Gang Li, Chao Liu, Weiqing Huang, Meimei Li, and Min Yu. "FakeFilter: A cross-distribution Deepfake detection system with domain adaptation." Journal of Computer Security 29, no. 4 (June 18, 2021): 403–21. http://dx.doi.org/10.3233/jcs-200124.

Abstract:
Abuse of face-swap techniques poses serious threats to the integrity and authenticity of digital visual media. More alarmingly, fake images or videos created by deep learning technologies, also known as Deepfakes, are more realistic and higher-quality and reveal few tampering traces, which attracts great attention in digital multimedia forensics research. To address the threats imposed by Deepfakes, previous work attempted to classify real and fake faces by discriminative visual features, an approach sensitive to objective conditions such as the angle or posture of a face. Differently, some research devises deep neural networks to discriminate Deepfakes at the microscopic level of image semantics, which achieves promising results. Nevertheless, such methods show limited success when encountering unseen Deepfakes created with methods different from those in the training sets. Therefore, we propose a novel Deepfake detection system, named FakeFilter, in which we formulate the challenge of unseen Deepfake detection as a problem of cross-distribution data classification and address the issue with a domain adaptation strategy. By mapping different distributions of Deepfakes onto similar features in a certain space, the detection system achieves comparable performance on both seen and unseen Deepfakes. Further evaluation and comparison results indicate that the challenge has been successfully addressed by FakeFilter.
31

Guarnera, Luca, Oliver Giudice, Francesco Guarnera, Alessandro Ortis, Giovanni Puglisi, Antonino Paratore, Linh M. Q. Bui, et al. "The Face Deepfake Detection Challenge." Journal of Imaging 8, no. 10 (September 28, 2022): 263. http://dx.doi.org/10.3390/jimaging8100263.

Abstract:
Multimedia data manipulation and forgery have never been easier than today, thanks to the power of Artificial Intelligence (AI). AI-generated fake content, commonly called Deepfakes, has been raising new issues and concerns, but also new challenges for the research community. The Deepfake detection task has become widely addressed, but unfortunately, approaches in the literature suffer from generalization issues. In this paper, the Face Deepfake Detection and Reconstruction Challenge is described. Two different tasks were proposed to the participants: (i) creating a Deepfake detector capable of working in an “in the wild” scenario; (ii) creating a method capable of reconstructing original images from Deepfakes. Real images from CelebA and FFHQ and Deepfake images created by StarGAN, StarGAN-v2, StyleGAN, StyleGAN2, AttGAN and GDWCT were collected for the competition. The winning teams were chosen with respect to the highest classification accuracy value (Task I) and the minimum average Manhattan distance (Task II). Deep Learning algorithms, particularly those based on the EfficientNet architecture, achieved the best results in Task I. No winners were proclaimed for Task II. A detailed discussion of the teams' proposed methods with the corresponding ranking is presented in this paper.
32

López-Gil, Juan-Miguel, Rosa Gil, and Roberto García. "Do Deepfakes Adequately Display Emotions? A Study on Deepfake Facial Emotion Expression." Computational Intelligence and Neuroscience 2022 (October 18, 2022): 1–12. http://dx.doi.org/10.1155/2022/1332122.

Abstract:
Recent technological advancements in Artificial Intelligence make it easy to create deepfakes and hyper-realistic videos, in which images and video clips are processed to create fake videos that appear authentic. Many of them are based on swapping faces without the consent of the person whose appearance and voice are used. As emotions are inherent in human communication, studying how deepfakes transfer emotional expressions from originals to fakes is relevant. In this work, we conduct an in-depth study of facial emotional expression in deepfakes using a well-known face-swap-based deepfake database. First, we extracted the frames from its videos. Then, we analyzed the emotional expression in the original and faked versions of the video recordings for all performers in the database. The results show that emotional expressions are not adequately transferred between the original recordings and the deepfakes created from them. The high variability in emotions and performers detected between original and fake recordings indicates that performer emotion expressiveness should be considered for better deepfake generation or detection.
33

Khormali, Aminollah, and Jiann-Shiun Yuan. "ADD: Attention-Based DeepFake Detection Approach." Big Data and Cognitive Computing 5, no. 4 (September 27, 2021): 49. http://dx.doi.org/10.3390/bdcc5040049.

Abstract:
Recent advancements in Generative Adversarial Networks (GANs) pose emerging yet serious privacy risks threatening the integrity and trustworthiness of digital media, specifically digital video, through the synthesis of hyper-realistic images and videos, i.e., DeepFakes. The need to ascertain the trustworthiness of digital media calls for automatic yet accurate DeepFake detection algorithms. This paper presents an attention-based DeepFake detection (ADD) method that exploits the fine-grained and spatial-locality attributes of artificially synthesized videos for enhanced detection. The ADD framework is composed of two main components, face close-up and face shut-off data augmentation, and is applicable to any classifier based on a convolutional neural network architecture. ADD first locates potentially manipulated areas of the input image to extract representative features. Second, the detection model is forced to pay more attention to these forgery regions in the decision-making process through a particular focus on interpreting the sample in the learning phase. ADD's performance is evaluated against two challenging DeepFake forensics datasets, Celeb-DF (V2) and WildDeepFake. We demonstrated the generalization of ADD by evaluating four popular classifiers, namely VGGNet, ResNet, Xception, and MobileNet. The obtained results demonstrate that ADD can significantly boost the detection performance of all four baseline classifiers on both benchmark datasets. In particular, ADD with a ResNet backbone detects DeepFakes with more than 98.3% accuracy on Celeb-DF (V2), outperforming state-of-the-art DeepFake detection methods.
34

Shad, Hasin Shahed, Md Mashfiq Rizvee, Nishat Tasnim Roza, S. M. Ahsanul Hoq, Mohammad Monirujjaman Khan, Arjun Singh, Atef Zaguia, and Sami Bourouis. "Comparative Analysis of Deepfake Image Detection Method Using Convolutional Neural Network." Computational Intelligence and Neuroscience 2021 (December 16, 2021): 1–18. http://dx.doi.org/10.1155/2021/3111676.

Abstract:
Generation Z is a data-driven generation. Everyone has the entirety of humanity's knowledge in their hands. The technological possibilities are endless. However, we use and misuse this blessing in face swapping with deepfakes. Deepfake is an emerging subdomain of artificial intelligence technology in which one person's face is overlaid over another person's face, and it is very prominent across social media. Machine learning is the main element of deepfakes, and it has allowed deepfake images and videos to be generated considerably faster and at a lower cost. Despite the negative connotations associated with the phrase "deepfakes," the technology is being more widely employed commercially and individually. Although it is relatively new, the latest technological advances make it more and more challenging to tell deepfakes and synthesized images apart from real ones. An increasing sense of unease has developed around the emergence of deepfake technologies. Our main objective is to detect deepfake images from real ones accurately. In this research, we implemented several methods to detect deepfake images and performed a comparative analysis. Our models were trained on datasets from Kaggle, comprising 70,000 images from the Flickr dataset and 70,000 images produced by StyleGAN. For this comparative study of the use of convolutional neural networks (CNNs) to identify genuine and deepfake pictures, we trained eight different CNN models. Three of these models were trained using the DenseNet architecture (DenseNet121, DenseNet169, and DenseNet201); two were trained using the VGGNet architecture (VGG16, VGG19); one was trained with the ResNet50 architecture, one with VGGFace, and one with a bespoke CNN architecture. We also implemented a custom model that incorporates methods such as dropout and padding that help determine whether or not the other models reflect their objectives. The results were categorized by five evaluation metrics: accuracy, precision, recall, F1-score, and area under the ROC (receiver operating characteristic) curve. Among all the models, VGGFace performed best, with 99% accuracy. Besides, we obtained 97% from ResNet50, 97% from DenseNet121, 96% from DenseNet201, 95% from DenseNet169, 94% from VGG19, 92% from VGG16, and 90% from the custom model.
35

Noreen, Iram, Muhammad Shahid Muneer, and Saira Gillani. "Deepfake attack prevention using steganography GANs." PeerJ Computer Science 8 (October 20, 2022): e1125. http://dx.doi.org/10.7717/peerj-cs.1125.

Повний текст джерела
Анотація:
Background Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques like auto-encoders and generative adversarial networks (GANs) is approaching a level that makes deepfake detection ideally impossible. A deepfake is created by swapping videos, images, or audio with the target, consequently raising digital media threats over the internet. Much work has been done to detect deepfake videos through feature detection using a convolutional neural network (CNN), recurrent neural network (RNN), and spatiotemporal CNN. However, these techniques are not effective in the future due to continuous improvements in GANs. Style GANs can create fake videos with high accuracy that cannot be easily detected. Hence, deepfake prevention is the need of the hour rather than just mere detection. Methods Recently, blockchain-based ownership methods, image tags, and watermarks in video frames have been used to prevent deepfake. However, this process is not fully functional. An image frame could be faked by copying watermarks and reusing them to create a deepfake. In this research, an enhanced modified version of the steganography technique RivaGAN is used to address the issue. The proposed approach encodes watermarks into features of the video frames by training an “attention model” with the ReLU activation function to achieve a fast learning rate. Results The proposed attention-generating approach has been validated with multiple activation functions and learning rates. It achieved 99.7% accuracy in embedding watermarks into the frames of the video. After generating the attention model, the generative adversarial network has trained using DeepFaceLab 2.0 and has tested the prevention of deepfake attacks using watermark embedded videos comprising 8,074 frames from different benchmark datasets. The proposed approach has acquired a 100% success rate in preventing deepfake attacks. Our code is available at https://github.com/shahidmuneer/deepfakes-watermarking-technique.
36

Bogdanova, D. A. "About some aspects of digital ecology." Informatics in school, no. 7 (November 19, 2021): 15–19. http://dx.doi.org/10.32517/2221-1993-2021-20-7-15-19.

Abstract:
The situation regarding the spread of disinformation in the modern information environment is analysed, along with the difficulty of presenting and perceiving refutations caused by the "continued influence" and "backfire" effects and by the existence of polarized communities ("echo chambers"). The dangers of deepfakes are considered, as is a new type of content marketing with the self-explanatory name clickbait, which has seriously changed how content is distributed and how readers' attention is attracted. Under these conditions, proficiency in media literacy skills among children and adults becomes a key factor in assessing online content and detecting misinformation. As disinformation takes on new and viral forms, teaching children and teachers how to identify and interact with it becomes especially relevant. It is concluded that a media literacy course should be created and introduced into the training program for all subject teachers.
37

Akhtar, Zahid. "Deepfakes Generation and Detection: A Short Survey." Journal of Imaging 9, no. 1 (January 13, 2023): 18. http://dx.doi.org/10.3390/jimaging9010018.

Abstract:
Advancements in deep learning techniques and the availability of free, large databases have made it possible, even for non-technical people, to either manipulate or generate realistic facial samples for both benign and malicious purposes. DeepFakes refer to face multimedia content that has been digitally altered or synthetically created using deep neural networks. The paper first outlines the readily available face editing apps and the vulnerability (or performance degradation) of face recognition systems under various face manipulations. Next, this survey presents an overview of the techniques and works carried out in recent years for deepfakes and face manipulations. In particular, four kinds of deepfake or face manipulation are reviewed, i.e., identity swap, face reenactment, attribute manipulation, and entire face synthesis. For each category, deepfake or face manipulation generation methods as well as the corresponding manipulation detection methods are detailed. Despite significant progress based on traditional and advanced computer vision, artificial intelligence, and physics, an intense arms race continues between attackers/offenders/adversaries (i.e., DeepFake generation methods) and defenders (i.e., DeepFake detection methods). Thus, open challenges and potential research directions are also discussed. This paper is expected to aid readers in comprehending deepfake generation and detection mechanisms, together with open issues and future directions.
38

Coccomini, Davide Alessandro, Roberto Caldelli, Fabrizio Falchi, and Claudio Gennaro. "On the Generalization of Deep Learning Models in Video Deepfake Detection." Journal of Imaging 9, no. 5 (April 29, 2023): 89. http://dx.doi.org/10.3390/jimaging9050089.

Abstract:
The increasing use of deep learning techniques to manipulate images and videos, commonly referred to as "deepfakes", is making it more challenging to differentiate between real and fake content. While various deepfake detection systems have been developed, they often struggle to detect deepfakes in real-world situations; in particular, these methods are often unable to distinguish images or videos modified using novel techniques that were not present in the training set. In this study, we analyse different deep learning architectures in an attempt to understand which generalizes the concept of a deepfake best. According to our results, Convolutional Neural Networks (CNNs) seem more apt at memorizing specific anomalies and thus excel on datasets with a limited number of elements and manipulation methodologies. The Vision Transformer, conversely, is more effective when trained with more varied datasets, achieving better generalization than the other methods analysed. Finally, the Swin Transformer appears to be a good alternative for an attention-based method in a more limited data regime and performs very well in cross-dataset scenarios. Each of the analysed architectures looks at deepfakes in its own way, but since generalization capability is essential in a real-world environment, the experiments suggest that attention-based architectures provide superior performance.
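A cross-dataset evaluation of the kind the study performs can be set out in a few lines: train each architecture on one manipulation method and measure accuracy on another. The sketch below uses model names from the timm library; the loaders are synthetic stand-ins, and the fine-tuning step is elided.

```python
# Skeleton of a cross-dataset evaluation: train on one manipulation method,
# measure accuracy on another. Loaders here are random stand-ins.
import timm
import torch
from torch.utils.data import DataLoader, TensorDataset

def accuracy(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = (torch.sigmoid(model(x)).squeeze(1) > 0.5).long()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

# In the study, the "seen" set would hold e.g. FaceSwap samples and the
# "unseen" set e.g. NeuralTextures; here both are random tensors.
unseen = DataLoader(TensorDataset(torch.randn(8, 3, 224, 224),
                                  torch.randint(0, 2, (8,))), batch_size=4)

for name in ["resnet50", "vit_base_patch16_224",
             "swin_base_patch4_window7_224"]:
    model = timm.create_model(name, pretrained=False, num_classes=1)
    # ... fine-tune on the seen manipulation here ...
    print(name, accuracy(model, unseen))
```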
39

Olariu, Oana. "Critical Thinking as Dynamic Shield against Media Deception. Exploring Connections between the Analytical Mind and Detecting Disinformation Techniques and Logical Fallacies in Journalistic Production." Logos Universality Mentality Education Novelty: Social Sciences 11, no. 1 (September 2, 2022): 29–57. http://dx.doi.org/10.18662/lumenss/11.1/61.

Abstract:
As research on fake news and deepfakes has advanced, a growing consensus is building toward considering critical and analytical thinking, as well as general or topic-specific knowledge related to information literacy, as the main effective factors in curbing vulnerability to bogus digital content. However, although the connection might be intuitive, the processes linking critical or analytical thinking to resistance to manipulation are still poorly understood and understudied. The present study aims to contribute to filling this gap by exploring how analytically driven conclusions about a media content relate to proper evaluations of its credibility. In order to observe how observations highlighted through critical engagement with specific content relate to awareness of its manipulative structure, a biased (not fake) journalistic article was first passed through Fairclough's (2013) model of Critical Discourse Analysis, adapted for media studies. The same article was then screened for disinformation techniques embedded in its architecture, as well as for logical fallacies incorporated as arguments. Preliminary conclusions show that the outcomes of analytical thinking are consistent with evaluations based on particular filters for credibility attribution. Furthermore, the observations derived in these two ways from the same content partially overlap.
40

Maharjan, Ashish, and Asish Shakya. "Learning Approaches used by Different Applications to Achieve Deep Fake Technology." Interdisciplinary Journal of Innovation in Nepalese Academia 2, no. 1 (June 22, 2023): 96–101. http://dx.doi.org/10.3126/idjina.v2i1.55969.

Abstract:
Deepfake technology is an emerging field that has gained considerable attention in recent years. Deepfakes are synthetic media, including images, videos, and audio recordings, manipulated by advanced machine learning algorithms to produce convincing yet entirely artificial content. This paper explores various applications and the technologies they use to achieve deepfakes, including the machine learning algorithms and software each relies on for proper execution of the technology. Further, we discuss the future prospects of deepfake technology and explore directions for research and development in this area, including the need for improved detection and verification techniques and for increased education and awareness among the public.
41

Lee, Gihun, and Mihui Kim. "Deepfake Detection Using the Rate of Change between Frames Based on Computer Vision." Sensors 21, no. 21 (November 5, 2021): 7367. http://dx.doi.org/10.3390/s21217367.

Abstract:
Recently, artificial intelligence has been used successfully in fields such as computer vision, voice, and big data analysis. However, various problems concerning security, privacy, and ethics have also arisen with the development of artificial intelligence. One such problem is deepfakes. "Deepfake" is a compound of "deep learning" and "fake"; it refers to a fake video created using artificial intelligence technology, or to the production process itself. Deepfakes can be exploited for political abuse, pornography, and fake information. This paper proposes a method to determine integrity by analyzing the computer-vision features of digital content. The proposed method extracts the rate of change in the computer-vision features of adjacent frames and then checks whether the video has been manipulated. In tests, it demonstrated a detection rate of 97%, the highest compared with existing and machine-learning methods, and maintained a detection rate of 96% even in tests that manipulate the image matrix to evade convolutional-neural-network-based detection.
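The core signal the method relies on, a per-frame feature whose frame-to-frame rate of change is then inspected, can be sketched as follows. Here a color histogram stands in for the paper's computer-vision features, and the file name and anomaly threshold are illustrative assumptions.

```python
# Sketch of frame-change-rate analysis: compute a feature per frame and flag
# videos whose frame-to-frame change rate looks anomalous.
import cv2
import numpy as np

def frame_change_rates(path: str) -> np.ndarray:
    cap = cv2.VideoCapture(path)
    prev, rates = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256]).flatten()
        hist /= hist.sum() + 1e-8               # normalized color histogram
        if prev is not None:
            rates.append(np.abs(hist - prev).sum())  # change between adjacent frames
        prev = hist
    cap.release()
    return np.array(rates)

rates = frame_change_rates("video.mp4")          # assumed input file
print("suspicious" if rates.std() > 0.05 else "plausible")  # assumed threshold
```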
42

Kale, Prachi. "Forensic Verification and Detection of Fake Video using Deep Fake Algorithm." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 2789–94. http://dx.doi.org/10.22214/ijraset.2021.35599.

Abstract:
Over the last few years, the rise of smartphones and social networks has made digital images and videos commonplace digital objects. According to reports, almost two billion images are uploaded to the internet every day. This massive use of digital images has been followed by a growth in techniques to alter image content, using editing software such as Photoshop. Fake videos and images created by deepfake techniques have recently become a serious public issue, and several procedures for facial manipulation in videos, such as FaceSwap and DeepFake, are actively being developed. On one hand, this technological advancement opens up new areas (e.g., film making, visual effects, visual arts); on the other, it also facilitates the generation of video forgeries by malicious users. Deep learning techniques can therefore be used to determine whether a video is fake. To recognize such malicious content, we develop a system that automatically detects and assesses the integrity of digital visual media, which is thus crucial. Deepfake is a technique for human image synthesis based on AI, i.e., superimposing source images or videos onto target images or videos using neural networks (NNs), and deepfake enthusiasts use NNs to produce convincing face swaps. Deepfakes are a type of video or image forgery created to spread misinformation, invade privacy, and mask the truth using advanced technologies such as trained algorithms, deep learning applications, and computing. They have become a nuisance to social media users through fake videos made by fusing a celebrity's face onto another video. The impact of deepfakes is alarming, with politicians, senior corporate executives, and world leaders being targeted by malicious actors. An approach to detect deepfake videos of politicians using temporal sequential frames is proposed. The proposed approach uses the source video to extract the frames at the first level, followed by a deep, depth-based convolutional long short-term memory model to identify the fake frames at the second level. The proposed model is also evaluated on our newly collected ground-truth dataset of forged videos built from source and target video frames of well-known politicians. Experimental results demonstrate the effectiveness of our method.
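The two-level pipeline the abstract describes, frame extraction followed by a convolutional recurrent classifier, is a common pattern. A hedged sketch follows: a ResNet18 embeds each frame and an LSTM scores the sequence; all sizes are assumptions, since the paper's exact architecture is not reproduced here.

```python
# Hedged sketch of the two-level idea: a CNN embeds each extracted frame,
# an LSTM classifies the frame sequence.
import torch
import torch.nn as nn
import torchvision.models as tvm

class CnnLstmDetector(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        backbone = tvm.resnet18(weights=None)  # load pretrained weights in practice
        backbone.fc = nn.Identity()            # 512-d embedding per frame
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # fake-video logit

    def forward(self, clips):                  # clips: (B, T, 3, 224, 224)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)              # temporal reasoning over frames
        return self.head(out[:, -1])           # score from the last time step

clips = torch.randn(2, 8, 3, 224, 224)         # 2 clips of 8 frames each
print(torch.sigmoid(CnnLstmDetector()(clips)).shape)  # torch.Size([2, 1])
```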
43

Khormali, Aminollah, and Jiann-Shiun Yuan. "DFDT: An End-to-End DeepFake Detection Framework Using Vision Transformer." Applied Sciences 12, no. 6 (March 14, 2022): 2953. http://dx.doi.org/10.3390/app12062953.

Abstract:
The ever-growing threat of deepfakes and their large-scale societal implications have propelled the development of deepfake forensics to ascertain the trustworthiness of digital media. A common theme of existing detection methods is the use of Convolutional Neural Networks (CNNs) as a backbone. While CNNs have demonstrated decent performance in learning local discriminative information, they fail to learn relative spatial features and lose important information due to constrained receptive fields. Motivated by these challenges, this work presents DFDT, an end-to-end deepfake detection framework that leverages the unique characteristics of transformer models to learn hidden traces of perturbations from both local image features and the global relationships of pixels at different forgery scales. DFDT is specifically designed for deepfake detection tasks and consists of four main components: patch extraction and embedding, a multi-stream transformer block, and attention-based patch selection followed by a multi-scale classifier. DFDT's transformer layer benefits from a re-attention mechanism instead of a traditional multi-head self-attention layer. To evaluate the performance of DFDT, a comprehensive set of experiments was conducted on several deepfake forensics benchmarks. The results demonstrate DFDT's superior detection rates, achieving 99.41%, 99.31%, and 81.35% on FaceForensics++, Celeb-DF (V2), and WildDeepfake, respectively. Moreover, DFDT's excellent cross-dataset and cross-manipulation generalization provides additional strong evidence of its effectiveness.
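The patch extraction and embedding front end that DFDT shares with other vision transformers can be shown compactly: slice the image into patches, project each to a token, and attend over the tokens. The sketch below uses standard self-attention; DFDT's re-attention, multi-stream block, and patch selection are not reproduced, and all sizes are assumptions.

```python
# Illustrative patch-extraction-and-embedding front end of a transformer
# detector: a strided conv slices and projects patches in a single op.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, patch=16, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                          # x: (B, 3, 224, 224)
        tokens = self.proj(x)                      # (B, dim, 14, 14)
        return tokens.flatten(2).transpose(1, 2)   # (B, 196, dim) patch tokens

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
attn = nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)
out, _ = attn(tokens, tokens, tokens)              # global attention across patches
print(out.shape)                                   # torch.Size([1, 196, 768])
```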
44

Mehra, Aman, Akshay Agarwal, Mayank Vatsa, and Richa Singh. "Detection of Digital Manipulation in Facial Images (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15845–46. http://dx.doi.org/10.1609/aaai.v35i18.17919.

Abstract:
Advances in deep learning have enabled the creation of photo-realistic DeepFakes by switching the identity or expression of individuals. Such technology in the wrong hands can seed chaos through blackmail, extortion, and the forging of false statements by influential individuals. This work proposes a novel approach to detect forged videos by magnifying their temporal inconsistencies. A study is also conducted to understand the role of ethnicity bias, arising from skewed datasets, in deepfake detection. A new dataset comprising forged videos of individuals of Indian ethnicity is presented to facilitate this study.
45

Yavuzkilic, Semih, Abdulkadir Sengur, Zahid Akhtar, and Kamran Siddique. "Spotting Deepfakes and Face Manipulations by Fusing Features from Multi-Stream CNNs Models." Symmetry 13, no. 8 (July 26, 2021): 1352. http://dx.doi.org/10.3390/sym13081352.

Abstract:
Deepfakes are one class of applications deemed harmful: a sort of image or video manipulation in which a person's face is altered or swapped with another person's face using artificial neural networks. Deepfake manipulations may be produced with a variety of techniques and applications, and the quintessential countermeasure is a deepfake detection method. Most existing detection methods perform well under symmetric data distributions but are still not robust to asymmetric dataset variations and novel deepfake/manipulation types. In this paper, for the identification of fake faces in videos, a new multi-stream deep learning algorithm is developed in which three streams are merged at the feature level using a fusion layer; fully connected, Softmax, and classification layers then classify the data. The pre-trained VGG16 model is adopted for the first transferred-CNN stream: in transfer learning, the weights of the pre-trained CNN model are reused for training the new classification problem. The pre-trained VGG19 model is used in the second stream, and the pre-trained ResNet18 model in the third. The paper also introduces a new large-scale dataset, the World Politicians Deepfake Dataset (WPDD), to improve deepfake detection systems. The dataset was created by downloading YouTube videos of 20 different politicians, dividing them into short segments, and extracting over 320,000 frames; various manipulations were then applied to these frames, resulting in seven separate manipulation classes for men and women. In the experiments, three fake face detection scenarios are investigated. First, discrimination between fake and real faces is studied. Second, discrimination among seven face manipulations (age, beard, face swap, glasses, hair color, hairstyle, smiling) and genuine faces is performed. Third, the performance of the deepfake detection system under a novel type of face manipulation is analyzed. The proposed strategy outperforms prior existing methods, with all calculated performance metrics above 99%.
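Feature-level fusion of several pretrained streams, the heart of the method above, reduces to concatenating per-stream embeddings before one classification head. A minimal PyTorch sketch with the same three backbones follows; head sizes are assumptions, and weights are left uninitialized to keep the example light (the paper's transfer-learning setup would load pretrained weights).

```python
# Minimal feature-level fusion of three CNN streams before a shared head.
import torch
import torch.nn as nn
import torchvision.models as tvm

class MultiStreamFusion(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        v16, v19 = tvm.vgg16(weights=None), tvm.vgg19(weights=None)
        r18 = tvm.resnet18(weights=None)
        v16.classifier = nn.Identity()   # stream now emits a 25088-d vector
        v19.classifier = nn.Identity()   # likewise 25088-d
        r18.fc = nn.Identity()           # 512-d
        self.streams = nn.ModuleList([v16, v19, r18])
        self.fusion = nn.Linear(25088 + 25088 + 512, n_classes)

    def forward(self, x):
        feats = torch.cat([s(x) for s in self.streams], dim=1)  # fusion layer
        return self.fusion(feats)                               # classification

logits = MultiStreamFusion()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```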
46

Balasubramanian, Saravana Balaji, Jagadeesh Kannan R, Prabu P, Venkatachalam K, and Pavel Trojovský. "Deep fake detection using cascaded deep sparse auto-encoder for effective feature selection." PeerJ Computer Science 8 (July 13, 2022): e1040. http://dx.doi.org/10.7717/peerj-cs.1040.

Abstract:
In recent research, artificial intelligence techniques have been used for computer vision, big data analysis, and detection systems, and the development of these advanced technologies has also increased security and privacy issues. One such issue is deepfakes (a portmanteau of "deep learning" and "fake"): fake images or videos formed using artificial intelligence approaches and created for political abuse, fake data transfer, and pornography. This paper develops a deepfake detection method that examines the computer-vision features of the digital content. The computer-vision features based on frame change are extracted using a proposed deep learning model called the Cascaded Deep Sparse Auto-Encoder (CDSAE) trained by a temporal CNN. Detection is then performed using a Deep Neural Network (DNN) to classify deepfake images/videos against real ones. The proposed model is implemented on the Face2Face, FaceSwap, and DFDC datasets and secures an improved detection rate compared with traditional deepfake detection approaches.
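One stage of a sparse auto-encoder of the kind CDSAE cascades can be written in a few lines: reconstruct the input while an L1 penalty keeps the code sparse, then feed each stage's code to the next. The dimensions and penalty weight below are assumptions.

```python
# One stage of a sparse auto-encoder: reconstruction loss plus an L1
# sparsity penalty on the code; cascading feeds each code to the next stage.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAE(nn.Module):
    def __init__(self, in_dim=1024, code_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.dec = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = self.enc(x)           # sparse feature selection happens here
        return self.dec(code), code

model = SparseAE()
x = torch.randn(32, 1024)
recon, code = model(x)
loss = F.mse_loss(recon, x) + 1e-3 * code.abs().mean()  # reconstruction + sparsity
loss.backward()
print(float(loss))
```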
47

Yang, Sung-Hyun, Keshav Thapa, and Barsha Lamichhane. "Detection of Image Level Forgery with Various Constraints Using DFDC Full and Sample Datasets." Sensors 22, no. 23 (November 24, 2022): 9121. http://dx.doi.org/10.3390/s22239121.

Abstract:
The emergence of advanced machine learning and deep learning techniques, such as autoencoders and generative adversarial networks, makes it possible to generate images known as deepfakes, which astonishingly resemble real images. These deepfake images are hard to distinguish from real images and are being used unethically against famous personalities such as politicians, celebrities, and social workers. Hence, we propose a method to detect these deepfake images using a lightweight convolutional neural network (CNN). Our research is conducted with the Deep Fake Detection Challenge (DFDC) full and sample datasets, where we compare the performance of our proposed model with various state-of-the-art pretrained models such as VGG-19, Xception, and Inception-ResNet-v2. Furthermore, we perform experiments at various resolutions while maintaining 1:1 and 9:16 aspect ratios, which had not been explored for the DFDC datasets by any other group to date. The proposed model can thus flexibly accommodate various resolutions and aspect ratios, without being constrained to a specific resolution or aspect ratio for any type of image classification problem. While most reported research is limited to the sample or preview DFDC datasets, we have also attempted testing on the full DFDC dataset and present the results. Given that detailed results and resource analysis for various scenarios are provided, the proposed deepfake detection method is anticipated to pave new avenues for deepfake detection research engaging with the DFDC datasets.
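The flexibility across resolutions and aspect ratios that the study exploits comes down to one design choice: global average pooling removes the fixed-input-size constraint that dense layers impose. A sketch with assumed channel widths:

```python
# Why a light CNN can take any resolution or aspect ratio: global average
# pooling collapses the spatial grid, so no layer depends on input size.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1),   # any spatial size -> 1x1
    nn.Flatten(),
    nn.Linear(32, 1),          # real-vs-deepfake logit
)

for shape in [(1, 3, 224, 224), (1, 3, 480, 270)]:  # 1:1 and 9:16 inputs
    print(shape, "->", net(torch.randn(*shape)).shape)
```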
48

Amoah-Yeboah, Yaw. "Biometric Spoofing and Deepfake Detection." Advances in Multidisciplinary and scientific Research Journal Publication 1, no. 1 (July 26, 2022): 279–84. http://dx.doi.org/10.22624/aims/crp-bk3-p45.

Abstract:
Biometrics have increasingly become the mechanisms best suited for identification and authentication across diverse technologies and systems. However much more robust they prove than other identification and authentication mechanisms, there is also an upsurge in privacy and security concerns. With AI at the forefront of technological advancement, it has worked to our advantage and also, in some ways, to our detriment. People are constantly devising ways to trick biometric sensors and to crack and bypass authentication protocols. These nefarious practices range from creating fake videos or images to spread hate, to political expediency, to embarrassing celebrities. This paper delves into the nooks and crannies of the subject matter to provide a vivid understanding and also throws light on a few areas where more research is needed. Keywords: Biometrics, Spoofing, Deepfakes, Deepfake Detection, Africa, Cybersecurity. Book chapter, Research Nexus in IT, Law, Cyber Security & Forensics. Open Access. Citation: Yaw Amoah-Yeboah (2022): Biometric Spoofing and Deepfake Detection. Book Chapter Series on Research Nexus in IT, Law, Cyber Security & Forensics, pp. 279-284. www.isteams.net/ITlawbookchapter2022. dx.doi.org/10.22624/AIMS/CRP-BK3-P45
49

Đorđević, Miljan, Milan Milivojević, and Ana Gavrovska. "DeepFake video production and SIFT-based analysis." Telfor Journal 12, no. 1 (2020): 22–27. http://dx.doi.org/10.5937/telfor2001022q.

Abstract:
Nowadays, advances in face-based modification using DeepFake algorithms have made it possible to replace the face of one person with the face of another. Thus, it is possible not only to make copy-move modifications but also to apply artificial intelligence and deep learning to transfer face movements from one person to another. Still images can be converted into video sequences; consequently, contemporaries, historical figures, or even animated characters can be presented as if alive. Deepfakes are becoming more and more convincing, and in some cases they are difficult to detect. In this paper we describe the video sequences we produced (e.g., using the X2Face method and the First Order Motion Model for Image Animation) and perform deepfake video analysis using a SIFT (Scale-Invariant Feature Transform) based approach. The experiments show the simplicity of video forgery production, as well as the possible role of SIFT keypoint detection in differentiating between deeply forged and original video content.
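The SIFT-based comparison is straightforward to reproduce with OpenCV: detect keypoints in two frames, match descriptors, and apply Lowe's ratio test; forged regions tend to yield fewer stable matches. The file names and ratio threshold below are assumptions.

```python
# SIFT keypoint comparison between two frames, as in the analysis above.
import cv2

img1 = cv2.imread("frame_original.png", cv2.IMREAD_GRAYSCALE)  # assumed files
img2 = cv2.imread("frame_suspect.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]   # Lowe's ratio test
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} good matches")
```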
50

Binh, Le Minh, and Simon Woo. "ADD: Frequency Attention and Multi-View Based Knowledge Distillation to Detect Low-Quality Compressed Deepfake Images." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 122–30. http://dx.doi.org/10.1609/aaai.v36i1.19886.

Abstract:
Despite significant advancements in deep learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer moderate to significant performance degradation on low-quality compressed deepfake images. Because of the limited information in low-quality images, detecting low-quality deepfakes remains an important challenge. In this work, we apply frequency-domain learning and optimal transport theory in knowledge distillation (KD) to specifically improve the detection of low-quality compressed deepfake images. We explore the transfer learning capability of KD to enable a student network to learn discriminative features from low-quality images effectively. In particular, we propose the Attention-based Deepfake detection Distiller (ADD), which consists of two novel distillations: 1) frequency attention distillation, which effectively retrieves the removed high-frequency components in the student network, and 2) multi-view attention distillation, which creates multiple attention vectors by slicing the teacher's and student's tensors under different views to transfer the teacher tensor's distribution to the student more efficiently. Our extensive experimental results demonstrate that our approach outperforms state-of-the-art baselines in detecting low-quality compressed deepfake images.
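The knowledge-distillation setup underlying ADD can be illustrated with the classic soft-label KD loss: the student matches both ground-truth labels and the teacher's softened predictions. ADD's frequency and multi-view attention terms are replaced here by that standard loss; the temperature and weighting are assumptions.

```python
# Classic soft-label knowledge distillation: the student learns from hard
# labels and from the teacher's temperature-softened predictions.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 2, requires_grad=True)  # student on compressed images
t = torch.randn(8, 2)                      # teacher on high-quality images
loss = kd_loss(s, t, torch.randint(0, 2, (8,)))
loss.backward()
print(float(loss))
```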