
Journal articles on the topic 'Deepfake Detection'

Consult the top 50 journal articles for your research on the topic 'Deepfake Detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Yasrab, Robail, Wanqi Jiang, and Adnan Riaz. "Fighting Deepfakes Using Body Language Analysis." Forecasting 3, no. 2 (April 28, 2021): 303–21. http://dx.doi.org/10.3390/forecast3020020.

Full text
Abstract:
Recent improvements in deepfake creation have made deepfake videos more realistic. Moreover, open-source software has made deepfake creation more accessible, lowering the barrier to entry. This poses a threat to people's privacy: deepfake creation techniques could be used by people with ulterior motives to produce deepfake videos of world leaders and disrupt the order of countries and the world. Therefore, research into the automatic detection of deepfaked media is essential for public security. In this work, we propose a deepfake detection method using upper-body language analysis. Specifically, a many-to-one LSTM network was designed and trained as a classification model for deepfake detection. Different models were trained by varying the hyperparameters to build a final model with benchmark accuracy. We achieved 94.39% accuracy on the deepfake test set. The experimental results showed that upper-body language can effectively detect deepfakes.
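The many-to-one design described above — an LSTM that consumes a sequence of per-frame body-language features and emits a single real/fake decision — can be sketched in plain NumPy. Everything here (feature dimension, weights, names) is a hypothetical illustration, not the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_many_to_one(frames, Wx, Wh, b, w_out, b_out):
    """Run an LSTM over per-frame feature vectors and return a single
    fake/real probability from the final hidden state (many-to-one)."""
    h = np.zeros(Wh.shape[1])
    c = np.zeros(Wh.shape[1])
    for x in frames:                       # one step per video frame
        z = Wx @ x + Wh @ h + b            # four gate pre-activations, stacked
        i, f, o, g = np.split(z, 4)        # input, forget, output, candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return sigmoid(w_out @ h + b_out)      # classify once, after the whole clip

rng = np.random.default_rng(0)
feat_dim, hidden, seq_len = 16, 8, 20
Wx = rng.normal(scale=0.1, size=(4 * hidden, feat_dim))
Wh = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
w_out = rng.normal(scale=0.1, size=hidden)
frames = rng.normal(size=(seq_len, feat_dim))   # stand-in for pose features
p_fake = lstm_many_to_one(frames, Wx, Wh, b, w_out, 0.0)
print(round(float(p_fake), 3))
```

In the paper's setting, `frames` would hold extracted upper-body pose features rather than random vectors, and the weights would come from training.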
2

Niveditha, Zohaib Hasan Princy, Saurabh Sharma, Vishal Paranjape, and Abhishek Singh. "Review of Deep Learning Techniques for Deepfake Image Detection." International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 11, no. 02 (February 25, 2022): 1–14. http://dx.doi.org/10.15662/ijareeie.2022.1102021.

Full text
Abstract:
Deepfake is an advanced synthetic media technology that generates convincingly authentic yet fake images and videos by modifying a person's likeness. The term "Deepfake" is a blend of "Deep learning" and "Fake," highlighting the use of artificial intelligence and deep learning algorithms in its creation. Deepfake generation involves training models to learn the nuances of facial attributes, expressions, motion, and speech patterns to produce fabricated media indistinguishable from real footage. Deepfakes are often used to manipulate human content, especially the invariant facial regions. The spatial relationship between facial attributes is crucial for creating a convincing, hyperrealistic deepfake output. Subtle inconsistencies in facial features, such as eye spacing, skin color, and mouth shape, can serve as indicators for detecting deepfakes. While many techniques have been developed to detect deepfakes, not all are perfectly accurate for every case. As new deepfake creation methods emerge, existing detection strategies must be continually updated to address these advancements. This paper reviews various deepfake image detection methods and deep learning techniques.
3

Sunkari, Venkateswarlu, and Ayyagari Sri Nagesh. "Artificial intelligence for deepfake detection: systematic review and impact analysis." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 4 (December 1, 2024): 3786. http://dx.doi.org/10.11591/ijai.v13.i4.pp3786-3792.

Full text
Abstract:
Deep learning and artificial intelligence (AI) have enabled deepfakes, prompting concerns about their social impact. Deepfakes have detrimental effects in several businesses, despite their apparent benefits. We explore deepfake detection research and its social implications in this study. We examine capsule networks' ability to detect video deepfakes and their design implications. This strategy reduces parameters and provides excellent accuracy, making it a promising deepfake defense. The social significance of deepfakes is also highlighted, underlining the necessity to understand them. Despite the extensive use of face-swap services, little is known about deepfakes' social impact. The misuse of deepfakes in image-based sexual abuse and the distortion of public figures, especially in politics, highlights the need for further research on their social impact. Using state-of-the-art deepfake detection methods, such as fake-face and deepfake detectors and broad forgery-analysis tools, reduces the damage deepfakes do. In this paper we review deepfake detection research and its social impacts, analyse various deepfake methods and the misuse of deepfake technology, and provide a clear analysis of existing machine learning models. By combining the study data, we aim to illuminate the potential effects of deepfakes on society and suggest solutions.
4

Battula Thirumaleshwari Devi, et al. "A Comprehensive Survey on Deepfake Methods: Generation, Detection, and Applications." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 654–78. http://dx.doi.org/10.17762/ijritcc.v11i9.8857.

Full text
Abstract:
Due to recent advancements in AI and deep learning, several methods and tools for multimedia transformation, known as deepfakes, have emerged. A deepfake is synthetic media in which a person's resemblance is used to substitute their presence in an existing image or video. Deepfakes have both positive and negative implications. They can be used in politics to simulate events or speeches, in translation to provide natural-sounding translations, in education for virtual experiences, and in entertainment for realistic special effects. The emergence of deepfake face forgery on the internet has raised significant societal concerns. As a result, detecting these forgeries has become an emerging field of research, and many deepfake detection methods have been proposed. This paper introduces deepfakes and explains the different types of deepfakes that exist. It also summarizes various deepfake generation techniques, as well as both traditional and AI-based detection techniques. Freely accessible datasets used for deepfake generation are highlighted. To further advance the deepfake research field, we aim to provide relevant research findings, identify existing gaps, and propose emerging trends for future study.
5

Lad, Sumit. "Adversarial Approaches to Deepfake Detection: A Theoretical Framework for Robust Defense." Journal of Artificial Intelligence General Science (JAIGS) 6, no. 1 (September 21, 2024): 46–58. http://dx.doi.org/10.60087/jaigs.v6i1.225.

Full text
Abstract:
The rapid improvement in the capabilities of neural networks and generative adversarial networks (GANs) has given rise to extremely sophisticated deepfake technologies. This has made it very difficult to reliably recognize fake digital content, and it has enabled the creation of highly convincing synthetic media that can be used maliciously in this era of user-generated information and social media. Existing deepfake detection techniques are effective against early iterations of deepfakes but grow increasingly vulnerable to more sophisticated deepfakes and adversarial attacks. In this paper we explore a novel approach to deepfake detection that integrates adversarial training to improve the robustness and accuracy of deepfake detection models. By looking deeper into state-of-the-art adversarial machine learning, forensic analysis, and deepfake detection techniques, we explore how adversarial training can improve the robustness of deepfake detection against future threats. We use perturbations, adversarial examples designed specifically to deceive deepfake detection algorithms; by training deepfake detection models with these perturbations, we create detection systems that can more accurately identify deepfakes. Our approach shows promise and suggests avenues for future research in building resilience against deepfakes, with applications in content moderation, security, and combating synthetic media manipulation.
6

Krueger, Natalie, Mounika Vanamala, and Rushit Dave. "Recent Advancements in the Field of Deepfake Detection." International Journal of Computer Science and Information Technology 15, no. 4 (August 27, 2023): 01–11. http://dx.doi.org/10.5121/ijcsit.2023.15401.

Full text
Abstract:
A deepfake is a photo or video of a person whose image has been digitally altered or partially replaced with an image of someone else. Deepfakes have the potential to cause a variety of problems and are often used maliciously. A common usage is altering videos of prominent political figures and celebrities. These deepfakes can portray them making offensive, problematic, and/or untrue statements. Current deepfakes can be very realistic, and when used in this way, can spread panic and even influence elections and political opinions. There are many deepfake detection strategies currently in use but finding the most comprehensive and universal method is critical. So, in this survey we will address the problems of malicious deepfake creation and the lack of universal deepfake detection methods. Our objective is to survey and analyze a variety of current methods and advances in the field of deepfake detection.
7

Kawabe, Akihisa, Ryuto Haga, Yoichi Tomioka, Jungpil Shin, and Yuichi Okuyama. "A Dynamic Ensemble Selection of Deepfake Detectors Specialized for Individual Face Parts." Electronics 12, no. 18 (September 18, 2023): 3932. http://dx.doi.org/10.3390/electronics12183932.

Full text
Abstract:
The development of deepfake technology, based on deep learning, has made it easier to create images of fake human faces that are indistinguishable from the real thing. Many deepfake methods and programs are publicly available and can be used maliciously, for example, by creating fake social media accounts with images of non-existent human faces. To prevent the misuse of such fake images, several deepfake detection methods have been proposed as a countermeasure and have proven capable of detecting deepfakes with high accuracy when the target deepfake model has been identified. However, the existing approaches are not robust to partial editing and/or occlusion caused by masks, glasses, or manual editing, all of which can lead to an unacceptable drop in accuracy. In this paper, we propose a novel deepfake detection approach based on a dynamic configuration of an ensemble model that consists of deepfake detectors. These deepfake detectors are based on convolutional neural networks (CNNs) and are specialized to detect deepfakes by focusing on individual parts of the face. We demonstrate that a dynamic selection of face parts and an ensemble of selected CNN models is effective at realizing highly accurate deepfake detection even from partly edited and occluded images.
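The dynamic selection the abstract describes — combining only those part-specific detectors whose face part is actually visible — can be illustrated as a masked ensemble vote. This is a toy sketch under assumed inputs, not the paper's CNN ensemble:

```python
import numpy as np

def dynamic_ensemble(scores, visible):
    """scores: per-part detector fake-probabilities (e.g. eyes, nose, mouth);
    visible: boolean mask saying which face parts were detected in the image.
    Only visible parts vote, so occluded parts cannot distort the result."""
    scores = np.asarray(scores, dtype=float)
    visible = np.asarray(visible, dtype=bool)
    if not visible.any():
        return 0.5                      # no usable evidence: stay undecided
    return float(scores[visible].mean())

# eyes and nose detectors fire "fake"; the mouth detector is excluded
# because the mouth is occluded (e.g. by a face mask)
print(round(dynamic_ensemble([0.9, 0.8, 0.1], [True, True, False]), 2))  # → 0.85
```

A real system would pick the subset of CNNs per image from a face-part detector's output; the averaging step above is the simplest stand-in for that ensemble combination.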
8

Raza, Ali, Kashif Munir, and Mubarak Almutairi. "A Novel Deep Learning Approach for Deepfake Image Detection." Applied Sciences 12, no. 19 (September 29, 2022): 9820. http://dx.doi.org/10.3390/app12199820.

Full text
Abstract:
Deepfake is utilized in synthetic media to generate fake visual and audio content based on a person’s existing media. The deepfake replaces a person’s face and voice with fake media to make it realistic-looking. Fake media content generation is unethical and a threat to the community. Nowadays, deepfakes are highly misused in cybercrimes for identity theft, cyber extortion, fake news, financial fraud, celebrity fake obscenity videos for blackmailing, and many more. According to a recent Sensity report, over 96% of deepfakes are of obscene content, with most victims being from the United Kingdom, United States, Canada, India, and South Korea. In 2019, cybercriminals generated fake audio content of a chief executive officer to call his organization and ask them to transfer $243,000 to their bank account. Deepfake crimes are rising daily. Deepfake media detection is a big challenge and is in high demand in digital forensics. An advanced research approach must be built to protect victims from blackmail by detecting deepfake content. The primary aim of our research study is to detect deepfake media using an efficient framework. A novel deepfake predictor (DFP) approach based on a hybrid of VGG16 and convolutional neural network architecture is proposed in this study. A deepfake dataset of real and fake faces is utilized for building the neural network techniques. Xception, NASNet, MobileNet, and VGG16 are the transfer learning techniques employed for comparison. The proposed DFP approach achieved 95% precision and 94% accuracy for deepfake detection. Our novel DFP approach outperformed transfer learning techniques and other state-of-the-art studies. Our research approach helps cybersecurity professionals overcome deepfake-related cybercrimes by accurately detecting deepfake content and protecting deepfake victims from blackmail.
9

Singh, Preeti, Khyati Chaudhary, Gopal Chaudhary, Manju Khari, and Bharat Rawal. "A Machine Learning Approach to Detecting Deepfake Videos: An Investigation of Feature Extraction Techniques." Journal of Cybersecurity and Information Management 9, no. 2 (2022): 42–50. http://dx.doi.org/10.54216/jcim.090204.

Full text
Abstract:
Deepfake videos are a growing concern today as they can be used to spread misinformation and manipulate public opinion. In this paper, we investigate the use of different feature extraction techniques for detecting deepfake videos using machine learning algorithms. We explore three feature extraction techniques, including facial landmarks detection, optical flow, and frequency analysis, and evaluate their effectiveness in detecting deepfake videos. We compare the performance of different machine learning algorithms and analyze their ability to detect deepfakes using the extracted features. Our experimental results show that the combination of facial landmarks detection and frequency analysis provides the best performance in detecting deepfake videos, with an accuracy of over 95%. Our findings suggest that machine learning algorithms can be a powerful tool in detecting deepfake videos, and feature extraction techniques play a crucial role in achieving high accuracy.
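Frequency analysis of the kind this abstract evaluates is often implemented as the share of spectral energy outside a low-frequency band, since GAN up-sampling tends to leave high-frequency artifacts. A minimal sketch of such a feature (an illustrative example, not the paper's pipeline):

```python
import numpy as np

def highfreq_energy_ratio(gray, cutoff_frac=0.25):
    """Fraction of 2-D spectral energy outside a centered low-frequency box.
    Up-sampling artifacts in generated images tend to inflate this ratio,
    making it a cheap frequency-domain feature for a downstream classifier."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spec.shape
    ch, cw = int(h * cutoff_frac), int(w * cutoff_frac)
    low = spec[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw].sum()
    return float((spec.sum() - low) / spec.sum())

rng = np.random.default_rng(1)
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # low-frequency heavy
noisy = rng.normal(size=(64, 64))                                  # flat spectrum
print(highfreq_energy_ratio(smooth) < highfreq_energy_ratio(noisy))
```

In practice this scalar would be concatenated with landmark-based features and fed to the machine learning classifier being evaluated.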
10

Qureshi, Shavez Mushtaq, Atif Saeed, Sultan H. Almotiri, Farooq Ahmad, and Mohammed A. Al Ghamdi. "Deepfake forensics: a survey of digital forensic methods for multimodal deepfake identification on social media." PeerJ Computer Science 10 (May 27, 2024): e2037. http://dx.doi.org/10.7717/peerj-cs.2037.

Full text
Abstract:
The rapid advancement of deepfake technology poses an escalating threat of misinformation and fraud enabled by manipulated media. Despite the risks, a comprehensive understanding of deepfake detection techniques has not materialized. This research tackles this knowledge gap by providing an up-to-date systematic survey of the digital forensic methods used to detect deepfakes. A rigorous methodology is followed, consolidating findings from recent publications on deepfake detection innovation. Prevalent datasets that underpin new techniques are analyzed. The effectiveness and limitations of established and emerging detection approaches across modalities including image, video, text and audio are evaluated. Insights into real-world performance are shared through case studies of high-profile deepfake incidents. Current research limitations around aspects like cross-modality detection are highlighted to inform future work. This timely survey furnishes researchers, practitioners and policymakers with a holistic overview of the state-of-the-art in deepfake detection. It concludes that continuous innovation is imperative to counter the rapidly evolving technological landscape enabling deepfakes.
11

Rajagopal, Tendral, Velayutham Chandrashekaran, and Vignesh Ilango. "Unmasking the Deepfake Infocalypse: Debunking Manufactured Misinformation with a Prototype Model in the AI Era “Seeing and hearing, no longer believing.”." Journal of Communication and Management 2, no. 04 (December 18, 2023): 230–37. http://dx.doi.org/10.58966/jcm2023243.

Full text
Abstract:
Machine learning and artificial intelligence in journalism are an aid to, not a replacement for or a challenge to, a journalist’s ability. Artificial-intelligence-backed fake news, characterized by misinformation and disinformation, is the new emerging threat in our broken information ecosystem. Deepfakes erode trust in visual evidence, making it increasingly challenging to discern real from fake. Deepfakes are an increasing cause for concern since they can be used to propagate false information, fabricate news, or deceive people. While artificial intelligence is used to create deepfakes, the same technology is also used to detect them. Digital media literacy, along with technological deepfake detection tools, is an effective solution to the menace of deepfakes. The paper reviews the creation and detection of deepfakes using machine learning and deep learning models. It also discusses the implications of cognitive biases and social identity theories in deepfake creation, and strategies for establishing a trustworthy information ecosystem. The researchers have developed a prototype deepfake detection model, which can lay a foundation for exposing deepfake videos. The prototype model correctly identified 35 out of 50 deepfake videos, achieving 70% accuracy; scores of 65% and above were treated as “fake” and scores below 65% as “real”. Fifteen videos were incorrectly classified as real, potentially due to model limitations and the quality of the deepfakes; these deepfakes were highly convincing and flawless. Deepfakes have a high potential to damage reputations and are often obscene or vulgar. There is no specific law for deepfakes, but general laws require offensive or fake content to be taken down. Deepfakes are often used to spread misinformation or harm someone’s reputation, and they are designed to harass, intimidate, or spread fear. A significant majority of deepfake videos are pornographic and target female celebrities.
12

Singh, Parminder. "A Survey of Deepfake Detection Methods: Innovations, Accuracy, and Future Directions." International Journal of Scientific Research in Engineering and Management 08, no. 008 (August 9, 2024): 1–12. http://dx.doi.org/10.55041/ijsrem37000.

Full text
Abstract:
Deepfake technology has emerged as a significant challenge in digital media, posing risks related to misinformation and identity theft. This paper provides a comprehensive review of deepfake detection techniques, highlighting advancements in traditional machine learning, deep learning models, hybrid approaches, and attention mechanisms. We evaluate the effectiveness of various methods based on accuracy, computational efficiency, and practical applicability, using key datasets and benchmarking systems. Our review underscores the progress made in detecting deepfakes and identifies areas for future research, including real-time detection, multimodal approaches, and improvements in computational efficiency. Key Words: Deepfake detection, machine learning, deep learning, convolutional neural networks, transformers, attention mechanisms, multimodal data, benchmarking systems, datasets.
13

Singh, Viomesh, Bhavesh Agone, Aryan More, Aryan Mengawade, Atharva Deshmukh, and Atharva Badgujar. "SAVANA: A Robust Framework for Deepfake Video Detection and Hybrid Double Paraphrasing with Probabilistic Analysis Approach for AI Text Detection." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (November 30, 2024): 2074–83. http://dx.doi.org/10.22214/ijraset.2024.65526.

Full text
Abstract:
As generative AI has advanced at great speed, the need to detect AI-generated content, including text and deepfake media, has also increased. This research work proposes a hybrid detection method that includes double-paraphrasing-based consistency checks, coupled with probabilistic content analysis through natural language processing and machine learning algorithms for text, and advanced deepfake detection techniques for media. Our system hybridizes the double-paraphrasing framework of SAVANA with probabilistic analysis to achieve high accuracy in AI-text detection on formats such as DOCX or PDF from diverse domains: academic text, business text, reviews, and media. For detecting the visual artifacts and spatiotemporal inconsistencies attributed to deepfakes in media applications, we exploit BlazeFace and EfficientNetB4 to extract features and to classify and detect the respective deepfakes. Experimental results indicate that the hybrid model achieves up to 95% accuracy for AI-generated text detection and up to 96% accuracy for deepfake detection, compared with traditional models and standalone SAVANA-based methods. This approach therefore positions our framework as an adaptive and reliable tool for detecting AI-generated content in various contexts, thereby enriching content integrity in digital environments.
14

Jagdale, Anushka, Vanshika Kubde, Rahul Kortikar, Aparna V. Mote, and Nitisha Rajgure. "DeepFake Image Detection: Fake Image Detection using CNNs and GANs Algorithm." International Journal of Scientific Research in Engineering and Management 08, no. 11 (November 10, 2024): 1–6. http://dx.doi.org/10.55041/ijsrem38628.

Full text
Abstract:
Deep learning is a powerful and versatile technique that has seen extensive applications in areas such as natural language processing, machine learning, and computer vision. Among its most recent applications is the generation of deepfakes, which are high-quality, realistic altered videos or images that have garnered significant attention. While innovative uses of deepfake technology are being explored, its potential for misuse has raised serious concerns. Harmful applications, such as spreading fake news, creating celebrity pornography, financial fraud, and revenge pornography, have become increasingly prevalent in the digital age. As a result, public figures, including celebrities and politicians, face heightened risks from deepfake content. In response, substantial research has been conducted to explore the mechanics behind deepfakes, leading to the development of various deep learning-based algorithms for their detection. This study provides a comprehensive review of deepfake creation and detection techniques, focusing on different deep learning approaches. Additionally, it discusses the limitations of existing methods and the availability of datasets for research. The lack of a highly accurate and fully automated deepfake detection system presents a significant challenge as the ease of generating and distributing such content continues to grow. Nonetheless, recent efforts in deep learning have shown promising results, surpassing traditional detection methods.
15

Lee, Eun-Gi, Isack Lee, and Seok-Bong Yoo. "ClueCatcher: Catching Domain-Wise Independent Clues for Deepfake Detection." Mathematics 11, no. 18 (September 17, 2023): 3952. http://dx.doi.org/10.3390/math11183952.

Full text
Abstract:
Deepfake detection is a focus of extensive research to combat the proliferation of manipulated media. Existing approaches suffer from limited generalizability and struggle to detect deepfakes created using unseen techniques. This paper proposes a novel deepfake detection method to improve generalizability. We observe domain-wise independent clues in deepfake images, including inconsistencies in facial colors, detectable artifacts at synthesis boundaries, and disparities in quality between facial and nonfacial regions. This approach uses an interpatch dissimilarity estimator and a multistream convolutional neural network to capture deepfake clues unique to each feature. By exploiting these clues, we enhance the effectiveness and generalizability of deepfake detection. The experimental results demonstrate the improved performance and robustness of this method.
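One of the clues listed above, disparities between facial regions, can be caricatured by comparing per-patch color statistics: a spliced region shifts the mean color of the patches it covers. A toy sketch (a hypothetical feature, not the authors' interpatch dissimilarity estimator):

```python
import numpy as np

def interpatch_dissimilarity(img, grid=4):
    """Split an RGB image into a grid of patches and return the maximum
    pairwise distance between per-patch mean colors. Synthesis boundaries
    in a composite tend to raise this value."""
    h, w, _ = img.shape
    ph, pw = h // grid, w // grid
    means = np.stack([img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].mean(axis=(0, 1))
                      for r in range(grid) for c in range(grid)])
    d = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=-1)
    return float(d.max())

rng = np.random.default_rng(2)
uniform = np.full((64, 64, 3), 0.5) + rng.normal(scale=0.01, size=(64, 64, 3))
spliced = uniform.copy()
spliced[:32, :32] += 0.3          # pasted-in region with a slight color shift
print(interpatch_dissimilarity(spliced) > interpatch_dissimilarity(uniform))
```

The paper's estimator is learned rather than hand-coded, but the intuition is the same: genuine faces are statistically consistent across patches, composites are not.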
16

Guarnera, Luca, Oliver Giudice, Francesco Guarnera, Alessandro Ortis, Giovanni Puglisi, Antonino Paratore, Linh M. Q. Bui, et al. "The Face Deepfake Detection Challenge." Journal of Imaging 8, no. 10 (September 28, 2022): 263. http://dx.doi.org/10.3390/jimaging8100263.

Full text
Abstract:
Multimedia data manipulation and forgery has never been easier than today, thanks to the power of Artificial Intelligence (AI). AI-generated fake content, commonly called Deepfakes, have been raising new issues and concerns, but also new challenges for the research community. The Deepfake detection task has become widely addressed, but unfortunately, approaches in the literature suffer from generalization issues. In this paper, the Face Deepfake Detection and Reconstruction Challenge is described. Two different tasks were proposed to the participants: (i) creating a Deepfake detector capable of working in an “in the wild” scenario; (ii) creating a method capable of reconstructing original images from Deepfakes. Real images from CelebA and FFHQ and Deepfake images created by StarGAN, StarGAN-v2, StyleGAN, StyleGAN2, AttGAN and GDWCT were collected for the competition. The winning teams were chosen with respect to the highest classification accuracy value (Task I) and “minimum average distance to Manhattan” (Task II). Deep Learning algorithms, particularly those based on the EfficientNet architecture, achieved the best results in Task I. No winners were proclaimed for Task II. A detailed discussion of teams’ proposed methods with corresponding ranking is presented in this paper.
17

Vaishnavi, K. D. V. N., L. Hima Bindu, M. Sathvika, K. Udaya Lakshmi, M. Harini, and N. Ashok. "Deep learning approaches for robust deep fake detection." World Journal of Advanced Research and Reviews 21, no. 3 (March 30, 2024): 2283–89. http://dx.doi.org/10.30574/wjarr.2024.21.3.0889.

Full text
Abstract:
Detecting deepfake images using a deep learning approach, particularly the DenseNet121 model, involves training a neural network to differentiate between authentic and manipulated images. Deepfakes have gained prominence due to advances in deep learning, especially generative adversarial networks (GANs). They pose significant challenges to the veracity of digital content, as they can be used to create realistic and deceptive media. Deepfakes are realistic-looking fake media generated by artificial intelligence tools such as Face2Face and DeepFake, and they pose a severe threat to the public. As more deepfakes spread, better ways to find and prevent them are needed. Deepfakes involve the creation of highly realistic images and videos that can be misused to spread fake news and defame individuals, posing a significant threat to the integrity of digital content. Our project, “Deep Learning Approaches for Robust Deep Fake Detection”, aims to address this critical issue by developing a robust system for identifying and localizing deepfake content using the DenseNet121 model. The proposed framework seamlessly integrates forgery detection and localization. The dataset used in this project is “140k Real and Fake Faces”: 70k real faces from the Flickr dataset collected by Nvidia and 70k fake faces sampled from the 1 million fake faces generated by StyleGAN. For localization, we use the Grad-CAM method to accurately identify the morphed regions. Overall, our goal is to make deepfake detection more effective and reliable in today’s digital landscape.
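Grad-CAM, which this project uses for localization, weights each convolutional feature map by the spatial average of its gradient, sums the weighted maps, and rectifies. A minimal NumPy sketch (shapes and values are illustrative, not tied to DenseNet121):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (channels, H, W) arrays taken from the last
    conv layer for the 'fake' class score. Returns an (H, W) heatmap in [0, 1]."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k: GAP of gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0)                           # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize for display
    return cam

rng = np.random.default_rng(3)
feats = rng.random((8, 7, 7))
grads = np.zeros((8, 7, 7))
grads[0, 2:5, 2:5] = 1.0        # pretend only channel 0 drives the "fake" score
heat = grad_cam(feats, grads)
print(heat.shape, round(float(heat.max()), 2))  # → (7, 7) 1.0
```

In use, the heatmap is upsampled to image resolution and overlaid on the face to point at the morphed regions.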
18

Abu-Ein, Ashraf A., Obaida M. Al-Hazaimeh, Alaa M. Dawood, and Andraws I. Swidan. "Analysis of the current state of deepfake techniques-creation and detection methods." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 3 (October 7, 2022): 1659. http://dx.doi.org/10.11591/ijeecs.v28.i3.pp1659-1667.

Full text
Abstract:
Deep learning has effectively solved complicated challenges ranging from large data analytics to human-level control and computer vision. However, deep learning has also been used to produce software that threatens privacy, democracy, and national security. Deepfake is one of these new applications backed by deep learning. Fake images and movies created by deepfake algorithms might be difficult for people to tell apart from real ones. This necessitates the development of tools that can automatically detect and evaluate the quality of digital visual media. This paper provides an overview of the algorithms and datasets used to build deepfakes, as well as the approaches presented to detect deepfakes to date. By reviewing the background of deepfake methods, this paper provides a complete overview of deepfake approaches and promotes the creation of new and more robust strategies to deal with increasingly complex deepfakes.
19

Shahzad, Hina Fatima, Furqan Rustam, Emmanuel Soriano Flores, Juan Luís Vidal Mazón, Isabel de la Torre Diez, and Imran Ashraf. "A Review of Image Processing Techniques for Deepfakes." Sensors 22, no. 12 (June 16, 2022): 4556. http://dx.doi.org/10.3390/s22124556.

Full text
Abstract:
Deep learning is used to address a wide range of challenging issues including large data analysis, image processing, object detection, and autonomous control. In the same way, deep learning techniques are also used to develop software and techniques that pose a danger to privacy, democracy, and national security. Fake content in the form of images and videos created through digital manipulation with artificial intelligence (AI) approaches has become widespread during the past few years. Deepfakes, in the form of audio, images, and videos, have become a major concern. Complemented by artificial intelligence, deepfakes swap the face of one person with another and generate hyper-realistic videos. Amplified by the speed of social media, deepfakes can immediately reach millions of people and can be very dangerous for spreading fake news, hoaxes, and fraud. Besides well-known movie stars, politicians have been victims of deepfakes in the past, especially US presidents Barack Obama and Donald Trump; however, the public at large can also be the target of deepfakes. To overcome the challenge of deepfake identification and mitigate its impact, large efforts have been carried out to devise novel methods to detect face manipulation. This study also discusses how to counter the threats from deepfake technology and alleviate its impact. The outcomes suggest that although deepfakes pose a serious threat to society, business, and political institutions, they can be combated through appropriate policies, regulation, individual actions, training, and education. In addition, further evolution of technology is needed for deepfake identification, content authentication, and deepfake prevention. Different studies have performed deepfake detection using machine learning and deep learning techniques such as support vector machines, random forests, multilayer perceptrons, k-nearest neighbors, and convolutional neural networks with and without long short-term memory, among other similar models. This study aims to highlight recent research in deepfake image and video detection, covering deepfake creation, various detection algorithms on self-made datasets, and existing benchmark datasets.
APA, Harvard, Vancouver, ISO, and other styles
20

AL-KHAZRAJI, Samer Hussain, Hassan Hadi SALEH, Adil Ibrahim KHALID, and Israa Adnan MISHKHAL. "Impact of Deepfake Technology on Social Media: Detection, Misinformation and Societal Implications." Eurasia Proceedings of Science Technology Engineering and Mathematics 23 (October 16, 2023): 429–41. http://dx.doi.org/10.55549/epstem.1371792.

Full text
Abstract:
Deepfake technology, which allows the manipulation and fabrication of audio, video, and images, has gained significant attention due to its potential to deceive and manipulate. As deepfakes proliferate on social media platforms, understanding their impact becomes crucial. This research investigates the detection, misinformation, and societal implications of deepfake technology on social media. Through a comprehensive literature review, the study examines the development and capabilities of deepfakes, existing detection techniques, and challenges in identifying them. The role of deepfakes in spreading misinformation and disinformation is explored, highlighting their potential consequences on public trust and social cohesion. The societal implications and ethical considerations surrounding deepfakes are examined, along with legal and policy responses. Mitigation strategies, including technological advancements and platform policies, are discussed. By shedding light on these critical aspects, this research aims to contribute to a better understanding of the impact of deepfake technology on social media and to inform future efforts in detection, prevention, and policy development.
APA, Harvard, Vancouver, ISO, and other styles
21

Sharma, Ankita. "RESILIENCE OF NETWORK PROTOCOLS TO DEEPFAKE DETECTION TRAFFIC." International Research Journal of Computer Science 09, no. 08 (August 31, 2022): 342–47. http://dx.doi.org/10.26562/irjcs.2022.v0908.36.

Full text
Abstract:
This research analyzes the robustness of current multimedia protocols in managing traffic produced by real-time, high-volume deepfake detection. The widespread occurrence of deepfakes on online platforms necessitates swift and precise detection algorithms for multimedia data, highlighting the demand for optimized network protocols. This paper assesses the existing functionalities of multimedia protocols, including RTP and RTSP, in facilitating deepfake detection and proposes improvements to enhance robustness, speed, and accuracy. The results underscore the necessary protocol modifications to accommodate real-time deepfake traffic requirements and reduce network stress.
APA, Harvard, Vancouver, ISO, and other styles
22

Gupta, Gourav, Kiran Raja, Manish Gupta, Tony Jan, Scott Thompson Whiteside, and Mukesh Prasad. "A Comprehensive Review of DeepFake Detection Using Advanced Machine Learning and Fusion Methods." Electronics 13, no. 1 (December 25, 2023): 95. http://dx.doi.org/10.3390/electronics13010095.

Full text
Abstract:
Recent advances in Generative Artificial Intelligence (AI) have increased the possibility of generating hyper-realistic DeepFake videos or images that can cause serious harm to vulnerable children, individuals, and society at large through misinformation. To overcome this serious problem, many researchers have attempted to detect DeepFakes using advanced machine learning and fusion techniques. This paper presents a detailed review of past and present DeepFake detection methods, with a particular focus on media-modality fusion and machine learning. This paper also provides detailed information on the benchmark datasets available for DeepFake detection research. This review addresses 67 primary papers published between 2015 and 2023 on DeepFake detection, including 55 research papers on image and video DeepFake detection methodologies and 15 research papers on identifying and verifying speaker authentication. This paper offers valuable information on DeepFake detection research and a unique review analysis of advanced machine learning and modality fusion that sets it apart from other review papers. This paper further offers informed guidelines for future work in DeepFake detection utilizing advanced state-of-the-art machine learning and information-fusion models, which should support further advancement in DeepFake detection for a sustainable and safer digital future.
APA, Harvard, Vancouver, ISO, and other styles
23

Kumari, Prerna, and Vikas Kumar. "Deepfake Detection." International Journal of Science and Research (IJSR) 13, no. 6 (June 5, 2024): 356–58. http://dx.doi.org/10.21275/sr24606012528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

S, Mrs Prajwal. "DeepFake Image Detection." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 6, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem30215.

Full text
Abstract:
The growth of deepfakes in today’s digital environment raises significant doubts about the genuineness and dependability of online content. To address this new challenge, an effective method for deepfake image detection is needed. In this study, we conduct a comparative analysis of three varied convolutional neural networks (CNNs) for deepfake image detection. Our experimental results highlight the strengths and weaknesses of each CNN architecture. We discuss the consequences of our results in the context of deepfake image detection and show which models may be better suited for certain situations. We also address the challenges and limitations associated with deepfake detection, such as the arms race between deepfake creation technologies and detection tools. In conclusion, our work adds to the expanding body of knowledge on deepfake image detection by comparing three major CNN architectures. Our findings provide important guidance for researchers, practitioners, and policymakers working to improve the security and authenticity of content in an increasingly digital age. As deepfake technology continues to evolve, the information presented in this study lays the groundwork for the development of more powerful and up-to-date deepfake detection mechanisms. Keywords—Deepfake, image detection, convolutional neural networks (CNNs), ResNet, InceptionV3, DenseNet, face forgery detection, GAN, forensics, deep learning, artificial intelligence, convolutional layers, pooling layers, fully connected layers, augmented data, accuracy, precision, recall, F1 measure, Area Under Receiver Operating Characteristic Curve.
APA, Harvard, Vancouver, ISO, and other styles
25

Law Kian Seng, NORMAISHARAH MAMAT, Hafiza Abas, and Wan Noor Hamiza Wan Ali. "AI Integrity Solutions for Deepfake Identification and Prevention." Open International Journal of Informatics 12, no. 1 (June 28, 2024): 35–46. http://dx.doi.org/10.11113/oiji2024.12n1.297.

Full text
Abstract:
The increasing complexity of deepfake technology has sparked significant worries over individual privacy, the spread of false information, and deficiencies in cybersecurity. Deepfakes can convincingly modify audio and visual content, making it increasingly challenging to differentiate between real and fake content. To address this critical challenge, this study conducts a survey to reveal a broad range of perspectives on the familiarity, encounters, and concerns related to deepfake technology. In addition, the study evaluates the effectiveness of current strategies in addressing the spread of deepfake material and proposes future approaches for improving the integrity of AI. The survey was delivered digitally, and responses were examined to provide an in-depth analysis of the latest techniques and difficulties in the context of deepfake detection. The outcomes demonstrate a range of perspectives on understanding deepfakes, with explicit agreement on the magnitude of risk presented by harmful deepfake applications. Although the participants showed an understanding of the available interventions, they also identified considerable challenges and the need for improved awareness, robust detection tools, and ethical standards in AI development to address the challenges that deepfakes pose to present detection methods. Implementing AI ethical guidelines to prevent deepfakes has a significant and beneficial effect on several industries by providing protection against their harmful effects. By embedding ethical guidelines into AI development, these policies can build societal trust, mitigate risk, and cultivate a more robust digital environment.
APA, Harvard, Vancouver, ISO, and other styles
26

P. Kamakshi Thai, Sathvik Kalige, Sai Nikhil Ediga, and Lokesh Chougoni. "A survey on deepfake detection through deep learning." World Journal of Advanced Research and Reviews 21, no. 3 (March 30, 2023): 2214–17. http://dx.doi.org/10.30574/wjarr.2024.21.3.0946.

Full text
Abstract:
Imagine watching a video where Tom Hanks delivers a rousing speech, but you suspect it might be fabricated. This growing concern stems from the rise of "DeepFakes," hyper-realistic manipulated videos created using deep learning algorithms. These tools can seamlessly stitch together faces, voices, and body movements, blurring the lines between reality and fiction. While DeepFakes hold promise for entertainment and creative expression, their potential for misuse is significant. Malicious actors could leverage them to spread misinformation, damage reputations, or even influence elections. Thankfully, researchers are developing sophisticated techniques to detect these synthetic creations. This survey delves into the realm of DeepFake detection, exploring various methods employed by deep neural networks (DNNs). We'll dissect how DeepFakes are made, categorize the most common creation techniques, and analyze the strengths and weaknesses of different detection approaches. Furthermore, we'll examine the evolving landscape of DeepFake datasets, which fuel the training and testing of detection models. We'll also discuss the ongoing quest for a universal DeepFake detector, capable of identifying even unseen manipulations. Finally, we'll touch on the ongoing challenges facing both DeepFake creators and detectors, highlighting the arms race that is unfolding in this technological battleground. By shedding light on these advancements and obstacles, we hope to empower audiences with the knowledge to critically evaluate the information they encounter in the digital age.
APA, Harvard, Vancouver, ISO, and other styles
27

Gadgilwar, Jitesh, Kunal Rahangdale, Om Jaiswal, Parag Asare, Pratik Adekar, and Prof Leela Bitla. "Exploring Deepfakes - Creation Techniques, Detection Strategies, and Emerging Challenges: A Survey." International Journal for Research in Applied Science and Engineering Technology 11, no. 3 (March 31, 2023): 1491–95. http://dx.doi.org/10.22214/ijraset.2023.49681.

Full text
Abstract:
Deep learning, integrated with artificial intelligence algorithms, has produced numerous beneficial practical technologies. However, it also raises a problem that the world is facing today: despite its innumerable suitable applications, it poses a danger to personal privacy, democracy, and corporate credibility. One such application is the deepfake, which has caused chaos on the internet. Deepfakes manipulate an individual's images and videos in such a way that it is difficult to differentiate the original from the fake. This calls for solutions to counter and automatically detect such media. This study aims to explore the techniques for deepfake creation and detection, using various methods of algorithm analysis and image analysis to find the root of deepfake creation. This study examines image, audio, and ML algorithms to extract possible signs for analyzing deepfakes. The research compares the performance of these methods in detecting deepfakes generated using different techniques and datasets. As deepfake technology is rapidly evolving, avant-garde techniques are needed to counter it and detect its presence accurately.
APA, Harvard, Vancouver, ISO, and other styles
28

Karaköse, Mehmet, İsmail İlhan, Hasan Yetiş, and Serhat Ataş. "A New Approach for Deepfake Detection with the Choquet Fuzzy Integral." Applied Sciences 14, no. 16 (August 16, 2024): 7216. http://dx.doi.org/10.3390/app14167216.

Full text
Abstract:
Deepfakes have become widespread and have continued to develop rapidly in recent years. In addition to the use of deepfakes in movies and for humorous purposes, this technology has also begun to pose a threat to many companies and politicians. Deepfake detection is critical to the prevention of this threat. In this study, a Choquet fuzzy integral-based deepfake detection method is proposed to increase overall performance by combining the results obtained from different deepfake detection methods. Three different deepfake detection models were used in the study: XceptionNet, which has better performance in detecting real images/videos; EfficientNet, which has better performance in detecting fake videos; and a hybrid model based on both. The proposed method based on the Choquet fuzzy integral aims to eliminate the shortcomings of each method by drawing on the others. As a result, higher performance was achieved with the proposed method than when the three methods were used individually. In testing and validation studies on the FaceForensics++, DFDC, Celeb-DF, and DeepFake-TIMIT datasets, the individual performance levels of the algorithms used were 81.34%, 82.78%, and 79.15% on average in terms of AUC, while the proposed method reached 97.79%. Considering that the average performance of the three methods across all datasets is 81.09%, this represents an improvement of approximately 16.7%. On the FaceForensics++ dataset, on which the individual algorithms are more successful, the proposed method reaches the highest AUC value, 99.8%. The performance rates can be increased further by changing the individual methods used within the proposed method. We believe that the proposed method will inspire researchers and will be further developed.
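The fusion step this abstract describes can be expressed as a discrete Choquet integral over the individual detectors' scores. The sketch below is a minimal, self-contained illustration; the detector names and fuzzy-measure values are assumptions for demonstration, not the paper's fitted measure:

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of detector scores w.r.t. a fuzzy measure.

    scores: dict mapping detector name -> score in [0, 1]
    mu: dict mapping frozenset of detector names -> measure value,
        monotone, with mu of the full set equal to 1.
    """
    ordered = sorted(scores, key=scores.get)      # ascending by score
    total, prev = 0.0, 0.0
    for i, name in enumerate(ordered):
        coalition = frozenset(ordered[i:])        # detectors scoring >= current
        total += (scores[name] - prev) * mu[coalition]
        prev = scores[name]
    return total

# Illustrative scores from three detectors and an assumed fuzzy measure.
scores = {"xception": 0.9, "efficientnet": 0.6, "hybrid": 0.7}
mu = {
    frozenset(["xception"]): 0.45,
    frozenset(["efficientnet"]): 0.35,
    frozenset(["hybrid"]): 0.30,
    frozenset(["xception", "efficientnet"]): 0.70,
    frozenset(["xception", "hybrid"]): 0.80,
    frozenset(["efficientnet", "hybrid"]): 0.60,
    frozenset(["xception", "efficientnet", "hybrid"]): 1.00,
}
fused = choquet_integral(scores, mu)   # 0.77 with these illustrative values
```

Unlike a plain weighted average, the fuzzy measure lets coalitions of detectors carry more (or less) weight than the sum of their parts, which is what allows one detector's strength on real media to compensate for another's weakness.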
APA, Harvard, Vancouver, ISO, and other styles
29

Khormali, Aminollah, and Jiann-Shiun Yuan. "ADD: Attention-Based DeepFake Detection Approach." Big Data and Cognitive Computing 5, no. 4 (September 27, 2021): 49. http://dx.doi.org/10.3390/bdcc5040049.

Full text
Abstract:
Recent advancements of Generative Adversarial Networks (GANs) pose emerging yet serious privacy risks threatening digital media's integrity and trustworthiness, specifically digital video, through synthesizing hyper-realistic images and videos, i.e., DeepFakes. The need for ascertaining the trustworthiness of digital media calls for automatic yet accurate DeepFake detection algorithms. This paper presents an attention-based DeepFake detection (ADD) method that exploits the fine-grained and spatial-locality attributes of artificially synthesized videos for enhanced detection. The ADD framework is composed of two main components, face close-up and face shut-off data augmentation, and is applicable to any classifier based on a convolutional neural network architecture. ADD first locates potentially manipulated areas of the input image to extract representative features. Second, the detection model is forced to pay more attention to these forgery regions in the decision-making process through a particular focus on interpreting the sample in the learning phase. ADD's performance is evaluated against two challenging DeepFake forensics datasets, i.e., Celeb-DF (V2) and WildDeepFake. We demonstrated the generalization of ADD by evaluating four popular classifiers, namely VGGNet, ResNet, Xception, and MobileNet. The obtained results demonstrate that ADD can significantly boost the detection performance of all four baseline classifiers on both benchmark datasets. In particular, ADD with a ResNet backbone detects DeepFakes at rates above 98.3% on Celeb-DF (V2), outperforming state-of-the-art DeepFake detection methods.
APA, Harvard, Vancouver, ISO, and other styles
30

Tipper, Sarah, Hany F. Atlam, and Harjinder Singh Lallie. "An Investigation into the Utilisation of CNN with LSTM for Video Deepfake Detection." Applied Sciences 14, no. 21 (October 25, 2024): 9754. http://dx.doi.org/10.3390/app14219754.

Full text
Abstract:
Video deepfake detection has emerged as a critical field within the broader domain of digital technologies, driven by the rapid proliferation of AI-generated media and the increasing threat of its misuse for deception and misinformation. The integration of Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) networks has proven to be a promising approach for improving video deepfake detection, achieving near-perfect accuracy. CNNs enable the effective extraction of spatial features from video frames, such as facial textures and lighting, while the LSTM analyses temporal patterns, detecting inconsistencies over time. This hybrid model enhances the ability to detect deepfakes by combining spatial and temporal analysis. However, the existing research lacks systematic evaluations that comprehensively assess their effectiveness and optimal configurations. Therefore, this paper provides a comprehensive review of video deepfake detection techniques utilising hybrid CNN-LSTM models. It systematically investigates state-of-the-art techniques, highlighting common feature extraction approaches and widely used datasets for training and testing. This paper also evaluates model performance across different datasets, identifies key factors influencing detection accuracy, and explores how CNN-LSTM models can be optimised. It also compares CNN-LSTM models with non-LSTM approaches, addresses implementation challenges, and proposes solutions for them. Lastly, open issues and future research directions of video deepfake detection using CNN-LSTM will be discussed. This paper provides valuable insights for researchers and cyber security professionals by reviewing CNN-LSTM models for video deepfake detection, contributing to the advancement of robust and effective deepfake detection systems.
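The hybrid architecture described above can be sketched in a few lines of PyTorch: a small CNN extracts a feature vector from each frame, and an LSTM aggregates the per-frame features across time before a final linear layer emits a real/fake logit. All layer sizes here are illustrative assumptions, not the configuration of any surveyed model:

```python
import torch
import torch.nn as nn

class CnnLstmDetector(nn.Module):
    """Toy CNN-LSTM: per-frame spatial features -> temporal LSTM -> logit."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, clip):                       # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1))       # run CNN on every frame
        feats = feats.view(b, t, -1)               # regroup frames per clip
        _, (h_n, _) = self.lstm(feats)             # last hidden state
        return self.head(h_n[-1]).squeeze(-1)      # (batch,) of logits

model = CnnLstmDetector()
logits = model(torch.randn(2, 8, 3, 64, 64))       # 2 clips of 8 frames each
```

In practice the CNN stem would be a pretrained backbone (e.g. an ImageNet model) rather than two convolutions, but the shape flow, flattening batch and time for the CNN, then regrouping for the LSTM, is the essence of the approach.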
APA, Harvard, Vancouver, ISO, and other styles
31

Akhtar, Zahid. "Deepfakes Generation and Detection: A Short Survey." Journal of Imaging 9, no. 1 (January 13, 2023): 18. http://dx.doi.org/10.3390/jimaging9010018.

Full text
Abstract:
Advancements in deep learning techniques and the availability of free, large databases have made it possible, even for non-technical people, to either manipulate or generate realistic facial samples for both benign and malicious purposes. DeepFakes refer to face multimedia content, which has been digitally altered or synthetically created using deep neural networks. The paper first outlines the readily available face editing apps and the vulnerability (or performance degradation) of face recognition systems under various face manipulations. Next, this survey presents an overview of the techniques and works that have been carried out in recent years for deepfake and face manipulations. Especially, four kinds of deepfake or face manipulations are reviewed, i.e., identity swap, face reenactment, attribute manipulation, and entire face synthesis. For each category, deepfake or face manipulation generation methods as well as those manipulation detection methods are detailed. Despite significant progress based on traditional and advanced computer vision, artificial intelligence, and physics, there is still a huge arms race surging up between attackers/offenders/adversaries (i.e., DeepFake generation methods) and defenders (i.e., DeepFake detection methods). Thus, open challenges and potential research directions are also discussed. This paper is expected to aid the readers in comprehending deepfake generation and detection mechanisms, together with open issues and future directions.
APA, Harvard, Vancouver, ISO, and other styles
32

Kapoor, Tushar. "Deepfake Audio Detection System." International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (May 31, 2024): 984–89. http://dx.doi.org/10.22214/ijraset.2024.61718.

Full text
Abstract:
To ensure the authenticity of audio material, it is important to establish reliable detection tools that can trace the spread and use of deepfake technologies. This study focuses on a methodology for deepfake audio identification using Mel Frequency Cepstral Coefficients (MFCCs) as features with Random Forest as a classifier. By extracting MFCC features from audio clips and using them in a Random Forest model, one can learn the unique spectral properties that help distinguish deepfakes from authentic audio. The Random Forest algorithm, known for working well in an ensemble-learning paradigm, is utilized to identify patterns that are representative of deepfake manipulation. To ensure the efficiency and reliability of this method, it was tested on a large number of different datasets that included both genuine and fake voice samples. To ensure robustness and generalization, cross-validation techniques are employed, restricting model predictions to the range of 0 to 1 and providing informative error messages for effective diagnosis. This study thus contributes to developing and strengthening methods for identifying and eliminating threats in the area of synthetic audio.
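A pipeline of this shape is straightforward to sketch with scikit-learn. In the sketch below, the MFCC vectors are replaced by synthetic stand-in features (a real pipeline would compute MFCCs from audio with a library such as librosa); the feature dimension, sample counts, and class separation are all assumptions for demonstration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for per-clip MFCC feature vectors (e.g. the mean of 13
# coefficients per clip). Genuine and fake clips are drawn from slightly
# shifted distributions so the classes are learnable.
n, d = 200, 13
real = rng.normal(0.0, 1.0, (n, d))
fake = rng.normal(1.0, 1.0, (n, d))
X = np.vstack([real, fake])
y = np.array([0] * n + [1] * n)        # 0 = genuine, 1 = deepfake

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
mean_acc = scores.mean()
```

The cross-validation step mirrors the abstract's robustness check: each fold holds out a fifth of the clips, so `mean_acc` estimates how the forest generalises to unseen audio rather than how well it memorised the training set.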
APA, Harvard, Vancouver, ISO, and other styles
33

Shad, Hasin Shahed, Md Mashfiq Rizvee, Nishat Tasnim Roza, S. M. Ahsanul Hoq, Mohammad Monirujjaman Khan, Arjun Singh, Atef Zaguia, and Sami Bourouis. "Comparative Analysis of Deepfake Image Detection Method Using Convolutional Neural Network." Computational Intelligence and Neuroscience 2021 (December 16, 2021): 1–18. http://dx.doi.org/10.1155/2021/3111676.

Full text
Abstract:
Generation Z is a data-driven generation. Everyone has the entirety of humanity’s knowledge in their hands. The technological possibilities are endless. However, we use and misuse this blessing for face swapping using deepfakes. Deepfake is an emerging subdomain of artificial intelligence technology in which one person’s face is overlaid on another person’s face, which is very prominent across social media. Machine learning is the main element of deepfakes, and it has allowed deepfake images and videos to be generated considerably faster and at a lower cost. Despite the negative connotations associated with the phrase “deepfakes,” the technology is being more widely employed commercially and individually. Although it is relatively new, the latest technological advances make it more and more challenging to distinguish deepfakes and synthesized images from real ones. An increasing sense of unease has developed around the emergence of deepfake technologies. Our main objective is to detect deepfake images from real ones accurately. In this research, we implemented several methods to detect deepfake images and make a comparative analysis. Our model was trained on datasets from Kaggle, which had 70,000 images from the Flickr dataset and 70,000 images produced by StyleGAN. For this comparative study of the use of convolutional neural networks (CNNs) to identify genuine and deepfake pictures, we trained eight different CNN models. Three of these models were trained using the DenseNet architecture (DenseNet121, DenseNet169, and DenseNet201); two were trained using the VGGNet architecture (VGG16, VGG19); one was trained with the ResNet50 architecture, one with VGGFace, and one with a bespoke CNN architecture. We have also implemented a custom model that incorporates methods like dropout and padding that aid in determining whether or not the other models reflect their objectives.
The results were categorized by five evaluation metrics: accuracy, precision, recall, F1-score, and area under the ROC (receiver operating characteristic) curve. Amongst all the models, VGGFace performed the best, with 99% accuracy. Besides, we obtained 97% from the ResNet50, 96% from the DenseNet201, 95% from the DenseNet169, 94% from the VGG19, 92% from the VGG16, 97% from the DenseNet121 model, and 90% from the custom model.
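Four of the five evaluation metrics used above follow directly from the confusion-matrix counts; a minimal pure-Python sketch (AUC is omitted because it requires ranked scores rather than hard labels, e.g. via scikit-learn's `roc_auc_score`):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels (1 = deepfake)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy predictions: 3 true positives, 3 true negatives, 1 FP, 1 FN,
# so all four metrics come out to 0.75 here.
m = classification_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                           [1, 1, 0, 0, 0, 1, 1, 0])
```

Reporting precision and recall alongside accuracy matters for deepfake benchmarks, where real/fake class balance varies between datasets and a high accuracy alone can mask poor fake-class recall.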
APA, Harvard, Vancouver, ISO, and other styles
34

Emaley, Aman Kumar. "Discerning Deception: A Face-Centric Deepfake Detection Approach with ResNeXt-50 and LSTMs." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 5075–83. http://dx.doi.org/10.22214/ijraset.2024.61186.

Full text
Abstract:
A.I. has grown to epidemic proportions over recent years, as it is applied in almost all sectors to take over work from humans and complete it effectively without human intervention. Deep learning, a branch of A.I., operates by mimicking human judgment and action through neural network systems. Nonetheless, as these tools have spread, there have been numerous cases of misguided individuals misusing them to manipulate videos, audio, and text to achieve their own agendas. Generative Adversarial Networks (GANs) are central to the development of believable deepfakes. GANs have developed a crucial ability to generate videos that replace frames with material from another video source to create deepfake videos. While GANs serve various purposes such as entertainment, teaching, and experimentation, malicious actors can misuse these deep learning techniques to manipulate videos, impacting the privacy of individuals in society. This paper conducts an analysis of different deepfake detection models, comparing their efficacy and discussing potential future extensions of deepfake technology. The study presents a novel deepfake detection approach utilizing a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This method utilizes ResNeXt-50 for extracting features at the frame level, while employing an LSTM (Long Short-Term Memory) network for video classification based on these extracted features. Various datasets are incorporated, including the DeepFake Detection Challenge dataset (DFDC) and the FaceForensics++ (FF++) deepfake collection, combining them to achieve a high-accuracy model capable of accurately discerning between real and deepfake videos. The results of this study make a valuable contribution to the continuous endeavors aimed at improving deepfake detection abilities and ensuring privacy protection in a time heavily influenced by artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
35

Noreen, Iram, Muhammad Shahid Muneer, and Saira Gillani. "Deepfake attack prevention using steganography GANs." PeerJ Computer Science 8 (October 20, 2022): e1125. http://dx.doi.org/10.7717/peerj-cs.1125.

Full text
Abstract:
Background: Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques like auto-encoders and generative adversarial networks (GANs) is approaching a level that makes deepfake detection nearly impossible. A deepfake is created by swapping videos, images, or audio with the target, consequently raising digital media threats over the internet. Much work has been done to detect deepfake videos through feature detection using convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spatiotemporal CNNs. However, these techniques will not remain effective given continuous improvements in GANs. StyleGAN can create fake videos of high fidelity that cannot be easily detected. Hence, deepfake prevention, rather than mere detection, is the need of the hour. Methods: Recently, blockchain-based ownership methods, image tags, and watermarks in video frames have been used to prevent deepfakes. However, this process is not fully effective: an image frame could be faked by copying its watermark and reusing it to create a deepfake. In this research, an enhanced, modified version of the steganography technique RivaGAN is used to address the issue. The proposed approach encodes watermarks into features of the video frames by training an "attention model" with the ReLU activation function to achieve a fast learning rate. Results: The proposed attention-generating approach has been validated with multiple activation functions and learning rates. It achieved 99.7% accuracy in embedding watermarks into the frames of the video. After generating the attention model, the generative adversarial network was trained using DeepFaceLab 2.0, and the prevention of deepfake attacks was tested using watermark-embedded videos comprising 8,074 frames from different benchmark datasets. The proposed approach achieved a 100% success rate in preventing deepfake attacks.
Our code is available at https://github.com/shahidmuneer/deepfakes-watermarking-technique.
APA, Harvard, Vancouver, ISO, and other styles
36

Godulla, Alexander, Christian P. Hoffmann, and Daniel Seibert. "Dealing with deepfakes – an interdisciplinary examination of the state of research and implications for communication studies." Studies in Communication and Media 10, no. 1 (2021): 72–96. http://dx.doi.org/10.5771/2192-4007-2021-1-72.

Full text
Abstract:
Using artificial intelligence, it is becoming increasingly easy to create highly realistic but fake video content, so-called deepfakes. As a result, it is no longer always possible to distinguish real from synthetically created recordings with the naked eye. Despite the novelty of this phenomenon, regulators and industry players have started to address the risks associated with deepfakes. Yet research on deepfakes is still in its infancy. This paper presents findings from a systematic review of English-language deepfake research to identify salient discussions. We find that, to date, deepfake research is driven by computer science and law, with studies focusing on deepfake detection and regulation. While a number of studies address the potential of deepfakes for political disinformation, few have examined user perceptions of and reactions to deepfakes. Other notable research topics include challenges to journalistic practices and pornographic applications of deepfakes. We identify research gaps and derive implications for future communication studies research.
APA, Harvard, Vancouver, ISO, and other styles
37

K, Mr Gopi. "Deep Fake Detection using Deep Learning." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 6, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33196.

Full text
Abstract:
Deep learning is an effective method that is broadly used across a wide range of areas, e.g., computer vision, machine vision, and natural language processing. Deepfakes are an application of this technology in which the images and videos of a person are manipulated in such a way that it is difficult for human beings to tell the difference between them and the originals. Deepfakes have been the subject of several studies recently, and a number of deep learning approaches have been proposed for their detection. Here, we provide an extensive survey of deepfake generation and recognition techniques using neural networks. Additionally, a detailed study of the different technologies used in deepfake detection is provided. This will surely benefit researchers in this area, as it includes new cutting-edge methods for detecting fake videos or images on social networks. Moreover, it makes it easy to compare prior work by explaining how existing models were designed and what data was employed to train them. Key Words: Deep Learning, Fake Detection, Neural Networks, Social Networks
38

Degadwala, Sheshang, and Vishal Manishbhai Patel. "Advancements in Deepfake Detection: A Review of Emerging Techniques and Technologies." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 5 (September 5, 2024): 127–39. http://dx.doi.org/10.32628/cseit24105811.

Full text
Abstract:
This review paper provides a comprehensive analysis of the current state of deepfake detection technologies, driven by the growing concerns over the misuse of synthetic media for malicious purposes, such as misinformation, identity theft, and privacy invasion. The motivation behind this work stems from the increasing sophistication of deepfake generation methods, making it challenging to differentiate between real and manipulated content. While numerous detection techniques have been proposed, they often face limitations in scalability, generalization across different types of deepfakes, and robustness against adversarial attacks. The aim of this paper is to critically assess existing deepfake detection approaches, highlighting their strengths and weaknesses. The objectives include categorizing detection methods, evaluating their performance in diverse contexts, identifying the limitations of current technologies, and proposing future research directions to enhance detection efficacy and adaptability.
39

Naitali, Amal, Mohammed Ridouani, Fatima Salahdine, and Naima Kaabouch. "Deepfake Attacks: Generation, Detection, Datasets, Challenges, and Research Directions." Computers 12, no. 10 (October 23, 2023): 216. http://dx.doi.org/10.3390/computers12100216.

Full text
Abstract:
Recent years have seen a substantial increase in interest in deepfakes, a fast-developing field at the nexus of artificial intelligence and multimedia. These artificial media creations, made possible by deep learning algorithms, allow for the manipulation and creation of digital content that is extremely realistic and challenging to distinguish from authentic content. Deepfakes can be used for entertainment, education, and research; however, they pose a range of significant problems across various domains, such as misinformation, political manipulation, propaganda, reputational damage, and fraud. This survey paper provides a general understanding of deepfakes and their creation; it also presents an overview of state-of-the-art detection techniques, existing datasets curated for deepfake research, as well as associated challenges and future research trends. By synthesizing existing knowledge and research, this survey aims to facilitate further advancements in deepfake detection and mitigation strategies, ultimately fostering a safer and more trustworthy digital environment.
40

Borade, Shwetambari, Nilakshi Jain, Bhavesh Patel, Vineet Kumar, Mustansir Godhrawala, Shubham Kolaskar, Yash Nagare, Pratham Shah, and Jayan Shah. "ResNet50 DeepFake Detector: Unmasking Reality." Indian Journal Of Science And Technology 17, no. 13 (March 25, 2024): 1263–71. http://dx.doi.org/10.17485/ijst/v17i13.285.

Full text
Abstract:
Objectives: The objective of this research is to detect video deepfakes with higher accuracy and provide optimal results. The research aims to reduce the time complexity of media processing while simultaneously improving model accuracy. Methods: This research utilized the CelebDF and FaceForensics++ datasets for training, over 32 epochs on a single Nvidia Tesla T4 GPU. This method of training and validating yielded an error of <5%. Using image scraping, the model initially eliminates unimportant areas of consideration, reducing the number of scans the model performs to identify the face. This lowers the training-to-graph ratio and provides a smaller error margin. Findings: This research's findings reveal the model's robustness in detecting manipulated videos generated by deepfake techniques. Through extensive experimentation on diverse datasets, ResNet50 consistently demonstrated 97% accuracy, sensitivity, and specificity in distinguishing authentic content from deepfakes. The model exhibited exceptional generalization across various scenarios, including face-swapping and lip-syncing, showcasing its adaptability to evolving deepfake techniques. This research contributes to the existing literature on ResNet50 deepfake detection tools by adding an image scraping feature to the ResNet50 model and overcoming gaps such as the increasing error percentage of some prior models. The research in [1] has a 20% error percentage, while this research has an error percentage of 5% with an accuracy of 97%. Novelty: The study employs ResNet50 to detect deepfake videos, utilizing novel image scraping techniques to minimize errors and enhance prediction accuracy. Keywords: Deepfakes, Deep Learning, GAN, ResNet50, FaceForensics++, CelebDF
41

Salvi, Davide, Honggu Liu, Sara Mandelli, Paolo Bestagini, Wenbo Zhou, Weiming Zhang, and Stefano Tubaro. "A Robust Approach to Multimodal Deepfake Detection." Journal of Imaging 9, no. 6 (June 19, 2023): 122. http://dx.doi.org/10.3390/jimaging9060122.

Full text
Abstract:
The widespread use of deep learning techniques for creating realistic synthetic media, commonly known as deepfakes, poses a significant threat to individuals, organizations, and society. As the malicious use of these data could lead to unpleasant situations, it is becoming crucial to distinguish between authentic and fake media. Nonetheless, although deepfake generation systems can create convincing images and audio, they may struggle to maintain consistency across different data modalities, such as producing a realistic video sequence where both the visual frames and the speech are fake yet consistent with each other. Moreover, these systems may not accurately reproduce semantically and temporally accurate aspects. All these elements can be exploited to perform a robust detection of fake content. In this paper, we propose a novel approach for detecting deepfake video sequences by leveraging data multimodality. Our method extracts audio-visual features from the input video over time and analyzes them using time-aware neural networks. We exploit both the video and audio modalities to leverage the inconsistencies between and within them, enhancing the final detection performance. The peculiarity of the proposed method is that we never train on multimodal deepfake data, but on disjoint monomodal datasets which contain visual-only or audio-only deepfakes. This frees us from relying on multimodal datasets during training, which is desirable given their scarcity in the literature. Moreover, at test time, it allows us to evaluate the robustness of our proposed detector on unseen multimodal deepfakes. We test different fusion techniques between data modalities and investigate which one leads to more robust predictions by the developed detectors. Our results indicate that a multimodal approach is more effective than a monomodal one, even if trained on disjoint monomodal datasets.
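The fusion idea in this abstract can be illustrated with a minimal score-level sketch. This is an illustrative stand-in, not the authors' time-aware architecture: each per-modality detector is assumed to emit a fake-probability in [0, 1], and the disagreement term mirrors the paper's use of cross-modal inconsistency as a cue.

```python
# Minimal sketch of score-level fusion for audio-visual deepfake
# detection. Per-modality scores are assumed to come from any trained
# detector emitting a fake-probability in [0, 1].

def fuse_scores(video_score: float, audio_score: float,
                w_video: float = 0.5) -> float:
    """Weighted average of per-modality fake probabilities."""
    return w_video * video_score + (1.0 - w_video) * audio_score

def modality_disagreement(video_score: float, audio_score: float) -> float:
    """Inconsistency between modalities, itself a useful fake cue."""
    return abs(video_score - audio_score)

def is_fake(video_score: float, audio_score: float,
            threshold: float = 0.5) -> bool:
    """Flag a clip as fake when the fused score crosses the threshold."""
    return fuse_scores(video_score, audio_score) >= threshold
```

In practice the paper fuses learned features rather than scalar scores, but the thresholded-fusion shape of the decision is the same.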
42

Al Waro'i, Muhammad Nur Abdul Latif. "False Reality: Deepfakes in Terrorist Propaganda and Recruitment." Security Intelligence Terrorism Journal (SITJ) 1, no. 1 (August 14, 2024): 41–59. http://dx.doi.org/10.70710/sitj.v1i1.5.

Full text
Abstract:
Deepfake technology, which leverages artificial intelligence to create hyper-realistic digital fabrications, has emerged as a significant threat across various domains, notably in terrorism. This review critically examines the exploitation of deepfakes in terrorist propaganda and recruitment, presenting a systematic analysis of the technical mechanisms behind their creation and detection, historical and contemporary propaganda methods, and their psychological impacts on audiences. The study identifies key advancements in deepfake detection technologies, such as ensemble learning and convolutional neural networks, which are crucial in distinguishing real from synthetic media. Furthermore, the review highlights the importance of public awareness and psychological resilience as vital countermeasures against deepfake manipulation. Despite technological advancements, significant challenges remain, including the development of real-time detection systems capable of operating in diverse and uncontrolled environments and a comprehensive understanding of the psychological processes affected by deepfake propaganda. The review underscores the urgent need for robust policy frameworks and international cooperation to address the ethical, legal, and security implications of deepfake technology. By integrating technical, psychological, and policy perspectives, this study provides a holistic understanding of deepfake technology's role in modern terrorism and offers insights for developing effective countermeasures. The comprehensive approach aims to contribute to the creation of robust strategies to mitigate the misuse of deepfake technology, ensuring a safer and more trustworthy digital environment.
43

Sameer, Sameer. "Integrating Deep Learning Architecture with Pufferfish Optimization Algorithm for Real-Time Deepfake Video Detection and Classification Model." Fusion: Practice and Applications 18, no. 1 (2025): 288–303. https://doi.org/10.54216/fpa.180120.

Full text
Abstract:
Deepfake is a technology employed to create synthetic videos, manipulated using an artificial intelligence (AI) model named deep learning (DL). Deepfake videos are typically videos showing activities performed by real people but with another individual's face, substituting people's appearances in videos using the DL model. Deepfake technology permits humans to manipulate videos and images using DL, and the resulting deepfakes are challenging to distinguish with normal vision. The term is a combination of the words "deep learning" and "fake", and it mostly denotes material produced by deep neural networks (DNNs), a subclass of machine learning (ML). Deepfake covers numerous modifications of face models and integrates innovative technologies, including computer vision and DL. Deepfake detection can be framed as a binary classification procedure in which content is categorized as either original or deepfake; it works by extracting features from the videos or images that are employed to distinguish between original and deepfake content. Therefore, this study proposes Leveraging Pufferfish Optimization and Deep Belief Network for Enhanced Deepfake Video Detection (LPODBN-EDVD), a technique that intends to detect fake videos using the DL model. In the presented LPODBN-EDVD technique, the data preprocessing stages include splitting the video into frames, face detection, and face cropping. For feature extraction, the EfficientNet model is exploited. Besides, a deep belief network (DBN) classifier is executed for deepfake video detection. Finally, the pufferfish optimization algorithm (POA) is employed for the optimal hyperparameter selection of the DBN classifier. A wide range of simulations was involved in exhibiting the promising results of the LPODBN-EDVD method. The experimental analysis pointed out the enhanced performance of the LPODBN-EDVD technique compared to recent approaches.
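The preprocessing stages named in the abstract (splitting the video into frames, face detection, face cropping) can be sketched minimally as below. This is a hypothetical illustration, not the paper's code: frames are plain nested lists, and the face bounding box would come from a real detector in an actual pipeline.

```python
# Sketch of deepfake-detection preprocessing: pick evenly spaced frames
# from a video, then crop the detected face region from each frame.

def sample_frame_indices(n_frames: int, n_samples: int) -> list:
    """Evenly spaced frame indices to sample from a video."""
    step = max(1, n_frames // n_samples)
    return list(range(0, n_frames, step))[:n_samples]

def crop_face(frame, box):
    """Crop (x, y, width, height) from a frame given as rows of pixels."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]
```

The cropped faces would then feed the feature extractor (EfficientNet in the paper's pipeline).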
44

Wolański, Robert, and Karol Jędrasiak. "Audio-Video Analysis Method of Public Speaking Videos to Detect Deepfake Threat." Safety & Fire Technology 62, no. 2 (December 29, 2023): 172–80. http://dx.doi.org/10.12845/sft.62.2.2023.10.

Full text
Abstract:
Aim: The purpose of the article is to present the hypothesis that the use of discrepancies in audiovisual materials can significantly increase the effectiveness of detecting various types of deepfake and related threats. In order to verify this hypothesis, the authors proposed a new method that reveals inconsistencies in both multiple modalities simultaneously and within individual modalities separately, enabling them to effectively distinguish between authentic and altered public speaking videos. Project and methods: The proposed approach is to integrate audio and visual signals in a so-called fine-grained manner, and then carry out binary classification processes based on calculated adjustments to the classification results of each modality. The method has been tested using various network architectures, in particular Capsule networks – for deep anomaly detection and Swin Transformer – for image classification. Pre-processing included frame extraction and face detection using the MTCNN algorithm, as well as conversion of audio to mel spectrograms to better reflect human auditory perception. The proposed technique was tested on multimodal deepfake datasets, namely FakeAVCeleb and TMC, along with a custom dataset containing 4,700 recordings. The method has shown high performance in identifying deepfake threats in various test scenarios. Results: The method proposed by the authors achieved better AUC and accuracy compared to other reference methods, confirming its effectiveness in the analysis of multimodal artefacts. The test results confirm that it is effective in detecting modified videos in a variety of test scenarios which can be considered an advance over existing deepfake detection techniques. The results highlight the adaptability of the method in various architectures of feature extraction networks. 
Conclusions: The presented method of audiovisual deepfake detection uses fine inconsistencies of multimodal features to distinguish whether the material is authentic or synthetic. It is distinguished by its ability to point out inconsistencies in different types of deepfakes and, within each individual modality, can effectively distinguish authentic content from manipulated counterparts. The adaptability has been confirmed by the successful application of the method in various feature extraction network architectures. Moreover, its effectiveness has been proven in rigorous tests on two different audiovisual deepfake datasets. Keywords: analysis of audio-video stream, detection of deepfake threats, analysis of public speeches
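The mel-spectrogram conversion mentioned in the methods can be grounded in the underlying mel-scale mapping, sketched below with the standard HTK-style formula. This is an assumption about the general technique, not the authors' exact implementation; a full pipeline would additionally apply an STFT and a bank of triangular filters spaced evenly on this scale.

```python
import math

# Mel-scale conversion (HTK-style formula) underlying mel spectrograms,
# which warp frequency to better reflect human auditory perception.

def hz_to_mel(f_hz: float) -> float:
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel: float) -> float:
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def mel_band_edges(f_min: float, f_max: float, n_bands: int) -> list:
    """Band-edge frequencies (Hz) evenly spaced on the mel scale."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_bands + 1)
    return [mel_to_hz(lo + i * step) for i in range(n_bands + 2)]
```

Spacing filter bands evenly in mel rather than in Hz concentrates resolution at low frequencies, which is why the representation suits speech-centric deepfake analysis.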
45

Alanazi, Fatimah, Gary Ushaw, and Graham Morgan. "Improving Detection of DeepFakes through Facial Region Analysis in Images." Electronics 13, no. 1 (December 28, 2023): 126. http://dx.doi.org/10.3390/electronics13010126.

Full text
Abstract:
In the evolving landscape of digital media, the discipline of media forensics, which encompasses the critical examination and authentication of digital images, videos, and audio recordings, has emerged as an area of paramount importance. This heightened significance is predominantly attributed to the burgeoning concerns surrounding the proliferation of DeepFakes, which are highly realistic and manipulated media content, often created using advanced artificial intelligence techniques. Such developments necessitate a profound understanding and advancement in media forensics to ensure the integrity of digital media in various domains. Current research endeavours are primarily directed towards addressing a common challenge observed in DeepFake datasets, which pertains to the issue of overfitting. Many suggested remedies centre around the application of data augmentation methods, with a frequently adopted strategy being the incorporation of random erasure or cutout. This method entails the random removal of sections from an image to introduce diversity and mitigate overfitting. Generating disparities between the altered and unaltered images serves to inhibit the model from excessively adapting itself to individual samples, thus leading to more favourable results. Nonetheless, the stochastic nature of this approach may inadvertently obscure facial regions that harbour vital information necessary for DeepFake detection. Due to the lack of guidelines on specific regions for cutout, most studies use a randomised approach. However, in recent research, face landmarks have been integrated to designate specific facial areas for removal, even though the selection remains somewhat random. Therefore, there is a need to acquire a more comprehensive insight into facial features and identify which regions hold more crucial data for the identification of DeepFakes. 
In this study, the investigation delves into the data conveyed by various facial components through the excision of distinct facial regions during the training of the model. The goal is to offer valuable insights to enhance forthcoming face removal techniques within DeepFake datasets, fostering a deeper comprehension among researchers and advancing the realm of DeepFake detection. Our study presents a novel method that uses face cutout techniques to improve understanding of key facial features crucial in DeepFake detection. Moreover, the method combats overfitting in DeepFake datasets by generating diverse images with these techniques, thereby enhancing model robustness. The developed methodology is validated against publicly available datasets like FF++ and Celeb-DFv2. Both face cutout groups surpassed the Baseline, indicating cutouts improve DeepFake detection. Face Cutout Group 2 excelled, with 91% accuracy on Celeb-DF and 86% on the compound dataset, suggesting external facial features’ significance in detection. The study found that eyes are most impactful and the nose is least in model performance. Future research could explore the augmentation policy’s effect on video-based DeepFake detection.
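The cutout augmentation the study builds on can be sketched as follows. This is an illustrative toy version under stated assumptions: images are nested lists, and in the landmark-guided variant the region coordinates would come from detected facial landmarks (e.g., an eye or nose bounding box) rather than being chosen at random.

```python
import random

# Sketch of cutout augmentation: erase a region of a training image so
# the model cannot over-rely on that region's features.

def cutout(image, region, fill=0):
    """Return a copy of `image` with the (x, y, w, h) `region` erased."""
    x, y, w, h = region
    out = [row[:] for row in image]
    for r in range(y, min(y + h, len(out))):
        for c in range(x, min(x + w, len(out[r]))):
            out[r][c] = fill
    return out

def random_cutout(image, w, h, rng=random):
    """Erase a randomly placed w-by-h patch (the baseline strategy)."""
    x = rng.randrange(max(1, len(image[0]) - w + 1))
    y = rng.randrange(max(1, len(image) - h + 1))
    return cutout(image, (x, y, w, h))
```

The study's finding that eye regions matter most suggests targeting `cutout` at landmark-derived boxes instead of calling `random_cutout`.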
46

Jiang, Jianguo, Boquan Li, Baole Wei, Gang Li, Chao Liu, Weiqing Huang, Meimei Li, and Min Yu. "FakeFilter: A cross-distribution Deepfake detection system with domain adaptation." Journal of Computer Security 29, no. 4 (June 18, 2021): 403–21. http://dx.doi.org/10.3233/jcs-200124.

Full text
Abstract:
Abuse of face swap techniques poses serious threats to the integrity and authenticity of digital visual media. More alarmingly, fake images or videos created by deep learning technologies, also known as Deepfakes, are more realistic, high-quality, and reveal few tampering traces, which attracts great attention in digital multimedia forensics research. To address the threats imposed by Deepfakes, previous work attempted to classify real and fake faces by discriminative visual features, which is subject to various objective conditions such as the angle or posture of a face. Differently, some research devises deep neural networks to discriminate Deepfakes at the microscopic-level semantics of images, which achieves promising results. Nevertheless, such methods show limited success when encountering unseen Deepfakes created with methods different from those in the training sets. Therefore, we propose a novel Deepfake detection system, named FakeFilter, in which we formulate the challenge of unseen Deepfake detection as a problem of cross-distribution data classification and address the issue with a strategy of domain adaptation. By mapping different distributions of Deepfakes into similar features in a certain space, the detection system achieves comparable performance on both seen and unseen Deepfakes. Further evaluation and comparison results indicate that the challenge has been successfully addressed by FakeFilter.
47

Kumar, Naresh, and Ankit Kundu. "SecureVision: Advanced Cybersecurity Deepfake Detection with Big Data Analytics." Sensors 24, no. 19 (September 29, 2024): 6300. http://dx.doi.org/10.3390/s24196300.

Full text
Abstract:
SecureVision is an advanced and trustworthy deepfake detection system created to tackle the growing threat of ‘deepfake’ movies that tamper with media, undermine public trust, and jeopardize cybersecurity. We present a novel approach that combines big data analytics with state-of-the-art deep learning algorithms to detect altered information in both audio and visual domains. One of SecureVision’s primary innovations is the use of multi-modal analysis, which improves detection capabilities by concurrently analyzing many media forms and strengthening resistance against advanced deepfake techniques. The system’s efficacy is further enhanced by its capacity to manage large datasets and integrate self-supervised learning, which guarantees its flexibility in the ever-changing field of digital deception. In the end, this study helps to protect digital integrity by providing a proactive, scalable, and efficient defense against the ubiquitous threat of deepfakes, thereby establishing a new benchmark for privacy and security measures in the digital era.
48

Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (April 13, 2022): 3926. http://dx.doi.org/10.3390/app12083926.

Full text
Abstract:
Fake media, generated by methods such as deepfakes, have become indistinguishable from real media, but their detection has not improved at the same pace. Furthermore, the absence of interpretability on deepfake detection models makes their reliability questionable. In this paper, we present a human perception level of interpretability for deepfake audio detection. Based on their characteristics, we implement several explainable artificial intelligence (XAI) methods used for image classification on an audio-related task. In addition, by examining the human cognitive process of XAI on image classification, we suggest the use of a corresponding data format for providing interpretability. Using this novel concept, a fresh interpretation using attribution scores can be provided.
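One XAI family that fits the setting described above is occlusion-based attribution, sketched here as an assumption about the general technique rather than the authors' specific method: patches of the input (e.g., a mel spectrogram) are masked in turn, and the drop in the detector's score becomes each patch's attribution. `score_fn` is a placeholder for any trained detector.

```python
# Occlusion-based attribution sketch: mask each patch of a 2-D input
# and record how much the model's score drops, yielding a per-cell
# attribution map a human can inspect.

def occlusion_attribution(score_fn, spec, patch=1, fill=0.0):
    rows, cols = len(spec), len(spec[0])
    base = score_fn(spec)
    attrib = [[0.0] * cols for _ in range(rows)]
    for r0 in range(0, rows, patch):
        for c0 in range(0, cols, patch):
            occluded = [row[:] for row in spec]
            for r in range(r0, min(r0 + patch, rows)):
                for c in range(c0, min(c0 + patch, cols)):
                    occluded[r][c] = fill
            drop = base - score_fn(occluded)
            for r in range(r0, min(r0 + patch, rows)):
                for c in range(c0, min(c0 + patch, cols)):
                    attrib[r][c] = drop
    return attrib
```

Regions with large drops are the time-frequency cells the detector relies on, which is exactly the kind of human-inspectable evidence the paper argues deepfake audio detectors need.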
49

Al-Adwan, Aryaf, Hadeel Alazzam, Noor Al-Anbaki, and Eman Alduweib. "Detection of Deepfake Media Using a Hybrid CNN–RNN Model and Particle Swarm Optimization (PSO) Algorithm." Computers 13, no. 4 (April 15, 2024): 99. http://dx.doi.org/10.3390/computers13040099.

Full text
Abstract:
Deepfakes are digital audio, video, or images manipulated using machine learning algorithms. These manipulated media files can convincingly depict individuals doing or saying things they never actually did. Deepfakes pose significant risks to our lives, including national security, financial markets, and personal privacy. The ability to create convincing deepfakes can also harm individuals' reputations and can be used to spread disinformation and fake news. As such, there is a growing need for reliable and accurate methods to detect deepfakes and prevent their harmful effects. In this paper, a hybrid convolutional neural network (CNN) and recurrent neural network (RNN) with a particle swarm optimization (PSO) algorithm is utilized to demonstrate a deep learning strategy for detecting deepfake videos. High accuracy, sensitivity, specificity, and F1 score were attained by the proposed approach when tested on two publicly available datasets: Celeb-DF and the Deepfake Detection Challenge Dataset (DFDC). Specifically, the proposed method achieved an average accuracy of 97.26% on Celeb-DF and an average accuracy of 94.2% on DFDC. The results were compared to other state-of-the-art methods and showed that the proposed method outperformed many. The proposed method can effectively detect deepfake videos, which is essential for identifying and preventing the spread of manipulated content online.
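The PSO component can be sketched with a basic swarm loop. The objective below is a stand-in: in the paper's setting it would be validation loss as a function of the CNN-RNN hyperparameters, and the inertia/cognitive/social weights shown are common textbook defaults, not the authors' values.

```python
import random

# Basic particle swarm optimization (PSO) loop, the kind of search used
# for hyperparameter selection: particles move under the pull of their
# own best position and the swarm's best position.

def pso_minimize(objective, bounds, n_particles=10, iters=100, seed=0):
    """Minimize `objective` over the box `bounds` with vanilla PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a hyperparameter search each dimension of `bounds` would be one tunable quantity (learning rate, hidden size, dropout), and `objective` would train or score the hybrid CNN-RNN at that point.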
50

Yavuzkilic, Semih, Abdulkadir Sengur, Zahid Akhtar, and Kamran Siddique. "Spotting Deepfakes and Face Manipulations by Fusing Features from Multi-Stream CNNs Models." Symmetry 13, no. 8 (July 26, 2021): 1352. http://dx.doi.org/10.3390/sym13081352.

Full text
Abstract:
Deepfakes are one application of artificial neural networks that is deemed harmful. Deepfakes are a sort of image or video manipulation in which a person's image is changed or swapped with another person's face using artificial neural networks. Deepfake manipulations may be done with a variety of techniques and applications. The quintessential countermeasure against deepfakes or face manipulation is a deepfake detection method. Most existing detection methods perform well under symmetric data distributions, but are still not robust to asymmetric dataset variations and novel deepfake/manipulation types. In this paper, for the identification of fake faces in videos, a new multi-stream deep learning algorithm is developed, where three streams are merged at the feature level using a fusion layer. After the fusion layer, fully connected, Softmax, and classification layers are used to classify the data. The pre-trained VGG16 model is adopted for the transferred CNN1 stream. In transfer learning, the weights of the pre-trained CNN model are further used for training on the new classification problem. In the second stream (transferred CNN2), the pre-trained VGG19 model is used, whereas in the third stream, the pre-trained ResNet18 model is considered. In this paper, a new large-scale dataset (the World Politicians Deepfake Dataset (WPDD)) is introduced to improve deepfake detection systems. The dataset was created by downloading videos of 20 different politicians from YouTube. Over 320,000 frames were retrieved after dividing the downloaded videos into short segments and extracting the frames. Finally, various manipulations were performed on these frames, resulting in seven separate manipulation classes for men and women. In the experiments, three fake face detection scenarios are investigated. First, fake and real face discrimination is studied.
Second, seven face manipulations are performed, including age, beard, face swap, glasses, hair color, hairstyle, and smiling, along with genuine face discrimination. Third, the performance of the deepfake detection system under a novel type of face manipulation is analyzed. The proposed strategy outperforms prior existing methods; the calculated performance metrics are over 99%.
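The feature-level fusion layer described above amounts to concatenating the per-stream feature vectors before the fully connected classification layers. A minimal sketch, with stand-in lists in place of the VGG16/VGG19/ResNet18 feature extractors:

```python
# Feature-level fusion sketch: concatenate feature vectors from each
# CNN stream into one fused vector for the classification layers.

def fuse_feature_streams(*streams):
    """Concatenate per-stream feature vectors into one fused vector."""
    fused = []
    for stream in streams:
        fused.extend(stream)
    return fused
```

The fused vector's length is the sum of the stream widths, so the first fully connected layer must be sized accordingly.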
