Journal articles on the topic "FAKE VIDEOS"

Consult the 50 best journal articles on the topic "FAKE VIDEOS".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read its abstract online, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide variety of disciplines and compile an accurate bibliography.

1

Abidin, Muhammad Indra, Ingrid Nurtanio, and Andani Achmad. "Deepfake Detection in Videos Using Long Short-Term Memory and CNN ResNext". ILKOM Jurnal Ilmiah 14, no. 3 (19.12.2022): 178–85. http://dx.doi.org/10.33096/ilkom.v14i3.1254.178-185.

Full text source
Abstract:
A deep fake in video is a synthesis technique that replaces a person's face in a video with someone else's. Because deep-fake technology has been used to manipulate information, detecting deep fakes in videos is necessary. This paper aimed to detect deep fakes in videos using the ResNext Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) algorithms. The video data were divided into four types: videos with 10, 20, 40, and 60 frames. Face detection was used to crop each image to 100 x 100 pixels, and the pictures were then processed using ResNext CNN and LSTM. A confusion matrix was employed to measure the performance of the ResNext CNN-LSTM algorithm, with accuracy, precision, and recall as indicators. The classification results showed that the highest accuracy was 90% for data with 40 and 60 frames, while data with 10 frames had the lowest accuracy at only 52%. ResNext CNN-LSTM was able to detect deep fakes in videos well even though the images were small.
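
As a rough illustration of the pipeline this abstract describes (frame sequences, 100 x 100 face crops, a ResNext feature extractor feeding an LSTM), here is a minimal PyTorch sketch; the backbone variant, hidden size, and classifier head are assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

class ResNextLSTM(nn.Module):
    def __init__(self, hidden_dim=512, num_classes=2):
        super().__init__()
        backbone = models.resnext50_32x4d(weights="DEFAULT")
        # Drop the classification head; keep the 2048-d pooled features.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(2048, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):              # x: (batch, frames, 3, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)   # (b*t, 2048)
        seq, _ = self.lstm(feats.view(b, t, -1))       # (b, t, hidden)
        return self.fc(seq[:, -1])                     # classify from last step

logits = ResNextLSTM()(torch.randn(2, 40, 3, 100, 100))  # 40-frame clips
```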
APA, Harvard, Vancouver, ISO, and other styles
2

López-Gil, Juan-Miguel, Rosa Gil, and Roberto García. "Do Deepfakes Adequately Display Emotions? A Study on Deepfake Facial Emotion Expression". Computational Intelligence and Neuroscience 2022 (18.10.2022): 1–12. http://dx.doi.org/10.1155/2022/1332122.

Full text source
Abstract:
Recent technological advancements in Artificial Intelligence make it easy to create deepfakes and hyper-realistic videos, in which images and video clips are processed to create fake videos that appear authentic. Many of them are based on swapping faces without the consent of the person whose appearance and voice are used. As emotions are inherent in human communication, studying how deepfakes transfer emotional expressions from originals to fakes is relevant. In this work, we conduct an in-depth study of facial emotional expression in deepfakes using a well-known face-swap-based deepfake database. First, we extracted the frames from the videos. Then, we analyzed the emotional expression in the original and faked versions of the video recordings for all performers in the database. The results show that emotional expressions are not adequately transferred between original recordings and the deepfakes created from them. The high variability in emotions and performers detected between original and fake recordings indicates that performer emotion expressiveness should be considered for better deepfake generation or detection.
APA, Harvard, Vancouver, ISO, and other styles
3

Arunkumar, P. M., Yalamanchili Sangeetha, P. Vishnu Raja, and S. N. Sangeetha. "Deep Learning for Forgery Face Detection Using Fuzzy Fisher Capsule Dual Graph". Information Technology and Control 51, no. 3 (23.09.2022): 563–74. http://dx.doi.org/10.5755/j01.itc.51.3.31510.

Full text source
Abstract:
In digital manipulation, creating fake images or videos, or swapping a person's face with another's, by means of a deep learning algorithm is termed a deep fake. Fake pornography is especially harmful, alongside the use of fake content in hoaxes, fake news, and financial fraud. Deep learning is an effective tool for detecting deep fake images and videos. With the advancement of Generative Adversarial Networks (GANs) in deep learning, deep fakes have become pervasive on social media platforms and may threaten the public, so detection of deep fake images and videos is needed. Many research works have addressed the detection of forged images and videos, but those methods are inefficient at detecting newly created forgeries and are time-consuming. This paper therefore focuses on detecting different types of fake images and videos using a Fuzzy Fisher face with a Capsule dual graph (FFF-CDG). The datasets used in this work are FFHQ, 100K-Faces, DFFD, VGG-Face2, and WildDeepfake. On the FFHQ dataset, the existing systems and the proposed system obtained accuracies of 81.5%, 89.32%, 91.35%, and 95.82%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Shuting (Ada), Min-Seok Pang, and Paul Pavlou. "Seeing Is Believing? How Including a Video in Fake News Influences Users’ Reporting of Fake News to Social Media Platforms". MIS Quarterly 45, no. 3 (1.09.2022): 1323–54. http://dx.doi.org/10.25300/misq/2022/16296.

Full text source
Abstract:
Social media platforms, such as Facebook, Instagram, and Twitter, are combating the spread of fake news by developing systems that allow their users to report fake news. However, it remains unclear whether these reporting systems that harness the "wisdom of the crowd" are effective. Notably, concerns have been raised that the popularity of videos may hamper users' reporting of fake news. The persuasive power of videos may render fake news more deceptive and less likely to be reported in practice. However, this is neither theoretically nor empirically straightforward, as videos not only affect users' ability to detect fake news, but also impact their willingness to report and their engagement (i.e., likes, shares, and comments), which would further spread fake news. Using a unique dataset from a leading social media platform, we empirically examine how including a video in a fake news post affects the number of users reporting the post to the platform. Our results indicate that including a video significantly increases the number of users reporting the fake news post to the social media platform. Additionally, we find that the sentiment intensity of the fake news text content, especially when the sentiment is positive, attenuates the effect of including a video. Randomized experiments and a set of mediation analyses are included to uncover the underlying mechanisms. We contribute to the information systems literature by examining how social media platforms can leverage their users to report fake news, and how different formats (e.g., videos and text) of fake news interact to influence users' reporting behavior. Social media platforms that seek to leverage the "wisdom of the crowd" to combat the proliferation of fake news should consider both the popularity of videos and the role of text sentiment in fake news to adjust their strategies.
APA, Harvard, Vancouver, ISO, and other styles
5

Deng, Liwei, Hongfei Suo, and Dongjie Li. "Deepfake Video Detection Based on EfficientNet-V2 Network". Computational Intelligence and Neuroscience 2022 (15.04.2022): 1–13. http://dx.doi.org/10.1155/2022/3441549.

Full text source
Abstract:
As technology advances and society evolves, deep learning is becoming easier to operate. Many unscrupulous people use deep learning technology to create fake pictures and fake videos that seriously endanger the stability of the country and society. Examples include faking politicians to make inappropriate statements, using face-swapping technology to spread false information, and creating fake videos to obtain money. In view of this social problem, and starting from the original fake face detection system, this paper proposes using the new EfficientNet-V2 network to distinguish the authenticity of pictures and videos. Our method was applied to two current mainstream large-scale fake face datasets, and comparing existing detection networks against the actual training and testing results highlighted the superior performance of EfficientNet-V2. Finally, after improving the accuracy of the detection system in distinguishing real and fake faces, actual pictures and videos were detected, achieving an excellent visualization effect.
APA, Harvard, Vancouver, ISO, and other styles
6

Shahar, Hadas, and Hagit Hel-Or. "Fake Video Detection Using Facial Color". Color and Imaging Conference 2020, no. 28 (4.11.2020): 175–80. http://dx.doi.org/10.2352/issn.2169-2629.2020.28.27.

Full text source
Abstract:
The field of image forgery is widely studied, and with the recent introduction of deep-network-based image synthesis, detecting fake image sequences has become more challenging. Specifically, detecting spoofing attacks is of grave importance. In this study we exploit the minute changes in the facial color of human faces in videos to distinguish real from fake videos. Even at rest, human skin color changes with sub-dermal blood flow; these changes are enhanced under stress and emotion. We show that facial color extracted along a video sequence can serve as a feature for training deep neural networks to successfully distinguish fake from real face sequences.
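
A minimal sketch of the kind of facial-color feature this abstract describes: the per-frame mean color of the detected face region, yielding a temporal signal that a neural network could classify. The Haar-cascade detector and the BGR mean are illustrative choices, not the authors' method:

```python
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def facial_color_signal(video_path):
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.1, 5)
        if len(faces):
            x, y, w, h = faces[0]
            # Mean BGR color of the face region for this frame.
            signal.append(frame[y:y+h, x:x+w].mean(axis=(0, 1)))
    cap.release()
    return np.array(signal)  # shape: (num_frames, 3)
```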
APA, Harvard, Vancouver, ISO, and other styles
7

Lin, Yih-Kai, and Hao-Lun Sun. "Few-Shot Training GAN for Face Forgery Classification and Segmentation Based on the Fine-Tune Approach". Electronics 12, no. 6 (16.03.2023): 1417. http://dx.doi.org/10.3390/electronics12061417.

Full text source
Abstract:
Many techniques for faking videos can alter the face in a video to look like another person, and this type of fake video has caused a number of information security crises. Many deep learning-based detection methods have been developed for these forgery methods, but they require a large amount of training data and thus cannot yield detectors quickly when new forgery methods emerge. In addition, traditional forgery detection is a classifier that outputs a real or fake label for an input image. If the detector can instead predict the faked area, i.e., a segmentation version of forgery detection, it is a great help for forensic work. Thus, in this paper, we propose a GAN-based deep learning approach that detects forged regions using a smaller number of training samples. The generator part of the proposed architecture synthesizes a predicted segmentation map that indicates the fakeness of each pixel. To solve the classification problem, a threshold on the percentage of fake pixels decides whether the input image is fake. To detect fake videos, frames are extracted and each is checked for forgery; if the percentage of fake frames exceeds a given threshold, the video is classified as fake. The experimental results show that our method achieves better classification and segmentation than those reported in other papers.
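
The two-level thresholding rule described here is simple to state in code. A sketch, with illustrative threshold values (the paper's actual thresholds are not given in the abstract):

```python
import numpy as np

def image_is_fake(fakeness_map, pixel_thresh=0.5, area_thresh=0.05):
    # fakeness_map: per-pixel fakeness scores from the generator, in [0, 1].
    fake_pixels = (fakeness_map > pixel_thresh).mean()
    return fake_pixels > area_thresh          # fake if enough pixels flagged

def video_is_fake(frame_maps, frame_thresh=0.3):
    flags = [image_is_fake(m) for m in frame_maps]
    return np.mean(flags) > frame_thresh      # fake if enough frames flagged
```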
APA, Harvard, Vancouver, ISO, and other styles
8

Liang, Xiaoyun, Zhaohong Li, Zhonghao Li, and Zhenzhen Zhang. "Fake Bitrate Detection of HEVC Videos Based on Prediction Process". Symmetry 11, no. 7 (15.07.2019): 918. http://dx.doi.org/10.3390/sym11070918.

Full text source
Abstract:
In order to fraudulently boost click-through rates, some merchants recompress low-bitrate videos into high-bitrate ones without improving video quality. This behavior deceives viewers and wastes network resources, so a stable algorithm that detects fake bitrate videos is urgently needed. High-Efficiency Video Coding (HEVC) is a popular video coding standard worldwide. Hence, in this paper, a robust algorithm is proposed to detect HEVC fake bitrate videos. First, five effective feature sets are extracted from the prediction process of HEVC, including Coding Unit I-picture/P-picture partitioning modes, Prediction Unit I-picture/P-picture partitioning modes, and Intra Prediction Modes of I-pictures. Second, feature concatenation is adopted to enhance the expressiveness and improve the effectiveness of the features. Finally, the five single feature sets and three concatenated feature sets are separately sent to a support vector machine for modeling and testing. The performance of the proposed algorithm is compared with state-of-the-art algorithms on HEVC videos of various resolutions and fake bitrates. The results show that the proposed algorithm not only detects HEVC fake bitrate videos better, but also has strong robustness against frame deletion, copy-paste, and shifted Group of Pictures structure attacks.
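
A sketch of the final stage described above: concatenating the prediction-process feature sets and training a support vector machine. Feature extraction from the HEVC bitstream is assumed to happen elsewhere; the .npy file names are hypothetical placeholders:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

cu_feats = np.load("cu_partition_feats.npy")    # hypothetical feature files
pu_feats = np.load("pu_partition_feats.npy")
ipm_feats = np.load("intra_mode_feats.npy")
labels = np.load("labels.npy")                  # 1 = fake bitrate, 0 = genuine

X = np.concatenate([cu_feats, pu_feats, ipm_feats], axis=1)  # feature fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```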
APA, Harvard, Vancouver, ISO, and other styles
9

Pei, Pengfei, Xianfeng Zhao, Jinchuan Li, Yun Cao, and Xuyuan Lai. "Vision Transformer-Based Video Hashing Retrieval for Tracing the Source of Fake Videos". Security and Communication Networks 2023 (28.06.2023): 1–16. http://dx.doi.org/10.1155/2023/5349392.

Full text source
Abstract:
With the increasing negative impact of fake videos on individuals and society, it is crucial to detect different types of forgeries. Existing forgery detection methods often output a probability value, which lacks interpretability and reliability. In this paper, we propose a source-tracing-based solution to find the original real video of a fake video, which can provide more reliable results in practical situations. However, directly applying retrieval methods to traceability tasks is infeasible since traceability tasks require finding the unique source video from a large number of real videos, while retrieval methods are typically used to find similar videos. In addition, training an effective hashing center to distinguish similar real videos is challenging. To address the above issues, we introduce a novel loss function, hash triplet loss, to capture fine-grained features with subtle differences. Extensive experiments show that our method outperforms state-of-the-art methods on multiple datasets of object removal (video inpainting), object addition (video splicing), and object swapping (face swapping), demonstrating excellent robustness and cross-dataset performance. The effectiveness of the hash triplet loss for nondifferentiable optimization problems is validated through experiments in similar video scenes.
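
The abstract does not give the loss formula, but a plausible reading of a "hash triplet loss" is a standard triplet margin loss applied to relaxed hash codes, pulling a fake video's code toward its true source and away from similar but unrelated real videos. A conjectural PyTorch sketch, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def hash_triplet_loss(anchor, positive, negative, margin=0.5):
    # anchor/positive: codes of a fake video and its true source video;
    # negative: code of a similar but unrelated real video.
    # tanh relaxes binary hash codes so the loss stays differentiable.
    d_pos = F.pairwise_distance(torch.tanh(anchor), torch.tanh(positive))
    d_neg = F.pairwise_distance(torch.tanh(anchor), torch.tanh(negative))
    return F.relu(d_pos - d_neg + margin).mean()
```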
APA, Harvard, Vancouver, ISO, and other styles
10

Das, Rashmiranjan, Gaurav Negi, and Alan F. Smeaton. "Detecting Deepfake Videos Using Euler Video Magnification". Electronic Imaging 2021, no. 4 (18.01.2021): 272–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-272.

Full text source
Abstract:
Recent advances in artificial intelligence make it progressively harder to distinguish between genuine and counterfeit media, especially images and videos. One recent development is the rise of deepfake videos, which manipulate videos using advanced machine learning techniques, replacing the face of an individual in a source video with the face of a second person in the destination video. The technique is becoming steadily more refined, as deepfakes get more seamless and simpler to compute. Combined with the reach and speed of social media, deepfakes could easily fool individuals when depicting someone saying things that never happened, and thus could persuade people into believing fictional scenarios, creating distress, and spreading fake news. In this paper, we examine a technique for possible identification of deepfake videos. We use Euler video magnification, which applies spatial decomposition and temporal filtering to video data to highlight and magnify hidden features like skin pulsation and subtle motions. Our approach uses features extracted with the Euler technique to train three models to classify counterfeit and unaltered videos, and compares the results with existing techniques.
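
The core of Euler video magnification is a temporal band-pass filter applied per pixel, whose output is amplified and added back to reveal subtle signals such as skin pulsation. A minimal grayscale sketch; the full method also uses spatial pyramid decomposition, omitted here, and the band and gain values are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fps, low=0.8, high=3.0, alpha=20.0):
    # frames: (T, H, W) grayscale stack, float32 in [0, 1];
    # low/high roughly bracket the human heart-rate band in Hz.
    b, a = butter(2, [low / (fps / 2), high / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, frames, axis=0)   # per-pixel temporal band-pass
    return frames + alpha * filtered            # amplified hidden motion/pulse
```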
APA, Harvard, Vancouver, ISO, and other styles
11

Maras, Marie-Helen, and Alex Alexandrou. "Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos". International Journal of Evidence & Proof 23, no. 3 (28.10.2018): 255–62. http://dx.doi.org/10.1177/1365712718807226.

Full text source
Abstract:
Deepfake videos are the product of artificial intelligence or machine-learning applications that merge, combine, replace and superimpose images and video clips onto a video, creating a fake video that appears authentic. The main issue with Deepfake videos is that anyone can produce explicit content without the consent of those involved. While some of these videos are humorous and benign, the majority of them are pornographic. The faces of celebrities and other well-known (and lesser-known) individuals have been superimposed on the bodies of porn stars. The existence of this technology erodes trust in video evidence and adversely affects its probative value in court. This article describes the current and future capabilities of this technology, stresses the need to plan for its treatment as evidence in court, and draws attention to its current and future impact on the authentication process of video evidence in courts. Ultimately, as the technology improves, parallel technologies will need to be developed and utilised to identify and expose fake videos.
APA, Harvard, Vancouver, ISO, and other styles
12

Jin, Xinlei, Dengpan Ye, and Chuanxi Chen. "Countering Spoof: Towards Detecting Deepfake with Multidimensional Biological Signals". Security and Communication Networks 2021 (22.04.2021): 1–8. http://dx.doi.org/10.1155/2021/6626974.

Full text source
Abstract:
Deepfake technology is easily abused because of its low technical threshold, which may bring huge social security risks. As GAN-based synthesis technology becomes stronger, many methods struggle to classify fake content effectively. However, although fake content generated by GANs can deceive the human eye, it ignores the biological signals hidden in face videos. In this paper, we propose a novel video forensics method based on multidimensional biological signals, which extracts the differences in biological signals between real and fake videos along three dimensions. The experimental results show that our method achieves 98% accuracy on the main public dataset. Compared with other techniques, the proposed method only extracts information from the video itself and is not tied to a specific generation method, so it is not affected by the synthesis method and has good adaptability.
APA, Harvard, Vancouver, ISO, and other styles
13

SanilM, Rithvika, S. Saathvik, Rithesh RaiK, and Srinivas P M. "DEEPFAKE DETECTION USING EYE-BLINKING PATTERN". International Journal of Engineering Applied Sciences and Technology 7, no. 3 (1.07.2022): 229–34. http://dx.doi.org/10.33564/ijeast.2022.v07i03.036.

Full text source
Abstract:
Deep learning algorithms have become so potent due to increased computing power that it is now relatively easy to produce human-like synthetic videos, commonly known as "deep fakes." It is simple to imagine scenarios in which these realistic face-swapped deep fakes are used to extort individuals, foment political unrest, and stage fake terrorist attacks. This paper provides a novel deep learning strategy for efficiently separating fraudulent AI-generated videos from real ones. Our technology can automatically spot replacement and re-enactment deep fakes. To combat artificial intelligence, we are attempting to deploy artificial intelligence. Frame-level features are extracted by a ResNext convolutional neural network, and these features are then used to train an LSTM-based recurrent neural network to determine whether a submitted video has been altered in any way, i.e., whether it is a deep fake or an authentic video. We test our technique on a sizable, balanced, and mixed data set created by combining the various accessible data sets, such as FaceForensics++ [1], the Deepfake Detection Challenge [2], and Celeb-DF [3], in order to simulate real-time events and improve the model's performance on real-time data.
APA, Harvard, Vancouver, ISO, and other styles
14

Awotunde, Joseph Bamidele, Rasheed Gbenga Jimoh, Agbotiname Lucky Imoize, Akeem Tayo Abdulrazaq, Chun-Ta Li, and Cheng-Chi Lee. "An Enhanced Deep Learning-Based DeepFake Video Detection and Classification System". Electronics 12, no. 1 (26.12.2022): 87. http://dx.doi.org/10.3390/electronics12010087.

Full text source
Abstract:
The privacy of individuals and entire countries is currently threatened by the widespread use of face-swapping DeepFake models, which produce a sizable number of fake videos that seem extraordinarily genuine. Because DeepFake production tools have advanced so much, and since so many researchers and businesses are interested in testing their limits, fake media is spreading like wildfire over the internet. Therefore, this study proposes a five-layered convolutional neural network (CNN) for a DeepFake detection and classification model. Once the model has extracted the face region from video frames, a CNN enhanced with ReLU is used to extract features from these faces. To guarantee model accuracy while maintaining a suitable weight, the ReLU-enabled CNN model was used for DeepFake-influenced video detection. The performance of the proposed model was evaluated using the Face2Face and first-order-motion DeepFake datasets. Experimental results revealed that the proposed model has an average prediction rate of 98% for DeepFake videos and 95% for Face2Face videos under actual network diffusion circumstances. When compared with systems such as Meso4, MesoInception4, Xception, EfficientNet-B0, and VGG16, which utilize convolutional neural networks, the suggested model produced the best results, with an accuracy rate of 86%.
APA, Harvard, Vancouver, ISO, and other styles
15

Qi, Peng, Yuyan Bu, Juan Cao, Wei Ji, Ruihao Shui, Junbin Xiao, Danding Wang, and Tat-Seng Chua. "FakeSV: A Multimodal Benchmark with Rich Social Context for Fake News Detection on Short Video Platforms". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (26.06.2023): 14444–52. http://dx.doi.org/10.1609/aaai.v37i12.26689.

Full text source
Abstract:
Short video platforms have become an important channel for news sharing, but also a new breeding ground for fake news. To mitigate this problem, research on fake news video detection has recently received a lot of attention. Existing works face two roadblocks: the scarcity of comprehensive, large-scale datasets and insufficient utilization of multimodal information. Therefore, in this paper, we construct the largest Chinese short video dataset about fake news, named FakeSV, which includes news content, user comments, and publisher profiles simultaneously. To understand the characteristics of fake news videos, we conduct an exploratory analysis of FakeSV from different perspectives. Moreover, we provide a new multimodal detection model named SV-FEND, which exploits cross-modal correlations to select the most informative features and utilizes social context information for detection. Extensive experiments demonstrate the superiority of the proposed method and provide detailed comparisons of different methods and modalities for future works. Our dataset and codes are available at https://github.com/ICTMCG/FakeSV.
APA, Harvard, Vancouver, ISO, and other styles
16

Lai, Zhimao, Yufei Wang, Renhai Feng, Xianglei Hu, and Haifeng Xu. "Multi-Feature Fusion Based Deepfake Face Forgery Video Detection". Systems 10, no. 2 (7.03.2022): 31. http://dx.doi.org/10.3390/systems10020031.

Full text source
Abstract:
With the rapid development of deep learning, generating realistic fake face videos is becoming easier, and deep forgery is commonly used for fake news, online pornography, extortion, and other illegal activities. To mitigate the harm of deep-forgery face videos, researchers have proposed many detection methods based on the tampering traces introduced by deep forgery. However, these methods generally have poor cross-database detection performance. Therefore, this paper proposes a multi-feature fusion detection method to improve the generalization ability of the detector. This method combines feature information from face videos in the spatial domain, the frequency domain, the Pattern of Local Gravitational Force (PLGF), and the time domain, and it effectively reduces the average error rate of cross-database detection while maintaining good detection performance within the database.
APA, Harvard, Vancouver, ISO, and other styles
17

Doke, Yash. "Deep fake Detection Through Deep Learning". International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (31.05.2023): 861–66. http://dx.doi.org/10.22214/ijraset.2023.51630.

Full text source
Abstract:
Deep fakes are a rapidly growing concern in society, and detecting such manipulated media has become a significant challenge. Deep fake detection involves identifying whether a media file is authentic or generated using deep learning algorithms. In this project, we propose a deep learning-based approach for detecting deep fakes in videos. We use the Deepfake Detection Challenge dataset, which consists of real and deep fake videos, to train and evaluate our deep learning model. We employ a Convolutional Neural Network (CNN) architecture for our implementation, which has shown great potential in previous studies. We pre-process the dataset using several techniques, such as resizing, normalization, and data augmentation, to enhance the quality of the input data. Our proposed model achieves a high detection accuracy of 97.5% on the Deepfake Detection Challenge dataset, demonstrating the effectiveness of the proposed approach. Our approach has the potential to be used in real-world scenarios, helping to mitigate the risks that deep fakes pose to individuals and society. The proposed methodology can also be extended to detect deep fakes in other types of media, such as images and audio, providing a comprehensive solution for deep fake detection.
APA, Harvard, Vancouver, ISO, and other styles
18

Noreen, Iram, Muhammad Shahid Muneer, and Saira Gillani. "Deepfake attack prevention using steganography GANs". PeerJ Computer Science 8 (20.10.2022): e1125. http://dx.doi.org/10.7717/peerj-cs.1125.

Full text source
Abstract:
Background: Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques like auto-encoders and generative adversarial networks (GANs) is approaching a level that makes deepfake detection practically impossible. A deepfake is created by swapping videos, images, or audio with the target, consequently raising digital media threats over the internet. Much work has been done to detect deepfake videos through feature detection using convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spatiotemporal CNNs. However, these techniques will not remain effective due to continuous improvements in GANs; StyleGANs can create fake videos with high accuracy that cannot be easily detected. Hence, deepfake prevention, rather than mere detection, is the need of the hour. Methods: Recently, blockchain-based ownership methods, image tags, and watermarks in video frames have been used to prevent deepfakes. However, this process is not fully functional: an image frame could be faked by copying watermarks and reusing them to create a deepfake. In this research, an enhanced, modified version of the steganography technique RivaGAN is used to address the issue. The proposed approach encodes watermarks into features of the video frames by training an "attention model" with the ReLU activation function to achieve a fast learning rate. Results: The proposed attention-generating approach has been validated with multiple activation functions and learning rates. It achieved 99.7% accuracy in embedding watermarks into the frames of the video. After generating the attention model, the generative adversarial network was trained using DeepFaceLab 2.0 and tested for prevention of deepfake attacks using watermark-embedded videos comprising 8,074 frames from different benchmark datasets. The proposed approach achieved a 100% success rate in preventing deepfake attacks. Our code is available at https://github.com/shahidmuneer/deepfakes-watermarking-technique.
APA, Harvard, Vancouver, ISO, and other styles
19

García-Retuerta, David, Álvaro Bartolomé, Pablo Chamoso, and Juan Manuel Corchado. "Counter-Terrorism Video Analysis Using Hash-Based Algorithms". Algorithms 12, no. 5 (24.05.2019): 110. http://dx.doi.org/10.3390/a12050110.

Full text source
Abstract:
The Internet is becoming a major source of radicalization. The propaganda efforts of new extremist groups include creating new propaganda videos from fragments of old terrorist attack videos. This article presents a web-scraping method for retrieving relevant videos and a pHash-based algorithm which identifies the original content of a video. Automatic novelty verification is now possible, which can potentially reduce and improve journalist research work, as well as reduce the spreading of fake news. The obtained results have been satisfactory as all original sources of new videos have been identified correctly.
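
A small sketch of pHash-based source identification as described: hash frames of known footage, then match frames of a new video by Hamming distance. It uses the imagehash library; the archive contents, file paths, and distance threshold are illustrative assumptions:

```python
from PIL import Image
import imagehash

# Archive of perceptual hashes for frames of known (original) footage.
archive = {p: imagehash.phash(Image.open(p)) for p in ["old_clip_frame.png"]}

def find_source(frame_path, max_distance=10):
    h = imagehash.phash(Image.open(frame_path))
    # Subtraction of two ImageHash objects gives the Hamming distance
    # between the 64-bit perceptual hashes.
    matches = {p: h - ah for p, ah in archive.items() if h - ah <= max_distance}
    return min(matches, key=matches.get) if matches else None
```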
APA, Harvard, Vancouver, ISO, and other styles
20

Megawan, Sunario, Wulan Sri Lestari, and Apriyanto Halim. "Deteksi Non-Spoofing Wajah pada Video secara Real Time Menggunakan Faster R-CNN". Journal of Information System Research (JOSH) 3, no. 3 (29.04.2022): 291–99. http://dx.doi.org/10.47065/josh.v3i3.1519.

Full text source
Abstract:
Face non-spoofing detection is an important task used to ensure authentication security by analyzing captured faces. Face spoofing is the presentation of fake faces by other people to gain illegal access to a biometric system, which can be done by displaying videos or images of someone's face on a monitor screen or by using printed images. Various forms of attack can be carried out on a face authentication system, in the form of face sketches, face photos, face videos, and 3D face masks. Such attacks are possible because photos and videos of the faces of users of a facial authentication system are very easy to obtain via the internet or cameras. To solve this problem, this research proposes a non-spoofing face detection model for video using Faster R-CNN. The result of this study is a Faster R-CNN model that can detect non-spoofed and spoofed faces in real time using the Raspberry Pi as a camera, at a frame rate of 1 fps.
APA, Harvard, Vancouver, ISO, and other styles
21

Ferreira, Sara, Mário Antunes, and Manuel E. Correia. "Exposing Manipulated Photos and Videos in Digital Forensics Analysis". Journal of Imaging 7, no. 7 (24.06.2021): 102. http://dx.doi.org/10.3390/jimaging7070102.

Full text source
Abstract:
Tampered multimedia content is being increasingly used in a broad range of cybercrime activities. The spread of fake news, misinformation, digital kidnapping, and ransomware-related crimes are amongst the most recurrent crimes in which manipulated digital photos and videos are the perpetrating and disseminating medium. Criminal investigation has been challenged in applying machine learning techniques to automatically distinguish between fake and genuine seized photos and videos. Despite the pertinent need for manual validation, easy-to-use platforms for digital forensics are essential to automate and facilitate the detection of tampered content and to help criminal investigators with their work. This paper presents a machine learning Support Vector Machines (SVM) based method to distinguish between genuine and fake multimedia files, namely digital photos and videos, which may indicate the presence of deepfake content. The method was implemented in Python and integrated as new modules in the widely used digital forensics application Autopsy. The implemented approach extracts a set of simple features resulting from the application of a Discrete Fourier Transform (DFT) to digital photos and video frames. The model was evaluated with a large dataset of classified multimedia files containing both legitimate and fake photos and frames extracted from videos. Regarding deepfake detection in videos, the Celeb-DFv1 dataset was used, featuring 590 original videos collected from YouTube, and covering different subjects. The results obtained with the 5-fold cross-validation outperformed those SVM-based methods documented in the literature, by achieving an average F1-score of 99.53%, 79.55%, and 89.10%, respectively for photos, videos, and a mixture of both types of content. A benchmark with state-of-the-art methods was also done, by comparing the proposed SVM method with deep learning approaches, namely Convolutional Neural Networks (CNN). Despite CNN having outperformed the proposed DFT-SVM compound method, the competitiveness of the results attained by DFT-SVM and the substantially reduced processing time make it appropriate to be implemented and embedded into Autopsy modules, by predicting the level of fakeness calculated for each analyzed multimedia file.
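
The abstract describes "simple features resulting from the application of a Discrete Fourier Transform". One common realization of such a feature, not necessarily the authors' exact one, is the azimuthally averaged log-spectrum of each image, which exposes the periodic artifacts of synthesized content and feeds naturally into an SVM; the bin count is an assumption:

```python
import numpy as np

def dft_feature(gray_image, n_bins=50):
    # 2-D spectrum, shifted so the zero frequency sits at the center.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices(spec.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # Average spectral magnitude at each radius (azimuthal average).
    radial = np.bincount(r.ravel(), spec.ravel()) / np.bincount(r.ravel())
    return np.log(radial[:n_bins] + 1e-8)   # feature vector for the SVM
```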
APA, Harvard, Vancouver, ISO, and other styles
22

Wu, Nan, Xin Jin, Qian Jiang, Puming Wang, Ya Zhang, Shaowen Yao, and Wei Zhou. "Multisemantic Path Neural Network for Deepfake Detection". Security and Communication Networks 2022 (11.10.2022): 1–14. http://dx.doi.org/10.1155/2022/4976848.

Full text source
Abstract:
With the continuous development of deep learning techniques, it is now easy for anyone to swap faces in videos. Researchers find that the abuse of these techniques threatens cyberspace security; thus, face forgery detection is a popular research topic. However, current detection methods do not fully use the semantic features of deepfake videos. Most previous work has divided the semantic features, whose importance may be unequal, only by experimental experience. To solve this problem, we propose a new framework, the multisemantic pathway network (MSPNN), for fake face detection. This method comprehensively captures forged information from the dimensions of microscopic, mesoscopic, and macroscopic features, and these three kinds of semantic information are given learnable weights. Because the artifacts of deepfake images are more difficult to observe in a compressed video, preprocessing is proposed for detecting low-quality deepfake videos, including multiscale detail enhancement and channel information screening based on the compression principle. Center loss and cross-entropy loss are combined to further reduce intraclass spacing. Experimental results show that MSPNN is superior to the contrast methods, especially for low-quality deepfake video detection.
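
A sketch of the combined objective mentioned at the end of the abstract: cross-entropy plus a center loss to shrink intraclass spacing. The feature dimension, weighting factor, and center initialization are assumptions, not the paper's values:

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Penalize the distance between each feature and its class center."""
    def __init__(self, num_classes=2, feat_dim=256):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):           # feats: (n, feat_dim)
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

cross_entropy = nn.CrossEntropyLoss()
center_loss = CenterLoss()

def total_loss(logits, feats, labels, lam=0.01):
    # lam balances the two terms; 0.01 is a typical but assumed value.
    return cross_entropy(logits, labels) + lam * center_loss(feats, labels)
```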
APA, Harvard, Vancouver, ISO, and other styles
23

Thaseen Ikram, Sumaiya, Priya V, Shourya Chambial, Dhruv Sood, and Arulkumar V. "A Performance Enhancement of Deepfake Video Detection through the use of a Hybrid CNN Deep Learning Model". International Journal of Electrical and Computer Engineering Systems 14, no. 2 (27.02.2023): 169–78. http://dx.doi.org/10.32985/ijeces.14.2.6.

Full text source
Abstract:
In the current era, many fake videos and images are created with the help of various software and new AI (Artificial Intelligence) technologies, which leave a few hints of manipulation. There are many unethical ways videos can be used to threaten, fight, or create panic among people. It is important to ensure that such methods are not used to create fake videos. An AI-based technique for the synthesis of human images is called Deep Fake. They are created by combining and superimposing existing videos onto the source videos. In this paper, a system is developed that uses a hybrid Convolutional Neural Network (CNN) consisting of InceptionResnet v2 and Xception to extract frame-level features. Experimental analysis is performed using the DFDC deep fake detection challenge on Kaggle. These deep learning-based methods are optimized to increase accuracy and decrease training time by using this dataset for training and testing. We achieved a precision of 0.985, a recall of 0.96, an f1-score of 0.98, and support of 0.968.
APA, Harvard, Vancouver, ISO, and other styles
24

Ashish Ransom, Shashank Shekhar. "Ethical & Legal Implications of Deep Fake Technology: A Global Overview". Proceeding International Conference on Science and Engineering 11, no. 1 (18.02.2023): 2226–35. http://dx.doi.org/10.52783/cienceng.v11i1.398.

Full text source
Abstract:
It is said that a camera cannot lie. However, in this digital era, it has become abundantly clear that it does not necessarily depict the truth. Increasingly sophisticated machine learning and artificial intelligence, together with inexpensive, easy-to-use, and easily accessible video editing software, are allowing more and more people to indulge in generating so-called deep fake videos, photos, and audio. These clips, which feature fabricated, altered, and fake footage of people and things, are a growing concern in human society. Although political deep fakes are a new concern, pornographic deep fakes have been a problem for some time. These often purport to show a famous actress or model, or any other woman, involved in a sex act, but actually show the subject's face superimposed onto the body of another woman who is actually involved in that act. This feature is called face-swapping and is known as the simplest method of creating a deep fake. There are numerous software applications that can be used for face-swapping, and the technology is very advanced and accessible. Deep fakes raise questions of personal reputation and control over one's image on the one hand and freedom of expression on the other, with a significant impact on users' privacy and security. Increasingly, governments around the world are reacting to these privacy-evading applications, for example, India banning TikTok and the USA investigating TikTok's privacy issues, and are in the process of enacting laws to reduce the impact of deep fakes on society. The study in this paper covers the ethical and legal implications surrounding deep fake technology, including a study of several international legislations and an analysis of India's position in tackling the crime of deepfakes.
APA, Harvard, Vancouver, ISO, and other styles
25

Saealal, Muhammad Salihin, Mohd Zamri Ibrahim, David J. Mulvaney, Mohd Ibrahim Shapiai, and Norasyikin Fadilah. "Using cascade CNN-LSTM-FCNs to identify AI-altered video based on eye state sequence". PLOS ONE 17, no. 12 (15.12.2022): e0278989. http://dx.doi.org/10.1371/journal.pone.0278989.

Full text source
Abstract:
Deep learning is notably successful in data analysis, computer vision, and human control. Nevertheless, this approach has inevitably allowed the development of DeepFake video sequences and images altered so that the changes are not easily or explicitly detectable. Such alterations have recently been used to spread false news and disinformation. This study aims to identify DeepFaked videos and images and alert viewers to the possible falsity of the information. The current work presents a novel means of revealing fake face videos by cascading a convolutional network with recurrent neural network and fully connected network (FCN) models. The detection approach utilizes the eye-blinking state in temporal video frames. Notwithstanding, it is deemed challenging to precisely depict (i) artificiality in fake videos and (ii) spatial information within the individual frame through this physiological signal. Spatial features were extracted using the VGG16 network trained on the ImageNet dataset, and temporal features were then extracted over every 20-frame sequence through an LSTM network. In addition, the pre-processed eye-blinking state served as a probability to generate a novel BPD dataset. This newly acquired dataset was fed to three models for training, with four, three, and six hidden layers, respectively; every model constitutes a unique architecture with a specific dropout value. As a result, the models optimally and accurately identified tampered videos within the dataset. The study model was assessed using the new BPD dataset, derived from one of the most complex datasets (FaceForensics++), with 90.8% accuracy, and this precision was successfully maintained on datasets that were not used in the training process. The training process was also accelerated by lowering the computation prerequisites.
APA, Harvard, Vancouver, ISO, and other styles
26

Hubálovský, Štěpán, Pavel Trojovský, Nebojsa Bacanin, and Venkatachalam K. "Evaluation of deepfake detection using YOLO with local binary pattern histogram". PeerJ Computer Science 8 (13.09.2022): e1086. http://dx.doi.org/10.7717/peerj-cs.1086.

Full text source
Abstract:
Recently, deepfake technology has become a popular technique for swapping faces in images or videos, creating forged data that misleads society. Detecting the originality of a video is a critical process due to the negative pattern of the image. Various image processing techniques have been implemented for the detection of forged images or videos, but existing methods are ineffective at detecting new threats or false images. This article proposes You Only Look Once–Local Binary Pattern Histogram (YOLO-LBPH) to detect fake videos. YOLO is used to detect the face in an image or a frame of a video. Spatial features are extracted from the face image using an EfficientNet-B5 method, and these spatial features are fed as input to the Local Binary Pattern Histogram to extract temporal features. The proposed YOLO-LBPH is implemented using the large-scale deepfake forensics (DF) dataset known as CelebDF-FaceForensics++ (c23), which is a combination of FaceForensics++ (c23) and Celeb-DF. As a result, the precision score is 86.88% on the CelebDF-FaceForensics++ (c23) dataset, 88.9% on the DFFD dataset, and 91.35% on the CASIA-WebFace dataset. Similarly, recall is 92.45% on the CelebDF-FaceForensics++ (c23) dataset, 93.76% on the DFFD dataset, and 94.35% on the CASIA-WebFace dataset.
APA, Harvard, Vancouver, ISO, and other styles
27

Tambe, Swapnali, Anil Pawar, and S. K. Yadav. "Deep fake videos identification using ANN and LSTM". Journal of Discrete Mathematical Sciences and Cryptography 24, no. 8 (17.11.2021): 2353–64. http://dx.doi.org/10.1080/09720529.2021.2014140.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
28

Muqsith, Munadhil Abdul, and Rizky Ridho Pratomo. "The Development of Fake News in the Post-Truth Age". SALAM: Jurnal Sosial dan Budaya Syar-i 8, no. 5 (22.09.2021): 1391–406. http://dx.doi.org/10.15408/sjsbs.v8i5.22395.

Full text source
Abstract:
This study aims to provide a complete picture of the development of fake news in the post-truth era. The authors use a qualitative approach with a literature study method. Today's society uses social media for various needs, with considerable advantages. However, social media is also a mouthpiece for spreading fake news, and fake news today is a threat to society. In the post-truth era, people believe in the truth of news based on conviction rather than objective facts, and society becomes polarized by a relative definition of truth. This phenomenon is reinforced by algorithms that trap people in echo chambers of their own making; it is this echo that blurs objective facts and produces truth bias. Moreover, developments in technology make it possible to create videos that resemble originals, known as deep fakes. Deep fakes become dangerous because it is increasingly difficult for people to distinguish whether a video is real or fake, and many applications are able to create deepfake videos. This study concludes that fake news in this era is becoming increasingly dangerous, but the public also plays a role in accelerating its development. Literacy skills, such as digital literacy, are very important to prevent the harmful effects of fake news. Keywords: Fake News; Digital Literacy; Deepfake; Post-Truth.
APA, Harvard, Vancouver, ISO, and other styles
29

Ismail, Aya, Marwa Elpeltagy, Mervat S. Zaki, and Kamal Eldahshan. "A New Deep Learning-Based Methodology for Video Deepfake Detection Using XGBoost". Sensors 21, no. 16 (10.08.2021): 5413. http://dx.doi.org/10.3390/s21165413.

Full text source
Abstract:
Currently, face-swapping deepfake techniques are widespread, generating a significant number of highly realistic fake videos that threaten the privacy of people and countries. Due to their devastating impact on the world, distinguishing between real and deepfake videos has become a fundamental issue. This paper presents a new deepfake detection method: you only look once–convolutional neural network–extreme gradient boosting (YOLO-CNN-XGBoost). The YOLO face detector is employed to extract the face area from video frames, while the InceptionResNetV2 CNN is utilized to extract features from these faces. These features are fed into XGBoost, which works as a recognizer on top of the CNN network. The proposed method achieves 90.62% area under the receiver operating characteristic curve (AUC), 90.73% accuracy, 93.53% specificity, 85.39% sensitivity, 85.39% recall, 87.36% precision, and 86.36% F1-measure on the merged CelebDF-FaceForensics++ (c23) dataset. The experimental study confirms the superiority of the presented method over state-of-the-art methods.
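
A sketch of the two-stage design described above: an InceptionResNetV2 backbone embeds the face crops extracted by the face detector, and XGBoost classifies the embeddings. The preprocessing, input size, and hyperparameters are assumptions, and the .npy files are hypothetical placeholders:

```python
import numpy as np
import xgboost as xgb
from tensorflow.keras.applications import InceptionResNetV2

embedder = InceptionResNetV2(include_top=False, pooling="avg",
                             input_shape=(299, 299, 3))

def embed(face_batch):               # face_batch: (n, 299, 299, 3) in [0, 255]
    # Scale to [-1, 1], the range InceptionResNetV2 expects, then pool
    # to a 1536-d embedding per face.
    return embedder.predict(face_batch / 127.5 - 1.0)

X_train = embed(np.load("faces.npy"))        # hypothetical face crops
y_train = np.load("labels.npy")              # 1 = deepfake, 0 = real
clf = xgb.XGBClassifier(n_estimators=300, max_depth=6)
clf.fit(X_train, y_train)                    # XGBoost on CNN embeddings
```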
APA, Harvard, Vancouver, ISO, and other styles
30

Ismail, Aya, Marwa Elpeltagy, Mervat Zaki, and Kamal A. ElDahshan. "Deepfake video detection: YOLO-Face convolution recurrent approach". PeerJ Computer Science 7 (21.09.2021): e730. http://dx.doi.org/10.7717/peerj-cs.730.

Full text source
Abstract:
Recently, deepfake techniques for swapping faces have been spreading, allowing easy creation of hyper-realistic fake videos. Detecting the authenticity of a video has become increasingly critical because of the potential negative impact on the world. Here, a new scheme is introduced, You Only Look Once Convolution Recurrent Neural Networks (YOLO-CRNNs), to detect deepfake videos. The YOLO-Face detector detects face regions in each frame of the video, whereas a fine-tuned EfficientNet-B5 is used to extract the spatial features of these faces. These features are fed as a batch of input sequences into a Bidirectional Long Short-Term Memory (Bi-LSTM) to extract the temporal features. The new scheme is then evaluated on a new large-scale dataset, CelebDF-FaceForensics++ (c23), based on a combination of two popular datasets, FaceForensics++ (c23) and Celeb-DF. It achieves an Area Under the Receiver Operating Characteristic Curve (AUROC) score of 89.35%, 89.38% accuracy, 83.15% recall, 85.55% precision, and an 84.33% F1-measure for the pasting data approach. The experimental analysis confirms the superiority of the proposed method compared to the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
31

Nassif, Ali Bou, Qassim Nasir, Manar Abu Talib, and Omar Mohamed Gouda. "Improved Optical Flow Estimation Method for Deepfake Videos". Sensors 22, no. 7 (24.03.2022): 2500. http://dx.doi.org/10.3390/s22072500.

Full text source
Abstract:
Creating deepfake multimedia, and especially deepfake videos, has become much easier these days due to the availability of deepfake tools and the virtually unlimited number of face images found online. Research and industry communities have dedicated time and resources to develop detection methods that expose these fake videos. Although these detection methods have been developed over the past few years, synthesis methods have also made progress, allowing the production of deepfake videos that are harder and harder to differentiate from real videos. This research paper proposes an improved optical flow estimation-based method to detect and expose the discrepancies between video frames. Augmentation and modification are experimented with to try to improve the system's overall accuracy. Furthermore, the system is trained on graphics processing units (GPUs) and tensor processing units (TPUs) to explore the effects and benefits of each type of hardware in deepfake detection. TPUs were found to have shorter training times than GPUs. VGG-16 is the best-performing model when used as a backbone for the system, achieving around 82.0% detection accuracy when trained on GPUs and 71.34% accuracy on TPUs.
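
A sketch of the optical-flow front end such a system needs: dense Farneback flow between consecutive frames, computed here with OpenCV, which a CNN backbone like the VGG-16 mentioned above could then classify. The Farneback parameters are standard illustrative values, not the paper's settings:

```python
import cv2

def flow_fields(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    flows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense flow: one (dx, dy) vector per pixel, shape (H, W, 2).
        flows.append(cv2.calcOpticalFlowFarneback(
            prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0))
        prev = gray
    cap.release()
    return flows
```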
APA, Harvard, Vancouver, ISO, and other styles
32

Yavuzkilic, Semih, Abdulkadir Sengur, Zahid Akhtar, and Kamran Siddique. "Spotting Deepfakes and Face Manipulations by Fusing Features from Multi-Stream CNNs Models". Symmetry 13, no. 8 (26.07.2021): 1352. http://dx.doi.org/10.3390/sym13081352.

Full text source
Abstract:
Deepfakes are an application of deep learning that is deemed harmful. They are a sort of image or video manipulation in which a person's image is changed or swapped with another person's face using artificial neural networks, and such manipulations may be produced with a variety of techniques and applications. The quintessential countermeasure against deepfakes or face manipulation is a deepfake detection method. Most existing detection methods perform well under symmetric data distributions but are still not robust to asymmetric dataset variations and novel deepfake/manipulation types. In this paper, a new multistream deep learning algorithm is developed for the identification of fake faces in videos, where three streams are merged at the feature level using a fusion layer. After the fusion layer, fully connected, Softmax, and classification layers are used to classify the data. The pre-trained VGG16 model is adopted for the transferred CNN1 stream; in transfer learning, the weights of the pre-trained CNN model are reused for training on the new classification problem. In the second stream (transferred CNN2), the pre-trained VGG19 model is used, whereas in the third stream the pre-trained ResNet18 model is considered. In addition, a new large-scale dataset, the World Politicians Deepfake Dataset (WPDD), is introduced to improve deepfake detection systems. The dataset was created by downloading videos of 20 different politicians from YouTube; over 320,000 frames were retrieved after dividing the downloaded videos into short sections and extracting the frames. Finally, various manipulations were performed on these frames, resulting in seven separate manipulation classes for men and women. In the experiments, three fake face detection scenarios are investigated: first, fake and real face discrimination; second, seven face manipulations, including age, beard, face swap, glasses, hair color, hairstyle, and smiling, plus genuine face discrimination; and third, the performance of the deepfake detection system under novel types of face manipulation. The proposed strategy outperforms prior existing methods, with calculated performance metrics over 99%.
APA, Harvard, Vancouver, ISO, and other styles
33

Sohaib, Muhammad, and Samabia Tehseen. "Forgery detection of low quality deepfake videos". Neural Network World 33, no. 2 (2023): 85–99. http://dx.doi.org/10.14311/nnw.2023.33.006.

Full text source
Abstract:
The rapid growth of online media across different social media platforms and the internet, along with its many benefits, has some negative effects as well. Deep learning has many positive applications, such as medicine, animation, and cybersecurity, but over the past few years it has also been observed in negative uses such as defamation, blackmail, and creating privacy concerns for the general public. Deepfake is the common terminology for facial forgery of a person in media such as images or videos. Advances in forgery creation have challenged researchers to develop advanced forgery detection systems capable of detecting facial forgeries. The proposed forgery detection system is based on a CNN-LSTM model in which we first extract faces from the frames using MTCNN, then perform spatial feature extraction using a pretrained Xception network, and then use an LSTM for temporal feature extraction. Finally, classification is performed to predict whether the video is real or fake. The system is capable of detecting low-quality videos and has shown good accuracy in detecting real and fake videos on the Google deepfake AI dataset.
APA, Harvard, Vancouver, ISO, and other styles
34

Bansal, Nency, Turki Aljrees, Dhirendra Prasad Yadav, Kamred Udham Singh, Ankit Kumar, Gyanendra Kumar Verma, and Teekam Singh. "Real-Time Advanced Computational Intelligence for Deep Fake Video Detection". Applied Sciences 13, no. 5 (27.02.2023): 3095. http://dx.doi.org/10.3390/app13053095.

Full text source
Abstract:
As digitization increases, threats to our data are also increasing at a faster pace. Generating fake videos does not require any particular knowledge, hardware, memory, or computational device; however, detecting them is challenging. Several methods in the past have addressed the issue, but computation costs are still high and a highly efficient model has yet to be developed. Therefore, we proposed a new model architecture known as DFN (Deep Fake Network), which has the basic blocks of mobNet, a linear stack of separable convolutions, max-pooling layers with Swish as an activation function, and XGBoost as a classifier to detect deepfake videos. The proposed model is more accurate than Xception, EfficientNet, and other state-of-the-art models. The DFN performance was tested on the DFDC (Deep Fake Detection Challenge) dataset. The proposed method achieved an accuracy of 93.28% and a precision of 91.03% with this dataset, while training and validation loss were 0.14 and 0.17, respectively. Furthermore, we have taken care of all types of facial manipulation, making the model more robust, generalized, and lightweight, with the ability to detect all types of facial manipulation in videos.
APA, Harvard, Vancouver, ISO, and other styles
35

Sabah, Hanady. "Detection of Deep Fake in Face Images Using Deep Learning". Wasit Journal of Computer and Mathematics Science 1, no. 4 (31.12.2022): 94–111. http://dx.doi.org/10.31185/wjcm.92.

Full text source
Abstract:
Fake images are one of the most widespread phenomena with a significant influence on our social life, particularly in the worlds of politics and celebrity. Nowadays, generating fake images has become very easy due to powerful yet simple mobile applications that navigate the social media world, and due to the emergence of the Generative Adversarial Network (GAN), which produces images that are indistinguishable to the human eye. This makes fake images and fake videos easy to produce, difficult to detect, and fast to spread. As a result, image processing and artificial intelligence play an important role in solving such issues, and detecting fake images is a critical problem that must be controlled to prevent numerous harmful effects. This research proposes utilizing the most popular deep learning algorithm, the Convolutional Neural Network (CNN), to detect fake images. The first step is preprocessing, which starts by converting images from the RGB to the YCbCr color space, then applying gamma correction, and finally extracting edges with the Canny filter. After that, two detection methods are applied, using a Convolutional Neural Network with Principal Component Analysis (PCA) and a Convolutional Neural Network without PCA as classifiers. The results reveal that using the CNN with PCA yields acceptable accuracy, whereas using the CNN alone gave the highest accuracy in detecting manipulated images.
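
The preprocessing chain described in this abstract is easy to sketch with OpenCV: conversion to YCbCr, gamma correction, then Canny edge extraction before the CNN. The gamma value and Canny thresholds are illustrative assumptions (note that OpenCV names the color space YCrCb):

```python
import cv2
import numpy as np

def preprocess(image_bgr, gamma=1.5):
    # Convert to YCbCr (OpenCV's YCrCb channel order: Y, Cr, Cb).
    ycbcr = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    # Gamma correction via a 256-entry lookup table.
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    corrected = cv2.LUT(ycbcr, lut)
    # Edge map from the luma channel; 100/200 are assumed thresholds.
    edges = cv2.Canny(corrected[:, :, 0], 100, 200)
    return edges                     # input map for the CNN classifier
```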
36

Bilohrats, Khrystyna. "PECULIARITIES OF FAKE MEDIA MESSAGES (ON THE EXAMPLE OF RUSSIAN FAKES ABOUT UKRAINE)". Bulletin of Lviv Polytechnic National University: journalism 1, no. 2 (2021): 1–10. http://dx.doi.org/10.23939/sjs2021.02.001.

Full text source
Abstract:
Information wars have long been used as a full-fledged weapon against an enemy, employing both manipulation and completely fabricated messages. The methods authors use to disseminate false media messages differ widely, but the goal is almost always the same: to make the target audience that consumes the information believe it and be influenced by it. Scientists have identified three main elements of the media chain: the author of the message, the transmission channel, and the recipient. According to M. McLuhan, media channels are a technical continuation of natural channels: radio (auditory), printed periodicals (visual), television (a combination of vocal and visual), and Internet mass media (a combination of auditory, optic, and visual). It is therefore worth considering the ways in which fakes are distributed: through photos, videos, or plain text. Very often two or even three distribution channels are used, since video can accompany text with photos attached, so this division should be considered conditional. Analysis of fake reports from the Russian-language media segment made it possible to draw conclusions about the basic evaluation criteria according to professional journalistic standards. The emotionality of the texts was taken into account and conditionally divided into two groups, "excessive emotionality" and "moderate neutrality." Excessive emotionality proved most common in video, somewhat less common in photos, and rare in plain text. As for the topics of fakes in the Russian-language media segment, the vast majority concerned Ukraine, namely military issues. Usually, the authors of fake media reports aim to destabilize the situation, to make the target audience believe nonsense and behave predictably, and to divert attention from their own problems.
37

Shalini, S. "Fake Image Detection". International Journal for Research in Applied Science and Engineering Technology 9, no. VI (15.06.2021): 1140–45. http://dx.doi.org/10.22214/ijraset.2021.35238.

Full text source
Abstract:
In this technological generation, social media plays an important role in people's daily lives. Many people share text, images, and videos on social media (Instagram, Facebook, Twitter, etc.), and images are among the most common types of media shared among users. This creates opportunities to manipulate the images circulating on social media: people can fabricate images and disseminate them widely in a very short time, which threatens the credibility of the news and public confidence in the means of social communication. This research therefore proposes an approach that extracts image content, classifies it, verifies whether the image is true or false, and uncovers any manipulation. Social media also carries much unwanted content, such as threats and forged images, which may cause serious problems for society and national security. The approach aims to build a model that can classify social media content to detect such threats and forged images.
38

Dutta, Hridoy Sankar, Mayank Jobanputra, Himani Negi and Tanmoy Chakraborty. "Detecting and Analyzing Collusive Entities on YouTube". ACM Transactions on Intelligent Systems and Technology 12, no. 5 (31.10.2021): 1–28. http://dx.doi.org/10.1145/3477300.

Full text source
Abstract:
YouTube sells advertisements on the posted videos, which in turn enables the content creators to monetize their videos. As an unintended consequence, this has proliferated various illegal activities such as artificial boosting of views, likes, comments, and subscriptions. We refer to such videos (gaining likes and comments artificially) and channels (gaining subscriptions artificially) as "collusive entities." Detecting such collusive entities is an important yet challenging task. Existing solutions mostly deal with the problem of spotting fake views, spam comments, fake content, and so on, and oftentimes ignore how such fake activities emerge via collusion. Here, we collect a large dataset consisting of two types of collusive entities on YouTube: videos submitted to gain collusive likes and comment requests, and channels submitted to gain collusive subscriptions. We begin by providing an in-depth analysis of collusive entities on YouTube fostered by various blackmarket services. Following this, we propose models to detect three types of collusive YouTube entities: videos seeking collusive likes, channels seeking collusive subscriptions, and videos seeking collusive comments. The third type of entity is associated with temporal information. To detect videos and channels for collusive likes and subscriptions, respectively, we utilize one-class classifiers trained on our curated collusive entities and a set of novel features. The SVM-based model shows significant performance with a true positive rate of 0.911 and 0.910 for detecting collusive videos and collusive channels, respectively. To detect videos seeking collusive comments, we propose CollATe, a novel end-to-end neural architecture that leverages time-series information of posted comments along with static metadata of videos. CollATe is composed of three components: a metadata feature extractor (which derives metadata-based features from videos), an anomaly feature extractor (which utilizes the time-series data to detect sudden changes in the commenting activity), and a comment feature extractor (which utilizes the text of the comments posted during collusion and computes a similarity score between the comments). Extensive experiments show the effectiveness of CollATe (with a true positive rate of 0.905) over the baselines.
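The one-class detection step can be sketched with scikit-learn; the random feature matrix below is a stand-in for the paper's curated metadata features, and the `nu` setting is an illustrative assumption.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

X_collusive = np.random.rand(500, 12)  # stand-in for curated metadata features

model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.1, kernel="rbf"))
model.fit(X_collusive)               # learn the region of collusive entities
labels = model.predict(X_collusive)  # +1 = collusive-like, -1 = outlier
```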
39

Shan Bian, Weiqi Luo and Jiwu Huang. "Exposing Fake Bit Rate Videos and Estimating Original Bit Rates". IEEE Transactions on Circuits and Systems for Video Technology 24, no. 12 (December 2014): 2144–54. http://dx.doi.org/10.1109/tcsvt.2014.2334031.

Full text source
40

Liang, Xiaoyun, Zhaohong Li, Yiyuan Yang, Zhenzhen Zhang and Yu Zhang. "Detection of Double Compression for HEVC Videos With Fake Bitrate". IEEE Access 6 (2018): 53243–53. http://dx.doi.org/10.1109/access.2018.2869627.

Full text source
41

Alsakar, Yasmin M., Nagham E. Mekky and Noha A. Hikal. "Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation". Journal of Imaging 7, no. 3 (5.03.2021): 47. http://dx.doi.org/10.3390/jimaging7030047.

Full text source
Abstract:
Great attention is paid to detecting video forgeries nowadays, especially with the widespread sharing of videos over social media and websites. Many video editing software programs are available and perform well in tampering with video contents or even creating fake videos. Forgery affects video integrity and authenticity and has serious implications; for example, digital videos for security and surveillance purposes are used as evidence in courts. In this paper, a newly developed passive video forgery scheme is introduced and discussed. The developed scheme is based on representing highly correlated video data with a low computational complexity third-order tensor tube-fiber mode. An arbitrary number of core tensors is selected to detect and locate two serious types of forgery, namely insertion and deletion. These tensor data are orthogonally transformed to achieve further data reduction and to provide good features for tracing forgery along the whole video. Experimental results and comparisons show the superiority of the proposed scheme, with a precision value of up to 99% in detecting and locating both types of attack for static as well as dynamic videos, quick-moving foreground items (single or multiple), and zoom-in and zoom-out datasets, which are rarely tested by previous works. Moreover, the proposed scheme offers a reduction in time and a linear computational complexity. Based on the computer configuration used, an average time of 35 s is needed to detect and locate 40 forged frames out of 300 frames.
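As a rough illustration of the frame-continuity idea (not the authors' exact tensor scheme), the following numpy sketch stacks grayscale frames into a third-order tensor and flags points where the correlation between consecutive frames breaks down, as it may around inserted or deleted frames.

```python
import numpy as np

def continuity_scores(frames):
    """frames: a (T, H, W) third-order tensor of grayscale frames."""
    t_dim = frames.shape[0]
    flat = frames.reshape(t_dim, -1).astype(np.float64)
    return np.array([
        np.corrcoef(flat[t], flat[t + 1])[0, 1]  # neighbour-frame correlation
        for t in range(t_dim - 1)
    ])

# scores = continuity_scores(video)
# suspects = np.where(scores < scores.mean() - 3 * scores.std())[0]
```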
42

Burgstaller, Markus, and Scott Macpherson. "Deepfakes in International Arbitration: How Should Tribunals Treat Video Evidence and Allegations of Technological Tampering?" Journal of World Investment & Trade 22, no. 5-6 (10.12.2021): 860–90. http://dx.doi.org/10.1163/22119000-12340232.

Full text source
Abstract:
Deepfakes can be described as videos of people doing and saying things that they have not done or said. Their potential use in international arbitration leads to two competing threats. Tribunals may be conscious of the difficulties in proving that a deepfake is, in fact, fake. If the 'clear and convincing evidence' standard of proof is applied, it may be very difficult, if not impossible, to prove that a sophisticated deepfake is fake. However, the burgeoning awareness of deepfakes may render tribunals less inclined to believe what they see on video even in circumstances in which the video before them is real. This may encourage parties to seek to deny legitimate video evidence as a deepfake. The 'balance of probabilities' standard, while not perfect, would appear to address this concern. In order to properly assess deepfakes, tribunals should apply this standard while assessing both technical and circumstantial evidence holistically.
43

Shahzad, Hina Fatima, Furqan Rustam, Emmanuel Soriano Flores, Juan Luís Vidal Mazón, Isabel de la Torre Diez and Imran Ashraf. "A Review of Image Processing Techniques for Deepfakes". Sensors 22, no. 12 (16.06.2022): 4556. http://dx.doi.org/10.3390/s22124556.

Full text source
Abstract:
Deep learning is used to address a wide range of challenging issues including large-scale data analysis, image processing, object detection, and autonomous control. In the same way, deep learning techniques are also used to develop software and techniques that pose a danger to privacy, democracy, and national security. Fake content in the form of images and videos created through digital manipulation with artificial intelligence (AI) approaches has become widespread during the past few years. Deepfakes, in the form of audio, images, and videos, have become a major concern. Complemented by artificial intelligence, deepfakes swap the face of one person with another and generate hyper-realistic videos. Amplified by the speed of social media, deepfakes can immediately reach millions of people and can be very dangerous as vehicles for fake news, hoaxes, and fraud. Besides well-known movie stars, politicians have been victims of deepfakes in the past, especially US presidents Barack Obama and Donald Trump; however, the public at large can also be a target. To overcome the challenge of deepfake identification and mitigate its impact, great efforts have been made to devise novel methods to detect face manipulation. This study also discusses how to counter the threats from deepfake technology and alleviate its impact. The outcomes recommend that, despite posing a serious threat to society, business, and political institutions, deepfakes can be combated through appropriate policies, regulation, individual actions, training, and education. In addition, further evolution of technology is needed for deepfake identification, content authentication, and deepfake prevention. Different studies have performed deepfake detection using machine learning and deep learning techniques such as support vector machines, random forests, multilayer perceptrons, k-nearest neighbors, convolutional neural networks with and without long short-term memory, and other similar models. This study aims to highlight recent research in deepfake image and video detection, covering deepfake creation, various detection algorithms on self-made datasets, and existing benchmark datasets.
44

Wagner, Travis L., and Ashley Blewer. "“The Word Real Is No Longer Real”: Deepfakes, Gender, and the Challenges of AI-Altered Video". Open Information Science 3, no. 1 (1.01.2019): 32–46. http://dx.doi.org/10.1515/opis-2019-0003.

Full text source
Abstract:
It is near-impossible for casual consumers of images to authenticate digitally-altered images without a keen understanding of how to “read” the digital image. As Photoshop did for photographic alteration, so too have advances in artificial intelligence and computer graphics made seamless video alteration seem real to the untrained eye. The colloquialism used to describe these videos is “deepfakes”: a portmanteau of deep-learning AI and faked imagery. The implications of these videos serving as authentic representations matter, especially in rhetorics around “fake news.” Yet this alteration software, deployable both through high-end editing software and free mobile apps, remains critically underexamined. One troubling example of deepfakes is the superimposing of women’s faces into pornographic videos. The implication here is a reification of women’s bodies as a thing to be visually consumed, circumventing consent. This use is confounding considering that the very bodies used to perfect deepfakes were men’s. This paper explores how the emergence and distribution of deepfakes continues to enforce gendered disparities within visual information. This paper, however, rejects the inevitability of deepfakes, arguing that feminist-oriented approaches to artificial intelligence building and a critical approach to visual information literacy can stifle the distribution of violently sexist deepfakes.
45

Pérez Dasilva, Jesús, Koldobika Meso Ayerdi and Terese Mendiguren Galdospin. "Deepfakes on Twitter: Which Actors Control Their Spread?" Media and Communication 9, no. 1 (3.03.2021): 301–12. http://dx.doi.org/10.17645/mac.v9i1.3433.

Full text source
Abstract:
The term deepfake was first used in a Reddit post in 2017 to refer to videos manipulated using artificial intelligence techniques, and since then it has become ever easier to create such fake videos. An investigation by the cybersecurity company Deeptrace in September 2019 indicated that the number of such fake videos had doubled in the previous nine months and that most were pornographic videos used as revenge to harm women. The report also highlighted the potential of this technology to be used in political campaigns, as in Gabon and Malaysia. In this sense, the deepfake phenomenon has become a concern for governments because it poses a short-term threat not only to politics but also through fraud and cyberbullying. The starting point of this research was Twitter’s announcement of a change in its protocols to fight fake news and deepfakes. We used the Social Network Analysis technique, with visualization as a key component, to analyze the conversation on Twitter about the deepfake phenomenon. NodeXL was used to identify the main actors and the network of connections between all these accounts. In addition, the semantic networks of the tweets were analyzed to discover hidden patterns of meaning. The results show that half of the actors who function as bridges in the interactions that shape the network are journalists and media outlets, a sign of the concern that this sophisticated form of manipulation generates in this collective.
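The "bridge actor" analysis rests on betweenness centrality, which NodeXL computes; a minimal networkx equivalent with placeholder accounts and edges might look like this.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([  # placeholder retweet/mention edges, not real accounts
    ("journalist_a", "user_1"), ("user_1", "media_b"),
    ("media_b", "user_2"), ("user_2", "journalist_a"),
    ("user_3", "media_b"),
])
centrality = nx.betweenness_centrality(G)
bridges = sorted(centrality, key=centrality.get, reverse=True)[:10]
print(bridges)  # accounts most likely acting as bridges in the conversation
```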
46

Adams, Caitlin. "“It’s So Bad It Has to be Real”: Mimic Vlogs and the Use of User-Generated Formats for Storytelling". Platform: Journal of Media and Communication 9, no. 2 (December 2022): 22–36. http://dx.doi.org/10.46580/p84398.

Full text source
Abstract:
Mimic vlogs, a form of fictional web series that tell stories using a vlog format, draw on audience expectations to elicit a particular response. Mimic vlogs use the conventions of an authentic format to tell a story in a way that resembles a genre audiences know and trust, much like mockumentaries and other forms of parody. It is integral to understand how viewers approach and understand these videos, particularly on a platform such as YouTube, which hosts amateur and professional, user-generated and professionally-produced content alike. Mimic vlogs constitute a small part of a much larger phenomenon of replica content online, such as deep fakes, cheap fakes, fake news, misinformation, and disinformation. This exploratory paper draws on primary data from YouTube viewers to investigate what methods audience members use to identify video content. Participants watched and responded to a series of eight videos made up of both user-generated vlogs and fictional mimic vlogs to determine the elements viewers considered while categorising the videos. The approaches participants employed were frequently unreliable, with participants coming to different conclusions based on the same piece of information. Contributing factors included the viewers’ perceptions around authenticity, plausibility, and markers of quality in each video. The results of this research illustrate the ways in which audiences read texts differently. This is in line with Stuart Hall’s encoding and decoding theory (1980) and broader audience reception studies, which suggest that audiences play a vital role in interpreting texts and their meaning. Consequently, this research shows how audiences are vulnerable to even low-stakes replica content online, in part because of their decoding of textual elements.
47

An, Byeongseon, Hyeji Lim and Eui Chul Lee. "Fake Biometric Detection Based on Photoplethysmography Extracted from Short Hand Videos". Electronics 12, no. 17 (26.08.2023): 3605. http://dx.doi.org/10.3390/electronics12173605.

Full text source
Abstract:
An array of authentication methods has emerged, underscoring the importance of addressing spoofing challenges arising from forgery and alteration. Previous studies utilizing palm biometrics have attempted to circumvent spoofing through geometric methods or the analysis of vein images. However, these approaches are inadequate when faced with hand-printed photographs or in the absence of near-infrared sensors. In this study, we propose using remote photoplethysmography (rPPG) signals to tackle spoofing concerns in palm images captured in RGB environments. rPPG signals were extracted using video durations of 3, 5, and 7 s, and 30 features within the heart rate band were identified through frequency conversion. A support vector machine (SVM) model was trained with the processed features, yielding accuracies of 97.16%, 98.4%, and 97.28% for video durations of 3, 5, and 7 s, respectively. These features underwent dimensionality reduction through principal component analysis (PCA), and the results were compared with the initial 30 features. Additionally, we evaluated the confusion matrix with zero false positives for each video duration, finding that the overall accuracy declined by 1 to 3%. The 5 s video retained the highest accuracy with the smallest decrement, registering a value of 97.2%.
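A hedged sketch of this anti-spoofing step: keep the spectral energy of the rPPG trace inside the heart-rate band and let an SVM separate live palms from printed ones. The sampling rate, band edges, and feature count below are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def heart_band_features(rppg, fs=30.0, lo=0.7, hi=4.0, n_feats=30):
    """Resample the rPPG spectrum inside the heart-rate band to n_feats values."""
    spectrum = np.abs(np.fft.rfft(rppg))
    freqs = np.fft.rfftfreq(len(rppg), d=1.0 / fs)
    band = spectrum[(freqs >= lo) & (freqs <= hi)]  # roughly 42-240 bpm
    idx = np.linspace(0, len(band) - 1, n_feats).astype(int)
    return band[idx]                                # fixed-length feature vector

# X = np.stack([heart_band_features(sig) for sig in rppg_signals])
# clf = SVC(kernel="rbf").fit(X, live_or_spoof_labels)
```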
48

S., Gayathri, Santhiya S., Nowneesh T., Sanjana Shuruthy K. and Sakthi S. "Deep fake detection using deep learning techniques". Applied and Computational Engineering 2, no. 1 (22.03.2023): 1010–19. http://dx.doi.org/10.54254/2755-2721/2/20220655.

Full text source
Abstract:
Deep fake is the artificial manipulation and creation of data, primarily photographs or videos, in the likeness of another person. This technology has a variety of applications, but despite its uses it can also affect society in harmful ways, such as defaming a person or causing political distress. Many models have been proposed by different researchers, giving an average accuracy of 90%. To improve detection efficiency, this paper uses three different deep learning techniques: Inception ResNetV2, EfficientNet, and VGG16. The proposed models are trained on a combination of the FaceForensics++ and DeepFake Detection Challenge datasets. The proposed system gives a highest accuracy of 97%.
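Fine-tuning the three named backbones for binary real/fake classification can be sketched in Keras as follows; the specific EfficientNet variant (B0), input sizes, and classification head are assumptions.

```python
import tensorflow as tf

def build(backbone_fn, size):
    """Wrap a pretrained backbone with a binary real/fake head."""
    base = backbone_fn(include_top=False, pooling="avg",
                       input_shape=(size, size, 3), weights="imagenet")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    return tf.keras.Model(base.input, out)

models = {
    "InceptionResNetV2": build(tf.keras.applications.InceptionResNetV2, 299),
    "EfficientNetB0": build(tf.keras.applications.EfficientNetB0, 224),
    "VGG16": build(tf.keras.applications.VGG16, 224),
}
```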
49

Claretta, Dyva, and Marta Wijayanengtias. "VIEWER RECEPTION TOWARD YOUTUBER'S GIVEAWAY". JOSAR (Journal of Students Academic Research) 7, no. 1 (22.05.2021): 45–57. http://dx.doi.org/10.35457/josar.v7i1.1533.

Full text source
Abstract:
Lately, YouTuber has become a favorite profession because it is easy to earn money from uploading videos that generate AdSense revenue. The amount of AdSense income depends on the account owner's traffic, that is, the number of viewers of the uploaded videos. This pushes YouTubers to promote their channels more creatively, with giveaways as one common method. However, when something becomes a trend, there are various reactions from the audience: news about suspicions that YouTubers run fake giveaways, discussions of giveaway rules in Islam, giveaway reaction videos by YouTubers, guides on how to win giveaways from YouTubers, and more. Among these pros and cons, the view that YouTubers run giveaways mainly to earn AdSense makes the value of such videos easy to dismiss, so they are often considered less educational and seemingly staged. Given these responses, the researcher wanted to know how viewers receive the giveaway trend that YouTubers have lately adopted, using Stuart Hall's reception theory. This qualitative descriptive study found that the viewers' reading position is negotiated: they accept the existing giveaway trend, but they are quite selective and have their own standards for joining giveaways and choosing YouTube videos.
50

Rupapara, Vaibhav, Furqan Rustam, Aashir Amaar, Patrick Bernard Washington, Ernesto Lee and Imran Ashraf. "Deepfake tweets classification using stacked Bi-LSTM and words embedding". PeerJ Computer Science 7 (21.10.2021): e745. http://dx.doi.org/10.7717/peerj-cs.745.

Full text source
Abstract:
The spread of altered media in the form of fake videos, audio, and images has increased greatly over the past few years. Advanced digital manipulation tools and techniques make it easier to generate fake content and post it on social media, and tweets with deepfake content make their way onto social platforms. The polarity of such tweets is significant for determining public sentiment about deepfakes. This paper presents a deep learning model to predict the polarity of deepfake tweets. For this purpose, a stacked bi-directional long short-term memory (SBi-LSTM) network is proposed to classify the sentiment of deepfake tweets. Several well-known machine learning classifiers are investigated as well, such as support vector machine, logistic regression, Gaussian Naive Bayes, extra trees, and AdaBoost. These classifiers are used with term frequency-inverse document frequency and bag-of-words feature extraction approaches. In addition, the performance of deep learning models is analyzed, including a long short-term memory network, gated recurrent unit, bidirectional LSTM, and convolutional neural network + LSTM. Experimental results indicate that the proposed SBi-LSTM outperforms both the machine and deep learning models and achieves an accuracy of 0.92.
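A minimal Keras sketch of a stacked bidirectional LSTM along these lines is shown below; the vocabulary size, embedding dimension, and layer widths are assumptions, not the paper's settings.

```python
import tensorflow as tf

sbi_lstm = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=20000, output_dim=128),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True)),     # first Bi-LSTM
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # stacked layer
    tf.keras.layers.Dense(1, activation="sigmoid"),           # tweet polarity
])
sbi_lstm.compile(optimizer="adam", loss="binary_crossentropy",
                 metrics=["accuracy"])
```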