Table of contents
A selection of scholarly literature on the topic "Deepfake Detection"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Deepfake Detection".
Next to every work in the list, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Deepfake Detection"
Yasrab, Robail, Wanqi Jiang, and Adnan Riaz. "Fighting Deepfakes Using Body Language Analysis." Forecasting 3, no. 2 (April 28, 2021): 303–21. http://dx.doi.org/10.3390/forecast3020020.
Niveditha, Zohaib Hasan Princy, Saurabh Sharma, Vishal Paranjape, and Abhishek Singh. "Review of Deep Learning Techniques for Deepfake Image Detection." International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 11, no. 02 (February 25, 2022): 1–14. http://dx.doi.org/10.15662/ijareeie.2022.1102021.
Sunkari, Venkateswarlu, and Ayyagari Sri Nagesh. "Artificial intelligence for deepfake detection: systematic review and impact analysis." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 4 (December 1, 2024): 3786. http://dx.doi.org/10.11591/ijai.v13.i4.pp3786-3792.
Battula Thirumaleshwari Devi, et al. "A Comprehensive Survey on Deepfake Methods: Generation, Detection, and Applications." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 654–78. http://dx.doi.org/10.17762/ijritcc.v11i9.8857.
Lad, Sumit. "Adversarial Approaches to Deepfake Detection: A Theoretical Framework for Robust Defense." Journal of Artificial Intelligence General Science (JAIGS), ISSN 3006-4023, 6, no. 1 (September 21, 2024): 46–58. http://dx.doi.org/10.60087/jaigs.v6i1.225.
Krueger, Natalie, Mounika Vanamala, and Rushit Dave. "Recent Advancements in the Field of Deepfake Detection." International Journal of Computer Science and Information Technology 15, no. 4 (August 27, 2023): 1–11. http://dx.doi.org/10.5121/ijcsit.2023.15401.
Kawabe, Akihisa, Ryuto Haga, Yoichi Tomioka, Jungpil Shin, and Yuichi Okuyama. "A Dynamic Ensemble Selection of Deepfake Detectors Specialized for Individual Face Parts." Electronics 12, no. 18 (September 18, 2023): 3932. http://dx.doi.org/10.3390/electronics12183932.
Raza, Ali, Kashif Munir, and Mubarak Almutairi. "A Novel Deep Learning Approach for Deepfake Image Detection." Applied Sciences 12, no. 19 (September 29, 2022): 9820. http://dx.doi.org/10.3390/app12199820.
Singh, Preeti, Khyati Chaudhary, Gopal Chaudhary, Manju Khari, and Bharat Rawal. "A Machine Learning Approach to Detecting Deepfake Videos: An Investigation of Feature Extraction Techniques." Journal of Cybersecurity and Information Management 9, no. 2 (2022): 42–50. http://dx.doi.org/10.54216/jcim.090204.
Qureshi, Shavez Mushtaq, Atif Saeed, Sultan H. Almotiri, Farooq Ahmad, and Mohammed A. Al Ghamdi. "Deepfake forensics: a survey of digital forensic methods for multimodal deepfake identification on social media." PeerJ Computer Science 10 (May 27, 2024): e2037. http://dx.doi.org/10.7717/peerj-cs.2037.
Dissertations on the topic "Deepfake Detection"
Hasanaj, Enis, Albert Aveler, and William Söder. "Cooperative edge deepfake detection." Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.
Emir, Alkazhami. "Facial Identity Embeddings for Deepfake Detection in Videos." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170587.
Guarnera, Luca. "Discovering Fingerprints for Deepfake Detection and Multimedia-Enhanced Forensic Investigations." Doctoral thesis, Università degli studi di Catania, 2021. http://hdl.handle.net/20.500.11769/539620.
Tak, Hemlata. "End-to-End Modeling for Speech Spoofing and Deepfake Detection." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS104.pdf.
Der volle Inhalt der QuelleVoice biometric systems are being used in various applications for secure user authentication using automatic speaker verification technology. However, these systems are vulnerable to spoofing attacks, which have become even more challenging with recent advances in artificial intelligence algorithms. There is hence a need for more robust, and efficient detection techniques. This thesis proposes novel detection algorithms which are designed to perform reliably in the face of the highest-quality attacks. The first contribution is a non-linear ensemble of sub-band classifiers each of which uses a Gaussian mixture model. Competitive results show that models which learn sub-band specific discriminative information can substantially outperform models trained on full-band signals. Given that deep neural networks are more powerful and can perform both feature extraction and classification, the second contribution is a RawNet2 model. It is an end-to-end (E2E) model which learns features directly from raw waveform. The third contribution includes the first use of graph neural networks (GNNs) with an attention mechanism to model the complex relationship between spoofing cues present in spectral and temporal domains. We propose an E2E spectro-temporal graph attention network called RawGAT-ST. RawGAT-ST model is further extended to an integrated spectro-temporal graph attention network, named AASIST which exploits the relationship between heterogeneous spectral and temporal graphs. Finally, this thesis proposes a novel data augmentation technique called RawBoost and uses a self-supervised, pre-trained speech model as a front-end to improve generalisation in the wild conditions
Moufidi, Abderrazzaq. "Machine Learning-Based Multimodal Integration for Short Utterance-Based Biometrics Identification and Engagement Detection." Electronic Thesis or Diss., Angers, 2024. http://www.theses.fr/2024ANGE0026.
The rapid advancement and democratization of technology have led to an abundance of sensors. Consequently, the integration of these diverse modalities presents an advantage for numerous real-life applications, such as biometric recognition and engagement detection. In the field of multimodality, researchers have developed various fusion architectures, ranging from early and hybrid to late fusion approaches. However, these architectures may have limitations with short utterances and brief video segments, necessitating a paradigm shift towards multimodal machine learning techniques that promise precision and efficiency for short-duration data analysis. In this thesis, we lean on the integration of multiple modalities to tackle these challenges, ranging from supervised biometric identification to unsupervised student engagement detection. The first contribution integrates the multiscale Wavelet Scattering Transform with the x-vectors architecture, enhancing the accuracy of speaker identification in scenarios involving short utterances. Building on the benefits of multimodality, a late fusion architecture combining lips depth videos and audio signals further improved identification accuracy on short utterances, using effective and computationally light methods to extract spatiotemporal features. Among the challenges facing biometrics is the emerging threat of deepfakes. We therefore focused on developing deepfake detection methods based on shallow learning and on a fine-tuned version of our previous late fusion architecture, applied to RGB lips videos and audio. By employing hand-crafted anomaly detection methods for both the audio and visual modalities, the study demonstrated robust detection capabilities across various datasets and conditions, emphasizing the importance of multimodal approaches in countering evolving deepfake techniques. Expanding to educational contexts, the dissertation explores multimodal student engagement detection in classrooms. Using low-cost sensors to capture heart rate signals and facial expressions, the study developed a reproducible dataset and pipeline for identifying significant moments, accounting for cultural nuances. The analysis of facial expressions using a Vision Transformer (ViT), fused with heart rate signal processing and validated through expert observations, showcased the potential of real-time monitoring to enhance educational outcomes through timely interventions.
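The late-fusion scheme mentioned in this abstract can be sketched as follows, assuming PyTorch and pre-computed per-modality embeddings; the embedding dimensions, the linear heads, and the learned fusion layer are hypothetical placeholders rather than the architecture used in the thesis.

import torch
import torch.nn as nn

class LateFusionDetector(nn.Module):
    """Scores audio and lips-video embeddings separately, then fuses the two opinions."""
    def __init__(self, audio_dim: int = 128, video_dim: int = 256):
        super().__init__()
        self.audio_head = nn.Linear(audio_dim, 2)   # per-modality real/fake logits
        self.video_head = nn.Linear(video_dim, 2)
        self.fusion = nn.Linear(4, 2)               # learns how to weight the two modalities

    def forward(self, audio_emb: torch.Tensor, video_emb: torch.Tensor) -> torch.Tensor:
        a = self.audio_head(audio_emb)
        v = self.video_head(video_emb)
        return self.fusion(torch.cat([a, v], dim=-1))   # fused real/fake logits

# Usage example with random embeddings for a batch of 8 clips.
logits = LateFusionDetector()(torch.randn(8, 128), torch.randn(8, 256))

A design choice worth noting: in late fusion each modality keeps its own model, so either branch can be retrained or replaced independently of the other.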
Gardner, Angelica. "Stronger Together? An Ensemble of CNNs for Deepfakes Detection." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97643.
Chang, Ching-Tang, and 張景棠. "Detecting Deepfake Videos with CNN and Image Partitioning." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5394052%22.&searchmode=basic.
Der volle Inhalt der Quelle國立中興大學
資訊科學與工程學系所
107
AI-generated images are becoming increasingly similar to real photographs. When generated images are used inappropriately, they can damage people's rights and interests, and such dubious images raise legal problems. The problem of detecting digital forgery has existed for many years, but the fake images produced by advancing technology are increasingly difficult to distinguish. This thesis therefore uses deep learning to detect contested face manipulation images. We propose to partition each image into blocks, use a CNN to learn the features of each block separately, and let the per-block predictions vote in an ensemble model to detect forged images. Specifically, we recognize FaceSwap, DeepFakes, and Face2Face manipulations using the dataset provided by FaceForensics++. Classifiers nowadays require not only high accuracy but also robustness across different datasets, so we train on some data and test whether the model remains robust on other data. We also collected digital forgeries generated by different methods from video-sharing platforms to test how well our model generalizes when detecting these forgeries.
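The block-wise voting idea summarized above can be sketched as follows, assuming PyTorch; the tiny CNN, the 4x4 grid, and the simple majority vote are illustrative assumptions, not the network or partitioning actually used in the thesis.

import torch
import torch.nn as nn

class BlockCNN(nn.Module):
    """Small per-block real/fake classifier; the real system would use a deeper CNN per block."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def vote_on_blocks(image: torch.Tensor, model: nn.Module, grid: int = 4) -> int:
    """Partition a (3, H, W) face crop into grid x grid blocks and majority-vote their labels."""
    _, h, w = image.shape
    bh, bw = h // grid, w // grid
    blocks = [image[:, i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
              for i in range(grid) for j in range(grid)]
    with torch.no_grad():
        preds = model(torch.stack(blocks)).argmax(dim=1)   # one label per block
    return int(preds.float().mean().round())               # 1 = fake, 0 = real

# Usage example: vote over a 256x256 face crop with an (untrained) block classifier.
label = vote_on_blocks(torch.rand(3, 256, 256), BlockCNN())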
Soni, Ankit. "Detecting Deepfakes Using Hybrid CNN-RNN Model." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19168.
Rasool, Aale. "Detecting Deepfakes with Multi-Model Neural Networks: A Transfer Learning Approach." Thesis, 2023. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19993.
Books on the topic "Deepfake Detection"
Abdul-Majeed, Ghassan H., Adriana Burlea-Schiopoiu, Parul Aggarwal, and Ahmed J. Obaid. Handbook of Research on Advanced Practical Approaches to Deepfake Detection and Applications. IGI Global, 2022.
Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. CRC Press, 2022.
Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. Taylor & Francis Group, 2022.
Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. CRC Press LLC, 2022.
Busch, Christoph, Christian Rathgeb, Ruben Vera-Rodriguez, and Ruben Tolosana. Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks. Springer International Publishing AG, 2021.
Book chapters on the topic "Deepfake Detection"
Lyu, Siwei. "DeepFake Detection." In Multimedia Forensics, 313–31. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_12.
Li, Yuezun, Pu Sun, Honggang Qi, and Siwei Lyu. "Toward the Creation and Obstruction of DeepFakes." In Handbook of Digital Face Manipulation and Detection, 71–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_4.
Korshunov, Pavel, and Sébastien Marcel. "The Threat of Deepfakes to Computer and Human Visions." In Handbook of Digital Face Manipulation and Detection, 97–115. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_5.
Hao, Hanxiang, Emily R. Bartusiak, David Güera, Daniel Mas Montserrat, Sriram Baireddy, Ziyue Xiang, Sri Kalyan Yarlagadda, et al. "Deepfake Detection Using Multiple Data Modalities." In Handbook of Digital Face Manipulation and Detection, 235–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_11.
Bhilare, Omkar, Rahul Singh, Vedant Paranjape, Sravan Chittupalli, Shraddha Suratkar, and Faruk Kazi. "DEEPFAKE CLI: Accelerated Deepfake Detection Using FPGAs." In Parallel and Distributed Computing, Applications and Technologies, 45–56. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-29927-8_4.
Saurav, Dheeraj Azad, Preeti Pandey, Mohammad Sheihan Javaid, and Utkarsh. "Deepfake Detection Using AI." In Advancement of Intelligent Computational Methods and Technologies, 98–102. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003487906-19.
Hernandez-Ortega, Javier, Ruben Tolosana, Julian Fierrez, and Aythami Morales. "DeepFakes Detection Based on Heart Rate Estimation: Single- and Multi-frame." In Handbook of Digital Face Manipulation and Detection, 255–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_12.
Nadimpalli, Aakash Varma, and Ajita Rattani. "GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection." In Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges, 320–37. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37742-6_25.
Rehman, Mariam, Mehran Rasool, and Sadaf Safder. "DeepFake Detection Using Deep Learning." In Communications in Computer and Information Science, 142–54. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7969-1_11.
Rajesh, N., M. S. Prajwala, Nancy Kumari, Muhammad Rayyan, and A. C. Ramachandra. "Hybrid Model for Deepfake Detection." In Lecture Notes in Electrical Engineering, 639–49. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2828-4_57.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Deepfake Detection"
Ju, Yan, Chengzhe Sun, Shan Jia, Shuwei Hou, Zhaofeng Si, Soumyya Kanti Datta, Lipeng Ke, Riky Zhou, Anita Nikolich, and Siwei Lyu. "DeepFake-o-meter v2.0: An Open Platform for DeepFake Detection." In 2024 IEEE 7th International Conference on Multimedia Information Processing and Retrieval (MIPR), 439–45. IEEE, 2024. http://dx.doi.org/10.1109/mipr62202.2024.00075.
Sarada, B., TVS Laxmi Sudha, Meghana Domakonda, and B. Vasantha. "Audio Deepfake Detection and Classification." In 2024 Asia Pacific Conference on Innovation in Technology (APCIT), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/apcit62007.2024.10673438.
S, Prakash Raj, Pravin D, Sabareeswaran G, Sanjith R. K, and Gomathi B. "Deepfake Detection Using Deep Learning." In 2024 10th International Conference on Advanced Computing and Communication Systems (ICACCS), 1768–74. IEEE, 2024. http://dx.doi.org/10.1109/icaccs60874.2024.10717155.
Ravale, Ujwala, Riya Ramesh Tattu, Ashish Baban Bhoir, and Sneha Bhaskar Mahajan. "Deepfake Detection using InceptionResNetV2 Model." In 2024 IEEE 3rd World Conference on Applied Intelligence and Computing (AIC), 873–79. IEEE, 2024. http://dx.doi.org/10.1109/aic61668.2024.10730917.
Khan, Shafiqul Alam, and Damian Valles. "Deepfake Detection Using Transfer Learning." In 2024 IEEE 15th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 556–62. IEEE, 2024. http://dx.doi.org/10.1109/uemcon62879.2024.10754706.
Ouajdi, Hafsa, Oussama Hadder, Modan Tailleur, Mathieu Lagrange, and Laurie M. Heller. "Detection of Deepfake Environmental Audio." In 2024 32nd European Signal Processing Conference (EUSIPCO), 196–200. IEEE, 2024. http://dx.doi.org/10.23919/eusipco63174.2024.10715076.
Wu, Hsiu-Fu, Chia-Yi Hsu, Chih-Hsun Lin, Chia-Mu Yu, and Chun-Ying Huang. "Deepfake Detection through Temporal Attention." In 2024 33rd Wireless and Optical Communications Conference (WOCC), 109–13. IEEE, 2024. https://doi.org/10.1109/wocc61718.2024.10786063.
Xie, Yuankun, Chenxu Xiong, Xiaopeng Wang, Zhiyong Wang, Yi Lu, Xin Qi, Ruibo Fu, et al. "Does Current Deepfake Audio Detection Model Effectively Detect ALM-Based Deepfake Audio?" In 2024 IEEE 14th International Symposium on Chinese Spoken Language Processing (ISCSLP), 481–85. IEEE, 2024. https://doi.org/10.1109/iscslp63861.2024.10800375.
Win, Aung Kyi, Myo Min Hein, Chit Htay Lwin, Aung Myo Thu, Myo Myat Thu, and Nu Yin Khaing. "A Novel Methodology for Deepfake Detection Using MesoNet and GAN-based Deepfake Creation." In 2024 5th International Conference on Advanced Information Technologies (ICAIT), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/icait65209.2024.10754912.
Bao, Han, Xuhong Zhang, Qinying Wang, Kangming Liang, Zonghui Wang, Shouling Ji, and Wenzhi Chen. "Pluggable Watermarking of Deepfake Models for Deepfake Detection." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/37.