Selected scientific literature on the topic "Deepfake Detection"

Journal articles on the topic "Deepfake Detection"

1

Yasrab, Robail, Wanqi Jiang, and Adnan Riaz. "Fighting Deepfakes Using Body Language Analysis". Forecasting 3, no. 2 (April 28, 2021): 303–21. http://dx.doi.org/10.3390/forecast3020020.

Full text
Abstract:
Recent improvements in deepfake creation have made deepfake videos more realistic. Moreover, open-source software has made deepfake creation more accessible, lowering the barrier to entry. This could pose a threat to people's privacy. There is a potential danger if deepfake creation techniques are used by people with an ulterior motive to produce deepfake videos of world leaders to disrupt the order of countries and the world. Therefore, research into the automatic detection of deepfaked media is essential for public security. In this work, we propose a deepfake detection method using upper body language analysis. Specifically, a many-to-one LSTM network was designed and trained as a classification model for deepfake detection. Different models were trained by varying the hyperparameters to build a final model with benchmark accuracy. We achieved 94.39% accuracy on the deepfake test set. The experimental results showed that upper body language can effectively detect deepfakes.
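The "many-to-one" design this abstract describes (a sequence of per-frame features reduced to one real/fake decision) can be sketched in miniature. This is an illustrative pure-Python toy with made-up scalar weights, not the paper's actual network or hyperparameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    # Standard LSTM gate equations for scalar input and scalar hidden state.
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate cell
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

def many_to_one_lstm(sequence, w):
    # "Many-to-one": consume the whole sequence, then emit a single
    # fake-probability from the final hidden state only.
    h, c = 0.0, 0.0
    for x in sequence:
        h, c = lstm_step(x, h, c, w)
    return sigmoid(w["wout"] * h + w["bout"])

# Toy fixed weights (illustrative only; a real model learns these).
weights = {k: 0.5 for k in
           ["wi", "ui", "bi", "wf", "uf", "bf",
            "wo", "uo", "bo", "wg", "ug", "bg"]}
weights.update({"wout": 2.0, "bout": -0.5})

# A sequence of per-frame body-pose features (here, one scalar per frame).
score = many_to_one_lstm([0.1, 0.4, 0.9, 0.3], weights)
```

A real detector would use learned weight matrices over pose-feature vectors in a deep learning framework; the point here is only that every frame updates the recurrent state while a single classification is read off the final state.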
2

Niveditha, Zohaib Hasan Princy, Saurabh Sharma, Vishal Paranjape, and Abhishek Singh. "Review of Deep Learning Techniques for Deepfake Image Detection". International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 11, no. 02 (February 25, 2022): 1–14. http://dx.doi.org/10.15662/ijareeie.2022.1102021.

Full text
Abstract:
Deepfake is an advanced synthetic media technology that generates convincingly authentic yet fake images and videos by modifying a person's likeness. The term "Deepfake" is a blend of "Deep learning" and "Fake," highlighting the use of artificial intelligence and deep learning algorithms in its creation. Deepfake generation involves training models to learn the nuances of facial attributes, expressions, motion, and speech patterns to produce fabricated media indistinguishable from real footage. Deepfakes are often used to manipulate human content, especially the invariant facial regions. The spatial relationship between facial attributes is crucial for creating a convincing, hyperrealistic deepfake output. Subtle inconsistencies in facial features, such as eye spacing, skin color, and mouth shape, can serve as indicators for detecting deepfakes. While many techniques have been developed to detect deepfakes, not all are perfectly accurate for every case. As new deepfake creation methods emerge, existing detection strategies must be continually updated to address these advancements. This paper reviews various deepfake image detection methods and deep learning techniques.
3

Sunkari, Venkateswarlu, and Ayyagari Sri Nagesh. "Artificial intelligence for deepfake detection: systematic review and impact analysis". IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 4 (December 1, 2024): 3786. http://dx.doi.org/10.11591/ijai.v13.i4.pp3786-3792.

Full text
Abstract:
Deep learning and artificial intelligence (AI) have enabled deepfakes, prompting concerns about their social impact. Deepfakes have detrimental effects in several businesses, despite their apparent benefits. We explore deepfake detection research and its social implications in this study. We examine capsule networks' ability to detect video deepfakes and their design implications. This strategy reduces parameters and provides excellent accuracy, making it a promising deepfake defense. The social significance of deepfakes is also highlighted, underlining the necessity to understand them. Despite the extensive use of face-swap services, little is known about deepfakes' social impact. The misuse of deepfakes in image-based sexual assault and the distortion of public figures, especially in politics, highlights the necessity for further research on their social impact. Using state-of-the-art deepfake detection methods, such as fake-face and deepfake detectors and broad forgery-analysis tools, reduces the damage deepfakes do. In this paper we analyse various deepfake methods and the social impact of the misuse of deepfake technology, and we give a clear analysis of existing machine learning models. We aim to illuminate the potential effects of deepfakes on society and to suggest solutions by combining study data.
4

Battula Thirumaleshwari Devi et al. "A Comprehensive Survey on Deepfake Methods: Generation, Detection, and Applications". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 654–78. http://dx.doi.org/10.17762/ijritcc.v11i9.8857.

Full text
Abstract:
Due to recent advancements in AI and deep learning, several methods and tools for multimedia transformation, known as deepfake, have emerged. A deepfake is synthetic media where a person's resemblance is used to substitute their presence in an already-existing image or video. Deepfakes have both positive and negative implications. They can be used in politics to simulate events or speeches, in translation to provide natural-sounding translations, in education for virtual experiences, and in entertainment for realistic special effects. The emergence of deepfake face forgery on the internet has raised significant societal concerns. As a result, detecting these forgeries has become an emerging field of research, and many deepfake detection methods have been proposed. This paper introduces deepfakes and explains the different types of deepfakes that exist. It also summarizes various deepfake generation techniques as well as both traditional and AI-based detection techniques. Freely accessible datasets used for deepfake generation are highlighted. To further advance the deepfake research field, we aim to provide relevant research findings, identify existing gaps, and propose emerging trends for future study.
5

Lad, Sumit. "Adversarial Approaches to Deepfake Detection: A Theoretical Framework for Robust Defense". Journal of Artificial Intelligence General Science (JAIGS), ISSN: 3006-4023, 6, no. 1 (September 21, 2024): 46–58. http://dx.doi.org/10.60087/jaigs.v6i1.225.

Full text
Abstract:
The rapid improvements in the capabilities of neural networks and generative adversarial networks (GANs) have given rise to extremely sophisticated deepfake technologies. This has made it very difficult to reliably recognize fake digital content. It has enabled the creation of highly convincing synthetic media which can be used in malicious ways in this era of user-generated information and social media. Existing deepfake detection techniques are effective against early iterations of deepfakes but get increasingly vulnerable to more sophisticated deepfakes and adversarial attacks. In this paper we explore a novel approach to deepfake detection which uses a framework integrating adversarial training to improve the robustness and accuracy of deepfake detection models. By looking deeper into state-of-the-art adversarial machine learning, forensic analysis and deepfake detection techniques, we explore how adversarial training can improve the robustness of deepfake detection against future threats. We use perturbations, which are adversarial examples designed specifically to deceive deepfake detection algorithms. By training deepfake detection models with these perturbations we create detection systems that can more accurately identify deepfakes. Our approach shows promise and avenues for future research in building resilience against deepfakes, with applications in content moderation, security and combating synthetic media manipulation.
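The adversarial-training loop the abstract outlines (craft perturbations that fool the detector, then retrain on them) can be illustrated with a deliberately tiny stand-in: a one-feature logistic-regression "detector" and an FGSM-style perturbation. All data, weights and the epsilon value below are invented for illustration; the paper's models are deep networks:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, w=0.0, b=0.0, lr=0.5, epochs=200):
    # Plain logistic regression via gradient descent (stand-in for a detector).
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(samples, labels):
            p = sigmoid(w * x + b)
            gw += (p - y) * x   # d(loss)/dw for logistic loss
            gb += (p - y)       # d(loss)/db
        w -= lr * gw / len(samples)
        b -= lr * gb / len(samples)
    return w, b

def fgsm(x, y, w, b, eps=0.3):
    # Fast-gradient-sign-style perturbation of the *input*, crafted to
    # increase the detector's loss (push a fake toward "real" and vice versa).
    p = sigmoid(w * x + b)
    grad_x = (p - y) * w        # d(loss)/dx
    return x + eps * (1 if grad_x > 0 else -1)

# Toy data: feature > 0 means "deepfake" (label 1).
xs = [-1.2, -0.7, -0.3, 0.4, 0.8, 1.1]
ys = [0, 0, 0, 1, 1, 1]

w, b = train(xs, ys)                                # baseline detector
adv = [fgsm(x, y, w, b) for x, y in zip(xs, ys)]    # adversarial copies
w_robust, b_robust = train(xs + adv, ys + ys, w, b) # adversarial training
```

The key move is the last line: the detector is retrained on the union of clean and perturbed samples, which is the essence of adversarial training however sophisticated the underlying model is.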
6

Krueger, Natalie, Mounika Vanamala, and Rushit Dave. "Recent Advancements in the Field of Deepfake Detection". International Journal of Computer Science and Information Technology 15, no. 4 (August 27, 2023): 01–11. http://dx.doi.org/10.5121/ijcsit.2023.15401.

Full text
Abstract:
A deepfake is a photo or video of a person whose image has been digitally altered or partially replaced with an image of someone else. Deepfakes have the potential to cause a variety of problems and are often used maliciously. A common usage is altering videos of prominent political figures and celebrities. These deepfakes can portray them making offensive, problematic, and/or untrue statements. Current deepfakes can be very realistic, and when used in this way, can spread panic and even influence elections and political opinions. There are many deepfake detection strategies currently in use but finding the most comprehensive and universal method is critical. So, in this survey we will address the problems of malicious deepfake creation and the lack of universal deepfake detection methods. Our objective is to survey and analyze a variety of current methods and advances in the field of deepfake detection.
7

Kawabe, Akihisa, Ryuto Haga, Yoichi Tomioka, Jungpil Shin, and Yuichi Okuyama. "A Dynamic Ensemble Selection of Deepfake Detectors Specialized for Individual Face Parts". Electronics 12, no. 18 (September 18, 2023): 3932. http://dx.doi.org/10.3390/electronics12183932.

Full text
Abstract:
The development of deepfake technology, based on deep learning, has made it easier to create images of fake human faces that are indistinguishable from the real thing. Many deepfake methods and programs are publicly available and can be used maliciously, for example, by creating fake social media accounts with images of non-existent human faces. To prevent the misuse of such fake images, several deepfake detection methods have been proposed as a countermeasure and have proven capable of detecting deepfakes with high accuracy when the target deepfake model has been identified. However, the existing approaches are not robust to partial editing and/or occlusion caused by masks, glasses, or manual editing, all of which can lead to an unacceptable drop in accuracy. In this paper, we propose a novel deepfake detection approach based on a dynamic configuration of an ensemble model that consists of deepfake detectors. These deepfake detectors are based on convolutional neural networks (CNNs) and are specialized to detect deepfakes by focusing on individual parts of the face. We demonstrate that a dynamic selection of face parts and an ensemble of selected CNN models is effective at realizing highly accurate deepfake detection even from partly edited and occluded images.
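The dynamic-selection idea in this abstract, running only the part-specific detectors whose face region is actually usable and combining their votes, reduces to a few lines. A hedged sketch (part names, scores and the plain averaging rule are invented; the paper's detectors are CNNs and its combination rule may differ):

```python
def dynamic_ensemble_score(part_scores, visible):
    # part_scores: {face part: that detector's fake-probability}
    # visible:     {face part: True if the part is unoccluded in this image}
    # Only detectors whose face part is visible vote; their probabilities
    # are averaged into the final decision.
    selected = [p for part, p in part_scores.items() if visible.get(part)]
    if not selected:
        raise ValueError("no usable face parts")
    return sum(selected) / len(selected)

scores  = {"eyes": 0.9, "nose": 0.8, "mouth": 0.2}
visible = {"eyes": True, "nose": True, "mouth": False}  # mouth hidden by a mask
final = dynamic_ensemble_score(scores, visible)         # averages eyes and nose
```

This is why the approach tolerates masks, glasses or manual edits: an occluded region simply drops out of the ensemble instead of dragging the averaged score down.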
8

Raza, Ali, Kashif Munir, and Mubarak Almutairi. "A Novel Deep Learning Approach for Deepfake Image Detection". Applied Sciences 12, no. 19 (September 29, 2022): 9820. http://dx.doi.org/10.3390/app12199820.

Full text
Abstract:
Deepfake is utilized in synthetic media to generate fake visual and audio content based on a person’s existing media. The deepfake replaces a person’s face and voice with fake media to make it realistic-looking. Fake media content generation is unethical and a threat to the community. Nowadays, deepfakes are highly misused in cybercrimes for identity theft, cyber extortion, fake news, financial fraud, celebrity fake obscenity videos for blackmailing, and many more. According to a recent Sensity report, over 96% of the deepfakes are of obscene content, with most victims being from the United Kingdom, United States, Canada, India, and South Korea. In 2019, cybercriminals generated fake audio content of a chief executive officer to call his organization and ask them to transfer $243,000 to their bank account. Deepfake crimes are rising daily. Deepfake media detection is a big challenge and has high demand in digital forensics. An advanced research approach must be built to protect the victims from blackmailing by detecting deepfake content. The primary aim of our research study is to detect deepfake media using an efficient framework. A novel deepfake predictor (DFP) approach based on a hybrid of VGG16 and convolutional neural network architecture is proposed in this study. The deepfake dataset based on real and fake faces is utilized for building neural network techniques. The Xception, NAS-Net, Mobile Net, and VGG16 are the transfer learning techniques employed in comparison. The proposed DFP approach achieved 95% precision and 94% accuracy for deepfake detection. Our novel proposed DFP approach outperformed transfer learning techniques and other state-of-the-art studies. Our novel research approach helps cybersecurity professionals overcome deepfake-related cybercrimes by accurately detecting the deepfake content and saving the deepfake victims from blackmailing.
9

Singh, Preeti, Khyati Chaudhary, Gopal Chaudhary, Manju Khari, and Bharat Rawal. "A Machine Learning Approach to Detecting Deepfake Videos: An Investigation of Feature Extraction Techniques". Journal of Cybersecurity and Information Management 9, no. 2 (2022): 42–50. http://dx.doi.org/10.54216/jcim.090204.

Full text
Abstract:
Deepfake videos are a growing concern today as they can be used to spread misinformation and manipulate public opinion. In this paper, we investigate the use of different feature extraction techniques for detecting deepfake videos using machine learning algorithms. We explore three feature extraction techniques, including facial landmarks detection, optical flow, and frequency analysis, and evaluate their effectiveness in detecting deepfake videos. We compare the performance of different machine learning algorithms and analyze their ability to detect deepfakes using the extracted features. Our experimental results show that the combination of facial landmarks detection and frequency analysis provides the best performance in detecting deepfake videos, with an accuracy of over 95%. Our findings suggest that machine learning algorithms can be a powerful tool in detecting deepfake videos, and feature extraction techniques play a crucial role in achieving high accuracy.
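The two feature families the authors found most effective, facial-landmark geometry and frequency analysis, can be sketched as a single feature-extraction step. This toy uses a naive DFT and one landmark ratio; the coordinates, the specific ratio and the energy split are illustrative assumptions, not the paper's exact features:

```python
import cmath

def dft_magnitudes(signal):
    # Naive discrete Fourier transform; a real pipeline would use an FFT.
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def high_freq_ratio(signal):
    # Frequency-analysis feature: share of spectral energy above half-band,
    # since synthesis artifacts often show up in high frequencies.
    mags = dft_magnitudes(signal)[1:]            # drop the DC term
    total = sum(m * m for m in mags)
    high = sum(m * m for m in mags[len(mags) // 2:])
    return high / total if total else 0.0

def landmark_feature(landmarks):
    # Facial-landmark feature: inter-eye distance normalised by face width.
    (lx, ly), (rx, ry) = landmarks["left_eye"], landmarks["right_eye"]
    eye_dist = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    return eye_dist / landmarks["face_width"]

def extract_features(landmarks, pixel_row):
    # Concatenate both feature families, mirroring the combination the
    # paper found most effective (landmarks + frequency analysis).
    return [landmark_feature(landmarks), high_freq_ratio(pixel_row)]

face = {"left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0), "face_width": 100.0}
features = extract_features(face, [0.1, 0.9, 0.2, 0.8, 0.15, 0.85, 0.25, 0.75])
```

The resulting feature vector would then be fed to any standard classifier; the abstract's >95% figure comes from exactly this kind of combined representation, not from either family alone.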
10

Qureshi, Shavez Mushtaq, Atif Saeed, Sultan H. Almotiri, Farooq Ahmad, and Mohammed A. Al Ghamdi. "Deepfake forensics: a survey of digital forensic methods for multimodal deepfake identification on social media". PeerJ Computer Science 10 (May 27, 2024): e2037. http://dx.doi.org/10.7717/peerj-cs.2037.

Full text
Abstract:
The rapid advancement of deepfake technology poses an escalating threat of misinformation and fraud enabled by manipulated media. Despite the risks, a comprehensive understanding of deepfake detection techniques has not materialized. This research tackles this knowledge gap by providing an up-to-date systematic survey of the digital forensic methods used to detect deepfakes. A rigorous methodology is followed, consolidating findings from recent publications on deepfake detection innovation. Prevalent datasets that underpin new techniques are analyzed. The effectiveness and limitations of established and emerging detection approaches across modalities including image, video, text and audio are evaluated. Insights into real-world performance are shared through case studies of high-profile deepfake incidents. Current research limitations around aspects like cross-modality detection are highlighted to inform future work. This timely survey furnishes researchers, practitioners and policymakers with a holistic overview of the state-of-the-art in deepfake detection. It concludes that continuous innovation is imperative to counter the rapidly evolving technological landscape enabling deepfakes.

Theses on the topic "Deepfake Detection"

1

Hasanaj, Enis, Albert Aveler, and William Söder. "Cooperative edge deepfake detection". Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.

Full text
Abstract:
Deepfakes are an emerging problem in social media and for celebrities and political profiles, it can be devastating to their reputation if the technology ends up in the wrong hands. Creating deepfakes is becoming increasingly easy. Attempts have been made at detecting whether a face in an image is real or not but training these machine learning models can be a very time-consuming process. This research proposes a solution to training deepfake detection models cooperatively on the edge. This is done in order to evaluate if the training process, among other things, can be made more efficient with this approach.  The feasibility of edge training is evaluated by training machine learning models on several different types of iPhone devices. The models are trained using the YOLOv2 object detection system.  To test if the YOLOv2 object detection system is able to distinguish between real and fake human faces in images, several models are trained on a computer. Each model is trained with either different number of iterations or different subsets of data, since these metrics have been identified as important to the performance of the models. The performance of the models is evaluated by measuring the accuracy in detecting deepfakes.  Additionally, the deepfake detection models trained on a computer are ensembled using the bagging ensemble method. This is done in order to evaluate the feasibility of cooperatively training a deepfake detection model by combining several models.  Results show that the proposed solution is not feasible due to the time the training process takes on each mobile device. Additionally, each trained model is about 200 MB, and the size of the ensemble model grows linearly by each model added to the ensemble. This can cause the ensemble model to grow to several hundred gigabytes in size.
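The bagging step described above (several models trained on bootstrap resamples, combined by majority vote) and the reported linear growth in ensemble size can be sketched as follows. The "models" here are trivial threshold stubs standing in for the thesis's YOLOv2 detectors, and the 200 MB per-model figure is taken from the abstract:

```python
import random

def train_stub(dataset, seed):
    # Stand-in for training one detector on a bootstrap sample: here each
    # "model" is just a threshold fitted to the mean of its resample.
    rng = random.Random(seed)
    sample = [rng.choice(dataset) for _ in dataset]   # bootstrap resample
    return sum(x for x, _ in sample) / len(sample)

def bagging_predict(models, x):
    # Majority vote over the bagged models (the bagging ensemble method).
    votes = sum(1 for t in models if x > t)
    return 1 if votes > len(models) / 2 else 0

# Toy (feature, label) pairs: feature > ~0.5 means "deepfake".
data = [(0.2, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
models = [train_stub(data, seed) for seed in range(5)]

# The thesis reports ~200 MB per trained model, so the ensemble grows
# linearly: k models -> roughly k * 200 MB.
ensemble_size_mb = len(models) * 200
```

The last line makes the thesis's feasibility complaint concrete: the vote itself is cheap, but because every base model must be stored in full, storage scales linearly with ensemble size.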
2

Emir, Alkazhami. "Facial Identity Embeddings for Deepfake Detection in Videos". Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170587.

Full text
Abstract:
Forged videos of swapped faces, so-called deepfakes, have gained a lot of attention in recent years. Methods for automated detection of this type of manipulation are also seeing rapid progress in their development. The purpose of this thesis work is to evaluate the possibility and effectiveness of using deep embeddings from facial recognition networks as a base for detection of such deepfakes. In addition, the thesis aims to answer whether or not the identity embeddings contain information that can be used for detection when analyzed over time, and whether it is suitable to include information about the person's head pose in this analysis. To answer these questions, three classifiers are created with the intent to answer one question each. Their performances are compared with each other, and it is shown that identity embeddings are suitable as a basis for deepfake detection. Temporal analysis of the embeddings also seems effective, at least for deepfake methods that only work on a frame-by-frame basis. Including information about head poses in the videos is shown not to improve such a classifier.
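The temporal-analysis idea, that identity embeddings of a genuine video stay consistent across frames while a frame-by-frame face swap makes them drift, can be sketched with cosine similarity against a mean identity. The 2-D embeddings and the 0.9 threshold are invented for illustration; real facial-recognition embeddings are high-dimensional:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def identity_consistency(frame_embeddings):
    # Compare each frame's identity embedding with the mean identity over
    # the clip; a genuine video should stay close to one identity.
    dim = len(frame_embeddings[0])
    mean = [sum(e[i] for e in frame_embeddings) / len(frame_embeddings)
            for i in range(dim)]
    return min(cosine(e, mean) for e in frame_embeddings)  # worst frame decides

def looks_fake(frame_embeddings, threshold=0.9):
    return identity_consistency(frame_embeddings) < threshold

real = [[1.0, 0.0], [0.99, 0.05], [0.98, 0.02]]  # stable identity
fake = [[1.0, 0.0], [0.2, 0.9], [0.95, 0.1]]     # identity jumps mid-video
```

Taking the minimum similarity (rather than the average) reflects the thesis's observation that frame-by-frame swap methods leave occasional badly-swapped frames, and one inconsistent frame is enough to flag the clip.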
3

Guarnera, Luca. "Discovering Fingerprints for Deepfake Detection and Multimedia-Enhanced Forensic Investigations". Doctoral thesis, Università degli studi di Catania, 2021. http://hdl.handle.net/20.500.11769/539620.

Full text
Abstract:
Forensic Science, which concerns the application of technical and scientific methods to justice, investigation and evidence discovery, has evolved over the years, giving birth to several fields such as Multimedia Forensics, which involves the analysis of digital image, video and audio contents. Multimedia data was (and still is) altered using common editing tools such as Photoshop and GIMP. Rapid advances in Deep Learning have opened up the possibility of creating sophisticated algorithms capable of manipulating images, video and audio in a "simple" manner, causing the emergence of a powerful yet frightening new phenomenon called deepfake: synthetic multimedia data created and/or altered using generative models. A great discovery made by forensic researchers over the years concerns the possibility of extracting a unique fingerprint that can determine the devices and software used to create the data itself. Unfortunately, extracting these traces turns out to be a complicated task. A fingerprint can be extracted not only from multimedia data, in order to determine the devices used in the acquisition phase, the social networks where the file was uploaded, or, more recently, the generative models used to create deepfakes; in general, such a trace can also be extracted from evidence recovered at a crime scene, such as shells or projectiles, to determine the model of gun that fired them (Forensic Firearms Ballistics Comparison). Forensic Analysis of Handwritten Documents is another field of Forensic Science, which can determine the author of a manuscript by extracting a fingerprint defined by a careful analysis of the text style in the document. Developing new algorithms for Deepfake Detection, Forensic Firearms Ballistics Comparison, and Forensic Handwritten Document Analysis was the main focus of this Ph.D. thesis.
These three macro areas of Forensic Science share a common element, namely a unique fingerprint present in the data itself that can be extracted in order to solve the various tasks. Therefore, for each of these topics a preliminary analysis is performed and new detection techniques are presented, obtaining promising results in all these domains.
4

Tak, Hemlata. "End-to-End Modeling for Speech Spoofing and Deepfake Detection". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS104.pdf.

Full text
Abstract:
Voice biometric systems are being used in various applications for secure user authentication using automatic speaker verification technology. However, these systems are vulnerable to spoofing attacks, which have become even more challenging with recent advances in artificial intelligence algorithms. There is hence a need for more robust and efficient detection techniques. This thesis proposes novel detection algorithms which are designed to perform reliably in the face of the highest-quality attacks. The first contribution is a non-linear ensemble of sub-band classifiers, each of which uses a Gaussian mixture model. Competitive results show that models which learn sub-band-specific discriminative information can substantially outperform models trained on full-band signals. Given that deep neural networks are more powerful and can perform both feature extraction and classification, the second contribution is a RawNet2 model. It is an end-to-end (E2E) model which learns features directly from the raw waveform. The third contribution includes the first use of graph neural networks (GNNs) with an attention mechanism to model the complex relationship between spoofing cues present in the spectral and temporal domains. We propose an E2E spectro-temporal graph attention network called RawGAT-ST. The RawGAT-ST model is further extended to an integrated spectro-temporal graph attention network, named AASIST, which exploits the relationship between heterogeneous spectral and temporal graphs. Finally, this thesis proposes a novel data augmentation technique called RawBoost and uses a self-supervised, pre-trained speech model as a front-end to improve generalisation in in-the-wild conditions.
5

Moufidi, Abderrazzaq. "Machine Learning-Based Multimodal integration for Short Utterance-Based Biometrics Identification and Engagement Detection". Electronic Thesis or Diss., Angers, 2024. http://www.theses.fr/2024ANGE0026.

Full text
Abstract:
The rapid advancement and democratization of technology have led to an abundance of sensors. Consequently, the integration of these diverse modalities presents an advantage for numerous real-life applications, such as biometric recognition and engagement detection. In the field of multimodality, researchers have developed various fusion architectures, ranging from early and hybrid to late fusion approaches. However, these architectures may have limitations with short utterances and brief video segments, necessitating a paradigm shift towards multimodal machine learning techniques that promise precision and efficiency for short-duration data analysis. In this thesis, we rely on the integration of multimodality to tackle these challenges, ranging from supervised biometric identification to unsupervised student engagement detection. This PhD began with a first contribution on the integration of the multiscale Wavelet Scattering Transform with the x-vectors architecture, through which we enhanced the accuracy of speaker identification in scenarios involving short utterances. Building on the benefits of multimodality, a late fusion architecture combining lips depth videos and audio signals further improved identification accuracy under short utterances, using effective and less computationally demanding methods to extract spatiotemporal features. Among biometrics challenges is the emerging threat of deepfakes. Therefore, we focused on elaborating deepfake detection methods based on shallow learning and a fine-tuned version of our previous late fusion architecture applied to RGB lips videos and audio. By employing hand-crafted anomaly detection methods for both audio and visual modalities, the study demonstrated robust detection capabilities across various datasets and conditions, emphasizing the importance of multimodal approaches in countering evolving deepfake techniques.
Expanding to educational contexts, the dissertation explores multimodal student engagement detection in classrooms. Using low-cost sensors to capture heart rate signals and facial expressions, the study developed a reproducible dataset and pipeline for identifying significant moments, accounting for cultural nuances. The analysis of facial expressions using a Vision Transformer (ViT), fused with heart rate signal processing and validated through expert observations, showcased the potential of real-time monitoring to enhance educational outcomes through timely interventions.
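A late-fusion decision of the kind described above can be sketched with a weighted combination of per-modality scores; the modality names, scores, and weights below are hypothetical placeholders, not the thesis's actual detectors:

```python
def late_fusion(scores_by_modality, weights):
    """Combine per-modality "fake" scores in [0, 1] with a normalized weighted sum.

    Each modality's detector runs independently; fusion happens only at the
    score level, which is what distinguishes late fusion from early fusion.
    """
    total = sum(weights[m] for m in scores_by_modality)
    return sum(scores_by_modality[m] * weights[m] for m in scores_by_modality) / total

# Hypothetical outputs of independent lips-video and audio anomaly detectors.
scores = {"lips_rgb": 0.82, "audio": 0.64}
weights = {"lips_rgb": 0.6, "audio": 0.4}

fused = late_fusion(scores, weights)
label = "fake" if fused >= 0.5 else "real"
```

Because the modalities are scored separately, one detector can be retrained or swapped without touching the other, at the cost of ignoring cross-modal feature interactions.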
Gli stili APA, Harvard, Vancouver, ISO e altri
6

Gardner, Angelica. "Stronger Together? An Ensemble of CNNs for Deepfakes Detection". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97643.

Testo completo
Abstract (sommario):
Deepfakes technology is a face-swap technique that enables anyone to replace faces in a video, with highly realistic results. Despite its usefulness, if used maliciously this technique can have a significant impact on society, for instance through the spreading of fake news or cyberbullying. This makes deepfakes detection a problem of utmost importance. In this paper, I tackle the problem of deepfakes detection by identifying deepfake forgeries in video sequences. Inspired by the state of the art, I study the ensembling of different machine learning solutions built on convolutional neural networks (CNNs) and use these models as objects for comparison between ensemble and single-model performances. Existing work in the research field of deepfakes detection suggests that the escalated challenges posed by modern deepfake videos make detection increasingly difficult. I evaluate that claim by testing the detection performance of four single CNN models as well as six stacked ensembles on three modern deepfakes datasets. I compare various approaches to combining single models and to how their predictions should be incorporated into the ensemble output. I found that the best approach for deepfakes detection is to create an ensemble, although the ensembling approach plays a crucial role in the detection performance. The final proposed solution is an ensemble of all available single models, which uses soft (weighted) voting to combine its base learners' predictions. Results show that this proposed solution significantly improved deepfakes detection performance and substantially outperformed all single models.
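Soft (weighted) voting of the kind the proposed solution uses can be sketched as follows; the base-learner probabilities and the validation-accuracy weights below are invented for illustration:

```python
import numpy as np

def soft_vote(probs, weights):
    """Weighted average of base-learner class probabilities.

    probs:   (n_models, n_classes) array of per-model softmax outputs.
    weights: (n_models,) array, e.g. held-out accuracy of each model.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so weights sum to 1
    return w @ np.asarray(probs, dtype=float)  # (n_classes,) ensemble distribution

# Hypothetical outputs of four CNN base learners for classes [real, fake].
probs = [[0.40, 0.60],
         [0.55, 0.45],
         [0.20, 0.80],
         [0.30, 0.70]]
weights = [0.90, 0.85, 0.93, 0.88]        # e.g. validation accuracies

ensemble = soft_vote(probs, weights)
prediction = int(np.argmax(ensemble))     # 1 -> fake
```

Note how the dissenting second model is outvoted: averaging probabilities rather than hard labels lets confident models contribute more than uncertain ones.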
Gli stili APA, Harvard, Vancouver, ISO e altri
7

Chang, Ching-Tang, e 張景棠. "Detecting Deepfake Videos with CNN and Image Partitioning". Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5394052%22.&searchmode=basic.

Testo completo
Abstract (sommario):
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
107
AI-generated images are gradually becoming indistinguishable from photographs. When generated images are used in inappropriate cases, they can damage people's rights and interests, and such doubtful images raise legal problems. The issue of detecting digital forgery has existed for many years; however, the fake images enabled by advances in science and technology are more difficult to distinguish. Therefore, this thesis applies deep learning technology to detect controversial face-manipulation images. We propose to segment each image block by block and use a CNN to learn the features of each block separately. Finally, each block's prediction is voted on in an ensemble model to detect forged images. Specifically, we recognize Faceswap, DeepFakes, and Face2Face with the dataset provided by FaceForensics++. Nowadays, classifiers require not only high accuracy but also robustness across different datasets. Therefore, we train on some data to test whether the model is robust on other data. We collected digital forgeries generated by different methods on video-sharing platforms to test the generalization of our model in detecting these forgeries.
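The block-wise scheme can be sketched as follows; the per-block classifier here is a trivial stand-in for the trained CNN (it flags only perfectly flat tiles), so only the partition-and-vote structure reflects the thesis:

```python
import numpy as np

def split_into_blocks(image, block):
    """Split an H x W image into non-overlapping block x block tiles."""
    h, w = image.shape[:2]
    return [image[r:r + block, c:c + block]
            for r in range(0, h - block + 1, block)
            for c in range(0, w - block + 1, block)]

def ensemble_vote(blocks, block_classifier):
    """Majority vote over per-block fake/real predictions (1 = fake)."""
    votes = [block_classifier(b) for b in blocks]
    return int(sum(votes) * 2 > len(votes)), votes

# Stub standing in for the per-block CNN: flags a tile as manipulated when
# its intensity is implausibly flat (illustrative only).
fake_block = lambda b: int(b.std() < 1e-6)

img = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
img[0:16, 0:16] = 128.0           # one "tampered", perfectly flat tile

blocks = split_into_blocks(img, 16)
label, votes = ensemble_vote(blocks, fake_block)
```

Splitting into tiles lets each classifier specialize on local artifacts, while the vote makes the final decision robust to a few misclassified blocks.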
Gli stili APA, Harvard, Vancouver, ISO e altri
8

SONI, ANKIT. "DETECTING DEEPFAKES USING HYBRID CNN-RNN MODEL". Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19168.

Testo completo
Abstract (sommario):
We live in the world of digital media and are connected to various types of digital media content in the form of images and videos. Our lives are surrounded by digital content, and thus the originality of content is very important. In recent times there has been a huge emergence of deep learning-based tools used to create believable manipulated media known as Deepfakes. These are realistic fake media that can threaten reputation and privacy and can even prove to be a serious threat to public security. They can be used to create political distress, spread fake terrorism, or blackmail anyone. With growing technology, the tampered media being generated is so realistic that it can even fool the human eye. Hence, we need better deepfake detection algorithms to detect deepfakes efficiently. The proposed system is based on a CNN followed by an RNN. The CNN model deployed here is SE-ResNeXt-101, used to extract feature vectors from the videos; these feature vectors are then used to train the RNN model, an LSTM, to classify videos as real or deepfake. We evaluate our method on a dataset assembled from a huge number of videos collected from various distributed sources, and we demonstrate how a simple architecture can be used to attain competitive results.
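The CNN-then-RNN pipeline the abstract describes can be shown in miniature. The feature extractor and the recurrent readout below are crude numpy stand-ins for SE-ResNeXt-101 and the LSTM (an exponential moving state instead of learned gates), and the weights are untrained; only the many-to-one shape of the pipeline is illustrated:

```python
import numpy as np

def extract_frame_features(frames):
    """Stand-in for the CNN backbone: one feature vector per frame.

    A crude summary (mean, std, vertical gradient energy) replaces the
    SE-ResNeXt-101 embedding used in the thesis.
    """
    return np.stack([np.array([f.mean(), f.std(),
                               np.abs(np.diff(f, axis=0)).mean()])
                     for f in frames])

def sequence_score(features, w, alpha=0.5):
    """Many-to-one readout: exponentially smooth the per-frame features
    (a stand-in for the LSTM hidden state) and apply a logistic output."""
    state = np.zeros(features.shape[1])
    for x in features:                     # consume the sequence frame by frame
        state = alpha * state + (1 - alpha) * x
    z = state @ w
    return 1.0 / (1.0 + np.exp(-z))        # P(fake)

rng = np.random.default_rng(1)
frames = [rng.uniform(0, 1, size=(8, 8)) for _ in range(10)]  # toy "video"
w = np.array([1.0, -2.0, 0.5])                                # untrained weights

p_fake = sequence_score(extract_frame_features(frames), w)
```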
Gli stili APA, Harvard, Vancouver, ISO e altri
9

RASOOL, AALE. "DETECTING DEEPFAKES WITH MULTI-MODEL NEURAL NETWORKS: A TRANSFER LEARNING APPROACH". Thesis, 2023. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19993.

Testo completo
Abstract (sommario):
The prevalence of deepfake technology has led to serious worries about the veracity and dependability of visual media. To reduce the harm caused by malicious use of this technology, it is essential to identify deepfakes. By using the Vision Transformer (ViT) model for classification and the InceptionResNetV2 architecture for feature extraction, we offer a novel approach to deepfake detection in this thesis. Highly discriminative features are extracted from the input images using the InceptionResNetV2 network, which has been pre-trained on a substantial dataset. The Vision Transformer model then receives these features and uses the self-attention mechanism to capture long-range relationships and classify the images as deepfake or real. We use transfer learning techniques to improve the performance of the deepfake detection system. The InceptionResNetV2 model is fine-tuned on a deepfake-specific dataset, which allows the pre-trained weights to adapt to the task at hand, enabling the extraction of meaningful and discriminative deepfake features. The refined features are then fed into the ViT model for classification. Extensive experiments are conducted to evaluate the performance of our proposed approach on various deepfake datasets. The results demonstrate the effectiveness of the InceptionResNetV2 and ViT combination, achieving high accuracy and robustness in deepfake detection across different types of manipulations, including face swapping and facial re-enactment. Additionally, the use of transfer learning significantly reduces the training time and computational resources required to train the deepfake detection system. This research's outcomes contribute to advancing deepfake detection techniques by leveraging state-of-the-art architectures for feature extraction and classification. The fusion of InceptionResNetV2 and ViT, along with the implementation of transfer learning, offers a powerful and efficient solution for accurate deepfake detection, thereby safeguarding the integrity and trustworthiness of visual media in an era of increasing digital manipulation.
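The transfer-learning recipe (frozen pre-trained extractor, small trainable head) can be shown in miniature; the fixed random projection below merely stands in for InceptionResNetV2, and the two-class data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" backbone: a fixed random projection standing in for
# InceptionResNetV2. Its weights are never updated during fine-tuning.
W_frozen = rng.normal(size=(32, 8)) / np.sqrt(32)
extract = lambda x: np.tanh(x @ W_frozen)

# Toy stand-in for a deepfake-specific dataset: two shifted Gaussian classes.
X = np.vstack([rng.normal(0.0, 1.0, (50, 32)),
               rng.normal(1.0, 1.0, (50, 32))])
y = np.array([0] * 50 + [1] * 50)

# Only this small logistic head is trained (the "fine-tuning" step).
F = extract(X)                         # features computed once, backbone frozen
w, b = np.zeros(8), 0.0
losses = []
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    losses.append(-np.mean(y * np.log(p + 1e-12)
                           + (1 - y) * np.log(1 - p + 1e-12)))
    g = p - y                          # cross-entropy gradient
    w -= 0.3 * F.T @ g / len(y)
    b -= 0.3 * g.mean()
```

Because gradients stop at the head, each step touches only 9 parameters, which is the source of the training-time savings the abstract mentions.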
Gli stili APA, Harvard, Vancouver, ISO e altri

Libri sul tema "Deepfake Detection"

1

Abdul-Majeed, Ghassan H., Adriana Burlea-Schiopoiu, Parul Aggarwal e Ahmed J. Obaid. Handbook of Research on Advanced Practical Approaches to Deepfake Detection and Applications. IGI Global, 2022.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
2

Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. CRC Press, 2022.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
3

Gaur, Loveleen. Deepfakes: Creation, Detection, and Impact. Taylor & Francis Group, 2022.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
4

Busch, Christoph, Christian Rathgeb, Ruben Vera-Rodriguez e Ruben Tolosana. Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks. Springer International Publishing AG, 2021.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri

Capitoli di libri sul tema "Deepfake Detection"

1

Lyu, Siwei. "DeepFake Detection". In Multimedia Forensics, 313–31. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_12.

Testo completo
Abstract (sommario):
One particularly disconcerting form of disinformation is impersonating audio/video backed by advanced AI technologies, in particular deep neural networks (DNNs). These media forgeries are commonly known as DeepFakes. AI-based tools are making it easier and faster than ever to create compelling fakes that are challenging to spot. While there are interesting and creative applications of this technology, it can also be weaponized to cause negative consequences. In this chapter, we survey the state-of-the-art DeepFake detection methods.
Gli stili APA, Harvard, Vancouver, ISO e altri
2

Li, Yuezun, Pu Sun, Honggang Qi e Siwei Lyu. "Toward the Creation and Obstruction of DeepFakes". In Handbook of Digital Face Manipulation and Detection, 71–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_4.

Testo completo
Abstract (sommario):
AI-synthesized face-swapping videos, commonly known as DeepFakes, are an emerging problem threatening the trustworthiness of online information. The need to develop and evaluate DeepFake detection algorithms calls for large-scale datasets. However, current DeepFake datasets suffer from low visual quality and do not resemble DeepFake videos circulated on the Internet. We present a new large-scale challenging DeepFake video dataset, Celeb-DF, which contains 5,639 high-quality DeepFake videos of celebrities generated using an improved synthesis process. We conduct a comprehensive evaluation of DeepFake detection methods and datasets to demonstrate the escalated level of challenges posed by Celeb-DF. Then we introduce Landmark Breaker, the first dedicated method to disrupt facial landmark extraction, and apply it to obstruct the generation of DeepFake videos. The experiments are conducted on three state-of-the-art facial landmark extractors using our Celeb-DF dataset.
Gli stili APA, Harvard, Vancouver, ISO e altri
3

Korshunov, Pavel, e Sébastien Marcel. "The Threat of Deepfakes to Computer and Human Visions". In Handbook of Digital Face Manipulation and Detection, 97–115. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_5.

Testo completo
Abstract (sommario):
Deepfake videos, where a person's face is automatically swapped with the face of someone else, are becoming easier to generate, with more realistic results. Concern is growing about the impact of widespread deepfake videos on societal trust in video recordings. In this chapter, we demonstrate how dangerous deepfakes are for both human and computer vision by showing how well these videos can fool face recognition algorithms and naïve human subjects. We also show how well state-of-the-art deepfake detection algorithms can detect deepfakes and whether they can outperform humans.
Gli stili APA, Harvard, Vancouver, ISO e altri
4

Hao, Hanxiang, Emily R. Bartusiak, David Güera, Daniel Mas Montserrat, Sriram Baireddy, Ziyue Xiang, Sri Kalyan Yarlagadda et al. "Deepfake Detection Using Multiple Data Modalities". In Handbook of Digital Face Manipulation and Detection, 235–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_11.

Testo completo
Abstract (sommario):
Falsified media threatens key areas of our society, ranging from politics to journalism to economics. Simple and inexpensive tools available today enable easy, credible manipulations of multimedia assets. Some even utilize advanced artificial intelligence concepts to manipulate media, resulting in videos known as deepfakes. Social media platforms and their “echo chamber” effect propagate fabricated digital content at scale, sometimes with dire consequences in real-world situations. However, ensuring semantic consistency across falsified media assets of different modalities is still very challenging for current deepfake tools. Therefore, cross-modal analysis (e.g., video-based and audio-based analysis) provides forensic analysts an opportunity to identify inconsistencies with higher accuracy. In this chapter, we introduce several approaches to detect deepfakes. These approaches leverage different data modalities, including video and audio. We show that the presented methods achieve accurate detection for various large-scale datasets.
Gli stili APA, Harvard, Vancouver, ISO e altri
5

Bhilare, Omkar, Rahul Singh, Vedant Paranjape, Sravan Chittupalli, Shraddha Suratkar e Faruk Kazi. "DEEPFAKE CLI: Accelerated Deepfake Detection Using FPGAs". In Parallel and Distributed Computing, Applications and Technologies, 45–56. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-29927-8_4.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
6

Saurav, Dheeraj Azad, Preeti Pandey, Mohammad Sheihan Javaid e Utkarsh. "Deepfake Detection Using AI". In Advancement of Intelligent Computational Methods and Technologies, 98–102. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003487906-19.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
7

Hernandez-Ortega, Javier, Ruben Tolosana, Julian Fierrez e Aythami Morales. "DeepFakes Detection Based on Heart Rate Estimation: Single- and Multi-frame". In Handbook of Digital Face Manipulation and Detection, 255–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_12.

Testo completo
Abstract (sommario):
This chapter describes a DeepFake detection framework based on physiological measurement. In particular, we consider information related to the heart rate using remote photoplethysmography (rPPG). rPPG methods analyze video sequences looking for subtle color changes in the human skin, revealing the presence of human blood under the tissues. This chapter explores to what extent rPPG is useful for the detection of DeepFake videos. We analyze the recent fake detector named DeepFakesON-Phys, which is based on a Convolutional Attention Network (CAN) that extracts spatial and temporal information from video frames, analyzing and combining both sources to better detect fake videos. DeepFakesON-Phys has been experimentally evaluated using the latest public databases in the field: Celeb-DF v2 and DFDC. The results achieved for DeepFake detection based on a single frame are over 98% AUC (Area Under the Curve) on both databases, proving the success of fake detectors based on physiological measurement in detecting the latest DeepFake videos. In this chapter, we also propose and study heuristic and statistical approaches for performing continuous DeepFake detection by combining scores from consecutive frames with low latency and high accuracy (100% on the Celeb-DF v2 evaluation dataset). We show that combining scores extracted from short-time video sequences can improve the discrimination power of DeepFakesON-Phys.
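The statistical combination of consecutive-frame scores can be sketched as a sliding-window average; the per-frame scores below are simulated, not outputs of DeepFakesON-Phys:

```python
import numpy as np

def windowed_scores(frame_scores, window):
    """Average per-frame fake scores over a short sliding window.

    A simple statistical stand-in for the score-combination rules studied in
    the chapter: short windows keep latency low, while averaging suppresses
    single-frame noise.
    """
    s = np.asarray(frame_scores, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(s, kernel, mode="valid")

# Hypothetical per-frame detector outputs: a noisy real segment followed by
# a noisy fake segment.
rng = np.random.default_rng(0)
frames = np.concatenate([rng.normal(0.2, 0.15, 40),
                         rng.normal(0.8, 0.15, 40)])

combined = windowed_scores(frames, window=9)
decisions = combined > 0.5        # one low-latency decision per window
```

The window length trades latency against stability: with nine frames the decision lags by a fraction of a second at typical frame rates, yet isolated noisy frames no longer flip the label.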
Gli stili APA, Harvard, Vancouver, ISO e altri
8

Nadimpalli, Aakash Varma, e Ajita Rattani. "GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection". In Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges, 320–37. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37742-6_25.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
9

Rehman, Mariam, Mehran Rasool e Sadaf Safder. "DeepFake Detection Using Deep Learning". In Communications in Computer and Information Science, 142–54. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7969-1_11.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
10

Rajesh, N., M. S. Prajwala, Nancy Kumari, Muhammad Rayyan e A. C. Ramachandra. "Hybrid Model for Deepfake Detection". In Lecture Notes in Electrical Engineering, 639–49. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2828-4_57.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri

Atti di convegni sul tema "Deepfake Detection"

1

Ju, Yan, Chengzhe Sun, Shan Jia, Shuwei Hou, Zhaofeng Si, Soumyya Kanti Datta, Lipeng Ke, Riky Zhou, Anita Nikolich e Siwei Lyu. "DeepFake-o-meter v2.0: An Open Platform for DeepFake Detection". In 2024 IEEE 7th International Conference on Multimedia Information Processing and Retrieval (MIPR), 439–45. IEEE, 2024. http://dx.doi.org/10.1109/mipr62202.2024.00075.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
2

Sarada, B., TVS Laxmi Sudha, Meghana Domakonda e B. Vasantha. "Audio Deepfake Detection and Classification". In 2024 Asia Pacific Conference on Innovation in Technology (APCIT), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/apcit62007.2024.10673438.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
3

S, Prakash Raj, Pravin D, Sabareeswaran G, Sanjith R. K e Gomathi B. "Deepfake Detection Using Deep Learning". In 2024 10th International Conference on Advanced Computing and Communication Systems (ICACCS), 1768–74. IEEE, 2024. http://dx.doi.org/10.1109/icaccs60874.2024.10717155.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
4

Ravale, Ujwala, Riya Ramesh Tattu, Ashish Baban Bhoir e Sneha Bhaskar Mahajan. "Deepfake Detection using InceptionResNetV2 Model". In 2024 IEEE 3rd World Conference on Applied Intelligence and Computing (AIC), 873–79. IEEE, 2024. http://dx.doi.org/10.1109/aic61668.2024.10730917.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
5

Khan, Shafiqul Alam, e Damian Valles. "Deepfake Detection Using Transfer Learning". In 2024 IEEE 15th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 556–62. IEEE, 2024. http://dx.doi.org/10.1109/uemcon62879.2024.10754706.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
6

Ouajdi, Hafsa, Oussama Hadder, Modan Tailleur, Mathieu Lagrange e Laurie M. Heller. "Detection of Deepfake Environmental Audio". In 2024 32nd European Signal Processing Conference (EUSIPCO), 196–200. IEEE, 2024. http://dx.doi.org/10.23919/eusipco63174.2024.10715076.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
7

Wu, Hsiu-Fu, Chia-Yi Hsu, Chih-Hsun Lin, Chia-Mu Yu e Chun-Ying Huang. "Deepfake Detection through Temporal Attention". In 2024 33rd Wireless and Optical Communications Conference (WOCC), 109–13. IEEE, 2024. https://doi.org/10.1109/wocc61718.2024.10786063.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
8

Xie, Yuankun, Chenxu Xiong, Xiaopeng Wang, Zhiyong Wang, Yi Lu, Xin Qi, Ruibo Fu et al. "Does Current Deepfake Audio Detection Model Effectively Detect ALM-Based Deepfake Audio?" In 2024 IEEE 14th International Symposium on Chinese Spoken Language Processing (ISCSLP), 481–85. IEEE, 2024. https://doi.org/10.1109/iscslp63861.2024.10800375.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
9

Win, Aung Kyi, Myo Min Hein, Chit Htay Lwin, Aung Myo Thu, Myo Myat Thu e Nu Yin Khaing. "A Novel Methodology for Deepfake Detection Using MesoNet and GAN-based Deepfake Creation". In 2024 5th International Conference on Advanced Information Technologies (ICAIT), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/icait65209.2024.10754912.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
10

Bao, Han, Xuhong Zhang, Qinying Wang, Kangming Liang, Zonghui Wang, Shouling Ji e Wenzhi Chen. "Pluggable Watermarking of Deepfake Models for Deepfake Detection". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/37.

Testo completo
Abstract (sommario):
Deepfake model misuse poses major security concerns. Existing passive and active Deepfake detection methods both suffer from a lack of generalizability and robustness. In this study, we propose a pluggable and efficient active model watermarking framework for Deepfake detection. This approach facilitates the embedding of identification watermarks across a variety of Deepfake generation models, enabling easy extraction by authorities for detection purposes. Specifically, our method leverages the universal convolutional structure in generative model decoders. It employs convolutional kernel sparsification for adaptive watermark embedding positioning and introduces convolutional kernel normalization to seamlessly integrate watermark parameters with those of the generative model. For watermark extraction, we jointly train a watermark extractor based on a Deepfake detection model and use BCH encoding to identify watermark images effectively. Finally, we apply our approach to eight major types of Deepfake generation models. Experiments show our method successfully detects Deepfakes with an average accuracy exceeding 94% even over heavily lossy channels. This approach operates independently of the generation model's training without affecting the original model's performance. Furthermore, our model requires training only a very limited number of parameters, and it is resilient against three major adaptive attacks. The source code can be found at https://github.com/GuaiZao/Pluggable-Watermarking
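As a toy illustration of embedding a watermark into low-importance kernel weights (the flavor of embedding that kernel sparsification enables), consider the sketch below. It is not the paper's method: it omits BCH error correction and the joint extractor training, and the chosen positions and epsilon are arbitrary:

```python
import numpy as np

def embed_watermark(kernel, bits, eps=1e-3):
    """Write each bit into the sign of one of the smallest-magnitude kernel
    weights, i.e. the positions a sparsity-guided scheme would deem least
    important for the model's output."""
    flat = kernel.flatten()
    idx = np.argsort(np.abs(flat))[:len(bits)]   # least-important weights
    for i, bit in zip(idx, bits):
        flat[i] = eps if bit else -eps           # sign encodes the bit
    return flat.reshape(kernel.shape), idx

def extract_watermark(kernel, idx):
    """Read the bits back from the signs at the recorded positions."""
    return [int(v > 0) for v in kernel.flatten()[idx]]

rng = np.random.default_rng(42)
kernel = rng.normal(0, 0.1, size=(3, 3, 8))      # a toy conv kernel
bits = [1, 0, 1, 1, 0, 0, 1, 0]

marked, idx = embed_watermark(kernel.copy(), bits)
recovered = extract_watermark(marked, idx)
```

A real deployment cannot assume the marked weights survive untouched, which is why the paper layers an error-correcting code and a trained extractor on top of the raw embedding.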
Gli stili APA, Harvard, Vancouver, ISO e altri