A selection of scholarly literature on the topic "Adversarial multimedia forensics"


Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Adversarial multimedia forensics".


Journal articles on the topic "Adversarial multimedia forensics"

1

.., Ossama, and Mhmed Algrnaodi. "Deep Learning Fusion for Attack Detection in Internet of Things Communications." Fusion: Practice and Applications 9, no. 2 (2022): 27–47. http://dx.doi.org/10.54216/fpa.090203.

Abstract:
The increasing use of deep learning techniques in multimedia and network-IoT applications solves many problems and improves performance. Securing deep learning models, multimedia, and network-IoT systems has become a major research area in recent years, and remains challenging under generative adversarial attacks on multimedia or network-IoT data. Many studies attempt to provide intelligent forensics techniques to address these security issues. This paper introduces a holistic organization of intelligent multimedia forensics that combines deep learning fusion, multimedia forensics, and network-IoT forensics for attack detection. We highlight the importance of deep learning fusion techniques for achieving intelligent forensics and security over multimedia and network-IoT. Finally, we discuss the key challenges and future directions in the area of intelligent multimedia forensics using deep learning fusion techniques.
2

Zou, Hao, Pengpeng Yang, Rongrong Ni, and Yao Zhao. "Anti-Forensics of Image Contrast Enhancement Based on Generative Adversarial Network." Security and Communication Networks 2021 (March 24, 2021): 1–8. http://dx.doi.org/10.1155/2021/6663486.

Abstract:
In the multimedia forensics community, anti-forensics of contrast enhancement (CE) in digital images is an important topic for understanding the vulnerability of the corresponding CE forensic methods. Some traditional CE anti-forensic methods have demonstrated an effective ability to erase the forensic fingerprints of a contrast-enhanced image in the histogram and even in the gray-level co-occurrence matrix (GLCM), but they ignore the fact that the way they change pixel values can expose them in the pixel domain. In this paper, we focus on CE anti-forensics based on a Generative Adversarial Network (GAN) to handle this problem. First, we exploit a GAN to process the contrast-enhanced image and make it indistinguishable from an unaltered one in the pixel domain. Second, we introduce a specially designed histogram-based loss to enhance the attack effectiveness in the histogram and GLCM domains. Third, we use a pixel-wise loss to preserve the visual enhancement effect of the processed image. The experimental results show that our method achieves high anti-forensic attack performance against CE detectors in the pixel, histogram, and GLCM domains, and maintains the highest image quality compared with traditional CE anti-forensic methods.
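The abstract above pairs three loss terms: a pixel-domain adversarial term, a histogram-based term, and a pixel-wise fidelity term. The sketch below shows one plausible way to combine such terms in PyTorch; the soft-histogram approximation, the `discriminator` module, and all loss weights are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def soft_histogram(x, bins=256, sigma=0.02):
    # Differentiable approximation of an image histogram for x in [0, 1].
    centers = torch.linspace(0.0, 1.0, bins, device=x.device)
    diff = x.reshape(x.size(0), -1, 1) - centers.view(1, 1, -1)
    weights = torch.exp(-0.5 * (diff / sigma) ** 2)
    hist = weights.sum(dim=1)
    return hist / hist.sum(dim=1, keepdim=True)

def anti_forensic_loss(restored, enhanced, original, discriminator,
                       w_pix=1.0, w_hist=10.0, w_adv=0.1):
    # Pixel-wise term: keep the visual contrast-enhancement effect.
    pixel_loss = F.l1_loss(restored, enhanced)
    # Histogram term: pull the output histogram toward an unaltered image's.
    hist_loss = F.l1_loss(soft_histogram(restored), soft_histogram(original))
    # Adversarial term: fool the pixel-domain discriminator.
    logits = discriminator(restored)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return w_pix * pixel_loss + w_hist * hist_loss + w_adv * adv_loss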

Dissertations on the topic "Adversarial multimedia forensics"

1

Nowroozi, Ehsan. "Machine Learning Techniques for Image Forensics in Adversarial Setting." Doctoral thesis, Università di Siena, 2020. http://hdl.handle.net/11365/1096177.

Abstract:
The use of machine learning for multimedia forensics is gaining more and more consensus, especially due to the possibilities offered by modern machine learning techniques. By exploiting deep learning tools, new approaches have been proposed whose performance remarkably exceeds that achieved by state-of-the-art methods based on standard machine learning and model-based techniques. However, the inherent vulnerability and fragility of machine learning architectures pose new serious security threats, hindering the use of these tools in security-oriented applications, among them multimedia forensics. The analysis of the security of machine-learning-based techniques in the presence of an adversary attempting to impede the forensic analysis, and the development of new solutions capable of improving the security of such techniques, is therefore of primary importance and has recently marked the birth of a new discipline, named Adversarial Machine Learning. Focusing on Image Forensics and image manipulation detection in particular, this thesis contributes to the above mission by developing novel techniques for enhancing the security of binary manipulation detectors based on machine learning in several adversarial scenarios. The validity of the proposed solutions has been assessed by considering several manipulation tasks, ranging from the detection of double compression and contrast adjustment to the detection of geometric transformations and filtering operations.
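The thesis concerns binary manipulation detectors attacked by an adversary. As a point of reference, the sketch below shows the simplest such attack, the Fast Gradient Sign Method (FGSM), applied to a generic two-class forensic CNN; the `detector` module and the perturbation budget are placeholder assumptions, not material from the thesis.

import torch
import torch.nn.functional as F

def fgsm_attack(detector, image, true_label, epsilon=2 / 255):
    # Craft an adversarial copy of `image` (values in [0, 1]) that pushes the
    # two-class detector (pristine vs. manipulated) away from `true_label`.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()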

Book chapters on the topic "Adversarial multimedia forensics"

1

Barni, Mauro, Wenjie Li, Benedetta Tondi, and Bowen Zhang. "Adversarial Examples in Image Forensics." In Multimedia Forensics, 435–66. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_16.

Abstract:
We describe the threats posed by adversarial examples in an image forensic context, highlighting the differences and similarities with respect to other application domains. Particular attention is paid to the transferability of adversarial examples from a source to a target network and to the creation of attacks suitable for application in the physical domain. We also describe some possible countermeasures against adversarial examples and discuss their effectiveness. All the concepts described in the chapter are exemplified with results obtained in selected image forensics scenarios.
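The chapter pays particular attention to the transferability of adversarial examples from a source to a target network. A minimal way to measure this, assuming two independently trained PyTorch detectors and an L-infinity PGD attack with an arbitrary budget, is sketched below; none of it reproduces the chapter's experiments.

import torch
import torch.nn.functional as F

def pgd_attack(source_model, x, y, eps=4 / 255, alpha=1 / 255, steps=10):
    # White-box L-infinity PGD on the source detector.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(source_model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # stay inside the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

@torch.no_grad()
def transfer_success_rate(x_adv, y, target_model):
    # Fraction of source-crafted adversarial images that also fool the target.
    return (target_model(x_adv).argmax(dim=1) != y).float().mean().item()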
2

Stamm, Matthew C., and Xinwei Zhao. "Anti-Forensic Attacks Using Generative Adversarial Networks." In Multimedia Forensics, 467–90. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_17.

Abstract:
The rise of deep learning has led to rapid advances in multimedia forensics. Algorithms based on deep neural networks are able to automatically learn forensic traces, detect complex forgeries, and localize falsified content with increasingly greater accuracy. At the same time, deep learning has expanded the capabilities of anti-forensic attackers. New anti-forensic attacks have emerged, including those based on adversarial examples (discussed in the chapter at doi:10.1007/978-981-16-7621-5_14) and those based on generative adversarial networks (GANs). In this chapter, we discuss the emerging threat posed by GAN-based anti-forensic attacks. GANs are a powerful machine learning framework that can be used to create realistic but completely synthetic data. Researchers have recently shown that anti-forensic attacks can be built by using GANs to create synthetic forensic traces. While only a small number of GAN-based anti-forensic attacks currently exist, results show that these early attacks are effective at fooling forensic algorithms while introducing very little distortion into the attacked images.
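For orientation, the sketch below shows one generator update of a GAN-style anti-forensic attack in the spirit described above: the generator is pushed to make a fixed forensic classifier output "unaltered" while an L1 term limits distortion. The modules, the class index, and the loss weights are assumptions for illustration (the optimizer is assumed to hold only the generator's parameters); a full GAN formulation would also train a discriminator on genuine forensic traces.

import torch
import torch.nn.functional as F

def generator_step(generator, forensic_classifier, optimizer, forged_batch,
                   unaltered_class=0, w_fool=1.0, w_fidelity=50.0):
    generator.train()
    forensic_classifier.eval()  # the victim model stays fixed during the attack
    attacked = generator(forged_batch)
    # Fooling term: push the victim classifier toward the "unaltered" decision.
    target = torch.full((forged_batch.size(0),), unaltered_class,
                        dtype=torch.long, device=forged_batch.device)
    fool_loss = F.cross_entropy(forensic_classifier(attacked), target)
    # Fidelity term: keep the attacked image close to the original forgery.
    fidelity_loss = F.l1_loss(attacked, forged_batch)
    loss = w_fool * fool_loss + w_fidelity * fidelity_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()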
3

Neves, João C., Ruben Tolosana, Ruben Vera-Rodriguez, Vasco Lopes, Hugo Proença, and Julian Fierrez. "GAN Fingerprints in Face Image Synthesis." In Multimedia Forensics, 175–204. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_8.

Abstract:
The availability of large-scale facial databases, together with the remarkable progress of deep learning technologies, in particular Generative Adversarial Networks (GANs), has led to the generation of extremely realistic fake facial content, raising obvious concerns about the potential for misuse. Such concerns have fostered research on manipulation detection methods that, contrary to humans, have already achieved astonishing results in various scenarios. This chapter focuses on the analysis of GAN fingerprints in face image synthesis.
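A common, simple way to visualise a GAN fingerprint of the kind analysed in this chapter is to average high-pass noise residuals over many images from the same generator, so that image content cancels out and model-specific artifacts remain. The sketch below assumes a fixed 3x3 high-pass filter and plain averaging; it is an illustration, not the chapter's method.

import torch
import torch.nn.functional as F

HIGH_PASS = torch.tensor([[-1.0, -1.0, -1.0],
                          [-1.0,  8.0, -1.0],
                          [-1.0, -1.0, -1.0]]) / 8.0

def noise_residual(images):
    # Per-channel high-pass residual of a batch of images shaped (B, C, H, W).
    c = images.size(1)
    kernel = HIGH_PASS.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(images.device)
    return F.conv2d(images, kernel, padding=1, groups=c)

def estimate_fingerprint(generated_images):
    # Averaging residuals over many images from the same GAN cancels image
    # content and leaves the model-specific artifact pattern (its fingerprint).
    return noise_residual(generated_images).mean(dim=0)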

Conference papers on the topic "Adversarial multimedia forensics"

1

Barni, Mauro, Matthew C. Stamm, and Benedetta Tondi. "Adversarial Multimedia Forensics: Overview and Challenges Ahead." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553305.

2

Barni, Mauro, and Benedetta Tondi. "Threat Models and Games for Adversarial Multimedia Forensics." In ICMR '17: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3078897.3080533.

3

Wang, Run, Ziheng Huang, Zhikai Chen, Li Liu, Jing Chen, and Lina Wang. "Anti-Forgery: Towards a Stealthy and Robust DeepFake Disruption Attack via Adversarial Perceptual-aware Perturbations." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/107.

Abstract:
DeepFake is becoming a real risk to society and brings potential threats to both individual privacy and political security, because DeepFaked multimedia are realistic and convincing. However, popular passive DeepFake detection is an ex-post forensic countermeasure and fails to block the spread of disinformation in advance. To address this limitation, researchers have studied proactive defense techniques that add adversarial noise to the source data to disrupt DeepFake manipulation. However, existing approaches to proactive DeepFake defense via injected adversarial noise are not robust and can easily be bypassed by simple image reconstruction, as revealed in a recent study, MagDR. In this paper, we investigate the vulnerability of existing forgery techniques and propose a novel anti-forgery technique that helps users protect shared facial images from attackers capable of applying popular forgery techniques. Our proposed method generates perceptual-aware perturbations in an incessant manner, in contrast to prior studies that add sparse adversarial noise. Experimental results reveal that our perceptual-aware perturbations are robust to diverse image transformations, especially the competitive evasion technique MagDR, which relies on image reconstruction. Our findings potentially open up a new research direction towards a thorough understanding and investigation of perceptual-aware adversarial attacks for protecting facial images against DeepFakes in a proactive and robust manner. Code is available at https://github.com/AbstractTeen/AntiForgery.
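The paper describes proactive protection: perturbing a face image so that subsequent DeepFake manipulation visibly fails. The sketch below shows a generic PGD-style disruption against a surrogate manipulation model; the surrogate model, the L-infinity budget, and the MSE disruption objective are assumptions, and the paper's perceptual-aware (colour-space) constraint is not reproduced here.

import torch
import torch.nn.functional as F

def protect_image(face, surrogate_manipulator, eps=8 / 255, alpha=2 / 255, steps=20):
    # Return a protected copy of `face` (values in [0, 1]) whose manipulated
    # output under the surrogate model is maximally disrupted.
    with torch.no_grad():
        clean_output = surrogate_manipulator(face)  # what a forger would obtain
    protected = face.clone().detach()
    for _ in range(steps):
        protected.requires_grad_(True)
        disruption = F.mse_loss(surrogate_manipulator(protected), clean_output)
        grad, = torch.autograd.grad(disruption, protected)
        # Ascend on the disruption objective, then project back into the budget.
        protected = protected.detach() + alpha * grad.sign()
        protected = face + torch.clamp(protected - face, -eps, eps)
        protected = protected.clamp(0.0, 1.0)
    return protected.detach()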
4

Khatoon, Shadma, and Mohammad Sarosh Umar. "Forensic sketch-to-photo transformation with improved Generative Adversarial Network (GAN)." In 2022 5th International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT). IEEE, 2022. http://dx.doi.org/10.1109/impact55510.2022.10029068.
