Academic literature on the topic 'Deep learning for Multimedia Forensics'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep learning for Multimedia Forensics.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Deep learning for Multimedia Forensics"
Amerini, Irene, Aris Anagnostopoulos, Luca Maiano, and Lorenzo Ricciardi Celsi. "Deep Learning for Multimedia Forensics." Foundations and Trends® in Computer Graphics and Vision 12, no. 4 (2021): 309–457. http://dx.doi.org/10.1561/0600000096.
.., Ossama, and Mhmed Algrnaodi. "Deep Learning Fusion for Attack Detection in Internet of Things Communications." Fusion: Practice and Applications 9, no. 2 (2022): 27–47. http://dx.doi.org/10.54216/fpa.090203.
Celebi, Naciye Hafsa, Tze-Li Hsu, and Qingzhong Liu. "A comparison study to detect seam carving forgery in JPEG images with deep learning models." Journal of Surveillance, Security and Safety 3, no. 3 (2022): 88–100. http://dx.doi.org/10.20517/jsss.2022.02.
Hussain, Israr, Dostdar Hussain, Rashi Kohli, Muhammad Ismail, Saddam Hussain, Syed Sajid Ullah, Roobaea Alroobaea, Wajid Ali, and Fazlullah Umar. "Evaluation of Deep Learning and Conventional Approaches for Image Recaptured Detection in Multimedia Forensics." Mobile Information Systems 2022 (June 15, 2022): 1–10. http://dx.doi.org/10.1155/2022/2847580.
Ghadekar, Premanand, Vaibhavi Shetty, Prapti Maheshwari, Raj Shah, Anish Shaha, and Vaishnav Sonawane. "Non-Facial Video Spatiotemporal Forensic Analysis Using Deep Learning Techniques." Proceedings of Engineering and Technology Innovation 23 (January 1, 2023): 01–14. http://dx.doi.org/10.46604/peti.2023.10290.
Ferreira, Sara, Mário Antunes, and Manuel E. Correia. "Exposing Manipulated Photos and Videos in Digital Forensics Analysis." Journal of Imaging 7, no. 7 (June 24, 2021): 102. http://dx.doi.org/10.3390/jimaging7070102.
Parkhi, Abhinav, and Atish Khobragade. "Review on deep learning based techniques for person re-identification." 3C TIC: Cuadernos de desarrollo aplicados a las TIC 11, no. 2 (December 29, 2022): 208–23. http://dx.doi.org/10.17993/3ctic.2022.112.208-223.
Ferreira, Sara, Mário Antunes, and Manuel E. Correia. "A Dataset of Photos and Videos for Digital Forensics Analysis Using Machine Learning Processing." Data 6, no. 8 (August 5, 2021): 87. http://dx.doi.org/10.3390/data6080087.
Meshchaninov, Viacheslav Pavlovich, Ivan Andreevich Molodetskikh, Dmitriy Sergeevich Vatolin, and Alexey Gennadievich Voloboy. "Combining contrastive and supervised learning for video super-resolution detection." Keldysh Institute Preprints, no. 80 (2022): 1–13. http://dx.doi.org/10.20948/prepr-2022-80.
Jiang, Jianguo, Boquan Li, Baole Wei, Gang Li, Chao Liu, Weiqing Huang, Meimei Li, and Min Yu. "FakeFilter: A cross-distribution Deepfake detection system with domain adaptation." Journal of Computer Security 29, no. 4 (June 18, 2021): 403–21. http://dx.doi.org/10.3233/jcs-200124.
Full textDissertations / Theses on the topic "Deep learning for Multimedia Forensics"
Nowroozi, Ehsan. "Machine Learning Techniques for Image Forensics in Adversarial Setting." Doctoral thesis, Università di Siena, 2020. http://hdl.handle.net/11365/1096177.
Stanton, Jamie Alyssa. "Detecting Image Forgery with Color Phenomenology." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton15574119887572.
Budnik, Mateusz. "Active and deep learning for multimedia." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM011.
Full textThe main topics of this thesis include the use of active learning-based methods and deep learning in the context of retrieval of multimodal documents. The contributions proposed during this thesis address both these topics. An active learning framework was introduced, which allows for a more efficient annotation of broadcast TV videos thanks to the propagation of labels, the use of multimodal data and selection strategies. Several different scenarios and experiments were considered in the context of person identification in videos, including using different modalities (such as faces, speech segments and overlaid text) and different selection strategies. The whole system was additionally validated in a dry run involving real human annotators.A second major contribution was the investigation and use of deep learning (in particular the convolutional neural network) for video retrieval. A comprehensive study was made using different neural network architectures and training techniques such as fine-tuning or using separate classifiers like SVM. A comparison was made between learned features (the output of neural networks) and engineered features. Despite the lower performance of the engineered features, fusion between these two types of features increases overall performance.Finally, the use of convolutional neural network for speaker identification using spectrograms is explored. The results are compared to other state-of-the-art speaker identification systems. Different fusion approaches are also tested. The proposed approach obtains comparable results to some of the other tested approaches and offers an increase in performance when fused with the output of the best system
Ha, Hsin-Yu. "Integrating Deep Learning with Correlation-based Multimedia Semantic Concept Detection." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2268.
Full textVukotic, Verdran. "Deep Neural Architectures for Automatic Representation Learning from Multimedia Multimodal Data." Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0015/document.
Full textIn this dissertation, the thesis that deep neural networks are suited for analysis of visual, textual and fused visual and textual content is discussed. This work evaluates the ability of deep neural networks to learn automatic multimodal representations in either unsupervised or supervised manners and brings the following main contributions:1) Recurrent neural networks for spoken language understanding (slot filling): different architectures are compared for this task with the aim of modeling both the input context and output label dependencies.2) Action prediction from single images: we propose an architecture that allow us to predict human actions from a single image. The architecture is evaluated on videos, by utilizing solely one frame as input.3) Bidirectional multimodal encoders: the main contribution of this thesis consists of neural architecture that translates from one modality to the other and conversely and offers and improved multimodal representation space where the initially disjoint representations can translated and fused. This enables for improved multimodal fusion of multiple modalities. The architecture was extensively studied an evaluated in international benchmarks within the task of video hyperlinking where it defined the state of the art today.4) Generative adversarial networks for multimodal fusion: continuing on the topic of multimodal fusion, we evaluate the possibility of using conditional generative adversarial networks to lean multimodal representations in addition to providing multimodal representations, generative adversarial networks permit to visualize the learned model directly in the image domain
Hamm, Simon. "Digital Audio Video Assessment: Surface or Deep Learning - An Investigation." RMIT University. Education, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20091216.154300.
Full textQuan, Weize. "Detection of computer-generated images via deep learning." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT076.
Full textWith the advances of image editing and generation software tools, it has become easier to tamper with the content of images or create new images, even for novices. These generated images, such as computer graphics (CG) image and colorized image (CI), have high-quality visual realism, and potentially throw huge threats to many important scenarios. For instance, the judicial departments need to verify that pictures are not produced by computer graphics rendering technology, colorized images can cause recognition/monitoring systems to produce incorrect decisions, and so on. Therefore, the detection of computer-generated images has attracted widespread attention in the multimedia security research community. In this thesis, we study the identification of different computer-generated images including CG image and CI, namely, identifying whether an image is acquired by a camera or generated by a computer program. The main objective is to design an efficient detector, which has high classification accuracy and good generalization capability. Specifically, we consider dataset construction, network architecture, training methodology, visualization and understanding, for the considered forensic problems. The main contributions are: (1) a colorized image detection method based on negative sample insertion, (2) a generalization method for colorized image detection, (3) a method for the identification of natural image (NI) and CG image based on CNN (Convolutional Neural Network), and (4) a CG image identification method based on the enhancement of feature diversity and adversarial samples
Migliorelli, Lucia. "Towards digital patient monitoring: deep learning methods for the analysis of multimedia data from the actual clinical practice." Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/295052.
Full textAcquiring information on patients' health status from the analysis of video recordings is a crucial opportunity to enhance current clinical assessment and monitoring practices. This PhD thesis proposes four automated systems that analyse multimedia data using deep learning methodologies. These systems have been developed to enrich current assessment modalities - so far based on direct observation of the patient by trained clinicians coupled with the compilation of clinical scales often collected in paper format- of three categories of patients: preterm infants, adolescents with autism spectrum syndrome and adults affected by neuropathologies (such as stroke and amyotrophic lateral sclerosis). Each system stems from the clinical need of having new tools to treat patients, able at collecting structured, easily accessible and shareable information. This research will continue to be enhanced to ensure that clinicians have more time to devote to patients, to treat them better and to the best of their ability
Dutt, Anuvabh. "Continual learning for image classification." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM063.
Full textThis thesis deals with deep learning applied to image classification tasks. The primary motivation for the work is to make current deep learning techniques more efficient and to deal with changes in the data distribution. We work in the broad framework of continual learning, with the aim to have in the future machine learning models that can continuously improve.We first look at change in label space of a data set, with the data samples themselves remaining the same. We consider a semantic label hierarchy to which the labels belong. We investigate how we can utilise this hierarchy for obtaining improvements in models which were trained on different levels of this hierarchy.The second and third contribution involve continual learning using a generative model. We analyse the usability of samples from a generative model in the case of training good discriminative classifiers. We propose techniques to improve the selection and generation of samples from a generative model. Following this, we observe that continual learning algorithms do undergo some loss in performance when trained on several tasks sequentially. We analyse the training dynamics in this scenario and compare with training on several tasks simultaneously. We make observations that point to potential difficulties in the learning of models in a continual learning scenario.Finally, we propose a new design template for convolutional networks. This architecture leads to training of smaller models without compromising performance. In addition the design lends itself to easy parallelisation, leading to efficient distributed training.In conclusion, we look at two different types of continual learning scenarios. We propose methods that lead to improvements. Our analysis also points to greater issues, to over come which we might need changes in our current neural network training procedure
Darmet, Ludovic. "Vers une approche basée modèle-image flexible et adaptative en criminalistique des images." Thesis, Université Grenoble Alpes, 2020. https://tel.archives-ouvertes.fr/tel-03086427.
Full textImages are nowadays a standard and mature medium of communication.They appear in our day to day life and therefore they are subject to concernsabout security. In this work, we study different methods to assess theintegrity of images. Because of a context of high volume and versatilityof tampering techniques and image sources, our work is driven by the necessity to developflexible methods to adapt the diversity of images.We first focus on manipulations detection through statistical modeling ofthe images. Manipulations are elementary operations such as blurring,noise addition, or compression. In this context, we are more preciselyinterested in the effects of pre-processing. Because of storagelimitation or other reasons, images can be resized or compressed justafter their capture. Addition of a manipulation would then be applied on analready pre-processed image. We show that a pre-resizing of test datainduces a drop of performance for detectors trained on full-sized images.Based on these observations, we introduce two methods to counterbalancethis performance loss for a pipeline of classification based onGaussian Mixture Models. This pipeline models the local statistics, onpatches, of natural images. It allows us to propose adaptation of themodels driven by the changes in local statistics. Our first method ofadaptation is fully unsupervised while the second one, only requiring a fewlabels, is weakly supervised. Thus, our methods are flexible to adaptversatility of source of images.Then we move to falsification detection and more precisely to copy-moveidentification. Copy-move is one of the most common image tampering technique. Asource area is copied into a target area within the same image. The vastmajority of existing detectors identify indifferently the two zones(source and target). In an operational scenario, only the target arearepresents a tampering area and is thus an area of interest. Accordingly, wepropose a method to disentangle the two zones. Our method takesadvantage of local modeling of statistics in natural images withGaussian Mixture Model. The procedure is specific for each image toavoid the necessity of using a large training dataset and to increase flexibility.Results for all the techniques described above are illustrated on publicbenchmarks and compared to state of the art methods. We show that theclassical pipeline for manipulations detection with Gaussian MixtureModel and adaptation procedure can surpass results of fine-tuned andrecent deep-learning methods. Our method for source/target disentanglingin copy-move also matches or even surpasses performances of the latestdeep-learning methods. We explain the good results of these classicalmethods against deep-learning by their additional flexibility andadaptation abilities.Finally, this thesis has occurred in the special context of a contestjointly organized by the French National Research Agency and theGeneral Directorate of Armament. We describe in the Appendix thedifferent stages of the contest and the methods we have developed, as well asthe lessons we have learned from this experience to move the image forensics domain into the wild
Books on the topic "Deep learning for Multimedia Forensics"
Anagnostopoulos, Aris, Irene Amerini, Luca Maiano, and Lorenzo Ricciardi Celsi. Deep Learning for Multimedia Forensics. Now Publishers, 2021.
Arumugam, Chamundeswari, Suresh Jaganathan, Saraswathi S, and Sanjay Misra. Confluence of AI, Machine, and Deep Learning in Cyber Forensics. IGI Global, 2020.
Arumugam, Chamundeswari, Suresh Jaganathan, Saraswathi S, and Sanjay Misra. Confluence of AI, Machine, and Deep Learning in Cyber Forensics. Information Science Reference, 2020.
Book chapters on the topic "Deep learning for Multimedia Forensics"
Stamm, Matthew C., and Xinwei Zhao. "Anti-Forensic Attacks Using Generative Adversarial Networks." In Multimedia Forensics, 467–90. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_17.
Long, Chengjiang, Arslan Basharat, and Anthony Hoogs. "Video Frame Deletion and Duplication." In Multimedia Forensics, 333–62. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_13.
Zampoglou, Markos, Foteini Markatopoulou, Gregoire Mercier, Despoina Touska, Evlampios Apostolidis, Symeon Papadopoulos, Roger Cozien, Ioannis Patras, Vasileios Mezaris, and Ioannis Kompatsiaris. "Detecting Tampered Videos with Multimedia Forensics and Deep Learning." In MultiMedia Modeling, 374–86. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05710-7_31.
Neves, João C., Ruben Tolosana, Ruben Vera-Rodriguez, Vasco Lopes, Hugo Proença, and Julian Fierrez. "GAN Fingerprints in Face Image Synthesis." In Multimedia Forensics, 175–204. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_8.
Cozzolino, Davide, and Luisa Verdoliva. "Multimedia Forensics Before the Deep Learning Era." In Handbook of Digital Face Manipulation and Detection, 45–67. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_3.
Lyu, Siwei. "DeepFake Detection." In Multimedia Forensics, 313–31. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_12.
Li, Zhuopeng, and Xiaoyan Zhang. "Deep Reinforcement Learning for Automatic Thumbnail Generation." In MultiMedia Modeling, 41–53. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05716-9_4.
Rossetto, Luca, Mahnaz Amiri Parian, Ralph Gasser, Ivan Giangreco, Silvan Heller, and Heiko Schuldt. "Deep Learning-Based Concept Detection in vitrivr." In MultiMedia Modeling, 616–21. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05716-9_55.
Dey, Nilanjan, Amira S. Ashour, and Gia Nhu Nguyen. "Deep Learning for Multimedia Content Analysis." In Mining Multimedia Documents, 193–204. Boca Raton: Chapman and Hall/CRC, 2017. http://dx.doi.org/10.1201/b21638-14.
Dey, Nilanjan, Amira S. Ashour, and Gia Nhu Nguyen. "Deep Learning for Multimedia Content Analysis." In Mining Multimedia Documents, 193–203. Boca Raton, FL: CRC Press, 2017. http://dx.doi.org/10.1201/9781315399744-15.
Conference papers on the topic "Deep learning for Multimedia Forensics"
Verdoliva, Luisa. "Deep Learning in Multimedia Forensics." In IH&MMSec '18: 6th ACM Workshop on Information Hiding and Multimedia Security. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3206004.3206024.
Liu, Qingzhong, and Naciye Celebi. "Large Feature Mining and Deep Learning in Multimedia Forensics." In CODASPY '21: Eleventh ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3445970.3456285.
Mayer, Owen, Belhassen Bayar, and Matthew C. Stamm. "Learning Unified Deep-Features for Multiple Forensic Tasks." In IH&MMSec '18: 6th ACM Workshop on Information Hiding and Multimedia Security. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3206004.3206022.
Wei, Baole, Min Yu, Kai Chen, and Jianguo Jiang. "Deep-BIF: Blind Image Forensics Based on Deep Learning." In 2019 IEEE Conference on Dependable and Secure Computing (DSC). IEEE, 2019. http://dx.doi.org/10.1109/dsc47296.2019.8937712.
Nazar, Nidhin, Vinod Kumar Shukla, Gagandeep Kaur, and Nitin Pandey. "Integrating Web Server Log Forensics through Deep Learning." In 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). IEEE, 2021. http://dx.doi.org/10.1109/icrito51393.2021.9596324.
Andersson, Maria. "Deep learning for behaviour recognition in surveillance applications." In Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies III, edited by Henri Bouma, Robert J. Stokes, Yitzhak Yitzhaky, and Radhakrishna Prabhu. SPIE, 2019. http://dx.doi.org/10.1117/12.2533764.
Wang, Hsin-Tzu, and Po-Chyi Su. "Deep-Learning-Based Block Similarity Evaluation for Image Forensics." In 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan). IEEE, 2020. http://dx.doi.org/10.1109/icce-taiwan49838.2020.9258247.
Buccoli, Michele, Paolo Bestagini, Massimiliano Zanoni, Augusto Sarti, and Stefano Tubaro. "Unsupervised feature learning for bootleg detection using deep learning architectures." In 2014 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 2014. http://dx.doi.org/10.1109/wifs.2014.7084316.
Sang, Jitao, Jun Yu, Ramesh Jain, Rainer Lienhart, Peng Cui, and Jiashi Feng. "Deep Learning for Multimedia." In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3240508.3243931.
Chien, Jen-Tzung. "Deep Bayesian Multimedia Learning." In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3418545.