Dissertations / Theses on the topic 'Image tamperings'

Consult the top 29 dissertations / theses for your research on the topic 'Image tamperings.'


1

Xin, Xing. "A Singular-Value-Based Semi-Fragile Watermarking Scheme for Image Content Authentication with Tampering Localization." DigitalCommons@USU, 2010. https://digitalcommons.usu.edu/etd/645.

Full text
Abstract:
This thesis presents a novel singular-value-based semi-fragile watermarking scheme for image content authentication with tampering localization. The proposed scheme first generates a secured watermark bit sequence by performing a logical "xor" operation on a content-based watermark and a content-independent watermark, wherein the content-based watermark is a singular-value-based bit sequence representing intrinsic algebraic image properties, and the content-independent watermark is a private-key-based random bit sequence. It next embeds the secured watermark in the approximation subband of each non-overlapping 4×4 block using an adaptive quantization method to generate the watermarked image. The image content authentication process starts by regenerating the secured watermark bit sequence, following the same process used during watermark generation. It then extracts the possibly embedded watermark using the parity of the quantization results from the probe image. Next, the authentication process constructs a binary error map, whose height and width are a quarter of those of the original image, from the absolute difference between the regenerated secured watermark and the extracted watermark. It finally computes two authentication measures (i.e., M1 and M2), with M1 measuring the overall similarity between the regenerated and extracted watermarks, and M2 measuring the overall clustering level of the tampered error pixels. These two measures are seamlessly integrated in the authentication process to confirm the image content and localize any tampered areas. Extensive experimental results show that the proposed scheme outperforms four peer schemes and is capable of identifying intentional tampering, distinguishing it from incidental modification, and localizing tampered regions.
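The XOR construction and the M1 similarity measure described in this abstract can be sketched as follows. This is an illustrative toy, not the author's implementation: the content-based bits here come from a stand-in block feature (parity of the block mean) rather than from actual singular values, and the private-key bits come from Python's seeded `random` module.

```python
import random

def content_bits(image, block=4):
    """Toy content-based watermark: one bit per non-overlapping block.

    Stand-in for the singular-value-based feature in the thesis: here the
    bit is simply the parity of the rounded block mean."""
    h, w = len(image), len(image[0])
    bits = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            pixels = [image[y][x] for y in range(i, i + block)
                                  for x in range(j, j + block)]
            bits.append(round(sum(pixels) / len(pixels)) % 2)
    return bits

def key_bits(key, n):
    """Content-independent watermark: private-key-driven random bits."""
    rng = random.Random(key)
    return [rng.randint(0, 1) for _ in range(n)]

def secured_watermark(image, key):
    cb = content_bits(image)
    return [a ^ b for a, b in zip(cb, key_bits(key, len(cb)))]

def measure_m1(regenerated, extracted):
    """M1: overall similarity between regenerated and extracted watermarks."""
    matches = sum(a == b for a, b in zip(regenerated, extracted))
    return matches / len(regenerated)

# An untouched image authenticates with M1 = 1.0.
img = [[(3 * x + 5 * y) % 256 for x in range(8)] for y in range(8)]
wm = secured_watermark(img, key=1234)
assert measure_m1(wm, secured_watermark(img, key=1234)) == 1.0
```

Editing a single block changes its content bit, and M1 drops below 1, flagging the block in the error map.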
APA, Harvard, Vancouver, ISO, and other styles
2

Abecidan, Rony. "Stratégies d'apprentissage robustes pour la détection de manipulation d'images." Electronic Thesis or Diss., Centrale Lille Institut, 2024. http://www.theses.fr/2024CLIL0025.

Abstract:
Today, it is easier than ever to manipulate images for unethical purposes, and the practice is increasingly prevalent on social networks and in advertising. Malicious users can, for instance, generate convincing deep fakes in a few seconds to lure a naive public, or communicate covertly by hiding illegal information inside images. Such abilities raise significant security concerns regarding misinformation and clandestine communications. The forensics community thus actively collaborates with law enforcement agencies worldwide to detect image manipulations. The most effective methodologies for image forensics rely heavily on convolutional neural networks meticulously trained on controlled databases. These databases are curated by researchers to serve specific purposes, resulting in a great disparity from the real-world datasets encountered by forensic practitioners. This data shift poses a clear challenge, hindering the effectiveness of standardized forensic models when applied in practical situations. Through this thesis, we aim to improve the efficiency of forensic models in practical settings by designing strategies to mitigate the impact of data shift. We start by exploring the literature on out-of-distribution generalization for existing strategies that already help practitioners build efficient forensic detectors in practice. Two main frameworks notably hold promise: models inherently able to learn how to generalize to images coming from a new database, and the construction of a representative training base allowing forensic models to generalize effectively to scrutinized images. Both frameworks are covered in this manuscript. When faced with many unlabeled images to examine, domain adaptation strategies that match training and testing bases in latent spaces are designed to mitigate the data shifts encountered by practitioners.
Unfortunately, these strategies often fail in practice despite their theoretical efficiency, because they assume a balance between authentic and manipulated images among those under scrutiny, an assumption that is unrealistic for forensic analysts since suspects may, for instance, be entirely innocent. Additionally, such strategies are typically tested under the assumption that an appropriate training set has been chosen from the beginning to facilitate adaptation to the new distribution. Trying to generalize from a few images is more realistic but inherently much more difficult. We deal with precisely this scenario in the second part of this thesis, gaining a deeper understanding of data shifts in digital image forensics. Exploring the influence of traditional processing operations on the statistical properties of developed images, we formulate several strategies to select or create training databases relevant for a small number of images under scrutiny. Our final contribution is a framework leveraging statistical properties of images to build relevant training sets for any testing set in image manipulation detection. This approach greatly improves the generalization of classical steganalysis detectors on practical sets encountered by forensic analysts and can be extended to other forensic contexts.
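The second strategy, selecting a training base whose statistics match the images under scrutiny, can be illustrated with a deliberately simplified sketch. The per-image mean stands in for the richer statistical properties the thesis uses, and all base names are hypothetical:

```python
# Toy illustration: pick, among several candidate training bases, the one
# whose feature statistics are closest to those of the (small) set of
# images under scrutiny. The "feature" here is just the per-image mean.

def feature(image):
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def base_profile(images):
    feats = [feature(im) for im in images]
    return sum(feats) / len(feats)

def select_training_base(candidates, test_images):
    """Return the name of the candidate base with the closest profile."""
    target = base_profile(test_images)
    return min(candidates,
               key=lambda name: abs(base_profile(candidates[name]) - target))

flat_im = lambda v: [[v] * 4 for _ in range(4)]
bases = {
    "dark_pipeline":   [flat_im(40), flat_im(50)],
    "bright_pipeline": [flat_im(200), flat_im(210)],
}
scrutinized = [flat_im(195)]
assert select_training_base(bases, scrutinized) == "bright_pipeline"
```

A detector would then be trained (or re-trained) only on the selected base, reducing the train/test mismatch.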
3

Nyeem, Hussain Md Abu. "A digital watermarking framework with application to medical image security." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/74749/1/Hussain%20Md%20Abu_Nyeem_Thesis.pdf.

Abstract:
Dealing with digital medical images is raising many new security problems with legal and ethical complexities for local archiving and distant medical services. These include image retention and fraud, distrust and invasion of privacy. This project was a significant step forward in developing a complete framework for systematically designing, analyzing, and applying digital watermarking, with a particular focus on medical image security. A formal generic watermarking model, three new attack models, and an efficient watermarking technique for medical images were developed. These outcomes contribute to standardizing future research in formal modeling and complete security and computational analysis of watermarking schemes.
4

Diallo, Boubacar. "Mesure de l'intégrité d'une image : des modèles physiques aux modèles d'apprentissage profond." Thesis, Poitiers, 2020. http://www.theses.fr/2020POIT2293.

Abstract:
Digital images have become a powerful and effective visual communication tool for delivering messages, diffusing ideas, and proving facts. The emergence of the smartphone, in a wide variety of brands and models, facilitates the creation of new visual content and its dissemination on social networks and image-sharing platforms. Related to this phenomenon, and helped by the availability and ease of use of image manipulation software, many issues have arisen, ranging from the distribution of illegal content to copyright infringement. The reliability of digital images is questioned by ordinary and expert users alike, such as courts or police investigators. A well-known and widespread example is "fake news", which often involves the malicious use of digital images. Many researchers in the field of image forensics have taken up the scientific challenges associated with image manipulation. Many methods with interesting performance have been developed based on automatic image processing and, more recently, the adoption of deep learning. Despite the variety of techniques offered, performance is bound to specific conditions and remains vulnerable to relatively simple malicious attacks. Indeed, the images collected on the Internet impose many constraints on algorithms, calling into question many existing integrity verification techniques. There are two main peculiarities to take into account when detecting a falsification: one is the lack of information on the pristine image's acquisition; the other is the high probability of automatic transformations linked to image-sharing platforms, such as lossy compression or resizing. In this thesis, we focus on several of these image forensics challenges, including camera model identification and image tampering detection. After reviewing the state of the art in the field, we propose a first data-driven method for identifying camera models.
We use deep learning techniques based on convolutional neural networks (CNNs) and develop a learning strategy that accounts for the quality of the input data with respect to the applied transformation. A family of CNNs has been designed to learn the characteristics of the camera model directly from a collection of images undergoing the same transformations as those commonly applied on the Internet. Our experiments focus on lossy compression, as it is the most common type of post-processing on the Internet; the proposed approach therefore provides a compression-robust solution for camera model identification. The performance achieved by our camera model detection approach is also used and adapted for image tampering detection and localization. The results obtained underline the robustness of our proposals for camera model identification and image forgery detection.
5

Hsiao, Dun-Yu. "Digital Image Tampering Synthesis and Identification." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2207200517502800.

6

Zou, Chuang Lou, and 莊潤洲. "The Study of Image Tampering Detection." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/58825581948530138596.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Computer Science and Information Engineering, ROC academic year 88 (1999–2000).
Nowadays, the Internet has become increasingly popular, and images are among the media most frequently used on it: images serve as company trademarks, in electronic commerce (EC), in multimedia, and so on. However, images also raise security issues; illegal copying and modification are common problems in daily life, so how to protect the intellectual property of images is an important research topic. This thesis uses image authentication to protect images: based on image authentication, the modified regions of an image can be pointed out, thereby preserving image integrity. The thesis proposes two new methods for image authentication. The first uses RSA signatures and a quadtree structure to ensure image integrity. Based on the digital signatures, we can claim authorship of the image and efficiently detect whether it has been modified; the quadtree structure organizes the signatures so that the detection procedure is more efficient. Traditional image authentication methods cannot tolerate JPEG lossy compression, since it may destroy the signatures embedded in images. However, JPEG lossy compression is often required and widely used, so it should be taken into consideration. To improve on traditional methods, we propose a new image authentication scheme that not only prevents images from being tampered with but also tolerates reasonable JPEG lossy compression. Our method extracts some significant DCT coefficients and sets a compression-tolerance range for them; an extracted DCT coefficient survives as long as the image is only lossily compressed and not otherwise modified.
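The compression-tolerant idea, protecting significant DCT coefficients within a tolerance range, can be sketched as follows. This is a simplified illustration, not the thesis's scheme: a plain unnormalized 1-D DCT-II, a hand-picked tolerance, and a crude rounding step standing in for JPEG quantization.

```python
import math

def dct_1d(signal):
    """Plain (unnormalized) DCT-II of a short signal."""
    n = len(signal)
    return [sum(signal[t] * math.cos(math.pi * (t + 0.5) * k / n)
                for t in range(n)) for k in range(n)]

def significant_coeffs(signal, count=3):
    """The largest-magnitude AC coefficients, kept as the reference signature."""
    c = dct_1d(signal)
    order = sorted(range(1, len(c)), key=lambda k: -abs(c[k]))
    return {k: c[k] for k in order[:count]}

def authenticate(signal, reference, tolerance=8.0):
    """Accept if every protected coefficient stays inside its tolerance range."""
    c = dct_1d(signal)
    return all(abs(c[k] - v) <= tolerance for k, v in reference.items())

original = [52, 55, 61, 66, 70, 61, 64, 73]
ref = significant_coeffs(original)

# Mild, compression-like perturbation: every sample rounded to an even value.
compressed = [round(s / 2) * 2 for s in original]
assert authenticate(compressed, ref)

# Content modification: a large local edit pushes the coefficients out of range.
tampered = original[:]
tampered[3] = 200
assert not authenticate(tampered, ref)
```

Small rounding perturbs each coefficient by at most the sum of the sample errors, which stays within the tolerance, while a real edit shifts protected coefficients far beyond it.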
7

Hsiao, Dun-Yu, and 蕭敦育. "Digital Image Tampering Synthesis and Identification." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/11083859577614039919.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Communication Engineering, ROC academic year 93 (2004–2005).
Have you ever edited some unsatisfying digital photographs of yours? If so, then you have performed digital tampering. With powerful computers and capable software, seasoned users can turn digital media into whatever they want, so the detection of digital tampering has become a crucial problem. Most of the time, digital tampering is not perceptible to humans; however, some traces of it may be left in the media during the process. Based on this idea, several detection methods are proposed in this thesis to counter various common forms of digital tampering without any help from embedded information such as the well-known watermarking technique. The effectiveness and results of each method are presented, and robustness is also discussed.
8

Chen, Yi-Lei, and 陳以雷. "Tampering Detection in JPEG Images." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/43195323821600534654.

Abstract:
Master's thesis, National Tsing Hua University, Department of Computer Science, ROC academic year 97 (2008–2009).
Since JPEG has become a popular image compression standard, tampering detection in JPEG images now plays an important role. Tampering with compressed images often involves recompression and tends to erase the tampering traces present in uncompressed images. We can, however, try to discover new traces caused by recompression and use them to detect the tampering. The artifacts introduced by lossy JPEG compression can be seen as an inherent signature of recompressed images. In this thesis, we first propose a robust tampered-image detection approach based on periodicity analysis of the compression artifacts in both the spatial and DCT domains. To locate the forged regions, we then propose a localization method based on a quantization noise model and image restoration techniques. Finally, we conduct a series of experiments to demonstrate the validity of the proposed periodic features and quantization noise model, both of which outperform existing methods, and we show the effectiveness and feasibility of our forged-region localization method with the proposed image restoration techniques.
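The periodicity cue that quantization leaves in DCT-coefficient histograms can be illustrated with a toy sketch: coefficients quantized with step q pile up on multiples of q, so the histogram is a comb whose period a DFT peak reveals. This uses a naive O(n²) DFT and a simple peak test; the thesis's periodicity analysis in the spatial and DCT domains is far richer.

```python
import cmath
import random

def histogram(values, nbins):
    h = [0] * nbins
    for v in values:
        if 0 <= v < nbins:
            h[v] += 1
    return h

def spectrum(hist):
    """Magnitude of the DFT of the histogram (naive O(n^2) transform)."""
    n = len(hist)
    return [abs(sum(hist[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m in range(n))) for k in range(n)]

def estimate_period(hist):
    """Smallest frequency whose peak rivals the DC term gives the period."""
    spec = spectrum(hist)
    for k in range(1, len(hist) // 2):
        if spec[k] > 0.8 * spec[0]:
            return len(hist) // k
    return None  # no strong periodicity found

# Coefficients that went through quantization step 4 land on multiples of 4,
# so their histogram is a comb with period 4.
rng = random.Random(7)
coeffs = [4 * rng.randint(0, 15) for _ in range(500)]
assert estimate_period(histogram(coeffs, 64)) == 4
```

Detecting such a comb in a decompressed image is evidence that its coefficients were quantized before, i.e., that the image was (re)compressed.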
9

Chen, Wei-Yu, and 陳威宇. "A novel image tampering proof scheme using image heterogeneous channel." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/24686228296813647540.

Abstract:
Master's thesis, National Chin-Yi University of Technology, Department of Computer Science and Information Engineering, ROC academic year 102 (2013–2014).
In a world of paperless applications, protecting document contents becomes more and more important, and many documents contain their owners' secret assets. In this thesis, we use PNG (portable network graphics) images to illustrate our scheme. A PNG image contains basic color channels and a heterogeneous channel, the alpha channel, which defines the degree of image transparency; we can use it to hide important information. We exploit the human eye's fuzzy perception of transparency to hide and retrieve secret information, so hiding secrets in the alpha channel is an invisible watermarking technology. When part of a document is marked as important, we parse the marked content with three colors: black, gray, and white. After this ternarization, three attribute values are calculated from the marked document image. Our experimental results show that the methods can hide secrets validly and meaningfully using the alpha channel.
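The alpha-channel idea can be sketched minimally as follows, with one bit hidden per near-opaque alpha value; the thesis's ternarization-based scheme is more elaborate than this illustration.

```python
# Minimal sketch: store one secret bit per pixel in the alpha value,
# keeping every pixel near fully opaque so the change is imperceptible.

OPAQUE = 255

def embed(alpha_channel, bits):
    """bit 0 -> alpha 255, bit 1 -> alpha 254 (both look fully opaque)."""
    assert len(bits) <= len(alpha_channel)
    out = list(alpha_channel)
    for i, b in enumerate(bits):
        out[i] = OPAQUE - b
    return out

def extract(alpha_channel, nbits):
    return [OPAQUE - a for a in alpha_channel[:nbits]]

secret = [1, 0, 1, 1, 0, 0, 1, 0]
alpha = [OPAQUE] * 16
stego = embed(alpha, secret)
assert extract(stego, len(secret)) == secret
assert all(a >= 254 for a in stego)  # still visually opaque everywhere
```

Because the color channels are untouched, the rendered image is pixel-for-pixel identical on an opaque background.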
10

李明倫. "Watermarking Scheme for Tampering Detection and Image Recovery." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/9469vm.

11

吳佳穎. "Blind watermarking for image authentication and tampering localization." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/67669562991435293981.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Communications Engineering, ROC academic year 91 (2002–2003).
In this thesis, an image authentication and tampering localization technique based on embedding a digital watermark into the original image in the frequency domain is proposed. First, we apply and improve an algorithm to increase the robustness of the watermark against compression (e.g., JPEG), so that the extracted watermark is the same as the embedded one. The extracted watermark is then compared with the embedded watermark to determine whether the source of the watermarked image is correct; if the extracted watermark is incorrect, the watermarked image has been attacked. Finally, a line-based watermark alignment (LWA) method is proposed to find the attacked parts, and a morphology-based region growing (MRG) method determines the minimal region(s) in the image.
12

Liu, Li-Jen, and 劉立仁. "The Detection of Visual Secret ShadowCheating and Digital Image Tampering." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/27727060986799355416.

Abstract:
Master's thesis, National Chung Cheng University, Institute of Computer Science and Information Engineering, ROC academic year 96 (2007–2008).
Communication through computer networks has become popular in recent years; therefore, the security problems of various image applications become much more important when transferring images over the Internet. This thesis provides two methods: one for solving the cheating problem in visual cryptography and one for detecting tampering in digital images. Visual cryptography (VC) has drawn much attention for providing a service of secret communication. Basically, VC is the process of encoding a secret into several meaningless shares and later decoding the secret by superimposing all or some of the shares, without any computation involved. VC has been adopted to support practical applications such as image authentication, visual authentication, image hiding, and digital watermarking. Unfortunately, in many applications, VC has been shown to suffer from the "cheating problem," in which the disclosed secret image may be altered by malicious insiders, called "cheaters." As ubiquitous computing has matured, it has occurred to people in both academia and industry that research could benefit from computational VC, which introduces light-weight computation costs in the decoding phase. In Chapter 2, a simple scheme is proposed to conquer the cheating problem by providing share authentication. It is worth noting that the proposed scheme can identify with certainty whether cheating attacks have occurred, while other schemes with the same objective frequently provide only a vague answer. In addition, the proposed scheme effectively addresses the two main problems of VC: the inconvenience of managing meaningless shares and the challenge of precise alignment. In Chapter 3, a new fragile watermarking scheme is proposed. To detect any modification of the image, a hybrid scheme combining a two-pass logistic map with a Hamming code is designed. For security purposes, the two-pass logistic map scheme contains a private key to help resist the VQ attack even when the embedding scheme is block-wise independent. The experimental results show that the proposed scheme can detect and locate burst-bit errors and malicious tampering effectively, and it also successfully resists the VQ attack.
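The two ingredients named in this abstract, a key-seeded logistic-map bit stream and Hamming-code protection, can be sketched as follows. This shows a standard Hamming(7,4) code and a single logistic-map pass only; the thesis's two-pass, block-wise scheme is not reproduced here.

```python
def logistic_bits(key, n, r=3.99):
    """Key-seeded chaotic bit stream from the logistic map x <- r*x*(1-x)."""
    x = key  # private key: an initial value in (0, 1)
    bits = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 3,5,6,7 carry data)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed position of the error
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Watermark payload XORed with the chaotic stream, protected by the code.
payload = [1, 0, 1, 1]
key_stream = logistic_bits(0.3141, 4)
cipher = [a ^ b for a, b in zip(payload, key_stream)]
codeword = hamming74_encode(cipher)

corrupted = list(codeword)
corrupted[4] ^= 1  # a single "burst" bit error in the channel
recovered = [a ^ b for a, b in zip(hamming74_decode(corrupted), key_stream)]
assert recovered == payload
```

Without the private key, an attacker cannot regenerate the chaotic stream and thus cannot forge a consistent watermark, while the Hamming layer absorbs isolated bit errors.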
13

Yung-AnLi and 李永安. "Camera Model Recognition and Image Tampering Detection Based on Color Analysis." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/43948561182619980581.

14

Hung, Hsiao-Ying, and 洪筱盈. "A Wavelet Transform Based Digital Watermarking for Image Authentication and Tampering Detection." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/18594281413291160363.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Information Management, ROC academic year 92 (2003–2004).
The rapid expansion of the Internet and recent advances in digital technologies have sharply increased the availability of digital media. Consequently, watermarking has been developed as a suitable candidate for the ownership identification of digital data, as it allows the invisible insertion of information with imperceptible modification. This thesis investigates a wavelet-based semi-fragile watermarking technique for image authentication and tampering detection. Image protection is achieved by the insertion of a secret wavelet-tree-based binary image signature (WTS) after wavelet decomposition, followed by a quantization procedure. In addition, a tuning step is performed on the selected watermarked coefficients to increase the elasticity of the watermark. During verification, the original unmarked image is not needed for comparison; unauthorized tampering within the image is detected by comparing the possibly modified image's WTS with the authentic one. The proposed technique not only localizes the tampered position but also distinguishes incidental modification from malicious tampering. It remains unaffected by medium-quality JPEG compression and also effectively points out small image modifications.
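The signature-comparison idea can be sketched with a one-level Haar approximation band and a fixed-threshold binary signature. This stand-in is much cruder than the thesis's wavelet tree signature (WTS) with quantization and tuning, but it shows how signature disagreement coarsely localizes tampering.

```python
def haar_ll(image):
    """One-level Haar approximation band: the mean of each 2x2 block."""
    h, w = len(image), len(image[0])
    return [[(image[i][j] + image[i][j + 1] +
              image[i + 1][j] + image[i + 1][j + 1]) / 4.0
             for j in range(0, w, 2)] for i in range(0, h, 2)]

def signature(image, threshold=128):
    """Binary signature from the approximation band (simplified WTS stand-in)."""
    return [[1 if v >= threshold else 0 for v in row] for row in haar_ll(image)]

def tamper_map(sig_a, sig_b):
    """1 wherever the two signatures disagree: a coarse localization map."""
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(sig_a, sig_b)]

img = [[100] * 8 for _ in range(8)]
ref_sig = signature(img)

tampered = [row[:] for row in img]
for i in range(2, 4):          # overwrite one 2x2 block
    for j in range(4, 6):
        tampered[i][j] = 255

diff = tamper_map(ref_sig, signature(tampered))
assert sum(map(sum, diff)) == 1 and diff[1][2] == 1
```

Because the signature lives in the low-frequency band, mild compression barely moves the coefficients, while a real edit flips the corresponding bit.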
15

劉品均. "A Feature Fusion Model with Rank-Sparsity Decomposition for Image Tampering Localization." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/67165329995201795201.

Abstract:
Master's thesis, National Tsing Hua University, Department of Computer Science, ROC academic year 102 (2013–2014).
Nowadays, image editing software is so powerful and user-friendly that most people can easily create visually pleasing tampered images. Image forensics techniques have been developed for about two decades; however, most focus on a single tampering trace, and some assume the suspicious region is known a priori. The purpose of this work is to develop a feature fusion model that can utilize all the available traces and automatically localize the tampered region. We adopt an early fusion scheme so that all available features are considered simultaneously, and we propose to use Robust Principal Component Analysis (RPCA) to decompose a test image into authentic and tampered parts. We assume the authentic parts share similar feature behaviors, i.e., they are low-rank, while the tampered parts also share similar feature behaviors but are simultaneously sparse and low-rank. We further account for the spatial consistency of the detected tampered parts by using group sparsity. The experimental results demonstrate the effectiveness of the proposed method, which outperforms state-of-the-art methods in both synthetic and realistic cases.
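The low-rank-plus-sparse decomposition at the heart of RPCA can be illustrated with a toy rank-1 alternation: fit a rank-1 part by power iteration, then soft-threshold the residual into a sparse part. This is only a sketch of the decomposition idea, not the thesis's RPCA model with its group-sparsity term.

```python
# Toy rank-1 flavor of RPCA: alternately (1) fit a rank-1 approximation L
# to M - S by power iteration and (2) soft-threshold M - L to get S.

def rank1_approx(m, iters=60):
    rows, cols = len(m), len(m[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(m[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = sum(x * x for x in u) ** 0.5 or 1.0
        u = [x / nu for x in u]
        v = [sum(m[i][j] * u[i] for i in range(rows)) for j in range(cols)]
    return [[u[i] * v[j] for j in range(cols)] for i in range(rows)]

def soft(x, lam):
    return max(x - lam, 0.0) if x > 0 else min(x + lam, 0.0)

def decompose(m, lam=1.0, iters=25):
    rows, cols = len(m), len(m[0])
    s = [[0.0] * cols for _ in range(rows)]
    for _ in range(iters):
        residual = [[m[i][j] - s[i][j] for j in range(cols)] for i in range(rows)]
        low = rank1_approx(residual)
        s = [[soft(m[i][j] - low[i][j], lam) for j in range(cols)]
             for i in range(rows)]
    return low, s

# Rank-1 "authentic" part plus one large "tampered" outlier.
u0, v0 = [1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 2.0, 2.0]
m = [[u0[i] * v0[j] for j in range(4)] for i in range(4)]
m[1][2] += 10.0
low, sparse = decompose(m)
peak = max((abs(sparse[i][j]), (i, j)) for i in range(4) for j in range(4))
assert peak[1] == (1, 2)  # the outlier ends up in the sparse part
```

The entries that land in the sparse part flag candidate tampered positions; full RPCA does the same with a nuclear-norm term instead of a fixed rank.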
16

HSU, HUAI-FAN, and 許懷方. "The techniques of image tampering localization and recovery based on vector quantization." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/54802148269910452220.

Abstract:
Master's thesis, Aletheia University, Master's Program, Department of Computer Science and Information Engineering, ROC academic year 104 (2015–2016).
Nowadays, people rely on computer networks to deliver digital information more and more frequently. To protect transmitted data from illegal tampering, content authentication technology has gradually developed. Image authentication confirms the content integrity of digital images and prevents malicious modification, destruction, and unauthorized reproduction. The main idea is to embed particular authentication data and recovery information into the image itself. The authenticated image must maintain high quality, such that no difference between the embedded image and the original can be perceived with the naked eye. When a receiver receives an image, he can extract the embedded authentication data to detect whether the image has been tampered with; further, the embedded recovery information can be used to restore the content of tampered regions, reducing the probability of image retransmission and saving network bandwidth. This thesis presents three image authentication schemes based on the vector quantization (VQ) technique. The first regards VQ index values as the authentication data and recovery information and embeds them using traditional LSB hiding and the LSB matching function. However, misjudgments can occur because the authentication data of untampered blocks may be hidden in tampered blocks, causing inconsistency. To remedy this defect, the second scheme utilizes the property of grouped block intersection to enhance detection accuracy. Finally, we apply the Reed-Solomon (RS) code to propose the third scheme: its error-correction capability secures the authentication data against modification attacks and thus improves the restored image quality.
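The LSB matching function mentioned in this abstract can be sketched as follows: instead of overwriting the least significant bit, a non-matching pixel is moved by ±1 at random, which is statistically harder to detect than plain LSB replacement. The authentication bits here are placeholders, not actual VQ indices.

```python
import random

def lsb_match_embed(pixels, bits, rng=None):
    """LSB matching: when a pixel's LSB differs from the target bit,
    move the value by +1 or -1 at random instead of overwriting the LSB."""
    rng = rng or random.Random(0)
    out = list(pixels)
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            step = rng.choice((-1, 1))
            if out[i] == 0:
                step = 1          # clamp at the range boundaries
            elif out[i] == 255:
                step = -1
            out[i] += step
    return out

def lsb_extract(pixels, nbits):
    return [p % 2 for p in pixels[:nbits]]

auth_bits = [0, 1, 1, 0, 1, 0, 0, 1]   # e.g. VQ-index-derived authentication data
block = [17, 200, 34, 91, 120, 255, 0, 66]
stego = lsb_match_embed(block, auth_bits)
assert lsb_extract(stego, len(auth_bits)) == auth_bits
assert all(abs(a - b) <= 1 for a, b in zip(block, stego))
```

Extraction is identical to plain LSB reading, so the verifier needs no extra information beyond the embedding positions.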
17

Wang, Yin-Liang, and 王尹良. "Study of probability-based tampering authentication scheme for digital images." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/55w4hc.

Full text
Abstract:
Master's thesis
Ming Chuan University
Department of Information Management
97
In recent years, digital watermarking technology has been widely used for property rights protection and integrity authentication of digital images. Image integrity authentication is usually done with a fragile watermarking scheme, which embeds image feature values as an authentication message. To authenticate image integrity, the embedded authentication message is extracted and compared with the image feature values to identify whether the image has been tampered with and, if so, to locate the affected area. However, such authentication schemes may suffer from detection errors: a tampered area may be misjudged as untampered, or vice versa. Hence, methods that effectively reduce tampering detection errors have become an important research topic. This study integrates probability theory to improve the accuracy and precision of image tampering detection. The scheme includes two processes: embedding of an image authentication message and tampering detection. In the tampering detection process, in addition to identifying whether the image has been tampered with and locating the tampered area through the embedded authentication message, probability theory is employed to refine the previously obtained detection results and enhance authentication accuracy. The experimental results reveal that the proposed scheme performs better in terms of detection precision and authentication accuracy.
APA, Harvard, Vancouver, ISO, and other styles
18

"The Effect of Image Preprocessing Techniques and Varying JPEG Quality on the Identifiability of Digital Image Splicing Forgery." Master's thesis, 2015. http://hdl.handle.net/2286/R.I.29668.

Full text
Abstract:
Splicing of digital images is a powerful form of tampering which transplants regions from one image into another to create a composite image. When used as an artistic tool, this practice is harmless, but when composite images are used to create political associations or are submitted as evidence in the judicial system, they become more impactful. In such cases, the distinction between an authentic image and a tampered image becomes important. Many proposed approaches to image splicing detection follow the model of extracting features from an authentic and a tampered dataset and then classifying them using machine learning, with the goal of optimizing classification accuracy. This thesis approaches splicing detection from a slightly different perspective by choosing a modern splicing detection framework and examining a variety of preprocessing techniques along with their effect on classification accuracy. Preprocessing techniques explored include Joint Photographic Experts Group (JPEG) file-type block line blurring, image-level blurring, and image-level sharpening. Attention is also paid to preprocessing images adaptively based on the amount of higher-frequency content they contain. This thesis also recognizes an identified problem with a popular tampering evaluation dataset, where a mismatch in the number of JPEG processing iterations between the authentic and tampered sets creates an unfair statistical bias, leading to higher detection rates. Many modern approaches do not acknowledge this issue, but this thesis applies a quality factor equalization technique to reduce the bias. Additionally, this thesis artificially inserts a mismatch in JPEG processing iterations by varying amounts to determine its effect on detection rates.
Dissertation/Thesis
Masters Thesis Computer Science 2015
APA, Harvard, Vancouver, ISO, and other styles
19

Chiu, Shian-Ren, and 邱顯仁. "Study of Hierarchical Tampering Detection and Recovery Schemes for Digital Images." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/80485277939778786629.

Full text
Abstract:
Master's thesis
Ming Chuan University
Department of Information Management
98
In recent years, digital watermarking technology has been widely used for property rights protection and integrity authentication of digital images. Image integrity authentication is usually accomplished with a fragile watermarking scheme, which embeds image feature values as an authentication message. When image integrity needs to be authenticated, the embedded authentication message is extracted and compared with the image feature values to identify whether the image has been tampered with and to locate the affected area. However, such authentication schemes may result in false tamper detection. Besides correctly identifying tampered regions, a good tamper detection system should also be able to repair them. Yet many image tampering detection and recovery schemes cannot withstand certain attacks effectively, and some do not restore the image to a fine quality. Therefore, the first purpose of this study is to integrate probability theory to improve the accuracy and precision of image tampering detection. Since the more recovery information survives an attack, the higher the quality of the restored image, the second purpose is to increase the survival chance of the recovery information by spreading it out, thereby improving the quality of the restored image. Finally, the third purpose is to propose approximate recovery rules to further enhance the image quality. The experimental results show that the proposed scheme performs well in terms of detection accuracy and precision as well as the quality of the recovered image.
APA, Harvard, Vancouver, ISO, and other styles
20

AMERINI, IRENE. "Image Forensics: source identification and tampering detection." Doctoral thesis, 2010. http://hdl.handle.net/2158/520262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lin, Chun-Lin, and 林俊霖. "Modified Unsharp Masking Detection System Using Otsu Thresholding and Gray Code for Image Tampering Recognition." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/90844288704299915284.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
Department of Electrical Engineering
104
In recent years, wireless technologies have developed rapidly. People generally own more than one mobile device, such as a smart phone or tablet PC, so taking a picture has become a simple thing in our lives. Because digital image processing software is easy to use, research on digital image forensics has become popular worldwide. In this study, we focus on Unsharp Masking (USM) detection. The proposed detection system is based on Edge Perpendicular Binary Coding (EPBC). We use Otsu thresholding to enhance the performance of Canny edge detection, so that the accuracy of USM detection is increased. Moreover, the symmetric property of Gray encoding is used to reduce the number of feature points, which improves the execution time of the detection system. Experimental results show that the proposed method achieves faster execution and better USM detection accuracy for normal shooting environments.
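Otsu's method, used above to pick the edge-detection threshold automatically, selects the gray level that maximizes the between-class variance of the intensity histogram. A minimal sketch, illustrative rather than the thesis's implementation:

```python
def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method).
    `gray` is a flat list of 8-bit intensity values."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0          # cumulative intensity sum of the background class
    w_bg = 0              # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a strongly bimodal histogram the returned threshold falls between the two modes, which is what makes it a robust automatic choice for Canny's hysteresis limits.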
APA, Harvard, Vancouver, ISO, and other styles
22

Kao, Chia-Wen, and 高嘉文. "Zernike moment and Edge Features based Semi-fragile Watermark for Image Authentication with Tampering Localization." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/70254921405934794170.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
95
This thesis presents a novel content-based image authentication framework which embeds semi-fragile image features into the host image based on the wavelet transform. In this framework, two features of the target image are extracted from the low-frequency domain to generate two watermarks: Zernike moments for classifying intentional content modification, and Sobel edges for indicating the modified location. In particular, we design a systematic method for automatic order selection of the Zernike moments and, in order to tell whether the processing applied to the image is malicious or not, we also propose a weighted Euclidean distance based on its reconstruction process. An important advantage of our approach is that it can tolerate compression and noise to a certain extent while rejecting common tampering of the image such as rotation. Experimental results show that the framework can localize malicious tampering, with 8x8 blocks as the detection unit, while being highly robust to content-preserving processing such as JPEG compression with Q>=30 and Gaussian noise with variance<=20.
APA, Harvard, Vancouver, ISO, and other styles
23

Liao, Siang-Fu, and 廖祥富. "Tampering localization in JPEG images using DCT coefficient quantization effect and blocking artifacts." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/84762300279264125580.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
103
Recently, due to the rapid progress of the Internet and the speedup of digital file transmission, people can receive new information easily and conveniently. However, many forged pictures circulate on the Internet and may mislead users into making incorrect decisions. Therefore, a tampered image detection method is important and useful. The image format this study processes is JPEG, since it is the most commonly used format nowadays. Most previous work aimed to determine whether an image is tampered or not, but not to locate the forged region precisely. This study proposes a forgery detection method based on the EM algorithm, which can not only detect tampering in double-compressed JPEG images but also locate the tampered region accurately. The study brings forward a novel tampered image localization system. Based on an assumption of a Laplacian distribution of the unquantized AC DCT coefficients, and with the help of the Expectation Maximization (EM) algorithm, we estimate the primary quantization steps and a tampering probability map. Afterward, our system uses the primary quantization steps and blocking artifacts to produce an artifact probability map. Finally, we locate the tampered regions by integrating the tampering probability map and the artifact probability map. The test images used in the experiments include: (1) images whose tampered region comes from uncompressed images; (2) images whose tampered region comes from JPEG images; (3) images whose tampered region is located at different positions; and (4) the CASIA database. The experimental results demonstrate that the proposed work performs better than previous work for different kinds of forged images.
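The double-quantization fingerprint that the EM estimation exploits can be illustrated with a toy sketch: quantizing DCT coefficients with a primary step and then requantizing with a different step leaves periodic gaps in the coefficient histogram. The function names and step values below are illustrative assumptions, not the thesis's code:

```python
import math

def _q(x, step):
    """Round-half-away-from-zero quantization of a coefficient."""
    return int(math.copysign(math.floor(abs(x) / step + 0.5), x))

def double_quantize(coeffs, q1, q2):
    """First JPEG save: quantize with step q1 and dequantize; second save:
    requantize with step q2.  The histogram of the results has periodic
    empty bins that betray the primary step q1."""
    return [_q(_q(c, q1) * q1, q2) for c in coeffs]
```

With q1=5 and q2=2, for example, only values such as 0, 3, 5, 8, 10, ... can occur, while 1, 2 and 4 never do; an EM estimator can fit this periodic pattern to recover the primary quantization step.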
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Hsien-Chu, and 吳憲珠. "Copyright Protection Techniques for Digital Images and Their Applications to Tampering Proof and Recovery." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/79884372922665844243.

Full text
Abstract:
Doctoral dissertation
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
90
The goal of this dissertation is to develop improved image protection techniques to protect intellectual property rights and to solve problems such as tamper proofing and recovery. First of all, this dissertation presents six digital copyright protection schemes. The first scheme takes advantage of visual cryptography to construct a master share from a digital image together with an ownership share for each copyright. After stacking these two shares, the ownership information can be recovered directly by the human visual system without any computation. Our method does not change the host image, is invisible, and supports determining multiple ownerships. Besides, it has high security: pirates and attackers can neither detect the ownership information nor fake the ownership of the image. Our experimental results show that after JPEG lossy compression, blurring, noise adding, and cropping attacks, the ownership can still be robustly detected from the host image. The second scheme conducts a rule to combine the related information between a digital host image and a digital copyright image, both gray-scale images. The gray-scale host image and gray-scale copyright image are mapped to produce a time-stamped copyright key, which keeps the host image intact during copyright protection. In the mapping process, the block truncation coding (BTC) technique is applied to retrieve a block character value from the host image. The proposed scheme satisfies all the requirements of a mature image copyright protection technique: it can cast the digital copyright into a host image with invisibility, security, robustness, and multiple casting. Experiments show the host image survives lossy compression, rotating, sharpening, blurring, and cropping attacks, because the copyright can still be robustly detected from the attacked host image.
The third copyright protection methodology uses a quadtree to record the information of a protected host image and the related watermark. The quadtree supports both watermark casting and watermark verification. The experimental results show that the proposed scheme can robustly recover recognizable watermarks from a variety of attacked host images. The fourth intellectual property rights protection scheme is proposed to satisfy the modern requirements of protection techniques. For a host image, the relationship between the copyright information and the discrete cosine transformation (DCT) frequency coefficients can effectively cast and detect the copyright without modifying the host image. Besides, multiple pieces of copyright information can be managed without interfering with each other. The proposed scheme proves robust through experiments with various image attacks. The fifth protection scheme protects the intellectual property rights of color images. This methodology coalesces an image compression technique and a genetic algorithm to conduct a rule combining the related information between the protected host image and the digital copyright image, generating a time-stamped copyright key. The time-stamped copyright key serves as a witness when a copyright dispute occurs. The proposed scheme provides secure, invisible, and multiple-casting properties to cast and verify the copyright without modifying the host image. The experiments show that the proposed scheme can robustly reconstruct recognizable copyright images from a variety of modified host images. This dissertation also proposes a fractal-based watermarking scheme that efficiently protects the intellectual property rights of digital images. The main feature of fractal encoding is that it uses the self-similarity between the larger and smaller parts of an image to compress the image.
Our scheme uses this self-similar relationship in both the embedding and extraction of the watermark. As seen from the experimental results, the proposed scheme is more robust than other compression-based watermarking techniques. On the other hand, it is an important issue to detect tampering of digital images and to recover the images. We propose two schemes to solve this problem. First, with specific DCT frequency coefficients taken as characteristic values and embedded into the least significant bits of the image pixels, it is feasible to prove the image integrity. If the image is tampered with, the affected embedded characteristic values change accordingly and are detected. Then the corresponding original characteristic values can be acquired by the proposed recovery process to reconstruct the image. The experimental results show that the proposed scheme works well both in identifying the tampered spots in the image and in restoring the image. The second tamper proof and recovery scheme puts forward a technique within the JPEG still image compression standard. It can detect a tampered image and also recover the image. While the image is compressed by the JPEG compression technique, the proposed scheme incorporates an edge detection technique: the edges of the image are recognized before compression takes place, and the obtained edge characteristic is then embedded into the image right after compression. If the image is tampered with during transmission, the embedded edge characteristic can be used to detect the tampered areas, and these areas may be reconstructed by interpolation and the embedded edges. Therefore, the proposed scheme prevents receivers from being unaware of a tampered image during transmission.
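The share-stacking idea behind the visual cryptography scheme can be sketched with a (2, 2) visual secret sharing toy: each secret pixel expands into a pair of subpixels per share, and overlaying the transparencies amounts to a pixel-wise OR. This is a generic illustration of visual cryptography, not the dissertation's exact construction:

```python
import random

# (2, 2) visual secret sharing: each secret bit expands into a pair of
# subpixels per share (1 = black).  Each share alone looks random.
PATTERNS = [(0, 1), (1, 0)]

def make_shares(secret_bits, rng=random.Random(42)):
    share1, share2 = [], []
    for bit in secret_bits:
        pat = rng.choice(PATTERNS)
        share1.append(pat)
        if bit == 0:                          # white: identical patterns
            share2.append(pat)
        else:                                 # black: complementary patterns
            share2.append(tuple(1 - s for s in pat))
    return share1, share2

def stack(share1, share2):
    """Simulate overlaying transparencies: black wins (logical OR)."""
    return [tuple(a | b for a, b in zip(p1, p2))
            for p1, p2 in zip(share1, share2)]
```

Stacked black pixels come out fully black while white pixels come out half black, so the human visual system recovers the secret with no computation, exactly the property the first scheme exploits.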
APA, Harvard, Vancouver, ISO, and other styles
25

Chiu, Yen-Chung, and 邱彥中. "A Study on Digital Watermarking and Authentication of Images for Copyright Protection And Tampering Detection." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/89084112095535503685.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
92
Due to the advance of digital techniques, digital images may be copied or tampered with illegally. Therefore, it is important to develop methods to protect the copyright of digital images and verify their integrity. In this study, digital watermarking techniques for such purposes are proposed. In addition, the watermark must be robust, because users may apply various operations to a watermarked image. For color images, four methods are proposed to handle different attacks. First, a method based on creating peaks circularly and symmetrically in the DFT domain is proposed to resist rotation and scaling attacks. Second, a watermarking method utilizing the DFT and DCT domains is proposed to survive rotation and cropping attacks. Third, a method based on coding peaks with a combinatorial function in the DFT domain is proposed to work against print-and-scan operations. Fourth, a method based on an image rescaling technique is proposed to resist scaling and line-removal attacks. Finally, an authentication method for verifying the fidelity and integrity of color images is also proposed, which uses a key to generate authentication signals randomly. By checking the authentication signals in an image, tampered parts can be pointed out, and the authentication work can be conducted without extra signature data. Good experimental results prove the feasibility of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
26

Prasad, S. "Signal Processing Algorithms For Digital Image Forensics." Thesis, 2008. http://hdl.handle.net/2005/655.

Full text
Abstract:
Availability of digital cameras in various forms and user-friendly image editing software has enabled people to create and manipulate digital images easily. While image editing can be used for enhancing the quality of images, it can also be used to tamper with images for malicious purposes. In this context, it is important to question the originality of digital images. Digital image forensics deals with the development of algorithms and systems to detect tampering in digital images. This thesis presents some simple algorithms which can be used to detect tampering in digital images. Out of the various kinds of image forgeries possible, the discussion is restricted to photo compositing (photomontaging) and copy-paste forgeries. While creating a photomontage, it is very likely that one of the images needs to be resampled, and hence there will be an inconsistency in some of its underlying characteristics. So, detection of resampling in an image gives a clue to decide whether the image is tampered or not. Two pixel-domain techniques to detect resampling are presented. The first of them exploits the property of periodic zeros that occur in the second differences due to interpolation during resampling. It requires a special condition on the resampling factor to be met. The second technique is based on the periodic zero-crossings that occur in the second differences, which does not require any special condition on the resampling factor. This is an important property of resampling, and hence the robustness of this technique against mild counterattacks such as JPEG compression and additive noise has been studied. This property is used repeatedly throughout the thesis. It is a well-known fact that interpolation is essentially low-pass filtering. In the case of a photomontage image which consists of resampled and non-resampled portions, there will be an inconsistency in the high-frequency content of the image.
This can be demonstrated by simple high-pass filtering of the image. This fact has also been exploited to detect photomontaging. One approach involves performing block-wise DCT and reconstructing the image using only a few high-frequency coefficients. Another elegant approach is to decompose the image using wavelets and reconstruct the image using only the diagonal detail coefficients. In both cases mere visual inspection will reveal the forgery. The second part of the thesis is related to tamper detection in colour filter array (CFA) interpolated images. Digital cameras employ Bayer filters to efficiently capture the RGB components of an image. The outputs of the Bayer filter are sub-sampled versions of the R, G and B components, and they are completed by using demosaicing algorithms. It has been shown that demosaicing of the colour components is equivalent to resampling the image by a factor of two. Hence, CFA-interpolated images contain periodic zero-crossings in their second differences. Experimental demonstration of the presence of periodic zero-crossings in images captured using four digital cameras of different brands has been done. When such an image is tampered with, these periodic zero-crossings are destroyed, and hence the tampering can be detected. The utility of zero-crossings in detecting various kinds of forgeries on CFA-interpolated images is discussed. The next part of the thesis is a technique to detect copy-paste forgery in images. Generally, when an object or a portion of an image has to be erased, the easiest way is to copy a portion of background from the same image and paste it over the object. In such a case, there are two pixel-wise identical regions in the same image, which when detected can serve as a clue of tampering. The use of the Scale-Invariant Feature Transform (SIFT) in detecting this kind of forgery has been studied.
Also, certain modifications that can be done to the image in order to get SIFT working effectively have been proposed. Throughout the thesis, the importance of human intervention in making the final decision about the authenticity of an image is highlighted, and it is concluded that the techniques presented can effectively help the decision-making process.
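The periodic zero-crossing property of resampled signals described above is easy to reproduce: after linear interpolation by a factor of two, every interpolated sample is the exact average of its neighbours, so the second difference vanishes there, at every other position. A minimal one-dimensional sketch with illustrative names:

```python
def upsample2_linear(x):
    """Upsample a 1-D signal by 2 using linear interpolation."""
    out = []
    for a, b in zip(x, x[1:]):
        out.append(a)
        out.append((a + b) / 2)   # interpolated sample = neighbour average
    out.append(x[-1])
    return out

def second_difference(x):
    """Discrete second difference x[i-1] - 2*x[i] + x[i+1]."""
    return [x[i - 1] - 2 * x[i] + x[i + 1] for i in range(1, len(x) - 1)]
```

The zeros land exactly on the interpolated positions, so they recur with the resampling period; tampering a region destroys this periodicity, which is the detection cue.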
APA, Harvard, Vancouver, ISO, and other styles
27

Prasad, S. "Signal Processing Algorithms For Digital Image Forensics." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/655.

Full text
Abstract:
Availability of digital cameras in various forms and user-friendly image editing software has enabled people to create and manipulate digital images easily. While image editing can be used for enhancing the quality of images, it can also be used to tamper with images for malicious purposes. In this context, it is important to question the originality of digital images. Digital image forensics deals with the development of algorithms and systems to detect tampering in digital images. This thesis presents some simple algorithms which can be used to detect tampering in digital images. Out of the various kinds of image forgeries possible, the discussion is restricted to photo compositing (photomontaging) and copy-paste forgeries. While creating a photomontage, it is very likely that one of the images needs to be resampled, and hence there will be an inconsistency in some of its underlying characteristics. So, detection of resampling in an image gives a clue to decide whether the image is tampered or not. Two pixel-domain techniques to detect resampling are presented. The first of them exploits the property of periodic zeros that occur in the second differences due to interpolation during resampling. It requires a special condition on the resampling factor to be met. The second technique is based on the periodic zero-crossings that occur in the second differences, which does not require any special condition on the resampling factor. This is an important property of resampling, and hence the robustness of this technique against mild counterattacks such as JPEG compression and additive noise has been studied. This property is used repeatedly throughout the thesis. It is a well-known fact that interpolation is essentially low-pass filtering. In the case of a photomontage image which consists of resampled and non-resampled portions, there will be an inconsistency in the high-frequency content of the image.
This can be demonstrated by simple high-pass filtering of the image. This fact has also been exploited to detect photomontaging. One approach involves performing block-wise DCT and reconstructing the image using only a few high-frequency coefficients. Another elegant approach is to decompose the image using wavelets and reconstruct the image using only the diagonal detail coefficients. In both cases mere visual inspection will reveal the forgery. The second part of the thesis is related to tamper detection in colour filter array (CFA) interpolated images. Digital cameras employ Bayer filters to efficiently capture the RGB components of an image. The outputs of the Bayer filter are sub-sampled versions of the R, G and B components, and they are completed by using demosaicing algorithms. It has been shown that demosaicing of the colour components is equivalent to resampling the image by a factor of two. Hence, CFA-interpolated images contain periodic zero-crossings in their second differences. Experimental demonstration of the presence of periodic zero-crossings in images captured using four digital cameras of different brands has been done. When such an image is tampered with, these periodic zero-crossings are destroyed, and hence the tampering can be detected. The utility of zero-crossings in detecting various kinds of forgeries on CFA-interpolated images is discussed. The next part of the thesis is a technique to detect copy-paste forgery in images. Generally, when an object or a portion of an image has to be erased, the easiest way is to copy a portion of background from the same image and paste it over the object. In such a case, there are two pixel-wise identical regions in the same image, which when detected can serve as a clue of tampering. The use of the Scale-Invariant Feature Transform (SIFT) in detecting this kind of forgery has been studied.
Also, certain modifications that can be done to the image in order to get SIFT working effectively have been proposed. Throughout the thesis, the importance of human intervention in making the final decision about the authenticity of an image is highlighted, and it is concluded that the techniques presented can effectively help the decision-making process.
APA, Harvard, Vancouver, ISO, and other styles
28

Su, Ting-Lin, and 蘇亭霖. "Tampering Detection and Localization Techniques for Images/Videos Based on Inter-frame Correlation and Classification and Regression Tree." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/17355454633269047350.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
104
With the advent of high-quality digital video cameras and the rapid development of the Internet, people are living in a fully digital image/video generation. We watch a great deal of digital video on smart phones, websites, and community media every day. However, as more and more powerful digital editing software is developed (e.g., Adobe Photoshop, Video Studio, Power Director), tampering with digital images/videos becomes much easier and leaves hardly any traces. As a result, the truthfulness of image/video content can no longer be judged by the human eye. Detection of tampered video has therefore become a critical requirement to ensure the integrity of digital video data, and developing a video authentication system is an important research topic. In this thesis, we propose two tampered-video detection and localization techniques: a video-based technique using inter-frame correlation and an image-based technique using a classification and regression tree. The video-based technique detects temporal-domain forgery and locates duplicated frames, while the image-based technique detects spatial-domain forgery and locates the tampered region of a tampered frame. In our experiments, we use the Surrey University Library for Forensic Analysis (SULFA) database of realistic forged sequences to evaluate our video authentication techniques. The SULFA database has 10 original videos and 10 tampered videos, all in 40 FPS AVI format with a frame size of primarily 320x240. The tampered videos comprise 4 temporal-domain forgeries and 6 spatial-domain forgeries. Finally, we compare recent methods from the literature with our experimental results.
The results show that both methods presented in this thesis achieve high classification accuracy under relatively narrow limits and can also locate the tampered area of the video.
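The inter-frame correlation idea behind the video-based technique can be sketched by flagging frame pairs whose Pearson correlation is suspiciously close to one; the threshold and function names below are illustrative assumptions, not the thesis's parameters:

```python
def correlation(a, b):
    """Pearson correlation between two flattened, non-constant frames."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def find_duplicate_frames(frames, threshold=0.999):
    """Flag frame pairs whose correlation is near 1, a cue for frame duplication."""
    return [(i, j) for i in range(len(frames))
            for j in range(i + 1, len(frames))
            if correlation(frames[i], frames[j]) >= threshold]
```

In a real detector the comparison would be restricted to nearby frames and combined with further checks, since legitimately static scenes also produce highly correlated frames.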
APA, Harvard, Vancouver, ISO, and other styles
29

Antselevich, A. A., and А. А. Анцелевич. "Выявление признаков постобработки изображений : магистерская диссертация." Master's thesis, 2015. http://hdl.handle.net/10995/31590.

Full text
Abstract:
An algorithm which can determine whether a given digital photo has been tampered with, and generate a tampering map depicting the processed parts of the image, was analyzed in detail and implemented. The software was also optimized and thoroughly tested, and the operating modes giving the best accuracy were identified. The program can be run on an ordinary user PC.
APA, Harvard, Vancouver, ISO, and other styles
