Selected scientific literature on the topic "Détection de falsifications"
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Détection de falsifications".
Journal articles on the topic "Détection de falsifications"
Victorri-Vigneau, Caroline, Katia Larour, Dominique Simon, Jacques Pivette, and Pascale Jolliet. "Création et validation d’un outil de détection de la fraude par falsification d’ordonnance à partir des bases de données de l’Assurance Maladie". Therapies 64, no. 1 (January 2009): 27–31. http://dx.doi.org/10.2515/therapie/2009004.
Theses on the topic "Détection de falsifications"
Nguyen, Hoai phuong. "Certification de l'intégrité d'images numériques et de l'authenticité". Thesis, Reims, 2019. http://www.theses.fr/2019REIMS007/document.
Nowadays, with the advent of the Internet, the falsification of digital media such as digital images and videos is a security issue that cannot be ignored. It is of vital importance to certify the conformity and integrity of these media. This project, situated in the domain of digital forensics, proposes to answer this problem.
Abecidan, Rony. "Stratégies d'apprentissage robustes pour la détection de manipulation d'images". Electronic Thesis or Diss., Centrale Lille Institut, 2024. http://www.theses.fr/2024CLIL0025.
Today, it is easier than ever to manipulate images for unethical purposes. This practice is therefore increasingly prevalent in social networks and advertising. Malicious users can, for instance, generate convincing deep fakes in a few seconds to lure a naive public. Alternatively, they can communicate secretly by hiding illegal information inside images. Such abilities raise significant security concerns regarding misinformation and clandestine communications. The forensics community thus actively collaborates with law enforcement agencies worldwide to detect image manipulations. The most effective methodologies for image forensics rely heavily on convolutional neural networks meticulously trained on controlled databases. These databases are curated by researchers to serve specific purposes, resulting in a great disparity from the real-world datasets encountered by forensic practitioners. This data shift poses a clear challenge for practitioners, hindering the effectiveness of standardized forensics models when applied in practical situations. Through this thesis, we aim to improve the efficiency of forensics models in practical settings, designing strategies to mitigate the impact of data shift. We start by exploring the literature on out-of-distribution generalization to find existing strategies that already help practitioners build efficient forensic detectors in practice. Two main frameworks hold particular promise: implementing models inherently able to learn how to generalize to images coming from a new database, or constructing a representative training base that allows forensics models to generalize effectively to the scrutinized images. Both frameworks are covered in this manuscript. When faced with many unlabeled images to examine, domain adaptation strategies matching training and testing bases in latent spaces are designed to mitigate the data shifts encountered by practitioners.
Unfortunately, these strategies often fail in practice despite their theoretical efficiency, because they assume that the scrutinized images are balanced, an assumption that is unrealistic for forensic analysts, since suspects might, for instance, be entirely innocent. Additionally, such strategies are typically tested under the assumption that an appropriate training set was chosen from the beginning to facilitate adaptation to the new distribution. Trying to generalize from a few images is more realistic but inherently much more difficult. We deal precisely with this scenario in the second part of this thesis, gaining a deeper understanding of data shifts in digital image forensics. Exploring the influence of traditional processing operations on the statistical properties of developed images, we formulate several strategies to select or create training databases relevant to a small number of images under scrutiny. Our final contribution is a framework leveraging statistical properties of images to build relevant training sets for any testing set in image manipulation detection. This approach substantially improves the generalization of classical steganalysis detectors on the practical sets encountered by forensic analysts and can be extended to other forensic contexts.
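The domain adaptation strategies described in this abstract align training and testing sets in a latent feature space. As an illustration of one common alignment criterion, the maximum mean discrepancy (MMD), here is a minimal NumPy sketch; the feature dimensions, kernel bandwidth, and synthetic data are illustrative assumptions, not the thesis's actual setup:

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.05):
    # Pairwise RBF (Gaussian) kernel between two sets of feature vectors.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(source, target, gamma=0.05):
    """Biased estimate of the squared maximum mean discrepancy.
    Near zero when the two feature distributions are aligned."""
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 8))        # hypothetical training features
tgt_same = rng.normal(0.0, 1.0, size=(200, 8))   # same distribution
tgt_shift = rng.normal(2.0, 1.0, size=(200, 8))  # shifted distribution
print(mmd2(src, tgt_same))   # small: domains match
print(mmd2(src, tgt_shift))  # much larger: data shift detected
```

A detector trained with an MMD penalty between source and target features is pushed toward representations on which the two domains are indistinguishable, which is the matching-in-latent-space idea the abstract refers to.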
Mahfoudi, Gaël. "Authentication of Digital Images and Videos". Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0043.
Digital media are part of our day-to-day lives. After years of photojournalism, we have become used to considering them an objective testimony of the truth. But image and video retouching software is becoming ever more powerful and easier to use, allowing counterfeiters to produce highly realistic image forgeries. Consequently, the authenticity of digital media can no longer be taken for granted. Recent Anti-Money Laundering (AML) regulation introduced the notion of Know Your Customer (KYC), which requires financial institutions to verify their customers' identity. Many institutions prefer to perform this verification remotely, relying on a Remote Identity Verification (RIV) system. Such a system relies heavily on both digital images and videos, so the authentication of those media is essential. This thesis focuses on the authentication of images and videos in the context of a RIV system. After formally defining a RIV system, we study the various attacks that a counterfeiter may perform against it. We attempt to understand the challenges posed by each of those threats in order to propose relevant solutions. Our approaches are based on both image processing methods and statistical tests. We also propose new datasets to encourage research on challenges that are not yet well studied.
Iphar, Clément. "Formalisation d'un environnement d'analyse des données basé sur la détection d'anomalies pour l'évaluation de risques : Application à la connaissance de la situation maritime". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEM041/document.
At sea, various systems enable vessels to be aware of their environment; on the coast, those systems, such as radar, provide coastal states with a picture of the maritime traffic. One of those systems, the Automatic Identification System (AIS), is used for security purposes (anti-collision) and serves on-shore bodies as a control, surveillance, and decision-support tool. An assessment of AIS based on data quality dimensions is proposed, in which integrity is highlighted as the most important of those dimensions. As the structure of AIS data is complex, a list of integrity items has been established; their purpose is to assess the consistency of the data fields with the technical specifications of the system, within a single message, and across different messages. In addition, the use of additional data (such as fleet registers) provides further information with which to assess the truthfulness and genuineness of an AIS message and its sender. The system is weakly secured, and the presence of poor-quality data has been demonstrated, including errors in the messages, data falsification, and data spoofing, exemplified in concrete cases such as identity theft or voluntary vessel disappearances. In addition to message assessment, a set of threats has been identified and an assessment of the associated risks is proposed, allowing a better comprehension of the maritime situation and the establishment of links between the vulnerabilities caused by the weaknesses of the system and the maritime risks related to the safety and security of maritime navigation.
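The integrity items mentioned in this abstract check AIS fields for consistency with the system's technical specification. A minimal sketch of such within-message, field-level checks; the field names, the toy check list, and the example messages are illustrative assumptions (the range limits follow the public AIS specification, ITU-R M.1371), not the thesis's actual item list:

```python
# Field-level integrity checks for a (simplified) AIS position report.
# Range limits follow ITU-R M.1371; the check list itself is a toy example.

def check_ais_position(msg: dict) -> list:
    """Return the list of integrity violations found in one AIS message."""
    issues = []
    mmsi = msg.get("mmsi")
    if not (isinstance(mmsi, int) and 100_000_000 <= mmsi <= 999_999_999):
        issues.append("MMSI is not a 9-digit identifier")
    lat = msg.get("lat")
    if lat is None or not -90.0 <= lat <= 90.0:      # 91.0 encodes 'not available'
        issues.append("latitude out of range")
    lon = msg.get("lon")
    if lon is None or not -180.0 <= lon <= 180.0:    # 181.0 encodes 'not available'
        issues.append("longitude out of range")
    sog = msg.get("sog")
    if sog is not None and not 0.0 <= sog <= 102.2:  # knots; 102.3 = 'not available'
        issues.append("speed over ground out of range")
    return issues

ok_msg = {"mmsi": 227006760, "lat": 48.38, "lon": -4.49, "sog": 12.3}
bad_msg = {"mmsi": 42, "lat": 91.0, "lon": -4.49, "sog": 250.0}
print(check_ais_position(ok_msg))   # []
print(check_ais_position(bad_msg))  # three violations
```

The thesis's integrity items also cover consistency between messages and against external sources such as fleet registers; this sketch covers only the within-message level.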
Ehret, Thibaud. "Video denoising and applications". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN018.
This thesis studies the problem of video denoising. In the first part we focus on patch-based video denoising methods. We study in detail VBM3D, a popular video denoising method, to understand the mechanisms behind its success, and we present a real-time GPU implementation of this method. We then study the impact of patch search in video denoising, and in particular how searching for similar patches in the entire video, a global patch search, improves denoising quality. Finally, we propose a novel causal and recursive method called NL-Kalman that produces very good temporal consistency. In the second part, we look at the newer trend of deep learning for image and video denoising. We present one of the first neural network architectures, using temporal self-similarity, that is competitive with state-of-the-art patch-based video denoising methods. We also show that deep learning offers new opportunities; in particular, it allows denoising without knowing the noise model. We propose a framework for denoising videos that have passed through an unknown processing pipeline. We then look at the case of mosaicked data: we show that deep learning is undeniably superior to previous approaches for demosaicking, and we propose a novel ground-truth-free training process for demosaicking based on multiple raw acquisitions, which allows training for real-world applications. In the third part we present different applications taking advantage of mechanisms similar to those studied for denoising. The first problem studied is anomaly detection; we show that this problem can be reduced to detecting anomalies in noise. We also look at forgery detection, and in particular copy-paste forgeries. Just as for patch-based denoising, solving this problem requires searching for similar patches. To that end, we conduct an in-depth study of PatchMatch and see how it can be used for detecting forgeries. We also present an efficient method based on sparse patch matching.
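Copy-paste (copy-move) forgery detection, like patch-based denoising, boils down to finding similar patches. A toy NumPy sketch of that idea using an exhaustive nearest-patch search and an offset histogram; PatchMatch-style methods approximate this search efficiently, and the image, patch sizes, and threshold here are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def copy_move_offsets(img, patch=4, step=2, tol=1e-6):
    """For each patch, find its nearest neighbour elsewhere in the image and,
    when the match is (near-)exact, record the displacement between the two.
    A dominant non-zero offset is evidence of a copy-paste forgery."""
    h, w = img.shape
    coords, patches = [], []
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            coords.append((y, x))
            patches.append(img[y:y + patch, x:x + patch].ravel())
    patches = np.asarray(patches, dtype=float)
    offsets = Counter()
    for i, p in enumerate(patches):
        d = ((patches - p) ** 2).sum(axis=1)
        d[i] = np.inf                        # ignore the trivial self-match
        j = int(np.argmin(d))
        if d[j] < tol:                       # duplicated content found
            dy = coords[j][0] - coords[i][0]
            dx = coords[j][1] - coords[i][1]
            offsets[(dy, dx)] += 1
    return offsets

rng = np.random.default_rng(1)
img = rng.random((32, 32))
img[20:28, 20:28] = img[4:12, 4:12]          # simulate a copy-move forgery
votes = copy_move_offsets(img)
print(votes.most_common(2))                  # dominant offsets +-(16, 16)
```

On a clean image the offset histogram stays empty or diffuse; a cloned region concentrates many votes on one displacement, which is the cue copy-move detectors exploit.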
Diallo, Boubacar. "Mesure de l'intégrité d'une image : des modèles physiques aux modèles d'apprentissage profond". Thesis, Poitiers, 2020. http://www.theses.fr/2020POIT2293.
Digital images have become a powerful and effective visual communication tool for delivering messages, spreading ideas, and proving facts. The emergence of smartphones, in a wide variety of brands and models, facilitates the creation of new visual content and its dissemination on social networks and image-sharing platforms. Related to this phenomenon, and helped by the availability and ease of use of image manipulation software, many issues have arisen, ranging from the distribution of illegal content to copyright infringement. The reliability of digital images is questioned by everyday and expert users alike, such as courts or police investigators. A well-known and widespread example is "fake news", which often involves the malicious use of digital images. Many researchers in the field of image forensics have taken up the scientific challenges associated with image manipulation. Many methods with interesting performance have been developed, based on automatic image processing and, more recently, on deep learning. Despite the variety of techniques offered, performance is bound to specific conditions and remains vulnerable to relatively simple malicious attacks. Indeed, images collected on the Internet impose many constraints on algorithms, calling into question many existing integrity verification techniques. Two main peculiarities must be taken into account when detecting a falsification: one is the lack of information about the pristine image acquisition, the other is the high probability of automatic transformations applied by image-sharing platforms, such as lossy compression or resizing. In this thesis, we focus on several of these image forensic challenges, including camera model identification and image tampering detection. After reviewing the state of the art in the field, we propose a first data-driven method for identifying camera models.
We use deep learning techniques based on convolutional neural networks (CNNs) and develop a learning strategy that considers the quality of the input data versus the applied transformation. A family of CNN networks has been designed to learn the characteristics of the camera model directly from a collection of images undergoing the same transformations as those commonly applied on the Internet. Our experiments focused on lossy compression, because it is the most common type of post-processing on the Internet. The proposed approach therefore provides a solution for camera model identification that is robust to compression. The performance achieved by our camera model detection approach is also used and adapted for image tampering detection and localization. The results obtained underline the robustness of our proposals for camera model identification and image forgery detection.
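The training strategy summarized above exposes the network to the same lossy transformations images undergo online. As a dependency-free stand-in for recompressing training images with a real JPEG codec, the sketch below quantizes 8×8 block DCT coefficients at a quality-dependent step, which is JPEG's core lossy operation; the quality-to-step mapping and the synthetic image are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows = frequencies).
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0] /= np.sqrt(2)
    return c * np.sqrt(2.0 / n)

def jpeg_like_augment(img, quality):
    """Quantize 8x8 block DCT coefficients, JPEG's core lossy step.
    'quality' in (0, 100]: lower quality -> coarser quantization.
    A stand-in for recompressing training images with a real codec."""
    C = dct_matrix()
    step = (100 - quality) / 2.0 + 1.0       # illustrative quality-to-step map
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h - h % 8, 8):
        for x in range(0, w - w % 8, 8):
            block = out[y:y + 8, x:x + 8]
            coef = C @ block @ C.T                   # forward 2-D DCT
            coef = np.round(coef / step) * step      # lossy quantization
            out[y:y + 8, x:x + 8] = C.T @ coef @ C   # inverse 2-D DCT
    return np.clip(out, 0, 255)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(float)
aug = jpeg_like_augment(img, quality=int(rng.integers(60, 96)))
print(np.abs(aug - img).mean())  # nonzero: information was discarded
```

Applying such a transformation at a random quality to each training image lets the network see compression artifacts during learning, which is the intent of the compression-robust strategy described in the abstract.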