Academic literature on the topic 'Compressive camera identification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Compressive camera identification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Compressive camera identification"

1. Liu, D. S., Z. H. Chen, C. Y. Tsai, R. J. Ye, and K. T. Yu. "Compressive Mechanical Property Analysis of EVA Foam: Its Buffering Effects at Different Impact Velocities." Journal of Mechanics 33, no. 4 (September 22, 2016): 435–41. http://dx.doi.org/10.1017/jmech.2016.98.

Abstract:
EVA foams, like all other polymers, exhibit strain-rate effects and hysteresis. However, currently available approaches for predicting the mechanical response of polymeric foam subjected to an arbitrarily imposed loading history and strain-rate effect are highly limited. In particular, strain rates in the intermediate domain (between 10⁰ and 10² s⁻¹) are extremely difficult to study. The use of data generated through the drop tower technique for implementation in constitutive equations or numerical models has not been considered in past studies. In this study, an experiment including a quasi-static compression test and drop impact tests with a high-speed camera was conducted. An inverse analysis technique combined with a finite element model for material parameter identification was developed to determine the stress–strain behavior of foam at different specific strain rates. It was used in this study to simulate multiple loading and unloading cycles on foam specimens, and the results were compared with experimental measurements.
2. Manisha, Chang-Tsun Li, Xufeng Lin, and Karunakar A. Kotegar. "Beyond PRNU: Learning Robust Device-Specific Fingerprint for Source Camera Identification." Sensors 22, no. 20 (October 17, 2022): 7871. http://dx.doi.org/10.3390/s22207871.

Abstract:
Source-camera identification tools assist image forensics investigators to associate an image with a camera. The Photo Response Non-Uniformity (PRNU) noise pattern caused by sensor imperfections has been proven to be an effective way to identify the source camera. However, the PRNU is susceptible to camera settings, scene details, image processing operations (e.g., simple low-pass filtering or JPEG compression), and counter-forensic attacks. A forensic investigator unaware of malicious counter-forensic attacks or incidental image manipulation is at risk of being misled. The spatial synchronization requirement during the matching of two PRNUs also represents a major limitation of the PRNU. To address the PRNU’s fragility issue, in recent years, deep learning-based data-driven approaches have been developed to identify source-camera models. However, the source information learned by existing deep learning models is not able to distinguish individual cameras of the same model. In light of the vulnerabilities of the PRNU fingerprint and data-driven techniques, in this paper, we bring to light the existence of a new robust data-driven device-specific fingerprint in digital images that is capable of identifying individual cameras of the same model in practical forensic scenarios. We discover that the new device fingerprint is location-independent, stochastic, and globally available, which resolves the spatial synchronization issue. Unlike the PRNU, which resides in the high-frequency band, the new device fingerprint is extracted from the low- and mid-frequency bands, which resolves the fragility issue that the PRNU is unable to contend with. Our experiments on various datasets also demonstrate that the new fingerprint is highly resilient to image manipulations such as rotation, gamma correction, and aggressive JPEG compression.
3. Kachhava, Rajendra, Vivek Srivastava, Rajkumar Jain, and Ekta Chaturvedi. "Security System and Surveillance Using Real Time Object Tracking and Multiple Cameras." Advanced Materials Research 403-408 (November 2011): 4968–73. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.4968.

Abstract:
In this paper we propose a real-time multiple-camera tracking system for surveillance and security. Such tracking is used extensively in computer vision applications such as video surveillance, authentication systems, robotics, the pre-stage of MPEG-4 image compression, and gesture-based user interfaces. The key components of tracking for a surveillance system are feature extraction, background subtraction, and identification of the extracted object. Video surveillance, object detection, and tracking have drawn increased interest in recent years. Object tracking can be understood as the problem of finding an object's path (i.e., trajectory), and can be defined as a procedure to identify the different positions of the object in each frame of a video. Based on previous work on single-object detection with a single stationary camera, we extend the concept to the tracking of multiple objects under multiple cameras, and also build a multiple-camera security system to track a person in an indoor environment. The present study mainly aims to provide security and to detect moving objects in real-time video sequences and live video streams, based on a robust algorithm for human body detection and tracking in videos created with the support of multiple cameras.
4. Goljan, Miroslav, Mo Chen, Pedro Comesaña, and Jessica Fridrich. "Effect of Compression on Sensor-Fingerprint Based Camera Identification." Electronic Imaging 2016, no. 8 (February 14, 2016): 1–10. http://dx.doi.org/10.2352/issn.2470-1173.2016.8.mwsf-086.

5. de Roos, Lars, and Zeno Geradts. "Factors that Influence PRNU-Based Camera-Identification via Videos." Journal of Imaging 7, no. 1 (January 13, 2021): 8. http://dx.doi.org/10.3390/jimaging7010008.

Abstract:
The Photo Response Non-Uniformity pattern (PRNU-pattern) can be used to identify the source of images or to indicate whether images have been made with the same camera. This pattern is also recognized as the "fingerprint" of a camera since it is a highly characteristic feature. However, this pattern, like a real fingerprint, is sensitive to many different influences, e.g., the influence of camera settings. In this study, several previously investigated factors were noted, after which three were selected for further investigation. The computation and comparison methods are evaluated under variation of the following factors: resolution, length of the video, and compression. For all three studies, images were taken with a single iPhone 6. It was found that a higher resolution ensures a more reliable comparison, and that the length of a (reference) video should always be as long as possible to obtain a better PRNU-pattern. It also became clear that compression (in this study, the compression that Snapchat uses) has a negative effect on the correlation value. Many different factors therefore play a part when comparing videos. Due to the large number of controllable and non-controllable factors that influence the PRNU-pattern, it is of great importance that further research is carried out to gain clarity on the influences that individual factors exert.
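Several of the entries above evaluate PRNU-based source camera identification. The underlying workflow can be sketched as follows; this is an idealized illustration only, not any cited paper's implementation — the mean-filter denoiser, synthetic noise-free flat-field shots, and plain normalized cross-correlation are simplifications of the wavelet denoising and PCE statistics used in practice:

```python
import numpy as np

def noise_residual(img, k=3):
    # Approximate noise residual: image minus a k x k mean-filtered copy.
    # (The PRNU literature typically uses a wavelet denoiser instead.)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - smooth / (k * k)

def estimate_fingerprint(images):
    # Average residuals over many images from one camera: scene content
    # averages out while the fixed sensor pattern accumulates.
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    # Normalized cross-correlation between a residual and a fingerprint.
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy demo: a synthetic multiplicative sensor pattern on flat-field shots.
rng = np.random.default_rng(0)
prnu = rng.normal(0.0, 0.05, (64, 64))                 # the camera "fingerprint"
shots = [rng.uniform(0.3, 0.7) * (1.0 + prnu) for _ in range(20)]
K = estimate_fingerprint(shots)

same = 0.5 * (1.0 + prnu)                              # image from the same camera
other = 0.5 * (1.0 + rng.normal(0.0, 0.05, (64, 64)))  # image from another camera
print(ncc(noise_residual(same), K))   # high score for the same camera
print(ncc(noise_residual(other), K))  # near-zero score for a different camera
```

In this idealized setting the same-camera score approaches 1 and the other-camera score stays near 0; the compression and resolution effects studied in the papers above degrade exactly this correlation.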
6. Cheng, Tianhao, Hao Hu, Hitoshi Kobayashi, and Hiroshi Onoda. "Visual Identification-Based Spark Recognition System." International Journal of Automation Technology 16, no. 6 (November 5, 2022): 766–72. http://dx.doi.org/10.20965/ijat.2022.p0766.

Abstract:
With the development of artificial intelligence, image recognition has seen wider adoption. Here, a novel image recognition system is proposed for the detection of fires caused by the compression of lithium-ion batteries at recycling facilities. The proposed system, SparkEye, uses a deep learning method and focuses on the early detection of fires as sparks; it is combined with a sprinkler system to minimize fire-related losses at affected facilities. Approximately 30,000 images (resolution 800 × 600 pixels) were used to train the system to >90% detection accuracy. To meet the dust-control demands of recycling facilities, air- and frame-based camera protection methods were incorporated into the system. Based on test data and realistic workplace feedback, the best placements for the SparkEye fire detectors were crushers, conveyors, and garbage pits.
7. Lee, Geun-Young, Sung-Hak Lee, and Hyuk-Ju Kwon. "DCT-Based HDR Exposure Fusion Using Multiexposed Image Sensors." Journal of Sensors 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/2837970.

Abstract:
It is difficult to apply existing exposure methods to a resource-constrained platform. Their pyramidal image processing and quality measures for interesting areas that need to be preserved require a lot of time and memory. The work presented in this paper is a DCT-based HDR exposure fusion using multiexposed image sensors. In particular, it uses the quantization process in JPEG encoding as a measurement of image quality such that the fusion process can be included in the DCT-based compression baseline. To enhance global image luminance, a Gauss error function based on camera characteristics is presented. In the simulation, the proposed method yields good quality images, which balance naturalness and object identification. This method also requires less time and memory. This qualifies our technique for use in resource-constrained platforms.
8. Rouhi, Rahimeh, Flavio Bertini, and Danilo Montesi. "No Matter What Images You Share, You Can Probably Be Fingerprinted Anyway." Journal of Imaging 7, no. 2 (February 11, 2021): 33. http://dx.doi.org/10.3390/jimaging7020033.

Abstract:
The popularity of social networks (SNs), amplified by the ever-increasing use of smartphones, has intensified online cybercrimes. This trend has accelerated digital forensics through SNs. One of the areas that has received lots of attention is camera fingerprinting, through which each smartphone is uniquely characterized. Hence, in this paper, we compare classification-based methods to achieve smartphone identification (SI) and user profile linking (UPL) within the same or across different SNs, which can provide investigators with significant clues. We validate the proposed methods by two datasets, our dataset and the VISION dataset, both including original and shared images on the SN platforms such as Google Currents, Facebook, WhatsApp, and Telegram. The obtained results show that k-medoids achieves the best results compared with k-means, hierarchical approaches, and different models of convolutional neural network (CNN) in the classification of the images. The results show that k-medoids provides the values of F1-measure up to 0.91% for SI and UPL tasks. Moreover, the results prove the effectiveness of the methods which tackle the loss of image details through the compression process on the SNs, even for the images from the same model of smartphones. An important outcome of our work is presenting the inter-layer UPL task, which is more desirable in digital investigations as it can link user profiles on different SNs.
9. Ahmed, Zayneb, Abir Jaafar Hussain, Wasiq Khan, Thar Baker, Haya Al-Askar, Janet Lunn, Raghad Al-Shabandar, Dhiya Al-Jumeily, and Panos Liatsis. "Lossy and Lossless Video Frame Compression: A Novel Approach for High-Temporal Video Data Analytics." Remote Sensing 12, no. 6 (March 20, 2020): 1004. http://dx.doi.org/10.3390/rs12061004.

Abstract:
The smart city concept has attracted high research attention in recent years within diverse application domains, such as crime suspect identification, border security, transportation, aerospace, and so on. Specific focus has been on increased automation using data driven approaches, while leveraging remote sensing and real-time streaming of heterogenous data from various resources, including unmanned aerial vehicles, surveillance cameras, and low-earth-orbit satellites. One of the core challenges in exploitation of such high temporal data streams, specifically videos, is the trade-off between the quality of video streaming and limited transmission bandwidth. An optimal compromise is needed between video quality and subsequently, recognition and understanding and efficient processing of large amounts of video data. This research proposes a novel unified approach to lossy and lossless video frame compression, which is beneficial for the autonomous processing and enhanced representation of high-resolution video data in various domains. The proposed fast block matching motion estimation technique, namely mean predictive block matching, is based on the principle that general motion in any video frame is usually coherent. This coherent nature of the video frames dictates a high probability of a macroblock having the same direction of motion as the macroblocks surrounding it. The technique employs the partial distortion elimination algorithm to condense the exploration time, where partial summation of the matching distortion between the current macroblock and its contender ones will be used, when the matching distortion surpasses the current lowest error. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art techniques, including the four step search, three step search, diamond search, and new three step search.
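The block-matching motion estimation described in the abstract above (a motion vector found by minimizing a distortion measure, with partial distortion elimination to cut candidate evaluation short) can be sketched generically. This is an exhaustive-search illustration under assumed parameters, not the authors' mean predictive block matching algorithm:

```python
import numpy as np

def sad_block_match(prev, cur, top, left, bsize=8, radius=4):
    """Motion vector for one macroblock of `cur`, found by minimizing the
    sum of absolute differences (SAD) against `prev` over a search window.
    Partial distortion elimination: stop accumulating a candidate's SAD
    row by row as soon as it exceeds the best SAD found so far."""
    block = cur[top:top + bsize, left:left + bsize].astype(np.int64)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue  # candidate block falls outside the frame
            cand = prev[y:y + bsize, x:x + bsize].astype(np.int64)
            sad = 0
            for row in range(bsize):  # partial distortion elimination
                sad += int(np.abs(block[row] - cand[row]).sum())
                if best is not None and sad >= best:
                    break
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# Toy demo: the current frame is the previous frame shifted down 2, right 1,
# so the block at (16, 16) best matches at displacement (-2, -1).
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64))
cur = np.roll(prev, (2, 1), axis=(0, 1))
print(sad_block_match(prev, cur, 16, 16))  # → (-2, -1)
```

The fast searches compared in the paper (three step, four step, diamond) prune this candidate grid rather than scanning it exhaustively; the early-exit row loop is what the partial distortion elimination principle refers to.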
10. Mazumdar, Pramit, Kamal Lamichhane, Marco Carli, and Federica Battisti. "A Feature Integrated Saliency Estimation Model for Omnidirectional Immersive Images." Electronics 8, no. 12 (December 13, 2019): 1538. http://dx.doi.org/10.3390/electronics8121538.

Abstract:
Omnidirectional, or 360°, cameras are able to capture the surrounding space, thus providing an immersive experience when the acquired data is viewed using head mounted displays. Such an immersive experience inherently generates an illusion of being in a virtual environment. The popularity of 360° media has been growing in recent years. However, due to the large amount of data, processing and transmission pose several challenges. To this aim, efforts are being devoted to the identification of regions that can be used for compressing 360° images while guaranteeing the immersive feeling. In this contribution, we present a saliency estimation model that considers the spherical properties of the images. The proposed approach first divides the 360° image into multiple patches that replicate the positions (viewports) looked at by a subject while viewing a 360° image using a head mounted display. Next, a set of low-level features able to depict various properties of an image scene is extracted from each patch. The extracted features are combined to estimate the 360° saliency map. Finally, bias induced during image exploration and illumination variation is fine-tuned for estimating the final saliency map. The proposed method is evaluated using a benchmark 360° image dataset and is compared with two baselines and eight state-of-the-art approaches for saliency estimation. The obtained results show that the proposed model outperforms existing saliency estimation models.

Dissertations / Theses on the topic "Compressive camera identification"

1. Valsesia, Diego. "Imaging using random projections: compression, communication, camera identification." Doctoral thesis, Politecnico di Torino, 2016. http://hdl.handle.net/11583/2642257.

Abstract:
This thesis discusses problems related to imaging applications, focusing on image compression, transmission, and source camera identification. Random projections and compressed sensing are shown to be effective techniques for addressing such problems. We show that random projections can be used to compress multispectral images in a more rate-efficient manner than previously known, with a low-complexity scheme based on compressed sensing that is suitable for use on board spacecraft. Image measurements obtained through random projections possess peculiar properties that allow us to devise novel schemes to mitigate channel unreliability when the data have to be transmitted over unreliable channels. This is shown to be particularly useful for signal acquisition and transmission on low-power devices. Finally, quantized random projections implement low-dimensionality signal embeddings that, by approximately preserving the geometry of the original signal space, allow us to process high volumes of information with reduced complexity. We show that they can be used to address the problem of compressing camera sensor fingerprints, significantly improving over state-of-the-art methods in terms of storage requirements and matching speed. These improvements enable novel applications such as large-scale camera-based image retrieval, for which we study efficient search algorithms that exploit properties of random projections.
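The fingerprint-compression idea in this thesis rests on quantized random projections approximately preserving the geometry of the signal space. A minimal sketch of a 1-bit random-projection embedding (illustrative dimensions and data, not the thesis's actual scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 4096, 256   # fingerprint length; number of projections (bits stored)

fp = rng.normal(size=d)          # hypothetical camera fingerprint
same = fp + rng.normal(size=d)   # residual correlated with fp (45° apart)
diff = rng.normal(size=d)        # residual from an unrelated camera

P = rng.normal(size=(m, d))      # shared random projection matrix

def embed(x):
    # 1-bit quantized random projections: m bits instead of d floats.
    return P @ x > 0

def hamming(a, b):
    # Fraction of differing bits; approximates angle(x, y) / pi, so
    # bit comparisons stand in for correlating full fingerprints.
    return float(np.mean(a != b))

print(hamming(embed(fp), embed(same)))   # ≈ 0.25 for the correlated pair
print(hamming(embed(fp), embed(diff)))   # ≈ 0.5 for the unrelated pair
```

Storing m bits per fingerprint instead of d floats is what yields the storage and matching-speed gains the abstract describes; Hamming distances on packed bits are also much faster to compute than floating-point correlations.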
2. Choi, Kai-san. "Automatic source camera identification by lens aberration and JPEG compression statistics." E-thesis, The University of Hong Kong, 2006. http://sunzi.lib.hku.hk/hkuto/record/B38902345.

3. Choi, Kai-san (蔡啟新). "Automatic source camera identification by lens aberration and JPEG compression statistics." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B38902345.


Book chapters on the topic "Compressive camera identification"

1. Mandelli, Sara, Nicolò Bonettini, and Paolo Bestagini. "Source Camera Model Identification." In Multimedia Forensics, 133–73. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_7.

Abstract:
Every camera model acquires images in a slightly different way. This may be due to differences in lenses and sensors. Alternatively, it may be due to the way each vendor applies characteristic image processing operations, from white balancing to compression.
2. Zhang, Guowen, Bo Wang, and Yabin Li. "Cross-Class and Inter-class Alignment Based Camera Source Identification for Re-compression Images." In Lecture Notes in Computer Science, 563–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71598-8_50.

3. Huang, Yongfeng, Qiang Liu, and Cairong Yan. "Research on object re-identification with compressive sensing in multi-camera systems." In Automotive, Mechanical and Electrical Engineering, 447–50. CRC Press, 2017. http://dx.doi.org/10.1201/9781315210445-82.

4. Lin, Tzuhuan, and Yu-Ru Wang. "Forensic Camera Identification in Social Networks via Camera Fingerprint." In Technologies to Advance Automation in Forensic Science and Criminal Investigation, 148–60. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8386-9.ch008.

Abstract:
Image-related crimes create an urgent demand for tracing the origin of digital images. The breakthrough was a passive detection method via photo response non-uniformity (PRNU) analysis proposed by Lukáš et al. Recently, digital images are often shot with handheld devices (such as smartphones) and transmitted using social media (such as LINE). Most of the images are distorted (e.g., compressed and resized) during transmission. Previous studies have focused less on the impact of compression during transmission through social networks. Thirty-one different Apple mobile phones were used to capture digital images in the experiment. Images were uploaded to a photo album via the LINE software and then downloaded. The modified signed peak correlation energy (MSPCE) statistic is used to evaluate the correlation between the PRNU values of the disputed images and the pattern noise of the experimental devices. Experimental results show that the PRNU analysis method can effectively trace the source device using distorted images that were compressed and resized during transmission in LINE.

Conference papers on the topic "Compressive camera identification"

1. Choi, Kai San, Edmund Y. Lam, and Kenneth K. Y. Wong. "Source Camera Identification by JPEG Compression Statistics for Image Forensics." In TENCON 2006 - 2006 IEEE Region 10 Conference. IEEE, 2006. http://dx.doi.org/10.1109/tencon.2006.343943.

2. Chuang, Wei-Hong, Hui Su, and Min Wu. "Exploring compression effects for improved source camera identification using strongly compressed video." In 2011 18th IEEE International Conference on Image Processing (ICIP 2011). IEEE, 2011. http://dx.doi.org/10.1109/icip.2011.6115855.

3. Goeller, Adrien, Jean-Luc Dion, Thierry Soriano, and Bernard Roux. "Real Time Dynamic System Stochastic Identification in Video Capture for Data Compression, Image Interpolation, Prediction, and Augmented Reality." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-47398.

Abstract:
In computer vision, cameras are becoming ever more accurate, fast, and 3D-capable. These ongoing evolutions generate more data, which is an issue for users who must store it with standard compression, for example to record proof in case of defective manufactured products. The aim of this work is to develop a specific solution adapted to vision systems that follow a known scenario and can be described by dynamic models. In this framework, Kalman filters are used for data compression, observable-variable prediction, and augmented reality. The developed concepts are tested with a scenario of a ruler on a table. The experiment aims to check the data compression level, the estimation of the ruler's friction coefficient, and the prediction of the stop position.
4. Pillers, Roy A., and Theodore J. Heindel. "Stereographic Backlit Imaging and Bubble Identification From a Plunging Jet With Floor Interactions." In ASME 2021 Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/fedsm2021-65313.

Abstract:
Plunging jets have been extensively studied for their relatively simple set-up but complex multiphase interactions. This phenomenon includes gas carry-under and mixing, which occurs when shear effects between the plunging liquid jet and surrounding gas are sufficient to entrain gas at the impact site. Previous investigations typically assume the floor has an infinite depth and neglect compressive effects caused by the jet interacting with the catch tank floor. While this assumption is ideal for breaking waves in the middle of the ocean, many other applications have to contend with floor effects. These include waterfalls, wastewater treatment, dams, fish farms, mineral separation, and molten metal pouring. It is hypothesized that floor interactions will significantly affect the multiphase flow hydrodynamics, especially in places where the uninhibited jet would approach or pass the floor region. Using a large catch tank with an adjustable floor region designed to hold a constant water level, data were collected using high-speed backlit stereographic imaging to capture and compare the effects of three separate tank depths with those found using an infinite pool assumption. To identify bubbles in each stereographic projection, a uniform bubble recognition procedure was developed that was used across all data sets. This allowed for the automated identification of bubble entrainment regions, which could be compared with different flow conditions. Preliminary results are inconclusive as to the effects of the floor region on the bubble plume dynamics; however, the results showed consistent measurements between trials and the two stereographic cameras, implying the time variation of the jet dynamics was the primary source of uncertainty in the results and not the identification procedure. Therefore, the identification methods have provided a method for plume volume and shape estimation, which will be used in future studies using 3D imaging techniques.