
Journal articles on the topic 'Source image'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Source image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Guanjie, Zehua Ma, Chang Liu, Xi Yang, Han Fang, Weiming Zhang, and Nenghai Yu. "MuST: Robust Image Watermarking for Multi-Source Tracing." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5364–71. http://dx.doi.org/10.1609/aaai.v38i6.28344.

Abstract:
In recent years, with the popularity of social media applications, massive numbers of digital images have become available online, which brings great convenience to image recreation. However, the use of unauthorized image materials in multi-source composite images is still inadequately regulated, which may cause significant loss and discouragement to the copyright owners of the source image materials. Ideally, deep watermarking techniques could provide a solution for protecting these copyrights based on their encoder-noise-decoder training strategy. Yet existing image watermarking schemes, which are mostly designed for single images, cannot adequately address the copyright protection requirements in this scenario, since the multi-source image composing process commonly includes distortions that are not well investigated in previous methods, e.g., extreme downsizing. To meet such demands, we propose MuST, a multi-source tracing robust watermarking scheme whose architecture includes a multi-source image detector and a minimum external rectangle operation for multiple watermark resynchronization and extraction. Furthermore, we constructed an image material dataset covering common image categories and designed a simulation model of the multi-source image composing process as the noise layer. Experiments demonstrate the excellent performance of MuST in tracing the sources of image materials from composite images compared with SOTA watermarking methods: it maintains extraction accuracy above 98% when tracing the sources of at least 3 different image materials while keeping the average PSNR of watermarked image materials above 42.51 dB. We released our code at https://github.com/MrCrims/MuST
2

Wink, Alexandra Elisabeth, Amanda N. Telfer, and Michael A. Pascoe. "Google Images Search Results as a Resource in the Anatomy Laboratory: Rating of Educational Value." JMIR Medical Education 8, no. 4 (October 21, 2022): e37730. http://dx.doi.org/10.2196/37730.

Abstract:
Background: Preclinical medical learners are embedded in technology-rich environments, allowing them rapid access to a large volume of information. The anatomy laboratory is an environment in which faculty can assess the development of professional skills such as information literacy in preclinical medical learners. In the anatomy laboratory, many students use Google Images searches in addition to or in place of other course materials as a resource to locate and identify anatomical structures. However, the most frequent sources as well as the educational quality of these images are unknown. Objective: This study was designed to assess the sources and educational value of Google Images search results for commonly searched anatomical structures. Methods: The top 10 Google Images search results were collected for 39 anatomical structures. Image source websites were recorded and categorized based on the purpose and target audience of the site publishing the image. Educational value was determined through assessment of relevance (is the searched structure depicted in the image?), accuracy (does the image contain errors?), and usefulness (will the image assist a learner in locating the structure on an anatomical donor?). A reliable scoring rubric was developed to assess an image’s usefulness. Results: A total of 390 images were analyzed. Most often, images were sourced from websites targeting health care professionals and health care professions students (38% of images), while Wikipedia was the most frequent single source of image results (62/390 results). Of the 390 total images, 363 (93.1%) depicted the searched structure and were therefore considered relevant. However, only 43.0% (156/363) of relevant images met the threshold to be deemed useful in identifying the searched structure in an anatomical donor. The usefulness of images did not significantly differ across source categories. Conclusions: Anatomy faculty may use these results to develop interventions for gaps in information literacy in preclinical medical learners in the context of image searches in the anatomy laboratory.
3

Haldorsen, Jakob B. U., W. Scott Leaney, Richard T. Coates, Steen A. Petersen, Helge Ivar Rutledal, and Kjetil A. Festervoll. "Imaging above an extended-reach horizontal well using converted shear waves and a rig source." GEOPHYSICS 78, no. 2 (March 1, 2013): S93–S103. http://dx.doi.org/10.1190/geo2012-0154.1.

Abstract:
We evaluated a method for using 3C vertical seismic profile data to image acoustic interfaces located between the surface source and a downhole receiver array. The approach was based on simple concepts adapted from whole-earth seismology, in which observed compressional and shear wavefields are traced back to a common origin. However, unlike whole-earth and passive seismology, in which physical sources are imaged, we used the observed compressional and shear wavefields to image secondary sources (scatterers) situated between the surface source and the downhole receiver array. The algorithm consisted of the following steps: first, estimating the receiver compressional wavefield; second, using polarization to estimate the shear wavefield; third, deconvolving the shear wavefield using estimates of the source wavelet obtained from the direct compressional wave; fourth, back projecting the compressional and shear wavefields into the volume between the source and receivers; and finally, applying an imaging condition. When applied to rig-source VSP data acquired in an extended-reach horizontal well, this process was demonstrated to give images of formation features in the overburden, consistent with surface-seismic images obtained from the same area.
4

Vokes, Martha S., and Anne E. Carpenter. "CellProfiler: Open-Source Software to Automatically Quantify Images." Microscopy Today 16, no. 5 (September 2008): 38–39. http://dx.doi.org/10.1017/s1551929500061757.

Abstract:
Researchers often examine samples by eye on the microscope — qualitatively scoring each sample for a particular feature of interest. This approach, while suitable for many experiments, sacrifices quantitative results and a permanent record of the experiment. By contrast, if digital images are collected of each sample, software can be used to quantify features of interest. For small experiments, quantitative analysis is often done manually using interactive programs like Adobe Photoshop©. For the large number of images that can be easily collected with automated microscopes, this approach is tedious and time-consuming. NIH Image/ImageJ (http://rsb.info.nih.gov/ij) allows users comfortable writing in a macro language to automate quantitative image analysis. We have developed CellProfiler, a free, open-source software package, designed to enable scientists without prior programming experience to quantify relevant features of samples in large numbers of images automatically, in a modular system suitable for processing hundreds of thousands of images.
5

Fu, Shiyuan, Lu Wang, Yaodong Cheng, and Gang Chen. "Intelligent compression for synchrotron radiation source image." EPJ Web of Conferences 251 (2021): 03073. http://dx.doi.org/10.1051/epjconf/202125103073.

Abstract:
Synchrotron radiation sources (SRS) produce a huge amount of image data. This scientific data, which needs to be stored and transferred losslessly, puts great pressure on storage and bandwidth. SRS images have the characteristics of high frame rate and high resolution, and traditional lossless image compression methods can only save up to 30% in size. Focusing on this problem, we propose a lossless compression method for SRS images based on deep learning. First, we use a difference algorithm to reduce the linear correlation within the image sequence. Then we propose a reversible truncated mapping method to reduce the range of the pixel value distribution. Thirdly, we train a deep learning model to learn the nonlinear relationship within the image sequence. Finally, we use the probability distribution predicted by the deep learning model combined with arithmetic coding to fulfil lossless compression. Test results based on SRS images show that our method can decrease the data size by a further 20% compared to PNG, JPEG2000 and FLIF.
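As a rough sketch of the first two steps described in this abstract (temporal differencing and a reversible remapping of residuals), the following Python assumes a list of same-sized grayscale frames as NumPy arrays. The zigzag-style mapping is a stand-in of our own choosing; the paper's exact truncated mapping, the learned probability model and the arithmetic coder are not specified in the abstract and are omitted here.

```python
import numpy as np

def temporal_difference(frames):
    """Subtract each frame from its predecessor to reduce linear
    correlation within the image sequence (first frame kept as-is)."""
    diffs = [frames[0].astype(np.int32)]
    for prev, cur in zip(frames, frames[1:]):
        diffs.append(cur.astype(np.int32) - prev.astype(np.int32))
    return diffs

def zigzag_map(diff):
    """Reversibly remap signed residuals to non-negative codes:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ... so that frequent small
    residuals occupy the low end of the value range."""
    return np.where(diff >= 0, 2 * diff, -2 * diff - 1)

def zigzag_unmap(code):
    """Exact inverse of zigzag_map."""
    return np.where(code % 2 == 0, code // 2, -(code + 1) // 2)
```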
6

Legland, David, and Marie-Françoise Devaux. "ImageM: a user-friendly interface for the processing of multi-dimensional images with Matlab." F1000Research 10 (April 30, 2021): 333. http://dx.doi.org/10.12688/f1000research.51732.1.

Abstract:
Modern imaging devices provide a wealth of data often organized as images with many dimensions, such as 2D/3D, time and channel. Matlab is an efficient software solution for image processing, but it lacks many features facilitating the interactive interpretation of image data, such as a user-friendly image visualization, or the management of image meta-data (e.g. spatial calibration), thus limiting its application to bio-image analysis. The ImageM application offers an integrated user interface that facilitates the processing and analysis of multi-dimensional images within the Matlab environment. It provides a user-friendly visualization of multi-dimensional images, a collection of image processing algorithms and methods for image analysis, the management of spatial calibration, and facilities for the analysis of multi-variate images. ImageM can also be run on Octave, the open-source alternative to Matlab. ImageM is freely distributed on GitHub: https://github.com/mattools/ImageM.
7

Yao, Xin-Wei, Xinge Zhang, Yuchen Zhang, Weiwei Xing, and Xing Zhang. "Nighttime Image Dehazing Based on Point Light Sources." Applied Sciences 12, no. 20 (October 11, 2022): 10222. http://dx.doi.org/10.3390/app122010222.

Abstract:
Images routinely suffer from quality degradation in fog, mist, and other harsh weather conditions. Consequently, image dehazing is an essential and inevitable pre-processing step in computer vision tasks. Image quality enhancement for special scenes, especially nighttime image dehazing, is in high demand for unmanned driving and nighttime surveillance, yet the vast majority of past dehazing algorithms are applicable only to daytime conditions. Observation of a large number of nighttime images shows that artificial light sources take over the role played by the sun in daytime images, and that the impact of a light source on pixels varies with distance. This paper proposes a novel nighttime dehazing method using a light source influence matrix. A luminosity map expresses the photometric differences produced by the light sources in the picture. The light source influence matrix is then calculated to divide the image into a near-light-source region and a non-near-light-source region. Using these two regions, the two initial transmittances obtained by the dark channel prior are fused by edge-preserving filtering. For the atmospheric light term, the initial atmospheric light value is corrected by the light source influence matrix. Finally, the result is obtained by substitution into the atmospheric light model. Theoretical analysis and comparative experiments verify the performance of the proposed method: in terms of PSNR, SSIM, and UQI, it improves on the existing nighttime defogging method OSPF by 9.4%, 11.2%, and 3.3%, respectively. In future work, we will extend the method from static picture dehazing to real-time video stream dehazing and apply it to detection in potential applications.
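The abstract names the dark channel prior as the source of the initial transmittance estimates. Below is a minimal sketch of that prior in Python/SciPy; the light source influence matrix, the region split and the edge-preserving fusion are the paper's contributions and are not reproduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel: per-pixel minimum over RGB channels, followed by
    a local minimum filter over a patch neighbourhood."""
    return minimum_filter(image.min(axis=2), size=patch)

def initial_transmission(image, atmospheric_light, omega=0.95, patch=15):
    """Classic estimate t(x) = 1 - omega * dark(I / A), where A is the
    (per-channel) atmospheric light."""
    normalized = image / np.maximum(atmospheric_light, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)
```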
8

Botti, Filippo, Tomaso Fontanini, Massimo Bertozzi, and Andrea Prati. "Masked Style Transfer for Source-Coherent Image-to-Image Translation." Applied Sciences 14, no. 17 (September 4, 2024): 7876. http://dx.doi.org/10.3390/app14177876.

Abstract:
The goal of image-to-image translation (I2I) is to translate images from one domain to another while maintaining the content representations. A popular method for I2I translation involves the use of a reference image to guide the transformation process. However, most architectures fail to maintain the input’s main characteristics and produce images that are too similar to the reference during style transfer. In order to avoid this problem, we propose a novel architecture that is able to perform source-coherent translation between multiple domains. Our goal is to preserve the input details during I2I translation by weighting the style code obtained from the reference images before applying it to the source image. Therefore, we choose to mask the reference images in an unsupervised way before extracting the style from them. By doing so, the input characteristics are better maintained while performing the style transfer. As a result, we also increase the diversity in the generated images by extracting the style from the same reference. Additionally, adaptive normalization layers, which are commonly used to inject styles into a model, are substituted with an attention mechanism for the purpose of increasing the quality of the generated images. Several experiments are performed on the CelebA-HQ and AFHQ datasets in order to prove the efficacy of the proposed system. Quantitative results measured using the LPIPS and FID metrics demonstrate the superiority of the proposed architecture compared to the state-of-the-art methods.
9

Ferrer-Rosell, Berta, and Estela Marine-Roig. "Projected Versus Perceived Destination Image." Tourism Analysis 25, no. 2 (July 8, 2020): 227–37. http://dx.doi.org/10.3727/108354220x15758301241747.

Abstract:
Due to the spectacular growth of traveler-generated content (TGC), researchers are using TGC as a source of data to analyze the image of destinations as perceived by tourists. In order to analyze a destination's projected image, researchers typically look to websites from destination marketing or management organizations (DMOs). The objective of this study is to calculate the gap between the projected and perceived images of Barcelona, Catalonia, in 2017, using Gartner's classification and applying compositional analysis. The official online press dossier is used as an induced source, the Lonely Planet guidebook as an autonomous source, and a collection of more than 70,000 online travel reviews hosted on TripAdvisor as an organic source. In addition to quantitative content analysis, this study undertakes two thematic analyses: the masterworks of architect Gaudi recognized as UNESCO WHS as part of the cognitive image component and feeling-related keywords as part of the affective image component. The results reveal strong differences between the induced and organic sources, but much smaller differences between the autonomous and organic sources. These results can be useful for DMOs to optimize promotion and supply.
10

Maqsood, Sarmad, Umer Javed, Muhammad Mohsin Riaz, Muhammad Muzammil, Fazal Muhammad, and Sunghwan Kim. "Multiscale Image Matting Based Multi-Focus Image Fusion Technique." Electronics 9, no. 3 (March 12, 2020): 472. http://dx.doi.org/10.3390/electronics9030472.

Abstract:
Multi-focus image fusion is an essential method for obtaining an all-in-focus image from multiple source images. The fused image eliminates the out-of-focus regions, and the resultant image contains sharp and focused regions. A novel multiscale image fusion system based on contrast enhancement, spatial gradient information and multiscale image matting is proposed to extract the focused region information from multiple source images. In the proposed approach, the multi-focus source images are first refined with an image enhancement algorithm so that the intensity distribution is enhanced for superior visualization. An edge detection method based on the spatial gradient is employed to obtain edge information from the contrast-stretched images. This improved edge information is further utilized by a multiscale window technique to produce local and global activity maps. Furthermore, a trimap and decision maps are obtained from the information provided by these near- and far-focus activity maps. Finally, the fused image is achieved by using the enhanced decision maps and a fusion rule. The proposed multiscale image matting (MSIM) makes full use of the spatial consistency and the correlation among source images and therefore obtains superior performance at object boundaries compared to region-based methods. The performance of the proposed method is compared with some of the latest techniques through qualitative and quantitative evaluation.
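As an illustration of the spatial-gradient activity maps this abstract builds on (not the full matting pipeline with trimaps and decision-map refinement), a hard per-pixel decision rule between two registered grayscale sources might look like this:

```python
import numpy as np
from scipy import ndimage

def focus_activity(gray, sigma=2.0):
    """Smoothed gradient-magnitude activity map; large where the
    image is in sharp focus."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    return ndimage.gaussian_filter(np.hypot(gx, gy), sigma)

def fuse_two(gray_a, gray_b):
    """Take each pixel from whichever source shows stronger focus."""
    decision = focus_activity(gray_a) >= focus_activity(gray_b)
    return np.where(decision, gray_a, gray_b)
```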
11

Formánek, Roman, Bohuš Kysela, and Radek Šulc. "Image analysis of particle size: effect of light source type." EPJ Web of Conferences 213 (2019): 02021. http://dx.doi.org/10.1051/epjconf/201921302021.

Abstract:
Agitation of two immiscible liquids or of a solid-liquid suspension is a frequent operation in the chemical and metallurgical industries. The sizes of particles, bubbles or droplets can be determined by the image analysis technique. It is known that the quality of captured images depends significantly on the original image background, which is mainly affected by the type of light source. The aim of this contribution is to investigate the effect of light source type on image quality. Four types of light source were tested: 1) a 1000 W halogen lamp, 2) a 72 W LED bar panel, 3) a 60 W LED chip, and 4) a 90 W LED chip. The illumination intensity and image background quality were investigated for each tested light source. The effect of shutter speed on the evaluated particle sizes was tested using monodisperse spherical calibration particles with a diameter of 1.19 mm. The difference between the particle sizes evaluated by image analysis for a given light source and the declared calibration particle diameter was used as a measure of light source quality.
12

Luo, Sheng, Yiming Ma, Feng Jiang, Hongying Wang, Qin Tong, and Liangju Wang. "Dead Laying Hens Detection Using TIR-NIR-Depth Images and Deep Learning on a Commercial Farm." Animals 13, no. 11 (June 2, 2023): 1861. http://dx.doi.org/10.3390/ani13111861.

Abstract:
In large-scale laying hen farming, timely detection of dead chickens helps prevent cross-infection, disease transmission, and economic loss. Dead chicken detection is still performed manually and is one of the major labor costs on commercial farms. This study proposed a new method for dead chicken detection using multi-source images and deep learning and evaluated the detection performance with different source images. We first introduced a pixel-level image registration method that used depth information to project the near-infrared (NIR) and depth image into the coordinates of the thermal infrared (TIR) image, resulting in registered images. Then, the registered single-source (TIR, NIR, depth), dual-source (TIR-NIR, TIR-depth, NIR-depth), and multi-source (TIR-NIR-depth) images were separately used to train dead chicken detection models with object detection networks, including YOLOv8n, Deformable DETR, Cascade R-CNN, and TOOD. The results showed that, at an IoU (Intersection over Union) threshold of 0.5, the performance of these models was not entirely the same. Among them, the model using the NIR-depth image and Deformable DETR achieved the best performance, with an average precision (AP) of 99.7% (IoU = 0.5) and a recall of 99.0% (IoU = 0.5). As the IoU threshold increased, we found the following: The model with the NIR image achieved the best performance among models with single-source images, with an AP of 74.4% (IoU = 0.5:0.95) in Deformable DETR. The performance with dual-source images was higher than that with single-source images. The model with the TIR-NIR or NIR-depth image outperformed the model with the TIR-depth image, achieving an AP of 76.3% (IoU = 0.5:0.95) and 75.9% (IoU = 0.5:0.95) in Deformable DETR, respectively. The model with the multi-source image also achieved higher performance than that with single-source images. However, there was no significant improvement compared to the model with the TIR-NIR or NIR-depth image, and the AP of the model with the multi-source image was 76.7% (IoU = 0.5:0.95) in Deformable DETR. By analyzing the detection performance with different source images, this study provided a reference for selecting and using multi-source images for detecting dead laying hens on commercial farms.
13

Jin, S. Venus, and Ehri Ryu. "Instagram fashionistas, luxury visual image strategies and vanity." Journal of Product & Brand Management 29, no. 3 (September 14, 2019): 355–68. http://dx.doi.org/10.1108/jpbm-08-2018-1987.

Abstract:
Purpose: Luxury fashion brands harness the power of Instagram and fashionistas for strategic brand management. This study aims to test interaction effects among luxury brand posts’ Instagram source type (brand versus fashionista), visual image type (product-centric versus consumer-centric) and consumers’ characteristics (vanity, opinion leadership and fashion consciousness) on brand recognition and trust. Design/methodology/approach: A quantitative 2 (source type: brand versus fashionista) × 2 (branded visual image type: product-centric luxury versus consumer-centric luxury) between-subjects online experiment (N males = 195 and N females = 182) was conducted by recruiting participants from MTurk. Findings: Logistic regression analyses indicated two-way interaction effects between sources and visual images on brand recognition. Brand recognition was higher for product-centric images when the source was the fashionista, whereas brand recognition was equivalent regardless of the image type when the source was the brand. Logistic regression and multiple regression analyses revealed the moderating effects of sources and visual images on the association between consumer traits and branding outcomes. Practical implications: Meticulously choosing effective methods of showcasing branded content and persuasive luxury visual image strategies via Instagram is more important for fashionistas than for established brands in increasing brand recognition. Instagram fashionistas are more effective in increasing females’ brand trust through delivering product-centric visual images when targeting women with high vanity, opinion leadership and fashion consciousness. Brands as the Instagram profile source are more persuasive in increasing males’ brand trust through delivering product-centric visual images when targeting men with high vanity. Originality/value: This experiment provides theoretical discussions and empirical findings about social media influencer marketing and managerial implications for Instagram-based luxury branding. This research revolves around the overarching theme of the interactive effects of multifaceted branded contents and market segments in social media influencer marketing environments.
14

Zhang, Yongxin, Deguang Li, and WenPeng Zhu. "Infrared and Visible Image Fusion with Hybrid Image Filtering." Mathematical Problems in Engineering 2020 (July 29, 2020): 1–17. http://dx.doi.org/10.1155/2020/1757214.

Abstract:
Image fusion is an important technique aiming to generate a composite image from multiple images of the same scene. Infrared and visible images can provide information about the same scene from different aspects, which is useful for target recognition. But existing fusion methods cannot preserve the thermal radiation and appearance information simultaneously. Thus, we propose an infrared and visible image fusion method based on hybrid image filtering. We approach the fusion problem with a divide-and-conquer strategy. A Gaussian filter is used to decompose the source images into base layers and detail layers. An improved co-occurrence filter fuses the detail layers to preserve the thermal radiation of the source images. A guided filter fuses the base layers to retain the background appearance information of the source images. Superposition of the fused base layer and fused detail layer generates the final fusion image. Subjective visual and objective quantitative evaluations against other fusion algorithms demonstrate the better performance of the proposed method.
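A minimal two-scale sketch of the base/detail decomposition described above. The Gaussian decomposition follows the abstract; the fusion rules here (max-absolute for detail, averaging for base) are simple stand-ins for the paper's co-occurrence and guided filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fuse(ir, vis, sigma=5.0):
    """Decompose each registered source into a Gaussian base layer and
    a detail layer, fuse the layers, then superpose the results."""
    ir, vis = ir.astype(float), vis.astype(float)
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis
    fused_detail = np.where(np.abs(detail_ir) >= np.abs(detail_vis),
                            detail_ir, detail_vis)
    fused_base = 0.5 * (base_ir + base_vis)
    return fused_base + fused_detail
```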
15

Czapla, Zbigniew. "Gradient characteristics of a detection field for image data." AUTOBUSY – Technika, Eksploatacja, Systemy Transportowe 19, no. 12 (December 31, 2018): 741–44. http://dx.doi.org/10.24136/atest.2018.489.

Abstract:
The paper presents a method for determining gradient characteristics that describe a detection field. The presented method is intended for image data in the form of a source image sequence: frames taken from a video stream obtained at a measurement station create the source image sequence. The same detection field is defined for all images of the sequence. The source image sequence is converted into binary form on the basis of an analysis of the source images' gradients. The layout of the obtained binary values of the target images is in accordance with the content of the source images. In the area of the detection field, arithmetic and averaging sums of the binary values are calculated. On the basis of the averaging sums of binary values, gradient characteristics of the detection field are determined. These gradient characteristics are intended for vehicle detection and can also be utilized for vehicle speed determination or vehicle classification.
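A compact sketch of the described chain, assuming the detection field is given as a boolean mask over grayscale frames; the gradient operator and threshold are illustrative choices rather than the paper's exact ones.

```python
import numpy as np

def binarize_by_gradient(frame, threshold=20.0):
    """Binary image: 1 where the local gradient magnitude exceeds a
    threshold, 0 elsewhere."""
    gy, gx = np.gradient(frame.astype(float))
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)

def detection_field_characteristic(frames, field_mask):
    """Averaging sum of binary values inside the detection field per
    frame: a 1-D signal whose peaks indicate a passing vehicle."""
    return np.array([binarize_by_gradient(f)[field_mask].mean()
                     for f in frames])
```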
16

MARUYAMA, Kazuo, Seiji HAYANO, Yoshifuru SAITO, Kazuo KATO, Naoko MOCHIZUKI, Keita YAMAZAKI, and Kiyoshi HORII. "Light Source IMAGE ANALYSIS." Journal of the Visualization Society of Japan 24, Supplement1 (2004): 223–26. http://dx.doi.org/10.3154/jvs.24.supplement1_223.

17

Ding, Xue Wen, Berthold Mahundi, Fei Yang, and Guang Quan Xu. "Image Fusion Using Wavelet Transform and Fuzzy Reasoning." Applied Mechanics and Materials 239-240 (December 2012): 1336–39. http://dx.doi.org/10.4028/www.scientific.net/amm.239-240.1336.

Abstract:
The image fusion algorithm discussed in this paper, which utilizes wavelet decomposition and fuzzy reasoning, combines images from diverse imaging sensors into a single composite image. It first decomposes the source images through the wavelet transform, computes the extent of each source image's contribution through fuzzy reasoning using the area feature of the source images, and then fuses the coefficients through weighted averaging, with the extent of each source image's contribution as the weight coefficient. Experimental results indicate the final composite image may have more complete information content or better perceptual quality than any one of the source images.
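A minimal sketch of wavelet-domain fusion by weighted averaging, using PyWavelets; here the weight w_a is a fixed constant, whereas the paper derives each source's contribution by fuzzy reasoning on its area feature.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2, w_a=0.5):
    """Fuse two registered, same-sized grayscale images by weighted
    averaging of corresponding 2-D wavelet coefficients."""
    ca = pywt.wavedec2(np.asarray(img_a, dtype=float), wavelet, level=level)
    cb = pywt.wavedec2(np.asarray(img_b, dtype=float), wavelet, level=level)
    fused = [w_a * ca[0] + (1 - w_a) * cb[0]]  # approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append((w_a * ha + (1 - w_a) * hb,
                      w_a * va + (1 - w_a) * vb,
                      w_a * da + (1 - w_a) * db))
    return pywt.waverec2(fused, wavelet)
```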
18

Lapena, Laurent, Djouher Bedrane, Alain Degiovanni, and Evelyne Salançon. "Bright sources under the projection microscope: using an insulating crystal on a conductor as electron source." European Physical Journal Applied Physics 97 (2022): 13. http://dx.doi.org/10.1051/epjap/2022210260.

Abstract:
The development of bright sources is allowing technological breakthroughs, especially in the field of microscopy. This requires a very advanced control and understanding of the emission mechanisms. For bright electron sources, a projection microscope with a field emission tip provides an interference image that corresponds to a holographic recording. Image reconstruction can be performed digitally to form a “real” image of the object. However, interference images can only be obtained with a bright source that is small: often, an ultra-thin tip of tungsten whose radius of curvature is of the order of 10 nm. The contrast and ultimate resolution of this image-projecting microscope depend only on the size of the apparent source. Thus, a projection microscope can be used to characterize source brightness: for example, analyzing the interference contrast enables the size of the source to be estimated. Ultra-thin W tips are not the only way to obtain bright sources: field emission can also be achieved by applying voltages leading to a weak macroscopic electric field (< 1 V/μm) to insulating micron crystals deposited on conductors with a large radius of curvature (> 10 μm). Moreover, analyzing the holograms reveals the source size, and the brightness of these new emitters equals that of traditional field emission sources.
19

Chennamma, H. R., and Lalitha Rangarajan. "Source Camera Identification Based on Sensor Readout Noise." International Journal of Digital Crime and Forensics 2, no. 3 (July 2010): 28–42. http://dx.doi.org/10.4018/jdcf.2010070103.

Abstract:
A digitally developed image is a viewable image (TIFF/JPG) produced from a camera's sensor data (raw image) using computer software tools. Such images might use a different colour space, a different demosaicing algorithm or different post-processing parameter settings from the ones coded in the source camera. In this regard, the most reliable method of source camera identification is linking the given image with the sensor of the camera. In this paper, the authors propose a novel approach for camera identification based on the sensor's readout noise. Readout noise is an important intrinsic characteristic of a digital imaging sensor (CCD or CMOS) and it cannot be removed. This paper quantitatively measures the readout noise of the sensor from an image using the mean-standard deviation plot. To evaluate the performance of the proposed approach, the authors tested it against images captured at two different exposure levels, using datasets containing 1200 images acquired from six different cameras of three different brands. The success of the proposed method is corroborated through experiments.
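To illustrate the mean-standard deviation plot mentioned above, the following generic sketch tiles an image into blocks, collects (mean, std) pairs, and reads a noise floor off the darkest blocks; it illustrates the plot's idea and is not the authors' exact estimator.

```python
import numpy as np

def mean_std_points(image, block=16):
    """Collect (mean, standard deviation) pairs over image tiles,
    i.e., the scatter of a mean-standard deviation plot."""
    h, w = image.shape
    means, stds = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = image[i:i + block, j:j + block].astype(float)
            means.append(tile.mean())
            stds.append(tile.std())
    return np.array(means), np.array(stds)

def readout_noise_estimate(image, block=16, quantile=0.05):
    """Read noise dominates where signal is lowest, so estimate it
    from the standard deviations of the darkest blocks."""
    means, stds = mean_std_points(image, block)
    dark = means <= np.quantile(means, quantile)
    return stds[dark].mean()
```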
20

Gao, Jianpeng, Liang Sheng, Baojun Duan, Xinyi Wang, Dongwei Hei, and Huaibi Chen. "Three-dimensional iterative reconstruction of pulsed radiation sources using spherical harmonic decomposition." Review of Scientific Instruments 93, no. 11 (November 1, 2022): 113551. http://dx.doi.org/10.1063/5.0105279.

Abstract:
Neutron and x-ray imaging are essential ways to diagnose a pulsed radiation source. The three-dimensional (3D) intensity distribution reconstructed from two-dimensional (2D) radiation images can significantly promote research regarding the generation and variation mechanisms of pulsed radiation sources. Only a few projected images at one moment are available due to the difficulty of building imaging systems for high-radiation-intensity, short-pulsed sources. The reconstruction of a 3D source from a minimal number of 2D images is an ill-posed problem that leads to severe structural distortions and artifacts in images reconstructed by conventional algorithms. In this paper, we present an iterative method to reconstruct a 3D source using spherical harmonic decomposition. Our algorithm improves the representation ability of spherical harmonic decomposition for 3D sources by enlarging the order of the expansion, which is limited in current analytical reconstruction algorithms. Prior knowledge of the source can be included to obtain a reasonable solution. Numerical simulations demonstrate that the reconstructed image quality of the iterative algorithm is better than that of the analytical algorithm. The iterative method can suppress the effect of noise in the integral projection image and has better robustness and adaptability than the analytical method.
21

Wang, Yan, Qindong Sun, Dongzhu Rong, Shancang Li, and Li Da Xu. "Image Source Identification Using Convolutional Neural Networks in IoT Environment." Wireless Communications and Mobile Computing 2021 (September 10, 2021): 1–12. http://dx.doi.org/10.1155/2021/5804665.

Abstract:
Digital image forensics is a key branch of digital forensics based on forensic analysis of image authenticity and image content. Advances in new techniques, such as smart devices, the Internet of Things (IoT), artificial images, and social networks, make forensic image analysis play an increasing role in a wide range of criminal case investigations. This work focuses on image source identification by analysing both the fingerprints of digital devices and images in an IoT environment. A new convolutional neural network (CNN) method is proposed to identify the source devices that took an image in a social IoT environment. The experimental results show that the proposed method can effectively identify the source devices with high accuracy.
22

Suryanarayana, Gunnam, Vijayakumar Varadarajan, Siva Ramakrishna Pillutla, Grande Nagajyothi, and Ghamya Kotapati. "Multiple Degradation Skilled Network for Infrared and Visible Image Fusion Based on Multi-Resolution SVD Updation." Mathematics 10, no. 18 (September 19, 2022): 3389. http://dx.doi.org/10.3390/math10183389.

Abstract:
Existing infrared (IR)-visible (VIS) image fusion algorithms demand source images with the same resolution levels. However, IR images are always available with poor resolution due to hardware limitations and environmental conditions. In this correspondence, we develop a novel image fusion model that brings resolution consistency between IR-VIS source images and generates an accurate high-resolution fused image. We train a single deep convolutional neural network model by considering true degradations in real time and reconstruct IR images. The trained multiple degradation skilled network (MDSNet) increases the prominence of objects in fused images from the IR source image. In addition, we adopt multi-resolution singular value decomposition (MRSVD) to capture maximum information from source images and update IR image coefficients with that of VIS images at the finest level. This ensures uniform contrast along with clear textural information in our results. Experiments demonstrate the efficiency of the proposed method over nine state-of-the-art methods using five image quality assessment metrics.
23

Vadhi, R., V. S. Kilari, and S. Srinivas Kumar. "An Image Fusion Technique Based on Hadamard Transform and HVS." Engineering, Technology & Applied Science Research 6, no. 4 (August 26, 2016): 1075–79. http://dx.doi.org/10.48084/etasr.707.

Abstract:
The main endeavor of image fusion is to obtain an image that contains more visual quality information than any one of the source images. In general, the source images considered for fusion may be multi-focus, multi-modality, multi-resolution, multi-temporal, panchromatic or satellite images. This paper discusses image fusion using the Hadamard Transform (HT). In this work, the Human Visual System (HVS) is investigated for image fusion in the HT domain. The proposed fusion process contains three important parts: (1) divide the source images into sub-images/blocks and transform them into the HT domain; (2) multiply the transformed coefficients with the HVS-based weightage matrix of the HT and select the highest values from them; (3) fuse the corresponding blocks of selected coefficients from the source images into an empty image. The use of the HVS makes the coefficients more significant. The performance of the proposed method is analyzed and compared with a Discrete Wavelet Transform (DWT) based image fusion technique. Implementation in the HT domain is simple and time-saving when compared with the DWT.
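A blockwise sketch of HT-domain fusion in Python/SciPy. The HVS-based weightage matrix is omitted, so coefficients are selected by plain magnitude; for a symmetric Hadamard matrix H of order n, the 2-D inverse of T = H X H is X = H T H / n^2.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_fuse(img_a, img_b, block=8):
    """Transform 8x8 blocks of two registered grayscale sources into
    the HT domain, keep the larger-magnitude coefficient, invert."""
    H = hadamard(block).astype(float)
    out = np.zeros(img_a.shape, dtype=float)
    h, w = img_a.shape
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            ta = H @ img_a[i:i + block, j:j + block].astype(float) @ H
            tb = H @ img_b[i:i + block, j:j + block].astype(float) @ H
            tf = np.where(np.abs(ta) >= np.abs(tb), ta, tb)
            out[i:i + block, j:j + block] = H @ tf @ H / block ** 2
    return out
```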
24

Qi, Guanqiu, Gang Hu, Neal Mazur, Huahua Liang, and Matthew Haner. "A Novel Multi-Modality Image Simultaneous Denoising and Fusion Method Based on Sparse Representation." Computers 10, no. 10 (October 13, 2021): 129. http://dx.doi.org/10.3390/computers10100129.

Abstract:
Multi-modality image fusion applied to improve image quality has drawn great attention from researchers in recent years. However, noise is inevitably generated in images captured by different types of imaging sensors, and it can seriously affect the performance of multi-modality image fusion. In the standard approach to noisy image fusion, source images are denoised first, and then the denoised images are fused. However, image denoising can decrease the sharpness of source images and thereby affect the fusion performance. Additionally, denoising and fusion are processed in separate modes, which increases computation cost. To fuse noisy multi-modality image pairs accurately and efficiently, a multi-modality image simultaneous fusion and denoising method is proposed. In the proposed method, noisy source images are decomposed into cartoon and texture components. Cartoon-texture decomposition not only decomposes source images into detail and structure components for different image fusion schemes, but also isolates image noise in the texture components. A Gaussian scale mixture (GSM) based sparse representation model is presented for the denoising and fusion of the texture components. A spatial domain fusion rule is applied to the cartoon components. The comparative experimental results confirm that the proposed simultaneous image denoising and fusion method is superior to the state-of-the-art methods in terms of visual and quantitative evaluations.
25

M.V., Srikanth, Sivalal Kethavath, Srinivas Yerram, SivaNagiReddy Kalli, Nagasirisha.B, and Jatothu Brahmaiah Naik. "Brain Tumor Detection through Image Fusion Using Cross Guided Filter and Convolutional Neural Network." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 18, no. 4 (October 19, 2024): 579–90. http://dx.doi.org/10.37936/ecti-cit.2024184.256650.

Abstract:
Data fusion has become a significant issue in diagnostic imaging, particularly in medical applications like radiation and guided image surgery. Medical image fusion aims to enhance the precision of tumor diagnosis by preserving the salient information and characteristics of the original images in the fused image. It has been shown that guided filters are capable of maintaining edges well. In this paper, we propose a novel cross-guided filter-based fusion approach for multimodal medical images utilizing convolutional neural networks. The cross-guided filter is used in the proposed algorithm to extract the detailed features from the source images. Convolutional neural networks are used to generate the feature weights of source images derived from the detail layers. The weighted average rule is used to merge the source images based on these weights. We used thirty distinct types of medical images from diverse sources to compare the effectiveness of the proposed strategy with that of existing methods, both numerically and visually. The experimental findings demonstrated that, in terms of both objective evaluation and qualitative image quality, the suggested system performs better than other standard methods already in use. The quantitative results show that, compared to the existing methods considered for comparison, the proposed algorithm improves mutual information by 25%, image entropy by 9.5%, spatial frequency by 21%, standard deviation by 18.1%, structural similarity index by 30%, and edge strength of the fused image by 39%.
26

Kannath, Santhosh, Jayadevan Rajan, and Kamble Harsha. "Utility of Contrast-Enhanced Magnetic Resonance Angiography for Delayed Intracranial In-Stent Stenosis in Nonatherosclerotic Cerebral Vascular Diseases." Journal of Clinical Interventional Radiology ISVIR 01, no. 02 (July 28, 2017): 085–88. http://dx.doi.org/10.1055/s-0037-1602772.

Abstract:
Noninvasive imaging modalities are being used for long-term follow-up of patients with intracranial stents of nonatherosclerotic etiology. The aim of this study is to determine the utility of contrast-enhanced magnetic resonance angiography (CE-MRA) source images in delayed intracranial in-stent stenosis. A total of 18 patients stented for nonatherosclerotic etiology were reviewed; all had follow-up digital subtraction angiography (DSA) and CE- and time-of-flight (TOF)-MRA. Four sets of MR images (TOF-MRA reformatted images, TOF-MRA source images, CE-MRA reformatted images, and CE-MRA source images) were reviewed for detection of ≥ 50% stenosis. The accuracy of each image set was calculated by comparison with DSA. Overall delayed in-stent stenosis during follow-up DSA was 10%. The sensitivities of the TOF reformatted images, TOF source images, CE-MRA reformatted images, and CE-MRA source images are 33% (6/18), 55.6% (10/18), 77.8% (14/18), and 100% (18/18), respectively, while the negative predictive values are 14.3% (2/14), 20% (2/10), 33% (2/6), and 100% (2/2), respectively. CE-MRA source images are as efficacious as DSA in detecting significant (≥ 50%) delayed in-stent stenosis.
27

Yu, Cuihong, Cheng Han, and Chao Zhang. "Multi-Source Training-Free Controllable Style Transfer via Diffusion Models." Symmetry 17, no. 2 (February 13, 2025): 290. https://doi.org/10.3390/sym17020290.

Abstract:
Diffusion models, as representative models in the field of artificial intelligence, have made significant progress in text-to-image synthesis. However, studies of style transfer using diffusion models typically require a large amount of text to describe the semantic content or specific painting attributes, and the style and layout of the semantic content in synthesized images are frequently uncertain. To accomplish high-quality, fixed-content style transfer, this paper adopts text-free guidance and proposes a multi-source, training-free and controllable style transfer method using a single image or video as content input and single or multiple style images as style guidance. Specifically, the proposed method first fuses the inversion noise of a content image with that of a single or multiple style images as the initial noise of the stylized image sampling process. Then, the proposed method extracts the self-attention mechanism’s query, key, and value vectors from the DDIM inversion process of the content and style images and injects them into the stylized image sampling process to improve the color, texture and semantics of stylized images. By setting the hyperparameters involved in the proposed method, style transfer with symmetric style proportion or asymmetric style distribution can be achieved. Compared with state-of-the-art baselines, the proposed method demonstrates high fidelity and excellent stylized performance, and it can be applied to numerous image or video style transfer tasks.
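A toy sketch of the initial-noise fusion step only, assuming the DDIM-inversion noises are already available as equally shaped arrays. The blending rule below (a simple weighted average, with multiple styles averaged first) is illustrative; the abstract does not give the paper's exact formula.

```python
import numpy as np

def fuse_inversion_noise(content_noise, style_noises, style_weight=0.5):
    """Blend the content image's inversion noise with the averaged
    inversion noise of one or more style images; the result seeds the
    stylized sampling process."""
    style_mean = np.mean(np.stack(style_noises), axis=0)
    return (1.0 - style_weight) * content_noise + style_weight * style_mean
```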
28

Suneel, Kumar A., M. V. Srikanth, B. Nagasirisha, and Lakshmi T. Venkata. "A method of image fusion using texture guidance based guided image filter and image statistics." i-manager’s Journal on Electronics Engineering 14, no. 3 (2024): 19. http://dx.doi.org/10.26634/jele.14.3.20647.

Abstract:
A technique for image fusion by means of a guided image filter is proposed in this paper. It employs image smoothing with a guided filter, making use of texture information as guidance for the filter. Then, image statistics-based pixel weight computation is utilized to generate the weight maps of the source images from the detail layer features. Finally, the source images are integrated based on a weighted average combining strategy. The efficacy of the proposed algorithm is tested and compared with several state-of-the-art image fusion methods in terms of several objective image quality assessment parameters. The experimental results confirm the efficacy of the proposed algorithm in image fusion.
29

Yu, Hengyong, Changguo Ji, and Ge Wang. "SART-Type Image Reconstruction from Overlapped Projections." International Journal of Biomedical Imaging 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/549537.

Abstract:
To maximize the time-integrated X-ray flux from multiple X-ray sources and shorten the data acquisition process, a promising approach is to allow overlapped projections from multiple simultaneously active sources without involving source multiplexing technology. The most challenging task in this configuration is to perform image reconstruction effectively and efficiently from the overlapped projections. Inspired by the single-source simultaneous algebraic reconstruction technique (SART), we hereby develop a multisource SART-type reconstruction algorithm, regularized by a sparsity-oriented constraint in the soft-threshold filtering framework, to reconstruct images from overlapped projections. Our numerical simulation results verify the correctness of the proposed algorithm and demonstrate the advantage of image reconstruction from overlapped projections.
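A dense-matrix sketch of a SART-type iteration with a soft-threshold (sparsity) step of the kind the abstract describes; a practical implementation would use sparse projectors, and the threshold and relaxation values here are arbitrary.

```python
import numpy as np

def sart_soft_threshold(A, b, n_iter=50, lam=1e-3, relax=1.0):
    """SART update x <- x + relax * D_c A^T D_r (b - A x), where D_r
    and D_c hold inverse row/column sums of |A|, followed by
    soft-thresholding to promote sparsity."""
    row_sums = np.maximum(np.abs(A).sum(axis=1), 1e-12)
    col_sums = np.maximum(np.abs(A).sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums
        x = x + relax * (A.T @ residual) / col_sums
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # shrink
    return x
```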
30

Zhang, Tianchi, Yong Gao, Zhiyong Wang, and Mingjun Zhang. "Underwater Image Restoration Method Based on Multi-Frame Image under Artificial Light Source." Journal of Marine Science and Engineering 11, no. 6 (June 12, 2023): 1213. http://dx.doi.org/10.3390/jmse11061213.

Abstract:
This paper studies the underwater image restoration problem in the autonomous operation of AUVs guided by underwater vision. An improved underwater image restoration method is developed based on multi-frame neighboring images under an artificial light source. First, multi-frame neighboring images are collected as the AUV approaches the targets, and a transmittance estimation method is developed based on the multi-frame images to avoid the assumption of a known normalized residual energy ratio made in traditional methods. Then, the foreground and background regions of the images are segmented by locking onto the small area where the background light is located; hence, the accuracy of background light estimation is improved for underwater images in turbid water, which improves the accuracy of image restoration. Finally, the performance of the developed underwater image restoration method is verified by comparative results in a pool environment.
31

Yang, Xi, Chengpeng Chai, Yun-Hsuan Chen, and Mohamad Sawan. "Skull Impact on Photoacoustic Imaging of Multi-Layered Brain Tissues with Embedded Blood Vessel Under Different Optical Source Types: Modeling and Simulation." Bioengineering 12, no. 1 (January 7, 2025): 40. https://doi.org/10.3390/bioengineering12010040.

Abstract:
Skulls, with their high optical scattering and acoustic attenuation, are a great challenge for photoacoustic imaging in humans. To explore and improve photoacoustic generation and propagation, we conducted photoacoustic simulation and image reconstruction of a multi-layer brain model with an embedded blood vessel under different optical source types. Based on the optical simulation results under different types of optical sources, we explored the characteristics of reconstructed images obtained from acoustic simulations with and without skull conditions. Specifically, we focused on the detection of blood vessels and evaluated the image reconstruction features, morphological characteristics, and intensity variations of the target vessels using optical and acoustic simulations. The results showed that for the initial PA signals, the optical source types corresponding to the strongest and weakest photoacoustic signals at different positions within the target region were consistent, while they differed in the reconstructed images. This study revealed the characteristics of acoustic signal transmission with and without the skull and its impact on image reconstruction. It further provides a theoretical basis for the selection of optical sources.
32

Zhang, Jing, and Guang Xue Chen. "Research on the Color Correction Algorithm of Images Based on Histogram Matching." Applied Mechanics and Materials 469 (November 2013): 256–59. http://dx.doi.org/10.4028/www.scientific.net/amm.469.256.

Abstract:
Different rendering conditions (e.g., changes in lighting or atmospheric conditions, changes of the imaging system) often cause significant color differences between two images. In the prepress process, the brightness and hue of two images should be adjusted to be as similar as possible. Currently, images are generally adjusted manually with image processing software such as PhotoShop, which is complex and time-consuming. In this paper, a color correction algorithm based on histogram matching is put forward and implemented. Only one image needs to be adjusted beforehand as the reference image; a mapping relationship is then established between pixels of the histograms of the source images and the reference image, so that the source images acquire histograms similar to that of the reference image. The images then share similar color characteristics, achieving image color correction. The experimental results showed that the implemented color correction algorithm was effective: it could not only maintain the visual effect of the images, but also eliminate the color differences between the reference image and the source images.
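Histogram matching itself is a standard operation; a compact single-channel sketch (applied per channel for colour images) looks like this:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source pixel values so that the source's cumulative
    histogram matches the reference's."""
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    mapped = np.interp(src_cdf, ref_cdf, ref_values)  # CDF inversion
    return np.interp(source.ravel(), src_values, mapped).reshape(source.shape)
```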
33

Moon, Seunghyuk, Jungsu Kang, Youngkwang Kim, Eunha Jo, Pilsoo Jeong, Youngjun Roh, and Jongduk Baek. "Carbon nanotube-based multiple source C-arm CT system: feasibility study with prototype system." Optics Express 31, no. 26 (December 15, 2023): 44772. http://dx.doi.org/10.1364/oe.503421.

Abstract:
To extend the field of view while reducing dimensions of the C-arm, we propose a carbon nanotube (CNT)-based C-arm computed tomography (CT) system with multiple X-ray sources. A prototype system was developed using three CNT X-ray sources, enabling a feasibility study. Geometry calibration and image reconstruction were performed to improve the quality of image acquisition. However, the geometry of the prototype system led to projection truncation for each source and an overlap region of object area covered by each source in the two-dimensional Radon space, necessitating specific corrective measures. We addressed these problems by implementing truncation correction and applying weighting techniques to the overlap region during the image reconstruction phase. Furthermore, to enable image reconstruction with a scan angle less than 360°, we designed a weighting function to solve data redundancy caused by the short scan angle. The accuracy of the geometry calibration method was evaluated via computer simulations. We also quantified the improvements in reconstructed image quality using mean-squared error and structural similarity. Moreover, detector lag correction was applied to address the afterglow observed in the experimental data obtained from the prototype system. Our evaluation of image quality involved comparing reconstructed images obtained with and without incorporating the geometry calibration results and images with and without lag correction. The outcomes of our simulation study and experimental investigation demonstrated the efficacy of our proposed geometry calibration, image reconstruction method, and lag correction in reducing image artifacts.
34

Zhou, Yang, Zhen Han, Zeng Dou, Chengbin Huang, Li Cong, Ning Lv, and Chen Chen. "Edge Consistency Feature Extraction Method for Multi-Source Image Registration." Remote Sensing 15, no. 20 (October 21, 2023): 5051. http://dx.doi.org/10.3390/rs15205051.

Abstract:
Multi-source image registration has often suffered from great radiation and geometric differences. Specifically, grayscale and texture from similar landforms in different source images often show significantly different visual features, and these differences disturb the corresponding point extraction in the following image registration process. Considering that edges between heterogeneous images can provide homogeneous information and more consistent features can be extracted based on image edges, an edge consistency radiation-change insensitive feature transform (EC-RIFT) method is proposed in this paper. Firstly, the noise and texture interference are reduced by preprocessing according to the image characteristics. Secondly, image edges are extracted based on phase congruency, and an orthogonal Log-Gabor filter is performed to replace the global algorithm. Finally, the descriptors are built with logarithmic partition of the feature point neighborhood, which improves the robustness of the descriptors. Comparative experiments on datasets containing multi-source remote sensing image pairs show that the proposed EC-RIFT method outperforms other registration methods in terms of precision and effectiveness.
35

Pereyra Irujo, Gustavo. "IRimage: open source software for processing images from infrared thermal cameras." PeerJ Computer Science 8 (May 10, 2022): e977. http://dx.doi.org/10.7717/peerj-cs.977.

Abstract:
IRimage aims at increasing the throughput, accuracy and reproducibility of results obtained from thermal images, especially those produced with affordable, consumer-oriented cameras. IRimage processes thermal images, extracting raw data and calculating temperature values with an open and fully documented algorithm, and makes this data available for further processing using image analysis software. It also allows reproducible measurements of the temperature of objects in a series of images, and the production of visual outputs (images and videos) suitable for scientific reporting. IRimage is implemented in a scripting language of the scientific image analysis software ImageJ, allowing its use through a graphical user interface and also allowing easy modification or expansion of its functionality. IRimage’s results were consistent with those of standard software for 15 camera models of the most widely used brand. An example use case is also presented, in which IRimage was used to efficiently process hundreds of thermal images to reveal subtle differences in the daily pattern of leaf temperature of plants subjected to different soil water contents. IRimage’s functionalities make it better suited for research purposes than many currently available alternatives, and could contribute to making affordable consumer-grade thermal cameras useful for reproducible research.
36

Vasconcelos, Ivan. "Source-receiver, reverse-time imaging of dual-source, vector-acoustic seismic data." GEOPHYSICS 78, no. 2 (March 1, 2013): WA123—WA145. http://dx.doi.org/10.1190/geo2012-0300.1.

Abstract:
Novel technologies in seismic data acquisition allow for recording full vector-acoustic (VA) data: pointwise recordings of pressure and its multicomponent gradient, excited by pressure only as well as dipole/gradient sources. Building on recent connections between imaging and seismic interferometry, we present a wave-equation-based, nonlinear, reverse-time imaging approach that takes full advantage of dual-source multicomponent data. The method’s formulation relies on source-receiver scattering reciprocity, thus making proper use of VA fields in the wavefield extrapolation and imaging condition steps in a self-consistent manner. The VA imaging method is capable of simultaneously focusing energy from all in- and outgoing waves: The receiver-side up- and downgoing (receiver ghosts) fields are handled by the VA receiver extrapolation, whereas source-side in- and outgoing (source ghosts) arrivals are accounted for when combining dual-source data at the imaging condition. Additionally, VA imaging handles image amplitudes better than conventional reverse-time migration because it properly handles finite-aperture directivity directly from dual-source, 4C data. For nonlinear imaging, we provide a complete source-receiver framework that relies only on surface integrals, thus being computationally applicable to practical problems. The nonlinear image can be implicitly interpreted as a superposition of several nonlinear interactions between scattering components of data with those corresponding to the extrapolators (i.e., to the model). We demonstrate various features of the method using synthetic examples with complex subsurface features. The numerical results show, e.g., that the dual-source, VA image retrieves subsurface features with “super-resolution”, i.e., with resolution higher than the limits of Born imaging, but at the cost of introducing image artifacts not present in the linear image. Although the method does not require any deghosting as a preprocessing step, it can use separated up- and downgoing fields to generate independent subsurface images.
APA, Harvard, Vancouver, ISO, and other styles
37

Tosi, Sébastien, Lídia Bardia, Maria Jose Filgueira, Alexandre Calon, and Julien Colombelli. "LOBSTER: an environment to design bioimage analysis workflows for large and complex fluorescence microscopy data." Bioinformatics 36, no. 8 (December 20, 2019): 2634–35. http://dx.doi.org/10.1093/bioinformatics/btz945.

Full text
Abstract:
Summary: Open-source software such as ImageJ and CellProfiler has greatly simplified the quantitative analysis of microscopy images, but its applicability is limited by the size, dimensionality and complexity of the images under study. In contrast, software optimized for the needs of specific research projects can overcome these limitations, but it may be harder to find, set up and customize to different needs. Overall, the analysis of large, complex microscopy images hence remains a critical bottleneck for many life scientists. We introduce LOBSTER (Little Objects Segmentation and Tracking Environment), an environment designed to help scientists design and customize image analysis workflows to accurately characterize biological objects from a broad range of fluorescence microscopy images, including large images exceeding workstation main memory. LOBSTER comes with a starting set of over 75 sample image analysis workflows and associated images stemming from state-of-the-art image-based research projects. Availability and implementation: LOBSTER requires MATLAB (version ≥ 2015a), the MATLAB Image Processing Toolbox, and the MATLAB Statistics and Machine Learning Toolbox. Source code, online tutorials, video demonstrations, documentation and sample images are freely available from: https://sebastients.github.io. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Kunpeng, Mingyao Zheng, Hongyan Wei, Guanqiu Qi, and Yuanyuan Li. "Multi-Modality Medical Image Fusion Using Convolutional Neural Network and Contrast Pyramid." Sensors 20, no. 8 (April 11, 2020): 2169. http://dx.doi.org/10.3390/s20082169.

Full text
Abstract:
Medical image fusion techniques fuse medical images of different modalities to make medical diagnosis more reliable and accurate, and they play an increasingly important role in many clinical applications. To obtain a fused image with high visual quality and clear structural details, this paper proposes a convolutional neural network (CNN) based medical image fusion algorithm. The proposed algorithm uses a trained Siamese convolutional network to fuse the pixel activity information of the source images and generate a weight map, while a contrast pyramid is used to decompose the source images. The source images are then integrated according to the different spatial frequency bands and a weighted fusion operator. Comparative experiments show that the proposed fusion algorithm effectively preserves the detailed structural information of the source images and achieves good visual quality.
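A simplified sketch of weight-map-guided pyramid fusion follows. Local variance stands in for the trained Siamese network's activity measure, and a Laplacian pyramid stands in for the contrast pyramid, so this is the general scheme rather than the paper's exact method; inputs are assumed to be single-channel float images of equal size.

```python
import cv2
import numpy as np

def activity_weight(a, b, ksize=7):
    # Per-pixel local variance as a crude activity measure
    local_var = lambda x: cv2.blur(x * x, (ksize, ksize)) - cv2.blur(x, (ksize, ksize)) ** 2
    return (local_var(a) > local_var(b)).astype(np.float32)

def fuse(img_a, img_b, levels=4):
    a, b = img_a.astype(np.float32), img_b.astype(np.float32)
    w = activity_weight(a, b)
    details = []
    for _ in range(levels):
        a_dn, b_dn = cv2.pyrDown(a), cv2.pyrDown(b)
        size = (a.shape[1], a.shape[0])  # (width, height) for OpenCV
        la = a - cv2.pyrUp(a_dn, dstsize=size)  # detail band of image A
        lb = b - cv2.pyrUp(b_dn, dstsize=size)  # detail band of image B
        details.append(w * la + (1.0 - w) * lb)
        a, b, w = a_dn, b_dn, cv2.pyrDown(w)
    fused = 0.5 * (a + b)  # average the coarsest approximations
    for d in reversed(details):
        fused = cv2.pyrUp(fused, dstsize=(d.shape[1], d.shape[0])) + d
    return fused
```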
APA, Harvard, Vancouver, ISO, and other styles
39

Springer, Ofer, Eran O. Ofek, Barak Zackay, Ruslan Konno, Amir Sharon, Guy Nir, Adam Rubin, et al. "TRANSLIENT: Detecting Transients Resulting from Point-source Motion or Astrometric Errors." Astronomical Journal 167, no. 6 (May 23, 2024): 281. http://dx.doi.org/10.3847/1538-3881/ad408d.

Full text
Abstract:
Detection of moving sources over a complicated background is important for several reasons. The first is measuring the astrophysical motion of the source. The second is that such motion, whether caused by atmospheric scintillation, color refraction, or astrophysical reasons, is a major source of false alarms for image-subtraction methods. We extend the Zackay, Ofek, and Gal-Yam image-subtraction formalism to deal with moving sources. The new method, named the translient (translational transient) detector, applies hypothesis testing between the hypothesis that the source is stationary and the hypothesis that it is moving. It can be used to detect source motion or to distinguish between stellar variability and motion. For moving-source detection, we show the superiority of translient over proper image subtraction using the improvement in the receiver operating characteristic curve. We show that in the small-translation limit, translient is an optimal detector of point-source motion in any direction. Furthermore, it is numerically stable, fast to calculate, and presented in closed form. Efficient transient detection requires both the proper image-subtraction statistic and the translient statistic: when the translient statistic is higher, the subtraction residual is likely due to motion. We test our algorithm both on simulated data and on real images obtained by the Large Array Survey Telescope. We demonstrate the ability of translient to distinguish between motion and variability, which has the potential to reduce the number of false alarms in transient detection. We provide translient implementations in Python and MATLAB.
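A toy illustration of the core intuition, not the paper's full closed-form detector: a small shift of a point source leaves a dipole residual in the difference image that is matched by the gradient of the PSF, whereas a pure brightness change is matched by the PSF itself.

```python
import numpy as np

def gaussian_psf(n=33, sigma=2.0):
    y, x = np.mgrid[:n, :n] - n // 2
    p = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return p / p.sum()

psf = gaussian_psf()
moved = np.roll(psf, 1, axis=1)   # source displaced by one pixel
diff = moved - psf                # image-subtraction residual

motion_template = np.gradient(psf, axis=1)  # dipole (motion) template
score_motion = abs((diff * motion_template).sum()) / np.linalg.norm(motion_template)
score_flux = abs((diff * psf).sum()) / np.linalg.norm(psf)
print(score_motion > score_flux)  # True: the residual looks like motion
```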
APA, Harvard, Vancouver, ISO, and other styles
40

Chang, Shoude, Youxin Mao, and Costel Flueraru. "Dual-Source Swept-Source Optical Coherence Tomography Reconstructed on Integrated Spectrum." International Journal of Optics 2012 (2012): 1–6. http://dx.doi.org/10.1155/2012/565823.

Full text
Abstract:
Dual-source swept-source optical coherence tomography (DS-SSOCT) uses two individual sources with different central wavelengths, linewidths, and bandwidths. Because of the differences between the two sources, the tomograms reconstructed individually from each source have different aspect ratios, which makes comparison and integration difficult. We report a method to merge the two sets of DS-SSOCT raw data in a common spectrum, on which both datasets have the same spectral density and the correct separation. The reconstructed tomographic image seamlessly integrates the two OCT bands. The final image has higher axial resolution and richer spectroscopic information than either of the individually reconstructed tomographic images.
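A minimal sketch of the merging idea, under simplifying assumptions (both fringes already calibrated to ascending wavenumber axes, gap between bands zero-filled): interpolate both spectra onto one evenly spaced common grid, then reconstruct a single depth profile with one inverse FFT.

```python
import numpy as np

def merge_spectra(k1, s1, k2, s2, n=4096):
    # Common, evenly spaced wavenumber axis spanning both bands
    k = np.linspace(min(k1.min(), k2.min()), max(k1.max(), k2.max()), n)
    merged = np.zeros(n)
    for ks, ss in ((k1, s1), (k2, s2)):
        band = (k >= ks.min()) & (k <= ks.max())
        merged[band] = np.interp(k[band], ks, ss)  # resample each band
    return k, merged

def depth_profile(merged):
    return np.abs(np.fft.ifft(merged))  # A-line amplitude
```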
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Bin, Jinying Zhong, Yuehua Li, and Zhongze Chen. "Multi-focus image fusion and super-resolution with convolutional neural network." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 04 (April 28, 2017): 1750037. http://dx.doi.org/10.1142/s0219691317500370.

Full text
Abstract:
The aim of multi-focus image fusion is to create a synthetic all-in-focus image from several images, each obtained with different focus settings. However, if the resolution of the source images is low, images fused with traditional methods will also be of low quality, which hinders further image analysis even when the fused image is all-in-focus. This paper presents a novel joint multi-focus image fusion and super-resolution method based on a convolutional neural network (CNN). The first-level network features of the different source images are fused under the guidance of the local clarity calculated from the source images. The final high-resolution fused image is obtained with the reconstruction network filters, which act like averaging filters. The experimental results demonstrate that the proposed approach generates fused images with better visual quality and acceptable computational efficiency compared to other state-of-the-art works.
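One common "local clarity" measure of the kind that could guide such feature fusion is the sum-modified-Laplacian, sketched below; the paper's exact clarity measure and network layers are not reproduced, and the feature maps are assumed to share the images' spatial size.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def local_clarity(img, window=9):
    # Sum-modified-Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|
    k = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], dtype=float)
    ml = np.abs(convolve(img, k)) + np.abs(convolve(img, k.T))
    return uniform_filter(ml, size=window)  # aggregate over a local window

def fuse_features(feat_a, feat_b, img_a, img_b):
    # Choose, per pixel, the features of the sharper source image
    mask = local_clarity(img_a) > local_clarity(img_b)
    return np.where(mask, feat_a, feat_b)
```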
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Yongxian, Guorui Ma, and Jiao Wu. "Air-Ground Multi-Source Image Matching Based on High-Precision Reference Image." Remote Sensing 14, no. 3 (January 26, 2022): 588. http://dx.doi.org/10.3390/rs14030588.

Full text
Abstract:
The robustness of aerial-ground multi-source image matching is closely related to the quality of the ground reference image. To explore the influence of reference images on the performance of air-ground multi-source image matching, we focused on the impact of control point projection accuracy and tie point accuracy on bundle adjustment results for generating digital orthophoto images, using the Structure from Motion algorithm and Monte Carlo analysis. Additionally, we developed a method to learn local deep features in natural environments by fine-tuning a pre-trained ResNet50 model, and used it to match multi-scale, multi-seasonal, and multi-viewpoint air-ground multi-source images. The results show that the proposed method yields a relatively even distribution of corresponding feature points across different conditions of season, viewpoint, and illumination. Compared with state-of-the-art hand-crafted computer vision and deep learning matching methods, the proposed method demonstrated more efficient and robust matching performance and could be applied to a variety of unmanned aerial vehicle self- and target-positioning applications in GPS-denied areas.
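The general fine-tuning recipe might look like the PyTorch sketch below: take a pre-trained ResNet50, drop its classification head, and adapt the remaining backbone as a dense local-feature extractor. The paper's training losses and data are not reproduced; the freezing scheme shown is one plausible choice, not the authors'.

```python
import torch
import torchvision

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
# Drop the average-pooling and fully connected layers to keep spatial maps
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

# Freeze early layers; fine-tune only the deepest residual block
for p in feature_extractor.parameters():
    p.requires_grad = False
for p in feature_extractor[-1].parameters():
    p.requires_grad = True

x = torch.randn(1, 3, 224, 224)   # a normalized input patch
feats = feature_extractor(x)      # (1, 2048, 7, 7) dense descriptors
```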
APA, Harvard, Vancouver, ISO, and other styles
43

Ma, Xiaole, Shaohai Hu, Shuaiqi Liu, Jing Fang, and Shuwen Xu. "Remote Sensing Image Fusion Based on Sparse Representation and Guided Filtering." Electronics 8, no. 3 (March 8, 2019): 303. http://dx.doi.org/10.3390/electronics8030303.

Full text
Abstract:
In this paper, a remote sensing image fusion method based on sparse representation (SR) is presented, since SR has been widely used in image processing and especially in image fusion. Firstly, we use the source images to learn an adaptive dictionary, and sparse coefficients are obtained by sparsely coding the source images with this dictionary. Then, with the help of an improved hyperbolic tangent function (tanh) and the ℓ0-max rule, we fuse these sparse coefficients together; the initial fused image is obtained by this SR-based fusion. To take full advantage of the spatial information of the source images, a fused image based on the spatial domain (SF) is obtained at the same time. Lastly, the final fused image is reconstructed by guided filtering of the SR-based and SF-based fused images. Experimental results show that the proposed method outperforms some state-of-the-art methods in both visual and quantitative evaluations.
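The last two steps might be sketched as follows: a tanh-based weighting of sparse-coefficient activities (one plausible form of the "improved tanh" rule, not the paper's exact function) and guided-filter reconstruction of the final image. The guided filter call requires opencv-contrib-python.

```python
import cv2
import numpy as np

def tanh_fuse(c1, c2, k=2.0):
    # Soft selection of the coefficient with higher activity (absolute value)
    w = 0.5 * (1.0 + np.tanh(k * (np.abs(c1) - np.abs(c2))))
    return w * c1 + (1.0 - w) * c2

def reconstruct(fused_sr, fused_sf, radius=8, eps=1e-3):
    # Use the spatial-domain fusion as the guide for the SR-based fusion
    guide = fused_sf.astype(np.float32)
    return cv2.ximgproc.guidedFilter(guide, fused_sr.astype(np.float32), radius, eps)
```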
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Hengzhao, Bowen Tan, Leiming Sun, Hanye Liu, Haixi Zhang, and Bin Liu. "Multi-Source Image Fusion Based Regional Classification Method for Apple Diseases and Pests." Applied Sciences 14, no. 17 (August 31, 2024): 7695. http://dx.doi.org/10.3390/app14177695.

Full text
Abstract:
Efficient diagnosis of apple diseases and pests is crucial to the healthy development of the apple industry. However, existing single-source image classification methods are limited by the information available in a single input image, resulting in low classification accuracy and poor stability. Therefore, a classification method for apple disease and pest areas based on multi-source image fusion is proposed in this paper. Firstly, RGB images and multispectral images are acquired by drones to construct a multi-source image dataset of apple disease and pest canopies. Secondly, a vegetation index selection method based on saliency attention is proposed, which uses a multi-label ReliefF feature selection algorithm to score the importance of vegetation indices, enabling their automatic selection. Finally, a multi-label classification model for apple disease and pest areas, named AMMFNet, is constructed; it effectively combines the advantages of RGB and multispectral images, performs data-level fusion of the multi-source image data, and uses channel attention mechanisms to exploit the complementarity between the sources. The experimental results demonstrate that the proposed AMMFNet achieves a subset accuracy of 92.92%, a sample accuracy of 85.43%, and an F1 value of 86.21% on the apple disease and pest multi-source image dataset, improvements of 8.93% and 10.9% over prediction methods using only RGB or only multispectral images. The results also show that the proposed method can provide technical support for the coarse-grained localization of diseases and pests in apple orchards and has good application potential in the apple planting industry.
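A sketch of the index-selection idea: candidate vegetation indices are computed per region and ranked by ReliefF importance scores. The paper uses a multi-label ReliefF variant; the single-label surrogate below uses the skrebate library (an assumption, not the authors' implementation), and the specific indices shown are common examples.

```python
import numpy as np
from skrebate import ReliefF  # pip install skrebate

def vegetation_indices(red, nir, green, eps=1e-6):
    ndvi = (nir - red) / (nir + red + eps)
    gndvi = (nir - green) / (nir + green + eps)
    rvi = nir / (red + eps)
    return np.stack([ndvi, gndvi, rvi], axis=-1)

def rank_indices(X, y, keep=2):
    # X: (n_regions, n_indices) mean index values; y: (n_regions,) labels
    relief = ReliefF(n_neighbors=10)
    relief.fit(X, y)
    return np.argsort(relief.feature_importances_)[::-1][:keep]
```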
APA, Harvard, Vancouver, ISO, and other styles
45

Venkata, Udaya Sameer, and Ruchira Naskar. "Blind Image Source Device Identification." International Journal of Information Security and Privacy 12, no. 3 (July 2018): 84–99. http://dx.doi.org/10.4018/ijisp.2018070105.

Full text
Abstract:
This article describes how digital forensic techniques for source investigation and identification enable forensic analysts to map an image under question to its source device in a completely blind way, with no a-priori information about storage and processing. Such techniques operate through blind image fingerprinting or machine-learning-based modelling using appropriate image features. Although researchers to date have achieved extremely high accuracy in source device prediction (more than 99% with 10-12 candidate cameras), the practical applicability of the existing techniques is still doubtful. This is due to some critical open challenges in this domain, such as exact device linking, the open-set challenge, classifier overfitting, and counter-forensics. In this article, the authors identify those open challenges and give an insight into possible solution strategies.
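The classic fingerprinting family the abstract alludes to is PRNU-based, sketched minimally below: noise residuals of many images from one camera average into a sensor fingerprint, and a questioned image is linked by normalized correlation of its residual with that fingerprint. Images are assumed to be grayscale floats in [0, 1]; real systems add alignment and stronger denoisers.

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def residual(img):
    # Noise residual: image minus its wavelet-denoised estimate
    return img - denoise_wavelet(img, rescale_sigma=True)

def fingerprint(images):
    # Average many residuals from the same camera into a PRNU estimate
    return np.mean([residual(im) for im in images], axis=0)

def link_score(img, fp):
    r, f = residual(img).ravel(), fp.ravel()
    r, f = r - r.mean(), f - f.mean()
    return float(r @ f / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))
```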
APA, Harvard, Vancouver, ISO, and other styles
46

Brennan, P. C., and D. O'leary. "Source to image receptor distance." British Journal of Radiology 79, no. 939 (March 2006): 266. http://dx.doi.org/10.1259/bjr/60546865.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Zhiguang, and Shan Zeng. "TPFusion: Texture Preserving Fusion of Infrared and Visible Images via Dense Networks." Entropy 24, no. 2 (February 19, 2022): 294. http://dx.doi.org/10.3390/e24020294.

Full text
Abstract:
In this paper, we design an infrared (IR) and visible (VIS) image fusion method based on unsupervised dense networks, termed TPFusion. Activity level measurements and fusion rules are indispensable parts of conventional image fusion methods, but designing an appropriate fusion process is time-consuming and complicated. In recent years, deep learning-based methods have been proposed to handle this problem; however, for multi-modality image fusion, a single shared network cannot extract effective feature maps from source images obtained by different image sensors. TPFusion avoids this issue. First, we extract the textural information of the source images. Then two densely connected networks are trained to fuse the textural information and the source images, respectively. In this way, more textural details are preserved in the fused image. Moreover, the loss functions we designed to constrain the two densely connected convolutional networks are tailored to the characteristics of textural information and the source images. To demonstrate the validity of our method, we conducted comparison and ablation experiments with both qualitative and quantitative assessments. The ablation experiments prove the effectiveness of TPFusion. Compared to existing advanced IR and VIS image fusion methods, our fusion results are better in both objective and subjective respects: in qualitative comparisons, they have better contrast and abundant textural details; in quantitative comparisons, TPFusion outperforms existing representative fusion methods.
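One simple way to obtain the "textural information" the abstract refers to is a local gradient-magnitude map per source image, sketched below; TPFusion's actual extraction operator may differ.

```python
import cv2
import numpy as np

def texture_map(img):
    # Gradient magnitude as a crude texture/detail map
    g = img.astype(np.float32)
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)
```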
APA, Harvard, Vancouver, ISO, and other styles
48

Dai, Pang Da, Yu Jun Zhang, Jing Li Wang, Chang Hua Lu, Yi Zhou, and Jing Liu. "Research on Algorithm of Vision-Based Night Visibility Estimation." Applied Mechanics and Materials 394 (September 2013): 566–70. http://dx.doi.org/10.4028/www.scientific.net/amm.394.566.

Full text
Abstract:
Accurately measuring the luminance of light sources in night images is critically important for vision-based night visibility estimation. In this paper, we propose a practical night visibility algorithm with two main parts: light source recognition and edge extraction. Firstly, we give the model of vision-based visibility estimation and propose the framework of the algorithm. Secondly, we give a method to extract light sources from multiple potential targets using prior knowledge of spatial information. Then, we analyze the features of light source images, describe the expanded template used to crop the light-source sub-image, and introduce a PS level set to segment the edge. Experiments show that the average light-source recognition precision approaches 0.95 under moderate-breeze conditions and, compared with manual segmentation, the precision of light-source segmentation approaches 0.99 when the true visibility exceeds 500 m.
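A minimal sketch of the first stage as we read it: bright pixels of a grayscale night image are thresholded and grouped into connected regions as light-source candidates. The full algorithm then filters candidates with prior spatial knowledge and refines edges with the level-set segmentation, neither of which is shown here; the threshold and area values are arbitrary.

```python
import numpy as np
from scipy import ndimage

def light_source_candidates(gray, thresh=220, min_area=9):
    # Connected components of bright pixels
    labels, n = ndimage.label(gray >= thresh)
    areas = np.bincount(labels.ravel())
    boxes = ndimage.find_objects(labels)  # bounding slices, indexed by label-1
    return [boxes[i - 1] for i in range(1, n + 1) if areas[i] >= min_area]
```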
APA, Harvard, Vancouver, ISO, and other styles
49

Nwokeji, Chijioke Emeka, Akbar Sheikh-Akbari, Anatoliy Gorbenko, and Iosif Mporas. "Source Camera Identification Techniques: A Survey." Journal of Imaging 10, no. 2 (January 25, 2024): 31. http://dx.doi.org/10.3390/jimaging10020031.

Full text
Abstract:
The successful investigation and prosecution of significant crimes, including child pornography, insurance fraud, movie piracy, traffic monitoring, and scientific fraud, hinge largely on the availability of solid evidence to establish the case beyond any reasonable doubt. When dealing with digital images/videos as evidence in such investigations, there is a critical need to conclusively prove the source camera/device of the questioned image. Extensive research has been conducted in the past decade to address this requirement, resulting in various methods categorized into brand, model, or individual image source camera identification techniques. This paper presents a survey of all those existing methods found in the literature. It thoroughly examines the efficacy of these existing techniques for identifying the source camera of images, utilizing both intrinsic hardware artifacts such as sensor pattern noise and lens optical distortion, and software artifacts like color filter array and auto white balancing. The investigation aims to discern the strengths and weaknesses of these techniques. The paper provides publicly available benchmark image datasets and assessment criteria used to measure the performance of those different methods, facilitating a comprehensive comparison of existing approaches. In conclusion, the paper outlines directions for future research in the field of source camera identification.
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Yandong, and Bob A. Hardage. "SV-P extraction and imaging for far-offset vertical seismic profile data." Interpretation 3, no. 3 (August 1, 2015): SW27—SW35. http://dx.doi.org/10.1190/int-2015-0002.1.

Full text
Abstract:
We analyzed vertical seismic profile (VSP) data acquired across a Marcellus Shale prospect and found that SV-P reflections could be extracted from far-offset VSP data generated by a vertical-vibrator source using time-variant receiver rotations. Optimal receiver rotation angles were determined by dynamically steering the geophones toward the time-varying approach directions of the upgoing SV-P reflections. These SV-P reflections were then imaged using a VSP common-depth-point transformation based on ray tracing. Comparisons of our SV-P image with P-P and P-SV images derived from the same offset VSP data showed that, for deep targets, the SV-P data created an image that extended farther from the receiver well and spanned a wider offset range than the P-P and P-SV images. A comparison of our VSP SV-P image with a surface-based P-SV profile that traversed the VSP well demonstrated that SV-P data are equivalent to P-SV data for characterizing geology and that a VSP-derived SV-P image can be used to calibrate surface-recorded SV-P data generated by P-wave sources.
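A time-variant receiver rotation can be sketched as below: at every time sample the vertical and horizontal (radial) geophone components are rotated by a precomputed approach angle, steering the receiver toward the upgoing SV-P reflection. Deriving the angle series itself requires the ray tracing described in the abstract and is not shown.

```python
import numpy as np

def rotate_components(vz, vx, angles_rad):
    # vz, vx, angles_rad: (nt,) arrays; one rotation angle per time sample
    c, s = np.cos(angles_rad), np.sin(angles_rad)
    steered = c * vz + s * vx   # along the reflection's approach direction
    ortho = -s * vz + c * vx    # orthogonal component
    return steered, ortho
```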
APA, Harvard, Vancouver, ISO, and other styles