Journal articles on the topic "Multi-Exposure Fusion"

To see the other types of publications on this topic, follow the link: Multi-Exposure Fusion.

Cite sources in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Multi-Exposure Fusion".

Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the online abstract, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Goshtasby, A. Ardeshir. "Fusion of multi-exposure images." Image and Vision Computing 23, no. 6 (June 2005): 611–18. http://dx.doi.org/10.1016/j.imavis.2005.02.004.

2

CM, Sushmitha, and Meharunnisa SP. "An Image Quality Assessment of Multi-Exposure Image Fusion by Improving SSIM." International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 2780–84. http://dx.doi.org/10.31142/ijtsrd15634.

3

Li, Wei-zhong, Ben-shun Yi, Kang Qiu, and Hong Peng. "Detail preserving multi-exposure image fusion." Optics and Precision Engineering 24, no. 9 (2016): 2283–92. http://dx.doi.org/10.3788/ope.20162409.2283.

4

Shaikh, Uzmanaz A., Vivek J. Vishwakarma, and Shubham S. Mahale. "Dynamic Scene Multi-Exposure Image Fusion." IETE Journal of Education 59, no. 2 (July 3, 2018): 53–61. http://dx.doi.org/10.1080/09747338.2018.1510744.

5

Li, Zhengguo, Zhe Wei, Changyun Wen, and Jinghong Zheng. "Detail-Enhanced Multi-Scale Exposure Fusion." IEEE Transactions on Image Processing 26, no. 3 (March 2017): 1243–52. http://dx.doi.org/10.1109/tip.2017.2651366.

6

Inoue, Kohei, Hengjun Yu, Kenji Hara, and Kiichi Urahama. "Saturation-Enhancing Multi-Exposure Image Fusion." Journal of the Institute of Image Information and Television Engineers 70, no. 8 (2016): J185–J187. http://dx.doi.org/10.3169/itej.70.j185.

7

Liu, Renshuai, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, and Xuan Cheng. "EMEF: Ensemble Multi-Exposure Image Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1710–18. http://dx.doi.org/10.1609/aaai.v37i2.25259.

Abstract:
Although remarkable progress has been made in recent years, current multi-exposure image fusion (MEF) research is still bounded by the lack of real ground truth, objective evaluation function, and robust fusion strategy. In this paper, we study the MEF problem from a new perspective. We don’t utilize any synthesized ground truth, design any loss function, or develop any fusion strategy. Our proposed method EMEF takes advantage of the wisdom of multiple imperfect MEF contributors including both conventional and deep learning-based methods. Specifically, EMEF consists of two main stages: pre-train an imitator network and tune the imitator in the runtime. In the first stage, we make a unified network imitate different MEF targets in a style modulation way. In the second stage, we tune the imitator network by optimizing the style code, in order to find an optimal fusion result for each input pair. In the experiment, we construct EMEF from four state-of-the-art MEF methods and then make comparisons with the individuals and several other competitive methods on the latest released MEF benchmark dataset. The promising experimental results demonstrate that our ensemble framework can “get the best of all worlds”. The code is available at https://github.com/medalwill/EMEF.
8

Xiang, Hu Yan, and Xi Rong Ma. "An Improved Multi-Exposure Image Fusion Algorithm." Advanced Materials Research 403-408 (November 2011): 2200–2205. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2200.

Abstract:
An improved multi-exposure image fusion scheme is proposed to fuse visual images for wide-range illumination applications. While previous image fusion approaches are concerned only with local details such as regional contrast and gradient, the proposed algorithm also takes global illumination contrast into consideration, which evidently extends the dynamic range. Wavelets are used as the multi-scale analysis tool in intensity fusion. For color fusion, the HSI color model and a weight-map-based method are used. The experimental results showed that the proposed fusion scheme has significant advantages in dynamic range, regional contrast, and color saturation.
9

Deng, Chenwei, Zhen Li, Shuigen Wang, Xun Liu, and Jiahui Dai. "Saturation-based quality assessment for colorful multi-exposure image fusion." International Journal of Advanced Robotic Systems 14, no. 2 (March 1, 2017): 172988141769462. http://dx.doi.org/10.1177/1729881417694627.

Abstract:
Multi-exposure image fusion is becoming increasingly influential in enhancing the quality of experience of consumer electronics. However, until now few works have been conducted on the performance evaluation of multi-exposure image fusion, especially colorful multi-exposure image fusion. Conventional quality assessment methods for multi-exposure image fusion mainly focus on grayscale information, while ignoring the color components, which also convey vital visual information. We propose an objective method for the quality assessment of colored multi-exposure image fusion based on image saturation, together with texture and structure similarities, which are able to measure the perceived color, texture, and structure information of fused images. The final image quality is predicted using an extreme learning machine with texture, structure, and saturation similarities as image features. Experimental results for a public multi-exposure image fusion database show that the proposed model can accurately predict colored multi-exposure image fusion image quality and correlates well with human perception. Compared with state-of-the-art image quality assessment models for image fusion, the proposed metric has better evaluation performance.
10

Hayat, Naila, and Muhammad Imran. "Multi-exposure image fusion technique using multi-resolution blending." IET Image Processing 13, no. 13 (November 14, 2019): 2554–61. http://dx.doi.org/10.1049/iet-ipr.2019.0438.

11

Song, Changho, Soowoong Jeong, and Sangkeun Lee. "Colorful Multi-Exposure Fusion with Guided Filtering based Fusion Method." TECHART: Journal of Arts and Imaging Science 3, no. 4 (November 30, 2016): 27. http://dx.doi.org/10.15323/techart.2016.11.3.4.27.

12

Liu, Xin-long, and Hong-wei Yi. "Improved Multi-exposure Image Pyramid Fusion Method." ACTA PHOTONICA SINICA 48, no. 8 (2019): 810002. http://dx.doi.org/10.3788/gzxb20194808.0810002.

13

Babu, K. Suresh, S. Ali Asgar, V. Thtimurthulu, and MSA Srivatsava. "Multi Exposure Image Fusion based on PCA." International Journal of Engineering Research and Advanced Technology 4, no. 8 (2018): 37–46. http://dx.doi.org/10.31695/ijerat.2018.3296.

14

Yan, Qingsen, Yu Zhu, Yulin Zhou, Jinqiu Sun, Lei Zhang, and Yanning Zhang. "Enhancing image visuality by multi-exposure fusion." Pattern Recognition Letters 127 (November 2019): 66–75. http://dx.doi.org/10.1016/j.patrec.2018.10.008.

15

Ahmad, Attiq, Muhammad Mohsin Riaz, Abdul Ghafoor, and Tahir Zaidi. "Noise Resistant Fusion for Multi-Exposure Sensors." IEEE Sensors Journal 16, no. 13 (July 2016): 5123–24. http://dx.doi.org/10.1109/jsen.2016.2556715.

16

Paul, Sujoy, Ioana S. Sevcenco, and Panajotis Agathoklis. "Multi-Exposure and Multi-Focus Image Fusion in Gradient Domain." Journal of Circuits, Systems and Computers 25, no. 10 (July 22, 2016): 1650123. http://dx.doi.org/10.1142/s0218126616501231.

Abstract:
A multi-exposure and multi-focus image fusion algorithm is proposed. The algorithm is developed for color images and is based on blending the gradients of the luminance components of the input images using the maximum gradient magnitude at each pixel location and then obtaining the fused luminance using a Haar wavelet-based image reconstruction technique. This image reconstruction algorithm is of [Formula: see text] complexity and includes a Poisson solver at each resolution to eliminate artifacts that may appear due to the nonconservative nature of the resulting gradient. The fused chrominance, on the other hand, is obtained as a weighted mean of the chrominance channels. The particular case of grayscale images is treated as luminance fusion. Experimental results and comparison with other fusion techniques indicate that the proposed algorithm is fast and produces similar or better results than existing techniques for both multi-exposure as well as multi-focus images.
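The gradient-domain strategy summarized in the abstract above (keep the strongest gradient at each location across exposures, then reconstruct from the blended gradient field) can be sketched in a simplified 1-D form. The helper names and toy sequences below are illustrative assumptions, not the paper's implementation, and the cumulative sum stands in for the paper's wavelet-based reconstruction with a Poisson solver:

```python
def gradients(img):
    """Forward-difference gradient of a 1-D signal."""
    return [img[i + 1] - img[i] for i in range(len(img) - 1)]

def fuse_gradients(imgs):
    """At each position, keep the gradient with the largest magnitude across inputs."""
    grads = [gradients(img) for img in imgs]
    return [max(col, key=abs) for col in zip(*grads)]

def integrate(grad, start):
    """Cumulative sum reconstructs a 1-D signal from its gradient field."""
    out = [start]
    for g in grad:
        out.append(out[-1] + g)
    return out

flat = [0.5, 0.5, 0.5, 0.5, 0.5]      # saturated region: no detail, zero gradients
detailed = [0.2, 0.6, 0.3, 0.7, 0.4]  # well-exposed region: strong edges
fused_grad = fuse_gradients([flat, detailed])
fused = integrate(fused_grad, detailed[0])  # recovers the detailed signal
```

Because the flat input contributes only zero gradients, the fused result here simply reproduces the detailed exposure; with overlapping detail the output would mix the strongest edges from each input.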
17

Shouhong, Chen, Zhao Shuang, Ma Jun, Liu Xinyu, and Hou Xingna. "A Multi-exposure Image Fusion Method with Detail Preservation." MATEC Web of Conferences 173 (2018): 03009. http://dx.doi.org/10.1051/matecconf/201817303009.

Abstract:
In view of the problems of uneven exposure during image acquisition and the serious loss of detail in traditional multi-exposure image fusion algorithms, a detail-preserving image fusion method is proposed. A weighted approach to multi-exposure image fusion is used, taking into account features such as local contrast, exposure brightness, and color information to better preserve detail. To eliminate noise and interference, a recursive filter is applied. Compared with other algorithms, the proposed algorithm retains rich detail information, meets the quality requirements of spot-welding image fusion, and has practical application value.
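The weighted recipe described in this abstract (per-pixel weights from exposure and local contrast, then a normalized weighted sum) follows the general Mertens-style exposure-fusion pattern. A minimal 1-D grayscale sketch follows; the toy sequences and the particular weight functions are assumptions of this illustration, not the authors' exact features:

```python
import math

def well_exposedness(v, sigma=0.2):
    """Gaussian weight peaking at mid-grey (0.5): favours well-exposed pixels."""
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def local_contrast(img, i):
    """Absolute 1-D Laplacian response as a simple contrast measure."""
    left = img[max(i - 1, 0)]
    right = img[min(i + 1, len(img) - 1)]
    return abs(left - 2 * img[i] + right) + 1e-12  # epsilon avoids all-zero weights

def fuse(exposures):
    """Normalized weighted per-pixel fusion of aligned 1-D exposures in [0, 1]."""
    fused = []
    for i in range(len(exposures[0])):
        weights = [well_exposedness(img[i]) * local_contrast(img, i) for img in exposures]
        total = sum(weights)
        fused.append(sum(w * img[i] for w, img in zip(weights, exposures)) / total)
    return fused

under = [0.05, 0.10, 0.20, 0.15]  # under-exposed: dark everywhere
mid = [0.40, 0.55, 0.60, 0.50]    # well-exposed: near mid-grey
over = [0.80, 0.95, 1.00, 0.90]   # over-exposed: clipped highlights
result = fuse([under, mid, over])
```

The normalized weights pull each fused pixel toward the best-exposed input, which is the core of the weight-map approach; real implementations refine the weight maps (here, with a recursive filter) before blending to suppress noise.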
18

Buades, Antoni, Jose Luis Lisani, and Onofre Martorell. "Efficient joint noise removal and multi exposure fusion." PLOS ONE 17, no. 3 (March 25, 2022): e0265464. http://dx.doi.org/10.1371/journal.pone.0265464.

Abstract:
Multi-exposure fusion (MEF) is a technique that combines different snapshots of the same scene, captured with different exposure times, into a single image. This combination process (also known as fusion) is performed in such a way that the parts with better exposure of each input image have a stronger influence. Therefore, in the result image all areas are well exposed. In this paper, we propose a new method that performs MEF and noise removal. Rather than denoising each input image individually and then fusing the obtained results, the proposed strategy jointly performs fusion and denoising in the Discrete Cosinus Transform (DCT) domain, which leads to a very efficient algorithm. The method takes advantage of spatio-temporal patch selection and collaborative 3D thresholding. Several experiments show that the obtained results are significantly superior to the existing state of the art.
19

Xu, Han, Haochen Liang, and Jiayi Ma. "Unsupervised Multi-Exposure Image Fusion Breaking Exposure Limits via Contrastive Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3010–17. http://dx.doi.org/10.1609/aaai.v37i3.25404.

Abstract:
This paper proposes an unsupervised multi-exposure image fusion (MEF) method via contrastive learning, termed as MEF-CL. It breaks exposure limits and performance bottleneck faced by existing methods. MEF-CL firstly designs similarity constraints to preserve contents in source images. It eliminates the need for ground truth (actually not exist and created artificially) and thus avoids negative impacts of inappropriate ground truth on performance and generalization. Moreover, we explore a latent feature space and apply contrastive learning in this space to guide fused image to approximate normal-light samples and stay away from inappropriately exposed ones. In this way, characteristics of fused images (e.g., illumination, colors) can be further improved without being subject to source images. Therefore, MEF-CL is applicable to image pairs of any multiple exposures rather than a pair of under-exposed and over-exposed images mandated by existing methods. By alleviating dependence on source images, MEF-CL shows better generalization for various scenes. Consequently, our results exhibit appropriate illumination, detailed textures, and saturated colors. Qualitative, quantitative, and ablation experiments validate the superiority and generalization of MEF-CL. Our code is publicly available at https://github.com/hanna-xu/MEF-CL.
20

Zhang, Xingchen. "Benchmarking and comparing multi-exposure image fusion algorithms." Information Fusion 74 (October 2021): 111–31. http://dx.doi.org/10.1016/j.inffus.2021.02.005.

21

Piao, Yong-jie, Wei Xu, Shao-ju Wang, and Shu-ping Tao. "Fast multi-exposure image fusion for HDR video." Chinese Journal of Liquid Crystals and Displays 29, no. 6 (2014): 1032–41. http://dx.doi.org/10.3788/yjyxs20142906.1032.

22

Han, Dong, Liang Li, Xiaojie Guo, and Jiayi Ma. "Multi-exposure image fusion via deep perceptual enhancement." Information Fusion 79 (March 2022): 248–62. http://dx.doi.org/10.1016/j.inffus.2021.10.006.

23

Xu, Fang, Jinghong Liu, Yueming Song, Hui Sun, and Xuan Wang. "Multi-Exposure Image Fusion Techniques: A Comprehensive Review." Remote Sensing 14, no. 3 (February 7, 2022): 771. http://dx.doi.org/10.3390/rs14030771.

Abstract:
Multi-exposure image fusion (MEF) is emerging as a research hotspot in the fields of image processing and computer vision, which can integrate images with multiple exposure levels into a full exposure image of high quality. It is an economical and effective way to improve the dynamic range of the imaging system and has broad application prospects. In recent years, with the further development of image representation theories such as multi-scale analysis and deep learning, significant progress has been achieved in this field. This paper comprehensively investigates the current research status of MEF methods. The relevant theories and key technologies for constructing MEF models are analyzed and categorized. The representative MEF methods in each category are introduced and summarized. Then, based on the multi-exposure image sequences in static and dynamic scenes, we present a comparative study for 18 representative MEF approaches using nine commonly used objective fusion metrics. Finally, the key issues of current MEF research are discussed, and a development trend for future research is put forward.
24

Ma, Kede, Kai Zeng, and Zhou Wang. "Perceptual Quality Assessment for Multi-Exposure Image Fusion." IEEE Transactions on Image Processing 24, no. 11 (November 2015): 3345–56. http://dx.doi.org/10.1109/tip.2015.2442920.

25

Zhang, Wenlong, Xiaolin Liu, Wuchao Wang, and Yujun Zeng. "Multi-exposure image fusion based on wavelet transform." International Journal of Advanced Robotic Systems 15, no. 2 (March 2018): 172988141876893. http://dx.doi.org/10.1177/1729881418768939.

26

Choi, Seungcheol, Oh-Jin Kwon, and Jinhee Lee. "A Method for Fast Multi-Exposure Image Fusion." IEEE Access 5 (2017): 7371–80. http://dx.doi.org/10.1109/access.2017.2694038.

27

Liu, Yu, and Zengfu Wang. "Dense SIFT for ghost-free multi-exposure fusion." Journal of Visual Communication and Image Representation 31 (August 2015): 208–24. http://dx.doi.org/10.1016/j.jvcir.2015.06.021.

28

Wu, Shengcong, Ting Luo, Yang Song, and Haiyong Xu. "Multi-exposure image fusion based on tensor decomposition." Multimedia Tools and Applications 79, no. 33-34 (June 16, 2020): 23957–75. http://dx.doi.org/10.1007/s11042-020-09131-x.

29

Martorell, Onofre, Catalina Sbert, and Antoni Buades. "Ghosting-free DCT based multi-exposure image fusion." Signal Processing: Image Communication 78 (October 2019): 409–25. http://dx.doi.org/10.1016/j.image.2019.07.020.

30

Hu, Yunxue, Chao Xu, Zhengping Li, Fang Lei, Bo Feng, Lingling Chu, Chao Nie, and Dou Wang. "Detail Enhancement Multi-Exposure Image Fusion Based on Homomorphic Filtering." Electronics 11, no. 8 (April 11, 2022): 1211. http://dx.doi.org/10.3390/electronics11081211.

Abstract:
Due to the large dynamic range of real scenes, it is difficult for images taken by ordinary devices to represent high-quality real scenes. To obtain high-quality images, the exposure fusion of multiple exposure images of the same scene is required. The fusion of multiple images results in the loss of edge detail in areas with large exposure differences. Aiming at this problem, this paper proposes a new method for the fusion of multi-exposure images with detail enhancement based on homomorphic filtering. First, a fusion weight map is constructed using exposure and local contrast. The exposure weight map is calculated by threshold segmentation and an adaptively adjustable Gaussian curve. The algorithm can assign appropriate exposure weights to well-exposed areas so that the fused image retains more details. Then, the weight map is denoised using fast-guided filtering. Finally, a fusion method for the detail enhancement of Laplacian pyramids with homomorphic filtering is proposed to enhance the edge information lost by Laplacian pyramid fusion. The experimental results show that the method can generate high-quality images with clear edges and details as well as similar color appearance to real scenes and can outperform existing algorithms in both subjective and objective evaluations.
31

Han, Yongcheng, Wenwen Zhang, and Weiji He. "Low-light image enhancement based on simulated multi-exposure fusion." Journal of Physics: Conference Series 2478, no. 6 (June 1, 2023): 062022. http://dx.doi.org/10.1088/1742-6596/2478/6/062022.

Abstract:
We propose an efficient and novel framework for low-light image enhancement, which aims to reveal information hidden in the darkness and improve overall brightness and local contrast. Inspired by the exposure fusion technique, we employ simulated multi-exposure image fusion to derive bright, natural, and satisfactory results when images are taken under poor conditions such as insufficient or uneven illumination, back-lighting, and limited exposure time. Specifically, we first design a novel method to generate synthesized images with varying exposure time from a single image. Thus, each image of these artificial sequences contains the information necessary for the final desired enhanced result. We then introduce a flexible multi-exposure fusion framework to obtain fused images, which comprises a weight map prediction module and a multi-scale fusion module. Extensive experiments show that our approach can achieve similar or better performance compared to several state-of-the-art methods.
32

Jia, Jinquan, Jian Sun, and Zhiqin Zhu. "A multi-scale patch-wise algorithm for multi-exposure image fusion." Optik 248 (December 2021): 168120. http://dx.doi.org/10.1016/j.ijleo.2021.168120.

33

Li, Hui, Kede Ma, Hongwei Yong, and Lei Zhang. "Fast Multi-Scale Structural Patch Decomposition for Multi-Exposure Image Fusion." IEEE Transactions on Image Processing 29 (2020): 5805–16. http://dx.doi.org/10.1109/tip.2020.2987133.

34

Im, Chan-Gi, Dong-Min Son, Hyuk-Ju Kwon, and Sung-Hak Lee. "Multi-Task Learning Approach Using Dynamic Hyperparameter for Multi-Exposure Fusion." Mathematics 11, no. 7 (March 27, 2023): 1620. http://dx.doi.org/10.3390/math11071620.

Abstract:
High-dynamic-range (HDR) image synthesis is a technology developed to accurately reproduce the actual scene of an image on a display by extending the dynamic range of an image. Multi-exposure fusion (MEF) technology, which synthesizes multiple low-dynamic-range (LDR) images to create an HDR image, has been developed in various ways including pixel-based, patch-based, and deep learning-based methods. Recently, methods to improve the synthesis quality of images using deep-learning-based algorithms have mainly been studied in the field of MEF. Despite the various advantages of deep learning, deep-learning-based methods have a problem in that numerous multi-exposed and ground-truth images are required for training. In this study, we propose a self-supervised learning method that generates and learns reference images based on input images during the training process. In addition, we propose a method to train a deep learning model for an MEF with multiple tasks using dynamic hyperparameters on the loss functions. It enables effective network optimization across multiple tasks and high-quality image synthesis while preserving a simple network architecture. Our learning method applied to the deep learning model shows superior synthesis results compared to other existing deep-learning-based image synthesis algorithms.
35

Bai, Bendu, and Junpeng Li. "基于注意力机制的多曝光图像融合算法 [Multi-exposure image fusion algorithm based on attention mechanism]." ACTA PHOTONICA SINICA 51, no. 4 (2022): 0410004. http://dx.doi.org/10.3788/gzxb20225104.0410004.

36

Qu, Linhao, Shaolei Liu, Manning Wang, and Zhijian Song. "TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework Using Self-Supervised Multi-Task Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2126–34. http://dx.doi.org/10.1609/aaai.v36i2.20109.

Abstract:
In this paper, we propose TransMEF, a transformer-based multi-exposure image fusion framework that uses self-supervised multi-task learning. The framework is based on an encoder-decoder network, which can be trained on large natural image datasets and does not require ground truth fusion images. We design three self-supervised reconstruction tasks according to the characteristics of multi-exposure images and conduct these tasks simultaneously using multi-task learning; through this process, the network can learn the characteristics of multi-exposure images and extract more generalized features. In addition, to compensate for the defect in establishing long-range dependencies in CNN-based architectures, we design an encoder that combines a CNN module with a transformer module. This combination enables the network to focus on both local and global information. We evaluated our method and compared it to 11 competitive traditional and deep learning-based methods on the latest released multi-exposure image fusion benchmark dataset, and our method achieved the best performance in both subjective and objective evaluations. Code will be available at https://github.com/miccaiif/TransMEF.
37

Zhang, Jiebin, Shangyou Zeng, Ying Wang, Jinjin Wang, and Hongyang Chen. "An Efficient Extreme-Exposure Image Fusion Method." Journal of Physics: Conference Series 2137, no. 1 (December 1, 2021): 012061. http://dx.doi.org/10.1088/1742-6596/2137/1/012061.

Abstract:
Since existing commercial imaging equipment cannot meet the requirements of high dynamic range, multi-exposure image fusion is an economical and fast way to implement HDR. However, existing multi-exposure image fusion algorithms suffer from long fusion times and large data storage. We propose an extreme-exposure image fusion method based on deep learning. In this method, two extreme-exposure image sequences are fed into the network, channel and spatial attention mechanisms are introduced to automatically learn and optimize the weights, and the optimal fusion weights are output. In addition, the model adopts real-value training and, through a new custom loss function, brings the output closer to the real value. Experimental results show that this method is superior to existing methods in both objective and subjective terms.
38

Qi, Yanjie, Zehui Yang, and Lin Kang. "Multi-exposure X-ray image fusion quality evaluation based on CSF and gradient amplitude similarity." Journal of X-Ray Science and Technology 29, no. 4 (July 27, 2021): 697–709. http://dx.doi.org/10.3233/xst-210871.

Abstract:
Due to the limitation of the dynamic range of the imaging device, fixed-voltage X-ray images often contain overexposed or underexposed regions, in which some structure information of the composite steel component is lost. This problem can be solved by fusing multi-exposure X-ray images taken with different voltages to produce images with more detailed structures and information. Because research on multi-exposure X-ray image fusion is scarce, there is no evaluation method designed specifically for it, and the fused images obtained by different algorithms may suffer from detail loss and structure disorder. To address these problems, this study proposes a new multi-exposure X-ray image fusion quality evaluation method based on the contrast sensitivity function (CSF) and gradient amplitude similarity. First, following the idea of information fusion, multiple reference images are fused into a new reference image. Next, the gradient amplitude similarity between the new reference image and the test image is calculated. Then, the overall evaluation value is obtained by weighting with the CSF. In experiments on the MEF Database, the SROCC of the proposed algorithm is about 0.8914 and the PLCC is about 0.9287, which shows that the proposed algorithm is more consistent with subjective perception on this database. This study thus provides a new objective evaluation method whose results are consistent with the subjective impressions of human eyes.
39

KINOSHITA, Yuma, Sayaka SHIOTA, and Hitoshi KIYA. "A Pseudo Multi-Exposure Fusion Method Using Single Image." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E101.A, no. 11 (November 1, 2018): 1806–14. http://dx.doi.org/10.1587/transfun.e101.a.1806.

40

Ryu, Je-Ho, Jong-Han Kim, and Jong-Ok Kim. "Deep Gradual Multi-Exposure Fusion Via Recurrent Convolutional Network." IEEE Access 9 (2021): 144756–67. http://dx.doi.org/10.1109/access.2021.3122540.

41

Kim, Jong-Han, Je-Ho Ryu, and Jong-Ok Kim. "FDD-MEF: Feature-Decomposition-Based Deep Multi-Exposure Fusion." IEEE Access 9 (2021): 164551–61. http://dx.doi.org/10.1109/access.2021.3134316.

42

Singh, Harbinder, Gabriel Cristobal, Gloria Bueno, Saul Blanco, Simrandeep Singh, P. N. Hrisheekesha, and Nitin Mittal. "Multi-exposure microscopic image fusion-based detail enhancement algorithm." Ultramicroscopy 236 (June 2022): 113499. http://dx.doi.org/10.1016/j.ultramic.2022.113499.

43

Yang, T. T., and P. Y. Fang. "Multi exposure image fusion algorithm based on YCbCr space." IOP Conference Series: Materials Science and Engineering 359 (May 2018): 012002. http://dx.doi.org/10.1088/1757-899x/359/1/012002.

44

Ma, Kede, Zhengfang Duanmu, Hanwei Zhu, Yuming Fang, and Zhou Wang. "Deep Guided Learning for Fast Multi-Exposure Image Fusion." IEEE Transactions on Image Processing 29 (2020): 2808–19. http://dx.doi.org/10.1109/tip.2019.2952716.

45

Kou, Fei, Zhengguo Li, Changyun Wen, and Weihai Chen. "Edge-preserving smoothing pyramid based multi-scale exposure fusion." Journal of Visual Communication and Image Representation 53 (May 2018): 235–44. http://dx.doi.org/10.1016/j.jvcir.2018.03.020.

46

Shen, Rui, I. Cheng, Jianbo Shi, and A. Basu. "Generalized Random Walks for Fusion of Multi-Exposure Images." IEEE Transactions on Image Processing 20, no. 12 (December 2011): 3634–46. http://dx.doi.org/10.1109/tip.2011.2150235.

47

Yang, Yi, Wei Cao, Shiqian Wu, and Zhengguo Li. "Multi-Scale Fusion of Two Large-Exposure-Ratio Images." IEEE Signal Processing Letters 25, no. 12 (December 2018): 1885–89. http://dx.doi.org/10.1109/lsp.2018.2877893.

48

Peng, Yan-Tsung, He-Hao Liao, and Ching-Fu Chen. "Two-Exposure Image Fusion Based on Optimized Adaptive Gamma Correction." Sensors 22, no. 1 (December 22, 2021): 24. http://dx.doi.org/10.3390/s22010024.

Abstract:
In contrast to conventional digital images, high-dynamic-range (HDR) images have a broader range of intensity between the darkest and brightest regions to capture more details in a scene. Such images are produced by fusing images with different exposure values (EVs) for the same scene. Most existing multi-scale exposure fusion (MEF) algorithms assume that the input images are multi-exposed with small EV intervals. However, thanks to emerging spatially multiplexed exposure technology that can capture an image pair of short and long exposure simultaneously, it is essential to deal with two-exposure image fusion. To bring out more well-exposed contents, we generate a more helpful intermediate virtual image for fusion using the proposed Optimized Adaptive Gamma Correction (OAGC) to have better contrast, saturation, and well-exposedness. Fusing the input images with the enhanced virtual image works well even though both inputs are underexposed or overexposed, which other state-of-the-art fusion methods could not handle. The experimental results show that our method performs favorably against other state-of-the-art image fusion methods in generating high-quality fusion results.
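The intermediate "virtual image" in the abstract above rests on gamma correction. One common heuristic (an assumption of this sketch, not the paper's Optimized Adaptive Gamma Correction) picks the gamma that maps the image's mean intensity to mid-grey:

```python
import math

def adaptive_gamma(img):
    """Choose gamma so that mean_intensity ** gamma == 0.5 (mid-grey).
    A generic heuristic, not the paper's optimized OAGC."""
    mean = sum(img) / len(img)
    mean = min(max(mean, 1e-6), 1 - 1e-6)  # keep the logarithm well-defined
    return math.log(0.5) / math.log(mean)

def apply_gamma(img, gamma):
    """Pointwise power-law correction on intensities in [0, 1]."""
    return [v ** gamma for v in img]

under = [0.05, 0.10, 0.20, 0.15]  # dark input, mean 0.125
gamma = adaptive_gamma(under)     # < 1 for dark images, so correction brightens
virtual = apply_gamma(under, gamma)
```

For an underexposed input the chosen gamma is below 1 and the virtual image is brighter; for an overexposed input it is above 1 and the virtual image is darker, which is what makes a single gamma-corrected intermediate useful for two-exposure fusion.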
49

Gao, Mingyu, Junfan Wang, Yi Chen, Chenjie Du, Chao Chen, and Yu Zeng. "An Improved Multi-Exposure Image Fusion Method for Intelligent Transportation System." Electronics 10, no. 4 (February 4, 2021): 383. http://dx.doi.org/10.3390/electronics10040383.

Abstract:
In this paper, an improved multi-exposure image fusion method for intelligent transportation systems (ITS) is proposed. Further, a new multi-exposure image dataset for traffic signs, TrafficSign, is presented to verify the method. In the intelligent transportation system, as a type of important road information, traffic signs are fused by this method to obtain a fused image with moderate brightness and intact information. By estimating the degree of retention of different features in the source image, the fusion results have adaptive characteristics similar to that of the source image. Considering the weather factor and environmental noise, the source image is preprocessed by bilateral filtering and dehazing algorithm. Further, this paper uses adaptive optimization to improve the quality of the output image of the fusion model. The qualitative and quantitative experiments on the new dataset show that the multi-exposure image fusion algorithm proposed in this paper is effective and practical in the ITS.
50

Liu, Jie, and Yuanyuan Peng. "Research on Image Enhancement Algorithm Based on Artificial Intelligence." Journal of Physics: Conference Series 2074, no. 1 (November 1, 2021): 012024. http://dx.doi.org/10.1088/1742-6596/2074/1/012024.

Abstract:
With the continuous development of science and technology, people have ever higher requirements for image quality. This paper integrates artificial intelligence technology and proposes a low-illuminance panoramic image enhancement algorithm based on simulated multi-exposure fusion. First, the image information content is used as a metric to estimate the optimal exposure rate, and a brightness mapping function is used to enhance the V component. The low-illuminance image and the overexposed image are taken as input, a medium-exposure image is synthesized by exposure interpolation, and the low-illuminance, medium-exposure, and overexposed images are merged using a multi-scale fusion strategy. The fused image is then corrected by a multi-scale detail enhancement algorithm to obtain the final enhanced image. Practice has proved that the algorithm can effectively improve image quality.