Journal articles on the topic "Multi-Exposure Fusion"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles

Consult the top 50 journal articles for your research on the topic "Multi-Exposure Fusion".

Next to every work in the list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever the relevant parameters are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1. Goshtasby, A. Ardeshir. "Fusion of multi-exposure images". Image and Vision Computing 23, no. 6 (June 2005): 611–18. http://dx.doi.org/10.1016/j.imavis.2005.02.004.

2. CM, Sushmitha, and Meharunnisa SP. "An Image Quality Assessment of Multi-Exposure Image Fusion by Improving SSIM". International Journal of Trend in Scientific Research and Development Volume-2, Issue-4 (June 30, 2018): 2780–84. http://dx.doi.org/10.31142/ijtsrd15634.

3. LI Wei-zhong 李卫中, YI Ben-shun 易本顺, QIU Kang 邱康, and PENG Hong 彭红. "Detail preserving multi-exposure image fusion". Optics and Precision Engineering 24, no. 9 (2016): 2283–92. http://dx.doi.org/10.3788/ope.20162409.2283.

4. Shaikh, Uzmanaz A., Vivek J. Vishwakarma, and Shubham S. Mahale. "Dynamic Scene Multi-Exposure Image Fusion". IETE Journal of Education 59, no. 2 (July 3, 2018): 53–61. http://dx.doi.org/10.1080/09747338.2018.1510744.

5. Li, Zhengguo, Zhe Wei, Changyun Wen, and Jinghong Zheng. "Detail-Enhanced Multi-Scale Exposure Fusion". IEEE Transactions on Image Processing 26, no. 3 (March 2017): 1243–52. http://dx.doi.org/10.1109/tip.2017.2651366.

6. Inoue, Kohei, Hengjun Yu, Kenji Hara, and Kiichi Urahama. "Saturation-Enhancing Multi-Exposure Image Fusion". Journal of the Institute of Image Information and Television Engineers 70, no. 8 (2016): J185–J187. http://dx.doi.org/10.3169/itej.70.j185.
7. Liu, Renshuai, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, and Xuan Cheng. "EMEF: Ensemble Multi-Exposure Image Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1710–18. http://dx.doi.org/10.1609/aaai.v37i2.25259.
Abstract: Although remarkable progress has been made in recent years, current multi-exposure image fusion (MEF) research is still bounded by the lack of real ground truth, an objective evaluation function, and a robust fusion strategy. In this paper, we study the MEF problem from a new perspective. We do not utilize any synthesized ground truth, design any loss function, or develop any fusion strategy. Our proposed method, EMEF, takes advantage of the wisdom of multiple imperfect MEF contributors, including both conventional and deep learning-based methods. Specifically, EMEF consists of two main stages: pre-training an imitator network and tuning the imitator at runtime. In the first stage, we make a unified network imitate different MEF targets via style modulation. In the second stage, we tune the imitator network by optimizing the style code, in order to find an optimal fusion result for each input pair. In the experiments, we construct EMEF from four state-of-the-art MEF methods and then compare it with the individual methods and several other competitive methods on the latest released MEF benchmark dataset. The promising experimental results demonstrate that our ensemble framework can "get the best of all worlds". The code is available at https://github.com/medalwill/EMEF.
8. Xiang, Hu Yan, and Xi Rong Ma. "An Improved Multi-Exposure Image Fusion Algorithm". Advanced Materials Research 403-408 (November 2011): 2200–2205. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2200.
Abstract: An improved multi-exposure image fusion scheme is proposed to fuse visual images for wide-range illumination applications. While previous image fusion approaches consider only local details such as regional contrast and gradient, the proposed algorithm also takes global illumination contrast into account, which can extend the dynamic range considerably. Wavelets are used as the multi-scale analysis tool in intensity fusion. For color fusion, the HSI color model and a weight-map-based method are used. The experimental results show that the proposed fusion scheme has significant advantages in dynamic range, regional contrast, and color saturation.
9. Deng, Chenwei, Zhen Li, Shuigen Wang, Xun Liu, and Jiahui Dai. "Saturation-based quality assessment for colorful multi-exposure image fusion". International Journal of Advanced Robotic Systems 14, no. 2 (March 1, 2017): 172988141769462. http://dx.doi.org/10.1177/1729881417694627.
Abstract: Multi-exposure image fusion is becoming increasingly influential in enhancing the quality of experience of consumer electronics. However, until now few works have been conducted on the performance evaluation of multi-exposure image fusion, especially colorful multi-exposure image fusion. Conventional quality assessment methods for multi-exposure image fusion mainly focus on grayscale information while ignoring the color components, which also convey vital visual information. We propose an objective method for the quality assessment of colorful multi-exposure image fusion based on image saturation, together with texture and structure similarities, which are able to measure the perceived color, texture, and structure information of fused images. The final image quality is predicted using an extreme learning machine with texture, structure, and saturation similarities as image features. Experimental results on a public multi-exposure image fusion database show that the proposed model can accurately predict colorful multi-exposure image fusion quality and correlates well with human perception. Compared with state-of-the-art image quality assessment models for image fusion, the proposed metric has better evaluation performance.
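The saturation feature that metrics of this kind rely on is simple to compute. Below is a minimal sketch, not the paper's method: an HSV-style saturation map and an SSIM-like similarity between two images' saturation maps. The stability constant `c` and the ratio form are illustrative assumptions; the paper additionally uses texture and structure similarities and an extreme learning machine.

```python
def saturation(r, g, b):
    # HSV saturation: 1 - min/max; 0 for gray pixels, 1 for pure hues.
    mx = max(r, g, b)
    return 0.0 if mx == 0 else 1.0 - min(r, g, b) / mx

def saturation_similarity(img_a, img_b, c=1e-4):
    # Mean SSIM-style ratio between the two saturation maps, in (0, 1].
    # `c` is a small illustrative stability constant, not from the paper.
    sims = []
    for (ra, ga, ba), (rb, gb, bb) in zip(img_a, img_b):
        sa, sb = saturation(ra, ga, ba), saturation(rb, gb, bb)
        sims.append((2 * sa * sb + c) / (sa * sa + sb * sb + c))
    return sum(sims) / len(sims)

# Two tiny RGB "images": a vivid one and a washed-out version of it.
vivid = [(1.0, 0.0, 0.0), (0.0, 0.8, 0.2)]
washed = [(0.8, 0.6, 0.6), (0.5, 0.7, 0.5)]
```

Identical images score 1; desaturating one image lowers the score, which is the behaviour a color-aware fusion metric needs.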
10. Hayat, Naila, and Muhammad Imran. "Multi-exposure image fusion technique using multi-resolution blending". IET Image Processing 13, no. 13 (November 14, 2019): 2554–61. http://dx.doi.org/10.1049/iet-ipr.2019.0438.

11. Song, Changho, Soowoong Jeong, and Sangkeun Lee. "Colorful Multi-Exposure Fusion with Guided Filtering based Fusion Method". TECHART: Journal of Arts and Imaging Science 3, no. 4 (November 30, 2016): 27. http://dx.doi.org/10.15323/techart.2016.11.3.4.27.

12. LIU Xin-long 刘鑫龙 and YI Hong-wei 易红伟. "Improved Multi-exposure Image Pyramid Fusion Method". ACTA PHOTONICA SINICA 48, no. 8 (2019): 810002. http://dx.doi.org/10.3788/gzxb20194808.0810002.

13. Babu, K. Suresh, S. Ali Asgar, Dr. V. Thtimurthulu, and MSA Srivatsava. "Multi Exposure Image Fusion based on PCA". International Journal of Engineering Research and Advanced Technology 4, no. 8 (2018): 37–46. http://dx.doi.org/10.31695/ijerat.2018.3296.

14. Yan, Qingsen, Yu Zhu, Yulin Zhou, Jinqiu Sun, Lei Zhang, and Yanning Zhang. "Enhancing image visuality by multi-exposure fusion". Pattern Recognition Letters 127 (November 2019): 66–75. http://dx.doi.org/10.1016/j.patrec.2018.10.008.

15. Ahmad, Attiq, Muhammad Mohsin Riaz, Abdul Ghafoor, and Tahir Zaidi. "Noise Resistant Fusion for Multi-Exposure Sensors". IEEE Sensors Journal 16, no. 13 (July 2016): 5123–24. http://dx.doi.org/10.1109/jsen.2016.2556715.
16. Paul, Sujoy, Ioana S. Sevcenco, and Panajotis Agathoklis. "Multi-Exposure and Multi-Focus Image Fusion in Gradient Domain". Journal of Circuits, Systems and Computers 25, no. 10 (July 22, 2016): 1650123. http://dx.doi.org/10.1142/s0218126616501231.
Abstract: A multi-exposure and multi-focus image fusion algorithm is proposed. The algorithm is developed for color images and is based on blending the gradients of the luminance components of the input images using the maximum gradient magnitude at each pixel location, and then obtaining the fused luminance using a Haar wavelet-based image reconstruction technique. This image reconstruction algorithm is of [Formula: see text] complexity and includes a Poisson solver at each resolution to eliminate artifacts that may appear due to the nonconservative nature of the resulting gradient. The fused chrominance, on the other hand, is obtained as a weighted mean of the chrominance channels. The particular case of grayscale images is treated as luminance fusion. Experimental results and comparison with other fusion techniques indicate that the proposed algorithm is fast and produces similar or better results than existing techniques for both multi-exposure and multi-focus images.
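The gradient-selection step of such methods is easiest to see in one dimension, where integration is exact and no Poisson solver is needed. The sketch below is an illustrative 1-D reduction, not the paper's 2-D algorithm: at each position it keeps the gradient of largest magnitude and rebuilds the signal by cumulative summation.

```python
def fuse_gradients_1d(signals):
    # Forward differences of each equally long 1-D signal.
    grads = [[s[i + 1] - s[i] for i in range(len(s) - 1)] for s in signals]
    # Keep the largest-magnitude gradient at each position.
    fused_grad = [max(g, key=abs) for g in zip(*grads)]
    # Integrate back, anchoring at the mean starting value.
    out = [sum(s[0] for s in signals) / len(signals)]
    for d in fused_grad:
        out.append(out[-1] + d)
    return out

# One signal carries the edge at position 2, the other at position 1;
# both edges survive in the fused result.
a = [0.0, 0.0, 0.5, 0.5]
b = [0.2, 0.8, 0.8, 0.9]
fused = fuse_gradients_1d([a, b])
```

In 2-D the fused gradient field is generally non-conservative, which is why the paper reconstructs through a Haar-wavelet scheme with a Poisson solver at each resolution.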
17. Shouhong, Chen, Zhao Shuang, Ma Jun, Liu Xinyu, and Hou Xingna. "A Multi-exposure Image Fusion Method with Detail Preservation". MATEC Web of Conferences 173 (2018): 03009. http://dx.doi.org/10.1051/matecconf/201817303009.
Abstract: In view of the problems of uneven exposure during image acquisition and the serious loss of detail in traditional multi-exposure image fusion algorithms, a detail-preserving image fusion method is proposed. A weighted approach to multi-exposure image fusion is used, taking into account features such as local contrast, exposure brightness, and color information to better preserve detail. A recursive filter is applied to eliminate noise and interference. Compared with other algorithms, the proposed algorithm retains rich detail information, meets the quality requirements of spot-welding image fusion, and has practical application value.
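Weight-map-based fusion of this kind can be sketched in a few lines. The snippet below is a generic illustration rather than this paper's algorithm: it scores each pixel of each exposure with a Gaussian well-exposedness weight centred at mid-gray (the 0.5 centre and sigma of 0.2 are conventional choices, not values from the paper) and blends the normalized result.

```python
import math

def well_exposedness(p, sigma=0.2):
    # Gaussian weight favouring mid-range intensities; 0.5 and sigma=0.2
    # are conventional values, not parameters from the cited paper.
    return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images, sigma=0.2):
    # Per-pixel weighted average of same-size grayscale images in [0, 1].
    fused = []
    for pixels in zip(*images):
        weights = [well_exposedness(p, sigma) for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Two exposures of the same two-pixel scene.
under = [0.05, 0.40]   # first pixel badly under-exposed here
over  = [0.60, 0.95]   # second pixel badly over-exposed here
fused = fuse_exposures([under, over])
```

Each fused pixel is pulled toward whichever exposure renders it closest to mid-gray; full MEF methods add contrast and color terms, as in this paper, and smooth the weight maps before blending.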
18. Buades, Antoni, Jose Luis Lisani, and Onofre Martorell. "Efficient joint noise removal and multi exposure fusion". PLOS ONE 17, no. 3 (March 25, 2022): e0265464. http://dx.doi.org/10.1371/journal.pone.0265464.
Abstract: Multi-exposure fusion (MEF) is a technique that combines different snapshots of the same scene, captured with different exposure times, into a single image. This combination process (also known as fusion) is performed in such a way that the better-exposed parts of each input image have a stronger influence, so that in the resulting image all areas are well exposed. In this paper, we propose a new method that performs MEF and noise removal. Rather than denoising each input image individually and then fusing the obtained results, the proposed strategy jointly performs fusion and denoising in the Discrete Cosine Transform (DCT) domain, which leads to a very efficient algorithm. The method takes advantage of spatio-temporal patch selection and collaborative 3D thresholding. Several experiments show that the obtained results are significantly superior to the existing state of the art.
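The transform-domain thresholding at the heart of such methods can be illustrated in one dimension. The sketch below is a simplified stand-in for the paper's approach, which works on 2-D patch stacks with collaborative 3-D thresholding: a naive DCT-II/DCT-III pair and a hard threshold that zeroes small coefficients assumed to be noise (the threshold value is an illustrative choice).

```python
import math

def dct(x):
    # Naive O(N^2) DCT-II.
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (n + 0.5) / N)
                for n in range(N)) for k in range(N)]

def idct(X):
    # Matching inverse (DCT-III with the standard 1/N, 2/N scaling).
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * math.cos(math.pi * k * (n + 0.5) / N)
                                     for k in range(1, N))
            for n in range(N)]

def dct_denoise(x, thresh):
    # Hard-threshold: zero coefficients below `thresh` (assumed noise).
    return idct([c if abs(c) > thresh else 0.0 for c in dct(x)])

# A flat signal with small perturbations: thresholding recovers the flat part.
noisy = [1.01, 0.99, 1.0, 1.0]
clean = dct_denoise(noisy, 0.1)
```

Averaging DCT coefficients of co-located patches across exposures before the inverse transform is one simple way fusion and denoising can share this pipeline; the paper's actual weighting is more refined.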
19. Xu, Han, Haochen Liang, and Jiayi Ma. "Unsupervised Multi-Exposure Image Fusion Breaking Exposure Limits via Contrastive Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3010–17. http://dx.doi.org/10.1609/aaai.v37i3.25404.
Abstract: This paper proposes an unsupervised multi-exposure image fusion (MEF) method based on contrastive learning, termed MEF-CL. It breaks the exposure limits and performance bottleneck faced by existing methods. MEF-CL first designs similarity constraints to preserve the content of the source images. It eliminates the need for ground truth (which does not actually exist and is created artificially) and thus avoids the negative impact of inappropriate ground truth on performance and generalization. Moreover, we explore a latent feature space and apply contrastive learning in this space to guide the fused image toward normal-light samples and away from inappropriately exposed ones. In this way, characteristics of the fused images (e.g., illumination, colors) can be further improved without being constrained by the source images. Therefore, MEF-CL is applicable to image pairs of any exposure levels, rather than only the pair of one under-exposed and one over-exposed image mandated by existing methods. By alleviating the dependence on source images, MEF-CL shows better generalization across various scenes. Consequently, our results exhibit appropriate illumination, detailed textures, and saturated colors. Qualitative, quantitative, and ablation experiments validate the superiority and generalization of MEF-CL. Our code is publicly available at https://github.com/hanna-xu/MEF-CL.
20. Zhang, Xingchen. "Benchmarking and comparing multi-exposure image fusion algorithms". Information Fusion 74 (October 2021): 111–31. http://dx.doi.org/10.1016/j.inffus.2021.02.005.

21. PIAO Yong-jie 朴永杰, XU Wei 徐伟, WANG Shao-ju 王绍举, and TAO Shu-ping 陶淑苹. "Fast multi-exposure image fusion for HDR video". Chinese Journal of Liquid Crystals and Displays 29, no. 6 (2014): 1032–41. http://dx.doi.org/10.3788/yjyxs20142906.1032.

22. Han, Dong, Liang Li, Xiaojie Guo, and Jiayi Ma. "Multi-exposure image fusion via deep perceptual enhancement". Information Fusion 79 (March 2022): 248–62. http://dx.doi.org/10.1016/j.inffus.2021.10.006.
23. Xu, Fang, Jinghong Liu, Yueming Song, Hui Sun, and Xuan Wang. "Multi-Exposure Image Fusion Techniques: A Comprehensive Review". Remote Sensing 14, no. 3 (February 7, 2022): 771. http://dx.doi.org/10.3390/rs14030771.
Abstract: Multi-exposure image fusion (MEF) is emerging as a research hotspot in the fields of image processing and computer vision; it can integrate images with multiple exposure levels into a well-exposed image of high quality. It is an economical and effective way to improve the dynamic range of an imaging system and has broad application prospects. In recent years, with the further development of image representation theories such as multi-scale analysis and deep learning, significant progress has been achieved in this field. This paper comprehensively surveys the current research status of MEF methods. The relevant theories and key technologies for constructing MEF models are analyzed and categorized, and the representative MEF methods in each category are introduced and summarized. Then, based on multi-exposure image sequences in static and dynamic scenes, we present a comparative study of 18 representative MEF approaches using nine commonly used objective fusion metrics. Finally, the key issues of current MEF research are discussed, and a development trend for future research is put forward.
24. Ma, Kede, Kai Zeng, and Zhou Wang. "Perceptual Quality Assessment for Multi-Exposure Image Fusion". IEEE Transactions on Image Processing 24, no. 11 (November 2015): 3345–56. http://dx.doi.org/10.1109/tip.2015.2442920.

25. Zhang, Wenlong, Xiaolin Liu, Wuchao Wang, and Yujun Zeng. "Multi-exposure image fusion based on wavelet transform". International Journal of Advanced Robotic Systems 15, no. 2 (March 2018): 172988141876893. http://dx.doi.org/10.1177/1729881418768939.

26. Choi, Seungcheol, Oh-Jin Kwon, and Jinhee Lee. "A Method for Fast Multi-Exposure Image Fusion". IEEE Access 5 (2017): 7371–80. http://dx.doi.org/10.1109/access.2017.2694038.

27. Liu, Yu, and Zengfu Wang. "Dense SIFT for ghost-free multi-exposure fusion". Journal of Visual Communication and Image Representation 31 (August 2015): 208–24. http://dx.doi.org/10.1016/j.jvcir.2015.06.021.

28. Wu, Shengcong, Ting Luo, Yang Song, and Haiyong Xu. "Multi-exposure image fusion based on tensor decomposition". Multimedia Tools and Applications 79, no. 33-34 (June 16, 2020): 23957–75. http://dx.doi.org/10.1007/s11042-020-09131-x.

29. Martorell, Onofre, Catalina Sbert, and Antoni Buades. "Ghosting-free DCT based multi-exposure image fusion". Signal Processing: Image Communication 78 (October 2019): 409–25. http://dx.doi.org/10.1016/j.image.2019.07.020.
30. Hu, Yunxue, Chao Xu, Zhengping Li, Fang Lei, Bo Feng, Lingling Chu, Chao Nie, and Dou Wang. "Detail Enhancement Multi-Exposure Image Fusion Based on Homomorphic Filtering". Electronics 11, no. 8 (April 11, 2022): 1211. http://dx.doi.org/10.3390/electronics11081211.
Abstract: Due to the large dynamic range of real scenes, it is difficult for images taken by ordinary devices to represent high-quality real scenes. To obtain high-quality images, exposure fusion of multiple exposure images of the same scene is required; fusing multiple images, however, can lose edge detail in areas with large exposure differences. Aiming at this problem, this paper proposes a new detail-enhancing multi-exposure image fusion method based on homomorphic filtering. First, a fusion weight map is constructed using exposure and local contrast; the exposure weight map is calculated by threshold segmentation and an adaptively adjustable Gaussian curve. The algorithm can assign appropriate exposure weights to well-exposed areas so that the fused image retains more details. Then, the weight map is denoised using fast guided filtering. Finally, a fusion method that enhances the details of the Laplacian pyramid with homomorphic filtering is proposed to restore the edge information lost in Laplacian pyramid fusion. The experimental results show that the method can generate high-quality images with clear edges and details as well as a color appearance similar to real scenes, and can outperform existing algorithms in both subjective and objective evaluations.
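Laplacian-pyramid fusion, the backbone of this family of methods, can be sketched in one dimension. The code below is a minimal generic version under simplifying assumptions: it selects the larger-magnitude detail coefficient and averages the low-pass residual, whereas the paper blends with filtered weight maps and adds homomorphic detail enhancement.

```python
def down(sig):
    # Halve resolution by averaging adjacent pairs (even-length input).
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def up(sig):
    # Nearest-neighbour upsampling back to double length.
    return [v for v in sig for _ in (0, 1)]

def laplacian_pyramid(sig, levels):
    pyr, cur = [], sig
    for _ in range(levels):
        low = down(cur)
        pyr.append([c - u for c, u in zip(cur, up(low))])  # detail band
        cur = low
    pyr.append(cur)                                        # low-pass residual
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = [u + l for u, l in zip(up(cur), lap)]
    return cur

def fuse_pyramids(a, b, levels=1):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [[max(x, y, key=abs) for x, y in zip(la, lb)]
             for la, lb in zip(pa[:-1], pb[:-1])]               # keep strongest detail
    fused.append([(x + y) / 2 for x, y in zip(pa[-1], pb[-1])])  # average the base
    return reconstruct(fused)
```

Analysis followed by reconstruction is lossless, so all fusion behaviour comes from how the per-level coefficients are combined.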
31. Han, Yongcheng, Wenwen Zhang, and Weiji He. "Low-light image enhancement based on simulated multi-exposure fusion". Journal of Physics: Conference Series 2478, no. 6 (June 1, 2023): 062022. http://dx.doi.org/10.1088/1742-6596/2478/6/062022.
Abstract: We propose an efficient and novel framework for low-light image enhancement, which aims to reveal information hidden in the darkness and improve overall brightness and local contrast. Inspired by the exposure fusion technique, we employ simulated multi-exposure image fusion to derive bright, natural, and satisfactory results when images are taken under poor conditions such as insufficient or uneven illumination, back-lighting, and limited exposure time. Specifically, we first design a novel method to generate synthesized images with varying exposure times from a single image, so that each image of these artificial sequences contains the information necessary for the final enhanced result. We then introduce a flexible multi-exposure fusion framework to obtain fused images, which comprises a weight map prediction module and a multi-scale fusion module. Extensive experiments show that our approach can achieve similar or better performance compared to several state-of-the-art methods.
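The core idea of a simulated exposure stack can be illustrated with plain intensity scaling. The snippet below is a deliberately crude stand-in for the paper's learned synthesis; the exposure ratios are arbitrary illustrative values.

```python
def simulate_exposures(pixels, ratios=(0.5, 1.0, 2.0)):
    # Scale intensities by each exposure ratio and clip to [0, 1].
    # The ratios are illustrative assumptions, not from the cited paper.
    return [[min(p * r, 1.0) for p in pixels] for r in ratios]

# One grayscale image becomes a darker, an identical, and a brighter frame.
stack = simulate_exposures([0.2, 0.6, 0.9])
```

The synthetic under- and over-exposed frames then feed any standard MEF pipeline; real single-image methods shape the mapping per pixel instead of using one global ratio.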
32. Jia, Jinquan, Jian Sun, and Zhiqin Zhu. "A multi-scale patch-wise algorithm for multi-exposure image fusion". Optik 248 (December 2021): 168120. http://dx.doi.org/10.1016/j.ijleo.2021.168120.

33. Li, Hui, Kede Ma, Hongwei Yong, and Lei Zhang. "Fast Multi-Scale Structural Patch Decomposition for Multi-Exposure Image Fusion". IEEE Transactions on Image Processing 29 (2020): 5805–16. http://dx.doi.org/10.1109/tip.2020.2987133.
34. Im, Chan-Gi, Dong-Min Son, Hyuk-Ju Kwon, and Sung-Hak Lee. "Multi-Task Learning Approach Using Dynamic Hyperparameter for Multi-Exposure Fusion". Mathematics 11, no. 7 (March 27, 2023): 1620. http://dx.doi.org/10.3390/math11071620.
Abstract: High-dynamic-range (HDR) image synthesis is a technology developed to accurately reproduce the actual scene of an image on a display by extending the dynamic range of the image. Multi-exposure fusion (MEF) technology, which synthesizes multiple low-dynamic-range (LDR) images to create an HDR image, has been developed in various ways, including pixel-based, patch-based, and deep learning-based methods. Recently, methods that improve the synthesis quality of images using deep-learning-based algorithms have mainly been studied in the field of MEF. Despite the various advantages of deep learning, deep-learning-based methods have the problem that numerous multi-exposed and ground-truth images are required for training. In this study, we propose a self-supervised learning method that generates and learns reference images based on input images during the training process. In addition, we propose a method to train a deep learning model for MEF with multiple tasks using dynamic hyperparameters on the loss functions. This enables effective network optimization across multiple tasks and high-quality image synthesis while preserving a simple network architecture. Applied to a deep learning model, our learning method shows superior synthesis results compared to other existing deep-learning-based image synthesis algorithms.
35. BAI Bendu 白本督 and LI Junpeng 李俊鹏. "基于注意力机制的多曝光图像融合算法" [Multi-exposure image fusion algorithm based on an attention mechanism]. ACTA PHOTONICA SINICA 51, no. 4 (2022): 0410004. http://dx.doi.org/10.3788/gzxb20225104.0410004.
36. Qu, Linhao, Shaolei Liu, Manning Wang, and Zhijian Song. "TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework Using Self-Supervised Multi-Task Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2126–34. http://dx.doi.org/10.1609/aaai.v36i2.20109.
Abstract: In this paper, we propose TransMEF, a transformer-based multi-exposure image fusion framework that uses self-supervised multi-task learning. The framework is based on an encoder-decoder network, which can be trained on large natural image datasets and does not require ground-truth fusion images. We design three self-supervised reconstruction tasks according to the characteristics of multi-exposure images and conduct these tasks simultaneously using multi-task learning; through this process, the network can learn the characteristics of multi-exposure images and extract more generalized features. In addition, to compensate for the difficulty of establishing long-range dependencies in CNN-based architectures, we design an encoder that combines a CNN module with a transformer module. This combination enables the network to focus on both local and global information. We evaluated our method and compared it to 11 competitive traditional and deep learning-based methods on the latest released multi-exposure image fusion benchmark dataset, and our method achieved the best performance in both subjective and objective evaluations. Code will be available at https://github.com/miccaiif/TransMEF.
37. Zhang, Jiebin, Shangyou Zeng, Ying Wang, Jinjin Wang, and Hongyang Chen. "An Efficient Extreme-Exposure Image Fusion Method". Journal of Physics: Conference Series 2137, no. 1 (December 1, 2021): 012061. http://dx.doi.org/10.1088/1742-6596/2137/1/012061.
Abstract: Since existing commercial imaging equipment cannot meet the requirements of high dynamic range, multi-exposure image fusion is an economical and fast way to implement HDR. However, existing multi-exposure image fusion algorithms suffer from long fusion times and large data storage requirements. We propose an extreme-exposure image fusion method based on deep learning. In this method, two extreme-exposure image sequences are fed into the network; channel and spatial attention mechanisms are introduced to automatically learn and optimize the weights, and the optimal fusion weights are output. In addition, the model is trained on real values and its output is brought closer to them through a new custom loss function. Experimental results show that this method is superior to existing methods in both objective and subjective terms.
38. Qi, Yanjie, Zehui Yang, and Lin Kang. "Multi-exposure X-ray image fusion quality evaluation based on CSF and gradient amplitude similarity". Journal of X-Ray Science and Technology 29, no. 4 (July 27, 2021): 697–709. http://dx.doi.org/10.3233/xst-210871.
Abstract: Due to the limited dynamic range of the imaging device, fixed-voltage X-ray images often contain overexposed or underexposed regions, and some structural information of the composite steel component is lost. This problem can be solved by fusing multi-exposure X-ray images taken at different voltages to produce images with more detailed structures and information. Because of the lack of research on multi-exposure X-ray image fusion technology, there is no evaluation method designed specifically for it, and the fused images obtained by different fusion algorithms may suffer from problems such as detail loss and structural disorder. To address these problems, this study proposes a new multi-exposure X-ray image fusion quality evaluation method based on the contrast sensitivity function (CSF) and gradient amplitude similarity. First, following the idea of information fusion, multiple reference images are fused into a new reference image. Next, the gradient amplitude similarity between the new reference image and the test image is calculated. Then, the overall evaluation value is obtained by CSF weighting. In experiments on the MEF Database, the SROCC of the proposed algorithm is about 0.8914 and the PLCC is about 0.9287, which shows that the proposed algorithm is more consistent with subjective perception on this database. This study thus provides a new objective evaluation method whose results are consistent with the subjective perception of human eyes.
39. KINOSHITA, Yuma, Sayaka SHIOTA, and Hitoshi KIYA. "A Pseudo Multi-Exposure Fusion Method Using Single Image". IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E101.A, no. 11 (November 1, 2018): 1806–14. http://dx.doi.org/10.1587/transfun.e101.a.1806.

40. Ryu, Je-Ho, Jong-Han Kim, and Jong-Ok Kim. "Deep Gradual Multi-Exposure Fusion Via Recurrent Convolutional Network". IEEE Access 9 (2021): 144756–67. http://dx.doi.org/10.1109/access.2021.3122540.

41. Kim, Jong-Han, Je-Ho Ryu, and Jong-Ok Kim. "FDD-MEF: Feature-Decomposition-Based Deep Multi-Exposure Fusion". IEEE Access 9 (2021): 164551–61. http://dx.doi.org/10.1109/access.2021.3134316.

42. Singh, Harbinder, Gabriel Cristobal, Gloria Bueno, Saul Blanco, Simrandeep Singh, P. N. Hrisheekesha, and Nitin Mittal. "Multi-exposure microscopic image fusion-based detail enhancement algorithm". Ultramicroscopy 236 (June 2022): 113499. http://dx.doi.org/10.1016/j.ultramic.2022.113499.

43. Yang, T. T., and P. Y. Fang. "Multi exposure image fusion algorithm based on YCbCr space". IOP Conference Series: Materials Science and Engineering 359 (May 2018): 012002. http://dx.doi.org/10.1088/1757-899x/359/1/012002.

44. Ma, Kede, Zhengfang Duanmu, Hanwei Zhu, Yuming Fang, and Zhou Wang. "Deep Guided Learning for Fast Multi-Exposure Image Fusion". IEEE Transactions on Image Processing 29 (2020): 2808–19. http://dx.doi.org/10.1109/tip.2019.2952716.

45. Kou, Fei, Zhengguo Li, Changyun Wen, and Weihai Chen. "Edge-preserving smoothing pyramid based multi-scale exposure fusion". Journal of Visual Communication and Image Representation 53 (May 2018): 235–44. http://dx.doi.org/10.1016/j.jvcir.2018.03.020.

46. Shen, Rui, I. Cheng, Jianbo Shi, and A. Basu. "Generalized Random Walks for Fusion of Multi-Exposure Images". IEEE Transactions on Image Processing 20, no. 12 (December 2011): 3634–46. http://dx.doi.org/10.1109/tip.2011.2150235.

47. Yang, Yi, Wei Cao, Shiqian Wu, and Zhengguo Li. "Multi-Scale Fusion of Two Large-Exposure-Ratio Images". IEEE Signal Processing Letters 25, no. 12 (December 2018): 1885–89. http://dx.doi.org/10.1109/lsp.2018.2877893.
48. Peng, Yan-Tsung, He-Hao Liao, and Ching-Fu Chen. "Two-Exposure Image Fusion Based on Optimized Adaptive Gamma Correction". Sensors 22, no. 1 (December 22, 2021): 24. http://dx.doi.org/10.3390/s22010024.
Abstract: In contrast to conventional digital images, high-dynamic-range (HDR) images have a broader range of intensity between the darkest and brightest regions, capturing more details of a scene. Such images are produced by fusing images of the same scene taken with different exposure values (EVs). Most existing multi-scale exposure fusion (MEF) algorithms assume that the input images are multi-exposed with small EV intervals. However, thanks to emerging spatially multiplexed exposure technology, which can capture a short-exposure and a long-exposure image simultaneously, it has become essential to handle two-exposure image fusion. To bring out more well-exposed content, we generate a more helpful intermediate virtual image for fusion using the proposed Optimized Adaptive Gamma Correction (OAGC), which yields better contrast, saturation, and well-exposedness. Fusing the input images with the enhanced virtual image works well even when both inputs are underexposed or overexposed, a case that other state-of-the-art fusion methods cannot handle. The experimental results show that our method performs favorably against other state-of-the-art image fusion methods in generating high-quality fusion results.
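A common baseline for building such an intermediate virtual image is mean-targeting gamma correction: pick the gamma that maps the mean intensity to mid-gray. The sketch below uses this plain heuristic; the paper's OAGC optimisation is more elaborate, and the 0.5 target is an illustrative assumption.

```python
import math

def gamma_for_target(pixels, target=0.5):
    # Choose gamma so that mean ** gamma == target. This is a common
    # heuristic, not the paper's OAGC, which optimises contrast,
    # saturation, and well-exposedness jointly.
    mean = min(max(sum(pixels) / len(pixels), 1e-6), 1 - 1e-6)
    return math.log(target) / math.log(mean)

def apply_gamma(pixels, g):
    return [p ** g for p in pixels]

under = [0.04, 0.10, 0.20]      # under-exposed input in [0, 1]
g = gamma_for_target(under)      # g < 1 brightens the image
virtual = apply_gamma(under, g)
```

The mean lands near, not exactly at, the target: the gamma maps the mean itself to 0.5, and by Jensen's inequality the mean of the corrected pixels falls slightly below that for a concave curve (g < 1).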
49. Gao, Mingyu, Junfan Wang, Yi Chen, Chenjie Du, Chao Chen, and Yu Zeng. "An Improved Multi-Exposure Image Fusion Method for Intelligent Transportation System". Electronics 10, no. 4 (February 4, 2021): 383. http://dx.doi.org/10.3390/electronics10040383.
Abstract: In this paper, an improved multi-exposure image fusion method for intelligent transportation systems (ITS) is proposed, and a new multi-exposure image dataset for traffic signs, TrafficSign, is presented to verify the method. In an intelligent transportation system, traffic signs, as an important type of road information, are fused by this method to obtain an image with moderate brightness and intact information. By estimating the degree to which different features of the source images are retained, the fusion results adapt to the characteristics of the source images. Considering weather factors and environmental noise, the source images are preprocessed by bilateral filtering and a dehazing algorithm, and adaptive optimization is used to improve the quality of the fusion model's output image. Qualitative and quantitative experiments on the new dataset show that the proposed multi-exposure image fusion algorithm is effective and practical for ITS.
50. Liu, Jie, and Yuanyuan Peng. "Research on Image Enhancement Algorithm Based on Artificial Intelligence". Journal of Physics: Conference Series 2074, no. 1 (November 1, 2021): 012024. http://dx.doi.org/10.1088/1742-6596/2074/1/012024.
Abstract: With the continuous development of science and technology, people's requirements for image quality are increasingly high. This paper integrates artificial intelligence technology and proposes a low-illuminance panoramic image enhancement algorithm based on simulated multi-exposure fusion. First, the image information content is used as a metric to estimate the optimal exposure rate, and a brightness mapping function is used to enhance the V component. The low-illuminance image and an overexposed image serve as input, a medium-exposure image is synthesized by exposure interpolation, and the low-illuminance, medium-exposure, and overexposed images are merged using a multi-scale fusion strategy. The fused image is then refined by a multi-scale detail enhancement algorithm to obtain the final enhanced image. Practice has shown that the algorithm can effectively improve image quality.