Academic literature on the topic 'Multi-Exposure Fusion'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-Exposure Fusion.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Multi-Exposure Fusion"
Goshtasby, A. Ardeshir. "Fusion of multi-exposure images." Image and Vision Computing 23, no. 6 (June 2005): 611–18. http://dx.doi.org/10.1016/j.imavis.2005.02.004.
CM, Sushmitha, and Meharunnisa SP. "An Image Quality Assessment of Multi-Exposure Image Fusion by Improving SSIM." International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 2780–84. http://dx.doi.org/10.31142/ijtsrd15634.
Li, Wei-zhong, Ben-shun Yi, Kang Qiu, and Hong Peng. "Detail preserving multi-exposure image fusion." Optics and Precision Engineering 24, no. 9 (2016): 2283–92. http://dx.doi.org/10.3788/ope.20162409.2283.
Shaikh, Uzmanaz A., Vivek J. Vishwakarma, and Shubham S. Mahale. "Dynamic Scene Multi-Exposure Image Fusion." IETE Journal of Education 59, no. 2 (July 3, 2018): 53–61. http://dx.doi.org/10.1080/09747338.2018.1510744.
Li, Zhengguo, Zhe Wei, Changyun Wen, and Jinghong Zheng. "Detail-Enhanced Multi-Scale Exposure Fusion." IEEE Transactions on Image Processing 26, no. 3 (March 2017): 1243–52. http://dx.doi.org/10.1109/tip.2017.2651366.
Inoue, Kohei, Hengjun Yu, Kenji Hara, and Kiichi Urahama. "Saturation-Enhancing Multi-Exposure Image Fusion." Journal of the Institute of Image Information and Television Engineers 70, no. 8 (2016): J185–J187. http://dx.doi.org/10.3169/itej.70.j185.
Liu, Renshuai, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, and Xuan Cheng. "EMEF: Ensemble Multi-Exposure Image Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1710–18. http://dx.doi.org/10.1609/aaai.v37i2.25259.
Xiang, Hu Yan, and Xi Rong Ma. "An Improved Multi-Exposure Image Fusion Algorithm." Advanced Materials Research 403-408 (November 2011): 2200–2205. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2200.
Deng, Chenwei, Zhen Li, Shuigen Wang, Xun Liu, and Jiahui Dai. "Saturation-based quality assessment for colorful multi-exposure image fusion." International Journal of Advanced Robotic Systems 14, no. 2 (March 1, 2017): 172988141769462. http://dx.doi.org/10.1177/1729881417694627.
Hayat, Naila, and Muhammad Imran. "Multi-exposure image fusion technique using multi-resolution blending." IET Image Processing 13, no. 13 (November 14, 2019): 2554–61. http://dx.doi.org/10.1049/iet-ipr.2019.0438.
Dissertations / Theses on the topic "Multi-Exposure Fusion"
Saravi, Sara. "Use of Coherent Point Drift in computer vision applications." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.
Shen, Xuan-Wei. "ROI-Based Fusion of Multi-Exposure Images." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/u7wnc3.
Full text國立中正大學
電機工程研究所
102
In this thesis, we propose a technique for blending multiple exposure images into a single high-quality result without generating a physically based high dynamic range (HDR) image. This avoids physical influences such as the camera response curve or brightness changes caused by a flash. Our method selects the best image in the exposure stack as the leading image and treats the remaining images as supporting images. The leading image is used directly in the result except in its ill-exposed regions, where the supporting images are fused in to "support" the leading image and yield a high-quality result.
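The leading/supporting scheme described in this abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the thesis's actual algorithm: the well-exposedness thresholds and the averaging of supporting pixels are our assumptions.

```python
import numpy as np

def roi_fusion(images, low=0.05, high=0.95):
    """Blend an exposure stack: use the best-exposed image ("leading")
    directly, and fill its ill-exposed regions from the other images
    ("supporting"). images: list of grayscale arrays scaled to [0, 1]."""
    stack = np.stack(images)                          # (N, H, W)
    well = (stack > low) & (stack < high)             # well-exposedness masks
    scores = well.reshape(len(images), -1).mean(axis=1)
    lead = int(np.argmax(scores))                     # pick the leading image
    result = stack[lead].copy()
    bad = ~well[lead]                                 # ill-exposed region of the leading image
    support = np.delete(stack, lead, axis=0)
    ok = np.delete(well, lead, axis=0)
    denom = ok.sum(axis=0)
    # Average only the well-exposed supporting pixels; fall back to the
    # leading image where no supporting pixel is usable.
    avg = np.where(denom > 0,
                   (support * ok).sum(axis=0) / np.maximum(denom, 1),
                   result)
    result[bad] = avg[bad]
    return result
```

In this simplified form, a pixel that is too dark or too bright in the leading image is simply replaced by the mean of the supporting images that expose it well.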
Guo, Bo-Yi. "Multi-exposure image fusion using tone reproduction." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81661073819289386294.
Full text清雲科技大學
電子工程所
99
High dynamic range (HDR) imaging is a technique that preserves the intact luminance information of a real scene in an image. Its main disadvantage is that it requires large memory storage and may cause difficulties in transmission. Thus, most digital cameras currently on the market use low dynamic range (LDR) imaging for image storage. However, an LDR image cannot capture the intact luminance information of a real scene. Many researchers have therefore developed techniques for merging several LDR images into a new LDR image with the quality of an HDR image. This thesis proposes to fuse multiple-exposure LDR images using a tone reproduction method. The produced image is another LDR image that has the visual quality of an HDR image. The input is a series of multiple-exposure images of the same scene. Each input image is segmented into equal blocks, and for each block position, the block with the best visual effect is selected from one of the input images. A tone reproduction algorithm then fuses the selected blocks into an image with the visual effect of an HDR image.
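The per-block selection step described above can be sketched as follows. This is a simplified illustration under assumptions of our own: the quality measure (contrast minus distance from mid-gray) is hypothetical, and the thesis's tone-reproduction pass that smooths block seams is omitted.

```python
import numpy as np

def block_select_fuse(images, block=16):
    """Partition each exposure into blocks and, per block position, keep
    the block with the best visual quality. images: list of grayscale
    arrays in [0, 1], all the same shape."""
    stack = np.stack(images)                          # (N, H, W)
    _, h, w = stack.shape
    out = np.empty((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles = stack[:, y:y + block, x:x + block]
            # Favour blocks with high contrast and mid-range brightness.
            quality = (tiles.std(axis=(1, 2))
                       - np.abs(tiles.mean(axis=(1, 2)) - 0.5))
            out[y:y + block, x:x + block] = tiles[int(np.argmax(quality))]
    return out
```

A real implementation would follow this with tone reproduction (or multiresolution blending) to avoid visible block boundaries.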
Hsu, Chien-Chih. "Multi-Exposure Image Fusion for Digital Still Cameras." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0021-2004200718214103.
Hsu, Chien-Chih. "Multi-Exposure Image Fusion for Digital Still Cameras." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/03114246831843680298.
Full text國立臺灣師範大學
應用電子科技研究所
94
Fusing multiple frames with different exposure times can accommodate scenes with high dynamic range. In this thesis, we propose an approach that fuses two consecutive video frames with different exposure times. Finding moving objects and human faces in such a fused, higher-dynamic-range image is much easier than in a typically exposed frame. The proposed approach has been implemented on a commercial digital camera with a robust hardware and software platform, and the experimental results show that the fusion speed is around 4 frames/second. Fusing several differently exposed images is particularly useful for taking pictures of high-dynamic-range scenes. However, scene changes resulting from moving objects, and vibrations caused by the photographer, must be compensated adaptively in practical camera applications. In this thesis, we propose a complete image fusion system that extends the dynamic range of a picture by fusing three differently exposed images. Unlike most fusion algorithms, which operate on processed images and try to recover the transfer functions of the imaging system, the proposed algorithm works directly on raw image data before any color image processing is performed. The proposed global and local stabilization algorithms efficiently remedy the vibration problems and achieve a quite stable image fusion result.
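The global stabilization step this abstract mentions can be sketched as a simple translational alignment. The sketch below is our own illustration, not the thesis's method: it exhaustively minimizes the XOR of median-threshold bitmaps over a small search window (in the spirit of Ward's MTB alignment, which is robust to exposure differences); the search range is an assumption.

```python
import numpy as np

def align_translation(ref, img, search=4):
    """Estimate a global (dy, dx) shift that best aligns img to ref.
    Median-threshold bitmaps make the comparison insensitive to the
    exposure difference between the two frames."""
    ref_b = ref > np.median(ref)
    img_b = img > np.median(img)
    best, best_err = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(img_b, dy, 0), dx, 1)
            err = np.logical_xor(ref_b, shifted).sum()  # disagreement count
            if best_err is None or err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Applying the returned shift to `img` before fusion removes the global camera motion; local motion (moving objects) needs a separate, local compensation step.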
Liu, Ting-Chi. "Automatic Multi-Exposure Image Fusion Based on Visual Saliency Map." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/t79r2z.
Full text國立臺北科技大學
自動化科技研究所
107
Due to the limitations of camera sensors, high dynamic range imaging (HDRI) techniques have become popular in recent years. Although HDRI is growing mathematically sophisticated, with global and local filters for eliminating noise and pixel variation and optimizations for preserving detail, researchers are still looking for a good model of weight-map generation for the multi-exposure image fusion that produces HDR images. Drawing on research into the human visual system, we also try to understand the fineness of an image and what defines a good image feature for human vision. In this study, we apply the concept of salient-region detection to weight-map determination. We combine two points of view, human visual perception and mathematical image features, to derive a color-contrast cue and an exposure cue. Through the cue-formed weight map and pyramid fusion, the results exhibit fine contrast and saturation while preserving detail across different scenes.
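The cue-based weight maps described in this abstract can be sketched in the Mertens-style form such methods build on. The cue definitions and the parameter `sigma` below are our assumptions, and for brevity the weighted average replaces the Laplacian-pyramid blending (and the color-contrast saliency cue) the thesis actually uses.

```python
import numpy as np

def fusion_weights(images, sigma=0.2):
    """Per-pixel, per-exposure weight maps from two cues: a contrast cue
    (absolute 4-neighbour Laplacian, with wrap-around borders for
    simplicity) and an exposure cue (closeness to mid-gray)."""
    stack = np.stack(images)                          # (N, H, W) in [0, 1]
    lap = (np.roll(stack, 1, 1) + np.roll(stack, -1, 1) +
           np.roll(stack, 1, 2) + np.roll(stack, -1, 2) - 4 * stack)
    contrast = np.abs(lap)
    exposure = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w = (contrast + 1e-12) * exposure                 # epsilon avoids zero weights
    return w / w.sum(axis=0, keepdims=True)           # normalise over exposures

def weighted_fuse(images):
    """Naive fusion: per-pixel weighted average of the exposure stack."""
    stack = np.stack(images)
    return (fusion_weights(images) * stack).sum(axis=0)
```

Replacing the final weighted average with pyramid blending (Gaussian pyramid of the weights, Laplacian pyramid of the images) is what removes the seams a naive per-pixel average produces.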
Ram, Prabhakar Kathirvel. "Advances in High Dynamic Range Imaging Using Deep Learning." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5515.
Books on the topic "Multi-Exposure Fusion"
Low Choy, Samantha, Justine Murray, Allan James, and Kerrie Mengersen. Combining monitoring data and computer model output in assessing environmental exposure. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.18.
Book chapters on the topic "Multi-Exposure Fusion"
May, Michael, Martin Turner, and Tim Morris. "FAW for Multi-exposure Fusion Features." In Advances in Image and Video Technology, 289–300. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25367-6_26.
Yu, Hanyi, and Yue Zhou. "Fusion of Multi-view Multi-exposure Images with Delaunay Triangulation." In Neural Information Processing, 682–89. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_76.
Bhateja, Vikrant, Ashutosh Singhal, and Anil Singh. "Multi-exposure Image Fusion Method Using Anisotropic Diffusion." In Advances in Intelligent Systems and Computing, 893–900. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1165-9_80.
Patel, Diptiben, Bhoomika Sonane, and Shanmuganathan Raman. "Multi-exposure Image Fusion Using Propagated Image Filtering." In Advances in Intelligent Systems and Computing, 431–41. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2104-6_39.
Xue, Xiao, and Yue Zhou. "Multi-view Multi-exposure Image Fusion Based on Random Walks Model." In Computer Vision – ACCV 2016 Workshops, 491–99. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54526-4_36.
Jishnu, C. R., and S. Vishnukumar. "An Effective Multi-exposure Fusion Approach Using Exposure Correction and Recursive Filter." In Inventive Systems and Control, 625–37. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1624-5_46.
Biswas, Anmol, K. S. Green Rosh, and Sachin Deepak Lomte. "Spatially Variant Laplacian Pyramids for Multi-frame Exposure Fusion." In Communications in Computer and Information Science, 73–81. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4015-8_7.
Bai, Yuanchao, Huizhu Jia, Hengjin Liu, Guoqing Xiang, Xiaodong Xie, Ming Jiang, and Wen Gao. "A Multi-exposure Fusion Method Based on Locality Properties." In Advances in Multimedia Information Processing – PCM 2014, 333–42. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13168-9_37.
Dhivya Lakshmi, R., K. V. Rekha, E. Ilin Shantha Mary, Gandhapu Yashwanth, Gokavarapu Manikanta Kalyan, Singamsetty Phanindra, M. Jasmine Pemeena Priyadarsini, and N. Sardar Basha. "Multi-exposure Image Reconstruction by Energy-Based Fusion Technique." In Advances in Automation, Signal Processing, Instrumentation, and Control, 1403–10. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8221-9_130.
Papachary, Biroju, N. L. Aravinda, and A. Srinivasula Reddy. "DLCNN Model with Multi-exposure Fusion for Underwater Image Enhancement." In Advances in Cognitive Science and Communications, 179–90. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8086-2_18.
Full textConference papers on the topic "Multi-Exposure Fusion"
Kinoshita, Yuma, Sayaka Shiota, Hitoshi Kiya, and Taichi Yoshida. "Multi-Exposure Image Fusion Based on Exposure Compensation." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461604.
Kinoshita, Yuma, Sayaka Shiota, and Hitoshi Kiya. "Automatic Exposure Compensation for Multi-Exposure Image Fusion." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451401.
Martorell, O., C. Sbert, and A. Buades. "DCT based Multi Exposure Image Fusion." In 14th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007356700002108.
Wang, Chunmeng, Mingyi Bao, and Chen He. "Interactive Fusion for Multi-exposure Images." In ICIT 2020: IoT and Smart City. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3446999.3447014.
Zhang, Wenlong, Xiaolin Liu, and Wuchao Wang. "Wavelet-Based Multi-Exposure Image Fusion." In the 8th International Conference. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/3015166.3015199.
Wang, Qiantong, Weihai Chen, Xingming Wu, and Zhengguo Li. "Detail Preserving Multi-Scale Exposure Fusion." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451177.
Li, Hui, and Lei Zhang. "Multi-Exposure Fusion with CNN Features." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451689.
Li, Yanfeng, Mingyang Liu, and Kaixu Han. "Overview of Multi-Exposure Image Fusion." In 2021 International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB). IEEE, 2021. http://dx.doi.org/10.1109/iceib53692.2021.9686453.
Zhang, Xingdi, Shuaicheng Liu, Shuyuan Zhu, and Bing Zeng. "Multi-exposure Fusion With JPEG Compression Guidance." In 2018 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2018. http://dx.doi.org/10.1109/vcip.2018.8698717.