Academic literature on the topic "Multi-Exposure Fusion"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Multi-Exposure Fusion".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Multi-Exposure Fusion"
Goshtasby, A. Ardeshir. "Fusion of multi-exposure images". Image and Vision Computing 23, no. 6 (June 2005): 611–18. http://dx.doi.org/10.1016/j.imavis.2005.02.004.
CM, Sushmitha, and Meharunnisa SP. "An Image Quality Assessment of Multi-Exposure Image Fusion by Improving SSIM". International Journal of Trend in Scientific Research and Development 2, no. 4 (30 June 2018): 2780–84. http://dx.doi.org/10.31142/ijtsrd15634.
Li, Wei-zhong (李卫中), Ben-shun Yi (易本顺), Kang Qiu (邱康), and Hong Peng (彭红). "Detail preserving multi-exposure image fusion". Optics and Precision Engineering 24, no. 9 (2016): 2283–92. http://dx.doi.org/10.3788/ope.20162409.2283.
Shaikh, Uzmanaz A., Vivek J. Vishwakarma, and Shubham S. Mahale. "Dynamic Scene Multi-Exposure Image Fusion". IETE Journal of Education 59, no. 2 (3 July 2018): 53–61. http://dx.doi.org/10.1080/09747338.2018.1510744.
Li, Zhengguo, Zhe Wei, Changyun Wen, and Jinghong Zheng. "Detail-Enhanced Multi-Scale Exposure Fusion". IEEE Transactions on Image Processing 26, no. 3 (March 2017): 1243–52. http://dx.doi.org/10.1109/tip.2017.2651366.
Inoue, Kohei, Hengjun Yu, Kenji Hara, and Kiichi Urahama. "Saturation-Enhancing Multi-Exposure Image Fusion". Journal of the Institute of Image Information and Television Engineers 70, no. 8 (2016): J185–J187. http://dx.doi.org/10.3169/itej.70.j185.
Liu, Renshuai, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, and Xuan Cheng. "EMEF: Ensemble Multi-Exposure Image Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (26 June 2023): 1710–18. http://dx.doi.org/10.1609/aaai.v37i2.25259.
Xiang, Hu Yan, and Xi Rong Ma. "An Improved Multi-Exposure Image Fusion Algorithm". Advanced Materials Research 403-408 (November 2011): 2200–2205. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2200.
Deng, Chenwei, Zhen Li, Shuigen Wang, Xun Liu, and Jiahui Dai. "Saturation-based quality assessment for colorful multi-exposure image fusion". International Journal of Advanced Robotic Systems 14, no. 2 (1 March 2017): 172988141769462. http://dx.doi.org/10.1177/1729881417694627.
Hayat, Naila, and Muhammad Imran. "Multi-exposure image fusion technique using multi-resolution blending". IET Image Processing 13, no. 13 (14 November 2019): 2554–61. http://dx.doi.org/10.1049/iet-ipr.2019.0438.
Texto completoTesis sobre el tema "Multi-Exposure Fusion"
Saravi, Sara. "Use of Coherent Point Drift in computer vision applications". Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.
Shen, Xuan-Wei (沈軒緯). "ROI-Based Fusion of Multi-Exposure Images". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/u7wnc3.
National Chung Cheng University
Graduate Institute of Electrical Engineering
102
In this thesis we propose a technique for blending multiple exposure images into a high-quality result without generating a physically based high dynamic range (HDR) image. This avoids physical influences such as the camera response curve or brightness changes caused by flash. Our method selects the best image in the exposure sequence as the leading image and treats the remaining images as supporting images. The leading image is used directly in the result, except in its ill-exposed regions; in those regions the supporting images are fused in to "support" the leading image and yield a high-quality result.
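The leading/supporting scheme described in this abstract can be illustrated with a small sketch. This is my own reconstruction, not the thesis code: the names (`lead_support_fusion`, `well_exposedness`) and the 0.05/0.95 intensity cutoffs are assumptions, and a simple intensity-range test stands in for whatever exposure-quality measure the thesis actually uses.

```python
import numpy as np

def well_exposedness(img, lo=0.05, hi=0.95):
    """1 where a pixel is neither under- nor over-exposed, else 0 (assumed criterion)."""
    return ((img > lo) & (img < hi)).astype(float)

def lead_support_fusion(stack, lo=0.05, hi=0.95):
    """Pick the exposure with the most well-exposed pixels as the 'leading' image;
    in its ill-exposed regions, fill in with the average of the valid pixels
    from the remaining ('supporting') exposures."""
    stack = np.asarray(stack, dtype=float)           # shape (N, H, W), values in [0, 1]
    masks = well_exposedness(stack, lo, hi)          # per-exposure validity masks
    lead = int(np.argmax(masks.reshape(len(stack), -1).sum(axis=1)))
    result = stack[lead].copy()
    bad = masks[lead] == 0                           # ill-exposed in the leading image
    others = [i for i in range(len(stack)) if i != lead]
    w = masks[others].sum(axis=0)                    # count of valid supporters per pixel
    support = (stack[others] * masks[others]).sum(axis=0) / np.maximum(w, 1)
    fill = bad & (w > 0)                             # only replace where some supporter is valid
    result[fill] = support[fill]
    return result
```

Pixels that are ill-exposed in every image are simply kept from the leading image; a real implementation would need a more graceful fallback there.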
Guo, Bo-Yi (郭柏易). "Multi-exposure image fusion using tone reproduction". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81661073819289386294.
Ching Yun University
Graduate Institute of Electronic Engineering
99
High dynamic range (HDR) imaging is a technique that preserves the full luminance information of a real scene in an image. Its main disadvantage is that it requires large amounts of memory and can complicate transmission. Thus, most digital cameras currently on the market store images using low dynamic range (LDR) imaging. However, an LDR image cannot represent the full luminance information of a real scene. Many researchers have developed techniques for merging several LDR images into a new LDR image with the quality of an HDR image. This thesis proposes fusing multiple-exposure LDR images using a tone reproduction method; the result is another LDR image with the visual quality of an HDR image. The input is a series of multiple-exposure images of the same scene. Each input image is divided evenly into blocks, and for each block position the block with the best visual effect is selected from one of the input images to assemble a new image. A tone reproduction algorithm then fuses the selected blocks into an image with the visual effect of an HDR image.
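The block-selection step described above can be sketched as follows. This is an illustration, not the thesis implementation: the `block_select_fusion` name and the "mean closest to mid-gray" scoring rule are my assumptions for "best visual effect", and the final tone reproduction stage is omitted.

```python
import numpy as np

def block_select_fusion(stack, block=8):
    """For each block position, copy the block from whichever exposure scores best.
    Score (assumed): how close the block's mean intensity is to a well-exposed 0.5."""
    stack = np.asarray(stack, dtype=float)       # (N, H, W), values in [0, 1]
    n, h, w = stack.shape
    out = np.empty((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles = stack[:, y:y + block, x:x + block]
            best = int(np.argmin(np.abs(tiles.mean(axis=(1, 2)) - 0.5)))
            out[y:y + block, x:x + block] = tiles[best]
    return out
```

The resulting mosaic would show visible seams at block borders, which is exactly why the thesis follows this selection with a tone reproduction pass to fuse the blocks smoothly.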
Hsu, Chien-Chih. "Multi-Exposure Image Fusion for Digital Still Cameras". 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0021-2004200718214103.
Hsu, Chien-Chih (徐健智). "Multi-Exposure Image Fusion for Digital Still Cameras". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/03114246831843680298.
National Taiwan Normal University
Graduate Institute of Applied Electronics Technology
94
Fusing multiple frames with different exposure times can accommodate scenes with high dynamic range. In this thesis, we propose an approach that fuses two consecutive video frames with different exposure times. Finding moving objects and human faces in such a fused, higher-dynamic-range image is much easier than in a typically exposed frame. The approach has been implemented on a commercial digital camera with a robust hardware and software platform, and experiments show a fusion speed of around 4 frames per second. Fusing several differently exposed images is particularly useful for taking pictures of high-dynamic-range scenes. However, scene changes caused by moving objects, and vibrations caused by the photographer, must be compensated adaptively in practical camera applications. We therefore propose a complete image fusion system that extends the dynamic range of a picture by fusing three differently exposed images. Unlike most fusion algorithms, which operate on processed images and try to recover the transfer function of the imaging system, the proposed algorithm works directly on raw image data before any color image processing is performed. The proposed global and local stabilization algorithms efficiently remedy the vibration problems and achieve a stable image fusion result.
Liu, Ting-Chi (劉丁綺). "Automatic Multi-Exposure Image Fusion Based on Visual Saliency Map". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/t79r2z.
National Taipei University of Technology
Graduate Institute of Automation Technology
107
Due to the limitations of camera sensors, high dynamic range imaging (HDRI) techniques have become popular in recent years. Although HDRI methods are mathematically sophisticated, with global and local filters for noise removal, pixel-variation handling, and detail-preserving optimization, a good model for generating the weight maps used in multi-exposure image fusion is still being sought. Drawing on research into the human visual system, we also try to understand what makes an image look fine and what defines a good image feature for human vision. In this study, we apply the concept of salient-region detection to weight-map determination. We combine two points of view, human visual perception and mathematical image features, to derive a color-contrast cue and an exposure cue. Through the cue-based weight maps and pyramid fusion, the results show fine contrast and saturation while preserving details across different scenes.
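The cue-based weight maps described above can be sketched roughly as follows. This is an illustration under stated assumptions, not the thesis code: a Laplacian-magnitude contrast measure stands in for the color-contrast/saliency cue, the Gaussian exposure cue and its `sigma` are assumed, and the Laplacian-pyramid blending is replaced by a naive per-pixel weighted average.

```python
import numpy as np

def fusion_weights(stack, sigma=0.2):
    """Per-pixel weight maps from two cues: a contrast cue (Laplacian magnitude,
    standing in for the saliency cue) and an exposure cue (closeness to mid-gray)."""
    stack = np.asarray(stack, dtype=float)   # (N, H, W), values in [0, 1]
    lap = np.abs(4 * stack
                 - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
                 - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2))
    exposure = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w = lap * exposure + 1e-12               # combine cues multiplicatively
    return w / w.sum(axis=0)                 # normalize across exposures per pixel

def naive_fuse(stack):
    """Weighted average using the maps above; the actual method blends the
    weighted images with a pyramid to avoid seams, which this sketch omits."""
    stack = np.asarray(stack, dtype=float)
    return (fusion_weights(stack) * stack).sum(axis=0)
```

Because the weights form a convex combination at every pixel, the fused value always lies between the darkest and brightest input at that pixel; the pyramid stage in the actual method serves only to smooth weight-map transitions.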
Ram, Prabhakar Kathirvel. "Advances in High Dynamic Range Imaging Using Deep Learning". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5515.
Books on the topic "Multi-Exposure Fusion"
Low Choy, Samantha, Justine Murray, Allan James, and Kerrie Mengersen. Combining monitoring data and computer model output in assessing environmental exposure. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.18.
Book chapters on the topic "Multi-Exposure Fusion"
May, Michael, Martin Turner, and Tim Morris. "FAW for Multi-exposure Fusion Features". In Advances in Image and Video Technology, 289–300. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25367-6_26.
Yu, Hanyi, and Yue Zhou. "Fusion of Multi-view Multi-exposure Images with Delaunay Triangulation". In Neural Information Processing, 682–89. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_76.
Bhateja, Vikrant, Ashutosh Singhal, and Anil Singh. "Multi-exposure Image Fusion Method Using Anisotropic Diffusion". In Advances in Intelligent Systems and Computing, 893–900. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1165-9_80.
Patel, Diptiben, Bhoomika Sonane, and Shanmuganathan Raman. "Multi-exposure Image Fusion Using Propagated Image Filtering". In Advances in Intelligent Systems and Computing, 431–41. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2104-6_39.
Xue, Xiao, and Yue Zhou. "Multi-view Multi-exposure Image Fusion Based on Random Walks Model". In Computer Vision – ACCV 2016 Workshops, 491–99. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54526-4_36.
Jishnu, C. R., and S. Vishnukumar. "An Effective Multi-exposure Fusion Approach Using Exposure Correction and Recursive Filter". In Inventive Systems and Control, 625–37. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1624-5_46.
Biswas, Anmol, K. S. Green Rosh, and Sachin Deepak Lomte. "Spatially Variant Laplacian Pyramids for Multi-frame Exposure Fusion". In Communications in Computer and Information Science, 73–81. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4015-8_7.
Bai, Yuanchao, Huizhu Jia, Hengjin Liu, Guoqing Xiang, Xiaodong Xie, Ming Jiang, and Wen Gao. "A Multi-exposure Fusion Method Based on Locality Properties". In Advances in Multimedia Information Processing – PCM 2014, 333–42. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13168-9_37.
Dhivya Lakshmi, R., K. V. Rekha, E. Ilin Shantha Mary, Gandhapu Yashwanth, Gokavarapu Manikanta Kalyan, Singamsetty Phanindra, M. Jasmine Pemeena Priyadarsini, and N. Sardar Basha. "Multi-exposure Image Reconstruction by Energy-Based Fusion Technique". In Advances in Automation, Signal Processing, Instrumentation, and Control, 1403–10. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8221-9_130.
Papachary, Biroju, N. L. Aravinda, and A. Srinivasula Reddy. "DLCNN Model with Multi-exposure Fusion for Underwater Image Enhancement". In Advances in Cognitive Science and Communications, 179–90. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8086-2_18.
Conference papers on the topic "Multi-Exposure Fusion"
Kinoshita, Yuma, Sayaka Shiota, Hitoshi Kiya, and Taichi Yoshida. "Multi-Exposure Image Fusion Based on Exposure Compensation". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461604.
Kinoshita, Yuma, Sayaka Shiota, and Hitoshi Kiya. "Automatic Exposure Compensation for Multi-Exposure Image Fusion". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451401.
Martorell, O., C. Sbert, and A. Buades. "DCT based Multi Exposure Image Fusion". In 14th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007356700002108.
Wang, Chunmeng, Mingyi Bao, and Chen He. "Interactive Fusion for Multi-exposure Images". In ICIT 2020: IoT and Smart City. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3446999.3447014.
Zhang, Wenlong, Xiaolin Liu, and Wuchao Wang. "Wavelet-Based Multi-Exposure Image Fusion". In the 8th International Conference. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/3015166.3015199.
Wang, Qiantong, Weihai Chen, Xingming Wu, and Zhengguo Li. "Detail Preserving Multi-Scale Exposure Fusion". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451177.
Li, Hui, and Lei Zhang. "Multi-Exposure Fusion with CNN Features". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451689.
Li, Yanfeng, Mingyang Liu, and Kaixu Han. "Overview of Multi-Exposure Image Fusion". In 2021 International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB). IEEE, 2021. http://dx.doi.org/10.1109/iceib53692.2021.9686453.
Zhang, Xingdi, Shuaicheng Liu, Shuyuan Zhu, and Bing Zeng. "Multi-exposure Fusion With JPEG Compression Guidance". In 2018 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2018. http://dx.doi.org/10.1109/vcip.2018.8698717.