A ready-made bibliography on the topic "Multi-Exposure Fusion"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Multi-Exposure Fusion".
An "Add to bibliography" button is available next to each work in the list. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever such details are available in the work's metadata.
Journal articles on the topic "Multi-Exposure Fusion"
Goshtasby, A. Ardeshir. "Fusion of multi-exposure images". Image and Vision Computing 23, no. 6 (June 2005): 611–18. http://dx.doi.org/10.1016/j.imavis.2005.02.004.
CM, Sushmitha, and Meharunnisa SP. "An Image Quality Assessment of Multi-Exposure Image Fusion by Improving SSIM". International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 2780–84. http://dx.doi.org/10.31142/ijtsrd15634.
LI Wei-zhong (李卫中), YI Ben-shun (易本顺), QIU Kang (邱康), and PENG Hong (彭红). "Detail preserving multi-exposure image fusion". Optics and Precision Engineering 24, no. 9 (2016): 2283–92. http://dx.doi.org/10.3788/ope.20162409.2283.
Shaikh, Uzmanaz A., Vivek J. Vishwakarma, and Shubham S. Mahale. "Dynamic Scene Multi-Exposure Image Fusion". IETE Journal of Education 59, no. 2 (July 3, 2018): 53–61. http://dx.doi.org/10.1080/09747338.2018.1510744.
Li, Zhengguo, Zhe Wei, Changyun Wen, and Jinghong Zheng. "Detail-Enhanced Multi-Scale Exposure Fusion". IEEE Transactions on Image Processing 26, no. 3 (March 2017): 1243–52. http://dx.doi.org/10.1109/tip.2017.2651366.
Inoue, Kohei, Hengjun Yu, Kenji Hara, and Kiichi Urahama. "Saturation-Enhancing Multi-Exposure Image Fusion". Journal of the Institute of Image Information and Television Engineers 70, no. 8 (2016): J185–J187. http://dx.doi.org/10.3169/itej.70.j185.
Liu, Renshuai, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, and Xuan Cheng. "EMEF: Ensemble Multi-Exposure Image Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1710–18. http://dx.doi.org/10.1609/aaai.v37i2.25259.
Xiang, Hu Yan, and Xi Rong Ma. "An Improved Multi-Exposure Image Fusion Algorithm". Advanced Materials Research 403–408 (November 2011): 2200–2205. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2200.
Deng, Chenwei, Zhen Li, Shuigen Wang, Xun Liu, and Jiahui Dai. "Saturation-based quality assessment for colorful multi-exposure image fusion". International Journal of Advanced Robotic Systems 14, no. 2 (March 1, 2017): 172988141769462. http://dx.doi.org/10.1177/1729881417694627.
Hayat, Naila, and Muhammad Imran. "Multi-exposure image fusion technique using multi-resolution blending". IET Image Processing 13, no. 13 (November 14, 2019): 2554–61. http://dx.doi.org/10.1049/iet-ipr.2019.0438.
Pełny tekst źródłaRozprawy doktorskie na temat "Multi-Exposure Fusion"
Saravi, Sara. "Use of Coherent Point Drift in computer vision applications". Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.
Shen, Xuan-Wei (沈軒緯). "ROI-Based Fusion of Multi-Exposure Images". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/u7wnc3.
National Chung Cheng University (國立中正大學), Graduate Institute of Electrical Engineering (電機工程研究所), academic year 102 (2013–14)
In this thesis we propose a technique that blends multiple exposure images into a high-quality result without generating a physically based high dynamic range (HDR) image. This avoids physical influences such as the camera response curve or abrupt brightness changes caused by flash. Our method selects the best of the multiple exposure images as the leading image and uses the remaining images as supporting images. The leading image is used directly in the result except in its ill-exposed regions; there, the supporting images are fused in to "support" the leading image and yield a high-quality result.
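A minimal NumPy/OpenCV sketch of this leading/supporting idea follows; it is not the thesis implementation, and the Gaussian well-exposedness weight, its sigma, and the leader-selection rule are assumptions made for illustration.

    # Illustrative only: blend "supporting" exposures into the ill-exposed
    # regions of a "leading" exposure, chosen as the best-exposed input.
    import cv2
    import numpy as np

    def well_exposedness(img, sigma=0.2):
        # High weight where intensities sit near mid-gray (0.5).
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        return np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))

    def roi_fusion(images):
        lead_idx = int(np.argmax([well_exposedness(im).mean() for im in images]))
        w_lead = well_exposedness(images[lead_idx])
        result = images[lead_idx].astype(np.float32) * w_lead[..., None]
        total = w_lead.copy()
        for k, im in enumerate(images):
            if k == lead_idx:
                continue
            # Supporting images contribute only where the leader is weak.
            w = well_exposedness(im) * (1.0 - w_lead)
            result += im.astype(np.float32) * w[..., None]
            total += w
        return np.clip(result / (total[..., None] + 1e-8), 0, 255).astype(np.uint8)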
Guo, Bo-Yi (郭柏易). "Multi-exposure image fusion using tone reproduction". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81661073819289386294.
Ching Yun University of Science and Technology (清雲科技大學), Graduate Institute of Electronic Engineering (電子工程所), academic year 99 (2010–11)
High dynamic range (HDR) imaging is a technique that preserves the full luminance information of a real scene. Its main disadvantage is that it requires large amounts of storage and can complicate transmission, which is why most digital cameras on the market store images with low dynamic range (LDR). An LDR image, however, cannot convey the full luminance information of the real scene. Many researchers have therefore developed techniques that merge several LDR images into a new LDR image with HDR-like quality. This thesis proposes fusing multiple differently exposed LDR images with a tone reproduction method; the output is another LDR image that has the visual quality of an HDR image. The input is a series of multiple-exposure images of the same scene. Each input image is divided into equal-sized blocks, and for each block position the block with the best visual quality is selected from among the input images to assemble a new image. A tone reproduction algorithm then fuses the selected blocks into an image with HDR-like visual quality.
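A rough sketch of the block-selection step is given below, with local intensity variance assumed as a stand-in for "best visual quality"; the thesis' actual criterion and its tone-reproduction blending are not reproduced here.

    # Illustrative per-block selection; variance approximates visual quality.
    import numpy as np

    def fuse_by_blocks(images, block=32):
        h, w = images[0].shape[:2]
        out = np.zeros_like(images[0])
        for y in range(0, h, block):
            for x in range(0, w, block):
                tiles = [im[y:y + block, x:x + block] for im in images]
                # Keep the tile with the highest contrast (variance).
                best = max(tiles, key=lambda t: float(t.astype(np.float32).var()))
                out[y:y + block, x:x + block] = best
        # The thesis then fuses the selected blocks with a tone reproduction
        # algorithm to obtain an HDR-like result; that step is omitted here.
        return out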
Hsu, Chien-Chih. "Multi-Exposure Image Fusion for Digital Still Cameras". Thesis, 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0021-2004200718214103.
Hsu, Chien-Chih (徐健智). "Multi-Exposure Image Fusion for Digital Still Cameras". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/03114246831843680298.
National Taiwan Normal University (國立臺灣師範大學), Graduate Institute of Applied Electronics Technology (應用電子科技研究所), academic year 94 (2005–06)
Fusing multiple frames taken with different exposure times can accommodate scenes with high dynamic range. In this thesis, we propose an approach that fuses two consecutive video frames with different exposure times. Finding moving objects and human faces in such a fused, higher-dynamic-range image is much easier than in a typically exposed frame. The approach has been implemented on a commercial digital camera with a robust hardware and software platform, and experiments show a fusion speed of about 4 frames per second. Fusing several differently exposed images is particularly useful when taking pictures of high dynamic range scenes; however, scene changes caused by moving objects, and vibrations caused by the photographer, must be compensated adaptively in practical camera applications. We therefore propose a complete image fusion system that extends the dynamic range of a picture by fusing three differently exposed images. Unlike most fusion algorithms, which operate on processed images and try to recover the transfer function of the imaging system, the proposed algorithm works directly on raw image data before any color image processing is performed. The proposed global and local stabilization algorithms efficiently remedy the vibration problems and achieve a quite stable fusion result.
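Because the abstract stresses fusing raw data before color processing, the core merge can be pictured as scaling each frame by its exposure time into a common radiance scale. This toy sketch omits the thesis' global and local stabilization steps, and the saturation threshold is an assumption.

    # Toy radiance-domain merge of a short and a long exposure with known
    # exposure times; 0.95 is an assumed saturation threshold.
    import numpy as np

    def merge_raw(short, long_, t_short, t_long, sat=0.95):
        r_short = short.astype(np.float32) / t_short  # radiance estimate
        r_long = long_.astype(np.float32) / t_long
        # Trust the long exposure except where it is (nearly) saturated.
        w_long = (long_.astype(np.float32) < sat * float(long_.max())).astype(np.float32)
        return w_long * r_long + (1.0 - w_long) * r_short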
Liu, Ting-Chi (劉丁綺). "Automatic Multi-Exposure Image Fusion Based on Visual Saliency Map". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/t79r2z.
National Taipei University of Technology (國立臺北科技大學), Graduate Institute of Automation Technology (自動化科技研究所), academic year 107 (2018–19)
Due to the limitations of camera sensors, high dynamic range imaging (HDRI) techniques have become popular in recent years. Although HDRI methods are increasingly sophisticated mathematically (global and local filters for noise removal, and optimization for preserving pixel detail), a good model for generating the weight maps used in multi-exposure image fusion is still being sought. Drawing on research into the human visual system, we also try to understand what image properties register as good features to human vision. In this study, we apply the concept of salient-region detection to weight map determination: we combine two points of view, human visual perception and mathematical image features, to derive a color-contrast cue and an exposure cue. Forming a weight map from these cues and applying pyramid fusion produces results with good contrast and saturation while preserving detail across differently exposed scenes.
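A minimal sketch of cue-weighted pyramid fusion in this spirit (essentially Mertens-style exposure fusion) is given below; the thesis' saliency-based color-contrast cue is replaced here by a simple Laplacian contrast cue, so this illustrates the pipeline rather than the proposed method.

    # Illustrative cue-weighted Laplacian-pyramid fusion (OpenCV/NumPy).
    import cv2
    import numpy as np

    def cue_weight(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))        # contrast cue
        exposure = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2))  # exposure cue
        return contrast * exposure + 1e-12

    def pyramid_fusion(images, levels=5):
        imgs = [im.astype(np.float32) / 255.0 for im in images]
        weights = [cue_weight(im) for im in images]
        norm = sum(weights)
        weights = [w / norm for w in weights]  # per-pixel normalization

        fused = None
        for im, w in zip(imgs, weights):
            gp = [w]                          # Gaussian pyramid of the weights
            for _ in range(levels):
                gp.append(cv2.pyrDown(gp[-1]))
            lp, cur = [], im                  # Laplacian pyramid of the image
            for _ in range(levels):
                down = cv2.pyrDown(cur)
                up = cv2.pyrUp(down, dstsize=cur.shape[1::-1])
                lp.append(cur - up)
                cur = down
            lp.append(cur)
            blended = [l * g[..., None] for l, g in zip(lp, gp)]
            fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]

        out = fused[-1]                       # collapse the blended pyramid
        for lap in reversed(fused[:-1]):
            out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
        return np.clip(out * 255, 0, 255).astype(np.uint8)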
Ram, Prabhakar Kathirvel. "Advances in High Dynamic Range Imaging Using Deep Learning". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5515.
Books on the topic "Multi-Exposure Fusion"
Low Choy, Samantha, Justine Murray, Allan James, and Kerrie Mengersen. Combining monitoring data and computer model output in assessing environmental exposure. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.18.
Book chapters on the topic "Multi-Exposure Fusion"
May, Michael, Martin Turner, and Tim Morris. "FAW for Multi-exposure Fusion Features". In Advances in Image and Video Technology, 289–300. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25367-6_26.
Yu, Hanyi, and Yue Zhou. "Fusion of Multi-view Multi-exposure Images with Delaunay Triangulation". In Neural Information Processing, 682–89. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_76.
Bhateja, Vikrant, Ashutosh Singhal, and Anil Singh. "Multi-exposure Image Fusion Method Using Anisotropic Diffusion". In Advances in Intelligent Systems and Computing, 893–900. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1165-9_80.
Patel, Diptiben, Bhoomika Sonane, and Shanmuganathan Raman. "Multi-exposure Image Fusion Using Propagated Image Filtering". In Advances in Intelligent Systems and Computing, 431–41. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2104-6_39.
Xue, Xiao, and Yue Zhou. "Multi-view Multi-exposure Image Fusion Based on Random Walks Model". In Computer Vision – ACCV 2016 Workshops, 491–99. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54526-4_36.
Jishnu, C. R., and S. Vishnukumar. "An Effective Multi-exposure Fusion Approach Using Exposure Correction and Recursive Filter". In Inventive Systems and Control, 625–37. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1624-5_46.
Biswas, Anmol, K. S. Green Rosh, and Sachin Deepak Lomte. "Spatially Variant Laplacian Pyramids for Multi-frame Exposure Fusion". In Communications in Computer and Information Science, 73–81. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4015-8_7.
Bai, Yuanchao, Huizhu Jia, Hengjin Liu, Guoqing Xiang, Xiaodong Xie, Ming Jiang, and Wen Gao. "A Multi-exposure Fusion Method Based on Locality Properties". In Advances in Multimedia Information Processing – PCM 2014, 333–42. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13168-9_37.
Dhivya Lakshmi, R., K. V. Rekha, E. Ilin Shantha Mary, Gandhapu Yashwanth, Gokavarapu Manikanta Kalyan, Singamsetty Phanindra, M. Jasmine Pemeena Priyadarsini, and N. Sardar Basha. "Multi-exposure Image Reconstruction by Energy-Based Fusion Technique". In Advances in Automation, Signal Processing, Instrumentation, and Control, 1403–10. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8221-9_130.
Papachary, Biroju, N. L. Aravinda, and A. Srinivasula Reddy. "DLCNN Model with Multi-exposure Fusion for Underwater Image Enhancement". In Advances in Cognitive Science and Communications, 179–90. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8086-2_18.
Conference papers on the topic "Multi-Exposure Fusion"
Kinoshita, Yuma, Sayaka Shiota, Hitoshi Kiya, and Taichi Yoshida. "Multi-Exposure Image Fusion Based on Exposure Compensation". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461604.
Kinoshita, Yuma, Sayaka Shiota, and Hitoshi Kiya. "Automatic Exposure Compensation for Multi-Exposure Image Fusion". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451401.
Martorell, O., C. Sbert, and A. Buades. "DCT based Multi Exposure Image Fusion". In 14th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007356700002108.
Wang, Chunmeng, Mingyi Bao, and Chen He. "Interactive Fusion for Multi-exposure Images". In ICIT 2020: IoT and Smart City. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3446999.3447014.
Martorell, O., C. Sbert, and A. Buades. "DCT based Multi Exposure Image Fusion". In 14th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007356701150122.
Zhang, Wenlong, Xiaolin Liu, and Wuchao Wang. "Wavelet-Based Multi-Exposure Image Fusion". In the 8th International Conference. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/3015166.3015199.
Wang, Qiantong, Weihai Chen, Xingming Wu, and Zhengguo Li. "Detail Preserving Multi-Scale Exposure Fusion". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451177.
Li, Hui, and Lei Zhang. "Multi-Exposure Fusion with CNN Features". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451689.
Li, Yanfeng, Mingyang Liu, and Kaixu Han. "Overview of Multi-Exposure Image Fusion". In 2021 International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB). IEEE, 2021. http://dx.doi.org/10.1109/iceib53692.2021.9686453.
Zhang, Xingdi, Shuaicheng Liu, Shuyuan Zhu, and Bing Zeng. "Multi-exposure Fusion With JPEG Compression Guidance". In 2018 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2018. http://dx.doi.org/10.1109/vcip.2018.8698717.