A ready-made bibliography on the topic "Multi-Exposure Fusion"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles


Consult the lists of recent articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Multi-Exposure Fusion".

Next to every work in the bibliography you will find an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online whenever such details are available in the source's metadata.

Journal articles on the topic "Multi-Exposure Fusion"

1

Goshtasby, A. Ardeshir. "Fusion of multi-exposure images". Image and Vision Computing 23, no. 6 (June 2005): 611–18. http://dx.doi.org/10.1016/j.imavis.2005.02.004.

2

CM, Sushmitha, and Meharunnisa SP. "An Image Quality Assessment of Multi-Exposure Image Fusion by Improving SSIM". International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 2780–84. http://dx.doi.org/10.31142/ijtsrd15634.

3

LI Wei-zhong (李卫中), YI Ben-shun (易本顺), QIU Kang (邱康), and PENG Hong (彭红). "Detail preserving multi-exposure image fusion". Optics and Precision Engineering 24, no. 9 (2016): 2283–92. http://dx.doi.org/10.3788/ope.20162409.2283.

4

Shaikh, Uzmanaz A., Vivek J. Vishwakarma, and Shubham S. Mahale. "Dynamic Scene Multi-Exposure Image Fusion". IETE Journal of Education 59, no. 2 (July 3, 2018): 53–61. http://dx.doi.org/10.1080/09747338.2018.1510744.

5

Li, Zhengguo, Zhe Wei, Changyun Wen, and Jinghong Zheng. "Detail-Enhanced Multi-Scale Exposure Fusion". IEEE Transactions on Image Processing 26, no. 3 (March 2017): 1243–52. http://dx.doi.org/10.1109/tip.2017.2651366.

6

Inoue, Kohei, Hengjun Yu, Kenji Hara, and Kiichi Urahama. "Saturation-Enhancing Multi-Exposure Image Fusion". Journal of the Institute of Image Information and Television Engineers 70, no. 8 (2016): J185–J187. http://dx.doi.org/10.3169/itej.70.j185.

7

Liu, Renshuai, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, and Xuan Cheng. "EMEF: Ensemble Multi-Exposure Image Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1710–18. http://dx.doi.org/10.1609/aaai.v37i2.25259.

Abstract:
Although remarkable progress has been made in recent years, current multi-exposure image fusion (MEF) research is still bounded by the lack of real ground truth, objective evaluation function, and robust fusion strategy. In this paper, we study the MEF problem from a new perspective. We don’t utilize any synthesized ground truth, design any loss function, or develop any fusion strategy. Our proposed method EMEF takes advantage of the wisdom of multiple imperfect MEF contributors including both conventional and deep learning-based methods. Specifically, EMEF consists of two main stages: pre-train an imitator network and tune the imitator in the runtime. In the first stage, we make a unified network imitate different MEF targets in a style modulation way. In the second stage, we tune the imitator network by optimizing the style code, in order to find an optimal fusion result for each input pair. In the experiment, we construct EMEF from four state-of-the-art MEF methods and then make comparisons with the individuals and several other competitive methods on the latest released MEF benchmark dataset. The promising experimental results demonstrate that our ensemble framework can “get the best of all worlds”. The code is available at https://github.com/medalwill/EMEF.
8

Xiang, Hu Yan, and Xi Rong Ma. "An Improved Multi-Exposure Image Fusion Algorithm". Advanced Materials Research 403–408 (November 2011): 2200–2205. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2200.

Abstract:
An improved multi-exposure image fusion scheme is proposed to fuse visual images for wide-range illumination applications. Whereas previous image fusion approaches perform the fusion with regard only to local details such as regional contrast and gradient, the proposed algorithm also takes global illumination contrast into consideration, which noticeably extends the dynamic range. Wavelets are used as the multi-scale analysis tool in intensity fusion. For color fusion, a method based on the HSI color model and weight maps is used. The experimental results showed that the proposed fusion scheme has significant advantages in dynamic range, regional contrast, and color saturation.
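The wavelet-based intensity fusion this abstract describes can be sketched briefly. The following Python fragment is a generic illustration under assumptions of mine, not the paper's exact scheme: it averages the coarse approximation coefficients (carrying global illumination) and keeps the maximum-magnitude detail coefficients (carrying local contrast), using the PyWavelets library; the paper's HSI-based color fusion step is omitted.

    # Generic wavelet fusion sketch: average approximations, pick the
    # strongest detail coefficients. Assumes same-sized grayscale inputs.
    import numpy as np
    import pywt

    def wavelet_fuse(intensities, wavelet="db2", level=3):
        decomps = [pywt.wavedec2(i, wavelet, level=level) for i in intensities]
        fused = [np.mean([d[0] for d in decomps], axis=0)]  # approximation band
        for lvl in range(1, level + 1):
            bands = []
            for b in range(3):  # horizontal, vertical, diagonal details
                stack = np.stack([d[lvl][b] for d in decomps])
                idx = np.abs(stack).argmax(axis=0)
                bands.append(np.take_along_axis(stack, idx[None], axis=0)[0])
            fused.append(tuple(bands))
        return pywt.waverec2(fused, wavelet)

In the paper's full pipeline the fused intensity channel would then be recombined with hue and saturation in the HSI color model using weight maps.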
9

Deng, Chenwei, Zhen Li, Shuigen Wang, Xun Liu, and Jiahui Dai. "Saturation-based quality assessment for colorful multi-exposure image fusion". International Journal of Advanced Robotic Systems 14, no. 2 (March 1, 2017): 172988141769462. http://dx.doi.org/10.1177/1729881417694627.

Abstract:
Multi-exposure image fusion is becoming increasingly influential in enhancing the quality of experience of consumer electronics. However, until now few works have been conducted on the performance evaluation of multi-exposure image fusion, especially colorful multi-exposure image fusion. Conventional quality assessment methods for multi-exposure image fusion mainly focus on grayscale information, while ignoring the color components, which also convey vital visual information. We propose an objective method for the quality assessment of colored multi-exposure image fusion based on image saturation, together with texture and structure similarities, which are able to measure the perceived color, texture, and structure information of fused images. The final image quality is predicted using an extreme learning machine with texture, structure, and saturation similarities as image features. Experimental results for a public multi-exposure image fusion database show that the proposed model can accurately predict colored multi-exposure image fusion image quality and correlates well with human perception. Compared with state-of-the-art image quality assessment models for image fusion, the proposed metric has better evaluation performance.
10

Hayat, Naila, and Muhammad Imran. "Multi-exposure image fusion technique using multi-resolution blending". IET Image Processing 13, no. 13 (November 14, 2019): 2554–61. http://dx.doi.org/10.1049/iet-ipr.2019.0438.


Doctoral dissertations on the topic "Multi-Exposure Fusion"

1

Saravi, Sara. "Use of Coherent Point Drift in computer vision applications". Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.

Abstract:
This thesis presents the novel use of Coherent Point Drift (CPD) in improving the robustness of a number of computer vision applications. The CPD approach includes two methods for registering two images - rigid and non-rigid point set approaches - depending on the transformation model used. The key characteristic of a rigid transformation is that the distance between points is preserved, which means it can be used in the presence of translation, rotation, and scaling. Non-rigid transformations - or affine transforms - provide the opportunity of registering under non-uniform scaling and skew. The idea is to move one point set coherently to align with the second point set. The CPD method finds both the non-rigid transformation and the correspondence distance between two point sets at the same time, without requiring an a priori declaration of the transformation model used. The first part of this thesis is focused on speaker identification in video conferencing. A real-time, audio-coupled, video-based approach is presented, which focuses more on the video analysis side than on the audio analysis that is known to be prone to errors. CPD is effectively utilised for lip movement detection, and a temporal face detection approach is used to minimise false positives if the face detection algorithm fails to perform. The second part of the thesis is focused on multi-exposure and multi-focus image fusion with compensation for camera shake. Scale Invariant Feature Transforms (SIFT) are first used to detect keypoints in the images being fused. Subsequently this point set is reduced to remove outliers, using RANSAC (RANdom SAmple Consensus), and finally the point sets are registered using CPD with non-rigid transformations. The registered images are then fused with a Contourlet-based image fusion algorithm that makes use of a novel alpha blending and filtering technique to minimise artefacts. The thesis evaluates the performance of the algorithm in comparison to a number of state-of-the-art approaches, including the key commercial products available in the market at present, showing significantly improved subjective quality in the fused images. The final part of the thesis presents a novel approach to Vehicle Make & Model Recognition (VMMR) in CCTV video footage. CPD is used to effectively remove the skew of detected vehicles, since CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approaching angles. A LESH (Local Energy Shape Histogram) feature-based approach is used for vehicle make and model recognition, with the novelty that temporal processing is used to improve reliability. A number of further algorithms are used to maximise the reliability of the final outcome. Experimental results are provided to prove that the proposed system demonstrates an accuracy in excess of 95% when tested on real CCTV footage with no prior camera calibration.
2

Shen, Xuan-Wei, and 沈軒緯. "ROI-Based Fusion of Multi-Exposure Images". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/u7wnc3.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Electrical Engineering, 2013 (ROC year 102).
In this thesis we propose a technique to blend multiple exposure images into a high-quality result without generating a physically based high dynamic range (HDR) image. This avoids physical influences such as the camera response curve or abrupt brightness changes such as flash. Our method selects the best image among the multiple exposure images as the leading image and treats the other images as supporting images. The leading image is mostly used directly in the result, except in its ill-exposed regions; in those regions we fuse the supporting images to "support" the leading image and obtain a high-quality result.
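The leading/supporting strategy can be illustrated with a short sketch. This Python fragment rests on assumptions of mine (a plain intensity threshold as the ill-exposure test and Gaussian feathering for seamless blending); the thesis abstract does not specify these details.

    # Hypothetical sketch of ROI-based fusion: keep the leading image,
    # fill its ill-exposed regions from the supporting images.
    import cv2
    import numpy as np

    def roi_fuse(leading, supports, lo=0.05, hi=0.95, feather=15):
        lead = leading.astype(np.float32) / 255.0
        gray = cv2.cvtColor(lead, cv2.COLOR_BGR2GRAY)
        ill = ((gray < lo) | (gray > hi)).astype(np.float32)  # ill-exposed mask
        ill = cv2.GaussianBlur(ill, (0, 0), feather)          # feather the seams
        support = np.mean([s.astype(np.float32) / 255.0 for s in supports], axis=0)
        fused = (1 - ill[..., None]) * lead + ill[..., None] * support
        return np.clip(fused * 255, 0, 255).astype(np.uint8)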
3

Guo, Bo-Yi, and 郭柏易. "Multi-exposure image fusion using tone reproduction". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81661073819289386294.

Abstract:
Master's thesis, Ching Yun University of Science and Technology, Institute of Electronic Engineering, 2010 (ROC year 99).
High dynamic range (HDR) imaging is a technique that preserves the intact luminance information of a real scene in an image. Its main disadvantage is that it requires large memory storage and may cause difficulties in transmission. Thus, most digital cameras currently on the market use low dynamic range (LDR) imaging for image storage. However, an LDR image lacks the ability to convey the intact luminance information of a real scene. Many researchers have developed techniques for merging several LDR images to produce a new LDR image with the quality of an HDR image. This paper proposes to fuse multiple-exposure low dynamic range (LDR) images by using a tone reproduction method. The produced image is another LDR image that has the visual quality of a high dynamic range (HDR) image. The input is a series of multiple-exposure images of the same scene. Each input image is segmented equally into several blocks. For each block position, the block with the best visual effect is selected from among the input images to compose a new image. A tone reproduction algorithm is then used to fuse the selected blocks into an image with the visual effect of an HDR image.
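The block-wise selection step can be sketched in a few lines of Python. Since the abstract does not define "best visual effect", local entropy is used below purely as a stand-in criterion, and the tone reproduction pass that blends the selected blocks seamlessly is omitted.

    # Sketch of block-wise best-exposure selection with an assumed
    # entropy criterion; inputs are same-sized uint8 images.
    import numpy as np

    def blockwise_select(images, block=32):
        h, w = images[0].shape[:2]
        out = np.zeros_like(images[0])
        for y in range(0, h, block):
            for x in range(0, w, block):
                best, best_score = None, -1.0
                for im in images:
                    patch = im[y:y + block, x:x + block]
                    hist, _ = np.histogram(patch, bins=256, range=(0, 256))
                    p = hist / hist.sum()
                    nz = p[p > 0]
                    score = -np.sum(nz * np.log2(nz))  # Shannon entropy
                    if score > best_score:
                        best, best_score = patch, score
                out[y:y + block, x:x + block] = best
        return out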
4

Hsu, Chien-Chih. "Multi-Exposure Image Fusion for Digital Still Cameras". 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0021-2004200718214103.

5

Hsu, Chien-Chih, and 徐健智. "Multi-Exposure Image Fusion for Digital Still Cameras". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/03114246831843680298.

Abstract:
Master's thesis, National Taiwan Normal University, Graduate Institute of Applied Electronics Technology, 2005 (ROC year 94).
Fusing multiple frames with different exposure times can accommodate scenes with high dynamic range. In this thesis, we propose an approach that fuses two consecutive video frames with different exposure times. Finding moving objects and human faces in such a higher-dynamic-range fused image is much easier than in a typically exposed frame. The proposed approach has been implemented on a commercial digital camera with a robust hardware and software platform, and the experimental results show that the fusion speed is around 4 frames per second. Fusing several differently exposed images is particularly useful for taking pictures of high dynamic range scenes. However, scene changes caused by moving objects and vibrations introduced by photographers must be compensated adaptively in practical camera applications. In this thesis, we propose a complete image fusion system aimed at extending the dynamic range of a picture by fusing three differently exposed images. Unlike most fusion algorithms, which operate on processed images and try to recover the transfer functions of the imaging systems, the proposed image fusion algorithm works directly on raw image data before any color image processing is performed. The proposed global and local stabilization algorithms efficiently remedy the vibration problems and achieve a quite stable image fusion result.
6

LIU, TING-CHI, and 劉丁綺. "Automatic Multi-Exposure Image Fusion Based on Visual Saliency Map". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/t79r2z.

Abstract:
Master's thesis, National Taipei University of Technology, Graduate Institute of Automation Technology, 2018 (ROC year 107).
Due to the limitations of camera sensors, high dynamic range imaging (HDRI) techniques have become popular in recent years. Although HDRI is growing mathematically sophisticated, with global or local filters for eliminating noise and pixel variation, and optimization for preserving details, researchers are still looking for a good model of weight map generation for multi-exposure image fusion that produces HDR images. Drawing on research into the human visual system, we also try to understand the fineness of an image and what makes a good image feature for human vision. In this study, we utilize the concept of salient region detection in weight map determination. We combine two points of view, human visual perception and mathematical image features, to derive a color-contrast cue and an exposure cue. Through the cue-formed weight map and pyramid fusion, the results show fine contrast and saturation while preserving details across different scenes.
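The cue-weighted pyramid fusion that this abstract describes follows the classic exposure fusion recipe, sketched below in Python. This is a generic illustration, not the thesis' method: the Laplacian-contrast and well-exposedness cues and the five-level pyramid are common defaults chosen here as assumptions, where the thesis derives saliency-based color-contrast and exposure cues instead.

    # Generic weight-map + pyramid exposure fusion (Mertens-style sketch).
    import cv2
    import numpy as np

    def exposure_fusion(images, levels=5, sigma=0.2):
        imgs = [im.astype(np.float32) / 255.0 for im in images]
        weights = []
        for im in imgs:
            gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
            contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))         # contrast cue
            exposed = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))  # exposure cue
            weights.append(contrast * exposed + 1e-12)
        total = np.sum(weights, axis=0)
        weights = [w / total for w in weights]                         # normalize per pixel
        fused = None
        for im, w in zip(imgs, weights):
            gp_w, gp_i = [w], [im]
            for _ in range(levels):                                    # build pyramids
                gp_w.append(cv2.pyrDown(gp_w[-1]))
                gp_i.append(cv2.pyrDown(gp_i[-1]))
            lap = [gp_i[k] - cv2.pyrUp(gp_i[k + 1], dstsize=gp_i[k].shape[1::-1])
                   for k in range(levels)] + [gp_i[-1]]
            contrib = [l * cv2.merge([g, g, g]) for l, g in zip(lap, gp_w)]
            fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
        out = fused[-1]
        for k in range(levels - 1, -1, -1):                            # collapse pyramid
            out = cv2.pyrUp(out, dstsize=fused[k].shape[1::-1]) + fused[k]
        return np.clip(out * 255, 0, 255).astype(np.uint8)

OpenCV also ships a ready-made variant of this idea (cv2.createMergeMertens()), which is a convenient baseline for comparing such weight-map designs.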
7

Ram, Prabhakar Kathirvel. "Advances in High Dynamic Range Imaging Using Deep Learning". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5515.

Abstract:
Natural scenes have a wide range of brightness, from dark starry nights to bright sunlit beaches. Our human eyes can perceive such a vast range of illumination through various adaptation techniques, thus allowing us to enjoy them. Contrarily, digital cameras can capture a limited brightness range due to their sensor limitations. Often, the dynamic range of the scene far exceeds the hardware limit of standard digital camera sensors. In such scenarios, the resulting photos will consist of saturated regions, either too dark or too bright to visually comprehend. An easy to deploy and widely used algorithmic solution to this problem is to merge multiple Low Dynamic Range (LDR) images captured with varying exposures into a single High Dynamic Range (HDR) image. Such a fusion process is simple for static sequences that have no camera or object motion. However, in most practical situations, a certain amount of camera and object motions are inevitable, leading to ghost-like artifacts in the final fused result. The process of fusing the LDR images without such ghosting artifacts is known as HDR deghosting. In this thesis, we make several contributions to the literature on HDR deghosting. First, we present a novel method to utilize auxiliary motion segmentation for efficient HDR deghosting. By segmenting the input LDR images into static and moving regions, we propose to learn effective fusion rules for various challenging saturation and motion types. Additionally, we introduce a novel memory network that accumulates the necessary features required to generate plausible details that were lost in the saturated regions. We also present a large-scale motion segmentation dataset of 3683 varying exposure images to benefit the research community. The lack of large and diverse data on exposure brackets is a critical problem for the learning-based HDR deghosting methods. Our next work's main contribution is to generate dynamic bracketed exposure images with ground truth HDR from static sequences. We achieve this data augmentation by synthetically introducing motions through affine transformations. Through experiments, we show that the proposed method generalizes well onto other datasets with real-world motions. Next, we explore data-efficient image fusion techniques for HDR imaging. Convolutional Neural Networks (CNN) have shown tremendous success in many image reconstruction problems. However, CNN-based Multi-Exposure Fusion (MEF) and HDR imaging methods require collecting large datasets with ground truth, which is a tedious and time-consuming process. To address this issue, we propose novel zero and few-shot HDR image fusion methods. First, we introduce an unsupervised deep learning framework for static MEF utilizing a no-reference quality metric as the loss function. In our approach, we modify the Structural Similarity Index Metric (SSIM) to generate expected ground truth statistics and compare them with the predicted output. Second, we propose an approach for training a deep neural network for HDR image deghosting with few labeled and many unlabeled images. The training is done in two stages. In the first stage, the network is trained on a set of dynamic and static images with corresponding ground truth. In the second stage, the network is trained on artificial dynamic sequences and corresponding ground truth generated from stage one. The proposed approach performs comparably to existing methods with only five labeled images. 
Despite their impressive performance, existing CNN-based HDR deghosting methods are rigid in terms of the number of images to fuse. They are not scalable to fuse arbitrary-length LDR sequences during validation. We address this issue by proposing two scalable HDR deghosting algorithms. First, we propose a modular fusion technique that uses mean-max feature aggregation to fuse an arbitrary number of LDR images. Second, we propose a recurrent neural network using a novel Self-Gated Memory (SGM) cell for scalable HDR deghosting. In the SGM cell, the information flow is controlled by multiplying the gate's output by a function of itself. Additionally, we use two SGM cells in a bidirectional setting to improve the output quality. The promising experimental results demonstrate the effectiveness of the proposed recurrent model in HDR deghosting. There are many successful deep learning-based approaches for HDR deghosting, but their computational cost is high, preventing the generation of high spatial resolution images. We address this problem by performing motion compensation in low resolution and using Bilateral Guided Upsampling (BGU) to generate a sharp high-resolution HDR image. The guide image for BGU is synthesized in the weight-map domain with bicubic upsampling. The proposed method outperforms existing methods in terms of computational efficiency while still being accurate. Our proposed method is fast and can fuse a sequence of three 16-megapixel high-resolution images in about 10 seconds.

Books on the topic "Multi-Exposure Fusion"

1

Low Choy, Samantha, Justine Murray, Allan James, and Kerrie Mengersen. Combining monitoring data and computer model output in assessing environmental exposure. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.18.

Abstract:
This article discusses an approach that combines monitoring data and computer model outputs for environmental exposure assessment. It describes the application of Bayesian data fusion methods using spatial Gaussian process models in studies of weekly wet deposition data for 2001 from 120 sites monitored by the US National Atmospheric Deposition Program (NADP) in the eastern United States. The article first provides an overview of environmental computer models, with a focus on the CMAQ (Community Multi-Scale Air Quality) Eta forecast model, before considering some algorithmic and pseudo-statistical approaches in weather prediction. It then reviews current state of the art fusion methods for environmental data analysis and introduces a non-dynamic downscaling approach. The static version of the dynamic spatial model is used to analyse the NADP weekly wet deposition data.

Book chapters on the topic "Multi-Exposure Fusion"

1

May, Michael, Martin Turner, and Tim Morris. "FAW for Multi-exposure Fusion Features". In Advances in Image and Video Technology, 289–300. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25367-6_26.

2

Yu, Hanyi, and Yue Zhou. "Fusion of Multi-view Multi-exposure Images with Delaunay Triangulation". In Neural Information Processing, 682–89. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_76.

3

Bhateja, Vikrant, Ashutosh Singhal, and Anil Singh. "Multi-exposure Image Fusion Method Using Anisotropic Diffusion". In Advances in Intelligent Systems and Computing, 893–900. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1165-9_80.

4

Patel, Diptiben, Bhoomika Sonane, and Shanmuganathan Raman. "Multi-exposure Image Fusion Using Propagated Image Filtering". In Advances in Intelligent Systems and Computing, 431–41. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2104-6_39.

5

Xue, Xiao, and Yue Zhou. "Multi-view Multi-exposure Image Fusion Based on Random Walks Model". In Computer Vision – ACCV 2016 Workshops, 491–99. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54526-4_36.

6

Jishnu, C. R., and S. Vishnukumar. "An Effective Multi-exposure Fusion Approach Using Exposure Correction and Recursive Filter". In Inventive Systems and Control, 625–37. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1624-5_46.

7

Biswas, Anmol, K. S. Green Rosh, and Sachin Deepak Lomte. "Spatially Variant Laplacian Pyramids for Multi-frame Exposure Fusion". In Communications in Computer and Information Science, 73–81. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4015-8_7.

8

Bai, Yuanchao, Huizhu Jia, Hengjin Liu, Guoqing Xiang, Xiaodong Xie, Ming Jiang, and Wen Gao. "A Multi-exposure Fusion Method Based on Locality Properties". In Advances in Multimedia Information Processing – PCM 2014, 333–42. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13168-9_37.

9

Dhivya Lakshmi, R., K. V. Rekha, E. Ilin Shantha Mary, Gandhapu Yashwanth, Gokavarapu Manikanta Kalyan, Singamsetty Phanindra, M. Jasmine Pemeena Priyadarsini, and N. Sardar Basha. "Multi-exposure Image Reconstruction by Energy-Based Fusion Technique". In Advances in Automation, Signal Processing, Instrumentation, and Control, 1403–10. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8221-9_130.

10

Papachary, Biroju, N. L. Aravinda, and A. Srinivasula Reddy. "DLCNN Model with Multi-exposure Fusion for Underwater Image Enhancement". In Advances in Cognitive Science and Communications, 179–90. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8086-2_18.


Conference papers on the topic "Multi-Exposure Fusion"

1

Kinoshita, Yuma, Sayaka Shiota, Hitoshi Kiya, and Taichi Yoshida. "Multi-Exposure Image Fusion Based on Exposure Compensation". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461604.

2

Kinoshita, Yuma, Sayaka Shiota, and Hitoshi Kiya. "Automatic Exposure Compensation for Multi-Exposure Image Fusion". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451401.

3

Martorell, O., C. Sbert, and A. Buades. "DCT based Multi Exposure Image Fusion". In 14th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007356700002108.

4

Wang, Chunmeng, Mingyi Bao, and Chen He. "Interactive Fusion for Multi-exposure Images". In ICIT 2020: IoT and Smart City. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3446999.3447014.

5

Martorell, O., C. Sbert, and A. Buades. "DCT based Multi Exposure Image Fusion". In 14th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007356701150122.

6

Zhang, Wenlong, Xiaolin Liu, and Wuchao Wang. "Wavelet-Based Multi-Exposure Image Fusion". In the 8th International Conference. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/3015166.3015199.

7

Wang, Qiantong, Weihai Chen, Xingming Wu, and Zhengguo Li. "Detail Preserving Multi-Scale Exposure Fusion". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451177.

8

Li, Hui, and Lei Zhang. "Multi-Exposure Fusion with CNN Features". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451689.

9

Li, Yanfeng, Mingyang Liu, and Kaixu Han. "Overview of Multi-Exposure Image Fusion". In 2021 International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB). IEEE, 2021. http://dx.doi.org/10.1109/iceib53692.2021.9686453.

10

Zhang, Xingdi, Shuaicheng Liu, Shuyuan Zhu, and Bing Zeng. "Multi-exposure Fusion With JPEG Compression Guidance". In 2018 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2018. http://dx.doi.org/10.1109/vcip.2018.8698717.
