A ready-made bibliography on the topic "Low-light images"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles

Choose a source type:

Consult lists of current articles, books, theses, conference abstracts, and other scholarly sources on the topic "Low-light images".

Next to every entry in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the source's metadata.

Journal articles on the topic "Low-light images"

1

Patil, Akshay, Tejas Chaudhari, Ketan Deo, Kalpesh Sonawane, and Rupali Bora. "Low Light Image Enhancement for Dark Images". International Journal of Data Science and Analysis 6, no. 4 (2020): 99. http://dx.doi.org/10.11648/j.ijdsa.20200604.11.

2

Hu, Zhe, Sunghyun Cho, Jue Wang, and Ming-Hsuan Yang. "Deblurring Low-Light Images with Light Streaks". IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 10 (October 1, 2018): 2329–41. http://dx.doi.org/10.1109/tpami.2017.2768365.

3

Yang, Yi, Zhengguo Li, and Shiqian Wu. "Low-Light Image Brightening via Fusing Additional Virtual Images". Sensors 20, no. 16 (August 17, 2020): 4614. http://dx.doi.org/10.3390/s20164614.

Abstract:
Capturing high-quality images via mobile devices in low-light or backlighting conditions is very challenging. In this paper, a new, single image brightening algorithm is proposed to enhance an image captured in low-light conditions. Two virtual images with larger exposure times are generated to increase brightness and enhance fine details of the underexposed regions. In order to reduce the brightness change, the virtual images are generated via intensity mapping functions (IMFs) which are computed using available camera response functions (CRFs). To avoid possible color distortion in the virtual image due to one-to-many mapping, a least square minimization problem is formulated to determine brightening factors for all pixels in the underexposed regions. In addition, an edge-preserving smoothing technique is adopted to avoid noise in the underexposed regions from being amplified in the virtual images. The final brightened image is obtained by fusing the original image and two virtual images via a gradient domain guided image filtering (GGIF) based multiscale exposure fusion (MEF) with properly defined weights for all the images. Experimental results show that the relative brightness and color are preserved better by the proposed algorithm. The details in bright regions are also preserved well in the final image. The proposed algorithm is expected to be useful for computational photography on smart phones.
4

FENG Wei 冯维, WU Gui-ming 吴贵铭, ZHAO Da-xing 赵大兴, and LIU Hong-di 刘红帝. "Multi images fusion Retinex for low light image enhancement". Optics and Precision Engineering 28, no. 3 (2020): 736–44. http://dx.doi.org/10.3788/ope.20202803.0736.

5

Lee, Hosang. "Successive Low-Light Image Enhancement Using an Image-Adaptive Mask". Symmetry 14, no. 6 (June 6, 2022): 1165. http://dx.doi.org/10.3390/sym14061165.

Abstract:
Low-light images are obtained in dark environments or in environments where there is insufficient light. Because of this, low-light images have low intensity values and dimmed features, making it difficult to directly apply computer vision or image recognition software to them. Therefore, to use computer vision processing on low-light images, an image improvement procedure is needed. There have been many studies on how to enhance low-light images. However, some of the existing methods create artifact and distortion effects in the resulting images. To improve low-light images, their contrast should be stretched naturally according to their features. This paper proposes the use of a low-light image enhancement method utilizing an image-adaptive mask that is composed of an image-adaptive ellipse. As a result, the low-light regions of the image are stretched and the bright regions are enhanced in a way that appears natural by an image-adaptive mask. Moreover, images that have been enhanced using the proposed method are color balanced, as this method has a color compensation effect due to the use of an image-adaptive mask. As a result, the improved image can better reflect the image’s subject, such as a sunset, and appears natural. However, when low-light images are stretched, the noise elements are also enhanced, causing part of the enhanced image to look dim and hazy. To tackle this issue, this paper proposes the use of guided image filtering based on using triple terms for the image-adaptive value. Images enhanced by the proposed method look natural and are objectively superior to those enhanced via other state-of-the-art methods.
6

Xu, Xin, Shiqin Wang, Zheng Wang, Xiaolong Zhang, and Ruimin Hu. "Exploring Image Enhancement for Salient Object Detection in Low Light Images". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–19. http://dx.doi.org/10.1145/3414839.

Abstract:
Low light images captured in a non-uniform illumination environment usually are degraded with the scene depth and the corresponding environment lights. This degradation results in severe object information loss in the degraded image modality, which makes the salient object detection more challenging due to low contrast property and artificial light influence. However, existing salient object detection models are developed based on the assumption that the images are captured under a sufficient brightness environment, which is impractical in real-world scenarios. In this work, we propose an image enhancement approach to facilitate the salient object detection in low light images. The proposed model directly embeds the physical lighting model into the deep neural network to describe the degradation of low light images, in which the environment light is treated as a point-wise variate and changes with local content. Moreover, a Non-Local-Block Layer is utilized to capture the difference of local content of an object against its local neighborhood favoring regions. For quantitative evaluation, we construct a low-light images dataset with pixel-level human-labeled ground-truth annotations and report promising results on four public datasets and our benchmark dataset.
7

Huang, Haofeng, Wenhan Yang, Yueyu Hu, Jiaying Liu, and Ling-Yu Duan. "Towards Low Light Enhancement With RAW Images". IEEE Transactions on Image Processing 31 (2022): 1391–405. http://dx.doi.org/10.1109/tip.2022.3140610.

8

Wang, Yufei, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-Pui Chau, and Alex Kot. "Low-Light Image Enhancement with Normalizing Flow". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2604–12. http://dx.doi.org/10.1609/aaai.v36i3.20162.

Abstract:
To enhance low-light images to normally-exposed ones is highly ill-posed, namely that the mapping relationship between them is one-to-many. Previous works based on the pixel-wise reconstruction losses and deterministic processes fail to capture the complex conditional distribution of normally exposed images, which results in improper brightness, residual noise, and artifacts. In this paper, we investigate modelling this one-to-many relationship via a proposed normalizing flow model: an invertible network that takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution. In this way, the conditional distribution of the normally exposed images can be well modeled, and the enhancement process, i.e., the other inference direction of the invertible network, is equivalent to being constrained by a loss function that better describes the manifold structure of natural images during the training. The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and artifact, and richer colors.
9

Matsui, Sosuke, Takahiro Okabe, Mihoko Shimano, and Yoichi Sato. "Image Enhancement of Low-light Scenes with Near-infrared Flash Images". IPSJ Transactions on Computer Vision and Applications 2 (2010): 215–23. http://dx.doi.org/10.2197/ipsjtcva.2.215.

10

Cao, Shuning, Yi Chang, Shengqi Xu, Houzhang Fang, and Luxin Yan. "Nonlinear Deblurring for Low-Light Saturated Image". Sensors 23, no. 8 (April 7, 2023): 3784. http://dx.doi.org/10.3390/s23083784.

Abstract:
Single image deblurring has achieved significant progress for natural daytime images. Saturation is a common phenomenon in blurry images, due to the low light conditions and long exposure times. However, conventional linear deblurring methods usually deal with natural blurry images well but result in severe ringing artifacts when recovering low-light saturated blurry images. To solve this problem, we formulate the saturation deblurring problem as a nonlinear model, in which all the saturated and unsaturated pixels are modeled adaptively. Specifically, we additionally introduce a nonlinear function to the convolution operator to accommodate the procedure of the saturation in the presence of the blurring. The proposed method has two advantages over previous methods. On the one hand, the proposed method achieves the same high quality of restoring the natural image as seen in conventional deblurring methods, while also reducing the estimation errors in saturated areas and suppressing ringing artifacts. On the other hand, compared with the recent saturated-based deblurring methods, the proposed method captures the formation of unsaturated and saturated degradations straightforwardly rather than with cumbersome and error-prone detection steps. Note that this nonlinear degradation model can be naturally formulated into a maximum a posteriori framework, and can be efficiently decoupled into several solvable sub-problems via the alternating direction method of multipliers (ADMM). Experimental results on both synthetic and real-world images demonstrate that the proposed deblurring algorithm outperforms the state-of-the-art low-light saturation-based deblurring methods.

Doctoral dissertations on the topic "Low-light images"

1

McKoen, K. M. H. H. "Digital restoration of low light level video images". Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343720.

2

Sankaran, Sharlini. "The influence of ambient light on the detectability of low-contrast lesions in simulated ultrasound images". Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175627273.

3

Avramenko, Viktor Vasylovych (Авраменко, Віктор Васильович), and K. Salnik. "Recognition of fragments of standard images at low light level and the presence of additive impulsive noise". Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/55739.

Abstract:
On the basis of the first-order integral disproportion function, an algorithm for recognizing fragments of standard images is created. It works on the analyzed image at a low light level and in the presence of additive impulse noise. For each pixel of the image, the algorithm finds a corresponding pixel in one of several standards.
4

Landin, Roman. "Object Detection with Deep Convolutional Neural Networks in Images with Various Lighting Conditions and Limited Resolution". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300055.

Abstract:
Computer vision is a key component of any autonomous system. Real world computer vision applications rely on a proper and accurate detection and classification of objects. A detection algorithm that doesn’t guarantee reasonable detection accuracy is not applicable in real time scenarios where safety is the main objective. Factors that impact detection accuracy are illumination conditions and image resolution. Both contribute to degradation of objects and lead to low classifications and detection accuracy. Recent development of Convolutional Neural Networks (CNNs) based algorithms offers possibilities for low-light (LL) image enhancement and super resolution (SR) image generation which makes it possible to combine such models in order to improve image quality and increase detection accuracy. This thesis evaluates different CNNs models for SR generation and LL enhancement by comparing generated images against ground truth images. To quantify the impact of the respective model on detection accuracy, a detection procedure was evaluated on generated images. Experimental results evaluated on images selected from NightOwls and Caltech Pedestrian datasets proved that super resolution image generation and low-light image enhancement improve detection accuracy by a substantial margin. Additionally, it has been proven that a cascade of SR generation and LL enhancement further boosts detection accuracy. However, the main drawback of such cascades is related to an increased computational time which limits possibilities for a range of real time applications.
5

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing". University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

6

Miller, Sarah Victoria. "Multi-Resolution Aitchison Geometry Image Denoising for Low-Light Photography". University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1596444315236623.

7

Zhao, Ping. "Low-Complexity Deep Learning-Based Light Field Image Quality Assessment". Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25977.

Abstract:
Light field image quality assessment (LF-IQA) has attracted increasing research interests due to the fast-growing demands for immersive media experience. The majority of existing LF-IQA metrics, however, heavily rely on high-complexity statistics-based feature extraction for the quality assessment task, which will be hardly sustainable in real-time applications or power-constrained consumer electronic devices in future real-life applications. In this research, a low-complexity Deep learning-based Light Field Image Quality Evaluator (DeLFIQE) is proposed to automatically and efficiently extract features for LF-IQA. To the best of my knowledge, this is the first attempt in LF-IQA with a dedicatedly designed convolutional neural network (CNN) based deep learning model. First, to significantly accelerate the training process, discriminative Epipolar Plane Image (EPI) patches, instead of the full light field images (LFIs) or full EPIs, are obtained and used as input for training and testing in DeLFIQE. By utilizing the EPI patches as input, the quality evaluation of 4-D LFIs is converted to the evaluation of 2-D EPI patches, thus significantly reducing the computational complexity. Furthermore, discriminative EPI patches are selected in such a way that they contain most of the distortion information, thus further improving the training efficiency. Second, to improve the quality assessment accuracy and robustness, a multi-task learning mechanism is designed and employed in DeLFIQE. Specifically, alongside the main task that predicts the final quality score, an auxiliary classification task is designed to classify LFIs based on their distortion types and severity levels. That way, the features are extracted to reflect the distortion types and severity levels, which in turn helps the main task improve the accuracy and robustness of the prediction. 
The extensive experiments show that DeLFIQE outperforms state-of-the-art metrics from both accuracy and correlation perspectives, especially on benchmark LF datasets of high angular resolutions. When tested on the LF datasets of low angular resolutions, however, the performance of DeLFIQE slightly declines, although still remains competitive. It is believed that it is due to the fact that the distortion feature information contained in the EPI patches gets reduced with the decrease of the LFIs’ angular resolutions, thus reducing the training efficiency and the overall performance of DeLFIQE. Therefore, a General-purpose deep learning-based Light Field Image Quality Evaluator (GeLFIQE) is proposed to perform accurately and efficiently on LF datasets of both high and low angular resolutions. First, a deep CNN model is pre-trained on one of the most comprehensive benchmark LF datasets of high angular resolutions containing abundant distortion features. Next, the features learned from the pre-trained model are transferred to the target LF dataset-specific CNN model to help improve the generalisation and overall performance on low-resolution LFIs containing fewer distortion features. The experimental results show that GeLFIQE substantially improves the performance of DeLFIQE on low-resolution LF datasets, which makes it a real general-purpose LF-IQA metric for LF datasets of various resolutions.
8

Anzagira, Leo. "Imaging performance in advanced small pixel and low light image sensors". Thesis, Dartmouth College, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10144602.

Abstract:

Even though image sensor performance has improved tremendously over the years, there are two key areas where sensor performance leaves room for improvement. Firstly, small pixel performance is limited by low full well, low dynamic range and high crosstalk, which greatly impact the sensor performance. Also, low light color image sensors, which use color filter arrays, have low sensitivity due to the selective light rejection by the color filters. The quanta image sensor (QIS) concept was proposed to mitigate the full well and dynamic range issues in small pixel image sensors. In this concept, spatial and temporal oversampling is used to address the full well and dynamic range issues. The QIS concept however does not address the issue of crosstalk. In this dissertation, the high spatial and temporal oversampling of the QIS concept is leveraged to enhance small pixel performance in two ways. Firstly, the oversampling allows polarization sensitive QIS jots to be incorporated to obtain polarization information. Secondly, the oversampling in the QIS concept allows the design of alternative color filter array patterns for mitigating the impact of crosstalk on color reproduction in small pixels. Finally, the problem of performing color imaging in low light conditions is tackled with a proposed stacked pixel concept. This concept which enables color sampling without the use of absorption color filters, improves low light sensitivity. Simulations are performed to demonstrate the advantage of this proposed pixel structure over sensors employing color filter arrays such as the Bayer pattern. A color correction algorithm for improvement of color reproduction in low light is also developed and demonstrates improved performance.

9

Hurle, Bernard Alfred. "The charge coupled device as a low light detector in beam foil spectroscopy". Thesis, University of Kent, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.332296.

10

Raventos, Joaquin. "New Test Set for Video Quality Benchmarking". Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1226.

Abstract:
A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in day time or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target’s contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video systems installations.

Books on the topic "Low-light images"

1

The low light photography field guide: Go beyond daylight to capture stunning low light images. Lewes, East Sussex: ILEX, 2011.

2

The low light photography field guide: Go beyond daylight to capture stunning low light images. Waltham, MA: Focal Press/Elsevier, 2011.

3

Johnson, C. B., Divyendu Sinha, Phillip A. Laplante, Society of Photo-optical Instrumentation Engineers, and Boeing Company, eds. Low-light-level and real-time imaging systems, components, and applications: 9-11 July 2002, Seattle, Washington, USA. Bellingham, Wash., USA: SPIE, 2003.

4

Scrofani, James William. An adaptive method for the enhanced fusion of low-light visible and uncooled thermal infrared imagery. Monterey, Calif: Naval Postgraduate School, 1997.

5

Master Low Light Photography: Create Beautiful Images from Twilight to Dawn. Amherst Media, Incorporated, 2016.

6

Freeman, Michael. Low Light Photography Field Guide: The Essential Guide to Getting Perfect Images in Challenging Light. Taylor & Francis Group, 2014.

7

An Adaptive Method for the Enhanced Fusion of Low-Light Visible and Uncooled Thermal Infrared Imagery. Storming Media, 1997.

8

Vanacker, Beatrijs, and Lieke van Deinsen, eds. Portraits and Poses. Leuven University Press, 2022. http://dx.doi.org/10.11116/9789461664532.

Abstract:
The complex relation between gender and the representation of intellectual authority has deep roots in European history. Portraits and Poses adopts a historical approach to shed new light on this topical subject. It addresses various modes and strategies by which learned women (authors, scientists, jurists, midwifes, painters, and others) sought to negotiate and legitimise their authority at the dawn of modern science in Early Modern and Enlightenment Europe (1600–1800). This volume explores the transnational dimensions of intellectual networks in France, Italy, Britain, the German states and the Low Countries. Drawing on a wide range of case studies from different spheres of professionalisation, it examines both individual and collective constructions of female intellectual authority through word and image. In its innovative combination of an interdisciplinary and transnational approach, this volume contributes to the growing literature on women and intellectual authority in the Early Modern Era and outlines contours for future research.

Book chapters on the topic "Low-light images"

1

Gogineni, Navyadhara, Yashashvini Rachamallu, Rineeth Saladi, and K. V. V. Bhanu Prakash. "Image Caption Generation for Low Light Images". In Communications in Computer and Information Science, 57–72. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20977-2_5.

2

Rollin, Joël. "Optics for Images at Low Light Levels". In Optics in Instruments, 235–66. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118744321.ch8.

3

Kavya, Avvaru Greeshma, Uruguti Aparna, and Pallikonda Sarah Suhasini. "Enhancement of Low-Light Images Using CNN". In Emerging Research in Computing, Information, Communication and Applications, 1–9. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1342-5_1.

4

Matsui, Sosuke, Takahiro Okabe, Mihoko Shimano, and Yoichi Sato. "Image Enhancement of Low-Light Scenes with Near-Infrared Flash Images". In Computer Vision – ACCV 2009, 213–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12307-8_20.

5

Xu, Chenmin, Shijie Hao, Yanrong Guo, and Richang Hong. "Enhancing Low-Light Images with JPEG Artifact Based on Image Decomposition". In Advances in Multimedia Information Processing – PCM 2018, 3–12. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00767-6_1.

6

Bai, Lianfa, Jing Han, and Jiang Yue. "Colourization of Low-Light-Level Images Based on Rule Mining". In Night Vision Processing and Understanding, 235–66. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-1669-2_8.

7

Sun, Jianing, Jiaao Zhang, Risheng Liu, and Fan Xin. "Brightening the Low-Light Images via a Dual Guided Network". In Artificial Intelligence, 240–51. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93046-2_21.

8

Li, Mading, Jiaying Liu, Wenhan Yang, and Zongming Guo. "Joint Denoising and Enhancement for Low-Light Images via Retinex Model". In Communications in Computer and Information Science, 91–99. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8108-8_9.

9

Jian, Wuzhen, Hui Zhao, Zhe Bai, and Xuewu Fan. "Low-Light Remote Sensing Images Enhancement Algorithm Based on Fully Convolutional Neural Network". In Proceedings of the 5th China High Resolution Earth Observation Conference (CHREOC 2018), 56–65. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6553-9_7.

10

Ghosh, Archan, Kalporoop Goswami, Riju Chatterjee, and Paramita Sarkar. "A Light SRGAN for Up-Scaling of Low Resolution and High Latency Images". In Communications in Computer and Information Science, 56–67. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81462-5_6.


Conference abstracts on the topic "Low-light images"

1

Hu, Zhe, Sunghyun Cho, Jue Wang, and Ming-Hsuan Yang. "Deblurring Low-Light Images with Light Streaks". In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. http://dx.doi.org/10.1109/cvpr.2014.432.

2

Cagigal, Manuel P., and Pedro M. Prieto. "Low-light-level images reconstruction". In EI 92, edited by James R. Sullivan, Benjamin M. Dawson, and Majid Rabbani. SPIE, 1992. http://dx.doi.org/10.1117/12.58331.

3

Cagigal, Manuel P., and Pedro M. Prieto. "Recovery from low light level images". In Education in Optics. SPIE, 1992. http://dx.doi.org/10.1117/12.57882.

4

Mpouziotas, Dimitrios, Eleftherios Mastrapas, Nikos Dimokas, Petros Karvelis, and Evripidis Glavas. "Object Detection for Low Light Images". In 2022 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM). IEEE, 2022. http://dx.doi.org/10.1109/seeda-cecnsm57760.2022.9932921.

5

Cheng, B. T., M. A. Fiddy, J. D. Newman, R. C. Van Vranken, and D. L. Clark. "Image restoration from low light level degraded data". In Quantum-Limited Imaging and Image Processing. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/qlip.1989.tuc4.

Abstract:
We present some preliminary work on the reconstruction of low contrast images for remote sensing type applications. We assume the data to be a set of noise degraded images, and report on the application of reconstruction techniques that both estimate the support of the image and use the triple correlation method to obtain the image itself. These reconstruction methods are applied to simulated data in the first instance.
6

Zhuo, Shaojie, Xiaopeng Zhang, Xiaoping Miao, and Terence Sim. "Enhancing low light images using near infrared flash images". In 2010 17th IEEE International Conference on Image Processing (ICIP 2010). IEEE, 2010. http://dx.doi.org/10.1109/icip.2010.5652900.

7

Hayashi, Masahiro, Fumihiko Sakaue, Jun Sato, Yoshiteru Koreeda, Masakatsu Higashikubo, and Hidenori Yamamoto. "Recovering High Intensity Images from Sequential Low Light Images". In 17th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0010891600003124.

8

Wernick, Miles N., and G. Michael Morris. "Image Classification at Low Light Levels". In Quantum-Limited Imaging and Image Processing. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/qlip.1986.tud2.

Abstract:
At low light levels, a two-dimensional photon-counting detector may be employed to provide an estimate of the cross-correlation between an input image and a set of reference images stored in computer memory. Recent experiments [1, 2] have demonstrated that in many cases, a small number of detected photoevents is sufficient to yield reliable image recognition. The binary nature of the photon-limited input, coupled with the high rate of operation afforded by commercially available detectors, leads to recognition times on the order of tens of milliseconds.
9

Puzovic, Snezana, Ranko Petrovic, Milos Pavlovic, and Srdan Stankovic. "Enhancement Algorithms for Low-Light and Low-Contrast Images". In 2020 19th International Symposium INFOTEH-JAHORINA (INFOTEH). IEEE, 2020. http://dx.doi.org/10.1109/infoteh48170.2020.9066316.

10

Isberg, Thomas A., and G. Michael Morris. "Rotation-Invariant image recognition at low light levels". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/oam.1985.tur4.

Abstract:
It has been recently demonstrated [1] that fast, reliable image recognition can be accomplished using a commercially available 2-D photon counting detector and position computing electronics. In this paper the method of circular harmonic function expansion for coherent rotation-invariant image recognition [2] is extended to the case of photon-limited image recognition. Theory for rotation-invariant filtering with incoherent illumination is presented and applied to photon-limited image recognition. Low light level input images are cross correlated with the square modulus of a single circular harmonic component of a high light level reference image stored in computer memory. The mean value of the correlation signal is found to be invariant with respect to rotation of the input image. The experimental results for the correlation signals for various input images are presented. Histograms of the correlation signal are shown and compared with theoretical predictions for the probability density functions. It is demonstrated that reliable image recognition, independent of the rotational orientation of the input, is possible with as few as 5000 detected photoevents.

Institutional reports on the topic "Low-light images"

1

Sinai, Michael J., Jason S. McCarley, and William K. Krebs. Scene Recognition with Infrared, Low-Light, and Sensor-Fused Imagery. Fort Belvoir, VA: Defense Technical Information Center, February 1999. http://dx.doi.org/10.21236/ada389643.

