A selection of scholarly literature on the topic "Low-light images"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Low-light images."

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Journal articles on the topic "Low-light images"

1

Patil, Akshay, Tejas Chaudhari, Ketan Deo, Kalpesh Sonawane, and Rupali Bora. "Low Light Image Enhancement for Dark Images." International Journal of Data Science and Analysis 6, no. 4 (2020): 99. http://dx.doi.org/10.11648/j.ijdsa.20200604.11.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Hu, Zhe, Sunghyun Cho, Jue Wang, and Ming-Hsuan Yang. "Deblurring Low-Light Images with Light Streaks." IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 10 (October 1, 2018): 2329–41. http://dx.doi.org/10.1109/tpami.2017.2768365.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Yang, Yi, Zhengguo Li, and Shiqian Wu. "Low-Light Image Brightening via Fusing Additional Virtual Images." Sensors 20, no. 16 (August 17, 2020): 4614. http://dx.doi.org/10.3390/s20164614.

Full text of the source
Abstract:
Capturing high-quality images via mobile devices in low-light or backlighting conditions is very challenging. In this paper, a new, single image brightening algorithm is proposed to enhance an image captured in low-light conditions. Two virtual images with larger exposure times are generated to increase brightness and enhance fine details of the underexposed regions. In order to reduce the brightness change, the virtual images are generated via intensity mapping functions (IMFs) which are computed using available camera response functions (CRFs). To avoid possible color distortion in the virtual image due to one-to-many mapping, a least square minimization problem is formulated to determine brightening factors for all pixels in the underexposed regions. In addition, an edge-preserving smoothing technique is adopted to avoid noise in the underexposed regions from being amplified in the virtual images. The final brightened image is obtained by fusing the original image and two virtual images via a gradient domain guided image filtering (GGIF) based multiscale exposure fusion (MEF) with properly defined weights for all the images. Experimental results show that the relative brightness and color are preserved better by the proposed algorithm. The details in bright regions are also preserved well in the final image. The proposed algorithm is expected to be useful for computational photography on smart phones.
Styles: APA, Harvard, Vancouver, ISO, etc.
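The fusion-based brightening summarised in the abstract above (Yang, Li, and Wu) follows the general pattern of multi-exposure fusion: the input and two brighter "virtual exposures" are blended with per-pixel well-exposedness weights. The sketch below illustrates only that general idea; the gamma-based mapping and Gaussian weights are assumptions, not the authors' CRF-derived intensity mapping functions or their GGIF-based multiscale fusion.

import numpy as np

def virtual_exposure(img, gain, gamma):
    # Hypothetical intensity mapping: brighten the image as if it were exposed longer.
    return np.clip((gain * img) ** gamma, 0.0, 1.0)

def brighten_by_fusion(img):
    # img: float array in [0, 1], shape (H, W) or (H, W, 3).
    exposures = [img,
                 virtual_exposure(img, gain=1.5, gamma=0.9),
                 virtual_exposure(img, gain=2.5, gamma=0.8)]
    weights = []
    for e in exposures:
        lum = e.mean(axis=-1) if e.ndim == 3 else e
        # Favour pixels whose brightness is close to mid-grey (a common fusion heuristic).
        weights.append(np.exp(-((lum - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6)
    wsum = np.sum(weights, axis=0)
    fused = np.zeros_like(img, dtype=np.float64)
    for e, w in zip(exposures, weights):
        wn = w / wsum
        fused += (wn[..., None] if img.ndim == 3 else wn) * e
    return np.clip(fused, 0.0, 1.0)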
4

Feng, Wei, Gui-ming Wu, Da-xing Zhao, and Hong-di Liu. "Multi images fusion Retinex for low light image enhancement." Optics and Precision Engineering 28, no. 3 (2020): 736–44. http://dx.doi.org/10.3788/ope.20202803.0736.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Lee, Hosang. "Successive Low-Light Image Enhancement Using an Image-Adaptive Mask." Symmetry 14, no. 6 (June 6, 2022): 1165. http://dx.doi.org/10.3390/sym14061165.

Full text of the source
Abstract:
Low-light images are obtained in dark environments or in environments where there is insufficient light. Because of this, low-light images have low intensity values and dimmed features, making it difficult to directly apply computer vision or image recognition software to them. Therefore, to use computer vision processing on low-light images, an image improvement procedure is needed. There have been many studies on how to enhance low-light images. However, some of the existing methods create artifacts and distortion effects in the resulting images. To improve low-light images, their contrast should be stretched naturally according to their features. This paper proposes the use of a low-light image enhancement method utilizing an image-adaptive mask that is composed of an image-adaptive ellipse. As a result, the low-light regions of the image are stretched and the bright regions are enhanced by the image-adaptive mask in a way that appears natural. Moreover, images that have been enhanced using the proposed method are color balanced, as this method has a color compensation effect due to the use of an image-adaptive mask. As a result, the improved image can better reflect the image’s subject, such as a sunset, and appears natural. However, when low-light images are stretched, the noise elements are also enhanced, causing part of the enhanced image to look dim and hazy. To tackle this issue, this paper proposes the use of guided image filtering based on triple terms for the image-adaptive value. Images enhanced by the proposed method look natural and are objectively superior to those enhanced via other state-of-the-art methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
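The guided image filtering that the abstract above (Lee) adopts to keep amplified noise under control is a standard edge-preserving smoother. Below is a minimal single-channel sketch of the classic box-filter guided filter, not the author's triple-term, image-adaptive variant; the window radius and eps are illustrative values.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    # guide, src: 2-D float arrays in [0, 1]; radius: half window size; eps: edge-preservation strength.
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I        # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p       # local covariance between guide and source
    a = cov_Ip / (var_I + eps)               # per-window linear coefficients
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b           # edge-preserving, smoothed output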
6

Xu, Xin, Shiqin Wang, Zheng Wang, Xiaolong Zhang, and Ruimin Hu. "Exploring Image Enhancement for Salient Object Detection in Low Light Images." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–19. http://dx.doi.org/10.1145/3414839.

Full text of the source
Abstract:
Low-light images captured in a non-uniform illumination environment are usually degraded by the scene depth and the corresponding environment lights. This degradation results in severe object information loss in the degraded image modality, which makes salient object detection more challenging due to the low contrast and the influence of artificial light. However, existing salient object detection models are developed based on the assumption that the images are captured under a sufficient brightness environment, which is impractical in real-world scenarios. In this work, we propose an image enhancement approach to facilitate salient object detection in low-light images. The proposed model directly embeds the physical lighting model into the deep neural network to describe the degradation of low-light images, in which the environment light is treated as a point-wise variate and changes with local content. Moreover, a Non-Local-Block Layer is utilized to capture the difference of local content of an object against its local neighborhood favoring regions. For quantitative evaluation, we construct a low-light image dataset with pixel-level human-labeled ground-truth annotations and report promising results on four public datasets and our benchmark dataset.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Huang, Haofeng, Wenhan Yang, Yueyu Hu, Jiaying Liu, and Ling-Yu Duan. "Towards Low Light Enhancement With RAW Images." IEEE Transactions on Image Processing 31 (2022): 1391–405. http://dx.doi.org/10.1109/tip.2022.3140610.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Wang, Yufei, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-Pui Chau, and Alex Kot. "Low-Light Image Enhancement with Normalizing Flow." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2604–12. http://dx.doi.org/10.1609/aaai.v36i3.20162.

Full text of the source
Abstract:
Enhancing low-light images to normally exposed ones is highly ill-posed: the mapping relationship between them is one-to-many. Previous works based on pixel-wise reconstruction losses and deterministic processes fail to capture the complex conditional distribution of normally exposed images, which results in improper brightness, residual noise, and artifacts. In this paper, we investigate modeling this one-to-many relationship via a proposed normalizing flow model: an invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution. In this way, the conditional distribution of the normally exposed images can be well modeled, and the enhancement process, i.e., the other inference direction of the invertible network, is equivalent to being constrained by a loss function that better describes the manifold structure of natural images during the training. The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and fewer artifacts, and richer colors.
Styles: APA, Harvard, Vancouver, ISO, etc.
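The conditional normalizing flow described in the abstract above (Wang et al.) is trained by minimising the negative log-likelihood of the normally exposed image under the change-of-variables formula. In assumed notation (not copied from the paper), with x the low-light condition, y the reference image, f_\theta(\cdot; x) the invertible network and p_Z a standard Gaussian:

\mathcal{L}(\theta) = -\log p_{Y \mid X}(y \mid x) = -\log p_Z\bigl(f_\theta(y; x)\bigr) - \log\left|\det \frac{\partial f_\theta(y; x)}{\partial y}\right|, \qquad p_Z = \mathcal{N}(0, I).

Enhancement then runs the other inference direction of the invertible network: draw (or fix) z from p_Z and compute \hat{y} = f_\theta^{-1}(z; x).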
9

Matsui, Sosuke, Takahiro Okabe, Mihoko Shimano, and Yoichi Sato. "Image Enhancement of Low-light Scenes with Near-infrared Flash Images." IPSJ Transactions on Computer Vision and Applications 2 (2010): 215–23. http://dx.doi.org/10.2197/ipsjtcva.2.215.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Cao, Shuning, Yi Chang, Shengqi Xu, Houzhang Fang, and Luxin Yan. "Nonlinear Deblurring for Low-Light Saturated Image." Sensors 23, no. 8 (April 7, 2023): 3784. http://dx.doi.org/10.3390/s23083784.

Full text of the source
Abstract:
Single image deblurring has achieved significant progress for natural daytime images. Saturation is a common phenomenon in blurry images, due to the low light conditions and long exposure times. However, conventional linear deblurring methods usually deal with natural blurry images well but result in severe ringing artifacts when recovering low-light saturated blurry images. To solve this problem, we formulate the saturation deblurring problem as a nonlinear model, in which all the saturated and unsaturated pixels are modeled adaptively. Specifically, we additionally introduce a nonlinear function to the convolution operator to accommodate the procedure of the saturation in the presence of the blurring. The proposed method has two advantages over previous methods. On the one hand, the proposed method achieves the same high quality of restoring the natural image as seen in conventional deblurring methods, while also reducing the estimation errors in saturated areas and suppressing ringing artifacts. On the other hand, compared with the recent saturated-based deblurring methods, the proposed method captures the formation of unsaturated and saturated degradations straightforwardly rather than with cumbersome and error-prone detection steps. Note that this nonlinear degradation model can be naturally formulated into a maximum a posteriori framework, and can be efficiently decoupled into several solvable sub-problems via the alternating direction method of multipliers (ADMM). Experimental results on both synthetic and real-world images demonstrate that the proposed deblurring algorithm outperforms the state-of-the-art low-light saturation-based deblurring methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
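The nonlinear degradation model mentioned in the abstract above (Cao et al.) can be written, in one common hedged form (notation assumed, not taken from the paper), as a clipped blur: the observation is b = R(k \ast l) + n, where l is the latent sharp image, k the blur kernel, n noise, and R(v) = \min(v, v_{\max}) models sensor saturation. The corresponding maximum a posteriori estimate is

\min_{l,\,k}\; \bigl\lVert R(k \ast l) - b \bigr\rVert_2^2 + \lambda\,\phi(l) + \gamma\,\psi(k),

with image and kernel priors \phi and \psi; introducing an auxiliary variable for the clipped term lets ADMM split this into the solvable sub-problems the abstract refers to.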

Dissertations on the topic "Low-light images"

1

McKoen, K. M. H. H. "Digital restoration of low light level video images." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343720.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Sankaran, Sharlini. "The influence of ambient light on the detectability of low-contrast lesions in simulated ultrasound images." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175627273.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Avramenko, Viktor Vasylovych, and K. Salnik. "Recognition of fragments of standard images at low light level and the presence of additive impulsive noise." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/55739.

Full text of the source
Abstract:
On the basis of the first-order integral disproportion function, an algorithm for recognizing fragments of reference (standard) images is created. It works on the analyzed image at low light levels and in the presence of additive impulse noise. For each pixel of the image, the algorithm finds the corresponding pixel in one of several reference images.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Landin, Roman. "Object Detection with Deep Convolutional Neural Networks in Images with Various Lighting Conditions and Limited Resolution." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300055.

Full text of the source
Abstract:
Computer vision is a key component of any autonomous system. Real world computer vision applications rely on a proper and accurate detection and classification of objects. A detection algorithm that doesn’t guarantee reasonable detection accuracy is not applicable in real-time scenarios where safety is the main objective. Factors that impact detection accuracy are illumination conditions and image resolution. Both contribute to degradation of objects and lead to low classification and detection accuracy. Recent development of Convolutional Neural Network (CNN) based algorithms offers possibilities for low-light (LL) image enhancement and super resolution (SR) image generation, which makes it possible to combine such models in order to improve image quality and increase detection accuracy. This thesis evaluates different CNN models for SR generation and LL enhancement by comparing generated images against ground truth images. To quantify the impact of the respective model on detection accuracy, a detection procedure was evaluated on generated images. Experimental results evaluated on images selected from the NightOwls and Caltech Pedestrian datasets showed that super resolution image generation and low-light image enhancement improve detection accuracy by a substantial margin. Additionally, it has been shown that a cascade of SR generation and LL enhancement further boosts detection accuracy. However, the main drawback of such cascades is the increased computational time, which limits possibilities for a range of real-time applications.
Styles: APA, Harvard, Vancouver, ISO, etc.
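The cascade evaluated in the thesis above (Landin), low-light enhancement followed by super-resolution followed by detection, can be sketched as a simple pipeline. The function names below are placeholders for whichever CNN models are plugged in; they are not APIs from the thesis, and the thesis notes that the extra stages increase inference time.

from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[float, float, float, float, float]   # x1, y1, x2, y2, confidence

def cascade_detect(image: np.ndarray,
                   enhance_low_light: Callable[[np.ndarray], np.ndarray],
                   super_resolve: Callable[[np.ndarray], np.ndarray],
                   detect_objects: Callable[[np.ndarray], List[Box]]) -> List[Box]:
    # Stage 1: brighten and denoise the low-light frame.
    enhanced = enhance_low_light(image)
    # Stage 2: raise the effective resolution of the enhanced frame.
    upscaled = super_resolve(enhanced)
    # Stage 3: run the object detector on the improved frame.
    return detect_objects(upscaled)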
5

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Miller, Sarah Victoria. "Multi-Resolution Aitchison Geometry Image Denoising for Low-Light Photography." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1596444315236623.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Zhao, Ping. "Low-Complexity Deep Learning-Based Light Field Image Quality Assessment." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25977.

Full text of the source
Abstract:
Light field image quality assessment (LF-IQA) has attracted increasing research interests due to the fast-growing demands for immersive media experience. The majority of existing LF-IQA metrics, however, heavily rely on high-complexity statistics-based feature extraction for the quality assessment task, which will be hardly sustainable in real-time applications or power-constrained consumer electronic devices in future real-life applications. In this research, a low-complexity Deep learning-based Light Field Image Quality Evaluator (DeLFIQE) is proposed to automatically and efficiently extract features for LF-IQA. To the best of my knowledge, this is the first attempt in LF-IQA with a dedicatedly designed convolutional neural network (CNN) based deep learning model. First, to significantly accelerate the training process, discriminative Epipolar Plane Image (EPI) patches, instead of the full light field images (LFIs) or full EPIs, are obtained and used as input for training and testing in DeLFIQE. By utilizing the EPI patches as input, the quality evaluation of 4-D LFIs is converted to the evaluation of 2-D EPI patches, thus significantly reducing the computational complexity. Furthermore, discriminative EPI patches are selected in such a way that they contain most of the distortion information, thus further improving the training efficiency. Second, to improve the quality assessment accuracy and robustness, a multi-task learning mechanism is designed and employed in DeLFIQE. Specifically, alongside the main task that predicts the final quality score, an auxiliary classification task is designed to classify LFIs based on their distortion types and severity levels. That way, the features are extracted to reflect the distortion types and severity levels, which in turn helps the main task improve the accuracy and robustness of the prediction. The extensive experiments show that DeLFIQE outperforms state-of-the-art metrics from both accuracy and correlation perspectives, especially on benchmark LF datasets of high angular resolutions. When tested on the LF datasets of low angular resolutions, however, the performance of DeLFIQE slightly declines, although still remains competitive. It is believed that it is due to the fact that the distortion feature information contained in the EPI patches gets reduced with the decrease of the LFIs’ angular resolutions, thus reducing the training efficiency and the overall performance of DeLFIQE. Therefore, a General-purpose deep learning-based Light Field Image Quality Evaluator (GeLFIQE) is proposed to perform accurately and efficiently on LF datasets of both high and low angular resolutions. First, a deep CNN model is pre-trained on one of the most comprehensive benchmark LF datasets of high angular resolutions containing abundant distortion features. Next, the features learned from the pre-trained model are transferred to the target LF dataset-specific CNN model to help improve the generalisation and overall performance on low-resolution LFIs containing fewer distortion features. The experimental results show that GeLFIQE substantially improves the performance of DeLFIQE on low-resolution LF datasets, which makes it a real general-purpose LF-IQA metric for LF datasets of various resolutions.
Styles: APA, Harvard, Vancouver, ISO, etc.
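The multi-task design described in the abstract above (Zhao), a main quality-regression head plus an auxiliary distortion-classification head trained jointly on shared features, corresponds to a combined loss of the following general form. This is a hedged PyTorch-style sketch; the feature size, number of distortion classes, and auxiliary weight are assumptions, not values from the thesis.

import torch.nn as nn
import torch.nn.functional as F

class MultiTaskIQAHead(nn.Module):
    # Shared EPI-patch features feed a quality-score regressor and a distortion classifier.
    def __init__(self, feat_dim=256, n_distortion_classes=10):
        super().__init__()
        self.score_head = nn.Linear(feat_dim, 1)                    # main task: quality score
        self.cls_head = nn.Linear(feat_dim, n_distortion_classes)   # auxiliary task: distortion type/level

    def forward(self, feats):
        return self.score_head(feats).squeeze(-1), self.cls_head(feats)

def multitask_loss(pred_score, true_score, pred_cls, true_cls, aux_weight=0.5):
    # Joint objective: regression on the quality score plus weighted auxiliary classification.
    return F.mse_loss(pred_score, true_score) + aux_weight * F.cross_entropy(pred_cls, true_cls)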
8

Anzagira, Leo. "Imaging performance in advanced small pixel and low light image sensors." Thesis, Dartmouth College, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10144602.

Full text of the source
Abstract:

Even though image sensor performance has improved tremendously over the years, there are two key areas where sensor performance leaves room for improvement. Firstly, small pixel performance is limited by low full well, low dynamic range and high crosstalk, which greatly impact the sensor performance. Also, low light color image sensors, which use color filter arrays, have low sensitivity due to the selective light rejection by the color filters. The quanta image sensor (QIS) concept was proposed to mitigate the full well and dynamic range issues in small pixel image sensors. In this concept, spatial and temporal oversampling is used to address the full well and dynamic range issues. The QIS concept however does not address the issue of crosstalk. In this dissertation, the high spatial and temporal oversampling of the QIS concept is leveraged to enhance small pixel performance in two ways. Firstly, the oversampling allows polarization sensitive QIS jots to be incorporated to obtain polarization information. Secondly, the oversampling in the QIS concept allows the design of alternative color filter array patterns for mitigating the impact of crosstalk on color reproduction in small pixels. Finally, the problem of performing color imaging in low light conditions is tackled with a proposed stacked pixel concept. This concept which enables color sampling without the use of absorption color filters, improves low light sensitivity. Simulations are performed to demonstrate the advantage of this proposed pixel structure over sensors employing color filter arrays such as the Bayer pattern. A color correction algorithm for improvement of color reproduction in low light is also developed and demonstrates improved performance.

Styles: APA, Harvard, Vancouver, ISO, etc.
9

Hurle, Bernard Alfred. "The charge coupled device as a low light detector in beam foil spectroscopy." Thesis, University of Kent, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.332296.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Raventos, Joaquin. "New Test Set for Video Quality Benchmarking." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1226.

Full text of the source
Abstract:
A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in day time or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target’s contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video systems installations.
Styles: APA, Harvard, Vancouver, ISO, etc.

Books on the topic "Low-light images"

1

The low light photography field guide: Go beyond daylight to capture stunning low light images. Lewes, East Sussex: ILEX, 2011.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

The low light photography field guide: Go beyond daylight to capture stunning low light images. Waltham, MA: Focal Press/Elsevier, 2011.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Johnson, C. B., Divyendu Sinha, Phillip A. Laplante, Society of Photo-Optical Instrumentation Engineers, and Boeing Company, eds. Low-light-level and real-time imaging systems, components, and applications: 9-11 July 2002, Seattle, Washington, USA. Bellingham, Wash., USA: SPIE, 2003.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Scrofani, James William. An adaptive method for the enhanced fusion of low-light visible and uncooled thermal infrared imagery. Monterey, Calif: Naval Postgraduate School, 1997.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Master Low Light Photography: Create Beautiful Images from Twilight to Dawn. Amherst Media, Incorporated, 2016.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Freeman, Michael. Low Light Photography Field Guide: The Essential Guide to Getting Perfect Images in Challenging Light. Taylor & Francis Group, 2014.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

An Adaptive Method for the Enhanced Fusion of Low-Light Visible and Uncooled Thermal Infrared Imagery. Storming Media, 1997.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Vanacker, Beatrijs, and Lieke van Deinsen, eds. Portraits and Poses. Leuven University Press, 2022. http://dx.doi.org/10.11116/9789461664532.

Full text of the source
Abstract:
The complex relation between gender and the representation of intellectual authority has deep roots in European history. Portraits and Poses adopts a historical approach to shed new light on this topical subject. It addresses various modes and strategies by which learned women (authors, scientists, jurists, midwifes, painters, and others) sought to negotiate and legitimise their authority at the dawn of modern science in Early Modern and Enlightenment Europe (1600–1800). This volume explores the transnational dimensions of intellectual networks in France, Italy, Britain, the German states and the Low Countries. Drawing on a wide range of case studies from different spheres of professionalisation, it examines both individual and collective constructions of female intellectual authority through word and image. In its innovative combination of an interdisciplinary and transnational approach, this volume contributes to the growing literature on women and intellectual authority in the Early Modern Era and outlines contours for future research.
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Low-light images"

1

Gogineni, Navyadhara, Yashashvini Rachamallu, Rineeth Saladi, and K. V. V. Bhanu Prakash. "Image Caption Generation for Low Light Images." In Communications in Computer and Information Science, 57–72. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20977-2_5.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Rollin, Joël. "Optics for Images at Low Light Levels." In Optics in Instruments, 235–66. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118744321.ch8.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Kavya, Avvaru Greeshma, Uruguti Aparna, and Pallikonda Sarah Suhasini. "Enhancement of Low-Light Images Using CNN." In Emerging Research in Computing, Information, Communication and Applications, 1–9. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1342-5_1.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Matsui, Sosuke, Takahiro Okabe, Mihoko Shimano, and Yoichi Sato. "Image Enhancement of Low-Light Scenes with Near-Infrared Flash Images." In Computer Vision – ACCV 2009, 213–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12307-8_20.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Xu, Chenmin, Shijie Hao, Yanrong Guo, and Richang Hong. "Enhancing Low-Light Images with JPEG Artifact Based on Image Decomposition." In Advances in Multimedia Information Processing – PCM 2018, 3–12. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00767-6_1.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Bai, Lianfa, Jing Han, and Jiang Yue. "Colourization of Low-Light-Level Images Based on Rule Mining." In Night Vision Processing and Understanding, 235–66. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-1669-2_8.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Sun, Jianing, Jiaao Zhang, Risheng Liu, and Xin Fan. "Brightening the Low-Light Images via a Dual Guided Network." In Artificial Intelligence, 240–51. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93046-2_21.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Li, Mading, Jiaying Liu, Wenhan Yang, and Zongming Guo. "Joint Denoising and Enhancement for Low-Light Images via Retinex Model." In Communications in Computer and Information Science, 91–99. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8108-8_9.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Jian, Wuzhen, Hui Zhao, Zhe Bai, and Xuewu Fan. "Low-Light Remote Sensing Images Enhancement Algorithm Based on Fully Convolutional Neural Network." In Proceedings of the 5th China High Resolution Earth Observation Conference (CHREOC 2018), 56–65. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6553-9_7.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Ghosh, Archan, Kalporoop Goswami, Riju Chatterjee, and Paramita Sarkar. "A Light SRGAN for Up-Scaling of Low Resolution and High Latency Images." In Communications in Computer and Information Science, 56–67. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81462-5_6.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Low-light images"

1

Hu, Zhe, Sunghyun Cho, Jue Wang, and Ming-Hsuan Yang. "Deblurring Low-Light Images with Light Streaks." In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. http://dx.doi.org/10.1109/cvpr.2014.432.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Cagigal, Manuel P., and Pedro M. Prieto. "Low-light-level images reconstruction." In EI 92, edited by James R. Sullivan, Benjamin M. Dawson, and Majid Rabbani. SPIE, 1992. http://dx.doi.org/10.1117/12.58331.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Cagigal, Manuel P., and Pedro M. Prieto. "Recovery from low light level images." In Education in Optics. SPIE, 1992. http://dx.doi.org/10.1117/12.57882.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Mpouziotas, Dimitrios, Eleftherios Mastrapas, Nikos Dimokas, Petros Karvelis, and Evripidis Glavas. "Object Detection for Low Light Images." In 2022 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM). IEEE, 2022. http://dx.doi.org/10.1109/seeda-cecnsm57760.2022.9932921.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Cheng, B. T., M. A. Fiddy, J. D. Newman, R. C. Van Vranken, and D. L. Clark. "Image restoration from low light level degraded data." In Quantum-Limited Imaging and Image Processing. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/qlip.1989.tuc4.

Full text of the source
Abstract:
We present some preliminary work on the reconstruction of low contrast images for remote sensing type applications. We assume the data to be a set of noise-degraded images, and report on the application of reconstruction techniques that both estimate the support of the image and use the triple correlation method to obtain the image itself. These reconstruction methods are applied to simulated data in the first instance.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Zhuo, Shaojie, Xiaopeng Zhang, Xiaoping Miao, and Terence Sim. "Enhancing low light images using near infrared flash images." In 2010 17th IEEE International Conference on Image Processing (ICIP 2010). IEEE, 2010. http://dx.doi.org/10.1109/icip.2010.5652900.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Hayashi, Masahiro, Fumihiko Sakaue, Jun Sato, Yoshiteru Koreeda, Masakatsu Higashikubo, and Hidenori Yamamoto. "Recovering High Intensity Images from Sequential Low Light Images." In 17th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0010891600003124.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Wernick, Miles N., and G. Michael Morris. "Image Classification at Low Light Levels." In Quantum-Limited Imaging and Image Processing. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/qlip.1986.tud2.

Full text of the source
Abstract:
At low light levels, a two-dimensional photon-counting detector may be employed to provide an estimate of the cross-correlation between an input image and a set of reference images stored in computer memory. Recent experiments [1,2] have demonstrated that in many cases, a small number of detected photoevents is sufficient to yield reliable image recognition. The binary nature of the photon-limited input, coupled with the high rate of operation afforded by commercially available detectors, leads to recognition times on the order of tens of milliseconds.
Styles: APA, Harvard, Vancouver, ISO, etc.
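The photon-limited correlation classifier summarised in the abstract above (Wernick and Morris) can be illustrated with a toy sketch: each detected photoevent at pixel (r, c) adds the value of every stored reference image at that pixel to the corresponding zero-shift correlation estimate, and the reference with the largest accumulated score is reported. A minimal sketch under those assumptions (simulated photoevents, not the optics or electronics of the original experiments):

import numpy as np

def classify_from_photoevents(events, references):
    # events: iterable of (row, col) photon detection coordinates.
    # references: list of 2-D arrays of identical shape held in memory.
    scores = np.zeros(len(references))
    for (r, c) in events:
        for k, ref in enumerate(references):
            scores[k] += ref[r, c]            # each photoevent samples every reference once
    return int(np.argmax(scores))             # index of the best-matching reference

# Toy usage: 5000 photoevents drawn from a scene that matches reference 1.
rng = np.random.default_rng(0)
refs = [rng.random((64, 64)) for _ in range(3)]
density = refs[1] / refs[1].sum()             # treat reference 1 as the photon emission density
flat = rng.choice(64 * 64, size=5000, p=density.ravel())
events = [(i // 64, i % 64) for i in flat]
print(classify_from_photoevents(events, refs))  # expected output: 1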
9

Puzovic, Snezana, Ranko Petrovic, Milos Pavlovic, and Srdan Stankovic. "Enhancement Algorithms for Low-Light and Low-Contrast Images." In 2020 19th International Symposium INFOTEH-JAHORINA (INFOTEH). IEEE, 2020. http://dx.doi.org/10.1109/infoteh48170.2020.9066316.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Isberg, Thomas A., and G. Michael Morris. "Rotation-Invariant image recognition at low light levels." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/oam.1985.tur4.

Full text of the source
Abstract:
It has been recently demonstrated [1] that fast, reliable image recognition can be accomplished using a commercially available 2-D photon counting detector and position computing electronics. In this paper the method of circular harmonic function expansion for coherent rotation-invariant image recognition [2] is extended to the case of photon-limited image recognition. Theory for rotation-invariant filtering with incoherent illumination is presented and applied to photon-limited image recognition. Low light level input images are cross correlated with the square modulus of a single circular harmonic component of a high light level reference image stored in computer memory. The mean value of the correlation signal is found to be invariant with respect to rotation of the input image. The experimental results for the correlation signals for various input images are presented. Histograms of the correlation signal are shown and compared with theoretical predictions for the probability density functions. It is demonstrated that reliable image recognition, independent of the rotational orientation of the input, is possible with as few as 5000 detected photoevents.
Styles: APA, Harvard, Vancouver, ISO, etc.

Reports of organizations on the topic "Low-light images"

1

Sinai, Michael J., Jason S. McCarley, and William K. Krebs. Scene Recognition with Infrared, Low-Light, and Sensor-Fused Imagery. Fort Belvoir, VA: Defense Technical Information Center, February 1999. http://dx.doi.org/10.21236/ada389643.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.