Academic literature on the topic 'Low-light images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Low-light images.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Dissertations / Theses on the topic "Low-light images"

1

McKoen, K. M. H. H. "Digital restoration of low light level video images." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sankaran, Sharlini. "The influence of ambient light on the detectability of low-contrast lesions in simulated ultrasound images." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175627273.

Full text
3

Avramenko, Viktor Vasylovych, and K. Salnik. "Recognition of fragments of standard images at low light level and the presence of additive impulsive noise." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/55739.

Full text
Abstract:
On the basis of the first-order integral disproportion function, an algorithm for recognizing fragments of standards is created. It works on a low-light image under analysis in the presence of additive impulse noise. For each pixel of the image, the algorithm finds a corresponding pixel in one of several standards.
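The abstract above describes assigning each pixel of a noisy, low-light image to a pixel in one of several reference "standards." The thesis's integral disproportion function is not reproduced here; the sketch below only illustrates the general idea with a median prefilter for impulse noise and nearest-value matching, and all names and parameters are hypothetical:

```python
import numpy as np

def match_pixels(image, standards, window=3):
    """Assign each pixel of a noisy image to the closest 'standard'
    (reference template), after suppressing impulse noise with a
    median filter.  A generic robust-matching sketch, not the
    integral-disproportion algorithm of the thesis."""
    pad = window // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    den = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            # Median over the local window removes salt-and-pepper spikes.
            den[i, j] = np.median(padded[i:i + window, j:j + window])
    # For each pixel, pick the standard whose value is nearest.
    stack = np.stack([s.astype(float) for s in standards])  # (k, h, w)
    return np.argmin(np.abs(stack - den), axis=0)           # (h, w) labels
```

With a dark image corrupted by a single impulse, the median prefilter removes the spike and every pixel is matched to the dark standard rather than the bright one.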
4

Landin, Roman. "Object Detection with Deep Convolutional Neural Networks in Images with Various Lighting Conditions and Limited Resolution." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300055.

Full text
Abstract:
Computer vision is a key component of any autonomous system. Real-world computer vision applications rely on proper and accurate detection and classification of objects. A detection algorithm that cannot guarantee reasonable detection accuracy is not applicable in real-time scenarios where safety is the main objective. Factors that impact detection accuracy include illumination conditions and image resolution; both contribute to degradation of objects and lead to low classification and detection accuracy. Recent development of Convolutional Neural Network (CNN) based algorithms offers possibilities for low-light (LL) image enhancement and super-resolution (SR) image generation, which makes it possible to combine such models to improve image quality and increase detection accuracy. This thesis evaluates different CNN models for SR generation and LL enhancement by comparing generated images against ground-truth images. To quantify the impact of each model on detection accuracy, a detection procedure was evaluated on the generated images. Experimental results on images selected from the NightOwls and Caltech Pedestrian datasets showed that super-resolution image generation and low-light image enhancement improve detection accuracy by a substantial margin. Additionally, a cascade of SR generation and LL enhancement further boosts detection accuracy. The main drawback of such cascades, however, is increased computational time, which limits their use in a range of real-time applications.
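The cascade evaluated above, low-light enhancement followed by super-resolution followed by detection, can be sketched as a pipeline. The CNN models themselves are replaced here by trivial stand-ins (gamma correction and nearest-neighbour upsampling), so this only illustrates the composition order, not the thesis's actual networks:

```python
import numpy as np

def enhance_low_light(img, gamma=0.5):
    """Toy low-light enhancement via gamma correction; a hypothetical
    stand-in for the CNN enhancer evaluated in the thesis."""
    return np.clip((img / 255.0) ** gamma * 255.0, 0.0, 255.0)

def super_resolve(img, scale=2):
    """Toy x2 super-resolution by nearest-neighbour upsampling,
    standing in for a CNN SR model."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def detect_cascaded(img, detector):
    """Run a detector on the cascaded output: LL enhancement first,
    then SR, mirroring the cascade order evaluated in the thesis."""
    return detector(super_resolve(enhance_low_light(img)))
```

The cost the abstract mentions is visible even in this sketch: each extra stage processes the whole frame before the detector runs, so latency grows with every model added to the cascade.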
5

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

Full text
6

Miller, Sarah Victoria. "Multi-Resolution Aitchison Geometry Image Denoising for Low-Light Photography." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1596444315236623.

Full text
7

Zhao, Ping. "Low-Complexity Deep Learning-Based Light Field Image Quality Assessment." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25977.

Full text
Abstract:
Light field image quality assessment (LF-IQA) has attracted increasing research interest due to fast-growing demand for immersive media experiences. The majority of existing LF-IQA metrics, however, rely heavily on high-complexity statistics-based feature extraction, which will hardly be sustainable in real-time applications or power-constrained consumer electronic devices. In this research, a low-complexity Deep learning-based Light Field Image Quality Evaluator (DeLFIQE) is proposed to automatically and efficiently extract features for LF-IQA. To the best of my knowledge, this is the first attempt in LF-IQA with a dedicated convolutional neural network (CNN) based deep learning model. First, to significantly accelerate training, discriminative Epipolar Plane Image (EPI) patches, instead of full light field images (LFIs) or full EPIs, are used as input for training and testing in DeLFIQE. By using EPI patches as input, the quality evaluation of 4-D LFIs is converted to the evaluation of 2-D EPI patches, significantly reducing the computational complexity. Furthermore, the discriminative EPI patches are selected so that they contain most of the distortion information, further improving training efficiency. Second, to improve quality assessment accuracy and robustness, a multi-task learning mechanism is designed and employed in DeLFIQE. Specifically, alongside the main task of predicting the final quality score, an auxiliary classification task classifies LFIs by distortion type and severity level. That way, the extracted features reflect the distortion types and severity levels, which in turn helps the main task improve prediction accuracy and robustness.
Extensive experiments show that DeLFIQE outperforms state-of-the-art metrics in both accuracy and correlation, especially on benchmark LF datasets of high angular resolution. When tested on LF datasets of low angular resolution, however, its performance declines slightly, although it remains competitive. This is likely because the distortion information contained in the EPI patches diminishes as the LFIs' angular resolution decreases, reducing the training efficiency and overall performance of DeLFIQE. Therefore, a General-purpose deep learning-based Light Field Image Quality Evaluator (GeLFIQE) is proposed to perform accurately and efficiently on LF datasets of both high and low angular resolution. First, a deep CNN model is pre-trained on one of the most comprehensive benchmark LF datasets of high angular resolution, containing abundant distortion features. Next, the features learned from the pre-trained model are transferred to the target LF dataset-specific CNN model to improve generalisation and overall performance on low-resolution LFIs containing fewer distortion features. Experimental results show that GeLFIQE substantially improves on DeLFIQE's performance on low-resolution LF datasets, making it a truly general-purpose LF-IQA metric for LF datasets of various resolutions.
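The EPI-patch idea above reduces 4-D light field data to 2-D slices before any learning happens. A minimal sketch, assuming a hypothetical lf[u, v, s, t] indexing convention (horizontal view axis u, vertical view axis v, spatial axes s and t); the thesis's discriminative patch-selection criterion is omitted:

```python
import numpy as np

def horizontal_epi(lf, v0, t0):
    """Extract a horizontal Epipolar Plane Image from a 4-D light
    field lf[u, v, s, t]: fix the vertical view index v0 and image
    row t0, keeping the view axis u and the column axis s."""
    return lf[:, v0, :, t0]

def epi_patches(epi, size=8, stride=8):
    """Tile a 2-D EPI into square patches.  DeLFIQE would keep only
    the most discriminative patches; here all tiles are returned."""
    h, w = epi.shape
    return [epi[i:i + size, j:j + size]
            for i in range(0, h - size + 1, stride)
            for j in range(0, w - size + 1, stride)]
```

Because each EPI is only (number of views) x (image width) in size, a CNN sees far smaller inputs than the full 4-D light field, which is the complexity saving the abstract describes.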
8

Anzagira, Leo. "Imaging performance in advanced small pixel and low light image sensors." Thesis, Dartmouth College, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10144602.

Full text
Abstract:
Even though image sensor performance has improved tremendously over the years, there are two key areas where sensor performance leaves room for improvement. Firstly, small pixel performance is limited by low full well, low dynamic range, and high crosstalk, which greatly impact the sensor performance. Also, low-light color image sensors, which use color filter arrays, have low sensitivity due to the selective light rejection by the color filters. The quanta image sensor (QIS) concept was proposed to mitigate the full well and dynamic range issues in small pixel image sensors. In this concept, spatial and temporal oversampling is used to address the full well and dynamic range issues. The QIS concept, however, does not address the issue of crosstalk. In this dissertation, the high spatial and temporal oversampling of the QIS concept is leveraged to enhance small pixel performance in two ways. Firstly, the oversampling allows polarization-sensitive QIS jots to be incorporated to obtain polarization information. Secondly, the oversampling in the QIS concept allows the design of alternative color filter array patterns for mitigating the impact of crosstalk on color reproduction in small pixels. Finally, the problem of performing color imaging in low-light conditions is tackled with a proposed stacked pixel concept. This concept, which enables color sampling without absorption color filters, improves low-light sensitivity. Simulations demonstrate the advantage of this proposed pixel structure over sensors employing color filter arrays such as the Bayer pattern. A color correction algorithm for improving color reproduction in low light is also developed and demonstrates improved performance.
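The spatio-temporal oversampling at the heart of the QIS concept above amounts to pooling many binary "jot" measurements into one intensity estimate. A toy illustration of that pooling step only, not a full QIS reconstruction pipeline:

```python
import numpy as np

def qis_reconstruct(jot_frames):
    """Pool oversampled binary QIS 'jot' frames over time: the
    fraction of 1-hits per jot approximates the local photon flux."""
    frames = np.asarray(jot_frames, dtype=float)  # (T, h, w) of 0/1 hits
    return frames.mean(axis=0)                    # per-jot hit rate

def spatial_bin(rate, k=2):
    """Average k x k jot neighbourhoods into one output pixel,
    trading oversampled spatial resolution for dynamic range / SNR."""
    h, w = rate.shape
    trimmed = rate[:h - h % k, :w - w % k]
    return trimmed.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

Because each jot saturates at a single hit, it is the temporal and spatial averaging that recovers intensity levels, which is how the concept sidesteps the low full well of very small pixels.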
9

Hurle, Bernard Alfred. "The charge coupled device as a low light detector in beam foil spectroscopy." Thesis, University of Kent, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.332296.

Full text
10

Raventos, Joaquin. "New Test Set for Video Quality Benchmarking." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1226.

Full text
Abstract:
A new test set design and benchmarking approach (US patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in daytime or low-light conditions. It uses randomized targets based on extensive application of photometry, geometrical optics, and digital media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time, and the approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video system installations.
More sources
