Dissertations on the topic "Low-light images"

To view other types of publications on this topic, follow the link: Low-light images.

Format your reference in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 19 dissertations for your research on the topic "Low-light images".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract, when these are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

McKoen, K. M. H. H. "Digital restoration of low light level video images." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343720.

2

Sankaran, Sharlini. "The influence of ambient light on the detectability of low-contrast lesions in simulated ultrasound images." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175627273.

3

Авраменко, Віктор Васильович, Виктор Васильевич Авраменко, Viktor Vasylovych Avramenko, and К. Salnik. "Recognition of fragments of standard images at low light level and the presence of additive impulsive noise." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/55739.

Abstract:
An algorithm for recognizing fragments of standard images is constructed on the basis of a first-order integral disproportion function. It operates on the analyzed image under low light and in the presence of additive impulse noise, and for each pixel of the image it finds the corresponding pixel in one of several standard images.
4

Landin, Roman. "Object Detection with Deep Convolutional Neural Networks in Images with Various Lighting Conditions and Limited Resolution." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300055.

Abstract:
Computer vision is a key component of any autonomous system. Real-world computer vision applications rely on proper and accurate detection and classification of objects. A detection algorithm that does not guarantee reasonable detection accuracy is not applicable in real-time scenarios where safety is the main objective. Factors that impact detection accuracy include illumination conditions and image resolution; both contribute to degradation of objects and lead to low classification and detection accuracy. Recent development of Convolutional Neural Network (CNN) based algorithms offers possibilities for low-light (LL) image enhancement and super-resolution (SR) image generation, which makes it possible to combine such models in order to improve image quality and increase detection accuracy. This thesis evaluates different CNN models for SR generation and LL enhancement by comparing generated images against ground-truth images. To quantify the impact of each model on detection accuracy, a detection procedure was evaluated on the generated images. Experimental results on images selected from the NightOwls and Caltech Pedestrian datasets showed that super-resolution image generation and low-light image enhancement improve detection accuracy by a substantial margin. Additionally, a cascade of SR generation and LL enhancement further boosts detection accuracy. The main drawback of such cascades, however, is increased computational time, which limits their use in a range of real-time applications.
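The cascade the thesis evaluates (low-light enhancement, then super-resolution, then detection) can be sketched with toy stand-ins; the gamma curve, nearest-neighbour upscaler, and threshold "detector" below are illustrative placeholders, not the CNN models the thesis actually benchmarks:

```python
import numpy as np

def enhance_low_light(img, gamma=0.5):
    """Toy low-light enhancement: a gamma curve that brightens dark regions.
    (Stand-in for the CNN-based LL enhancement models.)"""
    return np.clip(img, 0.0, 1.0) ** gamma

def super_resolve(img, scale=2):
    """Toy super-resolution: nearest-neighbour upsampling.
    (Stand-in for a CNN SR generator.)"""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def detect(img, threshold=0.25):
    """Stub detector: count pixels above a brightness threshold."""
    return int((img > threshold).sum())

# Cascade: LL enhancement followed by SR generation, then detection.
dark = np.full((4, 4), 0.09)            # uniformly under-exposed frame
restored = super_resolve(enhance_low_light(dark))
print(restored.shape)                   # (8, 8)
print(detect(restored) > detect(dark))  # True: enhancement raises detections
```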
5

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

6

Miller, Sarah Victoria. "Mulit-Resolution Aitchison Geometry Image Denoising for Low-Light Photography." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1596444315236623.

7

Zhao, Ping. "Low-Complexity Deep Learning-Based Light Field Image Quality Assessment." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25977.

Abstract:
Light field image quality assessment (LF-IQA) has attracted increasing research interest due to the fast-growing demand for immersive media experiences. The majority of existing LF-IQA metrics, however, heavily rely on high-complexity statistics-based feature extraction for the quality assessment task, which will be hardly sustainable in real-time applications or power-constrained consumer electronic devices. In this research, a low-complexity Deep learning-based Light Field Image Quality Evaluator (DeLFIQE) is proposed to automatically and efficiently extract features for LF-IQA. To the best of my knowledge, this is the first attempt in LF-IQA with a dedicated convolutional neural network (CNN) based deep learning model. First, to significantly accelerate the training process, discriminative Epipolar Plane Image (EPI) patches, instead of the full light field images (LFIs) or full EPIs, are obtained and used as input for training and testing in DeLFIQE. By utilizing the EPI patches as input, the quality evaluation of 4-D LFIs is converted to the evaluation of 2-D EPI patches, thus significantly reducing the computational complexity. Furthermore, discriminative EPI patches are selected in such a way that they contain most of the distortion information, further improving the training efficiency. Second, to improve the quality assessment accuracy and robustness, a multi-task learning mechanism is designed and employed in DeLFIQE. Specifically, alongside the main task that predicts the final quality score, an auxiliary classification task is designed to classify LFIs based on their distortion types and severity levels. That way, the features are extracted to reflect the distortion types and severity levels, which in turn helps the main task improve the accuracy and robustness of the prediction.
The extensive experiments show that DeLFIQE outperforms state-of-the-art metrics in both accuracy and correlation, especially on benchmark LF datasets of high angular resolution. When tested on LF datasets of low angular resolution, however, the performance of DeLFIQE declines slightly, although it remains competitive. This is likely because the distortion feature information contained in the EPI patches is reduced as the LFIs' angular resolution decreases, reducing the training efficiency and the overall performance of DeLFIQE. Therefore, a General-purpose deep learning-based Light Field Image Quality Evaluator (GeLFIQE) is proposed to perform accurately and efficiently on LF datasets of both high and low angular resolution. First, a deep CNN model is pre-trained on one of the most comprehensive benchmark LF datasets of high angular resolution, containing abundant distortion features. Next, the features learned from the pre-trained model are transferred to the target LF dataset-specific CNN model to help improve the generalisation and overall performance on low-resolution LFIs containing fewer distortion features. The experimental results show that GeLFIQE substantially improves the performance of DeLFIQE on low-resolution LF datasets, making it a truly general-purpose LF-IQA metric for LF datasets of various resolutions.
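The conversion from 4-D LFIs to 2-D EPI patches described in this abstract can be illustrated with a toy light field. The L(u, v, s, t) parameterization (angular x spatial coordinates) is standard; the array sizes and patch width below are arbitrary illustrative choices, not the thesis's settings:

```python
import numpy as np

# A toy 4-D light field lf[u, v, s, t]: angular (u, v) x spatial (s, t).
U, V, S, T = 5, 5, 32, 32
lf = np.random.rand(U, V, S, T)

def horizontal_epi(lf, v, t):
    """Fix one vertical view v and one image row t: the remaining (u, s)
    slice is a 2-D Epipolar Plane Image (EPI)."""
    return lf[:, v, :, t]

def epi_patches(epi, patch_w=8):
    """Cut an EPI into non-overlapping patches along the spatial axis,
    a simplified stand-in for the discriminative patch selection."""
    u, s = epi.shape
    return [epi[:, i:i + patch_w] for i in range(0, s - patch_w + 1, patch_w)]

epi = horizontal_epi(lf, v=2, t=16)
patches = epi_patches(epi)
print(epi.shape)                        # (5, 32)
print(len(patches), patches[0].shape)   # 4 patches of shape (5, 8)
```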
8

Anzagira, Leo. "Imaging performance in advanced small pixel and low light image sensors." Thesis, Dartmouth College, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10144602.

Abstract:

Even though image sensor performance has improved tremendously over the years, there are two key areas where it leaves room for improvement. First, small-pixel performance is limited by low full-well capacity, low dynamic range, and high crosstalk, which greatly impact sensor performance. Second, low-light color image sensors, which use color filter arrays, have low sensitivity due to the selective rejection of light by the color filters. The quanta image sensor (QIS) concept was proposed to mitigate the full-well and dynamic range issues in small-pixel image sensors: spatial and temporal oversampling addresses both. The QIS concept, however, does not address crosstalk. In this dissertation, the high spatial and temporal oversampling of the QIS concept is leveraged to enhance small-pixel performance in two ways. First, the oversampling allows polarization-sensitive QIS jots to be incorporated to obtain polarization information. Second, it allows the design of alternative color filter array patterns that mitigate the impact of crosstalk on color reproduction in small pixels. Finally, the problem of color imaging in low-light conditions is tackled with a proposed stacked-pixel concept, which enables color sampling without absorption color filters and improves low-light sensitivity. Simulations demonstrate the advantage of this pixel structure over sensors employing color filter arrays such as the Bayer pattern. A color correction algorithm for improving color reproduction in low light is also developed and demonstrates improved performance.
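The spatial and temporal oversampling behind the QIS concept can be sketched as follows. Each binary "jot" fires with probability set by the local light intensity, and summing many jot samples recovers grey levels; the grid size, sample count, and binning factor are illustrative assumptions, not parameters from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Jot grid and number of temporal oversamples (illustrative values).
H, W, T = 16, 16, 64
# A left-to-right intensity ramp as the "scene", in [0, 1].
intensity = np.linspace(0.1, 0.9, W)[None, :].repeat(H, axis=0)

# T binary jot planes: each jot fires (1) with probability = local intensity.
jot_frames = rng.random((T, H, W)) < intensity

# Sum over the temporal axis, then bin 2x2 spatially and normalize,
# turning single-photon counts back into grey-level pixels.
counts = jot_frames.sum(axis=0).astype(float)
pixels = counts.reshape(H // 2, 2, W // 2, 2).sum(axis=(1, 3)) / (4 * T)

print(pixels.shape)   # (8, 8): reconstructed image
```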

9

Hurle, Bernard Alfred. "The charge coupled device as a low light detector in beam foil spectroscopy." Thesis, University of Kent, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.332296.

10

Raventos, Joaquin. "New Test Set for Video Quality Benchmarking." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1226.

Abstract:
A new test set design and benchmarking approach (US patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in daytime or low-light conditions. It uses randomized targets based on extensive application of photometry, geometrical optics, and digital media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time, and the approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users can assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video system installations.
11

Scrofani, James William. "An adaptive method for the enhanced fusion of low-light visible and uncooled thermal infrared imagery." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA334031.

12

Malik, Sameer. "Low Light Image Restoration: Models, Algorithms and Learning with Limited Data." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6120.

Abstract:
The ability to capture high quality images under low-light conditions is an important feature of most hand-held devices and surveillance cameras. Images captured under such conditions often suffer from multiple distortions such as poor contrast, low brightness, color-cast and severe noise. While adjusting camera hardware settings such as aperture width, ISO level and exposure time can improve the contrast and brightness levels in the captured image, they often introduce artifacts including shallow depth-of-field, noise and motion blur. Thus, it is important to study image processing approaches to improve the quality of low-light images. In this thesis, we study the problem of low-light image restoration. In particular, we study the design of low-light image restoration algorithms based on statistical models, deep learning architectures and learning approaches when only limited labelled training data is available. In our statistical model approach, the low-light natural image in the band pass domain is modelled by statistically relating a Gaussian scale mixture model for the pristine image, with the low-light image, through a detail loss coefficient and Gaussian noise. The detail loss coefficient in turn is statistically described using a posterior distribution with respect to its estimate based on a prior contrast enhancement algorithm. We then design our low-light enhancement and denoising (LLEAD) method by computing the minimum mean squared error estimate of the pristine image band pass coefficients. We create the Indian Institute of Science low-light image dataset of well-lit and low-light image pairs to learn the model parameters and evaluate our enhancement method. We show through extensive experiments on multiple datasets that our method helps better enhance the contrast while simultaneously controlling the noise when compared to other classical joint contrast enhancement and denoising methods. 
Deep convolutional neural networks (CNNs) based on residual learning and end-to-end multiscale learning have been successful in achieving state of the art performance in image restoration. However, their application to joint contrast enhancement and denoising under low-light conditions is challenging owing to the complex nature of the distortion process involving both loss of details and noise. We address this challenge through two lines of approaches, one which exploits the statistics of natural images and the other which exploits the structure of the distortion process. We first propose a multiscale learning approach by learning the subbands obtained in a Laplacian pyramid decomposition. We refer to our framework as low-light restoration network (LLRNet). Our approach consists of a bank of CNNs where each CNN is trained to learn to explicitly predict different subbands of the Laplacian pyramid of the well exposed image. We show through extensive experiments on multiple datasets that our approach produces better quality restored images when compared to other low-light restoration methods. In our second line of approach, we learn a distortion model that relates a noisy low- light and ground truth image pair. The low-light image is modeled to suffer from contrast distortion and additive noise. We model the loss of contrast through a parametric function, which enables the estimation of the underlying noise. We then use a pair of CNN models to learn the noise and the parameters of a function to achieve contrast enhancement. This contrast enhancement function is modeled as a linear combination of multiple gamma transforms. We show through extensive evaluations that our low-light Image Model for Enhancement Network (LLIMENet) achieves superior restoration performance when compared to other methods on several publicly available datasets. 
While CNN models are fairly successful in low-light image restoration, such approaches require a large number of paired low-light and ground truth image pairs for training. Thus, we study the problem of semi-supervised learning for low-light image restoration when limited low-light images have ground truth labels. Our main contributions in this work are twofold. We first deploy an ensemble of low-light restoration networks to restore the unlabeled images and generate a set of potential pseudo-labels. We model the contrast distortions in the labeled set to generate different sets of training data and create the ensemble of networks. We then design a contrastive self-supervised learning based image quality measure to obtain the pseudo-label among the images restored by the ensemble. We show that training the restoration network with the pseudo-labels allows us to achieve excellent restoration performance even with very few labeled pairs. Our extensive experiments on multiple datasets show the superior performance of our semi-supervised low-light image restoration compared to other approaches. Finally, we study an even more constrained problem setting when only very few labelled image pairs are available for training. To address this challenge, we augment the available labelled data with large number of low-light and ground-truth image pairs through a CNN based model that generates low-light images. In particular, we introduce a contrast distortion auto-encoder framework that learns to disentangle the contrast distortion and content features from a low-light image. The contrast distortion features from a low-light image are then fused with the content features from another pristine image to create a low-light version of the pristine image. We achieve the disentanglement of distortion from image content through the novel use of a contrastive loss to constrain the training. We then use the generated data to train low-light restoration models. 
We evaluate our data generation method in the 5-shot and 10-shot labelled data settings to show the effectiveness of our models.
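The contrast-enhancement function modeled as a linear combination of multiple gamma transforms (the parametric form used in LLIMENet above) can be sketched as follows; the specific weights and exponents are illustrative stand-ins for the parameters the CNN would learn:

```python
import numpy as np

def gamma_mixture_enhance(x, weights, gammas):
    """Contrast enhancement as a linear combination of gamma transforms,
    f(x) = sum_i w_i * x**g_i, applied to intensities in [0, 1].
    The weights/exponents passed in are illustrative, not learned values."""
    x = np.clip(x, 0.0, 1.0)
    return sum(w * x ** g for w, g in zip(weights, gammas))

dark = np.array([0.02, 0.1, 0.3, 0.8])
bright = gamma_mixture_enhance(dark, weights=[0.6, 0.4], gammas=[0.4, 0.8])
print(np.round(bright, 3))   # every value brightened, order preserved
```

With exponents below 1 the mixture brightens dark intensities most strongly while remaining monotonic, which is what makes it a plausible contrast-enhancement parameterization.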
13

Wang, Szu-Chieh, and 王思傑. "Extreme Low Light Image Enhancement with Generative Adversarial Networks." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cz8pqb.

Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 107
Taking photos in low-light environments is always a challenge for current imaging pipelines: image noise and artifacts corrupt the image. Given the recent success of deep learning, it may seem straightforward to train a deep convolutional network to enhance such images and restore the underlying clean image. However, the large number of parameters in deep models may require a large amount of training data. For the low-light image enhancement task, paired data require a short-exposure image and a long-exposure image taken with perfect alignment, which may not be achievable in every scene; this limits the choice of scenes for capturing paired data and increases the effort of collecting it. Also, data-driven solutions tend to replace the entire camera pipeline and cannot easily be integrated into existing pipelines. We therefore propose to handle the task with a two-stage pipeline consisting of an imperfect denoise network and a bias correction network, BC-UNet. Our method only requires noisy bursts of short-exposure images and unpaired long-exposure images, reducing the effort of collecting training data. It also works in the raw domain and can easily be integrated into an existing camera pipeline. Our method achieves improvements comparable to other methods under the same settings.
14

Chen, Hsueh-I., and 陳學儀. "Deep Burst Low Light Image Enhancement with Alignment, Denoising and Blending." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/sfk685.

Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
Academic year 106
Taking photos in low-light environments is always a challenge for most cameras. In this thesis, we propose a neural-network pipeline for processing bursts of short-exposure raw data. Our method consists of alignment, denoising, and blending. First, we use FlowNet2.0 to predict the optical flow between burst images and align them. We then feed the aligned burst raw data into a DenoiseUNet, which includes a denoising part and a color part, to generate an RGB image. Finally, we use a MaskUNet to generate a mask that identifies misalignment, and we blend the outputs obtained from the single raw image and from the burst raw images according to this mask. Our results show that using burst inputs yields a significant improvement over a single input.
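The mask-guided blending step at the end of this pipeline can be sketched as follows; the arrays are toy stand-ins for the DenoiseUNet outputs and the MaskUNet mask, not real network outputs:

```python
import numpy as np

# Where the burst frames were misaligned (mask ~ 1), fall back to the
# single-frame result; elsewhere keep the (cleaner) burst result.
single = np.full((4, 4, 3), 0.4)   # result from one raw frame
burst = np.full((4, 4, 3), 0.6)    # result from the aligned burst
mask = np.zeros((4, 4, 1))         # misalignment mask in [0, 1]
mask[0, 0] = 1.0                   # one misaligned region

blended = mask * single + (1.0 - mask) * burst
print(blended[0, 0, 0], blended[1, 1, 0])   # 0.4 0.6
```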
15

Chen, Chih-Ming, and 陳知名. "FPGA-based Real-time Low-Light Image Enhancement for Side-Mirror System." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/75th2h.

Abstract:
Master's thesis
National Taipei University of Technology
Department of Electronic Engineering
Academic year 106
In recent years, cameras and displays have been widely adopted in vehicles. Because a camera's wide viewing angle exceeds the field of view of a mirror, traditional side mirrors are gradually being replaced by camera-and-display systems. When driving at night, images captured in low-light conditions suffer from low visibility, endangering the driver and pedestrians. This thesis designs a PCB circuit that connects two motor-control modules and the side-mirror control lines to an FPGA, and presents a high-speed method for low-light image enhancement. The proposed brightness-enhancement algorithm operates in YUV space using a non-linear transfer function and is implemented in hardware on a Xilinx ZedBoard to meet the 25 fps requirement. Software execution time and the LOE (Lightness-Order-Error) of the enhanced images are used for performance evaluation. Compared with other enhancement algorithms, the proposed algorithm reduces execution time and LOE by up to 97.5% and 77%, respectively.
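The idea of enhancing brightness in YUV space with a non-linear transfer function can be sketched as follows. The BT.601 luma weights are standard, but the specific transfer curve and its parameter are illustrative assumptions, not the thesis's hardware algorithm:

```python
import numpy as np

def enhance_yuv(rgb, a=0.3):
    """Brighten an RGB image (values in [0, 1]) via its luma channel:
    compute BT.601 luma Y, apply a non-linear brightening curve to Y,
    then rescale all channels by the resulting per-pixel gain."""
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    y_enh = y / (y + a * (1.0 - y))          # illustrative transfer curve
    gain = np.where(y > 0, y_enh / y, 1.0)   # per-pixel luma gain
    return np.clip(rgb * gain[..., None], 0.0, 1.0)

dark = np.full((2, 2, 3), 0.1)
out = enhance_yuv(dark)
print(out[0, 0])   # each channel brightened above 0.1
```

Curves of this form (with a < 1) boost dark pixels strongly while leaving bright pixels nearly unchanged, and applying the gain to all channels preserves hue, which is why such transforms operate on Y rather than on RGB directly.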
16

"High Speed CMOS Image Sensor." Master's thesis, 2016. http://hdl.handle.net/2286/R.I.40301.

Abstract:
High-speed image sensors are used as a diagnostic tool to analyze high-speed processes in industrial, automotive, defense, and biomedical applications. The high frame rate of these sensors captures a series of images that enables the viewer to understand and analyze high-speed phenomena. However, the pixel readout circuits designed for sensors with high frame rates (100 fps to 1 Mfps) have a very low fill factor, less than 58%. For high-speed operation, the exposure time is short and/or the light intensity incident on the image sensor is low. This makes it difficult for the sensor to detect faint light signals and sets a lower limit on the signal levels that can be detected. Moreover, leakage paths in the pixel readout circuit also limit the detectable signal level. Therefore, the fill factor of the pixel should be maximized and the leakage currents in the readout circuit minimized. This thesis presents the design of a pixel readout circuit suitable for high-speed, low-light imaging applications. The circuit improves on the 6T pixel readout architecture: it minimizes leakage currents and detects light producing a signal level of 350 µV at the cathode of the photodiode. A novel layout technique improves the pixel's fill factor to 64.625%. The readout circuit is an integral part of a high-speed image sensor fabricated in a 0.18 µm CMOS technology, with a die size of 3.1 mm x 3.4 mm, a pixel size of 20 µm x 20 µm, a 96 x 96 pixel array, and four 10-bit pipelined ADCs. The image sensor achieves a high frame rate of 10508 fps and a readout speed of 96 Mpixels/s.
Master's Thesis, Electrical Engineering, 2016
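As a quick arithmetic check, the reported readout speed follows directly from the array size and frame rate:

```python
# Consistency check of the reported figures: 96 x 96 pixels per frame
# read out at 10508 frames per second.
pixels_per_frame = 96 * 96       # 9216 pixels
fps = 10508
readout = pixels_per_frame * fps # pixels per second
print(readout)                   # 96841728, i.e. ~96 Mpixels/s as reported
```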
17

Chen, Ming-Wei, and 陳明偉. "A Novel Log-Lin-Log Response CMOS Image sensor with High Low-light Sensitivity and High Dynamic Range." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/52347884012525449141.

Abstract:
Master's thesis
Yuan Ze University
Department of Electrical Engineering
Academic year 96
A novel CMOS image sensor with a log-lin-log response is presented. The pixel cell has a logarithmic response at very low illumination, a linear response at low and medium illumination, and a logarithmic response at high illumination. In this scheme the sensor is highly sensitive to very low light while still providing a large voltage swing of 0.53 V (from a 1.8 V supply) and a high dynamic range of 120 dB. Furthermore, the CDS technique can be applied to the proposed sensor array to reduce fixed-pattern noise. For demonstration, a prototype 75x54 image sensor array with readout circuit and CDS is designed and realized with a 1.8 V supply in the TSMC 0.35 µm CMOS 2P4M standard process.
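The log-lin-log response can be sketched as a piecewise curve. All thresholds, slopes, and the dark-level offset below are illustrative, chosen only so the three segments join continuously; they are not the sensor's actual parameters:

```python
import numpy as np

def log_lin_log_response(lux, v_dark=0.05, t1=0.01, t2=1.0, s=0.02):
    """Piecewise photoresponse sketch: logarithmic below t1 (very low
    light), linear from t1 to t2, logarithmic above t2. Continuity:
    the log segments are anchored to the linear segment's endpoints."""
    lux = np.asarray(lux, dtype=float)
    low = v_dark + s * np.log(lux / t1)        # log response, very low light
    mid = v_dark + (lux - t1)                  # linear response
    v2 = v_dark + (t2 - t1)
    high = v2 + s * np.log(lux / t2)           # log response, high light
    return np.where(lux < t1, low, np.where(lux <= t2, mid, high))

x = np.array([1e-3, 1e-2, 0.5, 1.0, 1e3])      # six decades of illumination
v = log_lin_log_response(x)
print(np.round(v, 3))                          # monotone, compressed swing
```

The point of the shape is visible in the numbers: six decades of input illumination map into a small, monotone output swing, with the low-end log segment still resolving very dim levels.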
18

Goy, J. "Etude, conception, et réalisation d'un capteur d'image APS en technologie standard CMOS pour des applications faible flux de type viseur d'étoiles = Study, conception and fabrication of an APS image sensor in standard CMOS technology for low light level applications such as star trackers." Phd thesis, 2002. http://tel.archives-ouvertes.fr/tel-00002934.

Abstract:
In the field of image sensors, the so-called "CMOS sensor" or APS (Active Pixel Sensor) technology has recently emerged; it integrates within each pixel a few MOS transistors that amplify the signal and drive it along metal lines running across the image area. The space sector in particular is interested in this technology because it is less sensitive to radiation than CCD sensors and now achieves satisfactory cost and read-noise levels. This thesis explores the improvements that can be made to conventional CMOS sensors to bring them closer to the constraints required for space applications. These improvements concern, in particular, the study of the photosensitive element (photodiode or photoMOS), the choice of a pixel architecture that increases the pixel's intrinsic gain while reducing its read noise, and the realization of an array scanning system with windowing capability and programmable exposure time. In this context, several solutions were fabricated and tested, and the conclusions provide a broad view of the advantages and drawbacks of each type of sensor.
19

Eloff, Corné. "Spatial technology as a tool to analyse and combat crime." Thesis, 2006. http://hdl.handle.net/10500/1193.

Abstract:
This study explores the utilisation of spatial technologies as a tool to analyse and combat crime. The study deals specifically with remote sensing and its potential for being integrated with geographical information systems (GIS). The integrated spatial approach resulted in the understanding of land use class behaviour over time and its relationship to specific crime incidents per police precinct area. The incorporation of spatial technologies to test criminological theories in practice, such as the ecological theories of criminology, provides the science with strategic value. It proves the value of combining multi-disciplinary scientific fields to create a more advanced platform to understand land use behaviour and its relationship to crime. Crime in South Africa is a serious concern and it impacts negatively on so many lives. The fear of crime, the loss of life, the socio-economic impact of crime, etc. create the impression that the battle against crime has been lost. The limited knowledge base within the law enforcement agencies, limited logistical resources and low retention rate of critical staff all contribute to making the reduction of crime more difficult to achieve. A practical procedure of using remote sensing technology integrated with geographical information systems (GIS), overlaid with geo-coded crime data to provide a spatial technological basis to analyse and combat crime, is illustrated by a practical study of the Tshwane municipality area. The methodology applied in this study required multi-skilled resources incorporating GIS and the understanding of crime to integrate the diverse scientific fields into a consolidated process that can contribute to the combating of crime in general. The existence of informal settlement areas in South Africa stresses the socio-economic problems that need to be addressed as there is a clear correlation of land use data with serious crime incidents in these areas. 
The fact that no formal cadastre exists for these areas, combined with a great diversity in densification and growth of the periphery, makes analysis very difficult without remote sensing imagery. Revisits over time to assess changes in these areas in order to adapt policing strategies will create an improved information layer for responding to crime. Final computerised maps generated from remote sensing and GIS layers are not the only information that can be used to prevent and combat crime. An important recipe for ultimately successfully managing and controlling crime in South Africa is to strategically combine training of the law enforcement agencies in the use of spatial information with police science. The researcher concludes with the hope that this study will contribute to the improved utilisation of spatial technology to analyse and combat crime in South Africa. The ultimate vision is the expansion of the science of criminology by adding an advanced spatial technology module to its curriculum.
Criminology
D.Litt. et Phil. (Criminology)