
Dissertations on the topic "Histogram"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


See the top 50 dissertations (graduate or doctoral theses) on the research topic "Histogram".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when one is available in the metadata.

Browse dissertations from many different fields of study and compile a correct bibliography.

1

Kvapil, Jiří. "Adaptivní ekvalizace histogramu digitálních obrazů". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228687.

Full text
Abstract:
This diploma thesis focuses on the histogram equalization method and its extension with adaptive boundaries. It first explains the basic notions on which histogram equalization was built. The next part describes human vision and the principles of imitating it. In the practical part of the thesis, software was created that makes it possible to apply adaptive histogram equalization methods to real images. Finally, some of the results that were achieved are shown.
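The first two entries concern adaptive histogram equalization (AHE). As a rough illustration of the technique they build on — not of either thesis's actual implementation — the sketch below equalizes an 8-bit grayscale NumPy array tile by tile; practical AHE/CLAHE implementations additionally interpolate between neighbouring tiles and clip the histogram.

```python
import numpy as np

def equalize(tile, n_levels=256):
    """Plain histogram equalization of one grayscale tile."""
    hist = np.bincount(tile.ravel(), minlength=n_levels)
    cdf = np.cumsum(hist) / tile.size
    return np.floor(cdf[tile] * (n_levels - 1)).astype(np.uint8)

def adaptive_hist_eq(img, tile=64):
    """Crude tile-based AHE: equalize each tile from its own histogram.
    (No blending between tiles, so block seams remain visible.)"""
    out = np.empty_like(img)
    for r in range(0, img.shape[0], tile):
        for c in range(0, img.shape[1], tile):
            block = img[r:r + tile, c:c + tile]
            out[r:r + tile, c:c + tile] = equalize(block)
    return out
```
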
2

Kurak, Charles W. Jr. "Adaptive Histogram Equalization, a Parallel Implementation". UNF Digital Commons, 1990. http://digitalcommons.unf.edu/etd/260.

Full text
Abstract:
Adaptive Histogram Equalization (AHE) has been recognized as a valid method of contrast enhancement. The main advantage of AHE is that it can provide better contrast in local areas than that achievable utilizing traditional histogram equalization methods. Whereas traditional methods consider the entire image, AHE utilizes a local contextual region. However, AHE is computationally expensive, and therefore time-consuming. In this work two areas of computer science, image processing and parallel processing, are combined to produce an efficient algorithm. In particular, the AHE algorithm is implemented with a Multiple-Instruction-Multiple-Data (MIMD) parallel architecture. It is proposed that, as MIMD machines become more powerful and prevalent, this methodology can be applied to not only this particular algorithm, but also to many others in its class.
3

Jirka, Roman. "Časosběrné video". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-236934.

Full text
Abstract:
This thesis is an introduction to the creation of time-lapse video. It focuses on cases where a tripod is not used and the resulting shortcomings therefore have to be eliminated; the main ones are differences in position, brightness, and colour between individual frames. The next topic is which principles should be followed during the creation process. The thesis describes and implements methods for eliminating the main shortcomings when processing long time-lapse videos recorded by hand, covering image registration and brightness and colour correction in detail, and also considers histogram comparison. The result of this work is an application that eliminates the problems described above.
4

Müller, Patrice. "Scalable localized histogram aggregation for P2P MMOGs". Zürich : ETH, Eidgenössische Technische Hochschule Zürich, 2005. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=169.

Full text
5

Skarpman Munter, Johanna. "Dose-Volume Histogram Prediction using Kernel Density Estimation". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155893.

Full text
Abstract:
Dose plans developed for stereotactic radiosurgery are assessed by studying so-called Dose-Volume Histograms. Since it is hard to compare an individual dose plan with dose plans created for other patients, much experience and knowledge is lost. This thesis therefore investigates a machine learning approach to predicting such Dose-Volume Histograms for a new patient, by learning from previous dose plans. The training set is chosen based on similarity in terms of tumour size. The signed distances between voxels in the considered volume and the tumour boundary decide the probability of receiving a certain dose in the volume. By using a method based on Kernel Density Estimation, the intrinsic probabilistic properties of a Dose-Volume Histogram are exploited. Dose-Volume Histograms for the brainstem of 22 Acoustic Schwannoma patients, treated with the Gamma Knife, have been predicted, solely based on each patient's individual anatomical disposition. The method has proved higher prediction accuracy than a "quick-and-dirty" approach implemented for comparison. Analysis of the bias and variance of the method also indicates that it captures the main underlying factors behind individual variations. However, the degree of variability in dose planning results for the Gamma Knife has turned out to be very limited. Therefore, the usefulness of a data-driven dose planning tool for the Gamma Knife has to be further investigated.
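For readers unfamiliar with the quantity being predicted here, the following is a minimal sketch — not taken from the thesis — of how a cumulative dose-volume histogram is computed from per-voxel doses. The dose values, grid, and function name are invented for the example.

```python
import numpy as np

def cumulative_dvh(dose, dose_levels):
    """Cumulative DVH: fraction of the structure's voxels receiving at
    least each dose level."""
    dose = np.asarray(dose).ravel()
    return np.array([(dose >= d).mean() for d in dose_levels])

# example: synthetic per-voxel doses (Gy), DVH evaluated on a 0-30 Gy grid
doses = np.random.gamma(shape=4.0, scale=2.5, size=5000)
levels = np.linspace(0, 30, 61)
dvh = cumulative_dvh(doses, levels)
```
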
6

Yakoubian, Jeffrey Scott. "Adaptive histogram equalization for mammographic image processing". Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/16387.

Full text
7

Potgieter, Andrew. "A Parallel Multidimensional Weighted Histogram Analysis Method". Thesis, University of Cape Town, 2014. http://pubs.cs.uct.ac.za/archive/00000986/.

Full text
Abstract:
The Weighted Histogram Analysis Method (WHAM) is a technique used to calculate free energy from molecular simulation data. WHAM recombines biased distributions of samples from multiple Umbrella Sampling simulations to yield an estimate of the global unbiased distribution. The WHAM algorithm iterates two coupled, non-linear, equations, until convergence at an acceptable level of accuracy. The equations have quadratic time complexity for a single reaction coordinate. However, this increases exponentially with the number of reaction coordinates under investigation, which makes multidimensional WHAM a computationally expensive procedure. There is potential to use general purpose graphics processing units (GPGPU) to accelerate the execution of the algorithm. Here we develop and evaluate a multidimensional GPGPU WHAM implementation to investigate the potential speed-up attained over its CPU counterpart. In addition, to avoid the cost of multiple Molecular Dynamics simulations and for validation of the implementations we develop a test system to generate samples analogous to Umbrella Sampling simulations. We observe a maximum problem size dependent speed-up of approximately 19 for the GPGPU optimized WHAM implementation over our single threaded CPU optimized version. We find that the WHAM algorithm is amenable to GPU acceleration, which provides the means to study ever more complex molecular systems in reduced time periods.
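As background to the abstract above, the two coupled WHAM equations it refers to can be iterated in a few lines. The sketch below is a generic 1-D version under the usual umbrella-sampling assumptions (binned counts per window, bias potentials evaluated at bin centres); it is not the thesis's GPGPU implementation, and the array names are illustrative.

```python
import numpy as np

def wham_1d(hist, bias, n_samples, beta=1.0, tol=1e-7, max_iter=10000):
    """Minimal 1-D WHAM iteration.

    hist      : (K, B) histogram counts n_k(b) from each of K umbrella windows
    bias      : (K, B) biasing potential w_k evaluated at the B bin centres
    n_samples : (K,) total samples N_k per window
    Returns the unbiased bin probabilities P(b) and the window offsets f_k.
    """
    K, B = hist.shape
    f = np.zeros(K)                           # free-energy shifts, initial guess
    numer = hist.sum(axis=0)                  # sum_k n_k(b)
    for _ in range(max_iter):
        # denominator: sum_k N_k exp(beta * (f_k - w_k(b)))
        denom = (n_samples[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        p = numer / denom
        p /= p.sum()
        # second WHAM equation: update the offsets from the current estimate
        f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
        f_new -= f_new[0]                     # fix the arbitrary additive constant
        if np.max(np.abs(f_new - f)) < tol:
            return p, f_new
        f = f_new
    return p, f
```
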
8

Thapa, Mandira. "Optimal Feature Selection for Spatial Histogram Classifiers". Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1513710294627304.

Full text
9

Li, Yang. "Face Recognition Based on Histogram And Spin Image". Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.485831.

Full text
Abstract:
This thesis presents our research work on shape-based human face recognition exploiting curvature-based histograms and spin images. Instead of the popular 2D shape information represented by fiducial points, the novelty here is the use of 2.5D shape information obtained by shape-from-shading (SFS). Though surface normals generated by performing shape-from-shading on objects are not widely accepted as a precise shape representation for face recognition purposes, recent research in shape-from-shading [Ragheb and Hancock, 2003] [Prados et al., 2006] [Castelan, 2006] [Smith and Hancock, 2006] has made it possible to recover fairly accurate shape information under various conditions from face images in the real world. These contributions make possible our face recognition approaches based on 2.5D shape recovered from a single image. Chapter 2 is a thorough review of the existing literature in the areas of surface reconstruction using shape-from-shading, appearance-based and model-based face recognition on 2D and 3D data, and histogram-based image representation and recognition. The literature on shape-from-shading in the face recognition area is rather sparse, but there are plenty of approaches based on 2D images and 3D range data. With accurate height maps being recovered from single face images [Prados et al., 2006] [Castelan, 2006] and statistical models being proposed to recover surface normals [Smith and Hancock, 2006], we have enough 2.5D shape information recovered from a 2D image on which to perform face recognition. In Chapter 3, we present our curvature-based histogram approach as our first contribution, which employs principal curvature information calculated from the Hessian matrix based on the recovered surface normals. Generalized entropies are introduced as similarity measurements, which give stable performance especially when the number of relative images varies. While the curvature-based histogram proves to be a fairly good face recognition approach with a concise representation and easy comparison, its performance is not always stable when applied to different databases. A more advanced face recognition approach is therefore required to obtain better and more stable identification results. In Chapter 4, we derive the patch-based spin image as a local shape representation from the idea of the global curvature-based histogram in Chapter 3. This representation is inspired by [Johnson and Hebert, 1999] and adapted in this thesis as a solution to the face recognition problem. Instead of using 3D range data, the estimated height map reconstructed by shape-from-shading is employed in the spin image construction. Also, the mean needle map model is used as preprocessing to correct the errors and noise that exist in the surface normal estimates. The face surface is segmented into small patches, and the spin image corresponding to the surface is composed of histograms constructed on each surface patch. In Appendix A, we propose the dual spin image to address the difficulties of recognizing faces under rotation, among which the two major problems of point correspondence and point occlusion are of particular importance. We therefore propose the idea of the neighbour-area spin image to construct a pointwise neighbourhood surface feature collection, and use the ratio of projected distances to relative angles to alleviate the errors introduced by surface rotation.
We also present a linear model and a finite Gaussian mixture model to approximate a novel dual spin image using the existing dual spin images. Face recognition is performed based on the model parameters. The work in this thesis suggests that 2.5D shape features recovered by shape-from-shading can be used for the purpose of face recognition, and that appearance-based approaches derived from histograms are effective for face recognition. The results suggest that shape information recovered from a single image is sufficient for face recognition, on the condition that shape-from-shading can successfully recover the surface normal field and the height map from the image.
10

Gomes, David Menotti. "Contrast enhancement in digital imaging using histogram equalization". PhD thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00470545.

Full text
Abstract:
Nowadays, devices are able to capture and process images from complex surveillance monitoring systems or from simple mobile phones. In certain applications the time necessary to process the image is not as important as the quality of the processed image (e.g., medical imaging), but in other cases quality can be sacrificed in favour of time. This thesis focuses on the latter case and proposes two methodologies for fast image contrast enhancement. The proposed methods are based on histogram equalization (HE), some handling gray-level images and others handling color images. As far as HE methods for gray-level images are concerned, current methods tend to shift the mean brightness of the image to the middle of the gray-level range. This is not desirable in image contrast enhancement for consumer electronics products, where preserving the input brightness is required to avoid generating non-existing artifacts in the output image. To overcome this drawback, bi-histogram equalization methods that both preserve brightness and enhance contrast have been proposed. Although these methods preserve the input brightness in the output image with significant contrast enhancement, they may produce images that do not look as natural as the input. To overcome this drawback, we propose a technique called Multi-HE, which consists of decomposing the input image into several sub-images and then applying the classical HE process to each of them. This methodology performs a less intensive contrast enhancement, so that the output image looks more natural. We propose two discrepancy functions for image decomposition, which lead to two new Multi-HE methods. A cost function is also used for automatically deciding into how many sub-images the input image will be decomposed. Experimental results show that our methods are better at preserving brightness and producing more natural-looking images than the other HE methods. In order to deal with contrast enhancement in color images, we introduce a generic fast hue-preserving histogram equalization method based on the RGB color space, and two instances of the proposed generic method. The first instance uses the R (red), G (green), and B (blue) 1D histograms to estimate an RGB 3D histogram to be equalized, whereas the second instance uses the RG, RB, and GB 2D histograms. Histogram equalization is performed using shift hue-preserving transformations, avoiding the appearance of unrealistic colors. Our methods have linear time and space complexities with respect to the image dimension, and do not require conversions between color spaces to perform contrast enhancement. Objective assessments comparing our methods with others are performed using a contrast measure and color image quality measures, where quality is established as a weighted function of the naturalness and colorfulness indexes. This is the first work to evaluate histogram equalization methods on a well-known database of 300 images (a dataset from the University of Berkeley) using measures such as naturalness and colorfulness. Experimental results show that the contrast of the images produced by our methods is on average 50% greater than that of the original images, while keeping the quality of the output images close to the original.
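The brightness-preserving idea mentioned above — splitting the histogram before equalizing — can be sketched in a few lines. This is a generic BBHE-style illustration that splits at the mean gray level of an 8-bit grayscale image; it is not the discrepancy-function-based Multi-HE decomposition proposed in the thesis.

```python
import numpy as np

def bi_histogram_equalization(img, n_levels=256):
    """Split the histogram at the mean gray level and equalize each part
    within its own sub-range, so the output brightness stays close to the
    input brightness (a BBHE-style scheme)."""
    m = int(img.mean())
    out = img.copy()
    for lo, hi, mask in [(0, m, img <= m), (m + 1, n_levels - 1, img > m)]:
        if not mask.any() or hi <= lo:
            continue
        vals = img[mask].astype(int)
        hist = np.bincount(vals - lo, minlength=hi - lo + 1)
        cdf = np.cumsum(hist) / vals.size
        out[mask] = (lo + np.round(cdf[vals - lo] * (hi - lo))).astype(img.dtype)
    return out
```
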
11

Choy, Siu Kai. "Statistical histogram characterization and modeling : theory and applications". HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/913.

Full text
12

Mlsna, Phillip Anthony 1956. "Color image enhancement by three-dimensional histogram modification". Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278247.

Full text
Abstract:
Histogram-based color image enhancement is usually accomplished by transforming from RGB coordinates to another coordinate system, modifying the components represented in that system, and converting the results back to RGB. Although a few methods function directly in RGB space, they also attempt to reduce dimensionality from the histogram's original three dimensions. Such methods seldom yield images that use the full extent of the RGB color range. A new method called "histogram explosion" has been developed to perform true multivariate enhancement directly in RGB color space. Discussed are the algorithm's operational parameters, behavior, implementation, and possible improvements. Results show histogram explosion to be very effective and flexible in enhancing color images. Finally, two iterative methods are suggested as possible approaches for extending the histogram equalization algorithm to operate upon three-dimensional histograms.
13

Přibyl, Jakub. "Sledování objektu ve videosekvenci pomocí integrálního histogramu". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413048.

Full text
Abstract:
This thesis focuses on object tracking in real time, where the tracked object is defined by a bounding rectangle. It deals with image processing and with using histograms for real-time object tracking. The main contribution of the work is the extension of the provided program to track an object in real time with a changing bounding rectangle, whose size changes as the object moves closer to or further from the camera. Furthermore, the detection behaviour in different scenarios is analysed, and various weight calculations were tested. The program is written in C++ using the OpenCV library.
14

Sojma, Zdeněk. "Sledování objektu ve videu". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237013.

Full text
Abstract:
This master's thesis describes the principles of the most widely used systems for object tracking in video and then focuses mainly on the characterization and implementation of an interactive offline tracking system for generic color objects. The quality of the algorithm lies in the highly accurate evaluation of the object trajectory. The system creates the output trajectory from input data specified by the user, which may be interactively modified and extended to improve the system's accuracy. The algorithm is based on a detector that uses color-bin features and on the temporal coherence of object motion to generate multiple candidate object trajectories. The optimal output trajectory is then calculated by dynamic programming whose parameters are also interactively modified by the user. The system achieves 15-70 fps on a 480x360 video. The thesis also describes the implementation of an application whose purpose is to evaluate the tracker's accuracy, and the final results are discussed.
15

Bashar, M. K., and N. Ohnishi. "Image Retrieval By Local Contrast Patterns and Color Histogram". INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2006. http://hdl.handle.net/2237/10434.

Full text
16

Chan, Chi Ho. "Multi-scale local Binary Pattern Histogram for Face Recognition". Thesis, University of Surrey, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493135.

Full text
Abstract:
Recently, the research in face recognition has focused on developing a face representation that is capable of capturing the relevant information in a manner which is invariant to facial expression and illumination. Motivated by a simple but powerful texture descriptor, called Local Binary Pattern (LBP), our proposed system extends this descriptor to evoke multiresolution and multispectral analysis for face recognition. The first descriptor, namely Multi-scale Local Binary Pattern Histogram (MLBPH), provides a robust system which is relatively insensitive to localisation errors because it benefits from the multiresolution information captured from the regional histogram.
17

Roychoudhury, Shoumik. "Tracking Human in Thermal Vision using Multi-feature Histogram". Master's thesis, Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/203794.

Full text
Abstract:
Electrical Engineering
M.S.E.E.
This thesis presents a multi-feature histogram approach to tracking a person in thermal vision. Illumination variation is a primary constraint on the performance of object tracking in the visible spectrum. A thermal infrared (IR) sensor, which measures the heat energy emitted from an object, is less sensitive to illumination variations, so thermal vision has an immense advantage for object tracking under varying illumination conditions. Kernel-based approaches such as the mean shift tracking algorithm, which uses a single-feature histogram for object representation, have gained popularity in the field of computer vision due to their efficiency and robustness in tracking non-rigid objects against significantly complex backgrounds. However, due to the low resolution of IR images, gray-level intensity information alone is not a strong enough cue for object representation using a histogram. A multi-feature histogram, which combines gray-level intensity information and edge information, generates an object representation that is more robust in thermal vision. The objective of this research is to develop a robust human tracking system which can autonomously detect, identify, and track a person in a complex thermal IR scene. In this thesis the tracking procedure has been adapted from the well-known and efficient mean shift tracking algorithm and modified to enable the fusion of multiple features, increasing the robustness of tracking in thermal vision. In order to identify the object of interest before tracking, rapid human detection in the thermal IR scene is achieved using the AdaBoost classification algorithm. Furthermore, a computationally efficient body pose recognition method is developed which uses Hu invariant moments for matching object shapes. An experimental setup consisting of a Forward Looking Infrared (FLIR) camera mounted on a Pioneer P3-DX mobile robot platform was used to test the proposed human tracking system in both indoor and uncontrolled outdoor environments. The performance evaluation of the proposed tracking system on the OTCBVS benchmark dataset shows improvement in tracking performance in comparison with the traditional mean-shift tracking algorithm. Moreover, experimental results in different indoor and outdoor tracking scenarios involving different appearances of people show that tracking is robust under cluttered backgrounds, varying illumination, and partial occlusion of the target object.
Temple University--Theses
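To make the "multi-feature histogram" idea concrete, here is a small sketch of building a combined gray-level/edge-magnitude histogram for a window and scoring a candidate window with the Bhattacharyya coefficient, as mean-shift-style trackers do. The bin counts, feature weights, and box format are illustrative assumptions, not the thesis's settings; the edge-magnitude image is assumed to be scaled to 0–255.

```python
import numpy as np

def multi_feature_histogram(gray, edge_mag, box, bins=(16, 8), w=(0.7, 0.3)):
    """Concatenate a normalized gray-level histogram and a normalized
    edge-magnitude histogram of the window, weighted by w."""
    r, c, h, wd = box                          # top-left corner plus height/width
    g = gray[r:r + h, c:c + wd].ravel()
    e = edge_mag[r:r + h, c:c + wd].ravel()    # assumed scaled to 0..255
    hg, _ = np.histogram(g, bins=bins[0], range=(0, 256))
    he, _ = np.histogram(e, bins=bins[1], range=(0, 256))
    feat = np.concatenate([w[0] * hg / max(hg.sum(), 1),
                           w[1] * he / max(he.sum(), 1)])
    return feat / feat.sum()

def bhattacharyya(p, q):
    """Similarity in [0, 1] between target model p and candidate histogram q."""
    return float(np.sum(np.sqrt(p * q)))
```
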
18

Muñoz, José Daniel. "The broad histogram method : an extension to continuous systems /". Berlin : Logos, 2001. http://catalogue.bnf.fr/ark:/12148/cb40181410z.

Full text
19

Shimazaki, Hideaki. "Recipes for selecting the bin size of a histogram". 京都大学 (Kyoto University), 2007. http://hdl.handle.net/2433/136749.

Full text
20

Ishikawa, Yoshiharu, Yoji Machida, and Hiroyuki Kitagawa. "A Dynamic Mobility Histogram Construction Method Based on Markov Chains". IEEE, 2006. http://hdl.handle.net/2237/7521.

Full text
21

Pahalawatta, Kapila Kithsiri. "Image histogram features for nano-scale particle detection and classification". Thesis, University of Canterbury. Computer Science and Software Engineering, 2015. http://hdl.handle.net/10092/10866.

Full text
Abstract:
This research proposes a method to detect and classify the smoke particles of common household fires by analysing image histogram features of the smoke particles illuminated by Rayleigh-scattered light. The research was motivated by the failure of commercially available photoelectric smoke detectors to detect smoke particles less than 100 nm in diameter, such as those in polyurethane (furniture) fires, and by the occurrence of false positives such as those caused by steam. Seven different types of particles (pinewood smoke, polyurethane smoke, steam, kerosene smoke, cotton wool smoke, cooking oil smoke, and a test smoke) were selected and exposed to a continuous spectrum of light in a closed particle chamber. A significant improvement over common photoelectric smoke detectors was demonstrated by successfully detecting and classifying all test particles using colour histograms. As Rayleigh theory suggests, comparing the intensities of scattered light at different wavelengths is the best way to classify particles of different sizes. Existing histogram comparison methods based on histogram bin values failed to establish a relationship between the scattered intensities of the individual red, green, and blue laser beams and the particle sizes, due to the uneven movement of particles inside the chamber. The current study proposes a new method to classify these nano-scale particles using a particle-density-independent intensity histogram feature, the Maximum Value Index. When a Rayleigh scatterer (a particle whose diameter is less than one tenth of the incident wavelength) is exposed to light of different wavelengths, the intensity of the scattered light at each wavelength is unique to the particle size, and hence a single unique maximum value index can be detected in the image intensity histogram. Each captured image in the video frame sequence was divided into its red, green, and blue planes (single R, G, B channel arrays) and the particles were isolated using a modified frame-difference method. The mean and standard deviation of the Maximum Value Index of the intensity histograms over a predefined number of frames (N) were used to differentiate the particle types. The proposed classification algorithm successfully classified all the monotype particles with 100% accuracy when N ≥ 100. As expected, the classifier failed to distinguish wood smoke from other monotype particles, because the maximum value index of the intensity histograms varies rapidly across consecutive images: wood smoke is itself a complex mixture of many monotype particles such as water vapour and resin smoke. The results suggest that the proposed algorithm may enable a smoke detector to be safer by detecting a wider range of fires and by reducing false alarms such as those caused by steam.
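A rough sketch of the "Maximum Value Index" statistic as described above — the histogram bin with the highest count in each colour channel, summarized over N frames — might look like the following. The function and variable names are hypothetical, and the frames are assumed to be 8-bit RGB arrays.

```python
import numpy as np

def max_value_index_stats(frames):
    """For each frame, build the 256-bin intensity histogram of each of the
    R, G, B channels and record the bin with the highest count (the
    'Maximum Value Index'); return its mean and standard deviation per
    channel over the N frames."""
    idx = []
    for frame in frames:                       # frame: H x W x 3 uint8 array
        per_channel = [int(np.argmax(np.bincount(frame[:, :, ch].ravel(),
                                                 minlength=256)))
                       for ch in range(3)]
        idx.append(per_channel)
    idx = np.array(idx)                        # shape (N, 3)
    return idx.mean(axis=0), idx.std(axis=0)
```
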
22

Gaddam, Purna Chandra Srinivas Kumar, and Prathik Sunkara. "Advanced Image Processing Using Histogram Equalization and Android Application Implementation". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13735.

Full text
Abstract:
Nowadays, images may be captured under conditions that leave them nearly unreadable to the human eye, usually because of a lack of clarity caused by atmospheric effects such as haze, fog, and other daylight effects. Useful information captured under such scenarios should therefore be enhanced so that objects and other details can be recognized. Many image processing algorithms have been proposed to deal with low light or with the haze introduced by imaging devices; these algorithms also provide non-linear contrast enhancement to some extent. We took pre-existing algorithms such as SMQT (Successive Mean Quantization Transform), the V-transform, and histogram equalization to improve the visual quality of digital pictures with large-range scenes and irregular lighting conditions. These algorithms were applied in two different ways and tested on images affected by low light and colour change, and succeeded in producing enhanced images; they help with enhancements of colour and contrast and give very accurate results on low-light images. The histogram equalization technique is implemented by interpreting the histogram of the image as a probability density function: the cumulative distribution function is applied to the image to obtain accumulated histogram values, and the pixel values are then remapped according to their probability and spread over the histogram. From these algorithms we chose histogram equalization; MATLAB code was taken as a reference and modified to implement it in an API (Application Program Interface) using Java, and we confirmed that the application works properly with a reduction in execution time.
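The CDF-based remapping described in this abstract is the textbook form of histogram equalization. A minimal NumPy sketch for an 8-bit grayscale image is shown below; it is an illustration of the general technique, not the authors' MATLAB or Android code.

```python
import numpy as np

def histogram_equalization(img, n_levels=256):
    """Classical global histogram equalization: treat the normalized
    histogram as a PDF, accumulate it into a CDF, and use the CDF as the
    intensity mapping."""
    hist = np.bincount(img.ravel(), minlength=n_levels)
    pdf = hist / img.size
    cdf = np.cumsum(pdf)
    mapping = np.round(cdf * (n_levels - 1)).astype(np.uint8)
    return mapping[img]
```
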
23

Lin, Yi-Shan, and 林怡珊. "Partitioned Dynamic Range Histogram and Its Application to Obtain Better Histogram Equalization". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/37273415144723906115.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
101
Image contrast enhancement algorithms are designed to adjust contrast in accordance with human visual perception. Histogram equalization (HE) is a very widely used and popular technique for image contrast enhancement; however, it may produce over-enhancement, washed-out regions, and loss of detail in some parts of the processed image, making the result look unnatural. This thesis proposes a novel compensatory histogram equalization method. When applying HE, intensities are mapped by calculating the cumulative distribution function (CDF), which is derived from the probability density function (PDF). The proposed technique modifies the PDF of an image by using the range distribution function (RDF), defined in this thesis, as a constraint prior to the HE process, so that the image is enhanced without a fatal loss of detail. By remapping intensity levels, this approach provides a convenient and effective way to control the enhancement process. The proposed method can be applied to both high dynamic range (HDR) and low dynamic range (LDR) images, and to accommodate more kinds of image storage technologies it is combined with a simple preprocessing step for HDR images, so it can be used with a wide range of image formats. Finally, experimental results show that the proposed method achieves better results in terms of Information Fidelity Criterion (IFC) values, an image quality measure, than some previously modified histogram-based equalization methods. Furthermore, a fusion algorithm is adopted to combine images processed with different parameters into an optimal result; we believe this is a strategy worth further exploration.
24

Kumar, Pankaj. "Image Enhancement Using Histogram Equalization and Histogram Specification on Different Color Spaces". Thesis, 2014. http://ethesis.nitrkl.ac.in/5490/1/pankaj_arora_thesis.pdf.

Full text
Abstract:
Image enhancement is one of the important requirements in digital image processing; it makes an image useful for various applications in areas such as digital photography, medicine, geographic information systems, industrial inspection, law enforcement, and many other digital image applications. Image enhancement is used to improve the quality of poor images. The focus of this work is an attempt to improve the quality of digital images using histogram equalization and histogram specification. We apply histogram equalization to color images in different color spaces such as RGB, HSV, and YIQ, and histogram specification to grayscale and color images.
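Histogram specification (histogram matching) maps a source image's gray levels so that its histogram approximates that of a reference image by aligning the two cumulative distributions. The grayscale sketch below is a generic illustration, not the thesis's implementation; for color images it would typically be applied per channel (RGB) or to the value channel of HSV.

```python
import numpy as np

def histogram_specification(src, ref, n_levels=256):
    """Remap `src` so that its histogram approximates the histogram of
    `ref`, by matching the two cumulative distribution functions."""
    cdf_src = np.cumsum(np.bincount(src.ravel(), minlength=n_levels)) / src.size
    cdf_ref = np.cumsum(np.bincount(ref.ravel(), minlength=n_levels)) / ref.size
    # for each source level, pick the reference level with the closest CDF value
    mapping = np.searchsorted(cdf_ref, cdf_src).clip(0, n_levels - 1).astype(np.uint8)
    return mapping[src]
```
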
25

Mao, Li-Qin, and 毛麗琴. "Histogram and model selection". Thesis, 1992. http://ndltd.ncl.edu.tw/handle/49364316338099396246.

Full text
26

Chou, Sheng-Che, and 周聖哲. "Analysis and Comparison of Instrumental Music using Spectrum Histogram, Periodicity Histogram and Fluctuation Pattern". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/68032956454611928546.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
96
In this thesis I introduce three different methods for measuring music similarity in detail and then apply them to 50 songs of my choosing, viewed as 5 groups of 10 songs each. By comparing the results we can see which method is the most effective and accurate. Because audio data can be viewed from many different points of view, a property has to be chosen on which to focus before starting the similarity measurement; in this thesis I choose timbre as the yardstick. Chapter 2 describes the property of timbre in detail; other properties such as melody, rhythm, and genre are also discussed, and based on these properties the first step of the similarity measurement is constructed. Chapter 3 discusses the background needed to build these similarity measures, e.g., Mel-Frequency Cepstrum Coefficients and the Sone-Bark representation. Chapter 4 describes the three similarity measures in detail: the spectrum histogram (SH), the periodicity histogram (PH), and the fluctuation pattern (FP). In Chapter 5, SH, PH, and FP are applied to instrumental music so that we can determine which method best distinguishes different instruments. Chapter 6 presents conclusions and future work.
27

Wu, Yia-Ching, and 吳雅菁. "Histogram Enhancement Using Visual Algorithm". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/71284912461699151569.

Full text
Abstract:
Master's thesis
National Defense University, Chung Cheng Institute of Technology
Master's Program in Electronic Engineering
100
Image enhancement can effectively improve the clarity of an image and faithfully reflect the brightness and the subtle colour differences of the captured scene. The most common image enhancement method is gray-scale histogram adjustment, and the traditional histogram equalization method is its most widely used technique: it uses cumulative probability values to effectively stretch the gray-level spacing of the histogram. However, parts of the output image suffer brightness distortion and loss of original information. To remedy these shortcomings, many researchers have proposed segmented image contrast enhancement algorithms, which mainly cut the histogram at its peaks and troughs or at a fixed average value. Although these methods avoid the drawbacks of histogram equalization, they only apply to the enhancement of specific images. This thesis proposes an image enhancement algorithm based on the human visual system. The developed method effectively improves on the disadvantages of histogram equalization, enhances the clarity of image details, and retains the bright detailed information of the original image. The experimental results are compared with algorithms from previous studies and verify the robustness of the visual histogram enhancement (VHE) algorithm and its improved visual quality and performance.
28

Santelli, Francesco. "Archetypes for histogram-valued data". Doctoral thesis, 2018. http://www.fedoa.unina.it/12637/1/francesco_santelli_31.pdf.

Full text
Abstract:
The main innovative contribution of this work is to propose an extension of archetypal analysis to histogram-valued data. As for the methodological framework used to approach the analysis of histogram data, which are complex in nature, the work draws on the insights of Symbolic Data Analysis (SDA) and on the intrinsic relationships between interval-valued and histogram-valued data. After discussing the technique, developed in Matlab, together with its behaviour and properties on a toy example, the applied section proposes it as a tool for quantitative benchmarking. Specifically, the main results are presented from an application of archetypes for histogram data to a case of internal benchmarking of the school system, using data from the INVALSI test for the 2015/2016 school year. In this context the unit of analysis is the individual school, operationally defined through the distributions of its students' scores, considered jointly as histogram-valued symbolic objects.
29

Chen, Hung-Yuan. "Improving Histogram Approximation with Line Representation". 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-1303200709332516.

Full text
30

Wang, Chu-Hsuan, and 王楚軒. "Robust indoor localization using histogram equalization". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5vkxfq.

Full text
Abstract:
Doctoral dissertation
Yuan Ze University
Department of Electrical Engineering
104
Indoor positioning systems have received increasing attention for supporting location-based services in indoor environments. Received Signal Strength (RSS), mostly used in Wi-Fi fingerprinting systems, is known to be unreliable due to environmental and hardware effects. The PHY-layer information about channel quality known as Channel State Information (CSI) can be used thanks to its frequency diversity (OFDM sub-carriers) and spatial diversity (multiple antennas), although the over-fitting caused by the extended CSI dimensions should be considered. This thesis proposes two approaches, based on histogram equalization (HEQ) and information-theoretic learning (ITL), to compensate for hardware variation, orientation mismatch, and over-fitting in a robust localization system. The proposed method involves converting the temporal-spatial radio signal strength into a reference function (i.e., equalizing the histogram). The work makes two principal contributions: first, the equalized RF signal is capable of improving the robustness of location estimation, and second, the more discriminative ITL components provide increased flexibility in determining the number of required components and achieve better computational efficiency.
31

Chen, Hung-Yuan, and 陳鴻元. "Improving Histogram Approximation with Line Representation". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/96749204682141130073.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
94
Histograms are popularly used to approximate a data distribution by a small number of step functions. Maintaining histograms for every single attribute in a database helps estimate the cost of database operations; query optimizers usually require such cost estimates to decide on an efficient query access plan. Histograms are also widely used in approximate query answering systems and data mining. Techniques that store precomputed histograms in the database incur some memory overhead. Therefore, the problem of compressing a histogram into a fixed amount of space with the least error is considered an important issue and has been investigated by researchers for many years. The most common algorithm for compressing a histogram is to divide it into buckets and estimate every bucket by a uniform representation; the problem becomes how to choose the bucket boundaries so as to minimize the estimation error for a given number of buckets. Previous approaches have provided a desirable solution to this bucket boundary selection problem. However, many real-life data distributions are well known to be extremely skewed, and the previous algorithms do not perform well on real data because they do not consider the trend of the data distribution. In this thesis, we propose an algorithm that uses a line segment to estimate each bucket in place of the uniform representation. The algorithm constructs the histogram more precisely when the data distribution is skewed and is more suitable for real-world data. We performed a series of experiments, and the results show that our method has better accuracy when the data distribution is skewed.
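The difference between the usual flat-bucket representation and the line-segment representation proposed here can be seen by comparing their squared errors on a single bucket. The sketch below is illustrative only — the bucket values are invented — and it is not the thesis's boundary-selection algorithm.

```python
import numpy as np

def bucket_errors(freqs):
    """Sum of squared errors for one histogram bucket, approximated either
    by a single flat value (its mean) or by a least-squares line segment."""
    y = np.asarray(freqs, dtype=float)
    x = np.arange(len(y))
    sse_flat = np.sum((y - y.mean()) ** 2)
    slope, intercept = np.polyfit(x, y, 1)      # degree-1 least-squares fit
    sse_line = np.sum((y - (slope * x + intercept)) ** 2)
    return sse_flat, sse_line

# on a skewed, trending bucket the line representation has a much lower error
print(bucket_errors([2, 5, 9, 14, 20, 27]))
```
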
32

Hsieh, Pin-Jou, and 謝品柔. "Virtual Fitting Scheme Using Histogram Technology". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/54752263438383977667.

Full text
Abstract:
Master's thesis
National Chin-Yi University of Technology
Department of Electronic Engineering
101
With the growing development of information technology, the Internet has become one of the major means of modern communication, and the online auction market is booming, with clothing among the most frequently purchased items. However, people often find that clothes look nice on the mannequin but the actual product received does not suit them. A virtual fitting system would let users try combinations online to confirm whether a purchase is right, saving a great deal of time and money.
This study captures portrait and jacket images with a CCD camera. In pre-processing, the colour images are separated into their R, G, and B planes plus a grayscale version, four images in total, and a Sobel filter is then used to detect the edges of the portrait and clothing images. Gaps in the edge images are closed and the interiors filled to obtain complete masks of the portrait and the clothes. Histogram analysis of the full portrait then locates the shoulder line and the midline, while the highest point of the neckline is used to position the garment. Finally, by measuring the shoulder width and upper-body length of the portrait and scaling the shirt accordingly, the jacket is placed on the user at the located points. Experimental results show that this method can accurately locate the shoulder line and the midline of a jacket, so it can conveniently give users the effect of a virtual fitting.
33

Bhubaneswari, M. "Optimized Histogram Equalization for Image Enhancement". Thesis, 2015. http://ethesis.nitrkl.ac.in/6802/1/Optimized_Bhubaneswari_2015.pdf.

Full text
Abstract:
In this project, image enhancement is achieved by performing histogram equalization whose parameters are tuned by optimization algorithms. Histogram equalization is a spatial-domain image enhancement technique that effectively enhances the contrast of an image; however, while it takes care of contrast enhancement, it does not account for abrupt changes in image brightness, so brightness is not preserved. Hence, a modified histogram equalization technique using optimization algorithms is proposed which enhances contrast while ensuring brightness preservation. The idea is to first split the input image histogram into two using Otsu's threshold, then form a set of optimized weighting constraints and apply them to both sub-images. The sub-images are then equalized independently, and their union produces the contrast-enhanced, brightness-preserved output image. Three optimization algorithms are used to find the optimal constraints: first a Genetic Algorithm (GA), second Particle Swarm Optimization (PSO), and third a hybrid PSO algorithm. The results produced by these algorithms are then compared on parameters such as discrete entropy, mean, and number of generations to find out which one outperforms the others.
34

Husemann, Joyce Ann Stevens. "Histogram Estimators of Bivariate Densities (Multivariate, Statistics)". Thesis, 1986. http://hdl.handle.net/1911/15984.

Full text
Abstract:
One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators, which are constructed from intervals whose lengths are determined by the criterion of integrated mean squared error (IMSE) minimization. Similarly, two-dimensional fixed-cell-size histogram estimators of bivariate probability density functions are less efficient than variable-cell-size estimators whose cell sizes are determined from IMSE minimization. Only estimators whose cell sides are parallel to the coordinate axes are examined. The estimators are classified according to the functional dependence of their cell dimensions upon x and y: each cell dimension of the Minimally Restricted Mesh depends upon both x and y; one cell dimension of the Semi-fixed-dimension Mesh is fixed, and the other depends upon either x alone or y alone; one cell dimension of the Variable-dimension Mesh I depends upon x and the other upon y; one cell dimension of the Variable-dimension Mesh II depends upon x alone or y alone and the other depends upon both x and y. The Minimally Restricted Mesh results in the smallest IMSE of the four types, but is not implementable. The other meshes are implementable and are listed above in order of decreasing IMSE. Random vectors from Dirichlet, mixed bivariate and elliptical bivariate normal distributions were generated and used to construct optimal histograms. The Variable-dimension Mesh II produced histograms having IMSEs from 20 to 90 percent smaller than those from histograms based upon optimal fixed-dimension meshes. The most substantial improvements were observed for mixed bivariate normal densities having strongly unequal variances. Modest improvements (20%) were observed for skewed densities and slightly elliptical densities, but no improvements were observed in cases of highly elliptical densities whose axes were rotated 45° from the coordinate axes.
35

Huang, Hui-Tzu, and 黃惠慈. "Reversible Data Embedding Based on Histogram Manipulation". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/mexq49.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Information Management
93
Digital watermarking is a key ingredient of multimedia protection. However, most existing techniques distort the original content as a side effect of image protection. As a way to overcome such distortion, reversible data embedding has recently been introduced and is growing rapidly: the original content can be completely restored after removal of the watermark, which makes it very practical for protecting legal, medical, or other important imagery. In this research, three novel reversible data embedding algorithms based on histogram manipulation are proposed. First, a reversible data embedding algorithm based on contrast stretching is presented; contrast stretching is employed to produce extra space for embedding, and the redundancy in digital images is exploited to achieve very high embedding capacity. Second, a high-capacity reversible data embedding scheme via multilayer embedding is designed, whose performance outperforms previously proposed techniques. Finally, a robust near-reversible data embedding technique is developed to resist operations such as blurring, sharpening, JPEG compression, noise addition, and brightness/contrast adjustment. These three methods have been applied to various standard images, and the experimental results demonstrate a promising outcome; the proposed techniques achieved satisfactory and stable performance in both embedding capacity and visual quality.
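For readers unfamiliar with histogram-based reversible embedding, the sketch below shows the classic peak/zero-point histogram-shifting idea for an 8-bit grayscale image. It is a representative illustration of the family of techniques, not necessarily one of the three algorithms proposed in the thesis, and it makes the simplifying assumption that the minimum bin to the right of the peak is empty.

```python
import numpy as np

def embed_histogram_shift(img, bits):
    """Peak/zero-point histogram shifting: pixels strictly between the peak
    bin and the (assumed empty) zero bin are shifted right by one gray
    level to free the bin next to the peak, and each peak-valued pixel then
    carries one payload bit (0 -> stays at peak, 1 -> moves to peak+1)."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    assert peak < 255, "sketch assumes the peak is not the top gray level"
    zero = int(np.argmin(hist[peak + 1:])) + peak + 1
    out = img.astype(np.int16)
    out[(out > peak) & (out < zero)] += 1       # make room at peak+1
    flat = out.ravel()
    k = 0
    for i in range(flat.size):
        if k >= len(bits):
            break
        if flat[i] == peak:
            flat[i] += int(bits[k])
            k += 1
    return flat.reshape(img.shape).astype(np.uint8), peak, zero
```

Decoding reads bits back from the peak/peak+1 bins and shifts the intermediate pixels left again, restoring the original image exactly.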
36

Chung, Xin-fang, and 鍾欣芳. "Simulation of Histogram Equalization for Classification Problem". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/6xk7u8.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Information Management
99
Histogram equalization (HEQ) is a technique for improving the darkness and brightness of an image by adjusting its gray levels based on the cumulative distribution function (CDF). In recent years the method has been applied to other problems, including robust speech recognition, where it reduces the mismatch between noisy and clean speech, and natural language processing, where it addresses cross-database problems. This thesis analyses by simulation how histogram equalization may influence a simple classification problem. The results show that the rough CDF curve caused by insufficient data leads to a poor mapping between training and test data and degrades performance. Direct and indirect applications of histogram equalization achieve similar performance for linear or non-linear transformations, while the performance of the indirect one is more sensitive to the type of classifier. With a sufficient amount of training data, HEQ and the mean-standard deviation weight (MSW) achieve comparable performance for linear transformations, while HEQ appears superior for non-linear transformations.
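Outside of imaging, "equalizing the histogram" of a feature usually means mapping it through its empirical CDF and then through the inverse CDF of a reference distribution. The sketch below maps a 1-D feature to a standard normal reference; the Gaussian reference and function names are assumptions for illustration, not the simulation setup used in the thesis.

```python
import numpy as np
from scipy.stats import norm

def heq_to_gaussian(train_feats, test_feats):
    """Histogram equalization of a 1-D feature: estimate the empirical CDF
    on training data, then map values through it and through the inverse
    CDF of a standard normal reference distribution."""
    sorted_train = np.sort(np.asarray(train_feats))
    n = len(sorted_train)

    def transform(x):
        ranks = np.searchsorted(sorted_train, np.asarray(x), side="right")
        u = (ranks + 0.5) / (n + 1)            # CDF value kept strictly in (0, 1)
        return norm.ppf(u)

    return transform(train_feats), transform(test_feats)
```
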
37

Chen, I-Chin, and 陳逸勤. "Image Contrast Enhancement using Histogram-based Techniques". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/475gn7.

Full text
Abstract:
Master's thesis
National Quemoy University
Master's Program in Electronic Engineering
103
Many image enhancement methods have been proposed, and histogram equalization is a very popular approach; several histogram-based methods have been proposed to improve enhancement performance under different conditions. In this thesis we propose two new histogram-based image enhancement methods with different criteria. The first is Adaptive Bi-Segment Histogram Equalization (ABHE). In this method, foreground and background pixels are separated by image segmentation based on the histogram; a smaller range of gray levels is assigned to the background pixels and a larger range to the foreground pixels, and histogram equalization is then applied to each range separately. Experimental results show that the objects in the foreground are enhanced, so that more of their details can be recognized. The second method is Local Statistic Information Histogram Equalization (LSIHE), which focuses on two factors. First, the enhancement method should be robust to noise: due to noise, the image may carry an additive gray-level variation of about +5 to -5, and we treat this low-level variation as noise that should not be expanded in the equalization process. Second, expanding the contrast between distant pixels is not as important as between neighbouring pixels, since for human eyes the contrast between distant pixels has no significant effect on image enhancement. Based on these two criteria we propose a new enhancement algorithm, and experimental results show that our method is satisfactory.
38

Kao, Chuan-Ho, and 高泉合. "Image retrieval engine based on color histogram". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/12590954256662331564.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Computer Science and Information Engineering
88
Content-based image retrieval has become increasingly desirable for developing large image databases. We propose a new method of retrieving images from an image database that combines the color, shape, and spatial-relation features of a picture to index images and measure their similarity. For any color-based image retrieval system, the key issues are the selection of the color space and the reduction of the resolution of the color histograms, in order to decrease computational complexity and increase the performance of the similarity measurement. Several color spaces widely used in computer graphics are discussed and compared for color clustering. In addition, we propose a new automatic indexing scheme for the image database based on our clustering method, which can filter images efficiently. As a technical contribution, a Seed-Filling-like algorithm that extracts the shape and spatial-relationship features of an image is proposed, together with a shape normalization algorithm that increases the precision of image retrieval, and temporal interval relations are extended by means of a complete analysis for spatial computation of the image. The system also incorporates a friendly graphical user interface, which allows the user to retrieve images easily.
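As a baseline for the kind of color-histogram indexing described above, a coarse joint RGB histogram plus a histogram-intersection score can be sketched as follows. The bin count and the intersection measure are illustrative choices, not necessarily those of the thesis.

```python
import numpy as np

def rgb_histogram(img, bins_per_channel=8):
    """Coarse joint RGB histogram used as the image signature
    (img: H x W x 3 uint8 array)."""
    q = (img // (256 // bins_per_channel)).reshape(-1, 3)   # quantize channels
    idx = q[:, 0] * bins_per_channel**2 + q[:, 1] * bins_per_channel + q[:, 2]
    hist = np.bincount(idx, minlength=bins_per_channel**3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())
```
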
39

Waqa, Joel Jacob, and 瓦佳. "Tattoo Image Retrieval using Spatial Color Histogram". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/59673552674788485436.

Full text
Abstract:
Master's thesis
Chien Hsin University of Science and Technology
Institute of Information Management
101
With the continuing development of technology, biometrics and soft biometric traits have been combined in almost every possible way to retrieve information. Technology gives users much easier ways of searching digital data in numerous fields such as crime, suspect, and victim identification. The common problem of overflowing databases increasingly affects image databases as well: it is tough and time-consuming to search for individuals in a database by fingerprint alone, and it is beneficial and effective to search for people using unique soft biometrics such as tattoos as an alternative search identifier. With this in mind, this thesis proposes a method that uses a spatial color histogram, a method commonly used for retrieving images. Preprocessing is used to obtain the minimal bundle square (MBS); gridline division is used to split the images, and the extracted vectors are then used to search for and compare similarities with the Euclidean distance. The experimental results show that recall and precision are better than the results obtained using identity features with the city-block distance and the Canberra distance. This alternative form of image retrieval can reliably identify a particular image and can also facilitate forensics.
40

Yang, Han-Ni, and 楊漢妮. "The Study of Adaptive Segmentation Histogram Enhancement". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/35798870691610518156.

Full text
Abstract:
Master's thesis
National Defense University, Chung Cheng Institute of Technology
Master's Program in Electronic Engineering
99
An image enhancement algorithm based on adaptive segmentation for contrast enhancement is presented. In this study, an automatic adaptive segmentation histogram enhancement (ASHE) based on discriminant analysis is first utilized to recursively segment an image into several clusters. After segmentation, the different object and background components fall into separate clusters, called object planes. The dynamic range of each object plane is then adjusted according to its visual characteristics, and each object plane is enhanced within its new dynamic range. Because the proposed algorithm can automatically segment an image into different object planes and enhance the image according to the visual characteristics of each plane, both the object and background components of the image are well enhanced. Experimental results on poor-contrast images and comparisons with some previous studies are provided to demonstrate the robustness, visual quality, and effectiveness of the proposed algorithm.
41

Jhan, Shih-Sian, and 詹士賢. "Sobel Histogram Equalization for Image Contrast Enhancement". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/76534008994939627485.

Full text
Abstract:
Master's thesis
Leader University
Institute of Applied Information
95
Contrast enhancement is an important technique in image processing. Although many contrast enhancement methods have been proposed, they do not focus on the edge quality of the image. In this study, Sobel histogram equalization (SHE) is proposed to enhance image contrast. In SHE, the image is divided into two regions, edge and non-edge, using the Sobel edge detector; the contrast of the two regions is enhanced individually, and they are then merged into a whole image by histogram equalization. In our experiments, SHE outperforms the other methods.
42

Lin, Jun-Yu, e 林俊宇. "Defect correction algorithms for the Image Histogram". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/17225215578480383870.

Testo completo
Abstract (sommario):
Master's thesis, 僑光科技大學, 資訊科技研究所, academic year 101 (ROC calendar)
The development of modern video imaging technology is not limited to imaging itself; it has also driven panel display technology, with the resolution of electronic devices becoming ever finer, colour gamuts wider, and colour accuracy higher. The requirements on the detail of colour image information have therefore increased. However, improper use of image processing techniques, or insufficiently careful computation, often causes the loss of image detail or colour information. The main purpose of this thesis is to provide a new algorithm that repairs the gradation of an image whose detail has been lost through such processing, re-patching the missing colour information so that the colour information of the image is recovered in as much detail as possible. The image then has a better visual appearance for screen display or print.
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Pyng, Sue Yuh, e 蘇玉平. "Image segmentation by using adaptive window histogram". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/11884872135793268364.

Testo completo
Abstract (sommario):
Master's thesis, 國立臺灣科技大學, 工程技術研究所, academic year 83 (ROC calendar)
In this thesis, we present an algorithm for segmenting gray-level images. The algorithm accounts for local intensity variations and includes spatial constraints on the image. Local intensity variations are modelled by a local average estimated over a sliding window whose size is determined by minimizing a criterion based on the AIC information criterion. A K-means algorithm is used to estimate the initial classes and the local intensity variations within the capture window. Spatial constraints are included through a Markov random field model. The segmentation algorithm is adaptive and preserves important details in the segmented images. Moreover, it is suitable for parallel computing and easy to implement in hardware.
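The sliding-window average and the K-means initialisation mentioned above can be sketched as follows; the AIC-based window-size selection and the Markov random field refinement are the heart of the thesis and are deliberately omitted, so the window size of 7 and k = 3 are only illustrative.

    import numpy as np

    def local_mean(gray, win=7):
        """Local average over a sliding window, modelling slow intensity variation."""
        pad = win // 2
        padded = np.pad(gray.astype(float), pad, mode="edge")
        out = np.zeros(gray.shape, float)
        for dy in range(win):
            for dx in range(win):
                out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
        return out / (win * win)

    def kmeans_1d(values, k=3, iters=20):
        """Plain 1-D K-means on pixel intensities, used here as the initial classes."""
        values = values.astype(float)
        centers = np.linspace(values.min(), values.max(), k)
        for _ in range(iters):
            labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = values[labels == c].mean()
        return labels, centers

    # Example: initial class labels for the residual after removing the local average.
    # residual = gray - local_mean(gray)
    # labels, centers = kmeans_1d(residual.ravel())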
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Fu, Li Yao, e 李曜輔. "Color Image Segmentation Using Circular Histogram Thresholding". Thesis, 1994. http://ndltd.ncl.edu.tw/handle/42055582127526155434.

Testo completo
Abstract (sommario):
Master's thesis, 國立中央大學, 資訊及電子工程研究所, academic year 82 (ROC calendar)
A circular histogram thresholding method for color image segmentation is proposed. First, a circular hue histogram is constructed in a UCS (I, H, S) color space. The histogram is then smoothed by a scale-space filter, and the circular histogram is transformed into a traditional histogram form. Finally, the histogram is recursively thresholded based on the maximum principle of analysis of variance. Three performance comparisons are reported: (1) the proposed thresholding on the circular histogram versus a traditional histogram; (2) the proposed thresholding versus clustering; (3) thresholding on hue attributes of UCS versus non-UCS color spaces. Several benefits of the proposed approach are identified in the experiments.
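A small sketch of the circular-to-linear transformation step only (the scale-space smoothing and the recursive variance-based thresholding are omitted); the bin count of 360 and the choice of the emptiest bin as the cut point are assumptions.

    import numpy as np

    def unwrap_circular_histogram(hues, bins=360):
        """Build a circular hue histogram and cut it at its emptiest bin so that
        ordinary linear thresholding can be applied afterwards."""
        hist, _ = np.histogram(hues % 360, bins=bins, range=(0, 360))
        cut = int(np.argmin(hist))          # least-populated bin = cut point
        linear_hist = np.roll(hist, -cut)   # rotate so the cut comes first
        return linear_hist, cut

    # Hue values shifted by `cut` (modulo 360) can then be thresholded
    # exactly like an ordinary, non-circular histogram.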
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Wu, Zong-Han, e 吳宗翰. "Face Recognition based on Vector Quantization Histogram". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/nw268m.

Testo completo
Abstract (sommario):
Master's thesis, 國立暨南國際大學, 電機工程學系, academic year 105 (ROC calendar)
Face recognition is a popular field of computer technology research that belongs to biometric identification, in which individuals are distinguished by their own biological characteristics. Biometric traits include the face, fingerprint, palm, iris, voice, body, and personal habits, with corresponding technologies such as face recognition, fingerprint recognition, iris recognition, retinal recognition, speech recognition, and signature recognition. This thesis uses a vector quantization (VQ) histogram method, a simple and reliable approach to face recognition: the image is cut into small blocks, each block is matched to a code vector to obtain an index, and the histogram of these indices serves as an effective personal feature. Two-dimensional discrete wavelet transform (DWT) processing, low-pass filtering, minimum-intensity subtraction, and VQ processing are used to produce the histogram. The study found that averaging the histograms of the same person improves the recognition rate, and that appropriately reducing the image size increases processing speed without affecting the recognition rate. On the ORL face database, experimental results show an average recognition rate of 96.2% for 400 images of 40 persons (10 images per person), where the images were collected at different times, under different lighting, with different expressions, and with different facial details (with or without glasses).
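A minimal sketch of the VQ-index histogram feature, assuming a codebook of shape (num_codewords, block*block) has already been trained (for example with the LBG algorithm) and skipping the DWT, low-pass filtering, and minimum-intensity subtraction steps; the 4x4 block size and the per-block normalization are illustrative only.

    import numpy as np

    def vq_index_histogram(gray, codebook, block=4):
        """Histogram of nearest-codeword indices over non-overlapping blocks."""
        h, w = gray.shape
        blocks = []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                b = gray[i:i + block, j:j + block].astype(float).ravel()
                blocks.append(b - b.min())      # crude per-block normalization
        blocks = np.array(blocks)
        # nearest codeword for every block (Euclidean distance)
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        hist = np.bincount(idx, minlength=len(codebook)).astype(float)
        return hist / hist.sum()

    def closest_identity(hist, gallery):
        """Return the gallery identity whose (averaged) histogram is closest."""
        return min(gallery, key=lambda name: np.linalg.norm(hist - gallery[name]))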
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Poosala, Viswanath. "Histogram-based estimation techniques in database systems". 1997. http://catalog.hathitrust.org/api/volumes/oclc/37585530.html.

Testo completo
Abstract (sommario):
Thesis (Ph. D.)--University of Wisconsin--Madison, 1997.
Typescript. Includes bibliographical references (leaves 209-220).
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Tsai, Mei-ju, e 蔡梅茹. "Lossless Information Hiding Based on Histogram Modification". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/21149759073439377234.

Testo completo
Abstract (sommario):
Master's thesis, 朝陽科技大學, 資訊管理系碩士班, academic year 98 (ROC calendar)
With the rapid growth and popularization of the Internet, people can transmit information quickly and easily; in the past it took many days to receive information from friends far away. However, using the Internet also increases the probability that information will be intercepted by hackers, so information security has become more important over the last decade. Recently, reversible data hiding methods have been studied intensively by many professionals and experts. The basic idea is that the embedded information can be completely extracted from the received stego image, the original image can be restored, and the image can then be re-used. Many reversible data hiding techniques have been proposed, such as difference expansion, histogram modification, and the integer wavelet transform. Among them, histogram modification, which hides messages in the peak pixel values of the image histogram, achieves the highest image quality. In this thesis we propose two reversible data hiding methods based on histogram modification: the first is based on codebook clustering, and the second uses adaptive scanning. The resulting differences are collected to generate a histogram, and the histogram-based hiding mechanism is then used to embed the secret information. As shown in the experimental results, both proposed methods provide good image quality and normal embedding capacity.
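The peak-bin idea can be made concrete with the classic pixel-domain histogram-shifting scheme sketched below; the thesis actually works on difference histograms with codebook clustering or adaptive scanning, which this sketch does not reproduce, and it assumes that a peak below 255 and an empty bin to its right exist.

    import numpy as np

    def embed_histogram_shift(gray, bits):
        """Classic peak/zero-bin histogram shifting on an 8-bit image (embedding only)."""
        img = gray.astype(np.int16).copy()
        hist = np.bincount(img.ravel(), minlength=256)
        peak = int(hist.argmax())                         # most frequent gray value
        zero = int(hist[peak + 1:].argmin()) + peak + 1   # emptiest bin above the peak
        # shift values strictly between peak and zero right by one to free bin peak+1
        img[(img > peak) & (img < zero)] += 1
        # embed: a '1' bit moves a peak-valued pixel to peak+1, a '0' leaves it alone
        flat = img.ravel()
        positions = np.flatnonzero(flat == peak)[:len(bits)]
        flat[positions] += np.asarray(bits[:len(positions)], dtype=np.int16)
        return flat.reshape(img.shape).astype(np.uint8), peak, zero

    # Capacity equals the number of peak-valued pixels; the receiver needs
    # (peak, zero) to extract the bits and undo the shift exactly.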
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Chuang, Chialung, e 莊佳龍. "Piece-Wise Histogram Equalization For Image Enhancement". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/03838447444191654772.

Testo completo
Abstract (sommario):
Master's thesis, 義守大學, 資訊工程學系碩士在職專班, academic year 100 (ROC calendar)
Histogram equalization (HE), which has been studied intensively for decades, is one of the most popular contrast enhancement technologies because it produces good results without complex parameters. It is widely used in a variety of image applications, for instance radar signal processing and medical image processing. However, HE suffers from the choice of a proper dynamic range, which can over-enhance images and cause poor visual quality. Common HE methods use a piece-wise strategy that decomposes the input image into N sub-images, enhances each sub-image individually, and combines the enhanced sub-images into the result. Existing piece-wise algorithms, however, do not guarantee successful enhancement. In this thesis we propose a novel piece-wise algorithm that uses a "unilateralism" method to enhance image details without losing the original brightness of the source image. Results indicate that the proposed method provides efficient enhancement. Furthermore, the method is extended to enhance color images; simulation results are demonstrated and discussed.
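Because the thesis's "unilateralism" rule is not spelled out in the abstract, the sketch below shows only the generic piece-wise idea: split the intensity range into equal pieces and equalize each piece inside its own sub-range so the overall brightness ordering is preserved; the number of pieces is an arbitrary parameter, not the thesis's setting.

    import numpy as np

    def piecewise_equalize(gray, n_pieces=2):
        """Equalize each intensity sub-range separately, keeping pieces in place."""
        out = gray.astype(float).copy()
        edges = np.linspace(0, 256, n_pieces + 1)
        for k in range(n_pieces):
            lo, hi = int(edges[k]), int(edges[k + 1])
            mask = (gray >= lo) & (gray < hi)
            vals = gray[mask].astype(int)
            if vals.size == 0:
                continue
            hist = np.bincount(vals - lo, minlength=hi - lo)
            cdf = np.cumsum(hist) / vals.size
            out[mask] = lo + cdf[vals - lo] * (hi - lo - 1)
        return out.astype(np.uint8)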
Gli stili APA, Harvard, Vancouver, ISO e altri
49

KHANNA, CHINTAN. "SATELLITE IMAGE CONTRAST ENHANCEMENT USING MODIFIED HISTOGRAM". Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15235.

Testo completo
Abstract (sommario):
This project presents a study of various histogram-equalisation-based contrast enhancement (CE) techniques, followed by the proposal of a novel CE algorithm for satellite and aerial images, referred to as Contour Based Histogram Equalisation (CBHE). The algorithm presents a novel method to capture the structural property of an image using its contour. It addresses the inherent drawbacks of HE, namely artefacts and saturation, by decreasing the contribution of high-probability pixels in the histogram and increasing that of low-probability pixels. Finally, the algorithm enhances image features by adjusting the DCT coefficients. CBHE generates well-contrasted images with richer detail over a varied set of images, including satellite and aerial images. It is computationally comparable to HE, does not introduce noise or saturation, preserves the characteristic shape of the original image histogram, and does not further enhance an image that already has high contrast; it thus qualifies as an effective pre-processing step. CBHE is compared with the conventional and the best HE-based techniques both quantitatively and visually. The quantitative analysis of the results is carried out using several standard measures such as Discrete Entropy, Peak Signal to Noise Ratio (PSNR), Measure of Enhancement (EME), Absolute Mean Brightness Error (AMBE), Gradient Magnitude Similarity Deviation (GMSD) and the Structural Similarity Index (SSI) over varied datasets.
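The bin-reweighting idea (suppress high-probability bins, lift low-probability ones) can be illustrated with the simple power-law weighting below; this is an assumption for illustration only, not CBHE's contour-based weighting, and it ignores the DCT adjustment step.

    import numpy as np

    def weighted_histogram_equalize(gray, gamma=0.5):
        """HE on a re-weighted histogram: raising bin counts to a power below 1
        shrinks dominant bins and lifts sparse ones, limiting over-enhancement."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        weighted = hist ** gamma                 # compress high-probability bins
        cdf = np.cumsum(weighted) / weighted.sum()
        lut = np.round(cdf * 255).astype(np.uint8)
        return lut[gray]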
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Muller, John Craig. "Adaptive histogram regrading for real-time image enhancement". 1989. http://hdl.handle.net/1993/17096.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
