Dissertations / Theses on the topic 'ENHANCEMENT ALGORITHM'

Consult the top 50 dissertations / theses for your research on the topic 'ENHANCEMENT ALGORITHM.'

1

Andrianakis, Ioannis. "Bayesian algorithms for speech enhancement." Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/66244/.

Abstract:
The portability of modern voice processing devices allows them to be used in environments where background noise conditions can be adverse. Background noise can deteriorate the quality of speech transmitted through such devices, but speech enhancement algorithms can ameliorate this degradation to some extent. The development of speech enhancement algorithms that improve the quality of noisy speech is the aim of this thesis, which consists of three main parts. In the first part, we propose a framework of algorithms that estimate the clean speech Short Time Fourier Transform (STFT) coefficients. The algorithms are derived from the Bayesian theory of estimation and can be grouped according to i) the STFT representation they estimate, ii) the estimator they apply, and iii) the speech prior density they assume. Apart from the introduction of algorithms that surpass the performance of similar algorithms in the literature, the compilation of the above framework offers insight into the effect and relative importance of the different components of the algorithms (e.g. prior, estimator) on the quality of the enhanced speech. In the second part of this thesis, we develop methods for the estimation of the power of time-varying noise. The main outcome is a method that exploits some similarities between the distribution of the noisy speech spectral amplitude coefficients within a single frequency bin and the corresponding distribution of the corrupting noise. The above similarities allow the extraction, from a window of past spectral amplitude observations, of samples that are more likely to correspond to noise. The extracted samples are then used to produce an estimate of the noise power. In the final part of this thesis, we are concerned with the incorporation of the time and frequency dependencies of speech signals in our estimation model. The theoretical framework on which the modelling is based is provided by Markov Random Fields (MRFs). Initially, we develop a MAP estimator of speech based on the Gaussian MRF prior. We then introduce the Chi MRF, which is employed in the development of an improved speech estimator. Finally, the performance of fixed and adaptive schemes for the estimation of the MRF parameters is investigated.
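As a rough orientation for the first part of this framework, the simplest member of the family of Bayesian STFT estimators is the Wiener-style gain under Gaussian priors. The sketch below is an illustration only, not the author's exact estimators; the frame length and the a-priori SNR rule are assumed choices:

```python
# Minimal sketch of a Bayesian (Wiener/MMSE) gain applied per STFT bin,
# assuming Gaussian speech and noise priors and a known per-bin noise PSD.
import numpy as np
from scipy.signal import stft, istft

def wiener_enhance(noisy, fs, noise_psd, nperseg=512):
    _, _, Y = stft(noisy, fs=fs, nperseg=nperseg)
    noisy_psd = np.abs(Y) ** 2
    # Crude a-priori SNR by power subtraction (decision-directed smoothing
    # would be the usual refinement).
    xi = np.maximum(noisy_psd / noise_psd[:, None] - 1.0, 1e-3)
    gain = xi / (1.0 + xi)                 # MMSE gain for Gaussian priors
    _, enhanced = istft(gain * Y, fs=fs, nperseg=nperseg)
    return enhanced
```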
2

Imanguliyev, Azar. "Enhancements for the Bees Algorithm." Thesis, Cardiff University, 2013. http://orca.cf.ac.uk/56503/.

Abstract:
This work introduces new enhancements to the Bees Algorithm in order to improve its overall performance. These enhancements are an early neighbourhood search process, efficiency-based recruitment for the neighbourhood search process, a hybrid strategy involving tabu search, a new escape mechanism to escape local optima with similar fitness values, and autonomy to minimise interaction between the search process and the user. The proposed enhancements were applied alone or in pairs to develop improved versions of the Bees Algorithm. Three Enhanced Bees Algorithms were introduced: the Early Neighbourhood Search and Efficiency Based Recruitment Bees Algorithm (ENSEBRBA), the Hybrid Tabu Bees Algorithm (TBA) and the Autonomous Bees Algorithm (ABA). The ENSEBRBA, with an empowered initialisation stage and extra recruitment for neighbourhood search, is introduced to improve the performance of the Bees Algorithm on high-dimensional problems. The TBA is proposed as a new version of the Bees Algorithm which utilises memory lists to memorise less productive patches. Moreover, a local escape strategy was also implemented in this algorithm. The proposed modifications increased the productivity of the Bees Algorithm by decreasing the number of evaluations needed to converge to the global optimum. The ABA is developed to provide independence to the Bees Algorithm, so that it is able to self-tune its control parameters in a sub-optimal manner. All enhanced algorithms were tested on continuous benchmark functions and, additionally, statistical analysis was carried out. The observed experimental results proved that the proposed enhancements improved the Bees Algorithm's performance.
3

Gagnon, Luc. "A speech enhancement algorithm based upon resonator filterbanks." Thesis, University of Ottawa (Canada), 1991. http://hdl.handle.net/10393/7767.

4

Weith-Glushko, Seth A. "Quantitative analysis of infrared contrast enhancement algorithms /." Online version of thesis, 2007. http://hdl.handle.net/1850/4208.

5

Beattie, Robert Scott. "Side scan sonar image formation, restoration and modelling." Thesis, Robert Gordon University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318551.

6

Arif, Annatoma. "Blurred Fingerprint Image Enhancement: Algorithm Analysis and Performance Evaluation." Miami University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=miami1473428137332997.

7

Hines, Glenn Derrick. "Real-time Retinex image enhancement: Algorithm and architecture optimizations." W&M ScholarWorks, 2006. https://scholarworks.wm.edu/etd/1539623490.

Abstract:
The field of digital image processing encompasses the study of algorithms applied to two-dimensional digital images, such as photographs, or three-dimensional signals, such as digital video. Digital image processing algorithms are generally divided into several distinct branches including image analysis, synthesis, segmentation, compression, restoration, and enhancement. One particular image enhancement algorithm that is rapidly gaining widespread acceptance as a near optimal solution for providing good visual representations of scenes is the Retinex. The Retinex algorithm performs a non-linear transform that improves the brightness, contrast and sharpness of an image. It simultaneously provides dynamic range compression, color constancy, and color rendition. It has been successfully applied to still imagery captured from a wide variety of sources including medical radiometry, forensic investigations, and consumer photography. Many potential users require a real-time implementation of the algorithm. However, prior to this research effort, no real-time version of the algorithm had ever been achieved. In this dissertation, we research and provide solutions to the issues associated with performing real-time Retinex image enhancement. We design, develop, test, and evaluate the algorithm and architecture optimizations that we developed to enable the implementation of the real-time Retinex specifically targeting specialized, embedded digital signal processors (DSPs). This includes optimization and mapping of the algorithm to different DSPs, and configuration of these architectures to support real-time processing. First, we developed and implemented the single-scale monochrome Retinex on a Texas Instruments TMS320C6711 floating-point DSP and attained 21 frames per second (fps) performance. This design was then transferred to the faster TMS320C6713 floating-point DSP and ran at 28 fps. Then we modified our design for the fixed-point TMS320DM642 DSP and achieved an execution rate of 70 fps. Finally, we migrated this design to the fixed-point TMS320C6416 DSP. After making several additional optimizations and exploiting the enhanced architecture of the TMS320C6416, we achieved 108 fps and 20 fps performance for the single-scale, monochrome Retinex and three-scale, color Retinex, respectively. We also applied a version of our real-time Retinex in an Enhanced Vision System. This provides a general basis for using the algorithm in other applications.
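For orientation, the single-scale Retinex at the core of this work is commonly written as R = log(I) - log(G * I), the log ratio of the image to a Gaussian-blurred surround. A minimal floating-point sketch follows; the surround scale and output stretch are assumed illustrative choices, and none of the dissertation's fixed-point DSP optimizations are reproduced:

```python
# Single-scale Retinex sketch for one grey-scale channel (float images).
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0):
    img = image.astype(np.float64) + 1.0           # avoid log(0)
    surround = gaussian_filter(img, sigma=sigma)   # Gaussian surround
    r = np.log(img) - np.log(surround)
    # Stretch to [0, 1] for display.
    return (r - r.min()) / (r.max() - r.min() + 1e-12)
```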
8

Melnik, Sergey. "Generic model management : concepts and algorithms /." Berlin [u.a.] : Springer, 2004. http://www.loc.gov/catdir/enhancements/fy0813/2004104636-d.html.

9

Teillaud, Monique. "Towards dynamic randomized algorithms in computational geometry /." Berlin [u.a.] : Springer, 1993. http://www.loc.gov/catdir/enhancements/fy0815/93023628-d.html.

10

Gorokhovskiy, Konstantin. "Enhancement of demosaicking algorithms for digital still cameras." Thesis, Loughborough University, 2008. https://dspace.lboro.ac.uk/2134/35680.

Abstract:
Demosaicking is the interpolation of missing colour samples in a colour filter array (CFA). The term demosaicking has its roots in the word "mosaic", which in turn explains the structure of the colour filter array typically used in a digital camera. The detectors (cells) for blue, red and green colours, or their combinations, are spread regularly (mosaicked) over the electronic sensor chip (CMOS, CCD or other technology). The resulting mosaic of colour samples is passed through an interpolation procedure to determine the intensities of the colours that are not sampled by the array. The pattern of the mosaic is important, as most interpolation methods make use of a priori knowledge of the configuration for a more precise image restoration. The most popular is currently the Bayer CFA. It has twice as many green detectors as blue or red ones; however, there are alternative sensors based on cyan, magenta, yellow and green colours.
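To make the interpolation concrete, here is a sketch of plain bilinear demosaicking for an assumed RGGB Bayer layout, filling each missing sample by normalised convolution over its colour plane. This is the textbook baseline, not one of the enhanced methods developed in the thesis:

```python
# Bilinear demosaicking sketch for an RGGB Bayer CFA (assumed layout).
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(cfa):
    cfa = np.asarray(cfa, dtype=float)
    h, w = cfa.shape
    masks = {c: np.zeros((h, w)) for c in "RGB"}
    masks["R"][0::2, 0::2] = 1.0     # R on even rows, even columns
    masks["G"][0::2, 1::2] = 1.0     # two G sites per 2x2 cell
    masks["G"][1::2, 0::2] = 1.0
    masks["B"][1::2, 1::2] = 1.0     # B on odd rows, odd columns
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    out = np.zeros((h, w, 3))
    for i, c in enumerate("RGB"):
        # Normalised convolution: average only over available samples.
        out[..., i] = (convolve(cfa * masks[c], kernel)
                       / convolve(masks[c], kernel))
    return out
```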
11

Nahle, Salim. "Algorithms for capacity enhancement in wireless mesh networks." Paris 6, 2009. http://www.theses.fr/2009PA066523.

Abstract:
In this thesis, we study the fairness problem in wireless mesh networks based on Wi-Fi technology. We propose a distributed algorithm to improve the fairness and capacity of these networks, based on the use of a second queue per node dedicated to storing the packets of other nodes. In addition, the algorithm sets the contention parameters inversely proportional to the fair rates computed from a weighted contention graph. In the second part of this thesis, we propose routing and scheduling algorithms for WiMAX mesh networks in both the single-channel and multi-channel cases, and we show the efficiency of these algorithms by simulation. Finally, we propose node placement algorithms for WiMAX wireless mesh networks, whose goal is to minimise the number of stations needed to ensure full coverage of a route at a given throughput.
12

Mist, Joseph James. "Interactive evolutionary algorithms for image enhancement and creation." Thesis, University of Kent, 2014. https://kar.kent.ac.uk/47662/.

Abstract:
Image enhancement and creation, particularly for aesthetic purposes, are tasks for which the use of interactive evolutionary algorithms would seem to be well suited. Previous work has concentrated on the development of various aspects of the interactive evolutionary algorithms and their application to various image enhancement and creation problems. Robust evaluation of algorithmic design options in interactive evolutionary algorithms and the comparison of interactive evolutionary algorithms to alternative approaches to achieving the same goals is generally less well addressed. The work presented in this thesis is primarily concerned with different interactive evolutionary algorithms, search spaces, and operators for setting the input values required by image processing and image creation tasks. A secondary concern is determining when the use of the interactive evolutionary algorithm approach to image enhancement problems is warranted and how it compares with alternative approaches. Various interactive evolutionary algorithms were implemented and compared in a number of specifically devised experiments using tasks of varying complexity. A novel aspect of this thesis, with regards to other work in the study of interactive evolutionary algorithms, was that statistical analysis of the data gathered from the experiments was performed. This analysis demonstrated, contrary to popular assumption, that the choice of algorithm parameters, operators, search spaces, and even the underlying evolutionary algorithm has little effect on the quality of the resulting images or the time it takes to develop them. It was found that the interaction methods chosen when implementing the user interface of the interactive evolutionary algorithms had a greater influence on the performances of the algorithms.
13

Reddy, Manoj Kumar M. "A new adaptive edge enhancement algorithm for color laser printers /." Online version of thesis, 2007. http://hdl.handle.net/1850/5535.

14

Okuno, Shingo. "Parallelization of Graph Mining using Backtrack Search Algorithm." 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225743.

15

O'Rourke, William Thomas. "Real-world evaluation of mobile phone speech enhancement algorithms." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000585.

16

Gerlach, Adam R. "Performance Enhancements of the Spin-Image Pose Estimation Algorithm." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1267730727.

17

Cimiano, Philipp. "Ontology learning and population from text : algorithms, evaluation and applications /." New York, NY : Springer, 2006. http://www.loc.gov/catdir/enhancements/fy0824/2006931701-d.html.

18

Enlund, Anders. "How do bass enhancement algorithms impact mixing decisions when monitoring on headphones?" Thesis, Luleå tekniska universitet, Medier, ljudteknik och teater, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-71449.

Abstract:
As music production moves from big professional studios into small bedroom studios, headphones become more commonly used for monitoring. This brings a new set of problems based on the limitations of headphones compared to loudspeakers. This research explores how a bass enhancement algorithm impacts the results when mixing low-frequency audio on headphones. This is done through a simple mixing experiment in which subjects are tasked with balancing the amplitude of a low-frequency element in a song, both with and without a bass enhancement algorithm enhancing the headphone monitoring. It is shown that while subjects do not perceive a difference in difficulty with this task, the results differ, as subjects overall mix the bass frequencies lower in amplitude when aided by the bass enhancement algorithm. It is concluded that the bass enhancement algorithm is useful in this manner.
19

Nasiri, Avanaki Mohammad Reza. "Image enhancement algorithms and system optimization for optical coherence tomography." Thesis, University of Kent, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.587527.

Abstract:
Optical imaging is becoming a method of choice in applications where high resolution images are required non-invasively. Optical imaging technologies are capable of representing the internal structure of the sample investigated across a range of spatial scales. In this project, we worked on two optical imaging systems: optical coherence tomography (OCT) and confocal microscopy (CM). Like every other imaging system, optical imaging systems have limitations, mainly due to scattering and noise. Four separate limiting factors of imaging, namely speckle noise, intensity decay due to tissue absorption, aberrations, and point spread function (PSF) distortion, are investigated in this thesis, and a number of algorithms are devised to reduce their impact so that an enhanced image is achieved. The hardware of the imaging systems is also modified to improve their performance. We have developed two speckle reduction algorithms based on artificial neural network (ANN) and temporal compounding methods. The algorithms are tested successfully on a variety of skin images, retina, larynx, human teeth and also drosophila images with a view to improving the signal-to-noise ratio (SNR) and contrast. An attenuation compensation algorithm is designed based on the Beer-Lambert law using a novel skin layer detection method. The algorithm is successfully tested on in-vivo OCT skin images of a human fingertip. For aberration correction, a sensor-less adaptive optics system is studied along with a blind optimization algorithm. Three optimization algorithms are tested effectively on a CM system: simulated annealing, genetic algorithm and particle swarm optimization. To eliminate the effect of the distortion of the PSF of the OCT system, a deconvolution technique with the Lucy-Richardson algorithm is used. The PSF of the OCT system is estimated from images of a specially designed phantom. The algorithm is successfully evaluated using OCT images of healthy tissue including dorsal skin of the hand, basaloid eyelid skin, skin of the fingertip, and basaloid larynx tissues. Compared to the original images, the improved images are less blurred with higher contrast. We also improve a dynamic focus (DF-) OCT system operated at 830 nm to be able to image at the 1300 nm wavelength. Different tissues of skin, larynx, eyelid, and several phantoms are imaged by this system.
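The deconvolution step mentioned above is the standard Lucy-Richardson iteration; a generic sketch is given below. Here the PSF is taken as given, whereas the thesis estimates it from a specially designed phantom:

```python
# Lucy-Richardson deconvolution sketch (2-D, known PSF assumed).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)       # correction factor
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```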
20

Virgulti, Marco. "Advanced algorithms for sound reproduction enhancement in portable multimedia devices." Doctoral thesis, Università Politecnica delle Marche, 2015. http://hdl.handle.net/11566/243003.

Abstract:
In this thesis, a study of the problems related to audio reproduction in the context of mobile devices has been carried out. The main topics are audio spatialization techniques, low-frequency enhancement approaches and advanced techniques for room equalization, taking into consideration different types of equalizers. The work can be divided into two parts. In the first part, an advanced audio reproduction enhancement architecture for mobile devices is presented. The main problems to solve are the low computational capabilities and the physical dimensions of the loudspeakers. The proposed solution is to develop a suitable architecture of advanced algorithms, capable of enhancing the audio reproduction, creating an integrated acoustic system with low computational effort. The system architecture is composed of three components, i.e., a crosstalk canceller, a multipoint equalizer and a virtual bass enhancer. In the second part of the work, a study of graphic equalizers is presented. In this context, a high-quality graphic equalizer FIR filterbank structure has been optimized in order to reduce its computational cost while keeping its quality untouched; two different IIR filter approximations of the FIR prototype have been proposed. After a theoretical study, the work focused on software implementations of the proposed approaches. For the advanced audio system, two implementations have been realized for the main mobile operating systems, i.e., Android and iOS; for the high-performance graphic equalizers, the implementations are realized on the NU-Tech real-time platform. Finally, a full set of tests has been performed on these platforms to obtain performance evaluations and measurements of the required computational cost. The integrated audio system was evaluated with both objective and subjective tests, while the proposed graphic equalizers were evaluated in terms of frequency bandwidth and computational complexity and compared with the reference FIR approach, showing the effectiveness of the proposed approaches.
21

Bastari, Alessandro. "New topics and innovative algorithms for the speech enhancement problem." Doctoral thesis, Università Politecnica delle Marche, 2007. http://hdl.handle.net/11566/242583.

22

Turroni, Francesco <1983>. "Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4378/1/turroni_francesco_tesi.pdf.

Abstract:
The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerprint is very noisy, we are not able to detect a reliable set of features. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands like wildfire toward low-quality ones. A precise estimation of the orientation field would greatly simplify the estimation of other fingerprint features (singular points, minutiae) and improve the performance of a fingerprint recognition system. The fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, the introduction of a new hybrid orientation extraction method, which follows an adaptive scheme, allows the orientation extraction in noisy fingerprints to be improved significantly. Scientific papers typically propose recognition systems that integrate many modules, and therefore an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine an actual progress in the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually-marked orientation ground truth) along with fingerprint matching benchmarks in the FVC-onGoing framework. The success of this framework is discussed by providing relevant statistics: more than 1450 algorithms submitted and two international competitions.
24

Lee, Michael, and Daniel Doonan. "Image Reconstruction and Resolution Enhancement Algorithm for FMCW Medical Ultrasound Imaging Systems." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595623.

Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
This paper presents an overview of the graduate research program on the design and development of the high-performance image reconstruction and resolution enhancement algorithm for an advanced medical ultrasound imaging system. The data acquisition is conducted with a micro flexible transceiver array, operating in the multi-static stepped-frequency FMCW mode. The objective of this system is to perform high-speed high-resolution image reconstruction for biomedical applications.
25

Alexander, Jeremy Brandon. "Enhancement of the daytime GOES-based aircraft icing potential algorithm using MODIS." Thesis, Monterey California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/2326.

Abstract:
Approved for public release, distribution is unlimited
In this thesis, a fuzzy logic algorithm is developed for the detection of potential aircraft icing conditions using the Moderate-Resolution Imaging Spectroradiometer (MODIS). The fuzzy MODIS algorithm is developed in a manner similar to the cloud mask currently used to process MODIS imagery. The MODIS icing potential detection algorithm uses thresholds for 8 channels in a series of 12 tests to determine the probability of icing conditions being present within a cloud. The MODIS algorithm results were compared to results of the GOES icing potential detection algorithm run on MODIS imagery for 4 cases. When compared to positive icing pilot reports for the cases, the MODIS algorithm identified regions where icing was encountered more effectively than the GOES algorithm. Furthermore, the use of fuzzy thresholds on MODIS rather than the hard thresholds of the GOES algorithm allowed for less restrictive coverage of potential icing conditions, making the MODIS algorithm more reasonable in assessing all cloud regions for icing potential. The results found here are preliminary, as further statistical analysis with a larger validation dataset would be more effective. Algorithm details are provided in the appendix for reference.
Captain, United States Air Force
26

Alexander, Jeremy B. "Enhancement of the daytime GOES-based aircraft icing potential algorithm using MODIS /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FAlexander.pdf.

27

Shannon, Benjamin J. "Speech Recognition and Enhancement using Autocorrelation Domain Processing." Thesis, Griffith University, 2007. http://hdl.handle.net/10072/365193.

Abstract:
From a young age, humans learn language skills and develop them to the point that they become reflex-like. As a communication modality, speech is efficient, natural and intrinsically understood. By developing spoken language interfaces for machines, the same kinds of benefits can be realised for the human-machine interaction. Development of machine based speech recognition has been in progress for the past 50 years. In this time significant advances have been made, but the performance of current solutions in the presence of ambient acoustic noise is one factor holding the technology back. Contributing to the overall deficiency of the system is the performance of current feature extraction methods. These techniques cannot be described as robust when deployed in the dynamic acoustic environments typically encountered in everyday life. Ambient background noise also affects speech communication between humans. Restoration of a degraded speech signal by a speech enhancement algorithm can help to reduce this effect. Techniques developed for improving the noise robustness of feature extraction algorithms can also find application in speech enhancement algorithms. Contributions made in this thesis are aimed at improving the performance of automatic speech recognition in the presence of ambient acoustic noise and the quality of speech perceived by human listeners in the same conditions. The proposed techniques are based on processing the degraded speech signal in the autocorrelation domain. Based on the differences in the production mechanisms of speech and noise signals, transforming them into the autocorrelation domain provides a favourable representation for noise robust processing. We found that by utilising the higher-lag coefficients of the autocorrelation sequence and discarding the lower-lag coefficients, more noise robust spectral estimates could be made. This approach was found to be adept at suppressing particular classes of non-stationary noise that conventional methods fail to handle. We also explored a topic in speech enhancement of phase spectrum estimation and showed positive results. The proposed feature extraction and speech enhancement techniques, while performing very well for some non-stationary noises, were less effective against the stationary cases. This work highlights the autocorrelation domain as a domain for noise robust speech processing in the presence of dynamic ambient noises. With improvements in short-time autocorrelation estimation, it is expected that the performance of the techniques for stationary noises can also be improved.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith School of Engineering
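The central higher-lag idea of the abstract above, discarding the lowest autocorrelation lags, where broadband noise concentrates, before forming a spectral estimate, can be sketched as follows. The lag cutoff and taper are assumed illustrative choices, not the settings used in the thesis:

```python
# Sketch of a noise-robust spectral estimate from higher-lag autocorrelation.
import numpy as np

def higher_lag_spectrum(frame, lag_cutoff=16, nfft=512):
    n = len(frame)
    # Biased autocorrelation estimate for lags 0 .. n-1.
    acf = np.correlate(frame, frame, mode="full")[n - 1:] / n
    acf[:lag_cutoff] = 0.0                      # discard the lower lags
    acf *= np.hamming(2 * n - 1)[n - 1:]        # taper the retained lags
    # Magnitude of the transform of the one-sided ACF as a rough estimate.
    return np.abs(np.fft.rfft(acf, n=nfft))
```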
28

Fattori Alves, Allan Felipe. "Image Processing for Enhancement of Ischemic Stroke in Computed Tomography Examinations." Thesis, Orléans, 2019. http://www.theses.fr/2019ORLE2003.

Abstract:
Stroke is one of the leading causes of death worldwide. Non-enhanced computed tomography (CT) and magnetic resonance imaging (MRI) are the two main imaging techniques used to detect stroke. CT has a lower cost and is more accessible to the population, so it is still the main method used. In most cases, the assessment of the compromised brain area is performed subjectively and may lead to difficulties in diagnosis. This research proposes an approach based on a computational algorithm that highlights regions of ischemic stroke in retrospective CT examinations. Different image processing methods were used to enhance ischemic tissues. A set of 41 retrospective CT scans from Botucatu Medical School (Brazil) was used, divided into 25 cases of acute ischemic stroke and 16 normal patients. Stroke cases were obtained within 4.5 hours of symptom onset. After selection of the relevant CT slices, image averaging was performed to reduce the noise. This was followed by a variational decomposition model, and the expectation-maximization method was applied to generate enhanced images. We designed a test to evaluate the performance of observers in a clinical environment with and without the aid of enhanced images. The overall sensitivity of the observers' analysis increased from 64.5% to 89.6%, and specificity from 83.3% to 91.7%. These results show the importance of a computational tool to assist neuroradiology decisions, especially in critical situations such as the diagnosis of ischemic stroke.
29

Roy, Sujan K. "Kalman Filtering with Machine Learning Methods for Speech Enhancement." Thesis, Griffith University, 2021. http://hdl.handle.net/10072/404456.

Abstract:
Speech corrupted by background noise (or noisy speech) can reduce the efficiency of communication between humans and between humans and machines. A speech enhancement algorithm (SEA) can be used to suppress the embedded background noise and increase the quality and intelligibility of noisy speech. Many applications, such as speech communication systems, hearing aid devices, and speech recognition systems, typically rely upon speech enhancement algorithms for robustness. This dissertation focuses on single-channel speech enhancement using Kalman filtering with machine learning methods. In Kalman filter (KF)-based speech enhancement, each clean speech frame is represented by an auto-regressive (AR) process, whose parameters comprise the linear prediction coefficients (LPCs) and the prediction error variance. The LPC parameters and the additive noise variance are used to form the recursive equations of the KF. In the augmented KF (AKF), both the clean speech and additive noise LPC parameters are incorporated into an augmented matrix to construct the recursive equations of the AKF. Given a frame of noisy speech samples, the KF and AKF give a linear MMSE estimate of the clean speech samples using the recursive equations. Usually, inaccurate estimates of the parameters introduce bias in the KF and AKF gain, leading to a degradation in speech enhancement performance. The research contributions in this dissertation can be grouped into three focus areas. In the first work, we propose an iterative KF (IT-KF) to offset the bias in the KF gain for speech enhancement by utilizing the parameters in real-life noise conditions. In the second work, we jointly incorporate the robustness and sensitivity metrics to offset the bias in the KF and AKF gain, which addresses speech enhancement in real-life noise conditions. The third focus area consists of deep neural network (DNN) and whitening-filter assisted KF and AKF for speech enhancement. Specifically, the DNN and whitening-filter based approaches utilize the parameter estimates for the KF and AKF for speech enhancement. However, the whitening filter still produces biased speech LPC estimates for the KF and AKF, resulting in degraded speech. To address this, we propose a DeepLPC framework constructed with the state-of-the-art residual network and temporal convolutional network (ResNet-TCN) to jointly estimate the speech and noise LPC parameters from the noisy speech for the AKF. Recently, the multi-head self-attention network (MHANet) has demonstrated the ability to model the long-term dependencies of noisy speech more efficiently than ResNet-TCN. Therefore, we employ the MHANet within DeepLPC, termed DeepLPC-MHANet, to further improve the speech and noise LPC parameter estimates for the AKF. Finally, we perform a comprehensive study on four different training targets for LPC estimation using ResNet-TCN and MHANet. This study aims to determine which training target and DNN method produce accurate speech and noise LPC parameters for AKF-based speech enhancement in practice. Objective and subjective scores demonstrate that the proposed methods in this dissertation produce enhanced speech with higher quality and intelligibility than the competing methods in various noise conditions for a wide range of signal-to-noise ratio (SNR) levels.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Eng & Built Env
Science, Environment, Engineering and Technology
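For orientation, the AR(p)/Kalman formulation described above can be sketched as below. The LPCs and the excitation and noise variances are taken as given, whereas estimating them robustly (e.g. with DeepLPC) is the actual subject of the dissertation:

```python
# Schematic Kalman filter for one speech frame with an AR(p) speech model
# and additive white measurement noise (illustration only).
import numpy as np

def kalman_enhance_frame(y, lpc, q_var, r_var):
    p = len(lpc)
    F = np.zeros((p, p))                 # companion transition matrix:
    F[0, :] = lpc                        # x_t = a1*x_{t-1} + ... + ap*x_{t-p} + w_t
    F[1:, :-1] = np.eye(p - 1)
    H = np.zeros((1, p)); H[0, 0] = 1.0  # we observe y_t = x_t + v_t
    Q = np.zeros((p, p)); Q[0, 0] = q_var
    x, P = np.zeros((p, 1)), np.eye(p)
    out = np.empty(len(y))
    for t, obs in enumerate(y):
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        k = P @ H.T / (H @ P @ H.T + r_var)        # Kalman gain
        x = x + k * (obs - (H @ x).item())         # update with innovation
        P = (np.eye(p) - k @ H) @ P
        out[t] = x[0, 0]
    return out
```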
30

Alhusseinawi, Haider. "Body Sensor Networks Reliability Enhancement Algorithms in Sport and Biomedical Applications." Thesis, Griffith University, 2017. http://hdl.handle.net/10072/370350.

Abstract:
Wireless networking is the fastest growing segment of the networking industry. In the medical and sport industries, Body Sensor Networks (BSNs) are used to monitor the human body. BSNs have numerous design challenges, including network topology, reliable data delivery in a timely manner, and minimal power consumption. Other performance parameters, such as throughput and coverage, are equally important. The objective of this project was to achieve an efficient and optimal design of a BSN that acquires movement and physiological data, using sensors at various places on the body and sending data to a receiving unit (gateway) located on the chest for re-transmission of information to a remote computer (locally or in the cloud) for further processing and presentation. This is important in sport for athletes, coaches and the viewing public, as it provides real-time monitoring for rapid improvement in player performance during training or rehabilitation sessions. An investigation into the best node locations on the human body was made to achieve maximum connectivity to a receiving unit (gateway) on the chest, and the best angular window for the nodes to transmit data during typical human movements, such as running and walking. Preliminary results showed that, while the distance between the transmitting and receiving nodes changes significantly, scattering from the limbs causes the most significant effect on the received signal strength. These measurements demonstrated that a received signal strength greater than -70 dBm (the radio communications threshold) can be maintained for between 20% and 74% of the recorded time, depending on node location. The use of an accelerometer sensor at each node allows these positions to be identified in real time so that burst transmission can occur reliably. Wireless accelerometer sensor modules were used to determine the link performance by recording the data and traffic lost on different runners and for different transmitter locations on the human body (foot, leg and arm), to identify these time windows from the diverse angles of rotation of the human limbs during running. The results showed that the sensor on the wrist gives the best connectivity. An approximate swing-time calculation algorithm was employed to find the effect of swing time on these losses. Different data rates were tested against traffic loss and showed reliabilities of 98% and 62% at 250 kbps and 2 Mbps respectively. With a central node on the chest, a novel energy-efficient time multiplexing transmission method for on-body wireless communication was implemented based on the human rhythmic movement of running. The running style of each individual allows the network to self-calibrate the communication scheme so that transmissions occur only when high link reliability is predicted. This technique takes advantage of the periodic running actions to implement a dynamic time division multiple access (TDMA) strategy for a five-node body network with very little communication overhead, long sleep times for the sensor transceivers and robustness to communication errors. The results showed that all wireless communications were successful, except when two nodes attempted to use the transmission medium simultaneously. An aggregated network reliability of 90% was achieved, compared to 63% when employing traditional time multiplexing algorithms. The results also showed a trade-off between the channel occupancy and the traffic generated to provide high channel reliability for the body network.
An advanced gesture transmission technique was adopted to collect acceleration data and to predict the best limb position for communications while the athlete is moving. This reduced the overall transmission power and increased the reliability. As a result, data losses were reduced from 30% to 1%, compared to continuous communications. Experimental measurements were reported on five human subjects moving in formations on a grassed field with a smart algorithm that takes advantage of the Carrier Sense Multiple Access (CSMA) technique. The algorithm forces the sensor nodes to repeatedly sense the carrier frequency to recognize when other nodes are in or out of coverage. The test reports different sink node locations on the body and the effect of transmitter power on the network reliability. A sink placed at the head position provides a normally distributed coverage of approximately 5 meters with link reliability of at least 80%. The smart algorithm showed 100% successful wireless communications between sink nodes within coverage when the nodes were programmed not to use the transmission medium simultaneously. The neighbours list and time of existence of all neighbours for each node were recorded and modified accordingly at each stage of the test. The availability of this information in real time can be used to determine athlete proximity on the playing field and their performance with respect to time spent at one or multiple locations on the field.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith School of Engineering
Science, Environment, Engineering and Technology
31

Roscoe, Keigh Anthony. "Enhancement of digitised analogue video recordings using multiple image processing algorithms." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407192.

32

Turner, Scott J. "Use of evolutionary algorithms to select filters for evoked potential enhancement." Thesis, University of Northampton, 2000. http://nectar.northampton.ac.uk/4400/.

Abstract:
Evoked potentials are electrical signals produced by the nervous system in response to a stimulus. In general these signals are noisy with a low signal-to-noise ratio. The aim was to investigate ways of extracting the evoked response within an evoked potential recording, achieving a similar signal-to-noise ratio as conventional averaging but with fewer repetitions per average. In this thesis, evolutionary algorithms were used in three ways to extract the evoked potentials from a noisy background. First, evolutionary algorithms selected the cut-off frequencies for a set of filters. A different filter or filter bank was produced for each data set. The noisy signal was passed through each filter in a bank of filters; the filter bank output was a weighted sum of the individual filter outputs. The goal was to use three filters, ideally one for each of the three regions (early, middle and late components), but the use of five filters was also investigated. Each signal was split into two time domains: the first 30 ms of the signal and the region 30 to 400 ms. Filter banks were then developed for these regions separately. Secondly, instead of using a single set of filters applied to the whole signal, different filters (or combinations of filters) were applied at different times. Evolutionary algorithms were used to select the duration of each filter, as well as the frequency parameters and weightings of the filters. Three filtering approaches were investigated. Finally, wavelets in conjunction with an evolutionary algorithm were used to select particular wavelets and wavelet parameters. A comparison of these methods with optimal filtering methods and averaging was made. Averages of 10 signals were found suitable, and time-varying techniques were found to perform better than applying one filter to the whole signal.
33

Kotha, Aravind Eswar Ravi Raja, and Lakshmi Ratna Hima Rajitha Majety. "Performance Comparison of Image Enhancement Algorithms Evaluated on Poor Quality Images." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13880.

Abstract:
Many applications require automatic image analysis for different qualities of input images. In many cases, the quality of the acquired images is suitable for the purpose of the application. However, in some cases the quality of the acquired image has to be modified according to the needs of a specific application. A higher quality of the image can be achieved by Image Enhancement (IE) algorithms. The choice of IE technique is challenging, as this choice varies with the application purpose. The goal of this research is to investigate the possibility of selectively applying IE algorithms. The values of entropy and Peak Signal to Noise Ratio (PSNR) of the acquired image are considered as parameters for selectivity. Three algorithms, Retinex, the bilateral filter and bilateral tone adjustment, have been chosen as the IE techniques for evaluation in this work. Entropy and PSNR are used for the performance evaluation of the selected IE algorithms. In this study, we considered the images from three fingerprint image databases as input images to investigate the algorithms. The decision to enhance an image in these databases by the considered algorithms is based on empirically evaluated entropy and PSNR thresholds. The Automatic Fingerprint Identification System (AFIS) has been selected as the application of interest. The evaluation results show that the performance of the investigated IE algorithms significantly affects the performance of AFIS. The second conclusion is that entropy and PSNR might be considered as indicators of the required IE of the input image for AFIS.
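Both selection indicators named above are standard quantities; a minimal sketch of each follows (the empirical decision thresholds from the study are not reproduced here):

```python
# Shannon entropy of an 8-bit grey-level histogram and PSNR in dB.
import numpy as np

def entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins
    return -np.sum(p * np.log2(p))

def psnr(img, ref, peak=255.0):
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```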
34

Birt, James. "Software Reliability Enhancement Through Error-Prone Path Identification using Genetic Algorithms." Thesis, Griffith University, 2006. http://hdl.handle.net/10072/367831.

Abstract:
This thesis presents a pre-processing stage for optimising software reliability by quantitatively identifying the most error prone regions in a software program. These error prone regions are identified using Genetic Algorithms on the source code's graph representation, weighted with potential Sources of Error. By identifying these potentially error dense regions, the efficiency of software quality activities can be increased. The information about quantitative error proneness can be used for more accurate effort and cost estimations of quality assurance. Although various methods have been applied for detecting and reducing errors in software, little research has been done into partitioning a system into smaller, error prone domains for a more targeted Software Quality Assurance. Identifying error proneness in software regions is important, as these domains can be given priority in code inspections or testing. Quality activities come at a high price, typically requiring more than half of the project resources to produce a working program. However, a working program does not necessarily mean a defect free program. Exhaustive software testing is rarely possible because it becomes intractable for even medium sized software. Inspections require experts; they can be subjective and expensive. Typically, due to project budget constraints, only parts of a program can be tested or inspected, but these parts are not necessarily the most error prone. A more effective approach is to focus inspection and testing efforts on those regions that are most likely to contain faults, that is, the most error prone regions. The strategic approach presented in this thesis consists of parsing a program's source code and assigning weights to its paths using a method for quantitatively assessing the error proneness of software modules. By representing these paths as a weighted connectivity matrix, a Genetic Algorithm is applied to the paths with a strategy of finding a selection of paths with maximum weights as potential error carriers. These maximum error prone paths can then be given priority in testing and inspection. The approach does not deal with the actual inspection, testing or test cases per se, but it makes an informed choice on where to focus the main effort. This in turn aids project management by eliminating the guesswork of where to focus the effort and budget for quality assurance activities. The technique presented in this thesis is supported by a set of experiments: (i) empirical analysis of Genetic Algorithm variables and their effect on performance; (ii) Pareto analysis using error seeding identification with best fit, random and clustered approaches; (iii) segmenting path strata and identifying error prone regions in the path; (iv) comparison with traditional software inspection. Results from the experiments conducted in the thesis support the proposed technique, with error identification rates greater than 85% from only 20% of the most error prone code. This is a strong result, as it fits with the Pareto analysis or the 80/20 rule as a standard analysis technique.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
35

Wu, Yia-Ching (吳雅菁). "Histogram Enhancement Using Visual Algorithm." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/71284912461699151569.

Abstract:
Master's thesis
Chung Cheng Institute of Technology, National Defense University
Master's Program in Electronic Engineering
Academic year 100 (ROC calendar)
Image enhancement can effectively enhance the clarity of an image and clearly reflect the brightness and subtle colour differences of the shooting scene. The most common image enhancement method is grey-scale histogram adjustment. The traditional histogram equalization method is the most widely used technique in grey-scale histogram adjustment; it utilizes the cumulative probability value to effectively stretch the grey-scale spacing of the histogram. However, part of the output image suffers brightness distortion and loss of original information. In order to improve on these shortcomings, many researchers have proposed piecewise image contrast enhancement algorithms, mainly based on cutting the histogram at its peak and trough values or at a fixed average. Although these methods avoid the shortcomings of histogram equalization, they only apply to the enhancement of specific images. This thesis proposes an image enhancement algorithm based on the human visual system. The method that we have developed can effectively improve on the disadvantages of histogram equalization, enhance the clarity of image details and retain the bright detailed information of the original image. The experimental results, compared with the algorithms of previous studies, verify the robustness of the visual histogram enhancement (VHE) algorithm and its enhanced visual quality and performance.
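For reference, the traditional histogram equalization that VHE is compared against maps each grey level through the cumulative probability; a minimal sketch for 8-bit greyscale images:

```python
# Classical histogram equalization via the cumulative distribution function.
import numpy as np

def histogram_equalize(img):          # img: 2-D uint8 array
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / img.size    # cumulative probability per grey level
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]                   # remap every pixel through the LUT
```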
36

Calderón González, Julian, and Òscar Daniel Carmona Salazar. "Image Enhancement with Matlab Algorithms." Thesis, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-770.

37

Chen, Tse-Chung (陳則仲). "Resolution Enhancement Based on Genetic Algorithm." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/47328623823762993164.

Full text
Abstract:
Master's thesis
Feng Chia University
Institute of Electrical Engineering
Academic year 94 (ROC calendar)
For image reconstruction from a limited number of Fourier data, it is difficult to specify an accurate solution. To overcome this problem, an algorithm incorporating a priori knowledge was developed. This algorithm has shown great potential for obtaining high-resolution images, but choosing good parameter values for the object function's support domain and the sinc functions' profile is necessary. In this research we seek optimal parameter values using a Genetic Algorithm. With a Genetic Algorithm driving the search, the optimized results can be applied in future work.
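As a toy illustration of this kind of parameter search, the sketch below runs a small real-coded Genetic Algorithm over two parameters standing in for the support-domain width and the sinc profile. The objective function is a hypothetical placeholder, since the thesis's reconstruction-error metric is not reproduced here.

```python
import random

# Hypothetical objective: reconstruction error as a function of the support
# width `d` and sinc profile parameter `s` (stand-in for the thesis's metric).
def recon_error(d, s):
    return (d - 3.2) ** 2 + (s - 0.7) ** 2  # toy surface with one minimum

POP, GENS = 30, 80
pop = [(random.uniform(0, 10), random.uniform(0, 2)) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=lambda p: recon_error(*p))
    parents = pop[:POP // 2]                 # truncation selection
    children = []
    while len(children) < POP - len(parents):
        (d1, s1), (d2, s2) = random.sample(parents, 2)
        d = 0.5 * (d1 + d2) + random.gauss(0, 0.1)   # blend + Gaussian mutation
        s = 0.5 * (s1 + s2) + random.gauss(0, 0.02)
        children.append((d, s))
    pop = parents + children

best = min(pop, key=lambda p: recon_error(*p))
print("best (support, sinc profile):", best)
```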
APA, Harvard, Vancouver, ISO, and other styles
38

Lee, Po-Cheng, and 李柏成. "A Tone Mapping Algorithm with Detail Enhancement Based on Retinex Algorithm." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/70759937164346636036.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 100 (ROC calendar)
Owing to recent progress in digital camera technology, a high dynamic range image (HDRI) can be obtained directly from the camera. Nevertheless, limited by display hardware, the HDRI must still be converted for displays that can only show low dynamic range images (LDRI). This technique is known as tone mapping. The goal of tone mapping is to compress the luminance dynamic range into a low dynamic range while minimizing distortion and preserving detail. We first apply a logarithm to compress the high dynamic range based on the background luminance; Retinex local contrast enhancement is then performed to enhance the image in dark regions. Our method preserves most of the detail without contrast distortion, especially in dark areas.
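A minimal sketch of this two-stage idea, assuming log compression of luminance followed by a Retinex-style local detail boost; the sigma and gain values are illustrative, not the thesis's tuned parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tonemap(lum, sigma=8.0, detail_gain=1.3, eps=1e-6):
    """Log compression of HDR luminance plus a Retinex-style detail boost."""
    log_l = np.log1p(lum)
    base = gaussian_filter(log_l, sigma)   # smoothed background luminance
    detail = log_l - base                  # Retinex-style local contrast
    out = base + detail_gain * detail      # boost detail, keep the base
    out -= out.min()
    return out / (out.max() + eps)         # normalized LDR luminance in [0, 1]

lum = np.exp(np.random.uniform(0, 12, size=(128, 128)))  # synthetic HDR
print(tonemap(lum).shape)
```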
APA, Harvard, Vancouver, ISO, and other styles
39

Ko, Chia-Chieh, and 柯佳伽. "Color Image Quality Enhancement Using Retinex Algorithm." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/565fvu.

Full text
Abstract:
Master's thesis
Shih Hsin University
Graduate Institute of Information Management
Academic year 96 (ROC calendar)
This study addresses visual contrast enhancement of color images. Among existing approaches, Retinex is the most significant for the human visual system, as it takes into account both color rendering and image contrast enhancement. Because Retinex does not use the information in the image histogram distribution, a Quartile Sigmoid Function (QSF) derived from the histogram distribution is proposed. Combining the QSF mapping with Retinex theory yields good results in both rendering and contrast enhancement. Sets of images are used to evaluate the proposed enhancement model. With histogram band information employed in the manner of histogram equalization, the integration of Multi-Scale Retinex (MSR) with the QSF demonstrates the effectiveness of the algorithm in terms of image quality.
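The exact QSF is not specified in the abstract, but one plausible reading is a sigmoid centered on the median with its slope set by the interquartile range, as sketched below; treat the function and its parameter k as hypothetical.

```python
import numpy as np

def quartile_sigmoid(img, k=1.5):
    """Hypothetical quartile-driven sigmoid remap: the curve is centered on
    the median (Q2) and its slope is set by the interquartile range (Q3 - Q1).
    This illustrates the idea only; it is not the thesis's exact QSF."""
    q1, q2, q3 = np.percentile(img, [25, 50, 75])
    iqr = max(q3 - q1, 1e-6)
    out = 1.0 / (1.0 + np.exp(-(img.astype(float) - q2) / (k * iqr)))
    return (255 * (out - out.min()) / (out.max() - out.min())).astype(np.uint8)

img = np.random.randint(60, 200, size=(64, 64), dtype=np.uint8)
print(quartile_sigmoid(img).dtype)
```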
APA, Harvard, Vancouver, ISO, and other styles
40

Wu, Jin-Fu, and 吳金富. "Speech Enhancement Using Subspace Noise Tracking Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/p86vku.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Electrical Engineering
Academic year 99 (ROC calendar)
Speech quality and recognition rates tend to decrease when speech signals are corrupted by background noise. Speech enhancement, a technique commonly used in speech transmission and speech recognition, recovers clean speech from noisy speech by means of a noise estimator, i.e., a noise tracking algorithm. The more accurate the noise estimator, the more effective the enhancement technique. Our environment contains many kinds of noise with different characteristics, which is why designing an accurate noise estimator is not easy: the noise type to be dealt with cannot be known in advance. In this thesis we propose an effective noise tracking algorithm based on a frequency-domain subspace decomposition method. We analyze the noise energy contained in the speech signal and then filter out the noise using speech enhancement technology. Four speech enhancement techniques incorporating the proposed tracking algorithm are investigated: spectral subtraction (SS), the time-domain Wiener filter (TDWF), the frequency-domain Wiener filter (FDWF), and the subspace method (SM). The well-known minimum statistics (MS) and minima controlled recursive averaging (MCRA) noise tracking algorithms are also included for comparison in the experiments. The experimental results show that, on average, the proposed noise tracking algorithm achieves higher segmental signal-to-noise ratio improvement (SSNRI) than both MS and MCRA. Taking spectral subtraction as an example, when the signal-to-noise ratio (SNR) of the test speech is 10 dB, the SSNRI of the proposed tracking algorithm reaches 2.7146 dB, outperforming the 1.4837 dB of MCRA and the 0.3418 dB of MS. As a result, it can provide superior communication quality in noisy environments.
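As context for the spectral subtraction (SS) baseline mentioned above, here is a minimal Python sketch that estimates the noise magnitude from the first few frames and subtracts it. The thesis's subspace noise tracker is not reproduced, and the frame count and spectral floor are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_frames=10, floor=0.02):
    """Basic spectral subtraction: the noise magnitude spectrum is estimated
    from the first few (assumed speech-free) frames, then subtracted."""
    f, t, X = stft(x, fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)
    noise = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    clean = np.maximum(mag - noise, floor * mag)  # floor limits musical noise
    _, y = istft(clean * np.exp(1j * phase), fs, nperseg=512)
    return y

fs = 16000
x = np.random.randn(fs)          # stand-in for one second of noisy speech
print(spectral_subtraction(x, fs).shape)
```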
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Yun-Ju, and 李昀儒. "Color Image Enhancement Using Hybrid Retinex Algorithm." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/21207789503399867611.

Full text
Abstract:
Master's thesis
Shih Hsin University
Graduate Institute of Graphic Communications and Digital Publishing
Academic year 93 (ROC calendar)
An image enhancement model based on the human visual system is proposed. Image enhancement processing normally degrades image quality in some respects because the representation of color space is not linear. The Retinex algorithm can be applied to many kinds of images because it considers human visual perception. The existing Retinex algorithms, whether Single-Scale Retinex or Multi-Scale Retinex, do not take into account local variations of contrast within an image scene; the performance of those models is therefore limited. A modified Retinex algorithm using non-isotropic Gaussian kernel filters to fit the human visual system and improve image quality is proposed. A number of experiments are conducted to demonstrate the performance of the proposed algorithm.
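A compact sketch of Multi-Scale Retinex with standard isotropic Gaussian surrounds, which the thesis replaces with non-isotropic kernels; the scale values are commonly used defaults, not the thesis's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(img, sigmas=(15, 80, 250)):
    """Multi-Scale Retinex: average of log(image) - log(surround) over several
    Gaussian surround scales, then rescaled to 8 bits for display."""
    img = img.astype(float) + 1.0            # avoid log(0)
    log_i = np.log(img)
    out = np.mean([log_i - np.log(gaussian_filter(img, s)) for s in sigmas],
                  axis=0)
    out -= out.min()
    return (255 * out / out.max()).astype(np.uint8)

img = np.random.randint(0, 256, size=(128, 128)).astype(np.uint8)
print(msr(img).shape)
```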
APA, Harvard, Vancouver, ISO, and other styles
42

Cao, Letian, and Yazhou Wang. "Fingerprint image enhancement and minutiae extraction algorithm." Thesis, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-64819.

Full text
Abstract:
This work studies the procedures of a fingerprint identification system and presents efficient algorithms for pre-processing and minutiae extraction. Pre-processing usually consists of normalization, segmentation, and orientation estimation, which respectively decrease the variance of the fingerprint image, separate foreground from background areas, and track the direction of the ridge lines. Minutiae extraction is typically divided into two approaches: binarization-based methods and direct gray-scale extraction. We put emphasis on the binarization-based method in this research, since it is the more commonly used method in the literature. Simulation results based on a set of fingerprints downloaded from the FVC 2006 database show that the algorithms we used are accurate and reliable.
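As an example of the normalization step mentioned above, the classic mean/variance normalization (as in Hong et al.) can be sketched in a few lines of Python; the target mean and variance are conventional illustrative values.

```python
import numpy as np

def normalize(img, m0=100.0, v0=100.0):
    """Classic fingerprint normalization: shift the image to a desired mean m0
    and variance v0 before orientation estimation."""
    m, v = img.mean(), img.var() + 1e-6
    dev = np.sqrt(v0 * (img.astype(float) - m) ** 2 / v)
    return np.where(img > m, m0 + dev, m0 - dev)

img = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)
out = normalize(img)
print(round(out.mean(), 1), round(out.var(), 1))   # close to m0, v0
```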
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Su-Shia, and 王素霞. "Low-complexity Defogging Algorithm with Color Enhancement." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/86834180314643460233.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering
Academic year 100 (ROC calendar)
In some computer vision applications, such as monitoring and obstacle detection systems, the visibility and color of the captured scene can be severely degraded by bad weather conditions such as fog or haze. Defogging technology has therefore become an important issue. Many papers discuss how to recover a clear image from a foggy one, but these methods are often so complex that they take a long time to process an image. For this reason, a low-complexity image defogging technique is crucial. In this thesis, we propose a low-complexity defogging algorithm with color enhancement, consisting of three main steps. First, we calculate the atmospheric light value from the lighter and darker pixel groups of the input image. Second, we use differential transmission estimation to detect edges and reduce the halo effect. Finally, we employ tone mapping and illumination adjustment to optimize the restored images. Our method executes faster than those of other papers, taking only about 0.2 sec to process a 600×400 image. Comparisons further show that the resulting images are better in both visibility and color enhancement.
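For orientation, here is a minimal sketch of the standard haze imaging model I = J*t + A*(1 - t) that such defogging methods invert, with the atmospheric light taken from the brightest pixels; the thesis's pixel grouping and differential transmission estimation are simplified to a given transmission map.

```python
import numpy as np

def dehaze(img, t, top_percent=0.1):
    """Invert the haze model: estimate atmospheric light A from the brightest
    pixels, then recover J = (I - A) / t + A given a transmission map t."""
    gray = img.mean(axis=2)
    n = max(1, int(top_percent / 100 * gray.size))
    idx = np.argsort(gray.ravel())[-n:]            # brightest pixel group
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    t = np.clip(t, 0.1, 1.0)[..., None]            # avoid division blow-up
    return np.clip((img - A) / t + A, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, size=(60, 80, 3)).astype(float)
t = np.random.uniform(0.2, 0.9, size=(60, 80))     # stand-in transmission map
print(dehaze(img, t).shape)
```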
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Hong-Jhe, and 陳宏哲. "An Edge Enhancement Algorithm for Upscaled Images." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/69585437677582167844.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
Academic year 100 (ROC calendar)
Image upscaling has recently become a hot research topic. Its main purpose is to obtain high-resolution images from low-resolution ones; the upscaled images should look as if they had been taken with a camera of the same resolution and, at the least, present natural textures. This technique, also known as super-resolution, has been widely used in high-definition televisions, smart phones, satellite imaging, and surveillance cameras. In the spatial domain, interpolation-based methods may suffer from defects such as blurring and jagged artifacts, while upscaling methods in the wavelet domain may introduce ringing artifacts. In this thesis, we propose an edge enhancement method for upscaled images that effectively suppresses jagged artifacts and provides more natural and clearer edges. Experimental results show that our algorithm offers better subjective visual quality while maintaining a good peak signal-to-noise ratio (PSNR).
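As a baseline illustration of edge enhancement after upscaling (not the thesis's algorithm), the sketch below upscales with cubic interpolation and applies unsharp masking to restore edge contrast; the factor, sigma, and amount values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def upscale_sharpen(img, factor=2, sigma=1.0, amount=0.8):
    """Cubic-spline upscaling followed by unsharp masking to restore the edge
    contrast lost in interpolation."""
    up = zoom(img.astype(float), factor, order=3)  # cubic spline upscaling
    blur = gaussian_filter(up, sigma)
    return np.clip(up + amount * (up - blur), 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)
print(upscale_sharpen(img).shape)   # (128, 128)
```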
APA, Harvard, Vancouver, ISO, and other styles
45

Yu-HsuanHuang and 黃裕軒. "Deep Learning Applied to Speech Enhancement Algorithm." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/q2wt6u.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chien, Hui-Yu, and 簡彗羽. "Optimized E-shifting Algorithm for Resolution Enhancement." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3s95k3.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electro-Optical Engineering
Academic year 107 (ROC calendar)
Near-eye displays (NED) provide whole new viewing experiences for users. Most traditional NED create 3D images using the binocular parallax method: the right eye and the left eye receive different images, and the human brain generates a 3D image by merging them. However, this method causes traditional NED to suffer from the accommodation-convergence conflict (A.C. conflict), which leads to visual fatigue. To solve this issue, light field near-eye displays (LFNED) were proposed in 2013. Unlike traditional NED, in an LFNED viewers see a reconstructed 3D image by tracing rays back through the image plane, with the accommodation distance equal to the convergence distance; LFNED therefore eliminate the A.C. conflict. However, LFNED suffer from low spatial resolution. In recent work, a time-multiplexed method was proposed whose hardware comprises a birefringent plate and a twisted nematic liquid crystal plate (TN plate) to achieve image shifting, and whose software is an e-shifting sampling algorithm. With this hardware and software, image resolution is enhanced through persistence of vision: the results show a resolution enhancement of 1.4 times, together with reduced screen-door and rasterization effects for a smoother image. In this research, optimizing the e-shifting sampling algorithm over a diverse image database is proposed. To replace the traditional interpolation in the proposed e-shifting sampling algorithm, I divide the process into three steps: the upscaling method, the upscaling ratio, and the downsampling method, each of which is optimized in turn. In the optimization analyses, five upscaling methods are considered; the edge-directed unsharp-masking sharpening method is chosen because of its higher PSNR and higher spatial frequency at the same contrast. The upscaling ratios examined range from 1.3 to 2.5 times, and two downsampling methods corresponding to the upscaling ratios are evaluated. In addition, to exploit the strength of each upscaling ratio, I categorize the image database into four cases by spatial frequency and apply the e-shifting sampling algorithm with different upscaling ratios to each case. Finally, the best e-shifting sampling algorithms for high-spatial-frequency and low-spatial-frequency images are presented. The edges of the high-spatial-frequency images are sharper than those produced by the previous e-shifting sampling algorithm, while the resolution remains high. Compared with the native resolution of Full HD panels, the proposed e-shifting method enhances image resolution by 1.78 times, and by 1.59 times in light field near-eye displays.
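A rough sketch of the e-shifting sampling idea for the integer ratio 2, assuming the display presents two subframes whose sampling grids are offset diagonally by half a display pixel; the thesis's non-integer ratios and optimized upscaling/downsampling steps are not reproduced here.

```python
import numpy as np
from scipy.ndimage import zoom

def eshift_subframes(img, ratio=2):
    """Upscale once, then downsample at two sampling grids offset by half a
    display pixel; the display shows the two subframes time-multiplexed so
    persistence of vision fuses them into a higher-resolution percept."""
    up = zoom(img.astype(float), ratio, order=3)
    frame_a = up[0::2, 0::2]    # unshifted sampling grid
    frame_b = up[1::2, 1::2]    # grid shifted diagonally by half a pixel
    return frame_a, frame_b

img = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)
a, b = eshift_subframes(img)
print(a.shape, b.shape)
```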
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Chung-Han, and 黃重翰. "Noise Reduction Algorithms for Speech Enhancement." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/70999296683649720524.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
Academic year 99 (ROC calendar)
When clean speech is corrupted by various types of noise, many methods exist to reduce that noise; however, the more noise we remove, the more speech distortion the enhanced signal exhibits. Although the Wiener filter minimizes the mean square error (MMSE), it also introduces high speech distortion. This thesis extends the Wiener filter into a tradeoff filter and implements an adaptive tradeoff parameter so that the degree of suppression varies with the SNR. We also modify the Wiener filter with different gain functions to balance noise reduction against speech distortion while preserving information from the clean speech. In addition, a new noise estimation method derived from minimum statistics estimation is proposed. All of these improvements reduce speech distortion while maintaining a certain degree of noise reduction. Finally, we compare the improved algorithms with the conventional Wiener filter on white noise, babble noise, and vuvuzela noise, using speech distortion and noise reduction measures.
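A minimal sketch of the tradeoff filter idea: the parametric Wiener gain G = xi / (xi + mu), with an illustrative adaptation of mu to the SNR. The adaptation rule shown here is hypothetical, not the thesis's.

```python
import numpy as np

def tradeoff_gain(xi, mu):
    """Parametric Wiener gain: mu = 1 gives the standard Wiener filter;
    mu > 1 suppresses more noise at the cost of more speech distortion."""
    return xi / (xi + mu)

def adaptive_mu(snr_db, lo=0.5, hi=4.0):
    """Illustrative adaptive tradeoff: stronger suppression at low SNR."""
    return np.clip(hi - (hi - lo) * (snr_db + 5) / 25.0, lo, hi)

xi = np.abs(np.random.randn(257)) ** 2   # stand-in a priori SNR per bin
for snr in (-5, 5, 15):
    g = tradeoff_gain(xi, adaptive_mu(snr))
    print(snr, "dB -> mean gain", round(float(g.mean()), 3))
```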
APA, Harvard, Vancouver, ISO, and other styles
48

Hsueh, Shyang-lin, and 薛翔霖. "An Efficient Video Inpainting Algorithm with Shadow Enhancement." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/3b2484.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 100 (ROC calendar)
Video inpainting is the process of repairing lost or deteriorated pixels in a video. Most existing video inpainting techniques are computationally expensive and cannot handle shadow reconstruction. In this work, we propose an efficient video inpainting algorithm with shadow enhancement. We evaluate the performance of our system on several video sequences taken with a consumer hand-held camcorder. Experimental results show that our system not only produces more realistic inpainted video, but also reduces computational cost by up to a factor of 5 compared with existing state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Wan-Ju, and 吳宛儒. "Performance Optimization of Retinex-Based Image Enhancement Algorithm." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/b8kkxn.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Electrical Engineering
Academic year 105 (ROC calendar)
Image enhancement plays an important role in digital image processing and has been applied across science and engineering. Among image enhancement algorithms, Retinex-based methods are considered the most effective at preserving image detail. Despite this advantage, Retinex-based algorithms are generally more complex and consume considerably more time in maintaining the balance between image detail and ambient illumination. Moreover, like most image enhancement algorithms, conventional Retinex-based methods are prone to over-enhancement when processing images with non-uniform illumination. To address these issues, several approaches are proposed and explored in this thesis. First, an optimized illumination estimation, the most time-consuming part of conventional Retinex-based algorithms, is proposed to reduce computation time. Furthermore, we propose an adaptive restoration process that autonomously detects the brightest and darkest parts of the image and modifies the core algorithm accordingly to avoid over-enhancement. The final chapter explores the possibility of hardware acceleration, with the aim of achieving optimal performance through hardware/software co-design. Experimental results show that the proposed Retinex-based algorithm enhances image detail while preserving perceived naturalness without over-enhancement, and the execution time is greatly reduced.
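One common way to speed up illumination estimation is sketched below, under the assumption that repeated box filtering approximates the large Gaussian surround; this stands in for, and is not, the thesis's optimized estimator.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fast_illumination(img, size=31, passes=3):
    """Approximate a large Gaussian surround with a few cheap box-filter
    passes (by the central limit theorem, repeated box filters tend toward a
    Gaussian), cutting the cost of Retinex illumination estimation."""
    est = img.astype(float)
    for _ in range(passes):
        est = uniform_filter(est, size)
    return est

img = np.random.randint(0, 256, size=(256, 256)).astype(np.uint8)
L = fast_illumination(img)
reflectance = np.log1p(img) - np.log1p(L)   # Retinex decomposition
print(L.shape, reflectance.dtype)
```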
APA, Harvard, Vancouver, ISO, and other styles
50

Yang, Shih-Hsien, and 楊士賢. "A Novel Wavelet-Based Image Resolution Enhancement Algorithm." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/90103106978409834700.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
Academic year 94 (ROC calendar)
The wavelet transform and its theory are among the most exciting developments of the last decade. The wavelet transform has been developed independently across fields such as signal processing, image processing, audio processing, communication, and applied mathematics. Because the wavelet representation offers efficient time-frequency localization and multi-resolution analysis, wavelet transforms are well suited to image resolution enhancement. This thesis therefore focuses on wavelet-based image resolution enhancement and proposes a framework for it. The proposed wavelet-domain algorithm is based on estimating the detail wavelet coefficients at high-resolution scales. The method exploits a shape function derived from wavelet coefficient correlation within a local neighborhood and employs the undecimated discrete wavelet transform to estimate the unknown detail coefficients. Results show that the proposed method is considerably superior to conventional image interpolation techniques, in both objective and subjective terms.
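For contrast with the thesis's estimator, the baseline wavelet upscaling it improves on can be sketched with PyWavelets: treat the input as the LL band, leave the unknown detail bands at zero, and inverse-transform to twice the size. The thesis instead estimates those detail coefficients via the UDWT.

```python
import numpy as np
import pywt

def wavelet_upscale(img):
    """Baseline wavelet resolution enhancement: the input image becomes the
    LL band, the detail bands (None) are treated as zeros, and the inverse
    2D DWT reconstructs an image of twice the size."""
    cA = img.astype(float) * 2.0    # compensate the Haar LL gain of 2
    return pywt.idwt2((cA, (None, None, None)), 'haar')

img = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)
print(wavelet_upscale(img).shape)   # (128, 128)
```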
APA, Harvard, Vancouver, ISO, and other styles