Dissertations / Theses on the topic 'Minimum Classification Error algorithm'

Consult the top 26 dissertations / theses for your research on the topic 'Minimum Classification Error algorithm.'


1

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030619.162803.

Full text
Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: a parameter extraction step and a feature extraction step. In the parameter extraction step, information relevant to pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed into a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two most popular independent feature extraction algorithms. Both extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space to a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria differ from the classifier's minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade classifier performance. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion. The Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers. It is a type of discriminative learning algorithm, but it achieves minimum classification error directly. The flexibility of the MCE framework makes it convenient to conduct feature extraction and classification jointly.
Conventional feature extraction and pattern classification algorithms such as LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier are linear algorithms. The advantage of linear algorithms is their simplicity and their ability to reduce feature dimensionality. However, they have the limitation that the decision boundaries they generate are linear and have little computational flexibility. SVM is a more recently developed integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classification that affords dot-products can be computed efficiently in higher dimensional feature spaces. Classes which are not linearly separable in the original parametric space can be linearly separated in the higher dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex nonlinear decision boundaries. However, SVM is a highly integrated and closed pattern classification system, and it is very difficult to adopt feature extraction into SVM's framework. Thus SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction and proposes the application of MCE training algorithms for joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to remedy the shortcomings of MCE training algorithms in joint feature extraction and classification tasks. SVM, as a non-linear pattern classification system, is also investigated in this thesis. A reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared first on a number of small databases, such as the Deterding Vowels database, Fisher's IRIS database and German's GLASS database. They are then tested in a large-scale speech recognition experiment based on the TIMIT database.
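The PCA/LDA contrast summarized in this abstract is easy to sketch. Below is a minimal, illustrative NumPy implementation of both linear projections (not code from the thesis; function names and the random test data are our own):

```python
import numpy as np

def pca_projection(X, k):
    """Project onto the k directions of largest variance (PCA)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    return Xc @ W

def lda_projection(X, y, k):
    """Project onto the k directions maximizing between/within-class scatter (LDA)."""
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    W = np.real(vecs[:, np.argsort(np.real(vals))[::-1][:k]])
    return (X - mean) @ W

# Tiny illustration on random 4-dimensional data with 3 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = np.repeat([0, 1, 2], 20)
Zp = pca_projection(X, 2)
Zl = lda_projection(X, y, 2)
```

Both reduce the dimensionality through a linear transform; only the criterion used to choose the transform differs, which is exactly the inconsistency with the classifier's error criterion that the thesis targets.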
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365680.

Full text
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Microelectronic Engineering
Full Text
3

Han, Seungju. "A family of minimum Renyi's error entropy algorithm for information processing." [Gainesville, Fla.] : University of Florida, 2007. http://purl.fcla.edu/fcla/etd/UFE0021428.

Full text
4

Fu, Qiang. "A generalization of the minimum classification error (MCE) training method for speech recognition and detection." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22705.

Full text
Abstract:
The model training algorithm is a critical component in statistical pattern recognition approaches based on Bayes decision theory. Conventional applications of Bayes decision theory usually assume uniform error cost, resulting in the ubiquitous use of the maximum a posteriori (MAP) decision policy and the paradigm of distribution estimation as practiced in the design of statistical pattern recognition systems. The minimum classification error (MCE) training method was proposed to overcome some substantial limitations of conventional distribution estimation methods. In this thesis, three aspects of the MCE method are generalized. First, an optimal classifier/recognizer design framework is constructed, aiming at minimizing non-uniform error cost. A generalized training criterion named weighted MCE is proposed for pattern and speech recognition tasks with non-uniform error cost. Second, the MCE method for speech recognition tasks requires appropriate management of multiple recognition hypotheses for each data segment. A modified version of the MCE method with a new approach to selecting and organizing recognition hypotheses is proposed for continuous phoneme recognition. Third, the minimum verification error (MVE) method for detection-based automatic speech recognition (ASR) is studied. The MVE method can be viewed as a special version of the MCE method which aims at minimizing detection/verification errors. We present many experiments on pattern recognition and speech recognition tasks to demonstrate the effectiveness of our generalizations.
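The weighted-MCE idea described here, a sigmoid-smoothed misclassification measure optionally scaled by a non-uniform error cost, can be sketched from the standard MCE formulation (an illustrative sketch, not the thesis's exact criterion; `eta` and `gamma` are the usual smoothing parameters, and the cost vector is our own addition):

```python
import numpy as np

def mce_loss(scores, label, eta=4.0, gamma=1.0, cost=None):
    """Smoothed (weighted) MCE loss for one sample.

    scores : discriminant values g_j(x) for each class j
    label  : index of the correct class
    cost   : optional per-class error cost (weighted MCE); uniform if None
    """
    g_true = scores[label]
    others = np.delete(scores, label)
    # Soft-max over competing classes approximates the strongest rival score.
    rival = np.log(np.mean(np.exp(eta * others))) / eta
    d = rival - g_true                        # misclassification measure
    loss = 1.0 / (1.0 + np.exp(-gamma * d))   # sigmoid smoothing of the 0-1 loss
    if cost is not None:
        loss *= cost[label]                   # non-uniform error cost
    return loss

low = mce_loss(np.array([5.0, 0.0, -1.0]), 0)    # correct class wins clearly
high = mce_loss(np.array([0.0, 5.0, -1.0]), 0)   # correct class loses clearly
```

The loss stays near 0 for confidently correct decisions and near 1 for confident errors, so its gradient drives the model parameters directly toward fewer (cost-weighted) classification errors.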
5

Albarakati, Noor. "FAST NEURAL NETWORK ALGORITHM FOR SOLVING CLASSIFICATION TASKS." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2740.

Full text
Abstract:
Classification is one of several applications of neural networks (NNs). The multilayer perceptron (MLP) is the most common neural network architecture used for classification tasks. It is known for its error back propagation (EBP) algorithm, which opened a new way of solving classification problems given a set of empirical data. In this thesis, we performed experiments using three different NN structures in order to find the best MLP neural network structure for performing nonlinear classification of multiclass data sets. The learning algorithm used here is the batch EBP algorithm, which uses all the data as a single batch while updating the NN weights. The batch EBP speeds up training significantly, and this is also why the title of the thesis is dubbed 'fast NN …'. In batch EBP, when linear neurons are used in the output layer, the pseudo-inverse algorithm is implemented to calculate the output layer weights. In this way one always finds the local minimum of the cost function for given hidden layer weights. Three different MLP neural network structures have been investigated for solving classification problems with K classes: one model/K output layer neurons, K separate models/one output layer neuron, and K joint models/one output layer neuron. The extensive series of experiments performed within the thesis showed that the best structure for solving multiclass classification problems is the K joint models/one output layer neuron structure.
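The pseudo-inverse step described above, solving for the output-layer weights of linear output neurons in closed form, can be sketched as follows (an illustrative toy under our own assumptions; the hidden weights here are random rather than EBP-trained):

```python
import numpy as np

def output_weights_pinv(H, T):
    """Least-squares output-layer weights for linear output neurons.

    H : (n_samples, n_hidden) hidden-layer activations (hidden weights fixed)
    T : (n_samples, n_classes) one-hot targets
    Returns W minimizing ||H W - T||^2, i.e. the cost minimum for the
    given hidden-layer weights, computed via the pseudo-inverse.
    """
    return np.linalg.pinv(H) @ T

# Tiny illustration: 2-class data, a random tanh hidden layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(int)        # linearly separable toy labels
T = np.eye(2)[y]                     # one-hot targets
V = rng.normal(size=(3, 8))          # hidden weights (batch EBP would tune these)
H = np.tanh(X @ V)                   # hidden-layer activations
W = output_weights_pinv(H, T)
pred = (H @ W).argmax(axis=1)
```

In a full batch-EBP run only the hidden weights need gradient updates; the output layer is re-solved exactly at each step, which is one source of the speed-up the abstract mentions.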
6

Chen, Nan. "An IEEE 802.15.4 Packet Error Classification Algorithm : Discriminating Between Multipath Fading and Attenuation and WLAN." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-24918.

Full text
Abstract:
In wireless sensor networks, communications are often corrupted by signal attenuation, multipath fading and different kinds of interference, such as WLAN and microwave oven interference. In order to build a stable wireless communication system, reactions such as retransmission mechanisms are necessary. Since the way we must react to interference is different from the way we react to multipath fading and attenuation, the retransmission mechanism should be adjusted differently under those different circumstances. Under this condition, channel diagnostics for discriminating between multipath fading and attenuation (MFA) and WLAN interference as the cause of packet corruption are imperative. This paper presents a frame bit error rate (F-BER) regulated algorithm based on a joint RSSI-LQI classifier that may correctly diagnose the channel status. The discriminator is implemented on MicaZ sensor devices equipped with CC2420 transceivers and improves diagnostic accuracy to 91%. Although 2 or 3 packets are needed before a decision can be made, the discriminator offers higher stability and reliability in operation.
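A joint RSSI/LQI decision rule of the kind described can be sketched as a simple threshold classifier. The thresholds below are invented for illustration only, not the calibrated values from the thesis:

```python
def classify_loss(rssi_dbm, lqi, f_ber,
                  rssi_thresh=-87.0, lqi_thresh=90, ber_thresh=0.1):
    """Toy joint RSSI/LQI channel diagnostic (illustrative thresholds).

    Returns 'WLAN' for interference or 'MFA' for multipath fading /
    attenuation. WLAN interference typically corrupts bits while RSSI
    stays high, whereas fading/attenuation losses coincide with weak
    RSSI and low link quality (LQI).
    """
    if f_ber > ber_thresh and rssi_dbm > rssi_thresh:
        return 'WLAN'   # strong signal yet many bit errors -> interference
    if rssi_dbm <= rssi_thresh or lqi < lqi_thresh:
        return 'MFA'    # weak or poor-quality link -> fading/attenuation
    return 'WLAN'
```

The frame-BER input is what lets the rule average evidence over the 2-3 packets the abstract mentions before committing to a diagnosis.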
7

Kunert, Gerd. "Anisotropic mesh construction and error estimation in the finite element method." Universitätsbibliothek Chemnitz, 2000. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200000033.

Full text
Abstract:
In an anisotropic adaptive finite element algorithm one usually needs an error estimator that yields not only the error size but also the stretching directions and stretching ratios of the elements of a (quasi) optimal anisotropic mesh. However, the last two ingredients cannot be extracted from any of the known anisotropic a posteriori error estimators. Therefore a heuristic approach is pursued here: the desired information is provided by the so-called Hessian strategy. This strategy produces favourable anisotropic meshes which result in a small discretization error. The focus of this paper is on error estimation on anisotropic meshes. It is known that such error estimation is reliable and efficient only if the anisotropic mesh is aligned with the anisotropic solution. The main result here is that the Hessian strategy produces anisotropic meshes that show the required alignment with the anisotropic solution. The corresponding inequalities are proven, and the underlying heuristic assumptions are given in a stringent yet general form. Hence the analysis provides further insight into a particular aspect of anisotropic error estimation.
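The core computation of the Hessian strategy, obtaining stretching directions and ratios from the second derivatives of the solution, can be sketched for the 2D case (an illustrative sketch in our own notation; the regularization floor on the small eigenvalue is our own choice):

```python
import numpy as np

def hessian_stretching(H):
    """Stretching direction and ratio from a 2x2 solution Hessian.

    In the Hessian strategy the element is stretched along the
    eigenvector of the smaller |eigenvalue| (least curvature), with an
    aspect ratio ~ sqrt(|lambda_max| / |lambda_min|).
    """
    vals, vecs = np.linalg.eigh(H)
    i = np.argsort(np.abs(vals))              # small |curvature| first
    lam_min = np.abs(vals[i[0]])
    lam_max = np.abs(vals[i[1]])
    stretch_dir = vecs[:, i[0]]               # align the long edge this way
    ratio = np.sqrt(lam_max / max(lam_min, 1e-30))
    return stretch_dir, ratio

# Layer-like solution: strong curvature in y only, as near a boundary layer.
H = np.diag([1.0, 1.0e4])
d, r = hessian_stretching(H)                  # stretch along x, ratio ~ 100
```

This is exactly the per-element information (direction and aspect ratio) that the paper notes cannot be recovered from the known a posteriori estimators alone.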
8

Shin, Sung-Hwan. "Objective-driven discriminative training and adaptation based on an MCE criterion for speech recognition and detection." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50255.

Full text
Abstract:
Acoustic modeling in state-of-the-art speech recognition systems is commonly based on discriminative criteria. Unlike the conventional distribution estimation paradigm, such as maximum a posteriori (MAP) and maximum likelihood (ML) estimation, the most popular discriminative criteria, such as MCE and MPE, aim at direct minimization of the empirical error rate. As recent ASR applications become diverse, it has been increasingly recognized that realistic applications often require a model that can be optimized for a task-specific goal or a particular scenario beyond the general purposes of the current discriminative criteria. These specific requirements cannot be directly handled by the current discriminative criteria, since their objective is to minimize the overall empirical error rate. In this thesis, we propose novel objective-driven discriminative training and adaptation frameworks, generalized from the minimum classification error (MCE) criterion, for various tasks and scenarios of speech recognition and detection. The proposed frameworks are constructed to formulate new discriminative criteria which satisfy various requirements of recent ASR applications. In this thesis, each objective required by an application or a developer is directly embedded into the learning criterion. The objective-driven discriminative criterion is then used to optimize an acoustic model in order to achieve the required objective. Three task-specific requirements that recent ASR applications often impose in practice are mainly taken into account in developing the objective-driven discriminative criteria. First, the issue of individual error minimization in speech recognition is addressed, and we propose a direct minimization algorithm for each error type of speech recognition. Second, a rapid adaptation scenario is embedded into formulating discriminative linear transforms under the MCE criterion.
A regularized MCE criterion is proposed to efficiently improve the generalization capability of the MCE estimate in a rapid adaptation scenario. Finally, the particular operating scenario that requires a system model optimized at a given specific operating point is discussed over the conventional receiver operating characteristic (ROC) optimization. A constrained discriminative training algorithm which can directly optimize a system model for any particular operating need is proposed. For each of the developed algorithms, we provide an analytical solution and an appropriate optimization procedure.
9

Du, Zekun. "Algorithm Design and Optimization of Convolutional Neural Networks Implemented on FPGAs." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254575.

Full text
Abstract:
Deep learning has developed rapidly in recent years and has been applied to many fields that are central to artificial intelligence. The combination of deep learning and embedded systems is a promising technical direction. This project designs a deep learning neural network algorithm that can be implemented on hardware, for example an FPGA, based on current research on deep neural networks and on hardware features. The system uses PyTorch and CUDA as supporting tools. The project focuses on image classification based on a convolutional neural network (CNN). Many good CNN models can be studied, like ResNet, ResNeXt, and MobileNet. Models are compared on criteria such as floating point operations (FLOPs), number of parameters and classification accuracy. Finally, an algorithm based on MobileNet is selected, with a top-1 error of 5.5% in software on a 6-class data set. Furthermore, a hardware simulation of the MobileNet-based algorithm is carried out. The parameters are transformed from floating point numbers to 8-bit integers, and the output numbers of each individual layer are cut to fixed-bit integers to fit the hardware restrictions. A number handling method is designed to simulate the number changes on hardware. Based on this simulation method, the top-1 error increases to 12.3%, which is acceptable.
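The float-to-int8 parameter conversion described above can be sketched with a uniform symmetric quantizer (a generic sketch, not the project's exact number-handling method):

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric quantization of float weights to 8-bit integers.

    Returns the int8 tensor plus the scale needed to dequantize; the
    largest-magnitude weight maps to +/-127.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

w = np.array([0.5, -1.27, 0.031], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = q.astype(np.float32) * s   # dequantized approximation of w
```

The reconstruction error is bounded by about half a quantization step per weight; the accumulated effect of such errors across layers is what drives the top-1 error from 5.5% to 12.3% in the reported simulation.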
10

Palkki, Ryan D. "Chemical identification under a poisson model for Raman spectroscopy." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45935.

Full text
Abstract:
Raman spectroscopy provides a powerful means of chemical identification in a variety of fields, partly because of its non-contact nature and the speed at which measurements can be taken. The development of powerful, inexpensive lasers and sensitive charge-coupled device (CCD) detectors has led to widespread use of commercial and scientific Raman systems. However, relatively little work has been done developing physics-based probabilistic models for Raman measurement systems and crafting inference algorithms within the framework of statistical estimation and detection theory. The objective of this thesis is to develop algorithms and performance bounds for the identification of chemicals from their Raman spectra. First, a Poisson measurement model based on the physics of a dispersive Raman device is presented. The problem is then expressed as one of deterministic parameter estimation, and several methods are analyzed for computing the maximum-likelihood (ML) estimates of the mixing coefficients under our data model. The performance of these algorithms is compared against the Cramer-Rao lower bound (CRLB). Next, the Raman detection problem is formulated as one of multiple hypothesis detection (MHD), and an approximation to the optimal decision rule is presented. The resulting approximations are related to the minimum description length (MDL) approach to inference. In our simulations, this method is seen to outperform two common general detection approaches, the spectral unmixing approach and the generalized likelihood ratio test (GLRT). The MHD framework is applied naturally to both the detection of individual target chemicals and to the detection of chemicals from a given class. The common, yet vexing, scenario is then considered in which chemicals are present that are not in the known reference library. A novel variation of nonnegative matrix factorization (NMF) is developed to address this problem. 
Our simulations indicate that this algorithm gives better estimation performance than the standard two-stage NMF approach and the fully supervised approach when there are chemicals present that are not in the library. Finally, estimation algorithms are developed that take into account errors that may be present in the reference library. In particular, an algorithm is presented for ML estimation under a Poisson errors-in-variables (EIV) model. It is shown that this same basic approach can also be applied to the nonnegative total least squares (NNTLS) problem. Most of the techniques developed in this thesis are applicable to other problems in which an object is to be identified by comparing some measurement of it to a library of known constituent signatures.
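ML estimation of the nonnegative mixing coefficients under a Poisson observation model, as posed in the abstract, admits simple multiplicative (Richardson-Lucy style) updates. The sketch below illustrates the model only; it is not one of the specific algorithms compared in the thesis, and the library matrix is a toy:

```python
import numpy as np

def poisson_ml_coeffs(y, S, n_iter=500):
    """ML mixing coefficients under y ~ Poisson(S @ a), with a >= 0.

    S : (n_channels, n_chemicals) nonnegative reference spectra
    y : (n_channels,) observed counts
    Multiplicative updates keep a nonnegative and monotonically
    increase the Poisson log-likelihood.
    """
    a = np.ones(S.shape[1])
    col_sums = S.sum(axis=0)
    for _ in range(n_iter):
        a *= (S.T @ (y / (S @ a))) / col_sums
    return a

# Toy 3-channel library with two "chemicals"; noiseless counts.
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
a_hat = poisson_ml_coeffs(np.array([2.0, 3.0, 5.0]), S)
```

With noiseless data the updates recover the true coefficients; with real Poisson counts the estimate's variance can then be compared against the Cramer-Rao lower bound as the thesis does.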
11

Challakere, Nagaravind. "Carrier Frequency Offset Estimation for Orthogonal Frequency Division Multiplexing." DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1423.

Full text
Abstract:
This thesis presents a novel method to solve the problem of estimating the carrier frequency offset in an Orthogonal Frequency Division Multiplexing (OFDM) system. The approach is based on the minimization of the probability of symbol error and is hence called the Minimum Symbol Error Rate (MSER) approach. An existing approach based on Maximum Likelihood (ML) is chosen to benchmark the performance of the MSER-based algorithm. The MSER approach is computationally intensive, and the thesis evaluates the approximations that can be made to the MSER-based objective function to make the computation tractable. A modified gradient function based on the MSER objective is developed which provides better performance characteristics than the ML-based estimator. The estimates produced by the MSER approach exhibit lower mean squared error than the ML benchmark. The performance of the MSER-based estimator is simulated with Quaternary Phase Shift Keying (QPSK) symbols, but the algorithm presented is applicable to all complex symbol constellations.
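The symbol-error-driven idea behind MSER estimation can be illustrated with a crude grid search that picks the frequency-offset candidate yielding the fewest hard-decision QPSK errors on known pilot subcarriers (a toy stand-in for the gradient-based estimator developed in the thesis; all parameters here are invented):

```python
import numpy as np

def grid_cfo_estimate(r, pilots, candidates):
    """Pick the CFO candidate whose correction gives fewest QPSK symbol errors.

    r : one received N-sample OFDM symbol (no cyclic prefix, no noise here)
    pilots : known QPSK subcarrier symbols
    candidates : normalized CFO values to try
    """
    N = len(r)
    n = np.arange(N)
    best, best_err = None, np.inf
    for eps in candidates:
        # undo the candidate offset, then demodulate
        x = np.fft.fft(r * np.exp(-2j * np.pi * eps * n / N)) / np.sqrt(N)
        dec = (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)  # hard QPSK
        err = np.count_nonzero(dec != pilots)
        if err < best_err:
            best, best_err = eps, err
    return best

# Synthetic check: QPSK pilots, apply a known offset, search for it.
rng = np.random.default_rng(1)
N = 64
pilots = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
tx = np.fft.ifft(pilots) * np.sqrt(N)
n = np.arange(N)
rx = tx * np.exp(2j * np.pi * 0.1 * n / N)   # true CFO = 0.1 subcarrier spacing
eps_hat = grid_cfo_estimate(rx, pilots, np.linspace(-0.4, 0.4, 81))
```

The thesis replaces this brute-force search with a smoothed symbol-error objective and a modified gradient, which is what makes the approach competitive with ML in cost while improving MSE.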
12

Kunert, Gerd. "A note on the energy norm for a singularly perturbed model problem." Universitätsbibliothek Chemnitz, 2001. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200100062.

Full text
Abstract:
A singularly perturbed reaction-diffusion model problem is considered, and the choice of an appropriate norm is discussed. Particular emphasis is given to the energy norm. Certain prejudices against this norm are investigated and disproved. Moreover, an adaptive finite element algorithm is presented which exhibits an optimal error decrease in the energy norm in some simple numerical experiments. This underlines the suitability of the energy norm.
13

Nieuwoudt, Christoph. "Cross-language acoustic adaptation for automatic speech recognition." Thesis, Pretoria : [s.n.], 2000. http://upetd.up.ac.za/thesis/available/etd-01062005-071829.

Full text
14

Kahaei, Mohammad Hossein. "Performance analysis of adaptive lattice filters for FM signals and alpha-stable processes." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36044/7/36044_Digitised_Thesis.pdf.

Full text
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm: a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which performs well for finite variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes on generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations.
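The least-mean p-norm update at the heart of the proposed algorithms replaces the squared-error gradient with a fractional lower-order one. The following is a transversal (non-lattice) sketch under our own toy AR(1) setup, not the lattice algorithms from the thesis:

```python
import numpy as np

def lmp_filter(x, d, order=2, mu=0.02, p=1.2):
    """Least-mean p-norm (LMP) adaptive FIR filter, transversal form.

    Minimizing E|e|^p with p < 2 (a dispersion-style criterion) keeps
    the stochastic-gradient update bounded under impulsive,
    alpha-stable-like noise, unlike the LMS squared-error update.
    """
    w = np.zeros(order)
    e = np.zeros(len(d))
    for m in range(order, len(d)):
        u = x[m - order:m][::-1]                 # most recent samples first
        e[m] = d[m] - w @ u
        # stochastic gradient of |e|^p (the factor p is folded into mu)
        w += mu * np.abs(e[m]) ** (p - 1) * np.sign(e[m]) * u
    return w, e

# One-step prediction of an AR(1) process x[n] = 0.8 x[n-1] + v[n].
rng = np.random.default_rng(2)
v = rng.normal(size=20000)
xar = np.zeros_like(v)
for k in range(1, len(v)):
    xar[k] = 0.8 * xar[k - 1] + v[k]
w, e = lmp_filter(xar, xar, order=2, mu=0.02, p=1.2)   # w ~ [0.8, 0]
```

With Gaussian noise the LMP estimate agrees with the least-squares one; its advantage appears when `v` is drawn from a heavy-tailed alpha-stable law, where the |e|^(p-1) factor tames the impulsive updates.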
APA, Harvard, Vancouver, ISO, and other styles
15

Grosman, Sergey. "Adaptivity in anisotropic finite element calculations." Doctoral thesis, Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600815.

Full text
Abstract:
When the finite element method is used to solve boundary value problems, the corresponding finite element mesh is appropriate if it reflects the behavior of the true solution. A posteriori error estimators are suited to constructing adequate meshes: they are useful for measuring the quality of an approximate solution and for designing adaptive solution algorithms. Singularly perturbed problems in general yield solutions with anisotropic features, e.g. strong boundary or interior layers. For such problems it is useful to use anisotropic meshes in order to reach the maximal order of convergence. Moreover, the quality of the numerical solution rests on the robustness of the a posteriori error estimation with respect to both the anisotropy of the mesh and the perturbation parameters. There exist different possibilities to measure the a posteriori error in the energy norm for the singularly perturbed reaction-diffusion equation. One of them is the equilibrated residual method, which is known to be robust as long as one solves the auxiliary local Neumann problems exactly on each element. We provide a basis for an approximate solution of the aforementioned auxiliary problem and show that this approximation does not affect the quality of the error estimation. Another approach that we develop for the a posteriori error estimation is the hierarchical error estimator. The robustness proof for this estimator involves several stages, including the strengthened Cauchy-Schwarz inequality and the error reduction property for the chosen space enrichment. In the rest of the work we deal with adaptive algorithms. We provide an overview of the existing methods for isotropic meshes and then generalize the ideas to the anisotropic case. For the resulting algorithm, error reduction estimates are proven for the Poisson equation and for the singularly perturbed reaction-diffusion equation. Convergence for the Poisson equation is also shown.
Numerical experiments for the equilibrated residual method, for the hierarchical error estimator and for the adaptive algorithm confirm the theory. The adaptive algorithm shows its potential by creating the anisotropic mesh for the problem with the boundary layer starting with a very coarse isotropic mesh.
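The estimate-mark-refine loop that such adaptive algorithms follow can be sketched in one dimension. This toy uses an isotropic residual-type indicator for the Poisson problem, not the equilibrated residual or hierarchical estimators of the thesis, and the right-hand side, marking parameter theta, and refinement count are illustrative assumptions:

```python
import numpy as np

def solve_poisson_p1(nodes, f):
    """P1 finite elements for -u'' = f on (0,1) with u(0) = u(1) = 0."""
    n = len(nodes)
    h = np.diff(nodes)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n - 1):
        A[k, k] += 1.0 / h[k]; A[k + 1, k + 1] += 1.0 / h[k]
        A[k, k + 1] -= 1.0 / h[k]; A[k + 1, k] -= 1.0 / h[k]
        fm = f(0.5 * (nodes[k] + nodes[k + 1]))   # midpoint quadrature
        b[k] += 0.5 * h[k] * fm; b[k + 1] += 0.5 * h[k] * fm
    A[0, :] = 0; A[0, 0] = 1; b[0] = 0            # Dirichlet boundary conditions
    A[-1, :] = 0; A[-1, -1] = 1; b[-1] = 0
    return np.linalg.solve(A, b)

def error_indicator(nodes, f):
    """Residual indicator eta_T = h_T * ||f||_L2(T); u_h'' = 0 on P1 elements."""
    h = np.diff(nodes)
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    return h ** 1.5 * np.abs(f(mid))

def refine(nodes, eta, theta=0.5):
    """Bisect every element whose indicator exceeds theta * max(eta)."""
    marked = np.where(eta > theta * eta.max())[0]
    mids = 0.5 * (nodes[marked] + nodes[marked + 1])
    return np.sort(np.concatenate([nodes, mids]))

f = lambda x: np.pi ** 2 * np.sin(np.pi * x)      # exact solution: sin(pi x)
nodes = np.linspace(0.0, 1.0, 5)
for _ in range(6):
    nodes = refine(nodes, error_indicator(nodes, f))
u = solve_poisson_p1(nodes, f)
err = np.max(np.abs(u - np.sin(np.pi * nodes)))
print(len(nodes), err)
```

The anisotropic case of the thesis replaces both ingredients: the indicator must stay robust under high element aspect ratios, and refinement must choose a direction, not just bisect.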
APA, Harvard, Vancouver, ISO, and other styles
16

Irmer, Ralf. "Multiuser Transmission in Code Division Multiple Access Mobile Communications Systems." Doctoral thesis, Technische Universität Dresden, 2004. https://tud.qucosa.de/id/qucosa%3A24546.

Full text
Abstract:
Code Division Multiple Access (CDMA) is the technology used in all third generation cellular communications networks, and it is a promising candidate for the definition of fourth generation standards. The wireless mobile channel is usually frequency-selective causing interference among the users in one CDMA cell. Multiuser Transmission (MUT) algorithms for the downlink can increase the number of supportable users per cell, or decrease the necessary transmit power to guarantee a certain quality-of-service. Transmitter-based algorithms exploiting the channel knowledge in the transmitter are also motivated by information theoretic results like the Writing-on-Dirty-Paper theorem. The signal-to-noise ratio (SNR) is a reasonable performance criterion for noise-dominated scenarios. Using linear filters in the transmitter and the receiver, the SNR can be maximized with the proposed Eigenprecoder. Using multiple transmit and receive antennas, the performance can be significantly improved. The Generalized Selection Combining (GSC) MIMO Eigenprecoder concept enables reduced complexity transceivers. Methods eliminating the interference completely or minimizing the mean squared error exist for both the transmitter and the receiver. The maximum likelihood sequence detector in the receiver minimizes the bit error rate (BER), but it has no direct transmitter counterpart. The proposed Minimum Bit Error Rate Multiuser Transmission (TxMinBer) minimizes the BER at the detectors by transmit signal processing. This nonlinear approach uses the knowledge of the transmit data symbols and the wireless channel to calculate a transmit signal optimizing the BER with a transmit power constraint by nonlinear optimization methods like sequential quadratic programming (SQP). The performance of linear and nonlinear MUT algorithms with linear receivers is compared at the example of the TD-SCDMA standard. 
The interference problem can be solved with all MUT algorithms, but the TxMinBer approach requires less transmit power to support a certain number of users. The high computational complexity of MUT algorithms is also an important issue for their practical real-time application. The exploitation of structural properties of the system matrix reduces the complexity of the linear MUT methods significantly. Several efficient methods to invert the system matrix are shown and compared. Proposals to reduce the complexity of the Minimum Bit Error Rate Multiuser Transmission method are made, including a method avoiding the power constraint by phase-only optimization. The complexity of the nonlinear methods is still some orders of magnitude higher than that of the linear MUT algorithms, but further research on this topic and the increasing processing power of integrated circuits will eventually allow their better performance to be exploited.
Code Division Multiple Access (CDMA) is used in all third-generation cellular mobile radio systems and is a promising candidate for future technologies. The network capacity, i.e., the number of users per cell, is limited by interference between the users. On the uplink from the mobile terminals to the base station, interference can be reduced by multiuser detection in the receiver. On the downlink, which carries the higher data rates of multimedia applications, the transmit signal can be predistorted in the transmitter so that the influence of interference is minimized; the information-theoretic motivation for this is the Writing-on-Dirty-Paper theorem. The signal-to-noise ratio is a suitable performance criterion in noise-dominated scenarios. With transmit and receive filters, the SNR can be maximized by the proposed Eigenprecoder. Using multiple antennas at the transmitter and receiver significantly improves performance, and the Generalized Selection MIMO Eigenprecoder enables transceivers of reduced complexity. For both the receiver and the transmitter there exist methods that eliminate the interference completely or minimize the mean squared error. The maximum-likelihood receiver minimizes the bit error rate (BER) but has no direct counterpart in the transmitter. The Minimum Bit Error Rate Multiuser Transmission (TxMinBer) proposed in this work minimizes the BER at the detector through transmit signal processing. This nonlinear method uses knowledge of the data symbols and of the mobile radio channel to generate a transmit signal that minimizes the BER subject to a transmit power constraint, using nonlinear optimization methods such as sequential quadratic programming (SQP).
The performance of linear and nonlinear MUT algorithms with linear receivers is compared using the example of the TD-SCDMA standard. The interference problem can be solved with all of the investigated methods, but the TxMinBer method requires the lowest transmit power to support a given number of users. The high computational complexity of the MUT algorithms is an important issue for real-time implementation. By exploiting structural properties of the system matrices, the complexity of the linear MUT methods can be reduced significantly. Various methods for inverting the system matrices are presented and compared. Proposals are made to reduce the complexity of the Minimum Bit Error Rate Multiuser Transmission, including avoiding the transmit power constraint by restricting the optimization to the phases of the transmit signal vector. The complexity of the nonlinear methods is still some orders of magnitude higher than that of the linear methods; further research on this topic and the growing processing power of integrated circuits will eventually make it possible to exploit the better performance of the nonlinear MUT methods.
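As a point of reference for the linear MUT algorithms discussed in this abstract, a transmit Wiener (MMSE) precoder can be sketched as follows. The channel matrix, user/antenna counts, and SNR are illustrative assumptions, and the thesis's nonlinear TxMinBer optimization is not shown:

```python
import numpy as np

def mmse_precoder(H, snr_lin, p_tx=1.0):
    """Transmit Wiener (MMSE) precoder for the multiuser downlink y = H P s + n.

    H : (K users) x (N transmit antennas) channel matrix, known at the
    transmitter.  P is scaled so that E||P s||^2 = p_tx for unit-power symbols.
    """
    K, N = H.shape
    G = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K / snr_lin) * np.eye(K))
    beta = np.sqrt(p_tx / np.trace(G @ G.conj().T).real)
    return beta * G

rng = np.random.default_rng(1)
K, N = 4, 8
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
snr_lin = 10 ** (20 / 10)                  # 20 dB
P = mmse_precoder(H, snr_lin)

# One QPSK downlink transmission: each user sees (approximately) its own symbol
s = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)
noise = np.sqrt(1 / (2 * snr_lin)) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
y = H @ (P @ s) + noise
s_hat = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
```

The regularization term K/snr_lin trades residual interference against noise amplification; letting it go to zero recovers the zero-forcing (interference-eliminating) precoder mentioned above.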
APA, Harvard, Vancouver, ISO, and other styles
17

Chang, Yung-Di, and 張詠棣. "Chip Design of Memory-Based Architecture for Minimum Classification Error." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/02852597645232023458.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Electrical Engineering
98
This thesis proposes a minimum classification error (MCE) processor. A key property of MCE is that it retrains the data for each class so that the distance between classes is increased. MCE can improve recognition rates and reduce misrecognition, so it is widely applied in speech and image processing, as well as in handwriting recognition, face detection, and neural networks. To our knowledge, this MCE processor is the first proposed chip design for the algorithm, which is the main contribution of this thesis. Because the MCE algorithm involves vector and matrix operations, the exponential, natural logarithm, sigmoid, and square root functions, and also requires iteration, we adopted an in-place-mode memory architecture and used look-up tables (LUTs) to map the functions. The memory-based MCE processor is synthesized with a UMC 90 nm standard cell library. The input format of the MCE processor is 24 bits, the area is approximately 8.07 mm2, the power consumption is approximately 3.6393 mW, and the maximum operating frequency is 83 MHz.
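The MCE training loop that such a processor implements in hardware can be sketched in software. This is the standard sigmoid-smoothed formulation for a simple nearest-prototype classifier; the data, learning rate, and smoothing constant gamma are illustrative assumptions, not the thesis's setting:

```python
import numpy as np

def mce_train(X, y, n_classes=2, lr=0.05, gamma=4.0, epochs=100):
    """Minimum Classification Error (MCE/GPD) training of a prototype classifier.

    Discriminant: g_j(x) = -||x - m_j||^2.  The misclassification measure
    d(x) = -g_true(x) + max over rivals of g_j(x) is passed through a sigmoid,
    giving a smooth surrogate for the 0/1 error that is minimized by
    sample-by-sample gradient descent.
    """
    m = np.array([X[y == j].mean(axis=0) for j in range(n_classes)])  # ML init
    for _ in range(epochs):
        for x, c in zip(X, y):
            g = -np.sum((m - x) ** 2, axis=1)
            g_rival = np.where(np.arange(n_classes) == c, -np.inf, g)
            r = int(np.argmax(g_rival))
            d = -g[c] + g[r]                          # > 0 means misclassified
            ell = 1.0 / (1.0 + np.exp(-gamma * d))    # sigmoid loss in [0, 1]
            coef = gamma * ell * (1.0 - ell)          # derivative of loss w.r.t. d
            m[c] += lr * coef * 2.0 * (x - m[c])      # pull correct prototype in
            m[r] -= lr * coef * 2.0 * (x - m[r])      # push best rival away
    return m

# Two overlapping 2-D Gaussian classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.6, (100, 2)),
               rng.normal([2, 0], 0.6, (100, 2))])
y = np.repeat([0, 1], 100)
m = mce_train(X, y)
pred = np.argmin(np.sum((X[:, None, :] - m[None, :, :]) ** 2, axis=2), axis=1)
print((pred == y).mean())
```

The hardware cost drivers listed in the abstract are visible here: every sample touches distances (vector/matrix ops), a sigmoid, and repeated iteration over epochs, which is what motivates the LUT-based, in-place memory design.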
APA, Harvard, Vancouver, ISO, and other styles
18

Chu, Ying-Lin, and 朱映霖. "SPEAKER IDENTIFICATION BASED ON AN IMPROVED MINIMUM CLASSIFICATION ERROR METHOD." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/52761181230782785251.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Electrical Engineering
95
In speaker recognition, effective training data are important for training the speaker models, which strongly affect recognition performance. With abundant training data, traditional speaker models based on maximum likelihood work well, but with scarce training data the opposite holds. Moreover, maximum likelihood trains each speaker's model only on that speaker's own data, independently of the other speakers; because the training stage ignores the relations between different speaker models, the models are easily confused during recognition. In recent years, discriminative acoustic model training has been proposed to minimize the classification error directly rather than maximize the likelihood of the acoustic models. In this thesis, we use minimum classification error (MCE) training for the speaker models and support vector machines to improve it. Because MCE is not robust with respect to the chosen number of competitive speakers, we use the scores of the speaker models on the training data as class labels to train support vector machines, and then use the support vectors to select the competitive speakers. This yields more robust and higher speaker recognition performance than MCE alone.
APA, Harvard, Vancouver, ISO, and other styles
19

"Parametric classification and variable selection by the minimum integrated squared error criterion." Thesis, 2012. http://hdl.handle.net/1911/70219.

Full text
Abstract:
This thesis presents a robust solution to the classification and variable selection problem when the dimension of the data, or number of predictor variables, may greatly exceed the number of observations. When faced with the problem of classifying objects given many measured attributes, the goal is to build a model that makes the most accurate predictions using only the most meaningful subset of the available measurements. The introduction of ℓ1-regularized model fitting has inspired many approaches that do model fitting and variable selection simultaneously. If parametric models are employed, the standard approach is some form of regularized maximum likelihood estimation. While this is an asymptotically efficient procedure under very general conditions, it is not robust: outliers can negatively impact both estimation and variable selection, and they can be very difficult to identify as the number of predictor variables becomes large. Minimizing the integrated squared error, or L2 error, while less efficient, has been shown to generate parametric estimators that are robust to a fair amount of contamination in several contexts. In this thesis, we present a novel robust parametric regression model for the binary classification problem based on the L2 distance, the logistic L2 estimator (L2E). To perform simultaneous model fitting and variable selection among correlated predictors in the high-dimensional setting, an elastic net penalty is introduced. A fast computational algorithm for minimizing the elastic-net-penalized logistic L2E loss is derived, and results on the algorithm's global convergence properties are given. Through simulations we demonstrate the utility of the penalized logistic L2E at robustly recovering sparse models from high-dimensional data in the presence of outliers and inliers. Results on real genomic data are also presented.
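The robustness of minimum integrated squared error estimation can be illustrated in the simplest parametric case, a Gaussian location-scale fit. This is not the thesis's elastic-net logistic L2E; the contamination model and the coarse grid-search minimizer are illustrative assumptions:

```python
import numpy as np

def l2e_gaussian(x, mus, sigmas):
    """Fit N(mu, sigma^2) by minimizing the integrated squared error (L2E).

    For a Gaussian model the criterion has the closed form
    L2E(mu, s) = 1/(2 s sqrt(pi)) - (2/n) * sum_i phi(x_i; mu, s^2),
    minimized here by a coarse grid search for transparency.
    """
    best = (np.inf, np.nan, np.nan)
    for mu in mus:
        for s in sigmas:
            phi = np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
            crit = 1.0 / (2.0 * s * np.sqrt(np.pi)) - 2.0 * phi.mean()
            if crit < best[0]:
                best = (crit, mu, s)
    return best[1], best[2]

# 10% of the sample is contaminated by a far-away cluster
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 450), rng.normal(8.0, 0.5, 50)])
mu_l2e, s_l2e = l2e_gaussian(x, np.linspace(-2, 10, 121), np.linspace(0.3, 4, 38))
mu_ml = x.mean()   # maximum likelihood location, dragged toward the outliers
print(mu_l2e, s_l2e, mu_ml)
```

Because each data point contributes only a bounded density value phi(x_i) to the criterion, distant outliers contribute essentially nothing, whereas the ML mean absorbs them in full.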
APA, Harvard, Vancouver, ISO, and other styles
20

Hung, Tsz-Ying, and 洪慈霙. "Matrix-Type Minimum Classification Error Based Two-Dimension Cepstrum for Speech Recognition." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/71026177856492665602.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Electrical Engineering
98
This thesis investigates training models with matrix-type minimum classification error (MMCE), compares them with other approaches, and uses different enhancement methods to improve the performance of a speech recognition system. To apply the minimum classification error method in the spatial dimension, we propose matrix-type minimum classification error. Feature extraction is based on the Modified Two-Dimension Cepstrum (MTDC), and template matching employs Gaussian Mixture Models (GMMs). However, background noise in everyday environments can degrade performance, so we adopt MMCE to enhance the speech features, and then use the system to recognize the speech. The corpus consists of the Chinese digits 0-9 from 10 speakers (5 male and 5 female), each uttering each digit 10 times (total files: 10400). We selected 980 files from each speaker as training files and the remainder as testing files. Finally, we compared and discussed the recognition results under several variable background-noise conditions.
APA, Harvard, Vancouver, ISO, and other styles
21

Huang, Chun-Chieh, and 黃俊傑. "Maximum Likelihood and Minimum Classification Error Beam-forming for Robust Speech Recognition." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/p8n75s.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer and Communication
96
This thesis discusses array signal processing approaches for improving speech recognition in far-field environments. Distortion of the speech signal and additive noise degrade the recognition rate; a microphone array can improve the quality of the recorded speech. Traditional array processing focuses on increasing the signal-to-noise ratio or making the signal robust under certain constraints, but these methods ignore the operating principle of the speech recognizer: a cleaner signal waveform does not necessarily yield a better recognition rate. This thesis proposes array processing that not only improves the signal waveform but also exploits the statistical properties of the hidden Markov model (HMM) widely used in speech recognition. A maximum likelihood (ML) algorithm is applied to the microphone array by adjusting the array filter parameters to increase the likelihood of the supervised sentence. Finally, the thesis applies the minimum classification error (MCE) algorithm to array processing: the approach reduces a loss function corresponding to the classification error by adjusting the frequency response of the array, exploiting feedback between the speech recognizer and the array to improve recognition accuracy. Experiments on simulated speech signals show that ML outperforms the traditional delay-and-sum algorithm, and the MCE algorithm improves recognition accuracy beyond ML.
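The delay-and-sum baseline that the abstract compares against can be sketched as follows; the array geometry, sampling rate, and integer-sample delays are illustrative simplifications (real systems use fractional-delay filters), and the thesis's ML/MCE-optimized filters are not shown:

```python
import numpy as np

def delay_and_sum(mics, fs, c=343.0, d=0.05, theta=0.0):
    """Delay-and-sum beamformer for a uniform linear array (far field).

    mics : (M, T) microphone signals; element spacing d meters; steering
    angle theta in radians from broadside.  Each channel is advanced by its
    propagation delay and the channels are averaged.
    """
    M, T = mics.shape
    out = np.zeros(T)
    for m in range(M):
        tau = m * d * np.sin(theta) / c        # per-element delay, seconds
        shift = int(round(tau * fs))
        out += np.roll(mics[m], -shift)
    return out / M

# Broadside source with independent sensor noise: averaging M channels
# should reduce the noise power by roughly a factor of M.
rng = np.random.default_rng(0)
fs, T, M = 16000, 16000, 8
t = np.arange(T) / fs
s = np.sin(2 * np.pi * 440 * t)
mics = np.array([s + 0.5 * rng.standard_normal(T) for _ in range(M)])
y = delay_and_sum(mics, fs, theta=0.0)
noise_in = 0.25                                # per-microphone noise power
noise_out = np.mean((y - s) ** 2)
print(noise_out)
```

This waveform-level SNR gain is exactly the criterion the thesis argues is insufficient on its own: it says nothing about the HMM likelihood or classification error of the recognizer downstream.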
APA, Harvard, Vancouver, ISO, and other styles
22

Hsieh, Ping-Ju, and 謝秉儒. "Optimal Design of Minimum Mean-Square Error Noise Reduction Algorithm Using Simulated Annealing Technique." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/75286118336700183998.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Mechanical Engineering
96
This thesis proposes an optimized speech enhancement algorithm aimed at single-channel noise reduction (NR). The optimization process is based on an objective function obtained from a regression model and the simulated annealing (SA) algorithm, which is well suited for problems with many local optima. The NR algorithm, the minimum mean-square-error noise reduction (MMSE-NR) algorithm, employs a time-recursive averaging (TRA) method for noise estimation. A sensitivity analysis found that one of the two optimal parameters remains relatively constant, while the other varies drastically in different noise scenarios. Another NR algorithm proposed in the thesis employs linear predictive coding (LPC) as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms, using white noise and car noise at a signal-to-noise ratio (SNR) of 5 dB. The results of the subjective tests were processed using analysis of variance (ANOVA) to establish statistical significance, and a post-hoc test (Tukey's HSD) was conducted to assess the statistical differences between the NR algorithms. Compared with conventional algorithms, the optimized MMSE-TRA-NR algorithm proved effective in enhancing noise-corrupted speech signals without compromising timbral quality.
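The MMSE-style gain with a time-recursive-averaging noise estimate can be sketched on synthetic power spectra. This is a simplified Wiener-type gain applied to power spectra with the noise PSD averaged over an assumed noise-only lead-in; the TRA rule of the thesis updates adaptively in speech pauses, and all parameters here are illustrative assumptions:

```python
import numpy as np

def mmse_tra_nr(frames, n_noise_frames=20, alpha=0.9, gain_floor=0.1):
    """Wiener-type spectral gain with a time-recursive-averaging noise estimate.

    frames : (n_frames, n_fft) power spectra.  The noise PSD is tracked by
    N <- alpha*N + (1 - alpha)*P over an initial noise-only segment, then
    the gain G = max(1 - N/P, gain_floor) is applied to every frame.
    """
    noise = frames[0].copy()
    for p in frames[1:n_noise_frames]:
        noise = alpha * noise + (1 - alpha) * p     # recursive averaging
    out = np.empty_like(frames)
    for i, p in enumerate(frames):
        g = np.maximum(1.0 - noise / np.maximum(p, 1e-12), gain_floor)
        out[i] = g * p
    return out

# Synthetic test: unit-power noise everywhere, strong "speech" band later on
rng = np.random.default_rng(0)
frames = rng.exponential(1.0, (50, 64))
frames[25:, :8] += 20.0
out = mmse_tra_nr(frames)
noise_red = out[25:, 8:].mean() / frames[25:, 8:].mean()    # noise-only bins
speech_keep = out[25:, :8].mean() / frames[25:, :8].mean()  # speech bins
print(noise_red, speech_keep)
```

The two knobs the SA optimization of the thesis would tune correspond here to alpha and gain_floor: the former controls how fast the noise estimate tracks, the latter how aggressively low-SNR bins are suppressed.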
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Chien-Shun, and 王建順. "A Novel Color Interpolation Algorithm and Hardware Architecture by Pre-estimating Minimum Square Error." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/43359896324429417896.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (MS/PhD program)
93
In recent years, digital still cameras (DSCs) have become very popular consumer electronic devices. Most digital still cameras use a single charge-coupled device (CCD) with a color filter array (CFA) to capture sub-sampled digital color images. This thesis presents a novel color interpolation algorithm for the CFA in DSCs, which pre-estimates the minimum square error to address the color interpolation problem. To estimate the missing pixels in the Bayer CFA pattern, the weights of adjacent color pattern pairs are determined by matrix computation. We adopt the color model (KR, KB) used in many CFA color interpolation algorithms. Experimental results show that the proposed algorithm achieves better performance: compared with previous methods, it provides high-quality images in DSCs and a regular architecture for VLSI design. For efficient hardware implementation, we propose a modified recursive Schur algorithm for the hardware design of the proposed interpolation algorithm.
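The interpolation problem itself can be made concrete with the classic bilinear baseline: sample an RGB image through an RGGB Bayer pattern, then fill each missing channel from same-color neighbors. This is the conventional baseline, not the thesis's pre-estimated minimum-square-error method with the (KR, KB) color model:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image with an RGGB Bayer pattern (one value per pixel)."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w))
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
    return cfa

def bilinear_demosaic(cfa):
    """Fill each missing channel by averaging available 3x3 same-color neighbors."""
    h, w = cfa.shape
    masks = np.zeros((h, w, 3), bool)
    masks[0::2, 0::2, 0] = True
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True
    pad_cfa = np.pad(cfa, 1)
    pad_mask = np.pad(masks, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, w, 3))
    for c in range(3):
        num = np.zeros((h, w))
        den = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                m = pad_mask[1 + dy:1 + dy + h, 1 + dx:1 + dx + w, c]
                num += np.where(m, pad_cfa[1 + dy:1 + dy + h, 1 + dx:1 + dx + w], 0.0)
                den += m
        out[..., c] = num / np.maximum(den, 1)
    return out

img = np.full((8, 8, 3), 0.5)
rec = bilinear_demosaic(bayer_mosaic(img))
print(np.max(np.abs(rec - img)))
```

Bilinear averaging blurs across edges; the thesis's contribution is precisely to replace the fixed averaging weights with weights pre-estimated to minimize the square error, in a form regular enough for VLSI.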
APA, Harvard, Vancouver, ISO, and other styles
24

Lin, Jheng-Yao, and 林政曜. "Utilizing Minimum Mean-Square-Error algorithm and Kalman Filter for channel estimation in Orthogonal Frequency-Division Multiplexing system." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/94895513978379800505.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Electrical Engineering (Master's program)
94
In this thesis, the channel frequency response of a wireless Orthogonal Frequency-Division Multiplexing system is estimated and modeled using the minimum mean-square-error (MMSE) algorithm and the Kalman filtering algorithm. Two channel models are developed for the system considered: an additive white Gaussian noise channel and a Rayleigh slow-fading channel with additive white Gaussian noise. With these models, several examples are simulated to examine the resulting symbol error rates versus signal-to-noise ratios and fading factors, and thereby the effectiveness of the developed algorithm.
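The frequency-domain MMSE step can be sketched as pilot-based least-squares estimation smoothed by a linear MMSE filter. The Kalman tracking part of the thesis is not shown, and the channel model, pilot pattern, and assumed-known channel correlation are illustrative assumptions:

```python
import numpy as np

def mmse_channel_estimate(y_pilot, x_pilot, R_h, snr_lin):
    """Per-subcarrier LS estimate smoothed by a frequency-domain MMSE filter.

    H_ls = Y / X ;  H_mmse = R_h (R_h + (1/snr) I)^{-1} H_ls,
    with R_h the channel frequency correlation matrix (assumed known).
    """
    n = len(y_pilot)
    h_ls = y_pilot / x_pilot
    W = R_h @ np.linalg.inv(R_h + (1.0 / snr_lin) * np.eye(n))
    return W @ h_ls

rng = np.random.default_rng(0)
n_sc, n_taps = 64, 4
# Rayleigh multipath channel: iid taps with a uniform power delay profile
g = (rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)) / np.sqrt(2 * n_taps)
H = np.fft.fft(g, n_sc)
# Frequency correlation R_h[k, l] = E[H_k H_l^*] for this delay profile
F = np.fft.fft(np.eye(n_sc))[:, :n_taps]
R_h = F @ F.conj().T / n_taps
x = np.ones(n_sc)                              # known BPSK pilots on every carrier
snr_lin = 10 ** (10 / 10)                      # 10 dB
noise = np.sqrt(1 / (2 * snr_lin)) * (rng.standard_normal(n_sc)
                                      + 1j * rng.standard_normal(n_sc))
y = H * x + noise
mse_ls = np.mean(np.abs(y / x - H) ** 2)
mse_mmse = np.mean(np.abs(mmse_channel_estimate(y, x, R_h, snr_lin) - H) ** 2)
print(mse_ls, mse_mmse)
```

Because the channel has only n_taps degrees of freedom spread over n_sc subcarriers, the MMSE smoother suppresses most of the per-carrier LS noise; a Kalman filter extends the same idea across OFDM symbols in time.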
APA, Harvard, Vancouver, ISO, and other styles
25

Dalton, Lori Anne. "Analysis and Optimization of Classifier Error Estimator Performance within a Bayesian Modeling Framework." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10873.

Full text
Abstract:
With the advent of high-throughput genomic and proteomic technologies, in conjunction with the difficulty in obtaining even moderately sized samples, small-sample classifier design has become a major issue in the biological and medical communities. Training-data error estimation becomes mandatory, yet none of the popular error estimation techniques have been rigorously designed via statistical inference or optimization. In this investigation, we place classifier error estimation in a framework of minimum mean-square error (MMSE) signal estimation in the presence of uncertainty, where uncertainty is relative to a prior over a family of distributions. This results in a Bayesian approach to error estimation that is optimal and unbiased relative to the model. The prior addresses a trade-off between estimator robustness (modeling assumptions) and accuracy. Closed-form representations for Bayesian error estimators are provided for two important models: discrete classification with Dirichlet priors (the discrete model) and linear classification of Gaussian distributions with fixed, scaled identity or arbitrary covariances and conjugate priors (the Gaussian model). We examine robustness to false modeling assumptions and demonstrate that Bayesian error estimators perform especially well for moderate true errors. The Bayesian modeling framework facilitates both optimization and analysis. It naturally gives rise to a practical expected measure of performance for arbitrary error estimators: the sample-conditioned mean-square error (MSE). Closed-form expressions are provided for both Bayesian models. We examine the consistency of Bayesian error estimation and illustrate a salient application in censored sampling, where sample points are collected one at a time until the conditional MSE reaches a stopping criterion. 
We address practical considerations for gene-expression microarray data, including the suitability of the Gaussian model, a methodology for calibrating normal-inverse-Wishart priors from unused data, and an approximation method for non-linear classification. We observe superior performance on synthetic high-dimensional data and real data, especially for moderate to high expected true errors and small feature sizes. Finally, arbitrary error estimators may be optimally calibrated assuming a fixed Bayesian model, sample size, classification rule, and error estimation rule. Using a calibration function mapping error estimates to their optimally calibrated values off-line, error estimates may be calibrated on the fly whenever the assumptions apply.
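The core idea of MMSE error estimation under a prior can be shown in a deliberately tiny instance: a Beta prior on a single classifier's true error rate. This is a one-parameter toy, not the thesis's discrete (Dirichlet) or Gaussian models, and the prior and sample sizes are illustrative assumptions:

```python
import numpy as np

def bayes_error_estimate(n_errors, n_samples, a=1.0, b=1.0):
    """MMSE (posterior-mean) estimate of a classifier's true error rate.

    The unknown true error p gets a Beta(a, b) prior; after n_errors
    mistakes on n_samples points the posterior is Beta(a + k, b + n - k).
    The MMSE estimate is the posterior mean, and the sample-conditioned
    MSE is the posterior variance.
    """
    pa = a + n_errors
    pb = b + n_samples - n_errors
    est = pa / (pa + pb)
    cond_mse = pa * pb / ((pa + pb) ** 2 * (pa + pb + 1))
    return est, cond_mse

# When p really is drawn from the prior, the Bayesian estimate beats the
# raw error frequency k/n in mean-square error (MMSE optimality).
rng = np.random.default_rng(0)
n, trials = 10, 5000
p = rng.uniform(size=trials)             # true errors drawn from Beta(1, 1)
k = rng.binomial(n, p)
bayes = (1.0 + k) / (2.0 + n)            # posterior means under the uniform prior
mse_bayes = np.mean((bayes - p) ** 2)
mse_freq = np.mean((k / n - p) ** 2)
print(mse_bayes, mse_freq)
```

The sample-conditioned MSE returned here is the same quantity the censored-sampling scheme above monitors: collection stops once the posterior variance falls below a threshold.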
APA, Harvard, Vancouver, ISO, and other styles
26

Nagaraja, Srinidhi. "Multi-Antenna Communication Receivers Using Metaheuristics and Machine Learning Algorithms." Thesis, 2013. http://etd.iisc.ernet.in/2005/3442.

Full text
Abstract:
In this thesis, our focus is on low-complexity, high-performance detection algorithms for multi-antenna communication receivers. A key contribution in this thesis is the demonstration that efficient algorithms from metaheuristics and machine learning can be gainfully adapted for signal detection in multi-antenna communication receivers. We first investigate a popular metaheuristic known as the reactive tabu search (RTS), a combinatorial optimization technique, to decode the transmitted signals in large-dimensional communication systems. A basic version of the RTS algorithm is shown to achieve near-optimal performance for 4-QAM in large dimensions. We then propose a method to obtain a lower bound on the BER performance of the optimal detector. This lower bound is tight at moderate to high SNRs and is useful in situations where the performance of optimal detector is needed for comparison, but cannot be obtained due to very high computational complexity. To improve the performance of the basic RTS algorithm for higher-order modulations, we propose variants of the basic RTS algorithm using layering and multiple explorations. These variants are shown to achieve near-optimal performance in higher-order QAM as well. Next, we propose a new receiver called linear regression of minimum mean square error (MMSE) residual receiver (referred to as LRR receiver). The proposed LRR receiver improves the MMSE receiver by learning a linear regression model for the error of the MMSE receiver. The LRR receiver uses pilot data to estimate the channel, and then uses locally generated training data (not transmitted over the channel) to find the linear regression parameters. The LRR receiver is suitable for applications where the channel remains constant for a long period (slow-fading channels) and performs well.
Finally, we propose a receiver that uses a committee of linear receivers, whose parameters are estimated from training data using a variant of the AdaBoost algorithm, a celebrated supervised classification algorithm in machine learning. We call our receiver the boosted MMSE (B-MMSE) receiver. We demonstrate that the performance and complexity of the proposed B-MMSE receiver are quite attractive for multi-antenna communication receivers.
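The tabu-search detection idea can be sketched for a small real-valued BPSK system. This simplified version uses a fixed-length tabu list and no aspiration criterion, standing in for the reactive tabu management of the thesis; the channel, noise level, and iteration counts are illustrative assumptions:

```python
import numpy as np
from itertools import product

def tabu_search_detect(H, y, n_iter=50, tabu_len=5):
    """Tabu-search MIMO detection for BPSK: minimize ||y - H s||^2.

    Starts from the sign-quantized MMSE solution; each move flips one
    symbol, the best non-tabu neighbor is taken even if it is worse (to
    escape local minima), and recently flipped positions are held tabu.
    """
    n = H.shape[1]
    s = np.sign(np.linalg.solve(H.T @ H + np.eye(n), H.T @ y))
    s[s == 0] = 1.0
    best, best_cost = s.copy(), np.sum((y - H @ s) ** 2)
    tabu = []
    for _ in range(n_iter):
        costs = [np.sum((y - H @ np.where(np.arange(n) == i, -s, s)) ** 2)
                 for i in range(n)]
        move = int(next(i for i in np.argsort(costs) if i not in tabu))
        s[move] = -s[move]
        tabu = (tabu + [move])[-tabu_len:]
        if costs[move] < best_cost:
            best, best_cost = s.copy(), costs[move]
    return best, best_cost

rng = np.random.default_rng(0)
n = 8
H = rng.standard_normal((n, n))
s_true = rng.choice([-1.0, 1.0], n)
y = H @ s_true + 0.5 * rng.standard_normal(n)
s_hat, cost = tabu_search_detect(H, y)
# Brute-force ML cost over all 2^8 BPSK vectors, for comparison
ml_cost = min(np.sum((y - H @ np.array(c)) ** 2)
              for c in product([-1.0, 1.0], repeat=n))
print(cost, ml_cost)
```

The appeal in large dimensions is that each iteration costs only n neighbor evaluations, while the brute-force ML search used here for reference grows as 2^n.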
APA, Harvard, Vancouver, ISO, and other styles