Dissertations / Theses on the topic 'Linear Pattern Recognition'

To see the other types of publications on this topic, follow the link: Linear Pattern Recognition.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 39 dissertations / theses for your research on the topic 'Linear Pattern Recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Reed, Stuart. "Cascaded linear shift invariant processing in pattern recognition." Thesis, Loughborough University, 2000. https://dspace.lboro.ac.uk/2134/7481.

Full text
Abstract:
Image recognition is the process of classifying a pattern in an image into one of a number of stored classes. It is used in such diverse applications as medical screening, quality control in manufacture and military target recognition. An image recognition system is called shift invariant if a shift of the pattern in the input image produces a proportional shift in the output, meaning that both the class and location of the object in the image are identified. The work presented in this thesis considers a cascade of linear shift invariant optical processors, or correlators, separated by fields of point non-linearities, called the cascaded correlator. This is introduced as a method of providing parallel, shift-invariant, non-linear pattern recognition in a system that can learn in the manner of neural networks. It is shown that if a neural network is constrained to give overall shift invariance, the resulting structure is a cascade of correlators, meaning that the cascaded correlator is the only architecture which will provide fully shift invariant pattern recognition. The issues of training such a non-linear system are discussed in neural network terms, and the non-linear decisions of the system are investigated. By considering digital simulations of a two-stage system, it is shown that the cascaded correlator is superior to linear filtering for both discrimination and tolerance to image distortion. This is shown for theoretical images and in real-world applications based on fault identification in can manufacture. The cascaded correlator has also been proven as an optical system by implementation in a joint transform correlator architecture. By comparing simulated and optical results, the resulting practical errors are analysed and compensated for.
It is shown that the optical implementation produces results similar to those of the simulated system, meaning that it is possible to provide a highly non-linear decision using robust parallel optical processing techniques.
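The building block described above, a linear shift-invariant correlator followed by a field of point non-linearities, can be sketched numerically. The toy image, template and two-stage cascade below are invented for illustration; the thesis's optical implementation and training procedure are not reproduced here.

```python
import numpy as np

def correlate2d(image, template):
    """Linear shift-invariant (circular) correlation computed via the FFT."""
    F = np.fft.fft2(image)
    H = np.fft.fft2(template, s=image.shape)
    return np.real(np.fft.ifft2(F * np.conj(H)))

def cascaded_correlator(image, filters, nonlinearity=np.tanh):
    """Alternate correlator stages with fields of point non-linearities."""
    out = image
    for f in filters:
        out = nonlinearity(correlate2d(out, f))
    return out

# Toy input: a bright 2x2 blob at rows/cols 3-4 of an 8x8 image.
img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0
template = np.ones((2, 2))

# A single stage already localises the blob; the correlation peak sits at
# the blob's top-left corner, and the point non-linearity preserves its location.
stage1 = np.tanh(correlate2d(img, template))
peak = np.unravel_index(np.argmax(stage1), stage1.shape)  # (3, 3)
```

Because the non-linearity is applied point-wise, shifting the input pattern shifts the whole output field by the same amount, which is the shift-invariance property the cascade preserves.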
APA, Harvard, Vancouver, ISO, and other styles
2

Lee, Richard. "3D non-linear image restoration algorithms." Thesis, University of East Anglia, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Jian. "Non-linear techniques for image processing." Thesis, King's College London (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ruan, Yang. "Smooth and locally linear semi-supervised metric learning /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20RUAN.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Powell, Heather M. "Impedance imaging using linear arrays of electrodes." Thesis, University of Sheffield, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gonzalez, Adrian. "Spatial pattern recognition for crop-livestock systems using multispectral data." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3790.

Full text
Abstract:
Within the field of pattern recognition (PR) a very active area is the clustering and classification of multispectral data, which basically aims to allocate the right class of ground category to a reflectance or radiance signal. Generally, the problem complexity is related to the incorporation of spatial characteristics that are complementary to the nonlinearities of land surface process heterogeneity, remote sensing effects and multispectral features. The present research describes the application of learning machine methods to accomplish the above task by inducing a relationship between the spectral response of farms’ land cover and their farming system typology from a representative set of instances. Such methodologies are not traditionally used in crop-livestock studies. Nevertheless, this study shows that their application leads to simple and theoretically robust classification models. The study has covered the following phases: a) geovisualization of crop-livestock systems; b) feature extraction of both multispectral and attributive data; and c) supervised farm classification. The first is a complementary methodology to represent the spatial feature intensity of farming systems in the geographical space. The second belongs to the unsupervised learning field, which mainly involves the appropriate description of input data in a lower dimensional space. The last is a method based on statistical learning theory, which has been successfully applied to supervised classification problems and to generating models described by implicit functions. In this research the performance of various kernel methods applied to the representation and classification of crop-livestock systems described by multispectral response is studied and compared. The data from those systems include linearly and nonlinearly separable groups that were labelled using multidimensional attributive data.
Geovisualization findings show the existence of two well-defined farm populations within the whole study area, and three subgroups in relation to the Guarico section. The existence of these groups was confirmed by both hierarchical and kernel clustering methods, and crop-livestock system instances were segmented and labelled into farm typologies based on: a) milk and meat production; b) reproductive management; c) stocking rate; and d) crop-forage-forest land use. The minimum set of labelled examples needed to properly train the kernel machine was 20 instances. Models induced from training data sets using kernel machines were in general terms better than those from hierarchical clustering methodologies. However, the size of the training data set represents one of the main difficulties to be overcome before this technique can be applied more generally in farming system studies. These results have important implications for large-scale monitoring of crop-livestock systems, particularly for establishing balanced policy decisions, formulating intervention plans, and properly describing target typologies so that investment efforts can be focused on local issues.
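The kernel machinery the abstract relies on can be illustrated with a minimal classifier. The sketch below uses a kernel nearest-class-mean rule with an RBF kernel, a deliberate simplification of the kernel machines in the thesis; the two-class ring/cluster data and the `gamma` value are invented for the demo, chosen only because the classes are not linearly separable in the input space.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_nearest_mean(X_train, y_train, X_test, gamma=1.0):
    """Assign each test point to the class whose feature-space mean is closest.
    ||phi(x) - mu_c||^2 = k(x,x) - 2 mean_i k(x,x_i) + mean_ij k(x_i,x_j)."""
    classes = np.unique(y_train)
    K_tt = np.ones(len(X_test))            # k(x, x) = 1 for the RBF kernel
    dists = []
    for c in classes:
        Xc = X_train[y_train == c]
        K_xc = rbf(X_test, Xc, gamma)      # k(x, x_i) for x_i in class c
        K_cc = rbf(Xc, Xc, gamma)
        dists.append(K_tt - 2 * K_xc.mean(axis=1) + K_cc.mean())
    return classes[np.argmin(np.stack(dists), axis=0)]

# Toy non-linearly separable data: a tight inner cluster versus a surrounding ring.
rng = np.random.default_rng(0)
inner = rng.normal(0, 0.3, (20, 2))
angles = rng.uniform(0, 2 * np.pi, 20)
ring = np.c_[np.cos(angles), np.sin(angles)] * 2 + rng.normal(0, 0.1, (20, 2))
X = np.vstack([inner, ring])
y = np.array([0] * 20 + [1] * 20)
pred = kernel_nearest_mean(X, y, X, gamma=1.0)
```

No linear boundary in the raw 2-D space separates these two groups, yet the kernel rule does, which is the point the abstract makes about linearly and nonlinearly separable farm groups.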
APA, Harvard, Vancouver, ISO, and other styles
7

Ma, Jinhua. "Dependency modeling for information fusion with applications in visual recognition." HKBU Institutional Repository, 2013. https://repository.hkbu.edu.hk/etd_ra/1522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Peacegood, Gillian. "A knowledge-based system for extraction and recognition of linear features in high resolution remotely-sensed imagery." Thesis, Kingston University, 1989. http://eprints.kingston.ac.uk/20529/.

Full text
Abstract:
A knowledge-based system for the automatic extraction and recognition of linear features from digital imagery has been developed, with a knowledge base applied to the recognition of linear features in high resolution remotely sensed imagery, such as SPOT HRV and XS, Thematic Mapper and high altitude aerial photography. In contrast to many knowledge-based vision systems, emphasis is placed on uncertainty and the exploitation of context via statistical inferencing techniques, and issues of strategy and control are given less emphasis. Linear features are extracted from imagery, which may be multiband imagery, using an edge detection and tracking algorithm. A relational database for the representation of linear features has been developed, and this is shown to be useful in a number of applications, including general purpose query and display. A number of proximity relationships between the linear features in the database are established, using computationally efficient algorithms. Three techniques for classifying the linear features by exploiting uncertainty and context have been implemented and are compared. These are Bayesian inferencing using belief networks, a new inferencing technique based on belief functions, and relaxation labelling using belief functions. The two inferencing techniques are shown to produce more realistic results than probabilistic relaxation, and the new inferencing method based on belief functions is shown to perform best in practical situations. Overall, the system is shown to produce reasonably good classification results on hand-extracted linear features, although the classification is less good on automatically extracted linear features because of shortcomings in the edge detection and extraction processes. The system adopts many of the features of expert systems, including complete separation of control from stored knowledge and justification for the conclusions reached.
APA, Harvard, Vancouver, ISO, and other styles
9

Sharma, Alok. "Linear Models for Dimensionality Reduction and Statistical Pattern Recognition for Supervised and Unsupervised Tasks." Thesis, Griffith University, 2006. http://hdl.handle.net/10072/365298.

Full text
Abstract:
In this dissertation a number of novel algorithms for dimension reduction and statistical pattern recognition for both supervised and unsupervised learning tasks have been presented. Several existing pattern classifiers and dimension reduction algorithms are studied. Their limitations and/or weaknesses are considered and accordingly improved techniques are given which overcome several of their shortcomings. In particular, the following research works are carried out:
• A literature survey of basic techniques for pattern classification, such as the Gaussian mixture model (GMM), expectation-maximization (EM) algorithm, minimum distance classifier (MDC), vector quantization (VQ), nearest neighbour (NN) and k-nearest neighbour (kNN), is conducted.
• A survey of basic dimensionality reduction tools, viz. principal component analysis (PCA) and linear discriminant analysis (LDA), is conducted. These techniques are also considered for pattern classification purposes.
• A Fast PCA technique is developed which finds the desired number of leading eigenvectors with much less computational cost, and requires extremely low processing time compared to the basic PCA model.
• A gradient LDA technique is developed which solves the small sample size problem, which was not possible with the basic LDA technique.
• A rotational LDA technique is developed which reduces the overlapping of samples between the classes to a large extent compared to the basic LDA technique.
• A combined classifier using MDC, class-dependent PCA and LDA is designed which improves on the performance attainable with any of the single classifiers. PCA is applied prior to LDA in such a way that the small sample size problem (if present) is avoided.
• Splitting-technique initialization is introduced into the local PCA technique. The proposed integration enables easier data processing and more accurate representation of multivariate data.
• A combined technique using VQ and vector-quantized principal component analysis (VQPCA) is presented which provides significant improvement in classifier performance (in terms of accuracy) at very low storage and processing-time requirements compared to individual and several other classifiers.
• A survey of unsupervised learning tasks such as independent component analysis (ICA) is conducted.
• A new perspective on subspace ICA (generalized ICA, where all the components need not be independent) is introduced by developing a vector kurtosis function (an extension of kurtosis).
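As a point of reference for the PCA variants listed above, the basic eigendecomposition PCA they build on can be sketched in a few lines. This is not the thesis's Fast PCA algorithm (whose details are not given in the abstract); the 3-D toy data with one dominant direction is invented for the demo.

```python
import numpy as np

def pca(X, k):
    """Basic PCA: return the top-k principal axes and the projected data."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]    # indices of the k largest
    W = vecs[:, order]                    # leading eigenvectors as columns
    return W, Xc @ W

# Toy 3-D data that mostly varies along the (assumed) direction [3, 2, 1].
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 2.0, 1.0]]) \
    + rng.normal(0, 0.1, (200, 3))
W, Z = pca(X, 1)   # W should align with [3, 2, 1] up to sign
```

The cost of this baseline is dominated by forming and decomposing the full covariance matrix; a Fast PCA scheme such as the one described above avoids that by computing only the desired leading eigenvectors.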
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith School of Engineering
Full Text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030619.162803.

Full text
Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: a parameter extraction step and a feature extraction step. In the parameter extraction step, information relevant to pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed to a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two popular independent feature extraction algorithms. Both of them extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space to a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria are different from the classifier’s minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade the performance of classifiers. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion. The Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers. It is a type of discriminative learning algorithm that achieves minimum classification error directly. The flexibility of the MCE algorithm's framework makes it convenient for conducting feature extraction and classification jointly.
Conventional feature extraction and pattern classification algorithms, LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier, are linear algorithms. The advantage of linear algorithms is their simplicity and ability to reduce feature dimensionalities. However, they have the limitation that the decision boundaries generated are linear and have little computational flexibility. SVM is a recently developed integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classification functions that afford dot-products can be computed efficiently in higher dimensional feature spaces. Classes which are not linearly separable in the original parametric space can be linearly separated in the higher dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex nonlinear decision boundaries. However, SVM is a highly integrated and closed pattern classification system. It is very difficult to adopt feature extraction into SVM’s framework, so SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction and proposes the application of MCE training algorithms for joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithms in joint feature extraction and classification tasks. SVM, as a non-linear pattern classification system, is also investigated in this thesis. A reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared first on a number of small databases, such as the Deterding Vowels database, Fisher’s IRIS database and German’s GLASS database. Then they are tested in a large-scale speech recognition experiment based on the TIMIT database.
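The PCA-versus-LDA contrast drawn in the abstract (largest variance versus largest between-class to within-class ratio) can be made concrete with the two-class Fisher discriminant. The sketch below computes the Fisher direction w ∝ Sw⁻¹(m₁ − m₀) on invented 2-D data whose classes differ along one axis but vary most along the other, so PCA and LDA would pick different directions.

```python
import numpy as np

def lda_direction(X, y):
    """Fisher discriminant direction for two classes: w ∝ Sw^-1 (m1 - m0)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

# Hypothetical data: class means differ along x, but variance is largest along y.
rng = np.random.default_rng(2)
X0 = rng.normal([0.0, 0.0], [1.0, 3.0], (100, 2))
X1 = rng.normal([3.0, 0.0], [1.0, 3.0], (100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
w = lda_direction(X, y)   # close to [1, 0]
```

On this data PCA's leading component would point along the high-variance y axis, which carries no class information, while the Fisher direction points along x, illustrating why a variance criterion can be inconsistent with the minimum classification error criterion.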
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365680.

Full text
Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: a parameter extraction step and a feature extraction step. In the parameter extraction step, information relevant to pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed to a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two popular independent feature extraction algorithms. Both of them extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space to a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria are different from the classifier’s minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade the performance of classifiers. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion. The Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers. It is a type of discriminative learning algorithm that achieves minimum classification error directly. The flexibility of the MCE algorithm's framework makes it convenient for conducting feature extraction and classification jointly.
Conventional feature extraction and pattern classification algorithms, LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier, are linear algorithms. The advantage of linear algorithms is their simplicity and ability to reduce feature dimensionalities. However, they have the limitation that the decision boundaries generated are linear and have little computational flexibility. SVM is a recently developed integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classification functions that afford dot-products can be computed efficiently in higher dimensional feature spaces. Classes which are not linearly separable in the original parametric space can be linearly separated in the higher dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex nonlinear decision boundaries. However, SVM is a highly integrated and closed pattern classification system. It is very difficult to adopt feature extraction into SVM’s framework, so SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction and proposes the application of MCE training algorithms for joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithms in joint feature extraction and classification tasks. SVM, as a non-linear pattern classification system, is also investigated in this thesis. A reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared first on a number of small databases, such as the Deterding Vowels database, Fisher’s IRIS database and German’s GLASS database. Then they are tested in a large-scale speech recognition experiment based on the TIMIT database.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Microelectronic Engineering
Full Text
APA, Harvard, Vancouver, ISO, and other styles
12

Bridges, Seth. "Low-power visual pattern classification in analog VLSI /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/6984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Aliyev, Denis Aliyevich. "Visualization and Unsupervised Pattern Recognition in Multidimensional Data Using a New Heuristic for Linear Data Ordering." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1479420043962505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Medonza, Dharshan C. "AUTOMATIC DETECTION OF SLEEP AND WAKE STATES IN MICE USING PIEZOELECTRIC SENSORS." UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_theses/271.

Full text
Abstract:
Currently, technologies such as EEG, EMG and EOG recording are the established methods used in the analysis of sleep. But if these methods are to be employed to study sleep in rodents, extensive surgery and recovery are involved, which can be both time consuming and costly. This thesis presents and analyzes a cost-effective, non-invasive, high throughput system for detecting the sleep and wake patterns in mice using a piezoelectric sensor. This sensor was placed at the bottom of the mice cages to monitor the movements of the mice. The thesis work included the development of the instrumentation and signal acquisition system for recording the signals critical to sleep and wake classification. Classification of the mouse sleep and wake states was studied for a linear classifier and a neural network classifier based on 23 features extracted from the Power Spectrum (PS), Generalized Spectrum (GS), and Autocorrelation (AC) functions of short data intervals. The testing of the classifiers was done on two data sets collected from two mice, with each data set having around 5 hours of data. A scoring of the sleep and wake states was also done via human observation to aid in the training of the classifiers. The performances of these two classifiers were analyzed by looking at the misclassification error of a set of test features when run through a classifier trained by a set of training features. The best performing features were selected by first testing each of the 23 features individually in a linear classifier and ranking them according to their misclassification rate. A test was then done on the 10 best individually performing features, where they were grouped in all possible combinations of 5 features to determine the feature combinations leading to the lowest error rates in a multi-feature classifier. From this test 5 features were eventually chosen to do the classification.
It was found that the features related to the signal energy and the spectral peaks in the 3 Hz range gave the lowest errors. Error rates as low as 4% and 9% were achieved from a 5-feature linear classifier for the two data sets. The error rates from a 5-feature neural network classifier were found to be 6% and 12% respectively for these two data sets.
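The pipeline described above, spectral features from short intervals fed to a trained linear classifier, can be sketched end to end. Everything below is a synthetic stand-in: the two features (total energy and power near 3 Hz) echo the kinds of features the thesis found useful, the "sleep" signals are invented 3 Hz sinusoids in noise, and the least-squares linear classifier is one simple choice of linear rule, not the thesis's exact classifier.

```python
import numpy as np

def spectral_features(signal, fs):
    """Two illustrative features: total spectral energy and power near 3 Hz."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    ps = np.abs(np.fft.rfft(signal)) ** 2       # power spectrum
    band = (freqs >= 2.5) & (freqs <= 3.5)
    return np.array([ps.sum(), ps[band].sum()])

def train_linear_classifier(F, y):
    """Least-squares linear rule on feature matrix F (one row per interval)."""
    A = np.c_[F, np.ones(len(F))]               # append a bias column
    w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
    return w

def predict(F, w):
    return (np.c_[F, np.ones(len(F))] @ w > 0).astype(int)

# Synthetic intervals: "sleep" carries a 3 Hz rhythm, "wake" is broadband noise.
fs, n = 100, 400
rng = np.random.default_rng(3)
t = np.arange(n) / fs
sleep = [np.sin(2 * np.pi * 3 * t) + 0.3 * rng.normal(size=n) for _ in range(20)]
wake = [rng.normal(size=n) for _ in range(20)]
F = np.array([spectral_features(s, fs) for s in sleep + wake])
y = np.array([1] * 20 + [0] * 20)
w = train_linear_classifier(F, y)
```

On these synthetic intervals the band-power feature alone separates the two states, which mirrors the finding that spectral peaks in the 3 Hz range were among the most discriminative features.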
APA, Harvard, Vancouver, ISO, and other styles
15

Pessoa, Lucio Flavio Cavalcanti. "Nonlinear systems and neural networks with hybrid morphological/rank/linear nodes : optimal design and applications to image processing and pattern recognition." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13519.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Shakeel, Mohammad Danish. "Land Cover Classification Using Linear Support Vector Machines." Connect to resource online, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1231812653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Khosla, Nitin. "Dimensionality Reduction Using Factor Analysis." Griffith University. School of Engineering, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20061010.151217.

Full text
Abstract:
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to solve the problems of high dimensions is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler approach to dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques which can be used for dimensionality reduction are: Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered as an extension of Principal Component Analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation, conditioned upon the observations. The maximization step then provides a new estimate of the parameters. This research work compares the techniques Factor Analysis (expectation-maximization based), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM based) and Local Principal Component Analysis using Vector Quantization.
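The EM-for-factor-analysis idea in the abstract can be sketched with the standard EM updates for the model x = Λz + ε, z ~ N(0, I), ε ~ N(0, diag(Ψ)). This is a minimal illustration on synthetic data drawn from an assumed one-factor model, not the thesis's Local Factor Analysis; the loadings, noise level and iteration count are invented for the demo.

```python
import numpy as np

def factor_analysis_em(X, k, n_iter=200):
    """Minimal EM for a k-factor model; returns loadings L and noise diag Psi."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                                  # sample covariance
    rng = np.random.default_rng(0)
    L = rng.normal(size=(p, k)) * 0.1                  # small random start
    Psi = np.diag(S).copy()
    for _ in range(n_iter):
        # E-step: posterior of the factors given the current parameters
        G = np.linalg.inv(np.eye(k) + L.T @ (L / Psi[:, None]))  # post. cov of z
        beta = G @ L.T / Psi[None, :]                  # E[z|x] = beta @ x
        Ez = Xc @ beta.T                               # (n, k) posterior means
        Ezz = n * G + Ez.T @ Ez                        # sum of E[z z^T | x]
        # M-step: re-estimate loadings and noise variances
        L = (Xc.T @ Ez) @ np.linalg.inv(Ezz)
        Psi = np.diag(S - L @ (Ez.T @ Xc) / n)
    return L, Psi

# Synthetic check: data drawn from an assumed one-factor ground truth.
rng = np.random.default_rng(5)
true_L = np.array([[1.0], [0.8], [0.6], [0.4], [0.2]])
z = rng.normal(size=(2000, 1))
X = z @ true_L.T + rng.normal(0, 0.3, (2000, 5))
L, Psi = factor_analysis_em(X, k=1)
```

The recovered loading vector should align with the generating one up to sign and scale; this is the many-to-one mapping situation (observed x, hidden z) for which the abstract notes EM is well suited.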
APA, Harvard, Vancouver, ISO, and other styles
18

Khosla, Nitin. "Dimensionality Reduction Using Factor Analysis." Thesis, Griffith University, 2006. http://hdl.handle.net/10072/366058.

Full text
Abstract:
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to solve the problems of high dimensions is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler approach to dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques which can be used for dimensionality reduction are: Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered as an extension of Principal Component Analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation, conditioned upon the observations. The maximization step then provides a new estimate of the parameters. This research work compares the techniques Factor Analysis (expectation-maximization based), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM based) and Local Principal Component Analysis using Vector Quantization.
Thesis (Masters)
Master of Philosophy (MPhil)
School of Engineering
Full Text
APA, Harvard, Vancouver, ISO, and other styles
19

Oh, Sang Min. "Switching linear dynamic systems with higher-order temporal structure." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29698.

Full text
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2010.
Committee Chair: Dellaert, Frank; Committee Co-Chair: Rehg, James; Committee Member: Bobick, Aaron; Committee Member: Essa, Irfan; Committee Member: Smyth, Padhraic. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
20

Karvir, Hrishikesh. "Design and Validation of a Sensor Integration and Feature Fusion Test-Bed for Image-Based Pattern Recognition Applications." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1291753291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Darwiche, Mostafa. "When operations research meets structural pattern recognition : on the solution of error-tolerant graph matching problems." Thesis, Tours, 2018. http://www.theses.fr/2018TOUR4022/document.

Full text
Abstract:
This thesis lies at the intersection of two fields of scientific research: Structural Pattern Recognition (SPR) and Operations Research (OR). The former aims to make machines more intelligent and able to recognise objects, in particular those represented by graphs, while the latter focuses on solving hard combinatorial optimization problems. The main idea of this thesis is to combine the knowledge of these two fields. Among the hard problems in SPR, the Graph Edit Distance (GED) problem was selected as the core of this work. The contributions concern the design of methods adapted from the OR field for solving the GED problem. Specifically, new integer linear programming models and matheuristics were developed for this purpose, and very good results were obtained compared with existing approaches.
This thesis is focused on Graph Matching (GM) problems and in particular the Graph Edit Distance (GED) problem. There is a growing interest in these problems due to their numerous applications in different research domains, e.g. biology, chemistry, computer vision, etc. However, these problems are known to be complex and hard to solve, as GED is an NP-hard problem. The main objectives sought in this thesis are to develop methods for solving GED problems to optimality and/or heuristically. The Operations Research (OR) field offers a wide range of exact and heuristic algorithms that have achieved very good results when solving optimization problems. So, basically, all the contributions presented in this thesis are methods inspired by the OR field. The exact methods are designed based on deep analysis and understanding of the problem, and are presented as Mixed Integer Linear Program (MILP) formulations. The proposed heuristic approaches are adapted versions of existing MILP-based heuristics (also known as matheuristics), which consider problem-dependent information to improve their performance and accuracy.
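To make the GED problem itself concrete, the sketch below computes it exactly by brute force for tiny graphs, using the standard epsilon-padding trick in which every node of one graph (plus padding) is assigned to a node of the other (plus padding), so that one assignment encodes node substitutions, deletions and insertions. Unit edit costs are assumed throughout; this factorial-time enumeration is only an illustration of the objective that the thesis's MILP formulations optimize far more efficiently.

```python
from itertools import permutations

def ged_exact(labels1, edges1, labels2, edges2):
    """Brute-force exact graph edit distance with unit costs (tiny graphs only)."""
    n1, n2 = len(labels1), len(labels2)
    N = n1 + n2                                   # both sides padded to size N
    E1 = {frozenset(e) for e in edges1}
    E2 = {frozenset(e) for e in edges2}
    best = float('inf')
    for perm in permutations(range(N)):
        cost = 0
        # Node costs: substitution (label mismatch), deletion, insertion = 1 each.
        for i in range(N):
            a = labels1[i] if i < n1 else None     # None = epsilon (padding) node
            b = labels2[perm[i]] if perm[i] < n2 else None
            if a != b and not (a is None and b is None):
                cost += 1
        # Edge costs follow from the node assignment: an edge of G1 survives only
        # if both endpoints land on an edge of G2; the rest are deleted/inserted.
        mapped = {frozenset({perm[u], perm[v]}) for u, v in E1}
        cost += len(mapped - E2) + len(E2 - mapped)
        best = min(best, cost)
    return best

# Path A-B versus triangle A-B-C: insert node C plus two edges, so GED = 3.
d = ged_exact(['A', 'B'], [(0, 1)],
              ['A', 'B', 'C'], [(0, 1), (1, 2), (0, 2)])
```

The same assignment structure is what a MILP formulation encodes with binary variables and linear constraints, replacing this exhaustive enumeration with branch-and-bound over the linear relaxation.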
APA, Harvard, Vancouver, ISO, and other styles
22

Niezen, Gerrit. "The optimization of gesture recognition techniques for resource-constrained devices." Diss., Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-01262009-125121/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Lynch, Michael Richard. "Adaptive techniques in signal processing and connectionist models." Thesis, University of Cambridge, 1990. https://www.repository.cam.ac.uk/handle/1810/244884.

Full text
Abstract:
This thesis covers the development of a series of new methods and the application of adaptive filter theory, which are combined to produce a generalised adaptive filter system that may be used to perform such tasks as pattern recognition. Firstly, the relevant background adaptive filter theory is discussed in Chapter 1, and methods and results which are important to the rest of the thesis are derived or referenced. Chapter 2 covers the development of a new adaptive algorithm designed to give faster convergence than the LMS algorithm but which, unlike the Recursive Least Squares family of algorithms, does not require storage of a matrix with n² elements, where n is the number of filter taps. In Chapter 3 a new extension of the LMS adaptive notch filter is derived and applied, giving an adaptive notch filter the ability to lock onto and track signals of varying pitch without sacrificing notch depth. This application of the LMS filter is of interest as it demonstrates a time-varying filter solution to a stationary problem. The LMS filter is next extended to the multidimensional case, which allows the application of LMS filters to image processing. The multidimensional filter is then applied to the problem of image registration, and this new application of the LMS filter is shown to have significant advantages over current image registration methods. A consideration of the multidimensional LMS filter as a template matcher and pattern recogniser is given. In Chapter 5 a brief review of statistical pattern recognition is given, and in Chapter 6 a review of relevant connectionist models. In Chapter 7 the generalised adaptive filter is derived. This is an adaptive filter with the ability to model non-linear input-output relationships. The Volterra functional analysis of non-linear systems is given and combined with adaptive filter methods to give a generalised non-linear adaptive digital filter.
This filter is then considered as a linear adaptive filter operating in a non-linearly extended vector space. This new filter is shown to have desirable properties as a pattern recognition system. The performance and properties of the new filter are compared with current connectionist models, and results are demonstrated in Chapter 8. In Chapter 9, further mathematical analysis of the networks leads to suggested methods for greatly reducing network complexity for a given problem by choosing suitable pattern classification indices and allowing the network to define its own internal structure. In Chapter 10, robustness of the network to imperfections in its implementation is considered. Chapter 11 finishes the thesis with some conclusions and suggestions for future work.
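The LMS update at the core of the adaptive filters discussed above can be sketched in a few lines. The system-identification setup below is an illustrative example only, not an experiment from the thesis.

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.02):
    """Least-mean-squares adaptive filter: adapt the tap weights w so that
    the filter output w . x_k tracks the desired signal d[k]."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
        e[k] = d[k] - w @ xk                 # instantaneous error
        w += 2 * mu * e[k] * xk              # stochastic-gradient update
    return w, e

# identify an unknown 4-tap FIR system from its input/output signals
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h_true = np.array([0.6, -0.3, 0.1, 0.05])
d = np.convolve(x, h_true)[:len(x)]
w, e = lms_filter(x, d)
```

With persistently exciting white-noise input and a noiseless desired signal, the weights converge to the true impulse response and the error decays towards zero.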
APA, Harvard, Vancouver, ISO, and other styles
24

Vaizurs, Raja Sarath Chandra Prasad. "Atrial Fibrillation Signal Analysis." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3386.

Full text
Abstract:
Atrial fibrillation (AF) is the most common type of cardiac arrhythmia encountered in clinical practice and is associated with increased mortality and morbidity. Identifying the sources of AF has been a goal of researchers for over 20 years. Current treatments such as cardioversion, radiofrequency ablation and multi-drug regimens have reduced the incidence of AF. Nevertheless, these treatments succeed in only 35-40% of AF patients, as they have limited effect in maintaining the patient in normal sinus rhythm. The problem stems from the fact that no methods have been developed to analyze the electrical activity generated by the cardiac cells during AF and to detect the aberrant atrial tissue that triggers it. In clinical practice, the sources triggering AF are generally expected to be at one of the four pulmonary veins in the left atrium. Classifying the signals originating from the four pulmonary veins in the left atrium has been the mainstay of signal analysis in this thesis, ultimately leading to correctly locating the source triggering AF. Unlike much current research, which uses ECG signals for AF analysis, we collect intra-cardiac signals along with ECG signals. AF signals collected from catheters placed inside the heart give a better understanding of AF characteristics than the ECG. In recent years, mechanisms leading to AF induction have begun to be explored, but the current state of research and diagnosis of AF mainly involves inspection of the 12-lead ECG, QRS subtraction methods and spectral analysis to find the fibrillation rate, and is limited to establishing its presence or absence. The main goal of this thesis research is to develop a methodology and algorithm for finding the source of AF. Pattern recognition techniques were used to classify the AF signals originating from the four pulmonary veins.
The classification of AF signals recorded by a stationary intra-cardiac catheter was based on dominant frequency, frequency distribution and normalized power. Principal Component Analysis was used to reduce the dimensionality, and Linear Discriminant Analysis was then used as the classification technique. An algorithm was developed and tested during recorded periods of AF, with promising results.
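The classification pipeline described above (PCA for dimensionality reduction followed by LDA) can be sketched with scikit-learn. The data here are synthetic stand-ins for the intra-cardiac feature vectors, not real AF recordings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 4 classes (one per pulmonary vein), 30-dim features.
rng = np.random.default_rng(1)
means = 3.0 * rng.standard_normal((4, 30))     # one mean vector per vein
X = np.vstack([m + rng.standard_normal((50, 30)) for m in means])
y = np.repeat(np.arange(4), 50)

# PCA reduces dimensionality; LDA then separates the four classes.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
```

Cross-validated accuracy is the natural figure of merit for a four-way source-localization classifier like this.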
APA, Harvard, Vancouver, ISO, and other styles
25

Pacola, Edras Reily. "Uso da análise de discriminante linear em conjunto com a transformada wavelet discreta no reconhecimento de espículas." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1828.

Full text
Abstract:
CAPES
Over the past 20 years, researchers have concentrated efforts on applying the wavelet transform to the processing, filtering, pattern recognition and classification of biomedical signals, specifically electroencephalography (EEG) signals containing events characteristic of epilepsy, the spikes. Several families of mother wavelets have been used, but without consensus on which mother wavelet is the most suitable for this purpose. The signals used present a very wide range of events and have no standardized characteristics. The literature reports EEG signals sampled at 100 to 600 Hz, with spikes lasting from 20 to 200 ms. In this study, 98 wavelets were used. The EEG signals were sampled at 200 Hz to 1 kHz. A neurologist marked a set of 494 spikes and a set of 1500 non-spike events. The study begins by evaluating the number of wavelet decompositions needed for spike detection, followed by a detailed analysis of the combined use of mother wavelets from the same family and across families. Next, the influence of descriptors and their combined use in spike detection is analysed. The results of these studies indicate that it is more appropriate to use a set of mother wavelets, with several decomposition levels and several descriptors, than a single mother wavelet or a specific descriptor for spike detection. Selecting this set of wavelets, decomposition levels and descriptors makes it possible to reach high detection levels according to the desired computational load or the computing platform available for implementation. As a result, this study achieved performance levels between 0.9936 and 0.9999, depending on the computational load.
Other contributions of this study concern the analysis of border-extension methods in spike detection, and the analysis of the effect of the EEG sampling rate on the spike classifier's performance, both with significant results. Also presented as contributions are a new spike-detection architecture making use of linear discriminant analysis, and a new descriptor, the centred energy, based on the response of the coefficients of the wavelet-transform decomposition sub-bands, capable of improving the discrimination of spike and non-spike events.
Researchers have concentrated efforts over the past 20 years on applying the wavelet transform to the processing, filtering, pattern recognition and classification of biomedical signals, in particular electroencephalogram (EEG) signals containing events characteristic of epilepsy, the spikes. Several families of mother wavelets have been used, but there is no consensus about which mother wavelet is the most adequate for this purpose. The signals used have a wide range of events. The literature reports EEG signals sampled from 100 to 600 Hz with spikes ranging from 20 to 200 ms. In this study we used 98 wavelets. The EEG signals were sampled from 200 Hz up to 1 kHz. A neurologist scored a set of 494 spikes and a set of 1500 non-spike events. This study starts by evaluating the number of wavelet decompositions required for the detection of spikes, followed by a detailed analysis of the combined use of mother wavelets of the same family and among families. Next, the influence of descriptors and their combined use in spike detection is analyzed. The results of these studies indicate that it is more appropriate to use a set of mother wavelets, with many levels of decomposition and with various descriptors, instead of a single mother wavelet or a specific descriptor for the detection of spikes. The selection of this set of wavelets, decomposition levels and descriptors makes it possible to obtain high levels of detection according to the computational load desired or the computing platform available for implementation. This study reached performance levels between 0.9936 and 0.9999, depending on the computational load. Other contributions of this study refer to the analysis of border extension methods for spike detection, and the influence of the EEG signal sampling rate on classifier performance, each with significant results.
Also shown are: a new spike detection architecture making use of linear discriminant analysis; and a new descriptor, the centred energy, based on the response of the coefficients of the decomposition levels of the wavelet transform, able to improve the discrimination of spike and non-spike events.
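The sub-band descriptor idea can be sketched with PyWavelets. The function below computes plain relative sub-band energies as a feature vector; it is not the thesis's novel centred-energy descriptor, and the test signals are synthetic.

```python
import numpy as np
import pywt

def subband_energy_features(signal, wavelet="db4", level=4):
    """Relative energy of each DWT sub-band (approximation + detail levels),
    a simple descriptor vector for spike vs. non-spike classification."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([float(np.sum(c ** 2)) for c in coeffs])
    return energies / energies.sum()

# a smooth background vs. the same background with a sharp spike-like event
t = np.linspace(0, 1, 512)
background = np.sin(2 * np.pi * 5 * t)
spiky = background + np.exp(-((t - 0.5) ** 2) / (2 * 0.002 ** 2))
f_bg = subband_energy_features(background)
f_spiky = subband_energy_features(spiky)
```

A sharp transient shifts relative energy into the finest detail band, which is what makes such descriptors discriminative for spike detection.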
APA, Harvard, Vancouver, ISO, and other styles
26

Presti, G. "SIGNAL TRANSFORMATIONS FOR IMPROVING INFORMATION REPRESENTATION, FEATURE EXTRACTION AND SOURCE SEPARATION." Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/470676.

Full text
Abstract:
This thesis concerns new methods of signal representation in the time-frequency domain that render the information of interest as explicit dimensions of a new space. In particular, two transforms are introduced: the Bivariate Mixture Space and the Spectro-Temporal Structure-Field. The first transform aims to highlight the latent components of a bivariate signal based on the behaviour of each frequency component (for example, for source-separation purposes); the second aims to encapsulate information about the neighbourhood of a point in R^2 into a vector associated with the point itself, so as to describe some topological properties of the original function. In the digital audio signal processing domain, the Bivariate Mixture Space can be interpreted as a way of investigating the stereophonic space for source separation or information extraction, while the Spectro-Temporal Structure-Field can be used to inspect the spectro-temporal space (segregating percussive from pitched sounds or tracking frequency modulations). These transforms are also studied and tested against the state of the art in fields such as source separation, information retrieval and data visualization. In sound and music computing, these techniques aim to improve the time-frequency representation of the signal so as to make it possible to explore the spectrum in alternative spaces as well, such as the stereophonic panorama or a virtual dimension separating percussive from pitched aspects.
This thesis is about new methods of signal representation in the time-frequency domain, such that the required information is rendered as explicit dimensions of a new space. In particular, two transformations are presented: the Bivariate Mixture Space and the Spectro-Temporal Structure-Field. The former transform aims at highlighting latent components of a bivariate signal based on the behaviour of each frequency base (e.g. for source separation purposes), whereas the latter aims at folding neighbourhood information of each point of an R^2 function into a vector, so as to describe some topological properties of the function. In the audio signal processing domain, the Bivariate Mixture Space can be interpreted as a way to investigate the stereophonic space for source separation and Music Information Retrieval tasks, whereas the Spectro-Temporal Structure-Field can be used to inspect the spectro-temporal dimension (segregating pitched vs. percussive sounds or tracking pitch modulations). These transformations are investigated and tested against state-of-the-art techniques in fields such as source separation, information retrieval and data visualization. In the field of sound and music computing, these techniques aim at improving the frequency-domain representation of signals such that the exploration of the spectrum can also be achieved in alternative spaces, like the stereophonic panorama or a virtual percussive vs. pitched dimension.
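A common per-bin stereo analysis in this spirit, though not the author's Bivariate Mixture Space itself, is a panning index computed from the two channels' STFTs; everything below is an illustrative assumption.

```python
import numpy as np
from scipy.signal import stft

def panning_index(left, right, fs=8000, nperseg=256):
    """Per time-frequency-bin panning index in [-1, 1]:
    -1 = energy only in the left channel, +1 = only in the right."""
    _, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    pl, pr = np.abs(L) ** 2, np.abs(R) ** 2
    return (pr - pl) / (pl + pr + 1e-12)       # small term avoids 0/0

# a tone panned hard left: its bins should sit near -1
t = np.arange(8000) / 8000.0
tone = np.sin(2 * np.pi * 440 * t)
idx = panning_index(tone, np.zeros_like(tone))
```

Grouping bins by such a per-frequency behaviour is the basic mechanism by which a bivariate (stereo) representation supports source separation.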
APA, Harvard, Vancouver, ISO, and other styles
27

Venot, Alain. "Nouvelles méthodes de comparaison d'images numériques." Paris 6, 1986. http://www.theses.fr/1986PA066276.

Full text
Abstract:
This thesis presents new methods for the automatic comparison of dissimilar digital images and their applications to gamma-ray and X-ray medical imaging. Image comparison is approached in two successive steps: first registration of the images, then point-by-point comparison of the registered images.
APA, Harvard, Vancouver, ISO, and other styles
28

Hachouf, Fella. "Télédétection des contours linéaires." Rouen, 1988. http://www.theses.fr/1988ROUES027.

Full text
Abstract:
A study of the contour-detection problem as it arises, for example, in the automatic detection of road or river networks in images derived from aerial photographs. The basic principles of image processing are reviewed, followed by the various filtering methods. The proposed solution consists in the homological skeletonization of the gradient-magnitude image, which allows the structures to be recognized to be extracted by transforming the skeleton into a graph.
APA, Harvard, Vancouver, ISO, and other styles
29

Andrés, Ferrer Jesús. "Statistical approaches for natural language modelling and monotone statistical machine translation." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/7109.

Full text
Abstract:
This thesis gathers some contributions to statistical pattern recognition and, more specifically, to several natural language processing tasks. Several well-known statistical techniques are revisited: parameter estimation, loss function design and statistical modelling. These techniques are applied to several natural language processing tasks such as document classification, natural language modelling and statistical machine translation. Concerning parameter estimation, we address the smoothing problem by proposing a new constrained-domain maximum likelihood estimation technique (CDMLE). The CDMLE technique avoids the need for the smoothing step that causes the loss of the properties of the maximum likelihood estimator. This technique is applied to document classification with the naive Bayes classifier. Later, the CDMLE technique is extended to leaving-one-out maximum likelihood estimation and applied to language model smoothing. The results obtained on several natural language modelling tasks show an improvement in terms of perplexity. Concerning the loss function, the design of loss functions other than the 0-1 loss is studied carefully. The study focuses on those loss functions that, while retaining a decoding complexity similar to that of the 0-1 loss, provide greater flexibility. We analyse and present several loss functions on several machine translation tasks and with several translation models. We also analyse some translation rules that stand out for practical reasons, such as the direct translation rule, and we deepen the understanding of log-linear models, which are in fact particular cases of loss functions. Finally, several monotone translation models based on statistical modelling techniques are proposed.
Andrés Ferrer, J. (2010). Statistical approaches for natural language modelling and monotone statistical machine translation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7109
APA, Harvard, Vancouver, ISO, and other styles
30

Bueno, Felipe Roberto 1985. "Perceptrons híbridos lineares/morfológicos fuzzy com aplicações em classificação." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306338.

Full text
Abstract:
Advisor: Peter Sussner
Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Resumo: Morphological perceptrons (MPs) belong to the class of morphological neural networks (MNNs). These networks are a class of artificial neural networks that perform operations of mathematical morphology (MM) at every node, possibly followed by the application of an activation function. Recall that mathematical morphology was conceived as a theory for processing and analysing objects (images or signals) by means of other objects called structuring elements. Although initially developed for binary image processing and later extended to gray-scale image processing, mathematical morphology can be formulated more generally on complete lattices. Originally, morphological neural networks employed only certain operations of gray-scale mathematical morphology, namely gray-scale erosion and dilation according to the umbra approach. These operations can be expressed in terms of the additive-maximum and additive-minimum products, defined via vector and matrix operations in minimax algebra. Recently, operations of fuzzy mathematical morphology have emerged as aggregation functions of morphological neural networks; in this case we speak of fuzzy morphological neural networks. Hybrid fuzzy morphological/linear perceptrons were initially designed as a generalization of existing morphological/linear perceptrons; that is, fuzzy morphological/linear perceptrons can be defined by a convex combination of a fuzzy morphological part and a linear part. In this master's dissertation, we introduce a feedforward artificial neural network representing a hybrid fuzzy morphological/linear perceptron called F-DELP (fuzzy dilation/erosion/linear perceptron), which has not yet been considered in the neural network literature.
Following the ideas of Pessoa and Maragos, we apply an appropriate smoothing to overcome the non-differentiability of the fuzzy dilation and erosion operators used in the F-DELP model. Training is then carried out with a traditional error backpropagation algorithm. Finally, we apply the F-DELP model to some well-known classification problems and compare its results with those produced by other classifiers.
Abstract: Morphological perceptrons (MPs) belong to the class of morphological neural networks (MNNs). These MNNs represent a class of artificial neural networks that perform operations of mathematical morphology (MM) at every node, possibly followed by the application of an activation function. Recall that mathematical morphology was conceived as a theory for processing and analyzing objects (images or signals), by means of other objects called structuring elements. Although initially developed for binary image processing and later extended to gray-scale image processing, mathematical morphology can be conducted very generally in a complete lattice setting. Originally, morphological neural networks only employed certain operations of gray-scale mathematical morphology, namely gray-scale erosion and dilation according to the umbra approach. These operations can be expressed in terms of (additive maximum and additive minimum) matrix-vector products in minimax algebra. It was not until recently that operations of fuzzy mathematical morphology emerged as aggregation functions of morphological neural networks. In this case, we speak of fuzzy morphological neural networks. Hybrid fuzzy morphological/linear perceptrons were initially designed by generalizing existing morphological/linear perceptrons; in other words, fuzzy morphological/linear perceptrons can be defined by a convex combination of a fuzzy morphological part and a linear part. In this master's thesis, we introduce a feedforward artificial neural network representing a hybrid fuzzy morphological/linear perceptron called fuzzy dilation/erosion/linear perceptron (F-DELP), which has not yet been considered in the literature. Following Pessoa's and Maragos' ideas, we apply an appropriate smoothing to overcome the non-differentiability of the fuzzy dilation and erosion operators employed in the proposed F-DELP models. Then, training is achieved using a traditional backpropagation algorithm.
Finally, we apply the F-DELP model to some well-known classification problems and compare the results with the ones produced by other classifiers
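The convex morphological/linear combination at the heart of such hybrid perceptrons can be sketched numerically. The gray-scale max-plus dilation and min-plus erosion below stand in for the F-DELP model's fuzzy operators; this substitution is an assumption made purely for illustration.

```python
import numpy as np

def hybrid_unit(x, a, b, w, beta=0.5):
    """Hybrid morphological/linear unit: a convex combination (weight beta)
    of a morphological part and a linear part. The morphological part
    averages an additive-maximum (dilation) and an additive-minimum
    (erosion) product from minimax algebra."""
    dilation = np.max(x + a)        # additive-maximum product
    erosion = np.min(x + b)         # additive-minimum product
    morphological = 0.5 * (dilation + erosion)
    linear = w @ x
    return beta * morphological + (1.0 - beta) * linear

x = np.array([1.0, 2.0, 3.0])
out = hybrid_unit(x, a=np.zeros(3), b=np.zeros(3), w=np.ones(3), beta=0.5)
```

Setting beta to 0 or 1 recovers a purely linear or purely morphological node, which is the tunable trade-off these hybrid models exploit.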
Master's
Applied Mathematics
Master in Applied Mathematics
APA, Harvard, Vancouver, ISO, and other styles
31

Pešek, Milan. "Detekce logopedických vad v řeči." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218106.

Full text
Abstract:
The thesis deals with the design and implementation of software for detecting logopaedic speech defects. Because such defects need to be detected early, the software is aimed at child speakers. The introductory part covers the theory of speech production, its modelling for numerical processing, phonetics, logopaedics and the basic logopaedic speech defects. It also describes the methods used for feature extraction, for segmenting words into speech sounds and for classifying the features into correct and incorrect pronunciation classes. The next part of the thesis presents the results of testing the selected methods. For recognition of logopaedic speech defects, the MFCC and PLP features are extracted. Words are segmented into speech sounds on the basis of the Differential Function method. The extracted features of a sound are then classified into a correct or an incorrect pronunciation class with one of the tested pattern recognition methods: k-NN, SVM, ANN and GMM.
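The final classification stage, feature vectors in, pronunciation class out, can be sketched with a k-NN classifier. The feature vectors below are synthetic stand-ins for MFCC/PLP features, not real speech data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for per-sound feature vectors (e.g. 13 MFCCs):
# class 1 = correct pronunciation, class 0 = defective (assumed clusters).
rng = np.random.default_rng(2)
correct = rng.standard_normal((40, 13)) + 2.0
defective = rng.standard_normal((40, 13)) - 2.0
X = np.vstack([correct, defective])
y = np.array([1] * 40 + [0] * 40)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
acc = knn.score(X, y)
```

The same fit/predict interface accommodates the other tested classifiers (SVM, ANN, GMM-based), which makes comparing them straightforward.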
APA, Harvard, Vancouver, ISO, and other styles
32

Hung, Tsung-yung, and 洪宗湧. "Novel Local Pattern Descriptors via Dynamic Linear Decision Function for Face Recognition." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/14021237019386392236.

Full text
Abstract:
PhD
National Central University
Department of Computer Science and Information Engineering
102
Recently, research in face recognition has focused on developing face representations designed to generate invariant features robust to facial illumination and expression. Motivated by a simple but powerful local pattern descriptor, the Local Binary Pattern (LBP), two novel local pattern descriptors are proposed that extend the LBP to vector-based and directional-based local pattern descriptors via a dynamic linear decision function for face recognition. The first descriptor, the Local Vector Pattern (LVP), provides a novel vector representation and a coding scheme, the Comparative Space Transform (CST), which are used to generate more detailed discriminative local features than other methods. The second proposed descriptor, the Local Directional Classifier Pattern (LDCP), computes eight edge response values from extra neighborhood pixels, and these values are used to select the upper- and lower-bound indices for generating robust complete binary codes. These methods are implemented and compared with existing LBP face recognition systems and other state-of-the-art local pattern descriptors on the FERET, CAS-PEAL, CMU-PIE, Extended Yale B and LFW databases. Experimental results demonstrate that the proposed methods outperform the comparative methods with grayscale images and Gabor features as inputs.
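The base operator that both proposed descriptors extend, the 8-neighbour LBP, can be sketched as follows. This is a minimal version of the standard LBP, not the LVP or LDCP descriptors themselves.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern: threshold the 8 neighbours of
    each interior pixel at the centre value and pack the bits into a code."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes
```

In a full face recognition system, histograms of these codes over image blocks form the feature vector that is then fed to a classifier.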
APA, Harvard, Vancouver, ISO, and other styles
33

LI, CHI-LIN, and 李季霖. "Linear Solvation Energy Relationship model and pattern recognition studies for gold nanoparticle vapor sensor array." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/65305253637216134468.

Full text
Abstract:
Master's
Fu Jen Catholic University
Department of Chemistry
94
Gold nanoclusters capped with four different functional thiolates were synthesized via a two-phase approach. The diameters of the nanoclusters determined by TEM range from 2 to 6 nm. The sensing properties of the monolayer-protected gold nanoclusters (MPCs) were probed on both QCM and micro-interdigital electrodes. Vapor-sensing selectivity dominated by the shell ligand structure of the MPC was demonstrated. The QCM registers the mass change during the sorption of organic vapor into the MPCs, from which the partition coefficient, K, can be estimated. We carried out further calculations to establish a linear solvation energy relationship (LSER) model for the MPCs. The solvation parameters reveal the chemical forces behind the selective vapor-sorption behavior of the MPCs: MOP-Au possesses significant dipole-dipole attraction (s) as well as H-bond acidity (a). Among the four MPC materials, only MBT-Au shows effective H-bond basicity (b). All MPCs rely on Van der Waals forces (l), but C8SH-Au has a larger Van der Waals contribution than the other three MPCs. Furthermore, we found that the vapor response patterns of the MPC-coated QCM array differ somewhat from those of the same MPCs coated on chemiresistors, because the effectiveness of converting the sorbed mass into a core-to-core distance change differs from one MPC to another. In addition, we performed statistical analysis using the Mahalanobis distance and Fisher's method to determine the recognition performance of these sensor arrays. We found that the QCM array has a better recognition rate (75.9%) than the chemiresistor array (60.7%). Finally, if the two arrays are joined into one array of eight sensors, the recognition rate increases to 86.7%, while still using only four sensing materials.
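Fitting an LSER model of the form log K = c + s·S + a·A + b·B + l·L reduces to ordinary least squares. The descriptor values and log K data below are invented for illustration, not measurements from this work.

```python
import numpy as np

# Columns: S (dipolarity), A (H-bond acidity), B (H-bond basicity),
# L (dispersion / Van der Waals term). All values are hypothetical.
descriptors = np.array([
    [0.60, 0.00, 0.45, 2.79],
    [0.90, 0.30, 0.40, 3.60],
    [0.25, 0.00, 0.10, 1.70],
    [1.10, 0.55, 0.45, 4.50],
    [0.42, 0.37, 0.48, 2.60],
    [0.75, 0.00, 0.64, 3.90],
])
logK = np.array([2.1, 3.0, 1.2, 3.9, 2.4, 3.1])

# Design matrix with an intercept column for the constant c.
X = np.hstack([np.ones((len(descriptors), 1)), descriptors])
coef, residuals, rank, _ = np.linalg.lstsq(X, logK, rcond=None)
c, s, a, b, l = coef          # fitted LSER coefficients
pred = X @ coef
```

The relative magnitudes of the fitted s, a, b and l terms are what reveal which chemical interaction dominates each sensing material, as described above.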
APA, Harvard, Vancouver, ISO, and other styles
34

Smit, Willem Jacobus. "Sparse coding for speech recognition." Thesis, 2008. http://upetd.up.ac.za/thesis/available/etd-11112008-151309/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

He, Kun. "Automated Measurement of Neuromuscular Jitter Based on EMG Signal Decomposition." Thesis, 2007. http://hdl.handle.net/10012/3332.

Full text
Abstract:
The quantitative analysis of decomposed electromyographic (EMG) signals reveals information for diagnosing and characterizing neuromuscular disorders. Neuromuscular jitter is an important measure that reflects the stability of the operation of a neuromuscular junction. It is conventionally measured using single fiber electromyographic (SFEMG) techniques. SFEMG techniques require substantial physician dexterity and subject cooperation. Furthermore, SFEMG needles are expensive, and their re-use increases the risk of possible transmission of infectious agents. Using disposable concentric needle (CN) electrodes and automating the measurement of neuromuscular jitter would greatly facilitate the study of neuromuscular disorders. An improved automated jitter measurement system based on the decomposition of CN-detected EMG signals is developed and evaluated in this thesis. Neuromuscular jitter is defined as the variability of the time intervals between two muscle fiber potentials (MFPs). Given the candidate motor unit potentials (MUPs) of a decomposed EMG signal, represented by a motor unit potential train (MUPT), the automated jitter measurement system designed in this thesis can be summarized as a three-step procedure: 1) identify isolated motor unit potentials in a MUPT, 2) detect the significant MFPs of each isolated MUP, 3) track significant MFPs generated by the same muscle fiber across all isolated MUPs, select typical MFP pairs, and calculate jitter. In step one, a minimal-spanning-tree-based two-phase clustering algorithm was developed for identifying isolated MUPs in a train. For the second step, a pattern recognition system was designed to classify detected MFP peaks. In the third step, the neuromuscular jitter is calculated from the tracked and selected MFP pairs. These three steps were simulated and evaluated independently using synthetic EMG signals, and the whole system was preliminarily implemented and evaluated using a small simulated database.
Compared to previous work in this area, the algorithms in this thesis showed better performance and great robustness across a variety of EMG signals, so that they can be applied widely to similar scenarios. The whole system developed in this thesis can be implemented in a large EMG signal decomposition system and validated using real data.
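The jitter statistic computed in the final step is conventionally reported as the mean consecutive difference (MCD) of the inter-potential intervals; a minimal sketch of that calculation, with hypothetical interval values:

```python
import numpy as np

def jitter_mcd(intervals_us):
    """Neuromuscular jitter as the mean consecutive difference (MCD)
    of the inter-potential intervals (IPIs), in microseconds."""
    ipi = np.asarray(intervals_us, dtype=float)
    return float(np.mean(np.abs(np.diff(ipi))))

# IPIs between two tracked muscle-fibre potentials over successive discharges
ipis = [540.0, 562.0, 551.0, 570.0, 558.0]
mcd = jitter_mcd(ipis)
```

The MCD is preferred over the plain standard deviation because it is insensitive to slow trends in the interval sequence.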
APA, Harvard, Vancouver, ISO, and other styles
36

Chou, Chia-Te, and 周家德. "Iris recognition methods for handling linear and nonlinear deformation of iris patterns." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/42428492076760861966.

Full text
Abstract:
PhD
National Chi Nan University
Department of Computer Science and Information Engineering
98
In this dissertation, we study the influence of both the non-orthogonal imaging condition and the nonlinear deformation of iris patterns on the accuracy of iris recognition, and we propose effective methods to deal with these two topics. First, we propose a non-orthogonal-view iris recognition system comprising a new iris imaging module, an iris segmentation module, an iris feature extraction module and a classification module. A dual-CCD camera is developed to capture four-spectral (red, green, blue and near-infrared) iris images, which contain useful information for simplifying the iris segmentation task. An intelligent RANSAC iris segmentation method is proposed to robustly detect iris boundaries in a four-spectral iris image. In order to match iris images acquired at different off-axis angles, we propose a circle rectification method to reduce the off-axis iris distortion. The rectification parameters are estimated using the detected elliptical pupillary boundary. Furthermore, we propose a novel iris descriptor which characterizes an iris pattern with multi-scale step/ridge edge-type maps. The edge-type maps are extracted with derivative-of-Gaussian and Laplacian-of-Gaussian filters. Iris pattern classification is accomplished by edge-type matching, which can be understood intuitively through the concept of classifier ensembles. Experimental results show that the equal error rate of our approach is only 0.04% when recognizing iris images acquired at different off-axis angles within ±30°. Additionally, a nonlinear iris normalization method is proposed which can handle iris deformation due to miosis/mydriasis. To prove the feasibility of our method, another iris imaging system was constructed, including a computer-controllable current source driving a blue LED array, used to capture iris deformation images at different light-intensity levels.
Experimental result shows that our proposed method outperforms the traditional linear normalization method. The equal error rates of our and the traditional linear normalization method are 0.95% and 2.76%, respectively.
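The abstract above contrasts a proposed nonlinear normalization against the traditional linear one. As background, the linear baseline (a Daugman-style rubber-sheet mapping) can be sketched as below; the function name, output resolution, and the simplifying assumption of concentric circular pupil/iris boundaries are illustrative, not taken from the dissertation.

```python
import numpy as np

def linear_normalize(iris_img, pupil_c, pupil_r, iris_r, out_h=32, out_w=256):
    """Linearly map the annular iris region to a fixed-size rectangle by
    sampling radially between the pupillary and limbic boundaries
    (rubber-sheet model). Assumes concentric circular boundaries."""
    cx, cy = pupil_c
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0.0, 1.0, out_h)  # 0 = pupil edge, 1 = iris edge
    out = np.zeros((out_h, out_w), dtype=iris_img.dtype)
    for i, r in enumerate(radii):
        rho = pupil_r + r * (iris_r - pupil_r)  # linear radial interpolation
        xs = np.clip((cx + rho * np.cos(thetas)).astype(int), 0, iris_img.shape[1] - 1)
        ys = np.clip((cy + rho * np.sin(thetas)).astype(int), 0, iris_img.shape[0] - 1)
        out[i] = iris_img[ys, xs]
    return out
```

A nonlinear normalization, as studied in the thesis, would replace the linear radial interpolation with a deformation model of how iris tissue actually compresses under miosis/mydriasis.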
APA, Harvard, Vancouver, ISO, and other styles
37

Kamadi, V. N. Surendra. "Novel Compression Fracture Specimens And Analysis of Photoelastic Isotropic Points." Thesis, 2015. http://etd.iisc.ernet.in/handle/2005/2709.

Full text
Abstract:
Compression fracture specimens are ideally suited for miniaturization down to tens of microns. Fracture testing of thermal barrier coatings, ceramics and glasses is also best accomplished under compression or indentation. A compression fracture specimen of finite size with a constant form factor was not available in the literature. A finite-sized specimen, the edge-cracked semicircular disk (ECSD), is designed which has the property of a constant form factor. The novel ECSD specimen is explored further using the weight function concept. This thesis, therefore, is mainly concerned with the design, development and geometric optimization of compression fracture specimens vis-à-vis their characterization in terms of form factors, weight functions and isotropic points in the uncracked geometry. Inspired by the Brazilian disk geometry, a novel compression fracture specimen is designed in the form of a semicircular disk with an edge crack which opens up due to the bending moment caused by the compressive load applied along its straight edge. This new design evolved from a set of photoelastic experiments conducted on the Brazilian disk and its two extreme cases. Surprisingly, the normalized mode-I stress intensity factor of the semicircular specimen, loaded in a particular Hertzian manner, is found to be constant for a wide range of relative crack lengths. This property of a constant form factor leads to the development of the weight function for the ECSD for deeper analysis of the specimen. The weight function of a cracked geometry does not depend on the loading configuration, and it relates the stress intensity factor to the stress distribution in the corresponding uncracked geometry through a weighted integral. The weight function for the disk specimen is synthesized in two different ways: using the conventional approach, which requires the crack opening displacement, and using the newly developed dual form factor method.
Since the stress distribution in the uncracked specimen is required in order to use the weight function concept, an analytical solution is attempted using linear elasticity theory. Because a closed-form solution for stresses in the uncracked semicircular disk is seldom possible with the available techniques, a new semi-analytical method called partial boundary collocation (PBC) is developed, which may be used for solving any 2-D elasticity problem involving a semi-geometry. In the new method, part of the boundary conditions is identically satisfied and the remaining conditions are satisfied at discrete boundary points. The classical stress concentration factor for a semi-infinite plate with a semicircular edge notch, re-derived using PBC, is found to be accurate to the eighth decimal. To enhance the form factor in order to test high-toughness materials, an edge-cracked semicircular ring (ECSR) specimen is designed in which the bending moment at the crack tip is increased significantly due to the ring geometry. The ECSR is analyzed using the finite element method and the corresponding uncracked problem is analyzed by PBC. A constant form factor is found to be possible for the ring specimen with a tiny notch. In order to avoid a varying semi-Hertzian angle in practice and thereby ensure consistent loading conditions, the designs are further modified by chopping at the loading zones and analyzed. Photoelastic isotropic points (IPs), which are a special case of the zeroth-order fringe (ZOF), are often found in uncracked and cracked specimens. An analytical technique based on the Flamant solution is developed for solving any problem involving a circular domain loaded at its boundary. The formation of IPs in a circular disk is studied. The coefficients of static friction between the surfaces of the disk and the loading fixtures, in photoelastic experiments with three-point and four-point loadings, are explored analytically to conform with experimental results.
The disk under multiple radial loads uniformly spaced on its periphery is found to give rise to one isolated IP at the center. Splitting of this IP into a number of IPs can be observed when the symmetry of the normal loading is perturbed. Tangential loading is introduced along with normal loading to capture the effect of the load composition on the formation of IPs. Bernoulli's lemniscate is found to fit the fringe-order topology local to multiple IPs. Isotropic points, along with other low-fringe-order zones including the ZOF, are ideal locations for material removal for weight reduction. Making a small hole in the prospective crack path at the IP location in the uncracked geometry might provide dual benefits: (1) form factor enhancement and (2) crack arrest. Thus, this thesis describes experimental, theoretical and computational investigations for the design, development and calibration of novel compact compression fracture specimens.
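The weight-function relation described above (a stress intensity factor obtained as a weighted integral of the stress in the uncracked geometry) can be illustrated on a textbook case. The sketch below uses the known weight function for a centre (Griffith) crack in an infinite plate, not the ECSD/ECSR weight functions developed in the thesis; the function name and quadrature scheme are illustrative. Under uniform remote tension it should recover the classical result K_I = σ√(πa).

```python
import numpy as np

def sif_weight_function(sigma, a, n=2000):
    """Mode-I stress intensity factor at the right tip (x = +a) of a
    centre crack of half-length a, via the weight-function integral
        K_I = ∫_{-a}^{a} h(x, a) σ(x) dx,
    with h(x, a) = sqrt((a + x) / (a - x)) / sqrt(pi * a) for symmetric
    loading. The substitution x = a sin(t) removes the square-root
    singularity at x = a: h(x) dx = a (1 + sin t) dt / sqrt(pi a)."""
    t = np.linspace(-np.pi / 2, np.pi / 2, n)
    x = a * np.sin(t)
    f = sigma(x) * a * (1.0 + np.sin(t)) / np.sqrt(np.pi * a)
    dt = t[1] - t[0]
    return float(np.sum((f[:-1] + f[1:]) * 0.5) * dt)  # trapezoidal rule

# Uniform pressure sigma0 on the crack faces (equivalent to remote tension)
sigma0, a = 100.0, 0.01
K = sif_weight_function(lambda x: sigma0 * np.ones_like(x), a)
```

The same machinery applies to any symmetric crack-face stress distribution σ(x), which is the point of the weight-function approach: one cracked-geometry solution serves all load cases.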
APA, Harvard, Vancouver, ISO, and other styles
38

Supriya, Supriya. "Brain Signal Analysis and Classification by Developing New Complex Network Techniques." Thesis, 2020. https://vuir.vu.edu.au/40551/.

Full text
Abstract:
Brain signal analysis has a crucial role in the investigation of neuronal activity for the diagnosis of brain diseases and disorders. The electroencephalogram (EEG) is the most efficient biomarker for the analysis of brain signals; it assists in the diagnosis and medication of brain disorders and also plays an essential role in all neurosurgery related to the brain. EEG findings illustrate the precise condition and clinical context of brain dysfunctions, and EEG is of undisputed importance in the detection of epilepsy, sleep disorders and dysfunctions related to alcohol. Clinicians visually study the EEG recording to determine the manifestation of abnormalities in the brain. Visual EEG assessment is tiresome, fallible, and also high-priced. In this dissertation, a number of frameworks have been developed for the analysis and classification of EEG signals, addressing three different domains: epilepsy, sleep staging, and alcohol use disorder. Epilepsy is a non-contagious chronic disease of the brain that affects around 65 million people worldwide. The sudden-onset nature of epileptic attacks leaves sufferers vulnerable to injuries. It is also challenging for clinical staff to detect epileptic-seizure activity early enough to determine the semiology associated with seizure onset. For that reason, automated techniques that can accurately detect epilepsy from EEG are of great importance to epileptic patients, and especially to those who are resistant to therapies and medications. In this dissertation, four different techniques (named Weighted Visibility Network, Weighted Horizontal Visibility Network, Weighted Complex Network, and New Weighted Complex Network) have been developed for the automated identification of epileptic activity from EEG signals.
Most of the developed schemes attained 100% classification outcomes in their experimental evaluation for the identification of seizure activity from non-seizure activity. A sleep disorder can increase the risk of seizure incidence or severity, impairment in cognitive tasks, mood deviation, diminution in the functionality of the immune system, and other brain anomalies such as insomnia, sleep apnoea, etc. Hence, sleep staging is essential to discriminate among distinct sleep stages for the diagnosis of sleep and its disorders. EEG provides vital and inimitable information regarding the sleeping brain. The study of EEG has documented deformities in sleep patterns. This research has developed an innovative graph-theory-based framework named the weighted visibility network for sleep staging from EEG signals. The framework developed in this thesis achieves 97.93% overall classification accuracy in categorizing distinct sleep states. Alcoholism causes memory issues as well as motor skill defects by affecting different portions of the brain. Excessive use of alcohol can cause sudden cardiac death and cardiomyopathy. Alcohol use disorder also leads to respiratory infections, vision impairment, liver damage, cancer, etc. Research demonstrates the use of EEG for diagnosing patients at high risk of alcohol-related developmental impediments. In this Ph.D. project, I developed a weighted graph-based technique that analyses EEG to distinguish between alcoholic and non-alcoholic subjects. The promising classification outcome demonstrates the effectiveness of the proposed technique.
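The weighted visibility networks described above map a time series (an EEG epoch) to a graph whose structural properties feed a classifier. A minimal sketch of a weighted natural visibility graph is given below; the absolute-slope edge weight is one common choice in the weighted-visibility literature and not necessarily the weighting used in the thesis, and all names are illustrative.

```python
import numpy as np

def weighted_visibility_graph(y):
    """Build a weighted natural visibility graph from a 1-D signal.
    Nodes are sample indices; i and j are linked when the straight line
    between (i, y[i]) and (j, y[j]) passes strictly above every sample
    in between. The edge weight here is the absolute slope of that line.
    Returns {(i, j): weight} with i < j; O(n^2) in the epoch length."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    edges = {}
    for i in range(n - 1):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            # Height of the i-j sight line at each intermediate index k
            line = y[j] + (y[i] - y[j]) * (j - ks) / (j - i)
            if ks.size == 0 or np.all(y[ks] < line):
                edges[(i, j)] = abs(y[j] - y[i]) / (j - i)
    return edges
```

Graph descriptors of this network (degree distribution, clustering, modularity, etc.) then serve as features for discriminating seizure/non-seizure, sleep stages, or alcoholic/non-alcoholic EEG.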
APA, Harvard, Vancouver, ISO, and other styles
39

Anil, Prasad M. N. "Segmentation Strategies for Scene Word Images." Thesis, 2014. http://hdl.handle.net/2005/2889.

Full text
APA, Harvard, Vancouver, ISO, and other styles
