Theses on the topic "Hybrid classifier"

To see other types of publications on this topic, follow this link: Hybrid classifier.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles.

Consult the top 50 theses for your research on the topic "Hybrid classifier".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Vishnampettai, Sridhar Aadhithya. « A Hybrid Classifier Committee Approach for Microarray Sample Classification ». University of Akron / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=akron1312341058.

Full text
2

Nair, Sujit S. « Coarse Radio Signal Classifier on a Hybrid FPGA/DSP/GPP Platform ». Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/76934.

Full text
Abstract:
The Virginia Tech Universal Classifier Synchronizer (UCS) system can enable a cognitive receiver to detect, classify and extract all the parameters needed from a received signal for physical layer demodulation and configure a cognitive radio accordingly. Currently, UCS can process analog amplitude modulation (AM) and frequency modulation (FM) and digital narrow band M-PSK, M-QAM and wideband signal orthogonal frequency division multiplexing (OFDM). A fully developed prototype of UCS system was designed and implemented in our laboratory using GNU radio software platform and Universal Software Radio Peripheral (USRP) radio platform. That system introduces a lot of latency issues because of the limited USB data transfer speeds between the USRP and the host computer. Also, there are inherent latencies and timing uncertainties in the General Purpose Processor (GPP) software itself. Solving the timing and latency problems requires running key parts of the software-defined radio (SDR) code on a Field Programmable Gate Array (FPGA)/Digital Signal Processor (DSP)/GPP based hybrid platform. Our objective is to port the entire UCS system on the Lyrtech SFF SDR platform which is a hybrid DSP/FPGA/GPP platform. Since the FPGA allows parallel processing on a wideband signal, its computing speed is substantially faster than GPPs and most DSPs, which sequentially process signals. In addition, the Lyrtech Small Form Factor (SFF)-SDR development platform integrates the FPGA and the RF module on one platform; this further reduces the latency in moving signals from RF front end to the computing component. Also for UCS to be commercially viable, we need to port it to a more portable platform which can be transitioned to a handset radio in the future. This thesis is a proof of concept implementation of the coarse classifier which is the first step of classification. Both fixed point and floating point implementations are developed and no compiler specific libraries or vendor specific libraries are used. This makes transitioning the design to any other hardware like GPPs and DSPs of other vendors possible without having to change the basic framework and design.
Master of Science
3

Zimit, Sani Ibrahim. « Hybrid approach to interpretable multiple classifier system for intelligent clinical decision support ». Thesis, University of Reading, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.631699.

Full text
Abstract:
Data-driven decision support approaches have been increasingly employed in recent years in order to unveil useful diagnostic and prognostic patterns from data accumulated in clinical repositories. Given the diverse amount of evidence generated through everyday clinical practice and the exponential growth in the number of parameters accumulated in the data, the capability of finding purposeful task-oriented patterns from patient records is crucial for providing effective healthcare delivery. The application of classification decision support tools in clinical settings has brought about formidable challenges that require a robust system. Knowledge Discovery in Databases (KDD) provides a viable solution to decipher implicit knowledge in a given context. KDD classification techniques create models of the accumulated data according to induction algorithms. Despite the availability of numerous classification techniques, the accuracy and interpretability of the decision model are fundamental in the decision processes. Multiple Classifier Systems (MCS) based on the aggregation of individual classifiers usually achieve better decision accuracy. The downside of such models is their black-box nature. Description of the clinical concepts that influence each decision outcome is fundamental in clinical settings. To overcome this deficiency, the use of artificial data is one technique advocated by researchers to extract an interpretable classifier that mimics the MCS. In the clinical context, practical utilisation of the mimetic procedure depends on the appropriateness of the data generation method to reflect the complexities of the evidence domain. A well-defined intelligent data generation method is required to unveil associations and dependency relationships between the various entities of the evidence domain. This thesis has devised an Interpretable Multiple Classifier system (IMC) using the KDD process as the underlying platform. The approach integrates the flexibility of MCS, the robustness of Bayesian networks (BN) and the concept of a mimetic classifier to build an interpretable classification system. The BN provides a robust and clinically accepted formalism to generate synthetic data based on encoded joint relationships of the evidence space. The practical applicability of the IMC was evaluated against the conventional approach for inducing an interpretable classifier on nine clinical domain problems. Results of statistical tests substantiated that the IMC model outperforms the direct approach in terms of decision accuracy.
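To make the mimetic idea concrete, the sketch below (an assumption-laden illustration, not the IMC of the thesis) trains a random-forest ensemble, draws synthetic samples from a simple Gaussian fit that merely stands in for the Bayesian-network generator, labels them with the ensemble, and fits a shallow decision tree that mimics the ensemble while remaining readable.

# Illustrative sketch of the mimetic idea behind an interpretable MCS:
# an ensemble is trained, synthetic samples are drawn (here from a plain
# Gaussian fit, standing in for the Bayesian-network generator of the thesis),
# labelled by the ensemble, and a shallow decision tree mimics it.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mcs = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Stand-in data generator: a multivariate Gaussian fitted to the training data.
rng = np.random.default_rng(0)
mean, cov = X_tr.mean(axis=0), np.cov(X_tr, rowvar=False)
X_syn = rng.multivariate_normal(mean, cov, size=5000)
y_syn = mcs.predict(X_syn)                 # the ensemble labels the synthetic data

mimic = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_syn, y_syn)
print("MCS accuracy  :", accuracy_score(y_te, mcs.predict(X_te)))
print("mimic accuracy:", accuracy_score(y_te, mimic.predict(X_te)))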
4

Lou, Wan Chan. « A hybrid model of tree classifier and neural network for university admission recommender system ». Thesis, University of Macau, 2008. http://umaclib3.umac.mo/record=b1783609.

Full text
5

Toubakh, Houari. « Automated on-line early fault diagnosis of wind turbines based on hybrid dynamic classifier ». Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10100/document.

Full text
Abstract:
This thesis addresses the problem of automatic detection and isolation of drift-like faults in wind turbines (WTs). The main aim is to develop a generic on-line adaptive machine learning and data mining scheme that integrates drift detection and isolation mechanisms in order to achieve simple and multiple drift-like fault diagnosis in WTs, in particular in the pitch system and the power converter. The proposed scheme is based on the decomposition of the wind turbine into several components. Then, a classifier is designed and used to achieve the diagnosis of faults impacting each component. The goal of this decomposition into components is to facilitate the isolation of faults and to increase the robustness of the scheme, in the sense that when the classifier related to one component fails, the classifiers for the other components continue to achieve the diagnosis of faults in their corresponding components. This scheme also has the advantage of taking into account the WT hybrid dynamics: some WT components (such as the pitch system and the power converter) exhibit both discrete and continuous dynamic behaviours, and in each discrete mode, or configuration, different continuous dynamics are defined.
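As a rough illustration of the component-wise idea, the following sketch keeps one independent drift detector per turbine component, so that a failing detector does not disable diagnosis for the others; the component names, residual scores and thresholds are placeholders, not the hybrid dynamic classifier developed in the thesis.

# Minimal sketch of component-wise drift monitoring for fault isolation:
# one detector per component, so a failure of one detector does not disable
# diagnosis of the others. Thresholds and residual scores are assumptions.
import numpy as np

class DriftDetector:
    def __init__(self, window=50, threshold=3.0):
        self.window, self.threshold = window, threshold
        self.baseline_mean, self.baseline_std = 0.0, 1.0
        self.buffer = []

    def fit_baseline(self, healthy_scores):
        self.baseline_mean = float(np.mean(healthy_scores))
        self.baseline_std = float(np.std(healthy_scores) + 1e-9)

    def update(self, score):
        self.buffer.append(score)
        self.buffer = self.buffer[-self.window:]
        z = (np.mean(self.buffer) - self.baseline_mean) / self.baseline_std
        return z > self.threshold          # True -> drift-like fault suspected

# One detector per wind-turbine component (hypothetical component names).
detectors = {"pitch_system": DriftDetector(), "power_converter": DriftDetector()}
for det in detectors.values():
    det.fit_baseline(np.random.normal(0.0, 1.0, 500))     # healthy residuals

sample = {"pitch_system": 0.2, "power_converter": 4.5}    # new residual scores
faulty = [name for name, det in detectors.items() if det.update(sample[name])]
print("components flagged:", faulty)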
6

Rasheed, Sarbast. « A Multiclassifier Approach to Motor Unit Potential Classification for EMG Signal Decomposition ». Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/934.

Full text
Abstract:
EMG signal decomposition is the process of resolving a composite EMG signal into its constituent motor unit potential trains (classes) and it can be configured as a classification problem. An EMG signal detected by the tip of an inserted needle electrode is the superposition of the individual electrical contributions of the different motor units that are active, during a muscle contraction, and background interference.
This thesis addresses the process of EMG signal decomposition by developing an interactive classification system, which uses multiple classifier fusion techniques in order to achieve improved classification performance. The developed system combines heterogeneous sets of base classifier ensembles of different kinds and employs either a one level classifier fusion scheme or a hybrid classifier fusion approach.
The hybrid classifier fusion approach is applied as a two-stage combination process that uses a new aggregator module consisting of two combiners: the first at the abstract level of classifier fusion and the other at the measurement level, the two being used in a complementary manner. Both combiners may be data independent, or the first may be data independent and the second data dependent. For the purpose of experimentation, we used the majority voting scheme as the first combiner, while the second combiner was either one of the fixed combination rules, behaving as a data-independent combiner, or the fuzzy integral with the lambda-fuzzy measure, acting as an implicit data-dependent combiner.
Once the set of motor unit potential trains are generated by the classifier fusion system, the firing pattern consistency statistics for each train are calculated to detect classification errors in an adaptive fashion. This firing pattern analysis allows the algorithm to modify the threshold of assertion required for assignment of a motor unit potential classification individually for each train based on an expectation of erroneous assignments.
The classifier ensembles consist of a set of different versions of the Certainty classifier, a set of classifiers based on the nearest neighbour decision rule: the fuzzy k-NN and the adaptive fuzzy k-NN classifiers, and a set of classifiers that use a correlation measure as an estimation of the degree of similarity between a pattern and a class template: the matched template filter classifiers and its adaptive counterpart. The base classifiers, besides being of different kinds, utilize different types of features and their performances were investigated using both real and simulated EMG signals of different complexities. The feature sets extracted include time-domain data, first- and second-order discrete derivative data, and wavelet-domain data.
Following the so-called overproduce and choose strategy for classifier ensemble combination, the developed system allows the construction of a large set of candidate base classifiers and then chooses, from the base classifier pool, subsets of a specified number of classifiers to form candidate classifier ensembles. The system then selects the classifier ensemble having the maximum degree of agreement by exploiting a diversity measure for designing classifier teams. The kappa statistic is used as the diversity measure to estimate the level of agreement between the base classifier outputs, i.e., to measure the degree of decision similarity between the base classifiers. This mechanism of choosing the team's classifiers based on assessing the classifier agreement throughout all the trains and the unassigned category is applied during the one-level classifier fusion scheme and for the first combiner in the hybrid classifier fusion approach. For the second combiner in the hybrid classifier fusion approach, we also choose team classifiers based on kappa statistics, but by assessing the classifiers' agreement only across the unassigned category and choosing those base classifiers having the minimum agreement.
The performance of the developed classifier fusion system, in both of its variants, i.e., the one-level scheme and the hybrid approach, was evaluated using synthetic simulated signals of known properties as well as real signals, and then compared with the performance of the constituent base classifiers. Across the EMG signal data sets used, the hybrid approach had better average classification performance overall, especially in terms of reducing the number of classification errors.
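The sketch below illustrates the flavour of such a two-stage fusion on generic data: an abstract-level majority vote acts as the first combiner, and a measurement-level mean of posteriors is consulted when the vote is not decisive. The base classifiers and data set are stand-ins, not the EMG-specific Certainty, fuzzy k-NN and matched-template classifiers used in the thesis.

# Sketch of a two-stage hybrid fusion in the spirit described above:
# a majority vote at the abstract level, and a mean-of-posteriors rule at the
# measurement level used when the vote is not decisive. Generic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
base = [KNeighborsClassifier(5), LogisticRegression(max_iter=1000), GaussianNB()]
base = [clf.fit(X_tr, y_tr) for clf in base]

votes = np.stack([clf.predict(X_te) for clf in base])          # abstract level
probas = np.stack([clf.predict_proba(X_te) for clf in base])   # measurement level

preds = []
for i in range(X_te.shape[0]):
    counts = np.bincount(votes[:, i], minlength=3)
    if counts.max() >= 2:                      # clear majority -> first combiner
        preds.append(counts.argmax())
    else:                                      # no majority -> second combiner
        preds.append(probas[:, i, :].mean(axis=0).argmax())
print("fused accuracy:", np.mean(np.array(preds) == y_te))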
7

McCool, Christopher Steven. « Hybrid 2D and 3D face verification ». Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16436/1/Christopher_McCool_Thesis.pdf.

Full text
Abstract:
Face verification is a challenging pattern recognition problem. The face is a biometric that, we as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared as the distances describe at best the covariance of the data, or the second order statistics (for instance Mahalanobis based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable and so this research examines two methods for overcoming this data limitation: 1. the use of holistic difference vectors of the face, and 2. by dividing the 3D face into Free-Parts. The permutations of the holistic difference vectors is formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from both these techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques classifier score fusion is then examined. This thesis also examines methods for performing classifier fusion score fusion. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data for instance the 2D face data or by capturing the face data with different sensors (multimodal fusion) for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts) and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms a consistent framework for fusion was developed. 
The consistent fusion framework, developed from the multi-algorithm and multimodal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
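A minimal sketch of the kind of multi-modal score fusion discussed above follows: 2D and 3D match scores (synthetic here) are z-normalised against impostor scores, fused with a weighted sum, and the decision threshold is placed at a target false acceptance rate; the weights and score distributions are assumptions, not values from the thesis.

# Minimal sketch of multi-modal score fusion: z-normalise 2D and 3D match
# scores, fuse them with a weighted sum, and place the threshold at a target
# false acceptance rate (FAR). Scores are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
genuine_2d, impostor_2d = rng.normal(2.0, 1.0, 1000), rng.normal(0.0, 1.0, 1000)
genuine_3d, impostor_3d = rng.normal(2.5, 1.0, 1000), rng.normal(0.0, 1.0, 1000)

def znorm(scores, ref):
    return (scores - ref.mean()) / ref.std()

w2d, w3d = 0.4, 0.6                                   # assumed fusion weights
fused_gen = w2d * znorm(genuine_2d, impostor_2d) + w3d * znorm(genuine_3d, impostor_3d)
fused_imp = w2d * znorm(impostor_2d, impostor_2d) + w3d * znorm(impostor_3d, impostor_3d)

target_far = 0.001                                    # 0.1% false acceptance rate
threshold = np.quantile(fused_imp, 1.0 - target_far)
frr = np.mean(fused_gen < threshold)                  # false rejection rate
print(f"threshold={threshold:.2f}  FRR at FAR={target_far:.1%}: {frr:.2%}")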
8

McCool, Christopher Steven. « Hybrid 2D and 3D face verification ». Queensland University of Technology, 2007. http://eprints.qut.edu.au/16436/.

Full text
Abstract:
Face verification is a challenging pattern recognition problem. The face is a biometric that, we as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared as the distances describe at best the covariance of the data, or the second order statistics (for instance Mahalanobis based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable and so this research examines two methods for overcoming this data limitation: 1. the use of holistic difference vectors of the face, and 2. by dividing the 3D face into Free-Parts. The permutations of the holistic difference vectors is formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from both these techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques classifier score fusion is then examined. This thesis also examines methods for performing classifier fusion score fusion. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data for instance the 2D face data or by capturing the face data with different sensors (multimodal fusion) for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts) and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms a consistent framework for fusion was developed. 
The consistent fusion framework, developed from the multi-algorithm and multimodal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
9

Al-Ani, Ahmed Karim. « An improved pattern classification system using optimal feature selection, classifier combination, and subspace mapping techniques ». Thesis, Queensland University of Technology, 2002.

Find full text
10

Ala'raj, Maher A. « A credit scoring model based on classifiers consensus system approach ». Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13669.

Full text
Abstract:
Managing customer credit is an important issue for each commercial bank; therefore, banks take great care when dealing with customer loans to avoid any improper decisions that can lead to loss of opportunity or financial losses. The manual estimation of customer creditworthiness has become both time- and resource-consuming. Moreover, a manual approach is subjective (dependent on the bank employee who gives the estimation), which is why devising and implementing programming models that provide loan estimations is the only way of eradicating the ‘human factor’ in this problem. Such a model should give recommendations to the bank on whether or not a loan should be granted, or otherwise give a probability that the loan will be repaid. Nowadays, a number of models have been designed, but there is no ideal classifier amongst them since each gives some percentage of incorrect outputs; this is a critical consideration when each percent of incorrect answers can mean millions of dollars of losses for large banks. However, logistic regression (LR) remains the industry-standard tool for credit-scoring model development. For this purpose, an investigation is carried out on the combination of the most efficient classifiers in the credit-scoring domain, in an attempt to produce a classifier that outperforms each of its constituent components. In this work, a fusion model referred to as ‘the Classifiers Consensus Approach’ is developed, which gives much better performance than each of the single classifiers that constitute it. The difference between the consensus approach and the majority of other combiners lies in the fact that the consensus approach adopts a model of real expert-group behaviour during the process of finding the consensus (aggregate) answer. The consensus model is compared not only with single classifiers, but also with traditional combiners and a quite complex combiner model known as the ‘Dynamic Ensemble Selection’ approach. As pre-processing techniques, data filtering (selecting training entries that fit the input data well and removing outliers and noisy data) and feature selection (removing useless and statistically insignificant features whose values are weakly correlated with the real quality of the loan) are used. These techniques significantly improve the results of the consensus approach. Results clearly show that the consensus approach is statistically better (with 95% confidence, according to the Friedman test) than any other single classifier or combiner analysed; this means that for similar datasets, there is a 95% guarantee that the consensus approach will outperform all other classifiers. The consensus approach gives not only the best accuracy, but also better AUC value, Brier score and H-measure for almost all datasets investigated in this thesis. Moreover, it outperformed Logistic Regression. Thus, it has been proven that the use of the consensus approach for credit scoring is justified and recommended in commercial banks. Along with the consensus approach, the dynamic ensemble selection approach is analysed, the results of which show that, under some conditions, it can rival the consensus approach. The strengths of the dynamic ensemble selection approach include its stability and high accuracy on various datasets.
The consensus approach, which is improved in this work, may be considered in banks that hold the same characteristics of the datasets used in this work, where utilisation could decrease the level of mistakenly rejected loans of solvent customers, and the level of mistakenly accepted loans that are never to be returned. Furthermore, the consensus approach is a notable step in the direction of building a universal classifier that can fit data with any structure. Another advantage of the consensus approach is its flexibility; therefore, even if the input data is changed due to various reasons, the consensus approach can be easily re-trained and used with the same performance.
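As a generic illustration only (the thesis's consensus procedure models expert-group behaviour and is not reproduced here), the sketch below fuses the posteriors of several scorers with agreement-based weights and evaluates the fused output with the AUC and Brier score mentioned above.

# Generic illustration (not the thesis's consensus algorithm): classifier
# posteriors are combined with weights that favour members agreeing with the
# group, and the fused scores are evaluated with AUC and the Brier score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score, brier_score_loss

X, y = make_classification(n_samples=2000, n_features=15, weights=[0.8, 0.2],
                           random_state=0)            # imbalanced "credit" stand-in
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
members = [LogisticRegression(max_iter=1000), RandomForestClassifier(200), GaussianNB()]
probs = np.stack([m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in members])

# Agreement-based weights: members closer to the group mean get more weight.
group_mean = probs.mean(axis=0)
dist = np.abs(probs - group_mean).mean(axis=1)
weights = 1.0 / (dist + 1e-9)
weights /= weights.sum()
consensus = weights @ probs

print("AUC  :", roc_auc_score(y_te, consensus))
print("Brier:", brier_score_loss(y_te, consensus))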
11

Nasser, Al-Fayadh. « Efficient hybrid classified vector quantisation technique for image compression with application to medical images ». Thesis, Liverpool John Moores University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.485732.

Full text
Abstract:
The advancement of fields such as multimedia and medical imaging, and the emergence of high resolution digital cameras, have necessitated the acquisition, storage and transmission of high resolution digital images. Storage and transmission of such images is expensive in terms of bytes and bandwidth. Although the bandwidth of communication networks has been increasing continuously, the introduction of new services and the expansion of existing ones demand an even higher bandwidth. There is a need for compressing these images to curtail the storage and transmission load. The research described in the thesis introduces two new hybrid image compression techniques for the efficient representation of still images, for lossy compression. The first is hybrid classified vector quantisation (HCVQ), which combines a Mean-Removed Classified Vector Quantiser (MRCVQ) and Singular Value Decomposition (SVD); the second is an improvement to the first and is termed improved hybrid classified vector quantisation (IHCVQ). The novelty of these techniques lies in the process of codebook generation using an SVD-based VQ method, as well as in using only one threshold instead of the multiple threshold values used in the conventional classified VQ scheme. The efficiency of the proposed IHCVQ was examined for Magnetic Resonance Images (MRI). The proposed techniques were benchmarked against the ordinary vector quantiser generated using the k-means algorithm, existing methods using the CVQ scheme, and JPEG-2000. Simulation results indicated that the proposed approaches alleviate edge degradation and can reconstruct images of good visual quality with a higher Peak Signal-to-Noise Ratio than the benchmarked techniques, or competitive with them. Several visual experiments were carried out to evaluate the quality of the compressed and reconstructed images produced by the proposed methods subjectively. The representative subjective method chosen for this research work is the mean opinion score (MOS). MOS is the result of perception-based subjective evaluation, using a 5-level grading scale developed for subjective assessment. The mean opinion score was determined from the subjective visual assessment experiment on still-image quality.
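For readers unfamiliar with the baseline, the sketch below shows plain block vector quantisation with a k-means codebook and the PSNR figure of merit; the classified-VQ and SVD refinements that define HCVQ/IHCVQ are deliberately omitted, and the image is a random stand-in.

# Sketch of plain block vector quantisation with a k-means codebook and PSNR
# evaluation; the classified/SVD codebook refinements of the thesis are omitted.
import numpy as np
from sklearn.cluster import KMeans

def blockify(img, b=4):
    h, w = img.shape
    return (img[:h - h % b, :w - w % b]
            .reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b))

def unblockify(blocks, h, w, b=4):
    return (blocks.reshape(h // b, w // b, b, b)
            .swapaxes(1, 2).reshape(h - h % b, w - w % b))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128)).astype(float)   # stand-in image

blocks = blockify(img)
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(blocks)
indices = codebook.predict(blocks)                      # transmitted indices
recon = unblockify(codebook.cluster_centers_[indices], 128, 128)

mse = np.mean((img - recon) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)                  # peak signal-to-noise ratio
print(f"PSNR: {psnr:.2f} dB")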
12

España, Boquera Salvador. « Contributions to the joint segmentation and classification of sequences (My two cents on decoding and handwriting recognition) ». Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62215.

Full text
Abstract:
This work is focused on problems (like automatic speech recognition (ASR) and handwritten text recognition (HTR)) that: 1) can be represented (at least approximately) in terms of one-dimensional sequences, and 2) solving these problems entails breaking the observed sequence down into segments which are associated to units taken from a finite repertoire. The required segmentation and classification tasks are so intrinsically interrelated ("Sayre's Paradox") that they have to be performed jointly. We have been inspired by what some works call the "successful trilogy", which refers to the synergistic improvements obtained when considering: - a good formalization framework and powerful algorithms; - a clever design and implementation taking best advantage of the hardware; - an adequate preprocessing and a careful tuning of all heuristics. We describe and study "two stage generative models" (TSGMs) comprising two stacked probabilistic generative stages without reordering. This model not only includes Hidden Markov Models (HMMs), but also "segmental models" (SMs). "Two stage decoders" may be deduced by simply running a TSGM in reverse, introducing non-determinism when required: 1) a directed acyclic graph (DAG) is generated and 2) it is used together with a language model (LM). One-pass decoders constitute a particular case. A formalization of parsing and decoding in terms of semiring values and language equations proposes the use of recurrent transition networks (RTNs) as a normal form for Context Free Grammars (CFGs), using them in a parsing-as-composition paradigm, so that parsing CFGs results in a slight extension of parsing regular ones. Novel transducer composition algorithms have been proposed that can work with RTNs and can deal with null transitions, even with non-idempotent semirings, without resorting to filter composition. A review of LMs is described and some contributions mainly focused on LM interfaces, LM representation and the evaluation of Neural Network LMs (NNLMs) are provided. A review of SMs includes the combination of generative and discriminative segmental models, a general scheme of frame emission and another of SMs. Some fast cache-friendly specialized Viterbi lexicon decoders taking advantage of particular HMM topologies are proposed. They are able to manage sets of active states without requiring dictionary look-ups (e.g. hashing). A dataflow architecture allowing the design of flexible and diverse recognition systems from a small repertoire of components has been proposed, including a novel DAG serialization protocol. DAG generators can take over-segmentation constraints into account, make use of SMs other than HMMs, take advantage of the specialized decoders proposed in this work and use a transducer model to control their behavior, making it possible, for instance, to use context dependent units. As for DAG decoders, they take advantage of a general LM interface that can be extended to deal with RTNs. Some improvements for one-pass decoders are proposed by combining the specialized lexicon decoders and the "bunch" extension of the LM interface, including an adequate parallelization. The experimental part is mainly focused on HTR tasks on different input modalities (offline, bimodal). We have proposed some novel preprocessing techniques for offline HTR which replace classical geometrical heuristics and make use of automatic learning techniques (neural networks).
Experiments conducted on the IAM database using this new preprocessing and HMM hybridized with Multilayer Perceptrons (MLPs) have obtained some of the best results reported for this reference database. Among other HTR experiments described in this work, we have used over-segmentation information, tried lexicon free approaches, performed bimodal experiments and experimented with the combination of hybrid HMMs with holistic classifiers.
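Since much of the work revolves around decoding, a generic log-domain Viterbi decoder is sketched below purely for orientation; it is not one of the specialised, cache-friendly lexicon decoders proposed in the thesis, and the toy HMM parameters are arbitrary.

# Generic Viterbi decoding sketch (log domain); the specialised, cache-friendly
# lexicon decoders of the thesis are not reproduced here.
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """log_pi: (S,) initial, log_A: (S,S) transitions, log_B: (S,V) emissions."""
    S, T = log_pi.shape[0], len(obs)
    delta = np.full((T, S), -np.inf)
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)                  # best predecessor per state
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Tiny two-state example with three observation symbols.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(log_pi, log_A, log_B, [0, 1, 2, 2]))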
España Boquera, S. (2016). Contributions to the joint segmentation and classification of sequences (My two cents on decoding and handwriting recognition) [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62215
Award-winning thesis
13

Westwood, Jill. « Hybrid creatures : mapping the emerging shape of art therapy education in Australia ». Thesis, Goldsmiths College (University of London), 2010. http://research.gold.ac.uk/6318/.

Full text
Abstract:
This PhD provides the first organized view of art therapy education in Australia. It focuses on the theories that are used in this specialized teaching and learning process. It evolved from the authors’ immersion in the field as a migrant art therapy educator to Australia from the UK and a desire to be reflexive on this experience. The research questions aimed to discover the field of art therapy education in Australia: to find out what theories and practices were taught; and where the theoretical influences were coming from, in order to develop understanding of this emerging field. Positioned as a piece of qualitative research a bricolage of methods were used to gather and analyse information from several sources (literature, institutional sources, and key participants, including the author) on the theories and practices of art therapy training programs in Australia. This also included investigating other places in the world shown to be influential (USA and UK). The bricolage approach (McLeod, 2006) included: phenomenology; hermeneutics; semi-structured interviews; practical evaluation (Patton, 1982, 1990/2002); autoethnography (Ellis & Bochner, 2000); heuristic (Moustakas, 1990); and visual methodologies (Kapitan, 2010). These were used to develop a body of knowledge in the form of institution/program profiles, educator profiles, country profiles and an autoethnographic contribution using visual processes. Epistemologically, the project is located in a paradigm of personal knowledge and subjectivity which emphasizes the importance of personal experience and interpretation. The findings contribute knowledge to support the development of art therapy education and the profession in Australia, towards the benefit, health and wellbeing of people in society. The findings show a diverse and multi-layered field of hybrid views and innovative approaches held within seven programs in the public university and private sectors. It was found that theories and practices are closely linked and that theoretical views have evolved from the people who teach the programs, location, professional contexts (health, arts, education, social, community) and the prevailing views within these contexts, which are driven by greater economic, socio-political forces and neo-liberal agendas. The university programs generally teach a range of the major theories of psychotherapy underpinned with a psychodynamic or humanistic perspective. Movement towards a more integrative and eclectic approach was found. This was linked to being part of more general masters programs and economic forces. The private sector programs are more distinctly grounded in a particular theoretical perspective or philosophical view. Key words distilled from the profiles included: conflict, transpersonal, survival through art, pedagogy, epistemology, theory driven by context and mental health. Important issues for art therapy education were identified as: the position and emphasis on art; working with the therapy/education tension; the gender imbalance in the profession; Indigenous perspectives; intercultural issues and difference. The horizons of the field revealed the importance of developing the profile of the profession, reconciling differences towards a more inclusive view and the growth of research. A trend towards opportunities in the social, education and community areas was found, driven by the increasing presence of discourses on arts and wellness.
14

Konstantaras, Anthony J. « Development and analysis of hybrid adaptive neuro-fuzzy inference systems for the recognition of weak signals preceding earthquakes ». Thesis, University of Central Lancashire, 2004. http://clok.uclan.ac.uk/19072/.

Full text
Abstract:
Prior to an earthquake, there is energy storage in the seismogenic area, the release of which results in a number of micro-cracks, which in effect produce a weak electric signal. Initially, there is a rapid rise in the number of propagating cracks, which creates a transient electric field. The whole process lasts in the order of several tens of minutes, and the resulting electric signal is considered as an electric earthquake precursor (EEP). Electric earthquake precursor recognition is mainly prevented by the very essence of the signal itself. The nature of the signal, according to the theory of propagating cracks, is usually a very weak electric potential anomaly appearing on the Earth's electric field prior to an earthquake, often unobservable within the much stronger, noise-embedded electric background. Furthermore, EEP signals vary in terms of duration and size, making reliable recognition even more difficult. The work described in this thesis incorporates neuro-fuzzy technology for the reliable recognition of EEP signals within the electric field. Neuro-fuzzy networks are neural networks with intrinsic fuzzy logic abilities, i.e. the weights of the neurons in the network define the premise and consequent parameters of a fuzzy inference system. In particular, the adaptive neuro-fuzzy inference system (ANFIS) is used, which has been shown to be effective as a universal approximator that can match any input/output data set, provided the system is adequately trained. An average model for EEP signals has been identified based on a time function describing the evolution of the number of propagating cracks. Pattern recognition is performed by the neural network to identify the average EEP model from within the electric field. The fuzzy nature of the neuro-fuzzy model, though, enables the network to classify as EEPs signals that are not exactly the same but do approximate the average EEP model. On the other hand, signals that look like EEPs but do not approximate the average model closely enough are suppressed, preventing false classification. The effectiveness of the proposed network is demonstrated using electrotelluric data recorded in NW Greece in 1995. Following training, testing with unseen data verifies the reliable performance of the model.
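As a small aside, the sketch below shows the forward pass of a first-order Takagi-Sugeno fuzzy system, the kind of model whose premise and consequent parameters ANFIS tunes; the Gaussian membership parameters and rule coefficients are arbitrary placeholders, not values trained on electrotelluric data.

# Sketch of a first-order Takagi-Sugeno fuzzy inference forward pass, the kind
# of model that ANFIS tunes; membership parameters and rule coefficients here
# are arbitrary placeholders, not values learned from EEP recordings.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_forward(x1, x2, rules):
    """Each rule: ((c1, s1), (c2, s2), (p, q, r)) -> firing strength and output."""
    w = np.array([gauss(x1, c1, s1) * gauss(x2, c2, s2)
                  for (c1, s1), (c2, s2), _ in rules])
    f = np.array([p * x1 + q * x2 + r for _, _, (p, q, r) in rules])
    return float(np.sum(w * f) / (np.sum(w) + 1e-12))   # weighted-average defuzzification

rules = [
    ((0.0, 1.0), (0.0, 1.0), (0.2, 0.1, 0.0)),
    ((2.0, 1.0), (1.5, 1.0), (0.5, 0.3, 1.0)),
]
print(tsk_forward(1.0, 0.8, rules))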
15

Westwood, Jill. « Hybrid creatures : mapping the emerging shape of art therapy education in Australia ». Thesis, Goldsmiths College (University of London), 2010. http://handle.uws.edu.au:8081/1959.7/506680.

Full text
Abstract:
This PhD provides the first organized view of art therapy education in Australia. It focuses on the theories that are used in this specialized teaching and learning process. It evolved from the authors’ immersion in the field as a migrant art therapy educator to Australia from the UK and a desire to be reflexive on this experience. The research questions aimed to discover the field of art therapy education in Australia: to find out what theories and practices were taught; and where the theoretical influences were coming from, in order to develop understanding of this emerging field. Positioned as a piece of qualitative research a bricolage of methods were used to gather and analyse information from several sources (literature, institutional sources, and key participants, including the author) on the theories and practices of art therapy training programs in Australia. This also included investigating other places in the world shown to be influential (USA and UK). The bricolage approach (McLeod, 2006) included: phenomenology; hermeneutics; semi-structured interviews; practical evaluation (Patton, 1982, 1990/2002); autoethnography (Ellis and Bochner, 2000); heuristic (Moustakas, 1990); and visual methodologies (Kapitan, 2010). These were used to develop a body of knowledge in the form of institution/program profiles, educator profiles, country profiles and an autoethnographic contribution using visual processes. Epistemologically, the project is located in a paradigm of personal knowledge and subjectivity which emphasizes the importance of personal experience and interpretation. The findings contribute knowledge to support the development of art therapy education and the profession in Australia, towards the benefit, health and wellbeing of people in society. The findings show a diverse and multi-layered field of hybrid views and innovative approaches held within seven programs in the public university and private sectors. It was found that theories and practices are closely linked and that theoretical views have evolved from the people who teach the programs, location, professional contexts (health, arts, education, social, community) and the prevailing views within these contexts, which are driven by greater economic, socio-political forces and neo-liberal agendas. The university programs generally teach a range of the major theories of psychotherapy underpinned with a psychodynamic or humanistic perspective. Movement towards a more integrative and eclectic approach was found. This was linked to being part of more general masters programs and economic forces. The private sector programs are more distinctly grounded in a particular theoretical perspective or philosophical view. Key words distilled from the profiles included: conflict, transpersonal, survival through art, pedagogy, epistemology, theory driven by context and mental health. Important issues for art therapy education were identified as: the position and emphasis on art; working with the therapy/education tension; the gender imbalance in the profession; Indigenous perspectives; intercultural issues and difference. The horizons of the field revealed the importance of developing the profile of the profession, reconciling differences towards a more inclusive view and the growth of research. A trend towards opportunities in the social, education and community areas was found, driven by the increasing presence of discourses on arts and wellness.
16

Armstrong, Keith M. « Towards an Ecosophical Praxis of New Media Space design ». Thesis, QUT, 2003. https://eprints.qut.edu.au/9073/1/PHDTHESISKMAsmall.pdf.

Full text
Abstract:
This study is an investigation in and through media arts practice. It set out to develop a novel type of new media artistic praxis built upon concepts drawn from the disciplines of scientific and cultural ecology. The rationale for this research was based upon my observation as a practising new media artist that existing praxis in the new media domain appeared to operate largely without awareness of the ecological implications of those practices. The thesis begins by explaining key concepts of ecology, spanning the arts and the sciences. It then outlines the thinking of contemporary theorists who propose that the problem of ecology is a critical issue for the 21st century, suggesting that our well-documented ecological crisis is indicative of a more general crisis of human subjectivity. It then records an investigation into particular strategies for artistic praxis which might instigate an active engagement with this problem of ecology. The study employed a methodology based in action research to focus upon the development and analysis of three new artistic works, '#14', 'Public Relations' and 'transit_lounge'. These were used to explore diverse theories of ecology and to hone a series of pointers towards Ecosophical arts/new media praxis. This journey constitutes an emergent theory for new media space design. The thesis concludes with a toolkit of tactics and approaches that other arts/new media practitioners might employ to begin working on the problem of ecology.
17

Lin, Cheng-Lung, and 林政龍. « Internet Traffic Classification based on Hybrid Naive Bayes HMMs Classifier ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/59142288452653192129.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 96 (ROC calendar)
To deal with a large network infrastructure, we must rely on an automatic network management system. Traditionally, most firewalls simply use the port numbers of packets to identify abnormal network traffic. Furthermore, some of them observe application-layer characteristics, such as the packet payload, to identify abnormal traffic. However, these traditional security mechanisms encounter difficulties with the increasing popularity of encrypted protocols. Recently, related research has shown that the application protocol can still be identified from a restricted set of characteristics and behaviors observable at the transport layer of the TCP/IP model after encryption. Therefore, we combine and implement two models, Naive Bayes and Hidden Markov Models (HMMs), as an automatic system, and use the limited information of encrypted packets to infer and classify application protocol behavior. Generally speaking, HMMs are relatively good at estimating the underlying relationships in temporal data, while Naive Bayes is simple, fast, and effective, and is commonly used for multidimensional datasets. In this thesis, we propose a hybrid Naive Bayes HMM classifier as a fundamental framework to infer application protocol behavior in encrypted network traffic. The hybrid model uses the temporal property of HMMs to inspect the relations between packets and employs Naive Bayes to characterize the statistical signature. Our approach can not only identify network behavior in encrypted traffic, but also exploit the temporal property to raise accuracy. It can be applied to infer the application protocol and to detect abnormal behavior. Compared to related research, our method uses only a few features to classify multi-flow protocols and achieves respectable performance.
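The sketch below conveys the general shape of such a hybrid on synthetic flows: a Gaussian Naive Bayes scores per-flow statistics, a per-class first-order Markov chain scores the discretised packet-size sequence, and the two log-scores are summed; the features, data and model details are assumptions rather than the classifier built in the thesis.

# Sketch of a Naive Bayes + Markov-chain hybrid for flow classification:
# GaussianNB scores per-flow statistics, a per-class first-order Markov chain
# scores the discretised packet-size sequence, and the log-scores are summed.
# Data and features are synthetic stand-ins, not real traffic traces.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_classes, n_bins = 2, 8

def make_flow(c):
    stats = rng.normal(loc=c * 2.0, scale=1.0, size=3)             # flow statistics
    seq = rng.integers(0, n_bins, size=20) if c == 0 else \
          np.clip(rng.integers(0, n_bins, size=20) + 2, 0, n_bins - 1)
    return stats, seq

train = [(make_flow(c), c) for c in (0, 1) for _ in range(200)]
X = np.array([s for (s, _), _ in train])
y = np.array([c for _, c in train])
nb = GaussianNB().fit(X, y)

# Per-class transition matrices with add-one smoothing.
trans = np.ones((n_classes, n_bins, n_bins))
for (_, seq), c in train:
    for a, b in zip(seq[:-1], seq[1:]):
        trans[c, a, b] += 1
trans /= trans.sum(axis=2, keepdims=True)

def classify(stats, seq):
    log_nb = nb.predict_log_proba(stats.reshape(1, -1))[0]
    log_mc = np.array([np.sum(np.log(trans[c, seq[:-1], seq[1:]]))
                       for c in range(n_classes)])
    return int(np.argmax(log_nb + log_mc))

(stats, seq), true = make_flow(1), 1
print("predicted:", classify(stats, seq), "true:", true)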
18

He, Chun Lei. « Error analysis of a hybrid multiple classifier system for recognizing unconstrained handwritten numerals ». Thesis, 2005. http://spectrum.library.concordia.ca/8493/1/MR10287.pdf.

Full text
Abstract:
Since the early 1990s, many research communities, among them pattern recognition and machine learning, have shown a growing interest in Multiple Classifier Systems (MCSs), particularly for the recognition of handwritten words and numerals. This thesis is divided into two parts. First, we construct an effective hybrid MCS (HMCS) for handwritten numeral recognition in order to raise the reliability of the entire system. This HMCS is proposed by integrating the cooperation (serial topology) and combination (parallel topology) of three classifiers: SVM, MQDF, and LeNet-5. In cooperation, patterns rejected by the previous classifier become the input of the next classifier. Based on the principles of the different classifiers, effective measurements for the rejection options are defined: First Rank Measurement (FRM), Differential Measurement (DM), and Probability Measurement (PM). In combination, a Weighted Borda Count (WBC) at the rank level, which reflects the confidence and preference of different ranks in different classes with different classifiers, is applied. Second, we analyze the factors that cause errors in the HMCS. In this process, we focus mainly on the role of size normalization in the recognition of handwritten numerals.
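A rough sketch of the serial (cascade-with-rejection) part of such a system is given below: each stage accepts samples whose top-two probability margin, loosely analogous to the Differential Measurement, clears a threshold and forwards the rest; the classifiers and thresholds are stand-ins for the SVM/MQDF/LeNet-5 stack of the thesis.

# Sketch of a serial cascade with a rejection option: each stage keeps samples
# whose top-two probability margin clears a threshold and forwards the rest to
# the next stage. The actual SVM / MQDF / LeNet-5 stack is not reproduced.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stages = [(SVC(probability=True).fit(X_tr, y_tr), 0.30),
          (LogisticRegression(max_iter=2000).fit(X_tr, y_tr), 0.20),
          (KNeighborsClassifier(3).fit(X_tr, y_tr), 0.00)]   # last stage accepts all

pred = np.full(len(y_te), -1)
pending = np.arange(len(y_te))
for clf, margin_threshold in stages:
    if pending.size == 0:
        break
    proba = clf.predict_proba(X_te[pending])
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]          # confidence margin of this stage
    accept = margin >= margin_threshold
    pred[pending[accept]] = proba[accept].argmax(axis=1)
    pending = pending[~accept]                 # rejected samples go to next stage

print("accuracy on accepted:", np.mean(pred[pred >= 0] == y_te[pred >= 0]))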
Styles APA, Harvard, Vancouver, ISO, etc.
19

Chen, Jyun-Kai, et 陳俊愷. « CNN-SVM Hybrid Classifier : Multi-label Classification in K-12 Cross-topic Problem ». Thesis, 2017. http://ndltd.ncl.edu.tw/handle/6tk9hq.

Texte intégral
Résumé :
Master's thesis
National Central University
Department of Computer Science and Information Engineering
Academic year 105
In the tide of modern technology, there have been many significant innovations in human life. The development of the Internet has made the delivery of information more rapid, and, on the learning side, new learning styles are gradually changing traditional learning habits. In the K-12 system, question-driven learning is an effective way to learn: students can confirm their learning status through exercises and understand the knowledge and concepts expressed by each problem. In order to provide learning information to learners, good management and classification of learning materials has become an important task. Classifying questions according to the knowledge points they cover lets users retrieve appropriate questions conveniently and thus achieve better learning efficiency. In this thesis, we continue earlier studies on a classification system for K-12 learning materials. In addition to designing a database for the learning materials, we propose a cross-topic classification system. Traditional learning often targets each single knowledge point separately, and the original system performs well on such problems; however, some questions from large entrance exams and advanced exercises combine concepts across topics. We therefore extend the original Convolutional Neural Network (CNN) and Support Vector Machine (SVM) hybrid classifier and propose a multi-label classification model for cross-topic questions. Finally, we compare the strategies proposed in previous studies of the K-12 learning-material classification system with our multi-label classification model. The experiments show that the multi-label classification model outperforms the original classification strategies.
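A hedged sketch of a CNN-feature-plus-SVM multi-label pipeline in the spirit described above; the CNN text encoder is stubbed out as a precomputed feature matrix and the label matrix is random, so nothing here reproduces the thesis's actual architecture or data.

    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import LinearSVC

    # Assume `cnn_features` are question embeddings produced by a trained CNN
    # encoder (stubbed here with random numbers), and `labels` is a binary
    # indicator matrix: one column per knowledge point (multi-label).
    rng = np.random.default_rng(0)
    cnn_features = rng.normal(size=(200, 128))
    labels = (rng.random(size=(200, 5)) < 0.3).astype(int)

    # One linear SVM per knowledge point; a question may receive several labels.
    clf = OneVsRestClassifier(LinearSVC())
    clf.fit(cnn_features, labels)

    predicted = clf.predict(cnn_features[:3])
    print(predicted)        # rows of 0/1 flags, one column per topic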
Styles APA, Harvard, Vancouver, ISO, etc.
20

Zhang, Ping. « Reliable recognition of handwritten digits using a cascade ensemble classifier system and hybrid features ». Thesis, 2006. http://spectrum.library.concordia.ca/8904/1/NR16288.pdf.

Texte intégral
Résumé :
Aiming at a high recognition rate and a low error rate at the same time, a cascade ensemble classifier system is proposed for the recognition of handwritten digits. The tradeoff among the error, rejection and recognition rates of the recognition system is analyzed theoretically. Three solutions are proposed: (i) extracting more discriminative features to attain a high recognition rate, (ii) using ensemble classifiers to suppress the error rate, and (iii) employing a novel cascade system to enhance the recognition rate and to reduce the rejection rate. Based on these strategies, seven sets of discriminative hybrid features and three sets of randomly selected features are extracted and used in the different layers of the cascade recognition system. Novel gating networks are used to congregate the confidence values of three parallel Artificial Neural Network (ANN) classifiers. The weights of the gating networks are trained by Genetic Algorithms (GAs) to achieve the overall optimal performance. Experiments are conducted on the MNIST handwritten numeral database with encouraging results: a high reliability of 99.96% with minimal rejection, or a 99.59% correct recognition rate without rejection in the last cascade layer. In the verification model, a novel multi-modal nonparametric analysis for optimal feature dimensionality reduction is proposed. The computational complexity of our proposed algorithm is much lower than that of other similar approaches found in the literature. Experiments demonstrate that our proposed method can achieve high feature compression performance without sacrificing its discriminant ability. The results of dimensionality reduction make the ANNs converge more easily. For the verification of confusing handwritten numeral pairs, our proposed algorithm is used to congregate features, and it outperforms PCA and compares favorably with other nonparametric discriminant analysis methods.
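A bare-bones illustration of a cascade with rejection between layers; the thresholds, classifiers and data are placeholders, and the thesis's actual layers use ANN ensembles with GA-trained gating weights rather than what is shown here.

    import numpy as np

    def cascade_predict(x, layers, thresholds):
        """Pass a sample through successive classifier layers.

        layers     : list of fitted classifiers exposing predict_proba.
        thresholds : per-layer confidence needed to accept a decision;
                     otherwise the sample is forwarded to the next layer.
        Returns (label, layer_index), or (None, -1) if every layer rejects.
        """
        for i, (clf, t) in enumerate(zip(layers, thresholds)):
            proba = clf.predict_proba(x.reshape(1, -1))[0]
            if proba.max() >= t:
                return int(np.argmax(proba)), i
        return None, -1          # rejected by the whole cascade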
Styles APA, Harvard, Vancouver, ISO, etc.
21

Tzu-Lin, Ho, et 何自琳. « A Hybrid Rough Set Classifier based on Multi-Attributes Selection Method for Identifying Financial Distress of Company ». Thesis, 2014. http://ndltd.ncl.edu.tw/handle/85063164060011711706.

Texte intégral
Résumé :
Master's thesis
National Yunlin University of Science and Technology
Department of Information Management
Academic year 102
After several decades, the prediction of financial distress remains an important and challenging issue. Many researchers have constructed models for bankruptcy prediction and financial crises, including conventional approaches and artificial intelligence (AI) techniques. Financial distress information influences investors' decisions; when investors depend on analysts' opinions and their own subjective judgments, they may make wrong decisions. The objective of this study is therefore to construct a novel model that can provide decision-makers with rules describing the financial situation of a company. This study employs six attribute-selection methods to reduce high-dimensional data: (1) chi-square, (2) information gain, (3) discriminant analysis, (4) logistic regression, (5) support vector machine, and (6) the proposed Join method; a rough set classifier is then used to classify financial distress. To verify the proposed model, the TEJ dataset is employed as experimental data, and the model is compared with a decision tree, a multilayer perceptron, and a support vector machine in terms of Type I error, Type II error, and accuracy. The experimental results show that the logistic regression and chi-square attribute-selection methods combined with the rough set classifier outperform the other models on Type I error, Type II error, and accuracy.
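As an illustration of the attribute-selection step, a chi-square filter followed by a classifier could be sketched as below; a rough set classifier is not available in scikit-learn, so a decision tree stands in for it, and the data and feature counts are invented.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.pipeline import make_pipeline

    # Invented data: 300 firms, 40 non-negative financial ratios, distress flag.
    rng = np.random.default_rng(1)
    X = rng.random(size=(300, 40))
    y = (rng.random(300) < 0.2).astype(int)

    # chi2 requires non-negative features; keep the 10 most relevant attributes,
    # then classify (the tree is a stand-in for the rough set classifier).
    model = make_pipeline(SelectKBest(chi2, k=10), DecisionTreeClassifier(max_depth=4))
    model.fit(X, y)
    print(model.score(X, y))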
Styles APA, Harvard, Vancouver, ISO, etc.
22

Rodic, Daniel. « A Hybrid heuristic-exhaustive search approach for rule extraction ». Diss., 2001. http://hdl.handle.net/2263/25095.

Texte intégral
Résumé :
The topic of this thesis is knowledge discovery and artificial intelligence based knowledge discovery algorithms. The knowledge discovery process and associated problems are discussed, followed by an overview of three classes of artificial intelligence based knowledge discovery algorithms. Typical representatives of each of these classes are presented and discussed in greater detail. Then a new knowledge discovery algorithm, called the Hybrid Classifier System (HCS), is presented. The guiding concept behind the new algorithm was simplicity, and it is loosely based on schemata theory. It is evaluated against discussed algorithms from each class, namely CN2, C4.5, BRAINNE and BGP. The comparison was done using a benchmark of classification problems, and the results are discussed and compared. These results show that the new knowledge discovery algorithm performs satisfactorily, yielding accurate, crisp rule sets. Probably the main strength of the HCS algorithm is its simplicity, so it can be the foundation for many possible future extensions. Some of the possible extensions of the new proposed algorithm are suggested in the final part of this thesis.
Dissertation (MSc)--University of Pretoria, 2007.
Computer Science
unrestricted
Styles APA, Harvard, Vancouver, ISO, etc.
23

Chen, Tian-Xiang, et 陳天祥. « Using the Hybrid of Disparity Map and Multi-Classifier for Road Surface Detection in Outdoor Piloting of Autonomous Land Vehicles ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/aysh4y.

Texte intégral
Résumé :
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer and Communication Engineering
Academic year 97
Similar to the function of human eyes, an autonomous land vehicle (ALV) uses cameras to acquire road information. In this paper, we adopt a disparity map (DM) to detect the ALV's path and the various obstacles it may face. We then develop several road-surface voters to recognize what kind of road surface the ALV is driving on. Finally, according to the collected information, the best navigation can be achieved. Experiments with our ALV on an outdoor road without pavement markings show that the proposed algorithms work well.
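A minimal disparity-map computation with OpenCV, in the spirit of the DM step above; the image file names, matcher parameters and obstacle threshold are placeholders, and the thesis's road-surface voters and navigation logic are not reproduced.

    import cv2

    # Load a rectified stereo pair (placeholder file names).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching stereo: larger disparities correspond to closer objects,
    # so thresholding the map gives candidate obstacles in front of the vehicle.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)

    obstacle_mask = disparity > 40 * 16   # StereoBM returns fixed-point values (x16)
    print(obstacle_mask.mean(), "fraction of pixels flagged as near obstacles")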
Styles APA, Harvard, Vancouver, ISO, etc.
24

Chen, Min-Hsien, et 陳民弦. « A Hybrid Classifier for Type 2 Diabetes Based on Decision Tree, Probabilistic Model and Artificial Neural Network- An Empirical Study of Taichung Tzu-chi General Hospital ». Thesis, 2017. http://ndltd.ncl.edu.tw/handle/83336301293216144685.

Texte intégral
Résumé :
Master's thesis
National Chung Hsing University
Department of Information Management
Academic year 105
The National Health Insurance (NHI) of Taiwan has enjoyed about 70% public support since 1995. NHI costs are increasing year by year, and the program faced funding shortfalls from 2007 to 2017, reflecting its high utilization rate. Taiwan's elderly population is also growing and is expected to account for 20% of the total population by 2025; with this increase, the burden of health-care costs grows heavier. The annual statistics for 2016 show that diabetes has been among the top five causes of death from 1995 to 2016. To achieve early prevention and early treatment, and to find factors related to diabetes for clinical diagnosis, this study uses data-mining algorithms to establish a prediction model for Type II diabetes mellitus. The cases were collected from Taichung Tzu Chi Hospital and include patients with and without diabetes from 2009 to 2016, 1,326 patients in total. The cases were analyzed with decision tree, neural network, and Naive Bayesian algorithms. The results show that urine albumin-creatinine ratio, age, triglycerides, creatinine, high-density cholesterol, and gender are important factors in the model. When a model is built without glucose AC and HbA1c, the accuracy is 75% and the area under the curve is above 0.78; with glucose AC and HbA1c, the accuracy is 98% and the area under the curve is above 0.97. These models have good predictive ability for both diabetic and non-diabetic subjects. The study combines data-mining techniques with medical data to build prediction models, providing knowledge of the real impact factors of the disease and supporting the integration of medical information with clinical applications.
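A compact sketch of the kind of comparison reported above, evaluating one classifier with and without two key features by area under the ROC curve; the synthetic data and the column indices standing in for glucose AC and HbA1c are placeholders, not the clinical variables themselves.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(5)
    n = 400
    X = rng.normal(size=(n, 8))                    # e.g. age, lipids, creatinine, ...
    y = (X[:, 0] + 2.0 * X[:, 6] + rng.normal(size=n) > 0).astype(int)

    # Pretend columns 6 and 7 are glucose AC and HbA1c; compare cross-validated
    # AUC with and without them, mirroring the two models in the abstract.
    auc_without = cross_val_score(GaussianNB(), X[:, :6], y, cv=5, scoring="roc_auc").mean()
    auc_with = cross_val_score(GaussianNB(), X, y, cv=5, scoring="roc_auc").mean()
    print(round(auc_without, 3), round(auc_with, 3))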
Styles APA, Harvard, Vancouver, ISO, etc.
25

Hsu, Hsing-Yun, et 許馨勻. « A Hybrid Artificial Neural Network and Decision Tree Based Classifier to Vital Signs of Cardiopulmonary Resuscitation and Intensive Care Unit Patients in Taichung Tzu Chi Hospital ». Thesis, 2016. http://ndltd.ncl.edu.tw/handle/34222358377480042639.

Texte intégral
Résumé :
Master's thesis
National Chung Hsing University
Department of Information Management
Academic year 104
After admission to hospital, many patients need CPR or are transferred to the ICU for observation as their condition deteriorates for various reasons. The seven vital signs, body temperature, respiration, pulse, systolic blood pressure, diastolic blood pressure, blood oxygen saturation, and pain index, are the most basic signals of physical condition and the best indexes for reflecting, at the first moment, whether the body is functioning normally. Visual checks are currently the main basis of judgement in most hospitals; because medical staff cannot watch every patient with full attention around the clock, patients' conditions often deteriorate through the oversight of busy staff. With today's advanced technology, information systems that help medical staff to pre-judge and to be alerted instantly can effectively reduce the risk of deterioration. This study focuses on the last vital-sign record before deterioration in order to find the combinations that best reflect abnormal changes in body function. The vital-sign data of patients who received CPR or were transferred from the general wards to the ICU during hospitalization between January 1, 2013 and May 30, 2015 in Taichung Tzu Chi Hospital are used for analysis, and data-mining methods (the decision trees CART, ID3, C4.5 and CHAID, and a neural network) are applied to extract rules for abnormal vital signs. The study has three specific aims: first, to verify empirically which data-mining algorithm is best for vital-sign data; second, to find the abnormal signs that cause patients to receive CPR or be transferred to the ICU, to explore the most frequently occurring abnormal combinations, and thereby to assist physicians and nurses in making effective and accurate clinical judgements; third, to investigate with decision tree (CHAID) analysis whether the MEWS is effective for early warning of changes in a patient's condition. The decision tree analysis yields three findings: high respiration rate and low diastolic blood pressure are detected when body temperature is abnormal; high systolic and low diastolic blood pressure are found when pulse is abnormal; and low diastolic blood pressure, high body temperature and high pulse accompany abnormal respiration rates. These patterns are common in patients treated with CPR. For patients transferred to the ICU, low heart rate and high diastolic pressure are detected when systolic blood pressure is abnormal, and low body temperature and high systolic blood pressure correlate with abnormal diastolic blood pressure. In the empirical results obtained with analysis software such as WEKA and STATISTICA, the correct rates of the neural network are mostly higher than those of the decision trees, indicating that neural networks are well suited to vital-sign data. According to the CHAID analysis of MEWS, the analyzed data are not useful for condition warnings based on MEWS scores: most of the scores are below 5, and the false-alarm rate increases if the threshold is set too low, whereas patients who really need care may be missed if it is set too high.
The study suggests that the existing standards are valid and could predict conditions more precisely with adjustment. We therefore integrate the clinical standards adopted by Taichung Tzu Chi Hospital, the MEWS score criteria, and the decision tree and MEWS analyses of this study to set up different judgement criteria, with modified MEWS score criteria, for the different patients who may be transferred to the ICU or treated with CPR.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Tzu-Chien Lien et 連子建. « Feature selection methods with hybrid discretization for naive Bayesian classifiers ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/95105249459952662675.

Texte intégral
Résumé :
Master's thesis
National Cheng Kung University
Institute of Information Management
Academic year 100
The naïve Bayesian classifier is widely used for classification problems because of its computational efficiency and competitive accuracy. Discretization is one of the major approaches to processing continuous attributes for naïve Bayesian classifiers; hybrid discretization chooses the discretization method for each continuous attribute individually. A previous study found that hybrid discretization improves the performance of the naïve Bayesian classifier more than unified discretization. Selective naïve Bayes, abbreviated as SNB, is an important feature selection method for naïve Bayesian classifiers: it improves efficiency and accuracy by removing redundant and irrelevant attributes. The objective of this study is to develop methods that combine hybrid discretization with feature selection, and three such methods are proposed. Method one, which is the most efficient, performs hybrid discretization after feature selection. Methods two and three generally perform hybrid discretization first, followed by feature selection: method two transforms continuous attributes without considering discrete attributes, while method three determines the best discretization method for each continuous attribute by searching all possibilities. The experimental results show that, in general, the three methods combining hybrid discretization with feature selection all perform better than the method combining unified discretization with feature selection, and that method three is the best.
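A rough sketch of selective-naïve-Bayes-style greedy forward selection on top of discretized attributes; equal-width binning stands in for the hybrid discretization studied in the thesis, and the data, bin counts and stopping rule are invented.

    import numpy as np
    from sklearn.preprocessing import KBinsDiscretizer
    from sklearn.naive_bayes import CategoricalNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 8))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)

    # Unified equal-width discretization stands in here for the hybrid scheme.
    Xd = KBinsDiscretizer(n_bins=5, encode="ordinal",
                          strategy="uniform").fit_transform(X).astype(int)

    # Selective naive Bayes: greedily add the attribute that most improves
    # cross-validated accuracy, stopping when no attribute helps.
    selected, best = [], 0.0
    remaining = list(range(Xd.shape[1]))
    while remaining:
        scores = {f: cross_val_score(CategoricalNB(min_categories=5),
                                     Xd[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best:
            break
        best = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    print("selected attributes:", selected, "accuracy:", round(best, 3))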
Styles APA, Harvard, Vancouver, ISO, etc.
27

Chuan-Yu Tsai et 蔡荃宇. « Hybrid discretization methods for naive Bayesian classifiers with priors ». Thesis, 2013. http://ndltd.ncl.edu.tw/handle/57897752508508533512.

Texte intégral
Résumé :
Master's thesis
National Cheng Kung University
Institute of Information Management
Academic year 101
Classification is one of the major tasks in data mining. Among classifiers, the naïve Bayesian classifier has the advantages of fast processing and a simple theory. By nature it is suited to discrete attributes, but in practice data are often continuous, so choosing a proper discretization method is the key to raising classification accuracy. Hybrid discretization can adaptively apply a proper discretization method to every continuous attribute, which leads to higher classification accuracy. A prior distribution can supply essential knowledge about the parameters chosen during classification and, because it brings the classifier closer to the underlying concept, can substantially improve accuracy. Since hybrid discretization was proposed only recently, no experiment has yet examined the combined use of hybrid discretization and prior distributions. This research therefore conducts such an experiment, in the hope of improving the classification accuracy of the naïve Bayesian classifier. Three modes of combination are proposed: HDNB1 is the most conservative, discretizing continuous attributes before introducing the priors; HDNB2 and HDNB3 follow the same steps when combining hybrid discretization and prior distributions, but HDNB2 considers each attribute in order of its importance, whereas HDNB3 performs the discretization over all attributes.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Wang, I.-Shu, et 王怡書. « The construction of multiple-class data based on hybrid classifiers ». Thesis, 2010. http://ndltd.ncl.edu.tw/handle/15289623995260957870.

Texte intégral
Résumé :
Master's thesis
National Chin-Yi University of Technology
Department of Industrial Engineering and Management
Academic year 98
This paper utilizes information gain to extract important features and compares the performance of three hybrid classifiers: a decision tree (C5) combined with an artificial neural network (ANN), a support vector machine (SVM) combined with C5, and a Bayesian network (BN) combined with C5. The three hybrid classifiers are developed in this study, and the highest accuracy on the multi-class E. coli dataset is 94.05%. To show that the approach is widely applicable and not limited to a particular field, we also verify the research architecture on the Parkinson dataset, where the accuracy reaches 95.38%.
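A small sketch of the information-gain feature-extraction step; mutual information is used as the information-gain estimate, and the dataset is random placeholder data rather than the E. coli or Parkinson data.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(3)
    X = rng.normal(size=(150, 10))
    y = rng.integers(0, 3, size=150)          # multi-class labels

    # Rank attributes by estimated information gain and keep the top five,
    # which would then feed the hybrid classifiers (C5+ANN, C5+SVM, C5+BN).
    gain = mutual_info_classif(X, y, random_state=0)
    top = np.argsort(gain)[::-1][:5]
    print("top attributes:", top, "gains:", np.round(gain[top], 3))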
Styles APA, Harvard, Vancouver, ISO, etc.
29

Cheng, Shang-Wen, et 鄭尚文. « Mode choice models with hybrid decision rules - classified by traveller's attributes ». Thesis, 1996. http://ndltd.ncl.edu.tw/handle/38559191928194626441.

Texte intégral
Résumé :
Master's thesis
National Cheng Kung University
Department of Transportation Management (Science)
Academic year 84
Understanding the decision-making process by which an individual chooses a mode is important for researchers constructing disaggregate behavior models. Most existing research assumed that all individuals use the same decision rule, or separated them by a single attribute and then explained each person's decision rule by that attribute. In this study, we use two kinds of choice model, compensatory and non-compensatory, to investigate individual choice behavior. Using two classification rules, outer-classification and inner-classification, the study develops a multi-attribute decision tree; according to the classification result of this tree, we can explain an individual's decision rule (compensatory or non-compensatory) under different social backgrounds and travel circumstances. Using intercity travel data for Taiwan in 1996, the study builds hybrid choice models that include compensatory and non-compensatory decision rules. A multinomial logit (MNL) model and an elimination-by-aspects (EBA) model are used to construct these two kinds of decision-rule model, and predictive strength and the percentage correctly predicted are used as measurements. The empirical results lead to the following conclusions: 1. Classified models have better statistical performance than non-classified models. 2. Non-coessential groups use different decision rules: travellers who are younger, have lower incomes, travel for non-business purposes, or are not in a hurry tend to use a compensatory decision rule, whereas travellers who are older, have higher incomes, travel on business, or are in a hurry tend to use a non-compensatory decision rule. 3. Non-coessential groups also differ in modal choice: most travellers who tend to use a compensatory decision rule choose train or bus, whereas those who tend to use a non-compensatory rule mostly choose airplane. 4. From the viewpoint of modal attributes, travellers differ from one another: cost and convenience are more important to those who tend to use a compensatory rule, while travellers who tend to use a non-compensatory rule consider travel time first.
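To illustrate the compensatory side, a multinomial logit mode-choice model can be fit in a few lines; the traveller attributes, mode labels and data below are invented and do not correspond to the 1996 survey or the EBA model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    # Invented traveller/trip attributes: cost, time, convenience, income.
    X = rng.normal(size=(500, 4))
    modes = rng.integers(0, 3, size=500)   # 0 = train, 1 = bus, 2 = airplane

    # With the lbfgs solver this fits a multinomial logit (MNL) model; the
    # coefficients play the role of the compensatory trade-off weights.
    mnl = LogisticRegression(solver="lbfgs", max_iter=1000)
    mnl.fit(X, modes)
    print(mnl.coef_)                       # one row of weights per mode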
Styles APA, Harvard, Vancouver, ISO, etc.
30

Yang, Kuo-Hua, et 楊國樺. « Intrusion Detection Systems based on Hybrid Hidden Markov Models and Naïve Bayes Classifiers ». Thesis, 2006. http://ndltd.ncl.edu.tw/handle/gwdh9v.

Texte intégral
Résumé :
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 94
With the Internet and attack methods becoming more complicated by the day, network management generally adopts firewalls as the main information-security measure. Hidden Markov Models (HMMs) are often used for intrusion detection because intrusion data are mostly sequential; in particular, anomaly detection systems can build a model of normal behavior, with the datasets for that model collected from the system calls generated by users. HMMs are relatively good at modeling the symbols emitted in each state, so we use simple HMMs. In many cases, Naïve Bayes classifiers are used to deal with multidimensional datasets, as they are simple, fast, and effective. In this thesis, we propose methods that combine Hidden Markov Models with Naïve Bayes classifiers. Finally, in the experiments, our system uses the KDD Cup 99 datasets; the evaluation shows that our system achieves better detection rates for U2R and R2L connections than the KDD Cup 99 winner.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Chia-Yu Yao et 姚佳佑. « Parameter setting methods of hybrid priors for naïve Bayesian classifiers with multinomial model in gene sequence data ». Thesis, 2015. http://ndltd.ncl.edu.tw/handle/89805330184983862274.

Texte intégral
Résumé :
Master's thesis
National Cheng Kung University
Institute of Information Management
Academic year 103
Due to the development of metagenomics and sequencing, analysts pay more attention to the effectiveness of classification algorithms for processing high-dimensional gene sequence data. Naïve Bayesian classifiers are a popular tool for classifying high-dimensional gene sequence data because of their computational efficiency and easy implementation. Setting proper parameters for priors has been shown to be an effective way of improving the performance of the naïve Bayesian classifier with multinomial models, called the multinomial naïve Bayesian classifier, in gene sequence classification. Since the number of class values in a gene sequence data set is huge, and the number of instances for many class values is less than ten, the possibility of improving the classification accuracy of gene sequence data is generally limited. In this study, the covariance matrices for features are first calculated from available gene sequence data. Several ways are then proposed to set and search the parameters of Dirichlet priors for the naïve Bayesian classifier with multinomial model. The experimental results on two gene sequence sets demonstrate that our proposed methods can improve the prediction accuracy of the multinomial naïve Bayesian classifier in acceptable computational time. Generalized Dirichlet priors are then introduced for the class values with low accuracy and large numbers of instances. The experimental results on the same gene sequence sets show that the improvement in prediction accuracy is limited because the number of class values is huge and the number of instances in many class values is small.
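For intuition, the symmetric-Dirichlet case corresponds to the smoothing parameter of a multinomial naive Bayes classifier over k-mer counts; the toy sequences, k-mer size and alpha value below are illustrative, and the non-symmetric and generalized Dirichlet priors studied in the thesis go beyond this.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Toy gene sequences and labels; real data would be far larger.
    seqs = ["ACGTACGTGG", "ACGTTTGGCC", "TTGGCCAACG", "GGCCAATTGG"]
    labels = ["taxonA", "taxonA", "taxonB", "taxonB"]

    # Represent each sequence by overlapping 4-mer counts (character n-grams).
    vec = CountVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
    X = vec.fit_transform(seqs)

    # alpha is the parameter of a symmetric Dirichlet prior on each class's
    # multinomial; choosing it well is the kind of tuning the thesis studies.
    clf = MultinomialNB(alpha=0.5)
    clf.fit(X, labels)
    print(clf.predict(vec.transform(["ACGTACGTCC"])))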
Styles APA, Harvard, Vancouver, ISO, etc.
32

Jun-Jie Peng et 彭俊傑. « Hybrid Energy Efficient Dynamic Bandwidth Allocation with Modified Two-Stage Control Mechanism to Satisfy Classified Delay Constraints in TDM-PONs ». Thesis, 2015. http://ndltd.ncl.edu.tw/handle/21996397725709672860.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
33

Uttam, Kumar. « Algorithms For Geospatial Analysis Using Multi-Resolution Remote Sensing Data ». Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2280.

Texte intégral
Résumé :
Geospatial analysis involves the application of statistical methods, algorithms and information retrieval techniques to geospatial data. It incorporates time into spatial databases and facilitates investigation of land cover (LC) dynamics through data, models, and analytics. LC dynamics induced by human and natural processes play a major role in global as well as regional scale patterns, which in turn influence weather and climate. Hence, understanding LC dynamics at the local and regional as well as global levels is essential to evolve appropriate management strategies to mitigate the impacts of LC changes. This can be captured through multi-resolution remote sensing (RS) data. However, with the advancements in sensor technologies, suitable algorithms and techniques are required for optimal integration of information from multi-resolution sensors that are cost effective while overcoming the possible data and methodological constraints. In this work, several per-pixel traditional and advanced classification techniques have been evaluated with the multi-resolution data, along with the role of ancillary geographical data on the performance of classifiers. Techniques for linear and non-linear un-mixing, endmember variability and determination of the spatial distribution of class components within a pixel have been applied and validated on multi-resolution data. An endmember estimation method is proposed and its performance is compared with manual, semi-automatic and fully automatic methods of endmember extraction. A novel technique, the Hybrid Bayesian Classifier, is developed for per-pixel classification, where the class prior probabilities are determined by un-mixing low spatial / high spectral resolution multi-spectral data, while posterior probabilities are determined from ground training data and assigned to every pixel in high spatial / low spectral resolution multi-spectral data in the Bayesian classification. These techniques have been validated with multi-resolution data for various landscapes with varying altitudes. As a case study, spatial metrics and cellular automata based modelling has been carried out for a rapidly urbanising landscape at moderate altitude.
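A toy rendering of one way to read the Bayesian combination above: per-pixel class priors (for example, abundance fractions from spectral unmixing of a coarser image) are multiplied by class-conditional scores estimated from training data, and the result is normalized. All arrays below are made-up placeholders, not the thesis's data or exact formulation.

    import numpy as np

    def hybrid_bayes_posterior(prior, likelihood):
        """Combine per-pixel class priors with per-pixel class likelihoods.

        prior      : (n_pixels, n_classes), e.g. unmixed abundance fractions.
        likelihood : (n_pixels, n_classes), e.g. Gaussian class likelihoods
                     evaluated on the high-spatial-resolution bands.
        """
        post = prior * likelihood
        return post / post.sum(axis=1, keepdims=True)

    # Made-up example: 3 pixels, 2 classes.
    prior = np.array([[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]])
    likelihood = np.array([[0.4, 0.1], [0.3, 0.3], [0.05, 0.2]])
    posterior = hybrid_bayes_posterior(prior, likelihood)
    print(posterior.argmax(axis=1))   # per-pixel class labels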
Styles APA, Harvard, Vancouver, ISO, etc.
34

(7480409), RISHIKESH MAHESH BAGWE. « MODELING AND ENERGY MANAGEMENT OF HYBRID ELECTRIC VEHICLES ». Thesis, 2019.

Trouver le texte intégral
Résumé :
This thesis proposes an Adaptive Rule-Based Energy Management Strategy (ARBS EMS) for a parallel hybrid electric vehicle (P-HEV). The strategy can efficiently be deployed online, without the need for complete knowledge of the entire duty cycle, in order to optimize fuel consumption. ARBS improves upon the established Preliminary Rule-Based Strategy (PRBS), which has been adopted in commercial vehicles. Compared to PRBS, the aim of ARBS is to maintain the battery State of Charge (SOC), which ensures the availability of the battery over extended distances. The proposed strategy prevents the engine from operating in highly inefficient regions and reduces the total equivalent fuel consumption of the vehicle. Using an HEV model developed in Simulink, both the proposed ARBS and the established PRBS strategies are compared across eight short duty cycles and one long duty cycle with urban and highway characteristics. Compared to PRBS, the results show that, on average, a 1.19% improvement in miles per gallon equivalent (MPGe) is obtained with ARBS when the initial battery SOC is 63% for short duty cycles. However, as opposed to PRBS, ARBS has the advantage of not requiring any prior knowledge of the engine efficiency maps in order to achieve optimal performance. This characteristic can help in the systematic aftermarket hybridization of heavy-duty vehicles.
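An extremely simplified rule-based power-split rule in the spirit of such strategies; the thresholds, variable names and SOC window are invented, and the actual ARBS logic in the thesis is more elaborate and adaptive.

    def split_power(p_demand_kw, soc, soc_low=0.55, soc_high=0.70, engine_min_kw=10.0):
        """Return (engine_kw, motor_kw) for one time step.

        Keeps the battery SOC inside a target window and avoids running the
        engine at very low, inefficient loads. Purely illustrative numbers.
        """
        if soc < soc_low:
            # Battery depleted: engine supplies the load plus some charging power.
            engine = max(p_demand_kw + 5.0, engine_min_kw)
        elif soc > soc_high and p_demand_kw < engine_min_kw:
            # Battery full and load light: run electric-only.
            engine = 0.0
        else:
            # Otherwise let the engine cover the load only when it can do so
            # above its minimum efficient operating point.
            engine = p_demand_kw if p_demand_kw >= engine_min_kw else 0.0
        motor = p_demand_kw - engine       # negative values recharge the battery
        return engine, motor

    print(split_power(p_demand_kw=8.0, soc=0.75))   # -> (0.0, 8.0), electric-only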
Styles APA, Harvard, Vancouver, ISO, etc.
35

(10702884), Mohammed Naziru Issahaq. « HYBRID CUTTING EXTRUSION OF COMMERCIALLY PURE ALUMINUM ALLOYS ». Thesis, 2021.

Trouver le texte intégral
Résumé :

Commercial sheets, strips and wires are currently produced from aluminum alloys by multi-step deformation processing involving rolling and drawing. These processes typically require 10 to 20 steps of deformation, since the plastic strain or reduction that can be imposed in a single step is limited by material workability and process mechanics. In this work, a fundamentally different, single-step approach is demonstrated for producing these aluminum products using machining-based deformation that also enables higher material workability in the formed product. Two process routes are proposed: 1) chip formation by Free Machining (FM), and 2) constrained chip formation by Hybrid Cutting Extrusion (HCE).

Using the very soft and highly ductile commercially pure aluminum alloys as representative systems, various material flow transitions in response to the concentrated shear deformation are observed in FM, including plastic instabilities. The flow instabilities, usually manifested as folds of varying amplitudes on the unconstrained surface of the chips, are features that limit the desirability of the chip and its potential use for strip applications. To suppress these instabilities, two strategies, both involving deformation geometry design, are outlined: 1) using a large positive rake angle makes the flow more laminar and thus substantially reduces the flow instabilities; it also makes it possible to adopt light rolling/drawing reductions to smooth the residual surface folds and improve the strip finish. 2) using a constraining tool coupled with the cutting tool, in what is referred to as HCE, suppresses the initial instability that leads to plastic buckling of the material, thereby making the flow laminar and improving the quality of the strips.

Key property attributes of the chips produced by the shear-based deformation processes such as improved mechanical properties and in the case of HCE, superior surface finish compared to conventional processes of rolling/drawing are highlighted. Implications for commercial manufacture of sheet, strip and wire products are discussed.

Styles APA, Harvard, Vancouver, ISO, etc.
36

(9786557), Maureen Chapman. « An exploration of leadership of registered nurses in clinical settings ». Thesis, 2017. https://figshare.com/articles/thesis/An_exploration_of_leadership_of_registered_nurses_in_clinical_settings/13444769.

Texte intégral
Résumé :
Nurses provide leadership at various levels throughout healthcare organisations, as Directors of Nursing and Chief Nursing Officers as well as clinical leaders at the unit level. Existing research into nursing leadership has mainly focussed on transformational and transactional forms of leadership, at the expense of exploring more contemporary forms of leadership. Furthermore, there is limited research into the experiences of leadership of registered nurses working in clinical settings. This research explored the extent and appropriateness of four forms of leadership, namely transactional, transformational, distributed and hybrid leadership, as they apply to clinical nursing.
Styles APA, Harvard, Vancouver, ISO, etc.
37

(7036457), Yansong Chen. « THE OPTIMIZATION OF THE ELECTRICAL SYSTEM VOLTAGE RANGE OF MILD HYBRID ELECTRIC VEHICLE ». Thesis, 2020.

Trouver le texte intégral
Résumé :

The optimization of the electrical system voltage range of a mild hybrid electric vehicle is examined in this research study. The objective is to evaluate and propose an optimized vehicle voltage level for the mild hybrid electric vehicle from both technical and economic aspects. The approach is to evaluate the fuel economy improvement of the mild hybrid electric vehicle at various voltage levels for a cost-benefit study. The evaluation is conducted at the vehicle system level, with discussion of component selection for system optimization. Autonomie, a simulation tool widely used by academia and the automotive industry, is used for the vehicle simulation and fuel economy evaluation. The cost analysis is based on the system cost, factoring in component costs at forecasted production volumes.

The driver for this study is to propose an optimized voltage for the mild hybrid electric vehicle so that vehicle manufacturers and suppliers can standardize the implementation to meet fuel economy and emission requirements and vehicle power demand. Standardizing the vehicle voltage level can improve design and development efficiency and reusability, and reduce the cost of developing non-standard voltage levels for mild hybrid vehicles. The synergy of a standardized voltage level for the mild hybrid vehicle can accelerate technology implementation toward mass production to meet regulatory emission and fuel economy requirements.

Styles APA, Harvard, Vancouver, ISO, etc.
38

(7241471), Michael J. Dziekan. « DESIGN OF A HYBRID HYDROGEN-ON-DEMAND AND PRIMARY BATTERY ELECTRIC VEHICLE ». Thesis, 2021.

Trouver le texte intégral
Résumé :

In recent years lithium-ion battery electric vehicles and stored hydrogen electric vehicles have been developed to address the ever-present threat of climate change and global warming. These technologies have failed to achieve profitability at costs consumers are willing to bear when purchasing a vehicle. IFBattery, Inc. has developed a unique primary battery chemistry which simultaneously produces both electricity and hydrogen-on-demand while being both low cost and without carbon emissions. In order to determine the feasibility of the IFBattery chemistry for mobile applications, a prototype golf cart was constructed as the first public application of IFBattery technology. The legacy lead acid batteries of the prototype golf cart were replaced with an IFBattery chemistry tuned to primarily produce hydrogen-on-demand with supplemental electricity. Hydrogen produced by the IFBattery was purified and then fed into a hydrogen fuel cell where electricity was produced to power the vehicle. Electricity from the IFBattery was converted to the common voltage of the golf cart and also used to power the vehicle. Validation testing of the IFBattery powered golf cart demonstrated favorable results as an alternative to both lithium-ion battery and stored hydrogen technologies, and displayed potential for future applications.

Styles APA, Harvard, Vancouver, ISO, etc.
39

(7043354), Himal Agrawal. « Manufacturing and Testing of Composite Hybrid Leaf Spring for Automotive Applications ». Thesis, 2019.

Trouver le texte intégral
Résumé :
Leaf springs are a part of the suspension system, attached between the axle and the chassis of the vehicle to support weight and provide the shock-absorbing capacity of the vehicle. For more than half a century leaf springs have been made of steel, which increases the weight of the vehicle and is prone to rusting and failure. The current study explores the feasibility of a composite leaf spring for weight reduction by designing, manufacturing and testing the leaf spring for the required load cases. An off-the-shelf Ford F-150 leaf spring is chosen as the basis for the composite hybrid spring prototype. The composite hybrid prototype was made by replacing all the leaves except the first with glass fiber unidirectional laminate. Fatigue tests are then performed on the steel and composite hybrid leaf springs to observe the failure locations and mechanisms, if any. High-frequency fatigue tests were then done on composite beams with varying aspect ratio in a displacement-controlled mode to observe the fatigue location and mechanism of the glass fiber composite laminate alone. It was observed that specimens with low aspect ratio failed from crack propagation initiated at stress concentrations at the loading tip in 3-point cyclic flexure tests, and shear forces played a dominant role in the propagation of the crack. Specimens with high aspect ratio under the same loading did not fail in cyclic loading and preserved the same stiffness as before the cyclic loading. The preliminary fatigue results for high-aspect-ratio composite beams predict a promising future for multi-leaf composite springs.
Styles APA, Harvard, Vancouver, ISO, etc.
40

(9833834), Steven Senini. « The application of hybrid active filter technology to unbalanced traction loads ». Thesis, 1999. https://figshare.com/articles/thesis/The_application_of_hybrid_active_filter_technology_to_unbalanced_traction_loads/13424726.

Texte intégral
Résumé :
This thesis examines the application of hybrid active filter technology to large unbalanced loads such as those presented by electric railway traction systems. The thesis provides a systematic overview of possible hybrid topologies which will provide improvement over existing topologies. Several new topologies are identified in this process. The topologies are analysed to demonstrate the potential reduction in ratings of the active element achieved by the use of the hybrid topologies. A weakness is identified in the areas of signal detection and control system modelling. These areas are also addressed in the thesis and the modelling approach presented is believed to be new in this area. The modelling approach is demonstrated using one topology identified as having the lowest active element ratings. The analysis, operation and control algorithms are demonstrated using both simulation studies and experimental studies. The theoretical, simulated and experimental results show good correlation. The concept of duality is used to analyse and explain the operation of several new topologies identified as useful for harmonic isolation between two distorted buses. The use of hybrid topologies for harmonic isolation has not been seen before in the literature. These topologies are demonstrated using simulation studies.
Styles APA, Harvard, Vancouver, ISO, etc.
41

(11153853), Tyler A. Swedes. « Electrification of Diesel-Based Powertrains for Heavy Vehicles ». Thesis, 2021.

Trouver le texte intégral
Résumé :
In recent decades as environmental concerns and the cost and availability of fossil fuels have become more pressing issues, the need to extract more work from each drop of fuel has increased accordingly. Electrification has been identified as a way to address these issues in vehicles powered by internal combustion engines, as it allows existing engines to be operated more efficiently, reducing overall fuel consumption. Two applications of electrification are discussed in the work presented: a series-electric hybrid powertrain from an on-road class 8 truck, and an electrically supercharged diesel engine for use in the series hybrid power system of a wheel loader.
The first application is an experimental powertrain developed by a small start-up company for use in highway trucks. The work presented in this thesis shows test results from routes along (1) Interstate 75 between Florence, KY, and Lexington, KY, and (2) Interstates 74 and 70 east of Indianapolis, during which tests the startup collected power flow data from the vehicle's motor, generator, and battery, and three-dimensional position data from a GPS system. Based on these data, it was determined that the engine-driven generator provided an average of 15% more propulsive energy than required due to electrical losses in the drivetrain. Some of these losses occurred in the power electronics, which are shown to be 82% - 92% efficient depending on power flow direction, but the battery showed significant signs of wear, accounting for the remainder of these electrical losses. Overall, most of the system's fuel savings came from its regenerative braking capability, which recaptured between 3% and 12% of the total drive energy output. Routes with significant grade changes maximize this energy recapture percentage, but it is shown that minimizing drag and rolling resistance with a more modern truck and trailer could further increase this energy capture to between 8% and 18%.
In the second application, an electrified air handling system is added to a 4.5L engine, allowing it to replace the 6.8L engine in John Deere's 644K hybrid wheel loader. Most of the fuel savings arise from downsizing the engine, so in this case an electrically driven supercharger (eBooster) allows the engine to meet the peak torque requirements of the larger, original engine. In this thesis, a control-oriented nonlinear state space model of the modified 4.5L engine is presented and linearized for use in designing a robust, multi-input multi-output (MIMO) controller which commands the engine's fueling rate, eBooster, eBooster bypass valve, exhaust gas recirculation (EGR) valve, and exhaust throttle. This integrated control strategy will ultimately allow superior tracking of engine speed, EGR fraction, and air-fuel ratio (AFR) targets, but these performance gains over independent single-input single-output control loops for each component demand linear models that accurately represent the engine's gas exchange dynamics. To address this, a physics-based model is presented and linearized to simulate pressures, temperatures, and shaft speeds based on sub-models for exhaust temperature, cylinder charge flow, valve flow, compressor flow, turbine flow, compressor power, and turbine power. The nonlinear model matches the truth reference engine model over the 1200 rpm - 2000 rpm and 100 Nm - 500 Nm speed and torque envelope of interest within 10% in steady state and 20% in transient conditions. Two linear models represent the full engine's dynamics over this speed and torque range, and these models match the truth reference model within 20% in the middle of the operating envelope. However, specifically at (1) low load for any speed and (2) high load at high speed, the linear models diverge from the nonlinear and truth reference models due to nonlinear engine dynamics lost in linearization. Nevertheless, these discrepancies at the edges of the engine's operating envelope are acceptable for control design, and if greater accuracy is needed, additional linear models can be generated to capture the engine's dynamics in this region.
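As a generic illustration of the linearization step, a nonlinear state-space model x_dot = f(x, u) can be linearized numerically about an operating point with finite-difference Jacobians; the two-state dynamics function below is a stand-in, not the engine model developed in the thesis.

    import numpy as np

    def linearize(f, x0, u0, eps=1e-6):
        """Finite-difference A = df/dx and B = df/du about (x0, u0)."""
        n, m = len(x0), len(u0)
        f0 = f(x0, u0)
        A = np.zeros((n, n))
        B = np.zeros((n, m))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            A[:, i] = (f(x0 + dx, u0) - f0) / eps
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f0) / eps
        return A, B

    # Stand-in two-state dynamics (e.g. manifold pressure and turbo speed).
    def f(x, u):
        return np.array([-0.5 * x[0] + 0.2 * x[1] * u[0],
                         -0.1 * x[1] ** 2 + u[1]])

    A, B = linearize(f, x0=np.array([1.0, 2.0]), u0=np.array([0.3, 0.1]))
    print(A); print(B)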
Styles APA, Harvard, Vancouver, ISO, etc.
42

(6872132), Doosan Back. « APPLICATIONS OF MICROHEATER/RESISTANCE TEMPERATURE DETECTOR AND ELECTRICAL/OPTICAL CHARACTERIZATION OF METALLIC NANOWIRES WITH GRAPHENE HYBRID NETWORKS ». Thesis, 2020.

Trouver le texte intégral
Résumé :
A microheater and resistance temperature detector (RTD) are designed and fabricated for various applications. First, a hierarchical manifold microchannel heatsink with an integrated microheater and RTDs is demonstrated. Microfluidic cooling within the embedded heat sink improves heat dissipation, with two-phase operation offering the potential for dissipation of very high heat fluxes while maintaining moderate chip temperatures. To enable multi-chip stacking and other heterogeneous packaging approaches, it is important to densely integrate all fluid flow paths into the device; the details of the heatsink layouts and fabrication processes are therefore introduced. Characterization of two-phase cooling as well as the reliability of the microheater/RTDs is discussed. Second, another application of the microheater is mining-particle detection using an interdigitated capacitive sensor. While current personal monitoring devices are optimized for monitoring microscale particles, a higher-resolution technique is required to detect sub-micron and nanoscale particulate matter (PM) because of the smaller volume and mass of the particles. The detection capability of the capacitive sensor for sub-micron and nanoscale particles is presented, and an incorporated microheater improves the stability of the capacitive sensor reading under air flow and varying humidity.
This work also presents the characterization of nanomaterials such as metallic nanowires (NWs) and single-layer graphene. First, copper nanowire (CuNW)/graphene hybrid networks for transparent conductors (TCs) are investigated. Although indium tin oxide (ITO) has been widely used, demand for the next generation of TCs is increasing due to the limited supply of indium; the optical and electrical properties of the CuNW/graphene hybrid network are therefore compared with other transparent conductive materials, including ITO. Second, a silver nanowire (AgNW) growth technique using electrodeposition is introduced: vertically aligned branched AgNW arrays are made using a porous anodic alumina template, and the optical properties of the structure are discussed.

Styles APA, Harvard, Vancouver, ISO, etc.
43

(10701090), Andrew J. Fairbanks. « Novel Composites for Nonlinear Transmission Line Applications ». Thesis, 2021.

Trouver le texte intégral
Résumé :

Nonlinear transmission lines (NLTLs) provide a solid state alternative to conventional vacuum based high power microwave (HPM) sources. The three most common NLTL implementations are the lumped element, split ring resonator (SRR), and the nonlinear bulk material based NLTLs. The nonlinear bulk material implementation provides the highest power output of the three configurations, though they are limited to pulse voltages less than 50 kV; higher voltages are possible when an additional insulator is used, typically SF6 or dielectric oil, between the nonlinear material and the outer conductor. The additional insulator poses a risk of leaking if structural integrity of the outer conductor is compromised. The desire to provide a fieldable NLTL based HPM system makes the possibility of a leak problematic. The work reported here develops a composite based NLTL system that can withstand voltages higher than 50 kV and not pose a risk of catastrophic failure due to a leak while also decreasing the size and weight of the device and increasing the output power.

Composites with barium strontium titanate (BST) or nickel zinc ferrite (NZF) spherical inclusions mixed in a silicone matrix were manufactured at volume fractions ranging from 5% to 25%. The dielectric and magnetic parameters were measured from 1-4 GHz using a coaxial airline. The relative permittivity increased from 2.74±0.01 for the polydimethylsiloxane (PDMS) host material to 7.45±0.33 after combining PDMS with a 25% volume fraction of BST inclusions. The relative permittivity of BST and NZF composites was relatively constant across all measured frequencies. The relative permeability of the composites increased from 1.001±0.001 for PDMS to 1.43±0.04 for a 25% NZF composite at 1 GHz. The relative permeability of the 25% NZF composite decreased from 1.43±0.05 at 1 GHz to 1.17±0.01 at 4 GHz. The NZF samples also exhibited low dielectric and magnetic loss tangents from 0.005±0.01 to 0.091±0.015 and 0.037±0.001 to 0.20±0.038, respectively, for all volume fractions, although the dielectric loss tangent did increase with volume fraction. For BST composites, all volume fraction changes of at least 5% yielded statistically significant changes in permittivity; no changes in BST volume fraction yielded statistically significant changes in permeability. For NZF composites, the change in permittivity was statistically significant when the volume fraction varied by more than 5% and the change in permeability was statistically significant for variations in volume fraction greater than 10%. The DC electrical breakdown strength of NZF composites decreased exponentially with increasing volume fraction of NZF, while BST composites exhibited no statistically significant variation with volume fraction.

For composites containing both BST and NZF, increasing the volume fraction of either inclusion increased the permittivity with a stronger dependence on BST volume fraction. Increasing NZF volume fraction increased the magnetic permeability, while changing BST volume fraction had no effect on the composite permeability. The DC dielectric breakdown voltage decreased exponentially with increased NZF volume fraction. Adding as little as 5% BST to an NZF composite more than doubled the breakdown threshold compared to a composite containing NZF alone. For example, adding 10% BST to a 15% NZF composite increased the breakdown strength by over 800%. The combination of tunability of permittivity and permeability by managing BST and NZF volume fractions with the increased dielectric breakdown strength by introducing BST make this a promising approach for designing high power nonlinear transmission lines with input pulses of hundreds of kilovolts.

Coaxial nonlinear transmission lines are produced using composites with NZF inclusions and BST inclusions and driven by a Blumlein pulse generator with a 10 ns pulse duration and 1.5 ns risetime. Applying a 30 kV pulse using the Blumlein pulse generator resulted in frequencies ranging from 1.1 to 1.3 GHz with an output power over 20 kW from the nonlinear transmission line. The output frequencies increased with increasing volume fraction of BST, but the high power oscillations characteristic of an NLTL did not occur. Simulations using LT Spice demonstrated that an NLTL driven with a Blumlein modulator did not induce high power oscillations while driving the same NLTL with a pulse forming network did.

Finally, a composite-based NLTL could be driven directly by a high voltage power supply without a power modulator to produce oscillations both during and after the formed pulse upon reaching a critical threshold. The output frequency of the NLTLs is 1 GHz after the pulse and ranged from 950 MHz to 2.2 GHz during the pulse. These results demonstrate that the NLTL may be used as both a pulse forming line and high power microwave source, providing a novel way to reduce device size and weight, while the use of composites could provide additional flexibility in pulse output tuning.

Styles APA, Harvard, Vancouver, ISO, etc.
44

(10723164), Suki N. Zhang. « Electronic Application of Two Dimensional Materials ». Thesis, 2021.

Trouver le texte intégral
Résumé :
Recent advances in atomically thin two-dimensional materials have led to various promising technologies such as nanoelectronics, sensing, energy storage, and optoelectronics applications. Graphene with sp2-bonded carbon atoms densely packed in a honeycomb crystal lattice has attracted tremendous interest with excellent electrical, optical, mechanical, and chemical properties. In this work, graphene’s mechanical properties, chemical properties, and piezoelectric properties are explored as graphene is implemented in the automotive electrical distribution system. Graphene is useful in friction reduction, corrosion protection, and piezoelectric energy harvesting cell improvement. Besides graphene, transition metal dichalcogenides (TMDs), which are the metal atoms sandwiched between two chalcogen atoms, have also attracted much attention. Unlike graphene, many TMDs are semiconductors in nature and possess enormous potential to be used as a potential channel material in ultra-scaled field-effect transistors (FETs). In this work, chemical doping strategies are explored for the tunnel FETs applications using different metal phthalocyanines and polyethyleneimines as dopants. TMDs FETs can also be used as a selective NO2 gas sensor with a polydimethylsiloxane filter and a highly sensitive photo-interfacial gated photodetector application.
Styles APA, Harvard, Vancouver, ISO, etc.
45

(8782580), Di Wang. « Mechanical behaviors of bio-inspired composite materials with functionally graded reinforcement orientation and architectural motifs ». Thesis, 2020.

Trouver le texte intégral
Résumé :

Naturally-occurring biological materials with stiff mineralized reinforcement embedded in a ductile matrix are commonly known to achieve an excellent balance between stiffness, strength and ductility. Interestingly, nature offers a broad diversity of architectural motifs, exemplifying the multitude of ways in which exceptional mechanical properties can be achieved. Such diversity is a source of bio-inspiration and of its translation to synthetic material systems. In particular, the helicoid and the "brick and mortar" architectured materials are two key architectural motifs that we study here and use to synthesize new bio-inspired materials.

Due to the geometry mismatch (misorientation) and the incompatibility of mechanical properties between fiber and matrix materials, it is acknowledged that misoriented stiff fibers rotate within a compliant matrix under uniaxial deformation. However, the role of fiber reorientation inside the flexible matrix of helicoid composites on their mechanical behavior has not yet been extensively investigated. In the present project, fiber reorientation values of single misoriented laminae, mono-balanced laminates and helicoid architectures under uniaxial tension are calculated and compared. In the present work, we introduce a Discontinuous Fiber Helicoid (DFH) composite inspired by both the helicoid microstructure in the cuticle of the mantis shrimp and the nacreous architecture of the red abalone shell. We employ 3D-printed specimens, analytical models and finite element models to analyze and quantify in-plane fiber reorientation in helicoid architectures with different geometrical features. We also introduce additional architectures, i.e., a single unidirectional lamina and mono-balanced architectures, for comparison purposes. Compared with the associated mono-balanced architectures, helicoid architectures exhibit smaller fiber reorientation values and lower strain stiffening. The explanation for this difference is given in terms of the measured in-plane deformation of the laminae under uniaxial tension, correlated with lamina misorientation with respect to the loading direction and the lay-up sequence.

In addition to fiber (rod-like) reinforced laminates, platelet-reinforced composite materials, the "brick and mortar" architectures, are also discussed, since they can provide the same in-plane isotropic elastic modulus that helicoid architectures offer, but with a different reinforcement geometry. Previous "brick and mortar" models in the literature have provided insightful information on how these structures promote mechanisms that lead to significant improvement in toughness without sacrificing strength; however, most of these previous analyses have been focused on two-dimensional representations. In this work, we present a detailed comparative analysis that looks at the three-dimensional geometries of the platelet-like and rod-like structures. We 3D print and test rod-like and tablet-like architectures and analyze the results employing a computational and analytical micromechanical model under a dimensional analysis framework. In particular, we focus on the stiffness, strength and toughness of the resulting structures. It is revealed that besides the volume fraction and aspect ratio of the reinforcement, the effective shear and tension area in the matrix governs the mechanical behavior as well. In turn, this leads to the conclusion that rod-like microstructures exhibit better performance than tablet-like microstructures when the architecture is subjected to uniaxial load. However, rod-like microstructures tend to be much weaker and more brittle in the transverse direction. On the other hand, tablet-like architectures tend to be a much better choice for situations where biaxial load is expected.

Through varying the geometry of reinforcement and changing the orientation of reinforcement, different architectural motifs can promote in-plane mechanical properties, such as strain stiffening under uniaxial tensile, strength and toughness under biaxial tensile loading. On the other hand, the various out-of-plane orientation of the reinforcement leads to functionally graded effective indentation stiffness. The external layer of nacre shell is composed of calcite prisms with graded orientation from surface to interior. This orientation gradient leads to functionally graded Young’s modulus, which is confirmed to have higher fracture resistance than homogenous materials under mode I fracture loading act.

Similar to the graded prism orientation in the calcite layer of nacre, the helicoid architectures found in nature also exhibit gradients in their geometrical parameters. The pitch distance of the helicoid architecture is found to be functionally graded through the thickness of biological materials, including the dactyl club of the mantis shrimp and the fish scale of the coelacanth. This can be partially explained by the long-term evolution and selection of living organisms, which create high-performance biological materials from a limited set of physical, chemical and geometrical elements. This natural “design” procedure can provide a spectrum of design motifs for architectured materials.

In the present work, a linear gradient in the pitch distance of helicoid architectures, denoted functionally graded helicoid (FGH), is chosen as the initial pathway to understand the functionality of a graded pitch distance and the associated change in pitch angle. Three-point bending of short beams and low-velocity impact tests are both employed in FEA to analyze the mechanical properties of the composite materials. Both the static (three-point bending) and dynamic (low-velocity impact) tests reveal that FGH with pitch angle increasing from the surface to the interior can provide multiple superior properties simultaneously, such as peak load and toughness, whereas helicoid architectures with constant pitch angle can only provide one competitive property at a time. Specifically, helicoid architectures with a small pitch angle, such as 15 degrees, show higher toughness but a less competitive peak load under static three-point bending, while helicoid architectures with an intermediate pitch angle, larger than or equal to 22.5 degrees and smaller than 45 degrees, exhibit lower toughness but a higher peak load. This trend and the benefits of FGH are explained by analyzing the transverse shear stress distribution through the thickness in FEA, combined with analytical predictions. In the low-velocity impact tests, the projected delamination area of the helicoid architectures is observed to increase as the pitch angle decreases. In addition, laminates with specific pitch angles, namely 45 degrees (the classical quasi-isotropic laminate), 60 degrees (a specific angle-ply) and 90 degrees (cross-ply), are designed for comparison with the helicoid architectures and the FGH.
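
To make the layup definitions concrete, the following sketch generates ply-angle sequences for a constant-pitch helicoid and for a linearly graded (FGH) variant in which the inter-ply rotation grows from the outer surface toward the interior; the ply count and angle bounds are arbitrary placeholders rather than the layups simulated in the thesis.

```python
import numpy as np

def helicoid_layup(n_plies, pitch_angle_deg):
    """Stacking sequence (deg) of a constant-pitch helicoid: each ply is
    rotated by the same pitch angle with respect to the previous one."""
    return [(i * pitch_angle_deg) % 180 for i in range(n_plies)]

def fgh_layup(n_plies, pitch_start_deg, pitch_end_deg):
    """Functionally graded helicoid: the inter-ply rotation varies linearly
    from pitch_start_deg at the outer surface to pitch_end_deg at the interior."""
    pitches = np.linspace(pitch_start_deg, pitch_end_deg, n_plies - 1)
    angles = [0.0]
    for p in pitches:
        angles.append((angles[-1] + p) % 180)
    return angles

print("constant 15 deg :", helicoid_layup(8, 15.0))
print("FGH 15 -> 45 deg:", [round(a, 1) for a in fgh_layup(8, 15.0, 45.0)])
```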

Styles APA, Harvard, Vancouver, ISO, etc.
46

(5929505), Eduardo Barocio. « FUSION BONDING OF FIBER REINFORCED SEMI-CRYSTALLINE POLYMERS IN EXTRUSION DEPOSITION ADDITIVE MANUFACTURING ». Thesis, 2020.

Trouver le texte intégral
Résumé :

Extrusion deposition additive manufacturing (EDAM) has enabled upscaling the dimensions of the objects that can be additively manufactured from the desktop scale to the size of a full vehicle. The EDAM process consists of depositing beads of molten material in a layer-by-layer manner, thereby giving rise to temperature gradients during part manufacturing. To investigate the phenomena involved in EDAM, the Composites Additive Manufacturing Research Instrument (CAMRI) was developed as part of this project. CAMRI provided unparalleled flexibility for conducting controlled experiments with carbon fiber reinforced semi-crystalline polymers and served as a validation platform for the work presented in this dissertation.

Since the EDAM process is highly non-isothermal, modeling heat transfer in EDAM is of paramount importance for predicting interlayer bonding and the evolution of internal stresses during part manufacturing. Hence, local heat transfer mechanisms were characterized and implemented in a framework for EDAM process simulations. These include local convection conditions, heat losses during material compaction, and the heat of crystallization or melting. Numerical predictions of the temperature evolution during the printing of a part were in good agreement with experimental measurements, with only the radiation ambient temperature requiring calibration.
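
A minimal lumped-capacitance sketch of the competing mechanisms listed above (convective and radiative losses plus an exothermic crystallization term) is given below; the material properties, boundary conditions and first-order crystallization kinetics are placeholder assumptions, not the characterization reported in this dissertation.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Placeholder bead properties and boundary conditions (not characterized values).
rho, cp, Hc = 1200.0, 1800.0, 80e3       # density, specific heat, heat of crystallization
h, eps = 30.0, 0.9                        # convection coefficient, emissivity
A_per_V = 400.0                           # exposed area per unit volume of bead, 1/m
T_amb, T_rad = 350.0, 350.0               # convective and radiative ambient temperatures, K
T_melt, k_c = 550.0, 0.05                 # crystallization onset temperature, rate constant (1/s)

def cool_bead(T0=620.0, dt=0.01, t_end=60.0):
    """Lumped-capacitance cooling of a deposited bead with convection,
    radiation and a first-order crystallization heat source."""
    T, X, history = T0, 0.0, []
    for step in range(int(t_end / dt)):
        q_conv = h * A_per_V * (T - T_amb)                    # W/m^3 lost by convection
        q_rad = eps * SIGMA * A_per_V * (T**4 - T_rad**4)     # W/m^3 lost by radiation
        dX = k_c * (1.0 - X) * dt if T < T_melt else 0.0      # crystallinity increment
        q_cryst = rho * Hc * dX / dt                          # W/m^3 released on crystallizing
        T += dt * (q_cryst - q_conv - q_rad) / (rho * cp)
        X += dX
        history.append((step * dt, T, X))
    return history

for t, T, X in cool_bead()[::1000]:
    print(f"t = {t:5.1f} s  T = {T:6.1f} K  crystallinity = {X:.2f}")
```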

In the absence of fibers reinforcing the interface between adjacent layers, the bond developed through the polymer is the primary mechanism governing the interlayer fracture properties of printed parts. Hence, a fusion bonding model was extended to predict the evolution of interlayer fracture properties in EDAM with semi-crystalline polymer composites. The fusion bonding model was characterized and implemented in the framework for EDAM process simulation. Experimental verification of the numerical predictions obtained with the fusion bonding model for interlayer fracture properties is provided. Finally, this fusion bonding model bridges the gap between processing conditions and interlayer fracture properties, which is extremely valuable for predicting regions with weak interlayer bonding within a part.
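
Fusion bonding models of this kind are often written as a non-isothermal degree of healing, D_h(t) = [∫ dt'/t_w(T(t'))]^(1/4), where the isothermal welding time t_w follows an Arrhenius-type temperature dependence. The sketch below illustrates that general form with placeholder parameters; it is not the characterized model of this dissertation.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def welding_time(T, tw_ref=1.0, T_ref=520.0, Ea=60e3):
    """Arrhenius-type temperature dependence of the isothermal welding time
    (placeholder parameters): t_w(T) = tw_ref * exp(Ea/R * (1/T - 1/T_ref))."""
    return tw_ref * np.exp(Ea / R * (1.0 / T - 1.0 / T_ref))

def degree_of_healing(times, temps, T_min=460.0):
    """Non-isothermal degree of healing D_h = [ integral dt / t_w(T) ]^(1/4),
    accumulated only while the interface is above a no-flow temperature T_min."""
    Dh, integral = [], 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        T = 0.5 * (temps[i] + temps[i - 1])
        if T > T_min:
            integral += dt / welding_time(T)
        Dh.append(min(1.0, integral ** 0.25))
    return np.array(Dh)

# Example interface temperature history: exponential cooling from 600 K toward 400 K.
t = np.linspace(0.0, 30.0, 3001)
T = 400.0 + 200.0 * np.exp(-t / 8.0)
print(f"final degree of healing ~ {degree_of_healing(t, T)[-1]:.2f}")
```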
Styles APA, Harvard, Vancouver, ISO, etc.
47

Ganchev, Todor. « Αναγνώριση ομιλητή ». 2005. http://nemertes.lis.upatras.gr/jspui/handle/10889/308.

Texte intégral
Résumé :
This dissertation deals with speaker recognition in real-world conditions. The main focus falls on: (1) the evaluation of various speech feature extraction approaches, (2) the reduction of the impact of environmental interference on speaker recognition performance, and (3) the study of alternatives to the present state-of-the-art classification techniques. Specifically, within (1), a novel wavelet packet-based speech feature extraction scheme fine-tuned for speaker recognition is proposed. It is derived in an objective manner with respect to speaker recognition performance, in contrast to the state-of-the-art MFCC scheme, which is based on an approximation of human auditory perception. Next, within (2), an advanced noise-robust feature extraction scheme based on MFCC is offered for improving speaker recognition performance in real-world environments. In brief, a model-based noise reduction technique adapted to the specifics of the speaker verification task is incorporated directly into the MFCC computation scheme. This approach demonstrated a significant advantage in real-world, fast-varying environments. Finally, within (3), two novel classifiers referred to as the Locally Recurrent Probabilistic Neural Network (LR PNN) and the Generalized Locally Recurrent Probabilistic Neural Network (GLR PNN) are introduced. They are hybrids between the Recurrent Neural Network (RNN) and the Probabilistic Neural Network (PNN) and combine the virtues of the generative and discriminative classification approaches. Moreover, these novel neural networks are sensitive to temporal and spatial correlations among consecutive inputs and are therefore capable of exploiting the inter-frame correlations among speech features derived for successive speech frames. In the experiments, the LR PNN and GLR PNN architectures were demonstrated to provide a benefit in terms of performance when compared to the original PNN.
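
Since the LR PNN and GLR PNN are built on the Probabilistic Neural Network, a minimal Parzen-window PNN (the generative baseline that those hybrids extend with recurrent connections) is sketched below; the two-dimensional features and the smoothing parameter are arbitrary placeholders, and the recurrent layers are not reproduced here.

```python
import numpy as np

class PNN:
    """Minimal Probabilistic Neural Network: one Gaussian Parzen kernel per
    training pattern; class scores are the averaged kernel responses."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.patterns = {c: X[y == c] for c in self.classes}
        return self

    def class_scores(self, x):
        scores = []
        for c in self.classes:
            d2 = np.sum((self.patterns[c] - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2.0 * self.sigma ** 2))))
        return np.array(scores)

    def predict(self, X):
        return np.array([self.classes[np.argmax(self.class_scores(x))] for x in X])

# Toy two-speaker example with 2-D placeholder features (not real MFCCs).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.array([[0.2, -0.1], [2.8, 3.1]])

pnn = PNN(sigma=0.7).fit(X_train, y_train)
print("predicted speakers:", pnn.predict(X_test))
```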
Styles APA, Harvard, Vancouver, ISO, etc.
48

(11036556), Yen-yu Chen. « 2D MATERIALS FOR GAS-SENSING APPLICATIONS ». Thesis, 2021.

Trouver le texte intégral
Résumé :

Two-dimensional (2D) transition-metal dichalcogenides (TMDCs) and transition metal carbides/nitrides (MXenes) have recently been receiving attention for gas sensing applications due to their high specific area and rich surface functionalities. However, using pristine 2D materials for gas-sensing applications presents some drawbacks, including high operation temperatures, low gas response, and poor selectivity, limiting their practical sensing applications. Moreover, one of the long-standing challenges of MXenes is their poor stability against hydration and oxidation in a humid environment, which negatively influences their long-term storage and applications. Many studies have reported that the sensitivity and selectivity of 2D materials can be improved by surface functionalization and hybridization with other materials.

In this work, the effects of surface functionalization and/or hybridization of these two materials classes (TMDCs and MXenes) on their gas sensing performance have been investigated. In one of the lines of research, 2D MoS2 nanoflakes were functionalized with Au nanoparticles as a sensing material, providing a performance enhancement towards sensing of volatile organic compounds (VOCs) at room temperature. Next, a nanocomposite film composed of exfoliated MoS2, single-walled carbon nanotubes, and Cu(I)−tris(mercaptoimidazolyl)borate complexes was the sensing material used for the design of a chemiresistive sensor for the selective detection of ethylene (C2H4). Moreover, the hybridization of MXene (Ti3C2Tx) and TMDC (WSe2) as gas-sensing materials was also proposed. The Ti3C2Tx/WSe2 hybrid sensor reveals high sensitivity, good selectivity, low noise level, and ultrafast response/recovery times for the detection of various VOCs. Lastly, we demonstrated a surface functionalization strategy for Ti3C2Tx with fluoroalkylsilane (FOTS) molecules, providing a superhydrophobic surface, mechanical/environmental stability, and excellent sensing performance. The strategies presented here can be an effective solution for not only improving materials' stability, but also enhancing sensor performance, shedding light on the development of next-generation field-deployable sensors.
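
For context, the response and response-time figures of merit mentioned above are typically computed directly from the resistance trace of a chemiresistive sensor; the sketch below does this for a synthetic trace and is not based on measurements from this work.

```python
import numpy as np

def gas_response(R_baseline, R_gas):
    """Relative resistance change on exposure, |dR|/R0, in percent."""
    return 100.0 * abs(R_gas - R_baseline) / R_baseline

def response_time_t90(t, R, t_on):
    """Time after gas exposure at t_on for the resistance to cover 90 % of the
    change from its pre-exposure value to its final (steady) value."""
    R0, R_final = R[t < t_on][-1], R[-1]
    target = R0 + 0.9 * (R_final - R0)
    crossed = t[(t >= t_on) & ((R - target) * np.sign(R_final - R0) >= 0)]
    return crossed[0] - t_on

# Synthetic exposure: baseline 100 kOhm, exponential drop to 80 kOhm after t = 10 s.
t = np.linspace(0, 60, 601)
R = np.where(t < 10, 100.0, 80.0 + 20.0 * np.exp(-(t - 10) / 5.0))
print(f"response ~ {gas_response(100.0, R[-1]):.1f} %")
print(f"t90 ~ {response_time_t90(t, R, 10.0):.1f} s")
```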

Styles APA, Harvard, Vancouver, ISO, etc.
49

(6532391), Nicolas Guarin-Zapata. « Modeling and Analysis of Wave and Damaging Phenomena in Biological and Bioinspired Materials ». Thesis, 2021.

Trouver le texte intégral
Résumé :

There is current interest in exploring novel microstructural architectures that take advantage of the response of independent phases. Current guidelines in materials design are based not just on changing the properties of the different phases but also on modifying their underlying architecture. Hence, the mechanical behavior of composite materials can be adjusted by designing microstructures that alternate stiff and flexible constituents, combined with well-designed architectures. One source of inspiration for achieving these designs is Nature, where biologically mineralized composites can be taken as examples for the design of next-generation structural materials due to their low density, high strength, and toughness currently unmatched by engineering technologies.


The present work focuses on the modeling of biologically inspired composites, where the source of inspiration is the dactyl club of the stomatopod. In particular, we built computational models for different regions of the dactyl club, namely the periodic and impact regions. This research thus aimed to analyze the effect of the microstructures present in the impact and periodic regions on the impact resistance associated with the materials in the appendage of stomatopods. The main contributions of this work are twofold. First, we built a model that helped to study wave propagation in the periodic region. This helped to identify possible bandgaps and their influence on wave propagation through the material. We later extended what we learned from this material to study bandgap tuning in bioinspired composites. Second, we helped to unveil new microstructural features in the impact region of the dactyl club, specifically a sinusoidally helicoidal composite and a bicontinuous particulate layer. For these structural features, we developed finite element models to understand their mechanical behavior.
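
As a lightweight illustration of the band-gap analysis mentioned above, the sketch below applies the transfer-matrix method to longitudinal waves in an infinite one-dimensional periodic bilayer: frequencies at which |trace(T)|/2 exceeds unity violate the Bloch condition and fall inside a band gap. The layer properties are generic stiff/compliant placeholders, not the dactyl-club constituents, and the thesis itself relies on higher-dimensional finite element models.

```python
import numpy as np

def layer_matrix(E, rho, d, omega):
    """Transfer matrix of one elastic layer for the state vector (u, sigma)
    of a 1D longitudinal wave at angular frequency omega."""
    c = np.sqrt(E / rho)
    k = omega / c
    return np.array([[np.cos(k * d), np.sin(k * d) / (E * k)],
                     [-E * k * np.sin(k * d), np.cos(k * d)]])

# Placeholder stiff/compliant bilayer (SI units), loosely 'mineral-like' and 'polymer-like'.
E1, rho1, d1 = 60e9, 3000.0, 50e-6
E2, rho2, d2 = 3e9, 1200.0, 50e-6

freqs = np.linspace(1e4, 3e7, 4000)          # Hz
in_gap = []
for f in freqs:
    w = 2.0 * np.pi * f
    T = layer_matrix(E2, rho2, d2, w) @ layer_matrix(E1, rho1, d1, w)
    in_gap.append(abs(np.trace(T)) / 2.0 > 1.0)   # Bloch condition violated -> band gap

edges = np.flatnonzero(np.diff(np.array(in_gap).astype(int)))
print("approximate band-gap edges (MHz):", np.round(freqs[edges] / 1e6, 2))
```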


The results of this work help to elucidate some new microstructures and present guidelines for the design of architectured materials. By combining current synthesis and advanced manufacturing methods with design elements from these biological structures, we can realize potential blueprints for a new generation of advanced materials with a broad range of applications. Possible applications include impact- and vibration-resistant coatings for buildings, body armor, aircraft, and automobiles, as well as abrasion- and impact-resistant wind turbines.


Styles APA, Harvard, Vancouver, ISO, etc.
50

(7026707), Siddharth Saksena. « Integrated Flood Modeling for Improved Understanding of River-Floodplain Hydrodynamics : Moving beyond Traditional Flood Mapping ». Thesis, 2019.

Trouver le texte intégral
Résumé :
With increasing focus on large-scale planning and allocation of resources for protection against future flood risk, it is necessary to analyze and address the deficiencies in the conventional flood modeling approach through a better understanding of the interactions between river hydrodynamics and subsurface processes. Recent studies have shown that it is possible to improve flood inundation modeling and mapping using physically-based integrated models that incorporate observable data through assimilation and simulate hydrologic fluxes using the fundamental laws of conservation of mass at multiple spatiotemporal scales. However, despite the significance of integrated modeling in hydrology, it has received relatively little attention within the context of flood hazard. The overall aim of this dissertation is to study the heterogeneity in complex physical processes that govern the watershed response during flooding and incorporate these effects in integrated models across large scales for improved flood risk estimation. Specifically, this dissertation addresses the following questions: (1) Can physical process incorporation using integrated models improve the characterization of antecedent conditions and increase the accuracy of the watershed response to flood events? (2) What factors need to be considered for characterizing scale-dependent physical processes in integrated models across large watersheds? (3) How can the computational efficiency and process representation be improved for modeling flood events at large scales? (4) Can the applicability of integrated models be improved for capturing the hydrodynamics of unprecedented flood events in complex urban systems?

To understand the combined effect of surface-subsurface hydrology and hydrodynamics on streamflow generation and subsequent inundation during floods, the first objective incorporates an integrated surface water-groundwater (SW-GW) modeling approach for simulating flood conditions. The results suggest that an integrated model provides a more realistic simulation of flood hydrodynamics for different antecedent soil conditions. Overall, the findings suggest that the current practice of simulating floods, which assumes an impervious surface, may not provide realistic estimates of flood inundation, and that an integrated approach incorporating all the hydrologic and hydraulic processes in the river system should be adopted.

The second objective focuses on providing solutions to better characterize scale-dependent processes in integrated models by comparing two model structures across two spatial scales and analyzing the changes in flood response. The results indicate that, since the characteristic length scales of GW processes are larger than those of SW processes, the intrinsic scale (or resolution) of GW in integrated models should be coarser than that of SW. The results also highlight the degradation of streamflow prediction when a single channel roughness is used as the stream length scales increase. A channel roughness distributed along the stream length improves the modeled basin response. Further, the results highlight the ability of a dimensionless parameter η1, representing the ratio of the reach length in the study region to the maximum length of the single stream draining at that point, to identify which streams may require a distributed channel roughness.
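
A small sketch of how a parameter like η1 could be computed and used as a screening criterion is given below; the threshold value is an arbitrary placeholder, not one calibrated in the dissertation.

```python
def eta1(reach_length_km, max_single_stream_length_km):
    """Dimensionless parameter: ratio of the reach length in the study region
    to the maximum length of the single stream draining at that point."""
    return reach_length_km / max_single_stream_length_km

def needs_distributed_roughness(reach_length_km, max_single_stream_length_km,
                                threshold=0.5):
    """Flag a stream for a distributed (reach-varying) channel roughness when
    eta1 exceeds a screening threshold (placeholder value)."""
    return eta1(reach_length_km, max_single_stream_length_km) > threshold

# Hypothetical reaches (lengths in km), purely illustrative.
for name, reach, longest in [("Reach A", 12.0, 80.0), ("Reach B", 55.0, 80.0)]:
    print(name, "eta1 =", round(eta1(reach, longest), 2),
          "-> distributed roughness:", needs_distributed_roughness(reach, longest))
```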

The third objective presents a hybrid flood modeling approach that combines the advantages of loosely-coupled (‘downward’) and integrated (‘upward’) modeling approaches by coupling empirically-based and physically-based approaches within a watershed. The computational efficiency and accuracy of the proposed hybrid modeling approach are tested across three watersheds in Indiana using multiple flood events and comparing the results with fully-integrated models. Overall, the hybrid modeling approach yields performance comparable to a fully-integrated approach at a much higher computational efficiency, while at the same time providing objective-oriented flexibility to the modeler.
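
To illustrate the ‘downward’/‘upward’ coupling idea, the sketch below chains an empirical SCS curve-number runoff estimate (standing in for the loosely-coupled, empirically-based side) into a simple linear-reservoir routing step (standing in for the physically-based hydraulic side); both components and all parameter values are generic textbook stand-ins, not the models coupled in this dissertation.

```python
import numpy as np

def scs_runoff_mm(cum_precip_mm, curve_number):
    """Empirical SCS curve-number runoff depth (mm) for a cumulative storm depth (mm)."""
    S = 25400.0 / curve_number - 254.0      # potential maximum retention, mm
    Ia = 0.2 * S                            # initial abstraction, mm
    if cum_precip_mm <= Ia:
        return 0.0
    return (cum_precip_mm - Ia) ** 2 / (cum_precip_mm + 0.8 * S)

def route_linear_reservoir(inflow_mm, k_hours=3.0, dt_hours=1.0):
    """Stand-in for the hydraulic component: a linear reservoir with
    residence time k_hours routing an hourly inflow series."""
    storage, outflow = 0.0, []
    for q_in in inflow_mm:
        q_out = storage / k_hours
        storage += dt_hours * (q_in - q_out)
        outflow.append(q_out)
    return outflow

# Hybrid chain: incremental runoff from the empirical model feeds the routing step.
rain = np.array([0.0, 5.0, 20.0, 35.0, 10.0, 0.0, 0.0, 0.0])   # mm per hour, synthetic storm
cum_runoff = [scs_runoff_mm(P, curve_number=80) for P in np.cumsum(rain)]
incremental = np.diff(np.concatenate(([0.0], cum_runoff)))
print("routed hydrograph (mm/h):", [round(q, 2) for q in route_linear_reservoir(incremental)])
```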

The fourth objective presents a physically-based but computationally-efficient approach for modeling unprecedented flood events at large scales in complex urban systems. The application of the proposed approach results in accurate simulation of large-scale flood hydrodynamics, as demonstrated using Hurricane Harvey as the test case. The results also suggest that the ability to control mesh development using the proposed flexible model structure, so that important physical and hydraulic features are incorporated, is as important as the integration of distributed hydrology and hydrodynamics.
Styles APA, Harvard, Vancouver, ISO, etc.