Dissertations / Theses on the topic 'Least-squares support vector machine'

Consult the top 50 dissertations / theses for your research on the topic 'Least-squares support vector machine.'

1

Zigic, Ljiljana. "Direct L2 Support Vector Machine." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4274.

Full text
Abstract:
This dissertation introduces a novel model for solving the L2 support vector machine, dubbed the Direct L2 Support Vector Machine (DL2 SVM). DL2 SVM is a new classification model that transforms the SVM's underlying quadratic programming problem into a system of linear equations with nonnegativity constraints. The resulting system has a symmetric positive definite matrix, and its solution vector must be nonnegative. Furthermore, this dissertation introduces a novel algorithm, dubbed the Non-Negative Iterative Single Data Algorithm (NN ISDA), which solves DL2 SVM's constrained system of equations. This solver shows significant speedup over several other state-of-the-art algorithms, and the training-time improvement comes at no cost: accuracy is kept at the same level. All the experiments that support this claim were conducted on various datasets within a strict double cross-validation scheme. DL2 SVM solved with NN ISDA has faster training time on both medium and large datasets. In addition to the comprehensive DL2 SVM model, we introduce and derive three of its variants. Three different solvers for DL2's system of linear equations with nonnegativity constraints were implemented, presented and compared in this dissertation.
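As context for the constrained system described in this abstract, the following minimal sketch (illustrative only; not the dissertation's NN ISDA solver) solves a toy system of linear equations with a symmetric positive definite matrix under nonnegativity constraints, using SciPy's nonnegative least squares routine; the matrix and right-hand side are made-up data.

# Toy illustration: solve A x = b with x >= 0, where A is symmetric
# positive definite, via nonnegative least squares (scipy.optimize.nnls).
# This is a stand-in for the DL2 SVM system, not the NN ISDA algorithm.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))
A = M.T @ M + 0.1 * np.eye(5)    # symmetric positive definite by construction
b = rng.standard_normal(5)

x, residual_norm = nnls(A, b)    # minimizes ||A x - b|| subject to x >= 0
print("solution:", x)            # every entry is >= 0
print("residual norm:", residual_norm)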
2

Li, Ke. "Automotive engine tuning using least-squares support vector machines and evolutionary optimization." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2580667.

Full text
3

Khawaja, Taimoor Saleem. "A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34758.

Full text
Abstract:
A high-belief, low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of the SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis, and additional levels of inference provide the much-coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on LS-SVMs. The proposed scheme uses only baseline data to construct a one-class LS-SVM which, when presented with online data, is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quickly solvable optimization problem in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate (possibly) non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected.
The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
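For orientation on the core LS-SVM machinery this framework builds on: in the standard (non-Bayesian) LS-SVR of Suykens and colleagues, training reduces to a single linear solve of the KKT system. A minimal sketch on made-up data follows; the kernel width and regularization value are illustrative assumptions, and this is not the thesis's incremental weighted Bayesian variant.

# Minimal standard LS-SVR: solve [0 1^T; 1 K+I/gamma][b; alpha] = [0; y],
# then predict f(x) = sum_i alpha_i k(x, x_i) + b. Toy data and parameters.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

gamma = 10.0                              # regularization (assumed toy value)
K = rbf_kernel(X, X)
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0                            # bordered KKT matrix
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

X_test = np.linspace(-3, 3, 5)[:, None]
f = rbf_kernel(X_test, X) @ alpha + b     # predictions at test points
print(f)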
4

Erdas, Ozlem. "Modelling and Predicting Binding Affinity of PCP-like Compounds Using Machine Learning Methods." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608792/index.pdf.

Full text
Abstract:
Machine learning methods have been promising tools in science and engineering fields, and their use in chemistry and drug design has advanced since the 1990s. In this study, molecular electrostatic potential (MEP) surfaces of PCP-like compounds are modelled and visualized in order to extract features to be used in predicting binding affinity. In the modelling, the Cartesian coordinates of MEP surface points are mapped onto a spherical self-organizing map, and the resulting maps are visualized using the values of the electrostatic potential. These values also provide features for the prediction system. Support vector machines and the partial least squares method are used for predicting the binding affinity of compounds, and the results are compared.
5

Pai, Chih-Yun. "Automatic Pain Assessment from Infants’ Crying Sounds." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6560.

Full text
Abstract:
Crying is what infants utilize to express their emotional state, and it provides parents and nurses a criterion for understanding an infant's physiological state. Many researchers have analyzed infants' crying sounds to diagnose specific diseases or identify the reasons for crying. This thesis presents an automatic crying-level assessment system to classify infants' crying sounds, recorded under realistic conditions in the Neonatal Intensive Care Unit (NICU), as whimpering or vigorous crying. To analyze the crying signal, Welch's method and Linear Predictive Coding (LPC) are used to extract spectral features; the average and standard deviation of the frequency signal and the maximum power spectral density are the other spectral features used in classification. Three state-of-the-art classifiers, namely K-nearest Neighbors, Random Forests, and the Least Squares Support Vector Machine, are tested in this work. The highest accuracy in classifying whimpering and vigorous crying on the clean dataset, sampled 10 seconds before scoring and 5 seconds after scoring, is 90%, achieved with K-nearest Neighbors as the classifier.
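A rough sketch of the kind of pipeline this abstract describes — Welch power-spectral-density features feeding a k-nearest-neighbors classifier — is shown below on synthetic signals; the feature choices, sampling rate and parameter values are illustrative assumptions, not the thesis's exact configuration.

# Illustrative pipeline: Welch PSD features -> k-NN classifier.
# Synthetic 'whimper' vs 'vigorous cry' signals stand in for real audio.
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

fs = 8000  # assumed sampling rate

def features(x):
    f, pxx = welch(x, fs=fs, nperseg=256)
    return [x.mean(), x.std(), pxx.max()]  # simple spectral/statistical features

rng = np.random.default_rng(2)
def synth(loud):  # toy one-second signal generator
    t = np.arange(fs) / fs
    return loud * np.sin(2 * np.pi * 400 * t) + 0.2 * rng.standard_normal(fs)

X = np.array([features(synth(l)) for l in ([0.3] * 20 + [1.5] * 20)])
y = np.array([0] * 20 + [1] * 20)  # 0 = whimper, 1 = vigorous

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([features(synth(1.2))]))  # expected: [1]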
6

Yoldas, Mine. "Predicting the Effect of Hydrophobicity Surface on Binding Affinity of PCP-like Compounds Using Machine Learning Methods." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613215/index.pdf.

Full text
Abstract:
This study aims to predict the binding affinity of PCP-like compounds by means of molecular hydrophobicity. Molecular hydrophobicity is an important property which affects the binding affinity of molecules, and its values are obtained on a three-dimensional coordinate system. Our aim is to reduce the number of points on the hydrophobicity surface of the molecules; this is modeled by using self-organizing maps (SOM) and k-means clustering. The feature sets obtained from SOM and k-means clustering are each used to predict the binding affinity of molecules. Support vector regression and partial least squares regression are used for prediction.
7

Treviso, Felipe. "Modeling for the Computer-Aided Design of Long Interconnects." Doctoral thesis, Politecnico di Torino, 2022. https://hdl.handle.net/11583/2973429.

Full text
8

Melo, Davyd Bandeira de. "Algoritmos de aprendizagem para aproximação da cinemática inversa de robôs manipuladores: um estudo comparativo." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=16997.

Full text
Abstract:
This dissertation reports the results of a comprehensive comparative study involving seven machine learning algorithms applied to the task of approximating the inverse kinematic model of 3 robotic arms (planar, PUMA 560 and Motoman HP6). The evaluated algorithms are the following: Multilayer Perceptron (MLP), Extreme Learning Machine (ELM), Least Squares Support Vector Regression (LS-SVR), Minimal Learning Machine (MLM), Gaussian Processes (GP), Adaptive Network-Based Fuzzy Inference Systems (ANFIS) and Local Linear Mapping (LLM). Each algorithm is evaluated with respect to its accuracy in estimating the joint angles given the Cartesian coordinates which comprise end-effector trajectories within the robot workspace. A comprehensive evaluation of the performances of the aforementioned algorithms is carried out based on correlation analysis of the residuals. Finally, hypothesis testing procedures are also executed in order to verify whether there are significant differences in performance among the best algorithms.
9

Padilha, Carlos Alberto de Araújo. "Uma abordagem multinível usando algoritmos genéticos em um comitê de LS-SVM." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/174541.

Full text
Abstract:
Ensemble systems have been shown over recent decades to be an efficient method for increasing the accuracy and stability of learning algorithms, although their construction has one question to be elucidated: diversity. Disagreement among the models that compose the ensemble can be generated when they are built under different circumstances, such as training dataset, parameter settings and the selection of learning algorithms. The ensemble may be viewed as a structure with three levels: the input space, the base components, and the block that combines the components' responses. In this work, a multi-level approach using genetic algorithms is proposed to build an ensemble of Least Squares Support Vector Machines (LS-SVMs), performing feature selection in the input space, handling parameterization and the choice of which models will compose the ensemble at the component level, and finding the weight vector which best represents the importance of each classifier in the final response of the ensemble. In order to evaluate the performance of the proposed approach, benchmarks from the UCI Repository were used for comparison with other classification algorithms. The results obtained by our approach were also compared with some deep learning methods on the MNIST and CIFAR datasets and proved very satisfactory.
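The final combination step described above can be illustrated in a few lines; here the decision values and the weight vector are made up by hand, and the genetic-algorithm search that would evolve the weights is omitted.

# Weighted combination of ensemble decision values: the GA would evolve w;
# here w is fixed by hand to illustrate only the final aggregation step.
import numpy as np

decision_values = np.array([   # rows: 3 LS-SVM components, cols: 4 samples
    [+0.8, -0.2, +0.1, -0.9],
    [+0.5, -0.6, -0.3, -0.4],
    [-0.1, -0.3, +0.7, -0.2],
])
w = np.array([0.5, 0.3, 0.2])  # importance weights (in practice, GA-evolved)

combined = w @ decision_values        # linear combination of responses
labels = np.sign(combined)            # final binary classification
print(combined, labels)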
10

Sedaghat, Mostafa. "Modeling and Optimization of the Microwave PCB Interconnects Using Macromodel Techniques." Doctoral thesis, Politecnico di Torino, 2022. https://hdl.handle.net/11583/2973989.

Full text
11

Hennerdal, Aron. "Investigation of multivariate prediction methods for the analysis of biomarker data." Thesis, Linköping University, The Department of Physics, Chemistry and Biology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5889.

Full text
Abstract:

The paper describes predictive modelling of biomarker data stemming from patients suffering from multiple sclerosis. Improvements of multivariate analyses of the data are investigated with the goal of increasing the capability to assign samples to correct subgroups from the data alone.

The effects of different preceding scalings of the data are investigated and combinations of multivariate modelling methods and variable selection methods are evaluated. Attempts at merging the predictive capabilities of the method combinations through voting-procedures are made. A technique for improving the result of PLS-modelling, called bagging, is evaluated.

The best of the multivariate analysis methods tried are found to be Partial least squares (PLS) and Support vector machines (SVM). It is concluded that the scalings have little effect on the prediction performance for most methods. The method combinations have interesting properties – the default variable selections of the multivariate methods are not always the best. Bagging improves performance, but at a high cost. No reasons for drastically changing the work flows of the biomarker data analysis are found, but slight improvements are possible. Further research is needed.
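The bagging technique evaluated here can be sketched compactly for PLS regression: fit one PLS model per bootstrap resample and average the predictions. All data, sizes and component counts below are arbitrary illustrative choices.

# Bagging for PLS regression: average predictions of PLS models fitted
# on bootstrap resamples of the training data. All data here is synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.standard_normal((60, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(60)

n_models, n = 25, len(y)
preds = []
for _ in range(n_models):
    idx = rng.integers(0, n, size=n)          # bootstrap resample
    model = PLSRegression(n_components=2).fit(X[idx], y[idx])
    preds.append(model.predict(X).ravel())

y_bagged = np.mean(preds, axis=0)             # bagged prediction
print(np.corrcoef(y, y_bagged)[0, 1])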

12

Jansson, Daniel, and Rasmus Blomstrand. "Real-Time Prediction of Shims Dimensions in Power Transfer Units Using Machine Learning." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-45615.

Full text
13

Memon, Zain Anwer. "Novel Modeling and Simulation Concepts for Power Distribution Networks." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2922916.

Full text
14

Hussain, Sibt Ul. "Apprentissage machine pour la détection des objets." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00722632.

Full text
Abstract:
The goal of this thesis is to develop more effective practical methods for detecting instances of everyday object classes in images. We present a family of detectors that incorporate three types of high-performance visual features - Histograms of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Local Ternary Patterns (LTP) - into efficient discriminative methods of the latent SVM type, under two dimensionality-reduction regimes - Partial Least Squares (PLS) and feature selection by SVM weight truncation. On several important datasets, notably PASCAL VOC2006 and VOC2007, INRIA Person and ETH Zurich, we demonstrate that our methods improve on the state of the art. Our main contributions are as follows. We study the LTP feature for object detection, showing that its performance is globally better than that of the well-established HOG and LBP features because it encodes both the local texture of the object and its global shape while remaining resistant to illumination variations. Thanks to these strengths, LTP works as well for classes characterized mainly by their structure as for those characterized by their textures. Moreover, we show that HOG, LBP and LTP are complementary, so that an extended feature set integrating all three improves performance further. Since high-performance feature sets are of rather high dimension, we propose two dimensionality-reduction methods to improve their speed and reduce their memory use. The first, based on Partial Least Squares projection, significantly reduces the training time of linear detectors without loss of accuracy or run-time speed. The second, based on feature selection by pruning SVM weights, allows us to reduce the number of active features by an order of magnitude with minimal reduction, or even a small increase, in detector accuracy. Despite its simplicity, this feature-selection method outperforms all the other approaches we tested.
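The second dimensionality-reduction idea, feature selection by pruning SVM weights, can be sketched as follows on synthetic data; the top-k truncation rule and all settings are illustrative assumptions rather than the thesis's exact procedure.

# Feature selection by SVM weight truncation: train a linear SVM, keep only
# the k features with the largest-magnitude weights, retrain on that subset.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 50))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only 2 informative features

svm = LinearSVC(C=1.0, dual=False).fit(X, y)
w = svm.coef_.ravel()
k = 5
keep = np.argsort(np.abs(w))[-k:]                # indices of top-k weights

svm_small = LinearSVC(C=1.0, dual=False).fit(X[:, keep], y)
print("kept features:", sorted(keep), "accuracy:", svm_small.score(X[:, keep], y))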
15

Padilha, Carlos Alberto de Araújo. "Algoritmos genéticos aplicados a um comitê de LS-SVM em problemas de classificação." Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15472.

Full text
Abstract:
Pattern classification is one of the most prominent subareas of machine learning. Among the various approaches to solving pattern classification problems, Support Vector Machines (SVM) receive great emphasis due to their ease of use and good generalization performance. The Least Squares formulation of the SVM (LS-SVM) finds the solution by solving a set of linear equations instead of the quadratic programming implemented in the SVM. LS-SVMs provide some free parameters that have to be correctly chosen to achieve satisfactory results in a given task. Although LS-SVMs perform well, many tools have been developed to improve them, mainly the development of new classification methods and the employment of ensembles, in other words, combinations of several classifiers. In this work, our proposal is to use an ensemble and a Genetic Algorithm (GA), a search algorithm based on the evolution of species, to enhance LS-SVM classification. In the construction of this ensemble, we use a random selection of attributes of the original problem, which splits the original problem into smaller ones where each classifier will act. We then apply the GA to find effective values of the LS-SVM parameters and also to find a weight vector measuring the importance of each machine in the final classification. Finally, the final classification is obtained by a linear combination of the decision values of the LS-SVMs with the weight vector. Several classification problems, taken as benchmarks, were used to evaluate the performance of the algorithm, and the results were compared with other classifiers.
16

Pavlík, Vít. "Sledování objektů ve videosekvencích." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255397.

Full text
Abstract:
This Master's thesis addresses long-term image tracking in video sequences. The project was intended to demonstrate the techniques needed for handling long-term tracking. It primarily describes techniques whose application leads to the construction of an adaptive tracking system that can appropriately deal with changes in the object's appearance and the unstable character of the surrounding environment.
17

Hassani, Mujtaba. "Construction Equipment Fuel Consumption During Idling: Characterization Using Multivariate Data Analysis at Volvo CE." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49007.

Full text
Abstract:
Human activities have increased the concentration of CO2 in the atmosphere, causing global warming. Construction equipment is semi-stationary machinery that spends at least 30% of its lifetime idling. The majority of construction equipment is diesel powered and emits toxic emissions into the environment. In this work, idling is investigated by adopting several statistical regression models to quantify the fuel consumption of construction equipment during idling. The regression models studied in this work are: Multivariate Linear Regression (ML-R), Support Vector Machine Regression (SVM-R), Gaussian Process Regression (GP-R), Artificial Neural Network (ANN), Partial Least Squares Regression (PLS-R) and Principal Components Regression (PC-R). Findings show that pre-processing has a significant impact on the goodness of prediction in exploratory data analysis in this field. Moreover, through mean centering and application of max-min scaling, the accuracy of the models increased remarkably. ANN and GP-R had the highest accuracy (99%), PLS-R was the third most accurate model (98%), ML-R was the fourth best (97%), SVM-R was the fifth best (73%) and the lowest accuracy was recorded for PC-R (83%). The second part of this project estimated the CO2 emission based on the fuel used, adopting the NONROAD2008 model.
18

Costa, Daniel Moura Martins da. "Ensemble baseado em métodos de Kernel para reconhecimento biométrico multimodal." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-28072016-190335/.

Full text
Abstract:
With the advancement of technology, traditional strategies for identifying people have become more susceptible to failure; to overcome these difficulties, some approaches have been proposed in the literature. Among these approaches, Biometrics stands out. The field of Biometrics encompasses a wide variety of technologies used to identify and verify a person's identity through the measurement and analysis of physiological and behavioural aspects of the human body. As a result, biometrics has a wide field of applications in systems that require precise identification of their users. The most popular biometric systems are based on face recognition and fingerprint matching; furthermore, there are other biometric systems that utilize iris and retinal scans, speech, and hand geometry. In recent years, biometric authentication has seen improvements in reliability and accuracy, with some of the modalities offering good performance. However, even the best biometric modality faces problems. Recently, big efforts have been undertaken aiming to employ multiple biometric modalities in order to make the authentication process less vulnerable to attacks. Multimodal biometrics is a relatively new approach to biometric representation that consolidates multiple biometric modalities. Multimodality is based on the concept that information obtained from different modalities complements each other; consequently, an appropriate combination of such information can be more useful than using information from single modalities alone. The main issues involved in building a unimodal biometric system concern the definition of the feature extraction technique and the type of classifier. In the case of a multimodal biometric system, in addition to these issues, it is necessary to define the level of fusion and the fusion strategy to be adopted. The aim of this dissertation is to investigate the use of committee machines to fuse multiple biometric modalities, considering different fusion strategies and taking into account advanced methods in machine learning. In particular, it emphasizes the analysis of different types of kernel-based machine learning methods and their organization into committee-machine arrangements, aiming at biometric authentication based on face, fingerprint and iris. The results showed that the proposed approach is capable of designing a multimodal biometric system with a recognition rate higher than those obtained by unimodal biometric systems.
19

Yen, Chun-Hsiang (顏俊翔). "Least Squares Support Vector Machine for Power Distribution Prediction in Research." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/nu5m58.

Full text
Abstract:
Master's thesis. Lunghwa University of Science and Technology, Master's Program in Information Management, ROC academic year 105.
Electricity output is an important factor in energy technology and strategy analysis. This study adopts material prices and Google search engine data (the number of times various means of electricity production are discussed) as input data, and the forecasting model uses least-squares support vector regression (LS-SVR) with a genetic algorithm. LS-SVR is a forecasting approach that has been successfully used to solve time series problems. This study aims at applying an LS-SVR model to accurately forecast the percentage of electricity output from various sources in Taiwan. Empirical results indicate that LS-SVR based on material prices and Google search engine data is an effective method for forecasting electricity output.
20

Hoang, Nhat-Duc (黃日德). "Decision Support System for Construction Management Based on Evolutionary Least Squares Support Vector Machine." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/77335670328796620902.

Full text
Abstract:
Doctoral dissertation. National Taiwan University of Science and Technology, Department of Civil and Construction Engineering, ROC academic year 101.
Problems in the field of construction management are sophisticated, highly uncertain, and context-dependent; thus, the application of artificial intelligence (AI) to tackle such problems is a promising research direction. Considering the features and advantages of each AI technique, this research integrates various prevalent advanced approaches to establish a novel decision support system that utilizes the Least Squares Support Vector Machine (LS-SVM), Differential Evolution (DE), an Adaptive Time Function (ATF), and Fuzzy Logic (FL). At the first stage, LS-SVM is incorporated with DE to create the Evolutionary Least Squares Support Vector Machine Inference System (ELSIS), in which LS-SVM is utilized as a supervised learning method for regression analysis/classification in high dimensional space and Differential Evolution is employed to identify the optimal set of tuning parameters. At the second stage, ATF is integrated into ELSIS to establish the Adaptive Time-Dependent Evolutionary Least Squares Support Vector Machine Inference System (ELSIST), in which ATF is deployed to deal with the unbalanced nature of time series data. At the final stage, ELSIS incorporates FL to develop the Evolutionary Fuzzy Least Squares Support Vector Machine Inference System (EFLSIS), in which FL aims at facilitating the system's capability for approximate reasoning and coping with vague information. Experimental results obtained from system applications demonstrate that the newly established inference system can be highly beneficial for decision-makers when solving various problems in the field of construction management.
21

Liu, Yu-shu (劉羽書). "Tuning the parameters of Least Squares Support Vector Machine using Particle Swarm Optimization." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/q877rt.

Full text
Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Civil and Construction Engineering, ROC academic year 102.
The advancement of information technology has encouraged engineering consulting firms to store historical project data for future reference. Such data may be transformed into useful information to help the firms gain a competitive edge. The present study proposes a data mining model to predict the outcome of incoming projects (capital gains or losses) based on historical project data. Specifically, the proposed model uses Particle Swarm Optimization (PSO) to automatically fine-tune the parameters of the Least Squares Support Vector Machine (LS-SVM). The model is demonstrated by analyzing a data set collected from a large engineering consulting firm, comprising 177 projects focused on construction observation between 1999 and 2011. Several meetings with the targeted firm were held to identify important predictors. The number of predictors was reduced by stepwise regression analysis, and variance inflation factors were checked to ensure no significant collinearity among predictors. The proposed model can perform binary classification and multi-class classification, and its performance is measured in terms of prediction accuracy and Kappa statistics. The proposed model is shown to be superior to an Artificial Neural Network (ANN) and an ordinary alternative: using grid search to fine-tune the LS-SVM.
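A bare-bones sketch of such a tuning loop is shown below: particle swarm optimization over two hyperparameters, scored by cross-validation. sklearn's SVR stands in for the LS-SVM, and the swarm constants are typical textbook values, not the thesis's settings.

# Particle swarm search over (log10 C, log10 gamma) of an RBF SVR,
# scored by 3-fold cross-validation on synthetic data. SVR is a stand-in
# for LS-SVM; inertia/acceleration constants are typical textbook values.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (80, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

def score(p):  # mean cross-validation score to maximize
    C, gamma = 10 ** p[0], 10 ** p[1]
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, iters = 10, 15
pos = rng.uniform(-2, 2, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2, 2)
    vals = np.array([score(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (log10 C, log10 gamma):", gbest)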
22

Chen, Pao-jung (陳保戎). "Application of Least Squares Support Vector Machines in Image Coding." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/26618256914748557423.

Full text
Abstract:
Master's thesis. National Sun Yat-sen University, Department of Electrical Engineering, ROC academic year 94.
In this thesis, the least squares support vector machine for regression (LS-SVR) is applied to image coding. First, we propose five simple algorithms for solving LS-SVR. For linear regression, two simple Widrow-Hoff-like algorithms, one in primal form and one in dual form, are proposed for LS-SVR problems; the dual form is then generalized to kernel-based nonlinear LS-SVR. The elegant and powerful two-parameter sequential minimization optimization (2PSMO) and three-parameter sequential minimization optimization (3PSMO) algorithms are provided in detail. A predictive function obtained from LS-SVR is utilized to approximate the gray levels of the image. After pruning, only a subset of the training data, called support vectors, is saved. Experimental results on seven image blocks show that LS-SVR with a Gaussian kernel is more appropriate than LS-SVR with a Mahalanobis kernel with a covariance matrix. A two-layer LS-SVR is proposed to choose the machine parameters of the LS-SVR. Before training the outer LS-SVR, feature extraction is used to reduce the input dimensionality. Experimental results on three whole images show that two-layer LS-SVR with dimensionality reduction outperforms two-layer LS-SVR without it in PSNR for the Lena and Baboon images, while the two are almost the same in PSNR for the F16 image.
23

Chang, Chung-Ming (張仲銘). "Combining Rough Set Theory and Least Squares Support Vector Machine in Sovereign Credit Rating." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/74764344457556730277.

Full text
Abstract:
Master's thesis. Chinese Culture University, Department of Accounting, ROC academic year 101.
Many serious international financial crises have occurred; high risk, which market participants have come to take seriously, accompanies high profit, and investment targets have shifted toward sovereign bonds. The sovereign credit rating is an indicator of the debt-paying ability of sovereign bond issuers, and the demand for information about sovereign credit ratings has become more important. Past studies of sovereign credit ratings focused on the variables that affect the rating and on how the rating affects markets; however, studies focused on classification models are few. Consequently, we apply machine learning techniques to build two-step multiclass classification models. In the first step, we use stepwise regression and rough set theory to select variables; in the second step, we apply the least squares support vector machine (LS-SVM), rough set theory (RST), the back-propagation neural network (BPN) and C5.0 for classification. The results indicate that the classification performance of RST+LS-SVM is the best.
24

Ye, Xin-Hao (葉欣豪). "A Study of Prediction Ionospheric Total Electron Content by Least Squares Support Vector Machine." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/29762152361027889921.

Full text
Abstract:
Master's thesis. National Chung Hsing University, Department of Civil Engineering, ROC academic year 101.
GPS satellite positioning has developed rapidly for navigation, but positional accuracy suffers interference from the environment. The main factor influencing GPS positional accuracy is the ionospheric error, in which radio-wave refraction produces a time-delay error; with an appropriate linear combination of dual-frequency observations it can be eliminated. Single-frequency observations using differential techniques can reduce most of the errors, especially when two receivers are close to each other, but the spatial correlation between the receivers gradually decreases as the baseline grows. When differencing the observations fails to effectively eliminate the ionospheric error, an ionospheric model is usually used for correction, and the Klobuchar model can only correct 50% to 75% of it. Therefore, this research aims to forecast more accurate correction information. Total Electron Content (TEC) describes the structure and patterns of the ionosphere, and from it the ionospheric delay can be calculated. This research therefore uses the least squares support vector machine, with comparative analysis, to forecast TEC, in the hope of correcting the ionospheric delay in advance and in real time. The experimental range is Taiwan in 2011. Using LSSVM for 360 consecutive days of forecasts and comparing one-day-ahead predictions with the IGS forecast data, the average error improves by 0.73 TECU and the error percentage is less than 95% for more than 44 days. These results show that this research improved the prediction accuracy.
25

Chen, Chia-Hsin (陳佳欣). "Using genetic algorithm based least squares support vector machine to improve the undulation estimation accuracy." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/7m6jt6.

Full text
Abstract:
Master's thesis. National Chengchi University, Department of Land Economics, ROC academic year 107.
The orthometric height often used in engineering applications can be derived by leveling, which is costly, whereas the orthometric height derived by GPS leveling has the advantage of lower cost. In improving the accuracy of GPS leveling, obtaining an undulation model that satisfies the required accuracy is the main study goal. In this paper, the Least Squares Support Vector Machine (LSSVM) is used to estimate the undulation model, and the Genetic Algorithm (GA), which has the capability of global optimization, is used to search for and optimize the parameters of the LSSVM to improve the accuracy of the undulation model. Tainan, the central part of Taiwan, and Taiwan as a whole are chosen as the test areas. For the test data, 2,067 benchmark points distributed throughout the Taiwan region were used, each with orthometric height, ellipsoidal height and plane coordinates. According to the test results, the conclusions are as follows: (1) undulation estimates are improved after using the genetic algorithm based least squares support vector machine (LSSVM(GA)), with a 19.13% improvement in Tainan (reduced from 0.0298 m to 0.0241 m), a 42.83% improvement in the central part of Taiwan (reduced from 0.0523 m to 0.0299 m) and a 1.86% improvement in Taiwan (reduced from 0.0431 m to 0.0423 m); (2) comparison with other studies shows that LSSVM(GA) is superior to the Back Propagation Artificial Neural Network (BPANN) in establishing the undulation model of the three test areas.
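A toy genetic-algorithm loop of the kind described (tournament selection, blend crossover, Gaussian mutation over two log-scaled LSSVM parameters) is sketched below; the fitness function is a placeholder quadratic, whereas the study's actual fitness would be a cross-validated undulation-model error.

# Toy GA over two LSSVM hyperparameters (gamma, sigma^2), log10-scaled.
# The fitness below is a placeholder with its optimum at (1, 0); in the
# study it would be a cross-validated undulation-model error.
import numpy as np

rng = np.random.default_rng(6)

def fitness(p):                       # higher is better (placeholder)
    return -((p[0] - 1.0) ** 2 + (p[1] - 0.0) ** 2)

pop = rng.uniform(-3, 3, (20, 2))
for _ in range(40):
    f = np.array([fitness(p) for p in pop])
    new = [pop[f.argmax()].copy()]    # elitism: keep the best individual
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if f[i] > f[j] else pop[j]   # tournament selection
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if f[i] > f[j] else pop[j]
        w = rng.random()
        child = w * a + (1 - w) * b             # blend crossover
        child += 0.1 * rng.standard_normal(2)   # Gaussian mutation
        new.append(child)
    pop = np.array(new)

f = np.array([fitness(p) for p in pop])
print("best (log10 gamma, log10 sigma^2):", pop[f.argmax()])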
26

Cuambe, Isaura Denise Filipe. "Electricity load demand forecasting in Portugal using least-squares support vector machines." Master's thesis, 2013. http://hdl.handle.net/10400.1/3553.

Full text
Abstract:
Master's dissertation, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2013.
Electricity Load Demand (ELD) forecasting is a subject of interest mainly to producers and distributors, and it has a great impact on the national economy. At the national scale it is not viable to store electricity, and it is also difficult to estimate its consumption accurately enough to provide a better agreement between supply and demand and consequently less waste of energy. Thus, researchers from many areas have addressed this issue in ways that facilitate the task of power grid companies in adjusting production levels to consumption demand. Over the years, many predictive algorithms were tested, and the Radial Basis Function Artificial Neural Network (RBF ANN) has so far been one of the most tested approaches, with satisfactory results. The fact that on-line adaptation is not an easy task for this approach led to a search for new ways to make the prediction, promising results that are better than, or at least as good as, those of the RBF ANN, together with the ability to overcome the difficulties the RBF ANN faces in on-line adaptation. This work introduces an approach still little explored for electricity consumption prediction. Least-Squares Support Vector Machines (LS-SVMs) are a good alternative to the RBF ANN and other approaches, since they have fewer parameters to adjust, allowing a significant decrease in the sensitivity of those machines to well-known problems associated with parameter adaptation and making on-line model adaptation more stable over time.
27

Trang, Pham Thi Phuong. "Performance comparison of metaheuristic-optimized least squares support vector machine for multi-class classification in civil engineering applications." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/01319681516541216634.

Full text
Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Civil and Construction Engineering, ROC academic year 105.
Multi-class classification is one of the major challenges in machine learning and an ongoing research issue. Classification algorithms are generally binary, but they must be extended to multi-class problems for real-world application. Multi-class classification is more complex than binary classification: in binary classification, only the decision boundaries of one class need to be known, whereas in multi-class classification several boundaries are involved. The objective of this investigation is to propose a metaheuristic-optimized multi-level classification model for forecasting in engineering problems. The proposed model integrates the firefly algorithm (FA), metaheuristic intelligence, decomposition approaches, the one-against-one (OAO) scheme, and the least squares support vector machine (LSSVM). The enhanced FA automatically fine-tunes the hyperparameters of the LSSVM to construct an optimized LSSVM classification model, called the Optimized-OAO-LSSVM. Ten benchmark functions are used to evaluate the performance of the enhanced optimization algorithm. Two binary-class datasets related to geotechnical engineering, concerning seismic bumps and soil liquefaction, are then used to clarify the application of the proposed model to binary problems. Further, this investigation uses multi-class cases in civil engineering and construction management to verify the effectiveness of the model in the diagnosis of faults in steel plates, the quality of water in a reservoir, and urban land cover. The results revealed that the Optimized-OAO-LSSVM model predicted faults in steel plates with an accuracy of 91.085%, the quality of water in a reservoir with an accuracy of 93.650% and urban land cover with an accuracy of 87.274%. To demonstrate its effectiveness, the proposed model's predictive accuracy was compared with that of a non-optimized baseline model (OAO-LSSVM) and single multi-class classification algorithms (Sequential Minimal Optimization (SMO), the Multiclass Classifier, Naïve Bayes, the Library for Support Vector Machines (LibSVM) and Logistic). The analytical results showed that the proposed model is a promising tool to help decision-makers solve classification problems in civil engineering and construction management.
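To make the one-against-one (OAO) decomposition concrete, the short sketch below uses scikit-learn's built-in wrapper on toy data, with LinearSVC standing in for the optimized LSSVM base learners.

# One-against-one decomposition: k classes -> k(k-1)/2 pairwise binary
# classifiers, combined by voting. LinearSVC stands in for LSSVM.
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=4, random_state=0)
oao = OneVsOneClassifier(LinearSVC(dual=False)).fit(X, y)
print(len(oao.estimators_), "pairwise classifiers")  # 4*3/2 = 6
print("training accuracy:", oao.score(X, y))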
28

Shen, Yu-Ting (沈昱廷). "A Study of Fitting Local Geoid Model by Least Squares Support Vector Machine─A Case Study of Taichung Area." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/94712598541681838676.

Full text
Abstract:
Master's thesis. National Chung Hsing University, Department of Civil Engineering, ROC academic year 99.
GPS has been used widely in all kinds of engineering and has replaced traditional plane surveying, but the leveling datum is restricted by the inconsistency between the two height systems: the GPS technique, which obtains ellipsoidal heights, cannot be directly used in the leveling height system and must be transformed by an appropriate geoid model before it can be used in engineering surveys. Therefore, researchers at home and abroad have long devoted themselves to geoid study. Due to the impact of Taiwan's geographical environment, gravity and leveling are not easy to survey, hence a lack of reliable data. The representativeness of existing geoid models of Taiwan is poor: the results fit large areas well, but not small ones. The geoid, whose spatial distribution can be represented by mathematical functions, changes only slightly within a specific area, and values at other points are obtained by interpolation. In this study, a local geoid model is established based on GPS and leveling data with the least squares support vector machine. According to the experiments, using LS-SVM with the Radial Basis Function (RBF) kernel and the third-order polynomial kernel yields an accuracy of about ±1.4 cm, which satisfies the elevation requirements of engineering surveys. Comparing the results by zoning, the RBF kernel is better than the others. The results of this study would be a great help in improving spirit leveling in the future, as well as enhancing the GPS technique.
29

Hsu, Chao-Wei (徐超偉). "A Study of using the Least Squares Support Vector Machine Fitted GPS Ephemerides for the Impact on Point Positioning." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/44972421125852235920.

Full text
Abstract:
Master's thesis. National Chung Hsing University, Department of Civil Engineering, ROC academic year 101.
This study presents solutions for GPS orbit computation from broadcast and precise ephemerides, and for correcting the broadcast orbit toward the precise orbit using the least squares support vector machine. Moreover, it analyzes the impact of sampling time, fitting method and choice of ephemerides on orbit precision, and evaluates the accuracy of ground point positioning after fitting. According to the experimental results, the fitting errors of the satellite's 3D coordinate components remain within the range of the precise ephemerides when using LSSVM fitting with a training set of about 75% sampling rate over 2 hours. The single point positioning precision with fitted ephemerides is 0.3-0.7 m better, and the computed satellite coordinates are about 1.5 m closer, than those obtained from the original broadcast ephemerides. The results show that the proposed approach of fitting ephemerides reduces the computation process; it solves the past problem that precise ephemerides could not be used directly in ground single point positioning due to the lack of satellite velocity measurements, and proves that the studied approach can provide an effective and universal precisely fitted orbit, upgrading the accuracy of ground point positioning.
30

Hu, Guan-Yi (胡冠儀). "Least Trimmed Square Support Vector Machine Regression and Its Applications." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/559zua.

Full text
Abstract:
Master's thesis. National Formosa University, Institute of Electro-Optical and Materials Technology, ROC academic year 99.
Many machine learning algorithms have been developed since the idea of artificial intelligence was proposed, and in recent years the SVM has been widely used among them. Hence, much literature on support vector machine regression (SVMR) and least squares support vector machine regression (LS-SVMR) can be found in well-known journals. In this thesis, to address the robustness problem of LS-SVMR, we propose least trimmed squares support vector machine regression (LTS-SVMR), a hybrid of the least trimmed squares (LTS) method and LS-SVMR, which itself improves on support vector machine regression (SVMR). The literature has pointed out that when the LTS method faces training samples with outliers, it can effectively remove the outlying points; that is, the robustness of LS-SVMR is enhanced by combining LS-SVMR with LTS. However, the LTS method has one major drawback: the process of choosing a suitable initial function makes computation very expensive. For this problem we propose three methods of choosing the suitable initial function. The first obtains an initial function by performing one LS-SVMR estimation before the trimming process. The second picks an optimal initial function from training subsamples produced at random before the trimming process. The third obtains an optimal initial function by the simulated annealing (SA) algorithm before the trimming process. In addition, in order to reduce the complex computation, we propose locally linear embedding least trimmed squares support vector machine regression (LLE-LTS-SVMR), which combines our first LTS-SVMR method with locally linear embedding (LLE), a dimensionality-reduction algorithm. Finally, experimental results show that the three LTS-SVMR methods can improve the low robustness of LS-SVMR, and that LTS-SVMR based on the SA algorithm is more reliable than the other LTS-SVMR methods. Besides, LLE-LTS-SVMR can reduce the large computation of the first LTS-SVMR method while keeping good modeling ability.
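The trimming idea at the heart of LTS can be sketched in a few lines: fit, rank residuals, and refit on the h samples with the smallest residuals. Plain linear least squares stands in for LS-SVMR here, and the contaminated dataset is made up.

# Least-trimmed-squares style loop: iteratively refit on the h samples
# with smallest residuals. Plain linear least squares stands in for LS-SVMR.
import numpy as np

rng = np.random.default_rng(7)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
y = 2.0 + 3.0 * X[:, 1] + 0.05 * rng.standard_normal(n)
y[:10] += 5.0                      # contaminate 10% of samples with outliers

h = int(0.8 * n)                   # trimming fraction (assumed)
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # initial fit on all data
for _ in range(10):
    r = np.abs(y - X @ beta)
    keep = np.argsort(r)[:h]       # h smallest residuals survive the trim
    beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]

print("estimated coefficients:", beta)   # close to the true (2, 3)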
31

Huang, Tzyy-Yun (黃子毓). "Estimate Cost of Bridge Construction Using Evolutionary Least Squares Support Vector Machine Inference Model (ELSIM) - New Taipei City Case Study." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/39285677227918074868.

Full text
Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Civil and Construction Engineering, ROC academic year 101.
Taiwan's terrain is varied, with mountains and rivers; therefore, some traffic-bottleneck sections must rely on bridges, and the contribution of bridges to national economic development is large. However, the cost of building a new bridge is immense and takes up a large proportion of local government budgets. Reasonably calculated construction costs can reduce the waste of public budgets and ease the financial burden. The Ministry of Transport has established a database of completed bridges in Taiwan (the Taiwan Bridge Management Information System); used wisely, with useful information compared and organized, these data can support calculating reasonable bridge construction costs and exchanging information, effectively reducing unnecessary waste for all departments. This research collects basic information on New Taipei City bridges as construction cost factors and applies statistical software (SPSS) to objectively pick out the cost factors that affect bridge construction as the model's input parameters. Learning from previous cases and experience, the Evolutionary Least Squares Support Vector Machine Inference Model (ELSIM) is trained to identify the relationship between the inputs (factors affecting construction costs) and the output (cost), summing up experts' decision-making processes and the logic of their analysis. In order to verify the accuracy of the construction cost estimation model, besides ELSIM model tests, this study also compares other artificial intelligence approaches (SVM and ESIM); ELSIM gives the best confirmed result. On this basis, a bridge construction cost prediction model is built to establish a "bridge construction cost estimation model" that supports decision makers.
32

Pan, Pei-Huai (潘配淮). "Estimate Cost of Bridge Maintenance Using Evolutionary Least Squares Support Vector Machine Inference Model (ELSIM) - New Taipei City Case Study." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/07369265316193503986.

Full text
Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Civil and Construction Engineering, ROC academic year 101.
Mountains and hills take up two-thirds of the total territory of Taiwan, and transportation networks constantly expand along with economic development and growing city spaces; building numerous bridges is the solution to overcoming the natural barriers and obstacles. Maintenance and management is a very important issue when facing such a large number of bridges. Budgeting for bridge maintenance usually depends on previous experience as a guideline; however, this causes trouble for bridge maintenance departments, such as under-budgeting, doubts over resource exclusion and poor execution. An appropriate procedure for evaluating the bridge database and accurately estimating funding for bridge maintenance would assist management departments in making budgeting and allocation decisions, improving bridge performance and safety and keeping the transportation system functioning. The most accurate way to assess funding for bridge maintenance is instrument-based inspection; however, the cost and time consumption are stunning when the number of bridges to examine is huge. Currently, visual inspection is one of the practical methods: it causes no damage to the bridge structure and is simple, easily applied, efficient and low cost. The Taiwan Bridge Management System now uses the DER&U visual inspection evaluation method, which rates found defects in a survey according to the four categories of Degree, Extent, Relevancy and Urgency. This research uses the 21 examination items from the regular bridge survey sheet as the basis of the model factors, organizes them into the desired format, and then uses SPSS correlation analysis to screen the relevance between factors and maintenance expenses, selecting 16 qualified evaluation factors as model factors. The Evolutionary Least Squares Support Vector Machine Inference Model (ELSIM) is applied to learn from previous cases and experience, summarizing experts' decision-making processes and the logic of their analysis, to establish a model for estimating bridge maintenance funding and to support decision makers in rationally allocating maintenance funds, enhancing the effectiveness of construction management decision-making.
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Chih-Yuan, and 陳智源. "A Study of Using Least Square Support Vector Machine for Three Dimensional Coordinates Transformation." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/63248804078846618471.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Civil Engineering
102
At present, three coordinate systems are commonly used in Taiwan: TWD67 (Taiwan Datum 1967), TWD97 (Taiwan Datum 1997), and WGS84 (World Geodetic System 1984). TWD67 is a regional, non-geocentric geodetic datum divided into two sub-systems, horizontal coordinates and elevation; its elevation is the orthometric height reckoned from the geoid. TWD97 is a global geocentric datum whose elevation is the ellipsoidal height reckoned from a biaxial reference ellipsoid. Because TWD67 and TWD97 use different datums and ellipsoid parameters, the same point has different coordinate values in the two systems. This research uses LSSVM with three kernel functions (linear, polynomial, and RBF) to perform three-dimensional coordinate transformation between TWD97 and TWD67, and compares it with the least squares seven-parameter coordinate transformation. The experimental results show that LSSVM effectively reduces the influence of elevation differences on the three-dimensional transformation, and that the RBF kernel performs best: assessed by RMSE, the E value is 0.016, the N value is 0.010, and the H value is 0.028, far surpassing the seven-parameter transformation, whose RMSE E value is 0.088, N value is 0.065, and H value is 0.035. The experiments further show that LSSVM can take projection coordinates and ellipsoidal heights directly as inputs for the three-dimensional transformation, with RMSE E value 0.019, N value 0.013, and H value 0.030, little different from the results using Cartesian coordinates. These results help simplify three-dimensional coordinate transformation by skipping the conversion between projection and Cartesian coordinates, saving time and effort.
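The seven-parameter baseline mentioned above is the Helmert (Bursa-Wolf) transformation, which under the usual small-angle linearization can be estimated by ordinary least squares from common points. A minimal sketch follows; the common points and parameter values are fabricated for illustration.

```python
import numpy as np

def helmert7_fit(src, dst):
    """Estimate Bursa-Wolf parameters [tx,ty,tz,scale,rx,ry,rz] (small-angle
    linearization) from common points given as (n, 3) Cartesian arrays."""
    rows, rhs = [], []
    for (x, y, z), (u, v, w) in zip(src, dst):
        rows += [[1, 0, 0, x,  0,  z, -y],
                 [0, 1, 0, y, -z,  0,  x],
                 [0, 0, 1, z,  y, -x,  0]]
        rhs  += [u - x, v - y, w - z]
    p, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return p

def helmert7_apply(p, src):
    tx, ty, tz, s, rx, ry, rz = p
    x, y, z = src.T
    return np.column_stack([x + tx + s * x - rz * y + ry * z,
                            y + ty + rz * x + s * y - rx * z,
                            z + tz - ry * x + rx * y + s * z])

# Fabricated common points purely for illustration.
rng = np.random.default_rng(0)
src = rng.uniform(-3e6, 3e6, (10, 3))
true = np.array([250.0, -150.0, 100.0, 2e-6, 1e-6, -2e-6, 3e-6])
dst = helmert7_apply(true, src)
print(np.allclose(helmert7_fit(src, dst), true))  # True: parameters recovered
```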
APA, Harvard, Vancouver, ISO, and other styles
34

"Forecasting Mid-Term Electricity Market Clearing Price Using Support Vector Machines." Thesis, 2014. http://hdl.handle.net/10388/ETD-2014-05-1558.

Full text
Abstract:
In a deregulated electricity market, offering the appropriate amount of electricity at the right time with the right bidding price is of paramount importance. Forecasting the electricity market clearing price (MCP) predicts future electricity prices from forecasts of electricity demand, temperature, sunshine, fuel cost, precipitation, and other related factors. Many techniques are currently available for short-term electricity MCP forecasting, but very little has been done on mid-term forecasting, which covers a time frame of one to six months. Mid-term electricity MCP forecasting is essential for mid-term planning and decision making, such as generation plant expansion and maintenance scheduling, reallocation of resources, bilateral contracts, and hedging strategies. Six mid-term electricity MCP forecasting models are proposed and compared in this thesis: 1) a single support vector machine (SVM) forecasting model, 2) a single least squares support vector machine (LSSVM) forecasting model, 3) a hybrid SVM and auto-regressive moving average with exogenous input (ARMAX) forecasting model, 4) a hybrid LSSVM and ARMAX forecasting model, 5) a multiple SVM forecasting model, and 6) a multiple LSSVM forecasting model. PJM interconnection data are used to test the proposed models. Cross-validation was used to optimize the control parameters and the selection of training data for the six proposed models. Three evaluation metrics, mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square root error (MSRE), are used to analyze forecasting accuracy. According to the experimental results, the multiple SVM forecasting model worked best among the six. It contains a data classification module and a price forecasting module: the classification module first pre-processes the input data into corresponding price zones, and the forecasting module then forecasts the electricity price with four SVMs designed in parallel. This model best improves forecasting accuracy on both peak prices and the overall system compared with the other five models proposed in this thesis.
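The "multiple SVM" architecture, a classifier that routes each sample to a price zone followed by one regressor per zone, can be prototyped with scikit-learn. The sketch below illustrates only the routing idea; the features, zone boundaries, and SVM settings are invented rather than the thesis's tuned model.

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                    # stand-in demand/weather features
price = 40 + 10 * X[:, 0] + 5 * rng.normal(size=400)
zones = np.digitize(price, [35.0, 45.0])         # three invented price zones

router = SVC(kernel="rbf", C=10.0).fit(X, zones)              # classification module
experts = {z: SVR(kernel="rbf", C=10.0).fit(X[zones == z], price[zones == z])
           for z in np.unique(zones)}                          # one SVR per zone

def forecast(x):
    z = router.predict(x.reshape(1, -1))[0]          # route to a price zone...
    return experts[z].predict(x.reshape(1, -1))[0]   # ...then regress within it

print(round(forecast(X[0]), 2))
```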
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Chian-Ching, and 陳芊憬. "Integrating the grey theory and least square support vector machine to the patent classification." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/60870619625986124239.

Full text
Abstract:
Master's thesis
China University of Science and Technology
Graduate Institute of Business Administration
101
This study constructs a patent classification model that integrates grey relational analysis, LS-SVM, and rough sets in both individual and two-stage designs. The main idea is to use grey relational analysis to rank the weights of the variables, providing the LS-SVM and rough set stages with a good starting point; LS-SVM and rough sets then perform the preliminary data analysis, so that integrating the grey, LS-SVM, and rough set methods yields a faster and more accurate patent classification scheme. In addition to the traditional quantitative indicators, this study adds indicators such as industry type and capital to the patent classification measures, hoping that more diversified enterprise information will help companies assess their true advantages and make the right decisions. A new patent classification model was established through theory and a literature review. The empirical results show that grey relational analysis can rank the weights of the patent classification indicators and sort out the important variables, and that integrating it with the LS-SVM and rough set models improves the classification accuracy over the individual LS-SVM and rough set analyses.
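Grey relational analysis ranks candidate variables by their grey relational grade against a reference series. A minimal sketch of Deng's grey relational grade on invented data follows; the distinguishing coefficient of 0.5 is the conventional default, and the indicator series are synthetic.

```python
import numpy as np

def grey_relational_grades(reference, candidates, rho=0.5):
    """Deng's grey relational grade of each candidate series to the reference.
    Series are min-max normalized first, as is customary in GRA."""
    data = np.vstack([reference, candidates]).astype(float)
    data = (data - data.min(1, keepdims=True)) / np.ptp(data, 1, keepdims=True)
    diff = np.abs(data[1:] - data[0])                    # absolute differences
    coef = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
    return coef.mean(axis=1)                             # one grade per candidate

rng = np.random.default_rng(0)
y = rng.random(20)                      # e.g. the reference series to match
X = rng.random((6, 20))                 # six invented patent indicators
X[2] = y + 0.05 * rng.random(20)        # make indicator 2 clearly related
print(np.argsort(grey_relational_grades(y, X))[::-1])  # indicator 2 should rank first
```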
APA, Harvard, Vancouver, ISO, and other styles
36

Lin, En-Ju, and 林恩汝. "Applying the two stages classification to improve the Least square-support vector machine classification accuracy." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/18806001572914135369.

Full text
Abstract:
Master's thesis
National Chin-Yi University of Technology
Department of Industrial Engineering and Management
95
The support vector machine (SVM) is a relatively new technique for data classification, and the least squares support vector machine (LS-SVM) is a reformulation of the principles of SVM. In this study, we performed diagnosis on the BUPA liver disorders database using LS-SVM combined with the Taguchi method. The BUPA liver disorders database includes 345 samples with 6 features and 2 class labels. The proposed system has two stages. In the first stage, the Taguchi method is used to determine the kernel function parameters effectively and obtain better parameter settings. In the second stage, diagnosis of the BUPA liver disorders database is conducted with the LS-SVM classifier. The classification accuracy was 95.07%. Compared with the results of related research, our proposed system is very effective and reliable.
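The Taguchi stage screens kernel-parameter levels with an orthogonal array rather than a full grid; with only two three-level factors, the L9 array coincides with the full factorial, which the sketch below enumerates. scikit-learn has no LS-SVM classifier, so SVC stands in, and the data are a synthetic stand-in for the BUPA set (the real set is on the UCI repository).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC   # stands in for the LS-SVM classifier

# Toy stand-in for the 345 x 6 BUPA data.
X, y = make_classification(n_samples=345, n_features=6, random_state=0)

C_levels = [1.0, 10.0, 100.0]          # illustrative factor levels
gamma_levels = [0.01, 0.1, 1.0]
# With only two 3-level factors, the L9 orthogonal array is the full factorial.
runs = [(C, g) for C in C_levels for g in gamma_levels]

scores = {(C, g): cross_val_score(SVC(C=C, gamma=g), X, y, cv=5).mean()
          for C, g in runs}
best = max(scores, key=scores.get)
print("best (C, gamma):", best, "accuracy:", round(scores[best], 3))
```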
APA, Harvard, Vancouver, ISO, and other styles
37

Yang, Yu-Hsien, and 楊于賢. "Predicting of Semiconductor Book-to-Bill Ratio by Using Genetic Algorithm based Least Squared Support Vector Machine." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/34638577678921840342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lee, Heng-Wei, and 李恆瑋. "Improved Least Trimmed Square Support Vector Machine Regression for Biological Systems Modeling and Its Application on Smart Phone." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/cq3f36.

Full text
Abstract:
Master's thesis
National Formosa University
Graduate Institute of Computer Science and Information Engineering
102
Machine learning approaches have developed rapidly in recent years, and the least squares support vector machine (LS-SVM) is among the most commonly used algorithms. In this thesis, we first improve the least trimmed squares (LTS) method by avoiding its sorting step to reduce computation time. Because LTS can exclude outlying samples, it enhances the robustness of LS-SVM regression (LS-SVMR); an improved LTS-SVMR based on sort-free LTS and statistical analysis is therefore proposed. Because the predicted results are affected by the choice of initial function, different initialization methods for the improved LTS-SVMR are also proposed for modeling noisy data. The second part of this thesis proposes an asymmetric LTS-SVMR (ALTS-SVMR) approach to remove asymmetric noise and make predictions under it; specifically, we apply the Box-Cox transformation to the improved LTS-SVMR so that it can handle asymmetric noise. Finally, we apply the proposed methods to the modeling of biological systems and implement them on Android smart phone systems. Keywords: noise and outliers, robustness, least trimmed squares, systems biology, asymmetry, smart phone, Box-Cox transformation.
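Two ingredients named in the abstract, trimming the largest residuals and a Box-Cox transformation for asymmetric noise, are easy to sketch. Below, an ordinary least squares fit is iteratively refit on the samples with the smallest residuals (the classic sorting-based LTS concentration step, not the thesis's sort-free variant) after Box-Cox transforming the response; the data and trimming fraction are illustrative.

```python
import numpy as np
from scipy.stats import boxcox

def lts_fit(X, y, keep=0.8, iters=10):
    """Iteratively refit least squares on the `keep` fraction of samples
    with the smallest squared residuals (concentration steps)."""
    A = np.column_stack([X, np.ones(len(y))])
    idx = np.arange(len(y))
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
        r2 = (A @ w - y) ** 2
        idx = np.argsort(r2)[: int(keep * len(y))]   # classic LTS uses this sort
    return w

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (200, 1))
y = np.exp(1.5 * X[:, 0] + 1.0 + rng.normal(0, 0.1, 200))  # skewed response
y[:10] *= 8                                                # inject outliers

y_bc, lam = boxcox(y)             # Box-Cox symmetrizes the noise
w = lts_fit(X, y_bc)              # trimmed fit is barely moved by the outliers
print("slope, intercept in Box-Cox space:", np.round(w, 2), "lambda:", round(lam, 2))
```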
APA, Harvard, Vancouver, ISO, and other styles
39

Van der Merwe, J. F. "Acoustic impulse detection algorithms for application in gunshot localization." Thesis, 2012. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1000390.

Full text
Abstract:
M. Tech. Electrical Engineering.
This work attempts to find computationally efficient ways to identify and extract gunshot impulses from signals. Areas of study include Generalised Cross Correlation (GCC), sidelobe minimisation utilising Least Squares (LS) techniques, and training algorithms using a Reproducing Kernel Hilbert Space (RKHS) approach. It also incorporates Support Vector Machines (SVM) to train a network to recognise gunshot impulses. By combining these individual research areas, more optimal solutions are obtainable.
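GCC is typically used to estimate the time difference of arrival between two microphones, and the PHAT weighting is a common choice for impulsive sounds such as gunshots. A minimal frequency-domain sketch on a synthetic delayed impulse follows; the sample rate, delay, and choice of PHAT weighting are illustrative assumptions.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Time delay of `sig` relative to `ref` via GCC with PHAT weighting."""
    n = 2 * max(len(sig), len(ref))
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n)   # PHAT: keep phase only
    cc = np.concatenate([cc[-n // 2:], cc[: n // 2]])
    return (np.argmax(np.abs(cc)) - n // 2) / fs

fs = 48_000
impulse = np.zeros(4096); impulse[1000] = 1.0
delayed = np.roll(impulse, 37) + 0.01 * np.random.default_rng(0).normal(size=4096)
print(gcc_phat(delayed, impulse, fs) * fs)   # ~37 samples of delay
```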
APA, Harvard, Vancouver, ISO, and other styles
40

Irvine, Allison W. "Computational Analysis of Flow Cytometry Data." 2013. http://hdl.handle.net/1805/3367.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The objective of this thesis is to compare automated methods for performing analysis of flow cytometry data. Flow cytometry is an important and efficient tool for analyzing the characteristics of cells. It is used in several fields, including immunology, pathology, marine biology, and molecular biology. Flow cytometry measures light scatter from cells and fluorescent emission from dyes which are attached to cells. There are two main tasks that must be performed. The first is the adjustment of measured fluorescence from the cells to correct for the overlap of the spectra of the fluorescent markers used to characterize a cell's chemical characteristics. The second is to use the amount of markers present in each cell to identify its phenotype. Several methods are compared for performing these tasks. The Unconstrained Least Squares, Orthogonal Subspace Projection, Fully Constrained Least Squares, and Fully Constrained One Norm methods are used and compared for performing compensation. The fully constrained least squares method of compensation gives the overall best results in terms of accuracy and running time. Spectral Clustering, Gaussian Mixture Modeling, Naive Bayes classification, Support Vector Machines, and Expectation Maximization using a Gaussian mixture model are used to classify cells based on the amounts of dyes present in each cell. The generative models created by the Naive Bayes and Gaussian mixture modeling methods performed classification of cells most accurately. These supervised methods may be the most useful when online classification is necessary, such as in cell sorting applications of flow cytometers. Unsupervised methods may be used to completely replace manual analysis when no training data is given. Expectation Maximization combined with a cluster merging post-processing step gives the best results of the unsupervised methods considered.
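The compensation step is an unmixing problem: each measured intensity vector is approximately a non-negative combination of known dye spillover spectra. The fully constrained variants add further constraints; the sketch below shows only the non-negative least squares case via scipy.optimize.nnls, with an invented three-dye spillover matrix.

```python
import numpy as np
from scipy.optimize import nnls

# Invented spillover matrix: column j = spectrum of dye j across 3 detectors.
M = np.array([[1.00, 0.15, 0.02],
              [0.10, 1.00, 0.20],
              [0.01, 0.25, 1.00]])

true_abundance = np.array([5.0, 0.0, 2.0])
measured = M @ true_abundance + 0.01 * np.random.default_rng(0).normal(size=3)

abundance, residual = nnls(M, measured)   # non-negativity keeps dye 2 near 0
print(np.round(abundance, 2))
```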
APA, Harvard, Vancouver, ISO, and other styles
41

Guo, Zhi-Heng, and 郭致亨. "Intuitionistic Fuzzy C-Least Squares Support Vector Regression." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/42862931861349067169.

Full text
Abstract:
Master's thesis
Lunghwa University of Science and Technology
Master's Program in Information Management
103
The fuzzy c-means (FCM) clustering method has been widely used in a variety of settings. This study develops a novel intuitionistic fuzzy c-least-squares support vector regression clustering method (IFC-LSSVR) and applies it to analyze e-learning platform customer data and wheat seed data. The analysis is divided into two stages. In the first stage, the data are transformed by Sammon mapping to reduce their dimensionality and complexity; the second stage applies IFC-LSSVR together with particle swarm optimization (PSO). Finally, the proposed IFC-LSSVR is compared with two other methods, k-means (KM) and FCM. The results show that the proposed IFC-LSSVR performs better than the comparison methods.
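Sammon mapping minimizes a stress that weights each pairwise distance error by the inverse of the original distance, so small distances are preserved preferentially. scikit-learn has no Sammon mapping, so a bare-bones gradient-descent sketch is given below; the step size, iteration count, and random initialization are illustrative choices.

```python
import numpy as np

def sammon(X, n_iter=300, lr=0.3, seed=0):
    """Minimal Sammon mapping to 2-D by plain gradient descent on the stress."""
    n = len(X)
    Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(Dx, 1.0)               # avoid division by zero on diagonal
    c = Dx[np.triu_indices(n, 1)].sum()
    Y = np.random.default_rng(seed).normal(scale=1e-2, size=(n, 2))
    for _ in range(n_iter):
        diff = Y[:, None] - Y[None, :]
        Dy = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(Dy, 1.0)
        # dE/dY_i = (-2/c) * sum_j (Dx_ij - Dy_ij) / (Dx_ij * Dy_ij) * (Y_i - Y_j)
        w = (Dx - Dy) / (Dx * Dy)
        np.fill_diagonal(w, 0.0)
        grad = (-2.0 / c) * (w[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y

X = np.random.default_rng(1).normal(size=(60, 7))   # invented 7-D data
print(sammon(X).shape)                               # (60, 2)
```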
APA, Harvard, Vancouver, ISO, and other styles
42

Pinho, André Miguel da Silva. "Classification Models for Sleep Apnea Detection." Master's thesis, 2019. http://hdl.handle.net/10400.6/9998.

Full text
Abstract:
The dissertation focuses on feature selection, the importance of features in a classification model, and different types of classification for better sleep apnea detection using only an electrocardiogram (ECG), comparing the accuracy of the different models. Using the Physionet Apnea-ECG database, a Savitzky-Golay (sgolay) filter was applied to the recordings to clean the signal, and the QRS complex was then extracted in order to obtain the heart rate variability (HRV) and the ECG-Derived Respiration (EDR). The features extracted by these two methods were used for training, testing, and validating the classifiers. The obtained features were analyzed in order to select the most relevant ones in the context of apnea detection. The training and test sets were obtained by randomly splitting the data until good performance was achieved using k-fold cross-validation (k=10). According to the results obtained, the best accuracy was 82.12%, with a sensitivity and specificity of 88.41% and 72.29%, respectively. These preliminary results may lead to complementary studies toward implementation in a real application.
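The preprocessing chain described, smoothing the ECG with a Savitzky-Golay filter and locating QRS complexes to build the RR series underlying HRV features, can be sketched with SciPy. The synthetic ECG, filter window, and peak-detection thresholds below are illustrative; Apnea-ECG recordings are sampled at 100 Hz.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

fs = 100                                  # Hz, as in the Apnea-ECG recordings
t = np.arange(0, 30, 1 / fs)
ecg = np.zeros_like(t)                    # crude synthetic ECG: a spike per beat
beat_times = np.arange(0.5, 30, 0.8)      # ~75 bpm
ecg[(beat_times * fs).astype(int)] = 1.0
ecg += 0.05 * np.random.default_rng(0).normal(size=len(t))

clean = savgol_filter(ecg, window_length=9, polyorder=3)   # smooth the signal
peaks, _ = find_peaks(clean, height=0.3, distance=int(0.4 * fs))
rr = np.diff(peaks) / fs                  # RR intervals -> basis of HRV features
print(np.round(rr[:5], 2))                # beats ~0.8 s apart
```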
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Tzu-Yu, and 林子渝. "Empirical mode decomposition-based least squares support vector regression for foreign exchange rate forecasting." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/srfwv8.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Management Science
101
To address the nonlinear and non-stationary characteristics of financial time series such as foreign exchange rates, this study proposes a hybrid forecasting model using empirical mode decomposition (EMD) and least squares support vector regression (LSSVR) for foreign exchange rate forecasting. EMD is used to decompose the dynamics of the foreign exchange rate into several intrinsic mode function (IMF) components and one residual component. LSSVR models are constructed to forecast these IMFs and the residual value individually, and all the forecasted values are then aggregated to produce the final forecast of the foreign exchange rate. The empirical results are expected to show that the proposed EMD-LSSVR model outperforms the EMD-ARIMA (autoregressive integrated moving average) model as well as the LSSVR and ARIMA models without time series decomposition.
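The decompose-forecast-aggregate pattern can be sketched with the PyEMD package (an assumption here; any EMD implementation would do) and a simple ridge autoregression standing in for the per-component LSSVR models.

```python
import numpy as np
from PyEMD import EMD                     # assumed: the PyEMD (EMD-signal) package
from sklearn.linear_model import Ridge    # stand-in for the LSSVR component model

rng = np.random.default_rng(0)
t = np.arange(500)
rate = 1.3 + 0.05 * np.sin(t / 20) + 0.002 * t + 0.01 * rng.normal(size=500)

components = EMD().emd(rate)              # IMFs plus residue, stacked row-wise

def one_step_forecast(series, lags=5):
    """Fit y_t ~ (y_{t-1}, ..., y_{t-lags}) and predict the next value."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    model = Ridge(alpha=1.0).fit(X, y)
    return model.predict(series[-lags:].reshape(1, -1))[0]

forecast = sum(one_step_forecast(c) for c in components)  # aggregate the parts
print(round(forecast, 4))
```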
APA, Harvard, Vancouver, ISO, and other styles
44

HONG, LING-CHUANG, and 洪令莊. "Text Mining, Google Trends Keywords and Least Squares Support Vector Regression in Forecasting Stock Prices." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/41015037127421933376.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Information Management
104
In this study, the values of three stock market indices, the Dow Jones Industrial Average, the Nasdaq Composite, and the Russell 2000, are predicted. Traditionally, time series models were applied to forecast stock markets without considering external factors. This study uses a least squares support vector regression (LSSVR) model with hybrid data containing historical data and Google Trends keywords to forecast stock markets, and proposes two ways to select the Google Trends keywords: the first selects popular keywords on the Google Trends homepage, and the second is based on the text of Twitter. A three-stage experiment architecture is proposed to forecast stock markets, and the autoregressive integrated moving average (ARIMA) model is used to predict the time series data of the stock markets. Numerical results show that the proposed model is a feasible way of predicting stock markets.
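The hybrid input is simply lagged index values concatenated with a keyword search-volume feature. The sketch below assembles such a matrix from invented series; kernel ridge regression, which shares LSSVR's closed form up to the bias term, stands in for the LSSVR model.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge   # closed-form cousin of LSSVR

rng = np.random.default_rng(0)
n, lags = 300, 5
index = np.cumsum(rng.normal(size=n)) + 100          # invented index levels
trend = np.clip(rng.normal(50, 10, n), 0, 100)       # invented keyword volume

X = np.column_stack(
    [np.column_stack([index[i:n - lags + i] for i in range(lags)]),  # price lags
     trend[lags - 1:n - 1, None]])                   # yesterday's search volume
y = index[lags:]

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X[:-50], y[:-50])
rmse = np.sqrt(np.mean((model.predict(X[-50:]) - y[-50:]) ** 2))
print(round(rmse, 3))
```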
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Chia-Li, and 陳佳莉. "Hybrid Logarithm Least-Squares Support Vector Regression with Cautious Random Particle Swarm Optimization for Mortality Prediction." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/24xp64.

Full text
Abstract:
Doctoral dissertation
Yuan Ze University
Department of Information Management
105
Intensive care is very important in modern health care, and mortality prediction models are good outcome predictors for intensive care and resource allocation. Much research has used information technology to construct new mortality prediction models. Healthcare professionals need to utilize intensive care resources effectively, and mortality prediction models help physicians decide which patients require intensive care the most and which do not. There have been several approaches to constructing mortality prediction models. The first is the tendency-of-mortality approach, whose models score the severity of a patient's condition; the Glasgow Coma Scale (GCS) is one such severity model. These models focus on the tendency toward mortality and are pure scoring systems rather than probability systems. The second is the probability approach: APACHE II and SAPS II are the two most popular models, constructed with probit regression and using probabilities to describe mortality outcomes; the Mortality Probability Model 2nd version (MPM II) is another ICU outcome prediction model based on probabilities. Artificial intelligence technologies have since been built into mortality prediction models, forming the basis of the information technology approach; these models are more accurate than the traditional ones. This study retrospectively collected data on 695 patients admitted to intensive care units and constructed a novel mortality prediction model with logarithm least-squares support vector regression (LLS-SVR) and cautious random particle swarm optimization (CRPSO). LLS-SVR-CRPSO was employed to optimally select the parameters of the hybrid system. LLS-SVR has been applied to forecasting problems in various fields, including bioinformatics, financial time series, electronics, plastic injection molding, chemistry, and cost estimation. CRPSO adjusts pbest and gbest using random values, but conditions the random value to avoid premature convergence into a local optimum: if a random value is greater than 0.8, another random value is chosen, and the range of movement (cautious flow) is controlled to avoid premature convergence. This new mortality model can offer agile support for physicians' intensive care decision-making.
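The one concrete mechanism the abstract spells out is CRPSO's cautious resampling rule: any random coefficient above 0.8 is redrawn before the velocity update. A toy PSO on a simple test function showing just that rule follows; all other settings are ordinary textbook defaults, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(0)

def cautious_rand(shape):
    """Redraw any random value above 0.8 (the CRPSO 'cautious' rule)."""
    r = rng.random(shape)
    while np.any(r > 0.8):
        r = np.where(r > 0.8, rng.random(shape), r)
    return r

def sphere(x):                      # simple test objective
    return (x ** 2).sum(axis=-1)

n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
pbest, pval = x.copy(), sphere(x)
gbest = pbest[pval.argmin()]

for _ in range(100):
    r1, r2 = cautious_rand((n, dim)), cautious_rand((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = sphere(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    gbest = pbest[pval.argmin()]

print(np.round(gbest, 4))           # near the optimum at the origin
```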
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Yi-Ting, and 黃怡婷. "Using Social Media Data and the Least Squares Support Vector Regression to Predict Movie Box Office." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/8b2m49.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Information Management
106
With today's increasingly busy lives and the easy accessibility of the Internet, social networking sites have flourished and their user numbers have increased dramatically year by year. This study used Twitter, one of the top 10 global social networking websites in 2017, as the source of data for sentiment analysis; the other data were collected from the movie websites Box Office Mojo and IMDb (Internet Movie Database). This study uses least squares support vector regression (LSSVR) and three other models, multivariate linear regression (MLR), back-propagation neural networks (BPNN), and the general regression neural network (GRNN), to analyze the data, with a cross-validation procedure. The numerical results indicate that the mean absolute percentage error (MAPE) of the sentiment data combined with structured data is lower than that generated by either single data source (sentiment data or structured data). In addition, the prediction results of the LSSVR model are better than those of the other models.
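The comparison criterion throughout is MAPE; for reference, a short implementation with invented box-office numbers:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100 * np.mean(np.abs((actual - predicted) / actual))

print(round(mape([120, 80, 200], [110, 95, 210]), 2))  # illustrative values
```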
APA, Harvard, Vancouver, ISO, and other styles
47

Ha, Truong Thi Thu. "Sliding-Window Forecasting of Foreign Exchange Rates with Nature-inspired Metaheuristic Optimization-based Least Squares Support Vector Regression." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/02418595033041254080.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
104
The forecasting of exchange rates has become a challenging area of research that has attracted many researchers over recent years. This work presents a sliding-window metaheuristic optimization-based forecast system for one-step ahead forecasting. The proposed system is a graphical user interface, which is developed in the MATLAB environment and functions as a stand-alone application. The system integrates the novel firefly algorithm (FA), metaheuristic (Meta) intelligence, and least squares support vector regression (LSSVR), namely MetaFA-LSSVR. The MetaFA automatically tunes the hyperparameters of the LSSVR to construct an optimal LSSVR prediction model. The optimization effectiveness of the MetaFA is verified using ten benchmark functions. Two case studies on the daily Canadian dollar-USD exchange rate (CAN/USD) and the four-hour closing EUR-USD rates (EUR/USD) were used to confirm the performance of the system, in which the mean absolute percentage errors are 0.2532% and 0.169%, respectively. The MetaFA-LSSVR has an 89.8-99.7% greater predictive accuracy than prior work when applied to the currency pair CAN/USD. With respect to the EUR/USD exchange rate, the error rates obtained using the proposed system were up to 23.9% better than those obtained by the LSSVR system. Therefore, the sliding-window metaheuristic system is potentially useful for decision-makers in financial markets.
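A sliding-window, one-step-ahead scheme refits the model on only the most recent observations before each prediction. A compact sketch follows; the window size, lag count, and the kernel ridge stand-in for the MetaFA-tuned LSSVR are illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge   # stand-in for the tuned LSSVR

rng = np.random.default_rng(0)
rate = 1.3 + 0.05 * np.sin(np.arange(400) / 15) + 0.003 * rng.normal(size=400)

window, lags, preds = 120, 4, []
for t in range(window + lags, 400):
    seg = rate[t - window - lags:t]            # most recent observations only
    X = np.column_stack([seg[i:len(seg) - lags + i] for i in range(lags)])
    y = seg[lags:]
    m = KernelRidge(kernel="rbf", alpha=1e-3, gamma=5.0).fit(X, y)
    preds.append(m.predict(rate[t - lags:t].reshape(1, -1))[0])  # one step ahead

errors = np.abs(np.array(preds) - rate[window + lags:])
print("MAPE %:", round(100 * np.mean(errors / rate[window + lags:]), 3))
```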
APA, Harvard, Vancouver, ISO, and other styles
48

王馴瀚. "Kernel dynamical transaction model:Using Support Vector Regression and Kernel Partial Least Squares Regression to provide more TXO information for an investor." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/51769959411923767463.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Department of Business Education
96
Because TXO (TAIEX options) has many merits, it is attractive to investors. In recent years, many scholars have applied data mining technology in the financial domain, and many researchers have shown that it is effective. Motivated by this, this research uses support vector regression (SVR) and kernel partial least squares regression (KPLSR) to overcome the problems of financial time series and studies the very popular TXO. The purpose of this research is to find a model that can give an investor more information, namely using SVR or KPLSR to predict the TXO price. Such predictions are advantageous to an investor, who can use them to make better investment decisions, increasing the probability of profit and decreasing the relative risk; according to the predicted TXO price, an investor can adopt an advantageous transaction strategy to implement arbitrage. The empirical results are as follows: [1] when the model is accurate about the direction of variation, its predicted value is not closer to the actual value; [2] when the predicted value is closer to the actual value, the model is not accurate about the direction of variation; [3] an investor who mainly wants to know the level of tomorrow's actual value should adopt the KPLSR model as an auxiliary decision tool; [4] an investor who mainly wants to know the direction of variation of tomorrow's actual value should adopt the SVR model as an auxiliary decision tool; [5] an investor who can know the direction of variation of tomorrow's actual value from a model can earn arbitrage profits by buying or selling TXO. Thus, an investor cares more about the direction of variation of tomorrow's actual value than about how close the predicted value is to the actual value, and in practical value the SVR model is higher than the KPLSR model. Keywords: support vector regression (SVR), kernel partial least squares regression (KPLSR), TAIEX options (TXO).
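The trade-off reported above involves two different scores: a level error such as RMSE and the directional accuracy, the fraction of steps whose predicted change has the correct sign. Both are short to implement; the prices below are invented.

```python
import numpy as np

def rmse(actual, pred):
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(pred)) ** 2))

def directional_accuracy(actual, pred):
    """Share of steps where the predicted change has the same sign as the real one."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return np.mean(np.sign(np.diff(a)) == np.sign(np.diff(p)))

actual = [100, 102, 101, 105, 104]          # invented option prices
pred   = [101, 103, 102, 104, 105]
print(rmse(actual, pred), directional_accuracy(actual, pred))
```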
APA, Harvard, Vancouver, ISO, and other styles
49

Leal, Ana Catarina Lacerda. "Estimation of petroleum products properties based on combined NIR and high resolution 1H NMR technologies." Doctoral thesis, 2020. http://hdl.handle.net/10773/28864.

Full text
Abstract:
Petroleum products are the result of several refining processes and operations. The blending of different petroleum fractions is the last operation in gasoline production and the mixing ratios are continuously adjusted to assure that the specifications are under the limits imposed by regulations. The quality control of gasoline and other petroleum products is performed at the laboratory of Matosinhos refinery and involve numerous analyses that are time-consuming and resource intensive. The main goal of this project is to develop methodologies capable of monitoring the physical-chemical properties of gasoline using both near-infrared and proton nuclear magnetic resonance spectroscopies. The existing correlation between the physical properties and the chemical composition of a sample allows the quantification of different physical-chemical properties using the chemical information given by the spectra. To achieve that, different chemometric methods need to be implemented to successfully relate the data resulting from spectroscopic analysis with the results obtained by the conventional standard methods. The benefits of spectroscopic analysis for these ends have been widely evidenced in the last decades. One of the main advantages of spectroscopic techniques is the real time data, which represents an important feature for refinery operation. The project began with the database creation. In this work, a total of 498 samples of gasoline were analysed by standard methods to determine the following physical-chemical properties: research octane number, motor octane number, olefinic content, aromatic content, methyl t-butyl ether content, benzene content, oxygen content, percent evaporated at 70 ºC, percent evaporated at 100 ºC, percent evaporated at 150 ºC, final boiling point, density, sulfur content, and vapour pressure. Additionally, for the same samples, the near-infrared and proton nuclear magnetic resonance spectra were recorded. The second part of this project involved the application of multivariate statistical methods to correlate the physical-chemical properties values with the quantitative information given by both spectra. Before the regression, spectral data is pre-processed to reduce/remove noise or less relevant information from spectra, improving the subsequent data analysis. Phase and baseline correction, scaling, spectral editing and binning, scatter corrections and spectral derivatives are the pre-processing techniques whose applicability was investigated in this project. Two different multivariate statistical regression methods were applied: partial least squares, which is one of the most frequently used methods for quantitative spectral analysis, and support vector machines, which is a machine learning based tool with great potential for regression applications. Besides, two alternative approaches to find the optimum complexity of the partial least squares’ models were compared. Analysing the performance results of the developed models, it was possible to conclude that both statistical methodologies, support vector regression and partial least squares, originate models that can accurately estimate most of gasoline physical-chemical properties, presenting comparable performance indexes. The potential of combining near-infrared and proton nuclear magnetic resonance spectroscopic data to improve the accuracy of predictions was investigated in this work by following two different procedures. 
The first consists in merging the data into one predictor block, and the second consists in modelling the two blocks separately. The results demonstrated that the models developed using the latter approach can outperform the models calibrated with only one type of spectroscopic data. At the end of this project, it was possible to confirm that both spectroscopic techniques combined with multivariate statistical models can be used to estimate the physical-chemical properties of petroleum products, namely gasoline. Combining data from both spectroscopic techniques gives even more accurate estimations compared with those obtained using only one type of spectral information. The accuracy of the predictions allowed the implementation of this technology in the laboratory of the Matosinhos refinery, replacing the conventional standard methods for the characterization of blends. This implementation allows a more flexible, prompt, and less resource-intensive process of constituting a gasoline lot.
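The first fusion strategy, concatenating the NIR and 1H NMR spectra into one predictor block and regressing a property on it with partial least squares, can be sketched with scikit-learn. The spectra and property values below are synthetic, and the number of latent variables would in practice be chosen by cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 120
nir = rng.normal(size=(n, 200))            # synthetic NIR absorbances
nmr = rng.normal(size=(n, 300))            # synthetic 1H NMR intensities
ron = 95 + nir[:, 10] - 0.5 * nmr[:, 42] + 0.1 * rng.normal(size=n)  # fake property

X = np.hstack([nir, nmr])                  # fusion: one merged predictor block
pls = PLSRegression(n_components=5).fit(X[:90], ron[:90])
pred = pls.predict(X[90:]).ravel()
print("RMSEP:", round(float(np.sqrt(np.mean((pred - ron[90:]) ** 2))), 3))
```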
Doctoral Programme in Refining, Petrochemical and Chemical Engineering
APA, Harvard, Vancouver, ISO, and other styles
50

Ambasana, Nikita. "Analysis, Diagnosis and Design for System-level Signal and Power Integrity in Chip-package-systems." Thesis, 2017. http://etd.iisc.ernet.in/2005/3609.

Full text
Abstract:
The Internet of Things (IoT) has ushered in an age where low-power sensors generate data which are communicated to a back-end cloud for massive data computation tasks. From the hardware perspective, this implies the co-existence of several power-efficient sub-systems working harmoniously at the sensor nodes, capable of communication, and high-speed processors in the cloud back-end. The package-board system-level design plays a crucial role in determining the performance of such low-power sensors and high-speed computing and communication systems. Although there exist several commercial solutions for electromagnetic and circuit analysis and verification, problem diagnosis and design tools are lacking, leading to longer design cycles and non-optimal system designs. This work aims at developing methodologies for faster analysis, sensitivity-based diagnosis, and multi-objective design towards signal integrity and power integrity of such package-board system layouts. The first part of this work aims at developing a methodology to enable faster and more exhaustive design space analysis. Electromagnetic analysis of packages and boards can be performed in the time domain, resulting in metrics like eye-height/width, and in the frequency domain, resulting in metrics like s-parameters and z-parameters. The generation of eye-height/width at higher bit error rates requires longer bit sequences in time domain circuit simulation, which is compute-time intensive. This work explores learning-based modelling techniques that rapidly map relevant frequency domain metrics, like differential insertion loss and crosstalk, to eye-height/width, therefore facilitating a full-factorial design space sweep. Numerical results obtained with an artificial neural network as well as a least square support vector machine on SATA 3.0 and PCIe Gen 3 interfaces generate less than 2% average error with order of magnitude speed-up in eye-height/width computation. Accurate power distribution network design is crucial for low-power sensors as well as cloud server boards that require multiple power supply levels. Achieving target power-ground noise levels for low-power, complex power distribution networks requires several design and analysis cycles. Although various classes of analysis tools, 2.5D and 3D, are commercially available, the availability of design tools is limited. In the second part of the thesis, a frequency domain mesh-based sensitivity formulation for DC and AC impedance (z-parameters) is proposed. This formulation enables diagnosis of the layout for maximum impact in achieving target specifications. This sensitivity information is also used for linear approximation of impedance profile updates for small mesh variations, enabling faster analysis. To enable the design of power delivery networks that achieve a target impedance, a mesh-based decoupling capacitor sensitivity formulation is presented. Such an analytical gradient is used in gradient-based optimization techniques to obtain an optimal set of decoupling capacitors with appropriate values and placement information in packages/boards for a given target impedance profile. Gradient-based techniques are far less expensive than the state-of-the-art evolutionary optimization techniques presently used for decoupling capacitor network design. In the last part of this work, the functional similarities between package-board design and radio frequency imaging are explored.
Qualitative inverse-solution methods common to the radio frequency imaging community, like Tikhonov regularization and Landweber methods, are applied to solve multi-objective, multi-variable signal integrity package design problems. Consequently, a novel Hierarchical Search Linear Back Projection algorithm is developed for an efficient solution in the design space using piecewise linear approximations. The presented algorithm is demonstrated to converge to the desired signal integrity specifications with minimum full-wave 3D solve iterations.
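The first contribution, learning a map from frequency-domain channel metrics such as insertion loss and crosstalk to time-domain eye height so that full-factorial sweeps become cheap, is a regression surrogate. The thesis used artificial neural networks and least squares SVMs on SATA 3.0 and PCIe Gen 3 data; in the hedged sketch below a small MLP stands in and the channel data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
insertion_loss = rng.uniform(2, 15, n)      # dB at Nyquist, synthetic
crosstalk = rng.uniform(-60, -20, n)        # dB, synthetic
eye_height = (0.8 - 0.03 * insertion_loss + 0.002 * (crosstalk + 60)
              + 0.01 * rng.normal(size=n))  # invented ground truth, in volts

X = np.column_stack([insertion_loss, crosstalk])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                   random_state=0)).fit(X[:400], eye_height[:400])
err = np.abs(model.predict(X[400:]) - eye_height[400:])
print("mean abs error (V):", round(float(err.mean()), 4))
```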
APA, Harvard, Vancouver, ISO, and other styles