Doctoral dissertations on the topic "PCA ALGORITHM"
Browse the top 50 doctoral dissertations on the topic "PCA ALGORITHM".
Petters, Patrik. "Development of a Supervised Multivariate Statistical Algorithm for Enhanced Interpretability of Multiblock Analysis". Thesis, Linköpings universitet, Matematiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138112.
Ergin, Emre. "Investigation Of Music Algorithm Based And Wd-pca Method Based Electromagnetic Target Classification Techniques For Their Noise Performances". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611218/index.pdf.
Romualdo, Kamilla Vogas. "Problemas direto e inverso de processos de separação em leito móvel simulado mediante mecanismos cinéticos de adsorção". Universidade do Estado do Rio de Janeiro, 2012. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6750.
Several important industrial applications involve adsorption processes, for example product purification, separation of substances, pollution control, and moisture removal. The growing interest in processes for the purification of biomolecules is mainly due to the development of biotechnology and the demand for pharmaceutical and chemical products of high purity. Simulated moving bed (SMB) chromatography is a continuous process in which the movement of the adsorbent bed is simulated, countercurrent to the movement of the liquid, by periodically switching the positions of the input and output streams; the unit is operated continuously while preserving the purity of the outlet streams. These are the extract, rich in the more strongly adsorbed component, and the raffinate, rich in the more weakly adsorbed component, the method being particularly suited to binary separations. The aim of this thesis is to study and evaluate different approaches using stochastic optimization methods for the inverse problem of the phenomena involved in the SMB separation process. We used discrete models with different approaches to mass transfer. With the benefit of a large number of theoretical plates in a column of moderate length, the separation increases as the solute flows through the bed, i.e., the more often the molecules interact with the mobile and stationary phases, the closer they come to equilibrium. The modeling and simulation carried out with these approaches allowed the assessment and identification of the main characteristics of an SMB separation unit. The application under consideration is the simulation of the separation of ketamine and baclofen. These compounds were chosen because they are well characterized in the literature and experimental results for their adsorption kinetics and equilibrium are available.
The experimental results were used to evaluate the behavior of the direct and inverse problems of an SMB separation unit and to compare the approaches, always based on criteria of separation efficiency between the mobile and stationary phases. The methods studied were the GA (Genetic Algorithm) and the PCA (Particle Collision Algorithm), and we also built a hybrid of the GA and PCA. In this thesis, we analyzed and compared the optimization methods with respect to different aspects of the kinetic mechanism for mass transfer between adsorption and desorption on the solid adsorbent phases.
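In the abstract above, PCA denotes the Particle Collision Algorithm, a physics-inspired metaheuristic, not principal component analysis. A minimal sketch of its absorption/scattering loop on a toy objective (the function names, parameters, and the exact perturbation and scattering rules below are our own illustration, not code from the thesis):

```python
import math
import random

def particle_collision(f, x0, bounds, n_iter=500, seed=0):
    """Minimal Particle Collision Algorithm sketch: perturb a candidate;
    absorb (accept) improvements, otherwise scatter (random restart)
    with a probability that grows with how much worse the trial was."""
    rng = random.Random(seed)
    lo, hi = bounds
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    for _ in range(n_iter):
        # perturbation: small random move, clamped to the search box
        y = [min(hi, max(lo, xi + rng.uniform(-0.1, 0.1) * (hi - lo))) for xi in x]
        fy = f(y)
        if fy < fx:                      # absorption: keep the better point
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        else:                            # scattering: probabilistic restart
            p_scatter = 1.0 - math.exp(-(fy - fx))
            if rng.random() < p_scatter:
                x = [rng.uniform(lo, hi) for _ in x]
                fx = f(x)
    return best, fbest

sphere = lambda v: sum(t * t for t in v)   # toy objective, minimum at the origin
sol, val = particle_collision(sphere, [2.0, -1.5], (-5.0, 5.0))
```

The same absorb-or-scatter skeleton is what makes PCA easy to hybridize with a GA: the scattering step can be replaced by sampling from a GA population.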
SINGH, BHUPINDER. "A HYBRID MSVM COVID-19 IMAGE CLASSIFICATION ENHANCED USING PARTICLE SWARM OPTIMIZATION". Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18864.
Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition". Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030619.162803.
Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition". Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365680.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Microelectronic Engineering
Rimal, Suraj. "POPULATION STRUCTURE INFERENCE USING PCA AND CLUSTERING ALGORITHMS". OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2860.
Katadound, Sachin. "Face Recognition: Study and Comparison of PCA and EBGM Algorithms". TopSCHOLAR®, 2004. http://digitalcommons.wku.edu/theses/241.
Perez Gallardo, Jorge Raúl. "Ecodesign of large-scale photovoltaic (PV) systems with multi-objective optimization and Life-Cycle Assessment (LCA)". PhD thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/10505/1/perez_gallardo_partie_1_sur_2.pdf.
Lacasse, Alexandre. "Bornes PAC-Bayes et algorithmes d'apprentissage". Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27635/27635.pdf.
The main purpose of this thesis is the theoretical study and the design of learning algorithms returning majority-vote classifiers. In particular, we present a PAC-Bayes theorem allowing us to bound the variance of the Gibbs loss (not only its expectation). We deduce from this theorem a bound on the risk of a majority vote that is tighter than the famous bound based on the Gibbs risk. We also present a theorem that allows us to bound the risk associated with general loss functions. From this theorem, we design learning algorithms building weighted majority-vote classifiers that minimize a bound on the risk associated with the following loss functions: linear, quadratic and exponential. We also present algorithms based on the randomized majority vote. Some of these algorithms compare favorably with AdaBoost.
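The relation between the Gibbs risk and the majority-vote risk mentioned in this abstract has a standard form in the PAC-Bayes literature. As a reminder, the generic textbook statements (not the tighter variance-based bounds derived in the thesis itself) are:

```latex
% Factor-two relation between the risks of the majority-vote (Bayes)
% classifier B_Q and the Gibbs classifier G_Q:
R(B_Q) \le 2\,R(G_Q)

% McAllester-style PAC-Bayes bound: for a prior P and m training
% examples, with probability at least 1-\delta over the sample S,
R(G_Q) \le \widehat{R}_S(G_Q)
  + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```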
Koutsogiannis, Grigorios. "Novel TDE demodulator and kernal-PCA denoising algorithms for improvement of reception of communication signal". Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401349.
Shanian, Sara. "Sample Compressed PAC-Bayesian Bounds and Learning Algorithms". Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29037/29037.pdf.
In classification, sample compression algorithms are the algorithms that make use of the available training data to construct the set of possible predictors. If the data belong to only a small subspace of the space of all "possible" data, such algorithms have the interesting ability of considering only the predictors that distinguish examples in our areas of interest. This is in contrast with non-sample-compressed algorithms, which have to consider the set of predictors before seeing the training data. The Support Vector Machine (SVM) is a very successful learning algorithm that can be considered a sample-compression learning algorithm. Despite its success, the SVM is currently limited by the fact that its similarity function must be a symmetric positive semi-definite kernel. This limitation by design makes the SVM hardly applicable to cases where one would like to be able to use any similarity measure of the input examples. PAC-Bayesian theory has been shown to be a good starting point for designing learning algorithms. In this thesis, we propose a PAC-Bayes sample-compression approach to kernel methods that can accommodate any bounded similarity function. We show that the support vector classifier is actually a particular case of sample-compressed classifiers known as majority votes of sample-compressed classifiers. We propose two different groups of PAC-Bayesian risk bounds for majority votes of sample-compressed classifiers. The first group of proposed bounds depends on the KL divergence between the prior and the posterior over the set of sample-compressed classifiers. The second group of proposed bounds has the unusual property of having no KL divergence when the posterior is aligned with the prior in some precise way that we define later in this thesis. Finally, for each bound, we provide a new learning algorithm that consists of finding the predictor that minimizes the bound.
The computation times of these algorithms are comparable with algorithms like the SVM. We also empirically show that the proposed algorithms are very competitive with the SVM.
Knapo, Peter. "Vývoj algoritmů pro digitální zpracování obrazu v reálním čase v DSP procesoru". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217872.
Zirakiza, Brice. "Forêts Aléatoires PAC-Bayésiennes". Master's thesis, Université Laval, 2013. http://hdl.handle.net/20.500.11794/24036.
In this master's thesis, we first present a state-of-the-art algorithm called Random Forests, introduced by Léo Breiman. This algorithm constructs a uniformly weighted majority vote of decision trees built using the CART algorithm without pruning. Thereafter, we introduce an algorithm that we call SORF. The SORF algorithm is based on the PAC-Bayes approach, which, in order to minimize the risk of the Bayes classifier, minimizes the risk of the Gibbs classifier with a regularizer. The risk of the Gibbs classifier is indeed a convex upper bound on the risk of the Bayes classifier. To find the distribution that would be optimal, the SORF algorithm reduces to a simple quadratic program minimizing the quadratic risk of the Gibbs classifier to seek a distribution Q over the base classifiers, which are trees of the forest. Empirical results show that SORF is generally almost as efficient as Random Forests, and in some cases it can even outperform Random Forests.
Germain, Pascal. "Algorithmes d'apprentissage automatique inspirés de la théorie PAC-Bayes". Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26191/26191.pdf.
At first, this master's thesis presents a general PAC-Bayes theorem, from which we can easily obtain some well-known PAC-Bayes bounds. Those bounds allow us to compute a guarantee on the risk of a classifier from its achievements on the training set. We analyze the behavior of two PAC-Bayes bounds and determine peculiar characteristics of the classifiers favoured by those bounds. Then, we present a specialization of those bounds to the family of linear classifiers. Secondly, we devise three new machine learning algorithms based on the minimization, by conjugate gradient descent, of various mathematical expressions of the PAC-Bayes bounds. The last algorithm uses a part of the training set to capture a priori knowledge. One can use those algorithms to construct majority-vote classifiers as well as linear classifiers implicitly represented by the kernel trick. Finally, an extensive empirical study compares the three algorithms and shows that some versions are competitive with both AdaBoost and SVM.
Inscribed on the Honour Roll of the Faculté des études supérieures
Awasthi, Pranjal. "Approximation Algorithms and New Models for Clustering and Learning". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/266.
Minotti, Gioele. "Sviluppo di algoritmi di machine learning per il monitoraggio stradale". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Carletti, Davide. "Applicazioni dell'analisi tensoriale delle componenti principali". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Znajdź pełny tekst źródłaBerlier, Jacob A. "A Parallel Genetic Algorithm for Placement and Routing on Cloud Computing Platforms". VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/2406.
Classon, Johan, and Viktor Andersson. "Procedural Generation of Levels with Controllable Difficulty for a Platform Game Using a Genetic Algorithm". Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129801.
Kini, Rohit Ravindranath. "Sensor Position Optimization for Multiple LiDARs in Autonomous Vehicles". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289597.
LiDAR, a 3D sensor, is widely used in the autonomous vehicle industry, but the LiDAR placement problem has not been studied extensively. This thesis proposes a framework in an open-source autonomous driving simulator (CARLA) aimed at solving the LiDAR placement problem, based on the tasks LiDAR serves in most autonomous vehicles. The placement problem is solved by improving the point-cloud density around the vehicle, which is computed with LiDAR Occupancy Boards (LOB). Introducing LiDAR occupancy as an objective function, a genetic algorithm is used to optimize the problem. The method can be extended to multiple-LiDAR placement problems, where a point-cloud registration algorithm (NDT) can also be used to find a better match with respect to the first, or reference, LiDAR. Several experiments are performed in simulation with different vehicles (a truck and a car), different LiDAR sensors (Velodyne 16- and 32-channel LiDAR), and varying regions of interest (ROI), to test the scalability and technical robustness of the framework. Finally, the framework is validated by comparing the current and proposed LiDAR positions on the truck.
Soukup, Jiří. "Metody a algoritmy pro rozpoznávání obličejů". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-374588.
Bacchielli, Tommaso. "Algoritmi di Machine Learning per il riconoscimento di attività umane da vibrazioni strutturali". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Xingwen, Ding, Zhai Wantao, Chang Hongyu and Chen Ming. "CMA BLIND EQUALIZER FOR AERONAUTICAL TELEMETRY". International Foundation for Telemetering, 2016. http://hdl.handle.net/10150/624262.
Davis, Daniel Jacob. "Achieving Six Sigma printed circuit board yields by improving incoming component quality and using a PCBA prioritization algorithm". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43831.
Includes bibliographical references (p. 139-143).
Printed circuit board assemblies (PCBAs) are the backbone of the electronics industry. PCBA technologies are keeping pace with Moore's Law and will soon enable the convergence of video, voice, data, and mobility onto a single device. With the rapid advancements in product and component technologies, manufacturing tests are being pushed to their limits as consumers demand higher-quality and more reliable electronics than ever before. Cisco Systems, Inc. (Cisco) currently manufactures over one thousand different types of PCBAs per quarter all over the world. Each PCBA in Cisco's portfolio has an associated design complexity determined by the number of interconnects, components, and other variables. PCBA manufacturing yields have historically been quite variable. In order to remain competitive, there is an imminent need to attain Six Sigma PCBA yields while controlling capital expenditures and innovating manufacturing test development and execution. Recently, Cisco kicked off the Test Excellence initiative to improve overall PCBA manufacturing yields, which provided the backdrop to this study. This thesis provides a first step on the journey to attaining Six Sigma PCBA manufacturing yields. Using Six Sigma techniques, two hypotheses are developed that will enable yield improvements: (1) PCBA yields can be improved by optimizing component selection across the product portfolio through analysis of component cost and quality levels, and (2) using the Six Sigma DMAIC (define-measure-analyze-improve-control) method and the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) algorithm, PCBA yields will improve by optimally prioritizing manufacturing resources on the most important PCBAs first.
The two analytical tools derived in this thesis will provide insights into how PCBA manufacturing yields can be improved today while enabling future yield improvements to occur.
by Daniel Jacob Davis.
S.M.
M.B.A.
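TOPSIS, named in the abstract above, ranks alternatives by their relative closeness to an ideal solution. A compact sketch of the method (the board criteria, weights, and numbers below are invented for illustration; only the technique itself is from the abstract):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: normalize, weight, measure the
    distance to the ideal and anti-ideal solutions, and score each row
    by relative closeness (higher = better)."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    # vector normalization per criterion, then apply criterion weights
    V = X / np.linalg.norm(X, axis=0) * w
    # ideal/anti-ideal points: best/worst per column, direction set by type
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# toy example: 3 boards scored on (yield impact, volume, test cost);
# cost is a "lower is better" criterion, hence benefit=False
scores = topsis([[0.9, 100, 5.0],
                 [0.7, 300, 2.0],
                 [0.5, 50, 8.0]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, False])
```

Sorting boards by descending score gives the prioritization order the abstract describes.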
Marques, Daniel Soares e. "Sistema misto reconfigurável aplicado à Interface PCI para Otimização do Algoritmo Non-local Means". Universidade Federal da Paraíba, 2012. http://tede.biblioteca.ufpb.br:8080/handle/tede/6075.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The digital image processing field is continually evolving and, although the application areas are diverse, the problems commonly found converge to methods capable of improving visual information for analysis and interpretation. A major limitation on image precision is noise, defined as a perturbation in the image. The Non-Local Means (NLM) method stands out as the state of the art in digital image denoising. However, its computational complexity is an obstacle to making it practical in general-purpose computing applications. This work presents a computer system, developed with parts implemented in software and in hardware attached to the PCI bus, to optimize the NLM algorithm using hardware acceleration techniques, allowing greater efficiency than is normally provided by general-purpose processors. The use of reconfigurable computing helped in developing the hardware system, allowing the described circuit to be modified in its usage environment and accelerating the implementation of the project. Using an FPGA prototyping kit for PCI to perform the dedicated calculation of the squared weighted Euclidean distance, the test results show a speed-up of up to 3.5 times over the compared optimization approaches, while also maintaining the visual quality of the denoising.
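The patch-distance computation at the heart of NLM can be sketched in plain software form as follows. This is a minimal version with uniform patch weighting and invented parameter names; the thesis accelerates the weighted variant of this distance in FPGA hardware:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Minimal Non-Local Means: each pixel becomes a weighted average of
    pixels in a search window, with weights derived from the squared
    Euclidean distance between the patches surrounding the two pixels."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s            # centre in padded image
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)  # squared patch distance
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * pad[ni, nj]
            out[i, j] = acc / wsum
    return out

noisy = np.eye(8) + 0.05
den = nlm_denoise(noisy)
```

The four nested loops make the quadratic cost per pixel obvious, which is exactly why the thesis offloads the distance computation to hardware.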
Wessman, Filip. "Advanced Algorithms for Classification and Anomaly Detection on Log File Data : Comparative study of different Machine Learning Approaches". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-43175.
Della Chiesa, Enrico. "Implementazione Tensorflow di Algoritmi di Anomaly Detection per la Rilevazione di Intrusioni Mediante Signals of Opportunity (SoOP)". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Van der Walt, Marizelle. "Investigating the empirical relationship between oceanic properties observable by satellite and the oceanic pCO₂ / Marizelle van der Walt". Thesis, North-West University, 2011. http://hdl.handle.net/10394/9536.
Thesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2012
Cazelles, Elsa. "Statistical properties of barycenters in the Wasserstein space and fast algorithms for optimal transport of measures". Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0125/document.
This thesis focuses on the analysis of data in the form of probability measures on R^d. The aim is to provide a better understanding of the usual statistical tools on this space endowed with the Wasserstein distance. First-order statistical analysis is a natural notion to consider, consisting of the study of the Fréchet mean (or barycenter). In particular, we focus on the case of discrete data (or observations) sampled from probability measures that are absolutely continuous (a.c.) with respect to the Lebesgue measure. We thus introduce an estimator of the barycenter of random measures, penalized by a convex function, making it possible to enforce its absolute continuity. Another estimator is regularized by adding entropy when computing the Wasserstein distance. We are particularly interested in controlling the variance of these estimators. Thanks to these results, the principle of Goldenshluger and Lepski allows us to obtain an automatic calibration of the regularization parameters. We then apply this work to the registration of multivariate densities, especially for flow cytometry data. We also propose a test statistic that can compare two multivariate distributions efficiently in terms of computational time. Finally, we perform a second-order statistical analysis to extract the global geometric tendency of a dataset, also called the main modes of variation. For that purpose, we propose algorithms allowing us to carry out a geodesic principal component analysis in the Wasserstein space.
Jošth, Radovan. "Využití GPU pro algoritmy grafiky a zpracování obrazu". Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-261274.
Björklund, Oscar. "Kompakthet av procedurellt genererade grottsystem : En jämförelse av procedurellt genererade grottsystem". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-12355.
Uyanik, Basar. "Cell Formation: A Real Life Application". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606635/index.pdf.
Durán Alcaide, Ángel. "Development of high-performance algorithms for a new generation of versatile molecular descriptors. The Pentacle software". Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7201.
Pełny tekst źródłaEl trabajo que se presenta en esta tesis se ha centrado en el desarrollo de algoritmos de altas prestaciones para la obtención de una nueva generación de descriptores moleculares, con numerosas ventajas con respecto a sus predecesores, adecuados para diversas aplicaciones en el área del diseño de fármacos, y en su implementación en un programa científico de calidad comercial (Pentacle). Inicialmente se desarrolló un nuevo algoritmo de discretización de campos de interacción molecular (AMANDA) que permite extraer eficientemente las regiones de máximo interés. Este algoritmo fue incorporado en una nueva generación de descriptores moleculares independientes del alineamiento, denominados GRIND-2. La rapidez y eficiencia del nuevo algoritmo permitieron aplicar estos descriptores en cribados virtuales. Por último, se puso a punto un nuevo algoritmo de codificación independiente de alineamiento (CLACC) que permite obtener modelos cuantitativos de relación estructura-actividad con mejor capacidad predictiva y mucho más fáciles de interpretar que los obtenidos con otros métodos.
Parks, Jeremy. "A Texas Instruments C33 DSP PCI platform for high-speed real-time implementation of IEEE802.11a Wireless LAN algorithms". [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0002880.
Pełny tekst źródłaBagi, Ligia Bariani. "Algoritmo treansgen?tico na solu??o do problema do Caixeiro Viajante". Universidade Federal do Rio Grande do Norte, 2007. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18112.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The Traveling Purchaser Problem is a variant of the Traveling Salesman Problem where there is a set of markets and a set of products. Each product is available in a subset of markets and its unit cost depends on the market where it is available. The objective is to buy all the products, departing from and returning to a depot, at the least possible cost, defined as the sum of the weights of the edges in the tour and the cost paid to acquire the products. A Transgenetic Algorithm, an evolutionary algorithm based on endosymbiosis, is applied to the capacitated and uncapacitated versions of this problem. Evolution in Transgenetic Algorithms is simulated through the interaction and information sharing between populations of individuals from distinct species. The computational results show that this is a very effective approach for the TPP regarding solution quality and runtime. Seventeen and nine new best results are presented for instances of the capacitated and uncapacitated versions, respectively.
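The TPP objective described above (tour length plus, for each product, the cheapest price among the visited markets) can be evaluated as follows. The data layout and names are our own; the abstract only fixes the objective function:

```python
def tpp_cost(tour, dist, offers):
    """Objective of the Traveling Purchaser Problem: length of the tour
    (starting and ending at the depot, node 0) plus, for each product,
    the cheapest price among the visited markets."""
    route = [0] + list(tour) + [0]
    travel = sum(dist[u][v] for u, v in zip(route, route[1:]))
    visited = set(route)
    purchase = 0.0
    for prices in offers:  # prices: {market: unit cost} for one product
        purchase += min(c for m, c in prices.items() if m in visited)
    return travel + purchase

# toy instance: depot 0 and markets 1, 2; two products
dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
offers = [{1: 4.0, 2: 1.0},   # product A sold at markets 1 and 2
          {1: 3.0}]           # product B sold only at market 1
total = tpp_cost([1], dist, offers)   # visit market 1 only -> 4 + 4 + 3
```

Any tour that misses a market selling an otherwise unavailable product raises a `ValueError` from the empty `min`, which is one simple way to encode feasibility.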
Gagliardi, Raphael Luiz. "Aplicação de Inteligência Computacional para a Solução de Problemas Inversos de Transferência Radiativa em Meios Participantes Unidimensionais". Universidade do Estado do Rio de Janeiro, 2010. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7543.
This research addresses the solution of the inverse radiative transfer problem for a participating medium (emitting, absorbing and/or scattering) that is homogeneous and one-dimensional, in one layer, using a combination of an artificial neural network (ANN) with optimization techniques. The output of the properly trained ANN presents the values of the radiative properties [w, to, p1 and p2], which are then optimized through the following techniques: Particle Collision Algorithm (PCA), Genetic Algorithm (GA), Greedy Randomized Adaptive Search Procedure (GRASP) and Tabu Search (TS). The data used in the training are synthetic, generated through the direct problem without the introduction of noise. The results obtained by the ANN alone present an average percentage error below 1.64%, which would already be satisfactory; however, after treatment with the four optimization techniques mentioned above, the results become even better, with percentage errors below 0.03%, especially when the optimization is performed by the GA.
Ranjitkar, Hari Sagar, and Sudip Karki. "Comparison of A*, Euclidean and Manhattan distance using Influence map in MS. Pac-Man". Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11800.
Pełny tekst źródłaBountourelis, Theologos. "Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digaphs". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28144.
Committee Chair: Reveliotis, Spyros; Committee Member: Ayhan, Hayriye; Committee Member: Goldsman, Dave; Committee Member: Shamma, Jeff; Committee Member: Zwart, Bert.
Karlsson, Albin. "Evaluation of the Complexity of Procedurally Generated Maze Algorithms". Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16839.
Helge, Adam. "Procedurell Generering - Rum och Korridorer : En jämförelse av BSP och Bucks algoritm som metoder för procedurell generering av dungeons". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15288.
Audibert, Jean-Yves. "Théorie statistique de l'apprentissage : une approche PAC-Bayésienne". Paris 6, 2004. http://www.theses.fr/2004PA066003.
Morandini, Jacques. "Contribution à la résolution multi-méthode des équations aux dérivées partielles couplées rencontrées en magnéto-thermo-hydrodynamique". Grenoble INPG, 1994. http://www.theses.fr/1994INPG0068.
Dahl, David, and Oscar Pleininger. "A Comparative Study of Representations for Procedurally Generated Structures in Games". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20665.
Alfredsson, Jon. "Design of a parallel A/D converter system on PCB : For high-speed sampling and timing error correction". Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1201.
The goal for most of today's receiver systems is sampling at high speed, with high resolution and with as few errors as possible. This master thesis describes the design of a high-speed sampling system with "state-of-the-art" components available on the market. The system is designed with a parallel analog-to-digital converter (ADC) architecture, also called time interleaving, which aims to increase the sampling speed of the system. The system described in this report uses four 12-bit ADCs in parallel. Each ADC can sample at 125 MHz, so the total sampling speed theoretically becomes 500 MS/s. The system has been implemented and manufactured on a printed circuit board (PCB). Up to four boards can be connected in parallel for a theoretical 2 GS/s.
In an approach to increase the system's performance even further, a timing error estimation algorithm is applied to the sampled data. This algorithm estimates the timing errors that occur when sampling with non-uniform time intervals between samples. After the estimation, the sampling clocks can be adjusted to correct the errors.
This thesis covers ADC theory, system design and PCB implementation. It also describes how to test and measure the system's performance. No measurement results are presented in this thesis because measurements will be done after this project. The last part of the thesis discusses future improvements to achieve even higher performance.
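The time-interleaving scheme described in this abstract (four ADCs, each taking every fourth sample) and the effect of clock skew can be sketched generically. The 4 × 125 MHz rates follow the thesis, but the test signal, skew values and code below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

# Four-way time interleaving: each of four ADCs samples every 4th point;
# with ideal timing the merged stream equals uniform sampling at 4x rate.
fs_total = 500e6          # aggregate rate: 4 channels x 125 MHz
n = 4096                  # number of merged samples (assumed)
f_in = 10e6               # assumed test-tone frequency
t = np.arange(n) / fs_total

def interleave(skews):
    """Merge four channel streams; `skews` are per-channel timing errors [s]."""
    x = np.empty(n)
    for ch in range(4):
        idx = np.arange(ch, n, 4)                     # this channel's slots
        x[idx] = np.sin(2 * np.pi * f_in * (t[idx] + skews[ch]))
    return x

ideal = interleave(np.zeros(4))
skewed = interleave(np.array([0.0, 40e-12, -25e-12, 60e-12]))  # assumed skews
# Timing mismatch between channels adds error power (spurious tones):
err_rms = np.sqrt(np.mean((skewed - ideal) ** 2))
```

A timing-error estimation algorithm of the kind the thesis mentions would work backwards from such distortion in the merged stream to per-channel skew estimates, which are then used to adjust the sampling clocks.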
Elhadji, Ille Gado Nassara. "Méthodes aléatoires pour l’apprentissage de données en grande dimension : application à l'apprentissage partagé". Thesis, Troyes, 2017. http://www.theses.fr/2017TROY0032.
This thesis deals with the study of random methods for learning from large-scale data. First, we propose an unsupervised approach consisting of the estimation of the principal components when the sample size and the observation dimension tend towards infinity. This approach is based on random matrix theory and uses consistent estimators of the eigenvalues and eigenvectors of the covariance matrix. Then, in the case of supervised learning, we propose an approach that consists of reducing the dimension through an approximation of the original data matrix and then performing LDA in the reduced space. The dimension reduction is based on low-rank matrix approximation using random matrices. A fast approximation algorithm for the SVD, and a modified version as a fast approximation by spectral gap, are developed. Experiments are carried out with real image and text data. Compared to other methods, the proposed approaches provide an error rate that is often optimal, with a small computation time. Finally, our contribution to transfer learning consists of the use of subspace alignment and the low-rank approximation of matrices by random projections. The proposed method is applied to data derived from benchmark databases; it has the advantage of being efficient and adapted to large-scale data.
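Low-rank approximation by random projections, as mentioned in this abstract, is commonly realized as a randomized SVD: project onto a random subspace, orthonormalize, then solve a small exact SVD. A minimal generic sketch (not the thesis algorithm; the function name and oversampling choice are assumptions):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD of A via random projection.

    Returns U (m x k), s (k,), Vt (k x n) with A ~= U @ diag(s) @ Vt.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Random test matrix: k extra columns improve subspace capture.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega                         # sample the range of A
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for that range
    B = Q.T @ A                           # small (k+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                            # lift back to the original space
    return U[:, :k], s[:k], Vt[:k]
```

The expensive full SVD of an m × n matrix is replaced by a QR factorization and an SVD of a much smaller matrix, which is what makes such schemes attractive for large-scale data.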
Bennett, Casey. "Channel Noise and Firing Irregularity in Hybrid Markov Models of the Morris-Lecar Neuron". Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1441551744.
Dobossy, Barnabás. "Odhad parametrů jezdce na vozítku segway a jejich použití pro optimalizaci řídícího algoritmu". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-399406.
Figueiredo, António José Pereira de. "Energy efficiency and comfort strategies for Southern European climate : optimization of passive housing and PCM solutions". Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17291.
Pursuing holistic sustainable solutions towards the target defined by the United Nations Framework Convention on Climate Change (UNFCCC) is a stimulating goal. Exploring and tackling this task leads to a broad number of possible combinations of energy-saving strategies that can be bridged by the Passive House (PH) concept and, in this context, the use of advanced materials such as Phase Change Materials (PCM). Acknowledging that the PH concept is well established and practiced mainly in the cold-climate countries of Northern and Central Europe, the present research investigates how the construction technology and energy demand levels can be adapted to Southern Europe, in particular to the climate of mainland Portugal. In Southern Europe, in addition to meeting the heating requirements fairly easily, it is crucial to provide comfortable conditions during summer, due to a high risk of overheating. The incorporation of PCMs into building solutions, making use of solar energy to drive their phase change process, is a potential solution for the overall reduction of energy consumption and of the overheating rate in buildings. The PH concept and the use of PCM need to be adapted and optimised to work together with other active and passive systems, improving the overall thermal behaviour of the building and reducing energy consumption. Thus, a hybrid evolutionary algorithm was used to optimise the application of the PH concept to the Portuguese climate through the study of the combination of several building features, as well as constructive solutions incorporating PCMs, minimising multi-objective benchmark functions to attain the defined goals.
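Multi-objective optimization of the kind described in this abstract (e.g. trading heating demand against overheating risk) is usually assessed through Pareto dominance. A minimal, generic sketch (not the thesis code) of extracting the non-dominated set of candidate solutions:

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows of `costs`, an (n_points, n_objectives)
    array where every objective is to be minimized."""
    costs = np.asarray(costs, dtype=float)
    front = []
    for i in range(len(costs)):
        # Point j dominates i if j is no worse in every objective
        # and strictly better in at least one.
        dominates_i = (np.all(costs <= costs[i], axis=1)
                       & np.any(costs < costs[i], axis=1))
        if not dominates_i.any():
            front.append(i)
    return front
```

An evolutionary algorithm would call such a dominance check each generation to rank candidate building designs before selection.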
Goyal, Anil. "Learning a Multiview Weighted Majority Vote Classifier : Using PAC-Bayesian Theory and Boosting". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES037/document.
With the tremendous generation of data, we collect data from different information sources with heterogeneous properties, so it is important to consider these representations, or views, of the data. This machine learning problem is referred to as multiview learning. It has many applications; for example, in medical imaging we can represent the human brain with different sets of features such as MRI, t-fMRI, EEG, etc. In this thesis, we focus on supervised multiview learning, where we see multiview learning as a combination of different view-specific classifiers or views. Therefore, from our point of view, it is interesting to tackle the multiview learning problem through the PAC-Bayesian framework, a tool derived from statistical learning theory for studying models expressed as majority votes. One of the advantages of PAC-Bayesian theory is that it allows one to directly capture the trade-off between accuracy and diversity among voters, which is important for multiview learning. The first contribution of this thesis is extending the classical PAC-Bayesian theory (with a single view) to multiview learning (with more than two views). To do this, we consider a two-level hierarchy of distributions over the view-specific voters and over the views. Based on this strategy, we derive PAC-Bayesian generalization bounds (both probabilistic and expected risk bounds) for multiview learning. From a practical point of view, we design two multiview learning algorithms based on our two-level PAC-Bayesian strategy. The first is a one-step boosting-based multiview learning algorithm called PB-MVBoost. It iteratively learns the weights over the views by optimizing the multiview C-Bound, which controls the trade-off between accuracy and diversity among the views. The second algorithm is based on a late fusion approach, where we combine the predictions of view-specific classifiers using the PAC-Bayesian algorithm CqBoost proposed by Roy et al.
Finally, we show that the minimization of the classification error for the multiview weighted majority vote is equivalent to the minimization of Bregman divergences. This allowed us to derive a parallel-update optimization algorithm (referred to as MωMvC2) to learn our multiview weighted majority vote.
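At prediction time, the two-level strategy described in this abstract reduces to a weighted majority vote over view-specific classifiers. A minimal sketch under assumed names and shapes (not the authors' implementation, which also learns the weights via the C-Bound or Bregman-divergence minimization):

```python
import numpy as np

def multiview_majority_vote(view_predictions, view_weights):
    """Weighted majority vote over view-specific classifiers.

    view_predictions: (n_views, n_samples) array of votes in {-1, +1},
                      one row per view-specific classifier.
    view_weights:     (n_views,) non-negative weights over the views
                      (e.g. learned by an algorithm such as PB-MVBoost).
    """
    w = np.asarray(view_weights, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    scores = w @ np.asarray(view_predictions, dtype=float)
    return np.sign(scores)                            # final vote per sample
```

The accuracy/diversity trade-off the abstract mentions lives in how the weights `w` are chosen; the vote itself is just this weighted sign.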