To view other types of publications on this topic, follow the link: PCA ALGORITHM.

Dissertations on the topic "PCA ALGORITHM"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "PCA ALGORITHM".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf and read its abstract online, whenever these details are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Petters, Patrik. "Development of a Supervised Multivariate Statistical Algorithm for Enhanced Interpretability of Multiblock Analysis." Thesis, Linköpings universitet, Matematiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138112.

Full text source
Abstract:
In modern biological research, OMICs techniques, such as genomics, proteomics or metabolomics, are often employed to gain deep insights into metabolic regulations and biochemical perturbations in response to a specific research question. To gain complementary biologically relevant information, multiOMICs, i.e., several different OMICs measurements on the same specimen, is becoming increasingly frequent. To be able to take full advantage of this complementarity, joint analysis of such multiOMICs data is necessary, but this is still an underdeveloped area. In this thesis, a theoretical background is given on general component-based methods for dimensionality reduction such as PCA, PLS for single-block analysis, and multiblock PLS for co-analysis of OMICs data. This is followed by a rotation of an unsupervised analysis method. The aim of this method is to divide dimensionality-reduced data into block-distinct and common variance partitions, using the DISCO-SCA approach. Finally, an algorithm for a similar rotation of a supervised (PLS) solution is presented using data available in the literature. To the best of our knowledge, this is the first time that such an approach for rotating a supervised analysis into block-distinct and common partitions has been developed and tested. This newly developed DISCO-PLS algorithm clearly showed an increased potential for visualisation and interpretation of data, compared to standard PLS. This is shown by biplots of observation scores and multiblock variable loadings.
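The component decomposition shared by these methods can be illustrated with a small sketch. The code below is a generic PCA step for a single data block computed by SVD, not the DISCO-PLS rotation developed in the thesis, and the data shapes are assumptions.

    # Generic PCA of one data block via SVD (a sketch, not the thesis algorithm).
    import numpy as np

    def pca_scores(X, n_components=2):
        """X: (observations x variables) data block; returns scores and loadings."""
        Xc = X - X.mean(axis=0)                              # mean-center each variable
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        scores = U[:, :n_components] * s[:n_components]      # observation scores
        loadings = Vt[:n_components].T                       # variable loadings
        return scores, loadings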
APA, Harvard, Vancouver, ISO, and other styles
2

Ergin, Emre. "Investigation of MUSIC Algorithm Based and WD-PCA Method Based Electromagnetic Target Classification Techniques for Their Noise Performances." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611218/index.pdf.

Full text source
Abstract:
Multiple Signal Classification (MUSIC) algorithm based and Wigner Distribution-Principal Component Analysis (WD-PCA) based classification techniques are very recently suggested resonance-region approaches for electromagnetic target classification. In this thesis, the performances of these two techniques will be compared concerning their robustness to noise and their capacity to handle a large number of candidate targets. In this context, classifier design simulations will be demonstrated for target libraries containing conducting and dielectric spheres and for dielectric-coated conducting spheres. Small-scale aircraft targets modeled by thin conducting wires will also be used in the classifier design demonstrations.
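For orientation, the core of the MUSIC family of techniques is a noise-subspace projection. The sketch below shows the generic pseudo-spectrum computation for a uniform linear array; it is only an illustration of the algorithm family, with an assumed array model, and not the resonance-region classifier built in the thesis.

    # Generic MUSIC pseudo-spectrum from a sample covariance matrix (illustrative only).
    import numpy as np

    def music_spectrum(R, n_sources, angles_deg):
        """R: (M x M) sensor covariance matrix; returns pseudo-spectrum over angles_deg."""
        M = R.shape[0]
        eigvals, eigvecs = np.linalg.eigh(R)                 # eigenvalues in ascending order
        En = eigvecs[:, : M - n_sources]                     # noise-subspace eigenvectors
        spectrum = []
        for theta in np.deg2rad(angles_deg):
            a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))  # half-wavelength ULA steering vector
            spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
        return np.array(spectrum)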
APA, Harvard, Vancouver, ISO, and other styles
3

Romualdo, Kamilla Vogas. "Problemas direto e inverso de processos de separação em leito móvel simulado mediante mecanismos cinéticos de adsorção." Universidade do Estado do Rio de Janeiro, 2012. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6750.

Full text source
Abstract:
Diversas aplicações industriais relevantes envolvem os processos de adsorção, citando como exemplos a purificação de produtos, separação de substâncias, controle de poluição e umidade entre outros. O interesse crescente pelos processos de purificação de biomoléculas deve-se principalmente ao desenvolvimento da biotecnologia e à demanda das indústrias farmacêutica e química por produtos com alto grau de pureza. O leito móvel simulado (LMS) é um processo cromatográfico contínuo que tem sido aplicado para simular o movimento do leito de adsorvente, de forma contracorrente ao movimento do líquido, através da troca periódica das posições das correntes de entrada e saída, sendo operado de forma contínua, sem prejuízo da pureza das correntes de saída. Esta consiste no extrato, rico no componente mais fortemente adsorvido, e no rafinado, rico no componente mais fracamente adsorvido, sendo o processo particularmente adequado a separações binárias. O objetivo desta tese é estudar e avaliar diferentes abordagens utilizando métodos estocásticos de otimização para o problema inverso dos fenômenos envolvidos no processo de separação em LMS. Foram utilizados modelos discretos com diferentes abordagens de transferência de massa, com a vantagem da utilização de um grande número de pratos teóricos em uma coluna de comprimento moderado, neste processo a separação cresce à medida que os solutos fluem através do leito, isto é, ao maior número de vezes que as moléculas interagem entre a fase móvel e a fase estacionária alcançando assim o equilíbrio. A modelagem e a simulação verificadas nestas abordagens permitiram a avaliação e a identificação das principais características de uma unidade de separação do LMS. A aplicação em estudo refere-se à simulação de processos de separação do Baclofen e da Cetamina. Estes compostos foram escolhidos por estarem bem caracterizados na literatura, estando disponíveis em estudos de cinética e de equilíbrio de adsorção nos resultados experimentais. De posse de resultados experimentais avaliou-se o comportamento do problema direto e inverso de uma unidade de separação LMS visando comparar os resultados obtidos com os experimentais, sempre se baseando em critérios de eficiência de separação entre as fases móvel e estacionária. Os métodos estudados foram o GA (Genetic Algorithm) e o PCA (Particle Collision Algorithm) e também foi feita uma hibridização entre o GA e o PCA. Como resultado desta tese analisouse e comparou-se os métodos de otimização em diferentes aspectos relacionados com o mecanismo cinético de transferência de massa por adsorção e dessorção entre as fases sólidas do adsorvente.
Several important industrial applications involve adsorption processes, for example product purification, separation of substances, and pollution and humidity control, among others. The growing interest in processes for the purification of biomolecules is mainly due to the development of biotechnology and to the demand of the pharmaceutical and chemical industries for products with high purity. Simulated moving bed (SMB) chromatography is a continuous process that has been applied to simulate the movement of the adsorbent bed, countercurrent to the movement of the liquid, through the periodic exchange of the positions of the inlet and outlet streams, so that it operates continuously without compromising the purity of the outlet streams. These consist of the extract, rich in the more strongly adsorbed component, and the raffinate, rich in the more weakly adsorbed component, the method being particularly suited to binary separations. The aim of this thesis is to study and evaluate different approaches using stochastic optimization methods for the inverse problem of the phenomena involved in the SMB separation process. We used discrete models with different approaches to mass transfer, with the benefit of using a large number of theoretical plates in a column of moderate length; in this process the separation increases as the solutes flow through the bed, i.e. with the number of times the molecules interact between the mobile phase and the stationary phase, thus reaching equilibrium. The modeling and simulation carried out with these approaches allowed the assessment and identification of the main characteristics of an SMB separation unit. The application under consideration refers to the simulation of the separation of Ketamine and Baclofen. These compounds were chosen because they are well characterized in the literature, with kinetic and adsorption-equilibrium studies available as experimental results. Using these experimental results, we evaluated the behavior of the direct and inverse problems of an SMB separation unit in order to compare the computed results with the experimental ones, always based on criteria of separation efficiency between the mobile and stationary phases. The methods studied were the GA (Genetic Algorithm) and the PCA (Particle Collision Algorithm), and a hybridization of the GA and the PCA was also developed. As a result of this thesis, the optimization methods were analyzed and compared in different aspects related to the kinetic mechanism of mass transfer by adsorption and desorption between the solid phases of the adsorbent.
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Bhupinder. "A Hybrid MSVM COVID-19 Image Classification Enhanced Using Particle Swarm Optimization." Thesis, Delhi Technological University, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18864.

Full text source
Abstract:
COVID-19 (novel coronavirus disease) is a serious illness that has killed and affected millions of people around the world. As a result, technologies that enable the rapid and accurate identification of COVID-19 illness can provide much assistance to healthcare practitioners. A machine learning based approach is used for the detection of COVID-19. In general, artificial intelligence (AI) approaches have yielded positive outcomes in healthcare visual processing and analysis. Chest X-ray (CXR) imaging is the digital image modality that plays a vital role in the analysis of COVID-19 disease. Owing to the wide availability of large-scale annotated image databases, considerable success has been achieved using multiclass support vector machines for image classification, and image classification is the main challenge in supporting medical diagnosis. Existing work used a CNN with a transfer learning mechanism, which provides a solution by transferring information from generic object recognition tasks; the DeTrac method has been used to detect the disease in CXR images and achieved an accuracy of about 93.1 to 97 percent. In the proposed work, the hybrid PSO+MSVM method deals with irregularities in the CXR image database by studying group distances using a group or class mechanism. At the initial phase of the process, a median filter is used for noise reduction in the image. Edge detection is an essential step in the process of COVID-19 detection, and the Canny edge detector is applied to the chest X-ray images. PCA (Principal Component Analysis) is implemented for the feature extraction phase, and the extracted features are then optimized with PSO (particle swarm optimization). For the detection of COVID-19 from CXR images, a hybrid multi-class support vector machine technique is implemented. A comparative analysis of various existing techniques is also presented in this work. The proposed system achieved an accuracy of 97.51 percent, a specificity (SP) of 97.49 percent and a sensitivity (SN) of 98.0 percent, and performed better than the systems it was compared with: DeTrac, GoogleNet, and SqueezeNet.
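A minimal sketch of the preprocessing and classification chain described above (median filter, Canny edges, PCA features, multi-class SVM) is given below. The PSO feature-selection step is only indicated by a comment, and all parameter values, helper names and data shapes are illustrative assumptions rather than the thesis implementation.

    # Sketch of a CXR pipeline: median filter -> Canny edges -> PCA -> multi-class SVM.
    import numpy as np
    from skimage.filters import median
    from skimage.feature import canny
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def extract_features(images):
        feats = []
        for img in images:                        # img: 2D grayscale chest X-ray array
            denoised = median(img)                # noise reduction
            edges = canny(denoised)               # binary edge map
            feats.append(edges.ravel().astype(float))
        return np.vstack(feats)

    def train_classifier(images, labels, n_components=50):
        X = extract_features(images)
        pca = PCA(n_components=n_components).fit(X)
        X_red = pca.transform(X)
        # ... a PSO step would select/weight the PCA features here ...
        return pca, SVC(kernel="rbf").fit(X_red, labels)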
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030619.162803.

Full text source
Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: a parameter extraction step and a feature extraction step. In the parameter extraction step, information relevant to pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed into a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two popular independent feature extraction algorithms. Both of them extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space to a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria are different from the classifier's minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade the performance of classifiers. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion. The Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers; it is a type of discriminative learning algorithm, but it achieves minimum classification error directly. The flexibility of the MCE framework makes it convenient to conduct feature extraction and classification jointly. Conventional feature extraction and pattern classification algorithms (LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier) are linear algorithms. The advantage of linear algorithms is their simplicity and ability to reduce feature dimensionalities. However, they have the limitation that the decision boundaries generated are linear and have little computational flexibility. SVM is a more recently developed integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classification that affords dot-products can be computed efficiently in higher-dimensional feature spaces. Classes which are not linearly separable in the original parametric space can be linearly separated in the higher-dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex non-linear decision boundaries. However, SVM is a highly integrated and closed pattern classification system, and it is very difficult to adopt feature extraction into its framework; thus SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction and proposes the application of MCE training algorithms for joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithms in joint feature extraction and classification tasks.
SVM, as a non-linear pattern classification system is also investigated in this thesis. A reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared firstly on a number of small databases, such as Deterding Vowels Database, Fisher’s IRIS database and German’s GLASS database. Then they are tested in a large-scale speech recognition experiment based on TIMIT database.
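The contrast between the two independent feature-extraction criteria can be seen in a few lines. This sketch uses Fisher's IRIS data mentioned in the thesis and scikit-learn implementations as stand-ins, not the author's code.

    # PCA (largest total variance) vs. LDA (largest between/within-class variance ratio).
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)
    X_pca = PCA(n_components=2).fit_transform(X)                             # unsupervised projection
    X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)   # supervised projection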
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365680.

Full text source
Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: a parameter extraction step and a feature extraction step. In the parameter extraction step, information relevant to pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed into a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two popular independent feature extraction algorithms. Both of them extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space to a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria are different from the classifier's minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade the performance of classifiers. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion. The Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers; it is a type of discriminative learning algorithm, but it achieves minimum classification error directly. The flexibility of the MCE framework makes it convenient to conduct feature extraction and classification jointly. Conventional feature extraction and pattern classification algorithms (LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier) are linear algorithms. The advantage of linear algorithms is their simplicity and ability to reduce feature dimensionalities. However, they have the limitation that the decision boundaries generated are linear and have little computational flexibility. SVM is a more recently developed integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classification that affords dot-products can be computed efficiently in higher-dimensional feature spaces. Classes which are not linearly separable in the original parametric space can be linearly separated in the higher-dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex non-linear decision boundaries. However, SVM is a highly integrated and closed pattern classification system, and it is very difficult to adopt feature extraction into its framework; thus SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction and proposes the application of MCE training algorithms for joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithms in joint feature extraction and classification tasks.
SVM, as a non-linear pattern classification system is also investigated in this thesis. A reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared firstly on a number of small databases, such as Deterding Vowels Database, Fisher’s IRIS database and German’s GLASS database. Then they are tested in a large-scale speech recognition experiment based on TIMIT database.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Microelectronic Engineering
Full Text
APA, Harvard, Vancouver, ISO, and other styles
7

Rimal, Suraj. "POPULATION STRUCTURE INFERENCE USING PCA AND CLUSTERING ALGORITHMS." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2860.

Full text source
Abstract:
Genotype data, consisting of large numbers of markers, are used in demographic and association studies to determine genes related to specific traits or diseases. Handling these datasets usually takes a significant amount of time when they are applied to population structure inference. We therefore suggest applying PCA to genotype data and then using clustering algorithms to assign individuals to their particular subpopulations. We collected both real and simulated datasets in this study. We applied PCA and selected significant features, then applied five different clustering techniques to obtain better results. Furthermore, we studied three different methods for predicting the optimal number of subpopulations in a collected dataset. The results on four different simulated datasets and two real human genotype datasets show that our approach performs well in the inference of population structure, and that NbClust is the most effective at inferring the number of subpopulations. In this study, we showed that centroid-based clustering, such as k-means and PAM, performs better than model-based, spectral, and hierarchical clustering algorithms. This approach also has the benefit of being fast and flexible in the inference of population structure.
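A compact sketch of this PCA-then-clustering workflow follows. The genotype matrix G (individuals by markers, coded as 0/1/2 allele counts), the number of components and the silhouette criterion used to pick k are illustrative assumptions; the thesis itself relies on tools such as NbClust for that choice.

    # PCA on genotypes followed by k-means, choosing k with a silhouette criterion.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def infer_subpopulations(G, max_k=8, n_components=10):
        pcs = PCA(n_components=n_components).fit_transform(G)
        best = (None, None, -1.0)                            # (k, labels, score)
        for k in range(2, max_k + 1):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pcs)
            score = silhouette_score(pcs, labels)
            if score > best[2]:
                best = (k, labels, score)
        return best[0], best[1]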
APA, Harvard, Vancouver, ISO, and other styles
8

Katadound, Sachin. "Face Recognition: Study and Comparison of PCA and EBGM Algorithms." TopSCHOLAR®, 2004. http://digitalcommons.wku.edu/theses/241.

Full text source
Abstract:
Face recognition is a complex and difficult process due to various factors such as variability of illumination, occlusion, face-specific characteristics like hair, glasses, beard, etc., and other problems commonly affecting computer vision. Using a system that offers robust and consistent results for face recognition, various applications such as identification for law enforcement, secure system access, and human-computer interaction can be automated successfully. Different methods exist to solve the face recognition problem. Principal component analysis, independent component analysis, and linear discriminant analysis are a few of the statistical techniques that are commonly used in solving the face recognition problem. Genetic algorithms, elastic bunch graph matching, artificial neural networks, etc. are a few of the other techniques that have been proposed and implemented. The objective of this thesis is to provide insight into the different methods available for face recognition and to explore methods that provide an efficient and feasible solution. Factors affecting the result of face recognition and the preprocessing steps that eliminate such abnormalities are also discussed briefly. Principal Component Analysis (PCA) has been the most efficient and reliable method known for at least the past eight years. The elastic bunch graph matching (EBGM) technique is one of the promising techniques that we studied in this thesis work, and we found better results with the EBGM method than with PCA. We recommend the use of a hybrid technique involving the EBGM algorithm to obtain better results. Although the EBGM method took longer than PCA to train and to generate distance measures for the given gallery images, we obtained better cumulative match score (CMS) results for EBGM than for the PCA method. Other promising techniques that can be explored separately in other work include genetic algorithm based methods, mixtures of principal components, and Gabor wavelet techniques.
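For reference, the PCA ("eigenfaces") side of this comparison reduces to projecting faces onto the leading principal components and matching by nearest neighbour. The sketch below is a generic illustration with assumed image and gallery formats, not the thesis implementation.

    # Eigenfaces: PCA of flattened face images and nearest-neighbour matching.
    import numpy as np

    def train_eigenfaces(gallery, n_components=20):
        """gallery: (n_images, n_pixels) matrix of aligned, flattened face images."""
        mean = gallery.mean(axis=0)
        _, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
        eigenfaces = Vt[:n_components]                       # leading principal components
        weights = (gallery - mean) @ eigenfaces.T            # gallery projections
        return mean, eigenfaces, weights

    def match(probe, mean, eigenfaces, weights):
        w = (probe - mean) @ eigenfaces.T
        return int(np.argmin(np.linalg.norm(weights - w, axis=1)))   # index of closest gallery face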
APA, Harvard, Vancouver, ISO, and other styles
9

Perez, Gallardo Jorge Raúl. "Ecodesign of large-scale photovoltaic (PV) systems with multi-objective optimization and Life-Cycle Assessment (LCA)." Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/10505/1/perez_gallardo_partie_1_sur_2.pdf.

Full text source
Abstract:
Because of the increasing worldwide demand for energy and the damage caused by the heavy use of fossil sources, the contribution of renewable energies has been increasing significantly in the global energy mix, with the aim of moving towards more sustainable development. In this context, this work aims at the development of a general methodology for designing PV systems based on ecodesign principles, taking into account both techno-economic and environmental considerations simultaneously. In order to evaluate the environmental performance of PV systems, an environmental assessment technique based on Life Cycle Assessment (LCA) was used. The environmental model was successfully coupled with the design-stage model of a PV grid-connected system (PVGCS). The PVGCS design model was then developed, involving the estimation of the solar radiation received in a specific geographic location, the calculation of the annual energy generated from this solar radiation, the characteristics of the different components, and the evaluation of the techno-economic criteria through Energy PayBack Time (EPBT) and PayBack Time (PBT). The performance model was then embedded in an outer multi-objective optimization loop based on a variant of the NSGA-II genetic algorithm. A set of Pareto solutions was generated, representing the optimal trade-off between the objectives considered in the analysis. A multi-variable statistical method (Principal Component Analysis, PCA) was then applied to detect and omit redundant objectives that could be left out of the analysis without disturbing the main features of the solution space. Finally, a decision-making tool based on M-TOPSIS was used to select the alternative that provided the best compromise among all the objective functions investigated. The results showed that while PV modules based on c-Si have better performance in energy generation, their environmental impact is what makes them fall to the last positions. TF PV modules present the best trade-off in all scenarios under consideration. Special attention was paid to the recycling process of PV modules, even though there is not yet enough information available for all the technologies evaluated; the main cause of this lack of information is the lifetime of PV modules. The data relating to the recycling processes for m-Si and CdTe PV technologies were introduced into the optimization procedure for ecodesign. By considering energy production and EPBT as optimization criteria in a bi-objective optimization case, the importance of the benefits of PV module end-of-life management was confirmed. An economic study of the recycling strategy must be investigated in order to have a more comprehensive view for decision making.
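The post-optimization PCA step, detecting objectives that add little independent information to the Pareto front, can be sketched as follows. The objective-matrix layout and the variance threshold are assumptions for illustration, not the thesis procedure.

    # PCA over the Pareto-front objective values to flag near-redundant objectives.
    import numpy as np

    def objective_pca(F, var_keep=0.95):
        """F: (n_solutions, n_objectives) matrix of objective values on the Pareto front."""
        Z = (F - F.mean(axis=0)) / F.std(axis=0)             # standardize each objective
        _, s, Vt = np.linalg.svd(Z, full_matrices=False)
        explained = np.cumsum(s**2) / np.sum(s**2)
        n_keep = int(np.searchsorted(explained, var_keep) + 1)
        # objectives whose loadings are concentrated on the discarded components
        # are candidates for removal from the analysis
        return n_keep, Vt[:n_keep]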
APA, Harvard, Vancouver, ISO, and other styles
10

Lacasse, Alexandre. "Bornes PAC-Bayes et algorithmes d'apprentissage." Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27635/27635.pdf.

Full text source
Abstract:
L’objet principale de cette thèse est l’étude théorique et la conception d’algorithmes d’apprentissage concevant des classificateurs par vote de majorité. En particulier, nous présentons un théorème PAC-Bayes s’appliquant pour borner, entre autres, la variance de la perte de Gibbs (en plus de son espérance). Nous déduisons de ce théorème une borne du risque du vote de majorité plus serrée que la fameuse borne basée sur le risque de Gibbs. Nous présentons également un théorème permettant de borner le risque associé à des fonctions de perte générale. À partir de ce théorème, nous concevons des algorithmes d’apprentissage construisant des classificateurs par vote de majorité pondérés par une distribution minimisant une borne sur les risques associés aux fonctions de perte linéaire, quadratique, exponentielle, ainsi qu’à la fonction de perte du classificateur de Gibbs à piges multiples. Certains de ces algorithmes se comparent favorablement avec AdaBoost.
The main purpose of this thesis is the theoretical study and the design of learning algorithms returning majority-vote classifiers. In particular, we present a PAC-Bayes theorem allowing us to bound the variance of the Gibbs loss (not only its expectation). We deduce from this theorem a bound on the risk of the majority vote that is tighter than the famous bound based on the Gibbs risk. We also present a theorem that allows us to bound the risk associated with general loss functions. From this theorem, we design learning algorithms building weighted majority-vote classifiers that minimize a bound on the risk associated with the following loss functions: linear, quadratic and exponential. We also present algorithms based on the randomized majority vote. Some of these algorithms compare favorably with AdaBoost.
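For orientation, one standard form of the PAC-Bayes theorem that this line of work generalizes (the Langford-Seeger kl bound) can be stated as below; this is the textbook version, not the variance-sensitive theorem derived in the thesis.

    % With probability at least 1 - \delta over an i.i.d. sample S of size m,
    % simultaneously for every posterior Q over the set of classifiers:
    \mathrm{kl}\!\left( \widehat{R}_S(G_Q) \,\|\, R(G_Q) \right)
        \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln \frac{2\sqrt{m}}{\delta}}{m}

Here P is a prior fixed before the data is seen, G_Q is the Gibbs classifier associated with the posterior Q, and kl denotes the binary relative entropy between the empirical and true Gibbs risks.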
APA, Harvard, Vancouver, ISO, and other styles
11

Koutsogiannis, Grigorios. "Novel TDE demodulator and kernel-PCA denoising algorithms for improvement of reception of communication signal." Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401349.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
12

Shanian, Sara. "Sample Compressed PAC-Bayesian Bounds and Learning Algorithms." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29037/29037.pdf.

Full text source
Abstract:
Dans le domaine de la classification, les algorithmes d'apprentissage par compression d'échantillons sont des algorithmes qui utilisent les données d'apprentissage disponibles pour construire l'ensemble de classificateurs possibles. Si les données appartiennent seulement à un petit sous-espace de l'espace de toutes les données «possibles», ces algorithmes possédent l'intéressante capacité de ne considérer que les classificateurs qui permettent de distinguer les exemples qui appartiennent à notre domaine d'intérêt. Ceci contraste avec d'autres algorithmes qui doivent considérer l'ensemble des classificateurs avant d'examiner les données d'entraînement. La machine à vecteurs de support (le SVM) est un algorithme d'apprentissage très performant qui peut être considéré comme un algorithme d'apprentissage par compression d'échantillons. Malgré son succès, le SVM est actuellement limité par le fait que sa fonction de similarité doit être un noyau symétrique semi-défini positif. Cette limitation rend le SVM difficilement applicable au cas où on désire utiliser une mesure de similarité quelconque.
In classification, sample compression algorithms are the algorithms that make use of the available training data to construct the set of possible predictors. If the data belongs to only a small subspace of the space of all "possible" data, such algorithms have the interesting ability of considering only the predictors that distinguish examples in our areas of interest. This is in contrast with non sample compressed algorithms which have to consider the set of predictors before seeing the training data. The Support Vector Machine (SVM) is a very successful learning algorithm that can be considered as a sample-compression learning algorithm. Despite its success, the SVM is currently limited by the fact that its similarity function must be a symmetric positive semi-definite kernel. This limitation by design makes SVM hardly applicable for the cases where one would like to be able to use any similarity measure of input example. PAC-Bayesian theory has been shown to be a good starting point for designing learning algorithms. In this thesis, we propose a PAC-Bayes sample-compression approach to kernel methods that can accommodate any bounded similarity function. We show that the support vector classifier is actually a particular case of sample-compressed classifiers known as majority votes of sample-compressed classifiers. We propose two different groups of PAC-Bayesian risk bounds for majority votes of sample-compressed classifiers. The first group of proposed bounds depends on the KL divergence between the prior and the posterior over the set of sample-compressed classifiers. The second group of proposed bounds has the unusual property of having no KL divergence when the posterior is aligned with the prior in some precise way that we define later in this thesis. Finally, for each bound, we provide a new learning algorithm that consists of finding the predictor that minimizes the bound. The computation times of these algorithms are comparable with algorithms like the SVM. We also empirically show that the proposed algorithms are very competitive with the SVM.
APA, Harvard, Vancouver, ISO, and other styles
13

Knapo, Peter. "Vývoj algoritmů pro digitální zpracování obrazu v reálním čase v DSP procesoru." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217872.

Full text source
Abstract:
Face recognition is a complex process whose main goal is to recognize a human face in an image or a video sequence. The most frequent applications are surveillance and identification systems, and face recognition is also important in computer vision and artificial intelligence research. Face recognition systems are often based on image analysis or on neural networks. This work deals with the implementation of an algorithm based on so-called eigenfaces. Eigenfaces are the result of Principal Component Analysis (PCA), which extracts the most important facial features from the original image. The method is based on solving a linear matrix equation in which eigenvalues and eigenvectors are computed from a known covariance matrix. The face to be recognized is projected into the so-called eigenspace, and the actual recognition is based on comparing such faces with an existing face database projected into the same eigenspace. Before the recognition step, the face must be localized in the image and pre-processed (normalization, compensation of lighting conditions and noise removal). Many face localization algorithms exist, but this work uses a skin-colour-based localization algorithm, which is fast and sufficient for this application. The face recognition and face localization algorithms are implemented on a Blackfin ADSP-BF561 DSP processor from Analog Devices.
APA, Harvard, Vancouver, ISO, and other styles
14

Zirakiza, Brice. "Forêts Aléatoires PAC-Bayésiennes." Master's thesis, Université Laval, 2013. http://hdl.handle.net/20.500.11794/24036.

Full text source
Abstract:
Dans ce mémoire de maîtrise, nous présentons dans un premier temps un algorithme de l'état de l'art appelé Forêts aléatoires introduit par Léo Breiman. Cet algorithme effectue un vote de majorité uniforme d'arbres de décision construits en utilisant l'algorithme CART sans élagage. Par après, nous introduisons l'algorithme que nous avons nommé SORF. L'algorithme SORF s'inspire de l'approche PAC-Bayes, qui pour minimiser le risque du classificateur de Bayes, minimise le risque du classificateur de Gibbs avec un régularisateur. Le risque du classificateur de Gibbs constitue en effet, une fonction convexe bornant supérieurement le risque du classificateur de Bayes. Pour chercher la distribution qui pourrait être optimale, l'algorithme SORF se réduit à être un simple programme quadratique minimisant le risque quadratique de Gibbs pour chercher une distribution Q sur les classificateurs de base qui sont des arbres de la forêt. Les résultasts empiriques montrent que généralement SORF est presqu'aussi bien performant que les forêts aléatoires, et que dans certains cas, il peut même mieux performer que les forêts aléatoires.
In this master's thesis, we first present a state-of-the-art algorithm called Random Forests, introduced by Léo Breiman. This algorithm constructs a uniformly weighted majority vote of decision trees built using the CART algorithm without pruning. Thereafter, we introduce an algorithm that we call SORF. The SORF algorithm is based on the PAC-Bayes approach, which, in order to minimize the risk of the Bayes classifier, minimizes the risk of the Gibbs classifier with a regularizer. The risk of the Gibbs classifier is indeed a convex function which upper-bounds the risk of the Bayes classifier. To find the distribution that would be optimal, the SORF algorithm reduces to a simple quadratic program minimizing the quadratic risk of the Gibbs classifier to seek a distribution Q over the base classifiers, which are the trees of the forest. Empirical results show that SORF is generally almost as efficient as Random Forests, and in some cases it can even outperform Random Forests.
APA, Harvard, Vancouver, ISO, and other styles
15

Germain, Pascal. "Algorithmes d'apprentissage automatique inspirés de la théorie PAC-Bayes." Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26191/26191.pdf.

Full text source
Abstract:
Dans un premier temps, ce mémoire présente un théorème PAC-Bayes général, duquel il est possible d'obtenir simplement plusieurs bornes PAC-Bayes connues. Ces bornes permettent de calculer une garantie sur le risque d'un classificateur à partir de ses performances sur l'ensemble de données d'entraînement. Par l'interprétation du comportement de deux bornes PAC-Bayes, nous énonçons les caractéristiques propres aux classificateurs qu'elles favorisent. Enfin, une spécialisation de ces bornes à la famille des classificateurs linéaires est détaillée. Dans un deuxième temps, nous concevons trois nouveaux algorithmes d'apprentissage automatique basés sur la minimisation, par la méthode de descente de gradient conjugué, de l'expression mathématique de diverses formulations des bornes PAC-Bayes. Le dernier algorithme présenté utilise une fraction de l'ensemble d'entraînement pour l'acquisition de connaissances a priori. Ces algorithmes sont aptes à construire des classificateurs exprimés par vote de majorité ainsi que des classificateurs linéaires exprimés implicitement à l'aide de la stratégie du noyau. Finalement, une étude empirique élaborée compare les trois algorithmes entre eux et révèle que certaines versions de ces algorithmes construisent des classificateurs compétitifs avec ceux obtenus par AdaBoost et les SVM.
At first, this master's thesis presents a general PAC-Bayes theorem, from which we can easily obtain some well-known PAC-Bayes bounds. Those bounds allow us to compute a guarantee on the risk of a classifier from its performance on the training set. We analyze the behavior of two PAC-Bayes bounds and we determine the peculiar characteristics of the classifiers favoured by those bounds. Then, we present a specialization of those bounds to the family of linear classifiers. Secondly, we design three new machine learning algorithms based on the minimization, by conjugate gradient descent, of various mathematical expressions of the PAC-Bayes bounds. The last algorithm uses a part of the training set to capture a priori knowledge. One can use those algorithms to construct majority-vote classifiers as well as linear classifiers implicitly represented by the kernel trick. Finally, an elaborate empirical study compares the three algorithms and shows that some versions of those algorithms are competitive with both AdaBoost and SVM.
Listed on the Honour Roll (Tableau d'honneur) of the Faculté des études supérieures
APA, Harvard, Vancouver, ISO, and other styles
16

Awasthi, Pranjal. "Approximation Algorithms and New Models for Clustering and Learning." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/266.

Full text source
Abstract:
This thesis is divided into two parts. In part one, we study the k-median and the k-means clustering problems. We take a different approach than the traditional worst case analysis models. We show that by looking at certain well motivated stable instances, one can design much better approximation algorithms for these problems. Our algorithms achieve arbitrarily good approximation factors on stable instances, something which is provably hard on worst case instances. We also study a different model for clustering which introduces limited amount of interaction with the user. Such interactive models are very popular in the context of learning algorithms but their effectiveness for clustering is not well understood. We present promising theoretical and experimental results in this direction. The second part of the thesis studies the design of provably good learning algorithms which work under adversarial noise. One of the fundamental problems in this area is to understand the learnability of the class of disjunctions of Boolean variables. We design a learning algorithm which improves on the guarantees of the previously best known result for this problem. In addition, the techniques used seem fairly general and promising to be applicable to a wider class of problems. We also propose a new model for learning with queries. This model restricts the algorithms ability to only ask certain “local” queries. We motivate the need for the model and show that one can design efficient local query algorithms for a wide class of problems.
APA, Harvard, Vancouver, ISO, and other styles
17

Minotti, Gioele. "Sviluppo di algoritmi di machine learning per il monitoraggio stradale." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text source
Abstract:
The aim of this thesis is to simulate a machine learning algorithm in order to study how it works, with the specific idea of creating a system for detecting bumps or discontinuities in the road surface while a car drives over it. The work is organized as follows: an observation and data-acquisition phase, to collect samples describing the phenomenon under study; a dataset-processing and dimensionality-reduction phase; the development of a K-NN algorithm; and a study of the algorithm's behaviour aimed at maximizing its accuracy.
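A minimal sketch of the K-NN classification step outlined above is given below; the feature matrix X (one row per vibration window) and the labels y are assumed to come from the acquisition and dimensionality-reduction phases, and the split and k value are illustrative.

    # K-NN classification of road-surface events from pre-extracted features.
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def evaluate_knn(X, y, k=5):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        return accuracy_score(y_te, clf.predict(X_te))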
APA, Harvard, Vancouver, ISO, and other styles
18

Carletti, Davide. "Applicazioni dell'analisi tensoriale delle componenti principali." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text source
Abstract:
This thesis studies principal component analysis (PCA): starting from an initial dataset, the goal is to reduce the large volume of the original data in order to simplify the problem while retaining all the main information. This is done with two different methods: the first is matrix-based PCA, explained in Chapter 2; the second, described in Chapter 3, uses tensors and is applied via the Tensor Train decomposition, one of the possible strategies for decomposing a tensor.
APA, Harvard, Vancouver, ISO, and other styles
19

Berlier, Jacob A. "A Parallel Genetic Algorithm for Placement and Routing on Cloud Computing Platforms." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/2406.

Full text source
Abstract:
The design and implementation of today's most advanced VLSI circuits and multi-layer printed circuit boards would not be possible without automated design tools that assist with the placement of components and the routing of connections between these components. In this work, we investigate how placement and routing can be implemented and accelerated using cloud computing resources. A parallel genetic algorithm approach is used to optimize component placement and the routing order supplied to a Lee's algorithm maze router. A study of mutation rate, dominance rate, and population size is presented to suggest favorable parameter values for arbitrary-sized printed circuit board problems. The algorithm is then used to successfully design a Microchip PIC18 breakout board and Micrel Ethernet Switch. Performance results demonstrate that a 50X runtime performance improvement over a serial approach is achievable using 64 cloud computing cores. The results further suggest that significantly greater performance could be achieved by requesting additional cloud computing resources for additional cost. It is our hope that this work will serve as a framework for future efforts to improve parallel placement and routing algorithms using cloud computing resources.
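The router driven by the genetic algorithm is a Lee-style wave-propagation (breadth-first) search. The sketch below shows the generic single-net version on a grid; the grid encoding and cell coordinates are assumptions, not the thesis code.

    # Lee-style maze routing: breadth-first wave expansion followed by backtrace.
    from collections import deque

    def lee_route(grid, src, dst):
        """grid: 2D list, 0 = free cell, 1 = obstacle; src, dst: (row, col) tuples."""
        rows, cols = len(grid), len(grid[0])
        prev, frontier = {src: None}, deque([src])
        while frontier:
            cell = frontier.popleft()
            if cell == dst:                              # backtrace the shortest path
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in prev:
                    prev[nxt] = cell
                    frontier.append(nxt)
        return None                                      # no route found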
APA, Harvard, Vancouver, ISO, and other styles
20

Classon, Johan, and Viktor Andersson. "Procedural Generation of Levels with Controllable Difficulty for a Platform Game Using a Genetic Algorithm." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129801.

Full text source
Abstract:
This thesis describes the implementation and evaluation of a genetic algorithm (GA) for procedurally generating levels with controllable difficulty for a motion-based 2D platform game. Manually creating content can be time-consuming, and it may be desirable to automate this process with an algorithm, using Procedural Content Generation (PCG). An algorithm was implemented and then refined with an iterative method by conducting user tests. The resulting algorithm is considered a success and shows that using GAs for this kind of PCG is viable. An algorithm able to control the difficulty of its output was achieved, but more refinement could be made with further user tests. When using a GA for this purpose, one should find elements that affect difficulty, incorporate these in the fitness function, and test generated content to ensure that the fitness function correctly evaluates solutions with regard to the desired output.
APA, Harvard, Vancouver, ISO, and other styles
21

Kini, Rohit Ravindranath. "Sensor Position Optimization for Multiple LiDARs in Autonomous Vehicles." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289597.

Full text source
Abstract:
The 3D ranging sensor LiDAR is extensively used in the autonomous vehicle industry, but the LiDAR placement problem has not been studied extensively. This thesis proposes a framework in an open-source autonomous driving simulator (CARLA) that aims to solve the LiDAR placement problem, based on the tasks that LiDARs are intended for in most autonomous vehicles. The LiDAR placement problem is solved by improving point cloud density around the vehicle, which is calculated using LiDAR Occupancy Boards (LOB). Introducing LiDAR occupancy as an objective function, a genetic algorithm is used to optimize this problem. The method can be extended to the multiple-LiDAR placement problem; additionally, for multiple LiDARs, a LiDAR scan registration algorithm (NDT) can be used to find a better match for the first or reference LiDAR. Multiple experiments are carried out in simulation with different vehicles (a truck and a car), different LiDAR sensors (Velodyne 16- and 32-channel LiDARs), and varying Regions Of Interest (ROI), to test the scalability and technical robustness of the framework. Finally, the framework is validated by comparing the current and proposed LiDAR positions on the truck.
3D- sensor LiDAR, är en sensor som används i stor utsträckning inom den autonoma fordonsindustrin, men LiDAR- placeringsproblemet studeras inte i stor utsträckning. Detta uppsatsarbete föreslår en ram i en öppen källkod för autonom körningssimulator (CARLA) som syftar till att lösa LiDAR- placeringsproblem, baserat på de uppgifter som LiDAR är avsedda för i de flesta av de autonoma fordonen. LiDAR- placeringsproblem löses genom att förbättra punktmolntätheten runt fordonet, och detta beräknas med LiDAR Occupancy Boards (LOB). Genom att introducera LiDAR Occupancy som en objektiv funktion används den genetiska algoritmen för att optimera detta problem. Denna metod kan utökas för flera LiDAR- placeringsproblem. Dessutom kan LiDAR- scanningsalgoritm (NDT) för flera LiDAR- placeringsproblem också användas för att hitta en bättre matchning för LiDAR för första eller referens. Flera experiment utförs i simulering med ett annat fordon lastbil och bil, olika LiDAR-sensorer Velodyne 16 och 32kanals LiDAR, och, genom att variera intresseområde (ROI), för att testa skalbarhet och teknisk robusthet i ramverket. Slutligen valideras detta ramverk genom att jämföra de nuvarande och föreslagna LiDAR- positionerna på lastbilen.
APA, Harvard, Vancouver, ISO, and other styles
22

Soukup, Jiří. "Metody a algoritmy pro rozpoznávání obličejů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-374588.

Full text source
Abstract:
This work describes basic methods of face recognition. The PCA, LDA, ICA, trace transform, elastic bunch graph matching, genetic algorithm and neural network methods are described. In the practical part, PCA, PCA combined with an RBF neural network, and genetic algorithms are implemented. The RBF neural network is used as a classifier, and the genetic algorithm is used for RBF NN training in one case and for selecting eigenvectors from the PCA method in the other case. The latter method, PCA + GA, called EPCA, outperforms the other methods tested in this work on the ORL test database.
APA, Harvard, Vancouver, ISO, and other styles
23

Bacchielli, Tommaso. "Algoritmi di Machine Learning per il riconoscimento di attività umane da vibrazioni strutturali." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text source
Abstract:
This thesis deals with the implementation of machine learning algorithms for recognizing four human activities (walking, running, cycling and driving) using only the structural vibrations they produce in the ground, which were recorded with two electromagnetic geophones (one horizontal and one vertical). All stages of the project, from data acquisition and processing to the implementation of the machine learning algorithms, were developed in MATLAB.
APA, Harvard, Vancouver, ISO, and other styles
24

Xingwen, Ding, Zhai Wantao, Chang Hongyu, and Chen Ming. "CMA BLIND EQUALIZER FOR AERONAUTICAL TELEMETRY." International Foundation for Telemetering, 2016. http://hdl.handle.net/10150/624262.

Full text source
Abstract:
In aeronautical telemetry, multipath interference usually causes significant performance degradation, and as the bit rate of telemetry systems increases, the impairments caused by multipath interference become more serious. The constant modulus algorithm (CMA) blind equalizer is effective in mitigating these impairments: the CMA adapts the equalizer coefficients to minimize the deviation of the signal envelope from a constant level. This paper presents the performance of the CMA blind equalizer applied to PCM-FM, PCM-BPSK, SOQPSK-TG and ARTM CPM in aeronautical telemetry.
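The coefficient update behind that description is a simple stochastic-gradient rule. The sketch below is a generic baseband CMA equalizer in NumPy; the tap count, step size and Godard constant R2 are illustrative assumptions, not values from the paper.

    # Constant modulus algorithm: penalize deviation of |y|^2 from a constant R2.
    import numpy as np

    def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
        """x: complex baseband sample array; returns equalized output and final taps."""
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0                             # center-spike initialization
        y = np.zeros_like(x)
        for n in range(n_taps, len(x)):
            xn = x[n - n_taps:n][::-1]                   # most recent sample first
            y[n] = np.vdot(w, xn)                        # equalizer output w^H x
            err = (np.abs(y[n])**2 - R2) * np.conj(y[n]) # CMA error term
            w -= mu * err * xn                           # stochastic-gradient tap update
        return y, w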
APA, Harvard, Vancouver, ISO, and other styles
25

Davis, Daniel Jacob. "Achieving Six Sigma printed circuit board yields by improving incoming component quality and using a PCBA prioritization algorithm." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43831.

Full text source
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2008.
Includes bibliographical references (p. 139-143).
Printed circuit board assemblies (PCBAs) are the backbone of the electronics industry. PCBA technologies are keeping pace with Moore's Law and will soon enable the convergence of video, voice, data, and mobility onto a single device. With the rapid advancements in product and component technologies, manufacturing tests are being pushed to the limits as consumers are demanding higher quality and more reliable electronics than ever before. Cisco Systems, Inc. (Cisco) currently manufactures over one thousand different types of printed circuit board assemblies (PCBAs) per quarter all over the world. Each PCBA in Cisco's portfolio has an associated complexity to its design determined by the number of interconnects, components, and other variables. PCBA manufacturing yields have historically been quite variable. In order to remain competitive, there is an imminent need to attain Six Sigma PCBA yields while controlling capital expenditures and innovating manufacturing test development and execution. Recently, Cisco kicked off the Test Excellence initiative to improve overall PCBA manufacturing yields and provided the backdrop to this work study. This thesis provides a first step on the journey to attaining Six Sigma PCBA manufacturing yields. Using Six Sigma techniques, two hypotheses are developed that will enable yield improvements: (1) PCBA yields can be improved by optimizing component selection across the product portfolio by analyzing component cost and quality levels, and (2) Using the Six Sigma DMAIC (define-measure-analyze-improve-control) method and the TOPSIS (Technique for Order Preferences by Similarity to Ideal Solutions) algorithm, PCBA yields will improve by optimally prioritizing manufacturing resources on the most important PCBAs first.
The two analytical tools derived in this thesis will provide insights into how PCBA manufacturing yields can be improved today while enabling future yield improvements to occur.
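The TOPSIS step used to prioritize PCBAs ranks alternatives by closeness to an ideal solution. A generic sketch follows; the decision-matrix layout (PCBAs by criteria), the weights and the benefit/cost flags are assumptions for illustration, not the thesis data.

    # TOPSIS: rank alternatives by relative closeness to the ideal solution.
    import numpy as np

    def topsis_rank(D, weights, benefit):
        """D: (n_alternatives, n_criteria); benefit: boolean array, True if larger is better."""
        N = D / np.linalg.norm(D, axis=0)                # vector-normalize each criterion
        V = N * weights                                  # weighted normalized decision matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        return np.argsort(-closeness)                    # indices, best alternative first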
by Daniel Jacob Davis.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
26

Marques, Daniel Soares e. "Sistema misto reconfigurável aplicado à Interface PCI para Otimização do Algoritmo Non-local Means." Universidade Federal da Paraí­ba, 2012. http://tede.biblioteca.ufpb.br:8080/handle/tede/6075.

Full text source
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The field of digital image processing is continually evolving and, although the application areas are diverse, the problems commonly encountered converge on methods capable of improving visual information for analysis and interpretation. A major limitation on image quality is noise, which is defined as a perturbation in the image. The Non-Local Means (NLM) method stands out as the state of the art in digital image denoising. However, its computational complexity is an obstacle to making it practical in general-purpose computing applications. This work presents a computer system, developed with parts implemented in software and parts in hardware attached to the PCI bus, that optimizes the NLM algorithm using hardware acceleration techniques, achieving greater efficiency than is normally provided by general-purpose processors. The use of reconfigurable computing helped in developing the hardware part of the system, allowing the described circuit to be modified in its target environment and accelerating the implementation of the project. Using an FPGA prototyping kit for PCI, dedicated to computing the squared weighted Euclidean distance, the results obtained show a speedup of up to 3.5 times over the compared optimization approaches, while maintaining the visual quality of the denoising filter.
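The kernel the thesis offloads to the FPGA is the squared weighted Euclidean distance between image patches, which drives the NLM weights. Below is a minimal (and deliberately slow) pure NumPy sketch of that computation for a single pixel; the patch size, search window and filtering parameter h are illustrative choices, not the ones used in the dissertation.

import numpy as np

def gaussian_kernel(radius, sigma):
    """Gaussian weights used inside the squared weighted Euclidean distance."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def nlm_pixel(img, i, j, patch_radius=3, search_radius=8, h=0.1):
    """Denoise one pixel with a straightforward Non-Local Means loop."""
    kernel = gaussian_kernel(patch_radius, sigma=patch_radius / 2.0)
    pad = patch_radius + search_radius
    padded = np.pad(img, pad, mode="reflect")
    ci, cj = i + pad, j + pad
    ref = padded[ci - patch_radius:ci + patch_radius + 1,
                 cj - patch_radius:cj + patch_radius + 1]
    num, den = 0.0, 0.0
    for di in range(-search_radius, search_radius + 1):
        for dj in range(-search_radius, search_radius + 1):
            ni, nj = ci + di, cj + dj
            cand = padded[ni - patch_radius:ni + patch_radius + 1,
                          nj - patch_radius:nj + patch_radius + 1]
            # Squared weighted Euclidean distance between the two patches:
            # the operation the FPGA accelerator is dedicated to.
            dist = np.sum(kernel * (ref - cand) ** 2)
            w = np.exp(-dist / (h * h))
            num += w * padded[ni, nj]
            den += w
    return num / den

img = np.random.default_rng(0).random((32, 32))
print(nlm_pixel(img, 16, 16))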
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Wessman, Filip. "Advanced Algorithms for Classification and Anomaly Detection on Log File Data : Comparative study of different Machine Learning Approaches." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-43175.

Повний текст джерела
Анотація:
Background: A problematic area in today's large-scale distributed systems is the exponentially growing amount of log data. Finding anomalies in this data through manual human inspection becomes progressively more challenging, complex and time-consuming, yet it is vital for keeping these systems available around the clock. Aim: The main objective of this study is to determine which Machine Learning (ML) algorithms are most suitable and whether they can live up to the needs and requirements regarding optimization and efficiency in log data monitoring, including which specific steps of the overall problem can be improved by using these algorithms for anomaly detection and classification on real, provided data logs. Approach: After an initial pre-study, logs are collected and preprocessed with the log parsing tool Drain and regular expressions. The approach consists of a combination of K-Means + XGBoost and, respectively, Principal Component Analysis (PCA) + K-Means + XGBoost. These were trained, tested and individually evaluated with different metrics against two datasets, a server data log and an HTTP access log. Results: Both approaches performed very well on both datasets, classifying, detecting and making predictions on log data events with high accuracy and precision and low calculation time. It was further shown that, without the dimensionality reduction step (PCA), the results of the prediction model are slightly better, by a few percent. As for the prediction time, there was a marginally small to no difference when comparing predictions with and without PCA. Conclusions: Overall there are very small differences between the results with and without PCA, but in essence it is better not to use PCA and instead apply the original data to the ML models. Model performance is generally very dependent on the data being applied, its initial preprocessing steps, its size and its structure, which affect the calculation time the most.
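A minimal sketch of the kind of pipeline the abstract describes (parsed log features, optional PCA, a K-Means cluster feature, then an XGBoost classifier) is shown below. The random feature matrix stands in for the Drain-parsed log data, the hyperparameters are arbitrary rather than those tuned in the thesis, and the xgboost package is assumed to be installed.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Placeholder for numeric features extracted from parsed log events
# (e.g. Drain template counts per time window) and known labels.
X = rng.normal(size=(2000, 40))
y = rng.integers(0, 2, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Optional dimensionality reduction: compare runs with and without this step.
pca = PCA(n_components=10).fit(X_train_s)
X_train_p, X_test_p = pca.transform(X_train_s), pca.transform(X_test_s)

# K-Means cluster id appended as an extra feature for the classifier.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_train_p)
train_feat = np.column_stack([X_train_p, km.predict(X_train_p)])
test_feat = np.column_stack([X_test_p, km.predict(X_test_p)])

clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(train_feat, y_train)
print("accuracy:", clf.score(test_feat, y_test))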
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Della, Chiesa Enrico. "Implementazione Tensorflow di Algoritmi di Anomaly Detection per la Rilevazione di Intrusioni Mediante Signals of Opportunity (SoOP)." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Знайти повний текст джерела
Анотація:
This thesis presents the implementation of supervised and unsupervised machine learning algorithms using Python and TensorFlow, taking Anomaly Detection as a case study. Chapter 1 introduces the machine learning algorithms that were implemented. Chapter 2 presents and analyses the development environment, consisting of Python and TensorFlow, and then describes the implementation of the algorithms from Chapter 1. Chapter 3 reproduces, as a case study, two algorithms from the article Anomaly Detection Using WiFi Signals of Opportunity. The case study consists in detecting changes in the spatial configuration of a room using the WiFi signals present in the environment together with Anomaly Detection algorithms. The algorithms were reproduced with Python and TensorFlow, and a further solution based on an autoassociative neural network (autoencoder) is also presented. Finally, the conclusions summarise the results obtained and briefly outline possible future developments.
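A compact sketch of the autoassociative network (autoencoder) variant mentioned above, written with tf.keras, is shown below. The 16-dimensional random vectors stand in for WiFi signal features, and the layer sizes and the 99th-percentile threshold are illustrative assumptions, not the settings of the thesis.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Placeholder for WiFi feature vectors collected in the reference
# (non-anomalous) room configuration.
x_train = rng.normal(size=(1000, 16)).astype("float32")

inputs = tf.keras.Input(shape=(16,))
encoded = tf.keras.layers.Dense(8, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(4, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(8, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(16, activation="linear")(decoded)

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=50, batch_size=32, verbose=0)

# Reconstruction error on new samples; values above a threshold chosen on the
# training data are flagged as anomalies (a changed room configuration).
train_err = np.mean((autoencoder.predict(x_train, verbose=0) - x_train) ** 2, axis=1)
threshold = np.quantile(train_err, 0.99)
x_new = rng.normal(size=(10, 16)).astype("float32")
errors = np.mean((autoencoder.predict(x_new, verbose=0) - x_new) ** 2, axis=1)
print(errors > threshold)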
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Van, der Walt Marizelle. "Investigating the empirical relationship between oceanic properties observable by satellite and the oceanic pCO₂ / Marizelle van der Walt." Thesis, North-West University, 2011. http://hdl.handle.net/10394/9536.

Повний текст джерела
Анотація:
In this dissertation, the aim is to investigate the empirical relationship between the partial pressure of CO2 (pCO2) and other ocean variables in the Southern Ocean, by using a small percentage of the available data. CO2 is one of the main greenhouse gases that contributes to global warming and climate change. The concentration of anthropogenic CO2 in the atmosphere, however, would have been much higher if some of it was not absorbed by oceanic and terrestrial sinks. The oceans absorb and release CO2 from and to the atmosphere. Large regions in the Southern Ocean are expected to be a CO2 sink. However, the measurements of CO2 concentrations in the ocean are sparse in the Southern Ocean, and accurate values for the sinks and sources cannot be determined. In addition, it is difficult to develop accurate oceanic and ocean-atmosphere models of the Southern Ocean with the sparse observations of CO2 concentrations in this part of the ocean. In this dissertation classical techniques are investigated to determine the empirical relationship between pCO2 and other oceanic variables using in situ measurements. Additionally, sampling techniques are investigated in order to make a judicious selection of a small percentage of the total available data points in order to develop an accurate empirical relationship. Data from the SANAE49 cruise stretching between Antarctica and Cape Town are used in this dissertation. The complete data set contains 6103 data points. The maximum pCO2 value in this stretch is 436.0 μatm, the minimum is 251.2 μatm and the mean is 360.2 μatm. An empirical relationship is investigated between pCO2 and the variables Temperature (T), chlorophyll-a concentration (Chl), Mixed Layer Depth (MLD) and latitude (Lat). The methods are repeated with latitude included and excluded as variable respectively. D-optimal sampling is used to select a small percentage of the available data for determining the empirical relationship. Least squares optimization is used as one method to determine the empirical relationship. For 200 D-optimally sampled points, the pCO2 prediction with the fourth order equation yields a Root Mean Square (RMS) error of 15.39 μatm (on the estimation of pCO2) with latitude excluded as variable and a RMS error of 8.797 μatm with latitude included as variable. Radial basis function (RBF) interpolation is another method that is used to determine the empirical relationship between the variables. The RBF interpolation with 200 D-optimally sampled points yields a RMS error of 9.617 μatm with latitude excluded as variable and a RMS error of 6.716 μatm with latitude included as variable. Optimal scaling is applied to the variables in the RBF interpolation, yielding a RMS error of 9.012 μatm with latitude excluded as variable and a RMS error of 4.065 μatm with latitude included as variable for 200 D-optimally sampled points.
Thesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2012
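As a purely illustrative sketch of the RBF-interpolation step described above, the code below fits scipy's RBFInterpolator on a small subset of predictor/pCO2 pairs and reports the RMS prediction error on the remaining points. The data here are random placeholders rather than SANAE49 measurements, the scaling is a simple standardisation rather than the optimal scaling of the dissertation, and the D-optimal selection of the subset is assumed to have been done beforehand.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Placeholder predictors [T, Chl, MLD, Lat] and measured pCO2 for the
# selected (e.g. D-optimally sampled) subset of cruise data points.
X_sample = rng.normal(size=(200, 4))
pco2_sample = rng.normal(loc=360.0, scale=30.0, size=200)

# Scale each predictor before interpolating.
mean, std = X_sample.mean(axis=0), X_sample.std(axis=0)
rbf = RBFInterpolator((X_sample - mean) / std, pco2_sample,
                      kernel="thin_plate_spline", smoothing=1.0)

# Predict pCO2 for the remaining points and report the RMS error.
X_rest = rng.normal(size=(1000, 4))
pco2_rest = rng.normal(loc=360.0, scale=30.0, size=1000)
pred = rbf((X_rest - mean) / std)
print("RMS error (uatm):", np.sqrt(np.mean((pred - pco2_rest) ** 2)))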
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Cazelles, Elsa. "Statistical properties of barycenters in the Wasserstein space and fast algorithms for optimal transport of measures." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0125/document.

Повний текст джерела
Анотація:
This thesis focuses on the analysis of data in the form of probability measures on R^d. The aim is to provide a better understanding of the usual statistical tools on this space endowed with the Wasserstein distance. A natural first notion to consider is first-order statistical analysis, consisting of the study of the Fréchet mean (or barycenter). In particular, we focus on the case of discrete data (or observations) sampled from probability measures that are absolutely continuous (a.c.) with respect to the Lebesgue measure. We thus introduce an estimator of the barycenter of random measures, penalized by a convex function, making it possible to enforce its absolute continuity. Another estimator is regularized by adding entropy when computing the Wasserstein distance. We are particularly interested in controlling the variance of these estimators. Thanks to these results, the principle of Goldenshluger and Lepski allows us to obtain an automatic calibration of the regularization parameters. We then apply this work to the registration of multivariate densities, especially for flow cytometry data. We also propose a test statistic that can compare two multivariate distributions efficiently in terms of computational time. Finally, we perform a second-order statistical analysis to extract the global geometric tendency of a dataset, also called the main modes of variation. For that purpose, we propose an algorithm for carrying out geodesic principal component analysis in the Wasserstein space.
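For the entropy-regularized barycenter mentioned in the abstract, a small NumPy sketch of the iterative Bregman projection (Sinkhorn-like) scheme for histograms on a common grid is given below. The two Gaussian-like histograms and the regularization strength are toy choices, and the penalized estimator and the geodesic PCA of the thesis are not reproduced here.

import numpy as np

def sinkhorn_barycenter(hists, M, reg=0.01, weights=None, n_iter=200):
    """Entropy-regularised Wasserstein barycenter of histograms supported on a
    common grid, computed with iterative Bregman projections."""
    A = np.asarray(hists, dtype=float)          # (k, n) histograms, rows sum to 1
    k, n = A.shape
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, dtype=float)
    K = np.exp(-M / reg)                        # Gibbs kernel from the cost matrix
    v = np.ones((k, n))
    b = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        u = A / (K @ v.T).T                     # scale to match each input marginal
        KTu = (K.T @ u.T).T
        b = np.prod(KTu ** w[:, None], axis=0)  # geometric-mean (barycenter) step
        v = b / KTu
    return b

# Toy example: barycenter of two Gaussian-like histograms on [0, 1].
x = np.linspace(0, 1, 100)
M = (x[:, None] - x[None, :]) ** 2
h1 = np.exp(-(x - 0.25) ** 2 / 0.005); h1 /= h1.sum()
h2 = np.exp(-(x - 0.75) ** 2 / 0.005); h2 /= h2.sum()
bary = sinkhorn_barycenter([h1, h2], M, reg=0.01)
print(bary.sum(), x[np.argmax(bary)])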
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Jošth, Radovan. "Využití GPU pro algoritmy grafiky a zpracování obrazu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-261274.

Повний текст джерела
Анотація:
This thesis describes several selected algorithms that were originally developed for CPUs but, given the strong demand for performance improvements, were adapted to run on GPGPUs (general-purpose computing on graphics processing units). Modifying these algorithms, using the CUDA interface, was the goal of our research. The thesis is organized around the three groups of algorithms we addressed: real-time object detection, spectral image analysis, and real-time line detection. For real-time object detection we chose LRD and LRP features; the research on spectral image analysis was carried out with the PCA and NTF algorithms; and for real-time line detection we used two different modifications of the accumulation scheme of the Hough transform. Before the chapters devoted to the individual algorithms and the subject of the research, and right after the chapter explaining the motivation for studying these problems, the introductory chapters give a brief overview of GPU and GPGPU architecture. The final chapters focus on the author's own contribution, its scope, the results achieved, and the approach chosen to achieve them; several developed products are part of these results.
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Björklund, Oscar. "Kompakthet av procedurellt genererade grottsystem : En jämförelse av procedurellt genererade grottsystem." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-12355.

Повний текст джерела
Анотація:
To reduce the amount of work required to create games, Procedural Content Generation (PCG) is used to produce new and varied game content. This study focuses on investigating the algorithms Binary Space Partitioning, Shortest Path and Cellular Automata for creating levels for games with a cave structure. The purpose of the study is to evaluate how quickly these algorithms create levels, how compact the levels are, and how large a share of the total area remains unused. After the tests, the conclusion is that the most efficient algorithm for creating the most compact cave systems in a short time is Binary Space Partitioning. Future work could address the implementation in, for example, computer games and simulations.
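Of the three generators compared, Cellular Automata is the easiest to sketch compactly. The illustrative Python snippet below uses a common fill-and-smooth rule (a cell becomes a wall when five or more cells of its 3x3 neighbourhood, itself included, are walls); the fill probability and iteration count are assumptions rather than the study's settings.

import random

def generate_cave(width, height, fill_prob=0.45, steps=5, seed=1):
    """Cellular-automata cave: random fill followed by smoothing passes."""
    random.seed(seed)
    grid = [[1 if random.random() < fill_prob else 0 for _ in range(width)]
            for _ in range(height)]
    for _ in range(steps):
        new = [[0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                walls = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < height and 0 <= nx < width:
                            walls += grid[ny][nx]
                        else:
                            walls += 1          # treat the border as wall
                new[y][x] = 1 if walls >= 5 else 0
        grid = new
    return grid

cave = generate_cave(60, 30)
print("\n".join("".join("#" if c else "." for c in row) for row in cave))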
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Uyanik, Basar. "Cell Formation: A Real Life Application." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606635/index.pdf.

Повний текст джерела
Анотація:
In this study, the plant layout problem of a worldwide printed circuit board (PCB) producer is analyzed. Machines are grouped into cells using the grouping methodologies of the Tabular Algorithm, the K-means clustering algorithm, and hierarchical grouping with Levenshtein distances. The production plant layouts formed with the different techniques are evaluated using technical and economic indicators.
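A small sketch of hierarchical grouping with Levenshtein distances is given below. The encoding of each machine as a string of routed part families is a hypothetical stand-in for the company data, and the choice of two cells and average linkage is arbitrary.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[-1, -1]

# Hypothetical encoding: one string per machine listing the part families
# routed through it (letters are arbitrary placeholders).
machines = {"M1": "AABCD", "M2": "ABCD", "M3": "FFGH", "M4": "FGHH", "M5": "ABCE"}
names = list(machines)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = levenshtein(machines[names[i]], machines[names[j]])

# Agglomerative (hierarchical) grouping into two candidate cells.
Z = linkage(squareform(dist), method="average")
print(dict(zip(names, fcluster(Z, t=2, criterion="maxclust"))))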
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Durán, Alcaide Ángel. "Development of high-performance algorithms for a new generation of versatile molecular descriptors. The Pentacle software." Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7201.

Повний текст джерела
Анотація:
The work of this thesis was focused on the development of high-performance algorithms for a new generation of molecular descriptors, with many advantages with respect to their predecessors, suitable for diverse applications in the field of drug design, as well as on their implementation in commercial-grade scientific software (Pentacle). As a first step, we developed a new algorithm (AMANDA) for discretizing molecular interaction fields, which allows extracting their most interesting regions in an efficient way. This algorithm was incorporated into a new generation of alignment-independent molecular descriptors, named GRIND-2. The computing speed and efficiency of the new algorithm allow the application of these descriptors in virtual screening. In addition, we developed a new alignment-independent encoding algorithm (CLACC) producing quantitative structure-activity relationship models which have better predictive ability and are easier to interpret than those obtained with other methods.
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Parks, Jeremy. "A Texas Instruments C33 DSP PCI platform for high-speed real-time implementation of IEEE802.11a Wireless LAN algorithms." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0002880.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Bagi, Ligia Bariani. "Algoritmo transgenético na solução do problema do Caixeiro Viajante." Universidade Federal do Rio Grande do Norte, 2007. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18112.

Повний текст джерела
Анотація:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The Traveling Purchaser Problem (TPP) is a variant of the Traveling Salesman Problem in which there is a set of markets and a set of products. Each product is available in a subset of the markets and its unit cost depends on the market where it is available. The objective is to buy all the products, departing from and returning to a depot, at the least possible cost, defined as the sum of the weights of the edges in the tour and the cost paid to acquire the products. A Transgenetic Algorithm, an evolutionary algorithm based on endosymbiosis, is applied to the capacitated and uncapacitated versions of this problem. Evolution in Transgenetic Algorithms is simulated through the interaction and information sharing between populations of individuals from distinct species. The computational results show that this is a very effective approach for the TPP regarding solution quality and runtime. Seventeen and nine new best results are presented for instances of the capacitated and uncapacitated versions, respectively.
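The objective function described above (tour length plus the cheapest purchase of every product among the visited markets) can be sketched in a few lines of Python. The instance below is a toy example solved by brute force purely to illustrate the cost being minimized; the thesis explores this search space with a transgenetic algorithm instead.

import itertools

def tpp_cost(route, dist, prices):
    """Cost of a Traveling Purchaser solution: tour length (depot 0 -> route -> 0)
    plus, for every product, the cheapest price among the visited markets."""
    stops = [0] + list(route) + [0]
    travel = sum(dist[a][b] for a, b in zip(stops, stops[1:]))
    purchase = 0.0
    for offers in prices.values():               # offers: {market: unit price}
        available = [offers[m] for m in route if m in offers]
        if not available:                        # product not purchasable on this route
            return float("inf")
        purchase += min(available)
    return travel + purchase

# Toy instance with a depot (0) and three markets; brute force for illustration.
dist = [[0, 4, 6, 5], [4, 0, 3, 7], [6, 3, 0, 2], [5, 7, 2, 0]]
prices = {"p1": {1: 3.0, 2: 5.0}, "p2": {2: 1.0, 3: 4.0}, "p3": {1: 2.0, 3: 2.5}}
best = min(
    (tpp_cost(r, dist, prices), r)
    for k in range(1, 4)
    for r in itertools.permutations([1, 2, 3], k)
)
print(best)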
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Gagliardi, Raphael Luiz. "Aplicação de Inteligência Computacional para a Solução de Problemas Inversos de Transferência Radiativa em Meios Participantes Unidimensionais." Universidade do Estado do Rio de Janeiro, 2010. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7543.

Повний текст джерела
Анотація:
This research consists in the solution of the inverse problem of radiative transfer for a participating medium (emitting, absorbing and/or scattering), homogeneous and one-dimensional, in one layer, using the combination of an artificial neural network (ANN) with optimization techniques. The output of the properly trained ANN presents the values of the radiative properties [ω, τ0, ρ1 and ρ2], which are then optimized through the following techniques: Particle Collision Algorithm (PCA), Genetic Algorithm (GA), Greedy Randomized Adaptive Search Procedure (GRASP) and Tabu Search (TS). The data used in the training are synthetic, generated through the direct problem without the introduction of noise. The results obtained by the ANN alone present an average percentage error of less than 1.64%, which would already be satisfactory; however, with the treatment using the four optimization techniques mentioned above, the results become even better, with percentage errors of less than 0.03%, especially when the optimization is performed by the GA.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Ranjitkar, Hari Sagar, and Sudip Karki. "Comparison of A*, Euclidean and Manhattan distance using Influence map in MS. Pac-Man." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11800.

Повний текст джерела
Анотація:
Context: Influence maps and potential fields are used for path finding in the domains of robotics and game AI. Various distance measures can be used to build influence maps and potential fields, but these distance measures have not yet been compared. Objectives: In this paper, we propose a new algorithm suitable for finding an optimal point in a randomly sampled parameter space, and we compare three popular distance measures to find the most efficient one. Methodology: For RQ1 and RQ2 we used a mix of qualitative and quantitative approaches, and for RQ3 a quantitative approach. Results: The A* distance measure in influence maps is more efficient than Euclidean and Manhattan distance in potential fields. Conclusions: Our proposed algorithm is suitable for finding an optimal point and explores a huge parameter space. A* distance in influence maps is highly efficient compared to Euclidean and Manhattan distance in potential fields; Euclidean and Manhattan distance performed relatively similarly, whereas A* distance performed better than both in terms of score in Ms. Pac-Man (see Appendix A).
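A minimal sketch of an influence map built with Euclidean or Manhattan distance is shown below; the grid, sources and decay factor are invented, and the A* distance variant (which requires path finding through the maze walls) is omitted for brevity.

import numpy as np

def influence_map(shape, sources, metric="euclidean", decay=0.2):
    """Influence map over a grid: each source spreads influence that fades
    with distance; distance is Euclidean or Manhattan."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = np.zeros(shape)
    for (sy, sx), strength in sources:
        if metric == "manhattan":
            d = np.abs(ys - sy) + np.abs(xs - sx)
        else:
            d = np.sqrt((ys - sy) ** 2 + (xs - sx) ** 2)
        total += strength / (1.0 + decay * d)
    return total

# Toy Ms. Pac-Man-like setup: positive influence from a pellet cluster,
# negative influence from a ghost; the agent moves toward the maximum.
sources = [((2, 2), +1.0), ((8, 12), -1.5)]
imap = influence_map((10, 15), sources, metric="manhattan")
print(np.unravel_index(np.argmax(imap), imap.shape))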
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Bountourelis, Theologos. "Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digraphs." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28144.

Повний текст джерела
Анотація:
Thesis (M. S.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Reveliotis, Spyros; Committee Member: Ayhan, Hayriye; Committee Member: Goldsman, Dave; Committee Member: Shamma, Jeff; Committee Member: Zwart, Bert.
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Karlsson, Albin. "Evaluation of the Complexity of Procedurally Generated Maze Algorithms." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16839.

Повний текст джерела
Анотація:
Background. Procedural Content Generation (PCG) in video games can be used as a tool for efficiently producing large varieties of new content with less manpower, making it ideal for smaller teams of developers who want to compete with games made by larger teams. One particular facet of PCG is the generation of mazes. Designers who want their game to feature mazes also need to know how to evaluate maze complexity, in order to know which maze best fits the difficulty curve. Objectives. This project aims to investigate the difference in complexity between the maze generation algorithms recursive backtracker (RecBack), Prim's algorithm (Prims), and recursive division (RecDiv), in terms of completion time when solved using a depth-first-search (DFS) algorithm, and, in order to understand which parameters affect completion time/complexity, to investigate possible connections between completion time and the distribution of branching paths, the distribution of corridors, and the length of the path traversed by DFS. Methods. The main methodology was an implementation in the form of a C# application, which randomly generated 100 mazes for each algorithm at five different maze grid resolutions (16x16, 32x32, 64x64, 128x128, 256x256). Each generated maze was solved using a DFS algorithm, whose traversed nodes, solving path, and completion time were recorded. Additionally, branch distribution and corridor distribution data were gathered for each generated maze. Results. Mazes generated by Prims algorithm had the lowest complexity (shortest completion time), the shortest solving path, the lowest number of traversed nodes, and the lowest proportion of 2-branches, but the highest proportion of all other branch types; they also had the highest proportion of 4-6 length paths but the lowest proportion of 2- and 3-length paths. Mazes generated by RecDiv had intermediate complexity, solving path, traversed nodes, and proportions of all branch types, with the highest proportion of 2-length paths but the lowest proportion of 4-6 length paths. Finally, mazes generated by RecBack had the opposite statistics from Prims: the highest complexity, the longest solving path, the highest number of traversed nodes, the highest proportion of 2-branches but the lowest proportion of all other branch types, and the highest proportion of 3-length paths but the lowest of 2-length paths. Conclusions. Prims algorithm had the lowest complexity, RecDiv intermediate complexity, and RecBack the highest complexity. Increased solving path length, more traversed nodes, and an increased proportion of 2-branches all seem to correlate with increased complexity. The corridor distribution results, however, are too small and diverse to discern a pattern affecting completion time just by observing the data.
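To make the setup concrete, the sketch below generates a maze with the recursive backtracker and solves it with a DFS, returning the solving path and the number of traversed nodes, two of the measures discussed above. The grid size and the edge-set representation are illustrative choices, not the study's C# implementation.

import random

def recursive_backtracker(w, h, seed=0):
    """Generate a perfect maze: cells are nodes, carved passages are edges."""
    random.seed(seed)
    visited = {(0, 0)}
    passages = set()                    # undirected edges between adjacent cells
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        neighbours = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < w and 0 <= y + dy < h and (x + dx, y + dy) not in visited]
        if neighbours:
            nxt = random.choice(neighbours)
            passages.add(frozenset(((x, y), nxt)))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()
    return passages

def dfs_solve(passages, start, goal):
    """Depth-first search used to measure solving effort on a generated maze."""
    stack, seen, parent = [start], {start}, {}
    while stack:
        cell = stack.pop()
        if cell == goal:
            path = [cell]
            while cell != start:
                cell = parent[cell]
                path.append(cell)
            return path[::-1], len(seen)          # solving path, traversed nodes
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if frozenset((cell, nxt)) in passages and nxt not in seen:
                seen.add(nxt)
                parent[nxt] = cell
                stack.append(nxt)
    return None, len(seen)

maze = recursive_backtracker(16, 16)
path, traversed = dfs_solve(maze, (0, 0), (15, 15))
print(len(path), traversed)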
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Helge, Adam. "Procedurell Generering - Rum och Korridorer : En jämförelse av BSP och Bucks algoritm som metoder för procedurell generering av dungeons." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15288.

Повний текст джерела
Анотація:
Procedural generation of game content is today a popular option for many game developers, since it can reduce the cost, time and workload for smaller studios. The purpose of this study is to examine two procedural generation algorithms, BSP and Buck's algorithm. Buck's algorithm is a new, little-known and rarely used algorithm, and it is of interest to investigate it in order to evaluate its usefulness in game development. The results of the study show that BSP generates content fastest, while Buck's algorithm in return creates content with a higher level of density and relative size.
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Audibert, Jean-Yves. "Théorie statistique de l'apprentissage : une approche PAC-Bayésienne." Paris 6, 2004. http://www.theses.fr/2004PA066003.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Morandini, Jacques. "Contribution à la résolution multi-méthode des équations aux dérivées partielles couplées rencontrées en magnéto-thermo-hydrodynamique." Grenoble INPG, 1994. http://www.theses.fr/1994INPG0068.

Повний текст джерела
Анотація:
A methodological analysis of the numerical solution of the coupled partial differential equations encountered in continuum physics, and more particularly in magneto-thermo-hydrodynamics, is presented. The originality of this approach lies in a generalization of the notions of numerical method and of solution-control algorithm, as well as in the dynamic modelling of the objects that support the simulated physical phenomena. This analysis guided the specification and development of a prototype multi-method generator system in which numerical methods are defined as a set of elementary operators. The solution-control algorithms are independent both of the formulations employed and of the numerical methods used. The dynamic construction of the objects is driven by a set of construction operators, following the rules described by the user. All the tasks required for the proper functioning of the operators and algorithms are handled automatically during the solution by the engines of the methods, the algorithms and the objects. A few applications are presented to illustrate how the generator system works: the solution of a 2D thermal problem with a simple prediction-correction algorithm, the solution of a 2D magneto-thermal problem crossing the Curie point with a crossed prediction-correction algorithm, and finally a 3D magneto-thermal problem with tracking of a solidification front.
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Dahl, David, and Oscar Pleininger. "A Comparative Study of Representations for Procedurally Generated Structures in Games." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20665.

Повний текст джерела
Анотація:
In this paper we have compared and evaluated two different representations used in search-based procedural content generation (PCG). The comparison was based on the differences in performance, the quality of the generated content and the complexity of the final artifacts. This was accomplished by creating two artifacts, each of which used one of the representations in combination with a genetic algorithm, followed by individual testing sessions in which 21 test subjects participated. The evaluated results are presented in a manner relevant both for search-based PCG as a whole and for further exploration within the area of representations used in this field.
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Alfredsson, Jon. "Design of a parallel A/D converter system on PCB : For high-speed sampling and timing error correction." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1201.

Повний текст джерела
Анотація:

The goals for most of today's receiver systems are sampling at high speed, with high resolution and with as few errors as possible. This master thesis describes the design of a high-speed sampling system with "state-of-the-art" components available on the market. The system is designed with a parallel analog-to-digital converter (ADC) architecture, also called time interleaving, which aims to increase the sampling speed of the system. The system described in this report uses four 12-bit ADCs in parallel. Each ADC can sample at 125 MHz, so the total sampling speed theoretically becomes 500 MS/s. The system has been implemented and manufactured on a printed circuit board (PCB). Up to four boards can be connected in parallel to theoretically reach 2 GS/s.

In an approach to increase the system's performance even further, a timing error estimation algorithm will be used on the sampled data. This algorithm estimates the timing errors that occur when sampling with non-uniform time intervals between samples. After the estimation, the sampling clocks can be adjusted to correct the errors.

This thesis covers some ADC theory, the system design and the PCB implementation. It also describes how to test and measure the system's performance. No measurement results are presented in this thesis because the measurements will be done after this project. The last part of the thesis discusses future improvements to achieve even higher performance.

Стилі APA, Harvard, Vancouver, ISO та ін.
46

Elhadji, Ille Gado Nassara. "Méthodes aléatoires pour l’apprentissage de données en grande dimension : application à l'apprentissage partagé." Thesis, Troyes, 2017. http://www.theses.fr/2017TROY0032.

Повний текст джерела
Анотація:
This thesis deals with the study of random methods for learning from large-scale data. Firstly, we propose an unsupervised approach consisting in the estimation of the principal components when the sample size and the observation dimension tend towards infinity. This approach is based on random matrices and uses consistent estimators of the eigenvalues and eigenvectors of the covariance matrix. Then, for supervised learning, we propose an approach which consists in first reducing the dimension through an approximation of the original data matrix and then performing LDA in the reduced space. The dimension reduction is based on low-rank matrix approximation using random matrices. A fast approximation algorithm for the SVD, and a modified version for fast approximation by spectral gap, are developed. Experiments are done with real image and text data. Compared to other methods, the proposed approaches often provide an optimal error rate with a small computation time. Finally, our contribution to transfer learning consists in the use of subspace alignment and the low-rank approximation of matrices by random projections. The proposed method is applied to data from benchmark databases; it has the advantage of being efficient and well suited to large-scale data.
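A short sketch of the random-projection low-rank approximation underlying this kind of approach (a Halko-style randomized SVD) is given below. The data matrix is synthetic, and the rank, oversampling and power-iteration counts are illustrative defaults, not the settings of the thesis.

import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, seed=0):
    """Randomized SVD: project onto a small random subspace, orthonormalise,
    then take the exact SVD of the reduced matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    omega = rng.standard_normal((n, rank + oversample))
    Y = A @ omega
    for _ in range(n_iter):                 # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Low-rank reduction of a tall data matrix before a classifier such as LDA.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 300)) @ rng.standard_normal((300, 300))
U, s, Vt = randomized_svd(X, rank=20)
X_reduced = U * s                           # coordinates in the reduced space
print(X_reduced.shape)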
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Bennett, Casey. "Channel Noise and Firing Irregularity in Hybrid Markov Models of the Morris-Lecar Neuron." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1441551744.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Dobossy, Barnabás. "Odhad parametrů jezdce na vozítku segway a jejich použití pro optimalizaci řídícího algoritmu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-399406.

Повний текст джерела
Анотація:
This thesis deals with the development, testing and implementation of an adaptive control system for a two-wheeled self-balancing vehicle. The adaptation of the vehicle parameters is carried out on the basis of the rider's parameters. The system parameters are not measured directly; they are estimated from the evolution of the state variables and the response of the system. The estimated parameters include the rider's mass and the position of the rider's centre of gravity. The aim of the work is to adapt the driving characteristics of the vehicle to different riders of different weights, in order to improve the stability of the vehicle. This work is a continuation of previous projects from 2011 and 2015.
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Figueiredo, António José Pereira de. "Energy efficiency and comfort strategies for Southern European climate : optimization of passive housing and PCM solutions." Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17291.

Повний текст джерела
Анотація:
Doutoramento em Engenharia Civil
Pursuing holistic sustainable solutions towards the target defined by the United Nations Framework Convention on Climate Change (UNFCCC) is a stimulating goal. Exploring and tackling this task leads to a broad number of possible combinations of energy-saving strategies that can be bridged by the Passive House (PH) concept and, in this context, the use of advanced materials such as Phase Change Materials (PCM). Acknowledging that the PH concept is well established and practiced mainly in cold-climate countries of Northern and Central Europe, the present research investigates how the construction technology and energy demand levels can be adapted to Southern Europe, in particular to the climate of mainland Portugal. In Southern Europe, in addition to meeting the heating requirements rather easily, it is crucial to provide comfortable conditions during summer, due to a high risk of overheating. The incorporation of PCMs into building solutions, making use of solar energy to ensure their phase change process, is a potential solution for the overall reduction of energy consumption and overheating in buildings. The PH concept and PCM use need to be adapted and optimised to work together with other active and passive systems, improving the overall building thermal behaviour and reducing the energy consumption. Thus, a hybrid evolutionary algorithm was used to optimise the application of the PH concept to the Portuguese climate through the study of the combination of several building features, as well as constructive solutions incorporating PCMs, minimising multi-objective benchmark functions to attain the defined goals.
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Goyal, Anil. "Learning a Multiview Weighted Majority Vote Classifier : Using PAC-Bayesian Theory and Boosting." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES037/document.

Повний текст джерела
Анотація:
With the tremendous generation of data, we have data collected from different information sources with heterogeneous properties, so it is important to consider these representations, or views, of the data. This machine learning problem is referred to as multiview learning. It has many applications; for example, in medical imaging the human brain can be represented by different sets of features such as MRI, t-fMRI, EEG, etc. In this thesis, we focus on supervised multiview learning, where we see multiview learning as a combination of different view-specific classifiers or views. Therefore, from our point of view, it is interesting to tackle the multiview learning issue through the PAC-Bayesian framework, a tool derived from statistical learning theory for studying models expressed as majority votes. One of the advantages of PAC-Bayesian theory is that it allows directly capturing the trade-off between accuracy and diversity between voters, which is important for multiview learning. The first contribution of this thesis is extending the classical PAC-Bayesian theory (with a single view) to multiview learning (with more than two views). To do this, we consider a two-level hierarchy of distributions over the view-specific voters and the views. Based on this strategy, we derive PAC-Bayesian generalization bounds (both probabilistic and expected risk bounds) for multiview learning. From a practical point of view, we design two multiview learning algorithms based on our two-level PAC-Bayesian strategy. The first algorithm is a one-step boosting-based multiview learning algorithm called PB-MVBoost. It iteratively learns the weights over the views by optimizing the multiview C-Bound, which controls the trade-off between the accuracy and the diversity between the views. The second algorithm is based on a late fusion approach, where we combine the predictions of the view-specific classifiers using the PAC-Bayesian algorithm CqBoost proposed by Roy et al. Finally, we show that minimizing the classification error of the multiview weighted majority vote is equivalent to minimizing Bregman divergences. This allows us to derive a parallel-update optimization algorithm (referred to as MωMvC2) to learn our multiview weighted majority vote.
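A deliberately simplified late-fusion sketch in the spirit of the second algorithm is shown below: one view-specific classifier per view, combined by a weighted vote. The accuracy-based weights are a crude stand-in for the PAC-Bayesian weighting learned by CqBoost/PB-MVBoost, and the two "views" are synthetic splits of a single feature matrix.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Two hypothetical "views" of the same samples (e.g. two feature modalities).
X, y = make_classification(n_samples=600, n_features=40, random_state=0)
views = [X[:, :20], X[:, 20:]]
idx_train, idx_test = train_test_split(np.arange(len(y)), random_state=0)

# Step 1: one view-specific voter per view.
voters = [LogisticRegression(max_iter=1000).fit(v[idx_train], y[idx_train])
          for v in views]

# Step 2: weight each view by its training accuracy (a simple stand-in for the
# weighting over views learned in the thesis).
weights = np.array([v.score(views[i][idx_train], y[idx_train])
                    for i, v in enumerate(voters)])
weights /= weights.sum()

# Weighted majority vote over the views' probabilistic outputs.
proba = sum(w * v.predict_proba(views[i][idx_test])
            for i, (w, v) in enumerate(zip(weights, voters)))
pred = proba.argmax(axis=1)
print("multiview vote accuracy:", np.mean(pred == y[idx_test]))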
Стилі APA, Harvard, Vancouver, ISO та ін.
