
Theses on the topic "Hyperparameter selection and optimization"

Consult the top 50 theses for your research on the topic "Hyperparameter selection and optimization".

Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Ndiaye, Eugene. "Safe optimization algorithms for variable selection and hyperparameter tuning". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT004/document.

Full text
Abstract
Massive and automatic data processing requires the development of techniques able to filter out the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in high-dimensional settings. They can often be expressed as solutions of regularized empirical risk minimization problems, which generally lead to non-differentiable objectives written as the sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although this way of including prior information has considerable advantages, it unfortunately introduces many numerical difficulties, both for solving the underlying optimization problem and for calibrating the level of regularization. Addressing these issues has been at the heart of this thesis. A recently introduced technique, called "screening rules", proposes to ignore some variables during the optimization process by exploiting the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure guarantees that no variable is rejected wrongly. In this work, we propose a unified framework for identifying important structures in these convex optimization problems and we introduce the "Gap Safe Screening Rules". They yield significant gains in computational time thanks to the dimensionality reduction induced by this method. In addition, they can easily be inserted into iterative algorithms and apply to a larger class of problems. To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of solutions as a function of the regularization parameter. However, they exhibit numerical instabilities due to repeated matrix inversions and are often expensive in high dimension. Another weakness is that a worst-case analysis shows their exact complexity to be exponential in the dimension of the model parameter. Allowing approximate solutions makes it possible to circumvent these drawbacks by approximating the curve of solutions. In this thesis, we revisit approximation techniques for regularization paths given a predefined tolerance and we propose an in-depth analysis of their complexity with respect to the regularity of the loss functions involved. From this, we derive optimal algorithms as well as various strategies for exploring the parameter space. We also provide a calibration method (for the regularization parameter) that enjoys global convergence guarantees for the minimization of the empirical risk on validation data. Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the amount of variance in the observations, which is difficult to use in practice because the variance of the model is often an unknown quantity. In such cases, it is possible to jointly optimize the regression parameters and the noise level. These concomitant estimates, which appeared in the literature under the names Scaled Lasso and Square-Root Lasso, provide theoretical results as sharp as those of the Lasso while being independent of the actual noise level of the observations. Although they represent important advances, these methods are numerically unstable and the currently available algorithms are expensive in computation time. We illustrate these difficulties and propose modifications based on smoothing techniques to increase the stability of these estimators, as well as a faster algorithm to compute them.
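To make the screening idea concrete, here is a minimal sketch of a Gap Safe rule for the Lasso (an illustration of the general principle, not code from the thesis; the data and the placement inside a solver are assumptions): the duality gap at the current iterate defines a safe sphere around a dual-feasible point, and any feature whose worst-case correlation over that sphere stays below one can be safely discarded.

```python
import numpy as np

def gap_safe_screening(X, y, w, lam):
    """Sketch of a Gap Safe rule for the Lasso:
        min_w 0.5 * ||y - X w||^2 + lam * ||w||_1.
    Returns a boolean mask of features that can be safely discarded
    at the current iterate w (illustrative only)."""
    r = y - X @ w                                    # residual at the current iterate
    # Dual-feasible point obtained by rescaling the residual.
    theta = r / max(lam, np.max(np.abs(X.T @ r)))
    # Duality gap between the primal and dual objectives.
    primal = 0.5 * r @ r + lam * np.sum(np.abs(w))
    dual = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    radius = np.sqrt(2.0 * gap) / lam                # radius of the safe sphere
    # Feature j is safely inactive if |x_j' theta| + radius * ||x_j|| < 1.
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0

# Toy usage: screen features before starting a Lasso solve.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
y = rng.standard_normal(50)
lam = 0.5 * np.max(np.abs(X.T @ y))
inactive = gap_safe_screening(X, y, np.zeros(200), lam)
print(f"{inactive.sum()} of 200 features screened out")
```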
2

Thornton, Chris. "Auto-WEKA : combined selection and hyperparameter optimization of supervised machine learning algorithms". Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46177.

Full text
Abstract
Many different machine learning algorithms exist; taking into account each algorithm's set of hyperparameters, there is a staggeringly large number of possible choices. This project considers the problem of simultaneously selecting a learning algorithm and setting its hyperparameters. Previous works attack these issues separately, but this problem can be addressed by a fully automated approach, in particular by leveraging recent innovations in Bayesian optimization. The WEKA software package provides an implementation for a number of feature selection and supervised machine learning algorithms, which we use inside our automated tool, Auto-WEKA. Specifically, we examined the 3 search and 8 evaluator methods for feature selection, as well as all of the classification and regression methods, spanning 2 ensemble methods, 10 meta-methods, 27 base algorithms, and their associated hyperparameters. On 34 popular datasets from the UCI repository, the Delve repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, our method produces classification and regression performance often much better than obtained using state-of-the-art algorithm selection and hyperparameter optimization methods from the literature. Using this integrated approach, users can more effectively identify not only the best machine learning algorithm, but also the corresponding hyperparameter settings and feature selection methods appropriate for that algorithm, and hence achieve improved performance for their specific classification or regression task.
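Auto-WEKA itself searches WEKA's learners with Bayesian optimization; purely to illustrate the combined algorithm selection and hyperparameter optimization problem it addresses, the following hedged sketch samples (algorithm, hyperparameters) pairs jointly with scikit-learn and plain random search. The candidate models and ranges are illustrative choices, not those of Auto-WEKA.

```python
import random

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Joint search space: each entry is (constructor, hyperparameter sampler).
SEARCH_SPACE = {
    "svm": (SVC, lambda: {"C": 10 ** random.uniform(-2, 2),
                          "gamma": 10 ** random.uniform(-3, 1)}),
    "knn": (KNeighborsClassifier, lambda: {"n_neighbors": random.randint(1, 30)}),
    "rf": (RandomForestClassifier, lambda: {"n_estimators": random.randint(10, 200),
                                            "max_depth": random.randint(2, 12)}),
}

def cash_random_search(X, y, n_trials=30, seed=0):
    """Sample (algorithm, hyperparameters) pairs and keep the best CV score."""
    random.seed(seed)
    best = (None, None, -1.0)
    for _ in range(n_trials):
        name = random.choice(list(SEARCH_SPACE))
        ctor, sampler = SEARCH_SPACE[name]
        params = sampler()
        score = cross_val_score(ctor(**params), X, y, cv=3).mean()
        if score > best[2]:
            best = (name, params, score)
    return best

X, y = load_iris(return_X_y=True)
print(cash_random_search(X, y))
```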
3

Bertrand, Quentin. "Hyperparameter selection for high dimensional sparse learning : application to neuroimaging". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG054.

Full text
Abstract
Due to their non-invasiveness and excellent time resolution, magneto- and electroencephalography (M/EEG) have emerged as tools of choice for monitoring brain activity. Reconstructing brain signals from M/EEG measurements can be cast as a high-dimensional, ill-posed inverse problem. Typical estimators of brain signals involve challenging optimization problems composed of the sum of a data-fidelity term and a sparsity-promoting term. Because their regularization hyperparameters are notoriously hard to tune, sparsity-based estimators are currently not widely used by practitioners. The goal of this thesis is to provide a simple, fast, and automatic way to calibrate sparse linear models. We first study some properties of coordinate descent: model identification, local linear convergence, and acceleration. Relying on Anderson extrapolation schemes, we propose an effective way to speed up coordinate descent in theory and practice. We then explore a statistical approach to set the regularization parameter of Lasso-type problems. A closed-form formula can be derived for the optimal regularization parameter of L1-penalized linear regressions. Unfortunately, it relies on the true noise level, which is unknown in practice. To remove this dependency, one can resort to estimators for which the regularization parameter does not depend on the noise level. However, they require solving challenging "nonsmooth + nonsmooth" optimization problems. We show that partial smoothing preserves their statistical properties and we propose an application to M/EEG source localization problems. Finally, we investigate hyperparameter optimization, encompassing held-out and cross-validation hyperparameter selection. It requires tackling bilevel optimization problems with nonsmooth inner problems. Such problems are canonically solved using zeroth-order techniques, such as grid search or random search. We present an efficient technique to solve these challenging bilevel optimization problems using first-order methods.
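The bilevel view mentioned at the end of the abstract can be written down directly: the inner problem fits a Lasso at a fixed regularization level, and the outer problem minimizes validation error over that level. The sketch below shows the zeroth-order (grid search) baseline that the thesis proposes to replace with first-order methods; the synthetic data and grid are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

def validation_loss(lam, X_tr, y_tr, X_val, y_val):
    """Inner problem: fit a Lasso at regularization lam on the training split.
    Outer objective: mean squared error of that fit on the validation split."""
    inner = Lasso(alpha=lam, max_iter=10_000).fit(X_tr, y_tr)
    return np.mean((y_val - inner.predict(X_val)) ** 2)

# Synthetic sparse regression problem (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 300))
w_true = np.zeros(300)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(100)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Zeroth-order outer solver: evaluate the bilevel objective on a log-spaced grid.
grid = np.logspace(-3, 0, 30)
losses = [validation_loss(lam, X_tr, y_tr, X_val, y_val) for lam in grid]
print("best lambda on the grid:", grid[int(np.argmin(losses))])
```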
4

Thomas, Janek [Verfasser] and Bernd [Akademischer Betreuer] Bischl. "Gradient boosting in automatic machine learning: feature selection and hyperparameter optimization / Janek Thomas ; Betreuer: Bernd Bischl". München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2019. http://d-nb.info/1189584808/34.

Full text
5

Nakisa, Bahareh. "Emotion classification using advanced machine learning techniques applied to wearable physiological signals data". Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129875/9/Bahareh%20Nakisa%20Thesis.pdf.

Full text
Abstract
This research contributed to the development of an advanced feature selection model, hyperparameter optimization, and a temporal multimodal deep learning model to improve the performance of dimensional emotion recognition. The study adopts different approaches based on portable, wearable physiological sensors. It identified the best models for feature selection, the best hyperparameter values for Long Short-Term Memory networks, and ways to fuse multimodal sensors efficiently for emotion recognition. Collectively, the methods of this thesis deliver better algorithms and maximize the use of miniaturized sensors to provide accurate emotion recognition.
6

Klein, Aaron [Verfasser] and Frank [Akademischer Betreuer] Hutter. "Efficient bayesian hyperparameter optimization". Freiburg : Universität, 2020. http://d-nb.info/1214592961/34.

Full text
7

Gousseau, Clément. "Hyperparameter Optimization for Convolutional Neural Networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272107.

Full text
Abstract
Training algorithms for artificial neural networks depend on parameters called hyperparameters. These can have a strong influence on the trained model but are often chosen manually through trial-and-error experiments. This thesis, conducted at Orange Labs Lannion, presents and evaluates three algorithms that aim to solve this task: a naive approach (random search), a Bayesian approach (Tree Parzen Estimator) and an evolutionary approach (Particle Swarm Optimization). A well-known dataset for handwritten digit recognition (MNIST) is used to compare these algorithms. They are also evaluated on audio classification, which is one of the main activities of the company team where the thesis was conducted. The evolutionary algorithm (PSO) showed better results than the two other methods.
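As a rough illustration of the random-search baseline used in such comparisons, the sketch below tunes a small scikit-learn network on the digits dataset as a lightweight stand-in for the MNIST CNN; the hyperparameter ranges, the model, and the budget of 15 trials are assumptions, and the TPE and PSO competitors are not shown.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)

def sample_config():
    """Draw one hyperparameter configuration at random (illustrative ranges)."""
    return {
        "hidden_layer_sizes": (int(rng.integers(16, 128)),),
        "learning_rate_init": 10 ** rng.uniform(-4, -1),
        "alpha": 10 ** rng.uniform(-6, -2),
    }

best_score, best_cfg = -np.inf, None
for _ in range(15):                                  # budget of 15 random trials
    cfg = sample_config()
    score = cross_val_score(MLPClassifier(max_iter=300, **cfg), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_cfg = score, cfg
print(best_score, best_cfg)
```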
8

Lévesque, Julien-Charles. "Bayesian hyperparameter optimization : overfitting, ensembles and conditional spaces". Doctoral thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/28364.

Full text
Abstract
In this thesis, we consider the analysis and extension of Bayesian hyperparameter optimization methodology for various problems related to supervised machine learning. The contributions of the thesis concern 1) the overestimation of the generalization accuracy of hyperparameters and models resulting from Bayesian optimization, 2) an application of Bayesian optimization to ensemble learning, and 3) the optimization of spaces with a conditional structure such as those found in automatic machine learning (AutoML) problems. Generally, machine learning algorithms have some free parameters, called hyperparameters, that allow one to regulate or modify their behaviour. For the longest time, hyperparameters were tuned by hand or with exhaustive search algorithms. Recent work has highlighted the conceptual advantages of optimizing hyperparameters with more rational methods, such as Bayesian optimization. Bayesian optimization is a very versatile framework for the optimization of unknown and non-differentiable functions, grounded strongly in probabilistic modelling and uncertainty estimation, and we adopt it for the work in this thesis. We first briefly introduce Bayesian optimization with Gaussian processes (GPs) and describe its application to hyperparameter optimization. Next, original contributions are presented on the dangers of overfitting during hyperparameter optimization, where the optimization ends up learning the validation folds. We show that there is indeed overfitting during the optimization of hyperparameters, even with cross-validation strategies, and that it can be reduced by methods such as reshuffling the training and validation splits at every iteration of the optimization. Another promising method is the use of a GP's posterior mean for the selection of the final hyperparameters, rather than directly returning the model with the minimal cross-validation error. Both suggested approaches are demonstrated to deliver significant improvements in the generalization accuracy of the final selected model on a benchmark of 118 datasets. The next contributions come from an application of Bayesian hyperparameter optimization to ensemble learning. Stacking methods have been used for some time to combine multiple classifiers in a meta-classifier system. They can be applied to the end result of a Bayesian hyperparameter optimization pipeline by keeping the best classifiers found during the optimization and combining them at the end. Our Bayesian ensemble optimization method consists of a modification of the Bayesian optimization pipeline to search for the hyperparameters that are best for an ensemble, which is different from optimizing hyperparameters for the performance of a single model. The approach has the advantage of not requiring the training of more models than a regular Bayesian hyperparameter optimization. Experiments show the potential of the suggested approach on three different search spaces and many datasets. The last contributions are related to the optimization of more complex hyperparameter spaces, namely spaces that contain a structure of conditionality. Conditions arise naturally in hyperparameter optimization when one defines a model with multiple components: certain hyperparameters then only need to be specified if their parent component is activated. One example of such a space is combined algorithm selection and hyperparameter optimization, now better known as AutoML, where the objective is to choose the base model and optimize its hyperparameters. We thus highlight techniques and propose new kernels for GPs that handle the structure of such spaces in a principled way. These contributions are also supported by experimental evaluation on many datasets. Overall, the thesis groups several works directly related to Bayesian hyperparameter optimization. It showcases novel ways to apply Bayesian optimization to ensemble learning, as well as methodologies to reduce overfitting and to optimize more complex, structured spaces.
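The posterior-mean selection idea from the abstract can be sketched in a few lines: instead of returning the configuration with the lowest observed (noisy) cross-validation error, fit a GP surrogate to the search history and return the configuration with the lowest posterior mean. The one-dimensional synthetic search history below is an assumption for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical search history: tried values of one hyperparameter (log10 of a
# regularization strength) together with noisy cross-validation errors.
rng = np.random.default_rng(1)
tried = rng.uniform(-4, 0, size=(25, 1))
cv_error = 0.2 + 0.1 * (tried[:, 0] + 2.0) ** 2 + 0.05 * rng.standard_normal(25)

# Selecting the raw minimizer of the noisy CV error can overfit the validation folds.
naive_choice = tried[np.argmin(cv_error), 0]

# Alternative discussed above: smooth the history with a GP surrogate and
# return the candidate with the lowest posterior mean.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.05 ** 2).fit(tried, cv_error)
smoothed_choice = tried[np.argmin(gp.predict(tried)), 0]

print(f"argmin of noisy CV error:    {naive_choice:.2f}")
print(f"argmin of GP posterior mean: {smoothed_choice:.2f}")
```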
9

Nygren, Rasmus. "Evaluation of hyperparameter optimization methods for Random Forest classifiers". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301739.

Full text
Abstract
In order to create a machine learning model, one is often tasked with selecting certain hyperparameters which configure the behavior of the model. The performance of the model can vary greatly depending on how these hyperparameters are selected, thus making it relevant to investigate the effects of hyperparameter optimization on the classification accuracy of a machine learning model. In this study, we train and evaluate a Random Forest classifier whose hyperparameters are set to default values and compare its classification accuracy to another classifier whose hyperparameters are obtained through the use of the hyperparameter optimization (HPO) methods Random Search, Bayesian Optimization and Particle Swarm Optimization. This is done on three different datasets, and each HPO method is evaluated based on the classification accuracy change it induces across the datasets. We found that every HPO method yielded a total classification accuracy increase of approximately 2-3% across all datasets compared to the accuracies obtained using the default hyperparameters. However, due to limitations of time, data and computational resources, no assertions can be made as to whether the observed positive effect is generalizable at a larger scale. Instead, we could conclude that the utility of HPO methods is dependent on the dataset at hand.
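A hedged sketch of the kind of comparison described above, using scikit-learn's RandomizedSearchCV as the Random Search method on one dataset; the search ranges and budget are illustrative assumptions, and the Bayesian and Particle Swarm optimizers are omitted.

```python
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Baseline: default hyperparameters.
default_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

# Random Search over a few common Random Forest hyperparameters.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 500),
        "max_depth": randint(2, 20),
        "min_samples_split": randint(2, 11),
        "max_features": ["sqrt", "log2", None],
    },
    n_iter=25, cv=5, random_state=0,
)
search.fit(X, y)
print(f"default CV accuracy: {default_acc:.3f}")
print(f"tuned CV accuracy:   {search.best_score_:.3f}")
```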
10

Matosevic, Antonio. "On Bayesian optimization and its application to hyperparameter tuning". Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-74962.

Full text
Abstract
This thesis introduces the concept of Bayesian optimization, primarily used for optimizing costly black-box functions. Besides a theoretical treatment of the topic, the focus of the thesis is on two numerical experiments. Firstly, different types of acquisition functions, which are the key components responsible for the performance, are tested and compared. Special emphasis is placed on the analysis of the so-called exploration-exploitation trade-off. Secondly, one of the most recent applications of Bayesian optimization concerns hyperparameter tuning in machine learning algorithms, where the objective function is expensive to evaluate and not given analytically. However, some results indicate that much simpler methods can give similar results. Our contribution is therefore a statistical comparison of simple random search and Bayesian optimization in the context of finding the optimal set of hyperparameters in support vector regression. We found no significant difference in the performance of these two methods.
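One of the acquisition functions typically compared in such experiments is Expected Improvement; the following sketch (with a toy objective and hand-picked kernel, both assumptions) shows how it scores candidate points from a GP surrogate, with the xi parameter steering the exploration-exploitation trade-off.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(candidates, gp, best_y, xi=0.01):
    """EI for minimization: expected gain over the incumbent best_y under the GP
    posterior; xi shifts the balance between exploration and exploitation."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (best_y - mu - xi) / sigma
    return (best_y - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy black-box objective and a few initial observations.
f = lambda x: np.sin(3 * x) + 0.3 * x ** 2
X_obs = np.array([[-1.5], [0.0], [1.0]])
y_obs = f(X_obs).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_obs, y_obs)
grid = np.linspace(-2, 2, 200).reshape(-1, 1)
next_x = grid[np.argmax(expected_improvement(grid, gp, y_obs.min()))]
print("next point suggested by EI:", next_x)
```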
11

Schilling, Nicolas [Verfasser], Lars [Akademischer Betreuer] Schmidt-Thieme and Frank [Gutachter] Hutter. "Bayesian Hyperparameter Optimization - Relational and Scalable Surrogate Models for Hyperparameter Optimization Across Problem Instances / Nicolas Schilling ; Gutachter: Frank Hutter ; Betreuer: Lars Schmidt-Thieme". Hildesheim : Stiftung Universität Hildesheim, 2019. http://d-nb.info/1199005703/34.

Full text
12

Larsson, Olov. "A Reward-based Algorithm for Hyperparameter Optimization of Neural Networks". Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-78827.

Full text
Abstract
Machine learning and its wide range of applications is becoming increasingly prevalent in both academia and industry. This thesis focuses on two machine learning methods: convolutional neural networks and reinforcement learning. Convolutional neural networks have seen great success in various applications, for both classification and regression problems, in a diverse range of fields, e.g. vision for self-driving cars or facial recognition. These networks are built on a set of trainable weights, optimized on data, and a set of hyperparameters set by the designer of the network, which remain constant. For the network to perform well, the hyperparameters have to be optimized separately. The goal of this thesis is to investigate the use of reinforcement learning as a method for optimizing hyperparameters in convolutional neural networks built for classification problems. The reinforcement learning methods used are tabular Q-learning and a new Q-learning-inspired algorithm denominated max-table. These algorithms have been tested with different exploration policies based on each hyperparameter value's covariance, precision or relevance to the performance metric. The reinforcement learning algorithms were mostly tested on the datasets CIFAR10 and MNIST fashion against a baseline set by random search. While the Q-learning algorithm was not able to perform better than random search, max-table performed better than random search 50% of the time on both datasets. Hyperparameter-based exploration policies using covariance and relevance were shown to decrease the optimizers' performance. No significant difference was found between a hyperparameter-based exploration policy using performance and an equally distributed exploration policy.
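As a rough, stateless illustration of the value-update idea (closer to a bandit than to full tabular Q-learning, and not the thesis's max-table algorithm), the sketch below learns per-value scores for two hypothetical CNN hyperparameters from a made-up reward function standing in for validation accuracy.

```python
import random
from collections import defaultdict

# Hypothetical discretized choices for two CNN hyperparameters.
CHOICES = {"learning_rate": [1e-4, 1e-3, 1e-2], "filters": [16, 32, 64]}

def evaluate(config):
    """Stand-in for training a CNN and returning validation accuracy.
    A real run would train the network; this made-up function only
    rewards configurations near (1e-3, 32)."""
    return 0.9 - abs(config["learning_rate"] - 1e-3) * 50 - abs(config["filters"] - 32) / 500

q = defaultdict(float)            # value estimate for each (hyperparameter, value) pair
epsilon, alpha = 0.2, 0.5         # exploration rate and learning rate
random.seed(0)

for _ in range(100):
    # Build a configuration by picking each hyperparameter value epsilon-greedily.
    config = {}
    for name, values in CHOICES.items():
        if random.random() < epsilon:
            config[name] = random.choice(values)
        else:
            config[name] = max(values, key=lambda v: q[(name, v)])
    reward = evaluate(config)     # validation accuracy acts as the reward signal
    for name, value in config.items():
        q[(name, value)] += alpha * (reward - q[(name, value)])

best = {n: max(v, key=lambda x: q[(n, x)]) for n, v in CHOICES.items()}
print("best configuration found:", best)
```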
13

Jeggle, Kai. "Scalable Hyperparameter Optimization: Combining Asynchronous Bayesian Optimization With Efficient Budget Allocation". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280340.

Full text
Abstract
Automated hyperparameter tuning has become an integral part of the optimization of machine learning (ML) pipelines. Sequential model-based optimization algorithms, such as Bayesian optimization (BO), have been proven to be sample efficient with strong final performance. However, the increasing complexity and training times of ML models require a shift from sequential to asynchronous, distributed hyperparameter tuning. The literature has come up with different strategies to modify BO to work in an asynchronous setting. By combining asynchronous BO with budget allocation strategies, poorly performing trials are stopped early to free up expensive resources for other trials, further improving the efficient use of resources and hence scalability. Maggy is an open-source asynchronous hyperparameter optimization framework built on Spark that transparently schedules and manages hyperparameter trials. In this thesis, we present new support for a plug-and-play API to arbitrarily combine asynchronous Bayesian optimization algorithms with budget allocation strategies, such as Hyperband or Median Early Stopping. This combines the best of both worlds and provides high scalability through efficient use of resources and strong final performance. We experimentally evaluate different combinations of asynchronous Bayesian optimization with budget allocation algorithms and demonstrate their competitive performance and ability to scale.
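A minimal sketch of a median early-stopping rule of the kind that can be combined with asynchronous BO (a simplified version; Maggy's actual API and the exact rule used in the thesis may differ):

```python
from statistics import median

def median_early_stopping(trial_curve, completed_curves, min_steps=3):
    """Stop a running trial if its latest validation accuracy falls below the
    median of what completed trials had achieved at the same step."""
    step = len(trial_curve) - 1
    if step < min_steps:
        return False                              # too early to judge the trial
    peers = [curve[step] for curve in completed_curves if len(curve) > step]
    if not peers:
        return False
    return trial_curve[-1] < median(peers)

# Hypothetical learning curves (validation accuracy per epoch) of finished trials.
completed = [
    [0.60, 0.70, 0.75, 0.80, 0.82],
    [0.55, 0.65, 0.72, 0.76, 0.79],
    [0.50, 0.58, 0.61, 0.63, 0.64],
]
running = [0.48, 0.55, 0.58, 0.59]                # the trial currently being trained
print("stop this trial early?", median_early_stopping(running, completed))
```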
14

Gabere, Musa Nur. "Prediction of antimicrobial peptides using hyperparameter optimized support vector machines". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7345_1330684697.

Full text
Abstract

Antimicrobial peptides (AMPs) play a key role in the innate immune response. They can be found ubiquitously in a wide range of eukaryotes including mammals, amphibians, insects, plants, and protozoa. In lower organisms, AMPs function merely as antibiotics by permeabilizing cell membranes and lysing invading microbes. Prediction of antimicrobial peptides is important because the experimental methods used in characterizing AMPs are costly, time consuming and resource intensive, and identification of AMPs in insects can serve as a template for the design of novel antibiotics. To fulfil this, firstly, data on antimicrobial peptides are extracted from UniProt, manually curated and stored in a centralized database called the dragon antimicrobial peptide database (DAMPD). Secondly, based on the curated data, models to predict antimicrobial peptides are created using support vector machines with optimized hyperparameters. In particular, global optimization methods such as grid search, pattern search and derivative-free methods are utilised to optimize the SVM hyperparameters. These models are useful in characterizing unknown antimicrobial peptides. Finally, a webserver is created that will be used to predict antimicrobial peptides in haematophagous insects such as Glossina morsitans and Anopheles gambiae.
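A hedged sketch of one of the strategies mentioned, grid search over the SVM hyperparameters C and gamma, on synthetic features standing in for curated peptide data (the data, ranges, and kernel choice are assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder features standing in for curated peptide descriptors.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Exhaustive grid search over the RBF-SVM hyperparameters C and gamma.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": np.logspace(-2, 3, 6), "gamma": np.logspace(-4, 1, 6)},
    cv=5,
)
grid.fit(X, y)
print("best hyperparameters:", grid.best_params_)
print("best CV accuracy:    ", round(grid.best_score_, 3))
```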

15

Hauser, Kristen. "Hyperparameter Tuning for Reinforcement Learning with Bandits and Off-Policy Sampling". Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1613034993418088.

Full text
16

Denton, Trip Shokoufandeh Ali. "Subset selection using nonlinear optimization /". Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/1763.

Full text
17

Clune, Rory P. (Rory Patrick). "Algorithm selection in structural optimization". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82832.

Full text
Abstract
Thesis (Ph. D.)--Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 153-162).
Structural optimization is largely unused as a practical design tool, despite an extensive academic literature which demonstrates its potential to dramatically improve design processes and outcomes. Many factors inhibit optimization's application. Among them is the requirement for engineers, who generally lack the requisite expertise, to choose an optimization algorithm for a given problem. A suitable choice of algorithm improves the resulting design and reduces computational cost, yet the field of optimization does little to guide engineers in selecting from an overwhelming number of options. The goal of this dissertation is to aid, and ultimately to automate, algorithm selection, thus enhancing optimization's applicability in real-world design. The initial chapters examine the extent of the problem by reviewing relevant literature and by performing a short, empirical study of algorithm performance variation. We then specify hundreds of bridge design problems by methodically varying problem characteristics, and solve each of them with eight commonly used nonlinear optimization algorithms. The resulting, extensive data set is used to address the algorithm selection problem. The results are first interpreted from an engineering perspective to ensure their validity as solutions to realistic problems. Algorithm performance trends are then analyzed, showing that no single algorithm outperforms the others on every problem. Those that achieve the best solutions are often computationally expensive, and those that converge quickly often arrive at poor solutions. Some problem features, such as the numbers of design variables and constraints, the structural type, and the nature of the objective function, correlate with algorithm performance. This knowledge and the generated data set are then used to develop techniques for automatic selection of optimization algorithms, based on a range of supervised learning methods. Compared to a set of current, manual selection strategies, these techniques select the best algorithm almost twice as often, lead to better-quality solutions and reduced computational cost, and, on a randomly chosen set of mass minimization problems, reduce average material use by 9.4%. The dissertation concludes by outlining future research on algorithm selection, on integrating these techniques in design software, and on adapting structural optimization to the realities of design. Keywords: Algorithm selection, structural optimization, structural design, machine learning
by Rory Clune.
Ph.D.
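The supervised-learning step described in the abstract above can be sketched as a meta-classification problem: problem features in, best-performing algorithm out. The meta-dataset below is entirely synthetic and stands in for the bridge-design performance data collected in the dissertation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic meta-dataset: each row describes one optimization problem
# (number of design variables, number of constraints, a crude nonlinearity
# score) and the label is which solver performed best on it.
rng = np.random.default_rng(0)
n_problems = 400
features = np.column_stack([
    rng.integers(2, 500, n_problems),             # number of design variables
    rng.integers(0, 200, n_problems),             # number of constraints
    rng.random(n_problems),                       # objective nonlinearity score
])
# Made-up labelling rule standing in for observed solver performance.
best_solver = np.where(features[:, 0] > 100, "gradient-based", "heuristic")

selector = RandomForestClassifier(random_state=0)
accuracy = cross_val_score(selector, features, best_solver, cv=5).mean()
print("meta-selection accuracy:", round(float(accuracy), 3))
```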
18

Silvestre, Fialho Álvaro Roberto. "Adaptive operator selection for optimization". Paris 11, 2010. http://www.theses.fr/2010PA112292.

Full text
Abstract
Evolutionary Algorithms have demonstrated their ability to address a wide range of optimization problems, but their performance relies on tuning a few parameters, depending on the problem at hand. During this thesis work, we have focused on the development of tools to automatically set some of these parameters using machine learning techniques. More specifically, we have worked on the following sub-problem: given a number of available variation operators, it consists in selecting the best operator to be applied at each moment of the search, based on how the operators have performed up to the given time instant of the current search/optimization process. This approach is applied online, i.e., while solving the problem, using only the history of the current run to evaluate and decide between the operators; this paradigm is commonly referred to as Adaptive Operator Selection (AOS). To do AOS, we need two components: a Credit Assignment scheme, which defines how to reward the operators based on their impact on the search process, and an Operator Selection mechanism which, based on the rewards received, decides which operator should be applied next. The contributions of this thesis, in summary, lie in the proposal and analysis of schemes to solve the AOS problem based on the Multi-Armed Bandit (MAB) paradigm; we have proposed different techniques in order to enable a MAB algorithm to efficiently cope with the dynamics of evolution and with the very different characteristics of the problems to be tackled. The latest one, referred to as AUC-MAB, is able to efficiently control the application of operators while being robust with respect to its own parameters.
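A minimal sketch of the bandit-based operator selection loop that this line of work builds on, using a plain UCB rule and a made-up credit signal (the thesis's AUC-MAB and its credit assignment schemes are more elaborate):

```python
import math
import random

def select_operator(counts, rewards, c=1.0):
    """UCB-style operator selection: balance average credit (exploitation)
    against how rarely an operator has been tried (exploration)."""
    for op, n in counts.items():
        if n == 0:
            return op                             # try every operator at least once
    total = sum(counts.values())
    return max(counts, key=lambda op: rewards[op] / counts[op]
               + c * math.sqrt(2 * math.log(total) / counts[op]))

# Hypothetical mean fitness improvements produced by three variation operators.
TRUE_MEAN_IMPROVEMENT = {"one-point-xover": 0.1, "uniform-xover": 0.3, "gaussian-mutation": 0.2}

counts = {op: 0 for op in TRUE_MEAN_IMPROVEMENT}
rewards = {op: 0.0 for op in TRUE_MEAN_IMPROVEMENT}
random.seed(0)
for _ in range(500):
    op = select_operator(counts, rewards)
    credit = max(0.0, random.gauss(TRUE_MEAN_IMPROVEMENT[op], 0.1))   # credit assignment
    counts[op] += 1
    rewards[op] += credit

print("applications per operator:", counts)
```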
19

Krüger, Franz David and Mohamad Nabeel. "Hyperparameter Tuning Using Genetic Algorithms : A study of genetic algorithms impact and performance for optimization of ML algorithms". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42404.

Full text
Abstract
As machine learning (ML) is becoming more and more frequent in the business world, information gathering through data mining (DM) is on the rise, and DM practitioners generally use several rules of thumb to avoid having to spend a decent amount of time tuning the hyperparameters (parameters that control the learning process) of an ML algorithm to gain a high accuracy score. The proposal in this report is to conduct an approach that systematically optimizes the ML algorithms using genetic algorithms (GA) and to evaluate if and how the model should be constructed to find global solutions for a specific data set. By implementing a GA approach on two ML algorithms, K-nearest neighbors and Random Forest, on two numerical data sets, the Iris data set and the Wisconsin breast cancer data set, the model is evaluated by its accuracy scores as well as its computational time, which are then compared against a search method, specifically exhaustive search. The results suggest that GA works well in finding good accuracy scores in a reasonable amount of time. There are some limitations, as a parameter's significance for an ML algorithm may vary.
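A deliberately tiny sketch of the approach, evolving hyperparameters of a K-nearest neighbors classifier on the Iris data set with selection, crossover, and mutation; the population size, operators, and encoding are illustrative assumptions rather than the report's exact setup.

```python
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
random.seed(0)

def fitness(genome):
    """Cross-validated accuracy of the KNN classifier encoded by the genome."""
    n_neighbors, weights = genome
    model = KNeighborsClassifier(n_neighbors=n_neighbors, weights=weights)
    return cross_val_score(model, X, y, cv=5).mean()

def random_genome():
    return (random.randint(1, 30), random.choice(["uniform", "distance"]))

def crossover(a, b):
    return (a[0], b[1])                           # child takes one gene from each parent

def mutate(genome):
    n, w = genome
    n = max(1, n + random.randint(-3, 3))
    if random.random() < 0.3:
        w = random.choice(["uniform", "distance"])
    return (n, w)

# A deliberately tiny GA: keep the fitter individuals, refill with offspring.
population = [random_genome() for _ in range(10)]
for generation in range(10):
    parents = sorted(population, key=fitness, reverse=True)[:4]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(6)]
    population = parents + children

best = max(population, key=fitness)
print("best genome:", best, "accuracy:", round(fitness(best), 3))
```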
20

Bardenet, Rémi. "Towards adaptive learning and inference : applications to hyperparameter tuning and astroparticle physics". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112307.

Full text
Abstract
Inference and optimization algorithms usually have hyperparameters that need to be tuned in order to achieve efficiency. We consider here different approaches to efficiently automate the hyperparameter tuning step by learning online the structure of the addressed problem. The first half of this thesis is devoted to hyperparameter tuning in machine learning. After presenting and improving the generic sequential model-based optimization (SMBO) framework, we show that SMBO successfully applies to the task of tuning the numerous hyperparameters of deep belief networks. We then propose an algorithm that performs tuning across datasets, mimicking the memory that humans have of past experiments with the same algorithm on different datasets. The second half of this thesis deals with adaptive Markov chain Monte Carlo (MCMC) algorithms, sampling-based algorithms that explore complex probability distributions while self-tuning their internal parameters on the fly. We start by describing the Pierre Auger observatory, a large-scale particle physics experiment dedicated to the observation of atmospheric showers triggered by cosmic rays. The models involved in the analysis of Auger data motivated our study of adaptive MCMC. We derive the first part of the Auger generative model and introduce a procedure to perform inference on shower parameters that requires only this bottom part. Our model inherently suffers from label switching, a common difficulty in MCMC inference, which makes marginal inference useless because of redundant modes of the target distribution. After reviewing existing solutions to label switching, we propose AMOR, the first adaptive MCMC algorithm with online relabeling. We give both an empirical and a theoretical study of AMOR, unveiling interesting links between relabeling algorithms and vector quantization.
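For readers unfamiliar with SMBO, the following sketch illustrates the generic loop on a single hyperparameter, assuming a Gaussian-process surrogate and an expected-improvement acquisition; the objective function and all settings are stand-ins, not the implementation studied in the thesis.

```python
# Generic SMBO loop sketch: Gaussian-process surrogate + expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(lr):
    # Stand-in for an expensive training run returning a validation loss.
    return np.sin(5 * lr) + 0.5 * lr

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)   # hyperparameter grid
X = candidates[np.random.choice(len(candidates), 3, replace=False)]
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(alpha=1e-6)                # surrogate model
for _ in range(15):                                      # SMBO iterations
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0.0] = 0.0                           # no gain at known points
    x_next = candidates[np.argmax(ei)]                   # most promising point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("Best hyperparameter found:", X[np.argmin(y)][0], "loss:", y.min())
```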
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Carlson, Susan Elizabeth. "Component selection optimization using genetic algorithms". Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17886.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Wu, Joseph T. (Joseph Tszkei) 1977. "Optimization of influenza vaccine strain selection". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29600.

Texto completo
Resumen
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2003.
Includes bibliographical references (p. 89-90).
The World Health Organization (WHO) is responsible for making annual vaccine strain recommendations to countries around the globe. However, various studies have found that the WHO vaccine selection strategy has not been effective in some years. This motivates the search for a better strategy for choosing vaccine strains. In this work, we use recent results from theoretical immunology to formulate the vaccine selection problem as a discrete-time stochastic dynamic program with a high-dimensional continuous state space. We discuss the techniques that were developed for solving this difficult dynamic program, and present an effective and robust heuristic policy. We compare the performance of the heuristic policy, the follow policy, and the no-vaccine policy and show that the heuristic policy is the best among the three. After taking the cost of implementation into account, however, we conclude that the WHO policy is a cost-effective influenza vaccine strain selection policy.
by Joseph T. Wu.
Ph.D.
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Puhle, Michael. "Bond portfolio optimization". Berlin Heidelberg Springer, 2007. http://d-nb.info/985928115/04.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Hugo, André. "Environmentally conscious process selection, design and optimization". Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417505.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Huang, Yu'e. "An optimization of feature selection for classification". Thesis, University of Ulster, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428284.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Persson, Mikael. "Cableharness selection for gearboxes using mathematical optimization". Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209929.

Texto completo
Resumen
The Scania modular product system enables the production of thousands of different versions of gearboxes. If each version uses a unique cable harness, this leads to large costs for storage and production. It is therefore desirable to find a smaller set of cable harnesses that fits the needs of all gearboxes. In this report we present two mathematical programming models to accomplish this while minimizing the cost of production and storage. We propose a procedure for partitioning the data into smaller subsets without losing model accuracy. We also show how the solution to the first model may be used as a warm start for the second model. The report focuses on cables for gearbox control systems used in heavy trucks manufactured by Scania. Results from testing the models against data provided by Scania are presented. These results suggest that a substantial reduction in production cost can be achieved. Findings from this project can be used in similar situations, for example engine control system cables and general vehicle electric wiring.
Scanias modulsystem gör att tusentals olika växellådsvarianter är möjliga att tillverka. Om varje växellådsvariant skall ha ett eget kablage leder detta till stora lagerhållnings- och produktionskostnader. Det är därför fördelaktigt om man kan hitta en mindre uppsättning kablage som uppfyller kraven för alla växellådor. Två modeller inom matematisk optimering presenteras för att uppnå målet samtidigt som kostnader för lagerhållning och produktion minimeras. Vidare föreslås en metod för att dela upp problemet i delproblem utan att noggrannheten minskar. Vi visar även hur lösningen från den första modellen kan användas som varmstart till den andra modellen. Fokus är på kablage för växellådor till Scanias lastbilar. Resultat från test av modellerna med data från Scanias produktion presenteras. Resultaten visar på att en betydande besparing är möjlig. Rapportens slutsatser kan även användas i liknande situationer, till exempel motorstyrsystem och andra elsystem i fordon.
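The abstract does not reproduce the models themselves; a toy binary integer programme of the same flavour, in which every gearbox variant must receive exactly one compatible harness and each stocked harness type incurs a storage cost, could be sketched with PuLP as follows (all data below are invented, not Scania's).

```python
# Toy min-cost harness selection as a binary integer programme (PuLP).
import pulp

gearboxes = ["G1", "G2", "G3", "G4"]
harnesses = {"H1": 100, "H2": 140, "H3": 90}       # production cost per harness
# Which harnesses are compatible with which gearbox variants (illustrative).
fits = {"H1": {"G1", "G2"}, "H2": {"G2", "G3", "G4"}, "H3": {"G1", "G4"}}
storage_cost = 25                                   # cost of stocking a harness type

prob = pulp.LpProblem("harness_selection", pulp.LpMinimize)
use = pulp.LpVariable.dicts("use", harnesses, cat="Binary")     # type is stocked
assign = pulp.LpVariable.dicts(
    "assign", [(h, g) for h in harnesses for g in gearboxes if g in fits[h]],
    cat="Binary")

# Objective: storage cost of stocked types + production cost of assignments.
prob += (pulp.lpSum(storage_cost * use[h] for h in harnesses)
         + pulp.lpSum(harnesses[h] * assign[h, g] for (h, g) in assign))

# Every gearbox variant gets exactly one compatible harness.
for g in gearboxes:
    prob += pulp.lpSum(assign[h, g] for h in harnesses if (h, g) in assign) == 1
# A harness can only be assigned if its type is stocked.
for (h, g) in assign:
    prob += assign[h, g] <= use[h]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([h for h in harnesses if use[h].value() == 1])
```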
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Klasila, A. (Aleksi). "Mbed OS regression test selection and optimization". Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201908312830.

Texto completo
Resumen
Abstract. Testing is a fundamental building block in the identification of bugs, errors and defects in both hardware and software. Effective testing of large projects requires automated testing, test selection and test optimization. Using CI (Continuous Integration) tools together with test selection and optimization techniques reduces development time and increases productivity. The prioritization, selection and minimization of tests are well-known problems in software testing. Arm Mbed OS is a free, open-source embedded operating system designed specifically for the “things” in the IoT (Internet of Things). This thesis researches regression test selection (RTS) and regression test optimization (RTO) techniques. The main focus of the thesis is to develop a set of effective automated safe RTS (mbedRTS) and RTO (mbedRTO) techniques for Mbed OS pull request (PR) testing. This thesis refers to the set of developed techniques as Mbed OS regression test techniques (MbedRTT), also known as Mbed OS Smart Tester. The empirical analysis of the researched and developed MbedRTT techniques shows promising results. Several of the developed MbedRTT techniques have already been adopted in the Mbed OS Jenkins CI.
Mbed OS -regressiotestien valinta ja optimointi. Tiivistelmä. Testaus on olennainen tekijä vikojen ja virheiden tunnistamisessa sekä ohjelmistossa että laitteistossa. Isojen projektien tehokas testaaminen vaatii automaattista testausta, testien valintaa ja testien optimointia. Jatkuvan integraation (engl. continuous integration) työkalut, testien valintatekniikat ja testien optimointitekniikat lyhentävät kehitykseen kuluvaa aikaa ja kasvattavat tuottavuutta. Testien priorisointi, valinta ja minimointi ovat tunnettuja ongelmia ohjelmistotestauksessa. Arm Mbed OS on ilmainen avoimen lähdekoodin sulautettu käyttöjärjestelmä, joka on tarkoitettu erityisesti “asioille” asioiden Internetissä (engl. Internet of Things). Tässä työssä tutkitaan regressiotestauksen valinta- ja optimointimenetelmiä. Tämän työn päätehtävä on kehittää tehokkaita ja turvallisia valinta- (mbedRTS) ja optimointimenetelmiä (mbedRTO) Mbed OS pull request:ien regressiotestaukseen. Mbed OS -regressiotestausmenetelmillä (MbedRTT) viitataan tässä työssä kehitettyihin regressiotestausmenetelmiin, jotka tunnetaan myös nimellä Mbed OS älykäs testaaja (engl. Mbed OS Smart Tester). Tutkittujen ja kehitettyjen MbedRTT-tekniikoiden empiirisen analyysin tulos näyttää lupaavalta. Mbed OS Jenkins CI:ssä on jo otettu käyttöön useita kehitettyjä MbedRTT-tekniikoita.
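As a generic illustration of the safe-selection idea (not the MbedRTT implementation), the sketch below keeps only the tests whose known file dependencies overlap the files touched by a pull request and then orders them by recent failures; the dependency map, test names and history are assumptions.

```python
# Generic safe RTS sketch: run a test only if it depends on a changed file.
# The dependency map would normally come from build or coverage metadata.
changed_files = {"drivers/serial.c", "rtos/thread.c"}        # files touched by a PR

test_dependencies = {
    "test_serial_basic":  {"drivers/serial.c", "hal/uart.c"},
    "test_thread_create": {"rtos/thread.c"},
    "test_filesystem":    {"storage/fat.c"},
}

def select_tests(changed, deps):
    """Safe selection: keep every test that could be affected by the change."""
    return sorted(t for t, files in deps.items() if files & changed)

def prioritize(tests, failure_history):
    """Simple optimization: run recently failing tests first."""
    return sorted(tests, key=lambda t: failure_history.get(t, 0), reverse=True)

selected = select_tests(changed_files, test_dependencies)
ordered = prioritize(selected, {"test_thread_create": 3, "test_serial_basic": 1})
print(ordered)   # ['test_thread_create', 'test_serial_basic']
```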
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Shende, Sourabh. "Bayesian Topology Optimization for Efficient Design of Origami Folding Structures". University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592170569337763.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Müller, Stephan. "Constrained portfolio optimization /". [S.l.] : [s.n.], 2005. http://aleph.unisg.ch/hsgscan/hm00133325.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

SCHLITTLER, JOAO GABRIEL FELIZARDO S. "PORTFOLIO SELECTION VIA DATA-DRIVEN DISTRIBUTIONALLY ROBUST OPTIMIZATION". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36002@1.

Texto completo
Resumen
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
PROGRAMA DE EXCELENCIA ACADEMICA
Otimização de portfólio tradicionalmente assume ter conhecimento da distribuição de probabilidade dos retornos ou pelo menos algum dos seus momentos. No entanto, é sabido que a distribuição de probabilidade dos retornos muda com frequência ao longo do tempo, tornando difícil a utilização prática de modelos puramente estatísticos, que confiam indubitavelmente em uma distribuição estimada. Em contrapartida, otimização robusta considera um completo desconhecimento da distribuição dos retornos, e por isto, busca uma solução ótima para todas as realizações possíveis dentro de um conjunto de incerteza dos retornos. Mais recentemente na literatura, técnicas de distributionally robust optimization permitem lidar com a ambiguidade com relação à distribuição dos retornos. No entanto essas técnicas dependem da construção do conjunto de ambiguidade, ou seja, das distribuições de probabilidade a serem consideradas. Neste trabalho, propomos a construção de conjuntos de ambiguidade poliédricos baseados somente em uma amostra de retornos. Nestes conjuntos, as relações entre variáveis são determinadas pelos dados de maneira não paramétrica, sendo assim livres de possíveis erros de especificação de um modelo estocástico. Propomos um algoritmo para construção do conjunto e, dado o conjunto, uma reformulação computacionalmente tratável do problema de otimização de portfólio. Experimentos numéricos mostram uma melhor performance do modelo em comparação com benchmarks selecionados.
Portfolio optimization traditionally assumes knowledge of the probability distribution of returns, or at least some of its moments. However, it is well known that the probability distribution of returns changes over time, making it difficult to use purely statistical models, which rely entirely on an estimated distribution. Robust optimization, on the other hand, assumes a total lack of knowledge about the distribution of returns and therefore seeks a solution that is optimal for all possible realizations within an uncertainty set of the returns. More recently, the literature has shown that distributionally robust optimization techniques allow us to deal with ambiguity regarding the distribution of returns. However, these methods depend on the construction of the ambiguity set, that is, the set of probability distributions to be considered. This work proposes the construction of polyhedral ambiguity sets based only on a sample of returns. In those sets, the relations between variables are determined by the data in a non-parametric way, and are thus free of possible specification errors of a stochastic model. We propose an algorithm for constructing the ambiguity set, and then a computationally tractable reformulation of the portfolio optimization problem. Numerical experiments show a better performance of the model compared to selected benchmarks.
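The abstract leaves the formulation implicit; one common way to write such a problem, assuming the polyhedral ambiguity set is described by a finite list of vertex probability vectors over return scenarios, is the max-min linear programme sketched below with invented data.

```python
# Toy distributionally robust portfolio: maximize the worst-case expected
# return over the vertices of a polyhedral ambiguity set of scenario
# probabilities. All data are illustrative.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.02, 0.05, -0.01],     # scenario returns (rows) per asset (cols)
              [0.01, -0.03, 0.04],
              [0.00, 0.02, 0.02]])
P = np.array([[0.5, 0.3, 0.2],         # vertices of the ambiguity set:
              [0.3, 0.5, 0.2],         # candidate probability vectors
              [0.2, 0.3, 0.5]])        # over the scenarios

n_assets = R.shape[1]
# Decision variables: portfolio weights w (n_assets) and worst-case return t.
c = np.zeros(n_assets + 1)
c[-1] = -1.0                           # maximize t  ==  minimize -t
# For each vertex p: t - (p @ R) @ w <= 0, i.e. t is below every expected return.
A_ub = np.hstack([-(P @ R), np.ones((len(P), 1))])
b_ub = np.zeros(len(P))
A_eq = np.hstack([np.ones((1, n_assets)), np.zeros((1, 1))])   # weights sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * n_assets + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, t = res.x[:n_assets], res.x[-1]
print("weights:", np.round(w, 3), "worst-case expected return:", round(t, 4))
```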
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Yu, Baosheng. "Robust Diversity-Driven Subset Selection in Combinatorial Optimization". Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/19834.

Texto completo
Resumen
Subset selection is fundamental in combinatorial optimization, with applications in biology, operations research, and computer science, especially machine learning and computer vision. However, subset selection has turned out to be NP-hard, and polynomial-time solutions are usually not available. Therefore, it is of great importance to develop approximate algorithms with theoretical guarantees for subset selection in constrained settings. To select a diverse subset with an asymmetric objective function, we develop an asymmetric subset selection method, which is computationally efficient and has a solid lower bound on the approximation ratio. Experimental results on cascade object detection demonstrate the effectiveness of the proposed method. To select a diverse subset with bandit feedback, we develop a new bandit framework, which we refer to as per-round knapsack constrained linear submodular bandits. Within the proposed bandit framework, we propose two algorithms with solid regret bounds. Experimental results on personalized recommendation demonstrate the effectiveness of the proposed method. To correct bias in subset selection, we develop a new regularization criterion to minimize the distribution shift between the selected subset and the set of all elements. Experimental results on image retrieval demonstrate the effectiveness of the proposed method. To explore diversity in anchor templates, we devise a pyramid of diversity-driven anchor templates to generate high-quality proposals. Experimental results on cascade face detection demonstrate the effectiveness of the proposed method. In this thesis, we focus on developing robust diversity-driven subset selection methods in constrained settings, as well as their applications in machine learning and computer vision.
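A standard building block behind diversity-driven selection of this kind is greedy maximization of a monotone submodular objective under a cardinality constraint, which enjoys a (1 - 1/e) approximation guarantee; the sketch below is a generic illustration with random similarity data, not the algorithms proposed in the thesis.

```python
# Greedy selection of a diverse subset: maximize the facility-location
# objective sum_i max_{j in S} sim[i][j] subject to |S| <= k.
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 5
sim = rng.random((n, n))                  # illustrative pairwise similarities
sim = (sim + sim.T) / 2                   # make the similarity matrix symmetric

def coverage(S):
    """Facility-location value: how well the subset S 'covers' every element."""
    if not S:
        return 0.0
    return sim[:, list(S)].max(axis=1).sum()

selected = set()
for _ in range(k):
    # Classic greedy step: pick the element with the largest marginal gain.
    gains = {j: coverage(selected | {j}) - coverage(selected)
             for j in range(n) if j not in selected}
    selected.add(max(gains, key=gains.get))

print("selected subset:", sorted(selected), "value:", round(coverage(selected), 3))
```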
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Toy, Ayhan Özgür. "Route, aircraft prioritization and selection for airlift mobility optimization /". Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA326731.

Texto completo
Resumen
Thesis (M.S. in Operations Research) Naval Postgraduate School, September 1996.
"September 1996." Thesis advisors, Richard E. Rosenthal, Steven F. Baker. Includes bibliographical references (p. 67). Also available online.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Toy, Ayhan Özgür. "Route, aircraft prioritization and selection for airlift mobility optimization". Thesis, Monterey, California. Naval Postgraduate School, 1996. http://hdl.handle.net/10945/8932.

Texto completo
Resumen
Approved for public release; distribution is unlimited.
The Throughput II mobility optimization model was developed at the Naval Postgraduate School for the Air Force Studies and Analysis Agency (AFSAA). The purpose of Throughput II is to help answer questions about the ability of the USAF to conduct airlift of soldiers and equipment in support of major military operations. Repeated runs of this model have helped AFSAA generate insights and recommendations concerning the selection of aircraft assets. Although Throughput II has earned the confidence of AFSAA, repeated applications are hampered by the fact that it can take over three hours to run on a fast workstation. This is due to the model's size; it is a linear program whose dimensions can exceed 100,000 variables, 100,000 constraints, and 1 million nonzero coefficients, even after extensive model reduction techniques are used. The purpose of this thesis is to develop heuristics that can be performed prior to running Throughput II in order to reduce the model's size. Specifically, this thesis addresses the fact that the Throughput II formulation has many variables and constraints that depend on the number of available routes for each aircraft. The goal is to carefully eliminate routes so as to make the problem smaller without sacrificing much solution quality.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Zhernova, P. "Optimization methods for the selection of protective printing complex". Thesis, НТМТ, 2015. http://openarchive.nure.ua/handle/document/8357.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Hodges, Clayton Christopher. "Optimization of BMP Selection for Distributed Stormwater Treatment Networks". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81698.

Texto completo
Resumen
Current site scale stormwater management designs typically include multiple distributed stormwater best management practices (BMPs), necessary to meet regulatory objectives for nutrient removal and groundwater recharge. Selection of the appropriate BMPs for a particular site requires consideration of contributing drainage area characteristics, such as soil type, area, and land cover. Other physical constraints such as karst topography, areas of highly concentrated pollutant runoff, etc. as well as economics, such as installation and operation and maintenance cost must be considered. Due to these multiple competing selection criteria and regulatory requirements, selection of optimal configurations of BMPs by manual iteration using conventional design tools is not tenable, and the resulting sub-optimal solutions are often biased. This dissertation addresses the need for an objective BMP selection optimization tool through definition of an objective function, selection of an optimization algorithm based on defined selection criteria, development of cost functions related to installation cost and operation and maintenance cost, and ultimately creation and evaluation of a new software tool that enables multi-objective user weighted selection of optimal BMP configurations. A software tool is developed using the nutrient and pollutant removal logic found in the Virginia Runoff Reduction Method (VRRM) spreadsheets. The resulting tool is tested by a group of stormwater professionals from the Commonwealth of Virginia for two case studies. Responses from case study participants indicate that use of the tool has a significant impact on the current engineering design process for selection of stormwater BMPs. They further indicate that resulting selection of stormwater BMPs through use of the optimization tool is more objective than conventional methods of design, and allows designers to spend more time evaluating solutions, rather than attempting to meet regulatory objectives.
Ph. D.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Olsson, Sam. "Optimization model for selection of switches at railway stations". Thesis, Linköpings universitet, Tillämpad matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177613.

Texto completo
Resumen
The goal of this project is to implement and verify an optimization model for finding a min-cost selection of switches and train paths at railway stations. The selected train paths must satisfy traffic requirements that commonly apply to regular railway traffic. The requirements include different combinations of simultaneous and overtaking train movements. The model does not rely on timetables but instead utilizes different path sets that are produced via algorithms based on a network representation of the station layout. The model has been verified on a small test station and also on the real station layout at Katrineholm. These tests show that the model can solve the problem for mid-size stations with through traffic. In addition, we have performed a literature study regarding maintenance problems for switches and crossings. We have also looked at articles regarding the scheduling and routing of trains through railway stations. Finally, we present some possible ways to further improve the model for more realistic experiments.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Ujihara, Rintaro. "Multi-objective optimization for model selection in music classification". Thesis, KTH, Optimeringslära och systemteori, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298370.

Texto completo
Resumen
With the breakthrough of machine learning techniques, research concerning music emotion classification has made notable progress by combining various audio features with state-of-the-art machine learning models. Still, how to preprocess music samples and which classification algorithm to choose depend on the data set and the objective of each project. The collaborating company of this thesis, Ichigoichie AB, is currently developing a system to categorize music data into positive/negative classes. To enhance the accuracy of the existing system, this project aims to identify the best model through experiments with six audio features (Mel spectrogram, MFCC, HPSS, Onset, CENS, Tonnetz) and several machine learning models, including deep neural network models, for the classification task. For each model, hyperparameter tuning is performed and the model evaluation is carried out according to Pareto optimality with regard to accuracy and execution time. The results show that the most promising model accomplished 95% correct classification with an execution time of less than 15 seconds.
I och med genombrottet av maskininlärningstekniker har forskning kring känsloklassificering i musik sett betydande framsteg genom att kombinera olika musikanalysverktyg med nya maskininlärningsmodeller. Trots detta är hur man förbehandlar ljuddatat och valet av vilken maskinklassificeringsalgoritm som ska tillämpas beroende på vilken typ av data man arbetar med samt målet med projektet. Denna uppsats samarbetspartner, Ichigoichie AB, utvecklar för närvarande ett system för att kategorisera musikdata enligt positiva och negativa känslor. För att höja systemets noggrannhet är målet med denna uppsats att experimentellt hitta den bästa modellen baserat på sex musik-egenskaper (Mel-spektrogram, MFCC, HPSS, Onset, CENS samt Tonnetz) och ett antal olika maskininlärningsmodeller, inklusive Deep Learning-modeller. Varje modell hyperparameteroptimeras och utvärderas enligt paretooptimalitet med hänsyn till noggrannhet och beräkningstid. Resultaten visar att den mest lovande modellen uppnådde 95% korrekt klassificering med en beräkningstid på mindre än 15 sekunder.
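To make the evaluation criterion concrete, the snippet below shows one simple way of extracting the Pareto-optimal candidates with respect to accuracy (higher is better) and execution time (lower is better); the candidate models and their scores are invented, not the thesis results.

```python
# Extract Pareto-optimal (accuracy, execution time) candidates:
# a model is kept if no other model is at least as accurate AND at least as fast.
candidates = {                       # illustrative results: (accuracy, seconds)
    "mlp_mel":     (0.95, 14.2),
    "cnn_mfcc":    (0.93, 8.1),
    "svm_tonnetz": (0.88, 2.3),
    "rnn_hpss":    (0.94, 30.5),
}

def pareto_front(results):
    front = []
    for name, (acc, t) in results.items():
        dominated = any(a >= acc and s <= t and (a, s) != (acc, t)
                        for a, s in results.values())
        if not dominated:
            front.append(name)
    return front

print(pareto_front(candidates))      # e.g. ['mlp_mel', 'cnn_mfcc', 'svm_tonnetz']
```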
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Ayoub, Issa. "Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39337.

Texto completo
Resumen
Affective computing has gained significant attention from researchers in the last decade due to the wide variety of applications that can benefit from this technology. Often, researchers describe affect using emotional dimensions such as arousal and valence. Valence refers to the spectrum of negative to positive emotions while arousal determines the level of excitement. Describing emotions through continuous dimensions (e.g. valence and arousal) allows us to encode subtle and complex affects, as opposed to discrete emotions such as the six basic emotions: happiness, anger, fear, disgust, sadness and neutral. Recognizing spontaneous and subtle emotions remains a challenging problem for computers. In our work, we employ two modalities of information: video and audio. Hence, we extract visual and audio features using deep neural network models. Given that emotions are time-dependent, we apply the Temporal Convolutional Neural Network (TCN) to model the variations in emotions. Additionally, we investigate an alternative model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Given our inability to fit the latter deep model into the main memory, we divide the RNN into smaller segments and propose a scheme to back-propagate gradients across all segments. We configure the hyperparameters of all models using Gaussian processes to obtain a fair comparison between the proposed models. Our results show that TCN outperforms RNN for the recognition of the arousal and valence emotional dimensions. Therefore, we propose the adoption of TCN for emotion detection problems as a baseline method for future work. Our experimental results show that TCN outperforms all RNN-based models, yielding a concordance correlation coefficient of 0.7895 (vs. 0.7544) on valence and 0.8207 (vs. 0.7357) on arousal on the validation set of the SEWA dataset for emotion prediction.
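For reference, the concordance correlation coefficient (CCC) reported above is commonly computed as 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2); a minimal sketch, assuming NumPy arrays of predictions and gold labels, is shown below.

```python
# Concordance correlation coefficient (CCC) between predictions and labels.
import numpy as np

def ccc(pred, gold):
    pred, gold = np.asarray(pred, float), np.asarray(gold, float)
    mu_p, mu_g = pred.mean(), gold.mean()
    var_p, var_g = pred.var(), gold.var()
    cov = ((pred - mu_p) * (gold - mu_g)).mean()
    return 2 * cov / (var_p + var_g + (mu_p - mu_g) ** 2)

# Tiny usage example with made-up valence predictions and labels.
print(round(ccc([0.1, 0.4, 0.35, 0.8], [0.0, 0.5, 0.3, 0.9]), 4))
```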
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Baumgarten, Peter B. "Optimization of United States Marine Corps Officer Career Path Selection". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA381837.

Texto completo
Resumen
Thesis (M.S. in Operations Research) Naval Postgraduate School, September 2000.
Thesis advisor, Siriphong Lawphongpanich. "September 2000." Includes bibliographical references (p. 67). Also available online.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Lourens, Mark. "Integer optimization for the selection of a twenty20 cricket team". Thesis, Nelson Mandela Metropolitan University, 2008. http://hdl.handle.net/10948/1000.

Texto completo
Resumen
During the last few years, much effort has been devoted to measuring the ability of sports teams, as well as that of individual players. Much of this research has focused on the game of cricket, and the comparison, or ranking, of players according to their abilities. This study continues preceding research using an optimization approach, namely a binary integer programme, to select an SA domestic Pro20 cricket team.
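The thesis formulation is not reproduced in the abstract; a toy binary integer programme of the same flavour, selecting eleven players to maximize a total rating subject to simple role quotas, might look like the following sketch with invented player data.

```python
# Toy binary IP: pick 11 players maximizing total rating with role quotas.
import pulp

players = {f"bat{i}": (70 + i, "batsman") for i in range(7)}   # name: (rating, role)
players.update({f"bowl{i}": (65 + i, "bowler") for i in range(6)})
players.update({f"wk{i}": (60 + i, "keeper") for i in range(2)})

prob = pulp.LpProblem("pro20_team", pulp.LpMaximize)
pick = pulp.LpVariable.dicts("pick", players, cat="Binary")

prob += pulp.lpSum(players[p][0] * pick[p] for p in players)           # total rating
prob += pulp.lpSum(pick[p] for p in players) == 11                     # squad size
prob += pulp.lpSum(pick[p] for p in players if players[p][1] == "bowler") >= 4
prob += pulp.lpSum(pick[p] for p in players if players[p][1] == "keeper") == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(sorted(p for p in players if pick[p].value() == 1))
```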
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Fraleigh, Lisa Marie. "Optimal sensor selection and parameter estimation for real-time optimization". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ40050.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Verdugo, Silva Víctor Ignacio. "Convex and online optimization: Applications to scheduling and selection problems". Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168128.

Texto completo
Resumen
Doctor en Sistemas de Ingeniería en cotutela con Ecole Normale Supérieure
Convex optimization has been a powerful tool for designing algorithms. In practice it is widely used in areas such as operations research and machine learning, and for many fundamental combinatorial problems it yields the best known approximation algorithms, providing unconditional guarantees on solution quality. In the first part of this work we study the effect of constructing convex relaxations of a packing problem by applying lift & project methods. We exhibit a weakness of these relaxations when they are obtained from the natural formulations of the problem, by showing the impossibility of reducing the gap even when the relaxations are very large. We provide a way of combining symmetry-breaking procedures and lift & project methods to obtain arbitrarily good gaps. In the second part of this thesis we study online selection problems, in which elements arrive over time and we have to select some of them, irrevocably, in order to meet combinatorial constraints while trying to maximize the quality of the selection. Usually this quality is measured in terms of weight, but we consider a stronger variant in which weights are not necessarily known because of limited information. Instead, as long as we can rank the elements, we provide a general framework to obtain approximation algorithms with good competitive ratios in many contexts.
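A classical instance of rank-based online selection is the secretary rule: observe roughly the first n/e elements without selecting, then accept the first element that beats everything seen so far, which picks the overall best with probability about 1/e. The sketch below is this textbook rule, offered only as an illustration of the setting, not the algorithms developed in the thesis.

```python
# Classical secretary rule: observe the first n/e items, then accept the
# first item better than all items observed so far (rank-based, no weights).
import math
import random

def secretary(ranks):
    """ranks[i] is the relative quality of item i; higher is better."""
    n = len(ranks)
    cutoff = int(n / math.e)
    best_seen = max(ranks[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if ranks[i] > best_seen:
            return i                      # irrevocably select item i
    return n - 1                          # otherwise forced to take the last item

# Monte Carlo check of the ~1/e success probability over random arrival orders.
n, trials, wins = 50, 20000, 0
for _ in range(trials):
    order = random.sample(range(n), n)    # random arrival order of qualities
    if order[secretary(order)] == n - 1:  # did we pick the best element?
        wins += 1
print("empirical success probability:", wins / trials)   # close to 1/e = 0.368
```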
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Meoni, Francesco <1987&gt. "Modeling, Component Selection and Optimization of Servo-controlled Automatic Machinery". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amsdottorato.unibo.it/8140/1/Meoni_Francesco_tesi.pdf.

Texto completo
Resumen
A servo-controlled automatic machine can perform tasks that involve synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its electric and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model for a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterizations are needed to implement the electric model, since the parameters are inferred from the data available in commercial catalogs. With the global model at our disposal, the second objective of this work is to perform an optimization analysis, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found. An optimization process is carried out and repeated for each candidate motor. Then, we present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and the solution gives the optimal choice for the motor-reducer system. The presented electromechanical model, along with the implementation of optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines. The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
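As a simplified illustration of the motor-reducer selection step (not the thesis model), if a load of inertia J_L must be accelerated at alpha_L against a torque T_L through a reducer of ratio n, the required motor torque is roughly T_m(n) = J_m*n*alpha_L + (J_L*alpha_L + T_L)/n, and a coarse sweep over n recovers the ratio that minimizes it; all numbers below are invented.

```python
# Coarse search for the transmission ratio n minimizing peak motor torque.
# T_m(n) = J_m * n * alpha_L + (J_L * alpha_L + T_L) / n   (simplified model)
import math

J_m, J_L = 1.2e-4, 8.0e-2     # motor / load inertia [kg m^2]  (illustrative)
alpha_L, T_L = 50.0, 4.0      # load acceleration [rad/s^2], load torque [N m]

def motor_torque(n):
    return J_m * n * alpha_L + (J_L * alpha_L + T_L) / n

ratios = [i / 10 for i in range(10, 501)]                       # candidate ratios 1.0 .. 50.0
best = min(ratios, key=motor_torque)
analytic = math.sqrt((J_L * alpha_L + T_L) / (J_m * alpha_L))   # closed-form optimum
print(f"swept optimum n = {best:.1f}, analytic n = {analytic:.1f}, "
      f"torque = {motor_torque(best):.3f} N m")
```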
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Sforza, Eleonora. "Oil from microalgae: species selection, photobioreactor design and process optimization". Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3421970.

Texto completo
Resumen
This PhD research project focused on microalgal oil production for biofuel. The work mainly comprises an experimental part on microalgal species selection and optimization of growth conditions, and a part on process simulation and photobioreactor design. After an overview of the literature on algal biology, cultivation and photobioreactor design, the experimental activities started with the set-up of materials, methods and experimental apparatus; several microalgal species were screened in order to select the most promising ones from an industrial point of view. In addition, an experimental apparatus was set up for optimizing growth conditions under non-limiting CO2 supply. The effects of light and of other relevant operating variables on growth were addressed and discussed, and some suggestions to better understand the process behavior were given with respect to lipid content maximization, carbon dioxide and nitrogen supply, and illumination conditions. The possibility of exploiting mixotrophy to support algal growth overnight or in dark zones of a photobioreactor was also investigated. Finally, a continuous photobioreactor was designed and built, in order to test the feasibility of algal biomass production in an industrial continuous process operated at steady state. Together with the experiments, a process simulation study was also carried out using the software Aspen Plus™. This part of the thesis focused on the calculation of microalgal biomass production and photobioreactor performance under different operating conditions.
La produzione di biocarburanti da biomassa sta suscitando un vivo interesse a livello internazionale e, in questo contesto, l’olio derivato da microalghe sembra essere l’unica tecnologia potenzialmente in grado di supportare la richiesta energetica di combustibili liquidi per autotrazione e sostituire, nel lungo termine, i carburanti da fonti fossili (Chisti, 2008). La produzione di biomassa algale per via fotosintetica, inoltre, ha dei notevoli vantaggi dal punto di vista ambientale, contribuendo alla diminuzione dell’immissione dei gas serra in atmosfera e all’eliminazione di sali di ammonio e fosforo dalle acque di scarico. Per microalghe s’intendono tutti gli organismi unicellulari, o di piccole dimensioni, che possiedono la clorofilla A (grazie alla quale operano fotosintesi) e che presentano un tallo non differenziato in radice-fusto-foglia. In questa classificazione vengono inglobati generalmente anche i cianobatteri, sebbene siano organismi procarioti (Mata et al., 2010). L’interesse verso questi organismi nasce, storicamente, dalla potenzialità di utilizzo per la produzione di biomassa a scopo alimentare, per la nutrizione animale e per la produzione di composti chimici, sfruttando l’energia solare. La fotosintesi, infatti, è un processo di conversione di composti inorganici e energia solare in materia organica. Gli organismi in grado di effettuare tali reazioni vengono definiti fotoautotrofi. La fotosintesi ossigenica, in particolare, può essere definita come una serie di reazioni di ossido-riduzione, nelle quali l’anidride carbonica e l’acqua sono trasformate in carboidrati e ossigeno. In presenza di macronutrienti (principalmente nitrati e fosfati, e una fonte di carbonio) e micronutrienti (principalmente metalli, utilizzati come cofattori) le microalghe sono in grado di riprodursi, generalmente mediante divisione asessuata, con una velocità notevolmente maggiore rispetto alle piante superiori terrestri. Ciò rende le microalghe particolarmente adatte alla coltivazione su larga scala per l’assorbimento della CO2 atmosferica, per la produzione di biocombustibili, per la depurazione di reflui civili e agro-zootecnici e per la produzione di biomolecole. Microalghe di varie specie vengono già prodotte a livello commerciale in molti Paesi e utilizzate, in genere, per ottenere integratori alimentari, mangimi, pigmenti, acidi grassi, ω3, biomasse per acquacoltura e per il trattamento di reflui. E’ stato osservato che alcune specie di microalghe sono in grado di accumulare grandi quantità di lipidi, che possono essere estratti ed utilizzati come oli vegetali, al pari degli oli estratti dai semi delle piante superiori. I vantaggi dell’utilizzo delle microalghe sono legati al fatto che questi organismi presentano elevate velocità di crescita, e possono essere coltivati massivamente in fotobioreattori, senza bisogno di terreni coltivabili ed eliminando, di conseguenza, il problema della competizione con risorse agricole destinate ad uso alimentare. La coltivazione di microalghe legata alla produzione di biodiesel, quindi, è una tecnologia che potrebbe avere un elevato potenziale di sviluppo, consentendo una netta riduzione delle emissioni di CO2 rispetto all’uso di combustibili fossili, senza sottrarre risorse alle coltivazioni terrestri per fini alimentari. 
Inoltre, dopo l’estrazione di biocombustibile, la biomassa microalgale residua potrebbe ancora essere impiegata per l’estrazione di biomolecole di interesse commerciale, per la produzione di biogas o per scopi energetici (Chisti, 2008). La scelta dei sistemi di coltura di questo tipo su larga scala è tuttora oggetto di studio (Mata et al., 2010; Grobbelaar, 2010; Ho et al., 2011). Tali sistemi si distinguono in due categorie principali: sistemi aperti (open ponds) e sistemi chiusi (fotobioreattori). Costituiti da circuiti generalmente tubolari o a pannello, i fotobioreattori presentano un grado di complessità decisamente maggiore rispetto ai sistemi aperti ma consentono uno stretto controllo dei parametri chimico-fisici e biologici della coltura e una migliore resa produttiva. Le maggiori criticità sono da imputare al controllo della temperatura e al rischio di accumulo dell'ossigeno prodotto per fotosintesi, che richiede sistemi di eliminazione di questo gas. Questi problemi limitano le dimensioni dei fotobioreattori, che attualmente sono costituiti principalmente da serpentine di lunghezza non elevata e volumi limitati. Di conseguenza il costo della produzione di microalghe è piuttosto elevato, per cui le applicazioni rimangono limitate alle sole colture massive di elevata purezza, necessarie per l'estrazione di biomolecole di alto valore commerciale o per inoculi di colture in sistemi aperti. La ricerca internazionale sta puntando sempre più l’attenzione su questa tecnologia (Pittman et al., 2011; Perez-Garcia et al., 2011; Ho et al., 2011), ma il mondo delle microalghe è tuttavia molto vasto; è, quindi, necessario scegliere accuratamente la specie più adatta dal punto di vista della velocità di riproduzione e del contenuto di lipidi su massa secca. Inoltre, è necessario tenere conto delle condizioni di coltura al fine di ottimizzare la crescita e l’accumulo di lipidi (Amaro et al., 2011). Questa tesi di dottorato si è occupata della scelta delle specie microalgali più promettenti, oggetto di interesse internazionale, e dell’ottimizzazione delle condizioni di coltura, volti alla progettazione di un fotobioreattore. A tal scopo, la parte preliminare di questo progetto di ricerca ha visto la messa a punto delle principali tecniche di coltura e analisi della crescita microalgale. Una volta messe a punto le metodiche, si è proceduto con uno screening delle specie più interessanti nell’ottica della produzione di olio vegetale, scelte dopo consultazione della bibliografia disponibile. Nannochloropsis salina, una specie marina, sembra essere la specie più adatta per la produzione di olio, mostrando la miglior combinazione di velocità cinetiche di crescita (circa 0,5 giorni-1) e di contenuto di lipidi. Infatti, il contenuto lipidico dell’alga cambia durante la fase di crescita, mostrando un significativo accumulo di lipidi in fase stazionaria corrispondente al circa 69% del peso secco. Ciò è probabilmente determinato dall’accumulo di lipidi come materiale di riserva quando le condizioni di crescita risultano limitanti. 
Considerando l’elevato contenuto di lipidi, questa specie mostra delle reali potenzialità dal punto di vista applicativo, e quindi è stata sottoposta ad ulteriori esperimenti di ottimizzazione delle condizioni di coltura, variando le concentrazioni di nutrienti nel mezzo di crescita e allestendo apparecchiature sperimentali in grado di fornire concentrazioni di anidride carbonica maggiori rispetto a quelle atmosferiche, risultate limitanti per una produzione significativa di biomassa. Gli esperimenti, quindi, sono stati condotti insufflando nella coltura aria arricchita al 5% di CO2 in modo tale che non fosse limitante per la crescita. Altri esperimenti sono stati finalizzati a comprendere se la concentrazione di azoto nel terreno di coltivazione fosse limitante per N. salina. Sono state quindi testate diverse concentrazioni di nitrato di sodio mantenendo la concentrazioni di CO2 del 5% nell’aria insufflata. In risultati hanno mostrato che la CO2 presente in aria è limitante per la crescita. Inoltre, sebbene con il 5% la coltura raggiunga concentrazioni più elevate in fase stazionaria rispetto alla coltura insufflata con semplice aria, il nutriente veramente limitante è chiaramente l’azoto. Infatti, in presenza di 1,5 g/L di NaNO3 (circa 20 volte la concentrazione normalmente utilizzata nei terreni di coltura) la concentrazione cellulare in fase stazionaria arriva ad un valore 4 volte maggiore di quello misurato negli altri casi. Quando l’azoto è presente in eccesso, tuttavia, il contenuto di lipidi rimane basso. Questo sembra suggerire che, effettivamente, l’aumento di lipidi sia determinato dalla carenza di azoto. Infatti, cellule raccolte per centrifugazione e risospese in un terreno povero in azoto, mostrano un aumento di fluorescenza corrispondente al 63±1% di lipidi su peso secco. Questi dati dimostrano che la deficienza di azoto in N. salina è responsabile dell’accumulo di lipidi. E’ noto, infatti, che la composizione biochimica delle microalghe può essere modificata attraverso manipolazioni ambientali, inclusa la disponibilità dei nutrienti. A questo scopo, per specifiche applicazioni, alcuni nutrienti vengono somministrati in concentrazioni limitanti. In particolare, il contenuto di lipidi in alcune alghe può variare come risultato di cambiamenti nelle condizioni di crescita o nelle caratteristiche del mezzo di coltura (Rodolfi et al., 2009; Converti et al., 2009). Il più efficiente approccio per aumentare il contenuto di lipidi nella alghe sembra essere la deficienza di azoto. In queste condizioni, la produttività della coltura è generalmente ridotta, se messa a confronto con le condizioni di nutrienti in eccesso (Rodolfi et al., 2009). Infatti, la deprivazione di azoto è generalmente associata ad una riduzione nella resa di biomassa ed ad una diminuzione della crescita. Questo spiega i risultati sperimentali ottenuti, in cui l’elevata concentrazione di nitrati, nella prima fase, stimola la crescita di biomassa, probabilmente stimolando la sintesi e l’accumulo di proteine, mentre, durante la fase limitante in azoto le alghe cominciano ad accumulare lipidi, e si registra un netto aumento della concentrazione di massa secca per cellula, fino ad una concentrazione complessiva di 4.05 g/L DW. 
Dal punto di vista industriale, quindi, la strategia vincente è probabilmente un approccio a due step, sperimentata con successo anche da (Rodolfi et al., 2009), con una prima fase di produzione di biomassa in terreno con sufficiente concentrazione di nutrienti (N-sufficient phase), seguita da un’induzione di accumulo lipidico attraverso deprivazione d’azoto (N-starved phase). In questa tesi è stata, inoltre, presa in considerazione la crescita mixotrofa. In colture algali a scala industriale, infatti, la crescita in fotoautotrofia potrebbe presentare alcuni limiti, legati soprattutto alla produttività. Ciò è dovuto sia alla scarsa penetrazione della luce in colture su larga scala, che è inversamente proporzionale alla densità cellulare, sia ai limiti intrinseci dell’efficienza fotosintetica delle microalghe. Per incrementare la produttività, una possibile strategia è crescere le colture in mixotrofia, esponendo quindi le alghe alla luce, ma fornendo anche un substrato organico che migliori la velocità cinetica di crescita e la resa in biomassa. L’obiettivo di questa parte del progetto di ricerca è stato di studiare gli effetti sulla crescita algale di diversi substrati organici, e di ottimizzare le condizioni di crescita mixotrofa. A tale scopo, N. salina e altre specie interessanti sono state sottoposte ad uno studio più accurato delle condizioni di crescita mixotrofa. Successivamente, è stata testata la capacità delle microalghe di utilizzare substrati organici durante i periodi di buio, in curve di crescita soggette a cicli di illuminazione giorno-notte. In generale è possibile concludere che le microalghe prese in considerazione sono in grado di aumentare le loro performance di crescita in presenza di fonti di carbonio organico addizionato al mezzo di coltura, rispetto alla sola crescita fotoautotrofa, anche se le diverse specie rispondono in modo diverso alla presenza dei vari substrati. Questa osservazione è vera però soltanto per le colture cresciute in condizioni di CO2 atmosferica. Quando la CO2 non è limitante, invece, l’aggiunta del substrato organico non solo non migliora le velocità di crescita, ma sembra inibire la crescita algale. Una possibile spiegazione del fenomeno è che, in una situazione in cui la CO2 è presente in eccesso, le microalghe preferiscano seguire la via fotosintetica, non consumando il substrato organico che, rimanendo nel mezzo di coltura, dà luogo a fenomeni di inibizione. In proposito, alcuni lavori riportano dati di inibizione della crescita in presenza di alte concentrazioni di substrati organici(Lee et al., 2007). Risultati più interessanti invece riguardano il contenuto di lipidi, che aumenta in presenza del substrato organico. Il ruolo determinante della CO2 è stato dimostrato in esperimenti in mixotrofia, in cui l’apporto di CO2 è stato interrotto nei periodi di buio. Questi esperimenti hanno dimostrato che in condizioni limitanti di anidride carbonica, e in assenza di luce, la capacità delle microalghe di consumare i composti organici viene ripristinata e la presenza del substrato aumenta la quantità di biomassa prodotta. La velocità di crescita microalgale è influenzata, inoltre, dalla disponibilità della luce che, se poco intensa, può essere limitante, mentre, se presente in eccesso, può portare a fenomeni di fotosaturazione o fotoinibizione, con una conseguente perdita in produttività (Carvalho et al., 2011; Cuaresma et al., 2011; Brindley,et al., 2011). 
Allo scopo di valutare l’effetto delle intensità luminose, gli esperimenti sono stati condotti in reattori a pannello sottile, appositamente progettati e costruiti. In tale sistema sono state effettuate curve di crescita a diverse intensità luminose, con particolare attenzione anche ai cicli luce-buio ad alta frequenza, che potrebbero evitare i fenomeni di fotoinibizione alle elevate intensità. I risultati mostrano che la velocità di crescita di N. salina aumenta linearmente all’aumento dell’intensità luminosa fino a valori di 150 µE m-2 s-1. In questo intervallo, quindi, la luce è da considerarsi limitante per la crescita. Oltre tali valori, invece, la velocità si assesta e le cellule mostrano segnali visibili di stress, con ingiallimento delle colture per l’accumulo di pigmenti fotoprotettori, e l’aumento del contenuto di lipidi, anche quando nel terreno è presente un elevato contenuto di azoto. Se sottoposte a cicli ad alta frequenza luce buio, ad elevate intensità, le cellule mostrano una diminuzione dei segnali di stress, suggerendo la possibilità di evitare tali fenomeni di fotosaturazione e fotoinibizione mediante mescolamento in un ipotetico fotobioreattore, che esponga ciclicamente le cellule alla superficie di esposizione, grazie a cicli di mescolamento (Zijffers et al., 2010). I risultati hanno dimostrato che la frequenza è cruciale nell’evitare fenomeni di stress, perché all’aumento dei tempi di esposizione aumentano i danni agli apparati fotosintetici, con una conseguente drastica diminuzione della produttività. Il tempo di flash light che ottimizza l’assorbimento della luce sembra essere nell’ordine dei 10 ms. Queste indicazioni devono essere prese in considerazione nella progettazione di un fotobioreattore, con particolare riguardo alle frequenze di mixing, che possono aumentare le performance di crescita esponendo alla superficie di esposizione le cellule a cicli tali da ridurre i fenomeni di foto inibizione. In questa tesi di dottorato, inoltre, sono stati effettuati degli esperimenti di screening di una specie algale d’acqua dolce, con la quale sono stati inoltre effettuati degli esperimenti volti a verificare la capacità di utilizzare fumi di combustione e azoto di acque di processo di stabilimenti industriali. Questa parte del progetto ha visto, quindi, l’identificazione di un ceppo di microalghe di acqua dolce che potesse essere usato per la produzione di olio in un impianto che utilizzi tali sottoprodotti. Dai risultati ottenuti per le specie e, assumendo rese tipiche di fotobioreattori industriali esistenti, per N. oleabundans i dati fanno ipotizzare una produttività annuale possibile di olio di circa 25 t/ha (e un limite teorico di circa 130 t/ha); mentre per B. braunii i dati sperimentali indicano una produttività possibile di circa 35 t/ha (limite teorico: circa 170 t/ha). Si è, inoltre, verificato sperimentalmente, che l’acqua di processo e i fumi disponibili da un reale stabilimento industriale consentono la crescita di N. oleabundans, pur con velocità inferiore). L’attività sperimentale di questo progetto di dottorato ha visto, inoltre, il design e la costruzione di un fotobioreattore a scala laboratorio per la produzione in continuo di biomassa. In tale reattore, configurato a pannello, è stata ottenuta la produzione di N. salina per una durata di 100 giorni complessivi, con una produttività costante di biomassa alla concentrazione di circa 1 g/L DW. Essendo in condizioni di azoto non limitante, la biomassa prodotta ha mantenuto un basso contenuto di lipidi. 
Come detto in precedenza, quindi, per questa specie è necessario un secondo step per l’accumulo di lipidi. L’ultima parte di questa tesi è stata dedicata a calcoli di produttività teorica di biomassa algale, mediante simulazioni con il software Aspen Plus™. Per simulare un fotobioreattore per microalghe con il software Aspen Plus, è stato necessario impostare un componente non convenzionale (la biomassa), e una subroutine in Fortran per impostare la cinetica e la stechiometria di reazione nel blocco reattore. In tal modo è stato possibile simulare il processo di produzione di biomassa algale. Dai risultati della simulazione è stato possibile, inoltre, fare delle considerazioni sulla geometria del reattore, e in particolare sulla profondità dello stesso, in grado di garantire elevati valori di produttività. E’ sempre necessario, comunque, tener conto dei vincoli termodinamici di produttività, che sono imposti dalla radiazione solare incidente. Le analisi di sensitività sul fotobioreattore hanno mostrato che, a parità di produttività, l’aumento della concentrazione di biomassa in ingresso comporta un volume di reazione minore, e tempi di permanenza minori. In sintesi, la produzione di olio vegetale da microalghe, sebbene presenti ancora alcuni aspetti che devono essere approfonditi, soprattutto per quanto riguarda le conoscenze fisiologiche e biologiche di tali organismi, sembra essere promettente. La tecnologia, infatti, pur agli albori, potrebbe dare un contributo determinante all’approvvigionamento di biocarburanti, in modo ecocompatibile ed energeticamente sostenibile.
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Steele, Steven Cory Wyatt. "Optimal Engine Selection and Trajectory Optimization using Genetic Algorithms for Conceptual Design Optimization of Reusable Launch Vehicles". Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/51771.

Texto completo
Resumen
Proper engine selection for Reusable Launch Vehicles (RLVs) is a key factor in the design of low-cost reusable launch systems for routine access to space. RLVs typically use combinations of different types of engines in sequence over the duration of the flight. Also, in order to properly choose which engines are best for an RLV design concept and mission, the optimal trajectory that maximizes or minimizes the mission objective must be found for that engine configuration. Typically this is done by the designer iteratively choosing engine combinations based on his/her judgment and running each individual combination through a full trajectory optimization to find out how well the engine configuration performed on board the desired RLV design. This thesis presents a new method to reliably predict the optimal engine configuration and optimal trajectory for a fixed design of a conceptual RLV in an automated manner. This method is implemented in the original code Steele-Flight. This code uses a combination of a Genetic Algorithm (GA) and a Non-Linear Programming (NLP) based trajectory optimizer known as GPOPS II to simultaneously find the optimal engine configuration from a user-provided selection pool of engine models and the matching optimal trajectory. This method allows the user to explore a broad range of possible engine configurations that they would not have time to consider otherwise, and to do so in less time than if they attempted to manually select and analyze each possible engine combination. This method was validated in two separate ways. The code's ability to optimize trajectories was compared to the German trajectory optimization suite ASTOS, where only minimal differences in the output trajectory were noticed. Afterwards another test was performed to verify the method used by Steele-Flight for engine selection. In this test, Steele-Flight was provided a vehicle model based on the German Saenger TSTO RLV concept and models of turbofans, turbojets, ramjets, scramjets and rockets. Steele-Flight explored the design space through the use of a Genetic Algorithm to find the engine combination that maximizes payload. The results output by Steele-Flight were verified by a study in which the designer manually chose the engine combinations one at a time, running each through the trajectory optimization routine to determine the best engine combination. For the most part, these methods yielded the same optimal engine configurations with only minor variation. The code itself provides RLV researchers with a new tool for conceptual-level engine selection from a collection of user-provided conceptual engine data models and RLV structural designs, and for trajectory optimization of fixed RLV designs with fixed mission requirements.
Master of Science
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Zhao, Feng 1992. "Advanced pixel selection and optimization algorithms for Persistent Scatterer Interferometry (PSI)". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668472.

Texto completo
Resumen
Ground deformation measurements can provide valuable information for minimizing the loss and damage caused by natural and environmental hazards. As a remote sensing technique, Persistent Scatterer Interferometry (PSI) with SAR is able to measure ground deformation efficiently and with high spatial resolution. Moreover, the ground deformation monitoring accuracy of PSI techniques can reach the millimeter level. However, low coherence can hinder the exploitation of SAR data, and high-accuracy deformation monitoring can only be achieved by PSI for high-quality pixels. Therefore, pixel optimization and the identification of coherent pixels are crucial for PSI techniques. In this thesis, advanced pixel selection and optimization algorithms have been investigated. Firstly, a full-resolution pixel selection method based on the Temporal Phase Coherence (TPC) has been proposed. This method first estimates the noise phase term of each pixel at the interferogram level. Then, for each pixel, its noise phase terms over all interferograms are used to assess the pixel's temporal phase quality (i.e., TPC). Next, based on the relationship between TPC and the phase Standard Deviation (STD), a threshold can be imposed on TPC to identify high phase quality pixels. This pixel selection method works with both Deterministic Scatterers (PSs) and Distributed Scatterers (DSs). To validate the effectiveness of the developed method, it has been used to monitor the Canillo (Andorra) landslide. The results show that the TPC method obtains the highest density of valid pixels among the three employed approaches in this challenging area with X-band SAR data. Second, to balance the polarimetric DInSAR phase optimization effect and the computational cost, a new PolPSI algorithm is developed. This PolPSI algorithm uses the Coherency Matrix Decomposition result to determine the optimal scattering mechanism of each pixel, and is thus named CMD-PolPSI. CMD-PolPSI does not need to search the full solution space and is therefore computationally much faster than the classical Equal Scattering Mechanism (ESM) method, but with lower optimization performance. On the other hand, it outperforms the computationally cheaper BEST method. Third, an adaptive algorithm, SMF-POLOPT, has been proposed to adaptively filter and optimize PolSAR pixels for PolPSI applications. This algorithm uses PolSAR classification results to first identify Polarimetric Homogeneous Pixels (PHPs) for each pixel and, at the same time, to classify PS and DS pixels. After that, DS pixels are filtered by their associated PHPs and then optimized based on the coherence stability phase quality metric; PS pixels are left unfiltered and directly optimized based on the DA phase quality metric. SMF-POLOPT can simultaneously reduce speckle noise and retain structural details. Meanwhile, SMF-POLOPT is able to obtain a much higher density of valid pixels for deformation monitoring than the ESM method. To conclude, one pixel selection method has been developed and tested, and two PolPSI algorithms have been proposed in this thesis. This work contributes to research on "Advanced Pixel Selection and Optimization Algorithms for Persistent Scatterer Interferometry".
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Ruan, Tieming. "Selection and optimization of snap-fit features via web-based software". Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1133282089.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Sremac, Stefan. "A rubric for algorithm selection in optimization of black-box functions". Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/53956.

Texto completo
Resumen
When optimizing black-box functions, little information is available to assist the user in selecting an optimization approach. It is assumed that, prior to optimization, the input dimension d of the objective function, the average running time tf of the objective function, and the total time T allotted to solve the problem are known. The intent of this research is to explore the relationship between the variables d, tf, and T and the performance of five optimization algorithms: Genetic Algorithm, Nelder-Mead, NOMAD, Efficient Global Optimization, and Knowledge Gradient for Continuous Parameters. The performance of the algorithms is measured over a set of functions with varying dimensions, function call budgets, and starting points. A rubric is then developed to assist the user in selecting the most appropriate algorithm for a given optimization scenario. Based on the information available prior to optimization, the rubric estimates the number of function calls available to each algorithm and the amount of improvement each algorithm can make on the objective function under that function call constraint. The rubric reveals that Bayesian Global Optimization algorithms require substantially more time than the competing algorithms and are therefore limited to smaller function call budgets; however, if the objective function requires a long running time, this difference becomes negligible. With respect to improvement, the rubric suggests that Derivative Free Optimization algorithms are preferred at lower dimensions and higher budgets, while Bayesian Global Optimization algorithms are expected to perform better at higher dimensions and lower budgets. A test of the claims of the rubric reveals that the estimated function call budget is accurate and reliable, but the improvement is not estimated accurately: the test data show large variability in the measure of improvement. It appears that the variables d, tf, and T are insufficient for describing the expected performance of the assessed algorithms, since variables such as function type and starting point are unaccounted for.
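The core of such a rubric, turning the known quantities d, tf, and T into an estimated function-call budget and a recommendation, can be sketched in a few lines. The example below is an illustrative assumption, not the thesis's actual rubric: the per-call overhead figures and the dimension/budget cut-offs are invented placeholders, and only the arithmetic (budget ≈ T divided by the cost of one evaluation plus the algorithm's own per-call overhead) follows the description in the abstract.

```python
# Hypothetical sketch of a selection rubric for black-box optimization.
# Overheads and cut-offs are illustrative assumptions, not thesis values.

# Rough per-call algorithm overhead in seconds (assumed): Bayesian methods
# pay a model-fitting cost, derivative-free methods almost none.
OVERHEAD = {
    "Nelder-Mead": 0.001,
    "NOMAD": 0.01,
    "Genetic Algorithm": 0.01,
    "EGO": 2.0,     # Efficient Global Optimization
    "KGCP": 5.0,    # Knowledge Gradient for Continuous Parameters
}

def estimated_budget(T: float, tf: float, overhead: float) -> int:
    """Number of objective evaluations that fit into total time T when one
    evaluation costs tf seconds plus the algorithm's per-call overhead."""
    return int(T // (tf + overhead))

def recommend(d: int, tf: float, T: float) -> str:
    """Pick an algorithm family from (d, tf, T), mirroring the rubric's
    qualitative finding: derivative-free methods at low dimension and large
    budgets, Bayesian methods at high dimension and small budgets."""
    budgets = {name: estimated_budget(T, tf, oh) for name, oh in OVERHEAD.items()}
    if d <= 10 and budgets["Nelder-Mead"] >= 500:   # placeholder cut-offs
        return f"derivative-free (e.g. Nelder-Mead), ~{budgets['Nelder-Mead']} calls"
    return f"Bayesian global optimization (e.g. EGO), ~{budgets['EGO']} calls"

# Example: a 20-dimensional objective taking 30 s per call, one hour in total.
print(recommend(d=20, tf=30.0, T=3600.0))
```

Note how, with tf = 30 s, the assumed overhead gap barely changes the estimated budgets (112 calls for EGO versus 119 for Nelder-Mead), which reflects the abstract's observation that the Bayesian time penalty becomes negligible for expensive objectives.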
Irving K. Barber School of Arts and Sciences (Okanagan)
Mathematics, Department of (Okanagan)
Graduate
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Mekarapiruk, Wichaya. "Simultaneous optimal parameter selection and dynamic optimization using iterative dynamic programming". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58926.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Rawat, Waseem. "Optimization of convolutional neural networks for image classification using genetic algorithms and bayesian optimization". Diss., 2018. http://hdl.handle.net/10500/24977.

Texto completo
Resumen
Notwithstanding the recent successes of deep convolutional neural networks on classification tasks, they are sensitive to the selection of their hyperparameters, which impose an exponentially large search space on modern convolutional models. Traditional hyperparameter selection methods include manual, grid, or random search, but these require expert knowledge or are computationally burdensome. In contrast, Bayesian optimization and evolutionary-inspired techniques have emerged as viable alternatives to the hyperparameter problem. Thus, a hybrid approach that combines the advantages of these techniques is proposed. Specifically, the search space is partitioned into a discrete architectural subspace and a continuous and categorical hyperparameter subspace, which are traversed, respectively, by a stochastic genetic search followed by a genetic-Bayesian search. Simulations on a prominent image classification task reveal that the proposed method yields an overall classification accuracy improvement of 0.87% over unoptimized baselines, and a greater than 97% reduction in computational cost compared to a commonly employed brute-force approach.
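The hybrid scheme this abstract describes, a genetic search over the discrete architectural subspace followed by a search over the continuous hyperparameters, can be illustrated with a toy sketch. The example below is an assumption-laden stand-in, not the dissertation's method: the architecture choices and the synthetic fitness/loss functions replace real CNN training with cheap analytic surrogates, the second stage is simplified from a genetic-Bayesian hybrid to a plain Gaussian-process search, and scikit-optimize's gp_minimize is assumed to be available.

```python
import random
import numpy as np
from skopt import gp_minimize          # scikit-optimize, assumed available
from skopt.space import Real

# Stage 1: genetic search over the discrete architectural subspace.
# The fitness is a stand-in for validation accuracy; training a real CNN per
# candidate (as in the thesis) is far too slow for a sketch.
ARCH_CHOICES = {"n_conv": [2, 3, 4, 5], "n_filters": [16, 32, 64, 128], "kernel": [3, 5, 7]}

def fitness(arch: dict) -> float:
    # Placeholder objective: peaks at 4 conv layers, 64 filters, 3x3 kernels.
    return -((arch["n_conv"] - 4) ** 2 + (np.log2(arch["n_filters"]) - 6) ** 2
             + (arch["kernel"] - 3) ** 2)

def random_arch() -> dict:
    return {k: random.choice(v) for k, v in ARCH_CHOICES.items()}

def mutate(arch: dict) -> dict:
    child = dict(arch)
    key = random.choice(list(ARCH_CHOICES))
    child[key] = random.choice(ARCH_CHOICES[key])
    return child

def genetic_search(pop_size: int = 12, generations: int = 15) -> dict:
    population = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                 # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best_arch = genetic_search()

# Stage 2: Gaussian-process (Bayesian) search over the continuous subspace,
# conditioned on the architecture found by the genetic stage.
def continuous_objective(params):
    lr, dropout = params
    # Placeholder loss: best around lr = 1e-3 and dropout = 0.5.
    return (np.log10(lr) + 3) ** 2 + (dropout - 0.5) ** 2

space = [Real(1e-5, 1e-1, prior="log-uniform", name="lr"),
         Real(0.0, 0.8, name="dropout")]
result = gp_minimize(continuous_objective, space, n_calls=20, random_state=0)

print("best architecture:", best_arch)
print("best lr, dropout:", result.x, "loss:", round(result.fun, 4))
```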
Electrical and Mining Engineering
M. Tech. (Electrical Engineering)
Los estilos APA, Harvard, Vancouver, ISO, etc.