Theses on the topic "Multi-Objective Estimation"

To see the other types of publications on this topic, follow this link: Multi-Objective Estimation.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles.

Choose a source:

Consult the 18 best theses for your research on the topic "Multi-Objective Estimation".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference for the chosen work in the citation style you prefer: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Martins, Marcella Scoczynski Ribeiro. "A hybrid multi-objective bayesian estimation of distribution algorithm". Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2806.

Full text
Abstract:
Nowadays, a number of metaheuristics have been developed for dealing with multi-objective optimization problems. Estimation of distribution algorithms (EDAs) are a special class of metaheuristics that explore the decision variable space to construct probabilistic models from promising solutions. The probabilistic model used in an EDA captures statistics of the decision variables and their interdependencies with the optimization problem. Moreover, incorporating local search methods can notably improve the results of multi-objective evolutionary algorithms, and these hybrid approaches have been jointly applied to multi-objective problems. In this work, a Hybrid Multi-objective Bayesian Estimation of Distribution Algorithm (HMOBEDA), based on a Bayesian network, is proposed for multi- and many-objective scenarios; it models the joint probability of decision variables, objectives, and configuration parameters of an embedded local search (LS). We tested different versions of HMOBEDA using instances of the multi-objective knapsack problem with two to five and eight objectives. HMOBEDA is also compared with five cutting-edge evolutionary algorithms (including a modified version of NSGA-III adapted for combinatorial optimization) on the same knapsack instances, as well as on a set of MNK-landscape instances with two, three, five and eight objectives. An analysis of the resulting Bayesian network structures and parameters has also been carried out, both to evaluate the approximated Pareto front from a probabilistic point of view and to assess how the interactions among variables, objectives and local search parameters are captured by the Bayesian networks. Results show that HMOBEDA outperforms the other approaches: it not only provides the best values for the hypervolume, capacity and inverted generational distance indicators in most of the experiments, but also presents a high-diversity solution set close to the estimated Pareto front.
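The Pareto front and the hypervolume indicator mentioned in the abstract have compact definitions; a minimal, self-contained sketch for two minimized objectives (illustrative data, not code from the thesis):

```python
def dominates(a, b):
    """True if point a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective front, bounded by reference point ref."""
    pts = sorted(front)  # ascending in f1 (hence descending in f2 for a front)
    xs = [p[0] for p in pts] + [ref[0]]
    return sum((xs[i + 1] - x) * (ref[1] - y) for i, (x, y) in enumerate(pts))

points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(points)            # [(1, 5), (2, 3), (4, 1)]
hv = hypervolume_2d(front, ref=(6, 6))  # 17
```

The quadratic non-dominated filter is fine for a sketch; practical many-objective solvers use dedicated archives and fast hypervolume algorithms.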
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Morcos, Karim M. "Genetic network parameter estimation using single and multi-objective particle swarm optimization". Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/9207.

Full text
Abstract:
Master of Science
Department of Electrical and Computer Engineering
Sanjoy Das
Stephen M. Welch
Multi-objective optimization problems deal with finding a set of candidate optimal solutions to be presented to the decision maker. In industry, this could be the problem of finding alternative car designs given the usually conflicting objectives of performance, safety, environmental friendliness, ease of maintenance, and price, among others. Despite the significance of this problem, most of the widely used non-evolutionary algorithms cannot find a set of diverse and nearly optimal solutions, owing to the huge size of the search space. At the same time, the solution sets produced by most currently used evolutionary algorithms lack diversity. The present study investigates a new optimization method for multi-objective problems based on the widely used swarm-intelligence approach, Particle Swarm Optimization (PSO). Compared to other approaches, the proposed algorithm converges relatively fast while maintaining a diverse set of solutions. The investigated algorithm, Partially Informed Fuzzy-Dominance (PIFD) based PSO, uses a dynamic network topology and fuzzy dominance to guide the swarm of dominated solutions. The algorithm has been tested on four benchmark problems and other real-world applications to ensure proper functionality and assess overall performance. The multi-objective gene regulatory network (GRN) problem entails minimizing the coefficient of variation of modified photothermal units (MPTUs) across multiple sites along with the total sum of similarity background between ecotypes. The results throughout the current research study show that the investigated algorithm attains outstanding optimization performance and exhibits rapid convergence and diversity.
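For readers unfamiliar with PSO, the core velocity-and-position update that all PSO variants (including dominance-guided ones such as PIFD) build on can be sketched as follows. The coefficient values are common defaults, and the fixed r1 = r2 = 0.5 replaces the usual uniform random draws purely for reproducibility:

```python
def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, r1=0.5, r2=0.5):
    """One particle update: inertia + cognitive pull (toward the particle's own
    best) + social pull (toward the swarm's best). r1, r2 are fixed here for
    reproducibility; real PSO samples them from U(0, 1) every step."""
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new

x, v = [2.0, -1.0], [0.1, 0.1]
x_new, v_new = pso_step(x, v, pbest=[1.0, 0.0], gbest=[0.0, 0.0])
```

Multi-objective PSO variants differ mainly in how `gbest` is chosen (e.g. from a Pareto archive via a dominance relation) rather than in this update itself.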
3

Monteagudo, Maykel Cruz. "Multi-Objective Optimization Based on Desirability Estimation of Several Interrelated Responses (MOOp-DESIRe): A Computer-Aided Methodology for Multi-Criteria Drug Discovery". Doctoral thesis, Faculdade de Farmácia da Universidade do Porto, 2009. http://hdl.handle.net/10216/63799.

Full text
4

Monteagudo, Maykel Cruz. "Multi-Objective Optimization Based on Desirability Estimation of Several Interrelated Responses (MOOp-DESIRe): A Computer-Aided Methodology for Multi-Criteria Drug Discovery". Tese, Faculdade de Farmácia da Universidade do Porto, 2009. http://hdl.handle.net/10216/63799.

Full text
5

Petrlík, Jiří. "Multikriteriální genetické algoritmy v predikci dopravy". Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-412573.

Full text
Abstract:
Understanding the behaviour of road traffic is the key to its effective control and organization. This task becomes increasingly important as transport demands and the number of registered vehicles grow. Information about the traffic situation is important both for drivers and for those responsible for traffic management. Fortunately, technologies for monitoring the traffic situation have advanced considerably over the past few decades. Stationary sensors, such as inductive loops, radars, cameras and infrared sensors, can be installed at important locations, where they measure various microscopic and macroscopic traffic quantities. Unfortunately, many measurements contain corrupted data that cannot be used in further processing, for example in traffic prediction and intelligent traffic control. Such corrupted data may be caused by device failures or by problems in data transmission. It is therefore important to design a general framework capable of imputing the missing data. Moreover, this framework should also be able to provide short-term predictions of the future traffic state. This thesis mainly addresses selected problems in the imputation of missing traffic data, short-term traffic prediction, and travel-time prediction. The proposed solutions combine current machine-learning methods, such as support vector regression (SVR), with multi-objective evolutionary algorithms. SVR has many meta-parameters that must be set well to achieve the best possible prediction quality, and the quality of an SVR prediction also strongly depends on choosing a suitable set of input variables. In this work we use multi-objective optimization to optimize both the SVR meta-parameters and the set of input variables. Multi-objective optimization yields many Pareto non-dominated solutions, among which it is possible to switch dynamically according to the data currently available, so as to maximize prediction quality.
The methods proposed in this thesis are particularly suited to settings with a large share of missing values in the traffic data. We verified them on real data and compared their results with currently used methods. The proposed methods outperform the existing ones, especially in scenarios with many missing values in the traffic data.
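Among the SVR meta-parameters discussed above is the ε of SVR's ε-insensitive loss; a minimal sketch of that loss (hypothetical data, not from the thesis) shows why the choice of ε changes what the model is penalized for:

```python
def eps_insensitive_loss(y_true, y_pred, eps):
    """SVR training loss: errors smaller than eps cost nothing,
    larger errors cost only their excess over eps."""
    return sum(max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred))

y_true = [1.0, 2.0, 3.0]
y_pred = [1.1, 2.5, 2.0]
loss_tight = eps_insensitive_loss(y_true, y_pred, eps=0.05)  # every error counts
loss_loose = eps_insensitive_loss(y_true, y_pred, eps=0.6)   # only the 1.0 error counts
```

A multi-objective search over (C, ε, kernel parameters, input-variable subset), as the thesis describes, is then a matter of evaluating candidate settings against validation error and other objectives.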
6

Xu, Weili. "An Energy and Cost Performance Optimization Platform for Commercial Building System Design". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/956.

Full text
Abstract:
Energy and cost performance optimization for commercial building system design is growing in popularity, but it is often criticized as a time-consuming process. Moreover, the current process lacks integration, which affects not only time performance but also investors' confidence in the predicted performance of the generated design. Such barriers keep building owners and design teams from embracing life-cycle cost considerations. This thesis proposes a computationally efficient design optimization platform to improve time performance and to streamline the workflow of an integrated multi-objective building system design optimization process. First, building system cost estimation is typically completed through a building-information-model-based quantity take-off process, which does not provide sufficient decision support in the design process. To remedy this, an automatic cost estimation framework is proposed that integrates EnergyPlus with an external database to estimate building systems' capital and operating costs. Optimization, typically used for building system design selection, requires a large amount of computational time. The optimization process evaluates building envelope, electrical and HVAC systems as an integrated system, not only to explore the cost-saving potential of a single high-performance system but also the interrelated effects among different systems. An innovative optimization strategy that integrates machine learning techniques with a conventional evolutionary algorithm is proposed; this strategy reduces run time and improves the quality of the solutions. Lastly, developing baseline energy models typically takes days or weeks depending on the scale of the design. An automated system for generating baseline energy models according to the ANSI/ASHRAE/IESNA Standard 90.1 performance rating method is therefore proposed to provide a quick appraisal of optimal designs against the baseline energy requirements.
The main contribution of this thesis is a new design optimization platform that expedites the conventional decision-making process. The platform integrates three systems: (1) cost estimation, (2) optimization and (3) benchmark comparison for minimizing first cost and energy operating costs. This allows designers to confidently select an optimal design with high-performance building systems by comparing it with the minimum energy baseline set by standards in the building industry. Two commercial buildings are selected as case studies to demonstrate the effectiveness of the platform. The first is the Center for Sustainable Landscapes in Pittsburgh, PA, used as a new-construction project. With 54 million possible design solutions, the platform is able to identify optimal designs in four hours. Some of the design solutions not only cut operating costs by up to 23% compared to the ASHRAE baseline design, but also reduce the capital cost by 5% to 23%. Moreover, one design solution demonstrates that the high investment in a building-integrated photovoltaic (BiPV) system can be justified through the integrative design optimization approach by its lower operating costs (20%) and lower capital cost (12%) relative to the ASHRAE baseline design. The second building is One Montgomery Plaza, a large office building in Norristown, PA; this case study focuses on using the platform for a retrofit project. The calibrated energy model requires one hour to complete a simulation. With 4000 possible design solutions proposed, the platform finds the optimal design in around 50 hours. Similarly, the results indicate that up to 25% of the capital cost can be saved, with $1.7 million less in operating costs over 25 years compared to the ASHRAE baseline design.
7

Hartikka, Alice, and Simon Nordenhög. "Emission Calculation Model for Vehicle Routing Planning: Estimation of emissions from heavy transports and optimization with carbon dioxide equivalents for a route planning software". Thesis, Linköpings universitet, Energisystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178065.

Full text
Abstract:
The transport sector is a major source of emissions both in Sweden and globally. This master's thesis aims to develop a model for estimating the emissions of heavy transports on a specific route. The emissions can be used in a route planning software and help the driver choose a route that contributes to reduced emissions. The methodology was to investigate attributes, such as vehicle-related attributes and topography, and their impact on transport emissions. The carbon dioxide, methane and nitrous oxide emissions were converted into carbon dioxide equivalents, which were incorporated as one cost, together with a precalculated driving time as a second cost, in a multi-objective function used for route planning. Different tests were conducted to investigate the accuracy and usability of the model. First, a validation test was performed in which the optimized routes were analyzed; it showed that the model was, in general, more likely to choose a shorter route. The fuel consumption values largely matched the generic values and measurements gathered from the literature. A second test combined the driving time with the emissions in a multi-objective function. In this test, a weighting coefficient was varied and analyzed to explore whether a coefficient value giving the best trade-off could be found. The results showed that the model generates different solutions for different coefficients and that a suitable trade-off between driving time and emissions can be found. This study therefore shows that it is possible to combine emissions with other objectives, such as driving time, in route optimization. Finally, a sensitivity analysis was performed in which attribute factors and assumptions were varied to see how sensitive they were and, in turn, how much a change would impact the calculated emissions.
The sensitivity analysis showed that changes in topography attributes had less impact than changes in vehicle-related attributes. In conclusion, this thesis has built a foundation for route planning for heavy transports based on the environmental aspect.
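The conversion to carbon dioxide equivalents and the weighted two-objective route cost described above can be sketched as follows. The GWP100 factors are the commonly used IPCC AR4 values (25 for CH4, 298 for N2O), and the normalization scales are illustrative assumptions, not the thesis's exact figures:

```python
# GWP100 factors (IPCC AR4 values; the thesis's exact factors are an assumption here).
GWP = {"co2": 1.0, "ch4": 25.0, "n2o": 298.0}

def co2_equivalents(emissions_kg):
    """Collapse per-gas emissions (kg) into a single kg-CO2e figure."""
    return sum(GWP[gas] * kg for gas, kg in emissions_kg.items())

def route_cost(time_s, co2e_kg, weight, time_scale, co2e_scale):
    """Weighted two-objective cost: `weight` trades driving time against emissions.
    Both objectives are normalized so the weighting is scale-free."""
    return weight * time_s / time_scale + (1 - weight) * co2e_kg / co2e_scale

e = co2_equivalents({"co2": 100.0, "ch4": 0.2, "n2o": 0.01})  # 100 + 5 + 2.98 kg CO2e
cost = route_cost(time_s=3600, co2e_kg=e, weight=0.5,
                  time_scale=3600, co2e_scale=100)
```

Varying `weight` from 0 to 1 reproduces the trade-off analysis the abstract describes: each value yields a (time, emissions) compromise for the route planner to rank edges with.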
8

Skolpadungket, Prisadarng. "Portfolio management using computational intelligence approaches: forecasting and optimising the stock returns and stock volatilities with fuzzy logic, neural network and evolutionary algorithms". Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6306.

Full text
Abstract:
Portfolio optimisation is subject to a number of constraints arising from practical matters and regulations. Closed-form mathematical solutions of portfolio optimisation problems usually cannot accommodate these constraints, while exhaustive search for the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also impaired by estimation error, caused by our limited ability to predict the future accurately. A number of multi-objective genetic algorithms are proposed to solve the two-objective problem subject to cardinality, floor and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. Strength Pareto Evolutionary Algorithm 2 (SPEA2) yields portfolio solutions that are evenly distributed along the efficient front, while MOGA is more time-efficient. An Evolutionary Artificial Neural Network (EANN) is proposed that automatically evolves the ANN's initial values and the structure of its hidden nodes and layers. The EANN gives better stock return forecasts than Ordinary Least Squares estimation and Back Propagation and Elman recurrent ANNs. Adaptation algorithms based on fuzzy-logic-like rules are proposed to select the best pair of forecasting models for a given economic scenario; their predictive performance is better than that of the individual forecasting models being compared. MOGA and SPEA2 are modified to include a third objective that handles model risk, and are evaluated and tested; the results show that they perform better than their counterparts without the third objective.
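The two objectives and the cardinality and floor constraints referred to above can be written down compactly; a hedged sketch with made-up numbers, not the thesis's model:

```python
def portfolio_objectives(weights, mean_returns, cov):
    """Return the (expected return, variance) objective pair for a weight vector."""
    ret = sum(w * m for w, m in zip(weights, mean_returns))
    var = sum(wi * wj * cov[i][j]
              for i, wi in enumerate(weights)
              for j, wj in enumerate(weights))
    return ret, var

def feasible(weights, max_assets, floor):
    """Cardinality and floor constraints: at most max_assets open positions,
    and every open position at least `floor` large."""
    held = [w for w in weights if w > 0]
    return len(held) <= max_assets and all(w >= floor for w in held)

mu = [0.08, 0.12, 0.05]                      # made-up expected returns
cov = [[0.04, 0.01, 0.00],                   # made-up covariance matrix
       [0.01, 0.09, 0.00],
       [0.00, 0.00, 0.01]]
w = [0.5, 0.5, 0.0]
ret, var = portfolio_objectives(w, mu, cov)  # (0.10, 0.0375)
ok = feasible(w, max_assets=2, floor=0.1)    # True
```

A multi-objective GA then searches over weight vectors, discarding or repairing those that fail `feasible`, and keeps the non-dominated (return, risk) pairs.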
9

Thenon, Arthur. "Utilisation de méta-modèles multi-fidélité pour l'optimisation de la production des réservoirs". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066100/document.

Full text
Abstract:
Performing flow simulations on numerical models representative of oil deposits is usually a time-consuming task in reservoir engineering. Substituting a meta-model, a mathematical approximation, for the flow simulator is thus a common practice to reduce the number of calls to the simulator; it makes applications such as sensitivity analysis, history matching, production estimation and optimization tractable. This thesis studies meta-models able to integrate simulations performed at different levels of accuracy, for instance on reservoir models with various grid resolutions. The goal is to speed up the building of a predictive meta-model by balancing a few expensive but accurate simulations with numerous cheap but approximate ones. Multi-fidelity meta-models, based on co-kriging, are thus compared with kriging meta-models for approximating different flow simulation outputs. To deal with vectorial outputs without building a meta-model for each component of the vector, the outputs can be projected onto a reduced basis using principal component analysis; only a few meta-models are then needed to approximate the main coefficients in the new basis. An extension of this approach to the multi-fidelity context is proposed. In addition, it can provide an efficient meta-model of the objective function when used to approximate each production response involved in the objective function definition. The proposed methods are tested on two synthetic cases derived from the PUNQ-S3 and Brugge benchmark cases. Finally, sequential design algorithms are introduced to speed up the meta-modeling process and exploit the multi-fidelity approach.
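The intuition behind multi-fidelity co-kriging is the autoregressive relation y_hi ≈ ρ·y_lo + δ between fine-grid and coarse-grid outputs. The scalar least-squares fit below is a deliberately simplified illustration of that relation (in actual co-kriging, δ is itself modeled by a Gaussian process), with made-up numbers:

```python
def fit_lofi_correction(y_lo, y_hi):
    """Least-squares fit of y_hi ~ rho * y_lo + delta over paired samples,
    i.e. the scalar skeleton of the autoregressive multi-fidelity model."""
    n = len(y_lo)
    mean_lo = sum(y_lo) / n
    mean_hi = sum(y_hi) / n
    cov = sum((a - mean_lo) * (b - mean_hi) for a, b in zip(y_lo, y_hi))
    var = sum((a - mean_lo) ** 2 for a in y_lo)
    rho = cov / var
    delta = mean_hi - rho * mean_lo
    return rho, delta

# Coarse-grid outputs and the matching fine-grid outputs (made-up numbers).
y_lo = [1.0, 2.0, 3.0, 4.0]
y_hi = [3.1, 5.0, 7.1, 9.0]   # roughly 2 * y_lo + 1
rho, delta = fit_lofi_correction(y_lo, y_hi)
```

Once ρ and δ are estimated from a few paired runs, cheap coarse-grid simulations can stand in for expensive fine-grid ones, which is exactly the saving the multi-fidelity meta-models aim for.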
10

Song, Yingying. "Amélioration de la résolution spatiale d'une image hyperspectrale par déconvolution et séparation-déconvolution conjointes". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0207/document.

Full text
Abstract:
A hyperspectral image is a 3D data cube in which every pixel provides local spectral information about a scene of interest across a large number of contiguous bands. The observed images may suffer from degradation due to the measuring device, resulting in a blurring of the images that is modeled by a convolution. Hyperspectral image deconvolution (HID) consists in removing the blur to improve the spatial resolution of the images as much as possible. A Tikhonov-like HID criterion with a non-negativity constraint is considered here. This method uses separable spatial and spectral regularization terms whose strengths are controlled by two regularization parameters. The first part of this thesis proposes the maximum curvature criterion (MCC) and the minimum distance criterion (MDC) to automatically estimate these regularization parameters by formulating the deconvolution problem as a multi-objective optimization problem. The second part proposes the sliding-block regularized LMS (SBR-LMS) algorithm for the online deconvolution of hyperspectral images as provided by whiskbroom and pushbroom scanning systems. The proposed algorithm accounts for the non-causality of the convolution kernel and includes non-quadratic regularization terms while maintaining a linear complexity compatible with real-time processing in industrial applications. The third part proposes joint unmixing-deconvolution methods based on the Tikhonov criterion in both offline and online contexts; adding a non-negativity constraint improves their performance.
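In its simplest unconstrained quadratic form, the Tikhonov criterion underlying all three parts reduces to the regularized normal equations (HᵀH + λI)x = Hᵀy. A minimal 1-D sketch in pure Python (made-up data; no spatial/spectral split, non-quadratic terms, or non-negativity constraint):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny dense systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def tikhonov_deconvolve(H, y, lam):
    """Solve (H^T H + lam * I) x = H^T y, the regularized normal equations."""
    n = len(H[0])
    HtH = [[sum(H[k][i] * H[k][j] for k in range(len(H)))
            + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    Hty = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(n)]
    return solve(HtH, Hty)

# 1-D blur kernel [0.25, 0.5, 0.25] written as a 4x4 convolution matrix.
H = [[0.50, 0.25, 0.00, 0.00],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.00, 0.00, 0.25, 0.50]]
y = [1.0, 2.0, 2.0, 1.0]   # blurred observation (made-up)
x = tikhonov_deconvolve(H, y, lam=0.1)
```

Larger λ pulls the estimate toward a smoother, lower-energy solution; the MCC and MDC criteria in the first part are precisely ways of picking such parameters automatically.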
11

Guerra, Jonathan. "Optimisation multi-objectif sous incertitudes de phénomènes de thermique transitoire". Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0024/document.

Full text
Abstract:
This work aims at solving multi-objective optimization problems in the presence of uncertainties and costly numerical simulations; a validation is carried out on a transient thermal test case. First, we develop a multi-objective optimization algorithm based on kriging that requires few calls to the objective functions. The approach is suited to distributed computing and favors the restitution of a regular approximation of the complete Pareto front. The optimization problem under uncertainties is then studied by considering worst-case and probabilistic robustness measures. The superquantile integrates every event for which the output value lies between the quantile and the worst case; however, evaluating it accurately requires a large number of calls to the uncertain objective function, and few methods can approximate the superquantile of the output distribution of costly functions. We therefore develop an estimator based on importance sampling and kriging, which approximates superquantiles with little error from a limited number of samples. Moreover, coupling it with the multi-objective algorithm allows some of those evaluations to be reused. In the last part, we build spatio-temporal surrogate models capable of predicting non-linear dynamic phenomena over long time horizons from few learning trajectories. Recurrent neural networks are used, and a construction methodology that facilitates the learning is proposed.
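The superquantile (also known as CVaR) that the abstract focuses on has a simple empirical estimator: the average of the worst (1 − α) fraction of outcomes. The plain Monte Carlo sketch below is the brute-force baseline that the thesis replaces with importance sampling and kriging to cut the number of calls:

```python
def empirical_superquantile(samples, alpha):
    """Average of the worst (1 - alpha) fraction of samples
    (larger = worse, as for a cost to be minimized)."""
    s = sorted(samples, reverse=True)
    k = max(1, round(len(s) * (1 - alpha)))
    return sum(s[:k]) / k

samples = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
sq = empirical_superquantile(samples, alpha=0.8)  # mean of the two worst: 9.5
```

For a rare-event level α close to 1, the tail contains very few of the plain Monte Carlo samples, which is why naive estimation needs so many simulator calls and why a kriging surrogate with importance sampling pays off.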
12

Vigliassi, Marcos Paulo. "Algoritmo evolutivo multiobjetivo em tabelas e matriz HΔ para projeto de sistemas de medição para estimação de estado". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-19052017-154501/.

Full text
Abstract:
Metering system planning for power system state estimation is a multi-objective, combinatorial optimization problem that may require the investigation of many possible solutions. As a consequence, metaheuristics have been employed to solve it. However, most of them convert the multi-objective problem into a mono-objective one, and the few that keep a multi-objective formulation do not consider all the performance requirements that must be met to obtain a Reliable Metering System (RMS): system observability and the absence of Critical Measurements, Critical Sets, Critical Remote Terminal Units, and Critical Phasor Measurement Units. This thesis proposes a multi-objective formulation for the metering system planning problem in its full generality, that is, considering all the performance requirements that must be met to obtain an RMS. It also proposes the development and computer implementation of a method to solve the problem, handling the trade-off between its two conflicting objectives (minimizing cost while satisfying the performance requirements) through the concept of the Pareto frontier. In a single execution, the method yields four types of metering systems from the analysis of non-dominated solutions. It enables the design of new metering systems as well as the improvement of existing ones, considering the existence of only conventional SCADA measurements, only synchronized phasor measurements, or both types of measurements. The proposed method combines a multi-objective evolutionary algorithm based on subpopulation tables with the properties of the so-called HΔ matrix. The subpopulation tables adequately model several metering system performance requirements, enabling a better exploration of the solution space.
The properties of the HΔ matrix, in turn, enable a local search that improves the evolutionary process and minimizes the computational effort. Simulation results with the IEEE 6-, 14-, 30-, 118- and 300-bus test systems and with a 61-bus Eletropaulo system illustrate the efficiency of the proposed method. Some of these results are compared with those published in the literature.
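The selection of non-dominated solutions mentioned above can be sketched with a minimal Pareto filter. The candidate (cost, unmet-requirements) pairs below are hypothetical illustrations, not values from the thesis:

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly
    better in at least one (both objectives are minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points, preserving input order."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, unmet-requirements) pairs for candidate metering systems.
candidates = [(10, 5), (12, 3), (8, 7), (12, 5), (15, 1)]
print(pareto_front(candidates))  # -> [(10, 5), (12, 3), (8, 7), (15, 1)]
```

Here (12, 5) is dropped because (10, 5) reaches the same performance at lower cost; every surviving point trades cost against performance.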
APA, Harvard, Vancouver, ISO, and other styles
13

Ebadi, Nasim. « Estimating Costs of Reducing Environmental Emissions From a Dairy Farm : Multi-objective epsilon-constraint Optimization Versus Single Objective Constrained Optimization ». Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99304.

Full text
Abstract:
Agricultural production is an important source of environmental emissions. While water quality concerns related to animal agriculture have been studied extensively, air quality issues have become an increasing concern. Due to the transfer of nutrients between air, water, and soil, emissions to air can harm water quality. We conduct a multi-objective optimization analysis for a representative dairy farm with two different approaches: nonlinear programming (NLP) and ϵ-constraint optimization to evaluate trade-offs among reduction of multiple pollutants including nitrogen (N), phosphorus (P), greenhouse gas (GHG), and ammonia. We evaluated twenty-six different scenarios in which we define incremental reductions of N, P, ammonia, and GHG from 5% to 25% relative to a baseline scenario. The farm entails crop production, livestock production (dairy and broiler), and manure management activities. Results from NLP optimization indicate that reducing P and ammonia emissions is relatively more expensive than N and GHG. This result is also confirmed by the ϵ-constraint optimization. However, the latter approach provides limited evidence of trade-offs among reduction of farm pollutants and net returns, while the former approach includes different reduction scenarios that make trade-offs more evident.
Master of Science
Human activities often damage and deplete the environment. For instance, nutrient pollution into air and water, which mostly comes from agricultural and industrial activities, results in water quality degradation. Thus, mitigating the detrimental impacts of human activities is an important step toward environmental sustainability. Reducing environmental impacts of nutrient pollution from agriculture is a complicated problem, which needs a comprehensive understanding of types of pollution and their reduction strategies. Reduction strategies need to be both feasible and financially viable. Consequently, practices must be carefully selected to allow farmers to maximize their net return while reducing pollution levels to reach a satisfactory level. Thus, this paper conducts a study to evaluate the trade-offs associated with farm net return and reducing the most important pollutants generated by agricultural activities. The results of this study show that reducing N and GHG emissions from a representative dairy farm is less costly than reducing P and ammonia emissions, respectively. In addition, reducing one pollutant may result in reduction of other pollutants. In general, for N and P emissions reduction land retirement and varying crop rotations are the most effective strategies. However, for reducing ammonia and GHG emissions focusing on cow diet changes involving less forage is the most effective strategy.
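The ϵ-constraint approach compared above can be sketched in a few lines: one objective (net return) is optimized while the other (emissions) is capped by an ϵ that is tightened step by step, tracing the trade-off curve. The candidate farm plans and all their numbers are hypothetical:

```python
# Hypothetical farm plans: net return vs. total emissions (arbitrary units).
plans = {
    "baseline":        {"return": 100, "emissions": 50},
    "crop_rotation":   {"return": 92,  "emissions": 40},
    "land_retirement": {"return": 85,  "emissions": 32},
    "diet_change":     {"return": 80,  "emissions": 25},
}

def epsilon_constraint(plans, eps):
    """Single-objective subproblem: best return among plans meeting the cap."""
    feasible = {k: v for k, v in plans.items() if v["emissions"] <= eps}
    return max(feasible, key=lambda k: feasible[k]["return"]) if feasible else None

# Tighten the emissions cap and record the best feasible plan each time.
for eps in (50, 40, 32, 25):
    print(eps, epsilon_constraint(plans, eps))
# 50 -> baseline, 40 -> crop_rotation, 32 -> land_retirement, 25 -> diet_change
```

Each cap value yields one point of the Pareto frontier, which is how the ϵ-constraint runs in the thesis make the cost of successive reduction targets explicit.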
APA, Harvard, Vancouver, ISO, and other styles
14

Pottier, Claire. « Combinaison multi-capteurs de données de couleur de l'eau : application en océanographie opérationnelle ». PhD thesis, Université Paul Sabatier - Toulouse III, 2006. http://tel.archives-ouvertes.fr/tel-00179729.

Full text
Abstract:
Phytoplankton plays an important role in the Earth's carbon cycle through the absorption of carbon dioxide during photosynthesis. While oceanographic cruises make it possible to acquire high-frequency data at fine spatial and temporal scales, satellite observation provides a synoptic, long-term description of chlorophyll-a, the main pigment of oceanic phytoplankton. Each satellite mission measuring ocean color is limited in ocean coverage (satellite tracks, clouds, etc.). Daily spatial coverage can be increased considerably by combining data from several satellites. The objective of this thesis was to design, develop and test methods for combining ocean color data from the American SeaWiFS and MODIS/Aqua sensors for real-time applications in operational oceanography. Three concepts were retained: an average weighted by sensor error (preserves the sharpness of structures but uses only the existing data), objective analysis (improves spatial coverage but smooths the field in return), and an innovative approach based on the wavelet transform (preserves the sharpness of structures and improves field coverage). The operational feasibility of all three methods was demonstrated.
The value of using combined data was shown by identifying the dominant modes of variability of the oceanographic and biological dynamics in the Southern Ocean, using combined SeaWiFS + MODIS/Aqua data of the circumpolar belt for the period 2002-2006.
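The first combination scheme, averaging weighted by sensor error, can be illustrated with a toy inverse-variance merge of two gappy fields. The chlorophyll values and error variances below are invented for illustration and are not from the thesis:

```python
# Error-weighted merge of two gappy satellite fields (None marks a missing pixel).
def merge(field_a, field_b, var_a, var_b):
    """Inverse-variance weighted average where both sensors saw the pixel;
    fall back to whichever sensor has data, or None if neither does."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    merged = []
    for a, b in zip(field_a, field_b):
        if a is None and b is None:
            merged.append(None)          # no sensor saw this pixel
        elif a is None:
            merged.append(b)
        elif b is None:
            merged.append(a)
        else:
            merged.append((w_a * a + w_b * b) / (w_a + w_b))
    return merged

# Hypothetical chlorophyll-a values (mg/m^3) along one scan line.
seawifs = [0.20, None, 0.30, None]
modis   = [0.26, 0.25, None, None]
print(merge(seawifs, modis, var_a=0.01, var_b=0.02))
```

With these weights the first pixel lands at 0.22, closer to the lower-variance SeaWiFS value, while the gaps of each sensor are filled by the other, which is exactly the coverage gain motivating the combination.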
APA, Harvard, Vancouver, ISO, and other styles
15

Silva, Tiago Vieira da. « Algoritmos evolutivos como estimadores de frequência e fase de sinais elétricos : métodos multiobjetivos e paralelização em FPGAs ». Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-14012014-105606/.

Full text
Abstract:
This work proposes the development of Evolutionary Algorithms (EAs) for the real-time estimation of the basic parameters of electrical signals (frequency, phase and amplitude). The proposed approach must be robust to noise and harmonics in distorted signals, for example due to the presence of faults in the electrical network. EAs show advantages in dealing with such signals. On the other hand, when implemented in software, these algorithms cannot produce real-time responses, as required to use their estimates in a frequency relay or Phasor Measurement Unit. The FPGA design presented in this work parallelizes the estimation in hardware, making EAs viable for real-time electrical signal analysis. Furthermore, it is shown that multi-objective EAs can extract non-evident information from the three phases of the system and properly estimate the parameters even when the per-phase estimates diverge from one another. In short, the two main computational contributions are: the parallelization of an EA in hardware, through its design as an FPGA circuit optimized at the level of basic logic operations, and the multi-objective modeling of the problem, enabling analysis of the signals of each phase both independently and in aggregate. Experimental results show the superiority of the proposed method over a Fourier-transform-based estimator for determining frequency and phase.
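As a rough illustration of the evolutionary estimation idea (a toy software sketch under invented conditions, not the thesis's multi-objective FPGA design), a (1+1) evolution strategy can fit frequency and phase to sampled data:

```python
import math
import random

random.seed(0)

# Toy case: noiseless samples of a 60 Hz signal with phase 0.5 rad
# (hypothetical numbers; amplitude assumed known and fixed at 1).
FS, TRUE_F, TRUE_PH = 1000.0, 60.0, 0.5
samples = [math.sin(2 * math.pi * TRUE_F * n / FS + TRUE_PH) for n in range(100)]

def error(f, ph):
    """Sum of squared residuals between the candidate model and the samples."""
    return sum((math.sin(2 * math.pi * f * n / FS + ph) - s) ** 2
               for n, s in enumerate(samples))

# (1+1) evolution strategy: mutate the current estimate, keep the child
# only when it improves the fit.
best = (55.0, 0.0)                 # deliberately poor initial guess
best_err = error(*best)
for _ in range(20000):
    cand = (best[0] + random.gauss(0, 0.5), best[1] + random.gauss(0, 0.1))
    cand_err = error(*cand)
    if cand_err < best_err:
        best, best_err = cand, cand_err

print(best, best_err)              # best drifts toward (60.0, 0.5)
```

The acceptance rule only ever improves the fit, which is what makes such estimators robust on distorted signals; the thesis replaces this serial loop with hardware-parallel evaluation.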
APA, Harvard, Vancouver, ISO, and other styles
16

Lai, Wei-Cheng, et 賴韋誠. « Application of Rational Fraction Polynomials and Multi-objective Genetic Algorithm to Modal Parameter Estimation ». Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7gn2ba.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Mechanical Engineering
105
Modal testing is essential for the identification of important dynamical parameters of a mechanical system. However, commercially available modal testing packages are expensive and inflexible. This thesis aims to employ the popular and powerful package MATLAB as the environment to develop a modal testing program that can be customized to meet the user's needs. The efficiency of a modal testing program depends strongly on the curve fitting algorithm used. Two different curve fitting algorithms are adopted: the rational fraction polynomials (RFP) method and a multi-objective genetic algorithm; their effectiveness for parameter identification is compared. The RFP method is based on the fact that the frequency response function (FRF) of a linear time-invariant system is a rational function of frequency. The least squares method is used to determine the coefficients of the numerator and denominator polynomials. The RFP method can be classified into two types, local curve fitting and global curve fitting, according to whether the FRFs are processed sequentially or simultaneously. The non-dominated sorting genetic algorithm-II (NSGA-II) is used to realize the multi-objective optimization. This algorithm employs non-dominated sorting and crowding distance to select elite individuals for the next generation; in this way, genetic diversity is maintained, premature convergence to a local extremum is avoided, and high computational efficiency is achieved. In this thesis, we develop programs based on RFP and NSGA-II. Some benchmark tests that may present difficulties for parameter identification, for example modes with nodes, high damping ratios, and double roots, are used to evaluate the two methods. Possible guidelines for improving both methods are proposed.
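The two NSGA-II ingredients named above, non-dominated sorting and crowding distance, can be sketched as follows (toy two-objective points, both minimized; a simplified illustration, not the thesis code):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(points):
    """Split points into successive Pareto fronts (rank 1 first)."""
    remaining, fronts = list(points), []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def crowding_distance(front):
    """Per-point crowding distance; boundary points get infinity so that
    selection keeps the extremes and spreads the rest of the front."""
    dist = {p: 0.0 for p in front}
    for m in range(len(front[0])):
        ordered = sorted(front, key=lambda p: p[m])
        span = ordered[-1][m] - ordered[0][m] or 1.0
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")
        for i in range(1, len(ordered) - 1):
            dist[ordered[i]] += (ordered[i + 1][m] - ordered[i - 1][m]) / span
    return dist

pts = [(1, 5), (2, 3), (3, 1), (2, 4), (4, 4)]
print(fast_nondominated_sort(pts))
# -> [[(1, 5), (2, 3), (3, 1)], [(2, 4)], [(4, 4)]]
```

Elites are then picked front by front, breaking ties within a front by larger crowding distance, which is how NSGA-II keeps diversity while avoiding premature convergence.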
APA, Harvard, Vancouver, ISO, and other styles
17

DI, FINA DARIO. « Multi-Target Tracking and Facial Attribute Estimation in Smart Environments ». Doctoral thesis, 2016. http://hdl.handle.net/2158/1029030.

Full text
Abstract:
This dissertation presents a study of three different computer vision topics with applications to smart environments. We first propose a solution that improves multi-target data association based on l1-regularized sparse basis expansions. The method aims to improve the data association process by addressing problems like occlusion and change of appearance. Experimental results show that, for the pure data association problem, our approach achieves state-of-the-art results on standard benchmark datasets. Next, we extend this data association approach with a novel technique, based on a weighted version of sparse reconstruction, that enforces long-term consistency in multi-target tracking. We introduce a two-phase approach that first performs local data association and then periodically uses accumulated usage statistics to merge tracklets and enforce long-term, global consistency in tracks. The result is a complete, end-to-end tracking system that reduces tracklet fragmentation and ID switches and improves the overall quality of tracking. Finally, we propose a method to jointly estimate facial characteristics such as gender, age, ethnicity and head pose. We develop a random-forest-based method built around a new splitting criterion for multi-objective estimation. Our system achieves results comparable to the state of the art and has the additional advantage of simultaneously estimating multiple facial characteristics from a single pool of image features rather than characteristic-specific ones.
APA, Harvard, Vancouver, ISO, and other styles
18

Chang, Chih Yao, et 張智堯. « Estimating CO2 Emissions from the Perspective of Domestic Consumption in Taiwan with a Multi-objective Programming Model ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/54021221247405725312.

Full text
Abstract:
Master's
National Chengchi University
Graduate Institute of Economics
95
This paper aims at estimating Taiwan's CO2 emissions from the domestic consumption side. Since developed countries can achieve emission reduction goals by relocating their emission-intensive industries to developing countries, estimating national CO2 emissions only from the domestic production side would neglect nations' true emissions and thereby reduce the significance of the Kyoto Protocol, whose aim is to reduce emissions. By contrast, estimating national CO2 emissions from the consumption side can provide more effective incentives for emission reduction, promoting the development of emission-reduction technology or inducing consumers to conserve energy. Consequently, this paper first estimates Taiwan's CO2 emissions from the domestic consumption side through an input-output model, then estimates the import and export emissions of industry sectors, and finally analyzes policies for CO2 emission reduction with a multi-objective programming model and provides suggestions for the development of industries.
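Consumption-side accounting of this kind rests on the Leontief input-output relation x = (I - A)^(-1) y, where final demand y is propagated through inter-industry requirements A before applying per-unit emission factors. A two-sector toy sketch with invented coefficients (not Taiwan's actual tables):

```python
# Consumption-based emissions with a 2-sector Leontief input-output model.
# A[i][j] = input from sector i needed per unit output of sector j;
# y = final (domestic) demand; e = direct CO2 per unit of sectoral output.
# All numbers are hypothetical, for illustration only.
A = [[0.1, 0.2],
     [0.3, 0.1]]
y = [100.0, 50.0]
e = [0.5, 1.2]          # tonnes CO2 per unit of output

# Total output x solves (I - A) x = y; invert the 2x2 system directly.
a, b = 1 - A[0][0], -A[0][1]
c, d = -A[1][0], 1 - A[1][1]
det = a * d - b * c
x = [(d * y[0] - b * y[1]) / det,
     (-c * y[0] + a * y[1]) / det]

emissions = sum(ei * xi for ei, xi in zip(e, x))
print(x, emissions)      # x is roughly [133.33, 100.0], emissions about 186.67
```

Because x exceeds y (each unit of final demand pulls in intermediate production), emissions attributed to consumption are larger than a naive e·y tally, which is the gap the consumption-side estimate is meant to capture.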
APA, Harvard, Vancouver, ISO, and other styles