Dissertations on the topic "Convent of Lindau"

To see the other types of publications on this topic, follow this link: Convent of Lindau.

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Consult the top 50 dissertations for research on the topic "Convent of Lindau".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the online abstract of the work whenever the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and compile an accurate bibliography.

1

Detsch, Denise Trevisoli 1983. „Sobre problemas associados a cones de segunda ordem“. [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306040.

Abstract:
Advisor: Maria Aparecida Diniz Ehrhardt
Master's dissertation – Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Resumo: Este trabalho teve como foco o estudo de problemas SOCP, tanto nos seus aspectos teóricos quanto nos seus aspectos práticos. Problemas SOCP são problemas convexos de otimização nos quais uma função linear é minimizada sobre restrições lineares e restrições de cone quadrático. Tivemos dois objetivos principais: estudar o conceito, as aplicações e os métodos de resolução de problemas SOCP, permitindo verificar a viabilidade de trabalhar com tais problemas; e verificar na prática o benefício de se utilizar uma ferramenta específica de SOCP para a resolução de problemas que se enquadram nessa classe. Para a avaliação prática utilizamos um software de otimização genérica (fmincon) e outro específico de SOCP (CVXOPT). A análise ficou concentrada nos requisitos robustez, número de iterações e variação do tempo com o aumento da dimensão dos problemas. Diante dos resultados obtidos com os testes numéricos, pudemos concluir que é interessante usar SOCP sempre que possível.
Abstract: This dissertation focuses on the study of SOCP problems, both in their theoretical and in their practical aspects. SOCP problems are convex optimization problems in which a linear function is minimized over linear constraints and second-order cone constraints. We had two main objectives: to study the concept, applications and methods for solving SOCP problems, making it possible to verify the feasibility of working with such problems; and to verify in practice the benefit of using an SOCP-specific tool for the resolution of problems of this class. The experimental evaluation used a generic optimization solver (fmincon) and an SOCP-specific one (CVXOPT). The analysis concentrated on robustness, the number of iterations, and the growth of running time with the dimension of the problems. From the results obtained with the numerical tests, we concluded that SOCP is worth using whenever possible.
Master's degree in Applied Mathematics
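To make the SOCP setting above concrete, here is a minimal sketch of a second-order cone program solved with CVXOPT's socp interface (one of the tools compared in this dissertation); the toy data are invented for illustration and are not taken from the thesis.

```python
import numpy as np
from cvxopt import matrix, solvers

# Toy SOCP: minimize f'x subject to ||A x + b||_2 <= c'x + d.
# CVXOPT expects each cone constraint as Gq[k] x + s = hq[k] with s in a second-order
# cone, so the constraint above is encoded as Gq = [-c'; -A], hq = [d; b].
f = np.array([1.0, 2.0])
A = np.eye(2)
b = np.zeros(2)
c = np.zeros(2)
d = 1.0                       # i.e. ||x||_2 <= 1

Gq = [matrix(np.vstack([-c.reshape(1, -1), -A]))]
hq = [matrix(np.hstack([[d], b]))]

solvers.options["show_progress"] = False
sol = solvers.socp(matrix(f), Gq=Gq, hq=hq)
print(sol["status"], np.array(sol["x"]).ravel())   # roughly -(1, 2)/sqrt(5)
```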
2

Moura, Phablo Fernando Soares. „Recoloração convexa de grafos: algoritmos e poliedros“. Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-19112013-193725/.

Abstract:
Neste trabalho, estudamos o problema a recoloração convexa de grafos, denotado por RC. Dizemos que uma coloração dos vértices de um grafo G é convexa se, para cada cor tribuída d, os vértices de G com a cor d induzem um subgrafo conexo. No problema RC, é dado um grafo G e uma coloração de seus vértices, e o objetivo é recolorir o menor número possível de vértices de G tal que a coloração resultante seja convexa. A motivação para o estudo deste problema surgiu em contexto de árvores filogenéticas. Sabe-se que este problema é NP-difícil mesmo quando G é um caminho. Mostramos que o problema RC parametrizado pelo número de mudanças de cor é W[2]-difícil mesmo se a coloração inicial usa apenas duas cores. Além disso, provamos alguns resultados sobre a inaproximabilidade deste problema. Apresentamos uma formulação inteira para a versão com pesos do problema RC em grafos arbitrários, e então a especializamos para o caso de árvores. Estudamos a estrutura facial do politopo definido como a envoltória convexa dos pontos inteiros que satisfazem as restrições da formulação proposta, apresentamos várias classes de desigualdades que definem facetas e descrevemos os correspondentes algoritmos de separação. Implementamos um algoritmo branch-and-cut para o problema RC em árvores e mostramos os resultados computacionais obtidos com uma grande quantidade de instâncias que representam árvores filogenéticas reais. Os experimentos mostram que essa abordagem pode ser usada para resolver instâncias da ordem de 1500 vértices em 40 minutos, um desempenho muito superior ao alcançado por outros algoritmos propostos na literatura.
In this work we study the convex recoloring problem of graphs, denoted by CR. We say that a vertex coloring of a graph G is convex if, for each assigned color d, the vertices of G with color d induce a connected subgraph. In the CR problem, given a graph G and a coloring of its vertices, we want to find a recoloring that is convex and minimizes the number of recolored vertices. The motivation for investigating this problem has its roots in the study of phylogenetic trees. It is known that this problem is NP-hard even when G is a path. We show that the problem CR parameterized by the number of color changes is W[2]-hard even if the initial coloring uses only two colors. Moreover, we prove some inapproximation results for this problem. We also show an integer programming formulation for the weighted version of this problem on arbitrary graphs, and then specialize it for trees. We study the facial structure of the polytope defined as the convex hull of the integer points satisfying the restrictions of the proposed ILP formulation, present several classes of facet-defining inequalities and the corresponding separation algorithms. We also present a branch-and-cut algorithm that we have implemented for the special case of trees, and show the computational results obtained with a large number of instances. We considered instances which are real phylogenetic trees. The experiments show that this approach can be used to solve instances up to 1500 vertices in 40 minutes, comparing favorably to other approaches that have been proposed in the literature.
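As a small illustration of the central definition used above (every color class must induce a connected subgraph), here is a sketch of a convexity check written with networkx; it only verifies a given coloring and is not the branch-and-cut method developed in the thesis.

```python
import networkx as nx

def is_convex_coloring(G, coloring):
    """Return True if every color class induces a connected subgraph of G."""
    for color in set(coloring.values()):
        nodes = [v for v, c in coloring.items() if c == color]
        if not nx.is_connected(G.subgraph(nodes)):
            return False
    return True

# Path on 4 vertices: the coloring 1-2-1-2 is not convex, while 1-1-2-2 is.
P4 = nx.path_graph(4)
print(is_convex_coloring(P4, {0: 1, 1: 2, 2: 1, 3: 2}))  # False
print(is_convex_coloring(P4, {0: 1, 1: 1, 2: 2, 3: 2}))  # True
```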
3

Souza, Valeska Martins de. „PROJETO DE CONTROLADOR ROBUSTO VIA OTIMIZAÇÃO CONVEXA“. Universidade Federal do Maranhão, 2002. http://tedebc.ufma.br:8080/jspui/handle/tede/319.

Abstract:
In this dissertation, a new convex optimization methodology based on linear matrix inequalities is proposed as the basic instrument for the synthesis of robust controllers for discrete-time linear dynamic systems that meet worst-case disturbance specifications.
Nesta dissertação é proposta uma nova metodologia de otimização convexa baseada em desigualdades matriciais lineares como instrumento básico para a síntese de controladores robustos de sistemas dinâmicos discretos e lineares que atendam às especificações de perturbações de pior caso.
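The abstract does not state which LMIs are used, so as a generic illustration of LMI-based machinery, the sketch below merely checks discrete-time stability by searching for a Lyapunov matrix P with cvxpy; the modeling tool and the example system matrix are assumptions of this sketch, not part of the dissertation.

```python
import numpy as np
import cvxpy as cp

# Feasibility of the discrete-time Lyapunov LMIs:  P >> 0,  A' P A - P << 0.
A = np.array([[0.9, 0.2],
              [0.0, 0.5]])          # an example (stable) system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P @ A - P << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)                  # "optimal" indicates the LMIs are feasible
```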
4

Deswarte, Raphaël. „Régression linéaire et apprentissage : contributions aux méthodes de régularisation et d’agrégation“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX047/document.

Abstract:
Cette thèse aborde le sujet de la régression linéaire dans différents cadres, liés notamment à l’apprentissage. Les deux premiers chapitres présentent le contexte des travaux, leurs apports et les outils mathématiques utilisés. Le troisième chapitre est consacré à la construction d’une fonction de régularisation optimale, permettant par exemple d’améliorer sur le plan théorique la régularisation de l’estimateur LASSO. Le quatrième chapitre présente, dans le domaine de l’optimisation convexe séquentielle, des accélérations d’un algorithme récent et prometteur, MetaGrad, et une conversion d’un cadre dit “séquentiel déterministe" vers un cadre dit “batch stochastique" pour cet algorithme. Le cinquième chapitre s’intéresse à des prévisions successives par intervalles, fondées sur l’agrégation de prédicteurs, sans retour d’expérience intermédiaire ni modélisation stochastique. Enfin, le sixième chapitre applique à un jeu de données pétrolières plusieurs méthodes d’agrégation, aboutissant à des prévisions ponctuelles court-terme et des intervalles de prévision long-terme
This thesis tackles the topic of linear regression, within several frameworks, mainly linked to statistical learning. The first and second chapters present the context, the results and the mathematical tools of the manuscript. In the third chapter, we provide a way of building an optimal regularization function, improving for instance, in a theoretical way, the LASSO estimator. The fourth chapter presents, in the field of online convex optimization, speed-ups for a recent and promising algorithm, MetaGrad, and shows how to transfer its guarantees from a so-called “online deterministic setting" to a “stochastic batch setting". In the fifth chapter, we introduce a new method to forecast successive intervals by aggregating predictors, without intermediate feedback nor stochastic modeling. The sixth chapter applies several aggregation methods to an oil production dataset, forecasting short-term precise values and long-term intervals
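Since the third chapter concerns regularization of the LASSO estimator, here is a minimal background sketch of the standard LASSO solved by proximal gradient (ISTA) on synthetic data; it illustrates the baseline estimator only, not the optimal regularization function constructed in the thesis.

```python
import numpy as np

def lasso_ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
y = A @ x_true + 0.1 * rng.standard_normal(50)
print(np.nonzero(lasso_ista(A, y, lam=1.0))[0][:10])   # indices of recovered support
```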
5

Ostrovskii, Dmitrii. „Reconstruction adaptative des signaux par optimisation convexe“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM004/document.

Abstract:
Nous considérons le problème de débruitage d'un signal ou d'une image observés dans le bruit gaussien. Dans ce problème les estimateurs linéaires classiques sont quasi-optimaux quand l'ensemble des signaux, qui doit être convexe et compact, est connu a priori. Si cet ensemble n'est pas spécifié, la conception d'un estimateur adaptatif qui ``ne connait pas'' la structure cachée du signal reste un problème difficile. Dans cette thèse, nous étudions une nouvelle famille d'estimateurs des signaux satisfaisant certains propriétés d'invariance dans le temps. De tels signaux sont caractérisés par leur structure harmonique, qui est généralement inconnu dans la pratique.Nous proposons des nouveaux estimateurs capables d'exploiter la structure harmonique inconnue du signal è reconstruire. Nous démontrons que ces estimateurs obéissent aux divers "inégalités d'oracle," et nous proposons une implémentation algorithmique numériquement efficace de ces estimateurs basée sur des algorithmes d'optimisation de "premier ordre." Nous évaluons ces estimateurs sur des données synthétiques et sur des signaux et images réelles
We consider the problem of denoising a signal observed in Gaussian noise. In this problem, classical linear estimators are quasi-optimal provided that the set of possible signals is convex, compact, and known a priori. However, when the set is unspecified, designing an estimator which does not "know" the underlying structure of a signal yet has favorable theoretical guarantees of statistical performance remains a challenging problem. In this thesis, we study a new family of estimators for statistical recovery of signals satisfying certain time-invariance properties. Such signals are characterized by their harmonic structure, which is usually unknown in practice. We propose new estimators which are capable of exploiting the unknown harmonic structure of the signal to be reconstructed. We demonstrate that these estimators admit theoretical performance guarantees, in the form of oracle inequalities, in a variety of settings. We provide efficient algorithmic implementations of these estimators via first-order optimization algorithms with non-Euclidean geometry, and evaluate them on synthetic data as well as on some real-world signals and images.
6

Cox, Bruce. „Applications of accuracy certificates for problems with convex structure“. Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39489.

Abstract:
Applications of accuracy certificates for problems with convex structure   This dissertation addresses the efficient generation and potential applications of accuracy certificates in the framework of “black-box-represented” convex optimization problems - convex problems where the objective and the constraints are represented by  “black boxes” which, given on input a value x of the argument, somehow (perhaps in a fashion unknown to the user) provide on output the values and the derivatives of the objective and the constraints at x. The main body of the dissertation can be split into three parts.  In the first part, we provide our background --- state of the art of the theory of accuracy certificates for black-box-represented convex optimization. In the second part, we extend the toolbox of black-box-oriented convex optimization algorithms with accuracy certificates by equipping with these certificates a state-of-the-art algorithm for large-scale nonsmooth black-box-represented problems with convex structure, specifically, the Non-Euclidean Restricted Memory Level (NERML) method. In the third part, we present several novel academic applications of accuracy certificates. The dissertation is organized as follows: In Chapter 1, we motivate our research goals and present a detailed summary of our results. In Chapter 2, we outline the relevant background, specifically, describe four generic black-box-represented generic problems with convex structure (Convex Minimization, Convex-Concave Saddle Point, Convex Nash Equilibrium, and Variational Inequality with Monotone Operator), and outline the existing theory of accuracy certificates for these problems. In Chapter 3, we develop techniques for equipping with on-line accuracy certificates the state-of-the-art NERML algorithm for large-scale nonsmooth problems with convex structure, both in the cases when the domain of the problem is a simple solid and in the case when the domain is given by Separation oracle. In Chapter 4, we develop  several novel academic applications of accuracy certificates, primarily to (a) efficient certifying emptiness of the intersection of finitely many solids given by Separation oracles, and (b) building efficient algorithms for convex minimization over solids given by Linear Optimization oracles (both precise and approximate). In Chapter 5, we apply accuracy certificates to efficient decomposition of “well structured” convex-concave saddle point problems, with applications to computationally attractive decomposition of a large-scale LP program with the constraint matrix which becomes block-diagonal after eliminating a relatively small number of possibly dense columns (corresponding to “linking variables”) and possibly dense rows (corresponding to “linking constraints”).
7

Delyon, Alexandre. „Shape Optimisation Problems Around the Geometry of Branchiopod Eggs“. Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0123.

Abstract:
Dans cette thèse nous nous intéressons à un problème de mathématiques appliquées à la biologie. Le but est d'expliquer la forme des œufs d'Eulimnadia, un petit animal appartenant à la classe des Branchiopodes, et plus précisément les Limnadiides. En effet, d'après la théorie de l'évolution il est raisonnable de penser que la forme des êtres vivants où des objets issus d'êtres vivants est optimisée pour garantir la survie et l'expansion de l'espèce en question. Pour ce faire nous avons opté pour la méthode de modélisation inverse. Cette dernière consiste à proposer une explication biologique à la forme des œufs, puis de la modéliser sous forme d'un problème de mathématique, et plus précisément d'optimisation de forme, que l'on cherche à résoudre pour enfin comparer la forme obtenue à la forme réelle des œufs. Nous avons étudié deux modélisations, l'une amenant à des problèmes de géométrie et de packing, l'autre à des problèmes d'optimisation de forme en élasticité linéaire. Durant la résolution du premier problème issue de la modélisation, une autre question mathématique s'est naturellement posée à nous, et nous sommes parvenus à la résoudre, donnant lieu à l'obtention du diagramme de Blaschke Santalo (A,D,r) complet. En d'autre mots nous pouvons répondre à la question suivante : étant donné trois nombres A,D, et r positifs, est-il possible de trouver un ensemble convexe du plan dont l'aire est égale à A, le diamètre égal à D, et le rayon du cercle inscrit égal à r ?
In this thesis we are interested in a problem of mathematics applied to biology. The aim is to explain the shape of the eggs of Eulimnadia, a small animal belonging to the class Branchiopods}, and more precisely the Limnadiidae. Indeed, according to the theory of evolution it is reasonable to think that the shape of living beings or objects derived from living beings is optimized to ensure the survival and expansion of the species in question. To do this we have opted for the inverse modeling method. The latter consists in proposing a biological explanation for the shape of the eggs, then modeling it in the form of a mathematical problem, and more precisely a shape optimisation problem which we try to solve and finally compare the shape obtained to the real one. We have studied two models, one leading to geometry and packing problems, the other to shape optimisation problems in linear elasticity. After the resolution of the first modeling problem, another mathematical question naturally arose to us, and we managed to solve it, resulting in the complete Blaschke-Santalò (A,D,r) diagram. In other words we can answer the following question: given three positive numbers A,D, and r, and it is possible to find a convex set of the plane whose area is equal to A, diameter equal to D, and radius of the inscribed circle equal to r
8

Karas, Elizabeth Wegner. „Exemplos de trajetória central mal comportada em otimização convexa e um algoritmo de filtros para programação não linear“. Florianópolis, SC, 2002. http://repositorio.ufsc.br/xmlui/handle/123456789/82651.

Abstract:
Doctoral thesis – Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Produção
In this work we present some examples of badly behaved central paths in convex optimization. Some of these examples resemble a TV antenna, containing infinitely many horizontal segments of constant length. Others have a zigzag shape with infinite variation. We show that these examples can occur even when the functions involved are infinitely differentiable. We also present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of two totally independent phases, and the only coupling between them is established by the filter. Under standard hypotheses, we show two results: for a filter with a minimal size, the algorithm generates a stationary accumulation point; for a slightly larger filter, all accumulation points are stationary.
9

Oliveira, André Marcorin de. „Estimating and control of Markov jump linear systems with partial observation of the operation mode“. Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-01032019-144518/.

Abstract:
In this thesis, we present some contributions to the theory of Markov jump linear systems in a context of partial information on the Markov chain. We consider that the state of the Markov chain cannot be measured, but instead there is only an observed variable that could model an asynchronous phenomenon between the application and the plant, or a simple fault detection and isolation device. In this formulation, we investigate the problem of designing controllers and filters depending only on the observed variable in the context of H2, H∞, and mixed H2/H∞ control theory. Numerical examples and academic applications are presented for active fault-tolerant control systems and networked control systems.
Nesta tese, apresentamos algumas contribuições para a teoria de sistemas lineares com saltos markovianos em um contexto de observação parcial da cadeia de Markov. Consideramos que o estado da cadeia de Markov não pode ser medido, porém existe uma variável observada que pode modelar um fenômeno assíncrono entre a aplicação e a planta, ou ainda um dispositivo de detecção de falhas simples. Através desse modelo, investigamos o problema da síntese de controladores e filtros que dependem somente da variável observada no contexto das teorias de controle H2, H∞, e misto H2/H∞. Exemplos numéricos e aplicações acadêmicas são apresentadas no âmbito dos sistemas de controle tolerantes a falhas e dos sistemas de controle através da rede.
10

Marins, Fernando Augusto Silva. „Estudos de programas em redes lineares por partes“. [s.n.], 1987. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260695.

Abstract:
Advisor: Clovis Perin Filho
Doctoral thesis – Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica
Resumo: Este trabalho propõe um refinamento do método simplex especializado para Programas em Redes Lineares por Partes, denominado MSFV. Este refinamento é uma extensão do conceito de bases fortemente viáveis para Programas em Redes, desenvolvido por W.H. Cunningham. A viabilidade forte é mantida por meio de uma regra de saída específica para a escolha da variável básica que deve deixar a base em cada iteração do simplex. Prova-se que o uso de viabilidade forte, em conjunto com regras de entrada adequadas, evita os fenômenos de ciclagem ("cycling") e de empacamento ("stalling"). Além disto, são apresentados resultados computacionais testando o MSFV combinado com várias regras de entrada. Adicionalmente, é realizada uma investigação do desempenho do MSFV incorporando a Técnica de Mudança de Escala, proposta por Edmonds e Karp.
Abstract: This work proposes a refinement of the simplex method specialized for solving Piecewise-Linear Network Programs, named MSFV. Such a refinement is an extension of the strongly feasible bases concept for Network Programs, developed by W.H. Cunningham. Strong feasibility is preserved by a specific leaving-variable selection rule at each simplex iteration. It is proved that the use of strong feasibility together with adequate entering-variable selection rules prevents two phenomena: cycling (a cyclic sequence of degenerate iterations) and stalling (an exponentially long sequence of degenerate iterations). Moreover, computational tests of MSFV combined with several entering-variable selection rules are reported. In addition, the performance of MSFV with the Scaling Technique proposed by Edmonds and Karp is investigated.
Doctorate
11

Zaourar, Sofia. „Optimisation convexe non-différentiable et méthodes de décomposition en recherche opérationnelle“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM099.

Abstract:
Les méthodes de décomposition sont une application du concept de diviser pour régner en optimisation. L'idée est de décomposer un problème d'optimisation donné en une séquence de sous-problèmes plus faciles à résoudre. Bien que ces méthodes soient les meilleures pour un grand nombre de problèmes de recherche opérationnelle, leur application à des problèmes réels de grande taille présente encore de nombreux défis. Cette thèse propose des améliorations méthodologiques et algorithmiques de méthodes de décomposition. Notre approche est basée sur l'analyse convexe et l'optimisation non-différentiable. Dans la décomposition par les contraintes (ou relaxation lagrangienne) du problème de planification de production électrique, même les sous-problèmes sont trop difficiles pour être résolus exactement. Mais des solutions approchées résultent en des prix instables et chahutés. Nous présentons un moyen simple d'améliorer la structure des prix en pénalisant leurs oscillations, en utilisant en particulier une régularisation par variation totale. La consistance de notre approche est illustrée sur des problèmes d'EDF. Nous considérons ensuite la décomposition par les variables (ou de Benders) qui peut avoir une convergence excessivement lente. Avec un point de vue d'optimisation non-différentiable, nous nous concentrons sur l'instabilité de l'algorithme de plans sécants sous-jacent à la méthode. Nous proposons une stabilisation quadratique de l'algorithme de Benders, inspirée par les méthodes de faisceaux en optimisation convexe. L'accélération résultant de cette stabilisation est illustrée sur des problèmes de conception de réseau et de localisation de plates-formes de correspondance (hubs). Nous nous intéressons aussi plus généralement aux problèmes d'optimisation convexe non-différentiable dont l'objectif est coûteux à évaluer. C'est en particulier une situation courante dans les procédures de décomposition. Nous montrons qu'il existe souvent des informations supplémentaires sur le problème, faciles à obtenir mais avec une précision inconnue, qui ne sont pas utilisées dans les algorithmes. Nous proposons un moyen d'incorporer ces informations incontrôlées dans des méthodes classiques d'optimisation convexe non-différentiable. Cette approche est appliquée avec succès à desproblèmes d'optimisation stochastique. Finalement, nous introduisons une stratégie de décomposition pour un problème de réaffectation de machines. Cette décomposition mène à une nouvelle variante de problèmes de conditionnement vectoriel (vectorbin packing) où les boîtes sont de taille variable. Nous proposons des heuristiques efficaces pour ce problème, qui améliorent les résultats de l'état de l'art du conditionnement vectoriel. Une adaptation de ces heuristiques permet de construire des solutions réalisables au problème de réaffectation de machines de Google
Decomposition methods are an application of the divide and conquer principle to large-scale optimization. Their idea is to decompose a given optimization problem into a sequence of easier subproblems. Although successful for many applications, these methods still present challenges. In this thesis, we propose methodological and algorithmic improvements of decomposition methods and illustrate them on several operations research problems. Our approach heavily relies on convex analysis and nonsmooth optimization. In constraint decomposition (or Lagrangian relaxation) applied to short-term electricity generation management, even the subproblems are too difficult to solve exactly. When solved approximately though, the obtained prices show an unstable noisy behaviour. We present a simple way to improve the structure of the prices by penalizing their noisy behaviour, in particular using a total variation regularization. We illustrate the consistency of our regularization on real-life problems from EDF. We then consider variable decomposition (or Benders decomposition), that can have a very slow convergence. With a nonsmooth optimization point of view on this method, we address the instability of Benders cutting-planes algorithm. We present an algorithmic stabilization inspired by bundle methods for convex optimization. The acceleration provided by this stabilization is illustrated on network design andhub location problems. We also study more general convex nonsmooth problems whose objective function is expensive to evaluate. This situation typically arises in decomposition methods. We show that it often exists extra information about the problem, cheap but with unknown accuracy, that is not used by the algorithms. We propose a way to incorporate this coarseinformation into classical nonsmooth optimization algorithms and apply it successfully to two-stage stochastic problems.Finally, we introduce a decomposition strategy for the machine reassignment problem. This decomposition leads to a new variant of vector bin packing problems, where the bins have variable sizes. We propose fast and efficient heuristics for this problem that improve on state of the art results of vector bin packing problems. An adaptation of these heuristics is also able to generate feasible solutions for Google instances of the machine reassignment problem
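To illustrate the price-smoothing idea mentioned above (penalizing the oscillations of Lagrangian prices with a total-variation term), here is a small sketch on a synthetic price signal; it assumes the cvxpy modeling package and invented data, and does not reproduce the EDF problems or the full decomposition.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
T = 48
smooth = np.sin(np.linspace(0, 3 * np.pi, T))
noisy_prices = smooth + 0.4 * rng.standard_normal(T)   # "chattering" dual prices

p = cp.Variable(T)
lam = 1.0
# Stay close to the computed prices while penalizing their total variation.
objective = cp.Minimize(cp.sum_squares(p - noisy_prices) + lam * cp.tv(p))
cp.Problem(objective).solve()
print(np.round(p.value[:8], 2))
```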
12

Borghi, Alexandre. „Adaptation de l’algorithmique aux architectures parallèles“. Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112205/document.

Abstract:
Dans cette thèse, nous nous intéressons à l'adaptation de l'algorithmique aux architectures parallèles. Les plateformes hautes performances actuelles disposent de plusieurs niveaux de parallélisme et requièrent un travail considérable pour en tirer parti. Les superordinateurs possèdent de plus en plus d'unités de calcul et sont de plus en plus hétérogènes et hiérarchiques, ce qui complexifie d'autant plus leur utilisation.Nous nous sommes intéressés ici à plusieurs aspects permettant de tirer parti des architectures parallèles modernes. Tout au long de cette thèse, plusieurs problèmes de natures différentes sont abordés, de manière plus théorique ou plus pratique selon le cadre et l'échelle des plateformes parallèles envisagées.Nous avons travaillé sur la modélisation de problèmes dans le but d'adapter leur formulation à des solveurs existants ou des méthodes de résolution existantes, en particulier dans le cadre du problème de la factorisation en nombres premiers modélisé et résolu à l'aide d'outils de programmation linéaire en nombres entiers.La contribution la plus importante de cette thèse correspond à la conception d'algorithmes pensés dès le départ pour être performants sur les architectures modernes (processeurs multi-coeurs, Cell, GPU). Deux algorithmes pour résoudre le problème du compressive sensing ont été conçus dans ce cadre : le premier repose sur la programmation linéaire et permet d'obtenir une solution exacte, alors que le second utilise des méthodes de programmation convexe et permet d'obtenir une solution approchée.Nous avons aussi utilisé une bibliothèque de parallélisation de haut niveau utilisant le modèle BSP dans le cadre de la vérification de modèles pour implémenter de manière parallèle un algorithme existant. A partir d'une unique implémentation, cet outil rend possible l'utilisation de l'algorithme sur des plateformes disposant de différents niveaux de parallélisme, tout en ayant des performances de premier ordre sur chacune d'entre elles. En l'occurrence, la plateforme de plus grande échelle considérée ici est le cluster de machines multiprocesseurs multi-coeurs. De plus, dans le cadre très particulier du processeur Cell, une implémentation a été réécrite à partir de zéro pour tirer parti de celle-ci
In this thesis, we are interested in adapting algorithms to parallel architectures. Current high performance platforms have several levels of parallelism and require a significant amount of work to make the most of them. Supercomputers possess more and more computational units and are more and more heterogeneous and hierarchical, which make their use very difficult.We take an interest in several aspects which enable to benefit from modern parallel architectures. Throughout this thesis, several problems with different natures are tackled, more theoretically or more practically according to the context and the scale of the considered parallel platforms.We have worked on modeling problems in order to adapt their formulation to existing solvers or resolution methods, in particular in the context of integer factorization problem modeled and solved with integer programming tools.The main contribution of this thesis corresponds to the design of algorithms thought from the beginning to be efficient when running on modern architectures (multi-core processors, Cell, GPU). Two algorithms which solve the compressive sensing problem have been designed in this context: the first one uses linear programming and enables to find an exact solution, whereas the second one uses convex programming and enables to find an approximate solution.We have also used a high-level parallelization library which uses the BSP model in the context of model checking to implement in parallel an existing algorithm. From a unique implementation, this tool enables the use of the algorithm on platforms with different levels of parallelism, while obtaining cutting edge performance for each of them. In our case, the largest-scale platform that we considered is the cluster of multi-core multiprocessors. More, in the context of the very particular Cell processor, an implementation has been written from scratch to take benefit from it
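As a sketch of the first approach mentioned above (recovering a compressive-sensing solution exactly via linear programming), the classical basis-pursuit reformulation min ||x||_1 s.t. Ax = b is written below as an LP and solved with scipy; this is the textbook formulation, not the parallel implementations developed in the thesis, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b by writing x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                      # minimize sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = basis_pursuit(A, A @ x_true)
print(np.round(x_hat[[3, 17, 42]], 3))      # should recover the three spikes
```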
13

Calmon, Andre du Pin. „Variação do controle como fonte de incerteza“. [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259270.

Abstract:
Advisor: João Bosco Ribeiro do Val
Master's dissertation – Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: Este trabalho apresenta a caracterização teórica e a estratégia de controle para sistemas estocásticos em tempo discreto onde a variação da ação de controle aumenta a incerteza sobre o estado (sistemas VCAI). Este tipo de sistema possui várias aplicações práticas, como em problemas de política monetária, medicina e, de forma geral, em problemas onde um modelo dinâmico completo do sistema é complexo demais para ser conhecido. Utilizando ferramentas da análise de funções não suaves, mostra-se para um sistema VCAI multidimensional que a convexidade é uma invariante da função valor da Programação Dinâmica quando o custo por estágio é convexo. Esta estratégia indica a existência de uma região no espaço de estados onde a ação ótima de controle é de não variação (denominada região de não-variação), estando de acordo com a natureza cautelosa do controle de sistemas subdeterminados. Adicionalmente, estudou-se algoritmos para a obtenção da política ótima de controle para sistemas VCAI, com ênfase no caso mono-entrada avaliado através de uma função custo quadrática. Finalmente, os resultados obtidos foram aplicados no problema da condução da política monetária pelo Banco Central.
Abstract: This dissertation presents a theoretical framework and the control strategy for discrete-time stochastic systems for which the control variations increase state uncertainty (CVIU systems). This type of system model can be useful in many practical situations, such as in monetary policy problems, medicine and biology, and, in general, in problems for which a complete dynamic model is too complex to be feasible. The optimal control strategy for a multidimensional CVIU system associated with a convex cost functional is devised using dynamic programming and tools from nonsmooth analysis. Furthermore, this strategy points to a region in the state space in which the optimal action is of no variation (the region of no variation), as expected from the cautionary nature of controlling underdetermined systems. Numerical strategies for obtaining the optimal policy in CVIU systems were developed, with focus on the single-input input case evaluated through a quadratic cost functional. These results are illustrated through a numerical example in economics.
Master's degree in Electrical Engineering (Automation)
14

Baratov, Rishat. „Efficient conic decomposition and projection onto a cone in a Banach ordered space“. Thesis, University of Ballarat, 2005. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/61401.

15

Ilyes, Amy Louise. „Using linear programming to solve convex quadratic programming problems“. Case Western Reserve University School of Graduate Studies / OhioLINK, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=case1056644216.

16

Duda, Jakub. „Aspects of delta-convexity /“. free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p3115539.

17

Hou, Liangshao. „Solving convex programming with simple convex constraints“. HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/739.

Abstract:
The problems we studied in this thesis are linearly constrained convex programming (LCCP) and nonnegative matrix factorization (NMF). The resolutions of these two problems are all closely related to convex programming with simple convex constraints. The work can mainly be described in the following three parts. Firstly, an interior point algorithm following a parameterized central path for linearly constrained convex programming is proposed. The convergence and polynomial-time complexity are proved under the assumption that the Hessian of the objective function is locally Lipschitz continuous. Also, an initialization strategy is proposed, and some numerical results are provided to show the efficiency of the proposed algorithm. Secondly, the path following algorithm is promoted for general barrier functions. A class of barrier functions is proposed, and their corresponding paths are proved to be continuous and converge to optimal solutions. Applying the path following algorithm to these paths provide more flexibility to interior point methods. With some adjustments, the initialization method is equipped to validate implementation and convergence. Thirdly, we study the convergence of hierarchical alternating least squares algorithm (HALS) and its fast form (Fast HALS) for nonnegative matrix factorization. The coordinate descend idea for these algorithms is restated. With a precise estimation of objective reduction, some limiting properties are illustrated. The accumulation points are proved to be stationary points, and some adjustments are proposed to improve the implementation and efficiency
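For reference, here is a compact sketch of the column-wise HALS updates whose convergence is analyzed in the thesis, written in plain numpy on random data; convergence safeguards and the "fast" variant are omitted, and the data are invented.

```python
import numpy as np

def hals_nmf(X, r, n_iter=200, eps=1e-10):
    """Approximate X (m x n, nonnegative) by W @ H with W, H >= 0 via HALS updates."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Update the rows of H one at a time (exact nonnegative coordinate step).
        WtX, WtW = W.T @ X, W.T @ W
        for k in range(r):
            H[k] = np.maximum(H[k] + (WtX[k] - WtW[k] @ H) / (WtW[k, k] + eps), 0.0)
        # Update the columns of W one at a time.
        XHt, HHt = X @ H.T, H @ H.T
        for k in range(r):
            W[:, k] = np.maximum(W[:, k] + (XHt[:, k] - W @ HHt[:, k]) / (HHt[k, k] + eps), 0.0)
    return W, H

X = np.random.default_rng(1).random((40, 30))
W, H = hals_nmf(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # relative residual
```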
18

Araújo, Pedro Felippe da Silva. „Programação linear e suas aplicações: definição e métodos de soluções“. Universidade Federal de Goiás, 2013. http://repositorio.bc.ufg.br/tede/handle/tede/3126.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Problems involving the idea of optimization are found in various fields of study: in Economics, one seeks cost minimization and profit maximization in a firm or country given the available budget; in Nutrition, one seeks to supply the essential daily nutrients at the lowest possible cost, considering the financial capacity of the individual; in Chemistry, one studies the minimum pressure and temperature necessary to accomplish a specific chemical reaction in the shortest possible time; in Engineering, one seeks the lowest cost for producing an aluminium alloy by mixing various raw materials while obeying minimum and maximum restrictions on the respective elements in the alloy. All the examples cited, plus a multitude of other situations, find their resolution through Linear Programming. These are problems of minimizing or maximizing a linear function subject to linear inequalities or equalities, with the aim of finding the best solution to the problem. To this end, this work presents methods for solving Linear Programming problems, with emphasis on geometric solutions and on the Simplex Method, the algebraic form of solution. Various situations which may fit some of these problems are shown, from general cases to more specific ones. Before eventually getting to how to solve linear programming problems, the working ground of this type of optimization, Convex Sets, is built up. Definitions and theorems essential to the understanding and development of these problems are presented, together with discussions of the efficiency of the methods applied. It is also shown that there are cases to which the presented solutions do not apply, but most fit efficiently, even if only as a good approximation.
Problemas que envolvem a ideia de otimização estão presentes em vários campos de estudo como, por exemplo, na Economia se busca a minimização de custos e a maximização do lucro em uma firma ou país, a partir do orçamento disponível; na Nutrição se procura suprir os nutrientes essenciais diários com o menor custo possível, considerando a capacidade financeira do indivíduo; na Química se estuda a pressão e a temperatura mínimas necessárias para realizar uma reação química específica no menor tempo possível; na Engenharia se busca o menor custo para a confecção de uma liga de alumínio misturando várias matérias-primas e obedecendo as restrições mínimas e máximas dos respectivos elementos presentes na liga. Todos os exemplos citados, além de uma infinidade de outras situações, buscam sua solução através da Programação Linear. São problemas de minimizar ou maximizar uma função linear sujeito a Desigualdades ou Igualdades Lineares, com o intuito de encontrar a melhor solução deste problema. Para isso, mostram-se neste trabalho os métodos de solução de problemas de Programação Linear. Há ênfase nas soluções geométricas e no Método Simplex, a forma algébrica de solução. Procuram-se mostrar várias situações as quais podem se encaixar alguns desses problemas, dos casos gerais a alguns casos mais específicos. Antes de chegar, eventualmente, em como solucionar problemas de Programação Linear, constrói-se o campo de trabalho deste tipo de otimização, os Conjuntos Convexos. Há apresentações das definições e teoremas essenciais para a compreensão e o desenvolvimento destes problemas; além de discussões sobre a eficiência dos métodos aplicados. Durante o trabalho, mostra-se que há casos os quais não se aplicam as soluções apresentadas, porém, em sua maioria, se enquadram de maneira eficiente, mesmo como uma boa aproximação.
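As a small worked instance of the diet-style application described above, here is a sketch of a cost-minimization LP solved with scipy's linprog; the foods, nutrient contents, and requirements are invented numbers chosen only for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Two foods, two nutrients: minimize cost c'x subject to N x >= req, x >= 0.
c = np.array([2.0, 3.0])                 # cost per unit of each food
N = np.array([[4.0, 2.0],                # nutrient content per unit of food
              [1.0, 3.0]])
req = np.array([8.0, 6.0])               # minimum daily requirements

# linprog works with "<=" constraints, so N x >= req becomes -N x <= -req.
res = linprog(c, A_ub=-N, b_ub=-req, bounds=(0, None), method="highs")
print(res.x, res.fun)                    # optimal quantities and minimum cost
```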
19

Trienis, Michael Joseph. „Computational convex analysis : from continuous deformation to finite convex integration“. Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/2799.

Abstract:
After introducing concepts from convex analysis, we study how to continuously transform one convex function into another. A natural choice is the arithmetic average, as it is pointwise continuous; however, this choice fails to average functions with different domains. On the contrary, the proximal average is not only continuous (in the epi-topology) but can actually average functions with disjoint domains. In fact, the proximal average not only inherits strict convexity (like the arithmetic average) but also inherits smoothness and differentiability (unlike the arithmetic average). Then we introduce a computational framework for computer-aided convex analysis. Motivated by the proximal average, we notice that the class of piecewise linear-quadratic (PLQ) functions is closed under (positive) scalar multiplication, addition, Fenchel conjugation, and Moreau envelope. As a result, the PLQ framework gives rise to linear-time and linear-space algorithms for convex PLQ functions. We extend this framework to nonconvex PLQ functions and present an explicit convex hull algorithm. Finally, we discuss a method to find primal-dual symmetric antiderivatives from cyclically monotone operators. As these antiderivatives depend on the minimal and maximal Rockafellar functions [5, Theorem 3.5, Corollary 3.10], it turns out that the minimal and maximal function in [12, p.132,p.136] are indeed the same functions. Algorithms used to compute these antiderivatives can be formulated as shortest path problems.
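Since the computational framework above rests on operations such as the Moreau envelope, here is a tiny grid-based sketch of that transform; it is a brute-force discretization for intuition only, not the linear-time PLQ algorithms described in the thesis.

```python
import numpy as np

def moreau_envelope(f_vals, grid, mu=1.0):
    """Grid approximation of e_mu f(x) = min_y f(y) + (x - y)^2 / (2*mu)."""
    X = grid[:, None]                      # evaluation points x
    Y = grid[None, :]                      # candidate minimizers y
    return np.min(f_vals[None, :] + (X - Y) ** 2 / (2.0 * mu), axis=1)

grid = np.linspace(-2.0, 2.0, 401)
f = np.abs(grid)                           # f(y) = |y|, nonsmooth but convex
env = moreau_envelope(f, grid, mu=0.5)     # smooth (Huber-like) approximation of |x|
print(env[200], env[0])                    # value at x = 0 and at x = -2
```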
20

Banjac, Goran. „Operator splitting methods for convex optimization : analysis and implementation“. Thesis, University of Oxford, 2018. https://ora.ox.ac.uk/objects/uuid:17ac73af-9fdf-4cf6-a946-3048da3fc9c2.

Abstract:
Convex optimization problems are a class of mathematical problems which arise in numerous applications. Although interior-point methods can in principle solve these problems efficiently, they may become intractable for solving large-scale problems or be unsuitable for real-time embedded applications. Iterations of operator splitting methods are relatively simple and computationally inexpensive, which makes them suitable for these applications. However, some of their known limitations are slow asymptotic convergence, sensitivity to ill-conditioning, and inability to detect infeasible problems. The aim of this thesis is to better understand operator splitting methods and to develop reliable software tools for convex optimization. The main analytical tool in our investigation of these methods is their characterization as the fixed-point iteration of a nonexpansive operator. The fixed-point theory of nonexpansive operators has been studied for several decades. By exploiting the properties of such an operator, it is possible to show that the alternating direction method of multipliers (ADMM) can detect infeasible problems. Although ADMM iterates diverge when the problem at hand is unsolvable, the differences between subsequent iterates converge to a constant vector which is also a certificate of primal and/or dual infeasibility. Reliable termination criteria for detecting infeasibility are proposed based on this result. Similar ideas are used to derive necessary and sufficient conditions for linear (geometric) convergence of an operator splitting method and a bound on the achievable convergence rate. The new bound turns out to be tight for the class of averaged operators. Next, the OSQP solver is presented. OSQP is a novel general-purpose solver for quadratic programs (QPs) based on ADMM. The solver is very robust, is able to detect infeasible problems, and has been extensively tested on many problem instances from a wide variety of application areas. Finally, operator splitting methods can also be effective in nonconvex optimization. The developed algorithm significantly outperforms a common approach based on convex relaxation of the original nonconvex problem.
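For completeness, here is a minimal usage sketch of the OSQP solver mentioned above, assuming its standard Python interface (the osqp package) and a tiny invented QP; it only illustrates the setup/solve workflow, not the analysis carried out in the thesis.

```python
import numpy as np
import osqp
from scipy import sparse

# minimize 0.5 x'Px + q'x   subject to   l <= Ax <= u
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u, verbose=False)
res = prob.solve()
print(res.info.status, res.x)            # solver status and primal solution
```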
21

Pal, Anibrata. „Multi-objective optimization in learn to pre-compute evidence fusion to obtain high quality compressed web search indexes“. Universidade Federal do Amazonas, 2016. http://tede.ufam.edu.br/handle/tede/5128.

Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The world of information retrieval revolves around web search engines. Text search engines are one of the most important source for routing information. The web search engines index huge volumes of data and handles billions of documents. The learn to rank methods have been adopted in the recent past to generate high quality answers for the search engines. The ultimate goal of these systems are to provide high quality results and, at the same time, reduce the computational time for query processing. Drawing direct correlation from the aforementioned fact; reading from smaller or compact indexes always accelerate data read or in other words, reduce computational time during query processing. In this thesis we study about using learning to rank method to not only produce high quality ranking of search results, but also to optimize another important aspect of search systems, the compression achieved in their indexes. We show that it is possible to achieve impressive gains in search engine index compression with virtually no loss in the final quality of results by using simple, yet effective, multi objective optimization techniques in the learning process. We also used basic pruning techniques to find out the impact of pruning in the compression of indexes. In our best approach, we were able to achieve more than 40% compression of the existing index, while keeping the quality of results at par with methods that disregard compression.
Máquinas de busca web para a web indexam grandes volumes de dados, lidando com coleções que muitas vezes são compostas por dezenas de bilhões de documentos. Métodos aprendizagem de máquina têm sido adotados para gerar as respostas de alta qualidade nesses sistemas e, mais recentemente, há métodos de aprendizagem de máquina propostos para a fusão de evidências durante o processo de indexação das bases de dados. Estes métodos servem então não somente para melhorar a qualidade de respostas em sistemas de busca, mas também para reduzir custos de processamento de consultas. O único método de fusão de evidências em tempo de indexação proposto na literatura tem como foco exclusivamente o aprendizado de funções de fusão de evidências que gerem bons resultados durante o processamento de consulta, buscando otimizar este único objetivo no processo de aprendizagem. O presente trabalho apresenta uma proposta onde utiliza-se o método de aprendizagem com múltiplos objetivos, visando otimizar, ao mesmo tempo, tanto a qualidade de respostas produzidas quando o grau de compressão do índice produzido pela fusão de rankings. Os resultados apresentados indicam que a adoção de um processo de aprendizagem com múltiplos objetivos permite que se obtenha melhora significativa na compressão dos índices produzidos sem que haja perda significativa na qualidade final do ranking produzido pelo sistema.
22

Silva, Marla Patrícia Garcia de Lima da. „Uma porta para o passado...estudo paleoantropológico de uma amostra de Não-Adultos dos vestígios Antropológicos exumados do Largo do Convento do Carmo (Lisboa) (séc. XVI – XVIII)“. Master's thesis, Instituto Superior de Ciências Sociais e Políticas, 2014. http://hdl.handle.net/10400.5/12697.

Abstract:
Master's dissertation in Anthropology
O objectivo desta dissertação prende-se com a descrição e estudo da amostra de 50 esqueletos não-adultos dos vestígios antropológicos exumados do Largo do Convento do Carmo, em Lisboa, que datam do período entre o século XVI a meados do século XVIII. O que implicou a quantificação de número de peças ósseas por indivíduo através do registo do Índice de Conservação Anatómica (ICA) e do estado geral da amostra (ICAG). O valor calculado de 27,77% apresentou-se bastante baixo. Foi realizado o estudo dos parâmetros paleodemográficos, na estimativa da idade à morte estimou-se a idade à morte em 40 dos indivíduos da amostra e, na determinação sexual aplicou-se o método a 14 indivíduos. Na observação morfológica (não-métrica) foram observados os caracteres discretos cranianos e pós-cranianos mas sem grande expressão. Nos indicadores de stresse ou condicionamentos de crescimento observou-se a criba orbitalia, a hiperostose porótica mas, foi nas hipoplasias lineares do esmalte dentário que se verificou uma prevalência maior de 36%. Por fim, na patologia oral observaram-se as cáries, o desgaste dentário e o tártaro. A prevalência de cáries foi de 6,73% e de 681 dentes observados tinham cáries cavitadas 11 dentes.
The purpose of this dissertation was to make a description and study of a sample of 50 non-adult skeletons excavated from the Largo do Convento do Carmo, in Lisbon (16th to mid 18th century). It implied the quantification of bone pieces per individual in order to attain the anatomical conservational index, which was quite low 27,77%. It was also carried out a study of paleodemographic parameters, such as age at death which was obtained in 40 individuals, in the determination of sex the method could only be used on 14 individuals. On the non-metrical morphological approach was registered the epigenetic traits of the skull and of the axial skeleton but they did not show much expression. Regarding non-specific stress indicators, criba orbitalia and porotic hiperostoses was registered, but it was the linear enamel dental defects that with 36% prevalence stood out. Last but not least, on oral paleopathology was registered dental wear and dental calculus, but the prevalence went to 6,73% on carious lesions. Dental wear and dental calculus prevalence was very low and non-expressive.
23

Joo, Jhi-Young. „Adaptive Load Management: Multi-Layered And Multi-Temporal Optimization Of The Demand Side In Electric Energy Systems“. Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/307.

Abstract:
Well-designed demand response is expected to play a vital role in operating power systems by reducing economic and environmental costs. However, the current system is operated without much information on the benefits of end-users, especially the small ones, who use electricity. This thesis proposes a framework of operating power systems with demand models including the diversity of end-users' benefits, namely adaptive load management (ALM). Since there are a large number of end-users having different preferences and conditions in energy consumption, the information on the end-users' benefits needs to be aggregated at the system level. This leads us to model the system in a multi-layered way, including end-users, load serving entities, and a system operator. On the other hand, the information of the end-users' benefits can be uncertain even to the end-users themselves ahead of time. This information is discovered incrementally as the actual consumption approaches and occurs. For this reason ALM requires a multi-temporal model of a system operation and end-users' benefits within. Due to the different levels of uncertainty along the decision-making time horizons, the risks from the uncertainty of information on both the system and the end-users need to be managed. The methodology of ALM is based on Lagrange dual decomposition that utilizes interactive communication between the system, load serving entities, and end-users. We show that under certain conditions, a power system with a large number of end-users can balance at its optimum efficiently over the horizon of a day ahead of operation to near real time. Numerical examples include designing ALM for the right types of loads over different time horizons, and balancing a system with a large number of different loads on a congested network. We conclude that with the right information exchange by each entity in the system over different time horizons, a power system can reach its optimum including a variety of end-users' preferences and their values of consuming electricity.
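A toy sketch of the coordinating mechanism named above (Lagrange dual decomposition with a price signal exchanged between a coordinator and end-users) is shown below; the quadratic utilities, the single shared capacity, and the step size are invented stand-ins for the multi-layered, multi-temporal models of the thesis.

```python
import numpy as np

# Each user i maximizes a_i*x - 0.5*b_i*x^2 - price*x; the coordinator adjusts the
# price (the multiplier of sum(x) <= capacity) by projected subgradient ascent.
a = np.array([4.0, 3.0, 5.0])
b = np.array([1.0, 2.0, 1.5])
capacity = 4.0

price = 0.0
for _ in range(200):
    x = np.maximum((a - price) / b, 0.0)                     # users' best responses
    price = max(0.0, price + 0.05 * (x.sum() - capacity))    # dual (price) update

print(np.round(x, 3), round(price, 3), round(x.sum(), 3))    # allocation meets capacity
```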
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Chen, Binyuan. „FINITE DISJUNCTIVE PROGRAMMING METHODS FOR GENERAL MIXED INTEGER LINEAR PROGRAMS“. Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145120.

Der volle Inhalt der Quelle
Annotation:
In this dissertation, a finitely convergent disjunctive programming procedure, the Convex Hull Tree (CHT) algorithm, is proposed to obtain the convex hull of a general mixed-integer linear program with bounded integer variables. The CHT algorithm constructs a linear program that has the same optimal solution as the associated mixed-integer linear program. The standard notion of sequential cutting planes is then combined with ideas underlying the CHT algorithm to help guide the choice of disjunctions to use within a new cutting plane method, the Cutting Plane Tree (CPT) algorithm. We show that the CPT algorithm converges to an integer optimal solution of the general mixed-integer linear program with bounded integer variables in finitely many steps. We also enhance the CPT algorithm with several techniques including a “round-of-cuts” approach and an iterative method for solving the cut generation linear program (CGLP). Two normalization constraints are discussed in detail for solving the CGLP. For moderately sized instances, our study shows that the CPT algorithm provides significant gap closures with a pure cutting plane method.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Atutey, Olivia Abena. „Linear Mixed Model Selection via Minimum Approximated Information Criterion“. Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1594910831256966.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Nizard, David. „Programmation mathématique non convexe non linéaire en variables entières : un exemple d'application au problème de l'écoulement de larges blocs d'actifs“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG015.

Der volle Inhalt der Quelle
Annotation:
La programmation mathématique fournit un cadre pour l'étude et la résolution des problèmes d'optimisation contraints ou non. Elle constitue une branche active des mathématiques appliquées depuis la deuxième moitié du XXème siècle. L'objet de cette thèse est la résolution d'un programme mathématique non convexe non linéaire en variables entières, sous contrainte linéaire d'égalité. Le problème proposé, bien qu'abordé dans cette étude uniquement pour le cas déterministe, trouve son origine en finance, sous le nom d'écoulement de larges blocs d'actifs, ou de liquidation optimale de portefeuille. Il consiste à vendre une (très large) quantité M donnée d'un actif financier en temps fini (discrétisé en N instants) en maximisant le produit de cette vente. À chaque instant, le prix de vente est modélisé par une fonction de pénalité qui reflète le comportement antagoniste du marché face à l'écoulement progressif. Du point de vue de la programmation mathématique, cette classe de problèmes est NP-difficile à résoudre d'après Garey et Johnson, car la non-convexité de la fonction objectif impose d'adapter les méthodes classiques de résolution (Branch and Bound, coupes) en variables entières. De plus, comme on ne connaît pas de méthode de résolution générale pour cette classe de problèmes, les méthodes utilisées doivent être adaptées aux spécificités du problème. La première partie de cette thèse est consacrée à la résolution exacte ou approchée utilisant la programmation dynamique. Nous montrons en effet que l'équation de Bellman s'applique au problème proposé et permet ainsi de résoudre exactement et rapidement les petites instances. Pour les moyennes et grandes instances, où la programmation dynamique n'est plus disponible et/ou performante, nous proposons des bornes inférieures via différentes heuristiques utilisant la programmation dynamique ainsi que des méthodes de recherche locale, dont nous étudions la qualité (optimalité, temps CPU) et la complexité. La seconde partie de la thèse s'intéresse à la reformulation équivalente du problème de thèse sous forme factorisée et à sa relaxation convexe via les inégalités de McCormick. Nous proposons alors deux algorithmes de résolution exacte du type Branch and Bound, qui fournissent l'optimum global ou un encadrement en temps limité. Dans une troisième partie, dédiée aux expérimentations numériques, nous comparons les méthodes de résolution proposées entre elles et aux solveurs de l'état de l'art. Nous observons notamment que les bornes obtenues sont souvent proches et même parfois meilleures que celles des solveurs libres ou commerciaux auxquels nous nous comparons (ex. : LocalSolver, Scip, Baron, Couenne et Bonmin). De plus, nous montrons que nos méthodes de résolution peuvent s'appliquer à toute fonction de pénalité suffisamment régulière et croissante, ce qui comprend notamment des fonctions qui ne sont actuellement pas prises en charge par certains solveurs, bien qu'elles aient un sens économique pour le problème, comme par exemple les fonctions trigonométriques ou la fonction arctangente. Numériquement, la programmation dynamique permet de résoudre à l'optimum, sous la minute, des instances de taille N<100 et M<10 000. Les heuristiques proposées fournissent de très bonnes bornes inférieures, qui atteignent très souvent l'optimum, pour N<1 000 et M<100 000. Par contraste, la résolution du problème factorisé n'est efficace que pour N<10, M<1 000, mais nous obtenons des bornes supérieures relativement bonnes.
Enfin, pour les grandes instances (M>1 000 000), nos heuristiques à base de programmation dynamique, lorsqu'elles sont disponibles, fournissent les meilleures bornes inférieures, mais nous n'avons pas d'encadrement précis de l'optimum car nos bornes supérieures ne sont pas fines.
Mathematical programming provides a framework to study and solve optimization problems, constrained or not. It has been an active branch of applied mathematics since the second half of the 20th century. The aim of this thesis is to solve a non-convex, non-linear, pure-integer mathematical program under a linear equality constraint. This problem, although studied in this dissertation only in the deterministic case, stems from a financial application, known as the large block sale problem, or optimal portfolio liquidation. It consists in selling a (very large) known quantity M of a financial asset in finite time, discretized in N points in time, while maximizing the proceeds of the sale. At each point in time, the sell price is modeled by a penalty function, which reflects the antagonistic behavior of the market in response to our progressive selling flow. From the standpoint of mathematical programming, this class of problems is NP-hard according to Garey and Johnson, because the non-convexity of the objective function requires adapting classical resolution methods (Branch and Bound, cuts) to integer variables. In addition, as no general resolution method for this class of problems is known, the solution methods must be adapted to the specifics of the problem. The first part of the thesis is devoted to solving the problem, either exactly or approximately, using Dynamic Programming. We prove that Bellman's equation applies to the problem studied, which makes it possible to solve it exactly and quickly for small instances. For medium and large instances, for which Dynamic Programming is not available and/or not efficient, we provide lower bounds using different heuristics relying on Dynamic Programming or on local search methods, whose performance (tightness and CPU time) and complexity are studied. The second part of this thesis focuses on an equivalent reformulation of the problem in a factored form, and on its convex relaxation using McCormick's inequalities. We introduce two exact resolution algorithms, which belong to the Branch and Bound category; they return the global optimum or bound it in limited time. In a third part, dedicated to numerical experiments, we compare our resolution methods with each other and with state-of-the-art solvers. We notice in particular that our bounds are comparable to, and sometimes even better than, the bounds of the free and commercial solvers we use as benchmarks (e.g. LocalSolver, Scip, Baron, Couenne and Bonmin). In addition, we show that our resolution methods apply to any sufficiently regular and increasing penalty function, including functions which are currently not handled by some solvers even though they make economic sense for the problem, such as trigonometric functions or the arctangent function. Numerically, Dynamic Programming solves the problem to optimality, within a minute, for instances of size N<100 and M<10 000. Our heuristics provide very tight lower bounds, which often reach the optimum, for N<1 000 and M<100 000. By contrast, exact resolution of the factored problem is only efficient for N<10 and M<1 000, although it yields relatively good upper bounds. Lastly, for large instances (M>1 000 000), our heuristics based on Dynamic Programming, when available, return the best lower bounds; however, we are not able to bound the optimum tightly, since our upper bounds are not tight.
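Two reference points for the techniques named in this annotation, given only as generic illustrations and not as the thesis's own formulations. First, a minimal Python sketch of the Bellman recursion for the block-sale problem; the horizon N, the quantity M and the exponential penalty shape are assumptions made for the example.

from functools import lru_cache
import math

N, M, ALPHA = 5, 20, 0.1               # illustrative horizon, quantity and penalty parameter

def revenue(q):
    # assumed per-period proceeds for selling q units under an exponential penalty
    return q * math.exp(-ALPHA * q)

@lru_cache(maxsize=None)
def V(t, m):
    # Best proceeds obtainable from period t with m units left to sell
    if t == N - 1:
        return revenue(m)              # whatever remains must be sold in the last period
    return max(revenue(q) + V(t + 1, m - q) for q in range(m + 1))

print(round(V(0, M), 4))

Second, the convex relaxation used in the second part relies on McCormick's inequalities; for a bilinear term w = xy with x in [x_L, x_U] and y in [y_L, y_U], the standard envelope reads

\[
\begin{aligned}
w &\ge x_L\,y + y_L\,x - x_L\,y_L, &\qquad w &\ge x_U\,y + y_U\,x - x_U\,y_U,\\
w &\le x_U\,y + y_L\,x - x_U\,y_L, &\qquad w &\le x_L\,y + y_U\,x - x_L\,y_U.
\end{aligned}
\]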
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Pinheiro, Ricardo Bento Nogueira [UNESP]. „Um método previsor-corretor primal-dual de pontos interiores barreira logarítmica modificada, com estratégias de convergência global e de ajuste cúbico, para problemas de programação não-linear e não-convexa“. Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/87189.

Der volle Inhalt der Quelle
Annotation:
Made available in DSpace on 2014-06-11T19:22:34Z (GMT). No. of bitstreams: 0 Previous issue date: 2012-08-22Bitstream added on 2014-06-13T19:08:11Z : No. of bitstreams: 1 pinheiro_rbn_me_bauru.pdf: 19855827 bytes, checksum: 0c72e37d2b42539464b7fafb4a4e52a2 (MD5)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Neste trabalho apresentamos o método previsor-corretor primal-dual de pontos interiores, com barreira logarítmica modificada e estratégia de ajuste cúbico (MPIBLM-EX) e o método previsor-corretor primal-dual de pontos interiores, com barreira logarítmica modificada, com estratégias de ajuste cúbico e de convergência global (MPIBLMCG-EX). Na definição do algoritmo proposto, a função barreira logarítmica modificada auxilia o método em sua inicialização com pontos inviáveis. Porém, a inviabilidade pode ocorrer em pontos tais que o logaritmo não está definido; consequentemente, isso implica na não existência da função barreira logarítmica modificada. Para suprir essa dificuldade, um polinômio cúbico ajustado ao logaritmo, que preserva as derivadas de primeira e segunda ordem em um ponto definido na região ampliada, é aplicado ao método previsor-corretor primal-dual de pontos interiores com barreira logarítmica modificada (MPIBML); no processo previsor são realizadas atualizações do parâmetro de barreira nos resíduos das restrições de complementaridade, considerando aproximações de primeira ordem do sistema de direções de busca, enquanto que no procedimento corretor incluímos os termos quadráticos não-lineares dos resíduos citados, que foram desprezados no procedimento previsor. Consideramos também a estratégia de convergência global para o MPIBLM-EX, a qual utiliza uma variante do método de Levenberg-Marquardt para ajustar a matriz dual normal da função lagrangiana, caso esta não seja definida positiva. A matriz dual normal é redefinida para as restrições primais de igualdade, de desigualdade e para as variáveis canalizadas, incorporando variáveis duais e matrizes diagonais relativas às restrições de complementaridade. Desse estudo, o MPIBLM-EX é transformado no MPIBLMCG-EX e mostramos...
This work presents a predictor-corrector primal-dual interior point method with modified log-barrier and third order extrapolation strategy (IPMLBM-EX) and also an extension of this method with the inclusion of a global convergence strategy (IPMLBGCM-EX). In the definition of the proposed algorithm, the modified log-barrier function helps the method initialize with infeasible points. However, infeasibility may occur at points where the logarithm is not defined, which implies the non-existence of the modified log-barrier function. To cope with such a problem, a cubic polynomial function is adjusted to the logarithmic function. Such a polynomial function preserves first and second order derivatives at a certain point defined in the extended region. This function is applied to the predictor-corrector primal-dual interior point method with modified log-barrier function. In the predictor procedure, the barrier parameter is updated in the complementarity conditions considering first-order approximations of the search direction, while the corrector procedure includes the nonlinear quadratic terms of the mentioned residuals, which were neglected in the predictor procedure. We also consider the global convergence strategy for the method, which uses a variant of the Levenberg-Marquardt method to update the normal dual matrix of the Lagrangian function, should it fail to be positive definite. In this case, this matrix is redefined for equality primal constraints, bounded inequality primal constraints and bounded variables, incorporating dual variables and diagonal matrices of the complementarity constraints. From such studies, the IPMLBM-EX method is extended to include the global convergence strategy (IPMLBGCM-EX). We have shown that both methods are projected gradient methods. An implementation performed with Matlab 6.1 has shown the... (Complete abstract click electronic access below)
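For orientation, the generic shape of a modified log-barrier term with a cubic extension is sketched below; the switch point β and the matching conditions are stated as assumptions, since the annotation only summarizes the construction used in the thesis.

\[
\tilde{\psi}_{\mu}(t) \;=\;
\begin{cases}
\mu \log\!\left(1 + \dfrac{t}{\mu}\right), & t \ge \beta,\\[6pt]
a\,t^{3} + b\,t^{2} + c\,t + d, & t < \beta,
\end{cases}
\]

with a, b, c, d chosen so that the value and the first and second derivatives of the two branches agree at $t = \beta$; the barrier term attached to a constraint $g(x) \ge 0$ is then $-\tilde{\psi}_{\mu}(g(x))$, which stays finite at infeasible points where the plain logarithm would be undefined.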
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Pinheiro, Ricardo Bento Nogueira. „Um método previsor-corretor primal-dual de pontos interiores barreira logarítmica modificada, com estratégias de convergência global e de ajuste cúbico, para problemas de programação não-linear e não-convexa /“. Bauru : [s.n.], 2012. http://hdl.handle.net/11449/87189.

Der volle Inhalt der Quelle
Annotation:
Orientador: Antonio Roberto Balbo
Banca: Edilaine Martins Soler
Banca: Leonardo Nepomuceno
Resumo: Neste trabalho apresentamos o método previsor-corretor primal-dual de pontos interiores, com barreira logarítmica modificada e estratégia de ajuste cúbico (MPIBLM-EX) e o método previsor-corretor primal-dual de pontos interiores, com barreira logarítmica modificada, com estratégias de ajuste cúbico e de convergência global (MPIBLMCG-EX). Na definição do algoritmo proposto, a função barreira logarítmica modificada auxilia o método em sua inicialização com pontos inviáveis. Porém, a inviabilidade pode ocorrer em pontos tais que o logaritmo não está definido; consequentemente, isso implica na não existência da função barreira logarítmica modificada. Para suprir essa dificuldade, um polinômio cúbico ajustado ao logaritmo, que preserva as derivadas de primeira e segunda ordem em um ponto definido na região ampliada, é aplicado ao método previsor-corretor primal-dual de pontos interiores com barreira logarítmica modificada (MPIBML); no processo previsor são realizadas atualizações do parâmetro de barreira nos resíduos das restrições de complementaridade, considerando aproximações de primeira ordem do sistema de direções de busca, enquanto que no procedimento corretor incluímos os termos quadráticos não-lineares dos resíduos citados, que foram desprezados no procedimento previsor. Consideramos também a estratégia de convergência global para o MPIBLM-EX, a qual utiliza uma variante do método de Levenberg-Marquardt para ajustar a matriz dual normal da função lagrangiana, caso esta não seja definida positiva. A matriz dual normal é redefinida para as restrições primais de igualdade, de desigualdade e para as variáveis canalizadas, incorporando variáveis duais e matrizes diagonais relativas às restrições de complementaridade. Desse estudo, o MPIBLM-EX é transformado no MPIBLMCG-EX e mostramos... (Resumo completo, clicar acesso eletrônico abaixo)
Abstract: This work presents a predictor-corrector primal-dual interior point method with modified log-barrier and third order extrapolation strategy (IPMLBM-EX) and also an extension of this method with the inclusion of a global convergence strategy (IPMLBGCM-EX). In the definition of the proposed algorithm, the modified log-barrier function helps the method initialize with infeasible points. However, infeasibility may occur at points where the logarithm is not defined, which implies the non-existence of the modified log-barrier function. To cope with such a problem, a cubic polynomial function is adjusted to the logarithmic function. Such a polynomial function preserves first and second order derivatives at a certain point defined in the extended region. This function is applied to the predictor-corrector primal-dual interior point method with modified log-barrier function. In the predictor procedure, the barrier parameter is updated in the complementarity conditions considering first-order approximations of the search direction, while the corrector procedure includes the nonlinear quadratic terms of the mentioned residuals, which were neglected in the predictor procedure. We also consider the global convergence strategy for the method, which uses a variant of the Levenberg-Marquardt method to update the normal dual matrix of the Lagrangian function, should it fail to be positive definite. In this case, this matrix is redefined for equality primal constraints, bounded inequality primal constraints and bounded variables, incorporating dual variables and diagonal matrices of the complementarity constraints. From such studies, the IPMLBM-EX method is extended to include the global convergence strategy (IPMLBGCM-EX). We have shown that both methods are projected gradient methods. An implementation performed with Matlab 6.1 has shown the... (Complete abstract click electronic access below)
Mestre
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Vance, Katelynn Atkins. „Robust Control for Inter-area Oscillations“. Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/36313.

Der volle Inhalt der Quelle
Annotation:
In order to reduce the detrimental effects of inter-area oscillations on system stability, it is possible to use Linear Matrix Inequalities (LMIs) to design a multi-objective state feedback. The LMI optimization finds a control law that stabilizes several contingencies simultaneously using a polytopic model of the system. However, the number of cases to be considered is limited by computational complexity, which increases the chances of infeasibility. In order to circumvent this problem, this work presents a method for solving multiple polytopic problems having a common base case. The proposed algorithm determines the necessary polytopic control for a particular contingency and classifies the data as belonging to that polytopic domain. The technique has been tested on an 8-machine, 13-bus system and has been found to give satisfactory results.
Master of Science
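For context, the standard quadratic-stabilization LMI for a polytopic model with vertices $(A_i, B_i)$ is recalled below as a generic form; the thesis's multi-objective design adds further constraints on top of it, so this is an assumption about the underlying condition rather than a quotation of it.

\[
A_i X + X A_i^{\mathsf T} + B_i Y + Y^{\mathsf T} B_i^{\mathsf T} \prec 0,
\qquad X \succ 0, \qquad i = 1, \dots, L.
\]

Any feasible pair $(X, Y)$ yields a single state feedback $u = Kx$ with $K = Y X^{-1}$ that stabilizes every vertex, and hence the whole polytopic family.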
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Nelakanti, Anil Kumar. „Modélisation du langage à l'aide de pénalités structurées“. Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066033.

Der volle Inhalt der Quelle
Annotation:
La modélisation de la langue naturelle est l'un des défis fondamentaux de l'intelligence artificielle et de la conception de systèmes interactifs, avec applications dans les systèmes de dialogue, la génération de texte et la traduction automatique. Nous proposons un modèle log-linéaire discriminatif donnant la distribution des mots qui suivent un contexte donné. En raison de la parcimonie des données, nous proposons un terme de pénalité qui code correctement la structure de l'espace fonctionnel pour éviter le sur-apprentissage et améliorer la généralisation, tout en capturant de manière appropriée les dépendances à long terme. Le résultat est un modèle efficace qui capte suffisamment les dépendances longues sans occasionner une forte augmentation des ressources en espace ou en temps. Dans un modèle log-linéaire, les phases d'apprentissage et de test deviennent de plus en plus chères avec un nombre croissant de classes. Le nombre de classes dans un modèle de langue est la taille du vocabulaire, qui est généralement très importante. Une astuce courante consiste à appliquer le modèle en deux étapes : la première étape identifie le cluster le plus probable et la seconde prend le mot le plus probable du cluster choisi. Cette idée peut être généralisée à une hiérarchie de plus grande profondeur avec plusieurs niveaux de regroupement. Cependant, la performance du système de classification hiérarchique qui en résulte dépend du domaine d'application et de la construction d'une bonne hiérarchie. Nous étudions différentes stratégies pour construire la hiérarchie des catégories à partir de leurs observations.
Modeling natural language is among the fundamental challenges of artificial intelligence and the design of interactive machines, with applications spanning various domains, such as dialogue systems, text generation and machine translation. We propose a discriminatively trained log-linear model to learn the distribution of words following a given context. Due to data sparsity, it is necessary to appropriately regularize the model using a penalty term. We design a penalty term that properly encodes the structure of the feature space to avoid overfitting and improve generalization while appropriately capturing long range dependencies. Some nice properties of specific structured penalties can be used to reduce the number of parameters required to encode the model. The outcome is an efficient model that suitably captures long dependencies in language without a significant increase in time or space requirements. In a log-linear model, both training and testing become increasingly expensive with a growing number of classes. The number of classes in a language model is the size of the vocabulary, which is typically very large. A common trick is to cluster classes and apply the model in two steps; the first step picks the most probable cluster and the second picks the most probable word from the chosen cluster. This idea can be generalized to a hierarchy of larger depth with multiple levels of clustering. However, the performance of the resulting hierarchical classifier depends on the suitability of the clustering to the problem. We study different strategies to build the hierarchy of categories from their observations.
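For illustration, the two-step trick mentioned above corresponds to the generic factorization

\[
P(w \mid h) \;=\; P\bigl(c(w) \mid h\bigr)\, P\bigl(w \mid c(w), h\bigr),
\]

where $h$ is the conditioning context and $c(w)$ the cluster assigned to word $w$; with a deeper hierarchy the product runs over the path of clusters from the root to the word. Since each factor ranges over far fewer outcomes than the full vocabulary, normalization cost drops from $O(|V|)$ to roughly $O(\sqrt{|V|})$ for a balanced two-level hierarchy. This is the standard form of such hierarchical models, not a formula quoted from the thesis.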
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Theußl, Stefan, Florian Schwendinger und Kurt Hornik. „ROI: An extensible R Optimization Infrastructure“. WU Vienna University of Economics and Business, 2019. http://epub.wu.ac.at/5858/1/ROI_StatReport.pdf.

Der volle Inhalt der Quelle
Annotation:
Optimization plays an important role in many methods routinely used in statistics, machine learning and data science. Often, implementations of these methods rely on highly specialized optimization algorithms, designed to be only applicable within a specific application. However, in many instances recent advances, in particular in the field of convex optimization, make it possible to conveniently and straightforwardly use modern solvers instead with the advantage of enabling broader usage scenarios and thus promoting reusability. This paper introduces the R Optimization Infrastructure which provides an extensible infrastructure to model linear, quadratic, conic and general nonlinear optimization problems in a consistent way. Furthermore, the infrastructure administers many different solvers, reformulations, problem collections and functions to read and write optimization problems in various formats.
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Hadjou, Tayeb. „Analyse numérique des méthodes de points intérieurs : simulations et applications“. Rouen, 1996. http://www.theses.fr/1996ROUES062.

Der volle Inhalt der Quelle
Annotation:
La thèse porte sur une étude à la fois théorique et pratique des méthodes de points intérieurs pour la programmation linéaire et la programmation quadratique convexe. Dans une première partie, elle donne une introduction aux méthodes de points intérieurs pour la programmation linéaire, décrit les outils de base, classifie et présente d'une façon unifiée les différentes méthodes. Elle présente dans la suite un exposé des algorithmes de trajectoire centrale pour la programmation linéaire et la programmation quadratique convexe. Dans une seconde partie sont étudiées des procédures de purification en programmation linéaire. Il s'agit des procédures qui déterminent, via une méthode de points intérieurs, un sommet (ou face) optimal. Dans cette partie, nous avons introduit et développé une nouvelle procédure de purification qui permet de mener dans tous les cas à un sommet optimal et de réduire le temps de calcul. La dernière partie est consacrée aux illustrations et aux expériences numériques.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Godard, Hadrien. „Résolution exacte du problème de l'optimisation des flux de puissance“. Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1258/document.

Der volle Inhalt der Quelle
Annotation:
Cette thèse a pour objet la résolution exacte d’un problème d’optimisation des flux de puissance (OPF) dans un réseau électrique. Dans l’OPF, on doit planifier la production et la répartition des flux de puissances électriques permettant de couvrir, à un coût minimal, la consommation en différents points du réseau. Trois variantes du problème de l’OPF sont étudiées dans ce manuscrit. Nous nous concentrerons principalement sur la résolution exacte des deux problèmes (OPF − L) et (OPF − Q), puis nous montrerons comment notre approche peut naturellement s’étendre à la troisième variante (OPF − UC). Cette thèse propose de résoudre ces derniers à l’aide d’une méthode de reformulation que l’on appelle RC-OPF. La contribution principale de cette thèse réside dans l’étude, le développement et l’utilisation de notre méthode de résolution exacte RC-OPF sur les trois variantes d’OPF. RC-OPF utilise également des techniques de contractions de bornes, et nous montrons comment ces techniques classiques peuvent être renforcées en utilisant des résultats issus de notre reformulation optimale
Alternating Current Optimal Power Flow (ACOPF) is naturally formulated as a non-convex problem. In that context, solving (ACOPF) to global optimality remains a challenge when classic convex relaxations are not exact. We use semidefinite programming to build a quadratic convex relaxation of (ACOPF). We show that this quadratic convex relaxation has the same optimal value as the classical semidefinite relaxation of (ACOPF), which is known to be tight. In that context, we build a spatial branch-and-bound algorithm to solve (ACOPF) to global optimality that is based on a quadratic convex programming bound
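As background, the classical semidefinite relaxation referred to above has the generic form below, where $W$ stands for $vv^{\mathsf H}$ built from the complex bus voltages $v$ and the matrices $C$, $M_k$ collect cost and network data; the concrete data matrices are not specified in the annotation, so this is only a reference sketch.

\[
\min_{W \succeq 0}\; \operatorname{tr}(C\,W)
\quad \text{s.t.} \quad
\operatorname{tr}(M_k W) \le b_k,\; k = 1, \dots, m,
\qquad \text{with } \operatorname{rank}(W) = 1 \text{ dropped}.
\]

When the optimal $W$ happens to have rank one, the relaxation is exact and a voltage vector $v$ can be recovered by factorizing $W$.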
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Yao, Liangjin. „Decompositions and representations of monotone operators with linear graphs“. Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/2807.

Der volle Inhalt der Quelle
Annotation:
We consider the decomposition of a maximal monotone operator into the sum of an antisymmetric operator and the subdifferential of a proper lower semicontinuous convex function. This is a variant of the well-known decomposition of a matrix into its symmetric and antisymmetric part. We analyze in detail the case when the graph of the operator is a linear subspace. Equivalent conditions of monotonicity are also provided. We obtain several new results on auto-conjugate representations including an explicit formula that is built upon the proximal average of the associated Fitzpatrick function and its Fenchel conjugate. These results are new and they both extend and complement recent work by Penot, Simons and Zălinescu. A nonlinear example shows the importance of the linearity assumption. Finally, we consider the problem of computing the Fitzpatrick function of the sum, generalizing a recent result by Bauschke, Borwein and Wang on matrices to linear relations.
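As a concrete finite-dimensional instance of the decomposition discussed above (the thesis treats the broader setting of linear relations), a monotone linear operator given by a matrix $A$ splits as

\[
A \;=\; \tfrac{1}{2}\bigl(A + A^{\mathsf T}\bigr) \;+\; \tfrac{1}{2}\bigl(A - A^{\mathsf T}\bigr),
\]

where the symmetric part is positive semidefinite by monotonicity, hence equals the gradient of the convex function $f(x) = \tfrac{1}{2}\langle x, Ax\rangle$, and the second summand is antisymmetric.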
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Dhillon, Harpreet Singh. „Optimal Sum-Rate of Multi-Band MIMO Interference Channel“. Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/34766.

Der volle Inhalt der Quelle
Annotation:
While the channel capacity of an isolated noise-limited wireless link is well-understood, the same is not true for the interference-limited wireless links that coexist in the same area and occupy the same frequency band(s). The performance of these wireless systems is coupled to each other due to the mutual interference. One such wireless scenario is modeled as a network of simultaneously communicating node pairs and is generally referred to as an interference channel (IC). The problem of characterizing the capacity of an IC is one of the most interesting and long-standing open problems in information theory. A popular way of characterizing the capacity of an IC is to maximize the achievable sum-rate by treating interference as Gaussian noise, which is considered optimal in low-interference scenarios. While the sum-rate of the single-band SISO IC is relatively well understood, it is not so when the users have multiple bands and multiple antennas for transmission. Therefore, the study of the optimal sum-rate of the multi-band MIMO IC is the main goal of this thesis. The sum-rate maximization problem for these ICs is formulated and is shown to be quite similar to the one already known for single-band MIMO ICs. This problem is reduced to the problem of finding the optimal fraction of power to be transmitted over each spatial channel in each frequency band. The underlying optimization problem, being non-linear and non-convex, is difficult to solve analytically or by employing local optimization techniques. Therefore, we develop a global optimization algorithm by extending the Reformulation and Linearization Technique (RLT) based Branch and Bound (BB) strategy to find the provably optimal solution to this problem. We further show that the spatial and spectral channels are surprisingly similar in a multi-band multi-antenna IC from a sum-rate maximization perspective. This result is especially interesting because of the dissimilarity in the way the spatial and frequency channels affect the perceived interference. As a part of this study, we also develop some rules-of-thumb regarding the optimal power allocation strategies in multi-band MIMO ICs in various interference regimes. Due to the recent popularity of Interference Alignment (IA) as a means of approaching capacity in an IC (in the high-interference regime), we also compare the sum-rates achievable by our technique to the ones achievable by IA. The results indicate that the proposed power control technique performs better than IA in the low and intermediate interference regimes. Interestingly, the performance of the power control technique improves further relative to IA with an increase in the number of orthogonal spatial or frequency channels.
Master of Science
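A toy computation of the sum-rate objective described above, treating interference as Gaussian noise; the channel gains, noise level and per-band power splits in this Python sketch are invented for illustration and do not come from the thesis.

import numpy as np

G = np.array([[[1.0, 0.2],   # band 0: G[k, i, j] = gain from transmitter j to receiver i
               [0.3, 0.9]],
              [[0.8, 0.1],   # band 1
               [0.2, 1.1]]])
noise = 0.1
P = np.array([[0.7, 0.3],    # user 0: fraction of power in each band
              [0.4, 0.6]])   # user 1

def sum_rate(P):
    total = 0.0
    for k in range(2):           # frequency bands
        for i in range(2):       # receivers
            signal = G[k, i, i] * P[i, k]
            interference = sum(G[k, i, j] * P[j, k] for j in range(2) if j != i)
            total += np.log2(1.0 + signal / (noise + interference))
    return total

print(round(sum_rate(P), 3))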
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Knapp, Greg. „Minkowski's Linear Forms Theorem in Elementary Function Arithmetic“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1495545998803274.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Godard, Hadrien. „Résolution exacte du problème de l'optimisation des flux de puissance“. Electronic Thesis or Diss., Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1258.

Der volle Inhalt der Quelle
Annotation:
Cette thèse a pour objet la résolution exacte d’un problème d’optimisation des flux de puissance (OPF) dans un réseau électrique. Dans l’OPF, on doit planifier la production et la répartition des flux de puissances électriques permettant de couvrir, à un coût minimal, la consommation en différents points du réseau. Trois variantes du problème de l’OPF sont étudiées dans ce manuscrit. Nous nous concentrerons principalement sur la résolution exacte des deux problèmes (OPF − L) et (OPF − Q), puis nous montrerons comment notre approche peut naturellement s’étendre à la troisième variante (OPF − UC). Cette thèse propose de résoudre ces derniers à l’aide d’une méthode de reformulation que l’on appelle RC-OPF. La contribution principale de cette thèse réside dans l’étude, le développement et l’utilisation de notre méthode de résolution exacte RC-OPF sur les trois variantes d’OPF. RC-OPF utilise également des techniques de contractions de bornes, et nous montrons comment ces techniques classiques peuvent être renforcées en utilisant des résultats issus de notre reformulation optimale
Alternating Current Optimal Power Flow (ACOPF) is naturally formulated as a non-convex problem. In that context, solving (ACOPF) to global optimality remains a challenge when classic convex relaxations are not exact. We use semidefinite programming to build a quadratic convex relaxation of (ACOPF). We show that this quadratic convex relaxation has the same optimal value as the classical semidefinite relaxation of (ACOPF), which is known to be tight. In that context, we build a spatial branch-and-bound algorithm to solve (ACOPF) to global optimality that is based on a quadratic convex programming bound
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Silva, Paulo Araújo da. „Transformações geométricas no plano“. Universidade Federal de Sergipe, 2014. https://ri.ufs.br/handle/riufs/6500.

Der volle Inhalt der Quelle
Annotation:
Não informado.
No presente trabalho fazemos um estudo sobre transformações geométricas no plano, explorando características geométricas e algébricas. A relação entre a geometria e a álgebra é responsável por extraordinários progressos na matemática e suas aplicações. Nosso objetivo inicial é apresentar algumas das principais transformações geométricas, a exemplo das Homotetias, das Translações, de Cisalhamentos, das Simetrias, das Rotações, das Reflexões, das Isometrias, etc., de forma intuitiva e ilustrando com exemplos simples. Em seguida exploramos características algébricas elementares que permitem tratar e generalizar o estudo de transformações. Apresentamos ainda os conceitos de Morfismos e Deformações de imagens utilizando noções, por exemplo, como Combinação Linear Convexa.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Ramos, Tales Pulinho. „Modelagem híbrida para o planejamento da operação de sistemas hidrotérmicos considerando as não linearidades das usinas hidráulicas“. Universidade Federal de Juiz de Fora, 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/233.

Der volle Inhalt der Quelle
Annotation:
Submitted by Renata Lopes (renatasil82@gmail.com) on 2015-12-16T11:02:24Z No. of bitstreams: 1 talespulinhoramos.pdf: 6134665 bytes, checksum: 349537ae72f568271488022944942fb6 (MD5)
Approved for entry into archive by Adriana Oliveira (adriana.oliveira@ufjf.edu.br) on 2015-12-16T11:20:33Z (GMT) No. of bitstreams: 1 talespulinhoramos.pdf: 6134665 bytes, checksum: 349537ae72f568271488022944942fb6 (MD5)
Made available in DSpace on 2015-12-16T11:20:33Z (GMT). No. of bitstreams: 1 talespulinhoramos.pdf: 6134665 bytes, checksum: 349537ae72f568271488022944942fb6 (MD5) Previous issue date: 2015-02-23
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
O Sistema Interligado Nacional (SIN) apresenta cerca de 150 usinas hidráulicas e o planejamento de médio prazo contempla entre 5 e 10 anos de estudo; a representação do sistema a usinas individualizadas faz com que a resolução do problema seja muito custosa computacionalmente. Por isso, o sistema é representado a partir de sistemas equivalentes de energia. Existe um trabalho anterior onde foi realizada a flexibilização da modelagem do sistema, denominada modelagem híbrida, em que parte do sistema é representada através de sistemas equivalentes de energia e outra é representada a usinas individualizadas com a produtibilidade constante. Desta forma, consegue-se um maior detalhamento nos estudos de médio prazo mantendo a complexidade do sistema em um nível adequado computacionalmente. Este trabalho apresenta a modelagem híbrida entre sistemas equivalentes de energia e usinas individualizadas, porém considerando as não linearidades das usinas hidráulicas. As não linearidades das usinas basicamente se dão em relação à variação do nível do reservatório e da vazão defluente (vazão turbinada acrescida da vazão vertida), o que implica diretamente na geração hidráulica. A proposta consiste em modelar a geração hidráulica das usinas (Função de Produção Hidráulica - FPH), que é uma função analítica não linear e não convexa, por uma função linear por partes convexa que represente adequadamente a função de produção hidráulica analítica. Há um trabalho anterior onde a FPH é aproximada por uma função linear por partes em duas etapas: inicialmente a função é aproximada nas dimensões do armazenamento e do turbinamento e, em uma segunda etapa, é adicionada a contribuição do vertimento. Já neste trabalho, a FPH é aproximada por uma função linear por partes obtida em apenas uma etapa para as três dimensões a partir do algoritmo Convex Hull. Assim, é possível resolver o problema de médio prazo considerando parte do sistema representada de forma equivalente e outra parte de forma individualizada, considerando a variação da geração hidráulica em função do volume armazenado e das vazões turbinada e vertida (se houver influência no canal de fuga).
The National Interconnected Power System (NIPS) comprises around 150 hydraulic plants, and medium-term planning covers between 5 and 10 years of study; representing the system with individualized plants makes the problem computationally very costly, so the system is represented by equivalent energy systems. In a previous work, a more flexible modeling of the system, named hybrid modeling, was developed, in which part of the system is represented through equivalent energy systems and the other part is represented by individualized plants with constant productivity. As a consequence, greater detail is obtained in the medium-term studies while keeping the complexity of the system at a computationally adequate level. This work presents the hybrid modeling between equivalent energy systems and individualized plants, but considering the non-linearities of the hydraulic plants. The non-linear characteristic of the generation function basically comes from the influence of the reservoir level (head term) and the release term (turbined outflow added to spilled outflow). The proposal is to model the hydraulic generation of the plants (Hydraulic Production Function - HPF), which is a non-linear and non-convex analytical function, by a convex piecewise-linear function that appropriately represents the analytical hydraulic production function. The technique used to obtain this piecewise-linear function, by applying the Convex Hull algorithm to guarantee its convexity, is described in detail. In conclusion, it is possible to solve the medium-term problem considering part of the system represented in equivalent form and the other part in an individualized manner, taking into account the variation of the hydraulic generation as a function of the stored volume and the turbined and spilled outflows.
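A sketch, under an invented production function, of how the upper facets of a convex hull of sampled points yield a concave piecewise-linear over-approximation of hpf(v, q, s) in one step, in the spirit described above; the grids, coefficients and the function itself are assumptions for the example, not data from the thesis.

import numpy as np
from scipy.spatial import ConvexHull

def hpf(v, q, s):
    # assumed nonlinear production: head grows with storage and falls with total release
    return 0.9 * q * (50.0 + 0.01 * v - 0.002 * (q + s))

v_grid = np.linspace(500, 2000, 6)     # storage samples
q_grid = np.linspace(0, 300, 6)        # turbined outflow samples
s_grid = np.linspace(0, 100, 4)        # spilled outflow samples
pts = np.array([[v, q, s, hpf(v, q, s)]
                for v in v_grid for q in q_grid for s in s_grid])

hull = ConvexHull(pts)
cuts = []
for n_v, n_q, n_s, n_p, off in hull.equations:   # each facet: n.x + off <= 0 inside the hull
    if n_p > 1e-9:                               # facets bounding the power value from above
        cuts.append((-n_v / n_p, -n_q / n_p, -n_s / n_p, -off / n_p))

# Each tuple (a, b, c, d) gives one linear piece  p <= a*v + b*q + c*s + d
print(len(cuts), "linear pieces")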
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

ABDALLA, TALAL ALMUTAZ ALMANSI. „Recursive Algorithms for Set-Membership Estimation“. Doctoral thesis, Politecnico di Torino, 2022. https://hdl.handle.net/11583/2972788.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Halalchi, Houssem. „Commande linéaire à paramètres variants des robots manipulateurs flexibles“. Phd thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00762367.

Der volle Inhalt der Quelle
Annotation:
Les robots flexibles sont de plus en plus utilisés dans les applications pratiques. Ces robots sont caractérisés par une conception mécanique légère, réduisant ainsi leur encombrement, leur consommation d'énergie et améliorant leur sécurité. Cependant, la présence de vibrations transitoires rend difficile un contrôle précis de la trajectoire de ces systèmes. Cette thèse est précisément consacrée à l'asservissement en position des manipulateurs flexibles dans les espaces articulaire et opérationnel. Des méthodes de commande avancées, basées sur des outils de la commande robuste et de l'optimisation convexe, ont été proposées. Ces méthodes font en particulier appel à la théorie des systèmes linéaires à paramètres variants (LPV) et aux inégalités matricielles linéaires (LMI). En comparaison avec des lois de commande non-linéaires disponibles dans la littérature, les lois de commande LPV proposées permettent de considérer des contraintes de performance et de robustesse de manière simple et systématique. L'accent est porté dans notre travail sur la gestion appropriée de la dépendance paramétrique du modèle LPV, en particulier les dépendances polynomiale et rationnelle. Des simulations numériques effectuées dans des conditions réalistes ont permis d'observer une meilleure robustesse de la commande LPV par rapport à la commande non-linéaire par inversion de modèle face aux bruits de mesure, aux excitations de haute fréquence et aux incertitudes de modèle.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Yan, Zheng. „The Econometrics of Piecewise Linear Budget Constraints With Skewed Error Distributons: An Application To Housing Demand In The Presence Of Capital Gains Taxation“. Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/28606.

Der volle Inhalt der Quelle
Annotation:
This paper examines the extent to which thin markets in conjunction with tax induced kinks in the budget constraint cause consumer demand to be skewed. To illustrate the principles I focus on the demand for owner-occupied housing. Housing units are indivisible and heterogeneous while tastes for housing are at least partly idiosyncratic, causing housing markets to be thin. In addition, prior to 1998, capital gains tax provisions introduced a sharp kink in the budget constraint of existing owner-occupiers in search of a new home: previous homeowners under age 55 paid no capital gains tax if they bought up, but were subject to capital gains tax if they bought down. I first characterize the economic conditions under which households err on the up or down side when choosing a home in the presence of a thin market and a kinked budget constraint. I then specify an empirical model that takes such effects into account. Results based on Monte Carlo experiments indicate that failing to allow for skewness in the demand for housing leads to biased estimates of the elasticities of demand when such skewness is actually present. In addition, estimates based on American Housing Survey data suggest that such bias is substantial: controlling for skewness reduces the price elasticity of demand among previous owner-occupiers from 1.6 to 0.3. Moreover, 58% of previous homeowners err on the up while only 42% err on the down side. Thus, housing demand is skewed.
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Winkler, Gunter. „Control constrained optimal control problems in non-convex three dimensional polyhedral domains“. Doctoral thesis, Universitätsbibliothek Chemnitz, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200800626.

Der volle Inhalt der Quelle
Annotation:
The work selects a specific issue from the numerical analysis of optimal control problems. We investigate a linear-quadratic optimal control problem based on a partial differential equation on 3-dimensional non-convex domains. Based on efficient solution methods for the partial differential equation an algorithm known from control theory is applied. Now the main objectives are to prove that there is no degradation in efficiency and to verify the result by numerical experiments. We describe a solution method which has second order convergence, although the intermediate control approximations are piecewise constant functions. This superconvergence property is gained from a special projection operator which generates a piecewise constant approximation that has a supercloseness property, from a sufficiently graded mesh which compensates the singularities introduced by the non-convex domain, and from a discretization condition which eliminates some pathological cases. Both isotropic and anisotropic discretizations are investigated and similar superconvergence properties are proven. A model problem is presented and important results from the regularity theory of solutions to partial differential equation in non-convex domains have been collected in the first chapters. Then a collection of statements from the finite element analysis and corresponding numerical solution strategies is given. Here we show newly developed tools regarding error estimates and projections into finite element spaces. These tools are necessary to achieve the main results. Known fundamental statements from control theory are applied to the given model problems and certain conditions on the discretization are defined. Then we describe the implementation used to solve the model problems and present all computed results.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Tölli, A. (Antti). „Resource management in cooperative MIMO-OFDM cellular systems“. Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514287763.

Der volle Inhalt der Quelle
Annotation:
Abstract Radio resource management techniques for broadband wireless systems beyond the existing cellular systems are developed while considering their special characteristics such as multi-carrier techniques, adaptive radio links and multiple-input multiple-output (MIMO) antenna techniques. Special focus is put on the design of linear transmission strategies in a cooperative cellular system where signal processing can be performed in a centralised manner across distributed base station (BS) antenna heads. A time-division duplex cellular system based on orthogonal frequency division multiplexing (OFDM) with adaptive MIMO transmission is considered in the case where the received signals are corrupted by non-reciprocal inter-cell interference. A bandwidth efficient closed-loop compensation algorithm combined with interference suppression at the receiver is proposed to compensate for the interference and to guarantee the desired Quality of Service (QoS) when the interference structure is known solely at the receiver. A greedy beam ordering and selection algorithm is proposed to maximise the sum rate of a multiuser MIMO downlink (DL) with a block zero forcing (ZF) transmission. The performance of the block-ZF transmission combined with the greedy scheduling is shown to approach the sum capacity as the number of users increases. The maximum sum rate is often found to be achieved by transmitting to a smaller number of users or beams than the spatial dimensions allow. In addition, a low complexity algorithm for joint user, bit and power allocation with a low signalling overhead is proposed. Different linear transmission schemes, including the ZF as a special case, are developed for the scenario where the cooperative processing of the transmitted signal is applied to users located within a soft handover (SHO) region. The considered optimisation criteria include minimum power beamformer design; balancing the weighted signal-to-interference-plus-noise ratio (SINR) values per data stream; weighted sum rate maximisation; and balancing the weighted rate per user with additional QoS constraints such as guaranteed bit rate per user. The method can accommodate supplementary constraints, e.g., per antenna or per BS power constraints, and upper/lower bounds for the SINR values of the data streams. The proposed iterative algorithms are shown to provide powerful solutions for difficult non-convex transceiver optimisation problems. System level evaluation is performed in order to assess the impact of a realistic multi-cell environment on the performance of a cellular MIMO-OFDM system. The users located in the SHO region are shown to benefit from greatly increased transmission rates. Consequently, significant overall system level gains result from cooperative SHO processing. The proposed SHO scheme can be used for providing a more evenly distributed service over the entire cellular network.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Camargo, Fernando Taietti. „Estudo comparativo de passos espectrais e buscas lineares não monótonas“. Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-16062008-211538/.

Der volle Inhalt der Quelle
Annotation:
O método do Gradiente Espectral, introduzido por Barzilai e Borwein e analisado por Raydan, para minimização irrestrita, é um método simples cujo desempenho é comparável ao de métodos tradicionais como, por exemplo, gradientes conjugados. Desde a introdução do método, assim como da sua extensão para minimização em conjuntos convexos, foram introduzidas várias combinações de passos espectrais diferentes, assim como de buscas lineares não monótonas diferentes. Dos resultados numéricos apresentados em vários trabalhos não é possível inferir se existem diferenças significativas no desempenho dos diversos métodos. Além disso, também não fica clara a relevância das buscas não monótonas como uma ferramenta em si próprias ou se, na verdade, elas são úteis apenas para permitir que o método seja o mais parecido possível com o método original de Barzilai e Borwein. O objetivo deste trabalho é comparar os diversos métodos recentemente introduzidos como combinações de diferentes buscas lineares não monótonas e diferentes passos espectrais para encontrar a melhor combinação e, a partir daí, aferir o desempenho numérico do método.
The Spectral Gradient method, introduced by Barzilai and Borwein and analyzed by Raydan for unconstrained minimization, is a simple method whose performance is comparable to that of traditional methods such as conjugate gradients. Since the introduction of the method, as well as of its extension to minimization over convex sets, various combinations of different spectral steplengths and different nonmonotone line searches have been introduced. From the numerical results presented in many studies it is not possible to infer whether there are significant differences in the performance of the various methods. It is also not clear whether nonmonotone line searches are relevant as a tool in themselves or whether, in fact, they are useful only to keep the method as similar as possible to the original method of Barzilai and Borwein. The objective of this study is to compare the different methods recently introduced as combinations of different nonmonotone line searches and different spectral steplengths, to find the best combination and, from there, to evaluate the numerical performance of the method.
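A minimal Python sketch of one such combination, pairing the Barzilai-Borwein steplength with a max-type nonmonotone Armijo search; the test function, memory length and safeguard values are illustrative choices, not the settings compared in the dissertation.

import numpy as np

def spectral_gradient(f, grad, x0, max_iter=500, memory=10, gamma=1e-4, tol=1e-8):
    x, g = x0.copy(), grad(x0)
    alpha = 1.0
    f_hist = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -alpha * g
        f_ref = max(f_hist[-memory:])          # nonmonotone reference value
        lam = 1.0
        while f(x + lam * d) > f_ref + gamma * lam * g.dot(d):
            lam *= 0.5                         # backtracking
        x_new = x + lam * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        alpha = s.dot(s) / sy if sy > 1e-12 else 1.0   # Barzilai-Borwein steplength
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x

A = np.diag([1.0, 10.0, 100.0])                # ill-conditioned quadratic test problem
f = lambda x: 0.5 * x.dot(A @ x)
grad = lambda x: A @ x
print(spectral_gradient(f, grad, np.ones(3)))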
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Silva, Caroline Albuquerque Dantas. „Proposta de equalizador cego baseado em algoritmos gen?ticos“. PROGRAMA DE P?S-GRADUA??O EM ENGENHARIA EL?TRICA E DE COMPUTA??O, 2016. https://repositorio.ufrn.br/jspui/handle/123456789/21975.

Der volle Inhalt der Quelle
Annotation:
Submitted by Automação e Estatística (sst@bczm.ufrn.br) on 2017-02-13T19:22:38Z No. of bitstreams: 1 CarolineAlbuquerqueDantasSilva_DISSERT.pdf: 1138216 bytes, checksum: b1401c36a2ad5415e6adc770fee68fbc (MD5)
Approved for entry into archive by Arlan Eloi Leite Silva (eloihistoriador@yahoo.com.br) on 2017-02-14T17:45:51Z (GMT) No. of bitstreams: 1 CarolineAlbuquerqueDantasSilva_DISSERT.pdf: 1138216 bytes, checksum: b1401c36a2ad5415e6adc770fee68fbc (MD5)
Made available in DSpace on 2017-02-14T17:45:51Z (GMT). No. of bitstreams: 1 CarolineAlbuquerqueDantasSilva_DISSERT.pdf: 1138216 bytes, checksum: b1401c36a2ad5415e6adc770fee68fbc (MD5) Previous issue date: 2016-07-18
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Esse trabalho propõe um esquema de otimização convexa, baseada em programação linear e algoritmos genéticos, para equalizadores cegos aplicados a sistemas de comunicações digitais. Ele surgiu da necessidade crescente de melhorias nos sistemas de comunicação no intuito de transportar o máximo de informação possível por um meio físico de forma confiável. O esquema proposto, ELC-GA (Equalizador Linear Cego baseado em Algoritmos Genéticos), é caracterizado por realizar a equalização adaptativa cega do canal em blocos fixos de dados, utilizando como algoritmo adaptativo um algoritmo genético, cuja função objetivo é uma função linear com restrições, globalmente convergente. Entretanto, devido às características aleatórias do sinal modelado com interferência intersimbólica e ruído aditivo branco gaussiano, a função linear utilizada passa a representar uma programação linear estocástica. Nesse sentido, o uso de algoritmos genéticos é particularmente adequado por ser capaz de buscar soluções ótimas percorrendo uma porção considerável do espaço de busca, que corresponde aos vários cenários estocásticos. O trabalho também descreve os detalhes de implementação do esquema proposto e as simulações computacionais realizadas. Na análise de desempenho, os resultados do ELC-GA são comparados aos resultados de uma das mais tradicionais técnicas de equalização cega, o CMA, utilizado como referência dessa análise. Os resultados obtidos são exibidos e comentados segundo as métricas de análise adequadas. As conclusões do trabalho apontam o ELC-GA como uma alternativa promissora para equalização cega devido ao seu desempenho de equalização, que atinge a convergência global num intervalo de símbolos consideravelmente menor que a técnica usada como referência.
This work proposes a convex optimization scheme, based on linear programming and genetic algorithms, for blind equalizers applied to digital communication systems. It arose from the growing need for improvements in communication systems in order to transmit as much information as possible over a physical medium in a reliable way. The proposed scheme, ELC-GA (Blind Linear Equalizer based on Genetic Algorithms), is characterized by performing blind adaptive channel equalization on fixed blocks of data, using a genetic algorithm as the adaptive algorithm, whose objective function is a globally convergent constrained linear function. However, due to the random characteristics of the signal, modeled with intersymbol interference and additive white Gaussian noise, the linear function used comes to represent a stochastic linear program. In this sense, the use of genetic algorithms is particularly suitable, since they are able to search for optimal solutions while covering a considerable portion of the search space, which corresponds to the various stochastic scenarios. This work also describes the implementation details of the proposed scheme and the computational simulations performed. In the performance analysis, the ELC-GA results are compared to the results of one of the most traditional blind equalization techniques, the CMA, used as the reference of this analysis. The results are shown and discussed according to the appropriate analysis metrics. The conclusions of the study indicate the ELC-GA as a promising alternative for blind equalization due to its equalization performance, which reaches global convergence within a considerably smaller number of symbols than the reference technique.
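For reference, a minimal Python sketch of the CMA baseline mentioned above (not of the ELC-GA itself); the channel taps, filter length, step size and BPSK source are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
channel = np.array([1.0, 0.5, 0.2])            # assumed ISI channel
src = rng.choice([-1.0, 1.0], size=5000)       # BPSK symbols
rx = np.convolve(src, channel)[:len(src)] + 0.01 * rng.standard_normal(len(src))

L, mu = 11, 1e-3
w = np.zeros(L); w[L // 2] = 1.0               # center-spike initialization
R2 = np.mean(src ** 4) / np.mean(src ** 2)     # constant-modulus dispersion constant

for n in range(L, len(rx)):
    x = rx[n - L:n][::-1]                      # regressor (most recent sample first)
    y = w @ x                                  # equalizer output
    e = y * (y ** 2 - R2)                      # CMA error term
    w -= mu * e * x                            # stochastic gradient update

print(np.round(w, 3))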
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Yu, Haofeng. „A Numerical Investigation Of The Canonical Duality Method For Non-Convex Variational Problems“. Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/29095.

Der volle Inhalt der Quelle
Annotation:
This thesis represents a theoretical and numerical investigation of the canonical duality theory, which has been recently proposed as an alternative to the classic and direct methods for non-convex variational problems. These non-convex variational problems arise in a wide range of scientific and engineering applications, such as phase transitions, post-buckling of large deformed beam models, nonlinear field theory, and superconductivity. The numerical discretization of these non-convex variational problems leads to global minimization problems in a finite dimensional space. The primary goal of this thesis is to apply the newly developed canonical duality theory to two non-convex variational problems: a modified version of Ericksen's bar and a problem of Landau-Ginzburg type. The canonical duality theory is investigated numerically and compared with classic methods of numerical nature. Both advantages and shortcomings of the canonical duality theory are discussed. A major component of this critical numerical investigation is a careful sensitivity study of the various approaches with respect to changes in parameters, boundary conditions and initial conditions.
Ph. D.
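The following is a minimal sketch of the kind of discretized non-convex problem referred to above: a one-dimensional bar energy with a double-well stored-energy density, minimized directly with a generic local optimizer. It is meant only to illustrate the non-convexity (multiple local minimizers) that motivates the canonical duality approach; the energy density, load, boundary conditions and discretization below are illustrative assumptions, not the exact formulations used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative Ericksen-bar-type energy on [0, 1]:
#   I(u) = sum_e h * [ W(u'_e) - f * u_mid_e ],   W(s) = (s^2 - 1)^2 / 4,
# with clamped ends u(0) = u(1) = 0.  W is a double well, so I is non-convex
# and a direct local method may converge to different minimizers depending
# on the starting point -- the difficulty the canonical duality theory targets.

n = 50                      # number of elements (assumed)
h = 1.0 / n
f = 0.1                     # constant distributed load (assumed)

def energy(u_inner):
    u = np.concatenate(([0.0], u_inner, [0.0]))   # clamped ends
    du = np.diff(u) / h                           # elementwise strain u'
    umid = 0.5 * (u[:-1] + u[1:])                 # element midpoint values
    W = 0.25 * (du ** 2 - 1.0) ** 2               # double-well density
    return h * np.sum(W - f * umid)

# Two different starting points typically land in different local minima.
for seed in (0, 1):
    rng = np.random.default_rng(seed)
    u0 = 0.1 * rng.standard_normal(n - 1)
    res = minimize(energy, u0, method="BFGS")
    print(f"start {seed}: energy = {res.fun:.6f}")
```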
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Gladkikh, Egor. „Optimisation de l'architecture des réseaux de distribution d'énergie électrique“. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT055/document.

Der volle Inhalt der Quelle
Annotation:
Pour faire face aux mutations du paysage énergétique, les réseaux de distribution d'électricité sont soumis à des exigences de fonctionnement avec des indices de fiabilité à garantir. Dans les années à venir, de grands investissements sont prévus pour la construction des réseaux électriques flexibles, cohérents et efficaces, basés sur de nouvelles architectures et des solutions techniques innovantes, adaptatifs à l'essor des énergies renouvelables. En prenant en compte ces besoins industriels sur le développement des réseaux de distribution du futur, nous proposons, dans cette thèse, une approche reposant sur la théorie des graphes et l'optimisation combinatoire pour la conception de nouvelles architectures pour les réseaux de distribution. Notre démarche consiste à étudier le problème général de recherche d'une architecture optimale qui respecte l'ensemble de contraintes topologiques (redondance) et électrotechniques (courant maximal, plan de tension) selon des critères d'optimisation bien précis : minimisation du coût d'exploitation (OPEX) et minimisation de l'investissement (CAPEX). Ainsi donc, les deux familles des problèmes combinatoires (et leurs relaxations) ont été explorées pour proposer des résolutions efficaces (exactes ou approchées) du problème de planification des réseaux de distribution en utilisant une formulation adaptée. Nous nous sommes intéressés particulièrement aux graphes 2-connexes et au problème de flot arborescent avec pertes quadratiques minimales. Les résultats comparatifs de tests sur les instances de réseaux (fictifs et réels) pour les méthodes proposées ont été présentés
To cope with the changes in the energy landscape, electrical distribution networks are subject to operating requirements with reliability indices to be guaranteed. In the coming years, major investments are planned for the construction of flexible, consistent and efficient electrical networks, based on new architectures and innovative technical solutions and adapted to the growth of renewable energy. Taking into account these industrial needs for the development of future distribution networks, we propose in this thesis an approach based on graph theory and combinatorial optimization for the design of new architectures for distribution networks. Our approach is to study the general problem of finding an optimal architecture that respects a set of topological (redundancy) and electrical (maximum current, voltage profile) constraints according to precise optimization criteria: minimization of operating cost (OPEX) and minimization of investment (CAPEX). Thus, two families of combinatorial problems (and their relaxations) were explored in order to propose effective (exact or approximate) resolutions of the distribution network planning problem using an adapted formulation. We are particularly interested in 2-connected graphs and in the arborescent flow problem with minimum quadratic losses. Comparative results of tests of the proposed methods on (fictitious and real) network instances are presented.
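As a small illustration of the quadratic-loss criterion mentioned above, the sketch below evaluates I²R-style losses on a radial (tree-shaped) feeder, given assumed branch resistances and load currents. The topology, the numbers and the simple downstream-current aggregation are made-up assumptions for illustration, not the thesis's formulation or its test instances.

```python
# Minimal sketch: quadratic (I^2 * R) losses on a radial distribution feeder.
# The feeder is a tree rooted at the substation (node 0); every branch carries
# the sum of the load currents of all nodes downstream of it.

parent = {1: 0, 2: 1, 3: 1, 4: 2}                   # node -> parent branch
load_current = {1: 10.0, 2: 5.0, 3: 8.0, 4: 4.0}    # amperes drawn at each node
resistance = {1: 0.05, 2: 0.08, 3: 0.06, 4: 0.10}   # ohms of branch (parent -> node)

def branch_currents(parent, load_current):
    """Aggregate load currents upstream so each branch carries its subtree's demand."""
    current = dict(load_current)

    def depth(v):
        d = 0
        while v != 0:
            v = parent[v]
            d += 1
        return d

    # Process nodes from the leaves toward the root (deepest first).
    for v in sorted(parent, key=depth, reverse=True):
        p = parent[v]
        if p != 0:
            current[p] += current[v]
    return current

current = branch_currents(parent, load_current)
losses = sum(resistance[v] * current[v] ** 2 for v in parent)
print(f"total quadratic losses: {losses:.2f} W")
```

In a planning model, this loss term is what would appear in the OPEX part of the objective, while the topological constraints (e.g. 2-connectedness for redundancy) restrict the admissible architectures.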
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Lu, Zhaosong. „Algorithm Design and Analysis for Large-Scale Semidefinite Programming and Nonlinear Programming“. Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7151.

Der volle Inhalt der Quelle
Annotation:
The limiting behavior of weighted paths associated with the semidefinite program (SDP) map $X^{1/2}SX^{1/2}$ was studied and some applications to error bound analysis and superlinear convergence of a class of primal-dual interior-point methods were provided. A new approach for solving large-scale well-structured sparse SDPs via a saddle point mirror-prox algorithm with $\mathcal{O}(\epsilon^{-1})$ efficiency was developed based on exploiting sparsity structure and reformulating SDPs into smooth convex-concave saddle point problems. An iterative solver-based long-step primal-dual infeasible path-following algorithm for convex quadratic programming (CQP) was developed. The search directions of this algorithm were computed by means of a preconditioned iterative linear solver. A uniform bound, depending only on the CQP data, on the number of iterations performed by a preconditioned iterative linear solver was established. A polynomial bound on the number of iterations of this algorithm was also obtained. One efficient "nearly exact" type of method for solving large-scale "low-rank" trust region subproblems was proposed by completely avoiding the computations of Cholesky or partial Cholesky factorizations. A computational study of this method was also provided by applying it to solve some large-scale nonlinear programming problems.
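To make one of the ingredients above concrete, here is a minimal solver for the classical trust-region subproblem, min ½xᵀHx + gᵀx subject to ‖x‖ ≤ δ, using an eigendecomposition and bisection on the secular equation. This is a textbook illustration that ignores the degenerate "hard case"; it is not the factorization-free "nearly exact" low-rank method developed in the thesis, and all names and tolerances are assumptions.

```python
import numpy as np

def trust_region_subproblem(H, g, delta, tol=1e-10):
    """Solve min 0.5*x'Hx + g'x  s.t. ||x|| <= delta (hard case ignored)."""
    H = 0.5 * (H + H.T)                       # symmetrize
    evals, Q = np.linalg.eigh(H)
    gt = Q.T @ g                              # g expressed in the eigenbasis

    def step_norm(lam):
        # ||x(lam)|| where x(lam) = -(H + lam*I)^{-1} g
        return np.linalg.norm(gt / (evals + lam))

    # The unconstrained minimizer is optimal if H is PD and the step is short enough.
    if evals[0] > 0 and step_norm(0.0) <= delta:
        return Q @ (-gt / evals)

    # Otherwise find lam > max(0, -lambda_min) with ||x(lam)|| = delta by bisection.
    lo = max(0.0, -evals[0]) + 1e-12
    hi = lo + 1.0
    while step_norm(hi) > delta:
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if step_norm(mid) > delta:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return Q @ (-gt / (evals + lam))
```

The full eigendecomposition costs O(n^3), which is exactly what a large-scale, factorization-avoiding method would seek to sidestep.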
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Jonsson, Robin. „Optimal Linear Combinations of Portfolios Subject to Estimation Risk“. Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28524.

Der volle Inhalt der Quelle
Annotation:
The combination of two or more portfolio rules is theoretically convex in return-risk space, which gives rise to a new class of portfolio rules that makes the Mean-Variance framework useful out-of-sample. The author investigates the performance loss from estimation risk between the unconstrained Mean-Variance portfolio and the out-of-sample Global Minimum Variance portfolio. A new two-fund rule is developed within a specific class of combined rules, between the equally weighted portfolio and a mean-variance portfolio whose covariance matrix is estimated by linear shrinkage. The study shows that this rule performs well out-of-sample when covariance estimation error and bias are balanced, and that it performs at least as well as its peer group in this class of combined rules.
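The sketch below illustrates the general flavour of such a combined rule: it blends the equally weighted (1/N) portfolio with a minimum-variance portfolio built from a shrunk covariance matrix. The shrinkage target (scaled identity), the fixed blending weight, and the unconstrained closed-form minimum-variance weights are simplifying assumptions for illustration only and do not reproduce the specific two-fund rule or estimators studied in the thesis.

```python
import numpy as np

def combined_portfolio(returns, shrink=0.3, blend=0.5):
    """Blend the 1/N portfolio with a minimum-variance portfolio computed from a
    linearly shrunk covariance estimate.  `shrink` and `blend` are assumed constants."""
    T, N = returns.shape
    S = np.cov(returns, rowvar=False)                 # sample covariance (N x N)
    target = np.trace(S) / N * np.eye(N)              # scaled-identity shrinkage target
    sigma = (1.0 - shrink) * S + shrink * target      # linear shrinkage estimate

    ones = np.ones(N)
    w_mv = np.linalg.solve(sigma, ones)               # unconstrained min-variance direction
    w_mv /= w_mv.sum()                                # normalize weights to sum to one
    w_ew = ones / N                                   # equally weighted portfolio

    return blend * w_ew + (1.0 - blend) * w_mv        # two-fund combination

# Example with purely synthetic return data.
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=(250, 8))
print(np.round(combined_portfolio(rets), 3))
```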
APA, Harvard, Vancouver, ISO und andere Zitierweisen
