Theses on the topic "Global Optimization, Clustering Methods"

To see other types of publications on this topic, follow the link: Global Optimization, Clustering Methods.

Create a correct reference in APA, MLA, Chicago, Harvard, and other styles

Choose a source:

Consult the top 50 dissertations / theses for your research on the topic "Global Optimization, Clustering Methods".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

SOUZA, Ellen Polliana Ramos. « Swarm optimization clustering methods for opinion mining ». Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/25227.

Full text
Abstract:
Opinion Mining (OM), also known as sentiment analysis, is the field of study that analyzes people's sentiments, evaluations, attitudes, and emotions about different entities expressed in textual input. This is accomplished by classifying an opinion into categories such as positive, negative, or neutral. Supervised machine learning (ML) and lexicon-based approaches are the most frequent approaches to OM. However, these approaches require considerable effort to prepare the training data and to build the opinion lexicon, respectively. To address these drawbacks, this thesis proposes an unsupervised clustering approach to the OM task that produces accurate results across several domains without manually labeled training data and without language-dependent tools. Three swarm algorithms based on Particle Swarm Optimization (PSO) and Cuckoo Search (CS) are proposed: DPSOMUT, based on a discrete binary PSO; IDPSOMUT, based on an Improved Self-Adaptive PSO algorithm with a detection function; and IDPSOMUT/CS, a hybrid of IDPSOMUT and CS. Several experiments were conducted with different corpus types, domains, text languages, class balancing, fitness functions, and pre-processing techniques. The effectiveness of the clustering algorithms was evaluated with external measures such as accuracy, precision, recall, and F-score. The statistical analysis showed that the swarm-based algorithms, especially the PSO ones, found better solutions than conventional grouping techniques such as K-means and Agglomerative clustering. The PSO-based algorithms achieved better accuracy using word-bigram pre-processing and the Global Silhouette as fitness function. The OBCC corpus is another contribution of this thesis: a gold collection of 2,940 tweets in Brazilian Portuguese with consumers' opinions about products and services.
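The discrete-PSO clustering idea in the abstract can be sketched in a few dozen lines. This is a generic binary PSO (sigmoid transfer on velocities) with negative within-cluster scatter as the fitness, standing in for the thesis's DPSOMUT and Global Silhouette; the greedy reassignment polish at the end is an addition of this sketch, and all settings are illustrative assumptions, not the author's implementation.

```python
import math
import random

def fitness(points, labels):
    """Negative total within-cluster squared distance (higher is better).
    Stand-in for the Global Silhouette fitness used in the thesis."""
    total = 0.0
    for c in (0, 1):
        members = [p for p, l in zip(points, labels) if l == c]
        if not members:
            return float("-inf")  # penalize empty clusters
        centre = sum(members) / len(members)
        total += sum((p - centre) ** 2 for p in members)
    return -total

def polish(points, labels):
    """Greedy single-point reassignment until no flip improves the fitness
    (a local refinement added for this sketch, not part of the thesis)."""
    labels = labels[:]
    improved = True
    while improved:
        improved = False
        for d in range(len(labels)):
            trial = labels[:]
            trial[d] ^= 1
            if fitness(points, trial) > fitness(points, labels):
                labels, improved = trial, True
    return labels

def binary_pso_cluster(points, n_particles=24, iters=80, seed=0):
    """Two-cluster assignment of 1-D points by a basic binary PSO."""
    rng = random.Random(seed)
    n = len(points)
    swarm = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [s[:] for s in swarm]
    pbest_f = [fitness(points, s) for s in swarm]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i, s in enumerate(swarm):
            for d in range(n):
                v = (0.7 * vel[i][d]
                     + 1.5 * rng.random() * (pbest[i][d] - s[d])
                     + 1.5 * rng.random() * (gbest[d] - s[d]))
                vel[i][d] = max(-4.0, min(4.0, v))  # clamp so flips stay possible
                # Sigmoid transfer: velocity -> probability that the bit is 1.
                s[d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            f = fitness(points, s)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = s[:], f
                if f > gbest_f:
                    gbest, gbest_f = s[:], f
    return polish(points, gbest)
```

On two well-separated groups, e.g. [0.1, 0.2, 0.3, 5.0, 5.1, 5.2], the returned labels separate the groups (up to a label swap).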
APA, Harvard, Vancouver, ISO, and other styles
2

Ren, Zhiwei. « Portfolio Construction using Clustering Methods ». Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-042605-092010/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Schütze, Oliver. « Set oriented methods for global optimization ». [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=976566982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gutmann, H. M. « Radial basis function methods for global optimization ». Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599804.

Full text
Abstract:
In many real-world optimization problems it is essential or desirable to determine the global minimum of the objective function. The subject of this dissertation is a new class of methods that tackle such problems. In particular, we have in mind problems where function evaluations are expensive and no additional information is available. The methods employ radial basis functions, which have proved useful for interpolation problems; examples include thin plate splines and multiquadrics. Specifically, in each iteration, radial basis function interpolation is used to define a utility function, and a maximizer of this function is chosen as the next point where the objective function is evaluated. Relations to similar optimization methods are established, and a general framework is presented that combines those methods and ours. A large part of the dissertation is devoted to the convergence theory. We show that convergence can be achieved for most types of basis functions without further assumptions on the objective function. For other types, however, a similar result could not be obtained, due to the properties of the so-called native space associated with a basis function. In particular, it is of interest whether this space contains sufficiently smooth functions with compact support. To address this question, we present two approaches. First, we establish a characterization of the native space in terms of generalized Fourier transforms. For many types, for example thin plate splines, this helps to derive conditions on the smoothness of a function that guarantee it is in the native space. For other types, for example multiquadrics, we show that the native space does not contain a nonzero function with compact support. The second approach gives slightly weaker results, but it employs some new theory using interpolation on an infinite regular grid.
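The iteration described above (interpolate, pick the next evaluation point from the surrogate, re-interpolate) can be sketched as follows. The sketch assumes a cubic RBF with a linear polynomial tail and, for simplicity, takes the surrogate minimum itself as the next sample, rather than maximizing Gutmann's bumpiness-based utility; all helper names are hypothetical.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for the small RBF system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf(xs, ys):
    """Cubic RBF interpolant s(x) = sum_i w_i |x - x_i|^3 + a + b*x."""
    xs = list(xs)
    n = len(xs)
    A = [[abs(xs[i] - xs[j]) ** 3 for j in range(n)] + [1.0, xs[i]]
         for i in range(n)]
    A.append([1.0] * n + [0.0, 0.0])   # side condition: sum w_i = 0
    A.append(xs[:] + [0.0, 0.0])       # side condition: sum w_i * x_i = 0
    coef = solve(A, list(ys) + [0.0, 0.0])
    w, a, b = coef[:n], coef[n], coef[n + 1]
    return lambda x: sum(wi * abs(x - xi) ** 3 for wi, xi in zip(w, xs)) + a + b * x

def rbf_optimize(f, lo, hi, n_iter=6):
    """Evaluate f where the current surrogate is lowest, refit, repeat."""
    xs = [lo, (lo + hi) / 2.0, hi]
    ys = [f(x) for x in xs]
    grid = [lo + (hi - lo) * i / 200.0 for i in range(201)]
    for _ in range(n_iter):
        s = fit_rbf(xs, ys)
        # Skip already-sampled points so the interpolation stays nonsingular.
        cand = [x for x in grid if min(abs(x - xi) for xi in xs) > 1e-9]
        x_new = min(cand, key=s)
        xs.append(x_new)
        ys.append(f(x_new))
    i = min(range(len(xs)), key=lambda j: ys[j])
    return xs[i], ys[i]
```

With an expensive f standing behind the lambda, only 3 + 6 true evaluations are spent; on f(x) = (x - 1.3)^2 over [0, 2] the best sampled point lands near the minimum.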
APA, Harvard, Vancouver, ISO, and other styles
5

Stepanenko, Svetlana. « Global Optimization Methods based on Tabu Search ». Doctoral thesis, kostenfrei, 2008. http://www.opus-bayern.de/uni-wuerzburg/volltexte/2008/3060/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pettersson, Tobias. « Global optimization methods for estimation of descriptive models ». Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11781.

Full text
Abstract:

Using mathematical models to understand and store knowledge about a system is not a new field in science, with early contributions dating back to, e.g., Kepler's laws of planetary motion.

The aim is to obtain such comprehensive, predictive, and quantitative knowledge about a phenomenon that mathematical expressions or models can be used to forecast every relevant detail about it. Such models can be used for reducing pollution from car engines, preventing aviation incidents, or developing new therapeutic drugs. Models used to forecast, or predict, the behavior of a system are referred to as predictive models. For these, the estimation problem aims to find a single model; it is well known and can be handled using standard methods for global nonlinear optimization.

Descriptive models are used to obtain and store quantitative knowledge of a system. Estimation of descriptive models has received little attention in the literature so far; instead, the methods used for predictive models have been applied. Rather than finding one particular model, parameter estimation for descriptive models aims to find every model that contains descriptive information about the system. Thus, the parameter estimation problem for descriptive models cannot be stated as a standard optimization problem.

The main objective of this thesis is to propose methods for the estimation of descriptive models. This is done using methods for nonlinear optimization, including both new and existing theory.
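The predictive/descriptive distinction can be made concrete with a toy: instead of returning the single least-squares optimum, return every parameter whose misfit is within a tolerance. The linear model y = a*x and the plain grid sweep below are assumptions of this illustration, not the thesis's method.

```python
def acceptable_models(xs, ys, grid, tol):
    """Return every parameter value whose sum of squared errors is <= tol,
    i.e. the whole set of models that describe the data acceptably,
    rather than the single best-fit parameter."""
    kept = []
    for a in grid:
        sse = sum((y - a * x) ** 2 for x, y in zip(xs, ys))
        if sse <= tol:
            kept.append(a)
    return kept
```

On data generated by a = 2, the result is a whole interval of acceptable parameters around 2, not a single point, which is exactly the set a descriptive-model estimate must report.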

APA, Harvard, Vancouver, ISO, and other styles
7

McMeen, John Norman Jr. « Ranking Methods for Global Optimization of Molecular Structures ». Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etd/2447.

Full text
Abstract:
This work presents heuristics for searching large sets of molecular structures for low-energy, stable systems. The goal is to find the globally optimal structures in less time or with fewer computational resources. The strategies intermittently evaluate and rank structures during molecular dynamics optimizations, culling likely weaker solutions early and leaving better solutions to receive more simulation time. Although some imprecision is introduced by not allowing all structures to optimize fully before ranking, the strategies identify metrics that can make these searches more efficient when computational resources are limited.
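The evaluate-rank-cull strategy can be sketched as a successive-halving style loop, where each round's larger budget shrinks the noise in the energy estimates. The noise model and all settings below are assumptions of this sketch, not the thesis's molecular dynamics metrics.

```python
import random

def cull_search(true_energies, rounds=3, seed=1):
    """Halve the candidate pool each round; survivors get a larger
    simulation budget, which shrinks the noise in their energy estimates."""
    rng = random.Random(seed)
    alive = list(range(len(true_energies)))
    budget = 1
    for _ in range(rounds):
        # Noisy partial evaluation: more budget -> tighter estimate.
        est = {i: true_energies[i] + rng.uniform(-0.5, 0.5) / budget
               for i in alive}
        alive.sort(key=lambda i: est[i])
        alive = alive[:max(1, len(alive) // 2)]  # cull the weaker half
        budget *= 2
    return alive[0]  # index of the best surviving candidate
```

With true energies spaced farther apart than the estimation noise, the globally best structure survives every cull while the total simulation budget stays far below fully optimizing all candidates.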
APA, Harvard, Vancouver, ISO, and other styles
8

Akteke, Basak. « Derivative Free Optimization Methods : Application In Stirrer Configuration And Data Clustering ». Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606591/index.pdf.

Full text
Abstract:
Recent developments show that derivative-free methods are in high demand among researchers for solving optimization problems in various practical contexts. Although well-known optimization methods that employ derivative information can be very efficient, a derivative-free method will be more efficient in cases where the objective function is nondifferentiable or the derivative information is not available or not reliable. Derivative Free Optimization (DFO) was developed for solving small-dimensional problems (fewer than 100 variables) in which the computation of the objective function is relatively expensive and its derivatives are not available. Problems of this nature arise more and more in modern physical, chemical, and econometric measurements and in engineering applications, where computer simulation is employed for the evaluation of the objective functions. In this thesis, we give an example of the implementation of DFO in an approach for optimizing stirrer configurations, comprising a parametrized grid generator, a flow solver, and DFO. A derivative-free method, i.e., DFO, is preferred because the gradient of the objective function with respect to the stirrer's design variables is not directly available; this nonlinear objective function is obtained from the flow field by the flow solver. We present and interpret numerical results of this implementation. Moreover, a contribution is made to a survey and classification of DFO research directions, together with an analysis and discussion of them. We also present a derivative-free algorithm used within a clustering algorithm, in combination with non-smooth optimization techniques, to demonstrate the effectiveness of derivative-free methods in computations. This algorithm is applied to data sets from various sources in public life and medicine. We compare various methods and their practical backgrounds, and conclude with a summary and outlook. This work may serve as a preparation for possible future research.
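A minimal example of a derivative-free method in the same spirit (a compass/coordinate search rather than the DFO trust-region software used in the thesis): poll the objective along each coordinate direction, move on improvement, and shrink the step otherwise.

```python
def compass_search(f, x0, step=1.0, min_step=1e-6):
    """Derivative-free compass search: needs only objective values,
    never a gradient."""
    x = list(x0)
    fx = f(x)
    while step > min_step:
        improved = False
        for d in range(len(x)):
            for sign in (1.0, -1.0):
                trial = x[:]
                trial[d] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5  # no poll direction helped: refine the mesh
    return x, fx
```

Minimizing (x - 1)^2 + (y + 2)^2 from (0, 0) converges to (1, -2) without ever evaluating a derivative, which is the situation the flow-solver objective above puts the optimizer in.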
APA, Harvard, Vancouver, ISO, and other styles
9

Stolpe, Mathias. « On Models and Methods for Global Optimization of Structural Topology ». Doctoral thesis, KTH, Mathematics, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3478.

Full text
Abstract:

This thesis consists of an introduction and seven independent, but closely related, papers which all deal with problems in structural optimization. In particular, we consider models and methods for global optimization of problems in topology design of discrete and continuum structures.

In the first four papers of the thesis the nonconvex problem of minimizing the weight of a truss structure subject to stress constraints is considered. First it is shown that a certain subclass of these problems can equivalently be cast as linear programs and thus efficiently solved to global optimality. Thereafter, the behavior of a certain well-known perturbation technique is studied. It is concluded that, in practice, this technique cannot guarantee that a global minimizer is found. Finally, a convergent continuous branch-and-bound method for global optimization of minimum weight problems with stress, displacement, and local buckling constraints is developed. Using this method, several problems taken from the literature are solved with a proof of global optimality for the first time.

The last three papers of the thesis deal with topology optimization of discretized continuum structures. These problems are usually modeled as mixed or pure nonlinear 0-1 programs. First, the behavior of certain often-used penalization methods for minimum compliance problems is studied. It is concluded that these methods may fail to produce a zero-one solution to the considered problem. To remedy this, a material interpolation scheme based on a rational function, such that compliance becomes a concave function, is proposed. Finally, it is shown that a broad range of nonlinear 0-1 topology optimization problems, including stress- and displacement-constrained minimum weight problems, can equivalently be modeled as linear mixed 0-1 programs. This result implies that any of the standard methods available for general linear integer programming can now be used on topology optimization problems.

Keywords: topology optimization, global optimization, stress constraints, linear programming, mixed integer programming, branch-and-bound.

APA, Harvard, Vancouver, ISO, and other styles
10

Robertson, Blair Lennon. « Direct Search Methods for Nonsmooth Problems using Global Optimization Techniques ». Thesis, University of Canterbury. Mathematics and Statistics, 2010. http://hdl.handle.net/10092/5060.

Full text
Abstract:
This thesis considers the practical problem of constrained and unconstrained local optimization. The subject has been well studied when the objective function f is assumed to be smooth, but nonsmooth problems occur naturally and frequently in practice. Here f is assumed to be nonsmooth or discontinuous, without forcing smoothness assumptions near, or at, a potential solution. Various methods have been presented by others to solve nonsmooth optimization problems, but only partial convergence results are possible for them. In this thesis, an optimization method which uses a series of local and localized global optimization phases is proposed. The local phase searches for a local minimum and gives the method its numerical performance on the smooth parts of f. The localized global phase exhaustively searches for points of descent in a neighborhood of cluster points, and it is this phase that provides strong theoretical convergence results on nonsmooth problems. Algorithms are presented for solving bound-constrained, unconstrained, and constrained nonlinear nonsmooth optimization problems. These algorithms use direct search methods in the local phase, since such methods can be applied directly to nonsmooth problems because gradients are not explicitly required. The localized global optimization phase uses a new partitioning random search algorithm to direct random sampling into promising subsets of ℝⁿ. The partition is formed using classification and regression trees (CART) from statistical pattern recognition. The CART partition defines desirable subsets where f is relatively low, based on previous sampling, from which further samples are drawn directly. For each algorithm, convergence to an essential local minimizer of f is demonstrated under mild conditions; that is, to a point x* for which the set of all feasible points with lower f values has Lebesgue measure zero for all sufficiently small neighborhoods of x*. Stopping rules are derived for each algorithm, giving practical convergence to estimates of essential local minimizers. Numerical results are presented on a range of nonsmooth test problems in 2 to 10 dimensions, showing the methods are effective in practice.
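The partition-then-sample idea can be caricatured as follows, with a single bounding interval of the best samples standing in for the CART-defined low-f subsets; all settings are assumptions of this sketch, not the thesis's algorithm.

```python
import random

def partition_search(f, lo, hi, n_rounds=8, batch=25, seed=3):
    """Alternate global sampling with focused sampling inside the interval
    spanned by the best points seen so far (a crude stand-in for the
    CART-defined low-f subsets of the original algorithm)."""
    rng = random.Random(seed)
    samples = [rng.uniform(lo, hi) for _ in range(batch)]
    values = [f(x) for x in samples]
    for _ in range(n_rounds):
        # "Partition": bounding interval of the best 20% of samples.
        order = sorted(range(len(samples)), key=lambda i: values[i])
        elite = [samples[i] for i in order[:max(2, len(samples) // 5)]]
        a, b = min(elite), max(elite)
        for _ in range(batch):
            # Half the budget inside the promising region, half global,
            # so no part of the domain is ever starved of samples.
            x = rng.uniform(a, b) if rng.random() < 0.5 else rng.uniform(lo, hi)
            samples.append(x)
            values.append(f(x))
    i = min(range(len(samples)), key=lambda j: values[j])
    return samples[i], values[i]
```

Keeping a global sampling fraction is what preserves the exhaustiveness that the convergence theory above relies on; pure focusing could permanently miss a thin basin.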
APA, Harvard, Vancouver, ISO, and other styles
11

Zhang, Jiapu. « Derivative-free hybrid methods in global optimization and their applications ». Thesis, University of Ballarat, 2005. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/34054.

Full text
Abstract:
In recent years, large-scale global optimization (GO) problems have drawn considerable attention. These problems have many applications, in particular in data mining and biochemistry. Numerical methods for GO are often very time consuming and cannot be applied to high-dimensional non-convex and/or non-smooth optimization problems. The thesis explores the reasons why we need to develop and study new algorithms for solving large-scale GO problems .... The thesis presents several derivative-free hybrid methods for large-scale GO problems. These methods do not guarantee the calculation of a global solution; however, results of numerical experiments presented in this thesis demonstrate that they, as a rule, calculate a solution which is a global one or close to it. Their applications to data mining problems and the protein folding problem are demonstrated.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
12

Hu, Jiaqiao. « Randomized search methods for solving Markov decision processes and global optimization ». College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3770.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
13

Raber, Ulrich. « Nonconvex all-quadratic global optimization problems : solution methods, application and related topics ». [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=961036478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Quttineh, Nils-Hassan. « Models and Methods for Costly Global Optimization and Military Decision Support Systems ». Doctoral thesis, Linköpings universitet, Optimeringslära, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-77078.

Full text
Abstract:
The thesis consists of five papers. The first three deal with topics within costly global optimization; the last two concern military decision support systems. The first part of the thesis addresses so-called costly problems, where the objective function is seen as a "black box" to which input parameter values are sent and a function value is returned. This means in particular that no derivative information is available. The black box could, for example, solve a large system of differential equations or carry out a time-consuming simulation, where a single function evaluation can take several hours. This is the reason for describing such problems as costly and why they require customized algorithms. The goal is to construct algorithms that find a (near-)optimal solution using as few function evaluations as possible. A good example of a real-life application comes from the automotive industry, where the development of new engines utilizes advanced mathematical models governed by a dozen key parameters. The objective is to optimize the engine by changing these parameters in such a way that it becomes as energy efficient as possible, while still meeting all sorts of demands on strength and external constraints. The first three papers describe algorithms and implementation details for these costly global optimization problems. The second part deals with military mission planning, that is, problems that concern logistics, allocation, and deployment of military resources. Given a fleet of resources, the decision problem is to allocate the resources against the enemy so that overall mission success is optimized. We focus on the problem of the attacker and consider two separate problem classes. In the fourth paper we introduce an effect-oriented planning approach to an advanced weapon-target allocation problem, where the objective is to maximize the expected outcome of a coordinated attack. We present a mathematical model together with efficient solution techniques. Finally, in the fifth paper, we introduce a military aircraft mission planning problem, where an aircraft fleet should attack a given set of targets. Aircraft routing is an essential part of the problem, and the objective is to maximize the expected mission success while minimizing the overall mission time. The problem is stated as a generalized vehicle routing model with synchronization and precedence side constraints.
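The weapon-target allocation objective described in the abstract, maximize sum_j v_j * (1 - prod_i (1 - p_ij)), can be illustrated with a simple greedy rule. This is an illustration of the problem only, not the paper's solution technique: repeatedly give the next weapon to the target with the largest marginal gain in expected destroyed value.

```python
def greedy_wta(values, probs):
    """probs[i][j] is the kill probability of weapon i against target j.
    Greedily assign each weapon to the target with the best marginal gain."""
    survive = [1.0] * len(values)  # current survival probability per target
    assignment = []
    for row in probs:
        # Marginal gain of adding this weapon to target j:
        # value * current survival probability * kill probability.
        gains = [values[j] * survive[j] * row[j] for j in range(len(values))]
        j = max(range(len(values)), key=lambda t: gains[t])
        assignment.append(j)
        survive[j] *= 1.0 - row[j]
    expected = sum(v * (1.0 - s) for v, s in zip(values, survive))
    return assignment, expected
```

With targets valued 10 and 1 and three weapons of kill probability 0.5, all three weapons pile onto the valuable target, for an expected destroyed value of 10 * (1 - 0.5^3) = 8.75; the diminishing marginal gain is what a coordinated-attack optimizer must trade off.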
APA, Harvard, Vancouver, ISO, and other styles
15

Koullias, Stefanos. « Methodology for global optimization of computationally expensive design problems ». Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49085.

Full text
Abstract:
The design of unconventional aircraft requires early use of high-fidelity physics-based tools to search the unfamiliar design space for optimum designs. Current methods for incorporating high-fidelity tools into early design phases for the purpose of reducing uncertainty are inadequate due to the severely restricted budgets that are common in early design as well as the unfamiliar design space of advanced aircraft. This motivates the need for a robust and efficient global optimization algorithm. This research presents a novel surrogate model-based global optimization algorithm to efficiently search challenging design spaces for optimum designs. The algorithm searches the design space by constructing a fully Bayesian Gaussian process model through a set of observations and then using the model to make new observations in promising areas where the global minimum is likely to occur. The algorithm is incorporated into a methodology that reduces failed cases and infeasible designs and provides large reductions in the objective function values of design problems. Results on four sets of algebraic test problems are presented and the methodology is applied to an airfoil section design problem and a conceptual aircraft design problem. The method is shown to solve more nonlinearly constrained algebraic test problems than state-of-the-art algorithms and obtains the largest reduction in the takeoff gross weight of a notional 70-passenger regional jet versus competing design methods.
APA, Harvard, Vancouver, ISO, and other styles
16

Otero, Richard Edward. « Problem decomposition by mutual information and force-based clustering ». Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43641.

Full text
Abstract:
The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time needed to analyze complex problems. Such decomposition methods could also provide engineering insight into the fundamental physics driving problem solution. This work advances the state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. It describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence that works on both continuous and discrete data; it can measure both linear and non-linear dependence between variables, without the limitations of linear dependence measured through covariance, and it can handle data that lacks derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem, using a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link-ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem.
Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. This work demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternative for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with a large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from machine learning and a set of randomly generated test problems, decision trees for choosing which method to apply are also created, quantifying decomposition performance over a large region of the design space.
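Mutual information between two discrete variables, I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) ), is short to compute from raw samples. A plug-in sketch of the metric itself (the decomposition and MIMIC machinery of the thesis is not reproduced):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y)/(p(x)p(y)) simplifies to c*n / (count(x)*count(y)).
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

Perfectly dependent binary samples give 1 bit; independent ones give 0 bits, illustrating why the metric can rank couplings that covariance-based measures miss.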
APA, Harvard, Vancouver, ISO, and other styles
17

Gurol, Selime. « Statistical Learning And Optimization Methods For Improving The Efficiency In Landscape Image Clustering And Classification Problems ». Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606595/index.pdf.

Full text
Abstract:
Remote sensing techniques are vital for the early detection of several problems, such as natural disasters and ecological problems, and for collecting the information necessary to find optimal solutions to them. Remotely sensed information also has important uses in predicting future risks, urban planning, and communication. Recent developments in remote sensing instrumentation have posed a challenge to the mathematical and statistical methods used to process the acquired information. Classification of satellite images in the context of land cover classification is the main concern of this study. Land cover classification can be performed by statistical learning methods such as additive models, decision trees, and neural networks, and by k-means methods, which are already popular in unsupervised classification and in clustering for image scene inverse problems. Due to the degradation and corruption of satellite images, classification performance is limited both by the accuracy of clustering and by the extent of the classification. In this study, we are concerned with understanding the performance of the available unsupervised methods based on k-means and of supervised methods based on Gaussian maximum likelihood, both very popular in land cover classification. A broader approach to the classification problem, based on finding optimal discriminants from a larger range of functions, is also considered in this work. A novel method based on threshold decomposition and Boolean discriminant functions is developed as an implementable application of this approach. All methods are applied to BILSAT and Landsat satellite images using MATLAB software.
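Of the learning methods named, k-means is the simplest to state. A minimal Lloyd's-iteration sketch follows; the deterministic initialization (first k points as centroids) is an assumption of this sketch for reproducibility, where real implementations would use k-means++ or random restarts.

```python
def kmeans(points, k, iters=100):
    """Lloyd's algorithm on 2-D points; returns (labels, centroids)."""
    centroids = [list(p) for p in points[:k]]  # deterministic init for the sketch
    labels = [-1] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        new_labels = [min(range(k),
                          key=lambda c: (p[0] - centroids[c][0]) ** 2
                                      + (p[1] - centroids[c][1]) ** 2)
                      for p in points]
        if new_labels == labels:
            break  # assignments stable: converged
        labels = new_labels
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members)]
    return labels, centroids
```

On two well-separated point clouds the assignments stabilize after a couple of iterations; on degraded imagery, as the abstract notes, it is exactly this assignment accuracy that limits classification performance.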
APA, Harvard, Vancouver, ISO, and other styles
18

Zhan, Lixin. « Fast Stochastic Global Optimization Methods and Their Applications to Cluster Crystallization and Protein Folding ». Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1246.

Full text
Abstract:
Two global optimization methods are proposed in this thesis. They are the multicanonical basin hopping (MUBH) method and the basin paving (BP) method.

The MUBH method combines the basin hopping (BH) method, which can efficiently map out an energy landscape in terms of its local minima, with the multicanonical Monte Carlo (MUCA) method, which encourages the system to move out of energy traps during the computation. It is found to be more efficient than the original BH method when applied to Lennard-Jones systems containing 150-185 particles.

The asynchronous multicanonical basin hopping (AMUBH) method, a parallelization of the MUBH method, is also implemented using the message passing interface (MPI) to take advantage of the full usage of multiprocessors in either a homogeneous or a heterogeneous computational environment. AMUBH, MUBH and BH are used together to find the global minimum structures for Co nanoclusters with system size N≤200.

The BP method is based on the BH method and the idea of the energy landscape paving (ELP) strategy. In comparison with the acceptance scheme of the ELP method, moving towards the low energy region is enhanced and no low energy configuration may be missed during the simulation. The applications to both the pentapeptide Met-enkephalin and the villin subdomain HP-36 locate new configurations having energies lower than those determined previously.

The MUBH, BP and BH methods are further employed to search for the global minimum structures of several proteins/peptides using the ECEPP/2 and ECEPP/3 force fields. These two force fields may produce global minima with different structures. The present study indicates that the global minimum determination from ECEPP/3 prefers helical structures. Also discussed in this thesis is the effect of the environment on the formation of beta hairpins.
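The basin-hopping loop underlying all of these variants can be sketched as follows. This is a minimal illustration on a toy one-dimensional double-well energy, not the thesis's MUBH/MUCA implementation; the gradient-descent local minimizer, the clamping box and all parameters are assumptions:

```python
import math, random

def f(x):
    """Toy double-well energy; the global minimum lies near x = -1.025."""
    return (x * x - 1) ** 2 + 0.2 * x

def local_minimize(x, lr=0.01, steps=300):
    """Crude gradient descent, standing in for a real local optimizer."""
    for _ in range(steps):
        grad = 4 * x * (x * x - 1) + 0.2
        x -= lr * grad
    return x

def basin_hopping(x0, n_hops=100, step=1.0, temperature=0.5, seed=0):
    rng = random.Random(seed)
    x = local_minimize(x0)
    e = f(x)
    best_x, best_f = x, e
    for _ in range(n_hops):
        # Perturb, clamp to a safe box, then relax to the nearest local minimum.
        trial = local_minimize(max(-3.0, min(3.0, x + rng.gauss(0.0, step))))
        e_trial = f(trial)
        if e_trial < best_f:
            best_x, best_f = trial, e_trial
        # Metropolis criterion lets the walk escape energy traps.
        if e_trial < e or rng.random() < math.exp(-(e_trial - e) / temperature):
            x, e = trial, e_trial
    return best_x, best_f

# Start in the shallower well (near x = +1); the hops find the deeper one.
best_x, best_f = basin_hopping(1.0)
```

The MUCA ingredient of MUBH replaces the fixed-temperature Metropolis rule with a multicanonical weight, which is what flattens the energy traps.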
Styles APA, Harvard, Vancouver, ISO, etc.
19

Barreau, Thibaud. « Strategic optimization of a global bank capital management using statistical methods on open data ». Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273413.

Texte intégral
Résumé :
This project is about the optimization of the capital management of a French global bank. Capital management corresponds here to allocating the available capital to the different business units. In this project, I focus on the optimization of the allocation of the risk weighted assets (RWA) between some of the business units of the bank, as a representation of the allocated capital. Emphasis is put on the market and retail part of the bank and the first step was to be able to model the evolution of a business unit given an economic environment. The second one was about optimizing the distribution of RWA among the selected parts of the bank.
The subject of this project is the optimization of capital allocation within a French global bank. Capital management here refers to how capital is to be distributed among the bank's different divisions. In this project, I focus on optimizing the allocation of risk-weighted assets (RWA) among some of the bank's units, as a representation of the allocated capital. The thesis is directed mainly towards the retail side of the bank. The first step was to model the development of a bank division given an economic environment. The second step was to try to optimize the distribution of RWA among the selected bank divisions.
Styles APA, Harvard, Vancouver, ISO, etc.
20

Yamakawa, Yuya. « Studies on Optimization Methods for Nonlinear Semidefinite Programming Problems ». 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199446.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
21

Van, Assche Dimitri. « New methodological perspectives on PROMETHEE methods ». Doctoral thesis, Universite Libre de Bruxelles, 2019. https://dipot.ulb.ac.be/dspace/bitstream/2013/287858/6/contratDV.pdf.

Texte intégral
Résumé :
A few methodological contributions to the PROMETHEE method, essentially based on three articles: "FlowSort parameters elicitation based on categorization examples"; "PROMETHEE is Not Quadratic: An O(qn log(n)) Algorithm"; and "Lexicographic constrained multicriteria ordered clustering".
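The second article's subject, the PROMETHEE II net flow, is usually computed with the textbook O(q·n²) double loop sketched below; the article's contribution is an O(qn log n) replacement, which is not reproduced here. The alternatives, weights and the usual preference function are illustrative assumptions:

```python
def usual_preference(d):
    """Usual criterion: full preference for any positive difference."""
    return 1.0 if d > 0 else 0.0

def net_flows(evaluations, weights):
    """Naive O(q*n^2) PROMETHEE II net flows over n alternatives, q criteria."""
    n = len(evaluations)
    flows = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for k, w in enumerate(weights):
                d = evaluations[i][k] - evaluations[j][k]
                flows[i] += w * (usual_preference(d) - usual_preference(-d))
    return [phi / (n - 1) for phi in flows]

# Three alternatives scored on two equally weighted criteria.
evals = [[8.0, 6.0], [5.0, 9.0], [3.0, 2.0]]
phi = net_flows(evals, weights=[0.5, 0.5])
```

Net flows always sum to zero; a higher flow means the alternative outranks more of its peers.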
Doctorate in Engineering Sciences and Technology
info:eu-repo/semantics/nonPublished
Styles APA, Harvard, Vancouver, ISO, etc.
22

Enqvist, Per. « Spectral Estimation by Geometric, Topological and Optimization Methods ». Doctoral thesis, Stockholm, 2001. http://media.lib.kth.se:8080/kthdisseng.html.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
23

Hendrich, Christopher. « Proximal Splitting Methods in Nonsmooth Convex Optimization ». Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149548.

Texte intégral
Résumé :
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we are also making use of monotone operator theory as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notations and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach affects the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they are providing a complete splitting philosophy in that the resolvents, which arise in the iterative process, are only taken separately from each maximally monotone operator occurring in the problem description. We firstly analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. 
Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy by reformulating the corresponding inclusion problem reasonably such that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with different numerical experiments where we intend to compare our methods against algorithms from the literature. The problems which arise in this part are manifold and they reflect the importance of this field of research as convex optimization problems appear in lots of applications of interest.
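As background for the splitting methods discussed above, a minimal forward-backward (proximal gradient) iteration on a toy l1-regularized least-squares problem might look as follows. This is a generic sketch, not one of the thesis's primal-dual or Douglas–Rachford algorithms, and the matrix, step size and penalty are assumptions:

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.| applied componentwise (the backward step)."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def grad_least_squares(A, b, x):
    """Gradient of 0.5*||Ax - b||^2 (the forward step)."""
    r = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i] for i in range(len(A))]
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]

def forward_backward(A, b, lam, step, iters=2000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient iterations."""
    x = [0.0] * len(A[0])
    for _ in range(iters):
        g = grad_least_squares(A, b, x)
        x = soft_threshold([x[j] - step * g[j] for j in range(len(x))], step * lam)
    return x

# Toy problem; step = 0.15 is below 1/L for this A (L = largest eigenvalue of A^T A).
A = [[2.0, 0.0], [1.0, 1.0]]
b = [4.0, 1.0]
x = forward_backward(A, b, lam=0.5, step=0.15)
```

The resolvent of the l1 term is available in closed form here; the thesis's primal-dual schemes handle the harder case where several such operators appear only through separate resolvents.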
Styles APA, Harvard, Vancouver, ISO, etc.
24

Mahjani, Behrang. « Methods from Statistical Computing for Genetic Analysis of Complex Traits ». Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-284378.

Texte intégral
Résumé :
The goal of this thesis is to explore, improve and implement some advanced modern computational methods in statistics, focusing on applications in genetics. The thesis has three major directions. First, we study likelihoods for genetic analysis of experimental populations. Here, maximum likelihood estimation can be viewed as a computational global optimization problem. We introduce a faster optimization algorithm called PruneDIRECT, and explain how it can be parallelized for permutation testing using the Map-Reduce framework. We have implemented PruneDIRECT as an open source R package, and also as Software as a Service for cloud infrastructures (QTLaaS). The second part of the thesis focusses on using sparse matrix methods for solving linear mixed models with large correlation matrices. For populations with known pedigrees, we show that the inverse of the covariance matrix is sparse. We describe how to use this sparsity to develop a new method to maximize the likelihood and calculate the variance components. In the final part of the thesis we study computational challenges of psychiatric genetics, using only pedigree information. The aim is to investigate the existence of maternal effects in obsessive compulsive behavior. We add maternal effects to the linear mixed model used in the second part of this thesis, and we describe the computational challenges of working with binary traits.
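The pruning idea behind a DIRECT-type global optimizer can be illustrated with a one-dimensional branch-and-prune sketch that discards intervals whose Lipschitz lower bound cannot beat the incumbent. This is a simplified stand-in, not the PruneDIRECT algorithm itself, and the objective and Lipschitz constant below are assumptions:

```python
import math

def prune_search(f, a, b, lipschitz, tol=1e-3):
    """Branch on intervals; prune any whose Lipschitz lower bound cannot improve."""
    best_x, best_f = a, f(a)
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        # No point in [lo, hi] can lie below fm - L*(hi - lo)/2.
        if fm - lipschitz * (hi - lo) / 2 >= best_f or hi - lo < tol:
            continue
        stack.append((lo, mid))
        stack.append((mid, hi))
    return best_x, best_f

def f(x):
    """Multimodal toy objective; global minimum near x = 0.9385."""
    return math.sin(5 * x) + 0.1 * x

best_x, best_f = prune_search(f, 0.0, 3.0, lipschitz=6.0)
```

The pruning rule is sound because the bound is rigorous, so only regions that provably cannot contain a better minimum are discarded; PruneDIRECT applies the same principle inside the DIRECT subdivision scheme.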
eSSENCE
Styles APA, Harvard, Vancouver, ISO, etc.
25

Titi, Jihad [Verfasser]. « Matrix Methods for the Tensorial and Simplicial Bernstein Forms with Application to Global Optimization / Jihad Titi ». Konstanz : KOPS Universität Konstanz, 2019. http://d-nb.info/1214180582/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Miller, Michael Chad. « Global Resource Management of Response Surface Methodology ». PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1621.

Texte intégral
Résumé :
Statistical research can be more difficult to plan than other kinds of projects, since the research must adapt as knowledge is gained. This dissertation establishes a formal language and methodology for designing experimental research strategies with limited resources. It is a mathematically rigorous extension of a sequential and adaptive form of statistical research called response surface methodology. It uses sponsor-given information, conditions, and resource constraints to decompose an overall project into individual stages. At each stage, a "parent" decision-maker determines what design of experimentation to do for its stage of research, and adapts to the feedback from that research's potential "children", each of whom deals with a different possible state of knowledge resulting from the experimentation of the "parent". The research of this dissertation extends the real-world rigor of the statistical field of design of experiments to develop a deterministic, adaptive algorithm that produces reproducible, testable, defendable, resource-constrained multi-stage experimental schedules without having to spend physical resources.
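The sequential, adaptive flavor of response surface methodology can be sketched as a loop that fits a local quadratic to a few design points and steps to its stationary point. This toy omits noise, resource constraints and the parent-child decision structure of the dissertation, and the response function is invented:

```python
def response(x):
    """Hypothetical process yield; its maximum is at x = 3."""
    return 10.0 - (x - 3.0) ** 2 - 0.1 * (x - 3.0) ** 4

def rsm_stage(x, delta):
    """Run three designs around x, fit a quadratic, move to its stationary point."""
    lo, mid, hi = response(x - delta), response(x), response(x + delta)
    curvature = hi - 2.0 * mid + lo
    if abs(curvature) < 1e-12:
        return x  # locally flat: no informative quadratic fit
    return x - delta * (hi - lo) / (2.0 * curvature)

# Each stage adapts the next design center to what the previous stage learned.
x = 0.0
for _ in range(6):
    x = rsm_stage(x, delta=0.5)
```

Real RSM designs use replicated factorial or central composite designs and test lack of fit before moving; the point here is only the stagewise adapt-and-step structure.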
Styles APA, Harvard, Vancouver, ISO, etc.
27

Paudel, Danda Pani. « Local and global methods for registering 2D image sets and 3D point clouds ». Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS077/document.

Texte intégral
Résumé :
No French abstract provided.
In this thesis, we study the problem of registering 2D image sets and 3D point clouds under three different acquisition set-ups. The first set-up assumes that the image sets are captured using 2D cameras that are fully calibrated and coupled, or rigidly attached, with a 3D sensor. In this context, the point cloud from the 3D sensor is registered directly to the asynchronously acquired 2D images. In the second set-up, the 2D cameras are internally calibrated but uncoupled from the 3D sensor, allowing them to move independently with respect to each other. The registration for this set-up is performed using a Structure-from-Motion reconstruction emanating from images and planar patches representing the point cloud. The proposed registration method is globally optimal and robust to outliers. It is based on the theory of Sum-of-Squares polynomials and a Branch-and-Bound algorithm. The third set-up consists of uncoupled and uncalibrated 2D cameras. The image sets from these cameras are registered to the point cloud in a globally optimal manner using a Branch-and-Prune algorithm. Our method is based on a Linear Matrix Inequality framework that establishes direct relationships between 2D image measurements and 3D scene voxels.
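For the simplest fully calibrated case with known point correspondences, a closed-form least-squares rigid alignment in 2D (a Procrustes/Kabsch-style fit, far simpler than the thesis's globally optimal Sum-of-Squares and Branch-and-Bound machinery) might be sketched as:

```python
import math

def rigid_fit(P, Q):
    """Closed-form 2D rotation + translation mapping points P onto points Q."""
    n = len(P)
    cpx = sum(p[0] for p in P) / n
    cpy = sum(p[1] for p in P) / n
    cqx = sum(q[0] for q in Q) / n
    cqy = sum(q[1] for q in Q) / n
    dot = cross = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ux, uy = px - cpx, py - cpy
        vx, vy = qx - cqx, qy - cqy
        dot += ux * vx + uy * vy    # sum of p.q over centered points
        cross += ux * vy - uy * vx  # sum of p x q over centered points
    theta = math.atan2(cross, dot)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cqx - (c * cpx - s * cpy)  # translation aligns the centroids
    ty = cqy - (s * cpx + c * cpy)
    return theta, (tx, ty)

# Synthetic check: Q is P rotated by 0.7 rad and shifted by (2, -1).
theta0, t0 = 0.7, (2.0, -1.0)
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
c0, s0 = math.cos(theta0), math.sin(theta0)
Q = [(c0 * x - s0 * y + t0[0], s0 * x + c0 * y + t0[1]) for x, y in P]
theta, t = rigid_fit(P, Q)
```

This least-squares fit assumes known, outlier-free correspondences; the thesis's contribution is precisely to drop those assumptions while keeping global optimality.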
Styles APA, Harvard, Vancouver, ISO, etc.
28

Fitiwi, Desta Zahlay. « Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems ». Doctoral thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192363.

Texte intégral
Résumé :
Driven by a number of factors, the electric power industry is expected to undergo a paradigm shift with a considerably increased level of variable energy sources. A significant integration of such sources requires heavy transmission investments over geographically wide and large-scale networks. However, the stochastic nature of such sources, along with the sheer size of network systems, results in problems that may become intractable. Thus, the challenge addressed in this work is to design efficient and reasonably accurate models, strategies and tools that can solve large-scale transmission expansion planning (TEP) problems under uncertainty. A long-term stochastic network planning tool is developed, considering a multi-stage decision framework and a high level of integration of renewables. Such a tool combines the need for short-term decisions with the evaluation of long-term scenarios, which is the practical essence of real-world planning. Furthermore, in order to significantly reduce the combinatorial solution search space, a specific heuristic solution strategy is devised. This works by decomposing the original problem into successive optimization phases. One of the modeling challenges addressed in this work is to select the right network model for power flow and congestion evaluation: complex enough to capture the relevant features but simple enough to be computationally fast. Another relevant contribution is a domain-driven clustering process for snapshots which is based on a “moments” technique. Finally, the developed models, methods and solution strategies have been tested on standard and real-life systems. This thesis also presents numerical results for an aggregated 1060-node European network system considering multiple RES development scenarios. Generally, test results show the effectiveness of the proposed TEP model, since—as originally intended—it contributes to a significant reduction in computational effort while fairly maintaining optimality of the solutions.
Driven by several techno-economic, environmental and structural factors, the electric energy industry is expected to undergo a paradigm shift with a considerably increased level of renewables (mainly variable energy sources such as wind and solar), gradually replacing conventional power production sources. The scale and the speed of integrating such sources of energy are of paramount importance to effectively address a multitude of global and local concerns such as climate change, sustainability and energy security. In recent years, wind and solar power have been attracting large-scale investments in many countries, especially in Europe. The favorable agreements of states to curb greenhouse gas emissions and mitigate climate change, along with other driving factors, will further accelerate the renewable integration in power systems. Renewable energy sources (RESs), wind and solar in particular, are abundant almost everywhere, although their energy intensities differ very much from one place to another. Because of this, a significant integration of such energy sources requires heavy investments in transmission infrastructures. In other words, transmission expansion planning (TEP) has to be carried out in geographically wide and large-scale networks. This helps to effectively accommodate the RESs and optimally exploit their benefits while minimizing their side effects. However, the uncertain nature of most of the renewable sources, along with the size of the network systems, results in optimization problems that may become intractable in practice or require a huge computational effort. Thus, the challenge addressed in this work is to design models, strategies and tools that may solve large-scale and uncertain TEP problems, being computationally efficient and reasonably accurate. 
Of course, the specific definition of the term “reasonably accurate” is the key issue of the thesis work, since it requires a deep understanding of the main cost and technical drivers of adequate TEP investment decisions. A new formulation is proposed in this dissertation for a long-term planning of transmission investments under uncertainty, with a multi-stage decision framework and considering a high level of renewable sources integration. This multi-stage strategy combines the need for short-term decisions with the evaluation of long-term scenarios, which is the practical essence of a real-world planning. The TEP problem is defined as a stochastic mixed-integer linear programming (S-MILP) optimization, an exact solution method. This allows the use of effective off-the-shelf solvers to obtain solutions within a reasonable computational time, enhancing overall problem tractability. Furthermore, in order to significantly reduce the combinatorial solution search (CSS) space, a specific heuristic solution strategy is devised. In this global heuristic strategy, the problem is decomposed into successive optimization phases. Each phase uses more complex optimization models than the previous one, and uses the results of the previous phase so that the combinatorial solution search space is reduced after each phase. Moreover, each optimization phase is defined and solved as an independent problem; thus, allowing the use of specific decomposition techniques, or parallel computation when possible. A relevant feature of the solution strategy is that it combines deterministic and stochastic modeling techniques on a multi-stage modeling framework with a rolling-window planning concept. The planning horizon is divided into two sub-horizons: medium- and long-term, both having multiple decision stages. 
The first sub-horizon is characterized, in each stage, by a set of investments that are good enough for all scenarios, while scenario-dependent decisions are made in the second sub-horizon. One of the first modeling challenges of this work is to select the right network model for power flow and congestion evaluation: complex enough to capture the relevant features but simple enough to be computationally fast. The thesis includes extensive analysis of existing and improved network models such as AC, linearized AC, “DC”, hybrid and pipeline models, both for the existing and the candidate lines. Finally, a DC network model is proposed as the most suitable option. This work also analyzes alternative loss models. Some of them are already available and others are proposed as original contributions of the thesis. These models are evaluated in the context of the target problem, i.e., in finding the right balance between accuracy and computational effort in a large-scale TEP problem subject to significant RES integration. It has to be pointed out that, although losses are usually neglected in TEP studies because of computational limitations, they are critical in network expansion decisions. In fact, using inadequate models may lead not only to cost-estimation errors, but also to technical errors such as the so-called “artificial losses”. Another relevant contribution of this work is a domain-driven clustering process to handle operational states. This allows a more compact and efficient representation of uncertainty with little loss of accuracy. This is relevant because, together with electricity demand and other traditional sources of uncertainty, the integration of variable energy sources introduces an additional operational variability and uncertainty.
A substantial part of this uncertainty and variability is often handled by a set of operational states, here referred to as “snapshots”, which are generation-demand patterns of power systems that lead to optimal power flow (OPF) patterns in the transmission network. A large set of snapshots, each one with an estimated probability, is then used to evaluate and optimize the network expansion. In a long-term TEP problem of large networks, the number of operational states must be reduced. Hence, from a methodological perspective, this thesis shows how the snapshot reduction can be achieved by means of clustering, without relevant loss of accuracy, provided that a good selection of classification variables is used in the clustering process. The proposed method relies on two ideas. First, the snapshots are characterized by their OPF patterns (the effects) instead of the generation-demand patterns (the causes). This is simply because the network expansion is the target problem, and losses and congestions are the drivers to network investments. Second, the OPF patterns are classified using a “moments” technique, a well-known approach in Optical Pattern Recognition problems. The developed models, methods and solution strategies have been tested on small-, medium- and large-scale network systems. This thesis also presents numerical results of an aggregated 1060-node European network system obtained considering multiple RES development scenarios. Generally, test results show the effectiveness of the proposed TEP model, since—as originally intended—it contributes to a significant reduction in computational effort while fairly maintaining optimality of the solutions.
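Snapshot reduction by clustering can be illustrated with plain Lloyd's k-means on toy two-dimensional feature vectors; the thesis clusters OPF patterns characterized by a "moments" technique, which is not reproduced here, and the data and seeding below are assumptions:

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means with deterministic seeding for this toy example."""
    dim = len(points[0])
    centers = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((p[d] - centers[c][d]) ** 2 for d in range(dim)),
            )
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(m[d] for m in members) / len(members)
                              for d in range(dim)]
    return centers, assign

# Toy 2-D snapshot features forming two well-separated groups.
snapshots = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
             [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]]
centers, assign = kmeans(snapshots, k=2)
```

After clustering, each center (with the summed probability of its members) stands in for its whole group of snapshots, which is what shrinks the TEP problem.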

QC 20160919

Styles APA, Harvard, Vancouver, ISO, etc.
29

Kazazakis, Nikolaos. « Parallel computing, interval derivative methods, heuristic algorithms, and their implementation in a numerical solver, for deterministic global optimization ». Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/45359.

Texte intégral
Résumé :
This thesis presents new algorithms for the deterministic global optimization of general non-linear programming problems (NLPs). It is proven that the αBB general underestimator may provide exact lower bounds on a function only if rigorous conditions are satisfied. These conditions are derived and the μ-subenergy methodology is proposed to achieve tighter αBB underestimation when they are violated. An interval lower bounding test is proposed to improve αBB lower bounds and avoid expensive algorithmic steps. Piecewise-linear relaxations (PLR) are proposed for the underestimation of general functions. Calculation of these relaxations is accelerated using parallel computing. Quality bounds tightening (QBT) is proposed to reduce the cost of bounds tightening algorithms by avoiding unnecessary calculations. Violation branching is proposed to improve the performance of branching strategies by considering constraint violation when selecting a branching variable. Furthermore, a novel bounds tightening method, PLR bounds tightening (PLR-BT), is proposed. Variable-based convexity (VBC) is proposed as a general reformulation algorithm, to build tighter relaxations by exploiting underlying convexity. This work also introduces algorithms for parallel branching for the global solution of NLPs. A parallel node subdivision strategy, Multisection Branching, is proposed to achieve tighter bounds, and a parallel node selection strategy, Future Branching, is proposed to accelerate the investigation of the branch-and-bound tree. A parallel solver is presented, where MPI is used to distribute node calculations and an asynchronous bounds tightening algorithm is proposed to reduce bounds tightening wall times. This algorithm is implemented using multithreading for asynchronous feasibility-based bounds tightening (AF-BBT). All algorithms are implemented in a global solver, and its parallel architecture and features are described. 
This solver is used to perform numerical studies on the benefits of using the new algorithms in tandem. The new solver is benchmarked against the BARON 15.9.22 global solver for a set of problems, in which it achieves comparable performance.
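The interval lower-bounding test mentioned above rests on interval arithmetic: a natural interval extension of a function yields a rigorous enclosure of its range over a box, and hence a valid lower bound for branch-and-bound. A minimal sketch, far short of the thesis's αBB and μ-subenergy machinery, is:

```python
class Interval:
    """Minimal interval arithmetic: enough for natural interval extensions."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

def f(x):
    """Works on numbers and Intervals alike: f(x) = x*x - x, min -0.25 at x = 0.5."""
    return x * x - x

box = f(Interval(0.0, 1.0))   # enclosure [-1, 1] contains the true range [-0.25, 0]
left = f(Interval(0.0, 0.5))  # bisection tightens the lower bound to -0.5
```

The enclosure is valid but loose (the dependency problem); subdividing the box, as in the `left` call, is exactly how branch-and-bound tightens such bounds.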
Styles APA, Harvard, Vancouver, ISO, etc.
30

Hua, Xiaoqin. « Studies on block coordinate gradient methods for nonlinear optimization problems with separable structure ». 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199447.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

Zhang, Fan. « Statistical Methods for Characterizing Genomic Heterogeneity in Mixed Samples ». Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/419.

Texte intégral
Résumé :
Recently, sequencing technologies have generated massive and heterogeneous data sets. However, interpretation of these data sets is a major barrier to understanding genomic heterogeneity in complex diseases. In this dissertation, we develop a Bayesian statistical method for single-nucleotide-level analysis and a global optimization method for gene-expression-level analysis to characterize genomic heterogeneity in mixed samples. The detection of rare single nucleotide variants (SNVs) is important for understanding genetic heterogeneity using next-generation sequencing (NGS) data. Various computational algorithms have been proposed to detect variants at the single nucleotide level in mixed samples. Yet, the noise inherent in the biological processes involved in NGS technology necessitates the development of statistically accurate methods to identify true rare variants. At the single nucleotide level, we propose a Bayesian probabilistic model and a variational expectation maximization (EM) algorithm to estimate non-reference allele frequency (NRAF) and identify SNVs in heterogeneous cell populations. We demonstrate that our variational EM algorithm has sensitivity and specificity comparable to those of a Markov chain Monte Carlo (MCMC) sampling inference algorithm, and is more computationally efficient in tests on relatively low coverage (27x and 298x) data. Furthermore, we show that our model with a variational EM inference algorithm has higher specificity than many state-of-the-art algorithms. In an analysis of a directed-evolution longitudinal yeast data set, we are able to identify a time-series trend in non-reference allele frequency and detect novel variants that have not yet been reported. Our model also detects the emergence of a beneficial variant earlier than was previously shown, and a pair of concomitant variants.
Characterization of heterogeneity in gene expression data is a critical challenge for personalized treatment and drug resistance due to intra-tumor heterogeneity. Mixed membership factorization has become popular for analyzing data sets that have within-sample heterogeneity. In recent years, several algorithms have been developed for mixed membership matrix factorization, but they only guarantee estimates from a local optimum. At the gene expression level, we derive a global optimization (GOP) algorithm that provides a guaranteed epsilon-global optimum for a sparse mixed membership matrix factorization problem for molecular subtype classification. We test the algorithm on simulated data and find that the algorithm always bounds the global optimum across random initializations and explores multiple modes efficiently. The GOP algorithm is well-suited for parallel computation in the key optimization steps.
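A much-simplified relative of the variant-detection model can be sketched as plain EM for a two-component binomial mixture: "error-like" sites versus true variant sites. The dissertation's priors, variational treatment and NRAF model are not reproduced here, and the read counts, depth and initial values below are invented:

```python
import math

def log_binom(k, n, p):
    """Binomial log-pmf via lgamma, numerically safe for large n."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def em_binomial_mixture(counts, depth, iters=50):
    """EM for a 2-component binomial mixture over alt-read counts."""
    p = [0.1, 0.4]  # assumed starting alt-read fractions (error, variant)
    w = 0.5         # assumed starting weight of the variant component
    for _ in range(iters):
        # E-step: responsibility of the variant component at each site.
        resp = []
        for k in counts:
            l0 = math.log(1 - w) + log_binom(k, depth, p[0])
            l1 = math.log(w) + log_binom(k, depth, p[1])
            m = max(l0, l1)
            resp.append(math.exp(l1 - m) / (math.exp(l0 - m) + math.exp(l1 - m)))
        # M-step: re-estimate fractions and weight from weighted counts.
        w = sum(resp) / len(resp)
        p[1] = sum(r * k for r, k in zip(resp, counts)) / (sum(resp) * depth)
        p[0] = (sum((1 - r) * k for r, k in zip(resp, counts))
                / (sum(1 - r for r in resp) * depth))
    return p[0], p[1], w

# Synthetic alt-read counts at depth 100: five error-like sites, four variant-like.
counts = [1, 2, 0, 3, 1, 24, 26, 30, 22]
p_err, p_var, w = em_binomial_mixture(counts, depth=100)
```

The variational EM of the dissertation plays the same role as this E/M alternation but optimizes a lower bound on a richer Bayesian model rather than the plain mixture likelihood.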
Styles APA, Harvard, Vancouver, ISO, etc.
32

Lunday, Brian Joseph. « Resource Allocation on Networks : Nested Event Tree Optimization, Network Interdiction, and Game Theoretic Methods ». Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/77323.

Texte intégral
Résumé :
This dissertation addresses five fundamental resource allocation problems on networks, all of which have applications to support Homeland Security or industry challenges. In the first application, we model and solve the strategic problem of minimizing the expected loss inflicted by a hostile terrorist organization. An appropriate allocation of certain capability-related, intent-related, vulnerability-related, and consequence-related resources is used to reduce the probabilities of success in the respective attack-related actions, and to ameliorate losses in case of a successful attack. Given the disparate nature of prioritizing capital and material investments by federal, state, local, and private agencies to combat terrorism, our model and accompanying solution procedure represent an innovative, comprehensive, and quantitative approach to coordinate resource allocations from various agencies across the breadth of domains that deal with preventing attacks and mitigating their consequences. Adopting a nested event tree optimization framework, we present a novel formulation for the problem as a specially structured nonconvex factorable program, and develop two branch-and-bound schemes based respectively on utilizing a convex nonlinear relaxation and a linear outer-approximation, both of which are proven to converge to a global optimal solution. We also investigate a fundamental special-case variant for each of these schemes, and design an alternative direct mixed-integer programming model representation for this scenario. Several range reduction, partitioning, and branching strategies are proposed, and extensive computational results are presented to study the efficacy of different compositions of these algorithmic ingredients, including comparisons with the commercial software BARON. 
The developed set of algorithmic implementation strategies and enhancements are shown to outperform BARON over a set of simulated test instances, where the best proposed methodology produces an average optimality gap of 0.35% (compared to 4.29% for BARON) and reduces the required computational effort by a factor of 33. A sensitivity analysis is also conducted to explore the effect of certain key model parameters, whereupon we demonstrate that the prescribed algorithm can attain significantly tighter optimality gaps with only a near-linear corresponding increase in computational effort. In addition to enabling effective comprehensive resource allocations, this research permits coordinating agencies to conduct quantitative what-if studies on the impact of alternative resourcing priorities. The second application is motivated by the author's experience with the U.S. Army during a tour in Iraq, during which combined operations involving U.S. Army, Iraqi Army, and Iraqi Police forces sought to interdict the transport of selected materials used for the manufacture of specialized types of Improvised Explosive Devices, as well as to interdict the distribution of assembled devices to operatives in the field. In this application, we model and solve the problem of minimizing the maximum flow through a network from a given source node to a terminus node, integrating different forms of superadditive synergy with respect to the effect of resources applied to the arcs in the network. Herein, the superadditive synergy reflects the additional effectiveness of forces conducting combined operations, vis-à-vis unilateral efforts. We examine linear, concave, and general nonconcave superadditive synergistic relationships between resources, and accordingly develop and test effective solution procedures for the underlying nonlinear programs. 
For the linear case, we formulate an alternative model representation via Fourier-Motzkin elimination that reduces average computational effort by over 40% on a set of randomly generated test instances. This test is followed by extensive analyses of instance parameters to determine their effect on the levels of synergy attained using different specified metrics. For the case of concave synergy relationships, which yields a convex program, we design an inner-linearization procedure that attains solutions on average within 3% of optimality with a reduction in computational effort by a factor of 18 in comparison with the commercial codes SBB and BARON for small- and medium-sized problems; and outperforms these softwares on large-sized problems, where both solvers failed to attain an optimal solution (and often failed to detect a feasible solution) within 1800 CPU seconds. Examining a general nonlinear synergy relationship, we develop solution methods based on outer-linearizations, inner-linearizations, and mixed-integer approximations, and compare these against the commercial software BARON. Considering increased granularities for the outer-linearization and mixed-integer approximations, as well as different implementation variants for both these approaches, we conduct extensive computational experiments to reveal that, whereas both these techniques perform comparably with respect to BARON on small-sized problems, they significantly improve upon the performance for medium- and large-sized problems. Our superlative procedure reduces the computational effort by a factor of 461 for the subset of test problems for which the commercial global optimization software BARON could identify a feasible solution, while also achieving solutions of objective value 0.20% better than BARON. 
The third application is likewise motivated by the author's military experience in Iraq, both from several instances involving coalition forces attempting to interdict the transport of a kidnapping victim by a sectarian militia and, from the opposite perspective, from instances involving coalition forces transporting detainees between internment facilities. For this application, we examine the network interdiction problem of minimizing the maximum probability of evasion by an entity traversing a network from a given source to a designated terminus, while incorporating novel forms of superadditive synergy between resources applied to arcs in the network. Our formulations examine either linear or concave (nonlinear) synergy relationships. Conformant with military strategies that frequently involve a combination of overt and covert operations to achieve an operational objective, we also propose an alternative model for sequential overt and covert deployment of subsets of interdiction resources, and conduct theoretical as well as empirical comparative analyses between models for purely overt (with or without synergy) and composite overt-covert strategies to provide insights into absolute and relative threshold criteria for recommended resource utilization. In contrast to existing static models, in a fourth application, we present a novel dynamic network interdiction model that improves realism by accounting for interactions between an interdictor deploying resources on arcs in a digraph and an evader traversing the network from a designated source to a known terminus, wherein the agents may modify strategies in selected subsequent periods according to respective decision and implementation cycles.
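The evader's inner problem in such evasion-probability interdiction models, finding the route of maximum evasion probability, reduces to a shortest-path computation under weights -log(p), since path probability is the product of arc probabilities. A generic sketch with illustrative arc data (not taken from the dissertation):

```python
import heapq
import math

def max_prob_path(arcs, src, dst):
    """Maximum-probability route from src to dst, where arcs is a list of
    (tail, head, evasion_probability). Dijkstra on weights -log(p)."""
    adj = {}
    for u, v, p in arcs:
        adj.setdefault(u, []).append((v, -math.log(p)))
    dist = {src: 0.0}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            return math.exp(-d)        # convert back to a probability
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0                         # destination unreachable

# Two routes: s-a-t with probability 0.45, s-b-t with probability 0.48.
best = max_prob_path([("s", "a", 0.9), ("a", "t", 0.5),
                      ("s", "b", 0.6), ("b", "t", 0.8)], "s", "t")
```

The interdictor's outer problem then adjusts the arc probabilities (subject to resource and synergy constraints) to make this inner maximum as small as possible.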
We further enhance the realism of our model by considering a multi-component objective function, wherein the interdictor seeks to minimize the maximum value of a regret function that consists of the evader's net flow from the source to the terminus; the interdictor's procurement, deployment, and redeployment costs; and penalties incurred by the evader for misperceptions as to the interdicted state of the network. For the resulting minimax model, we use duality to develop a reformulation that facilitates a direct solution procedure using the commercial software BARON, and examine certain related stability and convergence issues. We demonstrate cases for convergence to a stable equilibrium of strategies for problem structures having a unique solution to minimize the maximum evader flow, as well as convergence to a region of bounded oscillation for structures yielding alternative interdictor strategies that minimize the maximum evader flow. We also provide insights into the computational performance of BARON for these two problem structures, yielding useful guidelines for other research involving similar non-convex optimization problems. For the fifth application, we examine the problem of apportioning railcars to car manufacturers and railroads participating in a pooling agreement for shipping automobiles, given a dynamically determined total fleet size. This study is motivated by the existence of such a consortium of automobile manufacturers and railroads, for which the collaborative fleet sizing and efforts to equitably allocate railcars amongst the participants are currently orchestrated by the TTX Company in Chicago, Illinois. In our study, we first demonstrate potential inequities in the industry standard resulting either from failing to address disconnected transportation network components separately, or from utilizing the current manufacturer allocation technique that is based on average nodal empty transit time estimates.
We next propose and illustrate four alternative schemes to apportion railcars to manufacturers, respectively based on total transit time that accounts for queuing; two marginal cost-induced methods; and a Shapley value approach. We also provide a game-theoretic insight into the existing procedure for apportioning railcars to railroads, and develop an alternative railroad allocation scheme based on capital plus operating costs. Extensive computational results are presented for the ten combinations of current and proposed allocation techniques for automobile manufacturers and railroads, using realistic instances derived from representative data of the current business environment. We conclude with recommendations for adopting an appropriate apportionment methodology for implementation by the industry.
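Among the apportionment schemes above, the Shapley value assigns each participant its average marginal contribution over all orders in which the coalition could form. A toy sketch with a hypothetical three-manufacturer fleet-requirement game (the numbers are illustrative assumptions, not industry data):

```python
import math
from itertools import permutations

def shapley(players, cost):
    """Shapley value of a coalitional cost function, computed by averaging
    marginal contributions over all join orders (fine for small games)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = cost(frozenset(coalition))
            coalition.add(p)
            phi[p] += cost(frozenset(coalition)) - before
    n_fact = math.factorial(len(players))
    return {p: v / n_fact for p, v in phi.items()}

# Hypothetical railcar-requirement game: pooling makes coalitions cheaper
# (subadditive costs), so each player's allocation is below its stand-alone need.
need = {frozenset(): 0, frozenset("A"): 60, frozenset("B"): 40,
        frozenset("C"): 30, frozenset("AB"): 90, frozenset("AC"): 80,
        frozenset("BC"): 60, frozenset("ABC"): 110}
alloc = shapley("ABC", lambda s: need[s])
```

The allocations sum exactly to the grand-coalition requirement (efficiency), which is the property that makes the Shapley value attractive for apportioning a pooled fleet.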
Ph. D.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Weeraddana, P. C. (Pradeep Chathuranga). « Optimization techniques for radio resource management in wireless communication networks ». Doctoral thesis, Oulun yliopisto, 2011. http://urn.fi/urn:isbn:9789514296550.

Texte intégral
Résumé :
Abstract The application of optimization techniques for resource management in wireless communication networks is considered in this thesis. It is understood that a wide variety of resource management problems of recent interest, including power/rate control, link scheduling, cross-layer control, network utility maximization, beamformer design of multiple-input multiple-output networks, and many others are directly or indirectly reliant on the general weighted sum-rate maximization (WSRMax) problem. Thus, in this dissertation a greater emphasis is placed on the WSRMax problem, which is known to be NP-hard. A general method, based on the branch and bound technique, is developed, which solves globally the nonconvex WSRMax problem with an optimality certificate. Efficient analytic bounding techniques are derived as well. More broadly, the proposed method is not restricted to WSRMax. It can also be used to maximize any system performance metric that is Lipschitz continuous and increasing in the signal-to-interference-plus-noise ratio. The method can be used to find the optimum performance of any network design method that relies on WSRMax, and therefore it is also useful for evaluating the performance loss encountered by any heuristic algorithm. The considered link-interference model is general enough to accommodate a wide range of network topologies with various node capabilities, such as single-packet transmission, multi-packet transmission, simultaneous transmission and reception, and many others. Since global methods become slow in large-scale problems, fast local optimization methods for the WSRMax problem are also developed. First, a general multicommodity, multichannel wireless multihop network where all receivers perform single-user detection is considered. Algorithms based on homotopy methods and complementary geometric programming are developed for WSRMax. They are able to exploit efficiently the available multichannel diversity.
The proposed algorithm, based on homotopy methods, handles efficiently the self-interference problem that arises when a node transmits and receives simultaneously in the same frequency band. This is very important, since the use of supplementary combinatorial constraints to prevent simultaneous transmissions and receptions of any node is circumvented. In addition, the algorithm, together with the considered interference model, provides a mechanism for evaluating the gains when the network nodes employ self-interference cancelation techniques with different degrees of accuracy. Next, a similar multicommodity wireless multihop network is considered, but all receivers perform multiuser detection. Solutions for the WSRMax problem are obtained by imposing additional constraints, such as that only one node can transmit to others at a time or that only one node can receive from others at a time. The WSRMax problem of downlink OFDMA systems is also considered. A fast algorithm based on primal decomposition techniques is developed to jointly optimize the multiuser subcarrier assignment and power allocation to maximize the weighted sum-rate (WSR). Numerical results show that the proposed algorithm converges faster than Lagrange relaxation based methods. Finally, a distributed algorithm for WSRMax is derived in multiple-input single-output multicell downlink systems. The proposed method is based on classical primal decomposition methods and subgradient methods. It does not rely on zero-forcing beamforming or high signal-to-interference-plus-noise ratio approximation like many other distributed variants. The algorithm essentially involves coordinating many local subproblems (one for each base station) to resolve the inter-cell interference such that the WSR is maximized.
The numerical results show that significant gains can be achieved by only a small amount of message passing between the coordinating base stations, though the global optimality of the solution cannot be guaranteed.
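The branch-and-bound idea behind the global WSRMax method can be caricatured for two interfering links: bound each box of power allocations using the monotonicity of the SINR (own power at the top of the box, interference at the bottom), and split boxes until the bound meets the incumbent. A toy sketch under assumed channel gains and unit noise, far simpler than the thesis's algorithm:

```python
import heapq
import itertools
import math

def wsr(p, w, g):
    """Weighted sum-rate of two mutually interfering links (unit noise power)."""
    return sum(w[i] * math.log2(1.0 + g[i][i] * p[i] / (1.0 + g[i][1 - i] * p[1 - i]))
               for i in range(2))

def wsrmax_bb(w, g, pmax, tol=1e-4):
    """Globally maximize wsr over [0, pmax]^2 by branch and bound, with an
    optimality certificate of gap at most tol."""
    def ub(lo, hi):
        # Valid upper bound: own power highest, interfering power lowest.
        return sum(w[i] * math.log2(1.0 + g[i][i] * hi[i] / (1.0 + g[i][1 - i] * lo[1 - i]))
                   for i in range(2))
    best = max(wsr(p, w, g) for p in itertools.product([0.0, pmax], repeat=2))
    heap = [(-ub([0.0, 0.0], [pmax, pmax]), [0.0, 0.0], [pmax, pmax])]
    while heap:
        negub, lo, hi = heapq.heappop(heap)
        if -negub <= best + tol:
            break                              # bound meets incumbent: certified
        k = 0 if hi[0] - lo[0] >= hi[1] - lo[1] else 1
        m = 0.5 * (lo[k] + hi[k])
        for a, b in ((lo[k], m), (m, hi[k])):  # bisect the widest dimension
            clo, chi = lo[:], hi[:]
            clo[k], chi[k] = a, b
            for p in itertools.product(*zip(clo, chi)):
                best = max(best, wsr(p, w, g))  # corners refine the incumbent
            u = ub(clo, chi)
            if u > best + tol:
                heapq.heappush(heap, (-u, clo, chi))
    return best
```

The bounding step is a crude stand-in for the analytic bounding techniques mentioned in the abstract; the certificate logic (prune a box once its upper bound cannot beat the incumbent by more than tol) is the essential mechanism.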
Styles APA, Harvard, Vancouver, ISO, etc.
34

Cerqueira, Tiago F. T. [Verfasser], Silvana [Gutachter] Botti, Claudia [Gutachter] Draxl et Angel [Gutachter] Rubio. « Structural prediction and materials design : from high throughput to global minima optimization methods / Tiago F.T. Cerqueira ; Gutachter : Silvana Botti, Claudia Draxl, Angel Rubio ». Jena : Friedrich-Schiller-Universität Jena, 2017. http://d-nb.info/1177603349/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
35

Popov, Mikhail. « Analytic and Numerical Methods for the Solution of Electromagnetic Inverse Source Problems ». Doctoral thesis, KTH, Electromagnetic Theory, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3134.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Costa, Jardel da Silva. « Minimização do potencial de Lennard-Jones via otimização global ». Universidade do Estado do Rio de Janeiro, 2010. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=1604.

Texte intégral
Résumé :
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Because of its importance, the so-called Lennard-Jones problem has attracted researchers from many fields of pure and applied science. The problem consists in finding the coordinates of a system of points in three-dimensional Euclidean space that correspond to a minimum of a potential energy function. It plays a fundamental role in determining the stability of molecules in highly branched arrangements, such as proteins. The main difficulty in solving the Lennard-Jones problem stems from the fact that the objective function is non-convex and highly nonlinear in several variables, and therefore exhibits a large number of local minima. In this work, several stochastic global optimization methods were applied, and their numerical results were compared in order to determine which are best suited to minimizing this potential. Only microclusters of 3 to 10 atoms were considered. The results obtained were also compared with the best results currently known in the literature. All of the optimization algorithms were implemented in C++.
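In reduced units (epsilon = sigma = 1), the pairwise Lennard-Jones minimum is -1 at distance 2^(1/6), so the 3-atom global minimum is an equilateral triangle with energy exactly -3. The thesis's algorithms were written in C++; the following is only an illustrative stochastic sketch in the same spirit, combining random restarts with gradient descent:

```python
import math
import random

def lj_energy(x):
    """Total Lennard-Jones energy (reduced units) of atoms x = [[x,y,z], ...]."""
    e = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r2 = sum((x[i][k] - x[j][k]) ** 2 for k in range(3))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * inv6 * (inv6 - 1.0)
    return e

def lj_grad(x):
    """Analytic gradient: dV/dx_i = 24 r^-8 (1 - 2 r^-6) (x_i - x_j) per pair."""
    g = [[0.0] * 3 for _ in x]
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            d = [x[i][k] - x[j][k] for k in range(3)]
            r2 = sum(c * c for c in d)
            coef = 24.0 * (1.0 / r2) * (1.0 / r2 ** 3) * (1.0 - 2.0 / r2 ** 3)
            for k in range(3):
                g[i][k] += coef * d[k]
                g[j][k] -= coef * d[k]
    return g

def local_min(x, iters=500):
    """Gradient descent with backtracking on the step length."""
    step, e = 0.1, lj_energy(x)
    for _ in range(iters):
        g = lj_grad(x)
        while step > 1e-12:
            y = [[x[i][k] - step * g[i][k] for k in range(3)] for i in range(len(x))]
            ey = lj_energy(y)
            if ey < e:
                x, e, step = y, ey, step * 1.5
                break
            step *= 0.5
    return x, e

def random_restart(n_atoms=3, restarts=20, seed=1):
    """Stochastic global phase: repeated local descents from random clusters."""
    rng = random.Random(seed)
    best_x, best_e = None, float("inf")
    for _ in range(restarts):
        x0 = [[rng.uniform(0.0, 1.5) for _ in range(3)] for _ in range(n_atoms)]
        x, e = local_min(x0)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e
```

Multi-start descent of this kind scales poorly as the number of atoms grows, which is precisely why the thesis compares more sophisticated stochastic global methods.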
Styles APA, Harvard, Vancouver, ISO, etc.
37

Desai, Jitamitra. « Solving Factorable Programs with Applications to Cluster Analysis, Risk Management, and Control Systems Design ». Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28211.

Texte intégral
Résumé :
Ever since the advent of the simplex algorithm, linear programming (LP) has been extensively used with great success in many diverse fields. The field of discrete optimization came to the forefront as a result of the impressive developments in the area of linear programming. Although discrete optimization problems can be viewed as belonging to the class of nonconvex programs, it has only been in recent times that optimization research has confronted the more formidable class of continuous nonconvex optimization problems, where the objective function and constraints are often highly nonlinear and nonconvex functions, defined in terms of continuous (and bounded) decision variables. Typical classes of such problems involve polynomial, or more general factorable functions.

This dissertation focuses on employing the Reformulation-Linearization Technique (RLT) to enhance model formulations and to design effective solution techniques for solving several practical instances of continuous nonconvex optimization problems, namely, the hard and fuzzy clustering problems, risk management problems, and problems arising in control systems.

Under the umbrella of the broad RLT framework, the contributions of this dissertation focus on developing models and algorithms along with related theoretical and computational results pertaining to three specific application domains. In the basic construct, through appropriate surrogation schemes and variable substitution strategies, we derive strong polyhedral approximations for the polynomial functional terms in the problem, and then rely on the demonstrated (robust) ability of the RLT for determining global optimal solutions for polynomial programming problems. The convergence of the proposed branch-and-bound algorithm follows from the tailored branching strategy coupled with consistency and exhaustive properties of the enumeration tree. First, we prescribe an RLT-based framework geared towards solving the hard and fuzzy clustering problems. In the second endeavor, we examine two risk management problems, providing novel models and algorithms. Finally, in the third part, we provide a detailed discussion on studying stability margins for control systems using polynomial programming models along with specialized solution techniques.
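At the heart of the RLT construct described above is the multiplication of nonnegative bound factors followed by linearization of the product terms. For a single bilinear term w = x*y this yields the classical McCormick envelope; a minimal sketch (a one-term illustration, not the dissertation's full framework):

```python
def mccormick_bounds(xl, xu, yl, yu):
    """Linear envelopes of w = x*y on [xl, xu] x [yl, yu], obtained by the RLT
    device of multiplying nonnegative bound factors, e.g.
    (x - xl) * (y - yl) >= 0  ->  w >= xl*y + x*yl - xl*yl."""
    def lower(x, y):
        return max(xl * y + x * yl - xl * yl,   # from (x-xl)(y-yl) >= 0
                   xu * y + x * yu - xu * yu)   # from (xu-x)(yu-y) >= 0
    def upper(x, y):
        return min(xl * y + x * yu - xl * yu,   # from (x-xl)(yu-y) >= 0
                   xu * y + x * yl - xu * yl)   # from (xu-x)(y-yl) >= 0
    return lower, upper

lo_env, up_env = mccormick_bounds(0.0, 2.0, 1.0, 3.0)
```

Replacing each product by such envelopes turns the polynomial program into a linear relaxation whose optimum bounds the true optimum, which is the bound the branch-and-bound algorithm tightens by partitioning the variable boxes.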
Ph. D.

Styles APA, Harvard, Vancouver, ISO, etc.
38

Samarakoon, S. (Sumudu). « Learning-based methods for resource allocation and interference management in energy-efficient small cell networks ». Doctoral thesis, Oulun yliopisto, 2017. http://urn.fi/urn:isbn:9789526216874.

Texte intégral
Résumé :
Abstract Resource allocation and interference management in wireless small cell networks have been areas of key research interest in the past few years. Although a large number of research studies have been carried out, the need for high capacity, reliability, and energy efficiency in the emerging fifth-generation (5G) networks warrants the development of methodologies focusing on ultra-dense and self-organizing small cell network (SCN) scenarios. In this regard, the prime motivation of this thesis is to propose an array of distributed methodologies to solve the problem of joint resource allocation and interference management in SCNs pertaining to different network architectures. The present dissertation proposes and investigates distributed control mechanisms for wireless SCNs mainly in three cases: a backhaul-aware interference management mechanism for the uplink of wireless SCNs, a dynamic cluster-based approach for maximizing the energy efficiency of dense wireless SCNs, and a joint power control and user scheduling mechanism for optimizing energy efficiency in ultra-dense SCNs. Optimizing SCNs, especially in the ultra-dense regime, is extremely challenging due to the severe coupling in interference and the dynamics of both queues and channel states. Moreover, due to the lack of inter-base station/cluster communications, smart distributed learning mechanisms are required to autonomously choose optimal transmission strategies based on local information. To overcome these challenges, an array of distributed algorithms is developed by combining tools from machine learning, Lyapunov optimization, and mean-field theory.
For each of the above proposals, extensive sets of simulations have been carried out to validate the performance of the proposed methods compared to conventional models that fail to account for the limitations due to network scale, the dynamics of queue and channel states, backhaul heterogeneity and capacity constraints, and the lack of coordination between network elements. The proposed methods yield significant gains in terms of energy savings, rate improvements, and delay reductions compared to the conventional models studied in the existing literature.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Coelho, Ana Carolina Rios. « Estimação de parâmetros em modelos para eliminação enzimática de substratos no fígado : um estudo via otimização global ». Universidade do Estado do Rio de Janeiro, 2009. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=873.

Texte intégral
Résumé :
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
In this work, we address a parameter optimization problem from biophysics in which the aim is to obtain the mean substrate concentration rate in the liver. The problem is highly nonlinear, multimodal, and has a non-differentiable objective function. We solve it using optimization methods from the literature and three methods introduced in this work. The methods introduced here are based on the hybridization of a stochastic technique, which explores the search space, with a deterministic direct search technique, which performs a more refined local search in the most promising areas of that space. The new methods are compared with those from the literature, and our results show that the former perform better.
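The hybridization described above, stochastic exploration followed by deterministic direct-search refinement, can be sketched generically. Everything below is an illustrative assumption: a compass (pattern) search stands in for the direct-search component, and the Rastrigin function stands in for the multimodal liver model:

```python
import math
import random

def compass_search(f, x, step=0.5, tol=1e-6):
    """Deterministic direct search: probe +/- step along each axis and
    halve the step whenever no probe improves the incumbent."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def hybrid_minimize(f, dim, bounds, samples=200, polish=5, seed=0):
    """Stochastic exploration, then direct-search refinement of the best points."""
    rng = random.Random(seed)
    pts = [[rng.uniform(*bounds) for _ in range(dim)] for _ in range(samples)]
    pts.sort(key=f)                      # cheapest-first: most promising samples
    best_x, best_f = None, float("inf")
    for x in pts[:polish]:
        y, fy = compass_search(f, x)
        if fy < best_f:
            best_x, best_f = y, fy
    return best_x, best_f

# Multimodal, nonnegative stand-in objective (assumed for illustration):
def rastrigin(x):
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

x_best, f_best = hybrid_minimize(rastrigin, 2, (-5.12, 5.12))
```

The division of labor mirrors the abstract: the random sampling phase cannot refine, and the direct search alone would stall in the nearest basin; together they reach much better minima than either part alone.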
Styles APA, Harvard, Vancouver, ISO, etc.
40

Gouveia, Paulo Sergio da Silva. « Resolução do problema de alinhamento estrutural entre proteínas via técnicas de otimização global ». [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/305964.

Texte intégral
Résumé :
Advisors: Ana Friedlander de Martinez Perez, Roberto Andreani
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Made available in DSpace on 2018-08-17T18:28:36Z (GMT). No. of bitstreams: 1 Gouveia_PauloSergiodaSilva_D.pdf: 2266379 bytes, checksum: 85bb53a412744c3d168ac6fed4b701e0 (MD5) Previous issue date: 2011
Abstract: The structural comparison of proteins is a fundamental problem in Molecular Biology, since similar structures between proteins often reflect a common functionality or origin. In the Protein Structural Alignment problem, one seeks the best structural alignment between two proteins, i.e., the best overlap between two protein structures, since merely local alignments can lead to distorted conclusions about the features and functions of the proteins under study. Most current methods addressing this problem either have a very high computational cost or offer no guarantee of convergence to the best alignment between two proteins. In this work, we propose computational methods for the Protein Structural Alignment problem that provide good guarantees of finding the best alignment within a reasonable computational time, employing a wide variety of Global Optimization techniques. The performance of each method is analyzed both quantitatively and qualitatively, and a Pareto chart is presented to facilitate comparison of the methods with respect to solution quality and computational time.
Doctorate
Optimization
Doctor in Applied Mathematics
Styles APA, Harvard, Vancouver, ISO, etc.
41

Ittiwattana, Waraporn. « A Method for Simulation Optimization with Applications in Robust Process Design and Locating Supply Chain Operations ». The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1030366020.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Vanaret, Charlie. « Hybridation d’algorithmes évolutionnaires et de méthodes d’intervalles pour l’optimisation de problèmes difficiles ». Phd thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/13807/1/vanaret.pdf.

Texte intégral
Résumé :
Reliable global optimization is dedicated to finding a global minimum in the presence of rounding errors. The only approaches that provide a numerical proof of optimality are interval methods, which partition the search space and eliminate the subspaces that cannot contain an optimal solution. These exhaustive methods, known as interval branch and bound, have been studied since the 1960s and have recently incorporated refutation and contraction techniques originating in the interval analysis and constraint programming communities. It is crucially important to compute i) a sharp enclosure of the objective function and the constraints over a subdomain; ii) a good approximation (an upper bound) of the global minimum. State-of-the-art solvers are generally integrative methods: they invoke local optimization algorithms on each subdomain in order to obtain a good approximation of the global minimum. In this document, we are interested in a cooperative framework combining interval methods and evolutionary algorithms. The latter are stochastic algorithms that iteratively evolve a population of candidate solutions (individuals) within the search space, in the hope of converging toward satisfactory solutions. Evolutionary algorithms, equipped with mechanisms for escaping local minima, are particularly well suited to solving difficult problems on which traditional methods struggle to converge. Within our cooperative solver Charibde, the evolutionary algorithm and the interval algorithm run in parallel and exchange bounds, solutions, and search space via message passing.
A strategy coupling a geometric exploration heuristic with a domain reduction operator prevents premature convergence of the population toward local minima and keeps the evolutionary algorithm from exploring suboptimal or infeasible subspaces. A comparison of Charibde with state-of-the-art solvers (GlobSol, IBBA, Ibex) on a set of difficult problems shows a speedup of an order of magnitude. New optimal results are provided for five multimodal problems for which few solutions, even approximate ones, are known in the literature. We propose an aeronautical application in which conflict resolution is modeled as an optimization problem under universally quantified constraints and solved with dedicated interval techniques. Finally, we certify the optimality of the best known solution for the five-atom Lennard-Jones cluster, an open problem in molecular dynamics.
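The cooperation at the core of such a solver can be caricatured in a few lines: a stochastic search supplies an upper bound on the minimum, while an interval branch and bound discards boxes whose interval lower bound cannot beat it, and midpoint evaluations feed improvements back. A one-dimensional sketch with a naive interval extension of an assumed polynomial (nothing here is the Charibde implementation):

```python
import random

def isq(a):
    """Interval enclosure of x**2 over the interval a = (lo, hi)."""
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

def f(x):
    return x ** 4 - 4.0 * x ** 2 + x          # assumed multimodal objective

def F(box):
    """Naive interval extension of f, writing x**4 as (x**2)**2."""
    s = isq(box)
    return (s[0] * s[0] - 4.0 * s[1] + box[0],
            s[1] * s[1] - 4.0 * s[0] + box[1])

def cooperative_min(lo, hi, samples=200, tol=1e-6, seed=0):
    rng = random.Random(seed)
    # Stochastic component (a stand-in for the evolutionary algorithm):
    best = min(f(lo + (hi - lo) * rng.random()) for _ in range(samples))
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if F((a, b))[0] > best - tol:
            continue                          # box cannot hold a better minimum
        m = 0.5 * (a + b)
        best = min(best, f(m))                # midpoint refines the upper bound
        if b - a > 1e-5:
            stack += [(a, m), (m, b)]
    return best

best = cooperative_min(-3.0, 3.0)
```

The tighter the upper bound supplied by the stochastic side, the more boxes the interval side can prune, which is exactly the benefit the cooperative architecture is designed to exploit.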
Styles APA, Harvard, Vancouver, ISO, etc.
43

Bilicz, Sandor. « Application of Design-of-Experiment Methods and Surrogate Models in Electromagnetic Nondestructive Evaluation ». Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00601753.

Texte intégral
Résumé :
Electromagnetic nondestructive evaluation (ENDE) is applied in a variety of domains to the detection of hidden defects affecting structures. In general terms, the principle can be stated as follows: an unknown object perturbs a known host medium illuminated by a known electromagnetic signal, and the response is measured by one or more receivers at known positions. This response contains information about the electromagnetic and geometrical parameters of the sought objects, and the whole difficulty of the problem treated here lies in extracting this information from the measured signal. Better known as an "inverse problem", this work relies on an appropriate resolution of Maxwell's equations. The "inverse problem" is often paired with the complementary "forward problem", which consists in determining the perturbed electromagnetic field given all the geometrical and electromagnetic parameters of the configuration, defect included. In practice, this is carried out via mathematical modeling and numerical methods for solving such problems. The corresponding simulators can deliver very accurate results, but at a significant computational cost. Since solving an inverse problem often requires a large number of successive forward-problem solutions, the inversion becomes very demanding in computation time and computing resources. To overcome these challenges, "surrogate models" that mimic the exact model can be an attractive alternative. One way of building such surrogate models is to perform a certain number of exact simulations and then to approximate the model based on the data obtained. The choice of simulations ("prototypes") is normally controlled by a strategy drawn from the toolbox of "design of numerical experiments" methods.
In this thesis, the use of surrogate modeling and design-of-experiment techniques for ENDE applications is examined. Three independent approaches are presented in detail: an inversion method based on the optimization of an objective function, and two more general approaches for building surrogate models using adaptive sampling. The approaches proposed in this thesis are applied to eddy-current nondestructive evaluation examples.
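The surrogate-model idea described above (run a handful of expensive forward simulations at chosen "prototype" points, then approximate the model from the resulting data) can be illustrated with a deliberately simple sketch. The example function, the inverse-distance interpolant, and all names below are hypothetical stand-ins for the kriging-type surrogates and sampling strategies the thesis actually studies:

```python
import math

def expensive_simulation(x):
    # Stand-in for a costly forward solver (hypothetical 1-D function).
    return math.sin(3.0 * x) + 0.5 * x * x

# "Design of experiments": a small set of prototype points on [0, 1].
prototypes = [i / 10 for i in range(11)]
data = [(x, expensive_simulation(x)) for x in prototypes]

def surrogate(x, p=2, eps=1e-12):
    """Inverse-distance-weighted interpolant of the stored simulations."""
    num = den = 0.0
    for xi, yi in data:
        d = abs(x - xi)
        if d < eps:              # exactly on a prototype: exact stored value
            return yi
        w = 1.0 / d ** p
        num += w * yi
        den += w
    return num / den

# The cheap surrogate can now stand in for the solver inside an
# optimization or inversion loop:
grid = [i / 1000 for i in range(1001)]
x_min = min(grid, key=surrogate)
```

The surrogate reproduces the solver exactly at the prototypes and interpolates smoothly between them, so a dense search over it costs a fraction of a single exact simulation.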
APA, Harvard, Vancouver, ISO, etc. styles
44

Ouali, Abdelkader. « Méthodes hybrides parallèles pour la résolution de problèmes d'optimisation combinatoire : application au clustering sous contraintes ». Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC215/document.

Full text
Abstract:
Combinatorial optimization problems have become the target of a great deal of scientific research, owing to their importance in solving both academic problems and real problems encountered in engineering and industry. Solving these problems by exact methods is often intractable because of the exorbitant processing time these methods would require to reach the optimal solution(s). In this thesis, we were interested in both the algorithmic and the modeling context of solving combinatorial problems. At the algorithmic level, we explored hybrid methods, which excel in their ability to make exact and approximate methods cooperate in order to rapidly produce high-quality solutions. At the modeling level, we worked on the specification and exact resolution of complex pattern-set-mining problems, in particular by studying scaling issues on large databases. On the one hand, we proposed a first parallelization of the DGVNS algorithm, called CPDGVNS, which explores in parallel the different clusters of the tree decomposition by sharing the best overall solution on a master-worker model. Two other strategies, called RADGVNS and RSDGVNS, were then proposed to improve the frequency of exchange of intermediate solutions between the processes. Experiments carried out on difficult combinatorial problems show the effectiveness of our parallel methods. On the other hand, we proposed a hybrid approach combining techniques of both Integer Linear Programming (ILP) and pattern mining. Our approach is complete and takes advantage of the general ILP framework (providing a high level of flexibility and expressiveness) and of specialized heuristics for data mining (improving computing time). Beyond the general framework of pattern-set mining, two problems were studied in particular: conceptual clustering and the tiling problem.
The experiments carried out showed the contribution of our proposal relative to constraint-based approaches and specialized heuristics.
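For readers unfamiliar with the Variable Neighborhood Search family that DGVNS extends, a minimal sequential sketch on a hypothetical binary toy problem looks like the following. The tree-decomposition guidance and the parallel CPDGVNS/RADGVNS/RSDGVNS exchange strategies of the thesis are deliberately omitted:

```python
import random

def vns(cost, n_vars, k_max=4, iters=50, seed=0):
    """Minimal Variable Neighborhood Search over binary vectors.

    Shaking flips k random variables; the local search is a 1-flip
    descent. DGVNS additionally restricts each neighborhood to a
    cluster of a tree decomposition, and CPDGVNS explores clusters in
    parallel; both refinements are omitted in this sketch.
    """
    rng = random.Random(seed)

    def descent(x):
        improved = True
        while improved:
            improved = False
            for i in range(n_vars):
                y = x[:]
                y[i] ^= 1
                if cost(y) < cost(x):
                    x, improved = y, True
        return x

    x = descent([rng.randint(0, 1) for _ in range(n_vars)])
    for _ in range(iters):
        k = 1
        while k <= k_max:
            y = x[:]
            for i in rng.sample(range(n_vars), k):  # shaking in N_k
                y[i] ^= 1
            y = descent(y)
            if cost(y) < cost(x):
                x, k = y, 1      # improvement: restart from N_1
            else:
                k += 1           # no improvement: widen the neighborhood
    return x

# Toy objective (hypothetical): Hamming distance to a hidden target.
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda x: sum(a != b for a, b in zip(x, target))
best = vns(cost, len(target))
```

The success/failure rule (return to the smallest neighborhood after an improvement, widen it otherwise) is the defining trait of the whole VNS family.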
APA, Harvard, Vancouver, ISO, etc. styles
45

Hamdani, Hamid. « Métamodèles pour l’étude fiabiliste des systèmes mécatroniques Métamodélisation pour une conception robuste des systèmes mécatroniques Reliability analysis of tape based chip-scale packages based metamodel Optimization of solder joints in embedded mechatronic systems via Kriging-assisted CMA-ES algorithm Metamodel assisted evolution strategies for global optimization of solder joints reliability in embedded mechatronic devices ». Thesis, Normandie, 2019. http://www.theses.fr/2019NORMIR12.

Full text
Abstract:
Mechatronic system failures are often caused by fatigue failure of the solder joints of their electronic devices. With the increasing miniaturization of electronic packages, the stress on solder joints, which provide the connection between the component outputs and the PCB, has become a major challenge. Solder joints can accept high deformation rates, but an accumulation of repeated stresses causes premature ageing that can lead to their rupture (thermomechanical fatigue). Studies based on finite element simulation are therefore performed to numerically investigate the lifetime of PCB-mounted devices (second-level reliability). The high computational cost of solving such problems can make the optimization and reliability assessment procedure impracticable, owing to the long run times of multi-physics finite element simulation. This thesis proposes, on the one hand, an adaptation of the CMA-ES method assisted by a kriging metamodel for global optimization. The proposed algorithm was applied to the global optimization of the solder joints of a PQFP package; the numerical studies show that the proposed KA-CMA-ES algorithm is more efficient and effective than the standard CMA-ES algorithm. On the other hand, a general methodology for fatigue reliability analysis is proposed in this manuscript. It is based on uncertainty modeling (loading, material properties, geometry) and aims to quantify the reliability level of the studied system for a fatigue failure scenario. A method based on metamodeling techniques is proposed to cope with the complexity of mechatronic systems when solving the reliability problem: the metamodel is used to emulate the behaviour of the mechanical model while preserving both efficiency and accuracy. The methodology is applied to the reliability analysis of a T-CSP package.
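The surrogate-assisted idea behind KA-CMA-ES (pre-screen offspring with a cheap metamodel so that only promising candidates reach the expensive finite-element evaluation) can be sketched as follows. This toy uses a nearest-neighbour surrogate and a plain (1+λ) strategy rather than kriging and CMA-ES, and every name and number is illustrative:

```python
import random, math

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def sa_es(f, x0, sigma=0.5, lam=8, evals=60, seed=1):
    """Surrogate-assisted (1+lambda) evolution strategy (toy sketch).

    Every offspring is ranked by a cheap surrogate built from the
    archive of exact evaluations; only the surrogate-best candidate is
    evaluated exactly, saving lam - 1 expensive calls per generation.
    """
    rng = random.Random(seed)
    x_best = tuple(x0)
    f_best = f(x_best)
    archive = [(x_best, f_best)]
    used = 1
    while used < evals:
        offspring = [tuple(xi + rng.gauss(0.0, sigma) for xi in x_best)
                     for _ in range(lam)]

        def surrogate(x):
            # cheap model: value of the nearest exactly-evaluated point
            return min(archive, key=lambda p: dist(x, p[0]))[1]

        cand = min(offspring, key=surrogate)   # pre-screening step
        fc = f(cand)                           # the only exact evaluation
        used += 1
        archive.append((cand, fc))
        if fc < f_best:
            x_best, f_best = cand, fc
    return x_best, f_best

# Hypothetical cheap objective standing in for an FE fatigue model.
sphere = lambda x: sum(xi * xi for xi in x)
xb, fb = sa_es(sphere, (2.0, -1.5))
```

The budget `evals` counts exact evaluations only, which is the resource the thesis is trying to economize.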
APA, Harvard, Vancouver, ISO, etc. styles
46

Santos, Genasil Francisco dos. « Identificação de danos estruturais utilizando técnicas de otimização ». Universidade do Estado do Rio de Janeiro, 2009. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=5647.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Structural systems in a variety of applications, including aerospace vehicles, automobiles and civil engineering structures such as tall buildings, bridges and offshore platforms, accumulate damage during their service life. In several situations, such damage may not be visually observable. From the standpoint of both safety and performance, it is desirable to monitor the occurrence, location and extent of such damage. System identification methods, which may be classified in the general category of nondestructive evaluation techniques, can be employed for this purpose. Using experimental data, such as natural frequencies, mode shapes and static displacements, together with an analytical structural model, parameters of the structure can be identified. The approach used in the present work is one where the structural properties of the analytical model are varied to minimize the difference between the analytically predicted and empirically measured responses. This is an inverse problem in which the structural parameters are identified. In this work a reduced number of vibration modes was used as the measured response. For the damage assessment problem, a close analytical model of the structural system is available, and the model of the damaged structure is identified; damage is represented by a reduction in the elastic stiffness properties of the structure. The problem described above was solved using global optimization methods, because, depending on the number of variables or the location of damage, the resulting design space is nonconvex and presents several local minima. In the present work, the Differential Evolution optimization technique (DE) was used. It is a metaheuristic inspired by a population of solutions that is successively updated by arithmetic operations such as mutation and recombination until convergence. The approach was applied to simple truss structures with different levels of damage.
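Differential Evolution itself is compact. A textbook DE/rand/1/bin (not the author's exact implementation, and shown here on the Rosenbrock function rather than a truss damage-identification objective) can be written as:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)       # at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the search box
                else:
                    v = pop[i][j]            # inherit from the target vector
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    k = min(range(np_), key=fit.__getitem__)
    return pop[k], fit[k]

# Hypothetical test objective with a curved, narrow valley.
rosenbrock = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
x_best, f_best = differential_evolution(rosenbrock, [(-2, 2), (-2, 2)])
```

In a damage-identification setting, each population member would encode the elastic stiffness reductions of the truss elements and `f` would measure the mismatch between predicted and measured vibration modes.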
APA, Harvard, Vancouver, ISO, etc. styles
47

Schardong, André. « Aplicação de técnicas de programação linear e extensões para otimização da alocação de água em sistemas de recursos hídricos, utilizando métodos de pontos interiores ». Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3147/tde-01122006-171713/.

Full text
Abstract:
This work presents an optimization tool for analyzing water allocation problems in watersheds using linear and piecewise-linear programming techniques, integrated with a channel flow-routing model. The optimization is done globally, using linear programming packages based on interior-point methods. The methodology consists in obtaining an optimal solution for situations in which water availability is insufficient for all the conflicting uses in the watershed. The tool is being attached and incorporated into AcquaNet, a decision support system (DSS) for the analysis of water resources systems that uses a network flow algorithm to optimize water allocation. The linear programming formulation allows the system to be analyzed as a whole, and better use of the available water is therefore expected, whether as a smaller deficit in meeting demands or as greater storage in the reservoirs. Linear programming with interior-point methods is nowadays a well-known and well-developed technique; several freely available computational packages provide efficient implementations of interior-point methods, which motivated their use in the present work.
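The priority-driven allocation that such a linear formulation encodes can be illustrated in miniature: with purely linear (one-segment) benefits and a single supply source, the LP optimum reduces to a greedy fill in order of decreasing priority. The thesis handles networks and piecewise-linear penalties and solves the full LP with interior-point packages; the instance below is hypothetical:

```python
def allocate(supply, demands, priorities):
    """Fill demands in order of decreasing priority.

    For a single source with linear per-unit benefits this greedy rule
    reproduces the LP optimum; it is a one-segment special case of the
    piecewise-linear objectives used in the thesis.
    """
    order = sorted(range(len(demands)), key=lambda i: -priorities[i])
    alloc = [0.0] * len(demands)
    remaining = supply
    for i in order:
        alloc[i] = min(demands[i], remaining)  # grant what is left, capped
        remaining -= alloc[i]
    return alloc

# Hypothetical instance: 8 units of water, three competing users.
alloc = allocate(8.0, demands=[5.0, 4.0, 3.0], priorities=[3, 1, 2])
# user 0 (priority 3) is served first, then user 2, then user 1
```

Piecewise-linear costs generalize this by splitting each demand into segments with increasing marginal penalties, which is what makes shortages spread across users instead of falling entirely on the lowest priority.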
APA, Harvard, Vancouver, ISO, etc. styles
48

Tigli, Luca. « Saving local searches in global optimization ». Doctoral thesis, 2020. http://hdl.handle.net/2158/1188596.

Full text
Abstract:
This thesis concerns the use of local searches within global optimization algorithms. In particular, we focus on strategies for deciding whether or not to start a local search from a given starting point. More specifically, our aim is to avoid wasting computational effort on local searches that lead to already detected local minima or to local minimizers with a poor function value. Our clustering-based strategies can easily be used to solve large-scale global optimization problems and, more generally, to enhance the performance of any memetic algorithm.
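A crude version of the idea (skip a local search whose start point appears to lie in the basin of an already known minimizer) can be sketched as follows. The fixed-radius test stands in for the clustering-based decision rules studied in the thesis, and all names and constants are illustrative:

```python
import random

def multistart(f, local_search, sample, n_starts=100, radius=0.25, seed=0):
    """Multistart that saves local searches: a start point within
    `radius` of a known local minimizer is assumed to lie in its basin
    and is discarded without launching a search."""
    rng = random.Random(seed)
    minima = []                      # (local minimizer, value) pairs
    searches = 0
    for _ in range(n_starts):
        x0 = sample(rng)
        if any(abs(x0 - m) < radius for m, _ in minima):
            continue                 # likely same basin: local search saved
        xm = local_search(f, x0)
        searches += 1
        if not any(abs(xm - m) < 1e-6 for m, _ in minima):
            minima.append((xm, f(xm)))
    x_best, f_best = min(minima, key=lambda p: p[1])
    return x_best, f_best, searches

f = lambda x: (x * x - 1.0) ** 2     # minima at x = -1 and x = +1

def local_search(f, x, h=1e-4, lr=0.01, steps=2000):
    for _ in range(steps):           # naive fixed-step gradient descent
        g = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * g
    return x

x_best, f_best, searches = multistart(
    f, local_search, sample=lambda rng: rng.uniform(-2.0, 2.0))
```

Here `searches` ends up well below `n_starts`: once both basins are represented, most new start points are discarded before any local search is run.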
APA, Harvard, Vancouver, ISO, etc. styles
49

Moa, Belaid. « Interval methods for global optimization ». Thesis, 2007. http://hdl.handle.net/1828/198.

Full text
Abstract:
We propose interval arithmetic and interval constraint algorithms for global optimization. Both compute lower and upper bounds of a function over a box, and return lower and upper bounds for the global minimum. In interval arithmetic methods, the bounds are computed using interval arithmetic evaluations; interval constraint methods instead use domain reduction operators and consistency algorithms. The usual interval arithmetic algorithms for global optimization suffer from at least one of the following drawbacks:
- mixing the fathoming problem, in which we ask only for the global minimum value, with the localization problem, in which we ask for the set of points at which the global minimum occurs;
- not handling inner and outer approximations of the epsilon-minimizer, the set of points at which the objective function is within epsilon of the global minimum;
- saying nothing about the quality of their results in actual computation: the properties of the algorithms are stated only in the limit of infinite running time, infinite memory, and infinite precision of the floating-point number system.
To address these drawbacks, we propose interval arithmetic algorithms for the fathoming and localization problems whose properties can be verified in actual executions. Moreover, the proposed algorithms return the best results that can be computed with the given expressions for the objective function and the conditions, on given hardware. Interval constraint methods combine interval arithmetic and constraint processing techniques, namely consistency algorithms, to obtain tighter bounds for the objective function over a box. The basic building block of interval constraint methods is the generic propagation algorithm, which explains our efforts to improve it as much as possible.
All our algorithms (dual, clustered, deterministic, and selective propagation) were developed as attempts to improve the efficiency of the generic propagation algorithm. The relational box-consistency algorithm is another key algorithm in interval constraints: it keeps squashing the left and right bounds of the variables' intervals until no further narrowing is possible. One drawback of this way of squashing is that the process slows down as it proceeds; another is that, in some cases, the actual narrowing occurs late. To address these problems, we propose:
- the Dynamic Box-Consistency algorithm: instead of pruning the left and then the right bound of each domain, we alternate the pruning between all the domains;
- the Adaptive Box-Consistency algorithm: the idea is to discard boxes as soon as possible, starting with small boxes and extending or shrinking them depending on the pruning outcome; this adaptive behavior makes the algorithm well suited to quick squashing.
Since the efficiency of interval constraint optimization methods depends heavily on the sharpness of the upper bound for the global minimum, some effort must be made to find an appropriate point or box for computing the upper bound, rather than picking one at random as is commonly done. We therefore introduce interval constraints with exploration: non-interval methods are used as an exploratory step in solving a global optimization problem, and the results of the exploration then guide the interval constraint algorithms, improving their efficiency.
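A minimal interval branch-and-bound for the fathoming problem (bounding the global minimum value, without locating minimizers) conveys the flavour of the interval arithmetic side. The objective, its hand-written interval extension, and the tolerances below are all illustrative:

```python
import math

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isqr(a):
    lo, hi = a
    c = (lo * lo, hi * hi)
    return (0.0 if lo <= 0.0 <= hi else min(c), max(c))

def f_interval(x, y):
    # Interval extension of f(x, y) = (x - 1)^2 + (y + 2)^2
    return iadd(isqr((x[0] - 1.0, x[1] - 1.0)), isqr((y[0] + 2.0, y[1] + 2.0)))

def f_point(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def fathom(box, tol=1e-6):
    """Best-first interval branch and bound: discard boxes whose lower
    bound exceeds the best upper bound; bisect the rest."""
    mid = lambda b: tuple(0.5 * (lo + hi) for lo, hi in b)
    ub = f_point(*mid(box))              # any point value is an upper bound
    work = [box]
    while work:
        work.sort(key=lambda b: f_interval(*b)[0])
        b = work.pop(0)                  # box with the smallest lower bound
        blo, bhi = f_interval(*b)
        if blo > ub:
            continue                     # fathomed: cannot hold the minimum
        ub = min(ub, f_point(*mid(b)))
        if bhi - blo < tol:
            return blo, ub               # enclosure of the global minimum
        i = max(range(len(b)), key=lambda j: b[j][1] - b[j][0])
        lo, hi = b[i]
        left, right = list(b), list(b)
        left[i], right[i] = (lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)
        work += [tuple(left), tuple(right)]
    return -math.inf, ub

lb, ub = fathom(((-4.0, 4.0), (-4.0, 4.0)))
```

Because the best-first rule always expands the box with the smallest lower bound, the returned pair `(lb, ub)` encloses the global minimum; constraint propagation, as in the thesis, would tighten each box before it is bisected.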
APA, Harvard, Vancouver, ISO, etc. styles
50

Ocloo, Senanu K. « Global optimization methods for adaptive IIR filters ». 2007. http://www.lib.ncsu.edu/theses/available/etd-07112007-150000/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
