
Dissertations / Theses on the topic 'Set covering'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Set covering.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Umetani, Shunji, and Mutsunori Yagiura. "RELAXATION HEURISTICS FOR THE SET COVERING PROBLEM." The Operations Research Society of Japan (日本オペレーションズ・リサーチ学会), 2007. http://hdl.handle.net/2237/11724.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Parrish, Edna L. "Generalized total and partial set covering problems." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/91140.

Full text
Abstract:
This thesis is concerned with the development of two generalized set covering models. The first model is formulated for the total set covering problem where cost is minimized subject to the constraint that each customer must be served by at least one facility. The second model is constructed for the partial set covering problem in which customer coverage is maximized subject to a budget constraint. The conventional formulations of both the total set covering and partial set covering problems are shown to be special cases of the two generalized models that are developed. Appropriate solution strategies are discussed for each generalized model. A specialized algorithm for a particular case of the partial covering problem is constructed and computational results are presented.
M.S.
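
For reference, the two conventional models that the abstract says are generalized can be written as integer programs. The notation below (costs c_j, coverage coefficients a_{ij}, budget B, decision variables x_j, z_i) is ours and not the thesis's:

\begin{align*}
\text{(total covering)} \quad & \min \sum_{j} c_j x_j \quad \text{s.t.} \quad \sum_{j} a_{ij} x_j \ge 1 \ \ \forall i, \qquad x_j \in \{0,1\}, \\
\text{(partial covering)} \quad & \max \sum_{i} z_i \quad \text{s.t.} \quad z_i \le \sum_{j} a_{ij} x_j \ \ \forall i, \quad \sum_{j} c_j x_j \le B, \qquad x_j, z_i \in \{0,1\}.
\end{align*}
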
APA, Harvard, Vancouver, ISO, and other styles
3

Lambie-Hanson, Christopher. "Covering Matrices, Squares, Scales, and Stationary Reflection." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/368.

Full text
Abstract:
In this thesis, we present a number of results in set theory, particularly in the areas of forcing, large cardinals, and combinatorial set theory. Chapter 2 concerns covering matrices, combinatorial structures introduced by Viale in his proof that the Singular Cardinals Hypothesis follows from the Proper Forcing Axiom. In the course of this proof and subsequent work with Sharon, Viale isolated two reflection principles, CP and S, which can hold of covering matrices. We investigate covering matrices for which CP and S fail and prove some results about the connections between such covering matrices and various square principles. In Chapter 3, motivated by the results of Chapter 2, we introduce a number of square principles intermediate between the classical □_λ and □(λ⁺). We provide a detailed picture of the implications and independence results which exist between these principles when λ is regular. In Chapter 4, we address three questions raised by Cummings and Foreman regarding a model of Gitik and Sharon. We first analyze the PCF-theoretic structure of the Gitik-Sharon model, determining the extent of good and bad scales. We then classify the bad points of the bad scales existing in both the Gitik-Sharon model and various other models containing bad scales. Finally, we investigate the ideal of subsets of singular cardinals of countable cofinality carrying good scales. In Chapter 5, we prove that, assuming large cardinals, it is consistent that there are many singular cardinals μ such that every stationary subset of μ⁺ reflects but there are stationary subsets of μ⁺ that do not reflect at ordinals of arbitrarily high cofinality. This answers a question raised by Todd Eisworth and is joint work with James Cummings. In Chapter 6, we extend a result of Gitik, Kanovei, and Koepke regarding intermediate models of Prikry-generic forcing extensions to Radin-generic forcing extensions. Specifically, we characterize intermediate models of forcing extensions by Radin forcing at a large cardinal κ using measure sequences of length less than κ. In the final brief chapter, we prove some results about iterations of ω₁-Cohen forcing with ω₁-support, answering a question of Justin Moore.
APA, Harvard, Vancouver, ISO, and other styles
4

Feng, Jinghua. "Redundancy in nonlinear systems, a set covering approach." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0010/MQ52546.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Djannaty, Farhad. "Network based heuristics for the set covering problem." Thesis, Brunel University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rosas, José Humberto Ablanedo. "Algorithms for very large scale set covering problems /." Full text available from ProQuest UM Digital Dissertations, 2007. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1609001671&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1244747021&clientId=22256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Annen, Oliver. "Das Partial set covering problem und Erweiterungen Modellierung und Lösungsverfahren /." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=969854226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chang, Engder. "Neural computing for minimum set covering and gate-packing problems." Case Western Reserve University School of Graduate Studies / OhioLINK, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=case1056655652.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

El-Darzi, E. "Methods for solving the set covering and set partitioning problems using graph theoretic (relaxation) algorithms." Thesis, Brunel University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Maltais, Elizabeth Jane. "Graph-dependent Covering Arrays and LYM Inequalities." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34434.

Full text
Abstract:
The problems we study in this thesis are all related to covering arrays. Covering arrays are combinatorial designs, widely used as templates for efficient interaction-testing suites. They have connections to many areas including extremal set theory, design theory, and graph theory. We define and study several generalizations of covering arrays, and we develop a method which produces an infinite family of LYM inequalities for graph-intersecting collections. A common theme throughout is the dependence of these problems on graphs. Our main contribution is an extremal method yielding LYM inequalities for $H$-intersecting collections, for every undirected graph $H$. Briefly, an $H$-intersecting collection is a collection of packings (or partitions) of an $n$-set in which the classes of every two distinct packings in the collection intersect according to the edges of $H$. We define ``$F$-following" collections which, by definition, satisfy a LYM-like inequality that depends on the arcs of a ``follow" digraph $F$ and a permutation-counting technique. We fully characterize the correspondence between ``$F$-following" and ``$H$-intersecting" collections. This enables us to apply our inequalities to $H$-intersecting collections. For each graph $H$, the corresponding inequality inherently bounds the maximum number of columns in a covering array with alphabet graph $H$. We use this feature to derive bounds for covering arrays with the alphabet graphs $S_3$ (the star on three vertices) and $\kvloop{3}$ ($K_3$ with loops). The latter improves a known bound for classical covering arrays of strength two. We define covering arrays on column graphs and alphabet graphs which generalize covering arrays on graphs. The column graph encodes which pairs of columns must be $H$-intersecting, where $H$ is a given alphabet graph. Optimizing covering arrays on column graphs and alphabet graphs is equivalent to a graph-homomorphism problem to a suitable family of targets which generalize qualitative independence graphs. When $H$ is the two-vertex tournament, we give constructions and bounds for covering arrays on directed column graphs. FOR arrays are the broadest generalization of covering arrays that we consider. We define FOR arrays to encompass testing applications where constraints must be considered, leading to forbidden, optional, and required interactions of any strength. We model these testing problems using a hypergraph. We investigate the existence of FOR arrays, the compatibility of their required interactions, critical systems, and binary relational systems that model the problem using homomorphisms.
APA, Harvard, Vancouver, ISO, and other styles
11

Bhowmick, Santanu. "Multi-covering problems and their variants." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5418.

Full text
Abstract:
In combinatorial optimization, covering problems are those problems where, given a ground set and a family of subsets of the ground set, the objective is to find a minimum cost set of subsets whose union contains the ground set. We consider covering problems in the context of Computational Geometry, which is a subfield of Computer Science that deals with problems associated with geometric objects such as points, lines, disks, polygons etc. In particular, geometric covering is an active research area, where the ground set and the family of subsets are induced by geometric objects. Covering problems in combinatorial optimization often have a geometric analogue that arises naturally, and though such problems remain NP-hard, it is often possible to exploit the geometric properties of the set system to obtain better approximation algorithms. In this work, the fundamental problem that we consider is a generalization of geometric covering, where each element in the ground set may need to be covered by more than one subset. To be precise, the problem is defined as follows: given two sets of points X, Y and a coverage function κ : X → Z+ ∪ {0}, construct balls centered on the points in Y such that each point x in X is covered by at least κ(x) distinct balls. The objective in this problem is to minimize the total cost, which is a function of the radii of the balls. This problem is termed the metric multi-cover (MMC) problem. We first consider the version of the MMC problem where κ(x) = 1 for all clients, i.e. the 1-covering case. The known results that give a constant factor approximation for this problem are variations of an LP-based primal-dual algorithm. We use a modified local search technique, motivated by a geometric idea, to derive a simple, constant-factor approximation for this problem in Chapter 2. We then look at the MMC problem where the point sets X, Y are in the Euclidean plane, and each client x ∈ X needs to be covered by at least κ(x) distinct disks centered on the points in Y. In Chapter 4, we give the first polynomial time constant factor approximation for this problem, in which the constant is independent of the coverage function κ. Our solution also has an incremental property, which allows the algorithm to handle increases in the coverage requirement by increasing the radii of the current server disks, without affecting the approximation factor. In the next problem, we move from the Euclidean plane to arbitrary metric spaces, where we consider the uniform MMC problem. In this problem, each client x has the demand κ(x) = k, where k > 0 is an integer. We give the first constant factor approximation (independent of k) for this problem. The key contribution that led to this result is the formulation of a partitioning scheme of the servers in the uniform MMC problem, which reduces the uniform MMC problem to k instances of the 1-covering problem while preserving the optimality of the solution to a constant multiplicative factor. We present the partitioning scheme as an independent result in Chapter 5, which we then use to solve the uniform MMC problem in Chapter 6. The MMC problem with an arbitrary coverage function κ is then considered in Chapter 7. The key challenge that the non-uniform version presents is that the symmetry of the server partitioning scheme breaks down, as the coverage demands of the clients are independent of each other. We present a constant factor algorithm for this problem in Chapter 7.
The last problem that we consider is the t-MMC problem, which is a restricted version of the uniform MMC problem. The objective is to compute a cover in which each client is covered by at least k distinct server disks, using at most t server disks in total. This problem is a generalization of the clustering problem (where k = 1), and to our knowledge this is the first time this generalization has been considered. We give a constant factor approximation for this problem in Chapter 8.
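
To make the problem statement above concrete, here is a deliberately naive feasibility heuristic for the metric multi-cover problem in Python; it simply assigns every client to its kappa(x) nearest servers, producing some feasible radius assignment rather than the constant-factor algorithms developed in the thesis (the function name and example data are ours):

from math import dist

def feasible_mmc(X, Y, kappa):
    """Toy feasibility heuristic for the metric multi-cover (MMC) problem.

    X, Y: lists of points (tuples); kappa: dict mapping each client in X to
    its coverage demand. Each client is assigned its kappa[x] nearest servers,
    and each server's radius is the largest distance over the clients assigned
    to it. Returns {server: radius}; the cost is the sum of the radii.
    """
    radii = {y: 0.0 for y in Y}
    for x in X:
        nearest = sorted(Y, key=lambda y: dist(x, y))[:kappa[x]]
        for y in nearest:
            radii[y] = max(radii[y], dist(x, y))
    return radii

# Example: two clients with demands 1 and 2, three candidate servers.
X = [(0.0, 0.0), (4.0, 0.0)]
Y = [(1.0, 0.0), (3.0, 0.0), (5.0, 0.0)]
print(feasible_mmc(X, Y, {X[0]: 1, X[1]: 2}))
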
APA, Harvard, Vancouver, ISO, and other styles
12

Shah, Mohak. "Sample compression, margins and generalization: Extensions to the set covering machine." Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/29372.

Full text
Abstract:
This thesis studies the generalization behavior of algorithms in Sample Compression Settings. It extends the study of the Sample Compression framework to derive data-dependent bounds that give tighter guarantees to the algorithms where data-independent bounds such as the VC bounds are not applicable. It also studies the interplay between the sparsity and the separating margin of the classifier, and shows how new compression-based data-dependent bounds can be obtained that exploit these two quantities explicitly. These bounds not only provide tight generalization guarantees but by themselves present optimization problems for learning, leading to novel learning algorithms. This thesis studies algorithms based on learning conjunctions or disjunctions of data-dependent Boolean features. With the Set Covering Machine (SCM) as its basis, the thesis shows how novel learning algorithms can be designed in compression settings that can perform a non-trivial margin-sparsity trade-off to yield better classifiers. Moreover, the thesis also shows how feature selection can be integrated with the learning process in these settings, yielding algorithms that not only perform successful feature selection but also have provable theoretical guarantees. In particular, the thesis proposes two novel learning algorithms. The first algorithm is for the SCM with data-dependent half-spaces, along with a tight compression bound that can successfully perform model selection. The second algorithm aims at learning conjunctions of features called data-dependent Rays to classify gene expression data from DNA microarrays. The thesis shows how a PAC-Bayes approach to learning Rays' conjunctions can perform a non-trivial margin-sparsity trade-off to achieve classifiers that not only have provable theoretical guarantees but also utilize a significantly smaller number of attributes than traditional feature selection algorithms. This thesis also proposes two new formulations for the classical SCM algorithm with data-dependent balls, aimed at performing a margin-sparsity trade-off by utilizing Occam's Razor and PAC-Bayes principles respectively. The thesis shows how such approaches yield more general classifiers with tight risk bounds that can potentially guide the model selection process.
APA, Harvard, Vancouver, ISO, and other styles
13

Buezas, David. "Constraint-based modeling of minimum set covering: application to species differentation." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/6158.

Full text
Abstract:
Work presented in the context of the European Master in Computational Logics, as a partial requisite for graduation as Master in Computational Logics
A large number of species cannot be distinguished via standard non-genetic analysis in the lab. In this dissertation I address the problem of finding all minimum sets of restriction enzymes that can be used to unequivocally identify the species of a yeast specimen by analyzing the size of digested DNA fragments in gel electrophoresis experiments. The problem is first mapped into set covering and then solved using various Constraint Programming techniques. One of the models for solving minimum set covering proved to be very efficient in finding the size of a minimum cover but was inapplicable for finding all solutions due to the existence of symmetries. Many symmetry breaking algorithms were developed and tested for it. Hoping to obtain an efficient model suitable for both tasks, the global constraint involved in it was also partially implemented in the CaSPER Constraint Solver, together with the most promising symmetry breaking algorithm. Eventually the best solution was obtained by combining two different constraint-based models, one to find the size of a minimum solution and the other to find all minimal solutions.
APA, Harvard, Vancouver, ISO, and other styles
14

Sun, Peng, and Robert M. Freund. "Computation of Minimum Volume Covering Ellipsoids." Massachusetts Institute of Technology, Operations Research Center, 2002. http://hdl.handle.net/1721.1/5090.

Full text
Abstract:
We present a practical algorithm for computing the minimum volume n-dimensional ellipsoid that must contain m given points a_1, ..., a_m ∈ R^n. This convex constrained problem arises in a variety of applied computational settings, particularly in data mining and robust statistics. Its structure makes it particularly amenable to solution by interior-point methods, and it has been the subject of much theoretical complexity analysis. Here we focus on computation. We present a combined interior-point and active-set method for solving this problem. Our computational results demonstrate that our method solves very large problem instances (m = 30,000 and n = 30) to a high degree of accuracy in under 30 seconds on a personal computer.
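
For readers who want to experiment with the problem, the sketch below implements the classical Khachiyan-style first-order iteration for the minimum-volume enclosing ellipsoid {x : (x - c)^T A (x - c) <= 1}. It is not the interior-point/active-set method of the paper; the tolerance and iteration limit are arbitrary choices of ours:

import numpy as np

def mvee(points, tol=1e-6, max_iter=10000):
    """Khachiyan-style iteration for the minimum-volume enclosing ellipsoid
    of the rows of `points`. Returns (A, c) describing the ellipsoid
    {x : (x - c)^T A (x - c) <= 1}."""
    m, n = points.shape
    Q = np.column_stack([points, np.ones(m)]).T        # lifted points, (n+1) x m
    u = np.full(m, 1.0 / m)                            # weights on the points
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)   # Mahalanobis scores
        j = int(np.argmax(M))
        step = (M[j] - n - 1.0) / ((n + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = points.T @ u                                   # center of the ellipsoid
    A = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / n
    return A, c

# Example: the unit square; the computed center is approximately (0.5, 0.5).
A, c = mvee(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
print(c)
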
APA, Harvard, Vancouver, ISO, and other styles
15

Meagher, Karen. "Covering arrays on graphs: Qualitative independence graphs and extremal set partition theory." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/29234.

Full text
Abstract:
There has been a good deal of research on covering arrays over the last 20 years. Most of this work has focused on constructions, applications and generalizations of covering arrays. The main focus of this thesis is a generalization of covering arrays, covering arrays on graphs. The original motivation for this generalization was to improve applications of covering arrays to testing systems and networks, but this extension also gives us new ways to study covering arrays. Two vectors v, w in Z_k^n are qualitatively independent if for all ordered pairs (a, b) ∈ Z_k × Z_k there is a position i in the vectors where (a, b) = (v_i, w_i). A covering array is an array with the property that any pair of rows are qualitatively independent. A covering array on a graph is an array with a row for each vertex of the graph with the property that any two rows which correspond to adjacent vertices are qualitatively independent. A covering array on the complete graph is a covering array. A covering array is optimal if it has the minimum number of columns among covering arrays with the same number of rows. The addition of a graph structure to covering arrays makes it possible to use methods from graph theory to study these designs. In this thesis, we define a family of graphs called the qualitative independence graphs. A graph has a covering array, with given parameters, if and only if there is a homomorphism from the graph to a particular qualitative independence graph. Cliques in qualitative independence graphs relate to covering arrays and independent sets are connected to intersecting partition systems. It is known that the exact size of an optimal binary covering array can be determined using Sperner's Theorem and the Erdős-Ko-Rado Theorem. In this thesis, we find good bounds on the size of an optimal binary covering array on a graph. In addition, we determine both the chromatic number and a core of the binary qualitative independence graphs. Since the rows of general covering arrays correspond to set partitions, we give extensions of Sperner's Theorem and the Erdős-Ko-Rado Theorem to set-partition systems. These results are part of a general framework to study extremal partition systems. The core of the binary qualitative independence graphs can be generalized to a subgraph of a general qualitative independence graph called the uniform qualitative independence graph. Cliques in the uniform qualitative independence graphs relate to balanced covering arrays. Using these graphs, we find bounds on the size of a balanced covering array. We give the spectra for several of these graphs and conjecture that they are graphs in an association scheme. We also give a new construction for covering arrays which yields many new upper bounds on the size of optimal covering arrays.
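
The central definition in this abstract is easy to check directly. The short Python sketch below (function names and the small example are ours) tests qualitative independence of two rows and the resulting strength-two covering array property:

from itertools import product

def qualitatively_independent(v, w, k):
    """Two vectors over Z_k are qualitatively independent if every ordered
    pair (a, b) in Z_k x Z_k occurs as (v[i], w[i]) in some position i."""
    return set(zip(v, w)) == set(product(range(k), repeat=2))

def is_covering_array(rows, k):
    """Strength-two covering array: every pair of rows is qualitatively
    independent. For a covering array on a graph, only the pairs of rows
    corresponding to adjacent vertices would need to satisfy this."""
    return all(qualitatively_independent(rows[i], rows[j], k)
               for i in range(len(rows)) for j in range(i + 1, len(rows)))

# A binary covering array with 3 rows and 4 columns (k = 2).
rows = [(0, 0, 1, 1),
        (0, 1, 0, 1),
        (0, 1, 1, 0)]
print(is_covering_array(rows, 2))   # True: each pair of rows hits 00, 01, 10, 11
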
APA, Harvard, Vancouver, ISO, and other styles
16

Cacchiani, Valentina, Vera Hemmelmayr, and Fabien Tricoire. "A set-covering based heuristic algorithm for the periodic vehicle routing problem." Elsevier, 2014. http://dx.doi.org/10.1016/j.dam.2012.08.032.

Full text
Abstract:
We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011) [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems. (authors' abstract)
APA, Harvard, Vancouver, ISO, and other styles
17

Bishop, Gregory J. "Ultrafilters generated by a closed set of functions and K- covering sets /." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487779914823645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Boskovitz, Agnes, and abvi@webone com au. "Data Editing and Logic: The covering set method from the perspective of logic." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080314.163155.

Full text
Abstract:
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. I demonstrate that the covering set method has strong parallels with well-known parts of propositional logic. The first aspect of the covering set method that I analyse is the edit generation function, which is the main function used in the covering set method. I demonstrate that the edit generation function can be formalised as a logical deduction function in propositional logic. I also demonstrate that the best-known edit generation function, written here as FH (standing for Fellegi-Holt), is essentially the same as propositional resolution deduction. Since there are many automated implementations of propositional resolution, the equivalence of FH with propositional resolution gives some hope that the covering set method might be implementable with automated logic tools. However, before any implementation, the other main aspect of the covering set method must also be formalised in terms of logic. This other aspect, to be called covering set correctibility, is the property that must be obeyed by the edit generation function if the covering set method is to successfully solve the error localisation problem. In this thesis I demonstrate that covering set correctibility is a strengthening of the well-known logical properties of soundness and refutation completeness. What is more, the proofs of the covering set correctibility of FH and of the soundness / completeness of resolution deduction have strong parallels: while the proof of soundness / completeness depends on the reduction property for counter-examples, the proof of covering set correctibility depends on the related lifting property. In this thesis I also use the lifting property to prove the covering set correctibility of the function defined by the Field Code Forest Algorithm. In so doing, I prove that the Field Code Forest Algorithm, whose correctness has been questioned, is indeed correct. The results about edit generation functions and covering set correctibility apply to both categorical edits (edits about discrete data) and arithmetic edits (edits expressible as linear inequalities). Thus this thesis gives the beginnings of a theoretical logical framework for error localisation, which might give new insights to the problem. In addition, the new insights will help develop new tools using automated logic tools. What is more, the strong parallels between the covering set method and aspects of logic are of aesthetic appeal.
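
The abstract's identification of the Fellegi-Holt edit generation function with propositional resolution can be illustrated with the textbook resolution rule. The clause encoding below is a simplification of ours, not the thesis's formalism:

def resolve(c1, c2):
    """One propositional resolution step. Clauses are frozensets of literals;
    a literal is a (name, polarity) pair. If c1 contains p and c2 contains
    not-p, return the resolvent (c1 union c2) minus the clashing pair."""
    for name, pol in c1:
        if (name, not pol) in c2:
            return frozenset((c1 | c2) - {(name, pol), (name, not pol)})
    return None   # no complementary pair, nothing to resolve

# Resolving (x or y) with (not x or z) yields (y or z).
c1 = frozenset({("x", True), ("y", True)})
c2 = frozenset({("x", False), ("z", True)})
print(resolve(c1, c2))
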
APA, Harvard, Vancouver, ISO, and other styles
19

Alwabsi, Nowayer Afnan H. "FINDING A MINIMUM-WIDTH TROUNULUS COVERING A SET OF POINTS ON THE PLANE." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1495727703596122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Boskovitz, Agnes. "Data editing and logic : the covering set method from the perspective of logic /." View thesis entry in Australian Digital Theses, 2008. http://thesis.anu.edu.au/public/adt-ANU20080314.163155/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Zyl, Jacobus van. "Fuzzy set covering as a new paradigm for the induction of fuzzy classification rules." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=984768548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Naszódi, Márton. "On the constant of homothety for covering a convex set with its smaller copies." Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/96900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Ballone, Frank A. "Gamma-Sets and the (A, B_∞) Selection Principle." Ohio University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1490350606744796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sapkota, Nabin. "SIMULATION OF RANDOM SET COVERING PROBLEMS WITH KNOWN OPTIMAL SOLUTIONS AND EXPLICITLY INDUCED CORRELATIONS AMOONG COEFFICIENTS." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3483.

Full text
Abstract:
The objective of this research is to devise a procedure to generate random Set Covering Problem (SCP) instances with known optimal solutions and correlated coefficients. The procedure presented in this work can generate a virtually unlimited number of SCP instances with known optimal solutions and realistic characteristics, thereby facilitating testing of the performance of SCP heuristics and algorithms. A four-phase procedure based on the Karush-Kuhn-Tucker (KKT) conditions is proposed to generate SCP instances with known optimal solutions and correlated coefficients. Given randomly generated values for the objective function coefficients and the sum of the binary constraint coefficients for each variable and a randomly selected optimal solution, the procedure: (1) calculates the range for the number of possible constraints, (2) generates constraint coefficients for the variables with value one in the optimal solution, (3) assigns values to the dual variables, and (4) generates constraint coefficients for variables with value 0 in the optimal solution so that the KKT conditions are satisfied. A computational demonstration of the procedure is provided. A total of 525 SCP instances are simulated under seven correlation levels and three levels for the number of constraints. Each of these instances is solved using three simple heuristic procedures. The performance of the heuristics on the SCP instances generated is summarized and analyzed. The performance of the heuristics generally worsens as the expected correlation between the coefficients increases and as the number of constraints increases. The results provide strong evidence of the benefits of the procedure for generating SCP instances with correlated coefficients, and in particular SCP instances with known optimal solutions.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering and Management Systems
APA, Harvard, Vancouver, ISO, and other styles
25

Levy, Eythan. "Approximation algorithms for covering problems in dense graphs." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210359.

Full text
Abstract:
We present a set of approximation results for several covering problems in dense graphs. These results show that for several problems, classical algorithms with constant approximation ratios can be analyzed in a finer way, and provide better constant approximation ratios under some density constraints. In particular, we show that the maximal matching heuristic approximates VERTEX COVER (VC) and MINIMUM MAXIMAL MATCHING (MMM) with a constant ratio strictly smaller than 2 when the proportion of edges present in the graph (weak density) is at least 3/4, or when the normalized minimum degree (strong density) is at least 1/2. We also show that this result can be improved by a greedy algorithm which provides a constant ratio smaller than 2 when the weak density is at least 1/2. We also provide tight families of graphs for all these approximation ratios. We then looked at several algorithms from the literature for VC and SET COVER (SC). We present a unified and critical approach to the Karpinski/Zelikovsky, Imamura/Iwama and Bar-Yehuda/Kehat algorithms, identifying the general scheme underlying these algorithms.

Finally, we look at the CONNECTED VERTEX COVER (CVC) problem, for which we propose new approximation results in dense graphs. We first analyze Carla Savage's algorithm, then a new variant of the Karpinski-Zelikovsky algorithm. Our results show that these algorithms provide the same approximation ratios for CVC as the maximal matching heuristic and the Karpinski-Zelikovsky algorithm did for VC. We provide tight examples for the ratios guaranteed by both algorithms. We also introduce a new invariant, the "price of connectivity of VC", defined as the ratio between the optimal solutions of CVC and VC, and show a nearly tight upper bound on its value as a function of the weak density. Our last chapter discusses software aspects, and presents the use of the GRAPHEDRON software in the framework of approximation algorithms, as well as our contributions to the development of this system.
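
The maximal matching heuristic analyzed above is short enough to state in full. The sketch below is the standard factor-2 algorithm; the sharper density-dependent ratios are the thesis's analysis and are not visible in the code itself (the example graph is ours):

def matching_vertex_cover(edges):
    """Classical maximal-matching heuristic for VERTEX COVER: greedily build
    a maximal matching and return both endpoints of every matched edge.
    The result is always a vertex cover of size at most twice the optimum."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.update((u, v))
    return cover

# On a 4-cycle the heuristic returns all four vertices, twice the optimum of 2.
print(matching_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)]))
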

Doctorate in Sciences

APA, Harvard, Vancouver, ISO, and other styles
26

Zyl, Jacobus van [Verfasser], and Ian [Akademischer Betreuer] Cloete. "Fuzzy set covering as a new paradigm for the induction of fuzzy classification rules / Jacobus van Zyl. Betreuer: Ian Cloete." Mannheim : Universitätsbibliothek Mannheim, 2014. http://d-nb.info/106249279X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Curtis, Andrew. "Two families of holomorphic correspondences." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/7978.

Full text
Abstract:
Holomorphic correspondences are multivalued functions from the Riemann sphere to itself. This thesis is concerned with a certain type of holomorphic correspondence known as a covering correspondence. In particular we are concerned with a one-complex-dimensional family of correspondences constructed by post-composing a covering correspondence with a conformal involution. Correspondences constructed in this manner have varied and intricate dynamics. We introduce and analyze two subfamilies of this parameter space. The first family consists of correspondences for which the limit set is a Cantor set; the second family consists of correspondences for which the limit set is connected and for which the action of the correspondence on the complement of this limit set exhibits certain group-like behaviour.
APA, Harvard, Vancouver, ISO, and other styles
28

Okeson, Trent James. "Camera View Planning for Structure from Motion: Achieving Targeted Inspection Through More Intelligent View Planning Methods." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7060.

Full text
Abstract:
Remote sensors and unmanned aerial vehicles (UAVs) have the potential to dramatically improve infrastructure health monitoring in terms of accuracy of the information and frequency of data collection. UAV automation has made significant progress, but that automation is also creating vast amounts of data that need to be processed into actionable information. A key aspect of this work is the optimization (not just automation) of data collection from UAVs for targeted planning of mission objectives. This work investigates the use of camera planning for Structure from Motion for 3D modeling of infrastructure. Included in this thesis is a novel multi-scale view-planning algorithm for autonomous targeted inspection. The method presented reduced the number of photos needed and therefore reduced the processing time while maintaining desired accuracies across the test site. A second focus of this work investigates various set covering problem algorithms for selecting the optimal camera set. The trade-offs between solve time and quality of results are explored. The Carousel Greedy algorithm is found to be the best method for solving the problem due to its relatively fast solve speeds and the high quality of the solutions found. Finally, physical flight tests are used to demonstrate the quality of the method for determining coverage. Each of the set covering problem algorithms is used to create a camera set that achieves 95% coverage. The models from the different camera sets are comparable despite a large amount of variability in the camera sets chosen. While this study focuses on multi-scale view planning for optical sensors, the methods could be extended to other remote sensors, such as aerial LiDAR.
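
Carousel Greedy, mentioned in the abstract, is an enhancement of the plain greedy set-cover heuristic. As a baseline illustration of the underlying selection problem (camera views treated as subsets of inspection targets), here is the plain greedy loop; the example data are ours and not from the thesis:

def greedy_set_cover(universe, subsets):
    """Plain greedy set cover: repeatedly take the subset covering the most
    still-uncovered elements. This is the baseline that Carousel Greedy and
    the other algorithms compared in the thesis improve upon."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the universe cannot be covered by these subsets")
        chosen.append(best)
        uncovered -= best
    return chosen

# Candidate camera views as subsets of inspection targets 1..5.
print(greedy_set_cover(range(1, 6), [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]))
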
APA, Harvard, Vancouver, ISO, and other styles
29

Dwivedi, Aditi. "An Integrated Optimization Model for Distribution Center Location with Considerations of Population and Income." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1353939479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Janiczková, Lucie. "Optimální rozmístění státem poskytovaných auditních služeb v rámci Moravskoslezského kraje." Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-205579.

Full text
Abstract:
Municipalities in the Czech Republic manage budgets that consist, among other things, of subsidies granted by the region, the State, the European Union or other organizations. At present these budget transactions are not subject to control, and in the future it would be appropriate to introduce some external oversight of this spending. Establishing an audit service in every municipality would be financially untenable, so it is suggested that the State provide audit services only in some municipalities and share them among several municipalities within one region. Locating the audit centers and assigning municipalities to them leads to a linear programming problem that falls into the class of covering problems. The placement of audit centers is only illustrative; the deployment of other shared state services could follow a similar principle.
APA, Harvard, Vancouver, ISO, and other styles
31

Sheppard, Nicholas Paul. "Self-Reduction for Combinatorial Optimisation." Thesis, The University of Sydney, 2001. http://hdl.handle.net/2123/797.

Full text
Abstract:
This thesis presents and develops a theory of self-reduction. This process is used to map instances of combinatorial optimisation problems onto smaller, more easily solvable instances in such a way that a solution of the former can be readily re-constructed, without loss of information or quality, from a solution of the latter. Self-reduction rules are surveyed for the Graph Colouring Problem, the Maximum Clique Problem, the Steiner Problem in Graphs, the Bin Packing Problem and the Set Covering Problem. This thesis introduces the problem of determining the maximum sequence of self-reductions on a given structure, and shows how the theory of confluence can be adapted from term re-writing to solve this problem by identifying rule sets for which all maximal reduction sequences are equivalent. Such confluence results are given for a number of reduction rules on problems on discrete systems. In contrast, NP-hardness results are also presented for some reduction rules. A probabilistic analysis of self-reductions on graphs is performed, showing that the expected number of self-reductions on a graph tends to zero as the order of the graph tends to infinity. An empirical study is performed comparing the performance of self-reduction, graph decomposition and direct methods of solving the Graph Colouring and Set Covering Problems. The results show that self-reduction is a potentially valuable, but sometimes erratic, method of finding exact solutions to combinatorial problems.
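
As an illustration of what a self-reduction rule for the Set Covering Problem can look like, the sketch below applies two classical reductions, dominated-set deletion and forcing of sets that contain a uniquely covered element. These are standard textbook rules, not necessarily the exact rule set surveyed in the thesis:

def reduce_scp(sets):
    """Two classical self-reductions for (unweighted) set covering:
    (1) delete any set strictly contained in another set (it is dominated);
    (2) if an element lies in exactly one remaining set, that set is forced
        into every cover, and the elements it covers can be removed.
    Returns (forced_sets, reduced_instance)."""
    sets = [frozenset(s) for s in sets]
    sets = [s for s in sets if not any(s < t for t in sets)]       # rule (1)
    forced = {s for s in sets
              if any(sum(e in t for t in sets) == 1 for e in s)}   # rule (2)
    covered = set().union(*forced) if forced else set()
    reduced = [s - covered for s in sets if s not in forced and s - covered]
    return forced, reduced

# {1, 2} is dominated by {1, 2, 3}; elements 1 and 4 then force both remaining
# sets, so the reductions solve this tiny instance outright.
print(reduce_scp([{1, 2}, {1, 2, 3}, {3, 4}]))
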
APA, Harvard, Vancouver, ISO, and other styles
32

Sheppard, Nicholas Paul. "Self-Reduction for Combinatorial Optimisation." University of Sydney. Computer Science, 2001. http://hdl.handle.net/2123/797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

De, Boer Jeroen Wouter. "Approximate Models And Solution Approaches For The Vehicle Routing Problem With Multiple Use Of Vehicles And Time Windows." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609620/index.pdf.

Full text
Abstract:
In this study we discuss the Vehicle Routing Problem with multiple use of vehicles (VRPM). In this variant of the routing problem the vehicles may replenish at any time at the depot. We present a detailed review of existing literature and propose two mathematical models to solve the VRPM. For these two models and their several variants we provide computational results based on the test problems taken from the literature. We also discuss a case study in which we are simultaneously dealing with side constraints such as time windows, working hour limits, backhaul customers and a heterogeneous vehicle fleet.
APA, Harvard, Vancouver, ISO, and other styles
34

Kantharaj, Krithica. "Evaluating Coverage Models for Emergency Services: A Case Study of Emergency Siren Placement in Lucas County, OH." University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1384525360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Santiago, Pinto Leonor. "Cobertura com restrições de conexidade." Doctoral thesis, Instituto Superior de Economia e Gestão, 2004. http://hdl.handle.net/10400.5/4705.

Full text
Abstract:
Doctorate in Mathematics Applied to Economics and Management
Given a bipartite graph with bipartition V and U, a cover is a subset C ⊆ V such that each node of U is adjacent to at least one node in C. The set covering problem seeks a minimum cardinality cover. In the context of reserve selection for conservation of species, V is a set of candidate sites for a reserve network, U is the set of species to be protected, and the edges describe which species are represented in each site. Some covers however may assume spatial configurations which are not adequate for conservation purposes. For sustainability reasons the fragmentation of existing natural habitats should be avoided. Thus, connectivity appears to be an important issue for the persistence of biodiversity, and several authors have recently proposed algorithms which incorporate connectivity. We address the issue of explicitly introducing connectivity in the set covering problem, with relevance for conservation biology.
APA, Harvard, Vancouver, ISO, and other styles
36

Valná, Hana. "Analýza pracovních sil v prodejně stavebnin." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-72258.

Full text
Abstract:
The present thesis shows how to assign workers to shifts when the workload is regular but the work schedules are uneven. This task is generally known as the covering or set-covering problem. The thesis describes how to solve it with mathematical modelling and software that supports the planning of workers on shifts. The approach is illustrated on the case of an existing company.
APA, Harvard, Vancouver, ISO, and other styles
37

Lukesová, Kristýna. "Logické úlohy a hlavolamy jako optimalizační problémy." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-113465.

Full text
Abstract:
This thesis applies classical optimization problems, such as the assignment and set-covering problems, to logical puzzles and brainteasers. The first part lists the mathematical model, a description and a typical example of each optimization problem used in the thesis. The second part applies these models to particular brainteasers, for example Sudoku or Einstein's Puzzle. The exercises are divided into simpler and more complex ones, and for each of them the specification, the source and the solution method are stated. The calculations use Lingo, MS Excel or both. The aim is to show that logical puzzles and brainteasers can be addressed with optimization problems, confirming the wide applicability of these models. The examples can clarify and diversify the curriculum.
APA, Harvard, Vancouver, ISO, and other styles
38

Barbato, Michele. "A Polyhedral Approach for the Double TSP with Multiple Stacks and Lexicographical Orders." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCD049/document.

Full text
Abstract:
In this thesis we consider two problems arising in combinatorial optimization. The first one is the double traveling salesman problem with multiple stacks. In this problem a vehicle picks up a given set of items in a region and subsequently delivers them to demanding customers in another region. When an item is picked up, it is put in a stack of the vehicle. The items are delivered observing a last-in-first-out policy. The pickup phase and the delivery phase consist of two Hamiltonian circuits, each performed by the vehicle in the corresponding region. We give a new integer linear programming formulation for this problem. Its main features are the presence of precedence variables and new infeasible path constraints. We provide polyhedral results on the convex hull of the solutions to our formulation. In particular, we show strong links with a specific TSP polytope and a specific set covering polytope. We deduce strengthening inequalities for the initial formulation, subsequently embedded in an efficient branch-and-cut algorithm. The second problem we consider consists in finding the description of the lexicographical polytopes. These are convex hulls of the integer points lexicographically between two given integer points. We give a complete description of these polytopes by means of linear inequalities. We show that the lexicographical polytope family is closed under intersection.
APA, Harvard, Vancouver, ISO, and other styles
39

Popek, Miloš. "Řešení optimalizačních úloh inspirované živými organismy." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-235554.

Full text
Abstract:
We encounter optimization problems every day when we try to carry out our tasks in the best possible way. Ant Colony Optimization (ACO) is an algorithm inspired by the behaviour of ants searching for a food source. ACO is successfully used on optimization tasks for which classical optimization methods cannot be applied. A Genetic Algorithm is inspired by the transmission of genetic information during crossover and, like the ACO algorithm, is used for solving optimization tasks. The result of my master's thesis is a simulator for solving selected optimization tasks with the ACO algorithm and the Genetic Algorithm, and a comparison of the results obtained on the implemented tasks.
APA, Harvard, Vancouver, ISO, and other styles
40

Gibson, Matthew Richard. "Clusters and covers: geometric set cover algorithms." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/502.

Full text
Abstract:
The set cover problem is a well studied problem in computer science. The problem cannot be approximated to better than a log n factor in polynomial time unless P = NP, and it has an O(log n)-factor approximation algorithm. We consider several special cases of the set cover problem in which geometry plays a key role. With geometric structure introduced to the problem, it may be possible to construct approximation algorithms with approximation ratios asymptotically better than log n. The first problem we consider is the decomposing coverings problem. Here, we consider a combinatorial problem: given a collection of points in the plane and a collection of objects in the plane such that each point is contained in at least k objects, partition the objects into as many sets as possible so that each set covers all of the points. We show that if the objects are translates of a convex polygon, then it is possible to partition the translates into Ω(k) covers. The second problem we consider is the planar sensor cover problem. This problem is a generalization of the decomposing coverings problem. We are given a collection of points in the plane and a collection of objects in the plane. Each of the objects can be thought of as a sensor. The sensors have a duration which can be thought of as the battery life of the sensor. The planar sensor cover problem is to schedule a start time to each of the sensors so that the points are covered by a sensor for as long as possible. We give a constant factor approximation for this problem. The key contribution to this result is a constant factor approximation to a one-dimensional version of the problem called the restricted strip cover (RSC) problem. Our result for RSC improves upon the previous best O(log log log n)-approximation, and our result for the planar sensor cover problem improves upon the previous best O(log n)-approximation. The next problem we consider is the metric clustering to minimize the sum of radii problem. Here, we are given an n-point metric (P, d) and an integer k > 0. We are interested in covering the points in P with at most k balls so that the sum of the radii of the balls is minimized. We give a randomized algorithm which solves the problem exactly in n^{O(log n · log Δ)} time, where Δ is the ratio of the maximum interpoint distance to the minimum interpoint distance. We also show that the problem is NP-hard, even in metrics induced by weighted planar graphs and when the metric has constant doubling dimension. The last problem we consider is the minimum dominating set problem for disk graphs. In this problem, we are given a set of disks in the plane, and we want to choose a minimum-cardinality subset of disks such that every disk is either in the set or intersects a disk in the set. For any ε > 0, we show that a simple local search algorithm is a (1 + ε)-approximation for the problem, which improves upon the previous best O(log n)-approximation algorithm.
APA, Harvard, Vancouver, ISO, and other styles
41

Vieira, Deborah Luisa Detânico. "Planejamento da cobertura de redes móveis de quarta geração através de metaheurística híbrida." Universidade do Vale do Rio dos Sinos, 2017. http://www.repositorio.jesuita.org.br/handle/UNISINOS/6994.

Full text
Abstract:
The growing demand for voice and, above all, mobile data services has driven the development of fourth-generation (4G) technologies. The Long Term Evolution (LTE) standard, developed by the Third Generation Partnership Project (3GPP), was chosen by the International Telecommunications Union (ITU) as the technology to meet the requirements of the fourth generation of mobile services. For operators to introduce this new technology into their existing networks, careful planning is required, although in practice such planning is often carried out empirically. The network planning problem is well known and studied in computer science as the set covering problem and is classified as NP-hard. Given the distinctive characteristics of the LTE network architecture, this work addresses the planning of fourth-generation (4G) networks through a mathematical model solved by a hybrid metaheuristic that combines a Genetic Algorithm with Tabu Search. The goal is to cover a given region as completely as possible with the smallest possible number of Base Stations (BS), yielding more accurate planning and reducing the cost of deploying the 4G network.
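To give a concrete picture of the tabu-search half of such a hybrid, the sketch below applies a flip-based tabu search to a unicost set covering instance in which rows are demand points and columns are candidate base-station sites. All names and parameter values (penalty weight, tenure, iteration budget) are illustrative assumptions, not the formulation used in the dissertation; the genetic-algorithm half is omitted here.

```python
import random

def cover_cost(rows, solution, penalty=10.0):
    # rows[i] is the set of candidate sites that cover demand point i
    uncovered = sum(1 for r in rows if not (r & solution))
    return len(solution) + penalty * uncovered

def tabu_search_scp(rows, n_sites, iters=500, tenure=7, seed=0):
    """Flip-based tabu search for unicost set covering: a move adds or removes one
    candidate site; a recently flipped site stays tabu for `tenure` iterations,
    unless flipping it beats the best solution found so far (aspiration)."""
    rng = random.Random(seed)
    current = set(range(n_sites))             # start from the trivial full cover
    best = set(current)
    tabu = {}                                 # site -> iteration from which the flip is allowed again
    for it in range(iters):
        moves = []
        for s in range(n_sites):
            trial = current ^ {s}             # flip site s in or out
            c = cover_cost(rows, trial)
            if tabu.get(s, -1) <= it or c < cover_cost(rows, best):
                moves.append((c, rng.random(), s))
        if not moves:
            continue
        _, _, s = min(moves)                  # best non-tabu (or aspired) move
        current = current ^ {s}
        tabu[s] = it + tenure
        if all(r & current for r in rows) and len(current) < len(best):
            best = set(current)
    return best

# Toy instance: four demand points, four candidate sites; two sites suffice.
rows = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
print(sorted(tabu_search_scp(rows, n_sites=4)))   # e.g. [0, 2] or [1, 3]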
APA, Harvard, Vancouver, ISO, and other styles
42

Hodges, Justin E. "Evaluation of loggerhead sea turtle carapace properties and prototype biomimetic carapace fabrication." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26496.

Full text
Abstract:
Thesis (M. S.)--Civil and Environmental Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Scott, David; Committee Member: Kurtis, Kimberly; Committee Member: Work, Paul. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
43

Briheche, Yann. "Optimisation du maillage de la veille sur radar à balayage électronique à panneau fixe." Thesis, Ecole centrale de Nantes, 2017. http://www.theses.fr/2017ECDN0036.

Full text
Abstract:
Modern radars are complex systems capable of multiple functions: surveillance, tracking, identification, and so on. With the advent of electronic and digital technologies, radars can dynamically and freely sweep their surroundings using fixed panels, freeing them from the limitations of mechanical rotation. With increasingly intelligent and adaptable systems competing in modern warfare within ever shorter reaction times, careful management of the radar's available time budget is required to achieve the desired performance and to ensure civilian and military safety. In this context, the optimization of the time devoted to surveillance must exploit the full potential of modern radars. The main accomplishments of this thesis are the mathematical modelling of radar search pattern optimization, the identification and development of appropriate tools for solving it, and the exploration of the model's possibilities. Radar search pattern design is conceptually close to the set covering problem in combinatorial optimization. The radar covering problem can be solved with methods based on dynamic programming and integer linear programming, and can furthermore be extended to account for more complex situations with multiple operational constraints. The tools developed in this thesis provide a powerful and flexible framework for solving radar cover problems. This framework opens interesting research avenues for improving radar performance and offers several possible applications: aiding the design of search patterns for current radars, simulating the performance of future radar architectures, and developing cognitive radar systems capable of adapting in real time to their operational environment.
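The covering core of such a formulation can be illustrated with a small integer program. The sketch below is an assumption-laden toy rather than the thesis's own machinery: it uses the PuLP modelling library, and the dwell/cell names, costs and instance are invented. Each candidate dwell covers a subset of angular cells at some time-budget cost, and every cell must be watched by at least one selected dwell.

```python
import pulp

def min_cost_cover(universe, subsets, costs):
    """universe: cells to watch; subsets[j]: set of cells that candidate dwell j covers;
    costs[j]: time-budget cost of dwell j.  Returns the indices of the chosen dwells."""
    prob = pulp.LpProblem("radar_search_pattern", pulp.LpMinimize)
    x = {j: pulp.LpVariable(f"x_{j}", cat="Binary") for j in range(len(subsets))}
    prob += pulp.lpSum(costs[j] * x[j] for j in x)            # minimise total time budget
    for cell in universe:                                     # each cell watched at least once
        prob += pulp.lpSum(x[j] for j in x if cell in subsets[j]) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in x if x[j].value() > 0.5]

# Toy instance: 6 angular cells, 4 candidate dwells; dwells 0 and 2 are forced and optimal.
cells = range(6)
dwells = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
print(min_cost_cover(cells, dwells, costs=[3, 2, 3, 2]))      # -> [0, 2]
```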
APA, Harvard, Vancouver, ISO, and other styles
44

Lazaravičius, Saulius. "Telekomunikacijų prieigos tinklo optimizavimo uždavinių analizė ir realizacija." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070816_144224-78186.

Full text
Abstract:
The objective of this work is to create a general methodology for modelling a telecommunications access network, together with a software implementation, meeting the following requirements: optimal determination of the values of n given access-network parameters under m given network quality constraints, where n ≥ 1 and m ≥ 0; an optimal solution of the transmitter placement problem for a mobile telephone network; and an optimal solution of the transmitter placement problem for a fixed telephone network. The work first surveys problems from the telecommunications sector that can be solved with combinatorial optimization methods, and presents and classifies the solution methods applicable to them. The research part presents a multi-parameter access-network optimization algorithm integrated with transmitter placement algorithms. Two metaheuristic placement algorithms are presented: an Ant Colony Optimization algorithm extended with a local search procedure, and a Genetic Algorithm extended with a local search procedure. Their realizations depend on the topology of the access network under analysis; the two most common cases are considered: the mobile telephone network and the fixed telephone network. The main results achieved are a universal methodology for building realistic access-network models and a software implementation of this methodology; articles based on the problems and algorithms examined in the work were published and presented... [to full text]
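As a hedged illustration of the ant-colony-plus-local-search idea described above, a compact sketch for transmitter placement modelled as unicost set covering might look as follows. The pheromone update rule, the parameter values and the redundancy-elimination step are generic choices, not the ones implemented in the thesis, and the instance is assumed to be feasible (every demand point coverable by some site).

```python
import random

def aco_scp(rows, n_sites, n_ants=10, n_iters=50, rho=0.1, seed=0):
    """Small ant-colony sketch for unicost set covering: each ant builds a cover by
    picking sites with probability proportional to pheromone * (newly covered points),
    a simple local search drops redundant sites, and the iteration-best cover
    reinforces the pheromone of its sites after evaporation."""
    rng = random.Random(seed)
    n_points = len(rows)                       # rows[i] = set of sites covering point i
    site_covers = [set(i for i, r in enumerate(rows) if s in r) for s in range(n_sites)]
    tau = [1.0] * n_sites                      # pheromone trail per candidate site
    best = set(range(n_sites))                 # trivial full cover as incumbent
    for _ in range(n_iters):
        iteration_best = None
        for _ in range(n_ants):
            uncovered, chosen = set(range(n_points)), set()
            while uncovered:
                weights = [0.0 if s in chosen else tau[s] * len(site_covers[s] & uncovered)
                           for s in range(n_sites)]
                s = rng.choices(range(n_sites), weights=weights)[0]
                chosen.add(s)
                uncovered -= site_covers[s]
            for s in list(chosen):             # local search: drop redundant sites
                others = chosen - {s}
                if all(any(p in site_covers[t] for t in others) for p in range(n_points)):
                    chosen = others
            if iteration_best is None or len(chosen) < len(iteration_best):
                iteration_best = chosen
        if len(iteration_best) < len(best):
            best = set(iteration_best)
        tau = [(1.0 - rho) * t for t in tau]                  # evaporation
        for s in iteration_best:
            tau[s] += 1.0 / len(iteration_best)               # reinforcement
    return best

# Toy instance: five demand points, four candidate sites; two sites suffice.
rows = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]
print(sorted(aco_scp(rows, n_sites=4)))        # a small cover, e.g. [1, 3]
```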
APA, Harvard, Vancouver, ISO, and other styles
45

Pinheiro, Douglas Beserra. "Ensaios sobre oferta pública de ações e estabilização de preços." reponame:Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/11534.

Full text
Abstract:
In public offerings, whether initial public offerings (IPOs) or seasoned equity offerings (SEOs) sold under a firm-commitment arrangement, it is common to include a stabilization clause, which allows the underwriters to repurchase shares on the market after trading begins in order to delay or prevent a fall in the price. The repurchased shares come from an option granted by the issuer to the underwriter to oversell up to 15% of the shares initially offered; in addition, in the case of strong demand, a supplementary sale of up to 20% of the initial offering, known as a hot issue, may take place, although these shares cannot be repurchased. The greenshoe option allows the underwriters to buy part or all of the shares sold in excess, increasing the offering permanently. This thesis evaluates, in this order, the main determinants of stabilization and its effects on the short-term return of SEOs, the effects on the cost of issuing shares, and the effect on the long-term return of IPOs and SEOs; each topic is treated in a separate, self-contained chapter, following an introductory chapter that describes the structure of the thesis. In the first chapter the results indicate that, in SEOs, risk, liquidity and the demand of domestic and foreign institutional investors are important in determining the occurrence of over-allocation and stabilization, and that, in addition to these factors, demand from retail investors also affects the intensity of stabilization. As for the effects on the return immediately after the end of stabilization, non-stabilized SEOs exhibit a higher short-term return than stabilized ones and, on average, prices do not fall after the end of stabilization; however, the greater the intensity of the repurchases, the lower the post-stabilization returns. In the second chapter it was found that underwriters anticipate the intensity of stabilization and/or the exercise of the greenshoe and hot-issue options and, since they are paid through the sale of these additional shares, they adjust ex ante the fees charged to issuers, neither appropriating an eventual profit nor bearing the cost of repurchasing and returning the shares to the issuer. These results were observed in both IPOs and SEOs, and the same effect was observed in total issuance costs. In the last chapter it was shown that the level of stabilization and/or the exercise of the greenshoe and hot-issue options does not affect the long-term return of IPOs, whereas their exercise negatively affects the cumulative adjusted returns of SEOs up to the third year after issuance, even when controlling for issue size and the market-to-book ratio. Some major contributions of this thesis are: 1) despite the difference in risk level, the results on the occurrence and intensity of stabilization are broadly similar to those observed for IPOs; 2) this is the first study to examine the effects of stabilization on the short-term returns of SEOs, as well as to find that the market return influences the intensity of stabilization; 3) it shows for the first time that the dimensions of the stabilization process are anticipated by underwriters and, in a competitive market, are incorporated into the cost of issuing IPOs and SEOs; 4) it finds that the stabilization process affects the long-term return of SEOs. All these results cast doubt on studies of returns and fees in public offerings that exclude such information from their models.
APA, Harvard, Vancouver, ISO, and other styles
46

"Algorithms to Solve Set Covering." Tesis, Universidad de las Américas Puebla, 2004. http://catarina.udlap.mx/u_dl_a/tales/documentos/lis/santillan_r_r/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Wen-Chih, and 黃文志. "A genetic algorithm approach for set covering problems." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/70804017262176760944.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 81 (ROC calendar, 1992/93)
Set covering problems appear in a wide range of applications such as resource allocation and scheduling. The genetic algorithm is a general-purpose optimization method, and many NP-complete problems, such as the traveling salesman and sequencing problems, have recently been solved with genetic algorithms with good results. In this thesis, we introduce a genetic algorithm approach for set covering problems. Since set covering problems are constrained optimization problems, we propose a new penalty function that values neighbors of optima more highly. In addition, we propose a mutation operation that guides the search toward the optima from both sides of the feasible/infeasible border. To illustrate the applicability of our algorithm, we use it to solve several instances of computationally difficult set covering problems that arise from computing the 1-width of the incidence matrix of Steiner triple systems, and we find solutions better than the best currently known for two of the larger instances. This success suggests that genetic algorithms can efficiently explore and exploit the solution space, since traditional algorithms do not have this ability of "attacking from both sides". We intend to experiment with our approach on real-world application problems, such as airline crew scheduling, and our ultimate goal is to develop a genetic algorithm for general integer programming.
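To make the penalty idea concrete, the sketch below shows a bare-bones penalty-based genetic algorithm for set covering. The penalty term simply charges uncovered rows, so mildly infeasible chromosomes near the feasible border remain competitive; the thesis's actual penalty function and mutation operator are not reproduced here, and the selection/crossover scheme and all parameter values are illustrative assumptions.

```python
import random

def ga_scp(rows, n_sites, pop_size=30, gens=200, p_mut=0.02, penalty=5.0, seed=0):
    """Minimal genetic algorithm for set covering: a chromosome is a 0/1 vector over
    candidate columns, and fitness = columns used + penalty * uncovered rows, so the
    search can approach the optimum from both sides of the feasibility border."""
    rng = random.Random(seed)

    def fitness(chrom):
        chosen = {j for j, bit in enumerate(chrom) if bit}
        uncovered = sum(1 for r in rows if not (r & chosen))   # rows[i] = covering columns
        return len(chosen) + penalty * uncovered

    pop = [[rng.randint(0, 1) for _ in range(n_sites)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        next_pop = pop[:2]                                     # elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)          # truncation selection
            cut = rng.randrange(1, n_sites)                    # one-point crossover
            child = [bit ^ (rng.random() < p_mut) for bit in a[:cut] + b[cut:]]
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=fitness)
    # With a sufficiently large `penalty`, the best individual is normally feasible.
    return {j for j, bit in enumerate(best) if bit}

# Toy instance: four rows, four columns; a 2-column cover exists.
rows = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
print(sorted(ga_scp(rows, n_sites=4)))     # a low-cost cover, e.g. [0, 2] or [1, 3]
```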
APA, Harvard, Vancouver, ISO, and other styles
48

Grant, Elyot. "Covering Problems via Structural Approaches." Thesis, 2011. http://hdl.handle.net/10012/6317.

Full text
Abstract:
The minimum set cover problem is, without question, among the most ubiquitous and well-studied problems in computer science. Its theoretical hardness has been fully characterized: logarithmic approximability has been established, and no sublogarithmic approximation exists unless P=NP. However, the gap between real-world instances and the theoretical worst case is often immense; many covering problems of practical relevance admit much better approximations, or even solvability in polynomial time. Simple combinatorial or geometric structure can often be exploited to obtain improved algorithms on a problem-by-problem basis, but there is no general method of determining the extent to which this is possible. In this thesis, we aim to shed light on the relationship between the structure and the hardness of covering problems. We discuss several measures of structural complexity of set cover instances and prove new algorithmic and hardness results linking the approximability of a set cover problem to its underlying structure. In particular, we provide:
- An APX-hardness proof for a wide family of problems that encode a simple covering problem known as Special-3SC.
- A class of polynomial dynamic programming algorithms for a group of weighted geometric set cover problems having simple structure.
- A simplified quasi-uniform sampling algorithm that yields improved approximations for weighted covering problems having low cell complexity or geometric union complexity.
- Applications of the above to various capacitated covering problems via linear programming strengthening and rounding.
In total, we obtain new results for dozens of covering problems exhibiting geometric or combinatorial structure. We tabulate these problems and classify them according to their approximability.
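For context, the logarithmic baseline mentioned in the first paragraph is achieved by the classic greedy algorithm. The sketch below is standard textbook material, not part of the thesis: at each step it picks the set with the smallest weight per newly covered element, which yields an H_n (about ln n) approximation for weighted set cover.

```python
def greedy_set_cover(universe, subsets, weights=None):
    """Classic greedy weighted set cover: repeatedly pick the set with the lowest
    weight per newly covered element until every element is covered."""
    weights = weights or [1.0] * len(subsets)
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # only sets that still cover something are candidates
        j = min((j for j in range(len(subsets)) if subsets[j] & uncovered),
                key=lambda j: weights[j] / len(subsets[j] & uncovered))
        chosen.append(j)
        uncovered -= subsets[j]
    return chosen

# Toy instance: the greedy pick is {0, 1, 2} first, then {3, 4}.
print(greedy_set_cover(range(5), [{0, 1, 2}, {2, 3}, {3, 4}, {0, 4}]))   # -> [0, 2]
```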
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Yuh-Ho, and 陳裕厚. "A Design of Hybrid Heuristic Methods for Set Covering Problems." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/49165661834553506103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

HUANG, MING-CHONG, and 黃銘崇. "Location-search-based solution methods for the set covering problem." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/99011874033646558249.

Full text
APA, Harvard, Vancouver, ISO, and other styles