Dissertations / Theses on the topic 'Computational geometry'

To see the other types of publications on this topic, follow the link: Computational geometry.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Computational geometry.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Baer, Lawrence H. "Numerical aspects of computational geometry." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is concerned with the numerical issues resulting from the implementation of geometric algorithms on finite precision digital computers. From an examination of the general problem and a survey of previous research, it appears that the central problem of numerical computational geometry is how to deal with degenerate and nearly degenerate input. For some applications, such as solid modeling, degeneracy is often intended but we cannot always ascertain its existence using finite precision. For other applications, degenerate input is unwanted but nearly degenerate input is unavoidable. Near degeneracy is associated with ill-conditioning of the input and can lead to a serious loss of accuracy and program failure. These observations lead us to a discussion of problem condition in the context of computational geometry. We use the Voronoi diagram construction problem as a case study and show that problem condition can also play a role in algorithm design.
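The robustness issue this abstract describes can be made concrete with a tiny experiment (a hedged illustration, not code from the thesis): evaluate the standard 2-D orientation predicate with double-precision floats and with exact rational arithmetic on nearly collinear points, and count the disagreements. The specific coordinates below are illustrative choices, not values from the thesis.

```python
from fractions import Fraction

def orient_float(ax, ay, bx, by, cx, cy):
    # Sign of the determinant of (b - a, c - a), evaluated in double precision.
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def orient_exact(ax, ay, bx, by, cx, cy):
    # The same determinant, evaluated exactly over the rationals.
    ax, ay, bx, by, cx, cy = map(Fraction, (ax, ay, bx, by, cx, cy))
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def sign(x):
    return (x > 0) - (x < 0)

# Nearly degenerate input: move point a over a tiny grid of doubles around
# (0.5, 0.5); points b and c lie exactly on the line y = x.
ulp = 2.0 ** -53                      # unit in the last place of 0.5
bx, by, cx, cy = 12.0, 12.0, 24.0, 24.0
disagreements = 0
for i in range(256):
    for j in range(256):
        ax, ay = 0.5 + i * ulp, 0.5 + j * ulp
        if sign(orient_float(ax, ay, bx, by, cx, cy)) != \
           sign(orient_exact(ax, ay, bx, by, cx, cy)):
            disagreements += 1
print("sign disagreements out of 65536 evaluations:", disagreements)
```

On IEEE-754 hardware a large fraction of this grid is misclassified by the floating-point predicate, which is exactly the kind of near-degeneracy failure the thesis analyzes through problem conditioning.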
2

Hussain, R. "Computational geometry using fourier analysis." Thesis, De Montfort University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Eades, Patrick Fintan. "Uncertainty Models in Computational Geometry." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, easily and cheaply available internet-connected devices have enabled the collection of vast amounts of data, which has driven a continued interest in efficient, elegant combinatorial algorithms with mathematical guarantees. Much of this data contains an inherent element of uncertainty; whether because of imperfect measurements, because the data contains predictions about the future, or because the data is derived from machine learning algorithms which are inherently probabilistic. There is therefore a need for algorithms which include uncertainty in their definition and give answers in terms of that uncertainty. Questions about the most likely solution, the solution with lowest expected cost, or a solution which is correct with high probability are natural here. Computational geometry is the sub-field of theoretical computer science concerned with developing algorithms and data structures for geometric problems, that is, problems involving points, distances, angles and shapes. In computational geometry, uncertainty is included in the location of the input points, or in which potential points are included in the input. The study of uncertainty in computational geometry is relatively recent; earlier research concerned imprecise points, which are known to appear somewhere in a geometric region. More recently the focus has been on points whose location, or presence, is given by a probability distribution. In this thesis we describe the most commonly used uncertainty models which are the subject of ongoing research in computational geometry. We present specific problems in those models and present new results, both positive and negative. In Chapter 3 we consider universal solutions, and show a new lower bound on the competitive ratio of the Universal Traveling Salesman Problem. In Chapter 4 we describe how to determine if two moving entities are ever mutually visible, and how data structures can be repeatedly queried to simulate uncertainty. In Chapter 5 we describe how to construct a graph on uncertain points with high probability of being a geometric spanner, an example of redundancy protecting against uncertainty. In Chapter 6 we introduce the online ply maintenance problem, an online problem where uncertainty can be reduced at a cost, and give an optimal algorithm.
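As a toy illustration of the locational-uncertainty model discussed in this abstract (not a construction from the thesis), one can ask with what probability a given uncertain point appears on the convex hull when every point's location is drawn from a Gaussian around its nominal position, and estimate that probability by Monte Carlo. The nominal points and noise level below are arbitrary.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
nominal = rng.uniform(0.0, 1.0, size=(20, 2))   # nominal point locations
sigma = 0.05                                     # std. dev. of the location noise
trials = 2000

hits = 0
for _ in range(trials):
    sample = nominal + rng.normal(0.0, sigma, size=nominal.shape)
    hull = ConvexHull(sample)
    if 0 in hull.vertices:       # is point 0 a hull vertex in this realization?
        hits += 1

print("estimated P(point 0 is on the convex hull) =", hits / trials)
```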
4

Pirzadeh, Hormoz. "Computational Geometry with the Rotating Calipers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0027/MQ50856.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Doskas, Michael. "Various stabbing problems in computational geometry." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pătrașcu, Mihai. "Computational geometry through the information lens." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 111-117).
This thesis revisits classic problems in computational geometry from the modern algorithmic perspective of exploiting the bounded precision of the input. In one dimension, this viewpoint has taken over as the standard model of computation, and has led to a powerful suite of techniques that constitute a mature field of research. In two or more dimensions, we have seen great success in understanding orthogonal problems, which decompose naturally into one-dimensional problems. However, problems of a nonorthogonal nature, the core of computational geometry, have remained uncracked for many years despite extensive effort. For example, Willard asked in SODA'92 for an o(n lg n) algorithm for Voronoi diagrams. Despite growing interest in the problem, it was not successfully solved until this thesis. Formally, let w be the number of bits in a computer word, and consider n points with O(w)-bit rational coordinates. This thesis describes: * a data structure for 2-d point location with O(n) space and O(...) query time; * randomized algorithms with running time O(...) for 3-d convex hull, 2-d Voronoi diagram, 2-d line segment intersection, and a variety of related problems; * a data structure for 2-d dynamic convex hull, with O(...) query time and O(...) update time. More generally, this thesis develops a suite of techniques for exploiting bounded precision in geometric problems, hopefully laying the foundations for a rejuvenated research direction.
by Mihai Pătrașcu.
S.M.
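The thesis's algorithms are far more involved, but a simple taste of what "exploiting bounded precision" means in geometry is the Z-order (Morton) curve: interleaving the w-bit integer coordinates of a point gives a single machine word whose ordering clusters nearby points, a trick used by several word-RAM geometric algorithms. The sketch below is a generic illustration of that idea, not the data structures described in the abstract.

```python
def morton_key(x: int, y: int, w: int = 32) -> int:
    """Interleave the w low-order bits of x and y (bit i of x goes to bit 2i+1)."""
    key = 0
    for i in range(w):
        key |= ((x >> i) & 1) << (2 * i + 1)
        key |= ((y >> i) & 1) << (2 * i)
    return key

# Sorting by Morton key groups points that are close in the plane,
# one classic way bounded-precision coordinates are exploited.
points = [(5, 9), (6, 8), (100, 3), (7, 9), (101, 2)]
for p in sorted(points, key=lambda p: morton_key(*p)):
    print(p, bin(morton_key(*p)))
```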
7

Selmi-Dei, Fabio Pakk. "Um visualizador para uma extensão de CGAL ao plano projetivo orientado." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276388.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advisor: Pedro Jussieu de Rezende
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Visualizadores são softwares capazes de gerar, através de recursos gráficos computacionais, figuras geométricas a partir de estruturas de dados e seus estados. Suas imagens facilitam a compreensão e depuração de algoritmos, bem como aumentam a intuição do usuário sobre os objetos geométricos e o espaço que os abriga. O presente trabalho descreve o projeto e a criação de um visualizador geométrico para uma extensão de CGAL ao plano projetivo orientado ('T POT 2'). CGAL é uma biblioteca de algoritmos geométricos e estruturas de dados desenvolvida por um consórcio de universidades com o objetivo de ser uma ferramenta de fácil acesso usada no desenvolvimento de aplicações que necessitem resolver problemas geométricos em 'R POT 2'. Através do trabalho [dO04], esta biblioteca foi estendida para incorporar 'T POT 2', preservando sua robustez, corretude e confiabilidade. O plano projetivo orientado é um espaço geométrico estritamente maior que o plano cartesiano 'R POT 2', porém com geometria semelhante. Uma das principais características de 'T POT 2' é o uso de coordenadas homogêneas sinaladas, o que permite lidar com o conceito de pontos no infinito de maneira homogênea ao tratamento dos pontos do plano, possibilitando o projeto de algoritmos geométricos que não mais precisam tratar separadamente muitos casos particulares, tornando-os mais simples e sucintos. Neste contexto, o visualizador aqui descrito tem por finalidade a criação de um ambiente de visualização que permite a observação das características intrínsecas à geometria projetiva orientada, o que é de grande benefício para o usuário-programador da extensão de CGAL para 'T POT 2'
Abstract: A graphical viewer is a software that enables the display of geometric figures from data structures and their varying states. The images it provides improve comprehension, make debugging easier and raise the users' intuition regarding geometric objects and their embedding space. The present work describes the design and creation of a geometrical viewer for an oriented projective plane ('T POT 2') extension of CGAL. CGAL is a library of geometric algorithms and data structures developed by a consortium of universities with the goal of producing an easy-to-use tool for building applications that require problem solving in 'R POT 2'. In [dO04], Oliveira describes an extension of this library that incorporates 'T POT 2' into CGAL, while adhering to its robustness, correctness and reliability. The oriented projective plane is a geometric space strictly larger than the Cartesian plane R2, though with similar geometry. One of the main features of 'T POT 2' is the use of signed homogeneous coordinates, which enables one to work with points at infinity in a way similar to working with proper points on the plane, allowing for the design of algorithms that no longer need to handle many particular cases, making them simpler and shorter. In this context, the viewer described here has the purpose of providing a visualization system that allows for the perception of the intrinsic characteristics of the oriented projective geometry, which is of great benefit to programmers of the extension of CGAL to 'T POT 2'
Master's degree
Computational Geometry
Master of Computer Science
8

Lundqvist, Samuel. "Computational algorithms for algebras." Doctoral thesis, Stockholm : Department of Mathematics, Stockholm University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-31552.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Diss. (summary) Stockholm: Stockholms universitet, 2009.
At the time of doctoral defence, the following papers were unpublished and had a status as follows: Paper 3: Manuscript. Paper 4: Manuscript. Paper 5: Manuscript. Paper 6: Manuscript. In addition: 6 papers.
9

Murri, Riccardo. "Computational techniques in graph homology of the moduli space of curves." Doctoral thesis, Scuola Normale Superiore, 2013. http://hdl.handle.net/11384/85723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The object of this thesis is the automated computation of the rational (co)homology of the moduli spaces of smooth marked Riemann surfaces M_{g,n}. This is achieved by using a computer to generate a chain complex, known in advance to have the same homology as M_{g,n}, and explicitly spell out the boundary operators in matrix form. As an application, we compute the Betti numbers of some moduli spaces M_{g,n}. Our original contribution is twofold. In Chapter 3, we develop algorithms for the enumeration of fatgraphs and their automorphisms, and the computation of the homology of the chain complex formed by fatgraphs of a given genus g and number of boundary components n. In Chapter 4, we describe a new practical parallel algorithm for performing Gaussian elimination on arbitrary matrices with exact computations: projections indicate that the size of the matrices involved in the Betti number computation can easily exceed the computational power of a single computer, so it is necessary to distribute the work over several processing units. Experimental results prove that our algorithm is in practice faster than freely available exact linear algebra codes. An effective implementation of the fatgraph algorithms presented here is available at http://code.google.com/p/fatghol. It has so far been used to compute the Betti numbers of M_{g,n} for 2g + n ≤ 6. The Gaussian elimination code is likewise publicly available as open-source software from http://code.google.com/p/rheinfall.
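To illustrate what "Gaussian elimination with exact computations" involves (a generic sketch, not the parallel Rheinfall algorithm itself), the routine below reduces a matrix over the rationals with Python's Fraction type, so the rank, and hence Betti numbers obtained as ranks of boundary operators, is free of floating-point error. The example boundary matrix is an arbitrary small choice.

```python
from fractions import Fraction

def rank_exact(rows):
    """Rank of a matrix over Q, by fraction-exact Gaussian elimination."""
    m = [[Fraction(v) for v in row] for row in rows]
    rank, col = 0, 0
    n_rows, n_cols = len(m), len(m[0]) if m else 0
    while rank < n_rows and col < n_cols:
        # Find a pivot in the current column.
        pivot = next((r for r in range(rank, n_rows) if m[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, n_rows):
            factor = m[r][col] / m[rank][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

# Boundary matrix of the three edges of a triangle: its exact rank is 2.
boundary = [[1, -1, 0], [0, 1, -1], [-1, 0, 1]]
print(rank_exact(boundary))
```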
10

Scibilia, Francesco. "Explicit Model Predictive Control: Solutions via Computational Geometry." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The thesis is mainly focused on issues involved with explicit model predictive control approaches. Conventional model predictive control (MPC) implementation requires at each sampling time the solution of an open-loop optimal control problem with the current state as the initial condition of the optimization. Formulating the MPC problem as a multi-parametric programming problem, the online optimization effort can be moved offline and the optimal control law given as an explicitly defined piecewise affine (PWA) function with dependence on the current state. The domain where the PWA function is defined corresponds to the feasible set, which is partitioned into convex regions. This makes explicit MPC solutions into promising approaches to extend the scope of applicability of MPC schemes. The online computation reduces to simple evaluations of a PWA function, allowing implementations on simple hardware and with fast sampling rates. Furthermore, the closed form of the MPC solutions allows offline analysis of the performance, providing additional insight into the controller behavior. However, explicit MPC implementations may still be prohibitively costly for large optimization problems. The offline computational effort needed to solve the multiparametric optimization problem may be discouraging, and even the online computation needed to evaluate a complex PWA controller may cause difficulties if low-cost hardware is used. The first contribution of this thesis is to propose a technique for computing approximate explicit MPC solutions for the cases where optimal explicit MPC solutions are impractical due to the offline computational effort needed and their complexity for online evaluations. This technique is based on computational geometry, a branch of computer science which focuses heavily on computational complexity since the algorithms are intended to be used on large data-sets. The approximate solution is suboptimal only over the subregion of the feasible set where constraints are active. In this subregion, the ineffective optimal explicit MPC solution is replaced by an approximation based on Delaunay tessellations and is computed from a finite number of samples of the exact solution. Finer tessellations can be obtained in order to achieve a desired level of accuracy. Subsequently, the thesis presents a twofold contribution concerned with the computation of feasible sets for MPC and their suitable approximations. First, an alternative approach is suggested for computing the feasible set which uses set relations instead of the conventional orthogonal projection. The approach can be implemented incrementally on the length of the MPC prediction horizon, and proves to be computationally less demanding than the standard approach. Thereafter, an algorithm for computing suitable inner approximations of the feasible set is proposed, which constitutes the main contribution. Such approximations are characterized by simpler representations and preserve the essential properties of the feasible set, such as convexity, positive invariance, and inclusion of the set of expected initial states. This contribution is particularly important in the context of finding less complex suboptimal explicit MPC solutions, where the complexity of the feasible set plays a decisive role. The last part of the thesis is concerned with robustness of nominal explicit MPC solutions to model uncertainty.
In the presence of model mismatch, when the controller designed using the nominal model is applied to the real plant, the feasible set may lose its invariance property, and this means violation of constraints. Also, since the PWA control law is designed only over the feasible set, there is the technical problem that the control action is undefined if the state moves outside of this set. To deal with this issue, a tool is proposed to analyze how uncertainty on the model affects the PWA control law computed using the nominal model. Given the linear system describing the plant and the PWA control law, the algorithm presented considers the polytopic model uncertainty and constructs the maximal robust feasible set, i.e. the largest subset of the feasible set which is guaranteed to be feasible for any model in the family of models described by the polytopic uncertainty. The appendix of the thesis contains two additional contributions which are only marginally related to the main theme of the thesis. MPC approaches are often implemented as state feedback controllers. The state variables are not always measured, and in these cases a state estimation approach has to be adopted to obtain the state from the measurements. The two contributions deal with state estimation in two different applications, but not with the explicit goal of being used in MPC approaches.
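The abstract's point that "the online computation reduces to simple evaluations of a PWA function" can be sketched as follows: offline, a multi-parametric solver returns a list of polyhedral regions {x : A_i x <= b_i}, each with an affine gain (F_i, g_i); online, the controller locates the region containing the current state and applies u = F_i x + g_i. The two-region data below are made-up numbers, purely for illustration of the evaluation step, not output of any solver.

```python
import numpy as np

# Offline result of a (hypothetical) multi-parametric program: polyhedral
# regions A_i x <= b_i, each carrying an affine control law u = F_i x + g_i.
regions = [
    {"A": np.array([[ 1.0, 0.0], [-1.0, 0.0]]), "b": np.array([1.0, 0.0]),
     "F": np.array([[-0.5, -0.2]]),             "g": np.array([0.0])},
    {"A": np.array([[-1.0, 0.0], [ 1.0, 0.0]]), "b": np.array([0.0, 5.0]),
     "F": np.array([[-0.1,  0.0]]),             "g": np.array([0.3])},
]

def explicit_mpc(x, regions, tol=1e-9):
    """Online evaluation of the piecewise affine control law."""
    for reg in regions:
        if np.all(reg["A"] @ x <= reg["b"] + tol):   # point-location step
            return reg["F"] @ x + reg["g"]
    raise ValueError("state outside the feasible set (no region found)")

x = np.array([0.4, 1.2])
print("u =", explicit_mpc(x, regions))
```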
11

Colley, Paul. "Visibility problems and optimization in computational geometry." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ27818.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Teillaud, Monique. "Towards dynamic randomized algorithms in computational geometry." Berlin [u.a.]: Springer, 1993. http://www.loc.gov/catdir/enhancements/fy0815/93023628-d.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Petrauskas, Karolis. "Computational Modelling of Biosensors of Complex Geometry." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110701_105911-89480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Biosensors are analytical devices mainly used to detect analytes and measure their concentrations. Mathematical modeling is widely used to optimize and analyze the operation of biosensors and to reduce the cost of developing new ones. The object of this research is mathematical and computer models describing the operation of biosensors made of several parts with different properties. The dissertation covers models formulated in one- and two-dimensional spaces by partial differential equations with nonlinear terms and solved numerically using the finite difference method. The numerical models are implemented by a computer program. An original mathematical model for a biosensor with a carbon nanotube electrode is presented in the dissertation. The conditions under which the one-dimensional mathematical model can be used instead of the two-dimensional one for accurate prediction of the biosensor response are investigated. The elements used to build models of biosensors with a complex structure were systematized. A biosensor description language is proposed, and computer software simulating the operation of biosensors in one-dimensional space and in a rectangular two-dimensional domain is developed. The adequacy of the model for the biosensor with the carbon nanotube electrode and the impact of structural and geometric properties on the biosensor response were investigated by performing computer experiments with the developed software.
Biojutikliai yra įrenginiai, skirti medžiagoms aptikti bei jų koncentracijoms matuoti. Siekiant sumažinti biojutiklių gamybos kaštus yra pasitelkiamas matematinis biojutikliuose vykstančių procesų modeliavimas. Disertacijoje nagrinėjami matematiniai ir kompiuteriniai biojutiklių modeliai, aprašantys biojutiklių, sudarytų iš kelių, skirtingas savybes turinčių dalių, veikimą. Nagrinėjami modeliai yra formuluojami vienmatėje bei dvimatėje erdvėse, aprašomi diferencialinėmis lygtimis dalinėmis išvestinėmis su netiesiniais nariais ir yra sprendžiami skaitiškai, naudojant baigtinių skirtumų metodą. Skaitiniai modeliai yra įgyvendinami kompiuterine programa. Disertacijoje pateikiamas originalus matematinis modelis biojutikliui su anglies nanovamzdelių elektrodu, nustatyti kriterijai, apibrėžiantys, kada biojutiklį su perforuota membrana galima modeliuoti vienmačiu modeliu. Darbe susisteminti elementai, naudojami biojutiklių modelių formulavimui, pagrindinį dėmesį skiriant biojutiklio struktūrinėms savybėms modeliuoti. Apibrėžta biojutiklių modelių aprašo kalba ir sukurta programinė įranga, leidžianti modeliuoti biojutiklių veikimą vienmačiais modeliais arba modeliais, formuluojamais stačiakampėje dvimatės erdvės srityje. Taikant sukurtą biojutiklių modeliavimo programinę įrangą, ištirtas biojutiklio su anglies nanovamzdelių elektrodu modelio adekvatumas ir struktūrinių bei geometrinių savybių įtaka biojutiklio elgsenai.
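A minimal flavour of the finite-difference models this dissertation describes (a generic sketch with made-up parameters, not the author's carbon-nanotube model) is an explicit scheme for substrate diffusion coupled with Michaelis-Menten enzyme kinetics in a one-dimensional enzyme layer.

```python
import numpy as np

# Substrate S in an enzyme layer of thickness d:
#   dS/dt = D * d2S/dx2 - Vmax * S / (Km + S)
# Boundary conditions: bulk concentration S0 at x = d, zero flux at the electrode x = 0.
D, Vmax, Km, S0, d = 3e-10, 1e-4, 1e-4, 2e-4, 1e-4   # illustrative values only
N = 50                                               # grid intervals
dx = d / N
dt = 0.4 * dx * dx / D                               # below the explicit stability limit dx^2 / (2D)
S = np.zeros(N + 1)
S[-1] = S0

for step in range(20000):
    lap = (S[2:] - 2 * S[1:-1] + S[:-2]) / (dx * dx)            # second difference
    S[1:-1] += dt * (D * lap - Vmax * S[1:-1] / (Km + S[1:-1])) # diffusion + reaction
    S[0] = S[1]                                                 # zero-flux (Neumann) condition
    S[-1] = S0                                                  # Dirichlet condition at the bulk

print("substrate concentration at the electrode:", S[0])
```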
14

Shifler, Ryan M. "Computational Algebraic Geometry Applied to Invariant Theory." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Commutative algebra finds its roots in invariant theory, and the connection is drawn from a modern standpoint. The Hilbert Basis Theorem and the Nullstellensatz were considered lemmas for classical invariant theory. The Groebner basis is the modern tool used, and it is implemented with the computer algebra system Mathematica. Number 14 of Hilbert's 23 problems is discussed along with the notion of invariance under a group action of GL_n(C). Computational difficulties are also discussed in reference to Groebner bases and invariant theory. The straightening law is presented from a Groebner basis point of view and is motivated as a key piece of machinery in proving the First Fundamental Theorem of Invariant Theory.
Master of Science
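The Groebner-basis machinery mentioned in the abstract can be tried out in a few lines with a computer algebra system; the thesis uses Mathematica, but the sympy sketch below shows the same classical computation: rewriting the S_2-invariant x^2 + y^2 in terms of the generating invariants e1 = x + y and e2 = x*y by reducing it modulo a lexicographic Groebner basis. This is only an elementary illustration of the technique, not an example taken from the thesis.

```python
from sympy import symbols, groebner

x, y, e1, e2 = symbols('x y e1 e2')

# Ideal encoding the generators of the invariant ring C[x, y]^{S_2}.
G = groebner([x + y - e1, x * y - e2], x, y, e1, e2, order='lex')

# Reduce an invariant modulo G: the remainder only involves e1, e2.
quotients, remainder = G.reduce(x**2 + y**2)
print(remainder)   # e1**2 - 2*e2, i.e. x^2 + y^2 = (x + y)^2 - 2*x*y
```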
15

Valiveti, Natana. "Parallel computational geometry on Analog Hopfield Networks." Dissertation (Computer Science), Carleton University, Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
16

Silva, André Carvalho 1987. "Sobre a caracterização de grafos de visibilidade de leques convexos." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advisors: Pedro Jussieu de Rezende, Orlando Lee
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Grafos de visibilidade entre vértices de polígonos são estruturas que resumem as informações de visibilidade de tais vértices. Existem três relevantes problemas relativos a grafos de visibilidade: caracterização, reconhecimento e reconstrução. O problema da caracterização consiste em encontrar um conjunto de condições necessárias e suficientes para a classe de grafos de visibilidade. A procura de uma forma algorítmica para se reconhecer quando um grafo é de visibilidade define o problema de reconhecimento. O problema de reconstrução trata da questão de se reconstruir um polígono a partir de um grafo de visibilidade de tal forma que este seja isomorfo ao grafo de visibilidade do polígono. Neste trabalho, abordamos estes problemas para uma subclasse destes grafos: grafos de visibilidade de leques convexos. Dois resultados principais são apresentados nesse trabalho. O primeiro deles é um conjunto de três condições necessárias que um grafo de visibilidade de um leque convexo deve satisfazer. Adicionalmente, mostramos que estas condições são necessárias e suficientes para grafos de visibilidade de pseudo-leques convexos. Um resultado colateral deste processo é a equivalência entre grafos de visibilidade entre vértices, e grafos de visibilidade vértice-aresta, para leques convexos em posição geral. O segundo resultado consiste em que podemos reduzir o problema de reconstrução de polígonos unimonótonos para o mesmo problema para leques convexos
Abstract: The (vertex) visibility graph of a polygon is a graph that gathers all the visibility information among the vertices of the polygon. Three relevant problems related to visibility graphs are: characterization, recognition and reconstruction. Characterization calls for a set of necessary and sufficient conditions that every visibility graph must satisfy. Recognition deals with the problem of determining whether a given graph is the visibility graph of some polygon. Reconstruction asks for the generation of a polygon whose visibility graph is isomorphic to a given visibility graph. In this work, we study these problems on a restricted class of polygons, namely, convex fans: polygons that contain a convex vertex in its kernel. This work is comprised of two main results. The first one presents three necessary conditions that visibility graphs of convex fans must satisfy. We also show that those conditions are necessary and sufficient for visibility graphs of convex pseudo-fans. As a byproduct, we show that we can construct the vertex-edge visibility graph of a convex fan in general position from its vertex visibility graph. In the second major result, we show that we can reduce the reconstruction problem of unimonotone polygons to the same problem for convex fans
Master's degree
Computer Science
Master of Computer Science
17

Wildman, Raymond A. "Geometry optimization and computational electromagnetics: methods and applications." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 191 p, 2008. http://proquest.umi.com/pqdweb?did=1481670101&sid=23&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Jost, Christine. "Topics in Computational Algebraic Geometry and Deformation Quantization." Doctoral thesis, Stockholms universitet, Matematiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-87399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis consists of two parts, a first part on computations in algebraic geometry, and a second part on deformation quantization. More specifically, it is a collection of four papers. In the papers I, II and III, we present algorithms and an implementation for the computation of degrees of characteristic classes in algebraic geometry. Paper IV is a contribution to the field of deformation quantization and actions of the Grothendieck-Teichmüller group. In Paper I, we present an algorithm for the computation of degrees of Segre classes of closed subschemes of complex projective space. The algorithm is based on the residual intersection theorem and can be implemented both symbolically and numerically. In Paper II, we describe an algorithm for the computation of the degrees of Chern-Schwartz-MacPherson classes and the topological Euler characteristic of closed subschemes of complex projective space, provided an algorithm for the computation of degrees of Segre classes. We also explain in detail how the algorithm in Paper I can be implemented numerically. Together this yields a symbolical and a numerical version of the algorithm. Paper III describes the Macaulay2 package CharacteristicClasses. It implements the algorithms from papers I and II, as well as an algorithm for the computation of degrees of Chern classes. In Paper IV, we show that L-infinity-automorphisms of the Schouten algebra T_poly(R^d) of polyvector fields on affine space R^d which satisfy certain conditions can be globalized. This means that from a given L-infinity-automorphism of T_poly(R^d) an L-infinity-automorphism of T_poly(M) can be constructed, for a general smooth manifold M. It follows that Willwacher's action of the Grothendieck-Teichmüller group on T_poly(R^d) can be globalized, i.e., the Grothendieck-Teichmüller group acts on the Schouten algebra T_poly(M) of polyvector fields on a general manifold M.

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 2: Manuscript. Paper 3: Manuscript. Paper 4: Accepted.

19

Zhu, Binhai. "Computational geometry in two and a half dimensions." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=28561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we study computational geometry in two and a half dimensions. These so-called polyhedral terrains have many applications in practice: computer graphics, navigation and motion planning, CAD/CAM, military surveillance, forest fire monitoring, etc.
We investigate a series of fundamental problems regarding polyhedral terrains and present efficient algorithms to solve them. We propose an O(n) time algorithm to decide whether or not a geometric object is a terrain and an O(n log n) time algorithm to compute the shortest watchtower of a polyhedral terrain. We study the terrain guarding problem, obtain tight bounds for vertex and edge guards and O(n) algorithms to place these guards. We study the tetrahedralization of certain simple and non-simple polyhedra (which include some special classes of solid terrains) and present efficient algorithms to tetrahedralize them. We also investigate the problem of computing the α-hull of a terrain. Finally, we present efficient algorithms for the intersection detection and computation of Manhattan terrains.
20

Khanban, Ali Asghar. "Basic algorithms in computational geometry with imprecise input." Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.423319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Smith, Justin W. "Problems and Results in Discrete and Computational Geometry." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1352402504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Knight, Alan (Alan Evan). "Implementation of algorithms in a computational geometry workbench." Dissertation (Computer Science), Carleton University, Ottawa, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
23

Machat, Mohamed. "Computational geometry for the determination of biomolecular structures." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066359/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
En bioinformatique structurale, une partie des méthodes computationnelles qui calculent les structures de protéines à l'aide de données expérimentales, effectuent une optimisation de la position des atomes sous les contraintes expérimentales mesurées sur le système étudié, ainsi que sous des contraintes provenant de la connaissance générique de la stéréochimie organique. Ces méthodes d'optimisation présentent l'inconvénient de ne pas garantir la détermination de la meilleure solution. De plus, la validation de l'optimisation se fait en comparant les résultats obtenus pour des calculs répétés, et le résultat d'un calcul est accepté dans la mesure où le même résultat est obtenu plusieurs fois. Par cette approche, on rend plus difficile la détection de conformations alternatives de protéines, qui sont pourtant le sujet d'un vif intérêt dans la littérature. En effet, le développement de la sensibilité des techniques de résonance magnétique nucléaire (RMN) a permis de mettre en évidence plusieurs cas d'échange conformationnel reliés à la fonction des protéines. Dans ce projet de thèse, nous avons étudié une nouvelle approche pour le calcul de structures des protéines et l'exploration de leurs espaces conformationnels, basée sur la résolution du problème de Géométrie de Distance associé aux contraintes de distances dans une protéine par l'algorithme "interval Branch and Prune". Le logiciel implémentant cette méthode est appelée iBPprot, il incarne l'une des premières tentatives d'échantillonnage exhaustive des espaces conformationnels des protéines. Dans un premier temps, on s'est intéressé à l'application de la méthode en utilisant exclusivement des constraintes de distances exactes. Les résultats ont démontré que iBPprot était capable de reconstruire des structures références en s'appuyant seulement sur quelques contraintes à courte portée. De plus, la reconstruction a été d'une précision telle que la conformation générée présentait un RMSD de 1 Angstrom maximum avec la structure référence. L'exploration exhaustive de l'espace conformationnel a été possible pour une bonne partie des protéines cibles. Les temps de calcul pour l'exploration des espaces conformationnels ont été très variables allant de quelques secondes pour quelques protéines jusqu'à des semaines pour d'autres. L'évaluation de la qualité des structures obtenues a démontré qu'au moins 68% des valeurs de phi et psi sont localisées dans la zone 'core' du diagramme de Ramachandran. Cependant, des clash stériques ont été détectées dans plusieurs conformations mettant en jeu jusqu'à 7% d'atomes dans quelques unes de ces conformations. Dans un deuxième temps, on s'est intéressé à l'application de la méthode en incluant des intervalles de distances comme contraintes dans les calculs. Dans ce cas de figure, la méthode a réussi a reconstruire des structures références avec un RMSD inférieur à 5 Angstrom pour plus de la moitié des protéines cibles. En contre partie, le parcours complet de l'espace conformationnel n'a été possible que pour la plus petite protéine de l'ensemble des protéines étudiées. Pour la moitié des autres protéines, plus de 70% des atomes ont vu leurs positions échantillonnées. La qualité des structures obtenues a regressé en comparaison avec les simulations faites avec des distances exactes. En effet, seulement 53% des valeurs de phi et psi étaient localisées dans la zone 'core' du diagramme de Ramachandran, et le pourcentage d'atomes impliqués dans un clash stérique s'élevait jusqu'à 22% pour quelques protéines. 
Concernant le temps de calcul, le taux de génération de conformations a été déterminé pour chaque protéine cible, et il s'est avéré que globalement sa valeur etait compétitive par rapport aux valeurs des taux observables dans la littérature
Structural biology has allowed us to expand our knowledge of living organisms. It is defined as the investigation of the structure and function of biological systems at the molecular level. Studying a biomolecule's structure offers insight into its geometry, as angles and distances between the biomolecule's atoms are measured in order to determine the biomolecular structure. The values of these geometrical parameters may be obtained from biophysical techniques, such as X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. One of the most used methods to calculate protein structures from geometric restraints is simulated annealing. This method does not guarantee an exhaustive sampling of protein conformational space, which is a shortcoming, as one protein may adopt multiple functional conformations, and it is important to determine them exhaustively. In this PhD project, the efficiency of a new method - derived from operations research and computational geometry - is studied in order to answer this question: how does this method explore the conformational spaces of small proteins? This method - implemented within the iBPprot software framework - treats protein structure determination as a distance geometry problem, which the interval branch-and-prune algorithm tries to solve by the full exploration of its solution space. The results obtained by iBPprot on a set of test proteins, with sizes ranging from 24 to 120 residues and with known structures, are analyzed here. Using short-range exact distance restraints, it was possible to rebuild the structure of all protein targets, and for many of them it was possible to exhaustively explore their conformational spaces. In practice, it is not always possible to obtain exact distance restraints from experiments. Therefore, this method was then tested with interval data restraints. In these cases, iBPprot permitted the sampling of the positions of more than 70% of the atoms constituting the protein backbone for most of the targets. Furthermore, conformations with r.m.s. deviations of less than 6 Angstrom from the target ones were obtained during the conformational space exploration. The quality of the generated structures was satisfactory with respect to Ramachandran plots, but needs improvement because of the presence of steric clashes in some conformers. The runtime for most performed calculations was competitive with existing structure determination methods.
24

Machat, Mohamed. "Computational geometry for the determination of biomolecular structures." Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Silveira, Luís Fernando Schultz Xavier da. "Algoritmos para união de círculos e polígonos." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-25072016-184451/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Este trabalho aborda dois problemas de geometria computacional: união de círculos e união de (vários) polígonos. Para o problema da união de círculos, os principais algoritmos da literatura são revisados e um algoritmo simples, porém ineficiente, é introduzido. Este algoritmo é então adaptado para resolver o problema da união de polígonos, produzindo um algoritmo que é competitivo com o estado da arte e, dependendo da aplicação, utiliza menos armazenamento.
This work deals with two problems from the field of computational geometry: union of circles and union of (many) polygons. For the union of circles problem, the main algorithms in the literature are reviewed and a simple, albeit inefficient, algorithm is introduced. This algorithm is then adapted to solve the union of polygons problem, resulting in an algorithm that is competitive with the state of the art and, depending on the application, makes use of less storage.
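For context, and only as a naive baseline rather than the algorithm developed in the dissertation, the most basic primitive behind a union of circles is the membership test below, from which a crude Monte Carlo estimate of the union's area already follows; the circles and bounding box used are arbitrary.

```python
import random

circles = [((0.0, 0.0), 1.0), ((1.0, 0.5), 0.8), ((2.0, 0.0), 0.6)]  # (center, radius)

def in_union(p, circles):
    """True if point p lies in the union of the given circles."""
    px, py = p
    return any((px - cx) ** 2 + (py - cy) ** 2 <= r * r for (cx, cy), r in circles)

# Crude Monte Carlo estimate of the union's area over a bounding box.
xmin, xmax, ymin, ymax = -1.0, 2.6, -1.0, 1.3
random.seed(1)
samples = 200_000
hits = sum(in_union((random.uniform(xmin, xmax), random.uniform(ymin, ymax)), circles)
           for _ in range(samples))
print("estimated union area:", hits / samples * (xmax - xmin) * (ymax - ymin))
```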
26

Duménil, Charles. "Expected Size of the 3-Dimensional Delaunay Triangulation of Random Points on a Surface." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cette thèse a pour but d'évaluer la taille de la triangulation de Delaunay de points distribués sur une surface avec une distribution aléatoire. La triangulation de Delaunay, est un objet géométrique qui est apparu de manière récurrente dans l'histoire scientifique. En dimension 2, la triangulation de Delaunay de X est l'ensemble des triangles pour lesquels le cercle circonscrit ne contient pas d'autres points de X. Cette définition est généralisable en dimension supérieure. Aujourd'hui, la triangulation de Delaunay est l'une des structures les plus étudiées en géométrie algorithmique. Pour le cas en 2 dimensions, on sait que la taille de la triangulation de Delaunay reste linéaire avec le nombre de points. En 3 dimensions, la taille de la triangulation de Delaunay peut varier de linéaire à quadratique. Cette taille dépend de la façon dont les points sont distribués dans R^3. Sur une surface, la taille de la triangulation de Delaunay dépendra à la fois de la surface et de la façon dont ils sont répartis sur cette surface. Pour modéliser les points, nous choisissons d'utiliser un processus ponctuel de Poisson car il vérifie des propriétés d'homogénéité et d'indépendance qui sont pratiques pour les calculs. Afin de prouver la borne O(n log n) en espérance pour l'échantillon uniforme distribué sur un cylindre, Devillers et al. ont remarqué que l'intersection du cylindre avec une sphère passant par deux points p et q sur le cylindre contient toujours un triangle spécifique dessiné sur le cylindre. Cela les conduit à étudier un graphe à 2 dimensions dans lequel deux points sont voisins s'il existe un tel triangle qui ne contient pas d'autres points de données. Un tel graphe a une taille moyenne de O(n log n), et c'est ainsi qu'ils obtiennent la borne O(n log n). Dans la partie II, nous définissons un type de graphes de régions vides, nous formalisons une méthode pour calculer les limites inférieures et supérieures de leur taille moyenne, et nous donnons des résultats optimaux pour ces graphes. Comme Attali et al. l'ont souligné, l'intersection d'une sphère avec une surface générique a presque une forme elliptique, alignée avec les directions de courbure de la surface. Ceci nous amène à étudier un graphe particulier de régions vides pour lequel les régions sont des ellipses alignées avec les axes. Nous prouvons, dans la partie II, que si les ellipses concernées ont un rapport d'aspect compris entre b et 1, avec 0 < b < 1, alors le nombre moyen de voisins de tout point du graphe est O(ln b). Afin d'illustrer la méthode, nous calculons, dans la partie III, les limites asymptotiques sur la taille moyenne de la triangulation 3D-Delaunay dans deux cas spécifiques. Dans la partie III, chapitre 12, nous considérons un cylindre de révolution et confirmons la limite O(n ln n) mais pour un processus ponctuel de Poisson. Compte tenu de la similitude entre l'échantillon uniforme et l'échantillon de Poisson, le but de ce chapitre est principalement de présenter concrètement la méthode dans un cas simple en 3 dimensions. Ensuite, dans le chapitre 13, nous calculons la taille de la triangulation 3D-Delaunay d'un processus de Poisson distribué sur une sphère aplatie. Nous montrons que la taille moyenne de la triangulation est O(n). Enfin, dans la partie IV, nous traitons le cas des surfaces génériques. Même si un sphéroïde aplati est une surface spécifique, nous pourrons réutiliser certains calculs de cette partie moyennant quelques adaptations. 
En effet, le sphéroïde aplati est la surface d'un corps convexe, ce qui n'est généralement pas le cas. Il a beaucoup de symétries,ce qui n'est pas non plus le cas en général. Dans cette partie, nous nous concentrons plus sur la façon de gérer ces adaptations que sur les calculs qui étaient déjà assez fastidieux dans le cas du sphéroïde
This thesis aims at evaluating the size of the Delaunay triangulation of points drawn on a surface with a random distribution. The Delaunay triangulation is a geometrical object that has appeared recurrently in scientific history. In dimension 2, the Delaunay triangulation of X is the set of triangles whose circumscribing circle does not contain other points of X. This definition generalizes to higher dimensions. Today, the Delaunay triangulation is one of the most studied structures in computational geometry. For the 2-dimensional case, we know that the size of the Delaunay triangulation remains linear in the number of points. In 3 dimensions, this is no longer the case: the size of the 3D Delaunay triangulation can range from linear to quadratic. This size depends on how the points are distributed in R^3. On a surface, the size of the Delaunay triangulation depends both on the surface and on how the points are distributed on it. To model the points, we choose a Poisson point process, since it satisfies properties of homogeneity and independence that are convenient for the computations. In order to prove the expected O(n log n) bound for the uniform sample distributed on a cylinder, Devillers et al. remarked that the intersection of the cylinder with a sphere passing through two points p and q on the cylinder always contains a specific triangle drawn on the cylinder. That leads them to study a 2-dimensional graph in which two points are neighbors if there exists such a triangle that does not contain other data points. Such a graph has expected size O(n log n), and this is how they obtain the O(n log n) bound. In Part II, we define a kind of empty region graph, formalize a method to compute lower and upper bounds on its expected size, and give tight results for such graphs. As Attali et al. pointed out, the intersection of a sphere with a generic surface has almost an elliptic shape, aligned with the curvature directions of the surface. This leads us to study a particular empty region graph for which the regions are axis-aligned ellipses. We prove, in Part II, that if the involved ellipses have an aspect ratio ranging from b to 1, with 0 < b < 1, then the expected number of neighbors of any point in the graph is O(ln b). In order to illustrate the method, we compute, in Part III, tight asymptotic bounds on the expected size of the 3D Delaunay triangulation in two specific cases. In Part III, Chapter 12, we consider a cylinder of revolution and reprove the O(N ln N) bound, but for a Poisson point process. Considering the similarity between the uniform and Poisson samples, the goal of this chapter is mainly to present the method concretely in a simple 3-dimensional case. Then, in Chapter 13, we compute the size of the 3D Delaunay triangulation of a Poisson process distributed on a flattened sphere. We show that the expected size of the triangulation is O(N). Finally, in Part IV, we treat the case of generic surfaces. Even if an oblate spheroid is a specific surface, we are able to reuse some of the computations in this part up to some adaptations. Indeed, the oblate spheroid is the surface of a convex body, which is not generally the case. It has a lot of symmetries, which is not generally the case either. In this part, we focus more on how to deal with these adaptations than on the computations, which were already quite tedious in the spheroid case.
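The quantity studied in this thesis can be measured experimentally in a few lines (a quick empirical check with scipy/Qhull, chosen here as an assumption, not the thesis's probabilistic analysis): sample uniform points on a cylinder of revolution as a fixed-n stand-in for the Poisson samples, build the 3D Delaunay triangulation, and watch how the number of tetrahedra grows with n.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)

def points_on_cylinder(n):
    """n uniform points on the unit cylinder x^2 + y^2 = 1, 0 <= z <= 1."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.uniform(0.0, 1.0, n)
    return np.column_stack([np.cos(theta), np.sin(theta), z])

for n in (500, 1000, 2000, 4000):
    tets = Delaunay(points_on_cylinder(n)).simplices
    # For points on a cylinder the expected size grows super-linearly (Theta(n log n)).
    print(f"n = {n:5d}   tetrahedra = {len(tets):7d}   ratio = {len(tets) / n:.2f}")
```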
27

Kellar, William Patrick. "Geometry modeling in computational fluid dynamics and design optimisation." Thesis, University of Cambridge, 2003. https://www.repository.cam.ac.uk/handle/1810/251878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Strandell, Ebbe. "Computational Geometry and Surface Reconstruction from Unorganized Point Clouds." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis addresses the problem of constructing virtual representations of surfaces known only as clouds of unstructured points in space. This problem is related to many areas, including computer graphics, image processing, computer vision, reverse engineering and geometry studies. Data sets can be acquired from a wide range of sources, including computed tomography (CT), magnetic resonance imaging (MRI), medical cryosections, laser range scanners, seismic surveys or mathematical models. This thesis focuses foremost on medical samples acquired through cryosections of bodies. In this thesis report, various computational geometry approaches to surface reconstruction are evaluated in terms of their adequacy for scientific use. Two methods, called “γ-regular shapes” and “the Power Crust”, are implemented and evaluated. The contribution of this work is the proposal of a new hybrid method of surface reconstruction in three dimensions. The underlying idea of the hybrid solution is to utilize the inverse medial axis transformation, defined by the Power Crust, to recover holes that may appear in the three-dimensional γ-regular shapes.
29

Tian, Binyu. "Computational Aeroelastic Analysis of Aircraft Wings Including Geometry Nonlinearity." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1070398084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Tejada, Pedro J. "A Computational Geometry Approach to Digital Image Contour Extraction." DigitalCommons@USU, 2009. https://digitalcommons.usu.edu/etd/422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We present a method for extracting contours from digital images, using techniques from computational geometry. Our approach is different from traditional pixel-based methods in image processing. Instead of working directly with pixels, we extract a set of oriented feature points from the input digital images, then apply classical geometric techniques, such as clustering, linking, and simplification, to find contours among these points. Experiments on synthetic and natural images show that our method can effectively extract contours, even from images with considerable noise; moreover, the extracted contours have a very compact representation.
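Of the geometric steps listed in this abstract, the simplification stage is the easiest to illustrate; a standard choice (whether or not it is the exact variant used in the thesis) is Douglas-Peucker simplification of a polyline of linked feature points. The sample contour below is made up.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(points, eps):
    """Simplify a polyline, keeping it within distance eps of the original."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_segment_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], eps)
    right = douglas_peucker(points[index:], eps)
    return left[:-1] + right          # drop the duplicated split point

contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(contour, eps=0.5))
```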
31

Pennec, Xavier. "Statistical Computing on Manifolds for Computational Anatomy." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2006. http://tel.archives-ouvertes.fr/tel-00633163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
During the last decade, my main research topic was medical image analysis, and more particularly image registration. However, I was also following in the background a more theoretical research track on statistical computing on manifolds. With the recent emergence of computational anatomy, this topic has gained a lot of importance in the medical image analysis community. During the writing of this habilitation manuscript, I felt that it was time to present a simpler and more unified view of how it works and why it can be important. This is why the usual short synthesis of the habilitation became a hundred-page-long text in which I tried to synthesize the main notions of statistical computing on manifolds with applications in registration and computational anatomy. Of course, this synthesis is centered on and illustrated by my personal research work.
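A central notion in statistical computing on manifolds is the Fréchet (Karcher) mean, the manifold analogue of the arithmetic mean; the sketch below computes it on the unit sphere by repeatedly averaging in the tangent space via the log map and mapping back with the exp map. This is a standard fixed-point iteration with made-up sample directions, offered only as an illustration of the general idea, not as code from the manuscript.

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on the unit sphere at point p, for tangent vector v."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return p
    return np.cos(norm) * p + np.sin(norm) * v / norm

def log_map(p, q):
    """Log map on the unit sphere: tangent vector at p pointing towards q."""
    cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(p)
    w = q - cos_t * p
    return theta * w / np.linalg.norm(w)

def frechet_mean(points, iters=50):
    """Karcher-mean fixed point: average the log maps, step along the exp map."""
    mean = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        v = np.mean([log_map(mean, q) for q in points], axis=0)
        mean = exp_map(mean, v)
    return mean

pts = np.array([[1.0, 0.1, 0.0], [0.9, -0.2, 0.3], [1.0, 0.0, -0.2]])
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)   # project onto the sphere
print(frechet_mean(pts))
```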
32

Botnan, Magnus Bakke. "Three Approaches in Computational Geometry and Topology : Persistent Homology, Discrete Differential Geometry and Discrete Morse Theory." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We study persistent homology, methods in discrete differential geometry and discrete Morse theory. Persistent homology is applied to computational biology and range image analysis. Theory from differential geometry is used to define curvature estimates of triangulated hypersurfaces. In particular, a well-known method for triangulated surfaces is generalised to hypersurfaces of any dimension. The thesis concludes by discussing a discrete analogue of Morse theory.
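One standard curvature estimate of the kind referred to above is the angle deficit (a discrete Gaussian curvature) at a vertex of a triangulated surface; the Python sketch below computes it for a closed mesh and is a textbook illustration, not the thesis's generalisation to higher-dimensional hypersurfaces.

import math
import numpy as np

def angle_at(v, a, b):
    # Interior angle at vertex v in triangle (v, a, b).
    u, w = a - v, b - v
    cos_t = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
    return math.acos(max(-1.0, min(1.0, cos_t)))

def angle_deficits(vertices, triangles):
    """Discrete Gaussian curvature estimate at each vertex of a closed mesh:
    2*pi minus the sum of the incident triangle angles."""
    vertices = np.asarray(vertices, dtype=float)
    deficit = [2.0 * math.pi] * len(vertices)
    for i, j, k in triangles:
        deficit[i] -= angle_at(vertices[i], vertices[j], vertices[k])
        deficit[j] -= angle_at(vertices[j], vertices[k], vertices[i])
        deficit[k] -= angle_at(vertices[k], vertices[i], vertices[j])
    return deficit

# Octahedron: each vertex has four equilateral faces, deficit = 2*pi - 4*pi/3 = 2*pi/3.
V = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
T = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
     (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
print(angle_deficits(V, T))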
33

Garcia-Puente, Luis David. "Algebraic Geometry of Bayesian Networks." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/11133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We develop the necessary theory in algebraic geometry to place Bayesian networks into the realm of algebraic statistics. This allows us to create an algebraic geometry--statistics dictionary. In particular, we study the algebraic varieties defined by the conditional independence statements of Bayesian networks. A complete algebraic classification, in terms of primary decomposition of polynomial ideals, is given for Bayesian networks on at most five random variables. Hidden variables are related to the geometry of higher secant varieties. Moreover, a complete algebraic classification, in terms of generating sets of polynomial ideals, is given for Bayesian networks on at most three random variables and one hidden variable. The relevance of these results for model selection is discussed.
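To give a flavour of the algebraic geometry-statistics dictionary (a textbook example, not one of the dissertation's classifications): for two binary random variables, the independence statement corresponds to the vanishing of the 2x2 determinant of the joint probability table, which the SymPy snippet below writes down explicitly.

import sympy as sp

# Joint probabilities p_ij = P(X = i, Y = j) for binary X and Y.
p00, p01, p10, p11 = sp.symbols('p00 p01 p10 p11', nonnegative=True)

# X is independent of Y iff the 2x2 table has rank 1, i.e. its determinant vanishes.
independence_generator = sp.Matrix([[p00, p01], [p10, p11]]).det()
print(independence_generator)  # p00*p11 - p01*p10

# A concrete independent distribution satisfies the polynomial constraint.
uniform = {p00: sp.Rational(1, 4), p01: sp.Rational(1, 4),
           p10: sp.Rational(1, 4), p11: sp.Rational(1, 4)}
print(independence_generator.subs(uniform))  # 0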
Ph. D.
34

Neyer, Gabriele. "Algorithms, complexity, and software engineering in computational geometry : case studies /." [S.l.] : [s.n.], 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

McAllister, Michael Joseph. "The computational geometry of hydrology data in geographic information systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/NQ48670.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Voruganti, Ravinder Srinivas. "Symbolic and computational conjugate geometry for design and manufacturing applications." Thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-03032009-041021/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Lui, Lok Ming. "Computational conformal geometry and its applications to human brain mapping." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1680018831&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Day, Andrew. "The serial and parallel implementation of geometric algorithms." Thesis, University of East Anglia, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sharrock, T. J. "Surface design with cyclide patches." Thesis, University of Cambridge, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Brandt, Aléx Fernando 1990. "Algoritmos exatos para problemas de dilatação mínima em grafos geométricos." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advisors: Cid Carvalho de Souza, Pedro Jussieu de Rezende
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Let P be a set of points in the plane. The geometric graph of P, G(P) = (P, E), is the complete weighted graph whose vertices correspond to the points of P and in which the cost of an edge {i, j} is given by the Euclidean distance between the points i and j. Consider first a generic problem in which one wants to build a network of good connection quality joining a set of sites represented by points in the plane. In many applications of this kind, the problem can be modeled with the aid of a geometric graph. This occurs when, for example, the quality measure for a pair of points is defined as the ratio between the length of the shortest path connecting them in the designed network and their Euclidean distance. This ratio is the dilation of that pair of points in the network, and the dilation of the network itself is the maximum dilation over all pairs of points. In this dissertation we present exact methods for finding a spanning tree or a planar triangulation of minimum dilation, known respectively as the Minimum Dilation Spanning Tree Problem (MDSTP) and the Minimum Dilation Triangulation Problem (MDTP). The methods are composed primarily of the formulation, downsizing and resolution of mixed integer linear programs. The downsizing of these mathematical models exploits the geometry of the problems through routines that decide whether certain elements of the formulation must be present in, or can be excluded from, any solution with dilation at most the primal bound supplied by a heuristic. Applying these routines in a pre-processing phase significantly reduces the model size and accelerates its resolution. With the combination of these techniques, proven optimal solutions were obtained for the first time for instances with up to 20 points for the MDSTP and up to 70 points for the MDTP. The problems and their formulations, as well as the ways of reducing and solving them, are presented in detail. Furthermore, the computational performance of both the exact methods and algorithms proposed by other authors is analyzed.
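To make the notion of dilation concrete, here is a small self-contained Python sketch (not taken from the dissertation) that computes the dilation of a given network on a planar point set: all-pairs shortest path lengths are obtained by Floyd-Warshall and divided by the corresponding Euclidean distances.

import math

def dilation(points, edges):
    """points: list of (x, y); edges: list of index pairs (i, j).
    Returns the maximum, over all point pairs, of
    (shortest-path length in the network) / (Euclidean distance)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    # Initialize the shortest-path matrix with the direct edges only.
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for i, j in edges:
        d[i][j] = d[j][i] = dist(i, j)
    # Floyd-Warshall all-pairs shortest paths.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return max(d[i][j] / dist(i, j)
               for i in range(n) for j in range(i + 1, n))

# A path on three points: the detour between the endpoints gives dilation sqrt(2).
print(dilation([(0, 0), (1, 1), (2, 0)], [(0, 1), (1, 2)]))  # ~1.414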
Master's degree in Computer Science
41

Kettner, Lutz. "Software design in computational geometry and contour-edge based polyhedron visualization /." [S.l.] : [s.n.], 1999. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Côté, Brendan. "Applicability of advanced computational networks to the modelling of complex geometry." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis describes a research effort directed at producing a computational model based on artificially intelligent cellular automata. This model was developed for the purpose of learning a mapping from an input space to an output space. A specific problem that occurs in the mining industry was used to develop and test the model's ability to learn the mapping between a three-dimensional input volume and a three-dimensional output volume. In this case, the mapping was a consequence of the industrial processes used in mining as well as the properties of the material being mined.
Three main computational tools were combined in this work to form the complete mine stope prediction model. The three modules are a learning module, an optimisation module, and an overall network architecture. The overall network architecture is a 3-D lattice of cellular automata (CA) and has the capability to implicitly capture the complexities in shape that render other types of models arduous or inapplicable. The learning module uses a Discrete Time Cellular Neural Network (DTCNN) to store and recall information about a given mapping. The optimisation module uses the Simulated Annealing (SA) algorithm to perform a non-linear optimisation on the set of weights used by the DTCNN.
Variations of the model, and different experiments, were performed to test and explore the model in depth. Concepts such as "Small-Worlds" and "Forgetting Factor" were investigated. The applicability of a Partial Least Squares (PLS) model as an alternative to the DTCNN transition rule was also explored.
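As a hedged, generic illustration of the optimisation module described above (the actual DTCNN weight optimisation is more involved), here is a minimal simulated annealing loop in Python that minimizes an arbitrary cost function over a real-valued weight vector; the cooling schedule and step size are assumed parameters.

import math
import random

def simulated_annealing(cost, w0, temp=1.0, cooling=0.995, steps=5000, step_size=0.1):
    """Generic SA: perturb one weight at a time and accept worse moves
    with probability exp(-delta / temperature)."""
    w, best = list(w0), list(w0)
    c = best_c = cost(list(w0))
    for _ in range(steps):
        candidate = list(w)
        candidate[random.randrange(len(w))] += random.gauss(0.0, step_size)
        new_c = cost(candidate)
        delta = new_c - c
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            w, c = candidate, new_c
            if c < best_c:
                best, best_c = list(w), c
        temp *= cooling
    return best, best_c

# Toy cost: squared distance of the weight vector from a hidden target.
target = [0.3, -1.2, 0.7]
cost = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
print(simulated_annealing(cost, [0.0, 0.0, 0.0]))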
43

Wang, Zhiqiang. "Studying Computational Methods for Biomedical Geometry Extraction and Patient Specific Hemodynamics." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1493042299659479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Schkolnik, Müller Demian Aley. "A robust void-finding algorithm using computational geometry and parallelization techniques." Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis submitted for the degree of Magíster en Ciencias, Mención Computación (Master of Science, Computer Science)
The current and most widely accepted cosmological model of the universe is called Lambda Cold Dark Matter. It is the simplest model that provides a reasonably good explanation of the evidence observed so far. The model suggests the existence of large-scale structures in our universe: nodes, filaments, walls, and voids. Voids are of great interest to astrophysicists, since their observation serves as a validation of the model. Voids are usually defined as low-density regions in space containing only a few galaxies. In this thesis, we present a survey of the current state of void-finding algorithms. We describe the different techniques and approaches, and attempt to derive the algorithmic complexity and memory usage of each void finder presented. We then present our new void-finding algorithm, called ORCA. It was built using Delaunay triangulations to find the nearest neighbours of each point. Using this information, points are classified into three categories: core, boundary, and outliers. Outliers are removed as noise. The triangles of the triangulation are classified into void and core triangles; this is done by checking a distance criterion and whether the triangles contain outliers. This method allows us to create a fast and robust void-finding algorithm. Additionally, a parallel version of the algorithm is presented.
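As a rough sketch of the classification step described above (not the actual ORCA implementation), the following Python code uses a Delaunay triangulation to find each point's neighbours and labels points as core, boundary, or outlier; the radius and neighbour-count thresholds are assumed illustrative parameters.

import numpy as np
from collections import defaultdict
from scipy.spatial import Delaunay

def classify_points(points, radius, core_neighbors=6):
    """Label each point 'core', 'boundary', or 'outlier' from its Delaunay
    neighbours, counting neighbours closer than `radius`."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    neighbors = defaultdict(set)
    for simplex in tri.simplices:
        for i in simplex:
            neighbors[i].update(j for j in simplex if j != i)
    labels = []
    for i in range(len(points)):
        close = sum(np.linalg.norm(points[i] - points[j]) <= radius
                    for j in neighbors[i])
        if close >= core_neighbors:
            labels.append('core')
        elif close > 0:
            labels.append('boundary')
        else:
            labels.append('outlier')
    return labels

pts = np.random.rand(500, 2)           # a dense slab of tracer points
pts = np.vstack([pts, [[5.0, 5.0]]])   # plus one isolated point far away
print(classify_points(pts, radius=0.15)[-1])  # expected: 'outlier'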
45

Bose, Prosenjit. "Geometric and computational aspects of manufacturing processes." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=28686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Two of the fundamental questions that arise in the manufacturing industry concerning every type of manufacturing process are: (1) Given an object, can it be built using a particular process? (2) Given that an object can be built using a particular process, what is the best way to construct it? The latter question gives rise to many different problems, depending on how "best" is qualified. We address these problems for two complementary categories of manufacturing processes: rapid prototyping systems and casting processes. The method we use to address these problems is to first define a geometric model of the process in question and then answer the question on that model.
In the category of rapid prototyping systems, we concentrate on stereolithography, which is emerging as one of the most popular rapid prototyping systems. We model stereolithography geometrically and then study the class of objects that admit a construction in this model. For the objects that admit a construction, we find the orientations that allow a construction of the object.
In the category of casting processes, we concentrate on gravity casting and injection molding. We first model the process and its components geometrically. We then characterize and recognize the objects that can be formed using a re-usable two-part cast. Given that a cast of an object can be formed, we determine a suitable location for the pin gate, the point from which liquid is poured or injected into a mold. Finally, we compute an orientation of a mold that ensures a complete fill and minimizes the number of venting holes for molds used in gravity casting processes.
46

Gomes, de Oliveira Rodrigo. "Geographic referring expressions : doing geometry with words." Thesis, University of Aberdeen, 2017. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=232615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Fazanaro, Dalton Ieda 1987. "Metamorfose planar via métodos Level Set e Particle Level Set para a reconstrução de superfícies tridimensionais." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advisor: Hélio Pedrini
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Evolving interfaces were initially applied to scientific problems in Fluid Dynamics. With the advent of the more efficient and robust modeling provided by the Level Set method, their original boundaries of applicability were extended, opening a new front of research in a wide range of fields, especially Computer Science. In the Image Processing area specifically, the works published so far that relate the Level Set method to three-dimensional surface reconstruction concentrate on reconstruction from a cloud of points dispersed in space; the approach based on parallel planar slices transversal to the object to be reconstructed is still incipient. This scenario therefore motivates an analysis of the feasibility of the Level Set method for the reconstruction of three-dimensional surfaces. Based on this observation, this dissertation proposes a methodology that simultaneously integrates proven ideas already published about this approach and proposals to overcome the inherent limitations of the method not yet satisfactorily treated, in particular the excessive smoothing of fine features of contours evolving under the Level Set method. In this regard, the Particle Level Set variant is suggested as a possible solution, since its proven intrinsic capability to preserve the mass or volume of dynamic fronts presumably translates into a control of the highlighted problem. Finally, synthetic and real data sets are used to qualitatively evaluate the presented three-dimensional surface reconstruction methodology.
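For readers unfamiliar with the level set representation discussed above, the following NumPy sketch (a generic illustration, not code from the dissertation) samples an implicit function on a grid and recovers its zero level set, i.e. the contour, by detecting sign changes along grid edges.

import numpy as np

def zero_crossings(phi, xs, ys):
    """Return approximate points where the level set function phi changes sign,
    using linear interpolation along horizontal and vertical grid edges."""
    pts = []
    nx, ny = phi.shape
    for i in range(nx):
        for j in range(ny):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < nx and j + dj < ny and phi[i, j] * phi[i + di, j + dj] < 0:
                    t = phi[i, j] / (phi[i, j] - phi[i + di, j + dj])
                    pts.append((xs[i] + t * (xs[i + di] - xs[i]) if di else xs[i],
                                ys[j] + t * (ys[j + dj] - ys[j]) if dj else ys[j]))
    return pts

# Signed distance to a circle of radius 1: the zero level set is the circle itself.
xs = ys = np.linspace(-2, 2, 81)
X, Y = np.meshgrid(xs, ys, indexing='ij')
phi = np.sqrt(X**2 + Y**2) - 1.0
boundary = zero_crossings(phi, xs, ys)
print(len(boundary), "interface samples, max |r - 1| =",
      max(abs(np.hypot(x, y) - 1.0) for x, y in boundary))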
Master's degree in Computer Science
48

Zambon, Mauricio Jose de Oliveira 1990. "Soluções exatas para o Problema Cromático da Galeria de Arte." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advisors: Pedro Jussieu de Rezende, Cid Carvalho de Souza
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Abstract: In this dissertation, we present the first algorithmic approach and the first experimental results in the literature for solving the Discrete Chromatic Art Gallery Problem (DCAGP). This problem is geometric in nature and is a variant of the classic Art Gallery Problem. In the latter, we want to find a minimum cardinality guard set that is able to watch over a given gallery. In the DCAGP, in contrast, the objective is to find a set of watchers that covers the gallery and admits a valid coloring with a minimum number of colors. A coloring is valid if two watchers that observe a same point are assigned different colors. To solve this problem we apply two approaches: an exact one and a heuristic one. Firstly, we present a primal heuristic able to provide good quality upper bounds, and subsequently an integer programming model that yields exact solutions for the DCAGP. This method was able to solve all instances from an extensive set of galleries, represented by randomly generated simple polygons of up to 2500 vertices, in less than one minute. On another set of instances, whose representation includes polygons with holes and fractal von Koch polygons with up to 800 vertices, the method found proven optimal solutions for 80% of the instances in less than 30 minutes. In the context of these solutions, we discuss the use of lazy constraints and techniques for strengthening the model, along with a brief analysis of the hardness of the instances. Moreover, we report on results obtained through a Lagrangian relaxation, mainly as a means to obtain good upper bounds, as well as from a variation of the relax-and-fix technique. Lastly, we discuss a branch-and-price approach for solving the DCAGP exactly.
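To illustrate the combinatorial core of the DCAGP independently of the integer programming model (a hedged toy sketch, not the dissertation's heuristic), the Python code below takes a candidate set of watchers, each given by the set of points it sees, builds the conflict graph in which two watchers clash when they see a common point, and colors it greedily; the number of colors used is an upper bound of the kind a primal heuristic could supply.

def chromatic_upper_bound(visibility):
    """visibility: dict watcher -> set of covered points.
    Returns (colors, number_of_colors) for a greedy coloring of the
    conflict graph (watchers seeing a common point must differ in color)."""
    watchers = list(visibility)
    conflicts = {w: set() for w in watchers}
    for i, u in enumerate(watchers):
        for v in watchers[i + 1:]:
            if visibility[u] & visibility[v]:
                conflicts[u].add(v)
                conflicts[v].add(u)
    colors = {}
    # Color watchers in order of decreasing conflict degree (Welsh-Powell style).
    for w in sorted(watchers, key=lambda x: -len(conflicts[x])):
        used = {colors[x] for x in conflicts[w] if x in colors}
        colors[w] = next(c for c in range(len(watchers)) if c not in used)
    return colors, 1 + max(colors.values())

# Three watchers covering a toy gallery discretized into points 1..6.
vis = {'g1': {1, 2, 3}, 'g2': {3, 4}, 'g3': {5, 6}}
print(chromatic_upper_bound(vis))  # g1 and g2 conflict; g3 is free -> 2 colors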
Master's degree in Computer Science
49

Oshiro, Marcio Takashi Iura. "Clustering de trajetórias." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-29102015-142559/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work aimed to study kinetic clustering problems, i.e., clustering problems in which the objects are moving. The study focused on the one-dimensional case, where the objects are points moving on the real line. Several variants of this case are discussed. Regarding the movement, we consider the case where each point moves at a constant velocity in a given time interval, the case where the points move arbitrarily and we only know their positions at discrete time instants, the case where the points move at a random velocity of which only the expected value is known, and the case where, given a partition of the time interval, the points move at constant velocities in each sub-interval. Regarding the kind of clustering sought, we focus on the case where the number of clusters is part of the input and consider different measures of quality for the clustering. Two of them are traditional measures for clustering problems: the sum of the cluster diameters and the maximum diameter of a cluster. The third measure takes into account the kinetic character of the problem and allows, in a controlled manner, the clustering to change over time. For each variant of the problem, we present exact or approximation algorithms, some complexity results obtained, and open questions.
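As a concrete illustration of the one-dimensional setting (a static snapshot, not one of the kinetic algorithms of the thesis), the following Python sketch evaluates the positions of points moving at constant velocities at a query time t and splits them into k contiguous clusters by cutting at the k-1 largest gaps, which minimizes the sum of cluster diameters on the line.

def cluster_at_time(points, velocities, t, k):
    """1-D snapshot clustering: minimize the sum of cluster diameters at time t
    by cutting the sorted positions at the k-1 largest gaps."""
    positions = sorted(p + v * t for p, v in zip(points, velocities))
    largest_gaps = sorted(range(len(positions) - 1),
                          key=lambda i: positions[i + 1] - positions[i],
                          reverse=True)[:k - 1]
    clusters, start = [], 0
    for cut in sorted(largest_gaps) + [len(positions) - 1]:
        clusters.append(positions[start:cut + 1])
        start = cut + 1
    return clusters

# Three points drifting on the line; at t = 2 the first two stay close together.
print(cluster_at_time([0.0, 1.0, 10.0], [0.5, 0.4, -0.1], t=2.0, k=2))
# [[1.0, 1.8], [9.8]]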
50

Bhowmick, Santanu. "Multi-covering problems and their variants." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In combinatorial optimization, covering problems are those problems where, given a ground set and a family of subsets of the ground set, the objective is to find a minimum cost set of subsets whose union contains the ground set. We consider covering problems in the context of Computational Geometry, a subfield of Computer Science that deals with problems associated with geometric objects such as points, lines, disks and polygons. In particular, geometric covering is an active research area, where the ground set and the family of subsets are induced by geometric objects. Covering problems in combinatorial optimization often have a geometric analogue that arises naturally, and though such problems remain NP-hard, it is often possible to exploit the geometric properties of the set system to obtain better approximation algorithms.
In this work, the fundamental problem that we consider is a generalization of geometric covering, where each element in the ground set may need to be covered by more than one subset. To be precise, the problem is defined as follows: given two sets of points X, Y and a coverage function κ : X → Z+ ∪ {0}, construct balls centered on the points in Y such that each point x in X is covered by at least κ(x) distinct balls. The objective is to minimize the total cost, which is a function of the radii of the balls. This problem is termed the metric multi-cover (MMC) problem.
We first consider the version of the MMC problem where κ(x) = 1 for all clients, i.e. the 1-covering case. The known results that give a constant factor approximation for this problem are variations of an LP-based primal-dual algorithm. We use a modified local search technique, motivated by a geometric idea, to derive a simple, constant-factor approximation for this problem in Chapter 2. We then look at the MMC problem where the point sets X, Y are in the Euclidean plane, and each client x ∈ X needs to be covered by at least κ(x) distinct disks centered on the points in Y. In Chapter 4, we give the first polynomial time constant factor approximation for this problem, in which the constant is independent of the coverage function κ. Our solution also has an incremental property, which allows the algorithm to handle increases in the coverage requirement by increasing the radii of the current server disks, without affecting the approximation factor.
In the next problem, we move from the Euclidean plane to arbitrary metric spaces, where we consider the uniform MMC problem. In this problem, each client x has the demand κ(x) = k, where k > 0 is an integer. We give the first constant factor approximation (independent of k) for this problem. The key contribution that led to this result is the formulation of a partitioning scheme of the servers in the uniform MMC problem, which reduces the uniform MMC problem to k instances of the 1-covering problem while preserving the optimality of the solution up to a constant multiplicative factor. We present the partitioning scheme as an independent result in Chapter 5, which we then use to solve the uniform MMC problem in Chapter 6. The MMC problem with an arbitrary coverage function κ is then considered in Chapter 7. The key challenge that the non-uniform version presents is that the symmetry of the server partitioning scheme breaks down, as the coverage demands of clients are independent of each other. We present a constant factor algorithm for this problem in Chapter 7.
The last problem that we consider is the t-MMC problem, a restricted version of the uniform MMC problem. The objective is to compute a cover in which each client is covered by at least k distinct server disks, using at most t server disks in total. This problem is a generalization of the clustering problem (where k = 1), and to our knowledge this is the first time this generalization has been considered. We give a constant factor approximation for this problem in Chapter 8.
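The following small Python sketch (illustrative only, not from the dissertation) makes the MMC setting concrete: given client coverage demands κ, candidate server centers and chosen radii, it checks that every client lies in at least κ(x) distinct server balls and reports the cost as the sum of r^α for an assumed cost exponent α.

import math

def mmc_cost_and_feasibility(clients, demands, servers, radii, alpha=1.0):
    """clients, servers: lists of (x, y); demands[i] = kappa(clients[i]);
    radii[j] is the radius of the ball centered at servers[j].
    Returns (total_cost, feasible)."""
    feasible = True
    for (cx, cy), k in zip(clients, demands):
        covering = sum(math.hypot(cx - sx, cy - sy) <= r
                       for (sx, sy), r in zip(servers, radii))
        if covering < k:
            feasible = False
    cost = sum(r ** alpha for r in radii)
    return cost, feasible

clients = [(0, 0), (2, 0)]
demands = [2, 1]                      # the first client must be covered twice
servers = [(0, 1), (0, -1), (2, 1)]
radii = [1.0, 1.0, 1.0]
print(mmc_cost_and_feasibility(clients, demands, servers, radii))  # (3.0, True)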
