To see the other types of publications on this topic, follow the link: Polynomial-time algorithms.

Dissertations / Theses on the topic 'Polynomial-time algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Polynomial-time algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Domingues, Riaal. "A polynomial time algorithm for prime recognition." Diss., Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08212007-100529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

朱紫君 and Chi-kwan Chu. "Polynomial time algorithms for linear and integer programming." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chu, Chi-kwan. "Polynomial time algorithms for linear and integer programming." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22718710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Boljunčić, Jadranka. "Quadratic programming : quantitative analysis and polynomial running time algorithms." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/27532.

Full text
Abstract:
Many problems in economics, statistics and numerical analysis can be formulated as the optimization of a convex quadratic function over a polyhedral set. A polynomial algorithm for solving convex quadratic programming problems was first developed by Kozlov et al. (1979). Tardos (1986) was the first to present a polynomial algorithm for solving linear programming problems in which the number of arithmetic steps depends only on the size of the numbers in the constraint matrix and is independent of the size of the numbers in the right hand side and the cost coefficients. In the first part of the thesis we extended Tardos' results to strictly convex quadratic programming of the form max {cᵀx − ½xᵀDx : Ax ≤ b, x ≥ 0} with D a symmetric positive definite matrix. In our algorithm the number of arithmetic steps is independent of c and b but depends on the size of the entries of the matrices A and D. Another part of the thesis is concerned with proximity and sensitivity of integer and mixed-integer quadratic programs. We have shown that for any optimal solution z̅ of a given separable quadratic integer programming problem there exists an optimal solution x̅ of its continuous relaxation such that ‖z̅ − x̅‖∞ ≤ n∆(A), where n is the number of variables and ∆(A) is the largest absolute sub-determinant of the integer constraint matrix A. We have further shown that for any feasible solution z which is not optimal for the separable quadratic integer programming problem, there exists a feasible solution z̅ having greater objective function value and with ‖z − z̅‖∞ ≤ n∆(A). Under some additional assumptions the distance between a pair of optimal solutions to the integer quadratic programming problem with right hand side vectors b and b', respectively, depends linearly on ‖b − b'‖₁. The extension to the mixed-integer nonseparable quadratic case is also given. Some sensitivity analysis results for nonlinear integer programming problems are given. We assume that the nonlinear 0–1 problem was solved by implicit enumeration and that some small changes have been made in the right hand side or objective function coefficients. We then established what additional information to keep in the implicit enumeration tree, when solving the original problem, in order to provide us with bounds on the optimal value of a perturbed problem. Also, suppose that after solving the original problem to optimality the problem is enlarged by introducing a new 0–1 variable, say xₙ₊₁. We determined a lower bound on the added objective function coefficients for which the new integer variable xₙ₊₁ remains at zero level in the optimal solution of the modified integer nonlinear program. We discuss the extensions to the mixed-integer case as well as to the case when integer variables are not restricted to be 0 or 1. The computational results for an example with quadratic objective function, linear constraints and 0–1 variables are provided. Finally, we have shown how to replace the objective function of a quadratic program with 0–1 variables (by an integer objective function whose size is polynomially bounded by the number of variables) without changing the set of optimal solutions. This was done by making use of the algorithm given by Frank and Tardos (1985), which in turn uses the simultaneous approximation algorithm of Lenstra, Lenstra and Lovász (1982).
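Collecting the three proximity statements of this abstract in display form (writing z̅(b) for an optimal integer solution under right-hand side b, a notation assumed here for compactness):

```latex
\|\bar z - \bar x\|_\infty \le n\,\Delta(A), \qquad
\|z - \bar z\|_\infty \le n\,\Delta(A), \qquad
\|\bar z(b) - \bar z(b')\|_\infty = O\big(\|b - b'\|_1\big).
```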
Business, Sauder School of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
5

Regan, K. W. "On the separation of complexity classes." Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pardella, Gregor L. [Verfasser]. "Efficient Polynomial-Time Algorithms for Special Graph Partitioning Problems / Gregor L. Pardella." München : Verlag Dr. Hut, 2011. http://d-nb.info/1015604919/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Anderson, Robert Lawrence. "An Exposition of the Deterministic Polynomial-Time Primality Testing Algorithm of Agrawal-Kayal-Saxena." Diss., Brigham Young University, 2005. http://contentdm.lib.byu.edu/ETD/image/etd869.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cuvelier, Thibaut. "Polynomial-Time Algorithms for Combinatorial Semibandits : Computationally Tractable Reinforcement Learning in Complex Environments." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG020.

Full text
Abstract:
Sequential decision making is a core component of many real-world applications, from computer-network operations to online ads. The major tool for this use is reinforcement learning: an agent takes a sequence of decisions in order to achieve its goal, with typically noisy measurements of the evolution of the environment. For instance, a self-driving car can be controlled by such an agent; the environment is the city in which the car manoeuvres. Bandit problems are a class of reinforcement learning for which very strong theoretical properties can be shown. The focus of bandit algorithms is on the exploration-exploitation dilemma: in order to have good performance, the agent must have a deep knowledge of its environment (exploration); however, it should also play actions that bring it closer to its goal (exploitation). In this dissertation, we focus on combinatorial bandits, which are bandits whose decisions are highly structured (a "combinatorial" structure). These include cases where the learning agent determines a path to follow (on a road, in a computer network, etc.) or ads to display on a website. Such situations share their computational complexity: while it is often easy to determine the optimum decision when the parameters are known (the time to cross a road, the monetary gain of displaying an ad at a given place), the bandit variant (when the parameters must be determined through interactions with the environment) is more complex. We propose two new algorithms to tackle these problems by mathematical-optimisation techniques. Based on weak hypotheses, they have a polynomial time complexity, and yet perform well compared to state-of-the-art algorithms for the same problems. They also enjoy excellent statistical properties, meaning that they find a balance between exploration and exploitation that is close to the theoretical optimum. Previous work on combinatorial bandits had to make a choice between computational burden and statistical performance; our algorithms show that there is no need for such a quandary.
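For orientation on the setting (not one of the thesis's algorithms), a minimal CUCB-style combinatorial semibandit sketch, assuming Bernoulli arm rewards and the simplest feasible family, all m-subsets of arms, for which the linear-optimization oracle is just top-m selection; all names are illustrative:

```python
import math
import random

def cucb_semibandit(true_means, m, horizon, seed=0):
    """Toy CUCB for the m-set semibandit: each round, play the m arms with
    highest upper confidence bounds and observe each played arm's reward."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n          # times each arm was played
    est = [0.0] * n           # empirical mean reward of each arm
    total = 0.0
    for t in range(1, horizon + 1):
        # UCB index; unplayed arms get +inf so each arm is tried at least once
        ucb = [est[i] + math.sqrt(1.5 * math.log(t) / counts[i])
               if counts[i] > 0 else float("inf") for i in range(n)]
        # "Oracle": for the m-set feasible family, exact linear optimization
        # over decisions is simply picking the m largest indices.
        action = sorted(range(n), key=lambda i: ucb[i], reverse=True)[:m]
        for i in action:      # semibandit feedback: one sample per played arm
            r = 1.0 if rng.random() < true_means[i] else 0.0
            counts[i] += 1
            est[i] += (r - est[i]) / counts[i]
            total += r
    return total

print(cucb_semibandit([0.9, 0.8, 0.5, 0.4, 0.2], m=2, horizon=2000))
```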
APA, Harvard, Vancouver, ISO, and other styles
9

Heednacram, Apichat. "The NP-Hardness of Covering Points with Lines, Paths and Tours and their Tractability with FPT-Algorithms." Thesis, Griffith University, 2010. http://hdl.handle.net/10072/367754.

Full text
Abstract:
Given a problem for which no polynomial-time algorithm is likely to exist, we investigate how to attack this seemingly intractable problem based on parameterized complexity theory. We study hard geometric problems, and show that they are fixed-parameter tractable (FPT) given an instance and a parameter k. This allows the problems to be solved exactly, rather than approximately, in polynomial time in the size of the input and exponential time in the parameter. Although the parameterized approach is still young, in recent years there have been many results published concerning graph problems and databases. However, not many earlier results apply the parameterized approach in the field of computational geometry. This thesis, therefore, focuses on geometric NP-hard problems. These problems are the Line Cover problem, the Rectilinear Line Cover problem in higher dimensions, the Rectilinear Minimum-Links Spanning Path problem in higher dimensions, the Rectilinear Hyperplane Cover problem, the Minimum-Bends Traveling Salesman Problem and the Rectilinear Minimum-Bends Traveling Salesman Problem, in addition to some other variants of these problems. The Rectilinear Minimum-Links Spanning Path problem in higher dimensions and the Rectilinear Hyperplane Cover problem had been the subject of only conjectures about their intractability. Therefore, we present the NP-completeness proofs for these problems. After verifying their hardness, we use the fixed-parameter approach to solve the two problems. We focus on solving the decision version of the problems, rather than the optimization version. However, with the Line Cover problem we demonstrate that it is not difficult to adapt algorithms for the decision version to algorithms for the optimization version. We also implement several algorithms for the Line Cover problem and conduct experimental evaluations of our algorithms with respect to previously known algorithms. For the remaining problems in the thesis, we establish only the fundamental results; that is, we determine fixed-parameter tractability of those problems.
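As context for the Line Cover entry above, the problem is FPT by a classic kernel-plus-branching argument; the sketch below is an illustration of that standard argument under the assumption of exact integer coordinates, not the thesis's implementation:

```python
from itertools import combinations

def collinear(p, q, r):
    # Exact orientation (cross-product) test for integer coordinates.
    return (q[0]-p[0])*(r[1]-p[1]) == (q[1]-p[1])*(r[0]-p[0])

def cover(pts, k):
    # Bounded search tree: if more than 2k points remain, some solution line
    # passes through the first point p and another point; branch on that choice.
    if len(pts) <= 2 * k:
        return True           # k lines always cover 2k points (pair them up)
    if k == 0:
        return False
    p = pts[0]
    for q in pts[1:]:
        rest = [r for r in pts if not collinear(p, q, r)]
        if cover(rest, k - 1):
            return True
    return False

def line_cover(pts, k):
    """Decide whether `pts` can be covered by at most k lines (FPT in k)."""
    # Kernel rule: a line through >= k+1 points is in every k-line cover,
    # since the k other lines can meet it in at most k points.
    reduced = True
    while reduced and k > 0:
        reduced = False
        for p, q in combinations(pts, 2):
            on = [r for r in pts if collinear(p, q, r)]
            if len(on) >= k + 1:
                pts = [r for r in pts if r not in on]
                k -= 1
                reduced = True
                break
    if k > 0 and len(pts) > k * k:
        return False          # k lines, each covering <= k points, cover <= k^2
    return cover(pts, k)

pts = [(0, 0), (1, 1), (2, 2), (3, 0), (4, 0)]
print(line_cover(pts, 2), line_cover(pts, 1))  # True False
```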
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
10

Van-'T-Hof, Pim. "Exploiting structure to cope with NP-hard graph problems : polynomial and exponential time exact algorithms." Thesis, Durham University, 2010. http://etheses.dur.ac.uk/285/.

Full text
Abstract:
An ideal algorithm for solving a particular problem always finds an optimal solution, finds such a solution for every possible instance, and finds it in polynomial time. When dealing with NP-hard problems, algorithms can only be expected to possess at most two out of these three desirable properties. All algorithms presented in this thesis are exact algorithms, which means that they always find an optimal solution. Demanding the solution to be optimal means that other concessions have to be made when designing an exact algorithm for an NP-hard problem: we either have to impose restrictions on the instances of the problem in order to achieve a polynomial time complexity, or we have to abandon the requirement that the worst-case running time has to be polynomial. In some cases, when the problem under consideration remains NP-hard on restricted input, we are even forced to do both. Most of the problems studied in this thesis deal with partitioning the vertex set of a given graph. In the other problems the task is to find certain types of paths and cycles in graphs. The problems all have in common that they are NP-hard on general graphs. We present several polynomial time algorithms for solving restrictions of these problems to specific graph classes, in particular graphs without long induced paths, chordal graphs and claw-free graphs. For problems that remain NP-hard even on restricted input we present exact exponential time algorithms. In the design of each of our algorithms, structural graph properties have been heavily exploited. Apart from using existing structural results, we prove new structural properties of certain types of graphs in order to obtain our algorithmic results.
APA, Harvard, Vancouver, ISO, and other styles
11

Robinson, Clare. "Multi-objective optimisation of polynomial models for time series prediction using genetic algorithms and neural networks." Thesis, University of Sheffield, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Suntisrivaraporn, Boontawee. "Polynomial-Time Reasoning Support for Design and Maintenance of Large-Scale Biomedical Ontologies." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1233830966436-59282.

Full text
Abstract:
Description Logics (DLs) belong to a successful family of knowledge representation formalisms with two key assets: formally well-defined semantics, which allows knowledge to be represented in an unambiguous way, and automated reasoning, which allows implicit knowledge to be inferred from the knowledge given explicitly. This thesis investigates various reasoning techniques for tractable DLs in the EL family which have been implemented in the CEL system. It suggests that the use of lightweight DLs, in which reasoning is tractable, is beneficial for ontology design and maintenance both in terms of expressivity and scalability. The claim is supported by a case study on the renowned medical ontology SNOMED CT and extensive empirical evaluation on several large-scale biomedical ontologies.
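Background sketch for the EL-family reasoning mentioned above: polynomial-time classification is typically done by saturating completion rules over ontologies in a normal form with axioms of the shapes A ⊑ B, A1 ⊓ A2 ⊑ B, A ⊑ ∃r.B and ∃r.A ⊑ B. The following is a simplified illustration of that well-known calculus, not the CEL system's actual code:

```python
def el_classify(concepts, ax_sub, ax_conj, ax_exists_rhs, ax_exists_lhs):
    """Completion-based EL subsumption (Baader/Brandt/Lutz-style rules).
    S[A] collects proven superconcepts of A; R collects proven role links."""
    S = {A: {A} for A in concepts}
    R = []  # triples (A, r, B): A is subsumed by (exists r. B)
    changed = True
    while changed:
        changed = False
        for A in concepts:
            # CR1: A' in S[A] and A' <= B  ==>  B in S[A]
            for (A1, B) in ax_sub:
                if A1 in S[A] and B not in S[A]:
                    S[A].add(B); changed = True
            # CR2: A1, A2 in S[A] and A1 ^ A2 <= B  ==>  B in S[A]
            for (A1, A2, B) in ax_conj:
                if A1 in S[A] and A2 in S[A] and B not in S[A]:
                    S[A].add(B); changed = True
            # CR3: A' in S[A] and A' <= exists r.B  ==>  (A, r, B) in R
            for (A1, r, B) in ax_exists_rhs:
                if A1 in S[A] and (A, r, B) not in R:
                    R.append((A, r, B)); changed = True
        # CR4: (A, r, B) in R, B' in S[B], exists r.B' <= C  ==>  C in S[A]
        for (A, r, B) in list(R):
            for (B1, r1, C) in ax_exists_lhs:
                if r1 == r and B1 in S[B] and C not in S[A]:
                    S[A].add(C); changed = True
    return S

# Toy ontology: Pericarditis <= Inflammation, Inflammation <= exists acts_on.Tissue,
# exists acts_on.Tissue <= Disease  ==>  Pericarditis <= Disease
S = el_classify(["Pericarditis", "Inflammation", "Tissue", "Disease"],
                ax_sub=[("Pericarditis", "Inflammation")],
                ax_conj=[],
                ax_exists_rhs=[("Inflammation", "acts_on", "Tissue")],
                ax_exists_lhs=[("Tissue", "acts_on", "Disease")])
print("Disease" in S["Pericarditis"])  # True
```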
APA, Harvard, Vancouver, ISO, and other styles
13

Suntisrivaraporn, Boontawee. "Polynomial-Time Reasoning Support for Design and Maintenance of Large-Scale Biomedical Ontologies." Doctoral thesis, Technische Universität Dresden, 2008. https://tud.qucosa.de/id/qucosa%3A23678.

Full text
Abstract:
Description Logics (DLs) belong to a successful family of knowledge representation formalisms with two key assets: formally well-defined semantics, which allows knowledge to be represented in an unambiguous way, and automated reasoning, which allows implicit knowledge to be inferred from the knowledge given explicitly. This thesis investigates various reasoning techniques for tractable DLs in the EL family which have been implemented in the CEL system. It suggests that the use of lightweight DLs, in which reasoning is tractable, is beneficial for ontology design and maintenance both in terms of expressivity and scalability. The claim is supported by a case study on the renowned medical ontology SNOMED CT and extensive empirical evaluation on several large-scale biomedical ontologies.
APA, Harvard, Vancouver, ISO, and other styles
14

Lin, J. "EXACT ALGORITHMS FOR SIZE CONSTRAINED CLUSTERING." Doctoral thesis, Università degli Studi di Milano, 2012. http://hdl.handle.net/2434/172513.

Full text
Abstract:
This thesis investigates the following general constrained clustering problem: given a dimension $d$, an $L^p$-norm, a set $X \subset \mathbb{R}^d$, a positive integer $k$ and a finite set $\mathcal{M} \subset \mathbb{N}$, find the optimal $k$-partition $\{A_1,...,A_k\}$ of $X$ w.r.t. the $L^p$-norm satisfying $|A_i| \in \mathcal{M}$, $i=1,...,k$. First of all, we prove that the problem is NP-hard even if $k=2$ (for all $p>1$), or $d=2$ and $|\mathcal{M}|=2$ (with the Euclidean norm). Moreover, we show that the problem is computationally hard if $p$ is a non-integer rational. When $d=2$, $k=2$ and $\mathcal{M}=\{m, n-m\}$, we design an algorithm for solving the problem in time $O(n\sqrt[3]{m} \log^2 n)$ in the case of the Euclidean norm; this result relies on combinatorial geometry techniques concerning $k$-sets and dynamic convex hulls. Finally, we study the problem in fixed dimension $d$ with $k=2$; by means of tools from real algebraic geometry and numerical techniques for localising algebraic roots we construct a polynomial-time method for solving the constrained clustering problem with integer $p$ given in unary notation.
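For orientation, a brute-force reference implementation of the size-constrained 2-clustering objective, assuming the Euclidean case with squared-error cluster cost (one common reading of the $L^p$ objective with $p=2$); the thesis's $O(n\sqrt[3]{m}\log^2 n)$ algorithm is far more sophisticated:

```python
from itertools import combinations

def sq_cost(cluster):
    # Sum of squared Euclidean distances to the cluster centroid.
    d = len(cluster[0])
    c = [sum(p[j] for p in cluster) / len(cluster) for j in range(d)]
    return sum(sum((p[j] - c[j]) ** 2 for j in range(d)) for p in cluster)

def best_2partition(X, sizes):
    """Brute-force optimal {A1, A2} partition of X with |A1| in `sizes`
    (and |A2| = n - |A1|), minimizing total squared-error cost."""
    n, best = len(X), (float("inf"), None)
    for m in sizes:
        if m == 0 or m == n:          # both clusters must be nonempty
            continue
        for idx in combinations(range(n), m):
            A1 = [X[i] for i in idx]
            A2 = [X[i] for i in range(n) if i not in idx]
            cost = sq_cost(A1) + sq_cost(A2)
            if cost < best[0]:
                best = (cost, (A1, A2))
    return best

X = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(best_2partition(X, sizes={3}))
```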
APA, Harvard, Vancouver, ISO, and other styles
15

Liu, Qingyu. "Delay-Aware Multi-Path Routing in a Multi-Hop Network: Algorithms and Applications." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90405.

Full text
Abstract:
Delay is known to be a critical performance metric for various real-world routing applications including multimedia communication and freight delivery. Provisioning delay-minimal (or at least delay-bounded) routing services for all traffic of an application is highly important. As a basic paradigm of networking, multi-path routing has been proven to be able to obtain lower delay performance than single-path routing, since traffic congestion can be avoided. However, to the best of our knowledge, (i) many of the existing delay-aware multi-path routing studies only consider the aggregate traffic delay. Considering that even the solution achieving the optimal aggregate traffic delay can have an unbounded delay for certain individual traffic units, those studies may be insufficient in practice; besides, (ii) most existing studies which optimize or bound the delays of all traffic are best-effort, where the achieved solutions have no theoretical performance guarantee. In this dissertation, we study four delay-aware multi-path routing problems, with the delay performance of all traffic taken into account. Three of them are in communication and one is in transportation. Our study differs from all related ones in that, to the best of our knowledge, we are the first to study these four fundamental problems. Although we prove that the studied problems are all NP-hard, we design approximation algorithms with theoretical performance guarantees for solving each of them. Specifically, we claim the following contributions. Minimize maximum delay and average delay. First, we consider a single-unicast setting where in a multi-hop network a sender needs to use multiple paths to stream a flow at a fixed rate to a receiver. Two important delay metrics are the average sender-to-receiver delay and the maximum sender-to-receiver delay. Existing results say that the two delay metrics of a flow cannot both be within bounded-ratio gaps to the optimal. In comparison, we design three different flow solutions, each of which can minimize the two delay metrics simultaneously within a $(1/\epsilon)$-ratio gap to the optimal, at a cost of only delivering a $(1-\epsilon)$-fraction of the flow, for any user-defined $\epsilon \in (0,1)$. The gap $(1/\epsilon)$ is proven to be at least near-tight, and we further show that our solutions can be extended to the multiple-unicast setting. Minimize Age-of-Information (AoI). Second, we consider a single-unicast setting where in a multi-hop network a sender needs to use multiple paths to periodically send a batch of data to a receiver. We study a newly proposed delay-sensitive networking performance metric, AoI, defined as the elapsed time since the generation of the last received data. We consider the problem of minimizing AoI subject to throughput requirements, which we prove is NP-hard. We note that our AoI problem differs from existing ones in that we are the first to consider the batch generation of data and multi-path communication. We develop both an optimal algorithm with a pseudo-polynomial time complexity and an approximation framework with a polynomial time complexity. Our framework can build upon any polynomial-time $\alpha$-approximation algorithm for the maximum delay minimization problem to construct an $(\alpha+c)$-approximate solution for minimizing AoI. Here $c$ is a constant dependent on the throughput requirements. Maximize network utility. Third, we consider a multiple-unicast setting where in a multi-hop network there exist many network users. Each user requires a sender to use multiple paths to stream a flow to a receiver, incurring a utility that is a function of the experienced maximum delay or the achieved throughput. Our objective is to maximize the aggregate utility of all users under throughput requirements and maximum delay constraints. We observe that it is NP-complete either to construct an optimal solution under relaxed maximum delay constraints or relaxed throughput requirements, or to figure out a feasible solution with all constraints satisfied. Hence it is non-trivial even to obtain approximate solutions satisfying relaxed constraints in polynomial time. We develop a polynomial-time approximation algorithm. Our algorithm obtains solutions with constant approximation ratios under realistic conditions, at the cost of violating constraints by up to constant ratios. Minimize fuel consumption for a heavy truck to timely fulfill multiple transportation tasks. Finally, we consider a common truck operation scenario where a truck is driving in a national highway network to fulfill multiple transportation tasks in order. We study an NP-hard timely eco-routing problem of minimizing total fuel consumption under task pickup and delivery time window constraints. We note that optimizing task execution times is a new and challenging design space for saving fuel in our multi-task setting, and it differentiates our study from existing ones under the single-task setting. We design a fast and efficient heuristic. We characterize conditions under which the solution of our heuristic must be optimal, and further prove its optimality gap in case the conditions are not met. We simulate a heavy-duty truck driving across the US national highway system, and empirically observe that the fuel consumption achieved by our heuristic can be 22% less than that achieved by the fastest-/shortest-path baselines. Furthermore, the fuel saving of our heuristic as compared to the baselines is robust to the number of tasks.
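In display form, with $g(t)$ denoting the generation time of the freshest data fully received by time $t$ (standard AoI notation, assumed here), the metric minimized in the second problem is

```latex
\mathrm{AoI}(t) = t - g(t),
```

and the framework stated above turns any polynomial-time $\alpha$-approximation for maximum-delay minimization into an $(\alpha + c)$-approximation for AoI minimization.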
Doctor of Philosophy
We consider a network modeled as a directed graph, where it takes time for data to traverse each link in the network. It models many critical applications both in the communication area and in the transportation field. For example, both the European education network and the US national highway network can be modeled as directed graphs. We consider a scenario where a source node is required to send multiple (a set of) data packets to a destination node through the network as fast as possible, possibly using multiple source-to-destination paths. In this dissertation we study four problems, all of which try to figure out routing solutions for sending the set of data packets, with an objective of minimizing the experienced travel time or subject to travel time constraints. Although all of our four problems are NP-hard, we design approximation algorithms to solve them and obtain solutions with theoretically bounded gaps as compared to the optimal. The first three problems are in the communication area, and the last problem is in the transportation field. We claim the following specific contributions. Minimize maximum delay and average delay. First, we consider the setting of simultaneously minimizing the average travel time and the worst (largest) travel time of sending the set of data packets from source to destination. Existing results say that the two metrics of travel time cannot be minimized to be both within bounded-ratio gaps to the optimal. As a comparison, we design three different routing solutions, each of which can minimize the two metrics of travel time simultaneously within a constant bounded ratio-gap to the optimal, but at a cost of only delivering a portion of the data. Minimize Age-of-Information (AoI). Second, we consider the problem of minimizing a newly proposed travel-time-sensitive performance metric, i.e., AoI, which is the elapsed time since the generation of the last received data. Our AoI study differs from existing ones in that we are the first to consider a set of data and multi-path routing. We develop both an optimal algorithm with a pseudo-polynomial time complexity and an approximation framework with a polynomial time complexity. Maximize network utility. Third, we consider a more general setting with multiple source-destination pairs. Each source incurs a utility that is a function of the experienced travel time or the achieved throughput to send data to its destination. Our objective is to maximize the aggregate utility under throughput requirements and travel time constraints. We develop a polynomial-time approximation algorithm, at the cost of violating constraints by up to constant ratios. It is non-trivial to design such algorithms, as we prove that it is NP-complete either to construct an optimal solution under relaxed delay constraints or relaxed throughput requirements, or to figure out a feasible solution with all constraints satisfied. Minimize fuel consumption for a heavy truck to timely fulfill multiple transportation tasks. Finally, we consider a truck and multiple transportation tasks in order, where each task requires the truck to pick up cargo at a source on time and deliver it to a destination on time. The need to coordinate task execution times is a new and challenging design space for saving fuel in our multi-task setting, and it differentiates our study from existing ones under the single-task setting. We design an efficient heuristic. 
We characterize conditions under which the solution of our heuristic must be optimal, and further prove its performance gap as compared to the optimal in case the conditions are not met.
APA, Harvard, Vancouver, ISO, and other styles
16

Nakajima, Natsu. "Genetic Network Completion Using Dynamic Programming and Least-Squares Fitting." 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/195987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Besner, Manfred [Verfasser], Winfried [Akademischer Betreuer] Hochstättler, Winfried [Gutachter] Hochstättler, André [Gutachter] Casajus, and Jörg [Gutachter] Homberger. "Axiomatizations of Harsanyi Solutions and Extensions, Values for Level Structures, and Polynomial-Time Algorithms / Manfred Besner ; Gutachter: Winfried Hochstättler, André Casajus, Jörg Homberger ; Betreuer: Winfried Hochstättler." Hagen : FernUniversität in Hagen, 2020. http://d-nb.info/1224100891/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Besner, Manfred [Verfasser], Winfried [Akademischer Betreuer] Hochstättler, Winfried [Gutachter] Hochstättler, André [Gutachter] Casajus, and Jörg [Gutachter] Homberger. "Axiomatizations of Harsanyi Solutions and Extensions, Values for Level Structures, and Polynomial-Time Algorithms / Manfred Besner ; Gutachter: Winfried Hochstättler, André Casajus, Jörg Homberger ; Betreuer: Winfried Hochstättler." Hagen : FernUniversität in Hagen, 2020. http://d-nb.info/1224100891/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Mohamed, Sidi Mohamed Ahmed. "K-Separator problem." Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0032/document.

Full text
Abstract:
Let G be a vertex-weighted undirected graph. We aim to compute a minimum weight subset of vertices whose removal leads to a graph where the size of each connected component is less than or equal to a given positive number k. If k = 1 we get the classical vertex cover problem. Many formulations are proposed for the problem. The linear relaxations of these formulations are theoretically compared. A polyhedral study is proposed (valid inequalities, facets, separation algorithms). It is shown that the problem can be solved in polynomial time for many special cases including the path, the cycle and the tree cases, and also for graphs not containing some special induced sub-graphs. Some (k + 1)-approximation algorithms are also exhibited. Most of the algorithms are implemented and compared. The k-separator problem has many applications. If vertex weights are equal to 1, the size of a minimum k-separator can be used to evaluate the robustness of a graph or a network. Another application consists in partitioning a graph/network into different sub-graphs with respect to different criteria. For example, in the context of social networks, many approaches are proposed to detect communities. By solving a minimum k-separator problem, we get different connected components that may represent communities. The k-separator vertices represent persons making connections between communities. The k-separator problem can then be seen as a special graph partitioning/clustering problem.
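A brute-force reading of the definition (illustration only; the thesis develops polyhedral and approximation methods), using networkx:

```python
from itertools import combinations
import networkx as nx

def min_k_separator(G, k, weight="weight"):
    """Brute force: smallest-weight S such that every connected component
    of G - S has at most k vertices. Exponential; for tiny graphs only."""
    nodes = list(G.nodes)
    best = (float("inf"), None)
    for r in range(len(nodes) + 1):
        for S in combinations(nodes, r):
            H = G.copy()
            H.remove_nodes_from(S)
            if all(len(c) <= k for c in nx.connected_components(H)):
                w = sum(G.nodes[v].get(weight, 1) for v in S)
                if w < best[0]:
                    best = (w, set(S))
        if best[1] is not None:
            break  # with unit weights, the first feasible cardinality is optimal
    return best

G = nx.cycle_graph(6)           # unit weights
print(min_k_separator(G, k=2))  # two opposite vertices leave two 2-vertex paths
```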
APA, Harvard, Vancouver, ISO, and other styles
20

Mohamed, Sidi Mohamed Ahmed. "K-Separator problem." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0032.

Full text
Abstract:
Let G be a vertex-weighted undirected graph. We aim to compute a minimum weight subset of vertices whose removal leads to a graph where the size of each connected component is less than or equal to a given positive number k. If k = 1 we get the classical vertex cover problem. Many formulations are proposed for the problem. The linear relaxations of these formulations are theoretically compared. A polyhedral study is proposed (valid inequalities, facets, separation algorithms). It is shown that the problem can be solved in polynomial time for many special cases including the path, the cycle and the tree cases, and also for graphs not containing some special induced sub-graphs. Some (k + 1)-approximation algorithms are also exhibited. Most of the algorithms are implemented and compared. The k-separator problem has many applications. If vertex weights are equal to 1, the size of a minimum k-separator can be used to evaluate the robustness of a graph or a network. Another application consists in partitioning a graph/network into different sub-graphs with respect to different criteria. For example, in the context of social networks, many approaches are proposed to detect communities. By solving a minimum k-separator problem, we get different connected components that may represent communities. The k-separator vertices represent persons making connections between communities. The k-separator problem can then be seen as a special graph partitioning/clustering problem.
APA, Harvard, Vancouver, ISO, and other styles
21

Aggarwal, Charu C., Haim Kaplan, and Robert E. Tarjan. "A Faster Primal Network Simplex Algorithm." Massachusetts Institute of Technology, Operations Research Center, 1996. http://hdl.handle.net/1721.1/5266.

Full text
Abstract:
We present a faster implementation of the polynomial time primal simplex algorithm due to Orlin [23]. His algorithm requires O(nm min{log(nC), m log n}) pivots and O(n²m min{log(nC), m log n}) time. The bottleneck operations in his algorithm are performing the relabeling operations on nodes, selecting entering arcs for pivots, and performing the pivots. We show how to speed up these operations so as to yield an algorithm whose running time is O(nm log n) per scaling phase. We show how to extend the dynamic-tree data structure in order to implement these algorithms. The extension may possibly have other applications as well.
APA, Harvard, Vancouver, ISO, and other styles
22

Deshpande, Ajay A. "A pseudo-polynomial time O(log² n)-approximation algorithm for art gallery problems." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36243.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 55-56).
In this thesis, we give a pseudo-polynomial time O(log² n)-approximation algorithm for a variant of the art gallery problem, the point-guard problem. The point-guard problem involves finding the minimum number of points and their positions so that guards located at these points cover the interior of the art gallery. Our algorithm is pseudo-polynomial in the sense that it is polynomial in the number of walls of the art gallery but is possibly exponential in the number of bits required to represent the positions of the vertices of the art gallery. Our approach involves reducing the point-guard problem to a new problem of choosing a minimum number of guard locations from a finite set obtained by a special subdivision procedure. The new problem has an optimal solution at most three times the optimal solution of the point-guard problem. We further reduce the new problem to the set cover problem and approximately solve the resulting set cover instance.
by Ajay A. Deshpande.
S.M.
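The final reduction step in the abstract above lands on set cover, where the classical greedy rule gives the logarithmic approximation factor; a generic sketch (not tied to the guard-location subdivision itself), with toy data:

```python
def greedy_set_cover(universe, sets):
    """Greedy set cover: repeatedly take the set covering the most
    still-uncovered elements. Guarantees an O(log n) approximation."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Toy instance: guard candidate i "sees" region sets[i]
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, sets))  # [0, 2]
```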
APA, Harvard, Vancouver, ISO, and other styles
23

Farooq, Rashid. "A polynomial-time algorithm for a stable matching problem with linear valuations and bounded side payments." 京都大学 (Kyoto University), 2007. http://hdl.handle.net/2433/136743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Freund, Robert M. "A Potential Reduction Algorithm With User-Specified Phase I - Phase II Balance, for Solving a Linear Program from an Infeasible Warm Start." Massachusetts Institute of Technology, Operations Research Center, 1991. http://hdl.handle.net/1721.1/5409.

Full text
Abstract:
This paper develops a potential reduction algorithm for solving a linear-programming problem directly from a "warm start" initial point that is neither feasible nor optimal. The algorithm is of an "interior point" variety that seeks to reduce a single potential function which simultaneously coerces feasibility improvement (Phase I) and objective value improvement (Phase II). The key feature of the algorithm is the ability to specify beforehand the desired balance between infeasibility and nonoptimality in the following sense. Given a prespecified balancing parameter β > 0, the algorithm maintains the following Phase I - Phase II "β-balancing constraint" throughout: the optimality gap cᵀx − z* remains at most β times the infeasibility measure of the current iterate x, where cᵀx is the objective function and z* is the (unknown) optimal objective value of the linear program. This balancing constraint can be used either to emphasize rapid attainment of feasibility (set β large) at the possible expense of good objective function values, or to emphasize rapid attainment of good objective values (set β small) at the possible expense of a larger infeasibility gap. The algorithm exhibits the following advantageous features: (i) the iterate solutions monotonically decrease the infeasibility measure, (ii) the iterate solutions satisfy the β-balancing constraint, (iii) the iterate solutions achieve constant improvement in both Phase I and Phase II in O(n) iterations, (iv) there is always a possibility of finite termination of the Phase I problem, and (v) the algorithm is amenable to acceleration via linesearch of the potential function.
APA, Harvard, Vancouver, ISO, and other styles
25

Shishenina, Elvira. "Space-Time Discretization of Elasto-Acoustic Wave Equation in Polynomial Trefftz-DG Bases." Thesis, Pau, 2018. http://www.theses.fr/2018PAUU3018/document.

Full text
Abstract:
Discontinuous Galerkin finite element methods (DG FEM) have proven flexibility and accuracy for solving wave problems in complex media. However, they require a large number of degrees of freedom, which increases the corresponding computational cost compared with that of continuous finite element methods. Among the different variational approaches to solving boundary value problems, there exists a particular family of methods based on the use of trial functions in the form of exact local solutions of the governing equations. The idea was first proposed by Trefftz in 1926, and since then it has been further developed and generalized. A Trefftz-DG variational formulation applied to wave problems reduces to surface integrals, which should contribute to decreasing the computational costs. Trefftz-type approaches have been widely used for time-harmonic problems, while their implementation for time-dependent simulations is still limited. A feature of Trefftz-DG methods applied to time-dependent problems is the use of space-time meshes. Indeed, standard DG methods lead to the construction of a semi-discrete system of ordinary differential equations in time which are integrated by using an appropriate scheme. But Trefftz-DG methods applied to wave problems lead to a global matrix, including both time and space discretizations, which is huge and sparse. This significantly hampers the deployment of this technology for solving industrial problems. In this work, we develop a Trefftz-DG framework for solving mechanical wave problems including elasto-acoustic equations. We prove that the corresponding formulations are well-posed and we address the issue of solving the global matrix by constructing an approximate inverse obtained from the decomposition of the global matrix into a block-diagonal one. The inversion is then justified under a CFL-type condition. This idea allows for reducing the computational costs, but its accuracy is limited to small computational domains. Given the limitations of the method, we have investigated the potential of Tent Pitcher algorithms, following the recent works of Gopalakrishnan et al. The idea is to construct a space-time mesh made of patches that can be solved independently under a causality constraint. We have obtained very promising numerical results illustrating the potential of Tent Pitcher, in particular when coupled with a Trefftz-DG method involving only surface terms. In this way, the space-time mesh is composed of elements which are 3D objects at most. It is also worth noting that this framework naturally allows for local time-stepping, which is a plus for increasing the accuracy while decreasing the computational burden.
APA, Harvard, Vancouver, ISO, and other styles
26

Kopřiva, Jan. "Srovnání algoritmů při řešení problému obchodního cestujícího." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-222126.

Full text
Abstract:
The Master's thesis deals with innovation of the logistics module of an ERP information system. The innovation is based on implementing heuristic algorithms that solve the Travelling Salesman Problem (TSP). The software MATLAB is used to analyse and test these algorithms. The goal of the thesis is a comparison of selected algorithms that are suitable for economic purposes (accuracy of solution, speed of calculation and memory demands).
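Typical of the heuristics compared in such work (the thesis used MATLAB; this purely illustrative sketch is Python) is the nearest-neighbour construction, which trades tour quality for speed and minimal memory:

```python
import math

def nearest_neighbour_tour(cities, start=0):
    """Greedy TSP construction: always visit the closest unvisited city.
    Fast (O(n^2)) but usually noticeably above the optimal tour length."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

cities = [(0, 0), (0, 5), (5, 5), (5, 0), (2, 3)]
t = nearest_neighbour_tour(cities)
print(t, round(tour_length(cities, t), 2))
```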
APA, Harvard, Vancouver, ISO, and other styles
27

Brettell, Nicholas John. "Aspects of Matroid Connectivity." Thesis, University of Canterbury. School of Mathematics and Statistics, 2014. http://hdl.handle.net/10092/9215.

Full text
Abstract:
Connectivity is a fundamental tool for matroid theorists, which has become increasingly important in the eventual solution of many problems in matroid theory. Loosely speaking, connectivity can be used to help describe a matroid's structure. In this thesis, we prove a series of results that further the knowledge and understanding in the field of matroid connectivity. These results fall into two parts. First, we focus on 3-connected matroids. A chain theorem is a result that proves the existence of an element, or elements, whose deletion or contraction preserves a predetermined connectivity property. We prove a series of chain theorems for 3-connected matroids where, after fixing a basis B, the elements in B are only eligible for contraction, while the elements not in B are only eligible for deletion. Moreover, we prove a splitter theorem, where a 3-connected minor is also preserved, resolving a conjecture posed by Whittle and Williams in 2013. Second, we consider k-connected matroids, where k >= 3. A certain tree, known as a k-tree, can be used to describe the structure of a k-connected matroid. We present an algorithm for constructing a k-tree for a k-connected matroid M. Provided that the rank of a subset of E(M) can be found in unit time, the algorithm runs in time polynomial in |E(M)|. This generalises Oxley and Semple's (2013) polynomial-time algorithm for constructing a 3-tree for a 3-connected matroid.
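The oracle model mentioned at the end of this abstract (rank queries in unit time) is the standard one for matroid algorithms. As a small illustration of the model rather than of the k-tree algorithm itself, the greedy procedure below extracts a basis using only rank calls, with a graphic-matroid rank oracle as the assumed example:

```python
def graphic_rank(edges):
    """Rank of an edge set in the graphic matroid = |V(S)| - #components(S),
    computed with a tiny union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    rank = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            rank += 1
    return rank

def greedy_basis(ground_set, rank):
    """Build a matroid basis with rank-oracle calls only:
    keep an element iff it increases the rank of the kept set."""
    basis = []
    for e in ground_set:
        if rank(basis + [e]) > rank(basis):
            basis.append(e)
    return basis

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(greedy_basis(edges, graphic_rank))  # a spanning tree: ab, bc, cd
```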
APA, Harvard, Vancouver, ISO, and other styles
28

Koller, Angela Erika. "The frequency assignment problem." Thesis, Brunel University, 2004. http://bura.brunel.ac.uk/handle/2438/4967.

Full text
Abstract:
This thesis examines a wide collection of frequency assignment problems. One of the largest topics in this thesis is that of L(2,1)-labellings of outerplanar graphs. The main result in this topic is the fact that there exists a polynomial time algorithm to determine the minimum L(2,1)-span for an outerplanar graph. This result generalises the analogous result for trees, solves a stated open problem and complements the fact that the problem is NP-complete for planar graphs. We furthermore give best possible bounds on the minimum L(2,1)-span and the cyclic-L(2,1)-span in outerplanar graphs, when the maximum degree is at least eight. We also give polynomial time algorithms for solving the standard constraint matrix problem for several classes of graphs, such as chains of triangles, the wheel and a larger class of graphs containing the wheel. We furthermore introduce the concept of one-close-neighbour problems, which have some practical applications. We prove optimal results for bipartite graphs, odd cycles and complete multipartite graphs. Finally we evaluate different algorithms for the frequency assignment problem, using domination analysis. We compute bounds for the domination number of some heuristics for both the fixed spectrum version of the frequency assignment problem and the minimum span frequency assignment problem. Our results show that the standard greedy algorithm does not perform well, compared to some slightly more advanced algorithms, which is what we would expect. In this thesis we furthermore give some background and motivation for the topics being investigated, as well as mentioning several open problems.
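For readers new to the constraint: an L(2,1)-labelling assigns integer labels so that adjacent vertices differ by at least 2 and vertices at distance two differ by at least 1. The minimal sketch below computes the minimum span by exhaustive search (exponential, tiny graphs only; the thesis's point is that outerplanar graphs admit a polynomial-time algorithm instead):

```python
from itertools import product

def l21_span(n, edges):
    """Minimum L(2,1)-span of a graph on vertices 0..n-1 by brute force."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    # Vertices at distance exactly two from u
    dist2 = [{w for v in adj[u] for w in adj[v] if w != u} - adj[u]
             for u in range(n)]
    # Labels 0, 2, 4, ..., 2(n-1) always work, so span <= 2(n-1)
    for span in range(0, 2 * n - 1):
        for lab in product(range(span + 1), repeat=n):
            ok_adj = all(abs(lab[u] - lab[v]) >= 2
                         for u in range(n) for v in adj[u])
            ok_d2 = all(abs(lab[u] - lab[w]) >= 1
                        for u in range(n) for w in dist2[u])
            if ok_adj and ok_d2:
                return span
    return None

print(l21_span(3, [(0, 1), (1, 2)]))  # path on 3 vertices: 3 (e.g. 2, 0, 3)
```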
APA, Harvard, Vancouver, ISO, and other styles
29

Kindap, Nihal. "On An Architecture For A Parallel Finite Field Multiplier With Low Complexity Based On Composite Fields." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605347/index.pdf.

Full text
Abstract:
In this thesis, a bit-parallel architecture for a parallel finite field multiplier with low complexity in composite fields GF((2ⁿ)ᵐ) with k = n · m (k ≤ 32) is investigated. The architecture has lower complexity when the Karatsuba-Ofman algorithm is applied for certain k. Using particular primitive polynomials for composite fields improves the complexities. We demonstrate this for the values m = 2, 4, 8 in detail. This thesis is based on the paper "A New Architecture for a Parallel Finite Field Multiplier with Low Complexity Based on Composite Fields" by Christof Paar. The whole purpose of this thesis is to understand and present a detailed description of the results of Paar's paper.
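The Karatsuba-Ofman trick mentioned above replaces one multiplication with additions. To see the mechanics (a generic software sketch over GF(2) polynomials packed into Python ints, not Paar's hardware architecture), split each operand in half and reuse three half-size products; over GF(2), addition is XOR:

```python
def gf2_mul_school(a, b):
    """Carry-less (GF(2)[x]) schoolbook multiplication of bit-packed polynomials."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    return res

def gf2_mul_karatsuba(a, b, bits=64):
    """Karatsuba over GF(2)[x]: 3 half-size multiplications instead of 4.
    a, b are polynomials packed into `bits`-bit integers."""
    if bits <= 8:                        # small base case: schoolbook
        return gf2_mul_school(a, b)
    h = bits // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h            # a = a1*x^h + a0
    b0, b1 = b & mask, b >> h
    lo = gf2_mul_karatsuba(a0, b0, h)
    hi = gf2_mul_karatsuba(a1, b1, h)
    mid = gf2_mul_karatsuba(a0 ^ a1, b0 ^ b1, h) ^ lo ^ hi
    return (hi << (2 * h)) ^ (mid << h) ^ lo   # over GF(2), + is XOR

a, b = 0b1011011101100101, 0b1010011001011101   # two 16-bit polynomials
assert gf2_mul_karatsuba(a, b, 16) == gf2_mul_school(a, b)
print(bin(gf2_mul_karatsuba(a, b, 16)))
```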
APA, Harvard, Vancouver, ISO, and other styles
30

Farhat, Mlouka. "Batch replenishment planning under capacity reservation contract." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0041.

Full text
Abstract:
We study the single-item Lot Sizing Problem (LSP) in a supply chain composed of a retailer and a supplier, integrating the buyback contract and batch ordering. The purpose is to determine a replenishment plan for the retailer that satisfies his deterministic demands over a finite horizon, while minimizing the procurement and inventory costs. Regarding the procurement cost, we assume two different structures: FTL (Full Truck Load) and OFB (Only Full Batch). We consider three types of buyback contract: with fixed return periods, with a time limit on returns, and with returns permitted only in procurement periods. Each contract is characterized by the maximum return percentage, being either equal to 100% (full return) or less than 100% (partial return). For the LSP under the buyback contract with fixed return periods, we assume the concept of lost sales. Another concept considered in the LSPs under the three types of buyback contract is the disposal of the unsold and unreturned quantities. We model these different LSP extensions as Mixed Integer Linear Programs (MILP). Thereafter, we develop exact polynomial-time dynamic programming algorithms for some extensions and show the NP-hardness of others. For each problem solved in polynomial time, we compare the efficiency and the limits of the proposed algorithm with those of four MILP formulations by performing different tests. Finally, we propose mathematical models for LSPs under other types of capacity reservation contract in the deterministic and multi-period case.
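To make the OFB (Only Full Batch) setting concrete, a minimal dynamic program for the plain single-item case under assumed data (batch size B, fixed cost per batch, linear holding cost; none of the thesis's contract variants with returns) can be sketched as:

```python
from functools import lru_cache

def ofb_plan(d, B, cost_batch, h):
    """Min-cost replenishment when orders must be full batches of size B.
    d[t]: demand in period t; cost_batch: cost per batch; h: holding cost
    per unit carried to the next period. Returns the optimal total cost."""
    T = len(d)
    tail = [0] * (T + 1)                 # demand remaining from period t on
    for t in range(T - 1, -1, -1):
        tail[t] = tail[t + 1] + d[t]

    @lru_cache(maxsize=None)
    def f(t, inv):
        if t == T:
            return 0.0
        best = float("inf")
        # enough batches to cover d[t], never more than remaining demand needs
        q_min = max(0, -(-(d[t] - inv) // B))           # ceil((d[t]-inv)/B)
        q_max = -(-(tail[t] - inv) // B) if tail[t] > inv else 0
        for q in range(q_min, q_max + 1):
            end_inv = inv + q * B - d[t]
            best = min(best, q * cost_batch + h * end_inv + f(t + 1, end_inv))
        return best

    return f(0, 0)

print(ofb_plan(d=[3, 2, 4, 1], B=5, cost_batch=7.0, h=0.5))  # 15.5
```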
APA, Harvard, Vancouver, ISO, and other styles
31

Tlilane, Lydia. "Les matroïdes et leur implication dans l'allocation de ressources indivisibles : algorithmes d'approximation avec garantie de performance." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090068/document.

Full text
Abstract:
In this thesis, we are interested in collective decision-making. The objective is to find a tradeoff solution for problems that are evaluated from multiple points of view. We consider problems having a matroid structure. Matroid theory is central in combinatorial optimization: it has helped to unify apparently separate structures like forests and matchings in graphs, and it has produced efficient algorithms for solving non-trivial optimization problems in polynomial time. We aim to provide polynomial-time centralized and decentralized approximation algorithms for finding a tradeoff solution which is a base of the matroid. The tradeoff solution must also be fair for all the members of the community. We are particularly interested in the issue of the fair division of indivisible goods, which is central in computational social choice and which can be modeled with matroids.
APA, Harvard, Vancouver, ISO, and other styles
32

Masetti, Masha. "Product Clustering e Machine Learning per il miglioramento dell'accuratezza della previsione della domanda: il caso Comer Industries S.p.A." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The long lead times of the Chinese supply chain of Comer Industries S.p.A. force the company to order materials six months in advance, at which point customers are often not yet aware of the quantities of material they will need. In order to respond to customers while maintaining the high service level historically guaranteed by Comer Industries, it is essential to order material based on demand forecasts. However, the current forecasts are not sufficiently accurate. The goal of this research is to identify a possible method for increasing the accuracy of demand forecasts. Could the use of Artificial Intelligence contribute positively to improving forecast accuracy? To answer the research question, the K-Means and hierarchical clustering algorithms were implemented in Visual Basic for Applications in order to divide the products into clusters based on common components. The demand patterns were then analysed. By implementing different Machine Learning algorithms on Google Colaboratory, the resulting accuracies were compared and an optimal forecasting algorithm was identified for each demand profile. Finally, with the forecasts produced, K-Means yielded an accuracy improvement of about 54.62% over the initial accuracy and a 47% saving in safety stock holding costs, while hierarchical clustering yielded an accuracy improvement of 11.15% and a 45% saving on current costs. It was therefore concluded that product clustering could contribute positively to forecast accuracy. Moreover, Machine Learning proved to be an ideal tool for identifying optimal solutions both within the clustering algorithms and within the forecasting methods.
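As a minimal illustration of the clustering step described above (with invented toy data; the thesis's actual implementation was in VBA and on Google Colaboratory), products can be encoded as binary bill-of-materials vectors and grouped with scikit-learn's KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: products; columns: 1 if the product uses that component (toy BOM data)
bom = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 0, 1, 0, 0],
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(bom)
print(km.labels_)  # products sharing components land in the same cluster
```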
APA, Harvard, Vancouver, ISO, and other styles
33

"New polynomial-time cycle-canceling algorithms for minimum cost flows." Sloan School of Management, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/2630.

Full text
Abstract:
by P.T. Sokkalingam, Ravindra K. Ahuja, James B. Orlin.
Cover title.
Includes bibliographical references (p. 18).
Supported by a grant from the United Parcel Service and by the Office of Naval Research under grant N00014-96-1-0051.
APA, Harvard, Vancouver, ISO, and other styles
34

"Polynomial time algorithms for finite horizon, stationary Markov decision processes." Massachusetts Institute of Technology, Laboratory for Information and Decision Systems, 1988. http://hdl.handle.net/1721.1/3070.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ozen, Hasan Cagan. "Long Time Propagation of Stochasticity by Dynamical Polynomial Chaos Expansions." Thesis, 2017. https://doi.org/10.7916/D8WH32C5.

Full text
Abstract:
Stochastic differential equations (SDEs) and stochastic partial differential equations (SPDEs) play an important role in many areas of engineering and applied sciences such as atmospheric sciences, mechanical and aerospace engineering, geosciences, and finance. Equilibrium statistics and long-time solutions of these equations are pertinent to many applications. Typically, these models contain several uncertain parameters which need to be propagated in order to facilitate uncertainty quantification and prediction. Correspondingly, in this thesis, we propose a generalization of the Polynomial Chaos (PC) framework for long-time solutions of SDEs and SPDEs driven by Brownian motion forcing. Polynomial chaos expansions (PCEs) allow us to propagate uncertainties in the coefficients of these equations to the statistics of their solutions. Their main advantages are: (i) they replace stochastic equations by systems of deterministic equations; and (ii) they provide fast convergence. Their main challenge is that the computational cost becomes prohibitive when the dimension of the parameters modeling the stochasticity is even moderately large. In particular, for equations with Brownian motion forcing, the long-time simulation by PC-based methods is notoriously difficult as the dimension of stochastic variables increases with time. With the goal in mind to deliver computationally efficient numerical algorithms for stochastic equations in the long time, our main strategy is to leverage the intrinsic sparsity in the dynamics by identifying the influential random parameters and construct spectral approximations to the solutions in terms of those relevant variables. Once this strategy is employed dynamically in time, using online constructions, approximations can retain their sparsity and accuracy; even for long times. To this end, exploiting Markov property of Brownian motion, we present a restart procedure that allows PCEs to expand the solutions at future times in terms of orthogonal polynomials of the measure describing the solution at a given time and the future Brownian motion. In case of SPDEs, the Karhunen-Loeve expansion (KLE) is applied at each restart to select the influential variables and keep the dimensionality minimal. Using frequent restarts and low degree polynomials, the algorithms are able to capture long-time solutions accurately. We will also introduce, using the same principles, a similar algorithm based on a stochastic collocation method for the solutions of SDEs. We apply the methods to the numerical simulation of linear and nonlinear SDEs, and stochastic Burgers and Navier-Stokes equations with white noise forcing. Our methods also allow us to incorporate time-independent random coefficients such as a random viscosity. We propose several numerical simulations, and show that the algorithms compare favorably with standard Monte Carlo methods in terms of accuracy and computational times. To demonstrate the efficiency of the algorithms for long-time simulations, we compute invariant measures of the solutions when they exist.
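To make the restart idea concrete, here is a hedged toy sketch, not the author's code: for geometric Brownian motion the one-step multiplier has an explicit Hermite chaos expansion, so re-expanding in the fresh Brownian increment at every step keeps the stochastic dimension fixed at one while moments are propagated over long times. All parameter values are illustrative.

```python
import numpy as np
from math import exp, factorial, sqrt

# Toy dynamical-PCE sketch for dX = mu*X dt + sigma*X dW. Over one step,
# X_{k+1} = X_k * exp((mu - sigma^2/2)*dt) * exp(c*xi), with c = sigma*sqrt(dt)
# and xi ~ N(0,1). Probabilists' Hermite polynomials give the exact expansion
#   exp(c*xi) = exp(c^2/2) * sum_n (c^n / n!) * He_n(xi),
# so truncating at low degree and "restarting" at each step (expanding in the
# fresh Brownian increment) keeps the stochastic dimension fixed at one.
mu, sigma, dt, T, N = 0.1, 0.3, 0.01, 5.0, 4      # N = truncation degree
steps = int(T / dt)
c = sigma * sqrt(dt)
drift = exp((mu - sigma**2 / 2) * dt)

# PCE coefficients a_n of exp(c*xi) in He_n(xi):
a = np.array([exp(c**2 / 2) * c**n / factorial(n) for n in range(N + 1)])

m1, m2 = 1.0, 1.0     # first and second moment of X_0 = 1 (deterministic)
for _ in range(steps):
    # xi is independent of X_k; E[He_n(xi)] = 0 for n > 0, E[He_n(xi)^2] = n!
    m1 *= drift * a[0]
    m2 *= drift**2 * sum(a[n]**2 * factorial(n) for n in range(N + 1))

print("PCE   mean, 2nd moment:", m1, m2)
print("exact mean, 2nd moment:", exp(mu * T), exp((2 * mu + sigma**2) * T))
```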
APA, Harvard, Vancouver, ISO, and other styles
36

Smith, Ronald Douglas. "A polynomial time heuristic algorithm for certain instances of 3-partition." 2014. http://liblink.bsu.edu/uhtbin/catkey/1749602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

"Polynomial-time highest-gain augmenting path algorithms for the generalized circulation problem." Sloan School of Management, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/2626.

Full text
Abstract:
by Donald Goldfarb, Zhiying Jin, James B. Orlin.
Includes bibliographical references (p. 15-16).
Supported in part by NSF grants DMS 94-14438 and DMS 95-27124, in part by DOE grant DE-FG02-92ER25126, and by grants from UPS and ONR (N00014-96-1-0051).
APA, Harvard, Vancouver, ISO, and other styles
38

Baysan, Mehmet. "Polynomial time exact solutions for forwarding set problems in wireless ad hoc networks /." 2008. http://proquest.umi.com/pqdweb?did=1654502121&sid=3&Fmt=2&clientId=10361&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

"Polynomial-time algorithms for linear programming based only on primal scaling and projected gradients of a potential function." Sloan School of Management, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/2207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hušek, Radek. "Vlastnosti intervalových booleovských funkcí." Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-341200.

Full text
Abstract:
A Boolean function f on n variables is k-interval if, viewing the input vector as an n-bit number, f is true for, and only for, inputs from at most k given intervals. Recognizing whether a given DNF represents a k-interval function is a coNP-hard problem. This thesis shows that for DNFs from a given solvable class (a class C of DNFs is solvable if we can decide F ≡ 1 in polynomial time for any DNF F ∈ C, and C is closed under partial assignment) and for fixed k, we can decide in polynomial time whether F represents a k-interval function.
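For intuition about the definition, a brute-force check over the full truth table (feasible only for small n, and very different from the DNF setting the thesis studies) simply counts maximal runs of true points when inputs are read as n-bit numbers:

```python
# Hypothetical helper, not from the thesis: test the k-interval property
# directly from the truth table by counting maximal runs of true points.
def is_k_interval(f, n, k):
    """f: function on ints 0..2**n - 1 -> bool; True iff the true points
    of f form at most k maximal intervals of consecutive integers."""
    runs, prev = 0, False
    for x in range(2 ** n):
        cur = f(x)
        if cur and not prev:      # a new interval of true points starts here
            runs += 1
            if runs > k:
                return False
        prev = cur
    return True

# The threshold function x >= 5 on 3 bits is 1-interval; parity is not.
print(is_k_interval(lambda x: x >= 5, 3, 1))      # True
print(is_k_interval(lambda x: x % 2 == 1, 3, 2))  # False: four intervals
```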
APA, Harvard, Vancouver, ISO, and other styles
41

"A very simple polynomial-time algorithm for linear programming." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/3091.

Full text
Abstract:
by Paul Tseng.
Caption title. "September 1988."
Includes bibliographical references.
This research is partially supported by the U.S. Army Research Office, contract DAAL03-86-K-0171, and by the National Science Foundation under grant NSF-ECS-8519058.
APA, Harvard, Vancouver, ISO, and other styles
42

"A simple polynomial-time algorithm for convex quadratic programming." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/3092.

Full text
Abstract:
Paul Tseng.
Caption title.
Includes bibliographical references.
This research is partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems), contract DAAL03-86-K-0171, and by the National Science Foundation under grant NSF-ECS-8519058.
APA, Harvard, Vancouver, ISO, and other styles
43

Lu, Wei Fu (呂威甫). "A Polynomial Time Algorithm for Natural Exactly Covering Systems." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/74712819775745063480.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information and Computer Engineering
ROC academic year 81 (1992-93)
A covering system is a set of congruences such that every integer satisfies at least one of them. A set of congruences is called an exactly covering system (ECS) if every integer satisfies exactly one of the congruences. Consider undirected rooted trees that contain no vertex of degree 2 except possibly the root, with every edge assigned a value as follows: (1) every edge incident with the root gets the number R, the degree of the root; (2) every edge not incident with the root gets the number S - 1, where S is the degree of the endpoint of that edge closer to the root. Such trees are called Z rooted trees. An exactly covering system whose moduli are the products of the edge values along the paths from the root to the leaves of a Z rooted tree is called a natural exactly covering system (NECS). The main result of this thesis is an algorithm that decides whether a given system is an NECS; we also prove the correctness of the algorithm and analyze its time complexity. Our algorithm uses a new idea, the L rooted tree: we prove that the NECS algorithm is in fact a process for constructing an L rooted tree, so if the algorithm constructs the L rooted tree successfully, the given set of residue classes is an NECS, and otherwise it is not.
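As a hedged illustration of the definition (a brute-force test, not the thesis's tree-based NECS algorithm): a finite set of congruences is an ECS exactly when every residue in one full period, of length the lcm of the moduli, is covered exactly once.

```python
from math import lcm   # multi-argument lcm needs Python 3.9+

# Brute-force ECS check: every integer in one full period 0 .. L-1, where
# L = lcm of all moduli, must satisfy exactly one congruence x = r (mod m).
def is_ecs(congruences):
    """congruences: list of (r, m) pairs."""
    L = lcm(*(m for _, m in congruences))
    return all(sum(x % m == r % m for r, m in congruences) == 1
               for x in range(L))

# Classic example: {0 (mod 2), 1 (mod 4), 3 (mod 4)} exactly covers Z.
print(is_ecs([(0, 2), (1, 4), (3, 4)]))   # True
print(is_ecs([(0, 2), (1, 2), (1, 4)]))   # False: 1 (mod 4) is covered twice
```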
APA, Harvard, Vancouver, ISO, and other styles
44

"A polynomial time primal network simplex algorithm for minimum cost flows." Sloan School of Management, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/2584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chan, Hing Lun Joseph. "Primality Testing is Polynomial-time: A Mechanised Verification of the AKS Algorithm." Phd thesis, 2019. http://hdl.handle.net/1885/177195.

Full text
Abstract:
We present a formalisation of the Agrawal-Kayal-Saxena (AKS) algorithm, a deterministic polynomial-time primality test. This algorithm was first announced by the AKS team in 2002 and later improved in 2004. Our work is based on the improved version, with Parts 1 and 2 aiming at a formal proof of the correctness of the algorithm, and Part 3 aiming at a formal analysis of its complexity. The entire work is carried out in the HOL4 theorem prover. The correctness of the AKS algorithm relies on a main theorem developed by the AKS team, based on the theory of finite fields. To achieve the goal of Parts 1 and 2, we start by building up a hierarchy of HOL4 libraries for algebraic structures: from monoids, to groups, then rings and fields. Equipped with this foundation, we develop an abstract algebra library covering subgroups, quotient groups, ideals, and vector spaces. We extend the algebra library with polynomials, quotient rings, quotient fields, and finite fields. With all these we can formulate the AKS main theorem, which gives the correctness of the algorithm. For the formal proof, we need to dive into several advanced topics in finite fields, in particular the existence and uniqueness of finite fields and properties of cyclotomic polynomials. Although algebraic structures, including finite fields, have been formalised in other theorem provers, our work is the first such comprehensive library in HOL4, covering also the uniqueness of finite fields up to isomorphism. Furthermore, by casting the AKS main theorem in the context of finite fields, we can see clearly the inter-relationship of various parts of the proof. As a result, we can make slight adjustments to the published version of the AKS algorithm. These slight adjustments are minor in terms of the significance of the AKS achievement, which answered the challenge "Is PRIMES in P?" in the affirmative, but they simplify the implementation and analysis of the AKS algorithm. The AKS algorithm consists of several loops: loops for checking whether a condition still holds, and loops for searching whether a condition will hold. Thus, for the goal of Part 3, we embark on an analysis of such loops: formalising their behaviour, in particular the bound on the number of iterations. The AKS algorithm mostly involves modular computations, using numbers or manipulating polynomials. We develop tools and techniques to formally assert the recurrence properties of loop computations, with emphasis on the analysis of time-complexity behaviour. As far as we know, this approach to complexity analysis has not been done in other theorem provers. Many offshoots from this work are interesting, some even new to published proofs of the AKS algorithm. We have an elegant proof of a key result that enables us to slightly improve the bound on the AKS parameter. We present the relationship between the AKS algorithm and the AKS main theorem. We distill a picture to visualise the logic behind the proof of the AKS main theorem. We show in detail an implementation of the AKS algorithm that is suitable for loop analysis of complexity. We introduce an approach to study the time complexity of simple loops.
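As a small illustration of the kind of loop the thesis analyses, here is a hedged Python sketch, unrelated to the HOL4 sources, of the AKS parameter search, one of the "searching" loops whose iteration count must be bounded:

```python
from math import gcd, log2

def multiplicative_order(n, r):
    """Smallest k with n^k = 1 (mod r); caller must ensure gcd(n, r) = 1."""
    k, x = 1, n % r
    while x != 1:
        x = x * n % r
        k += 1
    return k

def aks_parameter(n):
    """Smallest r coprime to n with ord_r(n) > (log2 n)^2. AKS shows such an
    r exists with r = O((log n)^5), which bounds this loop's iterations."""
    bound = log2(n) ** 2
    r = 2
    while not (gcd(n, r) == 1 and multiplicative_order(n, r) > bound):
        r += 1
    return r

# The full test would continue with two "checking" loops: trial gcds for
# a <= r, then the polynomial congruences (x + a)^n = x^(n mod r) + a
# (mod x^r - 1, n) for a up to floor(sqrt(phi(r)) * log2(n)).
print(aks_parameter(97))
```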
APA, Harvard, Vancouver, ISO, and other styles
46

Kimmett, Ben. "Improvement and partial simulation of King & Saia’s expected-polynomial-time Byzantine agreement algorithm." Thesis, 2020. http://hdl.handle.net/1828/11836.

Full text
Abstract:
We present a partial implementation of King and Saia's (2016) expected-polynomial-time Byzantine agreement algorithm, which greatly speeds up Bracha's Byzantine agreement algorithm by introducing a shared coin-flip subroutine and a method for detecting adversarially controlled nodes. In addition to implementing the King-Saia algorithm, we detail a new version of the "blackboard" abstraction used to implement the shared coin flip; this improves the subroutine's resilience from t < n/4 to t < n/3 and thereby improves the resilience of the King-Saia Byzantine agreement algorithm overall. We test the King-Saia algorithm and detail a series of adversarial attacks against it; we also create a Monte Carlo simulation to further test one particular attack's level of success at biasing the shared coin flip.
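A hedged toy model of coin-biasing, far simpler than the King-Saia blackboard and not taken from the thesis, shows why a naive shared coin fails against an adaptive adversary and thus why the blackboard construction is needed:

```python
import random

# Toy shared coin: each node contributes fair +-1 flips and the sign of the
# global sum decides. An adversary controlling t of n nodes waits to see the
# honest flips, then sets all its own flips against the honest majority,
# the style of biasing attack studied via Monte Carlo simulation.
def shared_coin(n, t, flips_per_node=1, rng=random):
    honest = sum(rng.choice((-1, 1)) for _ in range((n - t) * flips_per_node))
    adversary = -t * flips_per_node if honest > 0 else t * flips_per_node
    return 1 if honest + adversary > 0 else 0

def heads_rate(n, t, trials=20_000):
    return sum(shared_coin(n, t) for _ in range(trials)) / trials

print(heads_rate(101, 0))    # about 0.5: with no adversary the coin is fair
print(heads_rate(101, 33))   # near 0: a t < n/3 adversary ruins this naive coin
```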
Graduate
APA, Harvard, Vancouver, ISO, and other styles
47

黃思綸. "A polynomial-time approximation algorithm for the constrained connected dominating set problem in wireless networks." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/56310814524978450685.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Applied Mathematics
ROC academic year 98 (2009-10)
In wireless ad hoc networks, selecting a set of nodes to form a virtual backbone has been investigated for more than two decades. It has been shown that a connected dominating set (CDS) can be used as a virtual backbone, and there are many results on finding CDSs. In this thesis, we propose a new idea: a constrained connected dominating set (CCDS), which is a CDS required to include certain specified nodes, for example nodes with more remaining energy or nodes located at important positions. We propose a polynomial-time algorithm for constructing a CCDS; our algorithm works for all wireless networks and its message complexity is linear. When the given wireless network is a unit disk graph, the performance ratio of our algorithm is 8|MCDS| + 3k, where |MCDS| is the size of a minimum connected dominating set (MCDS) and k is the number of constrained nodes.
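To illustrate only the "constrained" aspect, here is a hedged greedy sketch, not the thesis's algorithm: it forces the specified nodes into a dominating set; connecting the result into a CDS with the stated performance guarantee is the harder step the thesis solves.

```python
# Greedy dominating set that must contain the given constrained nodes.
def constrained_dominating_set(adj, constrained):
    """adj: dict node -> set of neighbours; constrained: nodes that must be
    in the returned set (e.g. nodes with more remaining energy)."""
    ds = set(constrained)
    covered = set(ds)
    for v in ds:
        covered |= adj[v]
    while len(covered) < len(adj):
        # pick the node that dominates the most still-uncovered nodes
        best = max(adj, key=lambda v: len((adj[v] | {v}) - covered))
        ds.add(best)
        covered |= adj[best] | {best}
    return ds

# Path 0-1-2-3-4 with node 0 constrained:
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(constrained_dominating_set(adj, {0}))   # {0, 3}
```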
APA, Harvard, Vancouver, ISO, and other styles
48

"A method for the parametric center problem, with a strictly monotone polynomial-time algorithm for linear programming." Massachusetts Institute of Technology, Operations Research Center, 1989. http://hdl.handle.net/1721.1/5247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Churchley, Ross William. "On graph-transverse matching problems." Thesis, 2012. http://hdl.handle.net/1828/4137.

Full text
Abstract:
Given graphs G,H, is it possible to find a matching which, when deleted from G, destroys all copies of H? The answer is obvious for some inputs—notably, when G is a large complete graph the answer is “no”—but in general this can be a very difficult question. In this thesis, we study this decision problem when H is a fixed tree or cycle; our aim is to identify those H for which it can be solved efficiently. The H-transverse matching problem, TM(H) for short, asks whether an input graph admits a matching M such that no subgraph of G − M is isomorphic to H. The main goal of this thesis is the following dichotomy. When H is a triangle or one of a few small-diameter trees, there is a polynomial-time algorithm to find an H-transverse matching if one exists. However, TM(H) is NP-complete when H is any longer cycle or a tree of diameter ≥ 4. In addition, we study the restriction of these problems to structured graph classes.
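To make the decision problem concrete, here is a hedged exponential brute force for the triangle case on tiny graphs; it is illustrative only, since the thesis's point is precisely that this case admits a polynomial-time algorithm:

```python
from itertools import combinations

# Brute-force TM(H) for H a triangle: does G have a matching M such that
# G - M contains no triangle? Exponential in |E(G)|; tiny inputs only.
def has_triangle_transverse_matching(vertices, edges):
    edges = {tuple(sorted(e)) for e in edges}

    def triangle_free(es):
        return not any({(a, b), (b, c), (a, c)} <= es
                       for a, b, c in combinations(sorted(vertices), 3))

    def is_matching(m):
        used = [v for e in m for v in e]
        return len(used) == len(set(used))

    for k in range(len(edges) + 1):
        for m in combinations(sorted(edges), k):
            if is_matching(m) and triangle_free(edges - set(m)):
                return True
    return False

# K4: deleting a perfect matching leaves a 4-cycle, so the answer is yes.
V4 = range(4)
print(has_triangle_transverse_matching(V4, combinations(V4, 2)))   # True
# K6: a matching removes at most 3 of the 15 edges, and triangles survive.
V6 = range(6)
print(has_triangle_transverse_matching(V6, combinations(V6, 2)))   # False
```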
Graduate
APA, Harvard, Vancouver, ISO, and other styles
50

Gustedt, Jens. "Algorithmic Aspects of Ordered Structures." Phd thesis, 1992. http://tel.archives-ouvertes.fr/tel-00549774.

Full text
Abstract:
In this thesis we connect the theory of quasi-orderings with the theory of algorithms for certain combinatorial objects. First we develop the theory of well-quasi-orderings (WQO) in connection with maximal complexity. Then we give a general 0-1 law for hereditary properties, with consequences for average-case complexity. This average-case result is applied to the class of finite graphs equipped with the "induced subgraph" relation; we obtain that a large class of problems, including for example perfectness, admits algorithms with constant average running time. Then, generalizing a result of Damaschke for cographs, we show that the classes of finite orders and of finite graphs with bounded decomposition diameter are WQO under the "induced suborder" and "induced subgraph" relations, respectively. This leads to linear-time recognition algorithms for all hereditary properties over these objects. Our main result is that the set of finite partial orders is a well-quasi-ordering with respect to a certain relation ≤, the so-called chain-minor relation. To prove this, we introduce a related relation on finite formal languages that has the same property. As a consequence, every property that is hereditary with respect to ≤ admits a test running in O(|P|^c) time, where c depends on the property. This test is easily parallelized: on a parallel machine (CRCW PRAM) it can be implemented to run in constant time with O(|P|^c) processors.
APA, Harvard, Vancouver, ISO, and other styles