Academic literature on the topic 'Parameterised complexity analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parameterised complexity analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parameterised complexity analysis"

1

Corus, Dogan, Per Kristian Lehre, Frank Neumann, and Mojgan Pourhassan. "A Parameterised Complexity Analysis of Bi-level Optimisation with Evolutionary Algorithms." Evolutionary Computation 24, no. 1 (March 2016): 183–203. http://dx.doi.org/10.1162/evco_a_00147.

Full text
Abstract:
Bi-level optimisation problems have gained increasing interest in the field of combinatorial optimisation in recent years. In this paper, we analyse the runtime of some evolutionary algorithms for bi-level optimisation problems. We examine two NP-hard problems, the generalised minimum spanning tree problem and the generalised travelling salesperson problem in the context of parameterised complexity. For the generalised minimum spanning tree problem, we analyse the two approaches presented by Hu and Raidl (2012) with respect to the number of clusters that distinguish each other by the chosen representation of possible solutions. Our results show that a (1+1) evolutionary algorithm working with the spanning nodes representation is not a fixed-parameter evolutionary algorithm for the problem, whereas the problem can be solved in fixed-parameter time with the global structure representation. We present hard instances for each approach and show that the two approaches are highly complementary by proving that they solve each other’s hard instances very efficiently. For the generalised travelling salesperson problem, we analyse the problem with respect to the number of clusters in the problem instance. Our results show that a (1+1) evolutionary algorithm working with the global structure representation is a fixed-parameter evolutionary algorithm for the problem.
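As a rough illustration of the kind of algorithm analysed in this entry, the sketch below shows a minimal (1+1) evolutionary algorithm on bit strings with standard bit mutation. It assumes a generic black-box fitness function (OneMax is used as a placeholder); it is not the representation-specific algorithms for the generalised minimum spanning tree or travelling salesperson problems studied in the paper.

```python
# Minimal (1+1) EA sketch on {0,1}^n with standard bit mutation (rate 1/n).
# Illustrative only; assumes a generic black-box fitness function to maximise.
import random

def one_plus_one_ea(fitness, n, max_iterations=10_000):
    """Maximise `fitness` over bit strings of length n."""
    parent = [random.randint(0, 1) for _ in range(n)]
    best = fitness(parent)
    for _ in range(max_iterations):
        # Flip each bit independently with probability 1/n.
        child = [1 - b if random.random() < 1.0 / n else b for b in parent]
        value = fitness(child)
        if value >= best:          # accept ties, as in the usual (1+1) EA
            parent, best = child, value
    return parent, best

if __name__ == "__main__":
    # OneMax as a stand-in fitness function: count the number of ones.
    solution, value = one_plus_one_ea(sum, n=20)
    print(value)
```

Parameterised runtime analysis of such algorithms asks how the expected number of iterations until an optimum is found scales as a function of the form f(k)·poly(n) for a chosen problem parameter k, such as the number of clusters in the paper above.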
APA, Harvard, Vancouver, ISO, and other styles
2

Creignou, Nadia, Raïda Ktari, Arne Meier, Julian-Steffen Müller, Frédéric Olive, and Heribert Vollmer. "Parameterised Enumeration for Modification Problems." Algorithms 12, no. 9 (September 9, 2019): 189. http://dx.doi.org/10.3390/a12090189.

Full text
Abstract:
Recently, Creignou et al. (Theory Comput. Syst. 2017) introduced the class Delay FPT into parameterised complexity theory in order to capture the notion of efficiently solvable parameterised enumeration problems. In this paper, we propose a framework for parameterised ordered enumeration and show how to obtain enumeration algorithms running with an FPT delay in the context of general modification problems. We study these problems considering two different orders of solutions, namely, lexicographic order and order by size. Furthermore, we present two generic algorithmic strategies. The first one is based on the well-known principle of self-reducibility and is used in the context of lexicographic order. The second one shows that the existence of a neighbourhood structure among the solutions implies the existence of an algorithm running with FPT delay which outputs all solutions ordered non-decreasingly by their size.
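To make the notion of enumeration ordered by size with bounded delay concrete, here is a toy sketch, not the paper's framework or algorithms: it outputs all subsets of a ground set in non-decreasing order of size while doing only polynomial work between two consecutive outputs. In parameterised enumeration, one instead asks for a delay bounded by f(k)·poly(n) for a parameter k.

```python
# Toy ordered enumeration: all subsets of a ground set, non-decreasing by size.
# Only illustrates "ordered enumeration with bounded delay"; it is not the
# modification-problem framework of the cited paper.
from itertools import combinations

def enumerate_by_size(ground_set):
    items = sorted(ground_set)
    for size in range(len(items) + 1):
        for subset in combinations(items, size):
            yield set(subset)       # polynomial work per emitted solution

if __name__ == "__main__":
    for solution in enumerate_by_size({"a", "b", "c"}):
        print(sorted(solution))
```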
APA, Harvard, Vancouver, ISO, and other styles
3

Aghighi, Meysam, and Christer Bäckström. "A Multi-Parameter Complexity Analysis of Cost-Optimal and Net-Benefit Planning." Proceedings of the International Conference on Automated Planning and Scheduling 26 (March 30, 2016): 2–10. http://dx.doi.org/10.1609/icaps.v26i1.13738.

Full text
Abstract:
Aghighi and Bäckström have previously studied cost-optimal planning (COP) and net-benefit planning (NBP) for three action cost domains: the positive integers (Z_+), the non-negative integers (Z_0) and the positive rationals (Q_+). These were indistinguishable under standard complexity analysis for both problems, but separated for COP using parameterised complexity analysis. With the plan cost, k, as parameter, COP was W[2]-complete for Z_+, but para-NP-hard for both Z_0 and Q_+, i.e. presumably much harder. NBP was para-NP-hard for all three domains, thus remaining inseparable. We continue by considering combinations with several additional parameters and also the non-negative rationals (Q_0). Examples of new parameters are the plan length, l, and the largest denominator of the action costs, d. Our findings include: (1) COP remains W[2]-hard for all domains, even if combining all parameters; (2) COP for Z_0 is in W[2] for the combined parameter {k,l}; (3) COP for Q_+ is in W[2] for {k,d}; and (4) COP for Q_0 is in W[2] for {k,d,l}. For NBP we consider further additional parameters, where the most crucial one for reducing complexity is the sum of variable utilities. Our results help to understand the previous results, e.g., the separation between Z_+ and Q_+ for COP, and to refine the previous connections with empirical findings.
APA, Harvard, Vancouver, ISO, and other styles
4

Nouy, A., and C. Soize. "Random field representations for stochastic elliptic boundary value problems and statistical inverse problems." European Journal of Applied Mathematics 25, no. 3 (March 21, 2014): 339–73. http://dx.doi.org/10.1017/s0956792514000072.

Full text
Abstract:
This paper presents new results allowing an unknown non-Gaussian positive matrix-valued random field to be identified through a stochastic elliptic boundary value problem, solving a statistical inverse problem. A new general class of non-Gaussian positive-definite matrix-valued random fields, adapted to the statistical inverse problems in high stochastic dimension for their experimental identification, is introduced and its properties are analysed. A minimal parameterisation of discretised random fields belonging to this general class is proposed. Using this parameterisation of the general class, a complete identification procedure is proposed. New results of the mathematical and numerical analyses of the parameterised stochastic elliptic boundary value problem are presented. The numerical solution of this parametric stochastic problem provides an explicit approximation of the application that maps the parameterised general class of random fields to the corresponding set of random solutions. This approximation can be used during the identification procedure in order to avoid the solution of multiple forward stochastic problems. Since the proposed general class of random fields possibly contains random fields which are not uniformly bounded, a particular mathematical analysis is developed and dedicated approximation methods are introduced. In order to obtain an algorithm for constructing the approximation of a very high-dimensional map, complexity reduction methods are introduced and are based on the use of sparse or low-rank approximation methods that exploit the tensor structure of the solution which results from the parameterisation of the general class of random fields.
APA, Harvard, Vancouver, ISO, and other styles
5

Smallman, Thomas Luke, David Thomas Milodowski, Eráclito Sousa Neto, Gerbrand Koren, Jean Ometto, and Mathew Williams. "Parameter uncertainty dominates C-cycle forecast errors over most of Brazil for the 21st century." Earth System Dynamics 12, no. 4 (November 23, 2021): 1191–237. http://dx.doi.org/10.5194/esd-12-1191-2021.

Full text
Abstract:
Identification of terrestrial carbon (C) sources and sinks is critical for understanding the Earth system as well as mitigating and adapting to climate change resulting from greenhouse gas emissions. Predicting whether a given location will act as a C source or sink using terrestrial ecosystem models (TEMs) is challenging due to net flux being the difference between far larger, spatially and temporally variable fluxes with large uncertainties. Uncertainty in projections of future dynamics, critical for policy evaluation, has been determined using multi-TEM intercomparisons, for various emissions scenarios. This approach quantifies structural and forcing errors. However, the role of parameter error within models has not been determined. TEMs typically have defined parameters for specific plant functional types generated from the literature. To ascertain the importance of parameter error in forecasts, we present a Bayesian analysis that uses data on historical and current C cycling for Brazil to parameterise five TEMs of varied complexity with a retrieval of model error covariance at 1° spatial resolution. After evaluation against data from 2001–2017, the parameterised models are simulated to 2100 under four climate change scenarios spanning the likely range of climate projections. Using multiple models, each with per pixel parameter ensembles, we partition forecast uncertainties. Parameter uncertainty dominates across most of Brazil when simulating future stock changes in biomass C and dead organic matter (DOM). Uncertainty of simulated biomass change is most strongly correlated with net primary productivity allocation to wood (NPPwood) and mean residence time of wood (MRTwood). Uncertainty of simulated DOM change is most strongly correlated with MRTsoil and NPPwood. Due to the coupling between these variables and C stock dynamics being bi-directional, we argue that using repeat estimates of woody biomass will provide a valuable constraint needed to refine predictions of the future carbon cycle. Finally, evaluation of our multi-model analysis shows that wood litter contributes substantially to fire emissions, necessitating a greater understanding of wood litter C cycling than is typically considered in large-scale TEMs.
APA, Harvard, Vancouver, ISO, and other styles
6

Bodlaender, H. L., R. G. Downey, M. R. Fellows, M. T. Hallett, and H. T. Wareham. "Parameterized complexity analysis in computational biology." Bioinformatics 11, no. 1 (1995): 49–57. http://dx.doi.org/10.1093/bioinformatics/11.1.49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Witteveen, Jouke, and Leen Torenvliet. "Fixed-parameter decidability: Extending parameterized complexity analysis." Mathematical Logic Quarterly 62, no. 6 (November 15, 2016): 596–607. http://dx.doi.org/10.1002/malq.201500077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fellows, Michael, Andreas Pfandler, Frances Rosamond, and Stefan Rümmele. "The Parameterized Complexity of Abduction." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 743–49. http://dx.doi.org/10.1609/aaai.v26i1.8224.

Full text
Abstract:
Abduction is among the most fundamental reasoning methods. It is a method for reverse inference: one is interested in explaining observed behavior by finding appropriate causes. We study logic-based abduction, where knowledge is represented by propositional formulas. The computational complexity of this problem is highly intractable in many interesting settings. In this work we therefore present an extensive parameterized complexity analysis of abduction within various fragments of propositional logic together with (combinations of) natural parameters.
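For readers unfamiliar with logic-based abduction, the brute-force sketch below illustrates the problem on tiny instances: given a propositional knowledge base in CNF, a set of hypothesis literals, and a manifestation, it searches for explanations that are consistent with the knowledge base and entail the manifestation. The function names and the example are illustrative assumptions only; the paper's contribution is the parameterized complexity classification, not this exhaustive search.

```python
# Brute-force logic-based abduction on tiny CNF instances (illustration only).
# A knowledge base is a CNF over variables 1..n, given as a list of clauses in
# DIMACS style: positive/negative integers are literals.
from itertools import combinations, product

def satisfiable(clauses, n):
    """Check satisfiability of a CNF over variables 1..n by exhaustive search."""
    for assignment in product([False, True], repeat=n):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

def explanations(kb, hypotheses, manifestation, n, max_size):
    """Yield subsets E of the hypotheses, of size <= max_size, such that
    KB + E is satisfiable and KB + E entails the manifestation."""
    for size in range(max_size + 1):
        for candidate in combinations(hypotheses, size):
            extended = kb + [[lit] for lit in candidate]
            consistent = satisfiable(extended, n)
            # Entailment check: KB + E + {not m} must be unsatisfiable.
            entails = not satisfiable(extended + [[-manifestation]], n)
            if consistent and entails:
                yield set(candidate)

if __name__ == "__main__":
    # Variables: 1 = rain, 2 = sprinkler, 3 = wet grass.
    kb = [[-1, 3], [-2, 3]]          # rain -> wet, sprinkler -> wet
    for e in explanations(kb, hypotheses=[1, 2], manifestation=3, n=3, max_size=1):
        print(e)                     # prints {1} and {2}
```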
APA, Harvard, Vancouver, ISO, and other styles
9

Bäckström, Christer, Yue Chen, Peter Jonsson, Sebastian Ordyniak, and Stefan Szeider. "The Complexity of Planning Revisited — A Parameterized Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1735–41. http://dx.doi.org/10.1609/aaai.v26i1.8361.

Full text
Abstract:
The early classifications of the computational complexity of planning under various restrictions in STRIPS (Bylander) and SAS+ (Bäckström and Nebel) have influenced subsequent research in planning in many ways. We go back and reanalyse their subclasses, but this time using the more modern tool of parameterized complexity analysis. This provides new results that, together with the old results, give a more detailed picture of the complexity landscape. We demonstrate separation results not possible with standard complexity theory, which contributes to explaining why certain cases of planning have seemed simpler in practice than theory has predicted. In particular, we show that certain restrictions of practical interest are tractable in the parameterized sense of the term, and that a simple heuristic is sufficient to make a well-known partial-order planner exploit this fact.
APA, Harvard, Vancouver, ISO, and other styles
10

Bäckström, Christer, Peter Jonsson, Sebastian Ordyniak, and Stefan Szeider. "A complete parameterized complexity analysis of bounded planning." Journal of Computer and System Sciences 81, no. 7 (November 2015): 1311–32. http://dx.doi.org/10.1016/j.jcss.2015.04.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Parameterised complexity analysis"

1

Wareham, Harold Todd. "Systematic parameterized complexity analysis in computational phonology." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ37368.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bergé, Pierre. "Algorithmes pour voyager sur un graphe contenant des blocages." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS480.

Full text
Abstract:
We study NP-hard problems on graphs with blockages, seen as models of networks which are exposed to risk of failures. We treat cut problems via the parameterized complexity framework. The cutset size p is taken as a parameter. Given a set of sources {s1,...,sk} and a target t, we propose an algorithm which builds a small edge cut of size p separating at least r sources from t. This NP-complete problem is called Partial One-Target Cut. It belongs to the family of multiterminal cut problems. Our algorithm is fixed-parameter tractable (FPT) as its execution takes $2^{O(p^2)}n^{O(1)}$ time. We prove that the vertex version of this problem, which imposes cuts to contain vertices instead of edges, is W[1]-hard. Then, we design an FPT algorithm which counts the minimum vertex (S,T)-cuts of an undirected graph in time $2^{O(p\log p)}n^{O(1)}$. We also provide numerous results on the competitive ratio of both deterministic and randomized strategies for the Canadian Traveller Problem. The optimal ratio obtained for deterministic strategies on general graphs is 2k+1, where k is a given upper bound on the number of blockages. We show that randomized strategies which do not use memory cannot improve the bound 2k+1. In addition, we discuss the tightness of lower bounds on the competitiveness of randomized strategies. The distance competitive ratio for a group of travellers, possibly equipped with telecommunication devices, is studied. Eventually, a strategy dedicated to equal-weight chordal graphs is proposed, while another one is built for graphs with small maximum (s,t)-cuts. Both strategies outperform the ratio 2k+1.
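The following sketch illustrates the online setting behind the competitive-ratio results summarised above: a traveller plans a shortest path, discovers blocked edges only upon reaching one of their endpoints, and, in this simple backtracking ("reposition"-style) variant, returns to the source and replans after each discovery. It is a minimal illustration under assumed conventions (undirected weighted graph, edge status revealed at its endpoints), not one of the strategies designed in the thesis.

```python
# Backtracking ("reposition"-style) strategy for a blocked-graph traversal,
# as an illustration of the k-Canadian Traveller setting (not the thesis's code).
import heapq

def dijkstra_path(graph, source, target, blocked):
    """Shortest path avoiding `blocked` edges; graph is {u: {v: weight}}."""
    dist = {source: 0.0}
    parent = {}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        if u == target:
            path = [u]
            while path[-1] != source:
                path.append(parent[path[-1]])
            return list(reversed(path)), d
        for v, w in graph[u].items():
            if frozenset((u, v)) in blocked:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(queue, (nd, v))
    return None, float("inf")

def reposition_strategy(graph, blocked_edges, source, target):
    """Walk from source to target; an edge's status is revealed only when the
    traveller stands at one of its endpoints. On hitting a blockage, walk back
    to the source and replan. Returns the total distance travelled."""
    known_blocked = set()
    total = 0.0
    while True:
        path, _ = dijkstra_path(graph, source, target, known_blocked)
        if path is None:
            return float("inf")           # unreachable with current knowledge
        walked = 0.0
        for u, v in zip(path, path[1:]):
            # Reveal the status of edges incident to the current position u.
            for w in graph[u]:
                if frozenset((u, w)) in blocked_edges:
                    known_blocked.add(frozenset((u, w)))
            if frozenset((u, v)) in known_blocked:
                total += 2 * walked       # walk back to the source and replan
                break
            walked += graph[u][v]
        else:
            return total + walked         # reached the target

if __name__ == "__main__":
    graph = {
        "s": {"a": 1, "b": 4},
        "a": {"s": 1, "t": 1},
        "b": {"s": 4, "t": 1},
        "t": {"a": 1, "b": 1},
    }
    blocked = {frozenset(("a", "t"))}
    print(reposition_strategy(graph, blocked, "s", "t"))   # 7.0; offline optimum is 5
```

Strategies of this backtracking flavour are the natural baseline against which deterministic ratios such as 2k+1, with k an upper bound on the number of blocked edges, are measured in this literature.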
APA, Harvard, Vancouver, ISO, and other styles
3

Molter, Hendrik. "Classic Graph Problems Made Temporal – A Parameterized Complexity Analysis." PhD thesis, supervised by Rolf Niedermeier; reviewers: Rolf Niedermeier, Thomas Erlebach, and Ralf Klasing. Berlin: Universitätsverlag der TU Berlin, 2020. http://d-nb.info/1223023125/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chinot, Geoffrey. "Localization methods with applications to robust learning and interpolation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG002.

Full text
Abstract:
This PhD thesis deals with supervised machine learning and statistics. The main goal is to use localization techniques to derive fast rates of convergence, with a particular focus on robust learning and interpolation problems. Localization methods aim to analyze localized properties of an estimator in order to obtain fast rates of convergence, that is, rates of order O(1/n), where n is the number of observations. Under assumptions such as the Bernstein condition, such rates are attainable. A robust estimator is an estimator with good theoretical guarantees under as few assumptions as possible. This question is getting more and more popular in the current era of big data. Large datasets are very likely to be corrupted, and one would like to build reliable estimators in such a setting. We show that the well-known regularized empirical risk minimizer (RERM) with a Lipschitz loss function is robust with respect to heavy-tailed noise and outliers in the labels. When the class of predictors is heavy-tailed, RERM is not reliable. In this setting, we show that minmax Median-of-Means (MOM) estimators can be a solution. By construction, minmax-MOM estimators are also robust to adversarial contamination. Interpolation problems study learning procedures with zero training error. Surprisingly, in large dimension, interpolating the data does not necessarily imply over-fitting. We study a high-dimensional Gaussian linear model and show that sometimes the over-fitting may be benign.
APA, Harvard, Vancouver, ISO, and other styles
5

Pourhassan, Mojgan. "Parameterised complexity analysis of evolutionary algorithms for combinatorial optimization problems." Thesis, 2017. http://hdl.handle.net/2440/109799.

Full text
Abstract:
Evolutionary algorithms are general problem solvers that have been successfully used in solving combinatorial optimization problems. However, due to the great amount of randomness in these algorithms, theoretical understanding of them is quite challenging. In this thesis we analyse the parameterized complexity of evolutionary algorithms on combinatorial optimization problems. Studying the parameterized complexity of these algorithms can help us understand how different parameters of problems influence the runtime behaviour of the algorithm and consequently lead us to better-performing algorithms. We focus on two NP-hard combinatorial optimization problems: the generalized travelling salesman problem (GTSP) and the vertex cover problem (VCP). For solving the GTSP, two hierarchical approaches with different neighbourhood structures have been proposed in the literature. In this thesis, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective and complementary abilities of the two approaches are pointed out by presenting instances where they mutually outperform each other. After investigating the runtime behaviour of the mentioned randomised algorithms on GTSP, we turn our attention to the VCP. Evolutionary multi-objective optimization for the classical vertex cover problem has been previously analysed in the context of parameterized complexity analysis. We extend the analysis to the weighted version of the problem. We also examine a dynamic version of the classical problem and analyse evolutionary algorithms with respect to their ability to maintain a 2-approximation. Inspired by the concept of duality, an edge-based evolutionary algorithm for solving the VCP has been introduced in the literature. Here we show that this edge-based EA is able to maintain a 2-approximation solution in the dynamic setting. Moreover, using the dual form of the problem, we extend the edge-based approach to the weighted vertex cover problem.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2017.
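As background for the 2-approximation results mentioned in the abstract, the sketch below shows the classic matching-based argument: the endpoints of any maximal matching form a vertex cover of size at most twice the optimum. This is only the textbook construction that edge-based representations exploit, not the evolutionary algorithms analysed in the thesis.

```python
def matching_based_vertex_cover(edges):
    """Greedily compute a maximal matching and return the set of its endpoints.
    The result is a vertex cover of size at most twice the optimum, because any
    cover must contain at least one endpoint of every matching edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))     # (u, v) joins the matching
    return cover

if __name__ == "__main__":
    # Path graph 1-2-3-4-5: optimum cover {2, 4}; the matching gives {1, 2, 3, 4}.
    print(matching_based_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 5)]))
```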
APA, Harvard, Vancouver, ISO, and other styles
6

Wareham, Harold. "Systematic parameterized complexity analysis in computational phonology." Thesis, 1999. https://dspace.library.uvic.ca//handle/1828/8806.

Full text
Abstract:
Many computational problems are NP-hard and hence probably do not have fast, i.e., polynomial time, algorithms. Such problems may yet have non-polynomial time algorithms, and the non-polynomial time complexities of these algorithms will be functions of particular aspects of that problem, i.e., the algorithm's running time is upper bounded by f(k)·|x|ᶜ, where f is an arbitrary function, |x| is the size of the input x to the algorithm, k is an aspect of the problem, and c is a constant independent of |x| and k. Given such algorithms, it may still be possible to obtain optimal solutions for large instances of NP-hard problems for which the appropriate aspects are of small size or value. Questions about the existence of such algorithms are most naturally addressed within the theory of parameterized computational complexity developed by Downey and Fellows. This thesis considers the merits of a systematic parameterized complexity analysis in which results are derived relative to all subsets of a specified set of aspects of a given NP-hard problem. This set of results defines an “intractability map” that shows relative to which sets of aspects algorithms whose non-polynomial time complexities are purely functions of those aspects do and do not exist for that problem. Such maps are useful not only for delimiting the set of possible algorithms for an NP-hard problem but also for highlighting those aspects that are responsible for this NP-hardness. These points will be illustrated by systematic parameterized complexity analyses of problems associated with five theories of phonological processing in natural languages, namely Simplified Segmental Grammars, finite-state transducer based rule systems, the KIMMO system, Declarative Phonology, and Optimality Theory. The aspects studied in these analyses broadly characterize the representations and mechanisms used by these theories. These analyses suggest that the computational complexity of phonological processing depends not on such details as whether a theory uses rules or constraints or has one, two, or many levels of representation but rather on the structure of the representation-relations encoded in individual mechanisms and the internal structure of the representations.
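A standard concrete example of a running time of the form f(k)·|x|ᶜ is the bounded search tree for k-Vertex Cover, sketched below with roughly O(2ᵏ·m) work for m edges. It is included only to illustrate the notion of fixed-parameter tractability around which such "intractability maps" are built; it is not one of the phonological-processing algorithms analysed in the thesis.

```python
def vertex_cover_branching(edges, k):
    """Return a vertex cover of size <= k of the graph given by `edges`,
    or None if none exists. Classic bounded search tree: pick an uncovered
    edge (u, v) and branch on taking u or v, giving O(2**k * len(edges)) time,
    i.e. an f(k) * poly(n) bound."""
    if not edges:
        return set()                 # every edge is covered
    if k == 0:
        return None                  # edges remain but no budget left
    u, v = edges[0]
    for chosen in (u, v):
        rest = [(a, b) for (a, b) in edges if chosen not in (a, b)]
        sub = vertex_cover_branching(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

if __name__ == "__main__":
    edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
    print(vertex_cover_branching(edges, 2))   # e.g. {1, 3}
```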
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Parameterised complexity analysis"

1

Woeginger, Gerhard J., and SpringerLink (Online service), eds. Parameterized and Exact Computation: 7th International Symposium, IPEC 2012, Ljubljana, Slovenia, September 12-14, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rossmanith, Peter, and SpringerLink (Online service), eds. Parameterized and Exact Computation: 6th International Symposium, IPEC 2011, Saarbrücken, Germany, September 6-8, 2011. Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rooij, Iris van, Mark Blokpoel, Johan Kwisthout, and Todd Wareham. Cognition and Intractability: A Guide to Classical and Parameterized Complexity Analysis. Cambridge University Press, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Marx, Daniel, and Peter Rossmanith. Parameterized and Exact Computation: 6th International Symposium, IPEC 2011, Saarbrücken, Germany, September 6-8, 2011. Revised Selected Papers. Springer, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Parameterised complexity analysis"

1

Dorn, Britta, and Ildikó Schlotter. "Multivariate Complexity Analysis of Swap Bribery." In Parameterized and Exact Computation, 107–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17493-3_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Neumann, Frank, and Andrew M. Sutton. "Parameterized Complexity Analysis of Randomized Search Heuristics." In Natural Computing Series, 213–48. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29414-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bonnet, Édouard, Bruno Escoffier, Vangelis Th. Paschos, and Émeric Tourniaire. "Multi-parameter Complexity Analysis for Constrained Size Graph Problems: Using Greediness for Parameterization." In Parameterized and Exact Computation, 66–77. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03898-8_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Froese, Vincent, René van Bevern, Rolf Niedermeier, and Manuel Sorge. "A Parameterized Complexity Analysis of Combinatorial Feature Selection Problems." In Mathematical Foundations of Computer Science 2013, 445–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40313-2_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jaffke, Lars, and Bart M. P. Jansen. "Fine-Grained Parameterized Complexity Analysis of Graph Coloring Problems." In Lecture Notes in Computer Science, 345–56. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57586-5_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Balasubramanian, A. R., Lucie Guillou, and Chana Weil-Kennedy. "Parameterized Analysis of Reconfigurable Broadcast Networks." In Lecture Notes in Computer Science, 61–80. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99253-8_4.

Full text
Abstract:
Reconfigurable broadcast networks (RBN) are a model of distributed computation in which agents can broadcast messages to other agents using some underlying communication topology which can change arbitrarily over the course of executions. In this paper, we conduct parameterized analysis of RBN. We consider cubes, (infinite) sets of configurations in the form of lower and upper bounds on the number of agents in each state, and we show that we can evaluate boolean combinations over cubes and reachability sets of cubes in . In particular, reachability from a cube to another cube is a -complete problem. To prove the upper bound for this parameterized analysis, we prove some structural properties about the reachability sets and the symbolic graph abstraction of RBN, which might be of independent interest. We justify this claim by providing two applications of these results. First, we show that the almost-sure coverability problem is -complete for RBN, thereby closing a complexity gap from a previous paper [3]. Second, we define a computation model using RBN, à la population protocols, called RBN protocols. We characterize precisely the set of predicates that can be computed by such protocols.
APA, Harvard, Vancouver, ISO, and other styles
7

Hermelin, Danny, and Liat Rozenberg. "Parameterized Complexity Analysis for the Closest String with Wildcards Problem." In Combinatorial Pattern Matching, 140–49. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07566-2_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kawamura, Akitoshi, Florian Steinberg, and Holger Thies. "Parameterized Complexity for Uniform Operators on Multidimensional Analytic Functions and ODE Solving." In Logic, Language, Information, and Computation, 223–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2018. http://dx.doi.org/10.1007/978-3-662-57669-4_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Carneiro, Alan Diêgo Aurélio, Fábio Protti, and Uéverton S. Souza. "Fine-Grained Parameterized Complexity Analysis of Knot-Free Vertex Deletion – A Deadlock Resolution Graph Problem." In Lecture Notes in Computer Science, 84–95. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94776-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Singh, Niraj Kumar, and Soubhik Chakraborty. "Partition Sort versus Quick Sort: A Comparative Average Case Analysis with Special Emphasis on Parameterized Complexity." In Advances in Computing and Information Technology, 107–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-31552-7_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Parameterised complexity analysis"

1

Neumann, Frank, and Andrew M. Sutton. "Parameterized Complexity Analysis of Evolutionary Algorithms." In GECCO '15: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2739482.2756562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Neumann, Frank, and Andrew M. Sutton. "Parameterized complexity analysis of evolutionary algorithms." In GECCO '14: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2598394.2605351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Grüttemeier, Niels, and Christian Komusiewicz. "Learning Bayesian Networks Under Sparsity Constraints: A Parameterized Complexity Analysis." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/586.

Full text
Abstract:
We study the problem of learning the structure of an optimal Bayesian network when additional structural constraints are posed on the network or on its moralized graph. More precisely, we consider the constraint that the moralized graph can be transformed to a graph from a sparse graph class Π by at most k vertex deletions. We show that for Π being the graphs with maximum degree 1, an optimal network can be computed in polynomial time when k is constant, extending previous work that gave an algorithm with such a running time for Π being the class of edgeless graphs [Korhonen & Parviainen, NIPS 2015]. We then show that further extensions or improvements are presumably impossible. For example, we show that when Π is the set of graphs in which each component has size at most three, then learning an optimal network is NP-hard even if k=0. Finally, we show that learning an optimal network with at most k edges in the moralized graph presumably is not fixed-parameter tractable with respect to k and that, in contrast, computing an optimal network with at most k arcs is fixed-parameter tractable in k.
APA, Harvard, Vancouver, ISO, and other styles
4

Silva, Janio Carlos Nascimento, Uéverton dos Santos Souza, and Luiz Satoru Ochi. "Algorithmic Aspects of Problems Related to Optimization, Circuits, and Parameterized Complexity." In Concurso de Teses e Dissertações. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/ctd.2022.223305.

Full text
Abstract:
This thesis investigates several aspects of computational problems related to circuits and neighborhood exploration. Supported by a vast literature, we explore notable trends in algorithms, optimization, and computational complexity; and we provide some results for each topic discussed. The thesis' contributions are organized into four projects: (i) the study of SUCCINCT MONOTONE CIRCUIT CERTIFICATION (SMCC); (ii) the study of BEST-CASE ENERGY COMPLEXITY IN SATISFYING ASSIGNMENTS (MINEC+M); (iii) the proposition of the Th-hierarchy as an alternative to the hierarchy of complexity classes so-called W-hierarchy; and (iv) the study of the MAXIMUM MULTI IMPROVEMENT problem. Over the course of these four projects, we develop polynomial and parameterized reductions, NP-completeness proofs, classical and parameterized complexity analysis, and implementations of exact algorithms and metaheuristics.
APA, Harvard, Vancouver, ISO, and other styles
5

Souza, Uéverton, Fábio Protti, Maise Da Silva, and Dieter Rautenbach. "Multivariate Investigation of NP-Hard Problems: Boundaries Between Parameterized Tractability and Intractability." In XXVIII Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/ctd.2015.9996.

Full text
Abstract:
In this thesis we present a multivariate investigation of the complexity of some NP-hard problems, i.e., we first develop a systematic complexity analysis of these problems, defining their subproblems and mapping which ones belong to each side of an “imaginary boundary” between polynomial time solvability and intractability. After that, we analyze which sets of aspects of these problems are sources of their intractability, that is, subsets of aspects for which there exists an algorithm to solve the associated problem whose non-polynomial time complexity is purely a function of those sets. Thus, we use classical and parameterized complexity in an alternate and complementary approach, to show which subproblems of the given problems are NP-hard and later to diagnose for which sets of parameters the problems are fixed-parameter tractable, or in FPT. This thesis exhibits a classical and parameterized complexity analysis of different groups of NP-hard problems. The addressed problems are divided into four groups of distinct nature, in the context of data structures, combinatorial games, and graph theory: (I) and/or graph solution and its variants; (II) flooding-filling games; (III) problems on P3-convexity; (IV) problems on induced matchings.
APA, Harvard, Vancouver, ISO, and other styles
6

Du, Zhe, Zexiang Liu, Jack Weitze, and Necmiye Ozay. "Sample Complexity Analysis and Self-regularization in Identification of Over-parameterized ARX Models." In 2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022. http://dx.doi.org/10.1109/cdc51059.2022.9993310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Boehmer, Niclas, Robert Bredereck, Piotr Faliszewski, and Rolf Niedermeier. "Winner Robustness via Swap- and Shift-Bribery: Parameterized Counting Complexity and Experiments." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/8.

Full text
Abstract:
We study the parameterized complexity of counting variants of Swap- and Shift-Bribery, focusing on the parameterizations by the number of swaps and the number of voters. Facing several computational hardness results, using sampling we show experimentally that Swap-Bribery offers a new approach to the robustness analysis of elections.
APA, Harvard, Vancouver, ISO, and other styles
8

Froese, Vincent, Pascal Kunz, and Philipp Zschoche. "Disentangling the Computational Complexity of Network Untangling." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/283.

Full text
Abstract:
We study the recently introduced network untangling problem, a variant of Vertex Cover on temporal graphs---graphs whose edge set changes over discrete time steps. There are two versions of this problem. The goal is to select at most k time intervals for each vertex such that all time-edges are covered and (depending on the problem variant) either the maximum interval length or the total sum of interval lengths is minimized. This problem has data mining applications in finding activity timelines that explain the interactions of entities in complex networks. Both variants of the problem are NP-hard. In this paper, we initiate a multivariate complexity analysis involving the following parameters: number of vertices, lifetime of the temporal graph, number of intervals per vertex, and the interval length bound. For both problem versions, we (almost) completely settle the parameterized complexity for all combinations of those four parameters, thereby delineating the border of fixed-parameter tractability.
APA, Harvard, Vancouver, ISO, and other styles
9

Nallaperuma, Samadhi, Andrew M. Sutton, and Frank Neumann. "Parameterized complexity analysis and more effective construction methods for ACO algorithms and the euclidean traveling salesperson problem." In 2013 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2013. http://dx.doi.org/10.1109/cec.2013.6557810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jia, M., R. P. Jia, and J. J. Yu. "Conceptual Design of 2-DOF Flexure-Based Sensing Mechanisms for Superconductor Gravity Gradient." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46827.

Full text
Abstract:
By employing screw theory and the freedom and constraint topology (FACT) approach, the type synthesis of a 2-DOF flexure-based sensing mechanism for superconductor gravity gradient measurement was carried out with the parameterized compliance approach. Six types of mechanism with 1R1T DOF were deduced from freedom and constraint patterns in parallel topologies. Based on the compliance analysis, one type was selected as the preferred sensing mechanism after comparing freedom, main-direction compliance, parasitic errors, precision, and complexity. To reduce the parasitic and coupling errors, optimization was performed with the parameterized compliance approach. Specific geometric properties were then presented, with a compact structure, for the measurement application. The simulations showed that the results of the analytical models were close to those of the FEA (finite element analysis) models and that the maximum errors of the compliance parameters were less than 6%. The conceptual design of the 2-DOF flexure-based sensing mechanisms can achieve the required functions.
APA, Harvard, Vancouver, ISO, and other styles
