Journal articles on the topic 'Parameterised complexity analysis'

Consult the top 50 journal articles for your research on the topic 'Parameterised complexity analysis.'

1

Corus, Dogan, Per Kristian Lehre, Frank Neumann, and Mojgan Pourhassan. "A Parameterised Complexity Analysis of Bi-level Optimisation with Evolutionary Algorithms." Evolutionary Computation 24, no. 1 (March 2016): 183–203. http://dx.doi.org/10.1162/evco_a_00147.

Abstract:
Bi-level optimisation problems have gained increasing interest in the field of combinatorial optimisation in recent years. In this paper, we analyse the runtime of some evolutionary algorithms for bi-level optimisation problems. We examine two NP-hard problems, the generalised minimum spanning tree problem and the generalised travelling salesperson problem, in the context of parameterised complexity. For the generalised minimum spanning tree problem, we analyse the two approaches presented by Hu and Raidl (2012), which differ in their chosen representation of possible solutions, with respect to the number of clusters. Our results show that a (1+1) evolutionary algorithm working with the spanning nodes representation is not a fixed-parameter evolutionary algorithm for the problem, whereas the problem can be solved in fixed-parameter time with the global structure representation. We present hard instances for each approach and show that the two approaches are highly complementary by proving that they solve each other’s hard instances very efficiently. For the generalised travelling salesperson problem, we analyse the problem with respect to the number of clusters in the problem instance. Our results show that a (1+1) evolutionary algorithm working with the global structure representation is a fixed-parameter evolutionary algorithm for the problem.
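For readers unfamiliar with the algorithm named here, the following is a minimal sketch of the generic (1+1) EA on bit strings (with OneMax as a stand-in fitness function; the representations analysed in the paper are problem-specific and more involved):

    import random

    def one_plus_one_ea(fitness, n, max_iters=10000):
        # (1+1) EA: keep a single candidate; flip each bit with probability 1/n;
        # accept the offspring if it is at least as fit as the parent.
        x = [random.randint(0, 1) for _ in range(n)]
        fx = fitness(x)
        for _ in range(max_iters):
            y = [b ^ 1 if random.random() < 1.0 / n else b for b in x]
            fy = fitness(y)
            if fy >= fx:  # accepting ties is standard for the (1+1) EA
                x, fx = y, fy
        return x, fx

    # Example: maximise the number of 1-bits (OneMax).
    best, value = one_plus_one_ea(sum, 20, max_iters=5000)

Runtime results such as those in this line of work bound the expected number of iterations of exactly this kind of loop.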
2

Creignou, Nadia, Raïda Ktari, Arne Meier, Julian-Steffen Müller, Frédéric Olive, and Heribert Vollmer. "Parameterised Enumeration for Modification Problems." Algorithms 12, no. 9 (September 9, 2019): 189. http://dx.doi.org/10.3390/a12090189.

Abstract:
Recently, Creignou et al. (Theory Comput. Syst. 2017) introduced the class Delay FPT into parameterised complexity theory in order to capture the notion of efficiently solvable parameterised enumeration problems. In this paper, we propose a framework for parameterised ordered enumeration and show how to obtain enumeration algorithms running with an FPT delay in the context of general modification problems. We study these problems considering two different orders of solutions, namely, lexicographic order and order by size. Furthermore, we present two generic algorithmic strategies. The first one is based on the well-known principle of self-reducibility and is used in the context of lexicographic order. The second one shows that the existence of a neighbourhood structure among the solutions implies the existence of an algorithm running with FPT delay which outputs all solutions ordered non-decreasingly by their size.
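The self-reducibility strategy mentioned above follows a standard pattern: grow a partial solution one decision at a time and prune with an extension test, so the delay between consecutive outputs stays bounded. A generic sketch, where extendable is a hypothetical stand-in for the problem-specific test (an FPT extension test yields FPT delay):

    def enumerate_lex(n, extendable, partial=()):
        # Output all solutions of length n in lexicographic order.
        # extendable(candidate) must answer: can this partial assignment be
        # completed to at least one solution?  With a sound test, the delay
        # between consecutive outputs is at most O(n) test calls.
        if len(partial) == n:
            yield partial
            return
        for value in (0, 1):  # lexicographic branch order
            candidate = partial + (value,)
            if extendable(candidate):
                yield from enumerate_lex(n, extendable, candidate)

    # Example: all 3-bit strings with an even number of 1s (every short
    # prefix is completable, so this simple test is sound here).
    even = lambda p: len(p) < 3 or sum(p) % 2 == 0
    print(list(enumerate_lex(3, even)))  # (0,0,0), (0,1,1), (1,0,1), (1,1,0)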
3

Aghighi, Meysam, and Christer Bäckström. "A Multi-Parameter Complexity Analysis of Cost-Optimal and Net-Benefit Planning." Proceedings of the International Conference on Automated Planning and Scheduling 26 (March 30, 2016): 2–10. http://dx.doi.org/10.1609/icaps.v26i1.13738.

Abstract:
Aghighi and Bäckström have previously studied cost-optimal planning (COP) and net-benefit planning (NBP) for three action cost domains: the positive integers (Z_+), the non-negative integers (Z_0) and the positive rationals (Q_+). These were indistinguishable under standard complexity analysis for both problems, but separated for COP using parameterised complexity analysis. With the plan cost, k, as parameter, COP was W[2]-complete for Z_+, but para-NP-hard for both Z_0 and Q_+, i.e. presumably much harder. NBP was para-NP-hard for all three domains, thus remaining inseparable. We continue by considering combinations with several additional parameters and also the non-negative rationals (Q_0). Examples of new parameters are the plan length, l, and the largest denominator of the action costs, d. Our findings include: (1) COP remains W[2]-hard for all domains, even if combining all parameters; (2) COP for Z_0 is in W[2] for the combined parameter {k,l}; (3) COP for Q_+ is in W[2] for {k,d}; and (4) COP for Q_0 is in W[2] for {k,d,l}. For NBP we consider further additional parameters, where the most crucial one for reducing complexity is the sum of variable utilities. Our results help to understand the previous results, e.g. the separation between Z_+ and Q_+ for COP, and to refine the previous connections with empirical findings.
4

Nouy, A., and C. Soize. "Random field representations for stochastic elliptic boundary value problems and statistical inverse problems." European Journal of Applied Mathematics 25, no. 3 (March 21, 2014): 339–73. http://dx.doi.org/10.1017/s0956792514000072.

Abstract:
This paper presents new results allowing an unknown non-Gaussian positive matrix-valued random field to be identified through a stochastic elliptic boundary value problem, solving a statistical inverse problem. A new general class of non-Gaussian positive-definite matrix-valued random fields, adapted to the statistical inverse problems in high stochastic dimension for their experimental identification, is introduced and its properties are analysed. A minimal parameterisation of discretised random fields belonging to this general class is proposed. Using this parameterisation of the general class, a complete identification procedure is proposed. New results of the mathematical and numerical analyses of the parameterised stochastic elliptic boundary value problem are presented. The numerical solution of this parametric stochastic problem provides an explicit approximation of the application that maps the parameterised general class of random fields to the corresponding set of random solutions. This approximation can be used during the identification procedure in order to avoid the solution of multiple forward stochastic problems. Since the proposed general class of random fields possibly contains random fields which are not uniformly bounded, a particular mathematical analysis is developed and dedicated approximation methods are introduced. In order to obtain an algorithm for constructing the approximation of a very high-dimensional map, complexity reduction methods are introduced and are based on the use of sparse or low-rank approximation methods that exploit the tensor structure of the solution which results from the parameterisation of the general class of random fields.
5

Smallman, Thomas Luke, David Thomas Milodowski, Eráclito Sousa Neto, Gerbrand Koren, Jean Ometto, and Mathew Williams. "Parameter uncertainty dominates C-cycle forecast errors over most of Brazil for the 21st century." Earth System Dynamics 12, no. 4 (November 23, 2021): 1191–237. http://dx.doi.org/10.5194/esd-12-1191-2021.

Abstract:
Identification of terrestrial carbon (C) sources and sinks is critical for understanding the Earth system as well as mitigating and adapting to climate change resulting from greenhouse gas emissions. Predicting whether a given location will act as a C source or sink using terrestrial ecosystem models (TEMs) is challenging due to net flux being the difference between far larger, spatially and temporally variable fluxes with large uncertainties. Uncertainty in projections of future dynamics, critical for policy evaluation, has been determined using multi-TEM intercomparisons, for various emissions scenarios. This approach quantifies structural and forcing errors. However, the role of parameter error within models has not been determined. TEMs typically have defined parameters for specific plant functional types generated from the literature. To ascertain the importance of parameter error in forecasts, we present a Bayesian analysis that uses data on historical and current C cycling for Brazil to parameterise five TEMs of varied complexity with a retrieval of model error covariance at 1° spatial resolution. After evaluation against data from 2001–2017, the parameterised models are simulated to 2100 under four climate change scenarios spanning the likely range of climate projections. Using multiple models, each with per pixel parameter ensembles, we partition forecast uncertainties. Parameter uncertainty dominates across most of Brazil when simulating future stock changes in biomass C and dead organic matter (DOM). Uncertainty of simulated biomass change is most strongly correlated with net primary productivity allocation to wood (NPPwood) and mean residence time of wood (MRTwood). Uncertainty of simulated DOM change is most strongly correlated with MRTsoil and NPPwood. Due to the coupling between these variables and C stock dynamics being bi-directional, we argue that using repeat estimates of woody biomass will provide a valuable constraint needed to refine predictions of the future carbon cycle. Finally, evaluation of our multi-model analysis shows that wood litter contributes substantially to fire emissions, necessitating a greater understanding of wood litter C cycling than is typically considered in large-scale TEMs.
6

Bodlaender, H. L., R. G. Downey, M. R. Fellows, M. T. Hallett, and H. T. Wareham. "Parameterized complexity analysis in computational biology." Bioinformatics 11, no. 1 (1995): 49–57. http://dx.doi.org/10.1093/bioinformatics/11.1.49.

7

Witteveen, Jouke, and Leen Torenvliet. "Fixed-parameter decidability: Extending parameterized complexity analysis." Mathematical Logic Quarterly 62, no. 6 (November 15, 2016): 596–607. http://dx.doi.org/10.1002/malq.201500077.

8

Fellows, Michael, Andreas Pfandler, Frances Rosamond, and Stefan Rümmele. "The Parameterized Complexity of Abduction." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 743–49. http://dx.doi.org/10.1609/aaai.v26i1.8224.

Abstract:
Abduction is one of the most fundamental reasoning methods. It is a method for reverse inference: one is interested in explaining observed behavior by finding appropriate causes. We study logic-based abduction, where knowledge is represented by propositional formulas. The computational complexity of this problem is highly intractable in many interesting settings. In this work we therefore present an extensive parameterized complexity analysis of abduction within various fragments of propositional logic together with (combinations of) natural parameters.
9

Bäckström, Christer, Yue Chen, Peter Jonsson, Sebastian Ordyniak, and Stefan Szeider. "The Complexity of Planning Revisited — A Parameterized Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1735–41. http://dx.doi.org/10.1609/aaai.v26i1.8361.

Abstract:
The early classifications of the computational complexity of planning under various restrictions in STRIPS (Bylander) and SAS+ (Bäckström and Nebel) have influenced subsequent research in planning in many ways. We go back and reanalyse their subclasses, but this time using the more modern tool of parameterized complexity analysis. This provides new results that together with the old results give a more detailed picture of the complexity landscape. We demonstrate separation results not possible with standard complexity theory, which contributes to explaining why certain cases of planning have seemed simpler in practice than theory has predicted. In particular, we show that certain restrictions of practical interest are tractable in the parameterized sense of the term, and that a simple heuristic is sufficient to make a well-known partial-order planner exploit this fact.
10

Bäckström, Christer, Peter Jonsson, Sebastian Ordyniak, and Stefan Szeider. "A complete parameterized complexity analysis of bounded planning." Journal of Computer and System Sciences 81, no. 7 (November 2015): 1311–32. http://dx.doi.org/10.1016/j.jcss.2015.04.002.

11

De Haan, Ronald, Anna Roubickova, and Stefan Szeider. "Parameterized Complexity Results for Plan Reuse." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 224–31. http://dx.doi.org/10.1609/aaai.v27i1.8655.

Abstract:
Planning is a notoriously difficult computational problem of high worst-case complexity. Researchers have been investing significant efforts to develop heuristics or restrictions to make planning practically feasible. Case-based planning is a heuristic approach where one tries to reuse previous experience when solving similar problems in order to avoid some of the planning effort. Plan reuse may offer an interesting alternative to plan generation in some settings. We provide theoretical results that identify situations in which plan reuse is provably tractable. We perform our analysis in the framework of parameterized complexity, which supports a rigorous worst-case complexity analysis that takes structural properties of the input into account in terms of parameters. A central notion of parameterized complexity is fixed-parameter tractability which extends the classical notion of polynomial-time tractability by utilizing the effect of parameters. We draw a detailed map of the parameterized complexity landscape of several variants of problems that arise in the context of case-based planning. In particular, we consider the problem of reusing an existing plan, imposing various restrictions in terms of parameters, such as the number of steps that can be added to the existing plan to turn it into a solution of the planning instance at hand.
12

Meier, Arne. "Incremental FPT Delay." Algorithms 13, no. 5 (May 15, 2020): 122. http://dx.doi.org/10.3390/a13050122.

Abstract:
In this paper, we study the relationship of parameterized enumeration complexity classes defined by Creignou et al. (MFCS 2013). Specifically, we introduce two hierarchies (IncFPT_a and CapIncFPT_a) of enumeration complexity classes for incremental fpt-time in terms of exponent slices and show how they interleave. Furthermore, we define several parameterized function classes and, in particular, introduce the parameterized counterpart of TFNP, the class of nondeterministic multivalued functions with values that are polynomially verifiable and guaranteed to exist, known from Megiddo and Papadimitriou (TCS 1991). We show that the collapse of this class TF(para-NP), the restriction of the function variant of NP to total functions, to F(FPT), the function variant of FPT, is equivalent to OutputFPT coinciding with IncFPT. In addition, these collapses are shown to be equivalent to TFNP = FP, and also equivalent to P equals NP intersected with coNP. Finally, we show that these two collapses are equivalent to the collapse of IncP and OutputP in the classical setting. These results are the first direct connections of collapses in parameterized enumeration complexity to collapses in classical enumeration complexity, parameterized function complexity, classical function complexity, and computational complexity theory.
13

Bannach, Max, and Till Tantau. "On the Descriptive Complexity of Color Coding." Algorithms 14, no. 3 (March 19, 2021): 96. http://dx.doi.org/10.3390/a14030096.

Abstract:
Color coding is an algorithmic technique used in parameterized complexity theory to detect “small” structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing—purely in terms of the syntactic structure of describing formulas—when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas.
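To illustrate the technique for the classic example, here is a sketch of color coding for deciding whether a graph contains a simple path on k vertices: a single random coloring makes any fixed k-path colorful with probability at least k!/k^k, so sufficiently many independent trials make the one-sided error negligible.

    import random

    def has_k_path(adj, k, trials=200):
        # Color-coding test for a simple path on k vertices: randomly color
        # the vertices with k colors and search, by dynamic programming over
        # color subsets (exponential only in k), for a path whose vertices
        # received pairwise distinct colors.
        n = len(adj)
        for _ in range(trials):
            color = [random.randrange(k) for _ in range(n)]
            # dp[v]: color sets of colorful paths currently ending at v
            dp = [{frozenset([color[v]])} for v in range(n)]
            for _ in range(k - 1):
                nxt = [set() for _ in range(n)]
                for u in range(n):
                    for used in dp[u]:
                        for v in adj[u]:
                            if color[v] not in used:
                                nxt[v].add(used | {color[v]})
                dp = nxt
            if any(dp[v] for v in range(n)):
                return True           # found a colorful, hence simple, k-path
        return False                  # believed absent (one-sided error)

Derandomization, as discussed in the abstract, replaces the random colorings by a small explicit family of colorings that covers every k-subset of vertices.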
14

Jaffke, Lars, and Bart M. P. Jansen. "Fine-grained parameterized complexity analysis of graph coloring problems." Discrete Applied Mathematics 327 (March 2023): 33–46. http://dx.doi.org/10.1016/j.dam.2022.11.011.

15

De Haan, Ronald, and Stefan Szeider. "A Compendium of Parameterized Problems at Higher Levels of the Polynomial Hierarchy." Algorithms 12, no. 9 (September 9, 2019): 188. http://dx.doi.org/10.3390/a12090188.

Abstract:
We present a list of parameterized problems together with a complexity classification of whether they allow a fixed-parameter tractable reduction to SAT or not. These problems are parameterized versions of problems whose complexity lies at the second level of the Polynomial Hierarchy or higher.
16

Ordyniak, Sebastian, and Stefan Szeider. "Parameterized Complexity of Small Decision Tree Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 6454–62. http://dx.doi.org/10.1609/aaai.v35i7.16800.

Abstract:
We study the NP-hard problem of learning a decision tree (DT) of smallest depth or size from data. We provide the first parameterized complexity analysis of the problem and draw a detailed parameterized complexity map for the natural parameters: size or depth of the DT, maximum domain size of all features, and the maximum Hamming distance between any two examples. Our main result shows that learning DTs of smallest depth or size is fixed-parameter tractable (FPT) parameterized by the combination of all three of these parameters. We contrast this FPT-result by various hardness results that underline the algorithmic significance of the considered parameters.
17

Grüttemeier, Niels, and Christian Komusiewicz. "Learning Bayesian Networks Under Sparsity Constraints: A Parameterized Complexity Analysis." Journal of Artificial Intelligence Research 74 (July 10, 2022): 1225–67. http://dx.doi.org/10.1613/jair.1.13138.

Abstract:
We study the problem of learning the structure of an optimal Bayesian network when additional constraints are posed on the network or on its moralized graph. More precisely, we consider the constraint that the network or its moralized graph are close, in terms of vertex or edge deletions, to a sparse graph class Π. For example, we show that learning an optimal network whose moralized graph has vertex deletion distance at most k from a graph with maximum degree 1 can be computed in polynomial time when k is constant. This extends previous work that gave an algorithm with such a running time for the vertex deletion distance to edgeless graphs. We then show that further extensions or improvements are presumably impossible. For example, we show that learning optimal networks where the network or its moralized graph have maximum degree 2 or connected components of size at most c, c ≥ 3, is NP-hard. Finally, we show that learning an optimal network with at most k edges in the moralized graph presumably has no f(k) · |I|^O(1)-time algorithm and that, in contrast, an optimal network with at most k arcs can be computed in 2^O(k) · |I|^O(1) time where |I| is the total input size.
18

Hermelin, Danny, and Liat Rozenberg. "Parameterized complexity analysis for the Closest String with Wildcards problem." Theoretical Computer Science 600 (October 2015): 11–18. http://dx.doi.org/10.1016/j.tcs.2015.06.043.

19

Bentert, Matthias, Robert Bredereck, Péter Györgyi, Andrzej Kaczmarczyk, and Rolf Niedermeier. "A Multivariate Complexity Analysis of the Material Consumption Scheduling Problem." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11755–63. http://dx.doi.org/10.1609/aaai.v35i13.17397.

Abstract:
The NP-hard Material Consumption Scheduling Problem and closely related problems have been thoroughly studied since the 1980s. Roughly speaking, the problem deals with minimizing the makespan when scheduling jobs that consume non-renewable resources. We focus on the single-machine case without preemption: from time to time, the resources of the machine are (partially) replenished, thus allowing for meeting a necessary pre-condition for processing further jobs, each of which has individual resource demands. We initiate a systematic exploration of the parameterized computational complexity landscape of the problem, providing parameterized tractability as well as intractability results. Doing so, we mainly investigate how parameters related to the resource supplies influence the computational complexity. Thereby, we get a deepened understanding of this fundamental scheduling problem.
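To make the setting concrete, the sketch below (our illustration with a single resource and hypothetical names, not the paper's notation) computes the makespan of one fixed job order on a single machine with replenishments:

    def makespan(jobs, supplies):
        # jobs: list of (processing_time, demand); supplies: list of
        # (arrival_time, amount) sorted by time.  A job starts only once the
        # machine is free and enough resource has been delivered.
        t, stock, i = 0, 0, 0
        for proc, demand in jobs:
            while stock < demand:          # wait for the next replenishment
                arrival, amount = supplies[i]
                t = max(t, arrival)
                stock += amount
                i += 1
            stock -= demand
            t += proc                      # process without preemption
        return t

    # Two supplies and three jobs: the second job must wait until time 10.
    print(makespan([(3, 2), (2, 4), (1, 1)], [(0, 3), (10, 5)]))  # 13

The optimisation problem is then to choose the job order minimising this value, which is where the parameterized analysis of the paper comes in.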
20

Feldmann, Andreas Emil, Karthik C. S., Euiwoong Lee, and Pasin Manurangsi. "A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms." Algorithms 13, no. 6 (June 19, 2020): 146. http://dx.doi.org/10.3390/a13060146.

Abstract:
Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area both from the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.
21

Bulteau, Laurent, and Mathias Weller. "Parameterized Algorithms in Bioinformatics: An Overview." Algorithms 12, no. 12 (December 1, 2019): 256. http://dx.doi.org/10.3390/a12120256.

Abstract:
Bioinformatics regularly poses new challenges to algorithm engineers and theoretical computer scientists. This work surveys recent developments of parameterized algorithms and complexity for important NP-hard problems in bioinformatics. We cover sequence assembly and analysis, genome comparison and completion, and haplotyping and phylogenetics. Aside from reporting the state of the art, we give challenges and open problems for each topic.
22

Chitnis, Rajesh, MohammadTaghi Hajiaghayi, and Vahid Liaghat. "Parameterized Complexity of Problems in Coalitional Resource Games." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 620–25. http://dx.doi.org/10.1609/aaai.v25i1.7887.

Abstract:
Coalition formation is a key topic in multi-agent systems. Coalitions enable agents to achieve goals that they may not have been able to achieve on their own. Previous work has shown problems in coalition games to be computationally hard. Wooldridge and Dunne (Artif. Intell. 2006) studied the classical computational complexity of several natural decision problems in Coalitional Resource Games (CRG) - games in which each agent is endowed with a set of resources and coalitions can bring about a set of goals if they are collectively endowed with the necessary amount of resources. The input of coalitional resource games bundles together several elements, e.g., the agent set Ag, the goal set G, the resource set R, etc. Shrot et al. (AAMAS 2009) examine coalition formation problems in the CRG model using the theory of Parameterized Complexity. Their refined analysis shows that not all parts of the input act equally - some instances of the problem are indeed tractable while others still remain intractable. We answer an important question left open by Shrot, Aumann, and Kraus by showing that the SC Problem (checking whether a Coalition is Successful) is W[1]-hard when parameterized by the size of the coalition. Then via a single theme of reduction from SC, we are able to show that various problems related to resources, resource bounds, and resource conflicts introduced by Wooldridge et al. are (i) W[1]-hard or co-W[1]-hard w.r.t. the size of the coalition; and (ii) para-NP-hard or co-para-NP-hard w.r.t. |R|. When parameterized by |G| or |R| + |Ag|, we give a general algorithm which proves that these problems are indeed tractable.
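As one concrete reading of the SC problem (the data layout here is hypothetical and the CRG formalism has further detail), a coalition is successful if some goal set satisfies every member and its total demand fits within the pooled resources. Brute force over goal subsets is exponential only in the number of goals, which matches the tractability for parameter |G| noted above:

    from itertools import combinations

    def successful(coalition, agent_goals, endowment, demand):
        # True iff some goal set intersects every member's goal set (a set)
        # and its total resource demand fits the coalition's pooled endowment.
        pooled = {}
        for a in coalition:
            for res, qty in endowment[a].items():
                pooled[res] = pooled.get(res, 0) + qty
        all_goals = sorted({g for a in coalition for g in agent_goals[a]})
        for r in range(1, len(all_goals) + 1):
            for subset in combinations(all_goals, r):
                if not all(set(subset) & agent_goals[a] for a in coalition):
                    continue  # some member would be left unsatisfied
                need = {}
                for g in subset:
                    for res, qty in demand[g].items():
                        need[res] = need.get(res, 0) + qty
                if all(pooled.get(res, 0) >= q for res, q in need.items()):
                    return True
        return False

    # Two agents pooling resources; goal set {"g1"} satisfies both.
    print(successful([0, 1], {0: {"g1"}, 1: {"g1", "g2"}},
                     {0: {"wood": 2}, 1: {"gold": 1}},
                     {"g1": {"wood": 2}, "g2": {"gold": 5}}))  # True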
23

Godber, O. F., M. Chentouf, and R. Wall. "Sustainable goat production: modelling optimal performance in extensive systems." Animal Production Science 60, no. 6 (2020): 843. http://dx.doi.org/10.1071/an18481.

Abstract:
Context: Strategies for achieving greater ruminant livestock productivity are essential to meet the food demands of growing populations, but sustainable changes are difficult to identify given the inherent complexity of such systems. Systems models can address this issue by allowing the impact of potential changes to be explored. Aims: To develop a holistic systems model for goat production in an extensive Mediterranean environment which could allow changes in key management factors influencing the system to be investigated. Methods: Initially, a conceptual comprehensive stock-and-flow model of a representative Mediterranean goat production system was constructed. This was used to identify informative indicators that would represent the overall technical and economic performance of the system. Sub-models were then assembled to build the full systems model. The model was parameterised with data collected over 3 years for goat holdings in northern Morocco. Scenario analysis techniques are used to explore the strategies that optimise performance under climate and feed price challenges. Key results: Meat production is particularly important during periods of drought when increased meat yields can counteract the expected reduction in milk yields, to protect human food security, prevent excessive rangeland degradation and preserve natural nutritional resources. Feed price shocks during drought can have significant negative impacts on the system and zero feed input is shown to be a more sustainable strategy than reliance on high price feed during drought. Any alternative feed sources need to have a high forage component to reduce grazing periods significantly and promote rangeland preservation. Implications: A diverse management strategy with a mixed meat and dairy semi-intensive production is more stable than specialised dairy systems and allows goat production and financial viability of intensification to be maintained under climatic stress; maintaining meat production was necessary to optimise performance. Conclusions: The model allows improved insight into management strategies which could optimise animal husbandry performance in goat subsistence systems. However, the work also demonstrates the difficulty of constructing a truly holistic model since, to be practical, such constructs must necessarily be bounded; parameter selection and the limits to the boundaries imposed are inevitably critical.
24

Dexter, Nick, Hoang Tran, and Clayton Webster. "A mixed ℓ1 regularization approach for sparse simultaneous approximation of parameterized PDEs." ESAIM: Mathematical Modelling and Numerical Analysis 53, no. 6 (November 2019): 2025–45. http://dx.doi.org/10.1051/m2an/2019048.

Abstract:
We present and analyze a novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs. Our approach treats the numerical solution as a jointly sparse reconstruction problem through the reformulation of the standard basis pursuit denoising, where the set of jointly sparse vectors is infinite. To achieve global reconstruction of sparse solutions to parameterized elliptic PDEs over both physical and parametric domains, we combine the standard measurement scheme developed for compressed sensing in the context of bounded orthonormal systems with a novel mixed-norm based ℓ1 regularization method that exploits both energy and sparsity. In addition, we are able to prove that, with minimal sample complexity, error estimates comparable to the best s-term and quasi-optimal approximations are achievable, while requiring only a priori bounds on polynomial truncation error with respect to the energy norm. Finally, we perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach.
25

Fichte, Johannes. "Backdoors to Tractability of Answer-Set Programming." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 29, 2013): 1662–63. http://dx.doi.org/10.1609/aaai.v27i1.8505.

Abstract:
The practical results of answer-set programming indicate that classical complexity theory is insufficient as a theoretical framework to explain why modern answer-set programming solvers work fast on industrial applications. Complexity analysis by means of parameterized complexity theory seems to be promising, because we think that the reason for the gap between theory and practice is the presence of a "hidden structure" in real-world instances. The application of parameterized complexity theory to answer-set programming would give a crucial understanding of how solver heuristics work. This profound understanding can be used to improve the decision heuristics of modern solvers and yields new efficient algorithms for decision problems in the nonmonotonic setting. My research aims to explain the gap between theoretical upper bounds and the effort to solve real-world instances. I will further develop by means of parameterized complexity exact algorithms which work efficiently for real-world instances. The approach is based on backdoors which are small sets of atoms that represent "clever reasoning shortcuts" through the search space. The concept of backdoors is widely used in the areas of propositional satisfiability and constraint satisfaction. I will show how this concept can be adapted to the nonmonotonic setting and how it can be utilized to improve common algorithms.
26

Bredereck, Robert, Jiehua Chen, Sepp Hartung, Rolf Niedermeier, Ondřej Suchý, and Stefan Kratsch. "A Multivariate Complexity Analysis of Lobbying in Multiple Referenda." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1292–98. http://dx.doi.org/10.1609/aaai.v26i1.8248.

Abstract:
We extend work by Christian et al. [Review of Economic Design 2007] on lobbying in multiple referenda by first providing a more fine-grained analysis of the computational complexity of the NP-complete Lobbying problem. Herein, given a binary matrix, the columns represent issues to vote on and the rows correspond to voters making a binary vote on each issue. An issue is approved if a majority of votes has a 1 in the corresponding column. The goal is to get all issues approved by modifying a minimum number of rows to all-1-rows. In our multivariate complexity analysis, we present a more holistic view on the nature of the computational complexity of Lobbying, providing both (parameterized) tractability and intractability results, depending on various problem parameterizations to be adopted. Moreover, we show non-existence results concerning efficient and effective preprocessing for Lobbying and introduce natural variants such as Restricted Lobbying and Partial Lobbying.
27

Misra, Neeldhara, Frances Rosamond, and Meirav Zehavi. "Special Issue “New Frontiers in Parameterized Complexity and Algorithms”: Foreward by the Guest Editors." Algorithms 13, no. 9 (September 18, 2020): 236. http://dx.doi.org/10.3390/a13090236.

Abstract:
This Special Issue contains eleven articles—surveys and research papers—that represent fresh and ambitious new directions in the area of Parameterized Complexity. They provide ground-breaking research at the frontiers of knowledge, and they contribute to bridging the gap between theory and practice. The scope and impact of the field continues to increase. Promising avenues and new research challenges are highlighted in this Special Issue.
28

Lin, Mugang, Jianxin Wang, Qilong Feng, and Bin Fu. "Randomized Parameterized Algorithms for the Kidney Exchange Problem." Algorithms 12, no. 2 (February 25, 2019): 50. http://dx.doi.org/10.3390/a12020050.

Abstract:
In order to increase the potential kidney transplants between patients and their incompatible donors, kidney exchange programs have been created in many countries. In these programs, designing algorithms for the kidney exchange problem plays a critical role. The graph theory model of the kidney exchange problem is to find a maximum weight packing of vertex-disjoint cycles and chains for a given weighted digraph. In general, the length of cycles is not more than a given constant L (typically 2 ≤ L ≤ 5), and the objective function corresponds to maximizing the number of possible kidney transplants. In this paper, we study the parameterized complexity of and randomized algorithms for the kidney exchange problem without chains from a theoretical perspective. We construct two different parameterized models of the kidney exchange problem for the two cases L = 3 and L ≥ 3, and propose two randomized parameterized algorithms based on the random partitioning technique and the randomized algebraic technique, respectively.
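Purely to fix ideas about the underlying combinatorics, here is a brute-force reference solver of our own (exponential time; the paper's contribution is randomized algorithms that are far faster in the parameter) for a maximum-weight packing of vertex-disjoint directed cycles of length at most L:

    from itertools import permutations

    def best_packing(weight, n, L):
        # weight[u][v]: arc weight, or None if the arc u -> v is absent.
        # Enumerate every directed cycle on 2..L vertices once (fix the
        # rotation so that the smallest vertex comes first).
        cycles = []
        for k in range(2, L + 1):
            for perm in permutations(range(n), k):
                if perm[0] != min(perm):
                    continue
                arcs = list(zip(perm, perm[1:] + perm[:1]))
                if all(weight[u][v] is not None for u, v in arcs):
                    cycles.append((set(perm), sum(weight[u][v] for u, v in arcs)))
        # Branch on each cycle: either skip it or take it if vertex-disjoint.
        def solve(used, remaining):
            if not remaining:
                return 0
            (verts, w), *tail = remaining
            best = solve(used, tail)
            if not (verts & used):
                best = max(best, w + solve(used | verts, tail))
            return best
        return solve(set(), cycles)

    # Three patient-donor pairs forming one 3-cycle of total weight 3 (L = 3).
    W = [[None, 1, None], [None, None, 1], [1, None, None]]
    print(best_packing(W, 3, 3))  # 3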
29

Zhang, Jian Hua, Dian Wei Gao, Ke Sun, and Xin Sheng Liu. "Parameterized Modeling and Analysis of Wind Turbine Blade Using VB and ANSYS." Advanced Materials Research 774-776 (September 2013): 248–51. http://dx.doi.org/10.4028/www.scientific.net/amr.774-776.248.

Abstract:
Because of the complexity of its shape, it is very difficult to model a wind turbine blade directly in finite element software. To solve this problem, a parameterized modeling and analysis program for wind turbine blades was developed using the ANSYS secondary development technique based on VB6.0. The processes of establishing a finite element model, applying loads, and extracting post-processing data for a wind turbine blade are described. A numerical example demonstrates that the program is feasible and correct. This work can help to speed up the application of the ANSYS secondary development technique in the design and analysis of wind turbine blades and provides a reference for parameterized modeling and analysis of similar engineering problems in the future.
30

Boehmer, Niclas, Robert Bredereck, Klaus Heeger, and Rolf Niedermeier. "Bribery and Control in Stable Marriage." Journal of Artificial Intelligence Research 71 (August 24, 2021): 993–1048. http://dx.doi.org/10.1613/jair.1.12755.

Abstract:
We initiate the study of external manipulations in Stable Marriage by considering several manipulative actions as well as several manipulation goals. For instance, one goal is to make sure that a given pair of agents is matched in a stable solution, and this may be achieved by the manipulative action of reordering some agents' preference lists. We present a comprehensive study of the computational complexity of all problems arising in this way. We find several polynomial-time solvable cases as well as NP-hard ones. For the NP-hard cases, focusing on the natural parameter "budget" (that is, the number of manipulative actions one is allowed to perform), we also conduct a parameterized complexity analysis and encounter mostly parameterized hardness results.
31

Bouafia, Mousaab, Djamel Benterki, and Adnan Yassine. "Complexity analysis of interior point methods for linear programming based on a parameterized kernel function." RAIRO - Operations Research 50, no. 4-5 (October 2016): 935–49. http://dx.doi.org/10.1051/ro/2015056.

32

Boudjellal, Nawel, Hayet Roumili, and Djamel Benterki. "Complexity analysis of interior point methods for convex quadratic programming based on a parameterized Kernel function." Boletim da Sociedade Paranaense de Matemática 40 (February 2, 2022): 1–16. http://dx.doi.org/10.5269/bspm.47772.

Abstract:
The kernel functions play an important role in the amelioration of the computational complexity of algorithms. In this paper, we present a primal-dual interior-point algorithm for solving convex quadratic programming based on a new parametric kernel function. The proposed kernel function is neither logarithmic nor self-regular. We analyse the large- and small-update versions which are based on the new kernel function. We obtain the best known iteration bound for large-update methods, which significantly improves the complexity results obtained so far. This result is the first to reach this goal.
33

Abu-Khzam, Faisal N., and Karam Al Kontar. "A Brief Survey of Fixed-Parameter Parallelism." Algorithms 13, no. 8 (August 14, 2020): 197. http://dx.doi.org/10.3390/a13080197.

Abstract:
This paper provides an overview of the field of parameterized parallel complexity by surveying previous work in addition to presenting a few new observations and exploring potential new directions. In particular, we present a general view of how known FPT techniques, such as bounded search trees, color coding, kernelization, and iterative compression, can be modified to produce fixed-parameter parallel algorithms.
34

Bredereck, R., J. Chen, S. Hartung, S. Kratsch, R. Niedermeier, O. Suchý, and G. J. Woeginger. "A Multivariate Complexity Analysis of Lobbying in Multiple Referenda." Journal of Artificial Intelligence Research 50 (June 27, 2014): 409–46. http://dx.doi.org/10.1613/jair.4285.

Abstract:
Assume that each of n voters may or may not approve each of m issues. If an agent (the lobby) may influence up to k voters, then the central question of the NP-hard Lobbying problem is whether the lobby can choose the voters to be influenced so that as a result each issue gets a majority of approvals. This problem can be modeled as a simple matrix modification problem: Can one replace k rows of a binary n × m matrix by k all-1 rows such that each column in the resulting matrix has a majority of 1s? Significantly extending on previous work that showed parameterized intractability (W[2]-completeness) with respect to the number k of modified rows, we study how natural parameters such as n, m, k, or the "maximum number of 1s missing for any column to have a majority of 1s" (referred to as "gap value g") govern the computational complexity of Lobbying. Among other results, we prove that Lobbying is fixed-parameter tractable for parameter m and provide a greedy logarithmic-factor approximation algorithm which solves Lobbying even optimally if m < 5. We also show empirically that this greedy algorithm performs well on general instances. As a further key result, we prove that Lobbying is LOGSNP-complete for constant values g > 0, thus providing a first natural complete problem from voting for this complexity class of limited nondeterminism.
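The matrix formulation makes the problem easy to state in code. Below is a sketch of one natural greedy heuristic (repeatedly convert the unmodified row that fixes the most deficient columns); the paper's logarithmic-factor approximation algorithm is of this flavour, though its details may differ:

    def greedy_lobby(matrix):
        # matrix: list of 0/1 rows.  Returns indices of rows replaced by
        # all-1 rows so that every column ends up with a strict majority of 1s.
        n, m = len(matrix), len(matrix[0])
        rows = [list(r) for r in matrix]
        chosen = []

        def gaps():
            majority = n // 2 + 1
            return [max(0, majority - sum(r[c] for r in rows)) for c in range(m)]

        while any(gaps()):
            g = gaps()
            # Converting row i turns its 0s into 1s; count the deficient
            # columns that this actually helps.
            def benefit(i):
                return sum(1 for c in range(m) if rows[i][c] == 0 and g[c] > 0)
            i = max((i for i in range(n) if i not in chosen), key=benefit)
            rows[i] = [1] * m
            chosen.append(i)
        return chosen

    print(greedy_lobby([[0, 1, 0], [1, 0, 0], [0, 0, 1]]))  # e.g. [0, 1]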
35

Nguyen, Thanh H., Mason Wright, Michael P. Wellman, and Satinder Singh. "Multistage Attack Graph Security Games: Heuristic Strategies, with Empirical Game-Theoretic Analysis." Security and Communication Networks 2018 (December 13, 2018): 1–28. http://dx.doi.org/10.1155/2018/2864873.

Abstract:
We study the problem of allocating limited security countermeasures to protect network data from cyber-attacks, for scenarios modeled by Bayesian attack graphs. We consider multistage interactions between a network administrator and cybercriminals, formulated as a security game. This formulation is capable of representing security environments with significant dynamics and uncertainty and very large strategy spaces. We propose parameterized heuristic strategies for the attacker and defender and provide detailed analysis of their time complexity. Our heuristics exploit the topological structure of attack graphs and employ sampling methods to overcome the computational complexity in predicting opponent actions. Due to the complexity of the game, we employ a simulation-based approach and perform empirical game analysis over an enumerated set of heuristic strategies. Finally, we conduct experiments in various game settings to evaluate the performance of our heuristics in defending networks, in a manner that is robust to uncertainty about the security environment.
36

Carneiro, Alan Diêgo Aurélio, Fábio Protti, and Uéverton dos Santos Souza. "On knot-free vertex deletion: Fine-grained parameterized complexity analysis of a deadlock resolution graph problem." Theoretical Computer Science 909 (March 2022): 97–109. http://dx.doi.org/10.1016/j.tcs.2022.01.031.

37

Dong, Xiao, and Guoyan Zheng. "Matching Parameterized Shapes by Nonparametric Belief Propagation." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 02 (March 2009): 209–46. http://dx.doi.org/10.1142/s0218001409007120.

Abstract:
In this paper we generalize the belief-propagation-based rigid shape matching algorithm to parameterized shape matching based on nonparametric belief propagation. We construct a local-global shape descriptor based cost function to compare the distances among landmarks in each data set, which is equivalent to the Hamiltonian of a spin glass. The constructed cost function is immune to rigid transformations; therefore the parameterized shape matching can be achieved by searching for the optimal shape parameter and the correspondence assignment that minimize the cost function. The optimization procedure is then approximated by a Monte Carlo simulation based MAP estimation on a graphical model, i.e. nonparametric belief propagation. Experiments on a principal component analysis (PCA) based point distribution model (PDM) of the proximal femur illustrate the effects of two key factors, the topology of the graphical model and the renormalization of the shape parameters of the parameterized shape. Other factors that can influence its performance and its computational complexity are also discussed.
38

Pourhassan, Mojgan, Feng Shi, and Frank Neumann. "Parameterized Analysis of Multiobjective Evolutionary Algorithms and the Weighted Vertex Cover Problem." Evolutionary Computation 27, no. 4 (December 2019): 559–75. http://dx.doi.org/10.1162/evco_a_00255.

Abstract:
Evolutionary multiobjective optimization for the classical vertex cover problem has been analysed in Kratsch and Neumann (2013) in the context of parameterized complexity analysis. This article extends the analysis to the weighted vertex cover problem in which integer weights are assigned to the vertices and the goal is to find a vertex cover of minimum weight. Using an alternative mutation operator introduced in Kratsch and Neumann (2013), we provide a fixed-parameter evolutionary algorithm with respect to [Formula: see text], the cost of an optimal solution for the problem. Moreover, we present a multiobjective evolutionary algorithm with standard mutation operator that keeps the population size in a polynomial order by means of a proper diversity mechanism, and therefore manages to find a 2-approximation in expected polynomial time. We also introduce a population-based evolutionary algorithm which finds a [Formula: see text]-approximation in expected time [Formula: see text].
39

Faran, Rachel, and Orna Kupferman. "A Parametrized Analysis of Algorithms on Hierarchical Graphs." International Journal of Foundations of Computer Science 30, no. 06n07 (September 2019): 979–1003. http://dx.doi.org/10.1142/s0129054119400252.

Abstract:
Hierarchical graphs are used in order to describe systems with a sequential composition of sub-systems. A hierarchical graph consists of a vector of subgraphs. Vertices in a subgraph may “call” other subgraphs. The reuse of subgraphs, possibly in a nested way, causes hierarchical graphs to be exponentially more succinct than equivalent flat graphs. Early research on hierarchical graphs and the computational price of their succinctness suggests that there is no strong correlation between the complexity of problems when applied to flat graphs and their complexity in the hierarchical setting. That is, the complexity in the hierarchical setting is higher, but all “jumps” in complexity up to an exponential one are exhibited, including no jumps in some problems. We continue the study of the complexity of algorithms for hierarchical graphs, with the following contributions: (1) In many applications, the subgraphs have a small, often a constant, number of exit vertices, namely vertices from which control returns to the calling subgraph. We offer a parameterized analysis of the complexity and point to problems where the complexity becomes lower when the number of exit vertices is bounded by a constant. (2) We describe a general methodology for algorithms on hierarchical graphs. The methodology is based on an iterative compression of subgraphs in a way that maintains the solution to the problems and results in subgraphs whose size depends only on the number of exit vertices, and (3) we handle labeled hierarchical graphs, where edges are labeled by letters from some alphabet, and the problems refer to the languages of the graphs.
40

Toms, Benjamin A., Jeffrey B. Basara, and Yang Hong. "Usage of Existing Meteorological Data Networks for Parameterized Road Ice Formation Modeling." Journal of Applied Meteorology and Climatology 56, no. 7 (July 2017): 1959–76. http://dx.doi.org/10.1175/jamc-d-16-0199.1.

Abstract:
A road ice prediction model was developed on the basis of existing data networks with an objective of providing a computationally efficient method of road ice forecasting. Icing risk was separated into three distinct road ice formation mechanisms: hoarfrost, freezing fog, and frozen precipitation. Hoarfrost parameterizations were mostly gathered as presented in previous literature, with modifications incorporated to account for diffusional ice crystal growth-rate complexity. Freezing-fog parameterizations were based on previous fog typological analyses under the assumption that fog formation mechanisms are similar in above- and subfreezing temperatures. Frozen-precipitation parameterizations were primarily unique to the developed model but were also partially based on previous research. Diagnostic analyses use a synthesis of Automated Surface Observing System (ASOS), Automated Weather Observing System (AWOS), and Oklahoma Mesonet data. Prognostic analyses utilize the National Digital Forecast Database (NDFD), a 2.5-km gridded database of forecast meteorological variables output from National Weather Service Weather Forecast Offices. A frequency analysis was performed using the diagnostic parameterizations to determine general road icing risk across the state of Oklahoma. The frequency analyses aligned well with expected temporal maxima and confirmed the viability of the developed parameterizations. Further, a fog typological analysis showed the implemented freezing-fog-formation parameterizations to capture 89% of fog events. These results suggest that the developed model, identified as the Road-Ice Model (RIM), may be implemented as a robust option for analyzing the potential for road ice development based on the background meteorological environment.
41

Kowaluk, Mirosław, and Andrzej Lingas. "A Multi-Dimensional Matrix Product—A Natural Tool for Parameterized Graph Algorithms." Algorithms 15, no. 12 (November 28, 2022): 448. http://dx.doi.org/10.3390/a15120448.

Abstract:
We introduce the concept of a k-dimensional matrix product D of k matrices A_1, …, A_k of sizes n_1 × n, …, n_k × n, respectively, where D[i_1, …, i_k] is equal to ∑_{ℓ=1}^{n} A_1[i_1, ℓ] × … × A_k[i_k, ℓ]. We provide upper bounds on the time complexity of computing the product and solving related problems of computing witnesses and maximum witnesses of the Boolean version of the product in terms of the time complexity of rectangular matrix multiplication. The multi-dimensional matrix product framework is useful in the design of parameterized graph algorithms. First, we apply our results on the multi-dimensional matrix product to the fundamental problem of detecting/counting copies of a fixed pattern graph in a host graph. The recent progress on this problem has not included complete pattern graphs, i.e., cliques (and their complements, i.e., edge-free pattern graphs, in the induced setting). The fastest algorithms for the aforementioned patterns are based on a reduction to triangle detection/counting. We provide an alternative simple method of detection/counting copies of fixed size cliques based on the multi-dimensional matrix product. It is at least as time efficient as the triangle method in the cases of K_4 and K_5. Next, we show an immediate reduction of the k-dominating set problem to the multi-dimensional matrix product. It implies the W[2] hardness of the problem of computing the k-dimensional Boolean matrix product. Finally, we provide an efficient reduction of the problem of finding the lowest common ancestors for all k-tuples of vertices in a directed acyclic graph to the problem of finding witnesses of the Boolean variant of the multi-dimensional matrix product. Although the time complexities of the algorithms resulting from the aforementioned reductions solely match those of the known algorithms, the advantage of our algorithms is simplicity. Our algorithms also demonstrate the versatility of the multi-dimensional matrix product framework.
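The definition translates directly into code. A naive sketch (ours) evaluates D entry by entry in O(n · n_1 ⋯ n_k) multiplications; the paper's bounds come from doing better via fast rectangular matrix multiplication:

    from itertools import product
    from math import prod

    def multi_dim_product(mats):
        # mats[j] is an (n_j x n) matrix as a list of lists; the result maps
        # each index tuple (i_1, ..., i_k) to
        # sum over l of mats[0][i_1][l] * ... * mats[k-1][i_k][l].
        n = len(mats[0][0])
        ranges = [range(len(A)) for A in mats]
        return {
            idx: sum(prod(A[i][l] for A, i in zip(mats, idx)) for l in range(n))
            for idx in product(*ranges)
        }

    # For k = 2 this is A * B^T: D[(0, 1)] is row 0 of A dotted with row 1 of B.
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(multi_dim_product([A, B])[(0, 1)])  # 1*7 + 2*8 = 23

The k = 2 case recovering multiplication against a transpose is exactly why rectangular matrix multiplication bounds enter the picture.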
42

Sato, T., and Y. Kameya. "Parameter Learning of Logic Programs for Symbolic-Statistical Modeling." Journal of Artificial Intelligence Research 15 (December 1, 2001): 391–454. http://dx.doi.org/10.1613/jair.912.

Abstract:
We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, possible world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probability generalized for logic programs. The complexity analysis shows that when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks that have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.
43

Xia, Lirong, and Weiqiang Zheng. "The Smoothed Complexity of Computing Kemeny and Slater Rankings." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 5742–50. http://dx.doi.org/10.1609/aaai.v35i6.16720.

Abstract:
The computational complexity of winner determination under common voting rules is a classical and fundamental topic in the field of computational social choice. Previous work has established the NP-hardness of winner determination under some commonly-studied voting rules, such as the Kemeny rule and the Slater rule. In a recent position paper, Baumeister, Hogrebe, and Rothe (2020) questioned the relevance of the worst-case nature of NP-hardness in social choice and proposed to conduct smoothed complexity analysis (Spielman and Teng 2009) under Bläser and Manthey's (2015) framework. In this paper, we develop the first smoothed complexity results for winner determination in voting. We prove the smoothed hardness of Kemeny and Slater using the classical smoothed runtime analysis, and prove a parameterized typical-case smoothed easiness result for Kemeny. We also attempt to apply Bläser and Manthey's (2015) smoothed complexity framework in social choice contexts by proving that the framework categorizes an always-exponential-time brute force search algorithm as being smoothed poly-time, under a natural noise model based on the well-studied Mallows model in social choice and statistics. Overall, our results show that smoothed complexity analysis in computational social choice is a challenging and fruitful topic.
44

Rostkier-Edelstein, Dorita, and Joshua P. Hacker. "The Roles of Surface-Observation Ensemble Assimilation and Model Complexity for Nowcasting of PBL Profiles: A Factor Separation Analysis." Weather and Forecasting 25, no. 6 (December 1, 2010): 1670–90. http://dx.doi.org/10.1175/2010waf2222435.1.

Abstract:
Recent results showed the ability of surface-observation assimilation with a single-column model (SCM) and an ensemble filter (EF) to skillfully estimate the vertical structure of the PBL when only climatological information is provided for initialization and forcing. The present study quantifies the relative benefits of model complexity, compared to surface-observation assimilation, for making 30-min SCM ensemble predictions (nowcasts). The SCM is initialized and forced by timely mesoscale forecasts, making it capable of providing flow-dependent probabilistic very short-range forecasts of PBL profiles wherever surface observations are available. Factor separation (FS) analysis measures the relative contributions to skill from EF surface assimilation compared to selected SCM components: parameterized radiation and objectively scaled horizontal advection. Here, the SCM–EF system is presented and its deterministic skill (as represented by ensemble-mean error) is analyzed with FS. Results show that surface assimilation can more meaningfully contribute to the skill levels of temperature, wind, and mixing-ratio nowcasts than model enhancements under a wide range of flow scenarios. However, in the convective PBL regime surface assimilation can enhance the moist bias often observed in parameterized PBL mixing ratio profiles due to poor covariances estimated from the ensemble. Then, the SCM–EF proves useful in revealing a model deficiency. Externally imposed horizontal advection is required to provide skillful ensemble-mean forecasts when not assimilating surface observations, but can offset the benefit realized from assimilation by quickly sweeping the updated state out of the domain. The radiation scheme has a minor effect on forecast performance. It improves the nowcast surface temperature at night, and can act synergistically with assimilation to improve low-level jet predictions, but the effect above the surface steeply decreases with height. The results suggest that an SCM–EF may be helpful in wind-power and pollutant dispersion applications.
APA, Harvard, Vancouver, ISO, and other styles
45

Schlosser, Michail, Axel Schumacher, and Klaus Bellendir. "Effective Modeling of Load Applications in Composite Structures - Accuracy, Complexity, Computational Time." Key Engineering Materials 809 (June 2019): 461–66. http://dx.doi.org/10.4028/www.scientific.net/kem.809.461.

Full text
Abstract:
The simulation of load application elements requires modeling the contact point and a nonlinear analysis. Despite powerful computers, this contact analysis remains time-consuming, so replacing the contact with a simple load model would yield enormous time savings. Hertzian contact theory provides an analytical approach to the contact problem, but it assumes isotropic material behavior, which is problematic especially for fiber-reinforced structures. Nevertheless, a suitable load model can be developed for a simplified model of a bolted joint. The edge effects occurring at the boundary of the hole are determined using an approximation function (a parameterized polynomial approach). The anisotropic material behavior is represented by alternative models, or it can be integrated into the calculation by an extension of Hertzian theory. The different approaches are compared with respect to accuracy, complexity, and computing time. For reference and verification of the results, a contact model is created using the FEM software HyperMesh and OptiStruct from Altair; this contact model can also serve as an aid for creating the load model. Finally, a method is presented that reduces a contact analysis to a purely linear static structural analysis and thus enables significantly reduced computing time. The corresponding load model also gives a good representation of reality.
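As background on the analytical starting point, the textbook Hertzian solution for a sphere pressed onto a flat half-space can be evaluated in a few lines. The sketch below is our own illustration of classical isotropic Hertz theory, not the paper's extended anisotropic model, and the material values are purely illustrative assumptions.

    import math

    def hertz_sphere_on_flat(force, radius, e1, nu1, e2, nu2):
        # Classical Hertz contact of a sphere (radius R) on a half-space:
        # effective modulus 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2,
        # contact radius a = (3*F*R / (4*E*))^(1/3),
        # peak pressure p_max = 3*F / (2*pi*a^2).
        e_star = 1.0 / ((1 - nu1 ** 2) / e1 + (1 - nu2 ** 2) / e2)
        a = (3 * force * radius / (4 * e_star)) ** (1.0 / 3.0)
        p_max = 3 * force / (2 * math.pi * a ** 2)
        return a, p_max

    # Illustrative values only: steel (210 GPa, nu = 0.3) on an isotropic
    # laminate surrogate (60 GPa, nu = 0.3); units N, mm, MPa.
    print(hertz_sphere_on_flat(1000.0, 5.0, 210e3, 0.3, 60e3, 0.3))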
APA, Harvard, Vancouver, ISO, and other styles
46

Gaspers, Serge, and Kamran Najeebullah. "Optimal Surveillance of Covert Networks by Minimizing Inverse Geodesic Length." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 533–40. http://dx.doi.org/10.1609/aaai.v33i01.3301533.

Full text
Abstract:
The inverse geodesic length (IGL) is a well-known and widely used measure of network performance: it equals the sum of the inverse distances over all pairs of vertices. In network analysis, the IGL of a network is often used to assess how well heuristics perform in strengthening or weakening a network. We consider the edge-deletion problem MINIGL-ED. Formally, given a graph G, a budget k, and a target inverse geodesic length T, the question is whether there exists a subset of edges X with |X| ≤ k such that the inverse geodesic length of G − X is at most T. In this paper, we design algorithms and study the complexity of MINIGL-ED. We show that it is NP-complete and, assuming the Exponential Time Hypothesis, cannot be solved in subexponential time even when restricted to bipartite or split graphs. In terms of parameterized complexity, we consider the problem with respect to various parameters. We show that MINIGL-ED is fixed-parameter tractable for parameter T and vertex cover by modeling the problem as an integer quadratic program. We also provide FPT algorithms parameterized by twin cover and by neighborhood diversity, each combined with the deletion budget k. On the negative side, we show that MINIGL-ED is W[1]-hard parameterized by treewidth.
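To make the measure concrete, the IGL of an unweighted graph can be computed by breadth-first search from every vertex, as in the following sketch (our own illustration; vertex pairs in different components contribute zero).

    from collections import deque

    def inverse_geodesic_length(adj):
        # IGL = sum over unordered vertex pairs {u, v} of 1/d(u, v),
        # where adj maps each vertex to its set of neighbours.
        vertices = list(adj)
        total = 0.0
        for i, s in enumerate(vertices):
            dist = {s: 0}
            queue = deque([s])
            while queue:                      # BFS from s
                u = queue.popleft()
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
            for t in vertices[i + 1:]:        # count each pair once
                if t in dist:
                    total += 1.0 / dist[t]
        return total

Deleting a candidate edge set X and re-running this routine on G − X gives exactly the quantity that MINIGL-ED compares against the target T.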
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Rui, and Shihua Zhang. "Rethinking Influence Functions of Neural Networks in the Over-Parameterized Regime." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9082–90. http://dx.doi.org/10.1609/aaai.v36i8.20893.

Full text
Abstract:
Understanding black-box predictions of neural networks is challenging. To achieve this, early studies designed the influence function (IF) to measure the effect of removing a single training point on a neural network. However, the classic implicit Hessian-vector product (IHVP) method for calculating IF is fragile, and theoretical analysis of IF in the context of neural networks is still lacking. To this end, we utilize neural tangent kernel (NTK) theory to calculate IF for neural networks trained with regularized mean-square loss, and prove that the approximation error can be made arbitrarily small when the width is sufficiently large for two-layer ReLU networks. We analyze the error bound for the classic IHVP method in the over-parameterized regime to understand when and why it fails. In detail, our theoretical analysis reveals that (1) the accuracy of IHVP depends on the regularization term and is low under weak regularization; (2) the accuracy of IHVP correlates significantly with the probability density of the corresponding training points. We further leverage NTK theory to understand IFs better, including quantifying the complexity of influential samples and depicting the variation of IFs during training dynamics. Numerical experiments on real-world data confirm our theoretical results and demonstrate our findings.
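As a self-contained reference point for the IF definition the authors analyze, the influence I(z, z_test) = −∇L(z_test)ᵀ H⁻¹ ∇L(z) can be computed exactly for ridge regression, where the Hessian has a closed form; the sketch below is our own illustration of that formula, not the paper's NTK computation.

    import numpy as np

    def influence_ridge(X, y, x_test, y_test, lam):
        # Influence of each training point on the test loss for ridge
        # regression with loss (1/2n)||X @ theta - y||^2 + (lam/2)||theta||^2.
        n, d = X.shape
        H = X.T @ X / n + lam * np.eye(d)          # exact Hessian of the loss
        theta = np.linalg.solve(H, X.T @ y / n)    # closed-form minimiser
        g_test = (x_test @ theta - y_test) * x_test  # test-loss gradient
        h_inv_g = np.linalg.solve(H, g_test)       # the "IHVP", done exactly
        residuals = X @ theta - y
        # influence of point i: -grad_test^T H^{-1} grad_i,
        # where grad_i = residual_i * x_i for squared loss.
        return -(X @ h_inv_g) * residuals

In the over-parameterized neural setting the Hessian is only accessible implicitly, which is exactly where the fragility of iterative IHVP estimates discussed above arises.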
APA, Harvard, Vancouver, ISO, and other styles
48

Bulteau, Laurent, Guillaume Fertin, Géraldine Jean, and Christian Komusiewicz. "Sorting by Multi-Cut Rearrangements." Algorithms 14, no. 6 (May 29, 2021): 169. http://dx.doi.org/10.3390/a14060169.

Full text
Abstract:
A multi-cut rearrangement of a string S is a string S′ obtained from S by an operation called a k-cut rearrangement, which consists of (1) cutting S at a given number k of places, making S the concatenated string X1·X2·X3·…·Xk·Xk+1, where X1 and Xk+1 are possibly empty, and (2) rearranging the pieces Xi so as to obtain S′ = Xπ(1)·Xπ(2)·Xπ(3)·…·Xπ(k+1), where π is a permutation of 1,2,…,k+1 satisfying π(1)=1 and π(k+1)=k+1. Given two strings S and T built on the same multiset of characters from an alphabet Σ, the Sorting by Multi-Cut Rearrangements (SMCR) problem asks whether a given number ℓ of k-cut rearrangements suffices to transform S into T. The SMCR problem generalizes several classical genomic rearrangement problems, such as Sorting by Transpositions and Sorting by Block Interchanges. It may also model chromoanagenesis, a recently discovered phenomenon consisting of massive simultaneous rearrangements. In this paper, we study the SMCR problem from an algorithmic complexity viewpoint. More precisely, we investigate its classical and parameterized complexity, as well as its approximability, both in the general case and when S and T are permutations.
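For intuition, a single k-cut rearrangement is easy to state in code; the sketch below (names ours) applies one operation, and a transposition is recovered as the special case k = 3.

    def k_cut_rearrangement(s, cuts, perm):
        # Cut s at the k sorted positions in `cuts`, yielding pieces
        # X1..X(k+1); reorder them by `perm`, a permutation of 0..k that
        # must fix the first and last pieces (perm[0] == 0, perm[-1] == k).
        bounds = [0] + sorted(cuts) + [len(s)]
        pieces = [s[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
        assert perm[0] == 0 and perm[-1] == len(pieces) - 1
        return "".join(pieces[p] for p in perm)

    # A transposition (swapping two adjacent blocks) as a 3-cut rearrangement:
    print(k_cut_rearrangement("abcdef", [1, 3, 5], [0, 2, 1, 3]))  # -> adebcf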
APA, Harvard, Vancouver, ISO, and other styles
49

Chini, Peter, Roland Meyer, and Prakash Saivasan. "Fine-Grained Complexity of Safety Verification." Journal of Automated Reasoning 64, no. 7 (July 14, 2020): 1419–44. http://dx.doi.org/10.1007/s10817-020-09572-x.

Full text
Abstract:
We study the fine-grained complexity of Leader Contributor Reachability (LCR) and Bounded-Stage Reachability (BSR), two variants of the safety verification problem for shared-memory concurrent programs. For both problems, the memory is a single variable over a finite data domain. Our contributions are new verification algorithms and lower bounds. The latter are based on the Exponential Time Hypothesis (ETH), the problem Set Cover, and cross-compositions. LCR is the question whether a designated leader thread can reach an unsafe state when interacting with a certain number of equal contributor threads. We suggest two parameterizations: (1) by the size of the data domain D and the size of the leader L, and (2) by the size of the contributors C. We present algorithms for both cases. The key techniques are compact witnesses and dynamic programming. The algorithms run in O*((L·(D+1))^(L·D) · D^D) and O*(2^C) time, showing that both parameterizations are fixed-parameter tractable. We complement the upper bounds by (matching) lower bounds based on ETH and Set Cover. Moreover, we prove the absence of polynomial kernels. For BSR, we consider programs involving t different threads. We restrict the analysis to computations where the write permission changes s times between the threads. BSR asks whether a given configuration is reachable via such an s-stage computation. When parameterized by P, the maximum size of a thread, and t, the interesting observation is that the problem has a large number of difficult instances. Formally, we show that there is no polynomial kernel and no compression algorithm that reduces the size of the data domain D or the number of stages s to a polynomial dependence on P and t. This indicates that symbolic methods may be harder to find for this problem.
APA, Harvard, Vancouver, ISO, and other styles
50

Bannach, Max, and Sebastian Berndt. "Practical Access to Dynamic Programming on Tree Decompositions." Algorithms 12, no. 8 (August 16, 2019): 172. http://dx.doi.org/10.3390/a12080172.

Full text
Abstract:
Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last few decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold; one of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (such as computing the decomposition, preparing it, …). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code. The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle's celebrated theorem, which states that every MSO-definable problem can be solved efficiently if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model checker for a small fragment of MSO1 (that is, we do not consider "edge-set-based" problems). This fragment is powerful enough to describe many natural problems, and our model checker turns out to be very competitive against similar state-of-the-art tools.
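To convey the flavour of such table-per-node update rules, here is a minimal Python sketch of the classic dynamic program for maximum independent set on a tree (a tree decomposition of width 1); it is our own illustration of the pattern the paper's interface generalizes to arbitrary bags, not the authors' Java implementation.

    def max_independent_set_tree(adj, root):
        # Bottom-up DP: for each vertex v keep two table entries, inc[v]
        # (best solution in v's subtree with v taken) and exc[v] (v not
        # taken); children are joined with simple update rules.
        inc, exc = {}, {}
        stack, order, parent = [root], [], {root: None}
        while stack:                       # iterative DFS, parents first
            v = stack.pop()
            order.append(v)
            for w in adj[v]:
                if w != parent[v]:
                    parent[w] = v
                    stack.append(w)
        for v in reversed(order):          # children processed before parents
            inc[v], exc[v] = 1, 0
            for w in adj[v]:
                if parent.get(w) == v:
                    inc[v] += exc[w]
                    exc[v] += max(inc[w], exc[w])
        return max(inc[root], exc[root])

    # Path a-b-c-d: an optimal solution picks two vertices, e.g. {a, c}.
    adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(max_independent_set_tree(adj, "a"))  # -> 2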
APA, Harvard, Vancouver, ISO, and other styles