To view other types of publications on this topic, follow the link: Worst-case complexity analysis.

Journal articles on the topic "Worst-case complexity analysis"


Consult the top 50 journal articles for your research on the topic "Worst-case complexity analysis."

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Szirmay-Kalos, L., and G. Márton. "Worst-case versus average case complexity of ray-shooting." Computing 61, no. 2 (June 1998): 103–31. http://dx.doi.org/10.1007/bf02684409.

2

Kwas, Marek, and Youming Li. "Worst case complexity of multivariate Feynman–Kac path integration." Journal of Complexity 19, no. 6 (December 2003): 730–43. http://dx.doi.org/10.1016/s0885-064x(03)00048-7.

3

Jackowski, Tomasz. "Complexity of multilinear problems in the worst case setting." Journal of Complexity 6, no. 4 (December 1990): 389–408. http://dx.doi.org/10.1016/0885-064x(90)90030-h.

4

Milanese, M., and A. Vicino. "Information-Based Complexity and Nonparametric Worst-Case System Identification." Journal of Complexity 9, no. 4 (December 1993): 427–46. http://dx.doi.org/10.1006/jcom.1993.1028.

5

Plaskota, Leszek. "Worst Case Complexity of Problems with Random Information Noise." Journal of Complexity 12, no. 4 (December 1996): 416–39. http://dx.doi.org/10.1006/jcom.1996.0026.

6

Pemmaraju, Sriram V., and Clifford A. Shaffer. "Analysis of the worst case space complexity of a PR quadtree." Information Processing Letters 49, no. 5 (March 1994): 263–67. http://dx.doi.org/10.1016/0020-0190(94)90065-5.

7

Li, Youming, and Grzegorz W. Wasilkowski. "Worst Case Complexity of Weighted Approximation and Integration over R^d." Journal of Complexity 18, no. 1 (March 2002): 330–45. http://dx.doi.org/10.1006/jcom.2001.0632.

8

Short, Michael. "Bounds on Worst-Case Deadline Failure Probabilities in Controller Area Networks." Journal of Computer Networks and Communications 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/5196092.

Abstract:
Industrial communication networks like the Controller Area Network (CAN) are often required to operate reliably in harsh environments which expose the communication network to random errors. Probabilistic schedulability analysis can employ rich stochastic error models to capture random error behaviors, but this is most often at the expense of increased analysis complexity. In this paper, an efficient method (of time complexity O(n log n)) to bound the message deadline failure probabilities for an industrial CAN network consisting of n periodic/sporadic message transmissions is proposed. The paper develops bounds for Deadline Minus Jitter Monotonic (DMJM) and Earliest Deadline First (EDF) message scheduling techniques. Both random errors and random bursts of errors can be included in the model. Stochastic simulations and a case study considering DMJM and EDF scheduling of an automotive benchmark message set provide validation of the technique and highlight its application.
9

De Haan, Ronald, Anna Roubickova, and Stefan Szeider. "Parameterized Complexity Results for Plan Reuse." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 224–31. http://dx.doi.org/10.1609/aaai.v27i1.8655.

Abstract:
Planning is a notoriously difficult computational problem of high worst-case complexity. Researchers have been investing significant efforts to develop heuristics or restrictions to make planning practically feasible. Case-based planning is a heuristic approach where one tries to reuse previous experience when solving similar problems in order to avoid some of the planning effort. Plan reuse may offer an interesting alternative to plan generation in some settings. We provide theoretical results that identify situations in which plan reuse is provably tractable. We perform our analysis in the framework of parameterized complexity, which supports a rigorous worst-case complexity analysis that takes structural properties of the input into account in terms of parameters. A central notion of parameterized complexity is fixed-parameter tractability which extends the classical notion of polynomial-time tractability by utilizing the effect of parameters. We draw a detailed map of the parameterized complexity landscape of several variants of problems that arise in the context of case-based planning. In particular, we consider the problem of reusing an existing plan, imposing various restrictions in terms of parameters, such as the number of steps that can be added to the existing plan to turn it into a solution of the planning instance at hand.
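As background for the abstract's central notion (a standard textbook definition, not anything specific to this paper), fixed-parameter tractability can be stated in LaTeX as follows:

    % Standard definition of fixed-parameter tractability (FPT).
    A parameterized problem with instances $(x, k)$ is \emph{fixed-parameter
    tractable} if some algorithm decides it in time
    \[
      f(k) \cdot |x|^{O(1)},
    \]
    where $f$ is a computable function of the parameter $k$ alone, so the
    super-polynomial growth is confined to the parameter rather than the
    input size.

Plan reuse being fixed-parameter tractable for a given parameter therefore means the combinatorial explosion can be isolated in, for example, the number of steps added to the existing plan.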
10

Kon, Mark, and Leszek Plaskota. "Complexity of Neural Network Approximation with Limited Information: A Worst Case Approach." Journal of Complexity 17, no. 2 (June 2001): 345–65. http://dx.doi.org/10.1006/jcom.2001.0575.

11

Matsuo, Hirofumi. "Cyclic sequencing problems in the two-machine permutation flow shop: Complexity, worst-case, and average-case analysis." Naval Research Logistics 37, no. 5 (October 1990): 679–94. http://dx.doi.org/10.1002/1520-6750(199010)37:5<679::aid-nav3220370507>3.0.co;2-q.

12

Chan, Tsz Nam, Leong Hou U, Yun Peng, Byron Choi, and Jianliang Xu. "Fast network k-function-based spatial analysis." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2853–66. http://dx.doi.org/10.14778/3551793.3551836.

Abstract:
Network K-function has been the de facto operation for analyzing point patterns in spatial networks, which is widely used in many communities, including geography, ecology, transportation science, social science, and criminology. To analyze a location dataset, domain experts need to generate a network K-function plot that involves computing multiple network K-functions. However, the network K-function is a computationally expensive operation that is not feasible to support large-scale datasets, let alone to generate a network K-function plot. To handle this issue, we develop two efficient algorithms, namely count augmentation (CA) and neighbor sharing (NS), which can reduce the worst-case time complexity for computing network K-functions. In addition, we incorporate the advanced shortest path sharing (ASPS) approach into these two methods to further lower the worst-case time complexity for generating network K-function plots. Experiment results on four large-scale location datasets (up to 7.33 million data points) show that our methods can achieve up to 165.85x speedup compared with the state-of-the-art methods.
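To make the computed quantity concrete, the following Python sketch implements a naive network K-function estimator by running one Dijkstra search per point. It is a quadratic-cost baseline for illustration only, not the paper's CA, NS, or ASPS algorithms, and it assumes the data points coincide with network nodes:

    import heapq

    def dijkstra(graph, source):
        # Single-source shortest network distances.
        # graph maps node -> list of (neighbour, edge_length).
        dist = {source: 0.0}
        pq = [(0.0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, ()):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist

    def network_k_function(graph, points, radii, total_length):
        # One common estimator: for each radius r, the normalised number
        # of ordered point pairs within network distance r.
        n = len(points)
        counts = [0] * len(radii)
        for p in points:
            dist = dijkstra(graph, p)
            for q in points:
                if q == p:
                    continue
                d = dist.get(q, float("inf"))
                for i, r in enumerate(radii):
                    if d <= r:
                        counts[i] += 1
        return [total_length * c / (n * (n - 1)) for c in counts]

Each point triggers a full shortest-path search, which is precisely the cost the paper's count augmentation and sharing techniques are designed to reduce.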
13

Kacewicz, Bolesław. "Asymptotically tight worst case complexity bounds for initial-value problems with nonadaptive information." Journal of Complexity 47 (August 2018): 86–96. http://dx.doi.org/10.1016/j.jco.2018.02.002.

14

Vakhania, Nodari. "Probabilistic quality estimations for combinatorial optimization problems." Georgian Mathematical Journal 25, no. 1 (March 1, 2018): 123–34. http://dx.doi.org/10.1515/gmj-2017-0041.

Abstract:
The computational complexity of an algorithm is traditionally measured for the worst and the average case. The worst-case estimation guarantees a certain worst-case behavior of a given algorithm, although it might be rough, since in "most instances" the algorithm may have a significantly better performance. The probabilistic average-case analysis claims to derive an average performance of an algorithm, say, for an "average instance" of the problem in question. That instance may be far away from the average of the problem instances arising in a given real-life application, and so the average case analysis would also provide a non-realistic estimation. We suggest that, in general, a wider use of probabilistic models for a more accurate estimation of the algorithm efficiency could be possible. For instance, the quality of the solutions delivered by an approximation algorithm may also be estimated in the "average" probabilistic case. Such an approach would deal with the estimation of the quality of the solutions delivered by the algorithm for the most common (for a given application) problem instances. As we illustrate, the probabilistic modeling can also be used to derive an accurate time complexity performance measure, distinct from the traditional probabilistic average-case time complexity measure. Such an approach could, in particular, be useful when the traditional average-case estimation is still rough or is not possible at all.
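The abstract's suggestion, estimating efficiency over the instances that actually occur in an application, can be illustrated with a small Python sketch (the helper names and the nearly-sorted instance model are hypothetical, chosen only for illustration):

    import random
    import statistics
    import time

    def empirical_profile(algorithm, instance_sampler, trials=200):
        # Time the algorithm on instances drawn from an application-specific
        # distribution, instead of a worst case or a generic average instance.
        times = []
        for _ in range(trials):
            inst = instance_sampler()
            start = time.perf_counter()
            algorithm(inst)
            times.append(time.perf_counter() - start)
        return statistics.mean(times), max(times)

    def nearly_sorted(n=1000, swaps=5):
        # Hypothetical application model: inputs arrive almost sorted.
        xs = list(range(n))
        for _ in range(swaps):
            i, j = random.randrange(n), random.randrange(n)
            xs[i], xs[j] = xs[j], xs[i]
        return xs

    mean_t, worst_seen_t = empirical_profile(sorted, nearly_sorted)

Such a profile measures behaviour on the common instances of one application, which is exactly where the traditional worst-case and average-case estimates can both be misleading.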
15

Aschieri, Federico. "Game Semantics and the Geometry of Backtracking: A New Complexity Analysis of Interaction." Journal of Symbolic Logic 82, no. 2 (June 2017): 672–708. http://dx.doi.org/10.1017/jsl.2016.48.

Abstract:
We present abstract complexity results about Coquand and Hyland–Ong game semantics, that will lead to new bounds on the length of first-order cut-elimination, normalization, interaction between expansion trees and any other dialogical process game semantics can model and apply to. In particular, we provide a novel method to bound the length of interactions between visible strategies and to measure precisely the tower of exponentials defining the worst-case complexity. Our study improves the old estimates on average by several exponentials.
16

Gordon, Ofir, Yuval Filmus, and Oren Salzman. "Revisiting the Complexity Analysis of Conflict-Based Search: New Computational Techniques and Improved Bounds." Proceedings of the International Symposium on Combinatorial Search 12, no. 1 (July 21, 2021): 64–72. http://dx.doi.org/10.1609/socs.v12i1.18552.

Abstract:
The problem of Multi-Agent Path Finding (MAPF) calls for finding a set of conflict-free paths for a fleet of agents operating in a given environment. Arguably, the state-of-the-art approach to computing optimal solutions is Conflict-Based Search (CBS). In this work we revisit the complexity analysis of CBS to provide tighter bounds on the algorithm's run-time in the worst case. Our analysis paves the way to better pinpoint the parameters that govern (in the worst case) the algorithm's computational complexity. Our analysis is based on two complementary approaches: In the first approach we bound the run-time using the size of a Multi-valued Decision Diagram (MDD), a layered graph which compactly contains all possible single-agent paths between two given vertices for a specific path length. In the second approach we express the running time by a novel recurrence relation which bounds the algorithm's complexity. We use generating-functions-based analysis in order to tightly bound the recurrence. Using these techniques we provide several new upper bounds on CBS's complexity. The results allow us to improve the existing bound on the running time of CBS for many cases. For example, on a set of common benchmarks we improve the upper bound by a factor of at least 2 to the power of 10,000,000.
17

Han, Ai Li. "Complexity Research on B Algorithm." Applied Mechanics and Materials 20-23 (January 2010): 173–77. http://dx.doi.org/10.4028/www.scientific.net/amm.20-23.173.

Abstract:
The time complexity of B algorithm, one of the intelligent search algorithms, is discussed. By anatomizing some instances, it is pointed out that the cost of calculating the value of the heuristic function should be included in the scope of time complexity analysis for B algorithm. An algorithm for calculating the value of the heuristic function is then presented. By analyzing the cost of calculating the value of the heuristic function, it is pointed out that the number of recursions in B algorithm is O(n!) in the worst case. Therefore, the time complexity of B algorithm is exponential instead of O(n²).
18

Harrington, Paul, Colm Ó. Dúnlaing, and Chee K. Yap. "Optimal Voronoi Diagram Construction with n Convex Sites in Three Dimensions." International Journal of Computational Geometry & Applications 17, no. 06 (December 2007): 555–93. http://dx.doi.org/10.1142/s0218195907002483.

Abstract:
This paper presents a worst-case optimal algorithm for constructing the Voronoi diagram for n disjoint convex and rounded semi-algebraic sites in 3 dimensions. Rather than extending optimal 2-dimensional methods, we base our method on a suboptimal 2-dimensional algorithm, outlined by Lee and Drysdale and modified by Sharir for computing the diagram of circular sites. For complexity considerations, we assume the sites have bounded complexity, i.e., the algebraic degree is bounded as is the number of algebraic patches making up the site. For the sake of simplicity we assume that the sites are what we call rounded. This assumption simplifies the analysis, though it is probably unnecessary. Our algorithm runs in time O(C(n)) where C(n) is the worst-case complexity of an n-site diagram. For spherical sites C(n) is Θ(n²), but sharp estimates do not seem to be available for other classes of site.
19

Procaccia, A. D., and J. S. Rosenschein. "Junta Distributions and the Average-Case Complexity of Manipulating Elections." Journal of Artificial Intelligence Research 28 (February 28, 2007): 157–81. http://dx.doi.org/10.1613/jair.2148.

Abstract:
Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant.
20

Kuhlmann, Marco, Giorgio Satta, and Peter Jonsson. "On the Complexity of CCG Parsing." Computational Linguistics 44, no. 3 (September 2018): 447–82. http://dx.doi.org/10.1162/coli_a_00324.

Abstract:
We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will take in the worst case exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This sets the formalism of Vijay-Shanker and Weir (1994) apart from weakly equivalent formalisms such as Tree Adjoining Grammar, for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our results contribute to a refined understanding of the class of mildly context-sensitive grammars, and inform the search for new, mildly context-sensitive versions of CCG.
21

Goldman, C. V., and S. Zilberstein. "Decentralized Control of Cooperative Systems: Categorization and Complexity Analysis." Journal of Artificial Intelligence Research 22 (November 1, 2004): 143–74. http://dx.doi.org/10.1613/jair.1427.

Abstract:
Decentralized control of cooperative systems captures the operation of a group of decision makers that share a single global objective. The difficulty in solving optimally such problems arises when the agents lack full observability of the global state of the system when they operate. The general problem has been shown to be NEXP-complete. In this paper, we identify classes of decentralized control problems whose complexity ranges between NEXP and P. In particular, we study problems characterized by independent transitions, independent observations, and goal-oriented objective functions. Two algorithms are shown to solve optimally useful classes of goal-oriented decentralized processes in polynomial time. This paper also studies information sharing among the decision-makers, which can improve their performance. We distinguish between three ways in which agents can exchange information: indirect communication, direct communication and sharing state features that are not controlled by the agents. Our analysis shows that for every class of problems we consider, introducing direct or indirect communication does not change the worst-case complexity. The results provide a better understanding of the complexity of decentralized control problems that arise in practice and facilitate the development of planning algorithms for these problems.
22

Lee, Ching-pei, and Stephen J. Wright. "Random permutations fix a worst case for cyclic coordinate descent." IMA Journal of Numerical Analysis 39, no. 3 (July 27, 2018): 1246–75. http://dx.doi.org/10.1093/imanum/dry040.

Abstract:
Variants of the coordinate descent approach for minimizing a nonlinear function are distinguished in part by the order in which coordinates are considered for relaxation. Three common orderings are cyclic (CCD), in which we cycle through the components of x in order; randomized (RCD), in which the component to update is selected randomly and independently at each iteration; and random-permutations cyclic (RPCD), which differs from CCD only in that a random permutation is applied to the variables at the start of each cycle. Known convergence guarantees are weaker for CCD and RPCD than for RCD, though in most practical cases, computational performance is similar among all these variants. There is a certain type of quadratic function for which CCD is significantly slower than RCD; a recent paper by Sun & Ye (2016, Worst-case complexity of cyclic coordinate descent: O(n²) gap with randomized version. Technical Report. Stanford, CA: Department of Management Science and Engineering, Stanford University. arXiv:1604.07130) has explored the poor behavior of CCD on functions of this type. The RPCD approach performs well on these functions, even better than RCD in a certain regime. This paper explains the good behavior of RPCD with a tight analysis.
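For concreteness, the three orderings differ only in how coordinate indices are generated each cycle. A minimal Python sketch on a strictly convex quadratic, where each coordinate step is an exact minimisation (an illustration of the variants, not the paper's analysis):

    import random

    def coordinate_descent(A, b, order="cyclic", cycles=100, seed=0):
        # Exact coordinate minimisation for f(x) = 0.5 x^T A x - b^T x,
        # with A symmetric positive definite:
        #   x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii
        rng = random.Random(seed)
        n = len(b)
        x = [0.0] * n
        for _ in range(cycles):
            if order == "cyclic":        # CCD: same fixed order every cycle
                idx = list(range(n))
            elif order == "permuted":    # RPCD: fresh permutation per cycle
                idx = list(range(n))
                rng.shuffle(idx)
            else:                        # RCD: independent random picks
                idx = [rng.randrange(n) for _ in range(n)]
            for i in idx:
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - s) / A[i][i]
        return x

    # Example: approximately solves A x = b for a small 2x2 system.
    x = coordinate_descent([[2.0, 0.5], [0.5, 1.0]], [1.0, 1.0], order="permuted")

The quadratics on which CCD degrades are exactly of this form, which makes the three-way comparison easy to reproduce.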
23

Della Croce, Federico, and Vangelis Th Paschos. "Exploiting dominance conditions for computing non trivial worst-case complexity for bounded combinatorial optimization problems." Operational Research 8, no. 3 (September 11, 2008): 235–56. http://dx.doi.org/10.1007/s12351-008-0020-8.

24

Anastasi, G., P. Foglia, and L. Lenzini. "MPEG video traffic on a MetaRing network: complexity reduction of a ‘worst-case’ model for bandwidth allocation analysis." Computer Communications 21, no. 2 (March 1998): 111–25. http://dx.doi.org/10.1016/s0140-3664(97)00135-7.

25

Barbay, Jérémy. "Optimal Prefix Free Codes with Partial Sorting." Algorithms 13, no. 1 (December 31, 2019): 12. http://dx.doi.org/10.3390/a13010012.

Abstract:
We describe an algorithm computing an optimal prefix free code for n unsorted positive weights in time within O(n(1 + lg α)) ⊆ O(n lg n), where the alternation α ∈ [1..n−1] approximates the minimal amount of sorting required by the computation. This asymptotical complexity is within a constant factor of the optimal in the algebraic decision tree computational model, in the worst case over all instances of size n and alternation α. Such results refine the state of the art complexity of Θ(n lg n) in the worst case over instances of size n in the same computational model, a landmark in compression and coding since 1952. Beside the new analysis technique, such improvement is obtained by combining a new algorithm, inspired by van Leeuwen's algorithm to compute optimal prefix free codes from sorted weights (known since 1976), with a relatively minor extension of Karp et al.'s deferred data structure to partially sort a multiset accordingly to the queries performed on it (known since 1988). Preliminary experimental results on text compression by words show α to be polynomially smaller than n, which suggests improvements by at most a constant multiplicative factor in the running time for such applications.
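The sorted-weights subroutine the new algorithm builds on, van Leeuwen's two-queue method, is short enough to sketch in Python. Given weights sorted in nondecreasing order it returns the total cost of an optimal prefix free code in linear time (the paper's contribution, sorting only partially, is not attempted here):

    from collections import deque

    def optimal_code_cost(sorted_weights):
        # van Leeuwen (1976): merged weights are produced in nondecreasing
        # order, so two FIFO queues replace a priority queue.
        if len(sorted_weights) < 2:
            return 0
        leaves = deque(sorted_weights)
        internal = deque()
        def pop_min():
            if not internal or (leaves and leaves[0] <= internal[0]):
                return leaves.popleft()
            return internal.popleft()
        total = 0
        while len(leaves) + len(internal) > 1:
            merged = pop_min() + pop_min()
            internal.append(merged)
            total += merged
        return total

    # optimal_code_cost([1, 1, 2, 4]) == 14, matching Huffman's algorithm.

The deferred data structure mentioned in the abstract supplies the queue of weights lazily, sorting only as much of the input as the merges actually inspect.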
26

Xia, Lirong, and Weiqiang Zheng. "The Smoothed Complexity of Computing Kemeny and Slater Rankings." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 5742–50. http://dx.doi.org/10.1609/aaai.v35i6.16720.

Abstract:
The computational complexity of winner determination under common voting rules is a classical and fundamental topic in the field of computational social choice. Previous work has established the NP-hardness of winner determination under some commonly-studied voting rules, such as the Kemeny rule and the Slater rule. In a recent position paper, Baumeister, Hogrebe, and Rothe (2020) questioned the relevance of the worst-case nature of NP-hardness in social choice and proposed to conduct smoothed complexity analysis (Spielman and Teng 2009) under Blaser and Manthey’s (2015) framework. In this paper, we develop the first smoothed complexity results for winner determination in voting. We prove the smoothed hardness of Kemeny and Slater using the classical smoothed runtime analysis, and prove a parameterized typical-case smoothed easiness result for Kemeny. We also make an attempt of applying Blaser and Manthey’s (2015) smoothed complexity framework in social choice contexts by proving that the framework categorizes an always-exponential-time brute force search algorithm as being smoothed poly-time, under a natural noise model based on the well-studied Mallows model in social choice and statistics. Overall, our results show that smoothed complexity analysis in computational social choice is a challenging and fruitful topic.
27

Dai, Jingke, Peng Zhao, Xiao Chen, and Fenggan Zhang. "Analysis and Design of Systematic Rateless Codes in FH/BFSK System with Interference." Mobile Information Systems 2020 (February 10, 2020): 1–8. http://dx.doi.org/10.1155/2020/9049284.

Abstract:
The asymptotic analysis of systematic rateless codes in frequency hopping (FH) systems with interference is first provided using discretized density evolution (DDE) and compared with the traditional fixed-rate scheme. A simplified analysis with Gaussian assumption of initial message is proposed in the worst case of interference, which has much lower complexity and provides a very close result to DDE. Based on this simplified analysis, the linear programming is employed to design rateless codes and the simulation results on partial-band interference channels show that the optimized codes have more powerful antijamming performance than the codes originally designed for conventional systems.
28

Selvi, V. "Clustering Analysis of Greedy Heuristic Method in Zero_One Knapsack Problem." International Journal of Emerging Research in Management and Technology 6, no. 7 (June 29, 2018): 39. http://dx.doi.org/10.23956/ijermt.v6i7.181.

Abstract:
The knapsack problem is a well-understood class of optimization problems, which tries to maximize the profit of items in a knapsack without exceeding its capacity. Knapsack can be solved by several algorithms, such as greedy, dynamic programming, branch & bound, etc. The solution to the zero-one knapsack problem (KP) can be viewed as the result of a sequence of decisions. Clustering is the process of resolving that type of application, with different clustering applications for grouping elements with equal priority. In this paper we introduce a greedy heuristic algorithm for solving the zero-one knapsack problem. We exhibit a comparative investigation of the greedy, dynamic programming, branch & bound, and genetic algorithms regarding the complexity of their time requirements and the required programming efforts, and compare the total value obtained by each of them. Greedy and genetic algorithms can be used to solve the 0-1 knapsack problem within a reasonable time complexity. The worst-case time complexity (Big-O) of both algorithms is O(N). Using the greedy method, the algorithm can produce high-quality clusters while reducing time, and the best partitioning avoids the memory confinement problem during the process.
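As a generic illustration of the greedy heuristic under discussion (a standard value-to-weight ratio sketch, not the authors' clustering variant; note that the sort makes it O(N log N) unless the items arrive pre-sorted):

    def greedy_knapsack(items, capacity):
        # items: list of (value, weight) pairs; heuristic for 0-1 knapsack,
        # not guaranteed optimal.
        order = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
        total_value, total_weight, taken = 0, 0, []
        for value, weight in order:
            if total_weight + weight <= capacity:
                taken.append((value, weight))
                total_weight += weight
                total_value += value
        return total_value, taken

    # greedy_knapsack([(60, 10), (100, 20), (120, 30)], 50)
    # -> (160, [(60, 10), (100, 20)]); the optimum 220 shows the
    # heuristic's gap on 0-1 instances.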
29

Jiang, Tianzi. "The worst case complexity of the Fredholm equation of the second kind with non-periodic free term and noise information." Numerical Functional Analysis and Optimization 19, no. 3-4 (January 1998): 329–43. http://dx.doi.org/10.1080/01630569808816831.

30

Touil, Imene, and Wided Chikouche. "Primal-dual interior point methods for Semidefinite programming based on a new type of kernel functions." Filomat 34, no. 12 (2020): 3957–69. http://dx.doi.org/10.2298/fil2012957t.

Abstract:
In this paper, we propose the first hyperbolic-logarithmic kernel function for Semidefinite programming problems. By simple analysis tools, several properties of this kernel function are used to compute the total number of iterations. We show that the worst-case iteration complexity of our algorithm for large-update methods improves the obtained iteration bounds based on hyperbolic [24] as well as classic kernel functions. For small-update methods, we derive the best known iteration bound.
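For context, in this literature a kernel function ψ induces a barrier that measures proximity to the μ-center, and the quoted complexity results for large- and small-update methods take the generic form below (standard background, not a formula taken from this paper):

    % Barrier induced by a kernel function psi over the scaled variable v,
    % and the typical iteration bounds for primal-dual IPMs built on it.
    \Psi(v) = \sum_{i=1}^{n} \psi(v_i), \qquad
    O\!\left(\sqrt{n}\,(\log n)\,\log\frac{n}{\varepsilon}\right)
    \ \text{(large-update)}, \qquad
    O\!\left(\sqrt{n}\,\log\frac{n}{\varepsilon}\right)
    \ \text{(small-update)}.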
31

Khan, Farhan Hai, Tannistha Pal, and Rounik Roy. "Development of a Universal Running Time Predictor using Multivariate Regression." International Journal of Innovative Research in Physics 3, no. 3 (April 4, 2022): 1–9. http://dx.doi.org/10.15864/ijiip.3301.

Abstract:
Various types of running times exist for analysis of algorithmic efficiency. This research presents a more empirical approach to the problem for the practical measurements of the actual running time of algorithms by considering a plethora of randomized inputs R_n and then fitting a regression curve in n to the algorithm of practical time complexity ν(n). This will also provide us the productivity factor η which will quantify the universal running time with respect to the asymptotic worst-case complexity and evaluate the efficiency of the given algorithm with the help of leading coefficients. This research will also help us compare similar algorithms in a mathematically modelled manner.
32

Neunhöffer, Max, and Cheryl E. Praeger. "Computing Minimal Polynomials of Matrices." LMS Journal of Computation and Mathematics 11 (2008): 252–79. http://dx.doi.org/10.1112/s1461157000000590.

Abstract:
We present and analyse a Monte-Carlo algorithm to compute the minimal polynomial of an n × n matrix over a finite field that requires O(n³) field operations and O(n) random vectors, and is well suited for successful practical implementation. The algorithm, and its complexity analysis, use standard algorithms for polynomial and matrix operations. We compare features of the algorithm with several other algorithms in the literature. In addition we present a deterministic verification procedure which is similarly efficient in most cases but has a worst-case complexity of O(n⁴). Finally, we report the results of practical experiments with an implementation of our algorithms in comparison with the current algorithms in the GAP library.
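The Krylov-sequence core of such methods can be sketched compactly. The following Python (plain integer arithmetic over GF(p) with p prime; a didactic stand-in for the paper's finite-field routines, not the authors' implementation) finds the order polynomial of one vector, i.e. the least-degree monic g with g(A)v = 0; a Monte-Carlo minimal-polynomial algorithm then takes the least common multiple of such polynomials over random vectors:

    def matvec(A, v, p):
        # Matrix-vector product over GF(p).
        return [sum(a * x for a, x in zip(row, v)) % p for row in A]

    def vector_order_poly(A, v, p):
        # Coefficients (low degree first, monic) of the least-degree g with
        # g(A)v = 0, found as the first linear dependence in the Krylov
        # sequence v, Av, A^2 v, ...
        n = len(v)
        basis = []          # (reduced vector, its combination, pivot index)
        u, k = v[:], 0
        while True:
            w, combo = u[:], [0] * (n + 1)
            combo[k] = 1    # w starts out as A^k v
            for row, rcombo, piv in basis:
                if w[piv]:
                    f = w[piv] * pow(row[piv], -1, p) % p
                    w = [(a - f * b) % p for a, b in zip(w, row)]
                    combo = [(a - f * b) % p for a, b in zip(combo, rcombo)]
            piv = next((i for i, a in enumerate(w) if a), None)
            if piv is None:   # dependence: sum_i combo[i] * A^i v = 0
                return combo[: k + 1]
            basis.append((w, combo, piv))
            u, k = matvec(A, u, p), k + 1

    # Example over GF(5): the identity matrix gives g(x) = x - 1, i.e. [4, 1].
    assert vector_order_poly([[1, 0], [0, 1]], [1, 0], 5) == [4, 1]

This didactic version spends O(n³) operations on a single vector; the paper's algorithm organises the work far more carefully so that the whole computation, over O(n) random vectors, stays within the O(n³) budget quoted above.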
33

Song, Xin, Jingguo Ren, and Qiuming Li. "Doubly Constrained Robust Blind Beamforming Algorithm." Journal of Applied Mathematics 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/245609.

Abstract:
We propose doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, which is based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with penalty for the worst-case signal steering vector by the partial Taylor-series expansion and Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis for our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against the signal steering vector mismatches, yields higher signal captive performance, improves greater array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm on beampattern control and output SINR enhancement.
34

Zhao, Lulu, Guang Liang, and Huijie Liu. "An Improved Robust Beamforming Design for Cognitive Multiantenna Relay Networks." Mobile Information Systems 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/2719543.

Abstract:
This paper investigates the robust relay beamforming design for the multiantenna nonregenerative cognitive relay networks (CRNs). Firstly, it is proved that the optimal beamforming matrix could be simplified as the product of a variable vector and the conjugate transposition of a known channel response vector. Then, by exploiting the optimal beamforming matrix with simplified structure, an improved robust beamforming design is proposed. Analysis and simulation results show that, compared with the existing suboptimal scheme, the proposed method can achieve higher worst-case channel capacity with lower computational complexity.
35

Wang, Biing-Feng. "A Better Analysis of Ben-Asher's Algorithm for the Conditional Cartesian Product Problem." Parallel Processing Letters 06, no. 03 (September 1996): 331–44. http://dx.doi.org/10.1142/s0129626496000327.

Abstract:
In [2], a new problem called the conditional cartesian product problem was introduced. The problem has a trivial time lower bound [Formula: see text], where A and B are two groups of N elements and C_{A,B} is a subset of the cartesian product A×B satisfying some unknown condition C. Ben-Asher proposed an efficient algorithm for the problem and showed that the proposed algorithm can be performed in [Formula: see text] time [2]. As compared with the trivial time lower bound, the algorithm is not optimal. In this paper, with a better analysis, we show that the worst-case time complexity of Ben-Asher's algorithm is [Formula: see text], which is much better than [Formula: see text]. Consequently, it can be concluded that when the rate of growth of |C_{A,B}| is not smaller than [Formula: see text], this is a tight lower bound and Ben-Asher's algorithm is indeed optimal.
36

Liu, Fangyao, Yayu Peng, Zhengxin Chen, and Yong Shi. "Modeling of Characteristics on Artificial Intelligence IQ Test: a Fuzzy Cognitive Map-Based Dynamic Scenario Analysis." International Journal of Computers Communications & Control 14, no. 6 (November 27, 2019): 653–69. http://dx.doi.org/10.15837/ijccc.2019.6.3692.

Abstract:
This research article uses a Fuzzy Cognitive Map (FCM) approach to improve earlier proposed IQ test characteristics of Artificial Intelligence (AI) systems. The defuzzification process makes use of fuzzy logic and the triangular membership function along with linguistic term analyses. Each edge of the proposed FCM is assigned a positive or negative influence type associated with a quantitative weight. All the weights are based on the defuzzified values in the defuzzification results. This research also leverages a dynamic scenario analysis to investigate the interrelationships between driver concepts and other concepts. Worst- and best-case scenarios have been conducted on the correlation among concepts. We also use an inference simulation to examine the importance order of the concepts and the convergence status of the FCM. The analysis results not only examine the FCM complexity, but also draw insightful conclusions.
37

Namazi, Hamidreza, and Vladimir V. Kulish. "Complexity-Based Classification of the Coronavirus Disease (COVID-19)." Fractals 28, no. 05 (August 2020): 2050114. http://dx.doi.org/10.1142/s0218348x20501145.

Abstract:
COVID-19 is a pandemic disease, which massively affected human lives in more than 200 countries. Caused by the coronavirus SARS-CoV-2, this acute respiratory illness affects the human lungs and can easily spread from person to person. Since the disease heavily affects human lungs, analyzing the X-ray images of the lungs may prove to be a powerful tool for disease investigation. In this research, we use the information contained within the complex structures of X-ray images between the cases of COVID-19 and other respiratory diseases, whereas the case of healthy lungs is taken as the reference point. To analyze X-ray images, we benefit from the concept of Shannon’s entropy and fractal theory. Shannon’s entropy is directly related to the amount of information contained within the X-ray images in question, whereas fractal theory is used to analyze the complexity of these images. The results, obtained in this study, show that the method of fractal analysis can detect the level of infection among different respiratory diseases and that COVID-19 has the worst effect on the human lungs. In other words, the complexity of X-ray images is proportional to the severity of the respiratory disease. The method of analysis, employed in this study, can be used even further to analyze how COVID-19 progresses in affected patients.
38

García Ojeda, Juan Carlos. "A Look at Algorithm BEPtoPNST." Revista Ingenierías Universidad de Medellín 20, no. 39 (September 9, 2020): 115–28. http://dx.doi.org/10.22395/rium.v20n39a7.

Abstract:
This work analyzes the computational complexity of algorithm BEPtoPNST, which transforms a building-evacuation problem (BEP) into a time-expanded process-network synthesis (PNST) problem. The solution of the latter is achieved by resorting to the P-graph method which exploits the combinatorial nature of a BEP. Unlike other approaches, the P-graph method provides not only the optimal solution (best evacuation route as a function of egress time), but also the best n sub-optimal solutions. For the complexity analysis, a generic processor and a random-access machine (RAM) model were deployed, as well as a mathematical model to calculate the number and cost of the operations performed. It was observed that algorithm BEPtoPNST exhibits an asymptotic complexity of order O(T|A|(|N|-k)). When solving a BEP, however, the total complexity grows exponentially, with order O(T|A|(|N|-k)) + O(2^h) in the worst case, where h represents the total number of operating units specified in the corresponding PNST problem. Nevertheless, the computational complexity can be reduced significantly when the P-graph method is deployed.
39

Meghdouri, Fares, Félix Iglesias Vázquez, and Tanja Zseby. "Modeling data with observers." Intelligent Data Analysis 26, no. 3 (April 18, 2022): 785–803. http://dx.doi.org/10.3233/ida-215741.

Abstract:
Compact data models have become relevant due to the massive, ever-increasing generation of data. We propose Observers-based Data Modeling (ODM), a lightweight algorithm to extract low density data models (aka coresets) that are suitable for both static and stream data analysis. ODM coresets keep data internal structures while alleviating computational costs of machine learning during evaluation phases, accounting for an O(n log n) worst-case complexity. We compare ODM with previous proposals in classification, clustering, and outlier detection. Results show the preponderance of ODM for obtaining the best trade-off in accuracy, versatility, and speed.
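The ODM selection rule itself is not reproduced here, but the general flavour of summarising a dataset with a few well-spread observers can be conveyed by the classical greedy k-center rule (a generic sketch, explicitly not the ODM algorithm):

    def k_center_observers(points, k, dist):
        # Gonzalez's greedy k-center: repeatedly add the point farthest
        # from the current set of observers.
        observers = [points[0]]
        while len(observers) < k:
            observers.append(
                max(points, key=lambda p: min(dist(p, o) for o in observers)))
        return observers

    # Example with 1-D points and absolute distance:
    obs = k_center_observers([1.0, 2.0, 9.5, 10.0, 20.0], 3,
                             lambda a, b: abs(a - b))  # -> [1.0, 20.0, 10.0]

Any such coreset-style model trades a small loss of detail for much cheaper downstream evaluation, which is the trade-off the paper quantifies.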
41

Xu, Zhang Yan, Wei Zhang, and Yan Ying Fan. "Attribute Reduction Algorithm of Incomplete Decision Table Based on Conditional Entropy." Applied Mechanics and Materials 380-384 (August 2013): 1505–9. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1505.

Abstract:
The search for attribute reduction algorithms of rough sets in incomplete decision tables is a research hot spot. Through analysis of the advantages and disadvantages of the existing attribute reduction algorithms, we put forward a definition of the relative discernibility matrix based on the positive region. We then compute the tolerance classes with the idea of the cardinal number sorting method, giving a quick heuristic algorithm for attribute reduction based on conditional entropy and the relative discernibility matrix, whose time complexity in the worst case is [Formula: see text]. The test results show that the algorithm can obtain an attribute reduction efficiently.
42

Ghuman, Sukhpal, Emanuele Giaquinta, and Jorma Tarhio. "Lyndon Factorization Algorithms for Small Alphabets and Run-Length Encoded Strings." Algorithms 12, no. 6 (June 21, 2019): 124. http://dx.doi.org/10.3390/a12060124.

Abstract:
We present two modifications of Duval's algorithm for computing the Lyndon factorization of a string. One of the algorithms has been designed for strings containing runs of the smallest character. It works best for small alphabets and it is able to skip a significant number of characters of the string. Moreover, it can be engineered to have linear time complexity in the worst case. Given a run-length encoded string R of length ρ, the other algorithm computes the Lyndon factorization of R in O(ρ) time and constant space. Experimental results show that the new variations are faster than Duval's original algorithm in many scenarios.
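For reference, the baseline that both variants modify, Duval's algorithm, computes the Lyndon factorization in linear time and constant extra space; a compact Python rendition:

    def duval_lyndon(s):
        # Returns the Lyndon factorization of s: a list of Lyndon words
        # w1 >= w2 >= ... whose concatenation equals s. O(len(s)) time.
        n, i = len(s), 0
        factors = []
        while i < n:
            j, k = i + 1, i
            while j < n and s[k] <= s[j]:
                k = i if s[k] < s[j] else k + 1
                j += 1
            while i <= k:
                factors.append(s[i : i + j - k])
                i += j - k
        return factors

    # duval_lyndon("banana") -> ['b', 'an', 'an', 'a']

The paper's first variant accelerates exactly this inner scanning loop when the text contains runs of the smallest character.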
43

Xu, Q., and X. Yang. "Analysis of forward approach for upper bounding end-to-end transmission delays over distributed real-time avionics networks." Aeronautical Journal 124, no. 1279 (April 17, 2020): 1399–435. http://dx.doi.org/10.1017/aer.2020.33.

Abstract:
Distributed real-time avionics networks have been subjected to a great evolution in terms of functionality and complexity. A direct consequence of this evolution is a continual growth of data exchange. AFDX, standardised as ARINC 664, is chosen as the backbone network for those distributed real-time avionics networks as it offers high throughput and does not require global clock synchronisation. For certification reasons and engineering research, a deterministic upper bound of the end-to-end transmission delay for packets of each flow should be guaranteed. The Forward Approach (FA) is proposed for the computation of the worst-case end-to-end transmission delay. This approach iteratively estimates the maximum backlog (amount of the pending packets) in each visited switch along the transmission path, and the worst-case end-to-end transmission delay can be computed. Although it is pessimistic (overestimated), the Forward Approach can provide a tighter upper bound of the end-to-end transmission delay while considering the serialisation effect. Recently, our research found the computation of the serialisation effect might induce an optimistic (underestimated) upper bound. In this paper, we analyse the pessimism in the Forward Approach and the optimism induced by the computation of the serialisation effect, and then we provide a new computation of the serialisation effect. We compare this new computation with the original one; the experiments show that the new computation reduces the optimism and the upper bound of the end-to-end transmission delay can be computed more accurately.
44

Ahmad, Rais Aziz. "Interface-Driven Software Requirements Analysis." European Scientific Journal, ESJ 12, no. 30 (October 31, 2016): 40. http://dx.doi.org/10.19044/esj.2016.v12n30p40.

Abstract:
Software requirements are one of the root causes of failure for IT software development projects. Reasons for this may be that the requirements are high-level, many might simply be wishes, or frequently changed, or they might be unclear, missing, for example, goals, objectives, strategies, and so on. Another major reason for projects’ failure may also be the use of improper techniques for software requirements specification. Currently, most IT software development projects utilise textual techniques like use cases, user stories, scenarios, and features for software requirements elicitation, analysis and specification. While IT software development processes can construct software in different programming languages, the primary focus here will be those IT projects using object-oriented programming languages. Object-oriented programming itself has several characteristics worth noting, such as its popularity, reusability, modularity, concurrency, abstraction and encapsulation. Object-oriented analysis and design transforms software requirements gathered with textual techniques into object-oriented programming. This transformation can cause complexity in identifying objects, classes and interfaces, which, in turn, complicates the object-oriented analysis and design. Because requirements can change over the course of a project and, likewise, software design can evolve during software construction, the traceability of software requirements with objects and components can become difficult. Apart from leading to project complexity, such a process can impact software quality and, in the worst-case scenario, cause the project to fail entirely. The goal of this article is to provide interface-driven techniques that will reduce ambiguity among software requirements, improve traceability and simplify software requirements modelling.
45

Yang, Jianye, Yun Peng, and Wenjie Zhang. "(p,q)-biclique counting and enumeration for large sparse bipartite graphs." Proceedings of the VLDB Endowment 15, no. 2 (October 2021): 141–53. http://dx.doi.org/10.14778/3489496.3489497.

Abstract:
In this paper, we study the problem of (p, q)-biclique counting and enumeration for large sparse bipartite graphs. Given a bipartite graph G = (U, V, E) and two integer parameters p and q, we aim to efficiently count and enumerate all (p, q)-bicliques in G, where a (p, q)-biclique B(L, R) is a complete subgraph of G with L ⊆ U, R ⊆ V, |L| = p, and |R| = q. The problem of (p, q)-biclique counting and enumeration has many applications, such as graph neural network information aggregation, densest subgraph detection, and cohesive subgroup analysis. Despite the wide range of applications, to the best of our knowledge, there is no efficient and scalable solution to this problem in the literature. This problem is computationally challenging, due to the worst-case exponential number of (p, q)-bicliques. In this paper, we propose a competitive branch-and-bound baseline method, namely BCList, which explores the search space in a depth-first manner, together with a variety of pruning techniques. Although BCList offers a useful computation framework for our problem, its worst-case time complexity is exponential in p + q. To alleviate this, we propose an advanced approach, called BCList++. Particularly, BCList++ applies a layer-based exploring strategy to enumerate (p, q)-bicliques by anchoring the search on either U or V only, which has a worst-case time complexity exponential in either p or q only. Consequently, a vital task is to choose a layer with the least computation cost. To this end, we develop a cost model, which is built upon an unbiased estimator for the density of the 2-hop graph induced by U or V. To improve computation efficiency, BCList++ exploits pre-allocated arrays and vertex labeling techniques such that the frequent subgraph creating operations can be substituted by array element switching operations. We conduct extensive experiments on 16 real-life datasets, and the experimental results demonstrate that BCList++ significantly outperforms the baseline methods by up to 3 orders of magnitude. We show via a case study that (p, q)-bicliques optimize the efficiency of graph neural networks.
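To fix the notation, a brute-force (p, q)-biclique counter takes a few lines: enumerate every p-subset L of U, intersect the neighbourhoods, and count q-subsets of the common neighbours. It is exponential in p, as the abstract warns for the worst case, and serves only as a baseline sketch, not as BCList or BCList++:

    from itertools import combinations
    from math import comb

    def count_pq_bicliques(U, adj, p, q):
        # adj maps each u in U to the set of its neighbours in V.
        # A (p, q)-biclique is a p-subset L of U plus a q-subset of the
        # vertices adjacent to every member of L.
        total = 0
        for L in combinations(U, p):
            common = set.intersection(*(adj[u] for u in L))
            total += comb(len(common), q)
        return total

    # Example: complete bipartite K_{3,3} has C(3,2) * C(3,2) = 9
    # (2, 2)-bicliques.
    adj = {u: {"a", "b", "c"} for u in ("x", "y", "z")}
    assert count_pq_bicliques(("x", "y", "z"), adj, 2, 2) == 9

Anchoring the enumeration on only one side of the graph, as BCList++ does with its layer choice, is what keeps the exponent on only one of p or q.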
46

Fathi-Hafshejani, S., and Reza Peyghami. "An interior point algorithm for solving linear optimization problems using a new trigonometric kernel function." Filomat 34, no. 5 (2020): 1471–86. http://dx.doi.org/10.2298/fil2005471f.

Abstract:
In this paper, a primal-dual interior point algorithm for solving linear optimization problems is proposed, based on a new kernel function with a trigonometric barrier term which is used not only for determining the search directions but also for measuring the distance between the given iterate and the μ-center. Using simple analysis tools, we prove that our algorithm based on the new trigonometric kernel function meets O(√n log n log(n/ε)) and O(√n log(n/ε)) as the worst-case complexity bounds for large- and small-update methods, respectively. Finally, some numerical results on the performance of our algorithm are presented.
47

Tewari, S., S. M. Bhandarkar, and J. Arnold. "Design and Analysis of an Efficient Recursive Linking Algorithm for Constructing Likelihood Based Genetic Maps for a Large Number of Markers." Journal of Bioinformatics and Computational Biology 05, no. 02a (April 2007): 201–50. http://dx.doi.org/10.1142/s0219720007002758.

Abstract:
A multi-locus likelihood of a genetic map is computed based on a mathematical model of chromatid exchange in meiosis that accounts for any type of bivalent configuration in a genetic interval in any specified order of genetic markers. The computational problem is to calculate the likelihood (L) and maximize L by choosing an ordering of genetic markers on the map and the recombination distances between markers. This maximum likelihood estimate (MLE) could be found either with a straightforward algorithm or with the proposed recursive linking algorithm, which implements the likelihood computation through an iterative procedure called Expectation Maximization (EM). The time complexity of the straightforward algorithm is exponential without bound in the number of genetic markers, and implementation of the model with a straightforward algorithm for more than seven genetic markers is not feasible, thus motivating the critical importance of the proposed recursive linking algorithm. The recursive linking algorithm decomposes the pool of genetic markers into segments and renders the model implementable for hundreds of genetic markers. The recursive algorithm is shown to reduce the order of time complexity from exponential to linear in the number of markers. The improvement in time complexity is shown theoretically by a worst-case analysis of the algorithm and supported by run time results using data on linkage group II of the fungal genome Neurospora crassa.
48

Bastien, Cédric, Jurek Czyzowicz, Wojciech Fraczak, and Wojciech Rytter. "Reducing Simple Grammars: Exponential Against Highly-Polynomial Time in Practice." International Journal of Foundations of Computer Science 18, no. 04 (August 2007): 715–25. http://dx.doi.org/10.1142/s0129054107004930.

Abstract:
Simple grammar reduction is an important component in the implementation of Concatenation State Machines (a hardware version of stateless push-down automata designed for wire-speed network packet classification). We present a comparison and experimental analysis of the best-known algorithms for grammar reduction. There are two approaches to this problem: one processing compressed strings without decompression and another one which processes strings explicitly. It turns out that the second approach is more efficient in the considered practical scenario despite having worst-case exponential time complexity (while the first one is polynomial). The study has been conducted in the context of network packet classification, where simple grammars are used for representing the classification policies.
49

Schott, Dominik Jan, Addythia Saphala, Georg Fischer, Wenxin Xiong, Andrea Gabbrielli, Joan Bordoy, Fabian Höflinger, Kai Fischer, Christian Schindelhauer, and Stefan Johann Rupitsch. "Comparison of Direct Intersection and Sonogram Methods for Acoustic Indoor Localization of Persons." Sensors 21, no. 13 (June 29, 2021): 4465. http://dx.doi.org/10.3390/s21134465.

Abstract:
We discuss two methods to detect the presence and location of a person in an acoustically small-scale room and compare their performances for a simulated person at distances between 1 and 2 m. The first method is Direct Intersection, which determines a coordinate point based on the intersection of spheroids defined by observed distances of high-intensity reverberations. The second method, Sonogram analysis, overlays all channels' room impulse responses to generate an intensity map for the observed environment. We demonstrate that the former method has lower computational complexity, almost halving the execution time in the best observed case, but is about 7 times slower in the worst case compared to the Sonogram method, while using 2.4 times less memory. Both approaches yield similar mean absolute localization errors between 0.3 and 0.9 m. The Direct Intersection method performs more precisely in the best case, while the Sonogram method performs more robustly.
50

Santoso, Hari, and Lukman Fakih Lidimilah. "Optimasi Algoritma Alga Untuk Meningkatkan Laju Konvergensi." Jurnal Ilmiah Informatika 2, no. 1 (June 9, 2017): 68–82. http://dx.doi.org/10.35316/jimi.v2i1.446.

Abstract:
The Artificial Algae Algorithm (AAA) is an optimization algorithm that combines a swarm model with an evolutionary model. AAA consists of three phases: helical movement, reproduction, and adaptation. Helical movement is a three-dimensional movement along the x, y, and z directions which strongly influences the convergence rate and the diversity of solutions. The helical-movement optimization aims to increase the convergence rate by moving the algae toward the best colony in the population. The optimized Algae Algorithm (AAA') was tested on the 25 objective functions of CEC'05 and implemented in a pressure vessel design optimization case. The results of the CEC'05 function tests show that the convergence rate of AAA' increases, but in the worst case AAA' becomes less stable and gets trapped in local optima. The complexity analysis shows that AAA has a complexity of O(M³N²O) and AAA' has a complexity of O(M²N²O), where M is the number of colonies, N is the number of algae individuals, and O is the maximum number of evaluation-function calls. The results of the pressure vessel design optimization show that AAA' executes 1.103 times faster than AAA. The increase in speed is due to the tournament selection process: in AAA it is performed before the helical movement, whereas in AAA' it is done only if the solution after movement is no better than before. At its best, AAA' found a solution 4.5921 times faster than AAA. At its worst, AAA' got stuck in local optima because the helical movement focuses too strongly on the global best, which is not necessarily the global optimum.
