To see the other types of publications on this topic, follow the link: Amortized complexity.

Journal articles on the topic "Amortized complexity"

Consult the top 26 journal articles for research on the topic "Amortized complexity".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Tarjan, Robert Endre. "Amortized Computational Complexity." SIAM Journal on Algebraic Discrete Methods 6, no. 2 (April 1985): 306–18. http://dx.doi.org/10.1137/0606031.

2

Feder, Tomás, Eyal Kushilevitz, Moni Naor, and Noam Nisan. "Amortized Communication Complexity." SIAM Journal on Computing 24, no. 4 (August 1995): 736–50. http://dx.doi.org/10.1137/s0097539792235864.

3

Nipkow, Tobias, and Hauke Brinkop. "Amortized Complexity Verified." Journal of Automated Reasoning 62, no. 3 (March 13, 2018): 367–91. http://dx.doi.org/10.1007/s10817-018-9459-3.

4

Kingston, Jeffrey H. "The amortized complexity of Henriksen's algorithm." BIT 26, no. 2 (June 1986): 156–63. http://dx.doi.org/10.1007/bf01933741.

5

Bahendwar, Isha Ashish, Ruchit Purshottam Bhardwaj, and S. G. Mundada. "Amortized Complexity Analysis for Red-Black Trees and Splay Trees." International Journal of Innovative Research in Computer Science & Technology 6, no. 6 (November 2018): 121–28. http://dx.doi.org/10.21276/ijircst.2018.6.6.2.

6

Cramer, Ronald, Ivan Damgård, and Marcel Keller. "On the Amortized Complexity of Zero-Knowledge Protocols." Journal of Cryptology 27, no. 2 (January 31, 2013): 284–316. http://dx.doi.org/10.1007/s00145-013-9145-x.

7

Navarro, Gonzalo, Rodrigo Paredes, Patricio V. Poblete, and Peter Sanders. "Stronger Quickheaps." International Journal of Foundations of Computer Science 22, no. 4 (June 2011): 945–69. http://dx.doi.org/10.1142/s0129054111008507.

Annotation:
The Quickheap (QH) is a recent data structure for implementing priority queues which has proved to be simple and efficient in practice. It has also been shown to offer logarithmic expected amortized complexity for all of its operations. Yet, this complexity holds only when keys inserted and deleted are uniformly distributed over the current set of keys. This assumption is in many cases difficult to verify, and does not hold in some important applications such as implementing some minimum spanning tree algorithms using priority queues. In this paper we introduce an elegant model called a Leftmost Skeleton Tree (LST) that reveals the connection between QHs and randomized binary search trees, and allows us to define Randomized QHs. We prove that these offer logarithmic expected amortized complexity for all operations regardless of the input distribution. We also use LSTs in connection to α-balanced trees to achieve a practical α-Balanced QH that offers worst-case amortized logarithmic time bounds for all the operations. Both variants are much more robust than the original QHs. We show experimentally that randomized QHs behave almost as efficiently as QHs on random inputs, and that they retain their good performance on inputs where that of QHs degrades.
8

Hiary, Ghaith A. "An amortized-complexity method to compute the Riemann zeta function." Mathematics of Computation 80, no. 275 (January 25, 2011): 1785–96. http://dx.doi.org/10.1090/s0025-5718-2011-02452-x.

9

Hoogerwoord, Rob R. "Functional Pearls: A symmetric set of efficient list operations." Journal of Functional Programming 2, no. 4 (October 1992): 505–13. http://dx.doi.org/10.1017/s0956796800000526.

Annotation:
In this paper we show that it is possible to implement a symmetric set of finite-list operations efficiently; the set is symmetric in the sense that lists can be manipulated at either end. We derive the definitions of these operations from their specifications by calculation. The operations have O(1) time complexity, provided that we content ourselves with, so-called, amortized efficiency, instead of worst-case efficiency.
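A minimal imperative sketch of the classic two-list idea behind such a symmetric deque (the paper derives its operations functionally by calculation; the Python below is only an illustration of where the amortization comes from): the sequence is split into a front list and a back list, pushes and pops work at the nearest end in O(1), and whichever side runs empty is rebuilt from half of the other side.

```python
class TwoListDeque:
    """Items, front to back, are reversed(self.front) + self.back."""

    def __init__(self):
        self.front = []   # stored reversed: front[-1] is the first item
        self.back = []    # stored in order: back[-1] is the last item

    def push_front(self, x):
        self.front.append(x)                      # O(1)

    def push_back(self, x):
        self.back.append(x)                       # O(1)

    def pop_front(self):
        if not self.front:                        # rebuild: O(n) now, but paid for
            half = (len(self.back) + 1) // 2      # by the pushes that caused the imbalance
            self.front = list(reversed(self.back[:half]))
            self.back = self.back[half:]
        return self.front.pop()

    def pop_back(self):
        if not self.back:
            half = (len(self.front) + 1) // 2
            self.back = list(reversed(self.front[:half]))
            self.front = self.front[half:]
        return self.back.pop()
```

With the potential |len(front) − len(back)|, each rebuild's linear cost is covered by the imbalance accumulated by earlier operations, so every operation is O(1) amortized even though an individual pop can be slow.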
10

Dumitrescu, Adrian. "A Selectable Sloppy Heap." Algorithms 12, no. 3 (March 6, 2019): 58. http://dx.doi.org/10.3390/a12030058.

Annotation:
We study the selection problem, namely that of computing the ith order statistic of n given elements. Here we offer a data structure called selectable sloppy heap that handles a dynamic version in which upon request (i) a new element is inserted or (ii) an element of a prescribed quantile group is deleted from the data structure. Each operation is executed in constant time—and is thus independent of n (the number of elements stored in the data structure)—provided that the number of quantile groups is fixed. This is the first result of this kind accommodating both insertion and deletion in constant time. As such, our data structure outperforms the soft heap data structure of Chazelle (which only offers constant amortized complexity for a fixed error rate 0 < ε ≤ 1 / 2 ) in applications such as dynamic percentile maintenance. The design demonstrates how slowing down a certain computation can speed up the data structure. The method described here is likely to have further impact in the field of data structure design in extending asymptotic amortized upper bounds to same formula asymptotic worst-case bounds.
11

Charguéraud, Arthur, and François Pottier. "Verifying the Correctness and Amortized Complexity of a Union-Find Implementation in Separation Logic with Time Credits." Journal of Automated Reasoning 62, no. 3 (September 22, 2017): 331–65. http://dx.doi.org/10.1007/s10817-017-9431-7.

12

Trott, Edward W. "Accounting for Debt Instruments Held as Assets." Accounting Horizons 23, no. 4 (December 1, 2009): 457–69. http://dx.doi.org/10.2308/acch.2009.23.4.457.

Annotation:
SYNOPSIS: I propose that all debt instruments held as assets be accounted for using a combination of reported amounts that reflect a measurement at the initial recognition date and fair value. The proposal eliminates the “incurred loss” and “other-than-temporary-impairment (hereafter, OTTI)” models and replaces them with a valuation account that adjusts the amortized reported amounts to fair value each reporting date. Disclosures, on a disaggregated basis, about the debt instruments, charge-offs, and fair value measurements will provide significantly more information than currently provided. The advantages of this proposal are to (1) reduce accounting complexity through use of a single objective for reporting all debt instruments held as assets; (2) use the most relevant, representationally faithful, and verifiable measurement attribute for the assets; and (3) increase transparency and information about the assets.
13

Braeger, Steven, Nicholas Arnold, and Damian Dechev. "Scalable n-body event prediction." Open Computer Science 2, no. 1 (January 1, 2012): 1–15. http://dx.doi.org/10.2478/s13537-012-0005-9.

Annotation:
The general simulation of n-body systems often requires the simulation of pairwise interaction events between the objects. The naive method of simulating these events is an algorithm that polls each pair of objects for an interaction every time step. This algorithm has O(n²) operations per time step in the number of objects. However, this method scales very well to multiple cores. In this paper, we propose a novel method of pairwise simulation that saves a significant amount of computation time by predicting possible future outcomes rather than reacting to them. We demonstrate the implementation of this method, as well as demonstrate that it has amortized O(n) complexity per time step. We also demonstrate an implementation to allow this algorithm to scale comparably well to multiple cores of a shared memory machine.
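The contrast the abstract draws, polling every pair at every time step versus predicting events up front, can be illustrated with a small one-dimensional toy (a hypothetical sketch of the general idea, not the authors' simulator): with constant velocities, every pairwise meeting time can be computed once and replayed from a priority queue.

```python
import heapq

def poll_every_step(pos, vel, dt, steps):
    """Naive approach: O(n^2) pair checks at every one of `steps` time steps."""
    n, hits = len(pos), []
    for s in range(1, steps + 1):
        t0, t1 = (s - 1) * dt, s * dt
        for i in range(n):
            for j in range(i + 1, n):
                d0 = (pos[i] - pos[j]) + (vel[i] - vel[j]) * t0
                d1 = (pos[i] - pos[j]) + (vel[i] - vel[j]) * t1
                if d0 * d1 <= 0:            # sign change: i and j met during this step
                    hits.append((t1, i, j))
    return hits

def predict_events(pos, vel, horizon):
    """Predict each pairwise meeting once, then replay the events in time order."""
    n, heap = len(pos), []
    for i in range(n):
        for j in range(i + 1, n):
            dv = vel[i] - vel[j]
            if dv == 0:
                continue                    # equal velocities: the pair never meets
            t = (pos[j] - pos[i]) / dv      # solve pos_i + vel_i*t == pos_j + vel_j*t
            if 0.0 <= t <= horizon:
                heapq.heappush(heap, (t, i, j))
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

In the toy, the O(n²) prediction cost is paid once and amortized over all later time steps; the paper addresses the harder problem of doing this for general interactions while scaling across the cores of a shared-memory machine.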
14

Amjad, Ghous, Seny Kamara, and Tarik Moataz. "Breach-Resistant Structured Encryption." Proceedings on Privacy Enhancing Technologies 2019, no. 1 (January 1, 2019): 245–65. http://dx.doi.org/10.2478/popets-2019-0014.

Annotation:
Motivated by the problem of data breaches, we formalize a notion of security for dynamic structured encryption (STE) schemes that guarantees security against a snapshot adversary; that is, an adversary that receives a copy of the encrypted structure at various times but does not see the transcripts related to any queries. In particular, we focus on the construction of dynamic encrypted multi-maps which are used to build efficient searchable symmetric encryption schemes, graph encryption schemes and encrypted relational databases. Interestingly, we show that a form of snapshot security we refer to as breach resistance implies previously-studied notions such as a (weaker version) of history independence and write-only obliviousness. Moreover, we initiate the study of dual-secure dynamic STE constructions: schemes that are forward-private against a persistent adversary and breach-resistant against a snapshot adversary. The notion of forward privacy guarantees that updates to the encrypted structure do not reveal their association to any query made in the past. As a concrete instantiation, we propose a new dual-secure dynamic multi-map encryption scheme that outperforms all existing constructions; including schemes that are not dual-secure. Our construction has query complexity that grows with the selectivity of the query and the number of deletes since the client executed a linear-time rebuild protocol which can be de-amortized. We implemented our scheme (with the de-amortized rebuild protocol) and evaluated its concrete efficiency empirically. Our experiments show that it is highly efficient with queries taking less than 1 microsecond per label/value pair.
15

Gibney, Daniel, and Sharma V. Thankachan. "Text Indexing for Regular Expression Matching." Algorithms 14, no. 5 (April 23, 2021): 133. http://dx.doi.org/10.3390/a14050133.

Annotation:
Finding substrings of a text T that match a regular expression p is a fundamental problem. Despite being the subject of extensive research, no solution with a time complexity significantly better than O(|T||p|) has been found. Backurs and Indyk in FOCS 2016 established conditional lower bounds for the algorithmic problem based on the Strong Exponential Time Hypothesis that helps explain this difficulty. A natural question is whether we can improve the time complexity for matching the regular expression by preprocessing the text T? We show that conditioned on the Online Matrix–Vector Multiplication (OMv) conjecture, even with arbitrary polynomial preprocessing time, a regular expression query on a text cannot be answered in strongly sublinear time, i.e., O(|T|^(1−ε)) for any ε>0. Furthermore, if we extend the OMv conjecture to a plausible conjecture regarding Boolean matrix multiplication with polynomial preprocessing time, which we call Online Matrix–Matrix Multiplication (OMM), we can strengthen this hardness result to there being no solution with a query time that is O(|T|^(3/2−ε)). These results hold for alphabet sizes three or greater. We then provide data structures that answer queries in O(|T||p|/τ) time where τ ∈ [1,|T|] is fixed at construction. These include a solution that works for all regular expressions with Expτ·|T| preprocessing time and space. For patterns containing only ‘concatenation’ and ‘or’ operators (the same type used in the hardness result), we provide (1) a deterministic solution which requires Expτ·|T|log2|T| preprocessing time and space, and (2) when |p| ≤ |T|^z for z=2o(log|T|), a randomized solution with amortized query time which answers queries correctly with high probability, requiring Expτ·|T|2Ωlog|T| preprocessing time and space.
16

Lenhof, Hans-Peter, and Michiel Smid. "An Optimal Construction Method for Generalized Convex Layers." International Journal of Computational Geometry & Applications 3, no. 3 (September 1993): 245–67. http://dx.doi.org/10.1142/s0218195993000166.

Annotation:
Let P be a set of n points in the Euclidean plane and let C be a convex figure. In 1985, Chazelle and Edelsbrunner presented an algorithm, which preprocesses P such that for any query point q, the points of P in the translate C+q can be retrieved efficiently. Assuming that constant time suffices for deciding the inclusion of a point in C, they provided a space and query time optimal solution. Their algorithm uses O(n) space. A query with output size k can be solved in O(log n + k) time. The preprocessing step of their algorithm, however, has time complexity O(n²). We show that the usage of a new construction method for layers reduces the preprocessing time to O(n log n). We thus provide the first space, query time and preprocessing time optimal solution for this class of point retrieval problems. Besides, we present two new dynamic data structures for these problems. The first dynamic data structure allows on-line insertions and deletions of points in O((log n)²) time. In this dynamic data structure, a query with output size k can be solved in O(log n + k(log n)²) time. The second dynamic data structure, which allows only semi-online updates, has O((log n)²) amortized update time and O(log n + k) query time.
17

Kaplan, Haim, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir. "Dynamic Planar Voronoi Diagrams for General Distance Functions and Their Algorithmic Applications." Discrete & Computational Geometry 64, no. 3 (September 22, 2020): 838–904. http://dx.doi.org/10.1007/s00454-020-00243-7.

Annotation:
We describe a new data structure for dynamic nearest neighbor queries in the plane with respect to a general family of distance functions. These include L_p-norms and additively weighted Euclidean distances. Our data structure supports general (convex, pairwise disjoint) sites that have constant description complexity (e.g., points, line segments, disks, etc.). Our structure uses O(n log³ n) storage, and requires polylogarithmic update and query time, improving an earlier data structure of Agarwal, Efrat, and Sharir which required O(n^ε) time for an update and O(log n) time for a query [SICOMP 1999]. Our data structure has numerous applications. In all of them, it gives faster algorithms, typically reducing an O(n^ε) factor in the previous bounds to polylogarithmic. In addition, we give here two new applications: an efficient construction of a spanner in a disk intersection graph, and a data structure for efficient connectivity queries in a dynamic disk graph. To obtain this data structure, we combine and extend various techniques from the literature. Along the way, we obtain several side results that are of independent interest. Our data structure depends on the existence and an efficient construction of “vertical” shallow cuttings in arrangements of bivariate algebraic functions. We prove that an appropriate level in an arrangement of a random sample of a suitable size provides such a cutting. To compute it efficiently, we develop a randomized incremental construction algorithm for computing the lowest k levels in an arrangement of bivariate algebraic functions (we mostly consider here collections of functions whose lower envelope has linear complexity, as is the case in the dynamic nearest-neighbor context, under both types of norm). To analyze this algorithm, we also improve a longstanding bound on the combinatorial complexity of the vertical decomposition of these levels. Finally, to obtain our structure, we combine our vertical shallow cutting construction with Chan’s algorithm for efficiently maintaining the lower envelope of a dynamic set of planes in R³. Along the way, we also revisit Chan’s technique and present a variant that uses a single binary counter, with a simpler analysis and improved amortized deletion time (by a logarithmic factor; the insertion and query costs remain asymptotically the same).
18

Rasmussen, Soren, Ethan D. Gutmann, Irene Moulitsas, and Salvatore Filippone. "Fortran Coarray Implementation of Semi-Lagrangian Convected Air Particles within an Atmospheric Model." ChemEngineering 5, no. 2 (May 6, 2021): 21. http://dx.doi.org/10.3390/chemengineering5020021.

Annotation:
This work added semi-Lagrangian convected air particles to the Intermediate Complexity Atmospheric Research (ICAR) model. The ICAR model is a simplified atmospheric model using quasi-dynamical downscaling to gain performance over more traditional atmospheric models. The ICAR model uses Fortran coarrays to split the domain amongst images and handle the halo region communication of the image’s boundary regions. The newly implemented convected air particles use trilinear interpolation to compute initial properties from the Eulerian domain and calculate humidity and buoyancy forces as the model runs. This paper investigated the performance cost and scaling attributes of executing unsaturated and saturated air particles versus the original particle-less model. An in-depth analysis was done on the communication patterns and performance of the semi-Lagrangian air particles, as well as the performance cost of a variety of initial conditions such as wind speed and saturation mixing ratios. This study found that given a linear increase in the number of particles communicated, there is an initial decrease in performance, but that it then levels out, indicating that over the runtime of the model, there is an initial cost of particle communication, but that the computational benefits quickly offset it. The study provided insight into the number of processors required to amortize the additional computational cost of the air particles.
19

Brodal, Gerth Stølting, Shiva Chaudhuri, and Jaikumar Radhakrishnan. "The Randomized Complexity of Maintaining the Minimum." BRICS Report Series 3, no. 40 (June 10, 1996). http://dx.doi.org/10.7146/brics.v3i40.20022.

Annotation:
The complexity of maintaining a set under the operations Insert, Delete and FindMin is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e·2^(2t)) − 1 comparisons for FindMin. If FindMin is replaced by a weaker operation, FindAny, then it is shown that a randomized algorithm with constant expected cost per operation exists; in contrast, it is shown that no deterministic algorithm can have constant cost per operation. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.
20

Bahendwar, Isha Ashish, Ruchit Purshottam Bhardwaj, and S. G. Mundada. "Amortized Complexity Analysis for Red-Black Trees and Splay Trees." SSRN Electronic Journal, 2018. http://dx.doi.org/10.2139/ssrn.3529374.

21

Brodal, Gerth Stølting, Rolf Fagerberg, Christian N. S. Pedersen, and Anna Östlin. "The Complexity of Constructing Evolutionary Trees Using Experiments." BRICS Report Series 8, no. 1 (January 1, 2001). http://dx.doi.org/10.7146/brics.v8i1.20220.

Annotation:
We present tight upper and lower bounds for the problem of constructing evolutionary trees in the experiment model. We describe an algorithm which constructs an evolutionary tree of n species in time O(nd log_d n) using at most n⌈d/2⌉(log_{2⌈d/2⌉−1} n + O(1)) experiments for d > 2, and at most n(log n + O(1)) experiments for d = 2, where d is the degree of the tree. This improves the previous best upper bound by a factor Θ(log d). For d = 2 the previously best algorithm with running time O(n log n) had a bound of 4n log n on the number of experiments. By an explicit adversary argument, we show an Ω(nd log_d n) lower bound, matching our upper bounds and improving the previous best lower bound by a factor Θ(log_d n). Central to our algorithm is the construction and maintenance of separator trees of small height. We present how to maintain separator trees with height log n + O(1) under the insertion of new nodes in amortized time O(log n). Part of our dynamic algorithm is an algorithm for computing a centroid tree in optimal time O(n).
Keywords: Evolutionary trees, Experiment model, Separator trees, Centroid tree, Lower bounds
22

Brodal, Gerth Stølting, Rolf Fagerberg, Allan Grønlund Jørgensen, Gabriel Moruz, and Thomas Mølhave. "Optimal Resilient Dynamic Dictionaries." BRICS Report Series 9, no. 12 (August 13, 2015). http://dx.doi.org/10.7146/brics.v9i12.21966.

Annotation:
We investigate the problem of computing in the presence of faults that may arbitrarily (i.e., adversarily) corrupt memory locations. In the faulty memory model, any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted ones. An upper bound δ on the number of corruptions and O(1) reliable memory cells are provided. In this model, we focus on the design of resilient dictionaries, i.e., dictionaries which are able to operate correctly (at least) on the set of uncorrupted keys. We first present a simple resilient dynamic search tree, based on random sampling, with O(log n + δ) expected amortized cost per operation, and O(n) space complexity. We then propose an optimal deterministic static dictionary supporting searches in Θ(log n + δ) time in the worst case, and we show how to use it in a dynamic setting in order to support updates in O(log n + δ) amortized time. Our dynamic dictionary also supports range queries in O(log n + δ + t) worst case time, where t is the size of the output. Finally, we show that every resilient search tree (with some reasonable properties) must take Ω(log n + δ) worst-case time per search.
Full text: http://dx.doi.org/10.1007/978-3-540-75520-3_32
23

de Andrade Júnior, José Wagner, and Rodrigo Duarte Seabra. "Fully Retroactive Minimum Spanning Tree Problem." Computer Journal, December 8, 2020. http://dx.doi.org/10.1093/comjnl/bxaa135.

Annotation:
This article describes an algorithm that solves a fully dynamic variant of the minimum spanning tree (MST) problem. The fully retroactive MST allows adding an edge to time t, or to obtain the current MST at time t. By using the square root technique and a link-cut tree data structure, it was possible to obtain an algorithm that runs each query in O(√m lg|V(G)|) amortized time, in which |V(G)| is the number of nodes in graph G and m is the size of the timeline. We use a different approach to solve the MST problem instead of the standard algorithms, such as Prim or Kruskal, and this allows using the square root technique to improve the final complexity of the algorithm. Our empirical analysis shows that the proposed algorithm runs faster than re-executing the standard algorithms, and this difference only increases when the number of nodes in these graphs is larger.
24

Brodal, Gerth Stølting, Rolf Fagerberg, and Riko Jacob. "Cache Oblivious Search Trees via Binary Trees of Small Height." BRICS Report Series 8, no. 36 (October 4, 2001). http://dx.doi.org/10.7146/brics.v8i36.21696.

Annotation:
We propose a version of cache oblivious search trees which is simpler than the previous proposal of Bender, Demaine and Farach-Colton and has the same complexity bounds. In particular, our data structure avoids the use of weight balanced B-trees, and can be implemented as just a single array of data elements, without the use of pointers. The structure also improves space utilization. For storing n elements, our proposal uses (1+ε)n times the element size of memory, and performs searches in worst case O(log_B n) memory transfers, updates in amortized O((log² n)/(εB)) memory transfers, and range queries in worst case O(log_B n + k/B) memory transfers, where k is the size of the output. The basic idea of our data structure is to maintain a dynamic binary tree of height log n + O(1) using existing methods, embed this tree in a static binary tree, which in turn is embedded in an array in a cache oblivious fashion, using the van Emde Boas layout of Prokop. We also investigate the practicality of cache obliviousness in the area of search trees, by providing an empirical comparison of different methods for laying out a search tree in memory. The source code of the programs, our scripts and tools, and the data we present here are available online under ftp.brics.dk/RS/01/36/Experiments/.
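The van Emde Boas layout mentioned in the abstract can be sketched in a few lines (a generic illustration using the common split at half the height, not the authors' implementation): the tree is cut in the middle, the top subtree is written out contiguously, then each bottom subtree, and the same rule is applied recursively, which keeps every root-to-leaf search within O(log_B n) memory blocks for any block size B.

```python
def veb_order(height):
    """van Emde Boas layout order of a complete binary tree with `height` levels.

    Nodes are addressed as in a binary heap: the root is 1 and the children of
    node v are 2*v and 2*v + 1. Returns the node indices in array-storage order.
    """
    def layout(root, h):
        if h == 1:
            return [root]
        top_h = h // 2                       # levels kept in the top recursive subtree
        bot_h = h - top_h                    # levels in each bottom subtree
        order = layout(root, top_h)          # top subtree first, stored contiguously
        frontier = [root]                    # nodes on the last level of the top subtree
        for _ in range(top_h - 1):
            frontier = [c for v in frontier for c in (2 * v, 2 * v + 1)]
        for v in frontier:                   # then every bottom subtree, each contiguous
            for child in (2 * v, 2 * v + 1):
                order += layout(child, bot_h)
        return order

    return layout(1, height)

# Example: veb_order(4) == [1, 2, 3, 4, 8, 9, 5, 10, 11, 6, 12, 13, 7, 14, 15]
```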
25

Pagh, Rasmus, and Jakob Pagter. "Optimal Time-Space Trade-Offs for Non-Comparison-Based Sorting." BRICS Report Series 8, no. 2 (January 2, 2001). http://dx.doi.org/10.7146/brics.v8i2.20456.

Annotation:
We study the fundamental problem of sorting n integers of w bits on a unit-cost RAM with word size w, and in particular consider the time-space trade-off (product of time and space in bits) for this problem. For comparison-based algorithms, the time-space complexity is known to be Θ(n²). A result of Beame shows that the lower bound also holds for non-comparison-based algorithms, but no algorithm has met this for time below the comparison-based Ω(n lg n) lower bound. We show that if sorting within some time bound T~ is possible, then time T = O(T~ + n lg* n) can be achieved with high probability using space S = O(n²/T + w), which is optimal. Given a deterministic priority queue using amortized time t(n) per operation and space n^O(1), we provide a deterministic algorithm sorting in time T = O(n(t(n) + lg* n)) with S = O(n²/T + w). Both results require that w ≤ n^(1−Ω(1)). Using existing priority queues and sorting algorithms, this implies that we can deterministically sort time-space optimally in time Θ(T) for T ≥ n(lg lg n)², and with high probability for T ≥ n lg lg n. Our results imply that recent lower bounds for deciding element distinctness in o(n lg n) time are nearly tight.
26

Husfeldt, Thore. "Fully Dynamic Transitive Closure in Plane Dags with one Source and one Sink." BRICS Report Series 1, no. 30 (September 30, 1994). http://dx.doi.org/10.7146/brics.v1i30.21613.

Annotation:
We give an algorithm for the Dynamic Transitive Closure Problem for planar directed acyclic graphs with one source and one sink. The graph can be updated in logarithmic time under arbitrary edge insertions and deletions that preserve the embedding. Queries of the form 'is there a directed path from u to v?' for arbitrary vertices u and v can be answered in logarithmic time. The size of the data structure and the initialisation time are linear in the number of edges. The result enlarges the class of graphs for which a logarithmic (or even polylogarithmic) time dynamic transitive closure algorithm exists. Previously, the only algorithms within the stated resource bounds put restrictions on the topology of the graph or on the delete operation. To obtain our result, we use a new characterisation of the transitive closure in plane graphs with one source and one sink and introduce new techniques to exploit this characterisation. We also give a lower bound of Ω(log n / log log n) on the amortised complexity of the problem in the cell probe model with logarithmic word size. This is the first dynamic directed graph problem with almost matching lower and upper bounds.