Academic literature on the topic 'Deterministic constant time algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deterministic constant time algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deterministic constant time algorithms"

1

Jayanti, Prasad, and Siddhartha Jayanti. "Deterministic Constant-Amortized-RMR Abortable Mutex for CC and DSM." ACM Transactions on Parallel Computing 8, no. 4 (December 31, 2021): 1–26. http://dx.doi.org/10.1145/3490559.

Full text
Abstract:
The abortable mutual exclusion problem, proposed by Scott and Scherer in response to the needs of real-time systems and databases, is a variant of mutual exclusion that allows processes to abort their attempt to acquire the lock. Worst-case constant remote-memory-reference algorithms for mutual exclusion using hardware instructions such as Fetch&Add or Fetch&Store have long existed for both cache-coherent (CC) and distributed shared memory multiprocessors, but no such algorithms are known for abortable mutual exclusion. Even relaxing the worst-case requirement to amortized, algorithms are only known for the CC model. In this article, we improve this state of the art by designing a deterministic algorithm that uses Fetch&Store to achieve amortized O(1) remote memory references in both the CC and distributed shared memory models. Our algorithm supports Fast Abort (a process aborts within six steps of receiving the abort signal) and has the following additional desirable properties: it supports an arbitrary number of processes of arbitrary names, requires only O(1) space per process, and satisfies a novel fairness condition that we call Airline FCFS. Our algorithm is short, with fewer than a dozen lines of code.
APA, Harvard, Vancouver, ISO, and other styles
2

Suel, Torsten. "Permutation Routing and Sorting on Meshes with Row and Column Buses." Parallel Processing Letters 5, no. 1 (March 1995): 63–80. http://dx.doi.org/10.1142/s0129626495000072.

Full text
Abstract:
We study the problems of permutation routing and sorting on several models of meshes with fixed and reconfigurable row and column buses. We describe two fast and fairly simple deterministic algorithms for permutation routing on two-dimensional networks, and a more complicated algorithm for multi-dimensional networks. The algorithms are obtained by converting two known off-line routing schemes into deterministic routing algorithms, and they can be implemented on a variety of different models of meshes with buses. We also give a deterministic algorithm for 1–1 sorting whose running time matches that for permutation routing, and another algorithm that matches the bisection lower bound on reconfigurable networks of arbitrary constant dimension.
APA, Harvard, Vancouver, ISO, and other styles
3

Das, Sajal K., and Ranette H. Halverson. "Simple Deterministic and Randomized Algorithms for Linked List Ranking on the EREW PRAM Model." Parallel Processing Letters 4, no. 1–2 (June 1994): 15–27. http://dx.doi.org/10.1142/s012962649400003x.

Full text
Abstract:
An asynchronous CRCW PRAM (or APRAM) algorithm for linked list ranking, proposed by Martel and Subramonian, performs expected O(n log log n) work employing [Formula: see text] processors. Motivated by their unique approach, this paper proposes two EREW list ranking algorithms – one deterministic and the other randomized. The deterministic algorithm runs in [Formula: see text] time using p processors, where n ≥ p log p. Thus, for p = O(n/log n), it requires O(log n log log n) time and O(n log log n) work. Although not work-optimal, this algorithm is very simple compared to the known work-optimal (deterministic) EREW algorithms for list ranking and has the added advantage of small constant factors in the time and space requirements. The randomized algorithm follows the same line of approach but uses randomization in one step to decrease the time complexity, thus improving on the time complexity of the original algorithm. It requires [Formula: see text] expected time, and hence it is an expected O(log n)-time, work-optimal algorithm employing p = O(n/log n) processors. Furthermore, the randomized algorithm uses less space than the deterministic algorithm.
APA, Harvard, Vancouver, ISO, and other styles
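The pointer-jumping idea at the heart of such PRAM list-ranking algorithms can be imitated sequentially; the sketch below simulates the synchronous parallel rounds (all "processors" read before any writes) and is purely illustrative, not the paper's EREW algorithm:

```python
# List ranking by pointer jumping: each round, every node adds its
# successor's rank to its own and jumps its pointer two hops ahead,
# so distances double and O(log n) rounds suffice.
def list_rank(next_node):
    # next_node[i] is the successor of node i, or None at the tail
    rank = {i: 0 if next_node[i] is None else 1 for i in next_node}
    nxt = dict(next_node)
    while any(s is not None for s in nxt.values()):
        # snapshot both maps: simulates one synchronous parallel round
        old_rank, old_nxt = dict(rank), dict(nxt)
        for i in nxt:
            if old_nxt[i] is not None:
                rank[i] = old_rank[i] + old_rank[old_nxt[i]]
                nxt[i] = old_nxt[old_nxt[i]]
    return rank
```

Each node's final rank is its distance to the tail of the list.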
4

Khadiev, Kamil, and Vladislav Remidovskii. "Classical and Quantum Algorithms for Assembling a Text from a Dictionary." Nonlinear Phenomena in Complex Systems 24, no. 3 (October 12, 2021): 207–21. http://dx.doi.org/10.33581/1561-4085-2021-24-3-207-221.

Full text
Abstract:
We study algorithms for solving the problem of assembling a text (a long string) from a dictionary (a sequence of small strings). The problem has applications in bioinformatics and is connected with the sequence assembly method for reconstructing a long deoxyribonucleic acid (DNA) sequence from small fragments. The problem is assembling a string t of length n from strings s1, ..., sm. Firstly, we provide a classical (randomized) algorithm with running time Õ(n·L^0.5 + L), where L is the sum of the lengths of s1, ..., sm. Secondly, we provide a quantum algorithm with running time Õ(n·L^0.25 + √(mL)). Thirdly, we show a lower bound of Ω(n + L) for any classical (randomized or deterministic) algorithm. So, we obtain a quadratic quantum speed-up with respect to the parameter L, and our quantum algorithm has a smaller running time than any classical (randomized or deterministic) algorithm when the strings in the dictionary have non-constant length.
APA, Harvard, Vancouver, ISO, and other styles
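To make the assembly problem concrete, here is a simple word-break style dynamic program that merely decides assemblability in O(nL) time — far from the paper's Õ bounds, and purely illustrative:

```python
# Decide whether text t can be assembled by concatenating strings
# from the dictionary (strings may be reused), via a reachability DP
# over prefixes of t.
def can_assemble(t, dictionary):
    n = len(t)
    reachable = [False] * (n + 1)
    reachable[0] = True  # the empty prefix is trivially assembled
    for i in range(n):
        if not reachable[i]:
            continue
        for s in dictionary:
            if t.startswith(s, i):
                reachable[i + len(s)] = True
    return reachable[n]
```

For example, "abab" is assemblable from {"ab"} but "aba" is not.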
5

Allender, Eric, V. Arvind, Rahul Santhanam, and Fengming Wang. "Uniform derandomization from pathetic lower bounds." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, no. 1971 (July 28, 2012): 3512–35. http://dx.doi.org/10.1098/rsta.2011.0318.

Full text
Abstract:
The notion of probabilistic computation dates back at least to Turing, who also wrestled with the practical problems of how to implement probabilistic algorithms on machines with, at best, very limited access to randomness. A more recent line of research, known as derandomization, studies the extent to which randomness is superfluous. A recurring theme in the literature on derandomization is that probabilistic algorithms can be simulated quickly by deterministic algorithms, if one can obtain impressive (i.e. superpolynomial, or even nearly exponential) circuit size lower bounds for certain problems. In contrast to what is needed for derandomization, existing lower bounds seem rather pathetic. Here, we present two instances where 'pathetic' lower bounds of the form n^(1+ε) would suffice to derandomize interesting classes of probabilistic algorithms. We show the following: — If the word problem over S_5 requires constant-depth threshold circuits of size n^(1+ε) for some ε > 0, then any language accepted by uniform polynomial-size probabilistic threshold circuits can be solved in subexponential time (and, more strongly, can be accepted by a uniform family of deterministic constant-depth threshold circuits of subexponential size). — If there are no constant-depth arithmetic circuits of size n^(1+ε) for the problem of multiplying a sequence of n 3×3 matrices, then, for every constant d, black-box identity testing for depth-d arithmetic circuits with bounded individual degree can be performed in subexponential time (and even by a uniform family of deterministic constant-depth AC^0 circuits of subexponential size).
APA, Harvard, Vancouver, ISO, and other styles
6

Rajasekaran, Sanguthevar. "On the Euclidean Minimum Spanning Tree Problem." Computing Letters 1, no. 1 (March 6, 2005): 11–14. http://dx.doi.org/10.1163/1574040053326325.

Full text
Abstract:
Given a weighted graph G(V, E), a minimum spanning tree for G can be obtained in linear time using a randomized algorithm, or in nearly linear time using a deterministic algorithm. Given n points in the plane, we can construct a graph with these points as nodes and an edge between every pair of nodes; the weight of any edge is the Euclidean distance between the two points. Finding a minimum spanning tree for this graph is known as the Euclidean minimum spanning tree problem (EMSTP). The minimum spanning tree algorithms alluded to before will run in time O(n^2) (or nearly O(n^2)) on this graph. In this note we point out that it is possible to devise simple algorithms for EMSTP in k dimensions (for any constant k) whose expected run time is O(n), under the assumption that the points are uniformly distributed in the space of interest. CR Categories: F.2.2 Nonnumerical Algorithms and Problems; G.3 Probabilistic Algorithms.
APA, Harvard, Vancouver, ISO, and other styles
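The O(n^2) baseline mentioned in the abstract is simply Prim's algorithm run directly on the implicit complete Euclidean graph; a minimal sketch (the note's expected-linear algorithm for uniformly distributed points is not reproduced here):

```python
import math

# Prim's algorithm on the complete Euclidean graph: O(n^2) time,
# no explicit edge list needed since distances are computed on demand.
def euclidean_mst_weight(points):
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest connection of each point to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return total
```

On the four corners of a unit square the MST uses three unit-length edges, for a total weight of 3.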
7

Efthymiou, Charilaos. "Deterministic counting of graph colourings using sequences of subgraphs." Combinatorics, Probability and Computing 29, no. 4 (June 22, 2020): 555–86. http://dx.doi.org/10.1017/s0963548320000255.

Full text
Abstract:
In this paper we propose a polynomial-time deterministic algorithm for approximately counting the k-colourings of the random graph G(n, d/n), for constant d > 0. In particular, our algorithm computes in polynomial time a $(1\pm n^{-\Omega(1)})$-approximation of the so-called 'free energy' of the k-colourings of G(n, d/n), for $k\geq (1+\varepsilon) d$ with probability $1-o(1)$ over the graph instances. Our algorithm uses spatial correlation decay to compute numerical estimates of marginals of the Gibbs distribution. Spatial correlation decay has been used in different counting schemes for deterministic counting. So far algorithms have exploited a certain kind of set-to-point correlation decay, e.g. the so-called Gibbs uniqueness. Here we deviate from this setting and exploit a point-to-point correlation decay. The spatial mixing requirement is that, for a pair of vertices, the correlation between their corresponding configurations becomes weaker with their distance. Furthermore, our approach generalizes in that it allows us to compute the Gibbs marginals for small sets of nearby vertices. Also, we establish a connection between the fluctuations of the number of colourings of G(n, d/n) and the fluctuations of the number of short cycles and edges in the graph.
APA, Harvard, Vancouver, ISO, and other styles
8

Tarnawski, Jakub. "New graph algorithms via polyhedral techniques." it - Information Technology 63, no. 3 (April 15, 2021): 177–82. http://dx.doi.org/10.1515/itit-2021-0014.

Full text
Abstract:
This article gives a short overview of my dissertation, where new algorithms are given for two fundamental graph problems. We develop novel ways of using linear programming formulations, even exponential-sized ones, to extract structure from problem instances and to guide algorithms in making progress. The first part of the dissertation addresses a benchmark problem in combinatorial optimization: the asymmetric traveling salesman problem (ATSP). It consists in finding the shortest tour that visits all vertices of a given edge-weighted directed graph. A ρ-approximation algorithm for ATSP is one that runs in polynomial time and always produces a tour at most ρ times longer than the shortest tour. Finding such an algorithm with constant ρ had been a long-standing open problem. Here we give such an algorithm. The second part of the dissertation addresses the perfect matching problem. We have known since the 1980s that it has efficient parallel algorithms if the use of randomness is allowed. However, we do not know if randomness is necessary – that is, whether the matching problem is in the class NC. We show that it is in the class quasi-NC. That is, we give a deterministic parallel algorithm that runs in poly-logarithmic time on quasi-polynomially many processors.
APA, Harvard, Vancouver, ISO, and other styles
9

Hu, Xiao-Bing, Ming Wang, Mark S. Leeson, Ezequiel A. Di Paolo, and Hao Liu. "Deterministic Agent-Based Path Optimization by Mimicking the Spreading of Ripples." Evolutionary Computation 24, no. 2 (June 2016): 319–46. http://dx.doi.org/10.1162/evco_a_00156.

Full text
Abstract:
Inspirations from nature have contributed fundamentally to the development of evolutionary computation. Learning from the natural ripple-spreading phenomenon, this article proposes a novel ripple-spreading algorithm (RSA) for the path optimization problem (POP). In nature, a ripple spreads at a constant speed in all directions, and the node closest to the source is the first to be reached. This very simple principle forms the foundation of the proposed RSA. In contrast to most deterministic top-down centralized path optimization methods, such as Dijkstra’s algorithm, the RSA is a bottom-up decentralized agent-based simulation model. Moreover, it is distinguished from other agent-based algorithms, such as genetic algorithms and ant colony optimization, by being a deterministic method that can always guarantee the global optimal solution with very good scalability. Here, the RSA is specifically applied to four different POPs. The comparative simulation results illustrate the advantages of the RSA in terms of effectiveness and efficiency. Thanks to the agent-based and deterministic features, the RSA opens new opportunities to attack some problems, such as calculating the exact complete Pareto front in multiobjective optimization and determining the kth shortest project time in project management, which are very difficult, if not impossible, for existing methods to resolve. The ripple-spreading optimization principle and the new distinguishing features and capacities of the RSA enrich the theoretical foundations of evolutionary computation.
APA, Harvard, Vancouver, ISO, and other styles
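The ripple-spreading principle can be imitated with a toy time-stepped simulation on integer edge lengths: every reached node emits a ripple that grows at unit speed, and a node is reached the first time any ripple arrives. This is a hypothetical sketch of the principle, not the authors' RSA:

```python
# Ripples all spread at the same constant speed, so the first ripple
# to arrive at a node travelled the shortest distance -- reproducing
# Dijkstra-style distances by pure bottom-up simulation.
def ripple_shortest_paths(graph, source):
    # graph: {node: [(neighbor, positive_int_length), ...]}
    dist = {source: 0}
    active = [(source, 0)]          # (origin node, current ripple radius)
    t = 0
    while active:
        t += 1
        next_active = []
        for node, radius in active:
            radius += 1             # every ripple grows by one per tick
            alive = False
            for nb, length in graph[node]:
                if radius < length:
                    alive = True    # still travelling along this edge
                elif radius == length and nb not in dist:
                    dist[nb] = t    # first ripple to arrive wins
                    next_active.append((nb, 0))
            if alive:
                next_active.append((node, radius))
        active = next_active
    return dist
```

Unreachable nodes simply never appear in the returned distance map.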
10

Mehta, Dinesh P., Carl Shetters, and Donald W. Bouldin. "Meta-Algorithms for Scheduling a Chain of Coarse-Grained Tasks on an Array of Reconfigurable FPGAs." VLSI Design 2013 (December 25, 2013): 1–13. http://dx.doi.org/10.1155/2013/249592.

Full text
Abstract:
This paper considers the problem of scheduling a chain of n coarse-grained tasks on a linear array of k reconfigurable FPGAs with the objective of primarily minimizing reconfiguration time. A high-level meta-algorithm along with two detailed meta-algorithms (GPRM and SPRM) that support a wide range of problem formulations and cost functions is presented. GPRM, the more general of the two schemes, reduces the problem to computing a shortest path in a DAG; SPRM, the less general scheme, employs dynamic programming. Both meta-algorithms are linear in n and compute optimal solutions. GPRM can be exponential in k but is nevertheless practical because k is typically a small constant. The deterministic quality of this meta-algorithm and the guarantee of optimal solutions for all of the formulations discussed make this approach a powerful alternative to other meta-techniques such as simulated annealing and genetic algorithms.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Deterministic constant time algorithms"

1

Талмач, Дмитро Павлович. "Детерміновані методи відображення повідомлення в точку еліптичної кривої, заданої у різних формах" [Deterministic methods for mapping a message to a point of an elliptic curve given in different forms]. Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2021. https://ela.kpi.ua/handle/123456789/44276.

Full text
Abstract:
The work is devoted to constructing deterministic polynomial algorithms for encoding sequences of bits into points of elliptic curves represented in different forms. The work presents the basic information on elliptic curves, especially curves in the Edwards form, that is necessary for understanding the subsequent mathematical derivations. Next, the problem of encoding elements of the underlying field, over which the curve is defined, into points of the curve — for use in cryptographic protocols based on hashing or key-encapsulation schemes — is considered in more detail. In the last section, new algorithms are presented and compared.
APA, Harvard, Vancouver, ISO, and other styles
2

Nguyen, Huy Ngoc Ph D. Massachusetts Institute of Technology. "Constant time algorithms in sparse graph model." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62426.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 87-91).
We focus on constant-time algorithms for graph problems in the bounded-degree model. We introduce several techniques to design constant-time approximation algorithms for problems such as Vertex Cover, Maximum Matching, Maximum Weighted Matching, Maximum Independent Set and Set Cover. Some of our techniques can also be applied to design constant-time testers for minor-closed properties. In Chapter 1, we show how to construct a simple oracle that provides query access to a fixed Maximal Independent Set (MIS) of the input graph. More specifically, the oracle answers queries of the form "Is v in the MIS?" for any vertex v in the graph. The oracle runs in constant time, i.e., the time for the oracle to answer a single query is independent of the size of the input graph. Combining this oracle with a simple sampling scheme immediately yields an approximation algorithm for the size of the minimum vertex cover. The second technique, called the oracle hierarchy, transforms classical approximation algorithms into constant-time algorithms that approximate the size of the optimal solution. The technique is applicable to a certain subclass of algorithms that compute a solution in a constant number of phases; in the transformation, the oracle hierarchy uses the MIS oracle to simulate each phase. The problems amenable to these techniques include Maximum Matching, Maximum Weight Matching, Set Cover, and Minimum Dominating Set. For example, for Maximum Matching, we give the first constant-time algorithm that, for the class of graphs of degree bounded by d, computes the maximum matching size to within εn for any ε > 0, where n is the number of vertices in the graph. The running time of the algorithm is independent of n and depends only on d and ε. In Chapter 2, we introduce a new tool called a partitioning oracle, which provides query access to a fixed partition of the input graph. In particular, the oracle answers queries of the form "Which part of the fixed partition contains v?" for any vertex v in the graph. We develop methods for constructing a partitioning oracle for any class of bounded-degree graphs with an excluded minor. For any ε > 0, our partitioning oracle provides query access to a fixed partition of the input constant-degree minor-free graph in which every part has size O(1/ε^2) and the number of edges removed is at most εn. We illustrate the power of this technique by using it to extend and simplify a number of previous approximation and testing results for sparse graphs, as well as to provide new results that were unachievable with existing techniques. For instance: (i) we give constant-time approximation algorithms for the size of the minimum vertex cover, the minimum dominating set, and the maximum independent set for any class of graphs with an excluded minor; (ii) we give a simple proof that any minor-closed graph property is testable in constant time in the bounded-degree model. Finally, in Chapter 3, we construct a more efficient partitioning oracle for graphs with constant treewidth. Although the partitioning oracle in Chapter 2 runs in time independent of the size of the input graph, it has to make 2^poly(1/ε) queries to the input graph to answer a query about the partition. Our new partitioning oracle improves this query complexity to poly(1/ε) for graphs with constant treewidth. The new oracle can be used to test constant treewidth in poly(1/ε) time in the bounded-degree model. Another application is a poly(1/ε)-time algorithm that approximates the maximum matching size, the minimum vertex cover size, and the minimum dominating set size up to an additive εn in bounded-treewidth graphs.
by Huy Ngoc Nguyen.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
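The MIS oracle described in the thesis abstract can be sketched with the standard random-rank construction: a vertex is in the greedy MIS exactly when none of its lower-ranked neighbors is. This is a simplified, sequential rendering of the idea (the names `make_mis_oracle` and the memoization are illustrative choices), not the thesis's exact construction:

```python
import random

# Query access to a fixed MIS: assign each vertex a random rank; v is
# in the MIS iff no neighbor of smaller rank is in the MIS. Answering
# a query only explores a local neighbourhood, so on bounded-degree
# graphs the expected work per query is independent of the graph size.
def make_mis_oracle(adj, seed=0):
    rng = random.Random(seed)       # fixed seed -> a fixed MIS
    rank = {v: rng.random() for v in adj}
    memo = {}

    def in_mis(v):
        if v not in memo:
            memo[v] = all(not in_mis(u) for u in adj[v]
                          if rank[u] < rank[v])
        return memo[v]

    return in_mis
```

All queries against one oracle are answered consistently with the same MIS, which is what makes the sampling-based size estimation possible.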
3

Domingues, Riaal. "A polynomial time algorithm for prime recognition." Diss., Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08212007-100529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yoshida, Yuichi. "Studies on Constant-Time Algorithms for Bounded-Degree Graphs and Constraint Satisfaction Problems." 京都大学 (Kyoto University), 2012. http://hdl.handle.net/2433/157487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Anderson, Robert Lawrence. "An Exposition of the Deterministic Polynomial-Time Primality Testing Algorithm of Agrawal-Kayal-Saxena." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd869.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jasanský, Michal. "Využití prostředků umělé inteligence pro podporu na kapitálových trzích." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2013. http://www.nusl.cz/ntk/nusl-224231.

Full text
Abstract:
This diploma thesis deals with the prediction of financial time series on capital markets using artificial intelligence methods. There are created several dynamic architectures of artificial neural networks, which are learned and subsequently used for prediction of future movements of shares. Based on the results an assessment and recommendations for working with artificial neural networks are provided.
APA, Harvard, Vancouver, ISO, and other styles
7

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affects the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios shows that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. 
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
APA, Harvard, Vancouver, ISO, and other styles
8

Hunter, Brandon. "Channel Probing for an Indoor Wireless Communications Channel." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/64.

Full text
Abstract:
The statistics of the amplitude, time and angle of arrival of multipaths in an indoor environment are all necessary components of multipath models used to simulate the performance of spatial diversity in receive antenna configurations. The model presented by Saleh and Valenzuela, later extended by Spencer et al., included all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel. The addition of this parameter allows spatial diversity at the transmitter, along with the receiver, to be simulated. The process of going from raw measurement data to discrete arrivals and then to clustered arrivals is analyzed. Many possible errors associated with discrete arrival processing are discussed along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses are pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
APA, Harvard, Vancouver, ISO, and other styles
9

Yeo, Hui-Ling [楊惠玲]. "Constant-time Algorithms for Some Digraph Problems on RMESH." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/06439018185264732057.

Full text
Abstract:
Master's thesis, National Taiwan Normal University, Graduate Institute of Information and Computer Education (ROC academic year 89).
A directed reconfigurable mesh (abbreviated DRM) is a parallel computation model consisting of a 3-dimensional array of n^3 processors and a reconfigurable bus system. In this 3-D model, each processor has switches for six directions, with one input port and one output port per direction. We can dynamically set up the configurations of all processors to form different, independent sub-buses in the processor array. On each sub-bus, data can be broadcast in a fixed unit of time. Many researchers have developed efficient or constant-time algorithms on this parallel computation model to solve complicated problems. In this thesis, we propose a new O(1)-time algorithm for computing the transitive closure of a digraph. Our algorithm is designed on a 3-dimensional n×n×n DRM, where n is the number of vertices in the graph. Using the O(1)-time transitive closure algorithm and properties of digraphs, we are able to develop many other related digraph algorithms that take O(1) time on a 3-dimensional n×n×n DRM. These problems include strongly-connected-component testing, finding the strongly connected components, topological sort, and single-source shortest paths in acyclic digraphs.
APA, Harvard, Vancouver, ISO, and other styles
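The transitive closure that the thesis computes in O(1) time on a DRM can be computed sequentially by repeated Boolean matrix squaring; the sketch below only fixes the definition of the problem and models none of the reconfigurable-mesh parallelism:

```python
# Transitive closure via O(log n) Boolean squarings of (A | I):
# after squaring k times, reach[i][j] is True iff j is reachable
# from i in at most 2^k steps.
def transitive_closure(adj_matrix):
    n = len(adj_matrix)
    reach = [[bool(adj_matrix[i][j]) or i == j for j in range(n)]
             for i in range(n)]
    steps = 1
    while steps < n:                 # paths of length >= n never help
        reach = [[any(reach[i][k] and reach[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        steps *= 2
    return reach
```

The squaring structure is what parallel models exploit: each squaring is one round of matrix multiplication, and only O(log n) rounds are needed.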
10

Megow, Nicole, and Andreas S. Schulz. "Scheduling to Minimize Average Completion Time Revisited: Deterministic On-line Algorithms." 2004. http://hdl.handle.net/1721.1/4048.

Full text
Abstract:
We consider the scheduling problem of minimizing the average weighted completion time on identical parallel machines when jobs are arriving over time. For both the preemptive and the nonpreemptive setting, we show that straightforward extensions of Smith's ratio rule yield smaller competitive ratios compared to the previously best-known deterministic on-line algorithms, which are (4+ε)-competitive in either case. Our preemptive algorithm is 2-competitive, which actually meets the competitive ratio of the currently best randomized on-line algorithm for this scenario. Our nonpreemptive algorithm has a competitive ratio of 3.28. Both results are characterized by a surprisingly simple analysis; moreover, the preemptive algorithm also works in the less clairvoyant environment in which only the ratio of weight to processing time of a job becomes known at its release date, but neither its actual weight nor its processing time. In the corresponding nonpreemptive situation, every on-line algorithm has an unbounded competitive ratio.
APA, Harvard, Vancouver, ISO, and other styles
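Smith's ratio rule, which the abstract's on-line algorithms extend, is easy to state offline for a single machine: sequencing jobs by non-increasing weight-to-processing-time ratio minimizes the weighted sum of completion times when all jobs are available at time 0. A minimal sketch (the paper's variants additionally handle release dates and parallel machines):

```python
# Smith's ratio rule (WSPT) on a single machine: schedule in order of
# smallest processing_time / weight first, i.e. largest weight density.
def weighted_completion_time(jobs):
    # jobs: list of (weight, processing_time), weights assumed positive
    order = sorted(jobs, key=lambda j: j[1] / j[0])
    t = total = 0
    for w, p in order:
        t += p              # job completes at the current makespan
        total += w * t
    return total
```

For jobs (w=1, p=3) and (w=3, p=1), running the heavy short job first gives total 3·1 + 1·4 = 7, versus 1·3 + 3·4 = 15 the other way around.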

Books on the topic "Deterministic constant time algorithms"

1

Allen, Michael P., and Dominic J. Tildesley. Molecular dynamics. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198803195.003.0003.

Full text
Abstract:
This chapter introduces the classical equations of motion for a system of molecules, and describes their solution by stable, accurate, time-stepping algorithms. Simple atomic systems, rigid molecules, and flexible molecules with and without constraints, are treated, with examples of program code. Quaternions are introduced as useful parameters for solving the rigid-body equations of motion of molecules. A simple example of a multiple timestep algorithm is given, and there is a brief summary of event-driven (hard-particle) dynamics. Examples of constant-temperature molecular dynamics using stochastic and deterministic methods are presented, and the corresponding constant-pressure molecular dynamics methods for fixed and variable box-shape are described. The molecular dynamics method is extended to the treatment of polarizable systems, and dynamical simulation of the grand canonical ensemble is mentioned.
APA, Harvard, Vancouver, ISO, and other styles
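A velocity-Verlet step of the kind the chapter describes can be written in a few lines; this is a 1-D sketch with unit mass (real MD codes handle many particles, pair forces, and boundary conditions):

```python
# Velocity Verlet: a stable, time-reversible time-stepping scheme for
# Newton's equations, built from a half-kick, a drift, and a second
# half-kick using the force at the new position.
def velocity_verlet(x, v, force, dt, n_steps):
    f = force(x)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * f        # half-kick
        x = x + dt * v_half              # drift
        f = force(x)                     # recompute force at new position
        v = v_half + 0.5 * dt * f        # second half-kick
    return x, v
```

For a harmonic oscillator (force = −x), the scheme conserves total energy to within O(dt^2), which is why it is the workhorse integrator of molecular dynamics.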
2

Dukeman, G. A., and United States National Aeronautics and Space Administration, eds. Examination of a Practical Aerobraking Guidance Algorithm. Washington, DC: American Institute of Aeronautics and Astronautics, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Deterministic constant time algorithms"

1

Grandoni, Fabrizio, Stefano Leonardi, Piotr Sankowski, Chris Schwiegelshohn, and Shay Solomon. "(1 + ∊)-Approximate Incremental Matching in Constant Deterministic Amortized Time." In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 1886–98. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2019. http://dx.doi.org/10.1137/1.9781611975482.114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Albers, Susanne, and Alexander Eckl. "Explorable Uncertainty in Scheduling with Non-uniform Testing Times." In Approximation and Online Algorithms, 127–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80879-2_9.

Full text
Abstract:
The problem of scheduling with testing in the framework of explorable uncertainty models environments where some preliminary action can influence the duration of a task. In the model, each job has an unknown processing time that can be revealed by running a test. Alternatively, jobs may be run untested for the duration of a given upper limit. Recently, Dürr et al. [4] have studied the setting where all testing times are of unit size and have given lower and upper bounds for the objectives of minimizing the sum of completion times and the makespan on a single machine. In this paper, we extend the problem to non-uniform testing times and present the first competitive algorithms. The general setting is motivated, for example, by online user surveys for market prediction or querying centralized databases in distributed computing. Introducing general testing times gives the problem a new flavor and requires updated methods with new techniques in the analysis. We present constant competitive ratios for the objective of minimizing the sum of completion times in the deterministic case, both in the non-preemptive and preemptive setting. For the preemptive setting, we additionally give a first lower bound. We also present a randomized algorithm with improved competitive ratio. Furthermore, we give tight competitive ratios for the objective of minimizing the makespan, both in the deterministic and the randomized setting.
APA, Harvard, Vancouver, ISO, and other styles
3

Karpinski, Marek, and Yakov Nekrich. "Predecessor Queries in Constant Time?" In Algorithms – ESA 2005, 238–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11561071_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hoffmann, Michael, Vincent Kusters, and Tillmann Miltzow. "Halving Balls in Deterministic Linear Time." In Algorithms - ESA 2014, 566–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44777-2_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Itkis, Gene, Chengdian Lin, and Janos Simon. "Deterministic, constant space, self-stabilizing leader election on uniform rings." In Distributed Algorithms, 288–302. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0022154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pietracaprina, Andrea, and Geppino Pucci. "Tight bounds on deterministic PRAM emulations with constant redundancy." In Algorithms — ESA '94, 391–400. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/bfb0049425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mansour, Yishay, Boaz Patt-Shamir, and Shai Vardi. "Constant-Time Local Computation Algorithms." In Approximation and Online Algorithms, 110–21. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-28684-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Konyagin, Sergei, and Carl Pomerance. "On Primes Recognizable in Deterministic Polynomial Time." In Algorithms and Combinatorics, 176–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-642-60408-9_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lenz, Tobias. "Deterministic Splitter Finding in a Stream with Constant Storage and Guarantees." In Algorithms and Computation, 26–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11940128_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Brodnik, Andrej, and J. Ian Munro. "Membership in constant time and minimum space." In Algorithms — ESA '94, 72–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/bfb0049398.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Deterministic constant time algorithms"

1

Marchiori, Dúnia, Ricardo Custódio, Daniel Panario, and Lucia Moura. "Towards constant-time probabilistic root finding for code-based cryptography." In Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbseg.2021.17313.

Full text
Abstract:
In code-based cryptography, deterministic algorithms are used in the root-finding step of the decryption process. However, probabilistic algorithms are more time efficient than deterministic ones for large fields. These algorithms can be useful for long-term security where larger parameters are relevant. Still, current probabilistic root-finding algorithms suffer from time variations, making them susceptible to timing side-channel attacks. To prevent these attacks, we propose a countermeasure to a probabilistic root-finding algorithm so that its execution time does not depend on the degree of the input polynomial but on the cryptosystem parameters. We compare the performance of our proposed algorithm to other root-finding algorithms already used in code-based cryptography. In general, our method is faster than the straightforward algorithm in Classic McEliece. The results also show the range of degrees in larger finite fields where our proposed algorithm is faster than the Additive Fast Fourier Transform algorithm.
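The core countermeasure idea, making running time depend on a fixed system parameter rather than on the input degree, can be sketched with degree-independent Horner evaluation. The field, parameter names, and trial-evaluation root search below are toy stand-ins, not the authors' algorithm or the Classic McEliece parameters.

```python
# Degree-independent polynomial evaluation: always iterate over
# MAX_DEG + 1 coefficient slots (a hypothetical system parameter),
# padding with zeros, so running time does not leak the true degree.
# A small prime field stands in for the binary extension fields used
# in code-based cryptography.

P = 2**13 - 1    # illustrative prime field
MAX_DEG = 16     # fixed "cryptosystem parameter" in this sketch

def eval_fixed(coeffs, x):
    """Horner evaluation; coeffs[i] is the coefficient of x**i."""
    padded = list(coeffs) + [0] * (MAX_DEG + 1 - len(coeffs))
    acc = 0
    for c in reversed(padded):   # same iteration count on every call
        acc = (acc * x + c) % P
    return acc

def roots_fixed(coeffs):
    """Trial evaluation at every field element: cost depends only on
    P and MAX_DEG, never on the degree of the input polynomial."""
    return [x for x in range(P) if eval_fixed(coeffs, x) == 0]

# x**2 - 5x + 6 = (x - 2)(x - 3)
roots = roots_fixed([6, P - 5, 1])
```

A production countermeasure would also avoid secret-dependent branches and table lookups; this sketch only shows degree independence.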
APA, Harvard, Vancouver, ISO, and other styles
3

Khonji, Majid, Ashkan Jasour, and Brian Williams. "Approximability of Constant-horizon Constrained POMDP." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/775.

Full text
Abstract:
Partially Observable Markov Decision Process (POMDP) is a fundamental framework for planning and decision making under uncertainty. POMDP is known to be intractable to solve or even approximate when the planning horizon is long (i.e., within a polynomial number of time steps). Constrained POMDP (C-POMDP) allows constraints to be specified on some aspects of the policy in addition to the objective function. When the constraints involve bounding the probability of failure, the problem is called Chance-Constrained POMDP (CC-POMDP). Our first contribution is a reduction from CC-POMDP to C-POMDP and a novel Integer Linear Programming (ILP) formulation. Thus, any algorithm for the latter problem can be utilized to solve any instance of the former. Second, we show that unlike POMDP, when the length of the planning horizon is constant, (C)C-POMDP is NP-hard. Third, we present the first Fully Polynomial Time Approximation Scheme (FPTAS) that computes (near) optimal deterministic policies for constant-horizon (C)C-POMDP in polynomial time.
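Why a constant horizon helps can be seen from a toy exact evaluator: with horizon H fixed, a deterministic policy induces only constantly many observation histories, so its expected reward is computable by direct enumeration. The tiny two-state POMDP below is entirely hypothetical and is not the paper's FPTAS.

```python
# Exact evaluation of a deterministic policy on a tiny hypothetical
# POMDP with constant horizon H: the tree of observation histories has
# constant size, so expected total reward is computed by enumeration.

H = 2
T = {(0, 0): [0.9, 0.1], (0, 1): [0.5, 0.5],   # T[s, a] = dist over next state
     (1, 0): [0.2, 0.8], (1, 1): [0.5, 0.5]}
Z = {0: [0.8, 0.2], 1: [0.3, 0.7]}             # Z[s'] = dist over observations
R = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): -1.0, (1, 1): 0.0}

def evaluate(policy, belief, hist=()):
    """Expected total reward of `policy` (a map from observation
    histories to actions) starting from `belief`."""
    if len(hist) == H:
        return 0.0
    a = policy(hist)
    total = sum(b * R[s, a] for s, b in enumerate(belief))
    for o in (0, 1):
        # joint probability of next state sp and observation o
        joint = [sum(belief[s] * T[s, a][sp] for s in (0, 1)) * Z[sp][o]
                 for sp in (0, 1)]
        p_o = sum(joint)
        if p_o > 0:
            total += p_o * evaluate(policy, [j / p_o for j in joint],
                                    hist + (o,))
    return total

val = evaluate(lambda hist: 0, [1.0, 0.0])  # the "always act 0" policy
```

A chance constraint would be handled the same way, accumulating failure probability instead of reward over the same constant-size tree.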
APA, Harvard, Vancouver, ISO, and other styles
4

Patankar, Ravindra, and Asok Ray. "Fatigue Crack Propagation Under Variable-Amplitude Load: Part I — A Deterministic State-Space Model." In ASME 1998 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/imece1998-0228.

Full text
Abstract:
The two-part paper addresses modeling of fatigue crack damage in metallic structures for on-line diagnostics and health monitoring of operating machinery. This paper, the first part, presents a dynamical model of fatigue crack propagation in the deterministic state-space setting based on the crack closure concept under cyclic stress excitation of variable-amplitude loading, where the model state variables are crack length and crack opening stress. The second part, a companion paper, extends the deterministic model to formulate a stochastic state-space model of fatigue crack propagation under both constant-amplitude and variable-amplitude load. The state-space model is capable of capturing the effects of stress overload and underload on crack retardation and sequence effects, and the model predictions are in fair agreement with experimental data on the 7075-T6 aluminum alloy. Furthermore, the state-space model recursively computes the crack opening stress via a simple functional relationship and does not require a stacked array of peaks and valleys of stress history for its execution. Therefore, savings in both computation time and memory requirement are significant. As such, the state-space model of fatigue crack propagation allows real-time execution of decision algorithms for diagnostics, risk analysis, and life prediction on inexpensive platforms such as a Pentium processor.
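A minimal state-space crack-growth recursion illustrates the general modeling style. The sketch uses a plain Paris-law update with made-up constants; it is not the authors' crack-closure model, which carries the crack opening stress as a second state variable.

```python
# One-state crack-growth recursion in the state-space spirit: advance
# the crack length a by a Paris-law increment per load cycle.  The
# constants below are hypothetical.

import math

C, M = 1e-10, 3.0   # hypothetical Paris-law constants

def grow(a0, stress_ranges):
    """Return the crack-length history under a variable-amplitude load."""
    a, history = a0, [a0]
    for ds in stress_ranges:
        dK = ds * math.sqrt(math.pi * a)   # stress intensity factor range
        a = a + C * dK**M                  # da/dN = C * (dK)**M
        history.append(a)
    return history

hist = grow(0.001, [100.0] * 1000)   # constant-amplitude example
```

Because the recursion keeps only the current state, it needs no stored stress history, which is the memory advantage the abstract points to.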
APA, Harvard, Vancouver, ISO, and other styles
5

Andrade de Melo, Alexsander, and Mateus De Oliveira Oliveira. "Symbolic Solutions for Symbolic Constraint Satisfaction Problems." In 17th International Conference on Principles of Knowledge Representation and Reasoning {KR-2020}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/kr.2020/6.

Full text
Abstract:
A fundamental drawback that arises when one is faced with the task of deterministically certifying solutions to computational problems in PSPACE is the fact that witnesses may have superpolynomial size, assuming that NP is not equal to PSPACE. Therefore, the complexity of such a deterministic verifier may already be super-polynomially lower-bounded by the size of a witness. In this work, we introduce a new symbolic framework to address this drawback. More precisely, we introduce a PSPACE-hard notion of symbolic constraint satisfaction problem where both instances and solutions for these instances are implicitly represented by ordered decision diagrams (i.e. read-once, oblivious, branching programs). Our main result states that given an ordered decision diagram D of length k and width w specifying a CSP instance, one can determine in time f(w,w')*k whether there is an ODD of width at most w' encoding a solution for this instance. Intuitively, while the parameter w quantifies the complexity of the instance, the parameter w' quantifies the complexity of a prospective solution. We show that CSPs of constant width can be used to formalize natural PSPACE hard problems, such as reachability of configurations for Turing machines working in nondeterministic linear space. For such problems, our main result immediately yields an algorithm that determines the existence of solutions of width w in time g(w)*n, where g:N->N is a suitable computable function, and n is the size of the input.
APA, Harvard, Vancouver, ISO, and other styles
6

Ellen, Faith, Barun Gorain, Avery Miller, and Andrzej Pelc. "Constant-Length Labeling Schemes for Deterministic Radio Broadcast." In SPAA '19: 31st ACM Symposium on Parallelism in Algorithms and Architectures. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3323165.3323194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ellen, Faith, and Seth Gilbert. "Constant-Length Labelling Schemes for Faster Deterministic Radio Broadcast." In SPAA '20: 32nd ACM Symposium on Parallelism in Algorithms and Architectures. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3350755.3400238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ito, Hiro. "Constant-Time Algorithms for Complex Networks." In 2016 3rd Asia-Pacific World Congress on Computer Science and Engineering (APWC on CSE). IEEE, 2016. http://dx.doi.org/10.1109/apwc-on-cse.2016.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

DuPont, Bryony L., Jonathan Cagan, and Patrick Moriarty. "Optimization of Wind Farm Layout and Wind Turbine Geometry Using a Multi-Level Extended Pattern Search Algorithm That Accounts for Variation in Wind Shear Profile Shape." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-70290.

Full text
Abstract:
This paper presents a multi-level Extended Pattern Search algorithm (EPS) to optimize both the local positioning and geometry of wind turbines on a wind farm. Additionally, this work begins to draw attention to the effects of atmospheric stability on wind farm power development. The wind farm layout optimization problem involves optimizing the local position and size of wind turbines such that the aerodynamic effects of upstream turbines are reduced, thereby increasing the effective wind speed at each turbine, allowing it to develop more power. The extended pattern search, employed within a multi-agent system architecture, uses a deterministic approach with stochastic extensions to avoid local minima and converge on superior solutions compared to other algorithms. The EPS presented herein is used in an iterative, hierarchical scheme: an overarching pattern search determines individual turbine positioning, then a sub-level EPS determines the optimal hub height and rotor for each turbine, and the entire search is iterated. This work also explores the wind shear profile shape to better estimate the effects of changes in the atmosphere, specifically the changes in wind speed with respect to height, on the total power development of the farm. This consideration shows how even slight changes in time of day, hub height, and farm location can impact the resulting power. The objective function used in this work is the maximization of profit. The farm installation cost is estimated using a data surface derived from the National Renewable Energy Laboratory (NREL) JEDI wind model. Two wind cases are considered: a test case utilizing constant wind speed and unidirectional wind, and a more realistic wind case that considers three discrete wind speeds and varying wind directions, each of which is represented by a fraction of occurrence. Resulting layouts indicate the effects of more accurate cost and power modeling, partial wake interaction, as well as the differences attributed to including and neglecting the effects of atmospheric stability on the wind shear profile shape.
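The core move of a pattern search, polling coordinate directions and contracting the step size when no poll improves the objective, can be sketched compactly. This is only the basic deterministic search, not the paper's multi-level EPS with stochastic extensions; the quadratic objective is made up.

```python
# Bare-bones coordinate pattern search: poll +/- step in each
# coordinate, accept any improvement, and halve the step when a full
# poll fails.

def pattern_search(f, x0, step=1.0, tol=1e-6):
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5   # contract the mesh around the incumbent
    return x, fx

best, val = pattern_search(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                           [0.0, 0.0])
```

The multi-level version would call a search like this once per turbine variable inside an outer positioning loop.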
APA, Harvard, Vancouver, ISO, and other styles
10

Nguyen, Huy N., and Krzysztof Onak. "Constant-Time Approximation Algorithms via Local Improvements." In 2008 IEEE 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS). IEEE, 2008. http://dx.doi.org/10.1109/focs.2008.81.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Deterministic constant time algorithms"

1

Miles, Gaines E., Yael Edan, F. Tom Turpin, Avshalom Grinstein, Thomas N. Jordan, Amots Hetzroni, Stephen C. Weller, Marvin M. Schreiber, and Okan K. Ersoy. Expert Sensor for Site Specification Application of Agricultural Chemicals. United States Department of Agriculture, August 1995. http://dx.doi.org/10.32747/1995.7570567.bard.

Full text
Abstract:
In this work, multispectral reflectance images are used in conjunction with a neural network classifier for the purpose of detecting and classifying weeds under real field conditions. Multispectral reflectance images which contained different combinations of weeds and crops were taken under actual field conditions. This multispectral reflectance information was used to develop algorithms that could segment the plants from the background as well as classify them into weeds or crops. In order to segment the plants from the background, the multispectral reflectance of plants and background was studied and a relationship was derived. It was found that using a ratio of two wavelength reflectance images (750 nm and 670 nm) it was possible to segment the plants from the background. Once this was accomplished, it was then possible to classify the segmented images into weed or crop by use of the neural network. The neural network developed for this work is a modification of the standard learning vector quantization algorithm. This neural network was modified by replacing the time-varying adaptation gain with a constant adaptation gain and a binary reinforcement function. This improved accuracy and training time, and introduced several new properties such as hill climbing and momentum addition. The network was trained and tested with different wavelength combinations in order to find the best results. Finally, the results of the classifier were evaluated using a pixel-based method and a block-based method. In the pixel-based method, every single pixel is evaluated to test whether it was classified correctly or not; the best weed classification accuracy was 81%, with an associated crop classification accuracy of 57%. In the block-based classification method, the image was divided into blocks and each block was evaluated to determine whether it contained weeds or not. Different block sizes and thresholds were tested. The best results for this method were 97% for a block size of 8 inches and a pixel threshold of 60. A simulation model was developed to 1) quantify the effectiveness of a site-specific sprayer and 2) evaluate the influence of different design parameters on the efficiency of the site-specific sprayer. In each iteration of this model, infected areas (weed patches) in the field were randomly generated and the amount of herbicide required for spraying these areas was calculated. The effectiveness of the sprayer was estimated for different stain sizes, nozzle types (conic and flat), nozzle sizes, and stain detection levels of the identification system. Simulation results indicated that the flat nozzle is much more effective than the conic nozzle, and its relative efficiency is greater for small nozzle sizes. By using a site-specific sprayer, the average ratio between the sprayed areas and the stain areas is about 1.1 to 1.8, which can save up to 92% of herbicides, especially when the proportion of the stain areas is small.
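The modified LVQ update described here, a constant adaptation gain with a binary reinforcement sign, reduces to a one-line prototype move. The sketch below uses hypothetical prototypes, labels, and gain value; it is not the trained classifier from the report.

```python
# One step of the modified LVQ rule sketched from the abstract: find
# the nearest prototype, then move it toward the sample with a constant
# gain if the labels agree, or away from it if they disagree (binary
# reinforcement).

def lvq_step(prototypes, labels, x, y, gain=0.1):
    """Update the nearest prototype in place; returns its index."""
    dists = [sum((p - xi) ** 2 for p, xi in zip(proto, x))
             for proto in prototypes]
    k = dists.index(min(dists))                # nearest prototype
    sign = 1.0 if labels[k] == y else -1.0     # binary reinforcement
    prototypes[k] = [p + sign * gain * (xi - p)
                     for p, xi in zip(prototypes[k], x)]
    return k

protos = [[0.0, 0.0], [1.0, 1.0]]
labels = ["weed", "crop"]
k = lvq_step(protos, labels, [0.2, 0.0], "weed")
```

The constant gain is what distinguishes this from standard LVQ, whose adaptation gain decays over training time.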
APA, Harvard, Vancouver, ISO, and other styles