Journal articles on the topic 'Deterministic constant time algorithms'


Consult the top 50 journal articles for your research on the topic 'Deterministic constant time algorithms.'


1

Jayanti, Prasad, and Siddhartha Jayanti. "Deterministic Constant-Amortized-RMR Abortable Mutex for CC and DSM." ACM Transactions on Parallel Computing 8, no. 4 (December 31, 2021): 1–26. http://dx.doi.org/10.1145/3490559.

Abstract:
The abortable mutual exclusion problem, proposed by Scott and Scherer in response to the needs in real-time systems and databases, is a variant of mutual exclusion that allows processes to abort from their attempt to acquire the lock. Worst-case constant remote memory reference algorithms for mutual exclusion using hardware instructions such as Fetch&Add or Fetch&Store have long existed for both cache coherent (CC) and distributed shared memory multiprocessors, but no such algorithms are known for abortable mutual exclusion. Even relaxing the worst-case requirement to amortized, algorithms are only known for the CC model. In this article, we improve this state of the art by designing a deterministic algorithm that uses Fetch&Store to achieve amortized O(1) remote memory references in both the CC and distributed shared memory models. Our algorithm supports Fast Abort (a process aborts within six steps of receiving the abort signal) and has the following additional desirable properties: it supports an arbitrary number of processes of arbitrary names, requires only O(1) space per process, and satisfies a novel fairness condition that we call Airline FCFS. Our algorithm is short, with fewer than a dozen lines of code.
2

SUEL, TORSTEN. "PERMUTATION ROUTING AND SORTING ON MESHES WITH ROW AND COLUMN BUSES." Parallel Processing Letters 05, no. 01 (March 1995): 63–80. http://dx.doi.org/10.1142/s0129626495000072.

Abstract:
We study the problems of permutation routing and sorting on several models of meshes with fixed and reconfigurable row and column buses. We describe two fast and fairly simple deterministic algorithms for permutation routing on two-dimensional networks, and a more complicated algorithm for multi-dimensional networks. The algorithms are obtained by converting two known off-line routing schemes into deterministic routing algorithms, and they can be implemented on a variety of different models of meshes with buses. We also give a deterministic algorithm for 1–1 sorting whose running time matches that for permutation routing, and another algorithm that matches the bisection lower bound on reconfigurable networks of arbitrary constant dimension.
3

DAS, SAJAL K., and RANETTE H. HALVERSON. "SIMPLE DETERMINISTIC AND RANDOMIZED ALGORITHMS FOR LINKED LIST RANKING ON THE EREW PRAM MODEL." Parallel Processing Letters 04, no. 01n02 (June 1994): 15–27. http://dx.doi.org/10.1142/s012962649400003x.

Abstract:
An asynchronous, CRCW PRAM (or APRAM) algorithm for linked list ranking, proposed by Martel and Subramonian, performs O(n log log n) expected work employing [Formula: see text] processors. Motivated by their unique approach, this paper proposes two EREW list ranking algorithms – one deterministic and the other randomized. The deterministic algorithm runs in O((n/p) log log n) time using p processors, where n ≥ p log p. Thus, for p = O(n/log n), it requires O(log n log log n) time and O(n log log n) work. Although not work-optimal, this algorithm is very simple compared to the known work-optimal (deterministic) EREW algorithms for list ranking and has the added advantage of small constant factors in the time and space requirements. The randomized algorithm follows the same line of approach, but uses randomization in one step to decrease the time complexity, thus improving on the time complexity of the original algorithm. It requires O(n/p) expected time, and hence it is an O(log n) expected-time, work-optimal algorithm employing p = O(n/log n) processors. Furthermore, the randomized algorithm uses less space than the deterministic algorithm.
4

Khadiev, Kamil, and Vladislav Remidovskii. "Classical and Quantum Algorithms for Assembling a Text from a Dictionary." Nonlinear Phenomena in Complex Systems 24, no. 3 (October 12, 2021): 207–21. http://dx.doi.org/10.33581/1561-4085-2021-24-3-207-221.

Abstract:
We study algorithms for solving the problem of assembling a text (long string) from a dictionary (a sequence of small strings). The problem has an application in bioinformatics and has a connection with the sequence assembly method for reconstructing a long deoxyribonucleic acid (DNA) sequence from small fragments. The problem is assembling a string t of length n from strings s1,...,sm. Firstly, we provide a classical (randomized) algorithm with running time Õ(n·L^0.5 + L), where L is the sum of the lengths of s1,...,sm. Secondly, we provide a quantum algorithm with running time Õ(n·L^0.25 + √(mL)). Thirdly, we show a lower bound of Ω(n + L) for any classical (randomized or deterministic) algorithm. So, we obtain a quadratic quantum speed-up with respect to the parameter L, and our quantum algorithm has a smaller running time than any classical (randomized or deterministic) algorithm in the case of non-constant length of the strings in the dictionary.
5

Allender, Eric, V. Arvind, Rahul Santhanam, and Fengming Wang. "Uniform derandomization from pathetic lower bounds." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, no. 1971 (July 28, 2012): 3512–35. http://dx.doi.org/10.1098/rsta.2011.0318.

Abstract:
The notion of probabilistic computation dates back at least to Turing, who also wrestled with the practical problems of how to implement probabilistic algorithms on machines with, at best, very limited access to randomness. A more recent line of research, known as derandomization, studies the extent to which randomness is superfluous. A recurring theme in the literature on derandomization is that probabilistic algorithms can be simulated quickly by deterministic algorithms, if one can obtain impressive (i.e. superpolynomial, or even nearly exponential) circuit size lower bounds for certain problems. In contrast to what is needed for derandomization, existing lower bounds seem rather pathetic. Here, we present two instances where ‘pathetic’ lower bounds of the form n^{1+ε} would suffice to derandomize interesting classes of probabilistic algorithms. We show the following: (i) if the word problem over S_5 requires constant-depth threshold circuits of size n^{1+ε} for some ε > 0, then any language accepted by uniform polynomial-size probabilistic threshold circuits can be solved in subexponential time (and, more strongly, can be accepted by a uniform family of deterministic constant-depth threshold circuits of subexponential size); (ii) if there are no constant-depth arithmetic circuits of size n^{1+ε} for the problem of multiplying a sequence of n 3×3 matrices, then, for every constant d, black-box identity testing for depth-d arithmetic circuits with bounded individual degree can be performed in subexponential time (and even by a uniform family of deterministic constant-depth AC^0 circuits of subexponential size).
6

Rajasekaran, Sanguthevar. "On the Euclidean Minimum Spanning Tree Problem." Computing Letters 1, no. 1 (March 6, 2005): 11–14. http://dx.doi.org/10.1163/1574040053326325.

Abstract:
Given a weighted graph G(V, E), a minimum spanning tree for G can be obtained in linear time using a randomized algorithm or nearly linear time using a deterministic algorithm. Given n points in the plane, we can construct a graph with these points as nodes and an edge between every pair of nodes. The weight on any edge is the Euclidean distance between the two points. Finding a minimum spanning tree for this graph is known as the Euclidean minimum spanning tree problem (EMSTP). The minimum spanning tree algorithms alluded to before will run in time O(n^2) (or nearly O(n^2)) on this graph. In this note we point out that it is possible to devise simple algorithms for EMSTP in k dimensions (for any constant k) whose expected run time is O(n), under the assumption that the points are uniformly distributed in the space of interest. CR Categories: F2.2 Nonnumerical Algorithms and Problems; G.3 Probabilistic Algorithms.
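For orientation only, the quadratic-time baseline the note refers to (running a textbook MST algorithm on the implicit complete Euclidean graph) can be sketched as below; this is not the expected-linear-time method of the paper, and the point set is an arbitrary example.

import math, random

def euclidean_mst(points):
    # Prim's algorithm on the implicit complete Euclidean graph: O(n^2) time.
    # Returns a list of edges (parent, child) forming a minimum spanning tree.
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n        # cheapest connection cost to the growing tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v], parent[v] = d, u
    return edges

# example: uniformly distributed points in the unit square (k = 2)
pts = [(random.random(), random.random()) for _ in range(200)]
print(len(euclidean_mst(pts)))   # n - 1 = 199 edges

The expected-linear-time algorithms the note describes instead exploit the uniform distribution of the points (for example via grid bucketing of neighbour searches), which this baseline sketch does not attempt.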
7

Efthymiou, Charilaos. "Deterministic counting of graph colourings using sequences of subgraphs." Combinatorics, Probability and Computing 29, no. 4 (June 22, 2020): 555–86. http://dx.doi.org/10.1017/s0963548320000255.

Abstract:
In this paper we propose a polynomial-time deterministic algorithm for approximately counting the k-colourings of the random graph G(n, d/n), for constant d > 0. In particular, our algorithm computes in polynomial time a $(1\pm n^{-\Omega(1)})$-approximation of the so-called ‘free energy’ of the k-colourings of G(n, d/n), for $k\geq (1+\varepsilon) d$ with probability $1-o(1)$ over the graph instances. Our algorithm uses spatial correlation decay to compute numerically estimates of marginals of the Gibbs distribution. Spatial correlation decay has been used in different counting schemes for deterministic counting. So far algorithms have exploited a certain kind of set-to-point correlation decay, e.g. the so-called Gibbs uniqueness. Here we deviate from this setting and exploit a point-to-point correlation decay. The spatial mixing requirement is that for a pair of vertices the correlation between their corresponding configurations becomes weaker with their distance. Furthermore, our approach generalizes in that it allows us to compute the Gibbs marginals for small sets of nearby vertices. Also, we establish a connection between the fluctuations of the number of colourings of G(n, d/n) and the fluctuations of the number of short cycles and edges in the graph.
8

Tarnawski, Jakub. "New graph algorithms via polyhedral techniques." it - Information Technology 63, no. 3 (April 15, 2021): 177–82. http://dx.doi.org/10.1515/itit-2021-0014.

Abstract:
Abstract This article gives a short overview of my dissertation, where new algorithms are given for two fundamental graph problems. We develop novel ways of using linear programming formulations, even exponential-sized ones, to extract structure from problem instances and to guide algorithms in making progress. The first part of the dissertation addresses a benchmark problem in combinatorial optimization: the asymmetric traveling salesman problem (ATSP). It consists in finding the shortest tour that visits all vertices of a given edge-weighted directed graph. A ρ-approximation algorithm for ATSP is one that runs in polynomial time and always produces a tour at most ρ times longer than the shortest tour. Finding such an algorithm with constant ρ had been a long-standing open problem. Here we give such an algorithm. The second part of the dissertation addresses the perfect matching problem. We have known since the 1980s that it has efficient parallel algorithms if the use of randomness is allowed. However, we do not know if randomness is necessary – that is, whether the matching problem is in the class NC. We show that it is in the class quasi-NC. That is, we give a deterministic parallel algorithm that runs in poly-logarithmic time on quasi-polynomially many processors.
9

Hu, Xiao-Bing, Ming Wang, Mark S. Leeson, Ezequiel A. Di Paolo, and Hao Liu. "Deterministic Agent-Based Path Optimization by Mimicking the Spreading of Ripples." Evolutionary Computation 24, no. 2 (June 2016): 319–46. http://dx.doi.org/10.1162/evco_a_00156.

Abstract:
Inspirations from nature have contributed fundamentally to the development of evolutionary computation. Learning from the natural ripple-spreading phenomenon, this article proposes a novel ripple-spreading algorithm (RSA) for the path optimization problem (POP). In nature, a ripple spreads at a constant speed in all directions, and the node closest to the source is the first to be reached. This very simple principle forms the foundation of the proposed RSA. In contrast to most deterministic top-down centralized path optimization methods, such as Dijkstra’s algorithm, the RSA is a bottom-up decentralized agent-based simulation model. Moreover, it is distinguished from other agent-based algorithms, such as genetic algorithms and ant colony optimization, by being a deterministic method that can always guarantee the global optimal solution with very good scalability. Here, the RSA is specifically applied to four different POPs. The comparative simulation results illustrate the advantages of the RSA in terms of effectiveness and efficiency. Thanks to the agent-based and deterministic features, the RSA opens new opportunities to attack some problems, such as calculating the exact complete Pareto front in multiobjective optimization and determining the kth shortest project time in project management, which are very difficult, if not impossible, for existing methods to resolve. The ripple-spreading optimization principle and the new distinguishing features and capacities of the RSA enrich the theoretical foundations of evolutionary computation.
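To make the ripple-spreading principle concrete, below is a small, purely illustrative sketch for the shortest-path case on a weighted graph: every node spawns its own ripple the first time any ripple reaches it, ripples expand at a constant speed, and the first arrival time at a node is therefore its shortest travel time from the source. The graph, names and event-queue implementation are our assumptions, not the authors' code; organised this way, the simulation behaves much like Dijkstra's algorithm driven by ripple-arrival events.

import heapq

def ripple_spreading_shortest_paths(adj, source, speed=1.0):
    # adj: dict node -> list of (neighbour, edge_length).
    # Simulate ripples expanding at constant `speed`; return first-arrival times.
    arrival = {}                       # node -> time its first ripple arrives
    events = [(0.0, source)]           # (arrival time, node) event queue
    while events:
        t, u = heapq.heappop(events)
        if u in arrival:               # an earlier ripple already reached u
            continue
        arrival[u] = t                 # u now spawns its own ripple
        for v, length in adj.get(u, []):
            if v not in arrival:
                heapq.heappush(events, (t + length / speed, v))
    return arrival

adj = {'s': [('a', 2.0), ('b', 5.0)], 'a': [('b', 1.0), ('t', 6.0)],
       'b': [('t', 2.0)], 't': []}
print(ripple_spreading_shortest_paths(adj, 's'))   # {'s': 0.0, 'a': 2.0, 'b': 3.0, 't': 5.0}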
10

Mehta, Dinesh P., Carl Shetters, and Donald W. Bouldin. "Meta-Algorithms for Scheduling a Chain of Coarse-Grained Tasks on an Array of Reconfigurable FPGAs." VLSI Design 2013 (December 25, 2013): 1–13. http://dx.doi.org/10.1155/2013/249592.

Abstract:
This paper considers the problem of scheduling a chain of n coarse-grained tasks on a linear array of k reconfigurable FPGAs with the objective of primarily minimizing reconfiguration time. A high-level meta-algorithm along with two detailed meta-algorithms (GPRM and SPRM) that support a wide range of problem formulations and cost functions is presented. GPRM, the more general of the two schemes, reduces the problem to computing a shortest path in a DAG; SPRM, the less general scheme, employs dynamic programming. Both meta algorithms are linear in n and compute optimal solutions. GPRM can be exponential in k but is nevertheless practical because k is typically a small constant. The deterministic quality of this meta algorithm and the guarantee of optimal solutions for all of the formulations discussed make this approach a powerful alternative to other metatechniques such as simulated annealing and genetic algorithms.
11

Hsu, Geesern, Andrew E. Yagle, Kenneth C. Ludema, and Joel A. Levitt. "Modeling and Identification of Lubricated Polymer Friction Dynamics." Journal of Dynamic Systems, Measurement, and Control 122, no. 1 (October 11, 1996): 78–88. http://dx.doi.org/10.1115/1.482431.

Abstract:
A systematic approach is proposed to model the dynamics of lubricated polymer friction. It starts with the development of a physical model to describe the fundamental mechanisms of the friction. The physical model then serves as the basic structure for the development of a complex model able to capture a wider spectrum of the deterministic and stochastic dynamics of friction. To assess the accuracy of the complex model, two estimation algorithms are formulated to estimate the unknown parameters in the model and to test the model against experimental data. One algorithm is based on the maximum likelihood principle to estimate the constant parameters for stationary friction dynamics, and the other based on the extended Kalman filter to estimate the time-varying parameters for nonstationary friction dynamics. The model and the algorithms are all validated through experiments. [S0022-0434(00)00601-8]
12

HERLEY, KIERAN T., ANDREA PIETRACAPRINA, and GEPPINO PUCCI. "FAST DETERMINISTIC PARALLEL BRANCH-AND-BOUND." Parallel Processing Letters 09, no. 03 (September 1999): 325–33. http://dx.doi.org/10.1142/s012962649900030x.

Abstract:
The branch-and-bound problem involves determining the minimum cost leaf in a cost-labelled tree, subject to the constraint that only the root is known initially and that children are revealed only by visiting their parent. We present the first efficient deterministic algorithm to solve the branch-and-bound problem for a tree T of constant degree on a p-processor parallel machine. Let c* be the cost of the minimum-cost leaf in T, and let n and h be the number of nodes and the height, respectively, of the subtree T* ⊆ T of nodes of cost less than or equal to c*. Our algorithm runs in O(n/p + h log^2(np)) time on an EREW-PRAM. Moreover, the running time faithfully reflects both communication and computation costs, unlike most of the previous results where the cost of local computation is ignored. For large ranges of the parameters, our algorithm matches the optimal performance of existing randomized strategies. The algorithm can be ported to any architecture for which an efficient implementation of Parallel Priority Queues [PP91] is available.
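For readers new to this formulation, the sequential (single-processor) version of the problem is simply a best-first search over the tree, assuming, as in the standard heap-ordered setting (made explicit in the companion article further down this list), that a child never costs less than its parent. The sketch below is background only, not the parallel EREW-PRAM algorithm of the paper.

import heapq
from itertools import count

def branch_and_bound_min_leaf(root, children, cost):
    # Best-first search for the minimum-cost leaf of a cost-labelled tree.
    # children(v) reveals the children of v (empty for a leaf); cost(v) is its label.
    # Assuming costs never decrease from parent to child, the first leaf popped is
    # optimal, and only nodes of cost at most c* are ever expanded.
    tie = count()                               # tie-breaker so nodes are never compared
    frontier = [(cost(root), next(tie), root)]
    while frontier:
        c, _, v = heapq.heappop(frontier)
        kids = children(v)
        if not kids:
            return v, c                         # minimum-cost leaf and its cost c*
        for w in kids:
            heapq.heappush(frontier, (cost(w), next(tie), w))
    raise ValueError("tree has no leaves")

# toy example: node costs are non-decreasing along every root-to-leaf path
tree = {'r': ['a', 'b'], 'a': ['c', 'd'], 'b': [], 'c': [], 'd': []}
costs = {'r': 1, 'a': 2, 'b': 5, 'c': 4, 'd': 3}
print(branch_and_bound_min_leaf('r', lambda v: tree[v], lambda v: costs[v]))  # ('d', 3)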
13

Singh, Mohit, and Weijun Xie. "Approximation Algorithms for D-optimal Design." Mathematics of Operations Research 45, no. 4 (November 2020): 1512–34. http://dx.doi.org/10.1287/moor.2019.1041.

Abstract:
Experimental design is a classical statistics problem, and its aim is to estimate an unknown vector from linear measurements where a Gaussian noise is introduced in each measurement. For the combinatorial experimental design problem, the goal is to pick a subset of experiments so as to make the most accurate estimate of the unknown parameters. In this paper, we will study one of the most robust measures of error estimation—the D-optimality criterion, which corresponds to minimizing the volume of the confidence ellipsoid for the estimation error. The problem gives rise to two natural variants depending on whether repetitions of experiments are allowed or not. We first propose an approximation algorithm with a 1/e-approximation for the D-optimal design problem with and without repetitions, giving the first constant-factor approximation for the problem. We then analyze another sampling approximation algorithm and prove that it is asymptotically optimal. Finally, for D-optimal design with repetitions, we study a different algorithm proposed by the literature and show that it can improve this asymptotic approximation ratio. All the sampling algorithms studied in this paper are shown to admit polynomial-time deterministic implementations.
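For reference, the D-optimality criterion discussed above is usually written as the following subset-selection problem (standard notation, not quoted from the paper); minimising the volume of the confidence ellipsoid is equivalent to maximising the log-determinant of the information matrix:

\max_{S \subseteq \{1,\dots,n\},\ |S| = k} \; \log\det\Big(\sum_{i \in S} v_i v_i^{\top}\Big),

where v_i ∈ R^d is the design vector of the i-th experiment and k is the number of experiments to be picked; in the variant with repetitions, S is a multiset.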
14

Bramas, Quentin, and Sébastien Tixeuil. "The Random Bit Complexity of Mobile Robots Scattering." International Journal of Foundations of Computer Science 28, no. 02 (February 2017): 111–33. http://dx.doi.org/10.1142/s0129054117500083.

Abstract:
We consider the problem of scattering n robots in a two dimensional continuous space. As this problem is impossible to solve in a deterministic manner, all solutions must be probabilistic. We investigate the amount of randomness (that is, the number of random bits used by the robots) that is required to achieve scattering. We first prove that n log n random bits are necessary to scatter n robots in any setting. Also, we give a sufficient condition for a scattering algorithm to be random bit optimal. As it turns out that previous solutions for scattering satisfy our condition, they are hence proved random bit optimal for the scattering problem. Then, we investigate the time complexity of scattering when strong multiplicity detection is not available. We prove that such algorithms cannot converge in constant time in the general case and in o(log log n) rounds for random bits optimal scattering algorithms. However, we present a family of scattering algorithms that converge as fast as needed without using multiplicity detection. Also, we put forward a specific protocol of this family that is random bit optimal (O(n log n) random bits are used) and time optimal (O(log log n) rounds are used). This improves the time complexity of previous results in the same setting by a log n factor. Aside from characterizing the random bit complexity of mobile robot scattering, our study also closes the time complexity gap with and without strong multiplicity detection (that is, O(1) time complexity is only achievable when strong multiplicity detection is available, and it is possible to approach a constant value as desired otherwise).
15

Datta, Ajoy K., Stephane Devismes, Lawrence L. Larmore, and Vincent Villain. "Self-Stabilizing Weak Leader Election in Anonymous Trees Using Constant Memory per Edge." Parallel Processing Letters 27, no. 02 (June 2017): 1750002. http://dx.doi.org/10.1142/s0129626417500025.

Abstract:
We propose a deterministic silent self-stabilizing algorithm for the weak leader election problem in anonymous trees. Our algorithm is designed in the message passing model, and requires only O(1) bits of memory per edge. It does not necessitate the a priori knowledge of any global parameter on the network. Finally, its stabilization time is at most [Formula: see text] time units, where 𝒟 is the diameter of the network, X is an upper bound on the time to execute some recurrent code by processes, and Imax is the maximal number of messages initially in a link.
16

Ławryńczuk, Maciej. "Nonlinear State–Space Predictive Control with On–Line Linearisation and State Estimation." International Journal of Applied Mathematics and Computer Science 25, no. 4 (December 1, 2015): 833–47. http://dx.doi.org/10.1515/amcs-2015-0060.

Abstract:
This paper describes computationally efficient model predictive control (MPC) algorithms for nonlinear dynamic systems represented by discrete-time state-space models. Two approaches are detailed: in the first one the model is successively linearised on-line and used for prediction, while in the second one a linear approximation of the future process trajectory is directly found on-line. In both cases, as a result of linearisation, the future control policy is calculated by means of quadratic optimisation. For state estimation, the extended Kalman filter is used. The discussed MPC algorithms, although disturbance state observers are not used, are able to compensate for deterministic constant-type external and internal disturbances. In order to illustrate implementation steps and compare the efficiency of the algorithms, a polymerisation reactor benchmark system is considered. In particular, the described MPC algorithms with on-line linearisation are compared with a truly nonlinear MPC approach with nonlinear optimisation repeated at each sampling instant.
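A minimal numpy sketch of the first approach described (successive on-line linearisation followed by quadratic optimisation) is given below for intuition. The plant, horizon and weights are arbitrary placeholders, input and state constraints are omitted so the quadratic programme collapses to a least-squares solve, and no Kalman filter or disturbance handling is included; it is a rough illustration, not the paper's algorithm or its polymerisation benchmark.

import numpy as np

def plant(x, u, dt=0.05):
    # placeholder nonlinear state-space model x(k+1) = f(x(k), u(k)): a damped pendulum
    th, om = x
    return np.array([th + dt * om,
                     om + dt * (-9.81 * np.sin(th) - 0.2 * om + u[0])])

def linearise(f, x, u, eps=1e-6):
    # numerical Jacobians A = df/dx and B = df/du around the operating point (x, u)
    fx = f(x, u)
    A = np.column_stack([(f(x + eps * e, u) - fx) / eps for e in np.eye(len(x))])
    B = np.column_stack([(f(x, u + eps * e) - fx) / eps for e in np.eye(len(u))])
    return A, B, fx

def mpc_move(f, x0, u_prev, x_ref, N=15, q=1.0, r=0.05):
    # One receding-horizon step: linearise once around (x0, u_prev), predict N steps
    # with the affine model, and pick the input sequence minimising
    # q*||x_k - x_ref||^2 + r*||u_k||^2; unconstrained, this reduces to least squares.
    A, B, fx = linearise(f, x0, u_prev)
    c = fx - A @ x0 - B @ u_prev                    # affine term of the local model
    n, m = B.shape
    Apow = [np.eye(n)]
    for _ in range(N):
        Apow.append(A @ Apow[-1])
    Su = np.zeros((N * n, N * m))                   # stacked predictions: X = Su U + free
    free = np.zeros(N * n)
    for k in range(1, N + 1):
        free[(k - 1) * n:k * n] = Apow[k] @ x0 + sum(Apow[j] @ c for j in range(k))
        for j in range(k):
            Su[(k - 1) * n:k * n, j * m:(j + 1) * m] = Apow[k - 1 - j] @ B
    Als = np.vstack([np.sqrt(q) * Su, np.sqrt(r) * np.eye(N * m)])
    bls = np.concatenate([np.sqrt(q) * (np.tile(x_ref, N) - free), np.zeros(N * m)])
    U = np.linalg.lstsq(Als, bls, rcond=None)[0]
    return U[:m]                                    # apply only the first control move

x, u = np.array([0.6, 0.0]), np.zeros(1)
for _ in range(100):                                # closed loop: relinearise each sample
    u = mpc_move(plant, x, u, x_ref=np.zeros(2))
    x = plant(x, u)
print(np.round(x, 3))                               # state regulated towards the reference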
17

Liang, Dieyan, and Hong Shen. "Efficient Algorithms for Max-Weighted Point Sweep Coverage on Lines." Sensors 21, no. 4 (February 19, 2021): 1457. http://dx.doi.org/10.3390/s21041457.

Abstract:
As an important application of wireless sensor networks (WSNs), deployment of mobile sensors to periodically monitor (sweep cover) a set of points of interest (PoIs) arises in various applications, such as environmental monitoring and data collection. For a set of PoIs in an Eulerian graph, the point sweep coverage problem of deploying the fewest sensors to periodically cover a set of PoIs is known to be Non-deterministic Polynomial Hard (NP-hard), even if all sensors have the same velocity. In this paper, we consider the problem of finding the set of PoIs on a line periodically covered by a given set of mobile sensors that has the maximum sum of weight. The problem is first proven NP-hard when sensors are with different velocities in this paper. Optimal and approximate solutions are also presented for sensors with the same and different velocities, respectively. For M sensors and N PoIs, the optimal algorithm for the case when sensors are with the same velocity runs in O(MN) time; our polynomial-time approximation algorithm for the case when sensors have a constant number of velocities achieves approximation ratio 1/2; for the general case of arbitrary velocities, 1/(2α)- and (1/2)(1 − 1/e)-approximation algorithms are presented, respectively, where the integer α ≥ 2 is the tradeoff factor between time complexity and approximation ratio.
18

HERLEY, KIERAN T., ANDREA PIETRACAPRINA, and GEPPINO PUCCI. "DETERMINISTIC BRANCH-AND-BOUND ON DISTRIBUTED MEMORY MACHINES." International Journal of Foundations of Computer Science 10, no. 04 (December 1999): 391–404. http://dx.doi.org/10.1142/s0129054199000289.

Abstract:
The branch-and-bound problem involves determining the leaf of minimum cost in a cost-labelled, heap-ordered tree, subject to the constraint that only the root is known initially and that the children of a node are revealed only by visiting their parent. We present the first efficient deterministic algorithm to solve the branch-and-bound problem for a tree T of constant degree on a p-processor distributed-memory Optically Connected Parallel Computer (OCPC). Let c* be the cost of the minimum-cost leaf in T, and let n and h be the number of nodes and the height, respectively, of the subtree T* ⊆ T of nodes with cost at most c*. When accounting for both computation and communication costs, our algorithm runs in time O(n/p + h(max{p, log n log p})^2) for general values of n, and can be made to run in time O((n/p + h log^4 p) log log p) for n polynomial in p. For large ranges of the relevant parameters, our algorithm is provably optimal and improves asymptotically upon the well-known randomized strategy by Karp and Zhang.
19

Arenas, Marcelo, Luis Alberto Croquevielle, Rajesh Jayaram, and Cristian Riveros. "#NFA Admits an FPRAS: Efficient Enumeration, Counting, and Uniform Generation for Logspace Classes." Journal of the ACM 68, no. 6 (December 31, 2021): 1–40. http://dx.doi.org/10.1145/3477045.

Abstract:
In this work, we study two simple yet general complexity classes, based on logspace Turing machines, that provide a unifying framework for efficient query evaluation in areas such as information extraction and graph databases, among others. We investigate the complexity of three fundamental algorithmic problems for these classes: enumeration, counting, and uniform generation of solutions, and show that they have several desirable properties in this respect. Both complexity classes are defined in terms of non-deterministic logspace transducers (NL-transducers). For the first class, we consider the case of unambiguous NL-transducers, and we prove constant delay enumeration and both counting and uniform generation of solutions in polynomial time. For the second class, we consider unrestricted NL-transducers, and we obtain polynomial delay enumeration, approximate counting in polynomial time, and polynomial-time randomized algorithms for uniform generation. More specifically, we show that each problem in this second class admits a fully polynomial-time randomized approximation scheme (FPRAS) and a polynomial-time Las Vegas algorithm (with preprocessing) for uniform generation. Remarkably, the key idea to prove these results is to show that the fundamental problem #NFA admits an FPRAS, where #NFA is the problem of counting the number of strings of length n (given in unary) accepted by a non-deterministic finite automaton (NFA). While this problem is known to be #P-complete and, more precisely, SpanL-complete, it was open whether this problem admits an FPRAS. In this work, we solve this open problem and obtain as a welcome corollary that every function in SpanL admits an FPRAS.
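To make the counting problem concrete: #NFA asks for the number of length-n strings accepted by a given NFA. The brute-force exact counter below (worst-case exponential in the number of NFA states, in contrast with the FPRAS constructed in the paper) simply runs the subset construction on the fly, since every string determines exactly one sequence of state subsets. Names and the toy automaton are illustrative assumptions.

from collections import defaultdict

def count_accepted(n, alphabet, delta, start, accepting):
    # Exact number of strings of length n accepted by an NFA.
    # delta: dict mapping (state, symbol) -> iterable of successor states.
    counts = defaultdict(int)
    counts[frozenset([start])] = 1              # the empty string reaches the start subset
    for _ in range(n):
        nxt = defaultdict(int)
        for subset, c in counts.items():
            for a in alphabet:
                succ = frozenset(q for s in subset for q in delta.get((s, a), ()))
                nxt[succ] += c                  # every extension by `a` moves to `succ`
        counts = nxt
    return sum(c for subset, c in counts.items() if subset & accepting)

# toy NFA over {0,1} accepting strings whose last symbol is 1
delta = {('p', '0'): {'p'}, ('p', '1'): {'p', 'q'}}
print(count_accepted(3, '01', delta, 'p', frozenset({'q'})))   # 4: 001, 011, 101, 111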
20

CABELLO, SERGIO, MARK DE BERG, PANOS GIANNOPOULOS, CHRISTIAN KNAUER, RENÉ VAN OOSTRUM, and REMCO C. VELTKAMP. "MAXIMIZING THE AREA OF OVERLAP OF TWO UNIONS OF DISKS UNDER RIGID MOTION." International Journal of Computational Geometry & Applications 19, no. 06 (December 2009): 533–56. http://dx.doi.org/10.1142/s0218195909003118.

Abstract:
Let A and B be two sets of n resp. m disjoint unit disks in the plane, with m ≥ n. We consider the problem of finding a translation or rigid motion of A that maximizes the total area of overlap with B. The function describing the area of overlap is quite complex, even for combinatorially equivalent translations and, hence, we turn our attention to approximation algorithms. We give deterministic (1 − ε)-approximation algorithms for translations and for rigid motions, which run in O((nm/ε^2) log(m/ε)) and O((n^2 m^2/ε^3) log m) time, respectively. For rigid motions, we can also compute a (1 − ε)-approximation in O((m^2 n^{4/3} Δ^{1/3}/ε^3) log n log m) time, where Δ is the diameter of set A. Under the condition that the maximum area of overlap is at least a constant fraction of the area of A, we give a probabilistic (1 − ε)-approximation algorithm for rigid motions that runs in O((m^2/ε^4) log^2(m/ε) log m) time and succeeds with high probability. Our results generalize to the case where A and B consist of possibly intersecting disks of different radii, provided that (i) the ratio of the radii of any two disks in A ∪ B is bounded, and (ii) within each set, the maximum number of disks with a non-empty intersection is bounded.
21

Ezemobi, Ethelbert, Andrea Tonoli, and Mario Silvagni. "Battery State of Health Estimation with Improved Generalization Using Parallel Layer Extreme Learning Machine." Energies 14, no. 8 (April 16, 2021): 2243. http://dx.doi.org/10.3390/en14082243.

Abstract:
The online estimation of battery state of health (SOH) is crucial to ensure the reliability of the energy supply in electric and hybrid vehicles. An approach for enhancing the generalization of SOH estimation using a parallel layer extreme learning machine (PL-ELM) algorithm is analyzed in this paper. The deterministic and stable PL-ELM model is designed to overcome the drift problem that is associated with some conventional machine learning algorithms; hence, extending the application of a single SOH estimation model over a large set of batteries of the same type. The PL-ELM model was trained with selected features that characterize the SOH. These features are acquired as the discrete variation of indicator variables including voltage, state of charge (SOC), and energy releasable by the battery. The model training was performed with an experimental battery dataset collected at room temperature under a constant current load condition at discharge phases. Model validation was performed with a dataset of other batteries of the same type that were aged under a constant load condition. An optimum performance with low error variance was obtained from the model result. The root mean square error (RMSE) of the validated model varies from 0.064% to 0.473%, and the mean absolute error (MAE) error from 0.034% to 0.355% for the battery sets tested. On the basis of performance, the model was compared with a deterministic extreme learning machine (ELM) and an incremental capacity analysis (ICA)-based scheme from the literature. The algorithm was tested on a Texas F28379D microcontroller unit (MCU) board with an average execution speed of 93 μs in real time, and 0.9305% CPU occupation. These results suggest that the model is suitable for online applications.
22

BEGGS, EDWIN J., JOSÉ FÉLIX COSTA, and JOHN V. TUCKER. "The impact of models of a physical oracle on computational power." Mathematical Structures in Computer Science 22, no. 5 (September 6, 2012): 853–79. http://dx.doi.org/10.1017/s0960129511000557.

Abstract:
Using physical experiments as oracles for algorithms, we can characterise the computational power of classes of physical systems. Here we show that two different physical models of the apparatus for a single experiment can have different computational power. The experiment is the scatter machine experiment (SME), which was first presented in Beggs and Tucker (2007b). Our first physical model contained a wedge with a sharp vertex that made the experiment non-deterministic with constant runtime. We showed that Turing machines with polynomial time and an oracle based on a sharp wedge computed the non-uniform complexity class P/poly. Here we reconsider the experiment with a refined physical model where the sharp vertex of the wedge is replaced by any suitable smooth curve with vertex at the same point. These smooth models of the experimental apparatus are deterministic. We show that no matter what shape is chosen for the apparatus: (i) the time of detection of the scattered particles increases at least exponentially with the size of the query; and (ii) Turing machines with polynomial time and an oracle based on a smooth wedge compute the non-uniform complexity class P/log* ⫋ P/poly. We discuss evidence that many experiments that measure quantities have exponential runtimes and a computational power of P/log*.
23

AL-SAFAR, AHMED. "Hybrid CPU Scheduling algorithm SJF-RR In Static Set of Processes." Journal of Al-Rafidain University College For Sciences ( Print ISSN: 1681-6870 ,Online ISSN: 2790-2293 ), no. 1 (October 20, 2021): 36–60. http://dx.doi.org/10.55562/jrucs.v29i1.377.

Abstract:
The Round Robin (RR) algorithm is widely used in modern operating systems (OS) because of its good responsiveness: every process receives a periodic quantum (occurring at regular intervals), and the scheduling overhead for n processes in a ready queue is constant, O(1). However, RR also has weaker properties: low throughput, long average turnaround and waiting times, and n context switches for n processes. Shortest Job First (SJF), on the other hand, is not practical by itself in time-sharing OSs due to its poor response, and its scheduling overhead for n processes in a ready queue is O(n); its strengths are the best average turnaround and waiting times. By considering a static set of n processes, the desirable features of a CPU scheduler (maximizing CPU utilization and response time while minimizing waiting time and turnaround time) are obtained by combining the kernel properties of the SJF algorithm with the best features of the RR algorithm to produce a new algorithm, called "Hybrid CPU Scheduling algorithm SJF-RR in Static Set of Processes", proposed in this research. The proposed algorithm is implemented through an innovative optimal equation that adapts the time quantum for each process in each round, as a periodic quantum occurring at irregular intervals: while the proposed algorithm runs, mathematical calculations produce a particular time quantum for each process. Once a criterion has been selected for comparison, deterministic modeling with the same input numbers proves that the proposed algorithm is the best.
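Purely as an illustration of what such a hybrid can look like for a static set of processes, the sketch below orders the ready queue by remaining burst (the SJF ingredient) and serves it round-robin with a per-process, per-round quantum. The specific quantum rule used here (finish a process whose remaining burst is at most the round's mean, otherwise run it for the mean) is an arbitrary placeholder heuristic, not the optimal equation proposed in the paper.

from math import ceil

def hybrid_sjf_rr(bursts):
    # bursts: dict pid -> CPU burst time for a static set of processes.
    # SJF-ordered round robin with a per-process, per-round quantum (placeholder rule).
    remaining = dict(bursts)
    clock, switches, completion = 0, 0, {}
    while remaining:
        ready = sorted(remaining, key=remaining.get)           # SJF ordering of the queue
        mean = ceil(sum(remaining.values()) / len(remaining))  # this round's reference quantum
        for pid in ready:
            quantum = remaining[pid] if remaining[pid] <= mean else mean
            clock += quantum
            remaining[pid] -= quantum
            switches += 1
            if remaining[pid] == 0:
                completion[pid] = clock
                del remaining[pid]
    return completion, switches

completion, switches = hybrid_sjf_rr({'P1': 6, 'P2': 3, 'P3': 8})
print(completion, switches)   # {'P2': 3, 'P1': 9, 'P3': 17} with 4 context switches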
24

Ma, Jingjing. "3D DNA Self-Assembly Algorithmic Model to Solve the Hamiltonian Path Problem." Journal of Nanoelectronics and Optoelectronics 16, no. 5 (May 1, 2021): 731–37. http://dx.doi.org/10.1166/jno.2021.3000.

Abstract:
Self-assembly reveals the innate character of DNA computing, and DNA self-assembly is regarded as the best way to turn DNA computing into a computer chip. This paper introduces a strategy of DNA 3D self-assembly algorithm to solve the Hamiltonian Path Problem. Firstly, I introduce a non-deterministic algorithm. Then, according to the algorithm, I design the types of DNA tiles which the computing process needs. Lastly, I demonstrate the self-assembly process and the experimental methods which can obtain the final result. The computing time is linear, and the number of different tile types is constant.
25

PORSCHEN, STEFAN. "ON RECTANGULAR COVERING PROBLEMS." International Journal of Computational Geometry & Applications 19, no. 04 (August 2009): 325–40. http://dx.doi.org/10.1142/s0218195909002988.

Abstract:
Many applications like image processing, data compression or pattern recognition require a covering of a set of n points most often located in the (discrete) plane by rectangles due to specific cost constraints. In this paper we provide exact dynamic programming algorithms for covering point sets by regular rectangles that have to obey certain boundary conditions, namely all rectangles must have lengths of at least k, which is a prescribed problem parameter. The concrete objective function underlying our optimization problem is the sum of area, circumference and a constant over all rectangles forming a covering. This objective function can be motivated by requirements of numerically solving PDEs by discretization over (adaptive multi-)grids. More precisely, we propose exact deterministic algorithms for such problems based on a (set-theoretic) dynamic programming approach yielding a time bound of O(n^2·3^n). In a second step this bound is (asymptotically) decreased to O(n^6·2^n) by exploiting some structural features. Finally, a generalization of the problem and its solution methods is discussed for the case of arbitrary (finite) space dimension.
26

Agafonov, Anton, and Vladislav Myasnikov. "Method for Reliable Shortest Path Determination in Stochastic Networks using Parametrically Defined Stable Probability Distributions." SPIIRAS Proceedings 18, no. 3 (June 5, 2019): 558–82. http://dx.doi.org/10.15622/sp.2019.18.3.557-581.

Abstract:
An increase in the number of vehicles, especially in large cities, and the inability of the existing road infrastructure to distribute transport flows lead to a higher congestion level in transport networks. This makes the solution of navigational problems more and more important. Despite the popularity of these tasks, many existing commercial systems find a route in deterministic networks, not taking into account the time-dependent and stochastic properties of traffic flows, i.e. the travel time of road links is considered constant. This paper addresses the reliable routing problem in stochastic networks using actual information about the traffic flow parameters. We consider the following optimality criterion: maximization of the probability of arriving on time at a destination given a departure time and a time budget. The reliable shortest path takes into account the variance of the travel time of the road network segments, which makes it more applicable for solving routing problems in transport networks compared to standard shortest path search algorithms that take into account only the average travel time of network segments. To describe the travel time of the road network segments, it is proposed to use parametrically defined stable Lévy probability distributions. The use of stable distributions allows the convolution operation needed to determine the reliability of a path to be replaced by a recalculation of the parameters of the distribution density, which significantly reduces the computational time of the algorithm. The proposed method gives a solution in the form of a decision, i.e. the route proposed in the solution is not fixed in advance, but adaptively changes depending on changes in the real state of the network. An experimental analysis of the algorithm, carried out on a large-scale transport network of Samara, Russia, showed that the presented algorithm can significantly reduce the computational time of the reliable shortest path algorithm with a slight increase in travel time.
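In generic notation (ours, not the paper's), the optimality criterion described above is

\pi^{*} \;=\; \arg\max_{\pi \in \Pi(o,d)} \; \Pr\Big[\sum_{e \in \pi} T_e \le t_b\Big],

where Π(o, d) is the set of paths from origin o to destination d, T_e is the random travel time of road segment e, and t_b is the time budget. With segment travel times modelled by stable (Lévy) distributions, the distribution of the sum along a path follows from recalculating the stability parameters instead of performing numerical convolutions, which is the computational saving the abstract refers to.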
27

Campbell, Kayleigh, Laura Staugler, and Andrea Arnold. "Estimating Time-Varying Applied Current in the Hodgkin-Huxley Model." Applied Sciences 10, no. 2 (January 11, 2020): 550. http://dx.doi.org/10.3390/app10020550.

Abstract:
The classic Hodgkin-Huxley model is widely used for understanding the electrophysiological dynamics of a single neuron. While applying a low-amplitude constant current to the system results in a single voltage spike, it is possible to produce multiple voltage spikes by applying time-varying currents, which may not be experimentally measurable. The aim of this work is to estimate time-varying applied currents of different deterministic forms given noisy voltage data. In particular, we utilize an augmented ensemble Kalman filter with parameter tracking to estimate four different time-varying applied current parameters and associated Hodgkin-Huxley model states, along with uncertainty bounds in each case. We test the efficiency of the parameter tracking algorithm in this setting by analyzing the effects of changing the standard deviation of the parameter drift and the frequency of data available on the resulting time-varying applied current estimates and related uncertainty.
28

Boje, Edward. "Representation of simulation errors in single step methods using state dependent noise." MATEC Web of Conferences 347 (2021): 00001. http://dx.doi.org/10.1051/matecconf/202134700001.

Abstract:
The local error of single step methods is modelled as a function of the state derivative multiplied by bias and zero-mean white noise terms. The deterministic Taylor series expansion of the local error depends on the state derivative meaning that the local error magnitude is zero in steady state and grows with the rate of change of the state vector. The stochastic model of the local error may include a constant, “catch-all” noise term. A continuous time extension of the local error model is developed and this allows the original continuous time state differential equation to be represented by a combination of the simulation method and a stochastic term. This continuous time stochastic differential equation model can be used to study the propagation of the simulation error in Monte Carlo experiments, for step size control, or for propagating the mean and variance. This simulation error model can be embedded into continuous-discrete state estimation algorithms. Two illustrative examples are included to highlight the application of the approach.
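Spelled out, one plausible reading of the local error model described above is (our notation, offered as an interpretation rather than the paper's exact formulation):

\epsilon_k \;\approx\; \dot{x}(t_k)\,(\beta + w_k) + \eta_k, \qquad \mathbb{E}[w_k] = \mathbb{E}[\eta_k] = 0,

with β the bias, w_k zero-mean white noise scaling the state derivative, and η_k the optional constant "catch-all" term; the modelled error then vanishes in steady state, where \dot{x} = 0, and grows with the rate of change of the state vector, as stated in the abstract.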
29

Bannai, Hideo, Travis Gagie, Gary Hoppenworth, Simon J. Puglisi, and Luís M. S. Russo. "More Time-Space Tradeoffs for Finding a Shortest Unique Substring." Algorithms 13, no. 9 (September 18, 2020): 234. http://dx.doi.org/10.3390/a13090234.

Abstract:
We extend recent results regarding finding shortest unique substrings (SUSs) to obtain new time-space tradeoffs for this problem and the generalization of finding k-mismatch SUSs. Our new results include the first algorithm for finding a k-mismatch SUS in sublinear space, which we obtain by extending an algorithm by Senanayaka (2019) and combining it with a result on sketching by Gawrychowski and Starikovskaya (2019). We first describe how, given a text T of length n and m words of workspace, with high probability we can find an SUS of length L in O(n(L/m) log L) time using random access to T, or in O(n(L/m) log^2(L) log log σ) time using O((L/m) log^2 L) sequential passes over T. We then describe how, for constant k, with high probability, we can find a k-mismatch SUS in O(n^{1+ε} L/m) time using O(n^ε L/m) sequential passes over T, again using only m words of workspace. Finally, we also describe a deterministic algorithm that takes O(nτ log σ log n) time to find an SUS using O(n/τ) words of workspace, where τ is a parameter.
30

FERREIRA, AFONSO, and NICOLAS SCHABANEL. "A RANDOMIZED BSP/CGM ALGORITHM FOR THE MAXIMAL INDEPENDENT SET PROBLEM." Parallel Processing Letters 09, no. 03 (September 1999): 411–22. http://dx.doi.org/10.1142/s0129626499000384.

Abstract:
This paper presents a randomized parallel algorithm for the Maximal Independent Set problem. Our algorithm uses a BSP-like computer with p processors and requires that [Formula: see text] for a graph with n vertices and m edges. Under this scalability assumption, and after a preprocessing phase, it computes a maximal independent set after O(log p) communication rounds, with high probability, each round requiring linear computation time [Formula: see text]. The preprocessing phase is deterministic and important in order to ensure that degree computations can be implemented efficiently. For this, we give an optimal parallel BSP/CGM algorithm for the p-quantiles search problem, which runs in [Formula: see text] time and a constant number of communication rounds, and could be of interest in its own right, as shown in the text.
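As background (a generic sketch in our own naming, not the BSP/CGM algorithm of the paper), the classical randomized maximal-independent-set round that parallel MIS algorithms are commonly modelled on can be written sequentially as follows: each surviving vertex draws a random priority and joins the independent set when it beats all of its surviving neighbours, after which winners and their neighbours are removed.

import random

def randomized_mis(adj):
    # adj: dict vertex -> set of neighbours. Returns a maximal independent set.
    # Each round, a vertex joins the MIS if its random priority beats those of all
    # surviving neighbours; the number of rounds is O(log n) with high probability.
    alive = {v: set(adj[v]) for v in adj}
    mis = set()
    while alive:
        prio = {v: random.random() for v in alive}
        winners = {v for v in alive if all(prio[v] < prio[u] for u in alive[v])}
        mis |= winners
        removed = winners | {u for v in winners for u in alive[v]}
        alive = {v: nbrs - removed for v, nbrs in alive.items() if v not in removed}
    return mis

g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}, 4: set()}
print(randomized_mis(g))   # e.g. {0, 3, 4} or {1, 3, 4}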
31

BARLOW, MARTIN T., JIAN DING, ASAF NACHMIAS, and YUVAL PERES. "The Evolution of the Cover Time." Combinatorics, Probability and Computing 20, no. 3 (February 15, 2011): 331–45. http://dx.doi.org/10.1017/s0963548310000489.

Abstract:
The cover time of a graph is a celebrated example of a parameter that is easy to approximate using a randomized algorithm, but for which no constant factor deterministic polynomial time approximation is known. A breakthrough due to Kahn, Kim, Lovász and Vu [25] yielded a (log log n)^2 polynomial time approximation. We refine the upper bound of [25], and show that the resulting bound is sharp and explicitly computable in random graphs. Cooper and Frieze showed that the cover time of the largest component of the Erdős–Rényi random graph G(n, c/n) in the supercritical regime with c > 1 fixed is asymptotic to φ(c) n log^2 n, where φ(c) → 1 as c ↓ 1. However, our new bound implies that the cover time for the critical Erdős–Rényi random graph G(n, 1/n) has order n, and shows how the cover time evolves from the critical window to the supercritical phase. Our general estimate also yields the order of the cover time for a variety of other concrete graphs, including critical percolation clusters on the Hamming hypercube {0, 1}^n, on high-girth expanders, and on tori ℤ_n^d for fixed large d. This approach also gives a simpler proof of a result of Aldous [2] that the cover time of a uniform labelled tree on k vertices is of order k^{3/2}. For the graphs we consider, our results show that the blanket time, introduced by Winkler and Zuckerman [45], is within a constant factor of the cover time. Finally, we prove that for any connected graph, adding an edge can increase the cover time by at most a factor of 4.
32

Arsenault, Richard, and Pascal Côté. "Analysis of the effects of biases in ensemble streamflow prediction (ESP) forecasts on electricity production in hydropower reservoir management." Hydrology and Earth System Sciences 23, no. 6 (June 28, 2019): 2735–50. http://dx.doi.org/10.5194/hess-23-2735-2019.

Abstract:
Abstract. This paper presents an analysis of the effects of biased extended streamflow prediction (ESP) forecasts on three deterministic optimization techniques implemented in a simulated operational context with a rolling horizon test bed for managing a cascade of hydroelectric reservoirs and generating stations in Québec, Canada. The observed weather data were fed to the hydrological model, and the synthetic streamflow subsequently generated was considered to be a proxy for the observed inflow. A traditional, climatology-based ESP forecast approach was used to generate ensemble streamflow scenarios, which were used by three reservoir management optimization approaches. Both positive and negative biases were then forced into the ensembles by multiplying the streamflow values by constant factors. The optimization method's response to those biases was measured through the evaluation of the average annual energy generation in a forward-rolling simulation test bed in which the entire system is precisely and accurately modelled. The ensemble climate data forecasts, the hydrological modelling and ESP forecast generation, optimization model, and decision-making process are all integrated, as is the simulation model that updates reservoir levels and computes generation at each time step. The study focussed on one hydropower system both with and without minimum baseload constraints. This study finds that the tested deterministic optimization algorithms lack the capacity to compensate for uncertainty in future inflows and therefore place the reservoir levels at greater risk to maximize short-term profit. It is shown that for this particular system, an increase in ESP forecast inflows of approximately 5 % allows managing the reservoirs at optimal levels and producing the most energy on average, effectively negating the deterministic model's tendency to underestimate the risk of spilling. Finally, it is shown that implementing minimum load constraints serves as a de facto control on deterministic bias by forcing the system to draw more water from the reservoirs than what the models consider to be optimal trajectories.
33

Sergi, Alessandro, Gabriel Hanna, Roberto Grimaudo, and Antonino Messina. "Quasi-Lie Brackets and the Breaking of Time-Translation Symmetry for Quantum Systems Embedded in Classical Baths." Symmetry 10, no. 10 (October 16, 2018): 518. http://dx.doi.org/10.3390/sym10100518.

Abstract:
Many open quantum systems encountered in both natural and synthetic situations are embedded in classical-like baths. Often, the bath degrees of freedom may be represented in terms of canonically conjugate coordinates, but in some cases they may require a non-canonical or non-Hamiltonian representation. Herein, we review an approach to the dynamics and statistical mechanics of quantum subsystems embedded in either non-canonical or non-Hamiltonian classical-like baths which is based on operator-valued quasi-probability functions. These functions typically evolve through the action of quasi-Lie brackets and their associated Quantum-Classical Liouville Equations, or through quasi-Lie brackets augmented by dissipative terms. Quasi-Lie brackets possess the unique feature that, while conserving the energy (which the Noether theorem links to time-translation symmetry), they violate the time-translation symmetry of their algebra. This fact can be heuristically understood in terms of the dynamics of the open quantum subsystem. We then describe an example in which a quantum subsystem is embedded in a bath of classical spins, which are described by non-canonical coordinates. In this case, it has been shown that an off-diagonal open-bath geometric phase enters into the propagation of the quantum-classical dynamics. Next, we discuss how non-Hamiltonian dynamics may be employed to generate the constant-temperature evolution of phase space degrees of freedom coupled to the quantum subsystem. Constant-temperature dynamics may be generated by either a classical Langevin stochastic process or a Nosé–Hoover deterministic thermostat. These two approaches are not equivalent but have different advantages and drawbacks. In all cases, the calculation of the operator-valued quasi-probability function allows one to compute time-dependent statistical averages of observables. This may be accomplished in practice using a hybrid Molecular Dynamics/Monte Carlo algorithms, which we outline herein.
34

Dalzell, Alexander M., Aram W. Harrow, Dax Enshan Koh, and Rolando L. La Placa. "How many qubits are needed for quantum computational supremacy?" Quantum 4 (May 11, 2020): 264. http://dx.doi.org/10.22331/q-2020-05-11-264.

Abstract:
Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures poly3-NSETH(a) and per-int-NSETH(b) take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F_2 or the permanent of an n×n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^{cn} time steps, where c ∈ {a, b}. A third conjecture poly3-ave-SBSETH(a′) asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP. We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999 and a′ = 1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates.
35

Khoshjahan, Yazdan. "A Study of Resource-Constrained Project Optimization with Discounted Weighted Earliness-Tardiness Penalty Costs." Advanced Materials Research 488-489 (March 2012): 898–903. http://dx.doi.org/10.4028/www.scientific.net/amr.488-489.898.

Abstract:
This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, subject to precedence relations among the activities and resource constraints. Project activities are assumed to have a known deterministic due date, a unit earliness as well as a unit tardiness penalty cost, and constant renewable resource requirements. The objective is to schedule the activities in order to minimize the total weighted earliness–tardiness penalty cost of the project subject to the finish–start precedence constraints and the constant renewable resource availability constraints. With these features the problem becomes highly attractive in just-in-time environments. A genetic algorithm (GA) is proposed to solve this model. In addition, design of experiments and response surface methodology are employed to tune the GA parameters and to evaluate the performance of the proposed method on 270 test problems. The results of the performance analysis are presented at the end of this paper.
APA, Harvard, Vancouver, ISO, and other styles
36

Su, Huanyin, Feng Shi, Guangming Xu, Jin Qin, and Xinghua Shan. "Schedule-Based Passenger Assignment for High-Speed Rail Networks considering the Ticket-Booking Process." Mathematical Problems in Engineering 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/1650839.

Full text
Abstract:
This paper proposes a schedule-based passenger assignment method for high-speed rail networks that considers the ticket-booking process. In high-speed rail systems, passengers book tickets to reserve seats during the presale period, so the passengers on each train are determined during the ticket-booking process. The ticket-booking process is modeled as a continuous and deterministic predecision process. A solution algorithm is designed by discretizing the continuous process: the ticket-booking time is partitioned into intervals within which the optimal paths remain constant. Finally, an application to the Chinese high-speed rail network is presented. A comparison of the numerical results with real-world data is conducted to validate the efficiency and precision of the method and algorithm. Based on the results, the operating efficiency of the current train schedule is evaluated and some specific improvement measures are proposed.
APA, Harvard, Vancouver, ISO, and other styles
37

Abbad, Brahim, and Bjørn Ursin. "High-resolution bootstrapped differential semblance." GEOPHYSICS 77, no. 3 (May 1, 2012): U39—U47. http://dx.doi.org/10.1190/geo2011-0231.1.

Full text
Abstract:
We formulated two coherency measures, based on the bootstrapped differential semblance (BDS) estimator, that offered higher resolution in parameter tracking than did standard normalized differential semblance. Bootstrapping is a statistical resampling procedure used to infer estimates of standard errors and confidence intervals from data samples for which the statistical properties are unattainable via simple means, or when the probability density function is unknown or difficult to estimate. The first proposed estimator was based on a deterministic sorting of original offset traces by alternating near and far offsets to achieve maximized time shifts between adjacent traces. The near offsets were indexed with odd integers, while the even integers were used to index far offsets that were located at a constant index increment from the previous trace. The second was the product of several BDS terms, with the first term being the deterministic BDS defined above. The other terms were generated by random sorting of traces that alternated near and far offsets in an unpredictable manner. The proposed estimators could be applied in building velocity (and anellipticity) spectra for time-domain velocity analysis, depth-domain residual velocity update, or to any parameter-fitting algorithm involving discrete multichannel data. The gain in resolution provided by the suggested estimators over the differential semblance coefficient was illustrated on a number of synthetic and field data examples.
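The deterministic near/far alternation can be illustrated in a few lines of code. This is a simplified reading of the sorting rule in the abstract (odd positions take near offsets, even positions take far offsets at a constant index increment); the trace representation is hypothetical.

```python
def alternate_near_far(traces_by_offset):
    """Reorder offset-sorted traces so near and far offsets alternate,
    maximizing moveout time shifts between adjacent traces (a simplified
    reading of the deterministic sorting described in the abstract)."""
    n = len(traces_by_offset)
    near = traces_by_offset[: (n + 1) // 2]   # nearest offsets
    far = traces_by_offset[(n + 1) // 2:]     # farthest offsets
    out = []
    for i in range(len(near)):
        out.append(near[i])                   # odd positions: near offsets
        if i < len(far):
            out.append(far[i])                # even positions: far offsets
    return out

print(alternate_near_far(list(range(8))))     # [0, 4, 1, 5, 2, 6, 3, 7]
```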
APA, Harvard, Vancouver, ISO, and other styles
38

Koehler, Fulton, and M. Turhan Taner. "The use of the conjugate‐gradient algorithm in the computation of predictive deconvolution operators." GEOPHYSICS 50, no. 12 (December 1985): 2752–58. http://dx.doi.org/10.1190/1.1441895.

Full text
Abstract:
A number of excellent papers have been published since the introduction of deconvolution by Robinson in the middle 1950s. The application of the Wiener‐Levinson algorithm makes deconvolution a practical and vital part of today’s digital seismic data processing. We review the original formulation of deconvolution, develop the solution from another perspective, and demonstrate a general and rigorous solution that could be implemented. By “general” we mean a deterministic time‐varying and multichannel operator design, and by “rigorous” we mean the straightforward least‐squares error solution without simplifying to a Toeplitz matrix. Also we show that the conjugate‐gradient algorithm used in conjunction with the least‐squares problem leads to a satisfactory simplification; that in the computation of the operators, the square matrix involved in the normal equations need not be computed. Furthermore, the product of this matrix with a column matrix can be obtained directly from the data as a result of two cascaded simple convolutions. The time‐varying deconvolution problem is shown to be equivalent to the multichannel deconvolution problem. Hence, with one simple formulation and associated programming, the procedure can be utilized for time‐constant single‐channel and multichannel deconvolution and time‐varying single‐channel and multichannel deconvolution.
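The computational trick highlighted in the abstract, running conjugate gradients on the normal equations while obtaining every matrix-vector product from two cascaded convolutions, can be sketched as follows. This is an illustrative single-channel, time-constant version, not the authors' general time-varying multichannel formulation.

```python
import numpy as np

def cg_deconvolution(wavelet, trace, n_ops, n_iter=50):
    """Least-squares deconvolution operator design via conjugate gradients.
    The normal-equations matrix is never formed: every matrix-vector product
    comes from two cascaded convolutions (a sketch of the idea in the abstract,
    not the authors' exact formulation)."""
    wavelet = np.asarray(wavelet, dtype=float)
    trace = np.asarray(trace, dtype=float)

    def forward(f):      # A f: convolve the candidate operator with the wavelet
        full = np.convolve(wavelet, f)
        out = np.zeros(len(trace))
        n = min(len(full), len(trace))
        out[:n] = full[:n]
        return out

    def adjoint(r):      # A^T r: correlate the residual with the wavelet
        return np.correlate(r, wavelet, mode="full")[len(wavelet) - 1:][:n_ops]

    f = np.zeros(n_ops)
    r = adjoint(trace - forward(f))           # gradient of the least-squares misfit
    p, rs_old = r.copy(), float(r @ r)
    for _ in range(n_iter):
        Ap = adjoint(forward(p))
        alpha = rs_old / float(p @ Ap)
        f += alpha * p
        r -= alpha * Ap
        rs_new = float(r @ r)
        if rs_new < 1e-20:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return f
```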
APA, Harvard, Vancouver, ISO, and other styles
39

Guruswami, Venkatesan, and Chaoping Xing. "Optimal Rate List Decoding over Bounded Alphabets Using Algebraic-geometric Codes." Journal of the ACM 69, no. 2 (April 30, 2022): 1–48. http://dx.doi.org/10.1145/3506668.

Full text
Abstract:
We give new constructions of two classes of algebraic code families that are efficiently list decodable with small output list size from a fraction 1-R-ε of adversarial errors, where R is the rate of the code, for any desired positive constant ε. The alphabet size depends only on ε and is nearly optimal. The first class of codes is obtained by folding algebraic-geometric codes using automorphisms of the underlying function field. The second class of codes is obtained by restricting evaluation points of an algebraic-geometric code to rational points from a subfield. In both cases, we develop a linear-algebraic approach to perform list decoding, which pins down the candidate messages to a subspace with a nice “periodic” structure. To prune this subspace and obtain a good bound on the list size, we pick subcodes of these codes by pre-coding into certain subspace-evasive sets that are guaranteed to have small intersection with the sort of periodic subspaces that arise in our list decoding. We develop two approaches for constructing such subspace-evasive sets. The first is a Monte Carlo construction of hierarchical subspace-evasive (h.s.e.) sets that leads to excellent list size but is not explicit. The second approach exploits a further ultra-periodicity of our subspaces and uses a novel construct called subspace designs, which were subsequently constructed explicitly and also found further applications in pseudorandomness. To get a family of codes over a fixed alphabet size, we instantiate our approach with algebraic-geometric codes based on the Garcia–Stichtenoth tower of function fields. Combining this with pruning via h.s.e. sets yields codes list-decodable up to a 1-R-ε error fraction with list size bounded by O(1/ε), matching the existential bound for random codes up to constant factors. Further, the alphabet size can be made exp(Õ(1/ε²)), which is not much worse than the lower bound of exp(Ω(1/ε)). The parameters we achieve are thus quite close to the existential bounds in all three aspects (error-correction radius, alphabet size, and list size) simultaneously. This construction is, however, Monte Carlo and the claimed list-decoding property only holds with high probability. Once the code is (efficiently) sampled, the encoding/decoding algorithms are deterministic with a running time O_ε(N^c) for an absolute constant c, where N is the code's block length. Using subspace designs instead for the pruning, our approach yields the first deterministic construction of an algebraic code family of rate R with efficient list decoding from a 1-R-ε fraction of errors over an alphabet of constant size exp(Õ(1/ε²)). The list-size bound is upper bounded by a very slowly growing function of the block length N; in particular, it is at most O(log^(r) N) (the r-th iterated logarithm) for any fixed integer r. The explicit construction avoids the shortcoming of the Monte Carlo sampling at the expense of a slightly worse list size.
APA, Harvard, Vancouver, ISO, and other styles
40

Kohútka, Lukáš, Lukáš Nagy, and Viera Stopjaková. "Hardware Dynamic Memory Manager for Hard Real-Time Systems." Annals of Emerging Technologies in Computing 3, no. 4 (October 1, 2019): 48–70. http://dx.doi.org/10.33166/aetic.2019.04.005.

Full text
Abstract:
This paper presents a novel hardware architecture of a dynamic memory manager providing memory allocation and deallocation operations that are suitable for hard real-time and safety-critical systems due to the very high determinism of these operations. The proposed memory manager implements the Worst-Fit algorithm for selecting a suitable free block of memory that can be used by the external environment, e.g. a CPU. The deterministic timing of the memory allocation and deallocation operations is essential for hard real-time systems. The proposed memory manager performs these operations in nearly constant time thanks to the adoption of a hardware-accelerated max queue, which is a data structure that continuously provides the largest free block of memory in two clock cycles regardless of the actual number or constellation of existing free blocks of memory. In order to minimize the overhead caused by implementing the memory management in hardware, the max queue was optimized by developing a new sorting architecture, called Rocket-Queue. The Rocket-Queue architecture as well as the whole memory manager is described in this paper in detail. The memory manager and the Rocket-Queue architecture were verified using a simplified version of UVM, applying billions of randomly generated instructions as testing inputs. The Rocket-Queue architecture was synthesized into an Intel FPGA Cyclone V with a 100 MHz clock frequency, and the results show that it consumes from 17.06% to 38.67% fewer LUTs than the existing architecture, called Systolic Array. The memory manager, implemented in the form of a coprocessor that provides four custom instructions, was synthesized into 28 nm TSMC HPM technology with a 1 GHz clock frequency and 0.9 V power supply. The ASIC synthesis results show that the Rocket-Queue based memory manager can occupy up to 24.59% smaller chip area than the Systolic Array based manager. In terms of total power consumption, the Rocket-Queue based memory manager consumes from 15.16% to 42.95% less power.
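A software analogue of the Worst-Fit selection policy backed by a max queue can be sketched with a binary heap. This illustrates only the allocation policy; it is not the Rocket-Queue hardware design, and the free-block bookkeeping (no coalescing) is a simplification.

```python
import heapq

class WorstFitAllocator:
    """Toy Worst-Fit allocator: always split the largest free block.
    Python's heapq is a min-heap, so sizes are stored negated."""
    def __init__(self, total_size):
        self.free = [(-total_size, 0)]            # (-size, start address)

    def alloc(self, size):
        if not self.free or -self.free[0][0] < size:
            return None                           # no block large enough
        largest, start = heapq.heappop(self.free)
        remaining = -largest - size
        if remaining > 0:                         # keep the leftover piece
            heapq.heappush(self.free, (-remaining, start + size))
        return start

    def free_block(self, start, size):
        heapq.heappush(self.free, (-size, start))  # no coalescing in this sketch

mgr = WorstFitAllocator(1024)
a = mgr.alloc(100)    # -> 0
b = mgr.alloc(200)    # -> 100
mgr.free_block(a, 100)
```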
APA, Harvard, Vancouver, ISO, and other styles
41

Hou, Jing, Yan Yang, He He, and Tian Gao. "Adaptive Dual Extended Kalman Filter Based on Variational Bayesian Approximation for Joint Estimation of Lithium-Ion Battery State of Charge and Model Parameters." Applied Sciences 9, no. 9 (April 26, 2019): 1726. http://dx.doi.org/10.3390/app9091726.

Full text
Abstract:
An accurate state of charge (SOC) estimation is vital for the safe operation and efficient management of lithium-ion batteries. At present, the extended Kalman filter (EKF) can accurately estimate the SOC under the condition of a precise battery model and deterministic noise statistics. However, in practical applications, the battery characteristics change with different operating conditions and the measurement noise statistics may vary with time, resulting in nonoptimal and even unreliable estimation of SOC by EKF. To improve the SOC estimation accuracy under uncertain measurement noise statistics, a variational Bayesian approximation-based adaptive dual extended Kalman filter (VB-ADEKF) is proposed in this paper. The variational Bayesian inference is integrated with the dual EKF (DEKF) to jointly estimate the lithium-ion battery parameters and SOC. Meanwhile, the measurement noise variances are simultaneously estimated in the SOC estimation process to compensate for the model uncertainties, so that the adaptability of the proposed algorithm to dynamic changes in battery characteristics is greatly improved. A constant current discharge test, a pulse current discharge test, and an urban dynamometer driving schedule (UDDS) test are performed to verify the effectiveness and superiority of the proposed algorithm by comparison with the DEKF algorithm. The experimental results show that the proposed VB-ADEKF algorithm outperforms the traditional DEKF algorithm in terms of SOC estimation accuracy, convergence rate, and robustness.
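For context, the recursion being adapted is the standard extended Kalman filter predict/update step, sketched below in generic form. The variational Bayesian noise-variance estimation and the dual (parameter) filter of the proposed VB-ADEKF are not reproduced here.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One generic extended Kalman filter step (predict + update).
    f/h are the state-transition and measurement functions, F/H their
    Jacobians, Q/R the process and measurement noise covariances."""
    # Predict
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R                 # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```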
APA, Harvard, Vancouver, ISO, and other styles
42

Wu, Liangshun, and Hengjin Cai. "Novel Stream Ciphering Algorithm for Big Data Images Using Zeckendorf Representation." Wireless Communications and Mobile Computing 2021 (October 21, 2021): 1–19. http://dx.doi.org/10.1155/2021/4637876.

Full text
Abstract:
Big data is a term used for very large data sets. Digital equipment produces vast amounts of images every day, and the need for image encryption is increasingly pronounced, for example, to safeguard the privacy of patients' medical imaging data in cloud storage. There is an obvious tension between security and privacy on the one hand and the widespread use of big data on the other. Nowadays, the most important engine to provide confidentiality is encryption. However, block ciphering is not suitable for such huge data in a real-time environment because of the strong correlation among pixels and high redundancy; stream ciphering is considered a lightweight solution for ciphering high-definition images (i.e., high data volume). For a stream cipher, since the encryption algorithm is deterministic, the only thing one can do is make the key “look random.” This article proves that the probability that the digit 1 appears in the midsection of a Zeckendorf representation is constant, which can be utilized to generate pseudorandom numbers. Then, a novel stream cipher key generator (ZPKG) is proposed to encrypt high-definition images that need to be transferred. The experimental results show that the proposed stream ciphering method, whose keystream satisfies Golomb's randomness postulates, is faster than RC4 and LFSR-based generators with comparable hardware consumption, and the method is highly key sensitive and shows good resistance against noise attacks and statistical attacks.
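The Zeckendorf representation at the heart of the proposed generator is easy to compute greedily, and the constancy claim can be probed empirically. The digit position and integer range below are arbitrary illustrative choices, not the paper's construction.

```python
def zeckendorf(n):
    """Zeckendorf representation of n: the unique sum of non-consecutive
    Fibonacci numbers, returned as a list of bits (most significant first)."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs[:-1]):   # greedy: take each Fibonacci number that fits
        if f <= n:
            bits.append(1)
            n -= f
        else:
            bits.append(0)
    return bits

# Frequency of the digit 1 at a fixed interior position, over a range of n
# (a quick empirical check of the constancy claim, not the paper's proof).
pos = 5
samples = [zeckendorf(n) for n in range(1000, 5000)]
usable = [b for b in samples if len(b) > pos]
print(round(sum(b[pos] for b in usable) / len(usable), 3))
```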
APA, Harvard, Vancouver, ISO, and other styles
43

Alfonso, L. "An algorithm for the numerical solution of the multivariate master equation for stochastic coalescence." Atmospheric Chemistry and Physics Discussions 15, no. 8 (April 23, 2015): 12213–40. http://dx.doi.org/10.5194/acpd-15-12213-2015.

Full text
Abstract:
In cloud modeling studies, the time evolution of droplet size distributions due to collision-coalescence events is usually modeled with the kinetic collection equation (KCE) or Smoluchowski coagulation equation. However, the KCE is a deterministic equation with no stochastic fluctuations or correlations. Therefore, the full stochastic description of cloud droplet growth in a coalescing system must be obtained from the solution of the multivariate master equation, which models the evolution of the state vector for the number of droplets of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel, multivariate initial conditions and small system sizes is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions of the master equation obtained for the constant and sum kernels. Correlation coefficients were calculated for the turbulent hydrodynamic kernel, and true stochastic averages were compared with numerical solutions of the kinetic collection equation for that case. The results for collection kernels depending on droplet mass demonstrate that the magnitude of the correlations is significant and must be taken into account when modeling the evolution of a finite-volume coalescing system.
APA, Harvard, Vancouver, ISO, and other styles
44

Alfonso, L. "An algorithm for the numerical solution of the multivariate master equation for stochastic coalescence." Atmospheric Chemistry and Physics 15, no. 21 (November 6, 2015): 12315–26. http://dx.doi.org/10.5194/acp-15-12315-2015.

Full text
Abstract:
In cloud modeling studies, the time evolution of droplet size distributions due to collision–coalescence events is usually modeled with the Smoluchowski coagulation equation, also known as the kinetic collection equation (KCE). However, the KCE is a deterministic equation with no stochastic fluctuations or correlations. Therefore, the full stochastic description of cloud droplet growth in a coalescing system must be obtained from the solution of the multivariate master equation, which models the evolution of the state vector for the number of droplets of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel, multivariate initial conditions and small system sizes is introduced. The performance of the method was assessed by comparing the numerically calculated particle mass spectrum with analytical solutions of the master equation obtained for the constant and sum kernels. Correlation coefficients were calculated for the turbulent hydrodynamic kernel, and true stochastic averages were compared with numerical solutions of the kinetic collection equation for that case. The results for collection kernels depending on droplet mass demonstrate that the magnitudes of the correlations are significant and must be taken into account when modeling the evolution of a finite-volume coalescing system.
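A related (but different) way to access the same stochastic dynamics is to sample trajectories of the coalescence process directly, e.g. with a Gillespie-style Monte Carlo for a constant kernel and a monodisperse initial condition, as sketched below. The paper instead solves the multivariate master equation numerically; this sketch is only for orientation.

```python
import random

def gillespie_coalescence(n0=20, kernel=1.0, t_end=5.0, seed=0):
    """Single stochastic trajectory of a coalescing system with a constant
    kernel (Gillespie-style simulation). This samples the master-equation
    dynamics by Monte Carlo; it is not the direct numerical solution of the
    multivariate master equation developed in the paper."""
    random.seed(seed)
    droplets = [1] * n0                 # masses, monodisperse initial condition
    t = 0.0
    history = [(t, len(droplets))]
    while len(droplets) > 1 and t < t_end:
        n = len(droplets)
        total_rate = kernel * n * (n - 1) / 2   # constant kernel: all pairs equal
        t += random.expovariate(total_rate)     # waiting time to next coalescence
        i, j = random.sample(range(n), 2)       # pick a uniformly random pair
        droplets[i] += droplets[j]              # coalesce j into i
        droplets.pop(j)
        history.append((t, len(droplets)))
    return droplets, history

masses, hist = gillespie_coalescence()
print(masses)
```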
APA, Harvard, Vancouver, ISO, and other styles
45

Limon-Cantu, David, and Vicente Alarcon-Aquino. "Multiresolution dendritic cell algorithm for network anomaly detection." PeerJ Computer Science 7 (October 19, 2021): e749. http://dx.doi.org/10.7717/peerj-cs.749.

Full text
Abstract:
Anomaly detection in computer networks is a complex task that requires distinguishing normality from anomaly. Network attack detection in information systems is a constant challenge in computer security research, as information systems provide essential services for enterprises and individuals. The consequences of these attacks could be the access, disclosure, or modification of information, as well as denial of computer services and resources. Intrusion Detection Systems (IDS) are developed as solutions to detect anomalous behavior, such as denial of service and backdoors. The proposed model was inspired by the behavior of dendritic cells and their interactions with the human immune system, known as the Dendritic Cell Algorithm (DCA), and combines the use of Multiresolution Analysis (MRA) with the Maximal Overlap Discrete Wavelet Transform (MODWT), as well as the segmented deterministic DCA approach (S-dDCA). The proposed approach is a binary classifier that aims to analyze a time-frequency representation of time-series data obtained from high-level network features, in order to classify data as normal or anomalous. The MODWT was used to extract the approximations of two input signal categories at different levels of decomposition, which are then used as processing elements for the multiresolution DCA. The model was evaluated using the NSL-KDD, UNSW-NB15, CIC-IDS2017 and CSE-CIC-IDS2018 datasets, containing contemporary network traffic and attacks. The proposed MRA S-dDCA model achieved an accuracy of 97.37%, 99.97%, 99.56%, and 99.75% for the tested datasets, respectively. Comparisons with the DCA and state-of-the-art approaches for network anomaly detection are presented. The proposed approach was able to surpass state-of-the-art approaches on the UNSW-NB15 and CSE-CIC-IDS2018 datasets, whereas the results obtained with the NSL-KDD and CIC-IDS2017 datasets are competitive with machine learning approaches.
APA, Harvard, Vancouver, ISO, and other styles
46

Plaskura, Paweł. "MODELLING OF FORGETTING CURVES IN EDUCATIONAL E-ENVIRONMENT." Information Technologies and Learning Tools 71, no. 3 (June 29, 2019): 1. http://dx.doi.org/10.33407/itlt.v71i3.2368.

Full text
Abstract:
Modelling the didactic process with an educational network requires a network representation of the learning and forgetting curves known from the literature. The learning and forgetting curves are solutions of differential equations. These differential equations can be represented as a network of connected elements, analogous to electrical circuits, and drawn as an intuitive schematic. The network can be simulated with a microsystems simulator. Such an approach enables the easy creation and analysis of macromodels and allows many advanced simulation algorithms to be used. The analogy makes it possible to define the educational environment by introducing network variables and giving them meaning relative to generalized variables. The paper presents examples of representing forgetting curves as such networks. The parameters of the elements were selected in a deterministic optimisation process, and the obtained results were compared with the forgetting curves known from the literature. The relevant time constants were identified in the systems and their values were given; these time constants correspond to the parameters appearing in the formulas describing the forgetting curves. Based on the results, appropriate conclusions were drawn. The work also presents the concept of a model that uses behavioural modelling and element parameters that vary with state and time; this model has been used in practice. The presented approach enables a much more accurate determination of the parameters of the forgetting curves, and the models can be used to simulate the forgetting process. The paper should interest those who model the didactic process, especially on e-learning platforms.
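The simplest example of a forgetting curve arising as the solution of a differential equation is exponential decay with a single time constant, sketched below. The network macromodels and behavioural parameters discussed in the paper are considerably richer.

```python
import numpy as np

def forgetting_curve(t, tau, r0=1.0):
    """Exponential forgetting curve R(t) = r0 * exp(-t / tau): the solution
    of dR/dt = -R / tau, the simplest ODE model behind such curves (the
    paper's network macromodels are more elaborate)."""
    return r0 * np.exp(-np.asarray(t, dtype=float) / tau)

days = np.array([0, 1, 2, 7, 30])
print(forgetting_curve(days, tau=5.0).round(3))   # retention at each day, tau = 5 days
```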
APA, Harvard, Vancouver, ISO, and other styles
47

Chakrabarty, Anindya, Zongwei Luo, Rameshwar Dubey, and Shan Jiang. "A theoretical model of jump diffusion-mean reversion." Business Process Management Journal 23, no. 3 (June 5, 2017): 537–54. http://dx.doi.org/10.1108/bpmj-01-2016-0005.

Full text
Abstract:
Purpose – The purpose of this paper is to develop a theoretical model of a jump diffusion-mean reversion constant proportion portfolio insurance (CPPI) strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature. Design/methodology/approach – The paper adopts Merton's jump diffusion (JD) model to simulate the price path followed by risky assets and the CIR mean reversion model to simulate the path followed by the short-term interest rate. The floor of the CPPI strategy is linked to the stochastic process driving the value of a fixed income instrument whose yield follows the CIR mean reversion model. The developed model is benchmarked against CNX-NIFTY 50 and is back-tested during the extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique. Findings – Back-testing the algorithm using Monte Carlo simulation across the crisis and recovery phases of the 2008 recession regime revealed that the portfolio performs better than the risky markets during the crisis by hedging the downside risk effectively and performs better than the fixed income instruments during the growth phase by leveraging on the upside potential. This makes it a value-enhancing proposition for risk-averse investors. Originality/value – The study modifies the CPPI algorithm by re-defining the floor of the algorithm to be a stochastic mean-reverting process which is guided by the movement of the short-term interest rate in the economy. This development is more relevant for two reasons: first, the short-term interest rate changes with time, and hence a constant yield during each rebalancing step is not practically feasible; second, the literature has shown that the short-term interest rate tends to move opposite to the equity market. Thereby, during a bear run the floor will increase at a higher rate, whereas the growth of the floor will stagnate during the bull phase, which aids the model to capitalize on the upward potential during the growth phase and to cut down on the exposure during the crisis phase.
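The core CPPI rebalancing rule referenced throughout the abstract is a one-line allocation formula; a minimal sketch is given below. The stochastic, interest-rate-driven floor and the jump-diffusion asset dynamics of the paper are not modelled here, and the multiplier value is arbitrary.

```python
def cppi_rebalance(portfolio_value, floor_value, multiplier=4.0):
    """One CPPI rebalancing step: invest m times the cushion in the risky
    asset and the rest in the reserve asset. The stochastic (interest-rate
    driven) floor and jump-diffusion price path of the paper are not modelled;
    floor_value is whatever the floor process currently is."""
    cushion = max(portfolio_value - floor_value, 0.0)
    risky_exposure = min(multiplier * cushion, portfolio_value)   # no leverage
    safe_exposure = portfolio_value - risky_exposure
    return risky_exposure, safe_exposure

print(cppi_rebalance(100.0, 80.0))   # (80.0, 20.0) with multiplier 4
```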
APA, Harvard, Vancouver, ISO, and other styles
48

LAWLOR, DAVID, YANG WANG, and ANDREW CHRISTLIEB. "ADAPTIVE SUB-LINEAR TIME FOURIER ALGORITHMS." Advances in Adaptive Data Analysis 05, no. 01 (January 2013): 1350003. http://dx.doi.org/10.1142/s1793536913500039.

Full text
Abstract:
We present a new deterministic algorithm for the sparse Fourier transform problem, in which we seek to identify k ≪ N significant Fourier coefficients from a signal of bandwidth N. Previous deterministic algorithms exhibit quadratic runtime scaling, while our algorithm scales linearly with k in the average case. Underlying our algorithm are a few simple observations relating the Fourier coefficients of time-shifted samples to unshifted samples of the input function. This allows us to detect when aliasing between two or more frequencies has occurred, as well as to determine the value of unaliased frequencies. We show that empirically our algorithm is orders of magnitude faster than competing algorithms.
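The key observation mentioned in the abstract, that time-shifted samples reveal frequency content through phase, can be sketched for the single-frequency, unaliased case as follows. Aliasing detection and the full sub-linear algorithm are not reproduced.

```python
import numpy as np

def estimate_frequency(x0, x_shift, shift, N):
    """Recover an (unaliased) frequency from one sample of the signal and one
    sample of its time-shifted copy, using the phase of their ratio:
    if x(t) = a * exp(2*pi*1j*k*t/N), then x(t+shift)/x(t) = exp(2*pi*1j*k*shift/N).
    A sketch of the observation underlying the algorithm, not the full method."""
    ratio = x_shift / x0
    k = np.angle(ratio) * N / (2 * np.pi * shift)
    return int(round(k)) % N

N, k_true = 256, 37
t = np.arange(N)
x = np.exp(2j * np.pi * k_true * t / N)
print(estimate_frequency(x[0], x[1], shift=1, N=N))   # 37
```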
APA, Harvard, Vancouver, ISO, and other styles
49

Ivanyos, Gábor, Marek Karpinski, and Nitin Saxena. "Deterministic Polynomial Time Algorithms for Matrix Completion Problems." SIAM Journal on Computing 39, no. 8 (January 2010): 3736–51. http://dx.doi.org/10.1137/090781231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

BONFANTE, G., A. CICHON, J. Y. MARION, and H. TOUZET. "Algorithms with polynomial interpretation termination proof." Journal of Functional Programming 11, no. 1 (January 2001): 33–53. http://dx.doi.org/10.1017/s0956796800003877.

Full text
Abstract:
We study the effect of polynomial interpretation termination proofs of deterministic (resp. non-deterministic) algorithms defined by confluent (resp. non-confluent) rewrite systems over data structures which include strings, lists and trees, and we classify them according to the interpretations of the constructors. This leads to the definition of six function classes which turn out to be exactly the deterministic (resp. non-deterministic) polynomial time, linear exponential time and linear doubly exponential time computable functions when the class is based on confluent (resp. non-confluent) rewrite systems. We also obtain a characterisation of the linear space computable functions. Finally, we demonstrate that functions with exponential interpretation termination proofs are super-elementary.
APA, Harvard, Vancouver, ISO, and other styles