Journal articles on the topic 'Algorithmic probability theory'

Consult the top 50 journal articles for your research on the topic 'Algorithmic probability theory.'


1

Helmuth, Tyler, Will Perkins, and Guus Regts. "Algorithmic Pirogov–Sinai theory." Probability Theory and Related Fields 176, no. 3-4 (June 26, 2019): 851–95. http://dx.doi.org/10.1007/s00440-019-00928-y.

2

Solomonoff, Ray J. "The Discovery of Algorithmic Probability." Journal of Computer and System Sciences 55, no. 1 (August 1997): 73–88. http://dx.doi.org/10.1006/jcss.1997.1500.

3

Sterkenburg, Tom F. "A Generalized Characterization of Algorithmic Probability." Theory of Computing Systems 61, no. 4 (May 13, 2017): 1337–52. http://dx.doi.org/10.1007/s00224-017-9774-9.

4

Levin, Leonid A. "Some theorems on the algorithmic approach to probability theory and information theory." Annals of Pure and Applied Logic 162, no. 3 (December 2010): 224–35. http://dx.doi.org/10.1016/j.apal.2010.09.007.

5

Zenil, Hector, Fernando Soler-Toscano, Jean-Paul Delahaye, and Nicolas Gauvrit. "Two-dimensional Kolmogorov complexity and an empirical validation of the Coding theorem method by compressibility." PeerJ Computer Science 1 (September 30, 2015): e23. http://dx.doi.org/10.7717/peerj-cs.23.

Abstract:
We propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking, generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules and producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimations of the complexity of the generated patterns. Experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that results are in agreement with the results obtained using lossless compression algorithms when both methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of Elementary Cellular Automata.
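The abstract's coding-theorem idea, that patterns produced frequently by simple rules have low algorithmic complexity, and its comparison against lossless compression can be illustrated with a toy sketch. Here compressed length is a crude, computable stand-in for algorithmic complexity, not the authors' CTM:

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude,
    computable upper-bound proxy for algorithmic complexity."""
    return len(zlib.compress(s.encode("ascii"), 9))

# A highly structured pattern: a short generating rule, so low complexity.
structured = "01" * 512

# A pseudorandom string of the same length: no short regularity
# for the compressor to exploit.
rng = random.Random(0)
pseudorandom = "".join(rng.choice("01") for _ in range(1024))

print(compressed_size(structured), compressed_size(pseudorandom))
```

The structured string compresses far more than the pseudorandom one, mirroring the coding-theorem relation K(x) ≈ −log₂ m(x): patterns with high algorithmic probability have low complexity.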
6

Cover, Thomas M., Peter Gacs, and Robert M. Gray. "Kolmogorov's Contributions to Information Theory and Algorithmic Complexity." Annals of Probability 17, no. 3 (July 1989): 840–65. http://dx.doi.org/10.1214/aop/1176991250.

7

DOWNEY, ROD, DENIS R. HIRSCHFELDT, JOSEPH S. MILLER, and ANDRÉ NIES. "RELATIVIZING CHAITIN'S HALTING PROBABILITY." Journal of Mathematical Logic 05, no. 02 (December 2005): 167–92. http://dx.doi.org/10.1142/s0219061305000468.

Abstract:
As a natural example of a 1-random real, Chaitin proposed the halting probability Ω of a universal prefix-free machine. We can relativize this example by considering a universal prefix-free oracle machine U. Let Ω_U^A be the halting probability of U^A; this gives a natural uniform way of producing an A-random real for every A ∈ 2^ω. It is this operator which is our primary object of study. We can draw an analogy between the jump operator from computability theory and this Omega operator. But unlike the jump, which is invariant (up to computable permutation) under the choice of an effective enumeration of the partial computable functions, Ω_U^A can be vastly different for different choices of U. Even for a fixed U, there are oracles A =* B such that Ω_U^A and Ω_U^B are 1-random relative to each other. We prove this and many other interesting properties of Omega operators. We investigate these operators from the perspective of analysis, computability theory, and of course, algorithmic randomness.
8

Kozyrev, V. P., and S. V. Yushmanov. "Graph theory (algorithmic, algebraic, and metric problems)." Journal of Soviet Mathematics 39, no. 1 (October 1987): 2476–508. http://dx.doi.org/10.1007/bf01086177.

9

Asmussen, Søren, and Tomasz Rolski. "Computational methods in risk theory: A matrix-algorithmic approach." Insurance: Mathematics and Economics 10, no. 4 (January 1992): 259–74. http://dx.doi.org/10.1016/0167-6687(92)90058-j.

10

Hurwitz, Carol M. "On the homotopy theory of monoids." Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics 47, no. 2 (October 1989): 171–85. http://dx.doi.org/10.1017/s1446788700031621.

Abstract:
In this paper, it is shown that any connected, small category can be embedded in a semi-groupoid (a category in which there is at least one isomorphism between any two elements) in such a way that the embedding includes a homotopy equivalence of classifying spaces. This immediately gives a monoid whose classifying space is of the same homotopy type as that of the small category. This construction is essentially algorithmic, and furthermore, yields a finitely presented monoid whenever the small category is finitely presented. Some of these results are generalizations of ideas of McDuff.
11

Remeslennikov, V. N., and V. A. Roman'kov. "Model-theoretic and algorithmic questions in group theory." Journal of Soviet Mathematics 31, no. 3 (November 1985): 2887–939. http://dx.doi.org/10.1007/bf02106805.

12

Zenil, Hector, Santiago Hernández-Orozco, Narsis Kiani, Fernando Soler-Toscano, Antonio Rueda-Toicen, and Jesper Tegnér. "A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity." Entropy 20, no. 8 (August 15, 2018): 605. http://dx.doi.org/10.3390/e20080605.

Abstract:
We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin's theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with multidimensional objects other than strings, such as arrays and tensors. To test the measure we demonstrate the power of CTM on low-algorithmic-randomness objects that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical low-algorithmic-randomness expectation. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages (Wolfram Language/Mathematica, Matlab, R, Perl, Python, Pascal, C++, and Haskell), as well as an online algorithmic complexity calculator.
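The additive block decomposition at the heart of BDM can be sketched as follows. The CTM values below are hypothetical stand-ins (real CTM tables are computed from massive enumerations of small Turing machines), but the aggregation rule, the CTM value of each distinct block plus log₂ of its multiplicity, follows the published method:

```python
import math
from collections import Counter

# Hypothetical CTM estimates for 2-bit blocks (illustrative numbers only;
# actual values come from output frequencies of small Turing machines).
CTM = {"00": 2.0, "01": 3.0, "10": 3.0, "11": 2.0}

def bdm(s: str, k: int = 2) -> float:
    """Block Decomposition Method estimate: partition s into
    non-overlapping k-bit blocks, then sum CTM(block) + log2(count)
    over the distinct blocks."""
    blocks = Counter(s[i:i + k] for i in range(0, len(s), k))
    return sum(CTM[b] + math.log2(n) for b, n in blocks.items())

print(bdm("000100110011"))  # blocks: "00" x3, "01" x1, "11" x2
```

Repetition of a block adds only log₂ of its count rather than its full CTM value again, which is why BDM degrades gracefully toward an entropy-like measure on large objects.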
13

Kramosil, Ivan. "On Pseudo-Random Sequences over Finite Automata." Fundamenta Informaticae 8, no. 1 (January 1, 1985): 55–62. http://dx.doi.org/10.3233/fi-1985-8104.

Abstract:
It is a well-known fact that binary sequences (strings) of high algorithmic complexity can be taken as good approximations of statistically independent random sequences with two equiprobable outputs. Here a "sequence of high algorithmic complexity" is one for which the length of the shortest program generating the sequence on a universal Turing machine differs only by an a priori given constant from the length of the sequence itself. The present paper generalizes this result to the case of a finite (not necessarily binary) alphabet. Considering an infinite sequence of finite sequences of high algorithmic complexity over a finite alphabet, the relative frequency of occurrences of each letter, or of each finite string of letters, is proved to tend to the inverse of the total number of letters, or of strings of letters of the given length. This result may be seen as an analogue of the strong law of large numbers in the case of the equiprobable probability distribution.
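The equidistribution property proved in the paper can be illustrated empirically. A seeded pseudorandom sequence serves as a computable stand-in for a sequence of high algorithmic complexity (a genuinely incompressible sequence cannot be produced by a short program, so any runnable demo is necessarily an approximation):

```python
import random
from collections import Counter

alphabet = "abcd"
n = 100_000
rng = random.Random(42)
seq = "".join(rng.choice(alphabet) for _ in range(n))

# Relative frequency of each letter should approach 1/|alphabet| = 0.25,
# the equiprobable limit the paper establishes for high-complexity sequences.
freqs = {c: cnt / n for c, cnt in Counter(seq).items()}
print(freqs)
```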
14

Benatti, Fabio. "Quantum Algorithmic Complexities and Entropy." Open Systems & Information Dynamics 16, no. 01 (March 2009): 1–28. http://dx.doi.org/10.1142/s1230161209000025.

Abstract:
We review the basics of classical algorithmic complexity theory and two of its quantum extensions that have been prompted by the foreseeable existence of quantum computing devices. In particular, we will examine the relations between these extensions and the von Neumann entropy rate of generic quantum information sources of ergodic type.
15

Nagorkin, Maksim, Vladimir Fedorov, Igor Pyrikov, and Mihail Toporkov. "REGULATIONS OF ROUGHNESS PARAMETERS FOR MACHINERY FUNCTIONAL SURFACES IN TECHNOLOGICAL DOCUMENTATION." Bulletin of Bryansk State Technical University 2019, no. 3 (March 27, 2019): 4–12. http://dx.doi.org/10.30987/article_5c8b5ce9c06c31.07069800.

Abstract:
On the basis of a probability-theory approach to the formation of surface roughness parameters during machining, an algorithmic solution is offered for the urgent problem of regulating roughness parameters of machinery functional surfaces in technological documentation. Algorithmic solutions are developed for the following parameter versions (RSS 2.309-73): the highest value; the smallest value; a value range; a rated value with ultimate deviations; and an indication of two or more parameters. The theoretical investigation results may serve as a starting point for the further development of theory and practice for the technological support of roughness parameters of machinery functional surfaces with the required reliability. For wide practical realization of the investigation results, a number of problems must be solved: 1) the development of standards or guide information on regulating, in technological documentation, the required technological values of roughness parameters; 2) the further development of investigations in the field of reliable technological support of quality parameters (including roughness) of blank surfaces in the course of machining in technologically flexible systems.
16

BIENVENU, LAURENT, and CHRISTOPHER P. PORTER. "DEEP CLASSES." Bulletin of Symbolic Logic 22, no. 2 (June 2016): 249–86. http://dx.doi.org/10.1017/bsl.2016.9.

Abstract:
A set of infinite binary sequences ${\cal C} \subseteq 2^{\mathbb{N}}$ is negligible if there is no partial probabilistic algorithm that produces an element of this set with positive probability. The study of negligibility is of particular interest in the context of ${\rm{\Pi }}_1^0 $ classes. In this paper, we introduce the notion of depth for ${\rm{\Pi }}_1^0 $ classes, which is a stronger form of negligibility. Whereas a negligible ${\rm{\Pi }}_1^0 $ class ${\cal C}$ has the property that one cannot probabilistically compute a member of ${\cal C}$ with positive probability, a deep ${\rm{\Pi }}_1^0 $ class ${\cal C}$ has the property that one cannot probabilistically compute an initial segment of a member of ${\cal C}$ with high probability. That is, the probability of computing a length-n initial segment of a deep ${\rm{\Pi }}_1^0 $ class converges to 0 effectively in n. We prove a number of basic results about depth, negligibility, and a variant of negligibility that we call tt-negligibility. We provide a number of examples of deep ${\rm{\Pi }}_1^0 $ classes that occur naturally in computability theory and algorithmic randomness. We also study deep classes in the context of mass problems, examine the relationship between deep classes and certain lowness notions in algorithmic randomness, and establish a relationship between members of deep classes and the amount of mutual information with Chaitin's Ω.
17

SCHMIDHUBER, JÜRGEN. "HIERARCHIES OF GENERALIZED KOLMOGOROV COMPLEXITIES AND NONENUMERABLE UNIVERSAL MEASURES COMPUTABLE IN THE LIMIT." International Journal of Foundations of Computer Science 13, no. 04 (August 2002): 587–612. http://dx.doi.org/10.1142/s0129054102001291.

Abstract:
The traditional theory of Kolmogorov complexity and algorithmic probability focuses on monotone Turing machines with one-way write-only output tape. This naturally leads to the universal enumerable Solomonoff-Levin measure. Here we introduce more general, nonenumerable but cumulatively enumerable measures (CEMs) derived from Turing machines with lexicographically nondecreasing output and random input, and even more general approximable measures and distributions computable in the limit. We obtain a natural hierarchy of generalizations of algorithmic probability and Kolmogorov complexity, suggesting that the "true" information content of some (possibly infinite) bitstring x is the size of the shortest nonhalting program that converges to x and nothing but x on a Turing machine that can edit its previous outputs. Among other things we show that there are objects computable in the limit yet more random than Chaitin's "number of wisdom" Omega, that any approximable measure of x is small for any x lacking a short description, that there is no universal approximable distribution, that there is a universal CEM, and that any nonenumerable CEM of x is small for any x lacking a short enumerating program. We briefly mention consequences for universes sampled from such priors.
18

Piantadosi, Steven T., and Evelina Fedorenko. "Infinitely productive language can arise from chance under communicative pressure." Journal of Language Evolution 2, no. 2 (April 7, 2017): 141–47. http://dx.doi.org/10.1093/jole/lzw013.

Abstract:
Human communication is unparalleled in the animal kingdom. The key distinctive feature of our language is productivity: we are able to express an infinite number of ideas using a limited set of words. Traditionally, it has been argued or assumed that productivity emerged as a consequence of very specific, innate grammatical systems. Here we formally develop an alternative hypothesis: productivity may instead have arisen solely as a consequence of increasing the number of signals (e.g. sentences) in a communication system, under the additional assumption that the processing mechanisms are algorithmically unconstrained. Using tools from algorithmic information theory, we examine the consequences of two intuitive constraints on the probability that a language will be infinitely productive. We prove that under maximum entropy assumptions, increasing the complexity of a language will not strongly pressure it to be finite or infinite. In contrast, increasing the number of signals in a language increases the probability of languages that do in fact have infinite cardinality. Thus, across evolutionary time, the productivity of human language could have arisen solely from algorithmic randomness combined with a communicative pressure for a large number of signals.
19

González, Carlos R., and Yaser S. Abu-Mostafa. "Mismatched Training and Test Distributions Can Outperform Matched Ones." Neural Computation 27, no. 2 (February 2015): 365–87. http://dx.doi.org/10.1162/neco_a_00697.

Abstract:
In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test distributions in supervised learning can in fact outperform matched distributions in terms of the bottom line, the out-of-sample performance, independent of the target function in question. This surprising result has theoretical and algorithmic ramifications that we discuss.
20

Liu, Tianyi, Zhehui Chen, Enlu Zhou, and Tuo Zhao. "A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization." Stochastic Systems 11, no. 4 (December 2021): 265–81. http://dx.doi.org/10.1287/stsy.2021.0083.

Abstract:
The momentum stochastic gradient descent (MSGD) algorithm has been widely applied to many nonconvex optimization problems in machine learning (e.g., training deep neural networks, variational Bayesian inference, etc.). Despite its empirical success, there is still a lack of theoretical understanding of the convergence properties of MSGD. To fill this gap, we propose to analyze the algorithmic behavior of MSGD by diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that the momentum helps escape from saddle points but hurts the convergence within the neighborhood of optima (absent step-size or momentum annealing). Our theoretical discovery partially corroborates the empirical success of MSGD in training deep neural networks.
21

Rafajłowicz, Wojciech. "Nonparametric Estimation of Continuously Parametrized Families of Probability Density Functions—Computational Aspects." Algorithms 13, no. 7 (July 8, 2020): 164. http://dx.doi.org/10.3390/a13070164.

Abstract:
We consider a rather general problem of nonparametric estimation of an uncountable set of probability density functions (p.d.f.'s) of the form f(x; r), where r is a non-random real variable ranging from R_1 to R_2. We put emphasis on the algorithmic aspects of this problem, since they are crucial for the exploratory analysis of the big data needed for the estimation. A specialized learning algorithm, based on the 2D FFT, is proposed and tested on observations that allow estimating the p.d.f.'s of jet engine temperatures as a function of rotation speed. We also derive theoretical results concerning the convergence of the estimation procedure, with hints on selecting the parameters of the estimation algorithm.
22

Shanin, N. A. "Referee’s report on Leonid Levin’s dissertation “Some Theorems on the Algorithmic Approach to Probability Theory and Information Theory” (November 27, 1972)." Annals of Pure and Applied Logic 162, no. 3 (December 2010): 236. http://dx.doi.org/10.1016/j.apal.2010.09.008.

23

AUMÜLLER, MARTIN, MARTIN DIETZFELBINGER, CLEMENS HEUBERGER, DANIEL KRENN, and HELMUT PRODINGER. "Dual-Pivot Quicksort: Optimality, Analysis and Zeros of Associated Lattice Paths." Combinatorics, Probability and Computing 28, no. 4 (August 14, 2018): 485–518. http://dx.doi.org/10.1017/s096354831800041x.

Abstract:
We present an average-case analysis of a variant of dual-pivot quicksort. We show that the algorithmic partitioning strategy used is optimal, that is, it minimizes the expected number of key comparisons. For the analysis, we calculate the expected number of comparisons exactly as well as asymptotically; in particular, we provide exact expressions for the linear, logarithmic and constant terms. An essential step is the analysis of zeros of lattice paths in a certain probability model. Along the way a combinatorial identity is proved.
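A minimal dual-pivot quicksort can be sketched as below. This is the basic two-pivot scheme, not necessarily the comparison-optimal partitioning strategy analyzed in the paper; the three-way split (less than p, between p and q, greater than q) is the part both share:

```python
def dual_pivot_quicksort(a):
    """Sort a list using two pivots p <= q, partitioning the remaining
    elements into three classes: < p, in [p, q], and > q."""
    if len(a) <= 1:
        return list(a)
    p, q = min(a[0], a[-1]), max(a[0], a[-1])
    rest = a[1:-1]
    small = [x for x in rest if x < p]
    mid = [x for x in rest if p <= x <= q]
    large = [x for x in rest if x > q]
    return (dual_pivot_quicksort(small) + [p]
            + dual_pivot_quicksort(mid) + [q]
            + dual_pivot_quicksort(large))

print(dual_pivot_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The average-case question the paper answers is exactly how a partitioning strategy like this should classify each element against p and q so as to minimize the expected number of comparisons.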
24

Kucherov, Gregory. "Evolution of biosequence search algorithms: a brief survey." Bioinformatics 35, no. 19 (April 17, 2019): 3547–52. http://dx.doi.org/10.1093/bioinformatics/btz272.

Abstract:
Motivation: Although modern high-throughput biomolecular technologies produce various types of data, biosequence data remain at the core of bioinformatic analyses. However, the computational techniques for dealing with these data have evolved dramatically. Results: In this bird's-eye review, we overview the evolution of the main algorithmic techniques for comparing and searching biological sequences. We highlight key algorithmic ideas that emerged in response to several interconnected factors: shifts of the biological analytical paradigm, the advent of new sequencing technologies, and a substantial increase in the size of the available data. We discuss the expansion of alignment-free techniques coming to replace alignment-based algorithms in large-scale analyses. We further emphasize recently emerged and growing applications of sketching methods, which support the comparison of massive datasets such as metagenomics samples. Finally, we focus on the transition to population genomics and outline the associated algorithmic challenges.
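The sketching methods mentioned in the abstract can be illustrated with a minimal MinHash estimator of k-mer Jaccard similarity between two sequences. This is a toy version of the general idea, not any particular tool's implementation; the function names and parameters are illustrative:

```python
import hashlib

def kmers(seq: str, k: int = 4) -> set:
    """All length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(items: set, num_hashes: int = 64):
    """One minimum per seeded hash function. Two sets' signatures agree
    at a given position with probability equal to their Jaccard similarity."""
    return [
        min(hashlib.sha1(f"{seed}:{x}".encode()).hexdigest() for x in items)
        for seed in range(num_hashes)
    ]

def jaccard_estimate(a: str, b: str) -> float:
    sa, sb = minhash_signature(kmers(a)), minhash_signature(kmers(b))
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

print(jaccard_estimate("ACGTACGTACGT", "ACGTACGTACGT"))  # 1.0 for identical sequences
```

The point of the sketch is that the fixed-size signatures, not the full k-mer sets, are what get stored and compared, which is what makes all-pairs comparison of massive collections feasible.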
25

Graf, Alessandra, and Penny Haxell. "Finding independent transversals efficiently." Combinatorics, Probability and Computing 29, no. 5 (May 14, 2020): 780–806. http://dx.doi.org/10.1017/s0963548320000127.

Abstract:
We give an efficient algorithm that, given a graph G and a partition V1,…,Vm of its vertex set, finds either an independent transversal (an independent set {v1,…,vm} in G such that ${v_i} \in {V_i}$ for each i), or a subset ${\cal B}$ of vertex classes such that the subgraph of G induced by $\bigcup\nolimits_{\cal B}$ has a small dominating set. A non-algorithmic proof of this result has been known for a number of years and has been used to solve many other problems. Thus we are able to give algorithmic versions of many of these applications, a few of which we describe explicitly here.
26

Li, Yun Zhi. "Vulnerability Assessment of Information System Based on Weighted Directional Graph and Complex Network Technology." Applied Mechanics and Materials 644-650 (September 2014): 2920–24. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.2920.

Abstract:
An assessment model of the vulnerability of information systems is improved by using a Bayesian equilibrium algorithm. A mathematical evaluation model of combined complex-network information systems is established through the combination of a weighted directional algorithm, and the algorithmic routine of network vulnerability assessment is designed. In order to verify the validity and reliability of the model and the algorithm, a complex-network test platform is built, and the vulnerability of the network is detected with the weighted directional method, yielding the probability distribution nephogram of network vulnerability and the curve of network performance over time. Finally, the effect of different network nodes on the vulnerability of the system is calculated with directed weights. The results show that the attacked number and attacked probability of different nodes improve the credibility of the information analysis, providing a theoretical reference for research on information system vulnerability.
27

Angelini, Maria Chiara, Paolo Fachin, and Simone de Feo. "Mismatching as a tool to enhance algorithmic performances of Monte Carlo methods for the planted clique model." Journal of Statistical Mechanics: Theory and Experiment 2021, no. 11 (November 1, 2021): 113406. http://dx.doi.org/10.1088/1742-5468/ac3657.

Abstract:
Over-parametrization was a crucial ingredient for recent developments in the inference and machine-learning fields. However, a good theory explaining this success is still lacking. In this paper we study a very simple case of a mismatched over-parametrized algorithm applied to one of the most studied inference problems: the planted clique problem. We analyze a Monte Carlo (MC) algorithm in the same class as the famous Jerrum algorithm. We show that this MC algorithm is in general suboptimal for the recovery of the planted clique. We show, however, how to enhance its performance by adding a (mismatched) parameter, the temperature; we numerically find that this over-parametrized version of the algorithm can reach the supposed algorithmic threshold for the planted clique problem.
28

Cho, Juhee, Donggyu Kim, and Karl Rohe. "Intelligent Initialization and Adaptive Thresholding for Iterative Matrix Completion: Some Statistical and Algorithmic Theory for Adaptive-Impute." Journal of Computational and Graphical Statistics 28, no. 2 (February 13, 2019): 323–33. http://dx.doi.org/10.1080/10618600.2018.1518238.

29

Mukha, V. S., and N. F. Kako. "Total probability and Bayes formulae for joint multidimensional-matrix Gaussian distributions." Proceedings of the National Academy of Sciences of Belarus. Physics and Mathematics Series 58, no. 1 (April 4, 2022): 48–59. http://dx.doi.org/10.29235/1561-2430-2022-58-1-48-59.

Abstract:
This paper is devoted to the development of a mathematical tool for obtaining the Bayesian estimations of the parameters of multidimensional regression objects in their finite-dimensional multidimensional-matrix description. Such a need arises, particularly, in the problem of dual control of regression objects when multidimensional-matrix mathematical formalism is used for the description of the controlled object. In this paper, the concept of a one-dimensional random cell is introduced as a set of multidimensional random matrices (in accordance with the "cell array" data type in the Matlab programming system), and the definition of the joint multidimensional-matrix Gaussian distribution is given (the definition of the Gaussian one-dimensional random cell). This required the introduction of the concepts of the one-dimensional cell of the mathematical expectation and the two-dimensional cell of the variance-covariance of the one-dimensional random cell. The integral connected with the joint Gaussian probability density function of the multidimensional matrices is calculated. Two formulae of the total probability and the Bayes formula for joint multidimensional-matrix Gaussian distributions are given. Using these results, the Bayesian estimations of the unknown coefficients of the multidimensional-matrix polynomial regression function are obtained. The algorithm for calculating the Bayesian estimations is implemented as a computer program. The results presented in the paper have theoretical and algorithmic generality.
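The flavor of such Bayes formulas can be conveyed by the scalar special case: a conjugate Gaussian update of an unknown mean. This is a one-dimensional sketch only; the paper's contribution is the multidimensional-matrix generalization:

```python
def gaussian_posterior(m0, v0, v, xs):
    """Posterior N(mn, vn) for an unknown mean theta with prior N(m0, v0)
    and observations xs drawn i.i.d. from N(theta, v): precisions add,
    and the posterior mean is the precision-weighted average."""
    n = len(xs)
    vn = 1.0 / (1.0 / v0 + n / v)
    mn = vn * (m0 / v0 + sum(xs) / v)
    return mn, vn

print(gaussian_posterior(0.0, 1.0, 1.0, [2.0]))  # (1.0, 0.5)
```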
30

Bonnet, Luc, Jean-Luc Akian, Éric Savin, and T. Sullivan. "Adaptive Reconstruction of Imperfectly Observed Monotone Functions, with Applications to Uncertainty Quantification." Algorithms 13, no. 8 (August 13, 2020): 196. http://dx.doi.org/10.3390/a13080196.

Abstract:
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.
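For context, the classical isotonic-regression problem that the abstract contrasts with its optimisation setting is solved by the pool-adjacent-violators algorithm (PAVA). A minimal unweighted version, not the paper's adaptive algorithm, looks like this:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares projection of y onto the
    cone of nondecreasing sequences. Maintains a stack of blocks
    (mean, weight), merging whenever monotonicity is violated."""
    means, weights = [], []
    for v in y:
        m, w = float(v), 1
        # Merge backwards while the previous block's mean exceeds ours.
        while means and means[-1] > m:
            m = (m * w + means[-1] * weights[-1]) / (w + weights[-1])
            w += weights[-1]
            means.pop()
            weights.pop()
        means.append(m)
        weights.append(w)
    out = []
    for m, w in zip(means, weights):
        out.extend([m] * w)
    return out

print(pava([1, 3, 2, 4]))  # [1.0, 2.5, 2.5, 4.0]
```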
31

Ledoux, James, and Gerardo Rubino. "Simple formulae for counting processes in reliability models." Advances in Applied Probability 29, no. 4 (December 1997): 1018–38. http://dx.doi.org/10.2307/1427852.

Abstract:
Dependability evaluation is a basic component in the assessment of the quality of repairable systems. We develop a model taking simultaneously into account the occurrence of failures and repairs, together with the observation of user-defined success events. The model is built from a Markovian description of the behavior of the system. We obtain the distribution function of the joint number of observed failures and of delivered services on a fixed mission period of the system. In particular, the marginal distribution of the number of failures can be directly related to the distribution of the Markovian arrival process extensively used in queueing theory. We give both the analytical expressions of the considered distributions and the algorithmic solutions for their evaluation. An asymptotic analysis is also provided.
32

FOX, JACOB, LÁSZLÓ MIKLÓS LOVÁSZ, and YUFEI ZHAO. "On Regularity Lemmas and their Algorithmic Applications." Combinatorics, Probability and Computing 26, no. 4 (March 28, 2017): 481–505. http://dx.doi.org/10.1017/s0963548317000049.

Abstract:
Szemerédi's regularity lemma and its variants are some of the most powerful tools in combinatorics. In this paper, we establish several results around the regularity lemma. First, we prove that whether or not we include the condition that the desired vertex partition in the regularity lemma is equitable has a minimal effect on the number of parts of the partition. Second, we use an algorithmic version of the (weak) Frieze–Kannan regularity lemma to give a substantially faster deterministic approximation algorithm for counting subgraphs in a graph. Previously, only an exponential dependence of the running time on the error parameter was known, and we improve it to a polynomial dependence. Third, we revisit the problem of finding an algorithmic regularity lemma, giving approximation algorithms for several co-NP-complete problems. We show how to use the weak Frieze–Kannan regularity lemma to approximate the regularity of a pair of vertex subsets. We also show how to quickly find, for each ε′ > ε, an ε′-regular partition with k parts if there exists an ε-regular partition with k parts. Finally, we give a simple proof of the permutation regularity lemma which improves the tower-type bound on the number of parts in the previous proofs to a single exponential bound.
33

Neuts, Marcel F. "Generalizations of the Pollaczek-Khinchin integral equation in the theory of queues." Advances in Applied Probability 18, no. 4 (December 1986): 952–90. http://dx.doi.org/10.2307/1427258.

Abstract:
A classical result in queueing theory states that in the stable M/G/1 queue, the stationary distribution W(x) of the waiting time of an arriving customer or of the virtual waiting time satisfies a linear Volterra integral equation of the second kind, of convolution type. For many variants of the M/G/1 queue, there are corresponding integral equations, which in most cases differ from the Pollaczek–Khinchin equation only in the form of the inhomogeneous term. This leads to interesting factorizations of the waiting-time distribution and to substantial algorithmic simplifications. In a number of priority queues, the waiting-time distributions satisfy Volterra integral equations whose kernel is a functional of the busy-period distribution in related M/G/1 queues. In other models, such as the M/G/1 queue with Bernoulli feedback or with limited admissions of customers per service, there is a more basic integral equation of Volterra type, which yields a probability distribution in terms of which the waiting-time distributions are conveniently expressed. For several complex queueing models with an embedded Markov renewal process of M/G/1 type, one obtains matrix Volterra integral equations for the waiting-time distributions or for related vectors of mass functions. Such models include the M/SM/1 and the N/G/1 queues, as well as the M/G/1 queue with some forms of bulk service. When the service-time distributions are of phase type, the numerical computation of waiting-time distributions may commonly be reduced to the solution of systems of linear differential equations with constant coefficients.
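The Volterra structure described above can be made concrete for the plain M/M/1 queue, where the Pollaczek–Khinchin equation has a known closed-form solution to check against. The sketch below (our illustration, not the paper's algorithm; all names are ours) discretises the equation with the trapezoidal rule, assuming exponential service so the equilibrium service density is again exponential:

```python
import math

def mm1_waiting_cdf(lam, mu, x_max=4.0, n=400):
    """Solve the Pollaczek-Khinchin Volterra equation
        W(x) = (1 - rho) + rho * integral_0^x W(x - u) he(u) du
    on a uniform grid by the trapezoidal rule. For M/M/1 the equilibrium
    service density is he(u) = mu * exp(-mu * u)."""
    rho = lam / mu
    h = x_max / n
    he = [mu * math.exp(-mu * i * h) for i in range(n + 1)]
    w = [1.0 - rho]                      # W(0) = 1 - rho
    for i in range(1, n + 1):
        s = 0.5 * he[i] * w[0]           # trapezoid end point u = x_i
        for j in range(1, i):
            s += he[j] * w[i - j]        # interior quadrature points
        # the u = 0 end point involves W(x_i) itself; solve for it
        w.append(((1.0 - rho) + rho * h * s) / (1.0 - 0.5 * rho * h * he[0]))
    return h, w

h, w = mm1_waiting_cdf(lam=0.5, mu=1.0)
exact = lambda x: 1.0 - 0.5 * math.exp(-0.5 * x)  # known M/M/1 waiting-time cdf
```

With rho = 0.5 the numerical solution tracks the closed form 1 - rho·e^{-(mu-lam)x} to within the O(h²) quadrature error, illustrating why the Volterra form is algorithmically convenient.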
34

Pei, Yan. "From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization." Scientific World Journal 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/704587.

Abstract:
We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm using the logistic map with two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the underlying chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework. There is a paired-comparison-based mechanism behind the CE search scheme in nature. A simulation experiment with a pseudo-IEC user is conducted to evaluate our proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.
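The core idea, replacing the uniform random number generator of an evolutionary algorithm with a chaotic map, can be sketched in a few lines. This is our own toy simplification under stated assumptions (greedy acceptance, logistic map at r = 4), not the paper's CE algorithm:

```python
import random

def logistic_map(x, r=4.0):
    """One iterate of the logistic map; r = 4 gives fully chaotic
    dynamics on the unit interval."""
    return r * x * (1.0 - x)

def chaotic_evolution(fitness, dim, iters=300, seed=0.123):
    """Toy chaotic-evolution-style search: a logistic-map sequence,
    rather than a uniform RNG, drives the perturbation of the current
    best solution (minimisation)."""
    rng = random.Random(1)
    best = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best_f = fitness(best)
    z = seed
    for _ in range(iters):
        cand = []
        for xi in best:
            z = logistic_map(z)            # advance the chaotic driver
            cand.append(xi + (z - 0.5))    # perturbation in [-0.5, 0.5]
        f = fitness(cand)
        if f < best_f:                     # greedy acceptance
            best, best_f = cand, f
    return best, best_f

# Minimise the sphere function; the optimum is 0 at the origin.
sol, val = chaotic_evolution(lambda v: sum(x * x for x in v), dim=3)
```

Swapping `logistic_map` for a tent, Gaussian, or Hénon iterate changes the perturbation distribution, which is exactly the factor the abstract identifies as driving optimization performance.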
35

Hassan, Ahmad Kamal, Ubaid M. Al-Saggaf, Muhammad Moinuddin, and Mohamed K. Alshoubaki. "Statistical Method Based Waveform Optimization in Collocated MIMO Radar Systems." Mathematics 11, no. 3 (January 29, 2023): 680. http://dx.doi.org/10.3390/math11030680.

Abstract:
Multiple-input multiple-output (MIMO) radar has acquired considerable attention as it offers an additional degree of freedom which results in performance gains when contrasted with the regular single antenna element radar system. Waveform optimization in MIMO radar is essential as it can offer tremendous improvements in target detection which are quantified in terms of reductions in the symbol error rate and improvements in target detection probability. In this work, we foster a strategy for the optimization of transmitter and receiver waveform in a collocated MIMO radar by only considering the second order statistics, thus relaxing the information of the instantaneous target states. Our contributions are primarily two-fold. First, we find a closed-form expression of the outage probability of an unknown target under clutter environment. For this prospect, we model the signal-to-interference-plus-noise ratio in a canonical quadratic structure, and then utilize the modern residue theory approach to characterize the distribution function. Secondly, we propose constrained and unconstrained optimization problems for the reduction in outage probability using algorithmic techniques such as interior-point, sequential-quadratic programming, and the active-set method for the optimization of the transmitter and receiver waveform. We also provide simulated re-enactments to validate our hypothetical deductions.
36

VAN DEN HEUVEL, JAN. "Algorithmic Aspects of a Chip-Firing Game." Combinatorics, Probability and Computing 10, no. 6 (November 2001): 505–29. http://dx.doi.org/10.1017/s0963548301004886.

Abstract:
Algorithmic aspects of a chip-firing game on a graph introduced by Biggs are studied. This variant of the chip-firing game, called the dollar game, has the properties that every starting configuration leads to a so-called critical configuration. The set of critical configurations has many interesting properties. In this paper it is proved that the number of steps needed to reach a critical configuration is polynomial in the number of edges of the graph and the number of chips in the starting configuration, but not necessarily in the size of the input. An alternative algorithm is also described and analysed.
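The firing rule of the dollar game is simple enough to state as code. A minimal sketch (our illustration on a triangle graph; the bank vertex and function names are ours, following the usual convention that ordinary vertices fire when they hold at least their degree in chips):

```python
def fire(graph, chips, v):
    """Fire vertex v: it sends one chip along each incident edge."""
    chips = dict(chips)
    chips[v] -= len(graph[v])
    for w in graph[v]:
        chips[w] += 1
    return chips

def stabilise(graph, chips, bank):
    """Fire any non-bank vertex holding at least deg(v) chips until no
    such vertex remains; returns the final configuration and the number
    of firings (the step count whose growth the paper bounds)."""
    chips = dict(chips)
    steps = 0
    while True:
        ready = [v for v in graph if v != bank and chips[v] >= len(graph[v])]
        if not ready:
            return chips, steps
        chips = fire(graph, chips, ready[0])
        steps += 1

# Triangle graph; vertex 'a' plays the role of the bank in the dollar game.
G = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']}
final, steps = stabilise(G, {'a': 0, 'b': 3, 'c': 0}, bank='a')
```

Note that the total number of chips is conserved by every firing; the paper's result is that `steps` is polynomial in the number of edges and chips, though not necessarily in the input size.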
37

AXON, LOGAN M. "MARTIN-LÖF RANDOMNESS IN SPACES OF CLOSED SETS." Journal of Symbolic Logic 80, no. 2 (April 22, 2015): 359–83. http://dx.doi.org/10.1017/jsl.2014.76.

Abstract:
Algorithmic randomness was originally defined for Cantor space with the fair-coin measure. Recent work has examined algorithmic randomness in new contexts, in particular closed subsets of 2ω ([2] and [8]). In this paper we use the probability theory of closed set-valued random variables (RACS) to extend the definition of Martin-Löf randomness to spaces of closed subsets of locally compact, Hausdorff, second countable topological spaces. This allows for the study of Martin-Löf randomness in many new spaces, but also gives a new perspective on Martin-Löf randomness for 2ω and on the algorithmically random closed sets of [2] and [8]. The first half of this paper is devoted to developing the machinery of Martin-Löf randomness for general spaces of closed sets. We then prove some general results and move on to show how the algorithmically random closed sets of [2] and [8] fit into this new framework.
38

Taranto, Aldo, and Shahjahan Khan. "Bi-directional grid absorption barrier constrained stochastic processes with applications in finance & investment." Risk Governance and Control: Financial Markets and Institutions 10, no. 3 (2020): 20–33. http://dx.doi.org/10.22495/rgcv10i3p2.

Abstract:
Whilst the gambler's ruin problem (GRP) is based on martingales and established probability theory proves that the GRP is a doomed strategy, this research details how the semimartingale framework is required for the grid trading problem (GTP) of financial markets, especially foreign exchange (FX) markets. As banks and financial institutions are required to hedge their FX exposure, the GTP can help provide a framework for greater automation of the hedging process and help forecast which hedge scenarios to avoid. Two theorems are adapted from the GRP to the GTP and prove that grid trading, whilst still subject to the risk of ruin, has the ability to generate significantly more profitable returns in the short term. This is also supported by extensive simulation and distributional analysis. We introduce two absorption barriers, one at zero balance (ruin) and one at a specified profit target. This extends the traditional GRP, and the GTP further, by deriving both the probability of ruin and the expected number of steps to reach a barrier, to better demonstrate that the GTP takes longer to reach ruin than the GRP. These statistical results have applications in finance, such as multivariate dynamic hedging (Noorian, Flower, & Leong, 2016), portfolio risk optimization, and algorithmic loss recovery.
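For the classical fair-game GRP with the two absorption barriers described above, both quantities have textbook closed forms: starting at balance i with target n, the ruin probability is 1 - i/n and the expected absorption time is i(n - i). A hedged sketch (standard results, not the paper's semimartingale extension) checked against Monte Carlo:

```python
import random

def ruin_probability(i, n):
    """Exact ruin probability for a fair game with absorbing barriers
    at 0 (ruin) and n (profit target), starting from balance i."""
    return 1.0 - i / n

def expected_steps(i, n):
    """Exact expected number of steps until either barrier is hit
    (fair game): i * (n - i)."""
    return i * (n - i)

def simulate(i, n, trials=20_000, seed=42):
    """Monte Carlo estimate of the ruin frequency and the mean number
    of steps to absorption."""
    rng = random.Random(seed)
    ruins = 0
    total_steps = 0
    for _ in range(trials):
        x, steps = i, 0
        while 0 < x < n:
            x += 1 if rng.random() < 0.5 else -1
            steps += 1
        ruins += (x == 0)
        total_steps += steps
    return ruins / trials, total_steps / trials

est_ruin, est_steps = simulate(i=3, n=10)
```

With i = 3 and n = 10 the exact values are 0.7 and 21, and the simulated frequencies agree within Monte Carlo error, the same style of check the paper uses at scale for the GTP.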
39

Regier, Michael D., and Erica E. M. Moodie. "The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data." International Journal of Biostatistics 12, no. 1 (May 1, 2016): 65–77. http://dx.doi.org/10.1515/ijb-2015-0016.

Abstract:
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementations of the EM algorithm have a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
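To fix ideas about the E-step/M-step alternation that the proposal partitions, here is the textbook two-coin EM example (our illustration of plain EM, not the orthogonally partitioned extension): each (heads, tails) pair comes from one of two coins of unknown bias, and the coin identity is the missing data.

```python
def em_two_coins(datasets, theta_a, theta_b, iters=50):
    """Minimal EM for the classic two-coin problem."""
    for _ in range(iters):
        # E-step: posterior responsibility of coin A for each pair,
        # accumulated into expected head/tail counts per coin.
        a_h = a_t = b_h = b_t = 0.0
        for h, t in datasets:
            like_a = (theta_a ** h) * ((1.0 - theta_a) ** t)
            like_b = (theta_b ** h) * ((1.0 - theta_b) ** t)
            r = like_a / (like_a + like_b)
            a_h += r * h; a_t += r * t
            b_h += (1.0 - r) * h; b_t += (1.0 - r) * t
        # M-step: re-estimate each bias from its expected counts.
        theta_a = a_h / (a_h + a_t)
        theta_b = b_h / (b_h + b_t)
    return theta_a, theta_b

# Five batches of 10 tosses each; three look like a high-bias coin.
data = [(9, 1), (8, 2), (4, 6), (9, 1), (3, 7)]
theta_a, theta_b = em_two_coins(data, 0.6, 0.5)
```

Starting from (0.6, 0.5), the estimates separate toward roughly (0.86, 0.35), matching the empirical head rates of the two groups of batches.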
40

Palhazi Cuervo, Daniel, Peter Goos, and Kenneth Sörensen. "An algorithmic framework for generating optimal two-stratum experimental designs." Computational Statistics & Data Analysis 115 (November 2017): 224–49. http://dx.doi.org/10.1016/j.csda.2017.06.006.

41

Magomedova, M. R., Z. A. Kurbanova, and B. A. Shangereeva. "COMPUTER SIMULATION OF DETERMINING SILTATION VOLUMES OF WATER RESERVOIR STORAGE ON THE AKSAY RIVER." Herald of Dagestan State Technical University. Technical Sciences 46, no. 4 (January 2, 2020): 102–12. http://dx.doi.org/10.21822/2073-6185-2019-46-4-102-112.

Abstract:
Objectives. The development of a mathematical model for the increased turbidity zones of the Aksay river in order to determine the siltation volumes of the Aksay water reservoir storage. Method. The mathematical model is developed using probability theory and the theory of random process outliers. The model takes into account the normal distribution of the horizontal and vertical components of the instantaneous flow velocities, as well as the Rayleigh law of the distribution of their maxima. The proposed model is used to calculate the “turbidity tail” of the Aksay river. Result. Due to the multifactorial nature of the continuously associated processes of siltation and deposition of suspended and bottom sediments in the upper ponds of the Aksay reservoir storage hydrological system, a mathematical model of the reservoir accretion process is developed. This model ensures reliable accretion forecasting, with spatial and temporal correlation with the siltation process model, which is feasible in practice on the basis of computer simulation. Conclusion. The developed model, which is based on a probabilistic approach and the theory of random process outliers, reflects the overall process of sediment transport in open channels. The development and execution of the simulation programmes is carried out using Microsoft Developer Studio (MDS) and the Fortran Power Station algorithmic language, which comprises not only a programming system but also a set of tools for supporting large software projects integrated into MDS.
42

Zasiekin, Serhii. "Approaches to Translation in the Context of Theory of Speech Activity." PSYCHOLINGUISTICS 24, no. 2 (October 3, 2018): 63–77. http://dx.doi.org/10.31470/2309-1797-2018-24-2-63-77.

Abstract:
Over the past decades there has been a significant increase in studies exploring the cognitive foundations of translation, reflected in a considerable amount of literature published on the topic. However, it is important to bear in mind that many of the ideas in the cognitive literature are rooted mainly in psycholinguistic approaches to translation. For instance, many scholarly works on translation published in the former Soviet Union in the 1960s-1970s emphasise the role of the translator's thinking and speech processes. The emergence of the 'theory of speech activity', the Soviet version of Western psycholinguistics, stimulated the interest of linguists and psychologists who considered translation and interpreting, and their procedural aspects, worthy of scholarly attention. A. Leontyev (1969), one of the founders of the above-mentioned 'theory', paid special attention to the translator's mental operations and the probabilistic programming of target language utterance(s). Thus far, a number of recent cognitive translation studies have confirmed the effectiveness of earlier psycholinguistic models of translation designed within the framework of the theory of speech activity. The goal of the study is a theoretical review of psycholinguistic approaches to interpreting and translation discussed in the works of scholars who were part of the Soviet theory of speech activity. The main objective is to reveal the translator's status and his/her thinking and speech operations as psycholinguistic units in the approaches under review. Together, the psycholinguistic studies reviewed in the paper support the notion that the translator relies both on algorithmic actions and on heuristic solutions, the latter based on his/her background and guided by a probability-thinking mechanism. This integrated approach proves useful in deepening our understanding of the translator's activity.
43

Tuna, Salih, and Mahesan Niranjan. "Reducing the algorithmic variability in transcriptome-based inference." Bioinformatics 26, no. 9 (March 8, 2010): 1185–91. http://dx.doi.org/10.1093/bioinformatics/btq104.

44

Maciąg, Rafał. "Zaawansowane procedury NLP jako przesłanka rekonstrukcji idei wiedzy." Zarządzanie w Kulturze 23, no. 1 (May 30, 2022): 37–53. http://dx.doi.org/10.4467/20843976zk.22.003.15869.

Abstract:
Advanced NLP Procedures as Premises for the Reconstruction of the Idea of Knowledge. The article presents the current state of development of Natural Language Processing (NLP) technology, in particular the GPT-3 language model, and discusses its consequences for understanding the phenomenon of knowledge. NLP technology has undergone remarkable development recently. The GPT-3 language model has reached a level of advancement that allows it to generate texts as answers to general questions, as summaries of a presented text, etc., at a level surpassing that of analogous human texts. These algorithmic operations lead to the determination of the probability distribution of a text's components. Texts generated by such a model should be considered autonomous texts that use the immanent, implicit knowledge embedded in language. This conclusion raises questions about the status of such knowledge. Help in the analysis is also provided by the theory of discourse, as well as the theory of discursive space based on it, which proposes interpreting knowledge as a trajectory of discourses in a dynamical space. Recognizing that knowledge may also be autonomous, and in particular need not be at the exclusive disposal of humans, leads to the question of the status of artificial cognitive agents, such as the GPT-3 language model.
45

Zenil, Hector. "Towards Demystifying Shannon Entropy, Lossless Compression and Approaches to Statistical Machine Learning." Proceedings 47, no. 1 (June 19, 2020): 24. http://dx.doi.org/10.3390/proceedings2020047024.

Abstract:
Current approaches in science, including most machine and deep learning methods, rely heavily at their core on traditional statistics and information theory, but these theories are known to fail to capture certain fundamental properties of data and the world related to recursive and computable phenomena, and they are ill-equipped to deal with high-level functions such as inference, abstraction, modelling and causation, being fragile and easily deceived. How is it that some of these approaches have (apparently) been successfully applied? We explore recent attempts to adopt more powerful, albeit more difficult, methods based on the theories of computability and algorithmic probability, which may eventually display and grasp these higher-level elements of human intelligence. We propose that a fundamental question in science regarding how to find shortcuts for faster adoption of proven mathematical tools can be answered by shortening the adoption cycle and leaving behind old practices in favour of new ones. This is the case for randomness, where science continues to cling to purely statistical tools in disentangling randomness from meaning, and is stuck in a self-deluding pattern of still privileging regression and correlation despite the fact that mathematics has made important advances in better characterising randomness that have yet to be incorporated into scientific theory and practice.
46

STRZAŁKA, DOMINIK, and FRANCISZEK GRABOWSKI. "TOWARDS POSSIBLE NON-EXTENSIVE THERMODYNAMICS OF ALGORITHMIC PROCESSING — STATISTICAL MECHANICS OF INSERTION SORT ALGORITHM." International Journal of Modern Physics C 19, no. 09 (September 2008): 1443–58. http://dx.doi.org/10.1142/s0129183108013011.

Abstract:
Tsallis entropy, introduced in 1988, is considered to have opened new possibilities for constructing a generalized thermodynamical basis for statistical physics, extending classical Boltzmann–Gibbs thermodynamics to nonequilibrium states. During the last two decades this q-generalized theory has been successfully applied to a considerable number of physically interesting complex phenomena. The authors would like to present a new view on the problem of analysing the computational complexity of algorithms, using the example of a possible thermodynamical basis of the sorting process and its dynamical behavior. The classical approach to analysing the amount of resources needed for algorithmic computation is based on the assumption that the contact between the algorithm and the input data stream is a simple system, because only the worst-case time complexity is considered, in order to minimize the dependency on specific instances. Meanwhile, the article shows that this process can be governed by long-range dependencies with a thermodynamical basis expressed by the specific shapes of probability distributions. The classical approach does not allow one to describe all the properties of processes (especially the dynamical behavior of algorithms) that can appear during algorithmic processing on a computer, even if one takes into account the average-case analysis of computational complexity. The importance of this problem is still neglected, especially if one realizes two important things. First, nowadays computer systems also work in an interactive mode, and a proper thermodynamical basis is needed to better understand their possible behavior. Second, computers are, from a mathematical point of view, Turing machines, but in reality they have physical implementations that need energy for processing, so the problem of entropy production appears. That is why a thermodynamical analysis of the possible behavior of the simple insertion sort algorithm is given here.
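The input-dependence that worst-case analysis discards is easy to expose empirically. A minimal sketch (our illustration, not the paper's statistical-mechanics treatment): count element shifts performed by insertion sort on sorted, reversed, and random inputs, the raw quantity whose distribution a thermodynamic analysis would study.

```python
import random

def insertion_sort_ops(a):
    """Sort a copy of `a` with insertion sort and count element shifts,
    i.e. the number of inversions removed."""
    a = list(a)
    moves = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift the larger element one slot right
            moves += 1
            j -= 1
        a[j + 1] = key
    return a, moves

n = 200
_, best_ops = insertion_sort_ops(list(range(n)))              # sorted input
worst, worst_ops = insertion_sort_ops(list(range(n, 0, -1)))  # reversed input
rng = random.Random(0)
avg_ops = sum(insertion_sort_ops(rng.sample(range(n), n))[1]
              for _ in range(50)) / 50                        # ~n(n-1)/4
```

The best case costs 0 shifts, the worst case n(n-1)/2, and random permutations cluster around n(n-1)/4: the shape of that distribution over inputs, rather than the single worst-case number, is what the abstract's thermodynamical view targets.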
47

Chakravarthy, S., and Attahiru Sule Alfa. "A finite capacity queue with Markovian arrivals and two servers with group services." Journal of Applied Mathematics and Stochastic Analysis 7, no. 2 (January 1, 1994): 161–78. http://dx.doi.org/10.1155/s1048953394000171.

Abstract:
In this paper we consider a finite capacity queuing system in which arrivals are governed by a Markovian arrival process. The system is attended by two exponential servers, who offer services in groups of varying sizes. The service rates may depend on the number of customers in service. Using Markov theory, we study this finite capacity queuing model in detail by obtaining numerically stable expressions for (a) the steady-state queue length densities at arrivals and at arbitrary time points; (b) the Laplace-Stieltjes transform of the stationary waiting time distribution of an admitted customer at points of arrivals. The stationary waiting time distribution is shown to be of phase type when the interarrival times are of phase type. Efficient algorithmic procedures for computing the steady-state queue length densities and other system performance measures are discussed. A conjecture on the nature of the mean waiting time is proposed. Some illustrative numerical examples are presented.
48

Neuts, Marcel F. "Generalizations of the Pollaczek-Khinchin integral equation in the theory of queues." Advances in Applied Probability 18, no. 04 (December 1986): 952–90. http://dx.doi.org/10.1017/s0001867800016232.

Abstract:
A classical result in queueing theory states that in the stable M/G/1 queue, the stationary distribution W(x) of the waiting time of an arriving customer or of the virtual waiting time satisfies a linear Volterra integral equation of the second kind, of convolution type. For many variants of the M/G/1 queue, there are corresponding integral equations, which in most cases differ from the Pollaczek–Khinchin equation only in the form of the inhomogeneous term. This leads to interesting factorizations of the waiting-time distribution and to substantial algorithmic simplifications. In a number of priority queues, the waiting-time distributions satisfy Volterra integral equations whose kernel is a functional of the busy-period distribution in related M/G/1 queues. In other models, such as the M/G/1 queue with Bernoulli feedback or with limited admissions of customers per service, there is a more basic integral equation of Volterra type, which yields a probability distribution in terms of which the waiting-time distributions are conveniently expressed. For several complex queueing models with an embedded Markov renewal process of M/G/1 type, one obtains matrix Volterra integral equations for the waiting-time distributions or for related vectors of mass functions. Such models include the M/SM/1 and the N/G/1 queues, as well as the M/G/1 queue with some forms of bulk service. When the service-time distributions are of phase type, the numerical computation of waiting-time distributions may commonly be reduced to the solution of systems of linear differential equations with constant coefficients.
49

Chalkis, Apostolos, Vissarion Fisikopoulos, Panagiotis Repouskos, and Elias Tsigaridas. "Sampling the feasible sets of SDPs and volume approximation." ACM Communications in Computer Algebra 54, no. 3 (September 2020): 114–18. http://dx.doi.org/10.1145/3457341.3457349.

Abstract:
We present algorithmic, complexity, and implementation results on the problem of sampling points in the interior and the boundary of a spectrahedron, that is, the feasible region of a semidefinite program. Our main tool is random walks. We define and analyze a set of primitive geometric operations that exploit the algebraic properties of spectrahedra and the polynomial eigenvalue problem, and lead to the realization of a broad collection of efficient random walks. We demonstrate random walks that experimentally show faster mixing times than the ones used previously for sampling from spectrahedra in theory or applications, for example Hit and Run. Consequently, the variety of random walks allows us to sample from general probability distributions, for example the family of log-concave distributions, which arise frequently in numerous applications. We apply our tools to compute (i) the volume of a spectrahedron and (ii) the expectation of functions coming from robust optimal control. We provide a C++ open source implementation of our methods that scales efficiently up to dimension 200. We illustrate its efficiency on various data sets.
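The baseline walk mentioned above, Hit and Run, needs only a membership oracle for the spectrahedron. A hedged 2D sketch (our toy construction, not the paper's C++ implementation): with A1 = diag(1, -1) and A2 the antidiagonal matrix of ones, the set {(x, y) : I + x·A1 + y·A2 ⪰ 0} works out to be exactly the closed unit disk, so correctness is easy to check.

```python
import math
import random

def in_spectrahedron(x, y):
    """Membership oracle: is I + x*A1 + y*A2 positive semidefinite?
    For a symmetric 2x2 matrix this reduces to nonnegative diagonal
    entries and nonnegative determinant, i.e. x^2 + y^2 <= 1 here."""
    m11, m22, m12 = 1.0 + x, 1.0 - x, y
    return m11 >= 0.0 and m22 >= 0.0 and m11 * m22 - m12 * m12 >= 0.0

def hit_and_run(steps=2_000, seed=7):
    """Hit and Run: pick a uniformly random direction, bracket the chord
    through the current point by bisecting on the membership oracle,
    then jump to a uniform point on that chord."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0                      # interior starting point
    pts = []
    for _ in range(steps):
        th = rng.uniform(0.0, 2.0 * math.pi)
        dx, dy = math.cos(th), math.sin(th)
        def extent(sign):
            lo, hi = 0.0, 4.0            # 4.0 safely exceeds the diameter
            for _ in range(40):          # bisect to the boundary
                mid = 0.5 * (lo + hi)
                if in_spectrahedron(x + sign * mid * dx, y + sign * mid * dy):
                    lo = mid
                else:
                    hi = mid
            return lo
        t = rng.uniform(-extent(-1.0), extent(1.0))
        x, y = x + t * dx, y + t * dy
        pts.append((x, y))
    return pts

samples = hit_and_run()
```

The paper's contribution replaces this generic bisection with boundary oracles solved via polynomial eigenvalue problems, which is what makes the specialized walks mix faster; here the mean squared radius of the samples should approach 0.5, the uniform-disk value.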
50

Boldi, Paolo, Ian Leifer, and Hernán A. Makse. "Quasifibrations of graphs to find symmetries and reconstruct biological networks." Journal of Statistical Mechanics: Theory and Experiment 2022, no. 11 (November 1, 2022): 113401. http://dx.doi.org/10.1088/1742-5468/ac99d1.

Abstract:
A fibration of graphs is a homomorphism that is a local isomorphism of in-neighborhoods. Recently, it has been shown that graph fibrations are useful tools to uncover symmetries and cluster synchronization in biological networks ranging from gene, protein, and metabolic networks to the brain. However, the inherent incompleteness and disordered nature of biological data preclude the application of the definition of fibration as it stands. As a consequence, the currently known algorithms for identifying fibrations also fail in these domains. In this paper, we introduce and systematically develop the theory of quasifibrations, which attempts to capture more realistic patterns of quasi-symmetry in such networks. We provide an algorithmic solution to the problem of finding quasifibrations in networks where the existence of missing links and variability across samples preclude the identification of perfect fibration symmetries. We test our algorithm against other strategies for repairing missing links in incomplete networks, using real connectome data and synthetic networks. Quasifibrations can be applied to reconstruct any incomplete network structure characterized by underlying symmetrical and almost symmetrical clusters. The most direct application of our algorithms is that of helping researchers to find hidden symmetries in unknown (or partially unknown) networks, especially (but not exclusively) of a biological nature.