Scientific literature on the topic "Algorithmic probability theory"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other citation styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Algorithmic probability theory".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Algorithmic probability theory"

1

Helmuth, Tyler, Will Perkins, and Guus Regts. "Algorithmic Pirogov–Sinai theory". Probability Theory and Related Fields 176, no. 3-4 (June 26, 2019): 851–95. http://dx.doi.org/10.1007/s00440-019-00928-y.

2

Solomonoff, Ray J. "The Discovery of Algorithmic Probability". Journal of Computer and System Sciences 55, no. 1 (August 1997): 73–88. http://dx.doi.org/10.1006/jcss.1997.1500.

3

Sterkenburg, Tom F. "A Generalized Characterization of Algorithmic Probability". Theory of Computing Systems 61, no. 4 (May 13, 2017): 1337–52. http://dx.doi.org/10.1007/s00224-017-9774-9.

4

Levin, Leonid A. "Some theorems on the algorithmic approach to probability theory and information theory". Annals of Pure and Applied Logic 162, no. 3 (December 2010): 224–35. http://dx.doi.org/10.1016/j.apal.2010.09.007.

5

Zenil, Hector, Fernando Soler-Toscano, Jean-Paul Delahaye, and Nicolas Gauvrit. "Two-dimensional Kolmogorov complexity and an empirical validation of the Coding theorem method by compressibility". PeerJ Computer Science 1 (September 30, 2015): e23. http://dx.doi.org/10.7717/peerj-cs.23.

Abstract:
We propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimations to the complexity of the generated patterns. Experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that results are in agreement with the results obtained using lossless compression algorithms when both methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of Elementary Cellular Automata.
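
The frequency/complexity connection this abstract describes can be illustrated with a toy computation. The sketch below (my own illustration, not the authors' method: it substitutes the 256 elementary cellular automaton rules for the paper's 2-dimensional Turing machines) counts how often each output pattern occurs across all rules and converts frequency into a Coding-Theorem-style complexity estimate, K(x) ≈ -log2 m(x).

```python
from collections import Counter
from math import log2

def eca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton (periodic boundary)."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def output_frequencies(width=11, steps=10):
    """Run all 256 ECA rules from a single-1 seed and count the resulting rows."""
    counts = Counter()
    for rule in range(256):
        cells = [0] * width
        cells[width // 2] = 1
        for _ in range(steps):
            cells = eca_step(cells, rule)
        counts[tuple(cells)] += 1
    return counts

def complexity_estimate(pattern, counts):
    """Coding-theorem-style estimate: frequently produced outputs get low complexity."""
    return -log2(counts[pattern] / sum(counts.values()))
```

Patterns produced by many rules (such as the all-zero row, which every rule that immediately kills the seed converges to) receive a low complexity estimate, while patterns produced by a single rule receive the highest.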
6

Cover, Thomas M., Peter Gacs, and Robert M. Gray. "Kolmogorov's Contributions to Information Theory and Algorithmic Complexity". Annals of Probability 17, no. 3 (July 1989): 840–65. http://dx.doi.org/10.1214/aop/1176991250.

7

Downey, Rod, Denis R. Hirschfeldt, Joseph S. Miller, and André Nies. "Relativizing Chaitin's Halting Probability". Journal of Mathematical Logic 5, no. 2 (December 2005): 167–92. http://dx.doi.org/10.1142/s0219061305000468.

Abstract:
As a natural example of a 1-random real, Chaitin proposed the halting probability Ω of a universal prefix-free machine. We can relativize this example by considering a universal prefix-free oracle machine U. Let Ω_U^A be the halting probability of U^A; this gives a natural uniform way of producing an A-random real for every A ∈ 2^ω. It is this operator which is our primary object of study. We can draw an analogy between the jump operator from computability theory and this Omega operator. But unlike the jump, which is invariant (up to computable permutation) under the choice of an effective enumeration of the partial computable functions, Ω_U^A can be vastly different for different choices of U. Even for a fixed U, there are oracles A =* B such that Ω_U^A and Ω_U^B are 1-random relative to each other. We prove this and many other interesting properties of Omega operators. We investigate these operators from the perspective of analysis, computability theory, and of course, algorithmic randomness.
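
The halting probability itself is uncomputable, but its definition is easy to state in code. The sketch below is a toy illustration, not the paper's construction: the "prefix-free machine" is just an explicitly listed halting set, used to show the definition Ω = Σ_p 2^(-|p|) and the nondecreasing lower approximations that make Ω left-c.e.

```python
from fractions import Fraction

# Toy "machine": an explicitly listed, prefix-free set of halting programs.
HALTING = ["0", "10", "1100", "1101"]

def is_prefix_free(progs):
    """No program may be a proper prefix of another (Kraft's inequality then applies)."""
    return not any(p != q and q.startswith(p) for p in progs for q in progs)

def omega(progs):
    """Omega = sum over halting programs p of 2^(-|p|), computed exactly."""
    return sum(Fraction(1, 2 ** len(p)) for p in progs)

def omega_approximations(progs):
    """Nondecreasing lower bounds Omega_t, as produced by a dovetailed enumeration."""
    total, approx = Fraction(0), []
    for p in progs:
        total += Fraction(1, 2 ** len(p))
        approx.append(total)
    return approx
```

By Kraft's inequality the sum stays below 1 for any prefix-free set; for a universal machine the limit is 1-random, which is what relativization to an oracle A generalizes.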
8

Kozyrev, V. P., and S. V. Yushmanov. "Graph theory (algorithmic, algebraic, and metric problems)". Journal of Soviet Mathematics 39, no. 1 (October 1987): 2476–508. http://dx.doi.org/10.1007/bf01086177.

9

Asmussen, Søren, and Tomasz Rolski. "Computational methods in risk theory: A matrix-algorithmic approach". Insurance: Mathematics and Economics 10, no. 4 (January 1992): 259–74. http://dx.doi.org/10.1016/0167-6687(92)90058-j.

10

Hurwitz, Carol M. "On the homotopy theory of monoids". Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics 47, no. 2 (October 1989): 171–85. http://dx.doi.org/10.1017/s1446788700031621.

Abstract:
In this paper, it is shown that any connected, small category can be embedded in a semi-groupoid (a category in which there is at least one isomorphism between any two elements) in such a way that the embedding includes a homotopy equivalence of classifying spaces. This immediately gives a monoid whose classifying space is of the same homotopy type as that of the small category. This construction is essentially algorithmic, and furthermore, yields a finitely presented monoid whenever the small category is finitely presented. Some of these results are generalizations of ideas of McDuff.

Theses on the topic "Algorithmic probability theory"

1

Minozzo, Marco. "On some aspects of the prequential and algorithmic approaches to probability and statistical theory". Thesis, University College London (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362374.

2

Larsson, Frans. "Algorithmic trading surveillance: Identifying deviating behavior with unsupervised anomaly detection". Thesis, Uppsala universitet, Matematiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-389941.

Abstract:
The financial markets are no longer what they used to be and one reason for this is the breakthrough of algorithmic trading. Although this has had several positive effects, there have been recorded incidents where algorithms have been involved. It is therefore of interest to find effective methods to monitor algorithmic trading. The purpose of this thesis was therefore to contribute to this research area by investigating if machine learning can be used for detecting deviating behavior. Since the real world data set used in this study lacked labels, an unsupervised anomaly detection approach was chosen. Two models, isolation forest and deep denoising autoencoder, were selected and evaluated. Because the data set lacked labels, artificial anomalies were injected into the data set to make evaluation of the models possible. These synthetic anomalies were generated by two different approaches, one based on a downsampling strategy and one based on manual construction and modification of real data. The evaluation of the anomaly detection models shows that both isolation forest and deep denoising autoencoder outperform a trivial baseline model, and have the ability to detect deviating behavior. Furthermore, it is shown that a deep denoising autoencoder outperforms isolation forest, with respect to both area under the receiver operating characteristics curve and area under the precision-recall curve. A deep denoising autoencoder is therefore recommended for the purpose of algorithmic trading surveillance.
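
The isolation forest evaluated in this thesis isolates anomalies with random splits: outliers end up closer to the root of each random tree, so short average path lengths translate into high anomaly scores. Below is a minimal one-dimensional sketch of the idea (my own simplification; the thesis uses full library implementations, and its deep denoising autoencoder is not reproduced here):

```python
import math
import random

EULER = 0.5772156649

def c(n):
    """Average unsuccessful-search path length in a binary search tree over n points."""
    return 2 * (math.log(n - 1) + EULER) - 2 * (n - 1) / n if n > 1 else 0.0

def build_tree(data, depth, max_depth, rng):
    """Recursively isolate 1-D points with uniformly random split values."""
    if depth >= max_depth or len(data) <= 1 or min(data) == max(data):
        return ("leaf", len(data))
    split = rng.uniform(min(data), max(data))
    return ("node", split,
            build_tree([x for x in data if x < split], depth + 1, max_depth, rng),
            build_tree([x for x in data if x >= split], depth + 1, max_depth, rng))

def path_length(tree, x, depth=0):
    if tree[0] == "leaf":
        return depth + c(tree[1])   # adjust for the subtree that was not built
    _, split, left, right = tree
    return path_length(left if x < split else right, x, depth + 1)

def isolation_forest(data, n_trees=100, seed=0):
    rng = random.Random(seed)
    max_depth = math.ceil(math.log2(max(len(data), 2)))
    return [build_tree(data, 0, max_depth, rng) for _ in range(n_trees)]

def anomaly_score(x, forest, n):
    """Score in (0, 1); values near 1 mean the point is easily isolated (anomalous)."""
    avg = sum(path_length(t, x) for t in forest) / len(forest)
    return 2 ** (-avg / c(n))
```

A far outlier is separated by the very first random split in most trees, so its average path length, and hence its score, clearly separates it from the bulk of the data.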
3

Jurvelin Olsson, Mikael, and Andreas Hild. "Pairs Trading, Cryptocurrencies and Cointegration: A Performance Comparison of Pairs Trading Portfolios of Cryptocurrencies Formed Through the Augmented Dickey Fuller Test, Johansen’s Test and Phillips Perron’s Test". Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385484.

Abstract:
This thesis analyzes the performance and process of constructing portfolios of cryptocurrency pairs based on cointegrated relationships indicated by the Augmented Dickey-Fuller test, Johansen’s test and Phillips-Perron’s test. Pairs are tested for cointegration over a 3-month and a 6-month window and then traded over a trading window of the same length. The cryptocurrencies included in the study are the 14 cryptocurrencies with the highest market capitalization on April 24th 2019. One trading strategy has been applied to every portfolio following the 3-month and the 6-month methodology with thresholds at 1.75 and stop-losses at 4 standard deviations. The performance of each portfolio is compared with its corresponding buy and hold benchmark. All portfolios outperformed their buy and hold benchmark, with and without transaction costs set to 2%. Following the 3-month methodology was superior to the 6-month method, and the portfolios formed through Phillips-Perron’s test had the highest return for both window methods.
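
The trading rule described here (enter at 1.75 standard deviations, stop-loss at 4) can be sketched as a z-score state machine on the spread between the two legs of a pair. This is a simplified illustration using in-sample z-scores, not the thesis's exact backtest:

```python
def zscores(spread):
    """Standardize the spread series with its own mean and standard deviation."""
    m = sum(spread) / len(spread)
    sd = (sum((x - m) ** 2 for x in spread) / len(spread)) ** 0.5
    return [(x - m) / sd for x in spread]

def trade_signals(spread, entry=1.75, stop=4.0):
    """Position per step: -1 short the spread, +1 long the spread, 0 flat."""
    pos, out = 0, []
    for z in zscores(spread):
        if abs(z) > stop:
            pos = 0                   # stop-loss: treat the pair as broken down
        elif pos == 0:
            if z > entry:
                pos = -1              # spread rich relative to its mean: short it
            elif z < -entry:
                pos = 1               # spread cheap: long it
        elif pos == -1 and z <= 0:
            pos = 0                   # spread reverted to its mean: close the short
        elif pos == 1 and z >= 0:
            pos = 0                   # reverted: close the long
        out.append(pos)
    return out
```

The cointegration testing step (ADF, Johansen, Phillips-Perron) that selects which pairs to trade is deliberately left out of this sketch.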
4

Mozayyan Esfahani, Sina. "Algorithmic Trading and Prediction of Foreign Exchange Rates Based on the Option Expiration Effect". Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252297.

Abstract:
The equity option expiration effect is a well observed phenomenon and is explained by delta hedge rebalancing and pinning risk, which makes the strike price of an option work as a magnet for the underlying price. The FX option expiration effect has not previously been explored to the same extent. In this paper the FX option expiration effect is investigated with the aim of finding out whether it provides valuable information for predicting FX rate movements. New models are created based on the concept of the option relevance coefficient that determines which options are at higher risk of being in the money or out of the money at a specified future time and thus have an attraction effect. An algorithmic trading strategy is created to evaluate these models. The new models based on the FX option expiration effect strongly outperform time series models used as benchmarks. The best results are obtained when the information about the FX option expiration effect is included as an exogenous variable in a GARCH-X model. However, despite promising and consistent results, more scientific research is required to be able to draw significant conclusions.
5

Barakat, Arian. "What makes an (audio)book popular?" Thesis, Linköpings universitet, Statistik och maskininlärning, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-152871.

Abstract:
Audiobook reading has traditionally been used for educational purposes but has in recent times grown into a popular alternative to the more traditional means of consuming literature. In order to differentiate themselves from other players in the market, but also provide their users enjoyable literature, several audiobook companies have lately directed their efforts on producing own content. Creating highly rated content is, however, no easy task and one reoccurring challenge is how to make a bestselling story. In an attempt to identify latent features shared by successful audiobooks and evaluate proposed methods for literary quantification, this thesis employs an array of frameworks from the field of Statistics, Machine Learning and Natural Language Processing on data and literature provided by Storytel - Sweden’s largest audiobook company. We analyze and identify important features from a collection of 3077 Swedish books concerning their promotional and literary success. By considering features from the aspects Metadata, Theme, Plot, Style and Readability, we found that popular books are typically published as a book series, cover 1-3 central topics, write about, e.g., daughter-mother relationships and human closeness but that they also hold, on average, a higher proportion of verbs and a lower degree of short words. Despite successfully identifying these, but also other factors, we recognized that none of our models predicted “bestseller” adequately and that future work may desire to study additional factors, employ other models or even use different metrics to define and measure popularity. From our evaluation of the literary quantification methods, namely topic modeling and narrative approximation, we found that these methods are, in general, suitable for Swedish texts but that they require further improvement and experimentation to be successfully deployed for Swedish literature. 
For topic modeling, we recognized that the sole use of nouns provided more interpretable topics and that the inclusion of character names tended to pollute the topics. We also identified and discussed the possible problem of word inflections when modeling topics for more morphologically complex languages, and that additional preprocessing treatments such as word lemmatization or post-training text normalization may improve the quality and interpretability of topics. For the narrative approximation, we discovered that the method currently suffers from three shortcomings: (1) unreliable sentence segmentation, (2) unsatisfactory dictionary-based sentiment analysis and (3) the possible loss of sentiment information induced by translations. Despite only examining a handful of literary work, we further found that books written initially in Swedish had narratives that were more cross-language consistent compared to books written in English and then translated to Swedish.
6

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of target distribution. The resulting chains can sometimes show ‘sticking’ behaviour where long series of proposed updates are rejected. Further the algorithms can be difficult to tune and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state it is possible to use new transition operators such as those based on slice-sampling algorithms within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier to tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables by making the approximation that generated observed variables are ‘close’ rather than exactly equal to observed data. 
Although making the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify and standard algorithms tend to perform poorly when conditioning on high dimensional observations. This often requires further approximation by reducing the observations to lower dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods such as slice sampling and Hamiltonian Monte Carlo (HMC) within an ABC framework. In some cases this can allow inference when conditioning on the full set of observed values when standard ABC methods require reduction to lower dimensional summaries for tractability. Further we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models which allows conditioning the generated observed variables to be arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within a HMC method.
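
One of the auxiliary-variable constructions this thesis builds on, slice sampling, introduces a height variable under the density and alternates between sampling it and sampling uniformly from the resulting "slice". Below is a minimal one-dimensional version with stepping-out and shrinkage, after Neal; it is an illustrative sketch of the auxiliary-variable idea, not the thesis's pseudo-marginal or ABC machinery:

```python
import math
import random

def slice_sample(logf, x0, n, w=1.0, seed=0):
    """1-D slice sampler with stepping-out and shrinkage for a log-density logf."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        logy = logf(x) + math.log(rng.random())  # auxiliary height under the density
        l = x - w * rng.random()                 # step out to bracket the slice
        r = l + w
        while logf(l) > logy:
            l -= w
        while logf(r) > logy:
            r += w
        while True:                              # shrink until a proposal lands inside
            xp = rng.uniform(l, r)
            if logf(xp) > logy:
                x = xp
                break
            if xp < x:
                l = xp
            else:
                r = xp
        out.append(x)
    return out
```

Because every proposal inside the slice is accepted, there is no rejection-rate tuning: only the initial bracket width w needs to be chosen, which is one reason slice-sampling updates are attractive inside the pseudo-marginal schemes discussed above.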
7

Asif, Muneeb. "Predicting the Success of Bank Telemarketing using various Classification Algorithms". Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-67994.

8

Hemsley, Ross. "Méthodes probabilistes pour l'analyse des algorithmes sur les tesselations aléatoires". Thesis, Nice, 2014. http://www.theses.fr/2014NICE4143/document.

Abstract:
In this thesis, we leverage the tools of probability theory and stochastic geometry to investigate the behavior of algorithms on geometric tessellations of space. This work is split between two main themes, the first of which is focused on the problem of navigating the Delaunay tessellation and its geometric dual, the Voronoi diagram. We explore the applications of this problem to point location using walking algorithms and the study of online routing in networks. We then propose and investigate two new algorithms which navigate the Delaunay triangulation, which we call Pivot Walk and Cone Walk. For Cone Walk, we provide a detailed average-case analysis, giving explicit bounds on the properties of the worst possible path taken by the algorithm on a random Delaunay triangulation in a bounded convex region. This analysis is a significant departure from similar results that have been obtained, due to the difficulty of dealing with the complex dependence structure of localized navigation algorithms on the Delaunay triangulation. The second part of this work is concerned with the study of extremal properties of random tessellations. In particular, we derive the first and last order-statistics for the inballs of the cells in a Poisson line tessellation. This result has implications for algorithms involving line tessellations, such as locality sensitive hashing. As a corollary, we show that the cells minimizing the area are triangles.
9

Jones, Bo. "A New Approximation Scheme for Monte Carlo Applications". Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1579.

Abstract:
Approximation algorithms employing Monte Carlo methods, across application domains, often require as a subroutine the estimation of the mean of a random variable with support on [0,1]. One wishes to estimate this mean to within a user-specified error, using as few samples from the simulated distribution as possible. In the case that the mean being estimated is small, one is then interested in controlling the relative error of the estimate. We introduce a new (epsilon, delta) relative error approximation scheme for [0,1] random variables and provide a comparison of this algorithm's performance to that of an existing approximation scheme, both establishing theoretical bounds on the expected number of samples required by the two algorithms and empirically comparing the samples used when the algorithms are employed for a particular application.
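
The kind of (epsilon, delta) relative-error scheme this thesis compares against can be illustrated by the classical stopping-rule estimator of Dagum, Karp, Luby, and Ross, shown here in simplified form as a point of reference (this is not necessarily the thesis's new scheme): keep drawing samples until the running sum crosses a threshold that depends only on epsilon and delta.

```python
import math
import random

def stopping_rule_estimate(sample, eps, delta, seed=0):
    """(eps, delta) relative-error estimate of the mean of a [0,1] random variable.

    Samples are drawn until the running sum crosses a threshold depending only on
    eps and delta; the estimate is threshold / number_of_samples. With probability
    at least 1 - delta the relative error is at most eps.
    """
    rng = random.Random(seed)
    threshold = 1 + (1 + eps) * 4 * (math.e - 2) * math.log(2 / delta) / eps ** 2
    total, n = 0.0, 0
    while total < threshold:
        total += sample(rng)
        n += 1
    return threshold / n
```

Because the stopping time adapts to the data, small means automatically trigger more samples, which is exactly the regime the abstract singles out as requiring relative (rather than absolute) error control.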
10

Dineff, Dimitris. "Clustering using k-means algorithm in multivariate dependent models with factor structure". Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-429528.

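
The k-means algorithm named in this thesis title is standardly realized by Lloyd's algorithm, which alternates an assignment step and a centroid-update step. A minimal one-dimensional sketch (the thesis's multivariate factor-structure setting is not reproduced here):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm for 1-D data: alternate assignment and mean-update steps."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)              # random initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                         # assign each point to its nearest center
            nearest = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[j]   # keep a center if its cluster empties
                   for j, c in enumerate(clusters)]
    return sorted(centers)
```

On well-separated data the iteration converges in a handful of steps regardless of the random initialization.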

Books on the topic "Algorithmic probability theory"

1

Neuts, Marcel F. Algorithmic probability: A collection of problems. London: Chapman & Hall, 1995.

2

Calude, Cristian. Information and Randomness: An Algorithmic Perspective. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994.

3

Universal artificial intelligence: Sequential decisions based on algorithmic probability. Berlin: Springer, 2005.

4

Nonlinear discrete optimization: An algorithmic theory. Zürich, Switzerland: European Mathematical Society Publishing House, 2010.

5

Habib, Michel. Probabilistic Methods for Algorithmic Discrete Mathematics. Berlin, Heidelberg : Springer Berlin Heidelberg, 1998.

6

Sarich, Marco, ed. Metastability and Markov state models in molecular dynamics: Modeling, analysis, algorithmic approaches. Providence, Rhode Island: American Mathematical Society, 2013.

7

Dubhashi, Devdatt. Concentration of measure for the analysis of randomized algorithms. Cambridge: Cambridge University Press, 2012.

8

Scheduling: Theory, Algorithms, and Systems. New York, NY: Springer New York, 2008.

9

Scheduling: Theory, Algorithms, and Systems. 4th ed. Boston, MA: Springer US, 2012.

10

Concentration of measure for the analysis of randomized algorithms. Cambridge: Cambridge University Press, 2009.


Book chapters on the topic "Algorithmic probability theory"

1

Shen, Alexander. "Algorithmic Information Theory and Foundations of Probability". In Lecture Notes in Computer Science, 26–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04420-5_4.

2

Freivalds, Rūsiņš. "Algorithmic Information Theory and Computational Complexity". In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, 142–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-44958-1_11.

3

Silvescu, Adrian, and Vasant Honavar. "Abstraction Super-Structuring Normal Forms: Towards a Theory of Structural Induction". In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, 339–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-44958-1_27.

4

Hutter, Marcus. "Towards a Universal Theory of Artificial Intelligence Based on Algorithmic Probability and Sequential Decisions". In Machine Learning: ECML 2001, 226–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44795-4_20.

5

Adelson-Velsky, G. M., V. L. Arlazarov, and M. V. Donskoy. "Algorithms for Games and Probability Theory". In Algorithms for Games, 144–74. New York, NY: Springer New York, 1988. http://dx.doi.org/10.1007/978-1-4612-3796-9_4.

6

Devroye, Luc. "Auxiliary Results from Probability Theory". In Lecture Notes on Bucket Algorithms, 127–35. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4899-3531-1_6.

7

Tempo, Roberto, Giuseppe Calafiore, and Fabrizio Dabbene. "Elements of Probability Theory". In Randomized Algorithms for Analysis and Control of Uncertain Systems, 7–12. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4610-0_2.

8

Baier, Christel, Florian Funke, Jakob Piribauer, and Robin Ziemek. "On probability-raising causality in Markov decision processes". In Lecture Notes in Computer Science, 40–60. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99253-8_3.

Abstract:
The purpose of this paper is to introduce a notion of causality in Markov decision processes based on the probability-raising principle and to analyze its algorithmic properties. The latter includes algorithms for checking cause-effect relationships and the existence of probability-raising causes for given effect scenarios. Inspired by concepts of statistical analysis, we study quality measures (recall, coverage ratio and f-score) for causes and develop algorithms for their computation. Finally, the computational complexity for finding optimal causes with respect to these measures is analyzed.
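
The probability-raising condition this chapter analyzes, that passing through a cause state raises the probability of the effect, can be checked on a small example. The sketch below treats a finite Markov chain (a special case of an MDP with a single action) and computes reachability probabilities by fixed-point iteration; the chain and its state names are illustrative assumptions, not taken from the paper:

```python
def reach_prob(trans, start, target, iters=200):
    """P(eventually reach target) in a Markov chain {state: [(successor, prob), ...]}."""
    p = {s: 0.0 for s in trans}
    p[target] = 1.0
    for _ in range(iters):              # fixed-point iteration on p(s) = sum q * p(t)
        for s in trans:
            if s != target:
                p[s] = sum(q * p[t] for t, q in trans[s])
    return p[start]

# A tiny illustrative chain: from s0 the candidate cause c is reached w.p. 0.3.
CHAIN = {
    "s0": [("c", 0.3), ("d", 0.7)],
    "c": [("e", 0.9), ("bad", 0.1)],    # after the cause, the effect e is likely
    "d": [("e", 0.2), ("bad", 0.8)],
    "e": [], "bad": [],                 # absorbing states
}

def is_probability_raising(trans, start, cause, effect):
    # In this acyclic example, P(effect | cause visited) equals the reachability
    # probability from the cause state, by the Markov property.
    return reach_prob(trans, cause, effect) > reach_prob(trans, start, effect)
```

Here P(reach e from s0) = 0.3 · 0.9 + 0.7 · 0.2 = 0.41, while conditioning on the cause gives 0.9, so the probability-raising condition holds for c.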
9

Dolev, Shlomi. "The Reality Game Theory Imposes (Short Summary)". In Algorithms, Probability, Networks, and Games, 25–26. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24024-4_2.

10

Cuzzolin, Fabio. "Probability transforms: The affine family". In Artificial Intelligence: Foundations, Theory, and Algorithms, 431–67. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63153-6_11.


Conference papers on the topic "Algorithmic probability theory"

1

Griffith, Tristan D., Vinod P. Gehlot, and Mark J. Balas. "On the Observability of Quantum Dynamical Systems". In ASME 2022 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/imece2022-88856.

Abstract:
Quantum statistical mechanics offers an increasingly relevant theory for a wide variety of probabilistic systems including thermodynamics, particle dynamics, and robotics. Quantum dynamical systems can be described by linear time invariant systems and so there is a need to build out traditional control theory for quantum statistical mechanics. The probability information in a quantum dynamical system evolves according to the quantum master equation, whose state is a matrix instead of a column vector. Accordingly, the traditional notion of a full rank observability matrix does not apply. In this work, we develop a proof of observability for quantum dynamical systems including a rank test and algorithmic considerations. A qubit example is provided for situations where the dynamical system is both observable and unobservable.
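For classical LTI systems, the rank test the abstract alludes to is the Kalman observability test; a pure-Python sketch is below (the paper's quantum master-equation version, where the state is a matrix, needs a vectorised analogue):

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rank(M, eps=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = max(range(r, len(M)), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < eps:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def is_observable(A, C):
    """Kalman rank test: (A, C) is observable iff the observability
    matrix [C; CA; ...; CA^(n-1)] has full column rank n."""
    n = len(A)
    rows, block = [], C
    for _ in range(n):
        rows += block
        block = mat_mul(block, A)
    return rank(rows) == n

A = [[1.0, 1.0], [0.0, 1.0]]           # discrete double integrator
print(is_observable(A, [[1.0, 0.0]]))  # True : a position sensor reveals velocity too
print(is_observable(A, [[0.0, 1.0]]))  # False: velocity alone never reveals position
```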
2

"HUMAN BODY TRACKING BASED ON PROBABILITY EVOLUTIONARY ALGORITHM." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2006. http://dx.doi.org/10.5220/0001362603030309.

3

Wenwei Niu, Zhe Xiao, Ming Huang, Jiang Yu, and Jingsong Hu. "An algorithm with high decoding success probability based on LT codes." In EM Theory (ISAPE - 2010). IEEE, 2010. http://dx.doi.org/10.1109/isape.2010.5696655.

4

Kanungo, T., M. Y. Jaisimha, J. Palmer, and R. M. Haralick. "Methodology for analyzing the performance of detection tasks." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1992. http://dx.doi.org/10.1364/oam.1992.fcc3.

Abstract: There has been increasing interest in quantitative performance evaluation of computer vision algorithms. The usual method is to vary parameters of the input images or parameters of the algorithms and then construct operating curves that relate the probability of misdetection and false alarm for each parameter setting. Such an analysis does not integrate the performance of the numerous operating curves. In this paper we outline a methodology for summarizing many operating curves into a few performance curves. This methodology is adapted from the human psychophysics literature and is general to any detection algorithm. We demonstrated the methodology by comparing the performance of two line detection algorithms. The task was to detect the presence or absence of a vertical edge in the middle of an image containing a grating mask and additive Gaussian noise. We compared the Burns line finder and an algorithm using the facet edge detector and the Hough transform. To determine each algorithm's performance curve, we estimated the contrast necessary for an unbiased 75% correct detection as a function of the orientation of the grating mask. These functions were further characterized in terms of the algorithm's orientation selectivity and overall performance. An algorithm with the best overall performance need not have the best orientation selectivity. These performance curves can be used to optimize the design of algorithms.
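The "75% correct" threshold estimation described in the abstract can be sketched as a simple interpolation on measured psychometric data (illustrative only; proper psychophysics work fits parametric psychometric functions):

```python
def threshold(levels, pct_correct, criterion=0.75):
    """Stimulus level at which proportion correct first crosses `criterion`,
    by linear interpolation between adjacent measurements."""
    pairs = sorted(zip(levels, pct_correct))
    for (c0, p0), (c1, p1) in zip(pairs, pairs[1:]):
        if p0 <= criterion <= p1:
            return c0 + (c1 - c0) * (criterion - p0) / (p1 - p0)
    return None  # criterion never reached

# Hypothetical contrasts and proportion-correct scores for one mask orientation.
contrasts = [0.1, 0.2, 0.4, 0.8]
correct   = [0.50, 0.60, 0.80, 0.95]
print(threshold(contrasts, correct))  # ~0.35, between 0.2 (60%) and 0.4 (80%)
```

Repeating this per mask orientation yields the performance curve the abstract describes.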
5

Alfano, Gianvincenzo, Marco Calautti, Sergio Greco, Francesco Parisi, and Irina Trubitsyna. "Explainable Acceptance in Probabilistic Abstract Argumentation: Complexity and Approximation." In 17th International Conference on Principles of Knowledge Representation and Reasoning {KR-2020}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/kr.2020/4.

Abstract: Recently there has been an increasing interest in probabilistic abstract argumentation, an extension of Dung's abstract argumentation framework with probability theory. In this setting, we address the problem of computing the probability that a given argument is accepted. This is carried out by introducing the concept of a probabilistic explanation for a given (probabilistic) extension. We show that the complexity of the problem is FP^#P-hard and propose polynomial approximation algorithms with bounded additive error for probabilistic argumentation frameworks where odd-length cycles are forbidden. This is quite surprising since, as we show, this kind of approximation algorithm does not exist for the related FP^#P-hard problem of computing the probability of the credulous acceptance of an argument, even for the special class of argumentation frameworks considered in the paper.
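A rough way to approximate the acceptance probability discussed above is Monte Carlo sampling over subgraphs (a sketch assuming independent argument probabilities and grounded semantics; the paper's bounded-additive-error algorithms are different):

```python
import random

def grounded_extension(args, attacks):
    """Grounded extension: least fixpoint of the characteristic function
    restricted to the arguments in `args`."""
    active = [(b, c) for b, c in attacks if b in args and c in args]
    ext = set()
    while True:
        defeated = {c for b, c in active if b in ext}
        new = {a for a in args
               if all(b in defeated for b, c in active if c == a)}
        if new == ext:
            return ext
        ext = new

def acceptance_prob(probs, attacks, query, trials=20000, seed=0):
    """Estimate P(query is in the grounded extension) by sampling which
    arguments are present, independently with their given probabilities."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        present = {a for a, p in probs.items() if rng.random() < p}
        if query in grounded_extension(present, attacks):
            hits += 1
    return hits / trials

# "b" attacks "a"; "a" is accepted exactly when "b" is absent, so P is about 0.5.
est = acceptance_prob({"a": 1.0, "b": 0.5}, [("b", "a")], "a")
```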
6

Smith, Curtis. "Representing Common-Cause Failures in the SAPHIRE Software." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-67130.

Abstract: Currently, the risk analysis software SAPHIRE has implemented a common-cause failure (CCF) module to represent standard CCF methods such as the alpha-factor and multiple Greek letter approaches. However, changes to SAPHIRE are required to support the Nuclear Regulatory Commission's 2007 "Risk Assessment Standardization Project" CCF analysis guidance for events assessment. This guidance provides an outline of how both the nominal CCF probabilities and conditional (e.g., after a redundant component has failed) CCF probabilities should be calculated. Based upon user-provided input, and extending the limitations in the current version of SAPHIRE, the CCF module calculations will be made consistent with the new guidance. The CCF modifications will involve changes to (1) the SAPHIRE graphical user interface, directing how end-users and modelers interface with PRA models, and (2) algorithmic changes as required. Included in the modifications will be the ability to treat CCF probability adjustments based upon failure types (e.g., independent versus dependent) and failure modes (e.g., failure-to-run versus failure-to-start). In general, SAPHIRE is being modified to allow the risk analyst to define a CCF object. This object is defined in terms of a basic event. For the CCF object, the analyst would need to specify a minimal set of information, including:
- The number of redundant components;
- The failure criteria (how many components have to fail);
- The CCF model type (alpha-factor, MGL, or beta-factor);
- The parameters (e.g., the alpha-factors) associated with the model;
- Staggered or non-staggered testing assumption;
- Default level of detail (expanded, showing all of the specific failure combinations, or not).
This paper will outline both the theory behind the probabilistic calculations and the resulting implementation in the SAPHIRE software.
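The minimal set of fields listed for the CCF object maps naturally onto a small record type; a sketch with illustrative names follows (not SAPHIRE's actual data model or API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CCFObject:
    """User-specified inputs for a common-cause failure group, following
    the field list in the abstract; all names here are hypothetical."""
    num_components: int            # number of redundant components
    failure_criterion: int         # how many components have to fail
    model_type: str                # "alpha-factor", "MGL", or "beta-factor"
    parameters: List[float] = field(default_factory=list)
    staggered_testing: bool = True
    expanded_detail: bool = False

    def __post_init__(self):
        if not 1 <= self.failure_criterion <= self.num_components:
            raise ValueError("failure criterion must be between 1 and the group size")
        if self.model_type not in {"alpha-factor", "MGL", "beta-factor"}:
            raise ValueError(f"unknown CCF model type: {self.model_type}")

# Example: a 2-of-4 group under the alpha-factor model.
ccf = CCFObject(num_components=4, failure_criterion=2,
                model_type="alpha-factor", parameters=[0.95, 0.03, 0.015, 0.005])
```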
7

Wood, Jared G., Benjamin Kehoe, and J. Karl Hedrick. "Target Estimate PDF-Based Optimal Path Planning Algorithm With Application to UAV Systems." In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4262.

Abstract: Companies are starting to explore investing in UAV systems that come with standard autopilot trackers. There is a need for general cooperative local path planning algorithms that function with these types of systems. We recently finished a project in which algorithms for autonomously searching for, detecting, and tracking ground targets were developed for a fixed-wing UAV with a visual-spectrum gimballed camera. A set of scenarios is identified in which finite-horizon path optimization results in a non-optimal, ineffective path. For each of these scenarios, an appropriate path optimization problem is defined to replace finite-horizon optimization. An algorithm is presented that determines which path optimization should be performed given a UAV state and target estimate probability distribution. The algorithm was implemented and thoroughly tested in flight experiments. The experimental work was successful and gave insight into what is required for a path planning algorithm to work robustly with standard waypoint tracking UAV systems. This paper presents the algorithm that was developed, theory supporting the algorithm, and experimental results.
8

Haijun, Liang. "Research and Discussion on the Novel Big Data Clustering Algorithm based on Probability Theory and Nash Game Theory." In 2015 Conference on Informatization in Education, Management and Business (IEMB-15). Paris, France: Atlantis Press, 2015. http://dx.doi.org/10.2991/iemb-15.2015.206.

9

Mourelatos, Zissimos P., and Jun Zhou. "A Design Optimization Method Using Evidence Theory." In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-84693.

Abstract: Early in the engineering design cycle, it is difficult to quantify product reliability or compliance with performance targets due to insufficient data or information to model uncertainties; probability theory therefore cannot be used. Design decisions are usually based on fuzzy information that is vague, imprecise, qualitative, linguistic, or incomplete. Recently, evidence theory has been proposed to handle uncertainty with limited information as an alternative to probability theory. In this paper, a computationally efficient design optimization method is proposed based on evidence theory, which can handle a mixture of epistemic and random uncertainties. It quickly identifies the vicinity of the optimal point and the active constraints by moving a hyper-ellipse in the original design space, using a reliability-based design optimization (RBDO) algorithm. Subsequently, a derivative-free optimizer calculates the evidence-based optimum, starting from the nearby RBDO optimum and considering only the identified active constraints. The computational cost is kept low by first moving to the vicinity of the optimum quickly and subsequently using local surrogate models of the active constraints only. Two examples demonstrate the proposed evidence-based design optimization method.
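Evidence theory, which the paper uses in place of probability theory, assigns mass to sets of outcomes; belief and plausibility then bound the likelihood of any event. A minimal sketch of these standard definitions (not the paper's optimization algorithm):

```python
def belief_plausibility(masses, event):
    """Belief (total mass of focal sets inside `event`) and plausibility
    (total mass of focal sets intersecting `event`) for a basic mass assignment,
    given as a list of (frozenset, mass) pairs."""
    bel = sum(m for focal, m in masses if focal <= event)
    pl = sum(m for focal, m in masses if focal & event)
    return bel, pl

# Frame {1, 2, 3}: 0.4 on {1}, 0.3 on {1, 2}, 0.3 on the whole frame.
masses = [(frozenset({1}), 0.4),
          (frozenset({1, 2}), 0.3),
          (frozenset({1, 2, 3}), 0.3)]

bel, pl = belief_plausibility(masses, frozenset({1, 2}))
# bel is about 0.7 and pl is 1.0: the "true" probability of {1, 2} lies in [0.7, 1.0].
```

The gap between belief and plausibility is exactly the epistemic uncertainty that the design optimization method must handle.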
10

Rouhani, Sara, Tahrima Rahman, and Vibhav Gogate. "Algorithms for the Nearest Assignment Problem." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/707.

Abstract: We consider the following nearest assignment problem (NAP): given a Bayesian network B and a probability value q, find a configuration w of variables in B such that the difference between q and the probability of w is minimized. NAP is much harder than conventional inference problems such as finding the most probable explanation, and is NP-hard even on independent Bayesian networks (IBNs), which are networks having no edges. Therefore, in order to solve NAP on IBNs, we show how to encode it as a two-way number partitioning problem. This encoding allows us to use greedy poly-time approximation algorithms from the number partitioning literature to yield an algorithm with guarantees for solving NAP on IBNs. We extend this basic algorithm from independent networks to arbitrary probabilistic graphical models by leveraging cutset conditioning and (Rao-Blackwellised) sampling algorithms. We derive approximation and complexity guarantees for our new algorithms and show experimentally that they are quite accurate in practice.
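The number-partitioning encoding mentioned in the abstract can be sketched for binary independent networks: in log-space, flipping each variable to its likelier value adds a fixed "gain", so hitting log q becomes a subset-sum problem that a greedy pass approximates (an illustration, not the authors' exact algorithm; greedy is not always optimal):

```python
import math

def nearest_assignment(probs, q):
    """Greedy NAP sketch for an edgeless network of binary variables.
    probs[i] = P(X_i = 1); returns a configuration x with P(x) near q."""
    base = 0.0          # log-probability of the all-least-likely configuration
    items, x = [], {}
    for i, p in enumerate(probs):
        lo, hi = sorted((math.log(p), math.log(1 - p)))
        base += lo
        x[i] = 1 if math.log(p) == lo else 0   # value realising the low log-prob
        items.append((hi - lo, i))             # gain from flipping to the likelier value
    needed = math.log(q) - base                # log-probability gain still required
    for gain, i in sorted(items, reverse=True):
        if gain <= needed:                     # greedy subset-sum: take big gains first
            x[i] = 1 - x[i]
            needed -= gain
    return [x[i] for i in range(len(probs))]

# Achievable probabilities for [0.9, 0.6]: 0.54, 0.36, 0.06, 0.04.
print(nearest_assignment([0.9, 0.6], 0.5))  # [1, 0], i.e. P = 0.9 * 0.4 = 0.36
```

Here the greedy pass settles for 0.36 although 0.54 is closer to 0.5, which is exactly the kind of gap the paper's guaranteed approximation algorithms bound.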

Organization reports on the topic "Algorithmic probability theory"

1

Steele, J. M. Probability and Statistics Applied to the Theory of Algorithms. Fort Belvoir, VA: Defense Technical Information Center, April 1995. http://dx.doi.org/10.21236/ada295805.

2

Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Abstract:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently, (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection, and (3) to expand the system to a mobile platform for a stop-measure- and-go situation. There were no major revisions in the overall objective, however, several revisions were made on the specific objectives. The revised specific objectives were: (1) to develop a yield mapping system for a static situation, using color and thermal imaging independently, (2) to integrate color and thermal imaging for improved yield estimation by combining thermal images with color images to improve fruit detection, and (3) to expand the system to an autonomous mobile platform for a continuous-measure situation. Background, major conclusions, solutions and achievements -- Yield mapping is considered as an initial step for applying precision agriculture technologies. Although many yield mapping systems have been developed for agronomic crops, it remains a difficult task for mapping yield of tree crops. In this project, an autonomous immature fruit yield mapping system was developed. The system could detect and count the number of fruit at early growth stages of citrus fruit so that farmers could apply site-specific management based on the maps. There were two sub-systems, a navigation system and an imaging system. Robot Operating System (ROS) was the backbone for developing the navigation system using an unmanned ground vehicle (UGV). 
An inertial measurement unit (IMU), wheel encoders and a GPS were integrated using an extended Kalman filter to provide reliable and accurate localization information. A LiDAR was added to support simultaneous localization and mapping (SLAM) algorithms. The color camera on a Microsoft Kinect was used to detect citrus trees and a new machine vision algorithm was developed to enable autonomous navigation in the citrus grove. A multimodal imaging system, which consisted of two color cameras and a thermal camera, was carried by the vehicle for video acquisitions. A novel image registration method was developed for combining color and thermal images and matching fruit in both images, which achieved pixel-level accuracy. A new Color-Thermal Combined Probability (CTCP) algorithm was created to effectively fuse information from the color and thermal images to classify potential image regions into fruit and non-fruit classes. Algorithms were also developed to integrate image registration, information fusion and fruit classification and detection into a single step for real-time processing. The imaging system achieved a precision rate of 95.5% and a recall rate of 90.4% on immature green citrus fruit detection, which was a great improvement compared to previous studies. Implications – The development of the immature green fruit yield mapping system will help farmers make early decisions for planning operations and marketing so high yield and profit can be achieved.
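The color-thermal fusion step can be sketched as combining two detectors' fruit probabilities for a registered pixel region. An independence (naive-Bayes) fusion rule is shown here as one plausible choice, since the abstract does not give the CTCP formula:

```python
def fuse_probabilities(p_color, p_thermal):
    """Combine two independent detectors' fruit probabilities into one
    (naive-Bayes odds product with a uniform prior)."""
    fruit = p_color * p_thermal
    not_fruit = (1 - p_color) * (1 - p_thermal)
    return fruit / (fruit + not_fruit)

def classify(p_color, p_thermal, threshold=0.5):
    """Label a registered region as fruit when the fused probability is high."""
    return "fruit" if fuse_probabilities(p_color, p_thermal) > threshold else "non-fruit"

print(classify(0.7, 0.8))   # fruit: fused probability is about 0.90
print(classify(0.4, 0.3))   # non-fruit: fused probability is about 0.22
```

Note how two moderately confident detectors reinforce each other: 0.7 and 0.8 fuse to roughly 0.90, which is the motivation for combining the modalities at all.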
3

Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.

Abstract:
The report provides a review of how risk is conceived of, modelled, and mapped in studies of infectious water, sanitation, and hygiene (WASH) related diseases. It focuses on spatial epidemiology of cholera, malaria and dengue to offer recommendations for the field of WASH-related disease risk mapping. The report notes a lack of consensus on the definition of disease risk in the literature, which limits the interpretability of the resulting analyses and could affect the quality of the design and direction of public health interventions. In addition, existing risk frameworks that consider disease incidence separately from community vulnerability have conceptual overlap in their components and conflate the probability and severity of disease risk into a single component. The report identifies four methods used to develop risk maps, i) observational, ii) index-based, iii) associative modelling and iv) mechanistic modelling. Observational methods are limited by a lack of historical data sets and their assumption that historical outcomes are representative of current and future risks. The more general index-based methods offer a highly flexible approach based on observed and modelled risks and can be used for partially qualitative or difficult-to-measure indicators, such as socioeconomic vulnerability. For multidimensional risk measures, indices representing different dimensions can be aggregated to form a composite index or be considered jointly without aggregation. The latter approach can distinguish between different types of disease risk such as outbreaks of high frequency/low intensity and low frequency/high intensity. Associative models, including machine learning and artificial intelligence (AI), are commonly used to measure current risk, future risk (short-term for early warning systems) or risk in areas with low data availability, but concerns about bias, privacy, trust, and accountability in algorithms can limit their application. 
In addition, they typically do not account for gender and demographic variables that allow risk analyses for different vulnerable groups. As an alternative, mechanistic models can be used for similar purposes, as well as to create spatial measures of disease transmission efficiency or to model risk outcomes from hypothetical scenarios. Mechanistic models, however, are limited by their inability to capture locally specific transmission dynamics. The report recommends that future WASH-related disease risk mapping research:
- Conceptualise risk as a function of the probability and severity of a disease risk event. Probability and severity can be disaggregated into sub-components. For outbreak-prone diseases, probability can be represented by a likelihood component, while severity can be disaggregated into transmission and sensitivity sub-components, where sensitivity represents factors affecting health and socioeconomic outcomes of infection.
- Employ jointly considered unaggregated indices to map multidimensional risk. Individual indices representing multiple dimensions of risk should be developed using a range of methods to take advantage of their relative strengths.
- Develop and apply collaborative approaches with public health officials, development organizations and relevant stakeholders to identify appropriate interventions and priority levels for different types of risk, while ensuring the needs and values of users are met in an ethical and socially responsible manner.
- Enhance identification of vulnerable populations by further disaggregating risk estimates, accounting for demographic and behavioural variables, and using novel data sources such as big data and citizen science.
This review is the first to focus solely on WASH-related disease risk mapping and modelling.
The recommendations can be used as a guide for developing spatial epidemiology models in tandem with public health officials and to help detect and develop tailored responses to WASH-related disease outbreaks that meet the needs of vulnerable populations. The report’s main target audience is modellers, public health authorities and partners responsible for co-designing and implementing multi-sectoral health interventions, with a particular emphasis on facilitating the integration of health and WASH services delivery contributing to Sustainable Development Goals (SDG) 3 (good health and well-being) and 6 (clean water and sanitation).
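The report's first recommendation, risk as a function of probability and severity, can be sketched as a composite index over mapped regions (min-max normalisation and a product rule are illustrative choices only; the report discusses several aggregation options and also recommends keeping indices unaggregated):

```python
def minmax(values):
    """Scale a list of indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_risk(probability, severity):
    """Per-region risk = normalised probability x normalised severity."""
    return [p * s for p, s in zip(minmax(probability), minmax(severity))]

# Hypothetical disease-probability and severity indicators for four regions.
prob     = [0.2, 0.8, 0.5, 0.9]
severity = [10.0, 40.0, 50.0, 20.0]
risk = composite_risk(prob, severity)
highest = max(range(len(risk)), key=risk.__getitem__)  # region with top risk
```

The product rule ranks region 1 highest here even though region 3 has the highest probability, illustrating why conflating probability and severity into one number can hide which component drives the risk.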
