Journal articles on the topic 'Probabilistic representation'


Consult the top 50 journal articles for your research on the topic 'Probabilistic representation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Jaeger, Manfred. "Probabilistic Decision Graphs — Combining Verification and AI Techniques for Probabilistic Inference." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12, supp01 (January 2004): 19–42. http://dx.doi.org/10.1142/s0218488504002564.

Abstract:
We adopt probabilistic decision graphs developed in the field of automated verification as a tool for probabilistic model representation and inference. We show that probabilistic inference has linear time complexity in the size of the probabilistic decision graph, that the smallest probabilistic decision graph for a given distribution is at most as large as the smallest junction tree for the same distribution, and that in some cases it can in fact be much smaller. Behind these very promising features of probabilistic decision graphs lies the fact that they integrate into a single coherent framework a number of representational and algorithmic optimizations developed for Bayesian networks (use of hidden variables, context-specific independence, structured representation of conditional probability tables).
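The linear-time inference claim is easy to see in miniature. Below is a toy sketch (a simplification of Jaeger's formalism, with invented variables): each node tests one binary variable, structure sharing makes the graph a DAG, and the probability of a full assignment is the product of branch probabilities along a single root-to-leaf walk.

```python
# Toy probabilistic decision graph: each node stores P(var = 1 | node reached);
# shared children make this a DAG. A simplified illustration, not Jaeger's
# exact definition.

class Node:
    def __init__(self, var, p_one, child0=None, child1=None):
        self.var, self.p_one = var, p_one
        self.children = {0: child0, 1: child1}

def prob_of_assignment(root, assignment):
    """P(assignment) = product of branch probabilities along one path."""
    p, node = 1.0, root
    while node is not None:
        value = assignment[node.var]
        p *= node.p_one if value == 1 else 1.0 - node.p_one
        node = node.children[value]
    return p

node_c = Node("C", 0.9)                  # shared by both B-branches
node_b = Node("B", 0.5, node_c, node_c)  # shared by both A-branches
root = Node("A", 0.3, node_b, node_b)

print(prob_of_assignment(root, {"A": 1, "B": 0, "C": 1}))  # 0.3 * 0.5 * 0.9 = 0.135
```

The walk touches each node on the path once, so evaluation cost is linear in the depth of the graph, regardless of how many paths the sharing encodes.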
2

Al-Najjar, Nabil I., Ramon Casadesus-Masanell, and Emre Ozdenoren. "Probabilistic representation of complexity." Journal of Economic Theory 111, no. 1 (July 2003): 49–87. http://dx.doi.org/10.1016/s0022-0531(03)00075-9.

3

Giannarakis, Nick, Alexandra Silva, and David Walker. "ProbNV: probabilistic verification of network control planes." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–30. http://dx.doi.org/10.1145/3473595.

Abstract:
ProbNV is a new framework for probabilistic network control plane verification that strikes a balance between generality and scalability. ProbNV is general enough to encode a wide range of features from the most common protocols (eBGP and OSPF) and yet scalable enough to handle challenging properties, such as probabilistic all-failures analysis of medium-sized networks with 100-200 devices. When there are a small, bounded number of failures, networks with up to 500 devices may be verified in seconds. ProbNV operates by translating raw CISCO configurations into a probabilistic and functional programming language designed for network verification. This language comes equipped with a novel type system that characterizes the sort of representation to be used for each data structure: concrete for the usual representation of values; symbolic for a BDD-based representation of sets of values; and multi-value for an MTBDD-based representation of values that depend upon symbolics. Careful use of these varying representations speeds execution of symbolic simulation of network models. The MTBDD-based representations are also used to calculate probabilistic properties of network models once symbolic simulation is complete. We implement the language and evaluate its performance on benchmarks constructed from real network topologies and synthesized routing policies.
4

Lindström, Sten, and Wlodzimierz Rabinowicz. "On probabilistic representation of non-probabilistic belief revision." Journal of Philosophical Logic 18, no. 1 (February 1989): 69–101. http://dx.doi.org/10.1007/bf00296175.

5

Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning." Journal of Artificial Intelligence Research 61 (January 31, 2018): 215–89. http://dx.doi.org/10.1613/jair.5575.

Abstract:
We consider the problem of constructing abstract representations for planning in high-dimensional, continuous environments. We assume an agent equipped with a collection of high-level actions, and construct representations provably capable of evaluating plans composed of sequences of those actions. We first consider the deterministic planning case, and show that the relevant computation involves set operations performed over sets of states. We define the specific collection of sets that is necessary and sufficient for planning, and use them to construct a grounded abstract symbolic representation that is provably suitable for deterministic planning. The resulting representation can be expressed in PDDL, a canonical high-level planning domain language; we construct such a representation for the Playroom domain and solve it in milliseconds using an off-the-shelf planner. We then consider probabilistic planning, which we show requires generalizing from sets of states to distributions over states. We identify the specific distributions required for planning, and use them to construct a grounded abstract symbolic representation that correctly estimates the expected reward and probability of success of any plan. In addition, we show that learning the relevant probability distributions corresponds to specific instances of probabilistic density estimation and probabilistic classification. We construct an agent that autonomously learns the correct abstract representation of a computer game domain, and rapidly solves it. Finally, we apply these techniques to create a physical robot system that autonomously learns its own symbolic representation of a mobile manipulation task directly from sensorimotor data---point clouds, map locations, and joint angles---and then plans using that representation. Together, these results establish a principled link between high-level actions and abstract representations, a concrete theoretical foundation for constructing abstract representations with provable properties, and a practical mechanism for autonomously learning abstract high-level representations.
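The deterministic half of the construction, evaluating plans by set operations over sets of states, can be illustrated with frozensets (toy states and action names; the paper works with learned symbols rather than enumerated states):

```python
# Plan feasibility via set operations: an action is executable from a set of
# possible states if that set lies inside its precondition set; executing it
# yields its image set. States and actions here are toy stand-ins.

actions = {
    "open_door": {"pre": frozenset({1, 2}), "image": frozenset({3})},
    "walk_through": {"pre": frozenset({3}), "image": frozenset({4})},
}

def feasible(plan, start):
    current = start
    for name in plan:
        act = actions[name]
        if not current <= act["pre"]:   # precondition must cover all states
            return False
        current = act["image"]          # set-valued effect of the action
    return True

print(feasible(["open_door", "walk_through"], frozenset({1})))  # True
print(feasible(["walk_through"], frozenset({1})))               # False
```

The probabilistic case described in the abstract replaces these sets with distributions over states, but the plan-evaluation recursion has the same shape.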
6

Halpern, J. Y., and D. Koller. "Representation Dependence in Probabilistic Inference." Journal of Artificial Intelligence Research 21 (March 1, 2004): 319–56. http://dx.doi.org/10.1613/jair.1292.

Abstract:
Non-deductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. Some have viewed this as a significant problem. For example, the principle of maximum entropy has been subjected to much criticism due to its representation dependence. There has, however, been almost no work investigating representation dependence. In this paper, we formalize this notion and show that it is not a problem specific to maximum entropy. In fact, we show that any representation-independent probabilistic inference procedure that ignores irrelevant information is essentially entailment, in a precise sense. Moreover, we show that representation independence is incompatible with even a weak default assumption of independence. We then show that invariance under a restricted class of representation changes can form a reasonable compromise between representation independence and other desiderata, and provide a construction of a family of inference procedures that provides such restricted representation independence, using relative entropy.
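A classic toy example, not taken from the paper but standard in this literature, makes the representation dependence of maximum entropy concrete: with no constraints, the answer to the same query changes when one outcome is split in two.

```latex
\text{Vocabulary } \{\text{red}, \overline{\text{red}}\}:
  \quad \Pr_{\mathrm{ME}}(\text{red}) = \tfrac{1}{2};
\qquad
\text{Vocabulary } \{\text{red}, \text{blue}, \text{green}\}:
  \quad \Pr_{\mathrm{ME}}(\text{red}) = \tfrac{1}{3}.
```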
7

Karpati, A., P. Adam, and J. Janszky. "Quantum operations in probabilistic representation." Physica Scripta T135 (July 2009): 014054. http://dx.doi.org/10.1088/0031-8949/2009/t135/014054.

8

Barber, M. J., J. W. Clark, and C. H. Anderson. "Neural Representation of Probabilistic Information." Neural Computation 15, no. 8 (August 1, 2003): 1843–64. http://dx.doi.org/10.1162/08997660360675062.

Abstract:
It has been proposed that populations of neurons process information in terms of probability density functions (PDFs) of analog variables. Such analog variables range, for example, from target luminance and depth on the sensory interface to eye position and joint angles on the motor output side. The requirement that analog variables must be processed leads inevitably to a probabilistic description, while the limited precision and lifetime of the neuronal processing units lead naturally to a population representation of information. We show how a time-dependent probability density ρ(x; t) over variable x, residing in a specified function space of dimension D, may be decoded from the neuronal activities in a population as a linear combination of certain decoding functions φi(x), with coefficients given by the N firing rates ai(t) (generally with D ≪ N). We show how the neuronal encoding process may be described by projecting a set of complementary encoding functions φ̂i(x) on the probability density ρ(x; t), and passing the result through a rectifying nonlinear activation function. We show how both encoders φ̂i(x) and decoders φi(x) may be determined by minimizing cost functions that quantify the inaccuracy of the representation. Expressing a given computation in terms of manipulation and transformation of probabilities, we show how this representation leads to a neural circuit that can carry out the required computation within a consistent Bayesian framework, with the synaptic weights being explicitly generated in terms of encoders, decoders, conditional probabilities, and priors.
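The decoding step described here is just a weighted sum of basis functions; a minimal numpy sketch, with illustrative Gaussian decoders and made-up firing rates, looks like this:

```python
import numpy as np

# Decode a density as rho(x, t) = sum_i a_i(t) * phi_i(x). Basis choice and
# rates below are illustrative, not taken from the paper.

x = np.linspace(-3.0, 3.0, 200)
centers = np.linspace(-2.0, 2.0, 8)                       # N = 8 neurons
phi = np.exp(-0.5 * (x[None, :] - centers[:, None])**2)   # Gaussian decoders
rates = np.array([0.1, 0.4, 1.2, 2.0, 1.2, 0.4, 0.1, 0.05])  # a_i at one t

rho = rates @ phi                 # linear combination of decoding functions
rho /= np.trapz(rho, x)           # normalize to a proper density
print(rho.sum() * (x[1] - x[0]))  # ~1 after normalization
```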
9

Soldatova, Larisa N., Andrey Rzhetsky, Kurt De Grave, and Ross D. King. "Representation of probabilistic scientific knowledge." Journal of Biomedical Semantics 4, Suppl 1 (2013): S7. http://dx.doi.org/10.1186/2041-1480-4-s1-s7.

10

Haba, Z. "Probabilistic representation of quantum dynamics." Physics Letters A 175, no. 6 (April 1993): 371–76. http://dx.doi.org/10.1016/0375-9601(93)90984-8.

11

Gikhman, Il I. "Probabilistic representation of quantum evolution." Ukrainian Mathematical Journal 44, no. 10 (October 1992): 1203–8. http://dx.doi.org/10.1007/bf01057675.

12

Feldman, Jacob. "Symbolic representation of probabilistic worlds." Cognition 123, no. 1 (April 2012): 61–83. http://dx.doi.org/10.1016/j.cognition.2011.12.008.

13

Joshi, S., and R. Khardon. "Probabilistic Relational Planning with First Order Decision Diagrams." Journal of Artificial Intelligence Research 41 (June 21, 2011): 231–66. http://dx.doi.org/10.1613/jair.3205.

Abstract:
Dynamic programming algorithms have been successfully applied to propositional stochastic planning problems by using compact representations, in particular algebraic decision diagrams, to capture domain dynamics and value functions. Work on symbolic dynamic programming lifted these ideas to first order logic using several representation schemes. Recent work introduced a first order variant of decision diagrams (FODD) and developed a value iteration algorithm for this representation. This paper develops several improvements to the FODD algorithm that make the approach practical. These include new reduction operators that decrease the size of the representation, several speedup techniques, and techniques for value approximation. Incorporating these, the paper presents a planning system, FODD-Planner, for solving relational stochastic planning problems. The system is evaluated on several domains, including problems from the recent international planning competition, and shows competitive performance with top ranking systems. This is the first demonstration of feasibility of this approach and it shows that abstraction through compact representation is a promising approach to stochastic planning.
14

Hu, Dou, Lingwei Wei, Yaxin Liu, Wei Zhou, and Songlin Hu. "Structured Probabilistic Coding." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12491–501. http://dx.doi.org/10.1609/aaai.v38i11.29142.

Abstract:
This paper presents a new supervised representation learning framework, namely structured probabilistic coding (SPC), to learn compact and informative representations from input related to the target task. SPC is an encoder-only probabilistic coding technology with a structured regularization from the target space. It can enhance the generalization ability of pre-trained language models for better language understanding. Specifically, our probabilistic coding simultaneously performs information encoding and task prediction in one module to more fully utilize the effective information from input data. It uses variational inference in the output space to reduce randomness and uncertainty. Besides, to better control the learning process of probabilistic representations, a structured regularization is proposed to promote uniformity across classes in the latent space. With the regularization term, SPC can preserve the Gaussian structure of the latent code and achieve better coverage of the hidden space with classes distributed uniformly. Experimental results on 12 natural language understanding tasks demonstrate that our SPC effectively improves the performance of pre-trained language models for classification and regression. Extensive experiments show that SPC can enhance the generalization capability, robustness to label noise, and clustering quality of output representations.
15

Xie, Haoyu, Changqi Wang, Mingkai Zheng, Minjing Dong, Shan You, Chong Fu, and Chang Xu. "Boosting Semi-Supervised Semantic Segmentation with Probabilistic Representations." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 2938–46. http://dx.doi.org/10.1609/aaai.v37i3.25396.

Abstract:
Recent breakthroughs in semi-supervised semantic segmentation have been developed through contrastive learning. In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space. However, there exist inaccurate pseudo-labels which map the ambiguous representations of pixels to the wrong classes due to the limited cognitive ability of the model. In this paper, we define pixel-wise representations from a new perspective of probability theory and propose a Probabilistic Representation Contrastive Learning (PRCL) framework that improves representation quality by taking its probability into consideration. Through modelling the mapping from pixels to representations as the probability via multivariate Gaussian distributions, we can tune the contribution of the ambiguous representations to tolerate the risk of inaccurate pseudo-labels. Furthermore, we define prototypes in the form of distributions, which indicate the confidence of a class, while a point prototype cannot. Moreover, we propose to regularize the distribution variance to enhance the reliability of representations. Taking advantage of these benefits, high-quality feature representations can be derived in the latent space, thereby the performance of semantic segmentation can be further improved. We conduct sufficient experiments to evaluate PRCL on Pascal VOC and CityScapes to demonstrate its superiority. The code is available at https://github.com/Haoyu-Xie/PRCL.
16

Poole, D., and N. L. Zhang. "Exploiting Contextual Independence In Probabilistic Inference." Journal of Artificial Intelligence Research 18 (June 1, 2003): 263–313. http://dx.doi.org/10.1613/jair.1122.

Abstract:
Bayesian belief networks have grown to prominence because they provide compact representations for many problems for which probabilistic inference is appropriate, and there are algorithms to exploit this compactness. The next step is to allow compact representations of the conditional probabilities of a variable given its parents. In this paper we present such a representation that exploits contextual independence in terms of parent contexts; which variables act as parents may depend on the value of other variables. The internal representation is in terms of contextual factors (confactors), each of which is simply a pair of a context and a table. The algorithm, contextual variable elimination, is based on the standard variable elimination algorithm that eliminates the non-query variables in turn, but when eliminating a variable, the tables that need to be multiplied can depend on the context. This algorithm reduces to standard variable elimination when there is no contextual independence structure to exploit. We show how this can be much more efficient than variable elimination when there is structure to exploit. We explain why this new method can exploit more structure than previous methods for structured belief network inference and an analogous algorithm that uses trees.
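A confactor is literally a (context, table) pair; the following toy sketch (invented variables, not the authors' code) shows how which parents matter can change with context:

```python
# Confactor representation of P(Y=1 | A, B): in context A=0, Y ignores B;
# in context A=1, B becomes a parent. Variables and numbers are invented.

confactors = [
    ({"A": 0}, (), {(): 0.9}),                   # context, parents, table
    ({"A": 1}, ("B",), {(0,): 0.2, (1,): 0.7}),
]

def p_y1(assignment):
    for context, parents, table in confactors:
        if all(assignment[v] == val for v, val in context.items()):
            return table[tuple(assignment[p] for p in parents)]
    raise ValueError("no matching context")

print(p_y1({"A": 0, "B": 1}))  # 0.9: independent of B in this context
print(p_y1({"A": 1, "B": 1}))  # 0.7: B matters in this context
```

A full conditional probability table over (A, B) would store four numbers; the confactors store three, and the gap widens quickly as context-specific independence grows.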
17

Block, Ned. "If perception is probabilistic, why does it not seem probabilistic?" Philosophical Transactions of the Royal Society B: Biological Sciences 373, no. 1755 (July 30, 2018): 20170341. http://dx.doi.org/10.1098/rstb.2017.0341.

Abstract:
The success of the Bayesian perspective in explaining perceptual phenomena has motivated the view that perceptual representation is probabilistic. But if perceptual representation is probabilistic, why does normal conscious perception not reflect the full probability functions that the probabilistic point of view endorses? For example, neurons in cortical area MT that respond to the direction of motion are broadly tuned: a patch of cortex that is tuned to vertical motion also responds to horizontal motion, but when we see vertical motion, foveally, in good conditions, it does not look at all horizontal. The standard solution in terms of sampling runs into the problem that sampling is an account of perceptual decision rather than perception. This paper argues that the best Bayesian approach to this problem does not require probabilistic representation. This article is part of the theme issue ‘Perceptual consciousness and cognitive access'.
18

Moghaddam, B., and A. Pentland. "Probabilistic visual learning for object representation." IEEE Transactions on Pattern Analysis and Machine Intelligence 19, no. 7 (July 1997): 696–710. http://dx.doi.org/10.1109/34.598227.

19

Höhna, Sebastian, Tracy A. Heath, Bastien Boussau, Michael J. Landis, Fredrik Ronquist, and John P. Huelsenbeck. "Probabilistic Graphical Model Representation in Phylogenetics." Systematic Biology 63, no. 5 (June 20, 2014): 753–71. http://dx.doi.org/10.1093/sysbio/syu039.

20

Wang, Haijun, Shengyan Zhang, Yujie Du, Hongjuan Ge, and Bo Hu. "Visual tracking via probabilistic collaborative representation." Journal of Electronic Imaging 26, no. 1 (February 1, 2017): 013010. http://dx.doi.org/10.1117/1.jei.26.1.013010.

21

Beccaria, M. "Probabilistic representation of fermionic lattice systems." Nuclear Physics B - Proceedings Supplements 83-84, no. 1-3 (March 2000): 911–13. http://dx.doi.org/10.1016/s0920-5632(00)00413-8.

22

Beccaria, Matteo, Carlo Presilla, Gian Fabrizio De Angelis, and Giovanni Jona-Lasinio. "Probabilistic representation of fermionic lattice systems." Nuclear Physics B - Proceedings Supplements 83-84 (April 2000): 911–13. http://dx.doi.org/10.1016/s0920-5632(00)91842-5.

23

Khrennikov, Andrei. "Probabilistic pathway representation of cognitive information." Journal of Theoretical Biology 231, no. 4 (December 2004): 597–613. http://dx.doi.org/10.1016/j.jtbi.2004.07.015.

24

Mao, L., and H. Resat. "Probabilistic representation of gene regulatory networks." Bioinformatics 20, no. 14 (April 8, 2004): 2258–69. http://dx.doi.org/10.1093/bioinformatics/bth236.

25

Chen, Zhen-Qing. "Time fractional equations and probabilistic representation." Chaos, Solitons & Fractals 102 (September 2017): 168–74. http://dx.doi.org/10.1016/j.chaos.2017.04.029.

26

Peng, Xiang, Qilong Gao, Jiquan Li, Zhenyu Liu, Bing Yi, and Shaofei Jiang. "Probabilistic Representation Approach for Multiple Types of Epistemic Uncertainties Based on Cubic Normal Transformation." Applied Sciences 10, no. 14 (July 8, 2020): 4698. http://dx.doi.org/10.3390/app10144698.

Abstract:
Many non-probabilistic approaches have been widely regarded as mathematical tools for the representation of epistemic uncertainties. However, their heavy computational burden and low computational efficiency hinder their applications in practical engineering problems. In this article, a unified probabilistic representation approach for multiple types of epistemic uncertainties is proposed based on the cubic normal transformation method. The epistemic uncertainties can be represented using an interval approach, triangular fuzzy approach, or evidence theory. The uncertain intervals of four statistical moments, which contain mean, variance, skewness, and kurtosis, are calculated using the sampling analysis method. Subsequently, probabilistic cubic normal distribution functions are constructed for the sampling points of the four statistical moments of the epistemic uncertainties. Finally, a calculation procedure for the construction of probabilistic representation functions is proposed, and these epistemic uncertainties are represented with belief and plausibility continuous probabilistic measure functions. Two numerical examples and one engineering example demonstrate that the proposed approach can act as an accurate probabilistic representation function with high computational efficiency.
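The forward direction of a cubic normal transformation is a third-order polynomial of a standard normal variable; a quick Monte Carlo sketch shows how the four coefficients control the first four moments (placeholder coefficients, whereas the paper solves the inverse problem of fitting them to target moments):

```python
import numpy as np

# Forward cubic normal transformation X = a + b*Z + c*Z^2 + d*Z^3, Z ~ N(0,1).
# Coefficients are arbitrary placeholders, not fitted values.

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
a, b, c, d = 0.0, 1.0, 0.15, 0.02
x = a + b * z + c * z**2 + d * z**3

mean = x.mean()
var = x.var()
skew = ((x - mean)**3).mean() / var**1.5
kurt = ((x - mean)**4).mean() / var**2
print(mean, var, skew, kurt)  # the moments induced by (a, b, c, d)
```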
27

Aljundi, Rahaf, Yash Patel, Milan Sulc, Nikolay Chumerin, and Daniel Olmeda Reino. "Contrastive Classification and Representation Learning with Probabilistic Interpretation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6675–83. http://dx.doi.org/10.1609/aaai.v37i6.25819.

Abstract:
Cross entropy loss has served as the main objective function for classification-based tasks. Widely deployed for learning neural network classifiers, it shows both effectiveness and a probabilistic interpretation. Recently, after the success of self-supervised contrastive representation learning methods, supervised contrastive methods have been proposed to learn representations and have shown superior and more robust performance, compared to solely training with cross entropy loss. However, cross entropy loss is still needed to train the final classification layer. In this work, we investigate the possibility of learning both the representation and the classifier using one objective function that combines the robustness of contrastive learning and the probabilistic interpretation of cross entropy loss. First, we revisit a previously proposed contrastive-based objective function that approximates cross entropy loss and present a simple extension to learn the classifier jointly. Second, we propose a new version of the supervised contrastive training that learns jointly the parameters of the classifier and the backbone of the network. We empirically show that these proposed objective functions demonstrate state-of-the-art performance and show a significant improvement over the standard cross entropy loss with more training stability and robustness in various challenging settings.
28

Karami, Mahdi, and Dale Schuurmans. "Deep Probabilistic Canonical Correlation Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8055–63. http://dx.doi.org/10.1609/aaai.v35i9.16982.

Abstract:
We propose a deep generative framework for multi-view learning based on a probabilistic interpretation of canonical correlation analysis (CCA). The model combines a linear multi-view layer in the latent space with deep generative networks as observation models, to decompose the variability in multiple views into a shared latent representation that describes the common underlying sources of variation and a set of view-specific components. To approximate the posterior distribution of the latent multi-view layer, an efficient variational inference procedure is developed based on the solution of probabilistic CCA. The model is then generalized to an arbitrary number of views. An empirical analysis confirms that the proposed deep multi-view model can discover subtle relationships between multiple views and recover rich representations.
29

EKENBERG, LOVE, MATS DANIELSON, and JOHAN THORBIÖRNSON. "MULTIPLICATIVE PROPERTIES IN EVALUATION OF DECISION TREES." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 14, no. 03 (June 2006): 293–316. http://dx.doi.org/10.1142/s0218488506004023.

Abstract:
In attempting to address real-life decision problems, where uncertainty about data prevails, some kind of representation of imprecise information is important and several have been proposed. In particular, first-order representations, such as sets of probability measures, upper and lower probabilities, and interval probabilities and utilities of various kinds, have been suggested for enabling a better representation of the input sentences for a subsequent decision analysis. However, sometimes second-order approaches are better suited for modelling incomplete knowledge and we demonstrate how such can add important information when handling aggregations of imprecise representations, as is the case in decision trees or probabilistic networks. Based on this, we suggest a measure of belief density for such intervals. We also demonstrate important properties when operating on general distributions. The results equally apply to approaches which do not explicitly deal with second-order distributions, instead using only first-order concepts such as upper and lower bounds. While the discussion focuses on probabilistic decision trees, the results apply to other formalisms involving products of probabilities, such as probabilistic networks, and to formalisms dealing with products of interval entities such as interval weight trees in multi-criteria decision making.
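The phenomenon the authors analyse can be previewed with a simulation (illustrative only): multiplying two probabilities that are each uniform over [0.2, 0.4] yields a product whose second-order belief density over [0.04, 0.16] is strongly non-uniform.

```python
import numpy as np

# Second-order view of a product of interval probabilities: the product's
# range is [0.04, 0.16], but its belief density concentrates in the middle.

rng = np.random.default_rng(1)
p1 = rng.uniform(0.2, 0.4, 1_000_000)
p2 = rng.uniform(0.2, 0.4, 1_000_000)
prod = p1 * p2

print(prod.min(), prod.max())  # close to the interval bounds 0.04 and 0.16
hist, edges = np.histogram(prod, bins=12, range=(0.04, 0.16), density=True)
print(np.round(hist, 2))       # visibly non-uniform belief density
```

First-order interval bounds alone would report only [0.04, 0.16]; the second-order distribution carries the extra information the paper's belief-density measure is designed to capture.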
30

Wu, Banghe, Chengzhong Xu, and Hui Kong. "LiDAR Road-Atlas: An Efficient Map Representation for General 3D Urban Environment." Field Robotics 3, no. 1 (January 10, 2023): 435–59. http://dx.doi.org/10.55417/fr.2023014.

Abstract:
In this work, we propose the LiDAR Road-Atlas, a compact and efficient 3D map representation, for autonomous robot or vehicle navigation in a general urban environment. The LiDAR Road-Atlas can be generated by an online mapping framework which incrementally merges local 2D occupancy grid maps (2D-OGMs). Specifically, the contributions of our method are threefold. First, we solve the challenging problem of creating local 2D-OGMs in nonstructured urban scenes based on a real-time delimitation of traversable and curb regions in a LiDAR point cloud. Second, we achieve accurate 3D mapping in multiple-layer urban road scenarios by a probabilistic fusion scheme. Third, we achieve a very efficient 3D map representation of a general environment thanks to the automatic local-OGM-induced traversable-region labeling and a sparse probabilistic local point-cloud encoding. Given the LiDAR Road-Atlas, one can achieve accurate vehicle localization, path planning, and some other tasks. Our map representation is insensitive to dynamic objects which can be filtered out in the resulting map based on a probabilistic fusion. Empirically, we compare our map representation with a couple of popular map representations in the robotics community, and our map representation is more favorable in terms of efficiency, scalability, and compactness. Additionally, we also evaluate localization performance given the LiDAR Road-Atlas representations on two public datasets. With a 16-channel LiDAR sensor, our method achieves an average global localization error of 0.26 m (translation) and 1.07° (rotation) on the Apollo dataset, and 0.89 m (translation) and 1.29° (rotation) on the MulRan dataset, respectively, at 10 Hz, which validates its promising performance. The code for this work is open-sourced at https://github.com/IMRL/Lidar-road-atlas.
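Probabilistic fusion of occupancy evidence is commonly implemented as per-cell log-odds updates; the generic sketch below shows that standard scheme (the paper's multi-layer fusion is more involved, and all parameters here are placeholders):

```python
import numpy as np

# Generic log-odds occupancy fusion for a 2D grid: l = log(p / (1 - p)),
# updated additively per observation. A textbook scheme, not the authors' code.

def to_log_odds(p):
    return np.log(p / (1.0 - p))

grid = np.zeros((100, 100))                   # prior p = 0.5 everywhere
HIT, MISS = to_log_odds(0.7), to_log_odds(0.4)

def update(grid, hits, misses):
    grid[hits] += HIT                         # cells where beams ended
    grid[misses] += MISS                      # cells beams passed through

hits = (np.array([10, 11]), np.array([20, 20]))
misses = (np.array([10, 10]), np.array([18, 19]))
update(grid, hits, misses)

prob = 1.0 / (1.0 + np.exp(-grid))            # back to probabilities
print(prob[10, 20], prob[10, 18])             # raised vs lowered occupancy
```

Because the update is additive, repeated observations of a moving object decay back toward the prior once the object leaves, which is why this style of fusion filters dynamics out of the final map.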
31

Kou, Zheng, Junjie Li, Xinyue Fan, Saeed Kosari, and Xiaoli Qiang. "Predicting Cross-Species Infection of Swine Influenza Virus with Representation Learning of Amino Acid Features." Computational and Mathematical Methods in Medicine 2021 (October 11, 2021): 1–12. http://dx.doi.org/10.1155/2021/6985008.

Abstract:
Swine influenza viruses (SIVs) can unforeseeably cross the species barriers and directly infect humans, which pose huge challenges for public health and trigger pandemic risk at irregular intervals. Computational tools are needed to predict infection phenotype and early pandemic risk of SIVs. For this purpose, we propose a feature representation algorithm to predict cross-species infection of SIVs. We built a high-quality dataset of 1902 viruses. A feature representation learning scheme was applied to learn feature representations from 64 well-trained random forest models with multiple feature descriptors of mutant amino acid in the viral proteins, including compositional information, position-specific information, and physicochemical properties. Class and probabilistic information were integrated into the feature representations, and redundant features were removed by feature space optimization. High performance was achieved using 20 informative features and 22 probabilistic information. The proposed method will facilitate SIV characterization of transmission phenotype.
32

Cheng, Zhiyun. "Bowling ball representation of virtual string links." Journal of Knot Theory and Its Ramifications 26, no. 06 (February 17, 2017): 1742001. http://dx.doi.org/10.1142/s0218216517420019.

Abstract:
In this paper, we investigate the virtual string links via a probabilistic interpretation. This representation can be used to distinguish some virtual string links from classical string links. In order to study the algebraic structure behind this probabilistic interpretation, we introduce the notion of virtual flat biquandle. The cocycle invariants associated with virtual flat biquandle are also discussed.
33

Zhou, Changcong, Chenghu Tang, Fuchao Liu, and Wenxuan Wang. "A Probabilistic Representation Method for Interval Uncertainty." International Journal of Computational Methods 15, no. 05 (June 5, 2018): 1850038. http://dx.doi.org/10.1142/s021987621850038x.

Abstract:
In this work, we consider the interval uncertainty from the probabilistic point of view, focusing on the establishment of probabilistic representation of interval uncertainty. A model-free sampling technique is first introduced, which can be used to produce a considerably larger sample from a given small sample. To make sure the local statistical characteristics of these two samples coincide, an improved model-free sampling technique is introduced based on probability weighted moments. The improved model-free sampling technique is then applied to obtain a large sample based on interval data, of which the probability distribution is produced by the kernel density estimator. Highest density regions based on estimated probability density function have been considered to further investigate the underlying information. The proposed probabilistic representation method is employed in the attempt of interval uncertainty propagation with the results compared with previous studies. The research adds a possible tool in the treatment of interval uncertainty.
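The authors' model-free sampling technique is specific to their paper, but the overall pipeline, expanding a small sample and estimating a density from it, can be approximated by a smoothed bootstrap plus kernel density estimation (a stand-in, not their algorithm):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Smoothed bootstrap: resample a small sample with replacement, jitter with a
# small Gaussian kernel, then fit a KDE. Sample values and bandwidth are toys.

rng = np.random.default_rng(2)
small = np.array([1.9, 2.1, 2.4, 2.6, 3.0])     # small interval-style sample

idx = rng.integers(0, small.size, 10_000)
large = small[idx] + rng.normal(0.0, 0.1, 10_000)  # expanded sample

kde = gaussian_kde(large)                        # estimated density
xs = np.linspace(1.5, 3.5, 9)
print(np.round(kde(xs), 3))
```

The paper's improvement, matching probability weighted moments between the small and expanded samples, addresses exactly the risk this naive version runs: the jittered sample may not preserve the local statistics of the original data.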
34

Hochgerner, Simon. "Probabilistic representation of helicity in viscous fluids." Comptes Rendus. Mécanique 350, G2 (June 16, 2022): 283–95. http://dx.doi.org/10.5802/crmeca.116.

35

Ahmed, Sultan, and Malek Mouhoub. "Representation and Reasoning with Probabilistic TCP-nets." Computer and Information Science 11, no. 4 (October 16, 2018): 9. http://dx.doi.org/10.5539/cis.v11n4p9.

Abstract:
TCP-nets are graphical tools for modeling users' preference and relative importance statements. We propose the Probabilistic TCP-net (PTCP-net) model that can aggregate a set of TCP-nets, in a compact form, sharing the same set of variables and their domains but having different preference and relative importance statements. In particular, the PTCP-net is able to aggregate the choices of multiple users, such as in recommender systems. The PTCP-net can also be seen as an extension of the TCP-net with uncertainty on preference and relative importance statements. We adopt the Bayesian Network as the reasoning tool for PTCP-nets, especially when answering the following two queries: (1) finding the most probable TCP-net and (2) finding the most probable optimal outcome. We also show that the PTCP-net is applicable in collaborative-filtering-type recommender systems.
36

Goodman, Gerald S. "A probabilistic representation of totally positive matrices." Advances in Applied Mathematics 7, no. 2 (June 1986): 236–52. http://dx.doi.org/10.1016/0196-8858(86)90035-7.

37

Belopolskaya, Y. I. "Probabilistic representation of solutions of hydrodynamic equations." Journal of Mathematical Sciences 101, no. 5 (October 2000): 3422–36. http://dx.doi.org/10.1007/bf02680143.

38

Gou, Jianping, Lei Wang, Bing Hou, Jiancheng Lv, Yunhao Yuan, and Qirong Mao. "Two-phase probabilistic collaborative representation-based classification." Expert Systems with Applications 133 (November 2019): 9–20. http://dx.doi.org/10.1016/j.eswa.2019.05.009.

39

Busnello, Barbara, Franco Flandoli, and Marco Romito. "A PROBABILISTIC REPRESENTATION FOR THE VORTICITY OF A THREE-DIMENSIONAL VISCOUS FLUID AND FOR GENERAL SYSTEMS OF PARABOLIC EQUATIONS." Proceedings of the Edinburgh Mathematical Society 48, no. 2 (May 23, 2005): 295–336. http://dx.doi.org/10.1017/s0013091503000506.

Abstract:
A probabilistic representation formula for general systems of linear parabolic equations, coupled only through the zero-order term, is given. On this basis, an implicit probabilistic representation for the vorticity in a three-dimensional viscous fluid (described by the Navier–Stokes equations) is carefully analysed, and a theorem of local existence and uniqueness is proved. The aim of the probabilistic representation is to provide an extension of the Lagrangian formalism from the non-viscous (Euler equations) to the viscous case. As an application, a continuation principle, similar to the Beale–Kato–Majda blow-up criterion, is proved.
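The flavour of such probabilistic representations is easiest to see in the scalar case: for the heat equation u_t = ν u_xx, the solution is u(t, x) = E[u0(x + √(2νt) Z)] with Z standard normal. A Monte Carlo check of this standard formula (not the paper's vorticity representation):

```python
import numpy as np

# Probabilistic representation of the 1D heat equation u_t = nu * u_xx:
# u(t, x) = E[ u0(x + sqrt(2*nu*t) * Z) ], Z ~ N(0, 1).

rng = np.random.default_rng(3)
nu, t, x = 0.5, 0.2, 0.3
u0 = lambda y: np.exp(-y**2)           # Gaussian initial condition

z = rng.standard_normal(1_000_000)
mc = u0(x + np.sqrt(2.0 * nu * t) * z).mean()

# Exact: Gaussian initial data stays Gaussian under the heat semigroup.
s = 4.0 * nu * t
exact = np.exp(-x**2 / (1.0 + s)) / np.sqrt(1.0 + s)
print(mc, exact)                       # agree up to Monte Carlo error
```

The paper's contribution lies in extending this kind of formula to coupled parabolic systems and to the vorticity equation, where the representation becomes implicit.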
40

Beltagy, I., Stephen Roller, Pengxiang Cheng, Katrin Erk, and Raymond J. Mooney. "Representing Meaning with a Combination of Logical and Distributional Models." Computational Linguistics 42, no. 4 (December 2016): 763–808. http://dx.doi.org/10.1162/coli_a_00266.

Abstract:
NLP tasks differ in the semantic information they require, and at this time no single semantic representation fulfills all requirements. Logic-based representations characterize sentence structure, but do not capture the graded aspect of meaning. Distributional models give graded similarity ratings for words and phrases, but do not capture sentence structure in the same detail as logic-based approaches. It has therefore been argued that the two are complementary. We adopt a hybrid approach that combines logical and distributional semantics using probabilistic logic, specifically Markov Logic Networks. In this article, we focus on the three components of a practical system: 1) logical representation focuses on representing the input problems in probabilistic logic; 2) knowledge base construction creates weighted inference rules by integrating distributional information with other sources; and 3) probabilistic inference involves solving the resulting MLN inference problems efficiently. To evaluate our approach, we use the task of textual entailment, which can utilize the strengths of both logic-based and distributional representations. In particular we focus on the SICK data set, where we achieve state-of-the-art results. We also release a lexical entailment data set of 10,213 rules extracted from the SICK data set, which is a valuable resource for evaluating lexical entailment systems.
41

Dorado, Rubén. "Statistical models for languaje representation." Revista Ontare 1, no. 1 (September 16, 2015): 29. http://dx.doi.org/10.21158/23823399.v1.n1.2013.1208.

Abstract:
This paper discusses several models for the computational representation of language. First, some n-gram models based on Markov models are introduced. Second, the family of models known as exponential models is considered; this family in particular allows several features to be incorporated into the model. Third, a recent line of research, the probabilistic Bayesian approach, is discussed. In this kind of model, language is modeled as a probability distribution, and several distributions and probabilistic processes, such as the Dirichlet distribution and the Pitman-Yor process, are used to approximate linguistic phenomena. Finally, the problem of sparseness of the language and its common solution, known as smoothing, is discussed.
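A bigram model with add-one smoothing is the smallest working instance of the n-gram models and the smoothing fix discussed here (toy corpus):

```python
from collections import Counter

# Minimal bigram language model with add-one (Laplace) smoothing, the
# standard remedy for sparseness. Corpus and vocabulary are toy examples.

corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))
V = len(vocab)

unigrams = Counter(corpus[:-1])            # context counts
bigrams = Counter(zip(corpus, corpus[1:])) # adjacent-pair counts

def p_bigram(w_prev, w):
    """P(w | w_prev) with add-one smoothing: unseen pairs get mass > 0."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

print(p_bigram("the", "cat"))  # seen bigram: (2+1)/(3+6) = 1/3
print(p_bigram("cat", "on"))   # unseen bigram: 1/(2+6), small but nonzero
```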
42

Akhlil, Khalid. "Probabilistic Solution of the General Robin Boundary Value Problem on Arbitrary Domains." International Journal of Stochastic Analysis 2012 (December 30, 2012): 1–17. http://dx.doi.org/10.1155/2012/163096.

Abstract:
Using a capacity approach and the theory of measure perturbations of Dirichlet forms, we give the probabilistic representation of general Robin boundary value problems on an arbitrary domain Ω, involving smooth measures, which give rise to a new process obtained by killing the general reflecting Brownian motion at a random time. We obtain some properties of the semigroup directly from its probabilistic representation, some convergence theorems, and also a probabilistic interpretation of the phenomena occurring on the boundary.
43

Luttermann, Malte, Tanya Braun, Ralf Möller, and Marcel Gehrke. "Colour Passing Revisited: Lifted Model Construction with Commutative Factors." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20500–20507. http://dx.doi.org/10.1609/aaai.v38i18.30034.

Abstract:
Lifted probabilistic inference exploits symmetries in a probabilistic model to allow for tractable probabilistic inference with respect to domain sizes. To apply lifted inference, a lifted representation has to be obtained, and to do so, the so-called colour passing algorithm is the state of the art. The colour passing algorithm, however, is bound to a specific inference algorithm and we found that it ignores commutativity of factors while constructing a lifted representation. We contribute a modified version of the colour passing algorithm that uses logical variables to construct a lifted representation independent of a specific inference algorithm while at the same time exploiting commutativity of factors during an offline-step. Our proposed algorithm efficiently detects more symmetries than the state of the art and thereby drastically increases compression, yielding significantly faster online query times for probabilistic inference when the resulting model is applied.
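The core of colour passing is iterated colour refinement on the factor graph; the bare-bones sketch below assumes identical, commutative potentials (argument colours are sorted, which is safe only under commutativity, the very issue the paper addresses) and uses made-up graph data:

```python
# Bare-bones colour refinement on a factor graph: variables and factors start
# with colours from their ranges/evidence and potentials, then repeatedly
# re-hash neighbourhood colour multisets until stable. Toy graph below.

variables = ["x1", "x2", "x3"]
factors = {"f1": ("x1", "x2"), "f2": ("x2", "x3")}

var_col = {v: 0 for v in variables}  # identical ranges and evidence
fac_col = {f: 0 for f in factors}    # identical potentials assumed

for _ in range(len(variables) + len(factors)):
    # Sorting argument colours treats potentials as commutative; handling
    # commutativity correctly is exactly what the paper revisits.
    fac_col = {f: hash((fac_col[f], tuple(sorted(var_col[v] for v in args))))
               for f, args in factors.items()}
    var_col = {v: hash((var_col[v],
                        tuple(sorted(fac_col[f] for f, args in factors.items()
                                     if v in args))))
               for v in variables}

print(var_col["x1"] == var_col["x3"])  # True: x1 and x3 are interchangeable
print(var_col["x1"] == var_col["x2"])  # False: x2 sits in both factors
```

Nodes sharing a final colour can be grouped into a single lifted object, which is what makes subsequent inference tractable with respect to domain size.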
44

Parisi, Francesco, and John Grant. "Knowledge Representation in Probabilistic Spatio-Temporal Knowledge Bases." Journal of Artificial Intelligence Research 55 (March 28, 2016): 743–98. http://dx.doi.org/10.1613/jair.4883.

Abstract:
We represent knowledge as integrity constraints in a formalization of probabilistic spatio-temporal knowledge bases. We start by defining the syntax and semantics of a formalization called PST knowledge bases. This definition generalizes an earlier version, called SPOT, which is a declarative framework for the representation and processing of probabilistic spatio-temporal data where probability is represented as an interval because the exact value is unknown. We augment the previous definition by adding a type of non-atomic formula that expresses integrity constraints. The result is a highly expressive formalism for knowledge representation dealing with probabilistic spatio-temporal data. We obtain complexity results both for checking the consistency of PST knowledge bases and for answering queries in PST knowledge bases, and also specify tractable cases. All the domains in the PST framework are finite, but we extend our results also to arbitrarily large finite domains.
45

López-Saldívar, Julio A., Olga V. Man’ko, Margarita A. Man’ko, and Vladimir I. Man’ko. "Bosonic Representation of Matrices and Angular Momentum Probabilistic Representation of Cyclic States." Entropy 25, no. 12 (December 6, 2023): 1628. http://dx.doi.org/10.3390/e25121628.

Abstract:
The Jordan–Schwinger map allows us to go from a matrix representation of any arbitrary Lie algebra to an oscillator (bosonic) representation. We show that any Lie algebra can be considered for this map by expressing the algebra generators in terms of the oscillator creation and annihilation operators acting in the Hilbert space of quantum oscillator states. Then, to describe quantum states in the probability representation of quantum oscillator states, we express their density operators in terms of conditional probability distributions (symplectic tomograms) or Husimi-like probability distributions. We illustrate this general scheme by examples of qubit states (spin-1/2 su(2)-group states) and even and odd Schrödinger cat states related to the other representation of su(2)-algebra (spin-j representation). The two-mode coherent-state superpositions associated with cyclic groups are studied, using the Jordan–Schwinger map. This map allows us to visualize and compare different properties of the mentioned states. For this, the su(2) coherent states for different angular momenta j are used to define a Husimi-like Q representation. Some properties of these states are explicitly presented for the cyclic groups C2 and C3. Also, their use in quantum information and computing is mentioned.
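For reference, the Jordan–Schwinger map named in this abstract is the standard construction sending the su(2) generators to two-mode boson operators:

```latex
J_{+} = a^{\dagger} b, \qquad
J_{-} = a b^{\dagger}, \qquad
J_{z} = \tfrac{1}{2}\left(a^{\dagger} a - b^{\dagger} b\right),
\qquad
[J_{z}, J_{\pm}] = \pm J_{\pm}, \quad [J_{+}, J_{-}] = 2 J_{z}.
```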
46

Szwabe, Andrzej, Pawel Misiorek, Tadeusz Janasiewicz, and Przemyslaw Walkowiak. "Holistic Entropy Reduction for Collaborative Filtering." Foundations of Computing and Decision Sciences 39, no. 3 (July 1, 2014): 209–29. http://dx.doi.org/10.2478/fcds-2014-0012.

Abstract:
We propose a collaborative filtering (CF) method that uses behavioral data provided as propositions having the RDF-compliant form of (user X, likes, item Y) triples. The method involves the application of a novel self-configuration technique for the generation of vector-space representations optimized from the information-theoretic perspective. The method, referred to as Holistic Probabilistic Modus Ponendo Ponens (HPMPP), enables reasoning about the likelihood of unknown facts. The proposed vector-space graph representation model is based on the probabilistic apparatus of quantum Information Retrieval and on the compatibility of all operators representing subjects, predicates, objects and facts. The dual graph-vector representation of the available propositional data enables the entropy-reducing transformation and supports the compositionality of mutually compatible representations. As shown in the experiments presented in the paper, the compositionality of the vector-space representations allows an HPMPP-based recommendation system to identify which of the unknown facts having the triple form (user X, likes, item Y) are the most likely to be true in a way that is both effective and, in contrast to methods proposed so far, fully automatic.
47

VENNEKENS, JOOST, MARC DENECKER, and MAURICE BRUYNOOGHE. "CP-logic: A language of causal probabilistic events and its relation to logic programming." Theory and Practice of Logic Programming 9, no. 3 (May 2009): 245–308. http://dx.doi.org/10.1017/s1471068409003767.

Abstract:
This paper develops a logical language for representing probabilistic causal laws. Our interest in such a language is two-fold. First, it can be motivated as a fundamental study of the representation of causal knowledge. Causality has an inherent dynamic aspect, which has been studied at the semantical level by Shafer in his framework of probability trees. In such a dynamic context, where the evolution of a domain over time is considered, the idea of a causal law as something which guides this evolution is quite natural. In our formalization, a set of probabilistic causal laws can be used to represent a class of probability trees in a concise, flexible and modular way. In this way, our work extends Shafer's by offering a convenient logical representation for his semantical objects. Second, this language also has relevance for the area of probabilistic logic programming. In particular, we prove that the formal semantics of a theory in our language can be equivalently defined as a probability distribution over the well-founded models of certain logic programs, rendering it formally quite similar to existing languages such as ICL or PRISM. Because we can motivate and explain our language in a completely self-contained way as a representation of probabilistic causal laws, this provides a new way of explaining the intuitions behind such probabilistic logic programs: we can say precisely which knowledge such a program expresses, in terms that are equally understandable by a non-logician. Moreover, we also obtain an additional piece of knowledge representation methodology for probabilistic logic programs, by showing how they can express probabilistic causal laws.
48

Dohmen, Taylor, Noah Topper, George Atia, Andre Beckus, Ashutosh Trivedi, and Alvaro Velasquez. "Inferring Probabilistic Reward Machines from Non-Markovian Reward Signals for Reinforcement Learning." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 574–82. http://dx.doi.org/10.1609/icaps.v32i1.19844.

Abstract:
The success of reinforcement learning in typical settings is predicated on Markovian assumptions on the reward signal by which an agent learns optimal policies. In recent years, the use of reward machines has relaxed this assumption by enabling a structured representation of non-Markovian rewards. In particular, such representations can be used to augment the state space of the underlying decision process, thereby facilitating non-Markovian reinforcement learning. However, these reward machines cannot capture the semantics of stochastic reward signals. In this paper, we make progress on this front by introducing probabilistic reward machines (PRMs) as a representation of non-Markovian stochastic rewards. We present an algorithm to learn PRMs from the underlying decision process and prove results around its correctness and convergence.
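As a data structure, a probabilistic reward machine can be sketched as a finite automaton whose transitions emit rewards drawn from a distribution; the toy machine below (invented states and labels, not the authors' learning algorithm) shows why the reward signal depends on machine state, not just the current label:

```python
import random

# Toy probabilistic reward machine: on reading a label, move to the next
# machine state and draw a reward from a transition-specific distribution,
# given as (probability, reward) pairs.

PRM = {
    ("u0", "coffee"): ("u1", [(0.8, 0.0), (0.2, 1.0)]),
    ("u1", "office"): ("u0", [(1.0, 10.0)]),
    ("u1", "coffee"): ("u1", [(1.0, 0.0)]),
    ("u0", "office"): ("u0", [(1.0, 0.0)]),
}

def step(state, label, rng=random):
    next_state, dist = PRM[(state, label)]
    reward = rng.choices([r for _, r in dist],
                         weights=[p for p, _ in dist])[0]
    return next_state, reward

state, total = "u0", 0.0
for label in ["coffee", "office", "office"]:
    state, r = step(state, label)
    total += r
print(state, total)  # "office" pays only after "coffee": non-Markovian reward
```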
49

Cetto, Ana María. "Electron Spin Correlations: Probabilistic Description and Geometric Representation." Entropy 24, no. 10 (October 9, 2022): 1439. http://dx.doi.org/10.3390/e24101439.

Abstract:
The electron spin correlation is shown to be expressible in terms of a bona fide probability distribution function with an associated geometric representation. With this aim, an analysis is presented of the probabilistic features of the spin correlation within the quantum formalism, which helps clarify the concepts of contextuality and measurement dependence. The dependence of the spin correlation on conditional probabilities allows for a clear separation between system state and measurement context; the latter determines how the probability space should be partitioned in calculating the correlation. A probability distribution function ρ(ϕ) is then proposed, which reproduces the quantum correlation for a pair of single-particle spin projections and is amenable to a simple geometric representation that gives meaning to the variable ϕ. The same procedure is shown to be applicable to the bipartite system in the singlet spin state. This endows the spin correlation with a clear probabilistic meaning and leaves the door open for a possible physical picture of the electron spin, as discussed at the end of the paper.
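The quantum prediction that any such distribution ρ(φ) must reproduce for the singlet state is the textbook correlation:

```latex
E(\hat{a}, \hat{b})
  = \left\langle (\boldsymbol{\sigma}_1 \cdot \hat{a})
                 (\boldsymbol{\sigma}_2 \cdot \hat{b}) \right\rangle_{\text{singlet}}
  = -\,\hat{a} \cdot \hat{b}
  = -\cos\theta_{ab}.
```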
50

G, Anjali. "Component Based Representation Using Probabilistic Neural Network Classifier." IOSR Journal of Computer Engineering 16, no. 6 (2014): 35–39. http://dx.doi.org/10.9790/0661-16653539.
