Academic literature on the topic 'Von Mises-Fisher prior'



Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Von Mises-Fisher prior.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Von Mises-Fisher prior":

1. Ma, He, and Weipeng Wu. "A deep clustering framework integrating pairwise constraints and a VMF mixture model." Electronic Research Archive 32, no. 6 (2024): 3952–72. http://dx.doi.org/10.3934/era.2024177.

Abstract:
We presented a novel deep generative clustering model called Variational Deep Embedding based on Pairwise constraints and the Von Mises-Fisher mixture model (VDEPV). VDEPV consists of fully connected neural networks capable of learning latent representations from raw data and accurately predicting cluster assignments. Under the assumption of a genuinely non-informative prior, VDEPV adopted a von Mises-Fisher mixture model to depict the hyperspherical interpretation of the data. We defined and established pairwise constraints by employing a random sample mining strategy and applying data augmentation techniques. These constraints enhanced the compactness of intra-cluster samples in the spherical embedding space while improving inter-cluster samples' separability. By minimizing Kullback-Leibler divergence, we formulated a clustering loss function based on pairwise constraints, which regularized the joint probability distribution of latent variables and cluster labels. Comparative experiments with other deep clustering methods demonstrated the excellent performance of VDEPV.
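As a rough illustration of the building block such hyperspherical mixture models share (not the authors' implementation), the vMF log-density and the cluster responsibilities of a vMF mixture can be sketched in Python as:

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_nu


def vmf_log_density(x, mu, kappa):
    """Log-density of vMF(mu, kappa) on the unit sphere S^{p-1}.

    log I_nu(kappa) is recovered stably as log(ive(nu, kappa)) + kappa.
    """
    p = len(mu)
    nu = p / 2 - 1
    log_c = nu * np.log(kappa) - (p / 2) * np.log(2 * np.pi) \
        - (np.log(ive(nu, kappa)) + kappa)
    return log_c + kappa * np.dot(mu, x)


def responsibilities(x, mus, kappas, weights):
    """Posterior cluster probabilities for one unit vector x under a vMF mixture."""
    log_r = np.array([np.log(w) + vmf_log_density(x, mu, k)
                      for mu, k, w in zip(mus, kappas, weights)])
    log_r -= log_r.max()          # log-sum-exp stabilization
    r = np.exp(log_r)
    return r / r.sum()
```

In a deep clustering model these responsibilities would be computed on the network's unit-normalized latent embeddings rather than on raw data.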
2. Michel, Nicolas, Giovanni Chierchia, Romain Negrel, and Jean-François Bercher. "Learning Representations on the Unit Sphere: Investigating Angular Gaussian and Von Mises-Fisher Distributions for Online Continual Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14350–58. http://dx.doi.org/10.1609/aaai.v38i13.29348.

Abstract:
We use the maximum a posteriori estimation principle for learning representations distributed on the unit sphere. We propose to use the angular Gaussian distribution, which corresponds to a Gaussian projected on the unit-sphere and derive the associated loss function. We also consider the von Mises-Fisher distribution, which is the conditional of a Gaussian in the unit-sphere. The learned representations are pushed toward fixed directions, which are the prior means of the Gaussians; allowing for a learning strategy that is resilient to data drift. This makes it suitable for online continual learning, which is the problem of training neural networks on a continuous data stream, where multiple classification tasks are presented sequentially so that data from past tasks are no longer accessible, and data from the current task can be seen only once. To address this challenging scenario, we propose a memory-based representation learning technique equipped with our new loss functions. Our approach does not require negative data or knowledge of task boundaries and performs well with smaller batch sizes while being computationally efficient. We demonstrate with extensive experiments that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries. For reproducibility, we use the same training pipeline for every compared method and share the code at https://github.com/Nicolas1203/ocl-fd.
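The "fixed directions" idea admits a compact sketch: with unit-norm class means and a shared concentration kappa, the vMF normalizers cancel and the class posterior reduces to a softmax over scaled cosine similarities. The following is an illustrative reduction, not the paper's exact loss functions:

```python
import numpy as np


def fixed_direction_loss(z, class_means, y, kappa=10.0):
    """Negative log-posterior of class y for embedding z under vMF likelihoods
    with fixed unit-norm class means and a shared concentration kappa.

    Because kappa is shared, the normalizing constants cancel and the class
    posterior is a softmax over kappa * (mu_c . z), a scaled cosine similarity.
    """
    z = z / np.linalg.norm(z)              # project the embedding onto the sphere
    logits = kappa * class_means @ z
    logits -= logits.max()                 # numerical stability
    return -(logits[y] - np.log(np.exp(logits).sum()))
```

Since the class directions are fixed in advance, the loss does not drift as new tasks arrive, which is what makes this family of objectives attractive for continual learning.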
3. Cao, Mingxuan, Kai Xie, Feng Liu, Bohao Li, Chang Wen, Jianbiao He, and Wei Zhang. "Recognition of Occluded Goods under Prior Inference Based on Generative Adversarial Network." Sensors 23, no. 6 (March 22, 2023): 3355. http://dx.doi.org/10.3390/s23063355.

Abstract:
For the recognition of goods in intelligent retail dynamic visual containers, two problems that lead to low recognition accuracy must be addressed: one is the lack of goods features caused by occlusion by the hand, and the other is the high similarity between goods. This study therefore proposes an approach for recognizing occluded goods based on a generative adversarial network combined with prior inference. With DarkNet53 as the backbone network, semantic segmentation is used to locate the occluded part in the feature extraction network, while the YOLOX decoupling head is used to obtain the detection frame. Subsequently, a generative adversarial network under prior inference is used to restore and expand the features of the occluded parts, and a weighted attention mechanism module combining multi-scale spatial attention and effective channel attention is proposed to select fine-grained features of goods. Finally, a metric learning method based on the von Mises–Fisher distribution is proposed to increase the class spacing of features and thereby distinguish them; the distinguished features are then used to recognize goods at a fine-grained level. The experimental data were all obtained from a self-made smart retail container dataset, which contains 12 types of goods, including four pairs of similar goods. Experimental results reveal that the peak signal-to-noise ratio and structural similarity under the improved prior inference are 0.7743 and 0.0183 higher, respectively, than those of the other models. Compared with other optimal models, the proposed approach improves mAP by 1.2% and recognition accuracy by 2.82%. This study solves two problems, occlusion caused by hands and the high similarity of goods, thus meeting the accuracy requirements of commodity recognition in intelligent retail and exhibiting good application prospects.
4. Fang, Jinyuan, Shangsong Liang, Zaiqiao Meng, and Maarten De Rijke. "Hyperspherical Variational Co-embedding for Attributed Networks." ACM Transactions on Information Systems 40, no. 3 (July 31, 2022): 1–36. http://dx.doi.org/10.1145/3478284.

Abstract:
Network-based information has been widely explored and exploited in the information retrieval literature. Attributed networks, consisting of nodes, edges as well as attributes describing properties of nodes, are a basic type of network-based data, and are especially useful for many applications. Examples include user profiling in social networks and item recommendation in user-item purchase networks. Learning useful and expressive representations of entities in attributed networks can provide more effective building blocks to down-stream network-based tasks such as link prediction and attribute inference. Practically, input features of attributed networks are normalized as unit directional vectors. However, most network embedding techniques ignore the spherical nature of inputs and focus on learning representations in a Gaussian or Euclidean space, which, we hypothesize, might lead to less effective representations. To obtain more effective representations of attributed networks, we investigate the problem of mapping an attributed network with unit normalized directional features into a non-Gaussian and non-Euclidean space. Specifically, we propose a hyperspherical variational co-embedding for attributed networks (HCAN), which is based on generalized variational auto-encoders for heterogeneous data with multiple types of entities. HCAN jointly learns latent embeddings for both nodes and attributes in a unified hyperspherical space such that the affinities between nodes and attributes can be captured effectively. We argue that this is a crucial feature in many real-world applications of attributed networks. Previous Gaussian network embedding algorithms break the assumption of uninformative prior, which leads to unstable results and poor performance. In contrast, HCAN embeds nodes and attributes as von Mises-Fisher distributions, and allows one to capture the uncertainty of the inferred representations. Experimental results on eight datasets show that HCAN yields better performance in a number of applications compared with nine state-of-the-art baselines.
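Hyperspherical variational models of this kind typically regularize each vMF posterior against the uniform prior on the sphere via a closed-form KL term. The sketch below implements that standard identity (an assumption about the general technique, not code from the paper):

```python
import numpy as np
from scipy.special import ive, gammaln


def kl_vmf_uniform(kappa, d):
    """KL( vMF(mu, kappa) || Uniform(S^{d-1}) ); independent of mu.

    Equals kappa * A_d(kappa) + log C_d(kappa) + log area(S^{d-1}), where
    A_d(kappa) = I_{d/2}(kappa) / I_{d/2-1}(kappa) is the mean resultant length.
    """
    nu = d / 2 - 1
    a_d = ive(d / 2, kappa) / ive(nu, kappa)        # exponential scaling cancels
    log_c = nu * np.log(kappa) - (d / 2) * np.log(2 * np.pi) \
        - (np.log(ive(nu, kappa)) + kappa)
    log_area = np.log(2.0) + (d / 2) * np.log(np.pi) - gammaln(d / 2)
    return kappa * a_d + log_c + log_area
```

The term vanishes as kappa approaches 0 (the vMF flattens into the uniform prior) and grows with concentration, penalizing over-confident posteriors.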
5. Hornik, Kurt, and Bettina Grün. "On conjugate families and Jeffreys priors for von Mises–Fisher distributions." Journal of Statistical Planning and Inference 143, no. 5 (May 2013): 992–99. http://dx.doi.org/10.1016/j.jspi.2012.11.003.

6. Lewin, Peter. "Rothbard and Mises on Interest: An Exercise in Theoretical Purity." Journal of the History of Economic Thought 19, no. 1 (1997): 141–59. http://dx.doi.org/10.1017/s1053837200004727.

Abstract:
The conventional wisdom in economics holds, with Irving Fisher, that interest is explained jointly by the forces of time preference (thrift) and productivity. One school of thought, however, has held stubbornly to the assertion that interest is best understood as a result of time preference alone, with time preference as the essential determinant of interest. This is the pure time preference approach to interest. And while most economists are inclined to dismiss this approach out of hand, the pure time preference approach has proved remarkably resilient. Part of the explanation for the persistence of rival theories can be found, not surprisingly, in terminological confusions and ambiguities, for example in deciding among candidates for essential causation. I hope in this article to improve the case for the pure time preference approach to interest by clarifying the argument. It appears that some of the confusion can be attributed to the approach of two theorists, Ludwig von Mises and Murray Rothbard, and to their connecting the time preference approach to their particular a priori methodology.
7. Andreella, Angela, and Livio Finos. "Procrustes Analysis for High-Dimensional Data." Psychometrika, May 18, 2022. http://dx.doi.org/10.1007/s11336-022-09859-5.

Abstract:
The Procrustes-based perturbation model (Goodall in J R Stat Soc Ser B Methodol 53(2):285–321, 1991) allows minimization of the Frobenius distance between matrices by similarity transformation. However, it suffers from non-identifiability, critical interpretation of the transformed matrices, and inapplicability in high-dimensional data. We provide an extension of the perturbation model focused on the high-dimensional data framework, called the ProMises (Procrustes von Mises–Fisher) model. The ill-posed and interpretability problems are solved by imposing a proper prior distribution for the orthogonal matrix parameter (i.e., the von Mises–Fisher distribution) which is a conjugate prior, resulting in a fast estimation process. Furthermore, we present the Efficient ProMises model for the high-dimensional framework, useful in neuroimaging, where the problem has much more than three dimensions. We found a great improvement in functional magnetic resonance imaging connectivity analysis because the ProMises model permits incorporation of topological brain information in the alignment's estimation process.
8. Nakhaei Rad, Najmeh, Andriette Bekker, Mohammad Arashi, and Christophe Ley. "Coming Together of Bayesian Inference and Skew Spherical Data." Frontiers in Big Data 4 (February 8, 2022). http://dx.doi.org/10.3389/fdata.2021.769726.

Abstract:
This paper presents Bayesian directional data modeling via the skew-rotationally-symmetric Fisher-von Mises-Langevin (FvML) distribution. The prior distributions for the parameters are a pivotal building block in Bayesian analysis, therefore, the impact of the proposed priors will be quantified using the Wasserstein Impact Measure (WIM) to guide the practitioner in the implementation process. For the computation of the posterior, modifications of Gibbs and slice samplings are applied for generating samples. We demonstrate the applicability of our contribution via synthetic and real data analyses. Our investigation paves the way for Bayesian analysis of skew circular and spherical data.
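Gibbs-type schemes for such directional models repeatedly need draws from a vMF distribution; Wood's (1994) rejection sampler is the usual building block. A minimal sketch of it, illustrative rather than the paper's modified Gibbs and slice samplers:

```python
import numpy as np


def sample_vmf(mu, kappa, rng):
    """One draw from vMF(mu, kappa) on S^{p-1} via Wood's (1994) rejection sampler."""
    p = len(mu)
    b = (-2 * kappa + np.sqrt(4 * kappa**2 + (p - 1) ** 2)) / (p - 1)
    x0 = (1 - b) / (1 + b)
    c = kappa * x0 + (p - 1) * np.log(1 - x0**2)
    while True:  # rejection-sample the cosine w = mu . sample
        z = rng.beta((p - 1) / 2, (p - 1) / 2)
        w = (1 - (1 + b) * z) / (1 - (1 - b) * z)
        if kappa * w + (p - 1) * np.log(1 - x0 * w) - c >= np.log(rng.uniform()):
            break
    v = rng.normal(size=p)           # random direction orthogonal to mu
    v -= (v @ mu) * mu
    v /= np.linalg.norm(v)
    return w * mu + np.sqrt(1 - w**2) * v
```

The sampler first draws the cosine of the angle to the mean direction, then attaches a uniformly random tangential component, so every sample lands exactly on the unit sphere.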

Dissertations / Theses on the topic "Von Mises-Fisher prior":

1. Hornik, Kurt, and Bettina Grün. "On conjugate families and Jeffreys priors for von Mises-Fisher distributions." Elsevier, 2013. http://dx.doi.org/10.1016/j.jspi.2012.11.003.

Abstract:
This paper discusses characteristics of standard conjugate priors and their induced posteriors in Bayesian inference for von Mises-Fisher distributions, using either the canonical natural exponential family or the more commonly employed polar coordinate parameterizations. We analyze when standard conjugate priors as well as posteriors are proper, and investigate the Jeffreys prior for the von Mises-Fisher family. Finally, we characterize the proper distributions in the standard conjugate family of the (matrix-valued) von Mises-Fisher distributions on Stiefel manifolds.
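Because the standard conjugate prior shares the exponential-family form of the vMF likelihood, the posterior update is pure hyperparameter bookkeeping. A sketch under the usual parameterization (function and variable names are illustrative):

```python
import numpy as np


def conjugate_update(nu0, tau0, X):
    """Posterior hyperparameters for the standard conjugate vMF prior.

    Prior:      pi(mu, kappa) proportional to C_p(kappa)^nu0 * exp(kappa * tau0 . mu)
    Likelihood: n unit rows X give C_p(kappa)^n * exp(kappa * (sum_i x_i) . mu)
    Posterior:  same form with nu = nu0 + n and tau = tau0 + sum_i x_i.
    """
    X = np.atleast_2d(X)
    return nu0 + len(X), np.asarray(tau0) + X.sum(axis=0)
```

Here nu0 acts as a pseudo sample size and tau0 as a resultant vector whose direction is the prior mean and whose length encodes prior strength; the paper's analysis concerns when such priors and posteriors are proper.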
2. Traullé, Benjamin. "Techniques d'échantillonnage pour la déconvolution aveugle bayésienne." Electronic Thesis or Diss., Toulouse, ISAE, 2024. http://www.theses.fr/2024ESAE0004.

Abstract:
This thesis addresses two main challenges in Bayesian blind deconvolution using Markov chain Monte Carlo (MCMC) methods. First, Gaussian-type priors are commonly used in blind deconvolution, but they do not resolve the scale ambiguity. This ambiguity hampers the convergence of classical MCMC algorithms, which exhibit slow sampling of the scale, and complicates the design of scale-free estimators, which are desirable in practice. To overcome this limitation, a von Mises–Fisher prior is proposed that effectively removes the scale ambiguity. This approach has already demonstrated its regularizing effect in other inverse problems, including optimization-based blind deconvolution. The advantages of this prior within MCMC algorithms, compared to conventional Gaussian priors, are discussed in low dimensions from both theoretical and experimental standpoints. However, the multimodal nature of the posterior remains and can still hinder exploration of the state space, particularly when using algorithms such as the Gibbs sampler. These poor mixing properties lead to suboptimal inter- and intra-mode exploration and can limit the usefulness of Bayesian estimators at this stage. To address this issue, we propose an original approach based on a reversible jump MCMC (RJMCMC) algorithm, which considerably improves exploration of the state space by generating new states in high-probability regions identified in a preliminary stage. The effectiveness of the RJMCMC algorithm is demonstrated empirically on highly multimodal posteriors, in low dimensions, for both Gaussian and von Mises–Fisher priors. Finally, the observed behavior of RJMCMC in increasing dimensions supports the suitability of such an approach for sampling multimodal distributions in Bayesian blind deconvolution.
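The mechanism can be sketched in a hypothetical 1-D setting: the convolution likelihood is invariant under the rescaling (h, x) to (a*h, x/a), but a vMF prior confines the kernel to the unit sphere, so no free scale remains. All names and the Gaussian noise model below are assumptions for illustration, not the thesis's exact setup:

```python
import numpy as np


def log_posterior(h_unit, x, y, mu0, kappa0, noise_var):
    """Unnormalized log-posterior of a unit-norm blind-deconvolution kernel.

    Gaussian likelihood on y approx h * x, plus a vMF log-prior
    kappa0 * (mu0 . h_unit). Restricting h_unit to the unit sphere removes
    the scale ambiguity that a Gaussian prior on h leaves open.
    """
    resid = y - np.convolve(h_unit, x, mode="full")
    return -0.5 * resid @ resid / noise_var + kappa0 * np.dot(mu0, h_unit)
```

An MCMC sampler would then move h_unit over the sphere (for instance with vMF proposals) and x over an unconstrained space, with no slowly mixing scale variable.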

Conference papers on the topic "Von Mises-Fisher prior":

1. Traulle, Benjamin, Stephanie Bidon, and Damien Roque. "A von Mises–Fisher prior to Remove Scale Ambiguity in Blind Deconvolution." In 2022 30th European Signal Processing Conference (EUSIPCO). IEEE, 2022. http://dx.doi.org/10.23919/eusipco55093.2022.9909710.

2. Chernyaev, Sergey, and Oleg Lukashenko. "Comparative Analysis of Methods for Segmentation of FMRI Images Based on Markov Random Fields." In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-1-143-147.

Abstract:
The problem of segmentation of three-dimensional fMRI images based on the Bayesian approach is considered, where a Markov random field is used as the prior distribution and the von Mises-Fisher distribution is used as the observation model. The main difficulty in applying this approach in practice is estimating the model parameters. In this paper, we review the algorithms HMRF-MCEM, HMRF-EM, and GrabCut, which implement this statistical model and estimate its parameters without the use of labeled training data. The methods HMRF-EM and GrabCut were introduced in conjunction with other statistical models, but after a small modification they can be used with the von Mises-Fisher distribution. A comparative study was carried out through experiments on both synthetic data generated from the statistical model and real fMRI data.
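As a toy stand-in for these schemes (a single ICM update on a 1-D chain of voxels rather than EM or Gibbs on a 3-D volume), combining a vMF observation model with a Potts/MRF label prior looks like the following; all names are illustrative:

```python
import numpy as np
from scipy.special import ive


def vmf_log_density(x, mu, kappa):
    """Log-density of vMF(mu, kappa); ive gives a stable Bessel evaluation."""
    p = len(mu)
    nu = p / 2 - 1
    log_c = nu * np.log(kappa) - (p / 2) * np.log(2 * np.pi) \
        - (np.log(ive(nu, kappa)) + kappa)
    return log_c + kappa * mu @ x


def icm_update(label_field, features, mus, kappas, beta, i):
    """One ICM update for voxel i: vMF log-likelihood of the normalized
    feature vector plus a Potts term beta * (#neighbors sharing the label)."""
    neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(label_field)]
    x = features[i] / np.linalg.norm(features[i])
    scores = [vmf_log_density(x, mu, k)
              + beta * sum(label_field[j] == c for j in neighbors)
              for c, (mu, k) in enumerate(zip(mus, kappas))]
    return int(np.argmax(scores))
```

Sweeping such updates over all voxels until the labels stabilize yields a (local) MAP segmentation; the algorithms compared in the paper additionally estimate mus, kappas, and beta from the data.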
3. Jin, Yujie, Xu Chu, Yasha Wang, and Wenwu Zhu. "Domain Generalization through the Lens of Angular Invariance." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/139.

Abstract:
Domain generalization (DG) aims at generalizing a classifier trained on multiple source domains to an unseen target domain with domain shift. A pervasive theme in the existing DG literature is domain-invariant representation learning with various invariance assumptions. However, prior works restrict themselves to an impractical assumption for real-world challenges: if a mapping induced by a deep neural network (DNN) could align the source domains well, then such a mapping aligns a target domain as well. In this paper, we simply take DNNs as feature extractors to relax the requirement of distribution alignment. Specifically, we put forward a novel angular invariance and the accompanying norm-shift assumption. Based on the proposed notion of invariance, we propose a novel deep DG method dubbed Angular Invariance Domain Generalization Network (AIDGN). The optimization objective of AIDGN is developed with a von Mises-Fisher (vMF) mixture model. Extensive experiments on multiple DG benchmark datasets validate the effectiveness of the proposed AIDGN method.
