Academic literature on the topic 'A priori de von Mises-Fisher' (the von Mises-Fisher prior)

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'A priori de von Mises-Fisher.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "A priori de von Mises-Fisher":

1

Hornik, Kurt, and Bettina Grün. "On conjugate families and Jeffreys priors for von Mises–Fisher distributions." Journal of Statistical Planning and Inference 143, no. 5 (May 2013): 992–99. http://dx.doi.org/10.1016/j.jspi.2012.11.003.

2

Ma, He, and Weipeng Wu. "A deep clustering framework integrating pairwise constraints and a VMF mixture model." Electronic Research Archive 32, no. 6 (2024): 3952–72. http://dx.doi.org/10.3934/era.2024177.

Abstract:
We presented a novel deep generative clustering model called Variational Deep Embedding based on Pairwise constraints and the Von Mises-Fisher mixture model (VDEPV). VDEPV consists of fully connected neural networks capable of learning latent representations from raw data and accurately predicting cluster assignments. Under the assumption of a genuinely non-informative prior, VDEPV adopted a von Mises-Fisher mixture model to depict the hyperspherical interpretation of the data. We defined and established pairwise constraints by employing a random sample mining strategy and applying data augmentation techniques. These constraints enhanced the compactness of intra-cluster samples in the spherical embedding space while improving inter-cluster samples' separability. By minimizing Kullback-Leibler divergence, we formulated a clustering loss function based on pairwise constraints, which regularized the joint probability distribution of latent variables and cluster labels. Comparative experiments with other deep clustering methods demonstrated the excellent performance of VDEPV.
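As background for entries like this one, the sketch below shows the von Mises-Fisher log-density on the unit hypersphere that such mixture models build on; it is a minimal Python illustration only (the function name, toy vectors and the value of kappa are ours, not taken from the paper).

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function of the first kind

def vmf_log_density(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit sphere S^{d-1}.

    x, mu : unit-norm vectors of dimension d; kappa > 0 is the concentration.
    f(x; mu, kappa) = C_d(kappa) * exp(kappa * mu^T x), with
    C_d(kappa) = kappa^{d/2-1} / ((2*pi)^{d/2} * I_{d/2-1}(kappa)).
    """
    d = x.shape[-1]
    nu = d / 2.0 - 1.0
    # log I_nu(kappa) computed stably via the scaled Bessel function ive = iv * exp(-kappa)
    log_bessel = np.log(ive(nu, kappa)) + kappa
    log_c = nu * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) - log_bessel
    return log_c + kappa * np.dot(mu, x)

# toy usage: a point close to the mean direction vs. the antipodal point
mu = np.array([1.0, 0.0, 0.0])
print(vmf_log_density(np.array([1.0, 0.0, 0.0]), mu, kappa=10.0))
print(vmf_log_density(np.array([-1.0, 0.0, 0.0]), mu, kappa=10.0))
```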
3

Lewin, Peter. "Rothbard and Mises on Interest: An Exercise in Theoretical Purity." Journal of the History of Economic Thought 19, no. 1 (1997): 141–59. http://dx.doi.org/10.1017/s1053837200004727.

Abstract:
The conventional wisdom in economics holds, with Irving Fisher, that interest is explained jointly by the forces of time preference (thrift) and productivity. One school of thought, however, has held stubbornly to the assertion that interest is best understood as a result of time preference alone, time preference as the essential determinant of interest. This is the pure time preference approach to interest. And while most economists are inclined to dismiss this approach out of hand, the pure time preference approach has proved remarkably resilient. Part of the explanation for the persistence of rival theories can be found, not surprisingly, in terminological confusions and ambiguities, for example in deciding among candidates for essential causation. I hope in this article to improve the case for the pure time preference approach to interest by clarifying the argument. It appears that some of the confusion can be attributed to the approach of two theorists, Ludwig von Mises and Murray Rothbard, and to their connecting the time preference approach to their particular a priori methodology.
4

Barrotta, Pierluigi. "A Neo-Kantian Critique of Von Mises's Epistemology." Economics and Philosophy 12, no. 1 (April 1996): 51–66. http://dx.doi.org/10.1017/s0266267100003710.

Abstract:
More than many other Austrians, Mises tried to found aprioristic methodology on a well defined and developed epistemology. Although references to Kant are scattered rather unsystematically throughout his works, he nevertheless used an unequivocal Kantian terminology. He explicitly defended the existence of ‘a priori knowledge’, ‘synthetic a priori propositions’, ‘the category of action’, and so forth.
5

Scheall, Scott. "HAYEK THE APRIORIST?" Journal of the History of Economic Thought 37, no. 1 (February 12, 2015): 87–110. http://dx.doi.org/10.1017/s1053837214000765.

Abstract:
The paper argues that Terence Hutchison’s (1981) argument that the young F. A. Hayek maintained a methodological position markedly similar to that of Ludwig von Mises fails to support the relevant conclusion. The first problem with Hutchison’s argument is that it is not clear exactly what conclusion he meant to establish. Mises (in)famously maintained a rather extreme methodological apriorism. However, the concept of a priori knowledge that emerges from Hayek’s epistemology as implied in his work on theoretical psychology is the opposite of Mises’s treatment of a priori knowledge. Thus, it cannot be maintained—if, indeed, Hutchison meant to establish—that Hayek was a Misesian apriorist during the years in question. What’s more, the paper shows that Hutchison’s argument does not support a weaker interpretation of the relevant conclusion. There are alternative interpretations of Hutchison’s evidence, more charitable and more consistent with Hayek’s epistemology, which undermine Hutchison’s conclusion.
6

Michel, Nicolas, Giovanni Chierchia, Romain Negrel, and Jean-François Bercher. "Learning Representations on the Unit Sphere: Investigating Angular Gaussian and Von Mises-Fisher Distributions for Online Continual Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14350–58. http://dx.doi.org/10.1609/aaai.v38i13.29348.

Abstract:
We use the maximum a posteriori estimation principle for learning representations distributed on the unit sphere. We propose to use the angular Gaussian distribution, which corresponds to a Gaussian projected on the unit-sphere and derive the associated loss function. We also consider the von Mises-Fisher distribution, which is the conditional of a Gaussian in the unit-sphere. The learned representations are pushed toward fixed directions, which are the prior means of the Gaussians; allowing for a learning strategy that is resilient to data drift. This makes it suitable for online continual learning, which is the problem of training neural networks on a continuous data stream, where multiple classification tasks are presented sequentially so that data from past tasks are no longer accessible, and data from the current task can be seen only once. To address this challenging scenario, we propose a memory-based representation learning technique equipped with our new loss functions. Our approach does not require negative data or knowledge of task boundaries and performs well with smaller batch sizes while being computationally efficient. We demonstrate with extensive experiments that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries. For reproducibility, we use the same training pipeline for every compared method and share the code at https://github.com/Nicolas1203/ocl-fd.
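The idea of pulling representations toward fixed directions on the sphere can be illustrated with a generic vMF-style loss using fixed unit-norm class prototypes; the sketch below is an illustration under that assumption, not the exact loss derived in the paper (the function name, prototypes and the shared concentration kappa are placeholders).

```python
import numpy as np

def fixed_prototype_vmf_loss(z, label, prototypes, kappa=10.0):
    """Toy cross-entropy-style loss pushing a representation z toward a fixed unit-norm
    class prototype, using vMF log-likelihoods with a shared concentration kappa.
    With equal kappa and unit-norm prototypes, the vMF normalizing constants cancel,
    so the class posterior reduces to a softmax over kappa * cos(angle)."""
    z = z / np.linalg.norm(z)                      # project the representation onto the unit sphere
    logits = kappa * prototypes @ z                # kappa * cosine similarity with every prototype
    log_probs = logits - np.log(np.sum(np.exp(logits - logits.max()))) - logits.max()
    return -log_probs[label]

# toy usage with 3 classes in R^4 (rows of `protos` are unit-norm prototypes)
protos = np.eye(4)[:3]
print(fixed_prototype_vmf_loss(np.array([0.9, 0.1, 0.0, 0.0]), label=0, prototypes=protos))
```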
7

Chang-Chien, Shou-Jen, Wajid Ali, and Miin-Shen Yang. "A Learning-Based EM Clustering for Circular Data with Unknown Number of Clusters." Proceedings of Engineering and Technology Innovation 15 (April 27, 2020): 42–51. http://dx.doi.org/10.46604/peti.2020.5241.

Abstract:
Clustering is a method for analyzing grouped data. Circular data were well used in various applications, such as wind directions, departure directions of migrating birds or animals, etc. The expectation & maximization (EM) algorithm on mixtures of von Mises distributions is popularly used for clustering circular data. In general, the EM algorithm is sensitive to initials and not robust to outliers in which it is also necessary to give a number of clusters a priori. In this paper, we consider a learning-based schema for EM, and then propose a learning-based EM algorithm on mixtures of von Mises distributions for clustering grouped circular data. The proposed clustering method is without any initial and robust to outliers with automatically finding the number of clusters. Some numerical and real data sets are used to compare the proposed algorithm with existing methods. Experimental results and comparisons actually demonstrate these good aspects of effectiveness and superiority of the proposed learning-based EM algorithm.
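A plain EM pass for a mixture of von Mises distributions on the circle, sketched below, may help situate what the learning-based schema improves on; unlike the proposed algorithm it requires the number of components a priori, and it uses a common closed-form approximation for the concentration update (all names and values are illustrative).

```python
import numpy as np
from scipy.stats import vonmises

def em_vonmises_mixture(theta, n_components=3, n_iter=100, seed=0):
    """Basic EM for a mixture of von Mises distributions on the circle (angles in radians).
    The concentration update uses the rough approximation kappa ~ rbar*(2 - rbar^2)/(1 - rbar^2)."""
    rng = np.random.default_rng(seed)
    n = len(theta)
    weights = np.full(n_components, 1.0 / n_components)
    means = rng.uniform(-np.pi, np.pi, n_components)
    kappas = np.full(n_components, 1.0)

    for _ in range(n_iter):
        # E-step: responsibilities proportional to weight * von Mises density
        dens = np.stack([w * vonmises.pdf(theta, k, loc=m)
                         for w, m, k in zip(weights, means, kappas)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: weighted circular means and mean resultant lengths
        nk = resp.sum(axis=0)
        s = resp.T @ np.sin(theta)
        c = resp.T @ np.cos(theta)
        weights = nk / n
        means = np.arctan2(s, c)
        rbar = np.clip(np.sqrt(s**2 + c**2) / nk, 1e-6, 1 - 1e-6)
        kappas = rbar * (2.0 - rbar**2) / (1.0 - rbar**2)
    return weights, means, kappas

# toy usage: two directional clusters around 0 and pi/2
data = np.concatenate([np.random.vonmises(0.0, 8.0, 200),
                       np.random.vonmises(np.pi / 2, 8.0, 200)])
print(em_vonmises_mixture(data, n_components=2))
```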
8

Le, Canh V., Phuc L. H. Ho, and Hoa T. Nguyen. "Airy-based equilibrium mesh-free method for static limit analysis of plane problems." Vietnam Journal of Mechanics 38, no. 3 (September 25, 2016): 167–79. http://dx.doi.org/10.15625/0866-7136/38/3/5961.

Abstract:
This paper presents a numerical procedure for lower bound limit analysis of plane problems governed by von Mises yield criterion. The stress fields are calculated based on the Airy function which is approximated using the moving least squares technique. With the use of the Airy-based equilibrium mesh-free method, equilibrium equations are ensured to be automatically satisfied a priori, and the size of the resulting optimization problem is reduced significantly. Various plane strain and plane stress with arbitrary geometries and boundary conditions are examined to illustrate the performance of the proposed procedure.
9

Robitaille, Christian. "La question de la connaissance a priori en sciences sociales : les points de vue de Simiand, Mises et Simmel." Revue de philosophie économique Vol. 24, no. 2 (December 22, 2023): 63–91. http://dx.doi.org/10.3917/rpec.242.0063.

Abstract:
Contemporary social sciences are characterized by the abandonment of the quest for genuine, non-relativistic a priori knowledge. On the one hand, quantitative methods and methodological positivism generally reject the possibility of acquiring this type of knowledge. On the other hand, qualitative methods and hermeneutic approaches, when they do not explicitly seek a posteriori knowledge, are generally characterized by a skeptical apriorism according to which the adoption of any perspective or theoretical framework whatsoever is considered valid. This article proposes to assess three different perspectives on the possibility of a priori knowledge in the social sciences, namely those of François Simiand (a critic of apriorism), Ludwig von Mises (a proponent of praxeological apriorism) and Georg Simmel (the initiator of a formalist apriorism). This comparative assessment highlights the scope and limits of apriorism with regard to the acquisition of knowledge in the social sciences. In the final analysis, it restores apriorism to its former standing and thereby facilitates its eventual return in a form that would escape today's relativism.
10

Strzalka, Carsten, and Manfred Zehn. "The Influence of Loading Position in A Priori High Stress Detection using Mode Superposition." Reports in Mechanical Engineering 1, no. 1 (October 24, 2020): 93–102. http://dx.doi.org/10.31181/rme200101093s.

Abstract:
For the analysis of structural components, the finite element method (FEM) has become the most widely applied tool for numerical stress- and subsequent durability analyses. In industrial application advanced FE-models result in high numbers of degrees of freedom, making dynamic analyses time-consuming and expensive. As detailed finite element models are necessary for accurate stress results, the resulting data and connected numerical effort from dynamic stress analysis can be high. For the reduction of that effort, sophisticated methods have been developed to limit numerical calculations and processing of data to only small fractions of the global model. Therefore, detailed knowledge of the position of a component’s highly stressed areas is of great advantage for any present or subsequent analysis steps. In this paper an efficient method for the a priori detection of highly stressed areas of force-excited components is presented, based on modal stress superposition. As the component’s dynamic response and corresponding stress is always a function of its excitation, special attention is paid to the influence of the loading position. Based on the frequency domain solution of the modally decoupled equations of motion, a coefficient for a priori weighted superposition of modal von Mises stress fields is developed and validated on a simply supported cantilever beam structure with variable loading positions. The proposed approach is then applied to a simplified industrial model of a twist beam rear axle.
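The mode-superposition step described above can be sketched generically: modal stress components are summed with their modal coordinates, and the (nonlinear) von Mises stress is then evaluated on the total field. The weighting coefficient proposed in the paper is not reproduced here, and the array shapes are illustrative.

```python
import numpy as np

def von_mises_plane_stress(sxx, syy, sxy):
    """Equivalent von Mises stress for a plane-stress state."""
    return np.sqrt(sxx**2 - sxx * syy + syy**2 + 3.0 * sxy**2)

def superposed_stress(modal_stresses, modal_coords):
    """Generic mode superposition: sum modal stress *components* weighted by the modal
    coordinates, then evaluate the von Mises stress on the resulting total field.

    modal_stresses : array of shape (n_modes, n_points, 3) with columns (sxx, syy, sxy)
    modal_coords   : array of shape (n_modes,)
    """
    total = np.tensordot(modal_coords, modal_stresses, axes=1)   # (n_points, 3)
    return von_mises_plane_stress(total[:, 0], total[:, 1], total[:, 2])

# toy usage: 2 modes, 4 evaluation points
modes = np.random.default_rng(0).normal(size=(2, 4, 3))
print(superposed_stress(modes, np.array([1.0, 0.5])))
```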

Dissertations / Theses on the topic "A priori de von Mises-Fisher":

1

Hornik, Kurt, and Bettina Grün. "On conjugate families and Jeffreys priors for von Mises-Fisher distributions." Elsevier, 2013. http://dx.doi.org/10.1016/j.jspi.2012.11.003.

Abstract:
This paper discusses characteristics of standard conjugate priors and their induced posteriors in Bayesian inference for von Mises-Fisher distributions, using either the canonical natural exponential family or the more commonly employed polar coordinate parameterizations. We analyze when standard conjugate priors as well as posteriors are proper, and investigate the Jeffreys prior for the von Mises-Fisher family. Finally, we characterize the proper distributions in the standard conjugate family of the (matrix-valued) von Mises-Fisher distributions on Stiefel manifolds.
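For orientation, the textbook exponential-family construction behind such conjugate families is sketched below; the parameterization is the generic one and need not match the paper's notation.

```latex
% Sketch of the standard exponential-family conjugate construction for the vMF likelihood.
\[
  f(x \mid \mu, \kappa) \;=\; C_d(\kappa)\, e^{\kappa \mu^\top x},
  \qquad
  C_d(\kappa) \;=\; \frac{\kappa^{d/2-1}}{(2\pi)^{d/2}\, I_{d/2-1}(\kappa)},
  \qquad x, \mu \in \mathbb{S}^{d-1},\ \kappa \ge 0 .
\]
\[
  \pi(\mu, \kappa \mid m_0, \beta_0, \nu_0) \;\propto\; C_d(\kappa)^{\nu_0}\, e^{\beta_0 \kappa\, m_0^\top \mu},
\]
% so after observing x_1, ..., x_n the posterior stays in the same family, with
% nu_0 -> nu_0 + n and beta_0 m_0 -> beta_0 m_0 + sum_i x_i (direction renormalized).
```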
2

Traullé, Benjamin. "Techniques d’échantillonnage pour la déconvolution aveugle bayésienne." Electronic Thesis or Diss., Toulouse, ISAE, 2024. http://www.theses.fr/2024ESAE0004.

Abstract:
Ces travaux de thèse abordent deux défis principaux dans le domaine de la déconvolution aveugle (DA) bayésienne via l’utilisation de méthodes Markov chain Monte Carlo (MCMC). Tout d’abord, en DA, il est courant d’utiliser des lois a priori de type gaussien. Cependant, ces lois ne résolvent pas le problème de l’ambiguïté d’échelle. Cette dernière pose des difficultés à la convergence des algorithmes MCMC classiques, qui présentent un échantillonnage lent de l’échelle, et complique la conception d’estimateurs sans échelle, pourtant souhaitables en pratique. Pour surmonter cette limitation, un a priori de von Mises-Fisher est proposé et permet de supprimer efficacement l’ambiguïté d’échelle. Cette approche a déjà montré son effet régularisant dans d’autres problèmes inverses, notamment la DA basée sur l’optimisation. Les avantages de cet a priori au sein des algorithmes MCMC par rapport aux a priori gaussiens conventionnels sont discutés en faibles dimensions tant d’un point de vue théorique qu’expérimental. Cependant, la nature multimodale des postérieures demeure et peut encore entraver l’exploration de l’espace d’état, en particulier lors de l’utilisation d’algorithmes tel que l’échantillonneur de Gibbs. Ces mauvaises propriétés de mélange entraînent des performances sous-optimales en matière d’exploration inter- et intra-mode et peuvent rendre peu pertinente l’utilisation d’estimateurs bayésiens à ce stade. Pour résoudre ce problème, nous proposons une approche originale basée sur l’utilisation d’un algorithme reversible jump MCMC (RJMCMC) qui améliore considérablement l’exploration de l’espace en générant de nouveaux états dans des régions à forte probabilité, qui ont été identifiées dans une étape préliminaire. L’efficacité de l’algorithme RJMCMC est démontrée empiriquement dans le cadre de postérieures fortement multimodales, en petites dimensions, tant dans le cas d’a priori gaussiens, que d’a priori de von Mises–Fisher. Enfin, le comportement observé du RJMCMC en dimensions croissantes semble conforter l’idée d’une telle approche pour échantillonner des distributions multimodales dans le cadre de la DA bayésienne
These thesis works address two main challenges in the field of Bayesian blind deconvolution using Markov chain Monte Carlo (MCMC) methods. Firstly, in Bayesian blind deconvolution, it is common to use Gaussian-type priors. However, these priors do not solve the scale ambiguity problem. The latter poses difficulties in the convergence of classical MCMC algorithms, which exhibit slow scale sampling, and complicates the design of scale-free estimators. To overcome this limitation, a von Mises–Fisher prior is proposed, which alleviates the scale ambiguity. This approach has already demonstrated its regularization effect in other inverse problems, including optimization-based blind deconvolution. The advantages of this prior within MCMC algorithms are discussed compared to conventional Gaussian priors, both theoretically and experimentally, especially in low dimensions. However, the multimodal nature of the posterior distribution still poses challenges and decreases the quality of the exploration of the state space, particularly when using algorithms such as the Gibbs sampler. These poor mixing properties lead to suboptimal performance in terms of inter-mode and intra-mode exploration and can limit the usefulness of Bayesian estimators at this stage. To address this issue, we propose an original approach based on the use of a reversible jump MCMC (RJMCMC) algorithm, which significantly improves the exploration of the state space by generating new states in high probability regions identified in a preliminary stage. The effectiveness of the RJMCMC algorithm is empirically demonstrated in the context of highly multimodal posteriors, particularly in low dimensions, for both Gaussian and von Mises–Fisher priors. Furthermore, the observed behavior of RJMCMC in increasing dimensions provides support for the applicability of this approach for sampling multimodal distributions in the context of Bayesian blind deconvolution
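The scale ambiguity mentioned above is easy to see numerically: scaling the filter up and the source down by the same factor leaves the observation unchanged, which is precisely what a prior constraining the filter to the unit sphere (such as a von Mises-Fisher prior) removes. The toy check below is an illustration only, not the thesis's model.

```python
import numpy as np

# Scale ambiguity in blind deconvolution: (a*h, x/a) yields the same observation as (h, x).
rng = np.random.default_rng(0)
h = rng.normal(size=5)        # unknown filter
x = rng.normal(size=50)       # unknown source signal
a = 3.7                       # arbitrary nonzero scale factor

y1 = np.convolve(h, x)
y2 = np.convolve(a * h, x / a)
print(np.allclose(y1, y2))    # True: the likelihood alone cannot identify the scale of h
```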
3

Mismer, Romain. "Convergence of spike and Slab Bayesian posterior distributions in some high dimensional models." Thesis, Sorbonne Paris Cité, 2019. http://www.theses.fr/2019USPCC064.

Abstract:
On s'intéresse d'abord au modèle de suite gaussienne parcimonieuse. Une approche bayésienne empirique sur l'a priori Spike and Slab permet d'obtenir la convergence à vitesse minimax du moment d'ordre 2 a posteriori pour des Slabs Cauchy et on prouve un résultat de sous-optimalité pour un Slab Laplace. Un meilleur choix de Slab permet d'obtenir la constante exacte. Dans le modèle d'estimation de densité, un a priori arbre de Polya tel que les variables de l'arbre ont une distribution de type Spike and Slab donne la convergence à vitesse minimax et adaptative pour la norme sup de la loi a posteriori et un théorème Bernstein-von Mises non paramétrique
The first main focus is the sparse Gaussian sequence model. An Empirical Bayes approach is used on the Spike and Slab prior to derive minimax convergence of the posterior second moment for Cauchy Slabs, and a suboptimality result for the Laplace Slab is proved. Next, with a special choice of Slab, convergence with the sharp minimax constant is derived. The second main focus is the density estimation model using a special Polya tree prior where the variables in the tree construction follow a Spike and Slab type distribution. Adaptive minimax convergence in the supremum norm of the posterior distribution as well as a nonparametric Bernstein-von Mises theorem are obtained.
4

Parr, Bouberima Wafia. "Modèles de mélange de von Mises-Fisher." Phd thesis, Université René Descartes - Paris V, 2013. http://tel.archives-ouvertes.fr/tel-00987196.

Abstract:
In contemporary life, directional data are present in most fields, in several forms, under different aspects and in large sizes/dimensions, hence the need for effective methods for studying the problems posed in this area. To address the problem of clustering, the probabilistic approach has become a classical one, resting on a simple idea: since the g classes differ from one another, each is assumed to follow a known probability distribution whose parameters generally differ from one class to another; one then speaks of a mixture model of probability distributions. Under this assumption, the initial data are regarded as a sample of a d-dimensional random variable whose density is a mixture of g probability distributions, each specific to a class. In this thesis we are interested in the clustering of directional data, using the classification methods best suited to this setting under two approaches, geometric and probabilistic: in the first, by exploring and comparing kmeans-type algorithms; in the second, by tackling directly the estimation of the parameters from which a partition is deduced through maximization of the log-likelihood, represented by the EM algorithm. For the latter approach, we took up the mixture model of von Mises-Fisher distributions and proposed variants of the EMvMF algorithm, namely CEMvMF, SEMvMF and SAEMvMF; in the same context, we addressed the problem of finding the number of components and choosing the mixture model, using several information criteria: Bic, Aic, Aic3, Aic4, Aicc, Aicu, Caic, Clc, Icl-Bic, Ll, Icl, Awe. We conclude our study with a comparison of the vMF model with a simpler exponential model; originally, this model assumes that the data set is distributed on a hypersphere of predefined radius ρ greater than or equal to one. We propose an improvement of the exponential model based on an estimation step for the radius ρ within the NEM algorithm. In most of our applications this allowed us to obtain better results, through new variants of the NEM algorithm, namely NEMρ, NCEMρ and NSEMρ. The algorithms proposed in this work were tested on a variety of textual data, genetic data and data simulated from the von Mises-Fisher (vMF) model. These applications gave us a better understanding of the different approaches studied throughout this thesis.
5

Parr, Bouberima Wafia. "Modèles de mélange de von Mises-Fisher." Electronic Thesis or Diss., Paris 5, 2013. http://www.theses.fr/2013PA05S028.

Abstract:
Dans la vie actuelle, les données directionnelles sont présentes dans la majorité des domaines, sous plusieurs formes, différents aspects et de grandes tailles/dimensions, d'où le besoin de méthodes d'étude efficaces des problématiques posées dans ce domaine. Pour aborder le problème de la classification automatique, l'approche probabiliste est devenue une approche classique, reposant sur l'idée simple : étant donné que les g classes sont différentes entre elles, on suppose que chacune suit une loi de probabilité connue, dont les paramètres sont en général différents d'une classe à une autre; on parle alors de modèle de mélange de lois de probabilités. Sous cette hypothèse, les données initiales sont considérées comme un échantillon d'une variable aléatoire d-dimensionnelle dont la densité est un mélange de g distributions de probabilités spécifiques à chaque classe. Dans cette thèse nous nous sommes intéressés à la classification automatique de données directionnelles, en utilisant des méthodes de classification les mieux adaptées sous deux approches: géométrique et probabiliste. Dans la première, en explorant et comparant des algorithmes de type kmeans; dans la seconde, en s'attaquant directement à l'estimation des paramètres à partir desquels se déduit une partition à travers la maximisation de la log-vraisemblance, représentée par l'algorithme EM. Pour cette dernière approche, nous avons repris le modèle de mélange de distributions de von Mises-Fisher, nous avons proposé des variantes de l'algorithme EMvMF, soit CEMvMF, le SEMvMF et le SAEMvMF, dans le même contexte, nous avons traité le problème de recherche du nombre de composants et le choix du modèle de mélange, ceci en utilisant quelques critères d'information : Bic, Aic, Aic3, Aic4, Aicc, Aicu, Caic, Clc, Icl-Bic, Ll, Icl, Awe. Nous terminons notre étude par une comparaison du modèle vMF avec un modèle exponentiel plus simple ; à l'origine ce modèle part du principe que l'ensemble des données est distribué sur une hypersphère de rayon ρ prédéfini, supérieur ou égal à un. Nous proposons une amélioration du modèle exponentiel qui sera basé sur une étape estimation du rayon ρ au cours de l'algorithme NEM. Ceci nous a permis dans la plupart de nos applications de trouver de meilleurs résultats; en proposant de nouvelles variantes de l'algorithme NEM qui sont le NEMρ , NCEMρ et le NSEMρ. L'expérimentation des algorithmes proposés dans ce travail a été faite sur une variété de données textuelles, de données génétiques et de données simulées suivant le modèle de von Mises-Fisher (vMF). Ces applications nous ont permis une meilleure compréhension des différentes approches étudiées le long de cette thèse
In contemporary life directional data are present in most areas, in several forms, aspects and large sizes / dimensions; hence the need for effective methods of studying the existing problems in these fields. To solve the problem of clustering, the probabilistic approach has become a classic approach, based on the simple idea: since the g classes are different from each other, it is assumed that each class follows a distribution of probability, whose parameters are generally different from one class to another. We are concerned here with mixture modelling. Under this assumption, the initial data are considered as a sample of a d-dimensional random variable whose density is a mixture of g distributions of probability where each one is specific to a class. In this thesis we are interested in the clustering of directional data that has been treated using known classification methods which are the most appropriate for this case. In which both approaches the geometric and the probabilistic one have been considered. In the first, some kmeans like algorithms have been explored and considered. In the second, by directly handling the estimation of parameters from which is deduced the partition maximizing the log-likelihood, this approach is represented by the EM algorithm. For the latter approach, model mixtures of distributions of von Mises-Fisher have been used, proposing variants of the EM algorithm: EMvMF, the CEMvMF, the SEMvMF and the SAEMvMF. In the same context, the problem of finding the number of the components in the mixture and the choice of the model, using some information criteria {Bic, Aic, Aic3, Aic4, AICC, AICU, CAIC, Clc, Icl-Bic, LI, Icl, Awe} have been discussed. The study concludes with a comparison of the used vMF model with a simpler exponential model. In the latter, it is assumed that all data are distributed on a hypersphere of a predetermined radius greater than one, instead of a unit hypersphere in the case of the vMF model. An improvement of this method based on the estimation step of the radius in the algorithm NEMρ has been proposed: this allowed us in most of our applications to find the best partitions; we have developed also the NCEMρ and NSEMρ algorithms. The algorithms proposed in this work were performed on a variety of textual data, genetic data and simulated data according to the vMF model; these applications gave us a better understanding of the different studied approaches throughout this thesis
6

Launay, Tristan. "Méthodes bayésiennes pour la prévision de consommation d'électricité." Phd thesis, Université de Nantes, 2012. http://tel.archives-ouvertes.fr/tel-00766237.

Abstract:
In this manuscript, we develop Bayesian statistical tools for forecasting electricity consumption in France. We first prove the asymptotic normality of the posterior distribution (Bernstein-von Mises theorem) for the piecewise linear heating-share model, and the consistency of the Bayes estimator. We then describe the construction of an informative prior distribution in order to improve the forecasting quality of a high-dimensional model when only a short history is available. Using two examples involving EDF customers without remote meter reading, we show in particular that the proposed method makes the evaluation of the model more robust to the lack of data. Finally, we propose a new nonlinear dynamic model for forecasting electricity consumption online. We build a particle filtering algorithm to estimate this model and compare the resulting forecasts with the operational forecasts used within EDF.
7

Abeywardana, Sachinthaka. "Variational Inference in Generalised Hyperbolic and von Mises-Fisher Distributions." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/16504.

Abstract:
Most real world data are skewed, contain more than the set of real numbers, and have higher probabilities of extreme events occurring compared to a normal distribution. In this thesis we explore two non-Gaussian distributions, the Generalised Hyperbolic Distribution (GHD) and the von Mises-Fisher (vMF) Distribution. These distributions are studied in the context of 1) Regression in heavy tailed data, 2) Quantifying variance of functions with reference to finding relevant quantiles and, 3) Clustering data that lie on the surface of the sphere. Firstly, we extend Gaussian Processes (GPs) and use the Generalised Hyperbolic Processes as a prior on functions instead. This prior is more flexible than GPs and is especially able to model data that has high kurtosis. The method is based on placing a Generalised Inverse Gaussian prior over the signal variance, which yields a scalar mixture of GPs. We show how to perform inference efficiently for the predictive mean and variance, and use a variational EM method for learning. Secondly, the skewed extension of the GHD is studied with respect to quantile regression. An underlying GP prior on the quantile function is used to make the inference non-parametric, while the skewed GHD is used as the data likelihood. The skewed GHD has a single parameter alpha which states the required quantile. Similar variational methods as in the first contribution are used to perform inference. Finally, vMF distributions are introduced in order to cluster spherical data. In the two previous contributions continuous scalar mixtures of Gaussians were used to make the inference process simpler. However, for clustering, a discrete number of vMF distributions are typically used. We propose a Dirichlet Process (DP) to infer the number of clusters in the spherical data setup. The framework is extended to incorporate a nested and a temporal clustering architecture. Throughout this thesis in many cases the posterior cannot be calculated in closed form. Variational Bayesian approximations are derived in this situation for efficient inference. In certain cases further lower bounding of the optimisation function is required in order to perform Variational Bayes. These bounds themselves are novel.
8

Hornik, Kurt, and Bettina Grün. "movMF: An R Package for Fitting Mixtures of von Mises-Fisher Distributions." American Statistical Association, 2014. http://epub.wu.ac.at/4893/1/v58i10.pdf.

Abstract:
Finite mixtures of von Mises-Fisher distributions allow to apply model-based clustering methods to data which is of standardized length, i.e., all data points lie on the unit sphere. The R package movMF contains functionality to draw samples from finite mixtures of von Mises-Fisher distributions and to fit these models using the expectation-maximization algorithm for maximum likelihood estimation. Special features are the possibility to use sparse matrix representations for the input data, different variants of the expectation-maximization algorithm, different methods for determining the concentration parameters in the M-step and to impose constraints on the concentration parameters over the components. In this paper we describe the main fitting function of the package and illustrate its application. In addition we compare the clustering performance of finite mixtures of von Mises-Fisher distributions to spherical k-means. We also discuss the resolution of several numerical issues which occur for estimating the concentration parameters and for determining the normalizing constant of the von Mises-Fisher distribution. (authors' abstract)
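The spherical k-means baseline that the package is compared against can be sketched in a few lines; the snippet below is a generic Python illustration (the movMF package itself is an R implementation and is not reproduced here, and all names are placeholders).

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=50, seed=0):
    """Minimal spherical k-means: cosine-similarity assignment with unit-norm centroids."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)          # project data onto the unit sphere
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random unit-norm initial centroids
    for _ in range(n_iter):
        labels = np.argmax(X @ centroids.T, axis=1)           # nearest centroid by cosine similarity
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                centroids[j] = m / np.linalg.norm(m)          # renormalize the mean direction
    return labels, centroids

# toy usage on random 10-dimensional directions
labels, centroids = spherical_kmeans(np.random.default_rng(1).normal(size=(200, 10)), k=3)
print(np.bincount(labels))
```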
9

Hornik, Kurt, and Bettina Grün. "On Maximum Likelihood Estimation of the Concentration Parameter of von Mises-Fisher Distributions." WU Vienna University of Economics and Business, 2012. http://epub.wu.ac.at/3669/1/Report120.pdf.

Abstract:
Maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions involves inverting the ratio R_nu = I_{nu+1} / I_nu of modified Bessel functions. Computational issues when using approximative or iterative methods were discussed in Tanabe et al. (Comput Stat 22(1):145-157, 2007) and Sra (Comput Stat 27(1):177-190, 2012). In this paper we use Amos-type bounds for R_nu to deduce sharper bounds for the inverse function, determine the approximation error of these bounds, and use these to propose a new approximation for which the error tends to zero when the inverse of R is evaluated at values tending to 1 (from the left). We show that previously introduced rational bounds for R_nu which are invertible using quadratic equations cannot be used to improve these bounds.
Series: Research Report Series / Department of Statistics and Mathematics
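A common way to invert this Bessel-function ratio in practice is the widely used closed-form approximation of Banerjee et al. (2005) followed by a few Newton steps, sketched below; this is generic background only, not the Amos-type bounds derived in the report (names and the toy values are illustrative).

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def kappa_mle(rbar, d, newton_steps=5):
    """Approximate MLE of the vMF concentration from the mean resultant length rbar in R^d.

    Starts from the closed-form approximation kappa ~ rbar*(d - rbar^2)/(1 - rbar^2) and
    refines it with Newton steps on A_d(kappa) = I_{d/2}(kappa) / I_{d/2-1}(kappa) = rbar.
    """
    kappa = rbar * (d - rbar**2) / (1.0 - rbar**2)         # closed-form starting point
    for _ in range(newton_steps):
        a = iv(d / 2.0, kappa) / iv(d / 2.0 - 1.0, kappa)  # A_d(kappa)
        a_prime = 1.0 - a**2 - (d - 1.0) / kappa * a       # standard derivative identity
        kappa -= (a - rbar) / a_prime
    return kappa

print(kappa_mle(rbar=0.8, d=3))
```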
10

Salah, Aghiles. "Von Mises-Fisher based (co-)clustering for high-dimensional sparse data : application to text and collaborative filtering data." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB093/document.

Abstract:
La classification automatique, qui consiste à regrouper des objets similaires au sein de groupes, également appelés classes ou clusters, est sans aucun doute l’une des méthodes d’apprentissage non-supervisé les plus utiles dans le contexte du Big Data. En effet, avec l’expansion des volumes de données disponibles, notamment sur le web, la classification ne cesse de gagner en importance dans le domaine de la science des données pour la réalisation de différentes tâches, telles que le résumé automatique, la réduction de dimension, la visualisation, la détection d’anomalies, l’accélération des moteurs de recherche, l’organisation d’énormes ensembles de données, etc. De nombreuses méthodes de classification ont été développées à ce jour, ces dernières sont cependant fortement mises en difficulté par les caractéristiques complexes des ensembles de données que l’on rencontre dans certains domaines d’actualité tel que le Filtrage Collaboratif (FC) et de la fouille de textes. Ces données, souvent représentées sous forme de matrices, sont de très grande dimension (des milliers de variables) et extrêmement creuses (ou sparses, avec plus de 95% de zéros). En plus d’être de grande dimension et sparse, les données rencontrées dans les domaines mentionnés ci-dessus sont également de nature directionnelles. En effet, plusieurs études antérieures ont démontré empiriquement que les mesures directionnelles, telle que la similarité cosinus, sont supérieurs à d’autres mesures, telle que la distance Euclidiennes, pour la classification des documents textuels ou pour mesurer les similitudes entre les utilisateurs/items dans le FC. Cela suggère que, dans un tel contexte, c’est la direction d’un vecteur de données (e.g., représentant un document texte) qui est pertinente, et non pas sa longueur. Il est intéressant de noter que la similarité cosinus est exactement le produit scalaire entre des vecteurs unitaires (de norme 1). Ainsi, d’un point de vue probabiliste l’utilisation de la similarité cosinus revient à supposer que les données sont directionnelles et réparties sur la surface d’une hypersphère unité. En dépit des nombreuses preuves empiriques suggérant que certains ensembles de données sparses et de grande dimension sont mieux modélisés sur une hypersphère unité, la plupart des modèles existants dans le contexte de la fouille de textes et du FC s’appuient sur des hypothèses populaires : distributions Gaussiennes ou Multinomiales, qui sont malheureusement inadéquates pour des données directionnelles. Dans cette thèse, nous nous focalisons sur deux challenges d’actualité, à savoir la classification des documents textuels et la recommandation d’items, qui ne cesse d’attirer l’attention dans les domaines de la fouille de textes et celui du filtrage collaborative, respectivement. Afin de répondre aux limitations ci-dessus, nous proposons une série de nouveaux modèles et algorithmes qui s’appuient sur la distribution de von Mises-Fisher (vMF) qui est plus appropriée aux données directionnelles distribuées sur une hypersphère unité
Cluster analysis or clustering, which aims to group together similar objects, is undoubtedly a very powerful unsupervised learning technique. With the growing amount of available data, clustering is increasingly gaining in importance in various areas of data science for several reasons such as automatic summarization, dimensionality reduction, visualization, outlier detection, speed up research engines, organization of huge data sets, etc. Existing clustering approaches are, however, severely challenged by the high dimensionality and extreme sparsity of the data sets arising in some current areas of interest, such as Collaborative Filtering (CF) and text mining. Such data often consists of thousands of features and more than 95% of zero entries. In addition to being high dimensional and sparse, the data sets encountered in the aforementioned domains are also directional in nature. In fact, several previous studies have empirically demonstrated that directional measures (that measure the distance between objects relative to the angle between them), such as the cosine similarity, are substantially superior to other measures such as Euclidean distortions, for clustering text documents or assessing the similarities between users/items in CF. This suggests that in such context only the direction of a data vector (e.g., text document) is relevant, not its magnitude. It is worth noting that the cosine similarity is exactly the scalar product between unit length data vectors, i.e., L2-normalized vectors. Thus, from a probabilistic perspective using the cosine similarity is equivalent to assuming that the data are directional data distributed on the surface of a unit-hypersphere. Despite the substantial empirical evidence that certain high dimensional sparse data sets, such as those encountered in the above domains, are better modeled as directional data, most existing models in text mining and CF are based on popular assumptions such as Gaussian, Multinomial or Bernoulli which are inadequate for L2-normalized data. In this thesis, we focus on the two challenging tasks of text document clustering and item recommendation, which are still attracting a lot of attention in the domains of text mining and CF, respectively. In order to address the above limitations, we propose a suite of new models and algorithms which rely on the von Mises-Fisher (vMF) assumption that arises naturally for directional data lying on a unit-hypersphere.
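The claim above that cosine similarity is just the dot product of L2-normalized vectors is easy to verify numerically; the toy vectors below are arbitrary.

```python
import numpy as np

# Cosine similarity equals the plain dot product once both vectors are L2-normalized.
u = np.array([3.0, 0.0, 4.0])
v = np.array([1.0, 2.0, 2.0])
cosine = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
dot_of_unit = (u / np.linalg.norm(u)) @ (v / np.linalg.norm(v))
print(np.isclose(cosine, dot_of_unit))  # True
```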

Book chapters on the topic "A priori de von Mises-Fisher":

1

Gatto, Riccardo. "The Generalized von Mises-Fisher Distribution." In Advances in Directional and Linear Statistics, 51–68. Heidelberg: Physica-Verlag HD, 2010. http://dx.doi.org/10.1007/978-3-7908-2628-9_4.

2

Salah, Aghiles, and Mohamed Nadif. "Model-based von Mises-Fisher Co-clustering with a Conscience." In Proceedings of the 2017 SIAM International Conference on Data Mining, 246–54. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2017. http://dx.doi.org/10.1137/1.9781611974973.28.

3

Kumar, Ritwik, Baba C. Vemuri, Fei Wang, Tanveer Syeda-Mahmood, Paul R. Carney, and Thomas H. Mareci. "Multi-fiber Reconstruction from DW-MRI Using a Continuous Mixture of Hyperspherical von Mises-Fisher Distributions." In Lecture Notes in Computer Science, 139–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02498-6_12.

4

McGraw, Tim, Baba Vemuri, Robert Yezierski, and Thomas Mareci. "Segmentation of High Angular Resolution Diffusion MRI Modeled as a Field of von Mises-Fisher Mixtures." In Computer Vision – ECCV 2006, 463–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11744078_36.

5

Van Linh, Ngo, Nguyen Kim Anh, Khoat Than, and Nguyen Nguyen Tat. "Effective and Interpretable Document Classification Using Distinctly Labeled Dirichlet Process Mixture Models of von Mises-Fisher Distributions." In Database Systems for Advanced Applications, 139–53. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18123-3_9.

6

Kim, Peter T. "On the Characteristic Function of the Matrix von Mises-Fisher Distribution with Application to SO(N)-Deconvolution." In High Dimensional Probability II, 477–92. Boston, MA: Birkhäuser Boston, 2000. http://dx.doi.org/10.1007/978-1-4612-1358-1_31.

7

Kunze, Karsten, and Helmut Schaeben. "Ideal Patterns of Crystallographic Preferred Orientation and Their Representation by the von Mises-Fisher Matrix or Bingham Quaternion Distribution." In Materials Science Forum, 295–300. Stafa: Trans Tech Publications Ltd., 2005. http://dx.doi.org/10.4028/0-87849-975-x.295.

8

Sra, Suvrit, Arindam Banerjee, Joydeep Ghosh, and Inderjit Dhillon. "Text Clustering with Mixture of von Mises-Fisher Distributions." In Text Mining, 121–61. Chapman and Hall/CRC, 2009. http://dx.doi.org/10.1201/9781420059458.ch6.

9

"Text Clustering with Mixture of von Mises-Fisher Distribu- tions." In Text Mining, 151–84. Chapman and Hall/CRC, 2009. http://dx.doi.org/10.1201/9781420059458-14.


Conference papers on the topic "A priori de von Mises-Fisher":

1

Traullé, Benjamin, Stéphanie Bidon, and Damien Roque. "A von Mises-Fisher Prior to Remove Scale Ambiguity in Blind Deconvolution." In 2022 30th European Signal Processing Conference (EUSIPCO). IEEE, 2022. http://dx.doi.org/10.23919/eusipco55093.2022.9909710.

2

Chernyaev, Sergey, and Oleg Lukashenko. "Comparative Analysis of Methods for Segmentation of FMRI Images Based on Markov Random Fields." In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-1-143-147.

Abstract:
The problem of segmentation of three-dimensional fMRI images based on the Bayesian approach is considered, where Markov Random Field is used as the prior distribution, and von Mises-Fisher distribution is used as the observation model. The main problem when applying this approach in practice is an estimation of the model parameters. In this paper, we review algorithms HMRF-MCEM, HMRF-EM and GrabCut, which implement this statistical model and estimate parameters without the usage of the labeled training data. The methods HMRF-EM and GrabCut were introduced in conjunction with other statistical models, but after a small modification, they can be used with the von Mises-Fisher distribution. A comparative study was carried out by performing experiments on both synthetic, generated from the statistical model, and real fMRI data.
3

Jin, Yujie, Xu Chu, Yasha Wang, and Wenwu Zhu. "Domain Generalization through the Lens of Angular Invariance." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/139.

Abstract:
Domain generalization (DG) aims at generalizing a classifier trained on multiple source domains to an unseen target domain with domain shift. A common pervasive theme in existing DG literature is domain-invariant representation learning with various invariance assumptions. However, prior works restrict themselves to an impractical assumption for real-world challenges: If a mapping induced by a deep neural network (DNN) could align the source domains well, then such a mapping aligns a target domain as well. In this paper, we simply take DNNs as feature extractors to relax the requirement of distribution alignment. Specifically, we put forward a novel angular invariance and the accompanied norm shift assumption. Based on the proposed term of invariance, we propose a novel deep DG method dubbed Angular Invariance Domain Generalization Network (AIDGN). The optimization objective of AIDGN is developed with a von-Mises Fisher (vMF) mixture model. Extensive experiments on multiple DG benchmark datasets validate the effectiveness of the proposed AIDGN method.
4

Barbaro, Florian, and Fabrice Rossi. "Sparse mixture of von Mises-Fisher distribution." In ESANN 2021 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2021. http://dx.doi.org/10.14428/esann/2021.es2021-115.

5

Mash'al, Mohamadreza, and Reshad Hosseini. "K-means++ for mixtures of von Mises-Fisher Distributions." In 2015 7th Conference on Information and Knowledge Technology (IKT). IEEE, 2015. http://dx.doi.org/10.1109/ikt.2015.7288786.

6

Kobayashi, Takumi, and Nobuyuki Otsu. "Von Mises-Fisher Mean Shift for Clustering on a Hypersphere." In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.522.

7

Li, Kailai, Florian Pfaff, and Uwe D. Hanebeck. "Nonlinear von Mises–Fisher Filtering Based on Isotropic Deterministic Sampling." In 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). IEEE, 2020. http://dx.doi.org/10.1109/mfi49285.2020.9235260.

8

Ramachandran, Vinod, and Tahir Ahmad. "Modeling of epileptic seizures using the von Mises-Fisher distribution." In PROCEEDINGS OF THE 24TH NATIONAL SYMPOSIUM ON MATHEMATICAL SCIENCES: Mathematical Sciences Exploration for the Universal Preservation. Author(s), 2017. http://dx.doi.org/10.1063/1.4995890.

9

Skabar, Andrew, and Shahmeer Memon. "Density estimation-based document categorization using von Mises-Fisher kernels." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596595.

10

Traa, Johannes, and Paris Smaragdis. "Multiple speaker tracking with the Factorial von Mises-Fisher Filter." In 2014 IEEE 24th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2014. http://dx.doi.org/10.1109/mlsp.2014.6958891.

