To view the other types of publications on this topic, follow this link: Prior informatif.

Dissertations on the topic "Prior informatif"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Check out the top 50 dissertations for research on the topic "Prior informatif".

Next to each work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic entry for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if these are available in the metadata.

Browse dissertations in a variety of disciplines and compile your bibliography correctly.

1

Papoutsis, Panayotis. „Potentiel et prévision des temps d'attente pour le covoiturage sur un territoire“. Thesis, Ecole centrale de Nantes, 2021. http://www.theses.fr/2021ECDN0059.

Full text of the source
Annotation:
This thesis focuses on the potential and prediction of carpooling waiting times in a territory using statistical learning methods. Five main themes are covered in this manuscript. The first presents quantile regression techniques to predict waiting times. The second details the construction of a workflow based on Geographic Information Systems (GIS) tools in order to fully leverage the carpooling data. In a third part we develop a hierarchical Bayesian model in order to predict traffic flows and waiting times. In the fourth part, we propose a methodology for constructing an informative prior by Bayesian transfer to improve the prediction of waiting times in a short-dataset situation. Lastly, the final theme focuses on the production deployment and industrial exploitation of the Bayesian hierarchical model.
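As a concrete illustration of the first theme, quantile regression for waiting times, here is a minimal sketch on synthetic data; the features and all numbers are hypothetical, not taken from the thesis:

```python
# Quantile regression sketch: predict an upper quantile of waiting time.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
hour = rng.uniform(0, 24, n)      # hypothetical departure hour
demand = rng.poisson(5, n)        # hypothetical local carpool demand
# Synthetic waiting times (minutes), with a heavy right tail.
wait = 10 + 2 * np.abs(hour - 8) - 0.8 * demand + rng.exponential(5, n)

X = sm.add_constant(np.column_stack([hour, demand]))
# Fit the 0.9 quantile: a bound on the wait that holds 90% of the time.
res = sm.QuantReg(wait, X).fit(q=0.9)
print(res.params)  # intercept and slopes for the 90th-percentile wait
```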
APA, Harvard, Vancouver, ISO, and other citation styles
2

Bioche, Christèle. „Approximation de lois impropres et applications“. Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22626/document.

Full text of the source
Annotation:
The purpose of this thesis is to study the approximation of improper priors by proper priors. We define a convergence mode on the positive Radon measures for which a sequence of probability measures can converge to an improper limiting measure. This convergence mode, called q-vague convergence, is independent of the statistical model. It explains the origin of the Jeffreys-Lindley paradox. Then, we focus on the estimation of the size of a population. We consider the removal sampling model. We give necessary and sufficient conditions on the hyperparameters in order to obtain proper posterior distributions and well-defined abundance estimates. In the light of q-vague convergence, we show that the use of vague priors is not appropriate in removal sampling, since the estimates obtained depend crucially on the hyperparameters.
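The idea of proper priors approximating an improper limit can be illustrated numerically in the simplest conjugate setting. The sketch below is our own toy example, not from the thesis: a N(0, tau^2) prior on a normal mean is made flatter and flatter, and the posterior approaches the flat-prior answer.

```python
# Approximating an improper flat prior on a normal mean by proper
# N(0, tau^2) priors with growing tau (illustrative toy example).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=20)   # data: x_i ~ N(theta, 1)
n, xbar = len(x), x.mean()

for tau in [0.5, 2.0, 10.0, 1e3, 1e6]:
    prec = n + 1.0 / tau**2          # posterior precision under N(0, tau^2)
    post_mean = n * xbar / prec
    print(f"tau={tau}: posterior mean={post_mean:.4f}, var={1/prec:.4f}")

# As tau grows, the posterior converges to the flat-prior limit N(xbar, 1/n).
print("flat-prior limit:", xbar, 1.0 / n)
```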
APA, Harvard, Vancouver, ISO, and other citation styles
3

Pohl, Kilian Maria. „Prior information for brain parcellation“. Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33925.

Full text of the source
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 171-184).
To better understand brain disease, many neuroscientists study anatomical differences between normal and diseased subjects. Frequently, they analyze medical images to locate brain structures influenced by disease. Many of these structures have weakly visible boundaries so that standard image analysis algorithms perform poorly. Instead, neuroscientists rely on manual procedures, which are time consuming and increase risks related to inter- and intra-observer reliability [53]. In order to automate this task, we develop an algorithm that robustly segments brain structures. We model the segmentation problem in a Bayesian framework, which is applicable to a variety of problems. This framework employs anatomical prior information in order to simplify the detection process. In this thesis, we experiment with different types of prior information such as spatial priors, shape models, and trees describing hierarchical anatomical relationships. We pose a maximum a posteriori probability estimation problem to find the optimal solution within our framework. From the estimation problem we derive an instance of the Expectation Maximization algorithm, which uses an initial imperfect estimate to converge to a good approximation.
The resulting implementation is tested on a variety of studies, ranging from the segmentation of the brain into the three major brain tissue classes, to the parcellation of anatomical structures with weakly visible boundaries such as the thalamus or superior temporal gyrus. In general, our new method performs significantly better than other standard automatic segmentation techniques. The improvement is due primarily to the seamless integration of medical image artifact correction, alignment of the prior information to the subject, detection of the shape of anatomical structures, and representation of the anatomical relationships in a hierarchical tree.
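To make the role of a spatial prior inside EM concrete, here is a toy sketch in the same spirit, using a synthetic 1D "image" and a hypothetical atlas; this is an illustration of the principle, not the thesis's algorithm:

```python
# EM tissue classification with a spatial (atlas) prior: per-pixel class
# priors replace the global mixing weights in the E-step.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
true_label = (np.arange(n) > 400).astype(int)
intensity = rng.normal(np.where(true_label, 120.0, 80.0), 10.0)

# Hypothetical atlas: probability of class 1 at each pixel (blurred truth).
atlas = np.clip((np.arange(n) - 300) / 200.0, 0.05, 0.95)

mu, sd = np.array([70.0, 130.0]), np.array([15.0, 15.0])
for _ in range(30):                      # EM iterations
    # E-step: responsibilities combine likelihood with the spatial prior.
    like1 = np.exp(-0.5 * ((intensity - mu[1]) / sd[1])**2) / sd[1]
    like0 = np.exp(-0.5 * ((intensity - mu[0]) / sd[0])**2) / sd[0]
    r1 = atlas * like1 / (atlas * like1 + (1 - atlas) * like0)
    # M-step: weighted Gaussian parameter updates per class.
    for k, r in enumerate([1 - r1, r1]):
        mu[k] = (r * intensity).sum() / r.sum()
        sd[k] = np.sqrt((r * (intensity - mu[k])**2).sum() / r.sum())
print(mu, sd)   # recovered per-class means and standard deviations
```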
by Kilian Maria Pohl.
Ph.D.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Ahmed, Syed Ejaz. „Estimation strategies under uncertain prior information“. Carleton University, Dissertation (Mathematics), Ottawa, 1987.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Sunmola, Funlade Tajudeen. „Optimising learning with transferable prior information“. Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/3983/.

Full text of the source
Annotation:
This thesis addresses the problem of how to incorporate user knowledge about an environment, or information acquired during previous learning in that environment or a similar one, to make future learning more effective. The problem is tackled within the framework of learning from rewards while acting in a Markov Decision Process (MDP). Appropriately incorporating user knowledge and prior experience into learning should lead to better performance during learning (the exploitation-exploration trade-off), and offer a better solution at the end of the learning period. We work in a Bayesian setting and consider two main types of transferable information, namely historical data and constraints involving absolute and relative restrictions on process dynamics. We present new algorithms for reasoning with transition constraints and show how to revise beliefs about the MDP transition matrix using constraints and prior knowledge. We also show how to use the resulting beliefs to control exploration. Finally we demonstrate benefits of historical information via power priors and by using process templates to transfer information from one environment to a second with related local process dynamics. We present results showing that incorporating historical data and constraints on state transitions in uncertain environments, either separately or collectively, can improve learning performance.
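The Bayesian update for transition beliefs that this line of work builds on can be sketched in a few lines. The example below is a hypothetical illustration of combining a Dirichlet prior, down-weighted historical counts (a power prior), and new data; it is not the thesis's algorithm:

```python
# Revising beliefs about one MDP transition row: Dirichlet prior plus
# historical counts at power-prior weight a0, plus new counts at full weight.
import numpy as np

alpha0 = np.ones(3)                    # vague Dirichlet prior, 3 successor states
hist_counts = np.array([40, 5, 5])     # transitions seen in the old environment
new_counts = np.array([2, 6, 1])       # transitions seen so far in the new one
a0 = 0.3                               # power-prior weight on historical data

alpha_post = alpha0 + a0 * hist_counts + new_counts
print("posterior mean transition probs:", alpha_post / alpha_post.sum())
```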
APA, Harvard, Vancouver, ISO, and other citation styles
6

Ren, Shijie. „Using prior information in clinical trial design“. Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555104.

Full text of the source
Annotation:
A current concern in medical research is low productivity in the pharmaceutical industry. Failure rates of Phase III clinical trials are high, and this is very costly in terms of resources and money. Our aim in this thesis is to incorporate prior information in clinical trial design and develop better assessments of the chances of successful clinical trials, so that trial sponsors can improve their success rates. Assurance calculations, which take into account uncertainty about how effective the treatment actually is, provide a more reliable assessment of the probability of a successful trial outcome compared with power calculations. We develop assurance methods to accommodate survival outcome measures, assuming both parametric and nonparametric models. We also develop prior elicitation procedures for each survival model so that the assurance calculations can be performed more easily and reliably. Prior elicitation is not an easy task, and we may be uncertain about what distribution 'best' represents an expert's beliefs. We demonstrate that the robustness of the assurance to different choices of prior distribution can be assessed by treating the elicitation process as a Bayesian inference problem, using a nonparametric Bayesian approach to quantify uncertainty in the expert's density function of the true treatment effect. In this thesis, we also consider a decision-making problem for a single-arm open-label Phase II trial for the PhD sponsor Roche. Based on the Bayesian decision-theoretic approach and assurance calculations, a model is developed for the trial sponsor to find the optimal trial strategies according to their beliefs about the true treatment effect.
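An assurance calculation of the kind described reduces to averaging power over the prior on the treatment effect. A minimal Monte Carlo sketch for a two-arm trial with normal outcomes follows; all prior and design numbers are hypothetical:

```python
# Assurance: power averaged over a prior on the true effect, by Monte Carlo.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_per_arm, sigma, alpha = 100, 1.0, 0.025
z = norm.ppf(1 - alpha)

# Elicited prior on the treatment effect delta (hypothetical numbers).
delta = rng.normal(0.25, 0.15, size=100_000)

# Power of the one-sided two-sample z-test, given each sampled delta.
power = norm.cdf(delta * np.sqrt(n_per_arm / 2) / sigma - z)

print("power at prior mean:", norm.cdf(0.25 * np.sqrt(n_per_arm / 2) - z))
print("assurance (prior-averaged power):", power.mean())
# Assurance is typically lower than power at the prior mean, because the
# prior places mass on small or null effects.
```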
APA, Harvard, Vancouver, ISO, and other citation styles
7

Parsley, M. P. „Simultaneous localisation and mapping with prior information“. Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1318103/.

Full text of the source
Annotation:
This thesis is concerned with Simultaneous Localisation and Mapping (SLAM), a technique by which a platform can estimate its trajectory with greater accuracy than odometry alone, especially when the trajectory incorporates loops. We discuss some of the shortcomings of the "classical" SLAM approach (in particular EKF-SLAM), which assumes that no information is known about the environment a priori. We argue that in general this assumption is needlessly stringent; for most environments, such as cities, some prior information is known. We introduce an initial Bayesian probabilistic framework which considers the world as a hierarchy of structures, and maps (such as those produced by SLAM systems) as consisting of features derived from them. Common underlying structure between features in maps allows one to express and thus exploit geometric relations between them to improve their estimates. We apply the framework to EKF-SLAM for the case of a vehicle equipped with a range-bearing sensor operating in an urban environment, building up a metric map of point features, and using a prior map consisting of line segments representing building footprints. We develop a novel method called the Dual Representation, which allows us to use information from the prior map to not only improve the SLAM estimate, but also reduce the severity of errors associated with the EKF. Using the Dual Representation, we investigate the effect of varying the accuracy of the prior map for the case where the underlying structures, and thus the relations between the SLAM map and prior map, are known. We then generalise to the more realistic case, where there is "clutter": features in the environment that do not relate to the prior map. This involves forming a hypothesis for whether a pair of features in the SLAM state and prior map were derived from the same structure, and evaluating this based on a geometric likelihood model. Initially we try an incremental Multiple Hypothesis SLAM (MHSLAM) approach to resolve hypotheses, developing a novel method called the Common State Filter (CSF) to reduce the exponential growth in computational complexity inherent in this approach. This allows us to use information from the prior map immediately, thus reducing linearisation and EKF errors. However we find that MHSLAM is still too inefficient, even with the CSF, so we use a strategy that delays applying relations until we can infer whether they apply; we defer applying information from structure hypotheses until their probability of holding exceeds a threshold. Using this method we investigate the effect of varying degrees of "clutter" on the performance of SLAM.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Viggh, Herbert E. M. „Surface Prior Information Reflectance Estimation (SPIRE) algorithms“. Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/17564.

Full text of the source
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 393-396).
In this thesis we address the problem of estimating changes in surface reflectance in hyperspectral image cubes, under unknown multiplicative and additive illumination noise. Rather than using the Empirical Line Method (ELM) or physics-based approaches, we assumed the presence of a prior reflectance image cube and ensembles of typical multiplicative and additive illumination noise vectors, and developed algorithms which estimate reflectance using this prior information. These algorithms were developed under the additional assumptions that the illumination effects were band limited to lower spatial frequencies and that the differences in the surface reflectance from the prior were small in area relative to the scene, and have defined edges. These new algorithms were named Surface Prior Information Reflectance Estimation (SPIRE) algorithms. Spatial SPIRE algorithms that employ spatial processing were developed for six cases defined by the presence or absence of the additive noise, and by whether or not the noise signals are spatially uniform or varying. These algorithms use high-pass spatial filtering to remove the noise effects. Spectral SPIRE algorithms that employ spectral processing were developed and use zero-padded Principal Components (PC) filtering to remove the illumination noise. Combined SPIRE algorithms that use both spatial and spectral processing were also developed. A Selective SPIRE technique that chooses between Combined and Spectral SPIRE reflectance estimates was developed; it maximizes estimation performance on both modified and unmodified pixels. The different SPIRE algorithms were tested on HYDICE airborne sensor hyperspectral data, and their reflectance estimates were compared to those from the physics-based ATmospheric REMoval (ATREM) and the Empirical Line Method atmospheric compensation algorithms. SPIRE algorithm performance was found to be nearly identical to the ELM ground-truth based results. SPIRE algorithms performed better than ATREM overall, and significantly better under high clouds and haze. Minimum-distance classification experiments demonstrated SPIRE's superior performance over both ATREM and ELM in cross-image supervised classification applications. The taxonomy of SPIRE algorithms was presented and suggestions were made concerning which SPIRE algorithm is recommended for various applications.
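The core Spatial SPIRE idea, taking low spatial frequencies from the prior reflectance and high frequencies from the new observation, can be sketched on a synthetic 1D scene. This is an illustration of the principle only, not the thesis's implementation:

```python
# Band-split reflectance estimate: illumination is assumed band-limited to
# low spatial frequencies, so its high-pass contribution is negligible.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)
n = 512
reflectance = 0.3 + 0.1 * (rng.random(n) > 0.5)   # true surface reflectance
reflectance[250:260] = 0.8                         # a small new surface change
prior = reflectance.copy()
prior[250:260] = 0.3                               # the prior predates the change
illum = 1.0 + 0.5 * np.sin(np.arange(n) / 80.0)   # smooth multiplicative noise
observed = reflectance * illum

def lowpass(s, sigma=30):
    # Keep only low spatial frequencies.
    return gaussian_filter1d(s, sigma)

# Work in log space so multiplicative illumination becomes additive, then
# combine the observation's high-pass band with the prior's low-pass band.
log_obs, log_prior = np.log(observed), np.log(prior)
estimate = np.exp((log_obs - lowpass(log_obs)) + lowpass(log_prior))

print("mean abs error vs truth:", np.abs(estimate - reflectance).mean())
```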
by Herbert Erik Mattias Viggh.
Ph.D.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Ghadermarzy, Navid. „Using prior support information in compressed sensing“. Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44912.

Full text of the source
Annotation:
Compressed sensing is a data acquisition technique that entails recovering estimates of sparse and compressible signals from n linear measurements, significantly fewer than the signal ambient dimension N. In this thesis we show how we can reduce the required number of measurements even further if we incorporate prior information about the signal into the reconstruction algorithm. Specifically, we study certain weighted nonconvex Lp minimization algorithms and a weighted approximate message passing algorithm. In Chapter 1 we describe compressed sensing as a practicable signal acquisition method in application and introduce the generic sparse approximation problem. Then we review some of the algorithms used in the compressed sensing literature and briefly introduce the method we used to incorporate prior support information into these problems. In Chapter 2 we derive sufficient conditions for stable and robust recovery using weighted Lp minimization and show that these conditions are better than those for recovery by regular Lp and weighted L1. We present extensive numerical experiments, both on synthetic examples and on audio and seismic signals. In Chapter 3 we derive a weighted AMP algorithm, which iteratively solves the weighted L1 minimization problem. We also introduce a reweighting scheme for weighted AMP algorithms, which enhances the recovery performance of weighted AMP. Finally, we apply these algorithms to synthetic experiments and to real audio signals.
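Weighted L1 recovery with prior support information can be sketched with a per-coefficient soft threshold inside ISTA. The example below uses synthetic data and hypothetical weights that down-weight the penalty on entries believed to lie in the support; it is not the thesis's code:

```python
# Weighted-L1 sparse recovery via ISTA with per-coefficient thresholds.
import numpy as np

rng = np.random.default_rng(5)
N, n, k = 200, 60, 10
x_true = np.zeros(N)
supp = rng.choice(N, k, replace=False)
x_true[supp] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(n), (n, N))
y = A @ x_true

# Prior support estimate: 8 true entries plus 2 wrong ones (hypothetical).
prior_supp = np.concatenate([supp[:8], rng.choice(N, 2)])
w = np.ones(N)
w[prior_supp] = 0.3                      # smaller penalty on trusted entries

lam = 0.05
t = 1.0 / np.linalg.norm(A, 2)**2        # step size from the Lipschitz bound
x = np.zeros(N)
for _ in range(2000):                    # ISTA iterations
    g = x - t * A.T @ (A @ x - y)
    x = np.sign(g) * np.maximum(np.abs(g) - t * lam * w, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```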
APA, Harvard, Vancouver, ISO, and other citation styles
10

Liu, Yang. „Application of prior information to discriminative feature learning“. Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/285558.

Full text of the source
Annotation:
Learning discriminative feature representations has attracted a great deal of attention, since it is a critical step to facilitate the subsequent classification, retrieval and recommendation tasks. In this dissertation, besides incorporating prior knowledge about image labels into image classification as most prevalent feature learning methods currently do, we also explore some other general-purpose priors and verify their effectiveness in discriminative feature learning. As a more powerful representation can be learned by implementing such general priors, our approaches achieve state-of-the-art results on challenging benchmarks. We elaborate on these general-purpose priors and highlight where we have made novel contributions. We apply sparsity and hierarchical priors to the explanatory factors that describe the data, in order to better discover the data structure. More specifically, in the first approach we propose, we only incorporate sparse priors into the feature learning. To this end, we present a support discrimination dictionary learning method, which finds a dictionary under which the feature representations of images from the same class have a common sparse structure while the size of the overlapped signal support of different classes is minimised. Then we incorporate sparse priors and hierarchical priors into a unified framework that is capable of controlling the sparsity of the neuron activation in deep neural networks. Our proposed approach automatically selects the most useful low-level features and effectively combines them into more powerful and discriminative features for our specific image classification problem. We also explore priors on the relationships between multiple factors. When multiple independent factors exist in the image generation process and only some of them are of interest to us, we propose a novel multi-task adversarial network to learn a disentangled feature which is optimized with respect to the factor of interest to us, while being agnostic to distraction factors. When common factors exist in multiple tasks, leveraging common factors can not only make the learned feature representation more robust, but also enable the model to generalise from very few labelled samples. More specifically, we address the domain adaptation problem and propose the re-weighted adversarial adaptation network to reduce the feature distribution divergence and adapt the classifier from source to target domains.
APA, Harvard, Vancouver, ISO, and other citation styles
11

Hotti, Alexandra. „Bayesian insurance pricing using informative prior estimation techniques“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286312.

Full text of the source
Annotation:
Large, well-established insurance companies build statistical pricing models based on customer claim data. Due to their long experience and large amounts of data, they can predict their future expected claim losses accurately. In contrast, small newly formed insurance start-ups do not have access to such data. Instead, a start-up’s pricing model’s initial parameters can be set by directly estimating the risk premium tariff’s parameters in a non-statistical manner. However, this approach results in a pricing model that cannot be adjusted based on new claim data through classical frequentist insurance approaches. This thesis has put forth three Bayesian approaches for including estimates of an existing multiplicative tariff as the expectation of a prior in a Generalized Linear Model (GLM). The similarity between premiums set using the prior estimations and the static pricing model was measured as their relative difference. The results showed that the static tariff could be closely estimated. The estimated priors were then merged with claim data through the likelihood. These posteriors were estimated via the two Markov Chain Monte Carlo approaches, Metropolis and Metropolis-Hastings. All in all, this resulted in three risk premium models that could take advantage of existing pricing knowledge and learn over time as new cases arrived. The results showed that the Bayesian pricing methods significantly reduced the discrepancy between predicted and actual claim costs on an overall portfolio level compared to the static tariff. Nevertheless, this could not be determined on an individual policyholder level.
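The overall recipe, a Gaussian prior centred on the existing tariff's log-multipliers and updated with claim counts by random-walk Metropolis, can be sketched as follows. This is a toy Poisson frequency model with hypothetical numbers, not the insurer's actual model:

```python
# Bayesian pricing sketch: informative prior from a static tariff,
# posterior sampled with random-walk Metropolis for a Poisson GLM.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # base + one rating factor
beta_true = np.array([np.log(0.10), np.log(1.5)])          # true log rates
y = rng.poisson(np.exp(X @ beta_true))                     # observed claim counts

prior_mean = np.array([np.log(0.08), np.log(1.4)])  # read off the static tariff
prior_sd = np.array([0.3, 0.3])

def log_post(beta):
    lam = np.exp(X @ beta)
    # Poisson log-likelihood (constant dropped) + Gaussian log-prior.
    return (y * np.log(lam) - lam).sum() - 0.5 * (((beta - prior_mean) / prior_sd)**2).sum()

beta = prior_mean.copy()
lp = log_post(beta)
samples = []
for it in range(20000):                      # random-walk Metropolis
    prop = beta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if it > 5000:
        samples.append(beta.copy())

print("posterior mean multipliers:", np.exp(np.mean(samples, axis=0)))
```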
APA, Harvard, Vancouver, ISO, and other citation styles
12

Qin, Jing. „Prior Information Guided Image Processing and Compressive Sensing“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1365020074.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
13

Kamary, Kaniav. „Lois a priori non-informatives et la modélisation par mélange“. Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED022/document.

Full text of the source
Annotation:
One of the major applications of statistics is the validation and comparison of probabilistic models given the data. This branch of statistics has been developed since its formalization at the end of the 19th century by pioneers like Gosset, Pearson and Fisher. In the special case of the Bayesian approach, the solution for comparing models is the Bayes factor, a ratio of marginal likelihoods, whatever the estimated model. This solution is obtained by mathematical reasoning based on a loss function. Despite frequent use of the Bayes factor and its equivalent, the posterior probability of models, by the Bayesian community, it is however problematic in some cases. First, this rule is highly dependent on the prior modeling even with large datasets, and as the selection of a prior density has a vital role in Bayesian statistics, one of the difficulties with the traditional handling of Bayesian tests is a discontinuity in the use of improper priors, since they are not justified in most testing situations. The first part of this thesis gives a general review of non-informative priors and their features, and demonstrates the overall stability of posterior distributions by reassessing the examples of [Seaman III 2012]. Besides that, Bayes factors are difficult to calculate except in the simplest cases (conjugate distributions). A branch of computational statistics has therefore emerged to resolve this problem, with solutions borrowing from statistical physics, such as the path sampling method of [Gelman 1998], and from signal processing. The existing solutions are not, however, universal, and a reassessment of these methods followed by the development of alternative methods constitutes a part of the thesis. We therefore consider a novel paradigm for Bayesian testing of hypotheses and Bayesian model comparison. The idea is to define an alternative to the traditional construction of posterior probabilities that a given hypothesis is true or that the data originate from a specific model, based on considering the models under comparison as components of a mixture model. By replacing the original testing problem with an estimation version that focuses on the probability weight of a given model within a mixture model, we analyze the sensitivity of the resulting posterior distribution of the weights for various prior modelings on the weights, and stress that a major appeal of this novel perspective is that generic improper priors are acceptable, while not putting convergence in jeopardy. MCMC methods like the Metropolis-Hastings algorithm and the Gibbs sampler, together with empirical approximations of the probability, are used. From a computational viewpoint, another feature of this easily implemented alternative to the classical Bayesian solution is that the speeds of convergence of the posterior mean of the weight and of the corresponding posterior probability are quite similar. In the last part of the thesis we construct a reference Bayesian analysis of mixtures of Gaussian distributions by creating a new parameterization centered on the mean and variance of the model itself. This enables us to develop a genuine non-informative prior for Gaussian mixtures with an arbitrary number of components. We demonstrate that the posterior distribution associated with this prior is almost surely proper and provide MCMC implementations that exhibit the expected component exchangeability. The analyses are based on MCMC methods such as the Metropolis-within-Gibbs algorithm, adaptive MCMC and the parallel tempering algorithm.
This part of the thesis is followed by the description of an R package named Ultimixt, which implements a generic reference Bayesian analysis of unidimensional mixtures of Gaussian distributions obtained by a location-scale parameterization of the model. This package can be applied to produce a Bayesian analysis of Gaussian mixtures with an arbitrary number of components, with no need to specify the prior distribution.
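The core of the mixture-based testing idea can be sketched for two fully specified candidate models, where the posterior of the mixture weight has a simple Gibbs sampler. This is a toy illustration under our own hypothetical setup, not the thesis's general algorithm:

```python
# Mixture-based model comparison: embed N(0,1) vs N(1,1) as mixture
# components and sample the posterior of the weight of model 0 by Gibbs.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.normal(1.0, 1.0, 100)            # data actually from model 1

a = 0.5                                   # Beta(a, a) prior on the weight
w, draws = 0.5, []
for it in range(5000):
    # Allocate each observation to a component given the current weight.
    p0 = w * norm.pdf(x, 0, 1)
    p1 = (1 - w) * norm.pdf(x, 1, 1)
    z = rng.random(len(x)) < p1 / (p0 + p1)     # True -> model 1
    # Update the weight from its Beta full conditional.
    w = rng.beta(a + (~z).sum(), a + z.sum())
    if it > 1000:
        draws.append(w)

# A weight concentrating near 1 favours model 0; here it should be near 0.
print("posterior mean weight of model 0:", np.mean(draws))
```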
APA, Harvard, Vancouver, ISO, and other citation styles
14

Li, Zhonggai. „Objective Bayesian Analysis of Kullback-Liebler Divergence of two Multivariate Normal Distributions with Common Covariance Matrix and Star-shape Gaussian Graphical Model“. Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28121.

Full text of the source
Annotation:
This dissertation consists of four independent but related parts, each in a chapter. The first part is introductory; it serves as background and offers preparation for the later parts. The second part discusses two multivariate normal populations with a common covariance matrix. The goal of this part is to derive objective/non-informative priors for the parameterizations and use these priors to build up constructive random posteriors of the Kullback-Leibler (KL) divergence of the two multivariate normal populations, which is proportional to the distance between the two means, weighted by the common precision matrix. We use the Cholesky decomposition for re-parameterization of the precision matrix. The KL divergence is a true distance measurement for divergence between the two multivariate normal populations with common covariance matrix. Frequentist properties of the Bayesian procedure using these objective priors are studied through analytical and numerical tools. The third part considers the star-shape Gaussian graphical model, which is a special case of undirected Gaussian graphical models. It is a multivariate normal distribution where the variables are grouped into one "global" set of variables and several "local" sets of variables. When conditioned on the global variable set, the local variable sets are independent of each other. We adopt the Cholesky decomposition for re-parameterization of the precision matrix and derive Jeffreys' prior, reference priors, and invariant priors for the new parameterizations. The frequentist properties of the Bayesian procedure using these objective priors are also studied. The last part concentrates on the discussion of objective Bayesian analysis for the partial correlation coefficient and its application to multivariate Gaussian models.
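For reference, when the two multivariate normal populations share a common covariance matrix, the KL divergence reduces to the Mahalanobis-type form the abstract alludes to:

```latex
\mathrm{KL}\left( N(\mu_1, \Sigma) \,\middle\|\, N(\mu_2, \Sigma) \right)
  = \frac{1}{2} \, (\mu_1 - \mu_2)^{\top} \Sigma^{-1} (\mu_1 - \mu_2)
```

The trace and log-determinant terms of the general Gaussian KL formula cancel when the covariances are equal, leaving only the mean difference weighted by the common precision matrix.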
Ph. D.
APA, Harvard, Vancouver, ISO, and other citation styles
15

Walter, Gero. „Generalized Bayesian inference under prior-data conflict“. Diss., Ludwig-Maximilians-Universität München, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-170598.

Full text of the source
Annotation:
This thesis is concerned with the generalisation of Bayesian inference towards the use of imprecise or interval probability, with a focus on model behaviour in case of prior-data conflict. Bayesian inference is one of the main approaches to statistical inference. It requires expressing (subjective) knowledge on the parameter(s) of interest not incorporated in the data by a so-called prior distribution. All inferences are then based on the so-called posterior distribution, the subsumption of prior knowledge and the information in the data calculated via Bayes' Rule. The adequate choice of priors has always been an intensive matter of debate in the Bayesian literature. While a considerable part of the literature is concerned with so-called non-informative priors aiming to eliminate (or, at least, to standardise) the influence of priors on posterior inferences, inclusion of specific prior information into the model may be necessary if data are scarce, or do not contain much information about the parameter(s) of interest; also, shrinkage estimators, common in frequentist approaches, can be considered as Bayesian estimators based on informative priors. When substantial information is used to elicit the prior distribution through, e.g., an expert's assessment, and the sample size is not large enough to eliminate the influence of the prior, prior-data conflict can occur: information from outlier-free data suggests parameter values which are surprising from the viewpoint of prior information, and it may not be clear whether the prior specifications or the integrity of the data collecting method (the measurement procedure could, e.g., be systematically biased) should be questioned. In any case, such a conflict should be reflected in the posterior, leading to very cautious inferences, and most statisticians would thus expect to observe, e.g., wider credibility intervals for parameters in case of prior-data conflict. However, at least when modelling is based on conjugate priors, prior-data conflict is in most cases completely averaged out, giving a false certainty in posterior inferences. Here, imprecise or interval probability methods offer sound strategies to counter this issue, by mapping parameter uncertainty over sets of priors (and, correspondingly, posteriors) instead of over single distributions. This approach is supported by recent research in economics, risk analysis and artificial intelligence, corroborating the multi-dimensional nature of uncertainty and concluding that standard probability theory as founded on Kolmogorov's or de Finetti's framework may be too restrictive, being appropriate only for describing one dimension, namely ideal stochastic phenomena. The thesis studies how to efficiently describe sets of priors in the setting of samples from an exponential family. Models are developed that offer enough flexibility to express a wide range of (partial) prior information, give reasonably cautious inferences in case of prior-data conflict while resulting in more precise inferences when prior and data agree well, and still remain easily tractable in order to be useful for statistical practice. Applications in various areas, e.g. common-cause failure modeling and Bayesian linear regression, are explored, and the developed approach is compared to other imprecise probability models.
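The phenomenon the thesis targets can be seen numerically: with a single conjugate prior the posterior spread is unchanged under conflict, while a set of priors widens. The sketch below is our own toy example for a normal mean, with hypothetical numbers, varying the prior mean and prior strength over intervals:

```python
# Prior-data conflict: posterior-mean range under a *set* of conjugate
# priors theta ~ N(m, sigma^2 / n0), with m in [3, 5] and n0 in [1, 5].
import numpy as np

n, sigma2 = 10, 1.0
m_grid = np.linspace(3.0, 5.0, 21)
n0_grid = np.linspace(1.0, 5.0, 21)
M, N0 = np.meshgrid(m_grid, n0_grid)

for xbar, label in [(4.1, "agreement"), (12.0, "conflict")]:
    post_means = (n * xbar + N0 * M) / (n + N0)   # conjugate posterior means
    print(f"{label}: posterior-mean range "
          f"[{post_means.min():.2f}, {post_means.max():.2f}] "
          f"(width {post_means.max() - post_means.min():.2f})")

# The range is several times wider under conflict: the imprecise model
# signals the disagreement, where any single conjugate prior would not.
```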
APA, Harvard, Vancouver, ISO, and other citation styles
16

Stroeymeyt, Nathalie. „Information gathering prior to emigration in house-hunting ants“. Thesis, University of Bristol, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529832.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
17

Stewart, Alexander D. „Localisation using the appearance of prior structure“. Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:4ee889ac-e8e3-4000-ae23-a9d7f84fcd65.

Full text of the source
Annotation:
Accurate and robust localisation is a fundamental aspect of any autonomous mobile robot. However, if these are to become widespread, it must also be available at low-cost. In this thesis, we develop a new approach to localisation using monocular cameras by leveraging a coloured 3D pointcloud prior of the environment, captured previously by a survey vehicle. We make no assumptions about the external conditions during the robot's traversal relative to those experienced by the survey vehicle, nor do we make any assumptions about their relative sensor configurations. Our method uses no extracted image features. Instead, it explicitly optimises for the pose which harmonises the information, in a Shannon sense, about the appearance of the scene from the captured images conditioned on the pose, with that of the prior. We use as our objective the Normalised Information Distance (NID), a true metric for information, and demonstrate as a consequence the robustness of our localisation formulation to illumination changes, occlusions and colourspace transformations. We present how, by construction of the joint distribution of the appearance of the scene from the prior and the live imagery, the gradients of the NID can be computed and how these can be used to efficiently solve our formulation using Quasi-Newton methods. In order to reliably identify any localisation failures, we present a new classifier using the local shape of the NID about the candidate pose and demonstrate the performance gains of the complete system from its use. Finally, we detail the development of a real-time capable implementation of our approach using commodity GPUs and demonstrate that it outperforms a high-grade, commercial GPS-aided INS on 57km of driving in central Oxford, over a range of different conditions, times of day and year.
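The NID objective itself is easy to state and compute from a joint intensity histogram. A minimal sketch follows, on synthetic images rather than the thesis's GPU implementation; it also shows the robustness to monotone intensity remapping that the abstract claims:

```python
# Normalised Information Distance between two images from their joint
# intensity histogram: NID = (H(X,Y) - I(X;Y)) / H(X,Y).
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def nid(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    hxy = entropy(pxy)
    hx, hy = entropy(pxy.sum(axis=1)), entropy(pxy.sum(axis=0))
    return (2 * hxy - hx - hy) / hxy   # equals (H(X,Y) - I(X;Y)) / H(X,Y)

rng = np.random.default_rng(8)
scene = rng.random((64, 64))
same = 0.9 * scene + 0.1                # monotone intensity remap: NID stays low
other = rng.random((64, 64))            # unrelated image: NID near 1
print("aligned:", nid(scene, same), " misaligned:", nid(scene, other))
```

In the localisation setting, `scene` would be the live camera image and `same` a rendering of the coloured pointcloud prior at a candidate pose; the optimiser would minimise the NID over the pose.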
APA, Harvard, Vancouver, ISO, and other citation styles
18

Giménez, Febrer Pere Joan. „Matrix completion with prior information in reproducing kernel Hilbert spaces“. Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671718.

Full text of the source
Annotation:
In matrix completion, the objective is to recover an unknown matrix from a small subset of observed entries. Most successful methods for recovering the unknown entries are based on the assumption that the unknown full matrix has low rank. By having low rank, each of its entries is obtained as a function of a small number of coefficients, which can be accurately estimated provided that there are enough available observations. Hence, in low-rank matrix completion the estimate is given by the matrix of minimum rank that fits the observed entries. Besides low rankness, the unknown matrix might exhibit other structural properties which can be leveraged in the recovery process. In a smooth matrix, it can be expected that entries that are close in index distance will have similar values. Similarly, groups of rows or columns can be known to contain similarly valued entries according to certain relational structures. This relational information is conveyed through different means such as covariance matrices or graphs, with the inconvenience that these cannot be derived from the data matrix itself, since it is incomplete. Hence, any knowledge on how the matrix entries are related among themselves must be derived from prior information. This thesis deals with matrix completion with prior information, and presents an outlook that generalizes to many situations. In the first part, the columns of the unknown matrix are cast as graph signals with a graph known beforehand. Here, the adjacency matrix of the graph is used to calculate an initial point for a proximal gradient algorithm, in order to reduce the iterations needed to converge to a solution. Then, under the assumption that the graph signals are smooth, the graph Laplacian is incorporated into the problem formulation with the aim of enforcing smoothness on the solution. This results in an effective denoising of the observed matrix and reduced error, which is shown through theoretical analysis of the proximal gradient method coupled with Laplacian regularization, and through numerical tests. The second part of the thesis introduces a framework to exploit prior information through reproducing kernel Hilbert spaces. Since a kernel measures similarity between two points in an input set, it enables the encoding of any prior information such as feature vectors, dictionaries or connectivity on a graph. By associating each column and row of the unknown matrix with an item in a set, and defining a pair of kernels measuring similarity between columns or rows, the missing entries can be extrapolated by means of the kernel functions. A method based on kernel regression is presented, with two additional variants aimed at reducing computational cost, and an online implementation. These methods prove to be competitive with existing techniques, especially when the number of observations is very small. Furthermore, mean-square error and generalization error analyses are carried out, shedding light on the factors impacting algorithm performance. For the generalization error analysis, the focus is on the transductive case, which measures the ability of an algorithm to transfer knowledge from a set of labelled inputs to an unlabelled set. Here, bounds are derived for the proposed and existing algorithms by means of the transductive Rademacher complexity, and numerical tests confirming the theoretical findings are presented. Finally, the thesis explores the question of how to choose the observed entries of a matrix in order to minimize the recovery error of the full matrix.
A passive sampling approach is presented, which entails that no labelled inputs are needed to design the sampling distribution; only the input set and kernel functions are required. The approach is based on building the best Nyström approximation to the kernel matrix by sampling the columns according to their leverage scores, a metric that arises naturally in the theoretical analysis to find an optimal sampling distribution.
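A minimal sketch of the kernel regression component conveys the idea: row and column kernels define a product kernel on entries, and kernel ridge regression extrapolates the missing ones. The features and data below are synthetic; this is not the thesis's exact estimator:

```python
# Kernel-based matrix completion: RBF kernels over row/column features,
# kernel ridge regression on the observed entries.
import numpy as np

rng = np.random.default_rng(9)
r, c = 30, 25
u = rng.normal(size=(r, 2))                  # hypothetical row features
v = rng.normal(size=(c, 2))                  # hypothetical column features
M = u @ v.T                                   # smooth ground-truth matrix

def rbf(F, gamma=0.5):
    d2 = ((F[:, None, :] - F[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

Kr, Kc = rbf(u), rbf(v)

mask = rng.random((r, c)) < 0.3               # observe ~30% of the entries
I, J = np.where(mask)
y = M[I, J]

K_obs = Kr[np.ix_(I, I)] * Kc[np.ix_(J, J)]   # product kernel on observed pairs
alpha = np.linalg.solve(K_obs + 1e-3 * np.eye(len(y)), y)

# Extrapolate every entry: M_hat[i, j] = sum_p alpha[p] Kr[i, I_p] Kc[j, J_p].
M_hat = np.einsum("p,ip,jp->ij", alpha, Kr[:, I], Kc[:, J])
err = np.linalg.norm((M_hat - M)[~mask]) / np.linalg.norm(M[~mask])
print("relative error on unobserved entries:", err)
```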
APA, Harvard, Vancouver, ISO, and other citation styles
19

Poulsen, Rachel Lynn. „XPRIME: A Method Incorporating Expert Prior Information into Motif Exploration“. BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2083.

Full text of the source
Annotation:
One of the primary goals of active research in molecular biology is to better understand the process of transcription regulation. An important objective in understanding transcription is identifying transcription factors that directly regulate target genes. Identifying these transcription factors is a key step toward eliminating genetic diseases or disease susceptibilities that are encoded inside deoxyribonucleic acid (DNA). There is much uncertainty and variation associated with transcription factor binding sites, requiring these sites to be represented stochastically. Although typically each transcription factor prefers to bind to a specific DNA word, it can bind to different variations of that DNA word. In order to model these uncertainties, we use a Bayesian approach that allows the binding probabilities associated with the motif to vary. This project presents a new method for motif searching that uses expert prior information to scan DNA sequences for multiple known motif binding sites as well as new motifs. The method uses a mixture model to model the motifs of interest where each motif is represented by a Multinomial distribution, and Dirichlet prior distributions are placed on each motif of interest. Expert prior information is given to search for known motifs and diffuse priors are used to search for new motifs. The posterior distribution of each motif is then sampled using Markov Chain Monte Carlo (MCMC) techniques and Gibbs sampling.
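The conjugate Dirichlet-Multinomial update at the heart of such a motif model can be sketched directly. The sites and pseudo-counts below are hypothetical, and the full method would also sample unknown motif positions by MCMC:

```python
# Dirichlet posterior for a position weight matrix (PWM): expert prior
# pseudo-counts plus observed base counts at each motif position.
import numpy as np

BASES = "ACGT"
sites = ["TATAAT", "TATGAT", "TACAAT", "TATAGT"]   # hypothetical aligned sites

# Expert prior: strong pseudo-counts on the consensus, weak elsewhere.
prior = np.ones((6, 4)) * 0.5
for pos, base in enumerate("TATAAT"):
    prior[pos, BASES.index(base)] += 4.0

counts = np.zeros((6, 4))
for s in sites:
    for pos, base in enumerate(s):
        counts[pos, BASES.index(base)] += 1

posterior = prior + counts                               # Dirichlet parameters
pwm = posterior / posterior.sum(axis=1, keepdims=True)   # posterior mean PWM
print(np.round(pwm, 2))
```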
APA, Harvard, Vancouver, ISO, and other citation styles
20

Guo, Linyi. „Constructing an Informative Prior Distribution of Noises in Seasonal Adjustment“. Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41069.

Full text of the source
Annotation:
Time series data are very common in our daily life. Since they are related to time, most of them show a periodicity. The existence of this periodic influence leads to our research problem, seasonal adjustment. Seasonal adjustment is widely applied around us, especially in the areas of economics and finance. Over the last few decades, scholars around the world have made many contributions in this area, and one of the latest methods is X-13ARIMA-SEATS, which is built on ARIMA models and linear filters. On the other hand, state space modelling (abbreviated to SSM) is also a popular method to solve this problem, and researchers including J. Durbin, S. J. Koopman and A. Harvey have contributed a lot of work to it. Unlike linear filters and ARIMA models, the study of SSMs started relatively late, so they have not been studied and developed as widely for the seasonal adjustment problem. SSMs have many advantages over ARIMA-based and filter-based methods, such as flexibility, an understandable structure and the potential to do partial pooling, but in practice the default decomposition result behaves badly in some cases, producing, for example, excessively spiky trend series; on the contrary, X-13ARIMA-SEATS outputs good decomposition results for us to analyze, but it cannot be tweaked or combined as easily as generative models and behaves like a black box. In this thesis, we use Bayesian inference to combine both methods' characteristics. Simultaneously, to show the advantage of using SSMs concretely, we give a simple application to partial pooling and discuss how to apply the Bayesian analysis to partial pooling.
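A basic structural state space model of the kind being compared with X-13ARIMA-SEATS can be fit in a few lines. The sketch below assumes the statsmodels structural-model API and uses synthetic monthly data; the informative prior on the noise variances, the thesis's actual topic, would be layered on top of such a model:

```python
# Structural SSM (local linear trend + monthly seasonal) for seasonal
# adjustment, fit by maximum likelihood on synthetic data.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(10)
n = 144
trend = np.linspace(100, 130, n)
seasonal = 10 * np.sin(2 * np.pi * np.arange(n) / 12)
y = trend + seasonal + rng.normal(0, 2, n)

model = UnobservedComponents(y, level="local linear trend", seasonal=12)
res = model.fit(disp=False)

adjusted = y - res.seasonal["smoothed"]     # seasonally adjusted series
print("extracted trend (first 5):", res.level["smoothed"][:5])
print("adjusted series (first 5):", adjusted[:5])
```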
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Johnson, Robert Spencer. „Incorporation of prior information into independent component analysis of FMRI“. Thesis, University of Oxford, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711637.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Olsen, Catharina. „Causal inference and prior integration in bioinformatics using information theory“. Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209401.

Der volle Inhalt der Quelle
Annotation:
An important problem in bioinformatics is the reconstruction of gene regulatory networks from expression data. The analysis of genomic data stemming from high-throughput technologies such as microarray experiments or RNA-sequencing faces several difficulties. The first major issue is the high variable-to-sample ratio, which is due to a number of factors: a single experiment captures all genes while the number of experiments is restricted by the experiment's cost, time and patient cohort size. The second problem is that these data sets typically exhibit high amounts of noise.

Another important problem in bioinformatics is the question of how the quality of the inferred networks can be evaluated. The current best practice is a two-step procedure. In the first step, the highest scoring interactions are compared to known interactions stored in biological databases. The inferred network passes this quality assessment if there is a large overlap with the known interactions. In this case, a second step is carried out in which unknown but high scoring and thus promising new interactions are validated 'by hand' via laboratory experiments. Unfortunately, when integrating prior knowledge into the inference procedure, this validation procedure would be biased by using the same information in both the inference and the validation. Therefore, it would no longer allow an independent validation of the resulting network.

The main contribution of this thesis is a complete computational framework that uses experimental knock down data in a cross-validation scheme to both infer and validate directed networks. Its components are i) a method that integrates genomic data and prior knowledge to infer directed networks, ii) its implementation in an R/Bioconductor package and iii) a web application to retrieve prior knowledge from PubMed abstracts and biological databases. To infer directed networks from genomic data and prior knowledge, we propose a two step procedure: First, we adapt the pairwise feature selection strategy mRMR to integrate prior knowledge in order to obtain the network’s skeleton. Then for the subsequent orientation phase of the algorithm, we extend a criterion based on interaction information to include prior knowledge. The implementation of this method is available both as part of the prior retrieval tool Predictive Networks and as a stand-alone R/Bioconductor package named predictionet.

Furthermore, we propose a fully data-driven quantitative validation of such directed networks using experimental knock-down data: We start by identifying the set of genes that was truly affected by the perturbation experiment. The rationale of our validation procedure is that these truly affected genes should also be part of the perturbed gene’s childhood in the inferred network. Consequently, we can compute a performance score
Doctorat en Sciences
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Jiang, Xueyan. „Integrating prior knowledge into factorization approaches for relational learning“. Diss., Ludwig-Maximilians-Universität München, 2014. http://nbn-resolving.de/urn:nbn:de:bvb:19-178640.

Der volle Inhalt der Quelle
Annotation:
An efficient way to represent domain knowledge is relational data, where information is recorded in the form of relationships between entities. Relational data has become ubiquitous for knowledge representation because much real-world data is inherently interlinked. Some well-known examples of relational data are: the World Wide Web (WWW), a system of interlinked hypertext documents; the Linked Open Data (LOD) cloud of the Semantic Web, a collection of published data and their interlinks; and finally the Internet of Things (IoT), a network of physical objects with internal states and communications ability. Relational data has been addressed by many different machine learning approaches; the most promising ones are in the area of relational learning, which is the focus of this thesis. While conventional machine learning algorithms consider entities as independent instances randomly sampled from some statistical distribution and represented as data points in a vector space, relational learning takes into account the overall network environment when predicting the label of an entity, an attribute value of an entity or the existence of a relationship between entities. An important feature is that relational learning can exploit contextual information that is more distant in the relational network. As the volume and structural complexity of relational data increase constantly in the era of Big Data, scalability and modeling power become crucial for relational learning algorithms. Previous relational learning algorithms either provide an intuitive representation of the model, such as Inductive Logic Programming (ILP) and Markov Logic Networks (MLNs), or assume a set of latent variables to explain the observed data, such as the Infinite Hidden Relational Model (IHRM), the Infinite Relational Model (IRM) and factorization approaches. Models with intuitive representations often involve some form of structure learning, which leads to scalability problems due to a typically large search space. Factorizations are among the best-performing approaches for large-scale relational learning, since the algebraic computations can easily be parallelized and since they can exploit data sparsity. Previous factorization approaches exploit only patterns in the relational data itself, and the focus of this thesis is to investigate how additional prior information (comprehensive information), either in the form of unstructured data (e.g., texts) or structured patterns (e.g., rules), can be incorporated into factorization approaches. The goal is to enhance the predictive power of factorization approaches by involving prior knowledge in the learning, and at the same time to reduce model complexity for efficient learning. This thesis contains two main contributions: The first contribution presents a general and novel framework for predicting relationships in multirelational data using a set of matrices describing the various instantiated relations in the network. The instantiated relations, derived or learnt from prior knowledge, are integrated as entities' attributes or entity-pairs' attributes into different adjacency matrices for the learning. All the information available is then combined in an additive way. Efficient learning is achieved using an alternating least squares approach exploiting sparse matrix algebra and low-rank approximation.
As an illustration, several algorithms are proposed to include information extraction, deductive reasoning and contextual information in matrix factorizations for the Semantic Web scenario and for recommendation systems. Experiments on various data sets are conducted for each proposed algorithm to show the improvement in predictive power gained by combining matrix factorizations with prior knowledge in a modular way.

In contrast to a matrix, a 3-way tensor is a more natural representation for multirelational data where entities are connected by different types of relations. A 3-way tensor is a three-dimensional array which represents the multirelational data by using the first two dimensions for entities and the third dimension for the different types of relations. In the thesis, an analysis of the computational complexity of tensor models shows that the decomposition rank is key to the success of an efficient tensor decomposition algorithm, and that the factorization rank can be reduced by including observable patterns. Based on these theoretical considerations, the second contribution of this thesis develops a novel tensor decomposition approach - an Additive Relational Effects (ARE) model - which combines the strengths of factorization approaches and prior knowledge in an additive way to discover different relational effects in the relational data. As a result, ARE consists of a decomposition part, which derives the strong relational learning effects from RESCAL, a highly scalable tensor decomposition approach, and a Tucker 1 tensor, which integrates the prior knowledge as instantiated relations. An efficient least squares approach is proposed to compute the combined model ARE. The additive model contains weights that reflect the degree of reliability of the prior knowledge, as evaluated by the data. Experiments on several benchmark data sets show that the inclusion of prior knowledge can lead to better-performing models at a low tensor rank, with significant benefits for run-time and storage requirements. In particular, the results show that ARE outperforms state-of-the-art relational learning algorithms, including intuitive models such as MRC, an approach based on Markov Logic with structure learning; factorization approaches such as Tucker, CP, Bayesian Clustered Tensor Factorization (BCTF), the Latent Factor Model (LFM) and RESCAL; and other latent models such as the IRM. A final experiment on a Cora data set for paper topic classification shows the improvement of ARE over RESCAL in both predictive power and runtime performance, since ARE requires a significantly lower rank.
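A minimal sketch of the additive idea in the first contribution, under our own simplifying assumptions: the relation matrix is modeled as a low-rank part plus a term driven by prior-knowledge attributes, fitted by alternating least squares. The function and parameter names are hypothetical, and this is not the RESCAL/ARE implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def als_with_prior(Y, X, rank=2, lam=0.1, iters=50):
    """Fit Y ≈ U @ V.T + X @ W by alternating (ridge) least squares.

    X holds row-entity attributes derived from prior knowledge; W maps
    them additively onto the relation matrix, so the latent factors U, V
    only have to explain what the prior information cannot."""
    n, m = Y.shape
    U = rng.normal(size=(n, rank))
    V = rng.normal(size=(m, rank))
    W = np.zeros((X.shape[1], m))
    I = lam * np.eye(rank)
    for _ in range(iters):
        R = Y - X @ W                                   # residual after the prior part
        U = R @ V @ np.linalg.inv(V.T @ V + I)
        V = R.T @ U @ np.linalg.inv(U.T @ U + I)
        W = np.linalg.lstsq(X, Y - U @ V.T, rcond=None)[0]
    return U, V, W

Y = rng.normal(size=(30, 20))                           # toy relation matrix
X = rng.normal(size=(30, 3))                            # prior-knowledge attributes
U, V, W = als_with_prior(Y, X)
print(round(float(np.linalg.norm(Y - U @ V.T - X @ W)), 3))
```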
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Wang, Chunlai [Verfasser], und Bin [Akademischer Betreuer] Yang. „Object-level image segmentation with prior information / Chunlai Wang ; Betreuer: Bin Yang“. Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2019. http://d-nb.info/1195529422/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Gursoy, Dogan. „Development of a Travelers' Information Search Behavior Model“. Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/29970.

Der volle Inhalt der Quelle
Annotation:
In the dynamic global environment of today, understanding how travelers acquire information is important for marketing management decisions (Srinivasan 1990; Wilkie and Dickson 1985). For destination marketing managers, understanding the information search behavior of travelers is crucial for designing effective marketing communication campaigns, because information search represents the primary stage at which marketing can provide information and influence travelers' vacation decisions. Therefore, conceptual and empirical examinations of tourist information search behavior have a long tradition in the tourism marketing literature (Etzel and Wahlers, 1985; Fodness and Murray, 1997, 1998, 1999; Perdue, 1985; Schul and Crompton, 1983; Snepenger and Snepenger 1993; Woodside and Ronkainen, 1980). Even though several studies have examined travelers' information search behavior and the factors that are likely to affect it, they all examined travelers' prior product knowledge as a uni-dimensional construct, most often referred to as destination familiarity or previous trip experiences (Woodside and Ronkainen, 1980). However, the consumer behavior literature suggests that prior product knowledge is not a uni-dimensional construct (Alba and Hutchinson, 1987). Alba and Hutchinson (1987) propose that prior product knowledge has two major components, familiarity and expertise, and cannot be measured by a single indicator. In addition, in tourism, little research has been done on the factors that are likely to influence travelers' prior product knowledge and, therefore, their information search behavior. The purpose of this study is to examine travelers' information search behavior by studying the effects of travelers' familiarity and expertise on their information search behavior and identifying the factors that are likely to influence travelers' familiarity and expertise and their information search behavior. A travelers' information search behavior model and a measurement instrument to assess the constructs of the model were designed for this study. The model proposed that the type of information search (internal and/or external) that is likely to be utilized will be influenced by travelers' familiarity and expertise. In addition, travelers' involvement, learning, prior visits and cost of information search are proposed to influence travelers' familiarity and their information search behavior. Even though a very complex travelers' information search behavior model was proposed, only the effects of travelers' prior product knowledge (familiarity and expertise) on their information search behavior were empirically tested, due to the complex nature of the model. First, the proposed measurement scales were pretested on 224 consumers. After confirming that the proposed measures of each construct were valid and reliable, a survey of 470 consumers of travel/tourism services who reside in Virginia was conducted. Structural Equation Modeling (i.e., LISREL) analysis was performed to test the fit of the model. Results of the study confirmed that travelers' prior product knowledge has two components, familiarity and expertise, and that expertise is a function of familiarity. Both familiarity and expertise affect travelers' information search behavior. While the effect of familiarity on internal search is positive and on external search is negative, the effect of expertise on internal search is negative and on external search is positive.
The study identified a U-shaped relationship between travelers' prior product knowledge and external information search. At early stages of learning (low familiarity), travelers are likely to rely on external information sources to make their vacation decisions. As their prior product knowledge (familiarity) increases, they tend to make their vacation decisions based on what is in their memory, and their reliance on external information sources therefore decreases. However, as they learn more (become experts), they realize that they need more detailed information to make their vacation decisions. As a result, they start searching for additional external information to make their vacation decisions.
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Kubes, Milena. „Use of prior knowledge in integration of information from technical materials“. Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75962.

Der volle Inhalt der Quelle
Annotation:
This study was designed to examine the ability to use prior knowledge in text comprehension and knowledge integration. The focus of the research was on effects of different degrees of subjects' theoretical knowledge in the domain of biochemistry on their comprehension of written technical materials describing experimental procedures and results, and the ability to integrate such new text derived information with prior theoretical knowledge considered by experts to be relevant to the topic. Effects of cues on the accessibility and use of prior knowledge were also examined. Pre-test questions testing the extent of subjects' prior knowledge of photosynthesis, and a "cue article" specifically designed to prime subjects' relevant prior knowledge of photosynthesis, served as cues in the study.
A theoretical model of experts' knowledge was developed from a semantic analysis of expert-produced texts. This "expert model" was used to evaluate the extent of students' theoretical knowledge of photosynthesis, and its accessibility while applying it to the experimental tasks. College students and university graduate students served as subjects in the study, permitting a contrast of groups varying in prior knowledge of and expertise in chemistry.
Statistical analyses of data obtained from coding subjects' verbal protocols against text propositions and the expert model revealed that prior knowledge and comprehension contribute significantly to predicting knowledge integration, but they are not sufficient for this process to take place. It appears that qualitative aspects and specific characteristics of subjects' knowledge structure contribute to the process of integration, not simply the amount of accumulated knowledge. There was also evidence that there are specific inferential processes unique to knowledge integration that differentiate it from text comprehension. Cues manifested their effects on performance on comprehension tasks and integrative tasks only through their interactions with other factors. Furthermore, it was found that textual complexity placed specific constraints on students' performance: the application of textual information to the integrative tasks and students' ability to build conceptual frame representations based on text propositions depended on the complexity of the textual material. (Abstract shortened with permission of author.)
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Ng, Kwai-sang Sam. „The use of prior information for the reduction of operation anxiety“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B29726499.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Hargreaves, Brock Edward. „Sparse signal recovery : analysis and synthesis formulations with prior support information“. Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46448.

Der volle Inhalt der Quelle
Annotation:
The synthesis model for signal recovery has been the model of choice for many years in compressive sensing. Various weighting schemes that use prior support information to adjust the objective function associated with the synthesis model have been shown to improve the accuracy of signal recovery. Generally, even with no prior knowledge of the support, iterative methods can build support estimates and incorporate them into the recovery, which has also been shown to increase the speed and accuracy of the recovery. However, when the original signal is sparse with respect to a redundant dictionary (rather than an orthonormal basis) there is a counterpart to the synthesis model, namely the analysis model, which has been less popular but has recently attracted more attention. The analysis model is much less understood, and thus fewer theorems are available in the context of both non-weighted and weighted signal recovery. In this thesis, we investigate weighting in both the analysis model and the synthesis model for weighted l1 minimization. Theoretical guarantees on reconstruction and various weighting strategies for each model are discussed. We give conditions for weighted synthesis recovery with frames which do not require strict incoherency conditions; these are based on recent results for regular synthesis with frames using optimal dual l1 analysis. A novel weighting technique is introduced in the analysis case which outperforms its traditional counterparts in the case of seismic wavefield reconstruction. We also introduce a weighted split Bregman algorithm for analysis and optimal dual analysis. We then investigate these techniques on seismic data and synthetically created test data using a variety of frames.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Popov, Dmitriy. „Iteratively reweighted least squares minimization with prior information a new approach“. Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4822.

Der volle Inhalt der Quelle
Annotation:
Iteratively reweighted least squares (IRLS) algorithms provide an alternative to the more standard l1-minimization approach in compressive sensing. Daubechies et al. introduced a particularly stable version of an IRLS algorithm and rigorously proved its convergence in 2010. They did not, however, consider the case in which prior information on the support of the sparse domain of the solution is available. In 2009, Miosso et al. proposed an IRLS algorithm that makes use of this information to further reduce the number of measurements required to recover the solution with specified accuracy. Although Miosso et al. obtained a number of simulation results strongly confirming the utility of their approach, they did not rigorously establish the convergence properties of their algorithm. In this paper, we introduce prior information on the support of the sparse domain of the solution into the algorithm of Daubechies et al. We then provide a rigorous proof of the convergence of the resulting algorithm.
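A minimal sketch of IRLS with prior support information, in the spirit of the algorithms discussed above; the weight schedule, constants and problem sizes are our own choices for illustration, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(2)

def irls_with_prior_support(A, b, support, iters=30, eps=1.0):
    """Sparse recovery by IRLS: each iteration solves a weighted least
    squares problem under Ax = b. Entries listed in `support` are left
    essentially unpenalised, which is how prior support information
    enters the reweighting."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # minimum-norm start
    for _ in range(iters):
        w = 1.0 / np.sqrt(x ** 2 + eps ** 2)        # standard IRLS weights
        w[np.asarray(support)] = 1e-6               # prior support: almost free
        D = np.diag(1.0 / w)
        # minimise sum_i w_i * x_i^2 subject to A x = b (closed form)
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)
        eps = max(eps / 10.0, 1e-9)
    return x

n, m, k = 60, 20, 4
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.normal(0, 3, k)
A = rng.normal(size=(m, n))
b = A @ x_true
x_hat = irls_with_prior_support(A, b, support=idx[:2])  # two indices known a priori
print(round(float(np.abs(x_hat - x_true).max()), 4))
```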
ID: 030646220; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (M.S.)--University of Central Florida, 2011; Includes bibliographical references (p. 37-38).
M.S.
Masters
Mathematics
Sciences
Mathematical Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Song, Qi. „Globally optimal image segmentation incorporating region, shape prior and context information“. Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/2989.

Der volle Inhalt der Quelle
Annotation:
Accurate image segmentation is a challenging problem in the presence of weak boundary evidence, large object deformation, and serious mutual influence between multiple objects. In this thesis, we propose novel approaches to multi-object segmentation which incorporate region, shape and context prior information to help overcome the stated challenges. The methods are based on a 3-D graph-theoretic framework. The main idea is to formulate the image segmentation problem as a discrete energy minimization problem. The prior region, shape and context information is incorporated by adding additional terms to our energy function, which are enforced using an arc-weighted graph representation. In particular, for optimal surface segmentation with region information, a ratio-form energy is employed, which contains both a boundary term and a regional term. To incorporate the shape and context prior information for multi-surface segmentation, additional shape-prior and context-prior terms are added, which penalize local shape change and local context change with respect to the prior shape model and the prior context model. We also propose a novel approach for the segmentation of terrain-like surfaces and regions with arbitrary topology. The context information is encoded by adding an additional context term to the energy. Finally, a co-segmentation framework is proposed for tumor segmentation in PET-CT images, which makes use of the information from both modalities. The globally optimal solution for the segmentation of multiple objects can be obtained by computing a single maximum flow in low-order polynomial time. The proposed method was validated on a variety of applications, including aorta segmentation in MRI images, intraretinal layer segmentation of OCT images, bladder-prostate segmentation in CT images, image resizing, robust delineation of pulmonary tumors in MVCBCT images, and co-segmentation of tumors in PET-CT images. The results demonstrated the applicability of the proposed approaches.
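As a toy illustration of casting segmentation as an energy minimized by a single maximum flow, the sketch below segments a 1-D signal with unary region costs and pairwise smoothness via an s-t min cut. It uses networkx and is far simpler than the thesis's arc-weighted 3-D framework; all names and values are ours.

```python
import networkx as nx
import numpy as np

def binary_segment(values, fg_mean, bg_mean, smooth=2.0):
    """Cast binary segmentation as discrete energy minimisation and solve it
    with one max-flow/min-cut: t-links carry the unary region costs, n-links
    carry the pairwise smoothness penalty (Boykov-Jolly style)."""
    G = nx.DiGraph()
    n = len(values)
    for i, v in enumerate(values):
        G.add_edge("s", i, capacity=(v - bg_mean) ** 2)   # cost of labelling i background
        G.add_edge(i, "t", capacity=(v - fg_mean) ** 2)   # cost of labelling i foreground
    for i in range(n - 1):                                # smoothness between neighbours
        G.add_edge(i, i + 1, capacity=smooth)
        G.add_edge(i + 1, i, capacity=smooth)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    return [1 if i in source_side else 0 for i in range(n)]

rng = np.random.default_rng(6)
signal = np.r_[rng.normal(5, 0.5, 10), rng.normal(1, 0.5, 10)]
print(binary_segment(signal, fg_mean=5.0, bg_mean=1.0))
```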
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Fronczyk, Kassandra M. „Development of Informative Priors in Microarray Studies“. Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2031.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Xu, Jian. „Iterative Aggregation of Bayesian Networks Incorporating Prior Knowledge“. Miami University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=miami1105563019.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Macenko, Marc D. „Eigenimage-based Robust Image Segmentation Using Level Sets“. Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1155841672.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Alexander, Richard David. „Insider information trading analysis of Defense companies prior to major contract awards“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA277238.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Sirisuk, Phaophak. „Transformation methods and partial prior information for blind system identification and equalisation“. Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326273.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Basevi, Hector Richard Abraham. „Use of prior information and probabilistic image reconstruction for optical tomographic imaging“. Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/5876/.

Der volle Inhalt der Quelle
Annotation:
Preclinical bioluminescence tomographic reconstruction is underdetermined. This work addresses the use of prior information in bioluminescence tomography to improve image acquisition, reconstruction, and analysis. A structured light surface metrology method was developed to measure surface geometry and enable robust and automatic integration of mirrors into the measurement process. A mouse phantom was imaged and accuracy was measured at 0.2mm with excellent surface coverage. A sparsity-regularised reconstruction algorithm was developed to use instrument noise statistics to automatically determine the stopping point of reconstruction. It was applied to in silico and in simulacra data and successfully reconstructed and resolved two separate luminescent sources within a plastic mouse phantom. A Bayesian framework was constructed that incorporated bioluminescence properties and instrument properties. Distribution expectations and standard deviations were estimated, providing reconstructions and measures of reconstruction uncertainty. The reconstructions showed superior performance when applied to in simulacra data compared to the sparsity-based algorithm. The information content of measurements using different sets of wavelengths was quantified using the Bayesian framework via mutual information and applied to an in silico problem. Significant differences in information content were observed and comparison against a condition number-based approach indicated subtly different results.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Etta-AkinAina, Florence Ebam. „Notetaking in lectures : the relationship between prior knowledge, information uptake and comprehension“. Thesis, University College London (University of London), 1988. http://discovery.ucl.ac.uk/10007370/.

Der volle Inhalt der Quelle
Annotation:
Notetaking during lectures has mainly been investigated using an input-output procedure where particular subject variables are related first to notes-as-product, then to comprehension test scores. In contrast, the purpose of this thesis was to look at notetaking as a process rather than a product and to discover factors that influence the process. The first, orienting study took a fairly traditional approach of training students in the use of two strategies, summarizing and networking, hypothesized to improve notetaking activity. Training was administered for a period of six weeks. Results indicated a main effect for mathematical ability but not for training. Differences in mean scores for training methods were non-significant and not in the hypothesized direction (networking > summarizing > control). The next study was a first approximation to a true processing analysis. Students' self-estimates of prior knowledge, as well as the volume of their notetaking, were linked to strategic and tactical processing variables such as whether lecture material was written down as heard or translated into own terms, whether they wrote only important points, and so on. This pattern was then further related to self-estimates of lecture comprehension. The pattern of relationship among processes, and between these processes, note volume and comprehension, varied with differing amounts of prior knowledge and with language ability. The third study was more ambitious in its approach to processing variables. A videotaped lecture was segmented into idea units with a pause between each unit. For each segment, students took notes as well as recording their understanding of it. A regression model for the data shows that while self-estimated prior knowledge appeared related to outcome variables (e.g. comprehension), it did not relate to understanding of the lecture as it was being delivered. A more detailed analysis by segments revealed that notes reflected the status of transmitted information with regard to importance and the level of understanding achieved for specific pieces of information. Mean lecture comprehension accounted for the largest percentage of variance in the number of words in notes. Findings are discussed with respect to contemporary theories of notetaking and comprehension. A cognitive model of notetaking detailing how the various processes are instantiated and related is also offered.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Rojas, Temistocles Simon. „Controlling realism and uncertainty in reservoir models using intelligent sedimentological prior information“. Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2751.

Der volle Inhalt der Quelle
Annotation:
Forecasting reservoir production has a large associated uncertainty, since it is the final part of a very complex process based on sparse and indirect data measurements. One of the methodologies used in the oil industry to predict reservoir production is based on Bayes' theorem. Applied to reservoir forecasting, Bayes' theorem samples parameters from a prior understanding of the uncertainty to generate reservoir models, and updates this prior information by comparing reservoir production data with the model production response. In automatic history matching it is challenging to generate reservoir models that preserve geological realism (i.e. reservoir models with geological features that have been observed in nature). One way to control the geological realism of reservoir models is to control the realism of the geological prior information. The aim of this thesis is to encapsulate sedimentological information in order to build prior information that can control the geological realism of the history-matched models. This "intelligent" prior information is introduced into the automatic history-matching framework by rejecting geologically unrealistic reservoir models. Machine Learning Techniques (MLT) were used to build realistic sedimentological prior information models. Another goal of this thesis was to include geological parameters in the automatic history-matching framework that have an impact on reservoir model performance: vertical variation of facies proportions, connectivity of geobodies, and the use of multiple training images as a source of realistic sedimentological prior information. The main outcome of this thesis is that the use of "intelligent" sedimentological prior information guarantees the realism of reservoir models and reduces computing time and uncertainty in reservoir production prediction.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Faye, Papa Abdoulaye. „Planification et analyse de données spatio-temporelles“. Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22638/document.

Der volle Inhalt der Quelle
Annotation:
La Modélisation spatio-temporelle permet la prédiction d’une variable régionalisée à des sites non observés du domaine d’étude, basée sur l’observation de cette variable en quelques sites du domaine à différents temps t donnés. Dans cette thèse, l’approche que nous avons proposé consiste à coupler des modèles numériques et statistiques. En effet en privilégiant l’approche bayésienne nous avons combiné les différentes sources d’information : l’information spatiale apportée par les observations, l’information temporelle apportée par la boîte noire ainsi que l’information a priori connue du phénomène. Ce qui permet une meilleure prédiction et une bonne quantification de l’incertitude sur la prédiction. Nous avons aussi proposé un nouveau critère d’optimalité de plans d’expérience incorporant d’une part le contrôle de l’incertitude en chaque point du domaine et d’autre part la valeur espérée du phénomène
Spatio-temporal modeling allows the prediction of a regionalized variable at unobserved points of a given field, based on observations of this variable at some points of the field at different times. In this thesis, we proposed an approach which couples numerical and statistical models. Indeed, using Bayesian methods, we combined the different sources of information: spatial information provided by the observations, temporal information provided by the black box, and the prior information on the phenomenon of interest. This approach allowed us to obtain a good prediction of the variable of interest and a good quantification of the uncertainty on this prediction. We also proposed a new method to construct experimental designs by establishing an optimality criterion based on the uncertainty and the expected value of the phenomenon.
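The precision-weighted fusion at the heart of such Bayesian combinations can be illustrated in a few lines. In this toy Gaussian sketch (our own simplification, not the thesis's spatio-temporal model), the posterior precision is the sum of the source precisions; all numbers are invented.

```python
import numpy as np

def combine_gaussian_sources(means, variances):
    """Precision-weighted fusion of independent Gaussian information sources
    about one quantity: posterior precision is the sum of the precisions and
    the posterior mean is the precision-weighted average of the means."""
    prec = 1.0 / np.asarray(variances, dtype=float)
    post_var = 1.0 / prec.sum()
    post_mean = post_var * float((prec * np.asarray(means)).sum())
    return post_mean, post_var

# One value each from prior knowledge, a field observation, and the
# numerical "black box" model (all numbers invented for illustration).
m, v = combine_gaussian_sources(means=[20.0, 23.0, 22.0],
                                variances=[25.0, 4.0, 9.0])
print(round(m, 2), round(v, 2))
```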
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Ghanbari, Mahsa [Verfasser]. „Association measures and prior information in the reconstruction of gene networks / Mahsa Ghanbari“. Berlin : Freie Universität Berlin, 2016. http://d-nb.info/1104733757/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Wirfält, Petter. „Exploiting Prior Information in Parametric Estimation Problems for Multi-Channel Signal Processing Applications“. Doctoral thesis, KTH, Signalbehandling, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134034.

Der volle Inhalt der Quelle
Annotation:
This thesis addresses a number of problems all related to parameter estimation in sensor array processing. The unifying theme is that some of these parameters are known before the measurements are acquired. We thus study how to improve the estimation of the unknown parameters by incorporating the knowledge of the known parameters; exploiting this knowledge successfully has the potential to dramatically improve the accuracy of the estimates. For covariance matrix estimation, we exploit the fact that the true covariance matrix is Kronecker and Toeplitz structured. We then devise a method to ensure that the estimates possess this structure. Additionally, we can show that our proposed estimator has better performance than the state-of-the-art when the number of samples is low, and that it is also efficient in the sense that the variance of the estimates attains the Cramér-Rao lower bound (CRB). In the direction of arrival (DOA) scenario, there are different types of prior information; first, we study the case when the location of some of the emitters in the scene is known. We then turn to cases with additional prior information, i.e. when it is known that some (or all) of the source signals are uncorrelated. As it turns out, knowledge of some DOAs combined with this latter form of prior knowledge is especially beneficial, giving estimators that are dramatically more accurate than the state-of-the-art. We also derive the corresponding CRBs, and show that under quite mild assumptions the estimators are efficient. Finally, we also investigate the frequency estimation scenario, where the data is a one-dimensional temporal sequence which we model as a spatial multi-sensor response. The line-frequency estimation problem is studied when some of the frequencies are known; through experimental data we show that our approach can be beneficial. The second frequency estimation paper explores the analysis of pulse spin-locking data sequences, which are encountered in nuclear resonance experiments. By introducing a novel modeling technique for such data, we develop a method for estimating the interesting parameters of the model. The technique is significantly faster than previously available methods, and provides accurate estimation results.
Denna doktorsavhandling behandlar parameterestimeringsproblem inom flerkanals-signalbehandling. Den gemensamma förutsättningen för dessa problem är att det finns information om de sökta parametrarna redan innan data analyseras; tanken är att på ett så finurligt sätt som möjligt använda denna kunskap för att förbättra skattningarna av de okända parametrarna. I en uppsats studeras kovariansmatrisskattning när det är känt att den sanna kovariansmatrisen har Kronecker- och Toeplitz-struktur. Baserat på denna kunskap utvecklar vi en metod som säkerställer att även skattningarna har denna struktur, och vi kan visa att den föreslagna skattaren har bättre prestanda än existerande metoder. Vi kan också visa att skattarens varians når Cramér-Rao-gränsen (CRB). Vi studerar vidare olika sorters förhandskunskap i riktningsbestämningsscenariot: först i det fall då riktningarna till ett antal av sändarna är kända. Sedan undersöker vi fallet då vi även vet något om kovariansen mellan de mottagna signalerna, nämligen att vissa (eller alla) signaler är okorrelerade. Det visar sig att just kombinationen av förkunskap om både korrelation och riktning är speciellt betydelsefull, och genom att utnyttja denna kunskap på rätt sätt kan vi skapa skattare som är mycket noggrannare än tidigare möjligt. Vi härleder även CRB för fall med denna förhandskunskap, och vi kan visa att de föreslagna skattarna är effektiva. Slutligen behandlar vi även frekvensskattning. I detta problem är data en en-dimensionell temporal sekvens som vi modellerar som en spatiell fler-kanalssignal. Fördelen med denna modelleringsstrategi är att vi kan använda liknande metoder i estimatorerna som vid sensor-signalbehandlingsproblemen. Vi utnyttjar återigen förhandskunskap om källsignalerna: i ett av bidragen är antagandet att vissa frekvenser är kända, och vi modifierar en existerande metod för att ta hänsyn till denna kunskap. Genom att tillämpa den föreslagna metoden på experimentell data visar vi metodens användbarhet. Det andra bidraget inom detta område studerar data som erhålls från exempelvis experiment inom kärnmagnetisk resonans. Vi introducerar en ny modelleringsmetod för sådan data och utvecklar en algoritm för att skatta de önskade parametrarna i denna modell. Vår algoritm är betydligt snabbare än existerande metoder, och skattningarna är tillräckligt noggranna för typiska tillämpningar.
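One simple way to force Toeplitz structure onto a covariance estimate, shown below as a stand-in for the structured estimation discussed above, is to project the sample covariance onto the Toeplitz set by averaging its diagonals. This sketch shows that projection only; it is not the thesis's Kronecker-Toeplitz estimator.

```python
import numpy as np

def toeplitz_project(S):
    """Least-squares projection of a covariance estimate onto the set of
    (Hermitian) Toeplitz matrices: average each diagonal of S."""
    n = S.shape[0]
    c = [np.mean(np.diagonal(S, k)) for k in range(n)]
    T = np.zeros_like(S)
    for k in range(n):
        T += np.diag(np.full(n - k, c[k]), k)
        if k:
            T += np.diag(np.full(n - k, np.conj(c[k])), -k)
    return T

rng = np.random.default_rng(3)
X = rng.normal(size=(5, 40))        # 5 sensors, 40 snapshots
S = X @ X.T / 40                    # sample covariance, not Toeplitz in general
print(np.round(toeplitz_project(S), 2))
```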


APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Lurz, Kristina [Verfasser], und Rainer [Gutachter] Göb. „Confidence and Prediction under Covariates and Prior Information / Kristina Lurz. Gutachter: Rainer Göb“. Würzburg : Universität Würzburg, 2015. http://d-nb.info/111178423X/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Tso, Chak Hau Michael. „The Relative Importance of Head, Flux and Prior Information in Hydraulic Tomography Analysis“. Thesis, The University of Arizona, 2015. http://hdl.handle.net/10150/556860.

Der volle Inhalt der Quelle
Annotation:
Using cross-correlation analysis, we demonstrate that flux measurements at observation locations during hydraulic tomography (HT) surveys carry non-redundant information about heterogeneity that is complementary to head measurements at the same locations. We then hypothesize that a joint interpretation of head and flux data can enhance the resolution of HT estimates. Subsequently, we use numerical experiments to test this hypothesis and investigate the impact of stationary and non-stationary hydraulic conductivity fields, and of prior information such as correlation lengths and initial mean models (uniform or distributed means), on HT estimates. We find that the flux and head data from HT already carry sufficient information about the heterogeneity characteristics of aquifers. While prior information (such as a uniform mean or layered means, and correlation scales) can be useful, its influence on the estimates is limited as more non-redundant data are used in the HT analysis (see Yeh and Liu [2000]). Lastly, some recommendations for conducting HT surveys and analysis are presented.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Stegmaier, Johannes [Verfasser]. „New Methods to Improve Large-Scale Microscopy Image Analysis with Prior Knowledge and Uncertainty / Johannes Stegmaier“. Karlsruhe : KIT Scientific Publishing, 2017. http://www.ksp.kit.edu.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Georgiou, Christina Nefeli. „Constructing informative Bayesian priors to improve SLAM map quality“. Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/17167/.

Der volle Inhalt der Quelle
Annotation:
The problem of Simultaneous Localisation And Mapping (SLAM) has been widely researched and has been of particular interest in recent years, with robots and self-driving cars becoming ubiquitous. SLAM solutions to date have aimed to produce faster, more robust solutions that yield consistent maps by improving the filtering algorithms used, introducing better sensors, more efficient map representations or improved motion estimates. Whilst performing well in simplified scenarios, many of these solutions perform poorly in challenging real-life scenarios. It is therefore important to produce SLAM solutions that can perform well even when using limited computational resources and performing a quick exploration for time-critical operations such as Urban Search And Rescue missions. In order to address this problem, this thesis proposes the construction of informative Bayesian priors to improve performance without adding to the computational complexity of the SLAM algorithm. Indoor occupancy grid SLAM is used as a case study to demonstrate this concept, and architectural drawings are used as a source of prior information. The use of prior information to improve the performance of robotic systems has been successful in applications such as visual odometry, self-driving car navigation and object recognition. However, none of these solutions leverage prior information to construct Bayesian priors that can be used in recursive map estimation. This thesis addresses this problem and proposes a novel method to process architectural drawings and floor plans to extract structural information. A study is then conducted to identify optimal prior values of occupancy to assign to extracted walls and empty space. A novel approach is proposed to assess the quality of maps produced using different priors, and a multi-objective optimisation is used to identify Pareto-optimal values. The proposed informative priors are found to perform better than the commonly used non-informative prior, yielding an increase of over 20% in the F2 metric, without adding to the computational complexity of the SLAM algorithm.
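The informative-prior idea can be sketched directly in log-odds form: instead of initializing every occupancy cell at the non-informative 0.5, cells under drawn walls and inside known rooms start at prior values, and sensor updates accumulate on top. The grid, the 0.9/0.2 prior values and the sensor model below are hypothetical placeholders, not the Pareto-optimal priors identified in the thesis.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

# A tiny occupancy grid initialised from a floor plan instead of the usual
# non-informative 0.5 everywhere: cells under drawn walls start near-occupied,
# cells inside rooms start near-free.
prior = np.full((6, 6), 0.5)
prior[0, :] = prior[-1, :] = prior[:, 0] = prior[:, -1] = 0.9   # walls
prior[2:4, 2:4] = 0.2                                           # known free space

L = logodds(prior)            # occupancy-grid SLAM works in log-odds

# Bayesian update of one wall cell from a (hypothetical) sensor that reports
# "occupied" correctly with probability 0.7: log-odds simply add.
L[3, 0] += logodds(0.7)
posterior = 1.0 - 1.0 / (1.0 + np.exp(L))
print(round(float(posterior[3, 0]), 3))
```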
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Reyland, John M. „Towards Wiener system identification with minimum a priori information“. Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1066.

Der volle Inhalt der Quelle
Annotation:
The ability to construct accurate mathematical models of real systems is an important part of control systems design. A block-oriented system identification approach models the unknown system as interconnected linear and nonlinear blocks. The subject of this thesis is a particular configuration of these blocks referred to as a Wiener model. The Wiener model studied here is a cascade of a single-input linear block followed by a nonlinear block which then provides a single output. We assume that the signal between the linear and nonlinear blocks is always unknown; only the Wiener model input and output can be sampled. This thesis investigates identification of the linear transfer function in a Wiener model. The question examined throughout the thesis is: given some small amount of a priori information on the nonlinear part, what can we determine about the linear part? Examples of minimal a priori information are knowledge of only one point on the nonlinear transfer characteristic, or simply that the transfer characteristic is monotonic over a certain range. Nonlinear blocks with and without memory are discussed. Several algorithms for identifying the linear transfer function of a block-oriented Wiener system are presented and analyzed in detail. The linear blocks identified have both finite and infinite impulse responses (i.e. FIR and IIR). Each algorithm has a carefully defined set of minimal a priori information on the nonlinearity. Also, each approach has a minimally restrictive set of assumptions on the input excitation. The universal applicability of each algorithm is established by providing rigorous proofs of identifiability and, in some cases, convergence. Extensive simulation testing of each algorithm has been performed. Simulation techniques and results are discussed in detail.
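For intuition, a classical result in this setting is Bussgang's theorem: with Gaussian excitation, the input-output cross-correlation of a Wiener system is proportional to the linear block's impulse response, so the linear part is identifiable up to a scale factor that is absorbed by the nonlinearity. The sketch below demonstrates this textbook estimator; it is not one of the thesis's algorithms, and the filter and nonlinearity are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
h = np.array([1.0, 0.6, -0.3, 0.1])     # "unknown" FIR linear block
f = np.tanh                              # "unknown" static nonlinearity

N = 200_000
u = rng.normal(size=N)                   # white Gaussian excitation
y = f(np.convolve(u, h)[:N])             # Wiener model: linear block, then nonlinearity

# Bussgang: for Gaussian input, E[y(t) u(t-k)] is proportional to h[k], so the
# cross-correlation recovers the linear block up to one unknown scale factor.
h_hat = np.array([np.mean(y[k:] * u[:N - k]) for k in range(len(h))])
h_hat *= h[0] / h_hat[0]                 # fix the scale (otherwise absorbed by f)
print(np.round(h_hat, 3))
```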
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Kann, Lennart [Verfasser], und Rainer [Gutachter] Göb. „Statistical Failure Prediction with an Account for Prior Information / Lennart Kann ; Gutachter: Rainer Göb“. Würzburg : Universität Würzburg, 2020. http://d-nb.info/1211959651/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Zhou, Wei. „XPRIME-EM: Eliciting Expert Prior Information for Motif Exploration Using the Expectation-Maximization Algorithm“. BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3589.

Der volle Inhalt der Quelle
Annotation:
Understanding the possible mechanisms of gene transcription regulation is a primary challenge for current molecular biologists. Identifying transcription factor binding sites (TFBSs), also called DNA motifs, is an important step in understanding these mechanisms. Furthermore, many human diseases are attributed to mutations in TFBSs, which makes identifying those DNA motifs significant for disease treatment. Uncertainty and variation in the specific nucleotides of TFBSs present difficulties for DNA motif searching. In this project, we present an algorithm, XPRIME-EM (Eliciting EXpert PRior Information for Motif Exploration using the Expectation-Maximization Algorithm), which can discover known and de novo (unknown) DNA motifs simultaneously from a collection of DNA sequences using a modified EM algorithm, and which describes the variable nature of DNA motifs using a position-specific weight matrix (PWM). XPRIME improves the efficiency of locating and describing motifs by preventing the overlap of multiple motifs, a phenomenon termed a phase shift, and generates stronger motifs by considering the correlations between nucleotides at different positions within each motif. Moreover, a Bayesian formulation of the XPRIME algorithm allows prior information for motifs of interest, elicited from the literature and from experiments, to be incorporated into motif searching. We are the first research team to incorporate human genome-wide nucleosome occupancy information into PWM-based DNA motif searching.
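A bare-bones EM for a one-motif-occurrence-per-sequence model, with Dirichlet pseudocounts standing in for the elicited prior, can be written compactly. This MEME-style sketch is our own simplification of the approach the abstract describes; the toy sequences, motif width and pseudocount values are hypothetical.

```python
import numpy as np

B = {"A": 0, "C": 1, "G": 2, "T": 3}

def em_motif(seqs, w, iters=50, pseudo=0.5):
    """EM for a one-motif-occurrence-per-sequence model: the E-step computes a
    posterior over motif start positions, the M-step re-estimates the PWM from
    expected counts plus Dirichlet pseudocounts (the prior)."""
    rng = np.random.default_rng(5)
    X = [np.array([B[c] for c in s]) for s in seqs]
    bg = np.bincount(np.concatenate(X), minlength=4) + 1.0
    bg = bg / bg.sum()                               # background base frequencies
    pwm = rng.dirichlet(np.ones(4), size=w)          # random starting PWM
    for _ in range(iters):
        counts = np.full((w, 4), pseudo)             # prior pseudocounts
        for x in X:
            n = len(x) - w + 1
            # E-step: posterior over motif start positions in this sequence
            ll = np.array([np.sum(np.log(pwm[np.arange(w), x[z:z + w]])
                                  - np.log(bg[x[z:z + w]])) for z in range(n)])
            p = np.exp(ll - ll.max())
            p /= p.sum()
            # M-step accumulation: expected base counts per motif column
            for z in range(n):
                counts[np.arange(w), x[z:z + w]] += p[z]
        pwm = counts / counts.sum(axis=1, keepdims=True)
    return pwm

# Toy sequences, each containing one copy of the planted motif "ACGT".
seqs = ["GGACGTAG", "TTACGTCA", "ACGTGGTT", "CCTACGTA"]
print(em_motif(seqs, w=4).round(2))
```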
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Bai, Junjie. „Efficient optimization for labeling problems with prior information: applications to natural and medical images“. Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/3045.

Der volle Inhalt der Quelle
Annotation:
The labeling problem, due to its versatile modeling ability, is widely used in various image analysis tasks. In practice, certain prior information is often available to be embedded in the model to increase accuracy and robustness. However, it is not always straightforward to formulate the problem so that the prior information is correctly incorporated. It is even more challenging to ensure that the proposed model admits efficient algorithms for finding the globally optimal solution. In this dissertation, a series of natural and medical image segmentation tasks are modeled as labeling problems. Each proposed model incorporates different useful prior information, including ordering constraints between certain labels, soft user input enforcement, multi-scale context between over-segmented regions and original voxels, multi-modality context priors, location context between multiple modalities, a star-shape prior, and a gradient vector flow shape prior. With judicious exploitation of each problem's intricate structure, efficient and exact algorithms are designed for all proposed models. The efficient computation allows the proposed models to be applied to large natural and medical image datasets with a small memory footprint and reasonable running time. The global optimality guarantee makes the methods robust to local noise and easy to debug. The proposed models and algorithms are validated in multiple experiments, using both natural and medical images. Promising and competitive results are shown in comparison with the state-of-the-art.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Schabert, Antek. „Integrating the use of prior information into Graph-SLAM with NDTregistration for loop detection“. Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-61379.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen