Dissertations / Theses on the topic 'Inference'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Inference.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Calabrese, Chris M. Eng Massachusetts Institute of Technology. "Distributed inference : combining variational inference with distributed computing." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85407.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 95-97).
The study of inference techniques and their use for solving complicated models has taken off in recent years, but as the models we attempt to solve become more complex, there is a worry that our inference techniques will be unable to produce results. Many problems are difficult to solve using current approaches because it takes too long for our implementations to converge on useful values. While coming up with more efficient inference algorithms may be the answer, we believe that an alternative approach to solving this complicated problem involves leveraging the computation power of multiple processors or machines with existing inference algorithms. This thesis describes the design and implementation of such a system by combining a variational inference implementation (Variational Message Passing) with a high-level distributed framework (Graphlab) and demonstrates that inference is performed faster on a few large graphical models when using this system.
by Chris Calabrese.
M. Eng.
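
A minimal sketch of the idea of distributing inference across workers, assuming a toy conjugate Gaussian mean model and synthetic data (it is not the thesis's Variational Message Passing / GraphLab system): per-shard sufficient statistics are computed in parallel and then combined; in this conjugate case the mean-field solution coincides with the exact posterior.

    import numpy as np
    from multiprocessing import Pool

    def shard_stats(x):
        # Sufficient statistics of one data shard for a Gaussian likelihood with known noise.
        return x.sum(), x.size

    def parallel_posterior(shards, prior_mean=0.0, prior_prec=1.0, noise_prec=1.0):
        # Compute per-shard statistics in parallel, then combine them into the posterior
        # over the unknown mean (conjugate model, so this equals the mean-field solution).
        with Pool() as pool:
            stats = pool.map(shard_stats, shards)
        total_sum, n = (sum(v) for v in zip(*stats))
        post_prec = prior_prec + noise_prec * n
        post_mean = (prior_prec * prior_mean + noise_prec * total_sum) / post_prec
        return post_mean, post_prec

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        shards = np.array_split(rng.normal(2.0, 1.0, size=10_000), 8)
        print(parallel_posterior(shards))  # posterior mean should be close to 2.0
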
2

Miller, J. Glenn (James). "Predictive inference." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/24294.

3

Cleave, Nancy. "Ecological inference." Thesis, University of Liverpool, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304826.

4

Henke, Joseph D. "Visualizing inference." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91826.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 75-76).
Common Sense Inference is an increasingly attractive technique to make computer interfaces more in touch with how human users think. However, the results of the inference process are often hard to interpret and evaluate. Visualization has been successful in many other fields of science, but to date it has not been used much for visualizing the results of inference. This thesis presents Alar, an interface which allows dynamic exploration of the results of the inference process. It enables users to detect errors in the input data and fine tune how liberal or conservative the inference should be. It accomplishes this through novel extensions to the AnalogySpace framework for inference and visualizing concepts and even assertions as nodes in a graph, clustered by their semantic relatedness. A usability study was performed and the results show users were able to successfully use Alar to determine the cause of an incorrect inference.
by Joseph D. Henke.
M. Eng.
5

Zhai, Yongliang. "Stochastic processes, statistical inference and efficient algorithms for phylogenetic inference." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/59095.

Abstract:
Phylogenetic inference aims to reconstruct the evolutionary history of populations or species. With the rapid expansion of genetic data available, statistical methods play an increasingly important role in phylogenetic inference by analyzing genetic variation of observed data collected at current populations or species. In this thesis, we develop new evolutionary models, statistical inference methods and efficient algorithms for reconstructing phylogenetic trees at the level of populations using single nucleotide polymorphism data and at the level of species using multiple sequence alignment data. At the level of populations, we introduce a new inference method to estimate evolutionary distances for any two populations to their most recent common ancestral population using single-nucleotide polymorphism allele frequencies. Our method is based on a new evolutionary model for both drift and fixation. To scale this method to large numbers of populations, we introduce the asymmetric neighbor-joining algorithm, an efficient method for reconstructing rooted bifurcating trees. Asymmetric neighbor-joining provides a scalable rooting method applicable to any non-reversible evolutionary modelling setup. We explore the statistical properties of asymmetric neighbor-joining, and demonstrate its accuracy on synthetic data. We validate our method by reconstructing rooted phylogenetic trees from the Human Genome Diversity Panel data. Our results are obtained without using an outgroup, and are consistent with the prevalent recent single-origin model of human migration. At the level of species, we introduce a continuous time stochastic process, the geometric Poisson indel process, that allows indel rates to vary across sites. We design an efficient algorithm for computing the probability of a given multiple sequence alignment based on our new indel model. We describe a method to construct phylogeny estimates from a fixed alignment using neighbor-joining. Using simulation studies, we show that ignoring indel rate variation may have a detrimental effect on the accuracy of the inferred phylogenies, and that our proposed method can sidestep this issue by inferring latent indel rate categories. We also show that our phylogenetic inference method may be more stable to taxa subsampling in a real data experiment compared to some existing methods that either ignore indels or ignore indel rate variation.
Science, Faculty of
Statistics, Department of
Graduate
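
A minimal sketch of one distance-based ingredient of phylogenetic reconstruction, assuming toy aligned sequences (this is not the thesis's asymmetric neighbor-joining or indel model): Jukes-Cantor corrected distances followed by a crude average-linkage (UPGMA-style) tree.

    import numpy as np
    from itertools import combinations
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import squareform

    def jc69(a, b):
        # Jukes-Cantor corrected distance from the proportion of differing sites.
        p = np.mean([x != y for x, y in zip(a, b)])
        return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

    seqs = {"s1": "ACGTACGTAC", "s2": "ACGTACGTTC", "s3": "ACGAACGTTC", "s4": "TCGAACCTTC"}
    names = list(seqs)
    d = np.zeros((len(names), len(names)))
    for i, j in combinations(range(len(names)), 2):
        d[i, j] = d[j, i] = jc69(seqs[names[i]], seqs[names[j]])

    tree = linkage(squareform(d), method="average")  # UPGMA-like, not neighbor-joining
    print(tree)
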
6

Wu, Jianrong. "Asymptotic likelihood inference." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq41050.pdf.

7

Morris, Quaid Donald Jozef 1972. "Practical probabilistic inference." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29989.

Abstract:
Thesis (Ph. D. in Computational Neuroscience)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2003.
Includes bibliographical references (leaves 157-163).
The design and use of expert systems for medical diagnosis remains an attractive goal. One such system, the Quick Medical Reference, Decision Theoretic (QMR-DT), is based on a Bayesian network. This very large-scale network models the appearance and manifestation of disease and has approximately 600 unobservable nodes and 4000 observable nodes that represent, respectively, the presence and measurable manifestation of disease in a patient. Exact inference of posterior distributions over the disease nodes is extremely intractable using generic algorithms. Inference can be made much more efficient by exploiting the QMR-DT's unique structure. Indeed, tailor-made inference algorithms for the QMR-DT efficiently generate exact disease posterior marginals for some diagnostic problems and accurate approximate posteriors for others. In this thesis, I identify a risk with using the QMR-DT disease posteriors for medical diagnosis. Specifically, I show that patients and physicians conspire to preferentially report findings that suggest the presence of disease. Because the QMR-DT does not contain an explicit model of this reporting bias, its disease posteriors may not be useful for diagnosis. Correcting these posteriors requires augmenting the QMR-DT with additional variables and dependencies that model the diagnostic procedure. I introduce the diagnostic QMR-DT (dQMR-DT), a Bayesian network containing both the QMR-DT and a simple model of the diagnostic procedure. Using diagnostic problems sampled from the dQMR-DT, I show the danger of doing diagnosis using disease posteriors from the unaugmented QMR-DT.
I introduce a new class of approximate inference methods, based on feed-forward neural networks, for both the QMR-DT and the dQMR-DT. I show that these methods, recognition models, generate accurate approximate posteriors on the QMR-DT, on the dQMR-DT, and on a version of the dQMR-DT specified only indirectly through a set of presolved diagnostic problems.
by Quaid Donald Jozef Morris.
Ph. D. in Computational Neuroscience
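
QMR-DT-style networks are usually described in terms of noisy-OR observation gates; the sketch below illustrates that gate with invented activation and leak parameters, not the actual QMR-DT tables or the recognition models of the thesis.

    import numpy as np

    def noisy_or_prob(present_diseases, activation, leak=0.01):
        # P(finding positive | set of present diseases) under a leaky noisy-OR gate:
        # the finding stays negative only if the leak and every present cause fail.
        q = (1.0 - leak) * np.prod([1.0 - activation[d] for d in present_diseases])
        return 1.0 - q

    activation = {"flu": 0.6, "pneumonia": 0.8}             # invented single-cause activations
    print(noisy_or_prob({"flu"}, activation))                # one cause present
    print(noisy_or_prob({"flu", "pneumonia"}, activation))   # causes combine monotonically
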
8

Levine, Daniel S. Ph D. Massachusetts Institute of Technology. "Focused active inference." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95559.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 91-99).
In resource-constrained inferential settings, uncertainty can be efficiently minimized with respect to a resource budget by incorporating the most informative subset of observations - a problem known as active inference. Yet despite the myriad recent advances in both understanding and streamlining inference through probabilistic graphical models, which represent the structural sparsity of distributions, the propagation of information measures in these graphs is less well understood. Furthermore, active inference is an NP-hard problem, thus motivating investigation of bounds on the suboptimality of heuristic observation selectors. Prior work in active inference has considered only the unfocused problem, which assumes all latent states are of inferential interest. Often one learns a sparse, high-dimensional model from data and reuses that model for new queries that may arise. As any particular query involves only a subset of relevant latent states, this thesis explicitly considers the focused problem where irrelevant states are called nuisance variables. Marginalization of nuisances is potentially computationally expensive and induces a graph with less sparsity; observation selectors that treat nuisances as notionally relevant may fixate on reducing uncertainty in irrelevant dimensions. This thesis addresses two primary issues arising from the retention of nuisances in the problem and representing a gap in the existing observation selection literature. The interposition of nuisances between observations and relevant latent states necessitates the derivation of nonlocal information measures. This thesis presents propagation algorithms for nonlocal mutual information (MI) on universally embedded paths in Gaussian graphical models, as well as algorithms for estimating MI on Gaussian graphs with cycles via embedded substructures, engendering a significant computational improvement over existing linear algebraic methods. The presence of nuisances also undermines application of a technical diminishing returns condition called submodularity, which is typically used to bound the performance of greedy selection. This thesis introduces the concept of submodular relaxations, which can be used to generate online-computable performance bounds, and analyzes the class of optimal submodular relaxations providing the tightest such bounds.
by Daniel S. Levine.
Ph. D.
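
A minimal sketch of the quantities involved in focused selection, assuming a toy joint Gaussian (not the thesis's nonlocal-MI propagation or submodular-relaxation machinery): mutual information between index sets computed from the covariance matrix, used inside a naive greedy selector that targets only the relevant variables.

    import numpy as np

    def gaussian_mi(Sigma, A, B):
        # I(X_A; X_B) for a joint Gaussian: 0.5 * log(|Sigma_A| |Sigma_B| / |Sigma_{A u B}|).
        S = lambda idx: Sigma[np.ix_(idx, idx)]
        return 0.5 * np.log(np.linalg.det(S(A)) * np.linalg.det(S(B)) / np.linalg.det(S(A + B)))

    def greedy_focused_selection(Sigma, relevant, candidates, budget):
        # Greedily pick observations maximising MI with the relevant variables only.
        chosen = []
        for _ in range(budget):
            best = max(candidates, key=lambda c: gaussian_mi(Sigma, relevant, chosen + [c]))
            chosen.append(best)
            candidates = [c for c in candidates if c != best]
        return chosen

    rng = np.random.default_rng(1)
    z = rng.normal(size=500)                                  # latent relevant signal
    X = np.column_stack([z + 0.1 * rng.normal(size=500),      # index 0: relevant state
                         rng.normal(size=500),                # index 1: uninformative noise
                         z + 0.5 * rng.normal(size=500),      # index 2: informative candidate
                         rng.normal(size=500),
                         rng.normal(size=500)])
    Sigma = np.cov(X, rowvar=False)
    print(greedy_focused_selection(Sigma, relevant=[0], candidates=[1, 2, 3, 4], budget=2))
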
9

Olšarová, Nela. "Inference propojení komponent." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236505.

Abstract:
The master's thesis deals with the design of a hardware component interconnection inference algorithm intended for use in the FPGA schema editor integrated into the educational integrated development environment VLAM IDE. The aim of the algorithm is to support the user by finding an optimal interconnection of two given components. The editor and the development environment are implemented as an Eclipse plugin using the GMF framework. A brief description of these technologies and of embedded systems design is followed by the design of the inference algorithm. This problem is a topic of combinatorial optimization, related to bipartite matching and the assignment problem. After this, the implementation of the algorithm is described, followed by tests and a summary of the achieved results.
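
A hedged sketch of the combinatorial core mentioned above, the assignment problem, with hypothetical port descriptors and a made-up mismatch cost (it is not the VLAM IDE algorithm): scipy's linear_sum_assignment matches the ports of two components at minimum total cost.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical port descriptors: (bit width, is_input flag).
    ports_a = [(8, 1), (1, 1), (32, 0)]
    ports_b = [(32, 1), (8, 0), (1, 0)]

    def cost(p, q):
        # Made-up mismatch cost: penalise width differences and input-to-input pairings.
        width_penalty = abs(p[0] - q[0])
        direction_penalty = 0 if p[1] != q[1] else 10
        return width_penalty + direction_penalty

    C = np.array([[cost(p, q) for q in ports_b] for p in ports_a])
    rows, cols = linear_sum_assignment(C)                 # optimal port-to-port matching
    print(list(zip(rows.tolist(), cols.tolist())), C[rows, cols].sum())
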
10

MacCartney, Bill. "Natural language inference." 2009. May be available electronically: http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

11

Amjad, Muhammad Jehangir. "Sequential data inference via matrix estimation : causal inference, cricket and retail." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120190.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 185-193).
This thesis proposes a unified framework to capture the temporal and longitudinal variation across multiple instances of sequential data. Examples of such data include sales of a product over a period of time across several retail locations; trajectories of scores across cricket games; and annual tobacco consumption across the United States over a period of decades. A key component of our work is the latent variable model (LVM) which views the sequential data as a matrix where the rows correspond to multiple sequences while the columns represent the sequential aspect. The goal is to utilize information in the data within the sequence and across different sequences to address two inferential questions: (a) imputation or "filling missing values" and "de-noising" observed values, and (b) forecasting or predicting "future" values, for a given sequence of data. Using this framework, we build upon the recent developments in "matrix estimation" to address the inferential goals in three different applications. First, a robust variant of the popular "synthetic control" method used in observational studies to draw causal statistical inferences. Second, a score trajectory forecasting algorithm for the game of cricket using historical data. This leads to an unbiased target resetting algorithm for shortened cricket games which is an improvement upon the biased incumbent approach (Duckworth-Lewis-Stern). Third, an algorithm which leads to a consistent estimator for the time- and location-varying demand of products using censored observations in the context of retail. As a final contribution, the algorithms presented are implemented and packaged as a scalable open-source library for the imputation and forecasting of sequential data with applications beyond those presented in this work.
by Muhammad Jehangir Amjad.
Ph. D.
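
A minimal sketch of matrix estimation by low-rank truncation on synthetic data (an assumption-laden stand-in for the thesis's algorithms, not its actual estimators): missing entries are imputed by iterating a truncated SVD while keeping observed entries fixed.

    import numpy as np

    def low_rank_impute(M, rank=1, iters=50):
        # Fill missing entries with the global mean, then alternate between a truncated
        # SVD fit and re-imposing the observed entries.
        filled = np.where(np.isnan(M), np.nanmean(M), M)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            filled = np.where(np.isnan(M), approx, M)
        return filled

    rng = np.random.default_rng(0)
    true = np.outer(rng.normal(size=20), rng.normal(size=30))   # a rank-one "sequence matrix"
    obs = true.copy()
    obs[rng.random(obs.shape) < 0.3] = np.nan                   # 30% of entries go missing
    print(np.abs(low_rank_impute(obs) - true).max())            # small imputation error
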
12

Schwaller, Loïc. "Exact Bayesian Inference in Graphical Models : Tree-structured Network Inference and Segmentation." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS210/document.

Abstract:
In this dissertation we investigate the problem of network inference. The statistical framework tailored to this task is that of graphical models, in which the (in)dependence relationships satisfied by a multivariate distribution are represented through a graph. We consider the problem from a Bayesian perspective and focus on a subset of graphs making structure inference possible in an exact and efficient manner, namely spanning trees. Indeed, the integration of a function defined on spanning trees can be performed with cubic complexity with respect to the number of variables under some factorisation assumption on the edges, in spite of the super-exponential cardinality of this set. A careful choice of prior distributions on both graphs and distribution parameters allows this result to be used for network inference in tree-structured graphical models, for which we provide a complete and formal framework. We also consider the situation in which observations are organised in a multivariate time series. We assume that the underlying graph describing the dependence structure of the distribution is affected by an unknown number of abrupt changes throughout time. Our goal is then to retrieve the number and locations of these change-points, therefore dealing with a segmentation problem. Using spanning trees and assuming that segments are independent from one another, we show that this can be achieved with polynomial complexity with respect to both the number of variables and the length of the series.
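
The cubic-time integration over spanning trees mentioned above rests on the weighted Matrix-Tree theorem; a minimal sketch on a toy graph (unit weights on K4, where Cayley's formula gives 16 trees):

    import numpy as np

    def spanning_tree_sum(W):
        # Weighted Matrix-Tree theorem: the sum over spanning trees of the product of
        # edge weights equals any cofactor of the weighted graph Laplacian.
        L = np.diag(W.sum(axis=1)) - W
        return np.linalg.det(L[1:, 1:])

    W = np.ones((4, 4)) - np.eye(4)      # unit weights on the complete graph K4
    print(round(spanning_tree_sum(W)))   # 16 spanning trees, matching Cayley's 4**(4-2)
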
13

Thouin, Frédéric. "Bayesian inference in networks." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104476.

Abstract:
Bayesian inference is a method that can be used to estimate an unknown and/or unobservable parameter based on evidence that is accumulated over time. In this thesis, we apply Bayesian inference techniques in the context of two network-based problems. First, we consider multi-target tracking in networks with superpositional sensors, i.e., sensors that generate measurements equal to the sum of individual contributions of each target. We derive a tractable form for a novel moment-based multi-target filter called the Additive Likelihood Moment (ALM) filter. We show, through simulations, that our particle approximation of the ALM filter is more accurate and computationally efficient than Markov chain Monte Carlo-based particle methods to perform radio-frequency (RF) tomographic tracking of multiple targets. The second problem we study is multi-path available bandwidth estimation in computer networks. We propose a probabilistic-rate-based definition for the available bandwidth, probabilistic available bandwidth (PAB), that addresses flaws of the classical utilization-based definition and existing estimation tools. We design a network-wide estimation tool that uses factor graphs, belief propagation and adaptive sampling to minimize the overhead. We deploy our tool on the PlanetLab network and show that it can produce accurate estimates of the PAB and achieve significant gains (over 70%) in terms of measurement overhead and latency over a popular estimation tool (Pathload). We extend our tool to i) track PAB in time and ii) use chirps to further reduce the number of required measurements by over 80%. Our simulations and online experiments demonstrate that our tracking algorithm is more accurate than block-based approaches without any significant additional complexity.
14

Anderson, Christopher Lyon. "Type inference for JavaScript." Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429404.

15

Adami, K. Z. "Bayesian inference and deconvolution." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.595341.

Abstract:
This thesis is concerned with the development of Bayesian methods for inference and deconvolution. We compare and contrast different Bayesian methods for model selection, specifically Markov Chain Monte Carlo methods (MCMC) and Variational methods and their application to medical and industrial problems. In chapter 1, the Bayesian framework is outlined. In chapter 2 the different methods for Bayesian model selection are introduced and we assess each method in turn. Problems with MCMC methods and Variational methods are highlighted, before a new method which combines the strengths of both the MCMC methods and the Variational methods is developed. Chapter 3 applies the inferential methods described in chapter 2 to the problem of interpolation, before a regression neural network is implemented and tested on a set of data from the microelectronics industry. Chapter 4 applies the interpolation methods developed in chapter three to characterise the electrical nature of the testing site in the integrated circuit (IC) manufacturing process. Chapter 5 describes Independent Component Analysis (ICA) as a solution to the bilinear decomposition problem and its application to Magnetic Resonance Imaging. This chapter also compares and contrasts various Bayesian algorithms for the bilinear problem with a non-Bayesian MUSIC algorithm. Chapter 6 describes various models for the deconvolution of images including a regression network. The ICA model of chapter 5 is then extended to the deconvolution and blind deconvolution problems with the addition of intrinsic correlation functions.
16

Frühwirth-Schnatter, Sylvia. "On Fuzzy Bayesian Inference." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1990. http://epub.wu.ac.at/384/1/document.pdf.

Abstract:
In the paper at hand we apply fuzzy set theory to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we will discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
17

Upsdell, M. P. "Bayesian inference for functions." Thesis, University of Nottingham, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356022.

18

Davies, Winton H. E. "Communication of inductive inference." Thesis, University of Aberdeen, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400670.

Abstract:
This thesis addresses the question: "How can knowledge learnt through inductive inference be communicated in a multi-agent system?". Existing agent communication languages, such as KQML, assume logically sound inference methods. Unfortunately, induction is logically unsound. In general, machine learning techniques infer knowledge (or hypotheses) consistent with the locally available facts. However, in a multi-agent system, hypotheses learnt by one agent can directly contradict knowledge held by another. If an agent communicates induced knowledge as though it were logically sound, then the knowledge held by other agents in the community may become inconsistent. The answer we present in this thesis is that agents must, in general, communicate the bounds to such induced knowledge. The Version Space framework characterises inductive inference as a process which identifies the set of hypotheses that are consistent with both the observable facts and the constraints of the hypothesis description language. A Version Space can be expressed by two boundary sets, which represent the most general and most specific hypotheses. We thus propose that when communicating an induced hypothesis, that the hypothesis be bounded by descriptions of the most general and most specific hypotheses. In order to allow agents to integrate induced hypotheses with their own facts or their own induced hypotheses, the technique of Version Space Intersection can be used. We have investigated how boundary set descriptions can be generated for the common case of machine learning algorithms which learn hypotheses from unrestricted Version Spaces. This is a hard computational problem, as it is the equivalent of finding the minimal DNF description of a set of logical sentences. We consider four alternate approaches: exact minimization using the Quine-McCluskey algorithm; a naive, information-theoretic hill-climbing search; Espresso II, a sophisticated, heuristic logic minimization algorithm; and unsound approximation techniques. We demonstrate that none of these techniques are scalable to realistic machine learning problems.
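
A minimal sketch of the boundary-set idea for conjunctive hypotheses over nominal attributes, with invented examples (it is not the thesis's communication protocol and omits the G-boundary entirely): the most specific consistent hypothesis for each agent, and a crude combination obtained by pooling their positive examples.

    ANY = "?"

    def most_specific(positives):
        # Least general generalisation of the positive examples (the S-boundary for
        # conjunctive hypotheses over nominal attributes).
        h = list(positives[0])
        for ex in positives[1:]:
            h = [a if a == b else ANY for a, b in zip(h, ex)]
        return tuple(h)

    def covers(h, ex):
        return all(a == ANY or a == b for a, b in zip(h, ex))

    agent1_pos = [("red", "round", "small"), ("red", "round", "large")]
    agent2_pos = [("red", "square", "small")]

    s1, s2 = most_specific(agent1_pos), most_specific(agent2_pos)
    s_pooled = most_specific(agent1_pos + agent2_pos)   # crude combination by pooling examples
    print(s1, s2, s_pooled, covers(s_pooled, ("blue", "round", "small")))
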
19

Feldman, Jacob 1965. "Perceptual decomposition as inference." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/13693.

20

Witty, Carl Roger. "The ontic inference language." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/35027.

21

De, León Eduardo Enrique. "Medical abstract inference dataset." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119516.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 35).
In this thesis, I built a dataset for predicting clinical outcomes from medical abstracts and their titles. Medical Abstract Inference consists of 1,794 data points. Titles were filtered to include the abstract's reported medical intervention and clinical outcome. Data points were annotated with the intervention's effect on the outcome. Each label was one of the following: increased, decreased, or no significant difference in the outcome. In addition, rationale sentences were marked; these sentences supply the necessary supporting evidence for the overall prediction. Preliminary modeling was also done to evaluate the corpus. Preliminary models included top-performing natural language inference models as well as rationale-based models and linear classifiers.
by Eduardo Enrique de León.
M. Eng.
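
A hedged sketch of a linear baseline of the kind the abstract mentions, using invented toy sentences rather than the actual Medical Abstract Inference data: TF-IDF features with a logistic-regression classifier over the three outcome labels.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "Drug A significantly reduced mortality compared with placebo.",
        "The intervention increased blood pressure relative to control.",
        "No significant difference in outcomes was observed between groups.",
    ]
    labels = ["decreased", "increased", "no_difference"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    print(clf.predict(["Treatment showed no significant difference in survival."]))
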
22

Eaton, Frederik. "Combining approximations for inference." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609868.

23

Mobbs, Deena Catherine. "Inference of genetic relationship." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/12662.

24

Li, Yingzhen. "Approximate inference : new visions." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277549.

Abstract:
Nowadays machine learning (especially deep learning) techniques are being incorporated into many intelligent systems affecting the quality of human life. The ultimate purpose of these systems is to perform automated decision making, and in order to achieve this, predictive systems need to return estimates of their confidence. Powered by the rules of probability, Bayesian inference is the gold standard method to perform coherent reasoning under uncertainty. It is generally believed that intelligent systems following the Bayesian approach can better incorporate uncertainty information for reliable decision making, and be less vulnerable to attacks such as data poisoning. Critically, the success of Bayesian methods in practice, including the recent resurgence of Bayesian deep learning, relies on fast and accurate approximate Bayesian inference applied to probabilistic models. These approximate inference methods perform (approximate) Bayesian reasoning at a relatively low cost in terms of time and memory, thus allowing the principles of Bayesian modelling to be applied to many practical settings. However, more work needs to be done to scale approximate Bayesian inference methods to big systems such as deep neural networks and large-scale datasets such as ImageNet. In this thesis we develop new algorithms towards addressing the open challenges in approximate inference. In the first part of the thesis we develop two new approximate inference algorithms, by drawing inspiration from the well known expectation propagation and message passing algorithms. Both approaches provide a unifying view of existing variational methods from different algorithmic perspectives. We also demonstrate that they lead to better calibrated inference results for complex models such as neural network classifiers and deep generative models, and scale to large datasets containing hundreds of thousands of data points. In the second theme of the thesis we propose a new research direction for approximate inference: developing algorithms for fitting posterior approximations of arbitrary form, by rethinking the fundamental principles of Bayesian computation and the necessity of algorithmic constraints in traditional inference schemes. We specify four algorithmic options for the development of such new generation approximate inference methods, with one of them further investigated and applied to Bayesian deep learning tasks.
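
A minimal sketch of the moment-matching idea that underlies expectation-propagation-style Gaussian approximations, on a 1-D toy posterior (a standard Gaussian prior times a logistic likelihood term); it is only for intuition and is not one of the algorithms developed in the thesis.

    import numpy as np

    def gaussian_moment_match(log_unnormalised, grid):
        # Normalise the target on a uniform grid, then match its first two moments.
        logp = log_unnormalised(grid)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        mean = np.sum(grid * p)
        var = np.sum((grid - mean) ** 2 * p)
        return mean, var

    log_post = lambda t: -0.5 * t ** 2 - np.log1p(np.exp(-3.0 * t))  # prior x logistic likelihood
    print(gaussian_moment_match(log_post, np.linspace(-10.0, 10.0, 4001)))
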
25

Ashbridge, Jonathan. "Inference for plant-capture." Thesis, University of St Andrews, 1998. http://hdl.handle.net/10023/13741.

Abstract:
When investigating the dynamics of an animal population, a primary objective is to obtain reasonable estimates of abundance or population size. This thesis concentrates on the problem of obtaining point estimates of abundance from capture-recapture data and on how such estimation can be improved by using the method of plant-capture. Plant-capture constitutes a natural generalisation of capture-recapture. In a plant-capture study a pre-marked population of known size is added to the target population of unknown size. The capture-recapture experiment is then carried out on the augmented population. Chapter 1 considers the addition of planted individuals to target populations which behave according to the standard capture-recapture model M0. Chapter 2 investigates an analogous model based on sampling in continuous time. In each of these chapters, distributional results are derived under the assumption that the behaviour of the plants is indistinguishable from that of members of the target population. Maximum likelihood estimators and other new estimators are proposed for each model. The results suggest that the use of plants is beneficial, and furthermore that the new estimators perform more satisfactorily than the maximum likelihood estimators. Chapter 3 introduces, initially in the absence of plants, a new class of estimators, described as coverage adjusted estimators, for the standard capture-recapture model M[sub]h. These new estimators are shown, through simulation and real life data, to compare favourably with estimators that have previously been proposed. Plant-capture versions of these new estimators are then derived and the usefulness of the plants is demonstrated through simulation. Chapter 4 describes how the approach taken in chapter 3 can be modified to produce a new estimator for the analogous continuous time model. This estimator is then shown through simulation to be preferable to estimators that have previously been proposed.
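
A hedged toy estimator in the plant-capture spirit, under the simplest possible assumptions (a single capture occasion, equal capture probability for plants and targets); it is not one of the thesis's estimators, but it shows how plants of known number calibrate the capture probability.

    import numpy as np

    rng = np.random.default_rng(3)
    N_true, n_plants, p_capture = 500, 100, 0.3          # unknown N, known plants, one sample

    captured_targets = rng.binomial(N_true, p_capture)   # simulated capture counts
    captured_plants = rng.binomial(n_plants, p_capture)

    p_hat = captured_plants / n_plants                   # capture probability from the plants
    N_hat = captured_targets / p_hat                     # moment estimate of the target size
    print(p_hat, round(N_hat))
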
26

GIORDANO, JEAN-YVES. "Inference de grammaires algebriques." Rennes 1, 1995. http://www.theses.fr/1995REN10041.

Abstract:
Grammatical inference can be viewed as the problem of identifying a language from examples and counter-examples of words of that language. This problem has mostly been studied for regular languages, which can only account for simple syntactic structures. Among the inference methods for regular grammars, enumeration methods proceed by exploring a set of grammars in search of a solution. The enumeration follows an order relation defined on this set, which allows large regions of the search space to be pruned. In this thesis we show that this principle can be applied to the inference of context-free grammars. The possibilities offered by two classical order relations, Reynolds cover and structural inclusion, are examined. For each of these relations, operators for traversing the space are defined. We first define the operator corresponding to Reynolds cover: the fission of non-terminals. An enumeration-based inference algorithm using this relation is then implemented. Structural inclusion is a more complex relation, for which specifying operators is difficult. Choosing the invertible normal form for the grammars in the search space lets us decide their inclusion in polynomial time and define two operators that can serve as the basis of an enumeration-based inference algorithm: substitution on the left-hand side of a rule, and deletion of a rule. These operators do not allow the whole space to be traversed, so not all solutions are always reached. The mechanisms proposed for structural inclusion constitute a compromise between the speed and exhaustiveness of the enumeration, and aim more at discriminating the examples and counter-examples of the sample than at obtaining all solutions to the inference problem.
27

Šimeček, Josef. "Inference v Bayesovských sítích." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236341.

Abstract:
This master's thesis deals with the demonstration of various approaches to probabilistic inference in Bayesian networks. The basics of probability theory, an introduction to Bayesian networks, methods for Bayesian inference and applications of Bayesian networks are described in the theoretical part. Inference techniques are explained and complemented by their algorithms. The techniques are also illustrated with examples. The practical part contains a description of the implementation, experiments with the demonstration applications, and a conclusion drawn from the results.
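
A minimal sketch of exact inference by enumeration on the classic textbook rain/sprinkler/wet-grass network (a standard example, not necessarily one of the demonstration applications of the thesis):

    from itertools import product

    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: {True: 0.01, False: 0.99}, False: {True: 0.4, False: 0.6}}  # P(S | R)
    P_wet = {(True, True): 0.99, (True, False): 0.9,    # P(W = True | S, R)
             (False, True): 0.8, (False, False): 0.0}

    def joint(r, s, w):
        pw = P_wet[(s, r)] if w else 1.0 - P_wet[(s, r)]
        return P_rain[r] * P_sprinkler[r][s] * pw

    # P(Rain | grass is wet), summing the hidden sprinkler variable out of the joint.
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    print(num / den)   # roughly 0.36
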
28

Ramnarayan, Govind. "Distributed computation and inference." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127007.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 319-331).
In this thesis, we explore questions in algorithms and inference on distributed data. On the algorithmic side, we give a computationally efficient algorithm that allows parties to execute distributed computations in the presence of adversarial noise. This work falls into the framework of interactive coding, which is an extension of error correcting codes to interactive settings commonly found in theoretical computer science. On the inference side, we model social and biological processes and how they generate data, and analyze the computational limits of inference on the resulting data. Our first result regards the reconstruction of pedigrees, or family histories, from genetic data. We are given strings of genetic data for many individuals, and want to reconstruct how they are related. We show how to do this when we assume that both inheritance and mating are governed by some simple stochastic processes. This builds on previous work that posed the problem without a "random mating" assumption. Our second inference result concerns the problem of corruption detection on networks. In this problem, we have parties situated on a network that report on the identity of their neighbors - "truthful" or "corrupt." The goal is to understand which network structures are amenable to finding the true identities of the nodes. We study the problem of finding a single truthful node, give an efficient algorithm for finding such a node, and prove that optimally placing corrupt agents in the network is computationally hard. For the final result in this thesis, we present a model of opinion polarization. We show that in our model, natural advertising campaigns, with the sole goal of selling a product or idea, provably lead to the polarization of opinions on various topics. We characterize optimal strategies for advertisers in a simple setting, and show that executing an optimal strategy requires solving an NP-hard inference problem in the worst case.
by Govind Ramnarayan.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
29

Gudka, Khilan. "Lock inference for Java." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/10945.

Abstract:
Atomicity is an important property for concurrent software, as it provides a stronger guarantee against errors caused by unanticipated thread interactions than race-freedom does. However, concurrency control in general is tricky to get right because current techniques are too low-level and error-prone. With the introduction of multicore processors, the problems are compounded. Consequently, a new software abstraction is gaining popularity to take care of concurrency control and the enforcing of atomicity properties, called atomic sections. One possible implementation of their semantics is to acquire a global lock upon entry to each atomic section, ensuring that they execute in mutual exclusion. However, this cripples concurrency, as non-interfering atomic sections cannot run in parallel. Transactional memory is another automated technique for providing atomicity, but relies on the ability to rollback conflicting atomic sections and thus places restrictions on the use of irreversible operations, such as I/O and system calls, or serialises all sections that use such features. Therefore, from a language designer's point of view, the challenge is to implement atomic sections without compromising performance or expressivity. This thesis explores the technique of lock inference, which infers a set of locks for each atomic section, while attempting to balance the requirements of maximal concurrency, minimal locking overhead and freedom from deadlock. We focus on lock-inference techniques for tackling large Java programs that make use of mature libraries. This improves upon existing work, which either (i) ignores libraries, (ii) requires library implementors to annotate which locks to take, or (iii) only considers accesses performed up to one-level deep in library call chains. As a result, each of these prior approaches may result in atomicity violations. This is a problem because even simple uses of I/O in Java programs can involve large amounts of library code. Our approach is the first to analyse library methods in full and thus able to soundly handle atomic sections involving complicated real-world side effects, while still permitting atomic sections to run concurrently in cases where their lock sets are disjoint. To validate our claims, we have implemented our techniques in Lockguard, a fully automatic tool that translates Java bytecode containing atomic sections to an equivalent program that uses locks instead. We show that our techniques scale well and despite protecting all library accesses, we obtain performance comparable to the original locking policy of our benchmarks.
30

Buchet, Mickaël. "Topological inference from measures." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112367/document.

Abstract:
Massive amounts of data are now available for study. Asking questions that are both relevant and possible to answer is a difficult task. One can look for something different than the answer to a precise question. Topological data analysis looks for structure in point cloud data, which can be informative by itself but can also provide directions for further questioning. A common challenge faced in this area is the choice of the right scale at which to process the data. One widely used tool in this domain is persistent homology. By processing the data at all scales, it does not rely on a particular choice of scale. Moreover, its stability properties provide a natural way to go from discrete data to an underlying continuous structure. Finally, it can be combined with other tools, like the distance to a measure, which makes it possible to handle unbounded noise. The main caveat of this approach is its high complexity. In this thesis, we introduce topological data analysis and persistent homology, then show how to use approximation to reduce the computational complexity. We provide an approximation scheme for the distance to a measure and a sparsifying method for weighted Vietoris-Rips complexes in order to approximate persistence diagrams with practical complexity. We detail the specific properties of these constructions. Persistent homology was previously shown to be of use for scalar field analysis. We provide a way to combine it with the distance to a measure in order to handle a wider class of noise, especially data with unbounded errors. Finally, we discuss interesting opportunities opened by these results to study data where parts are missing or erroneous.
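
A minimal sketch of the empirical distance to a measure with mass parameter m, on a toy point cloud with one outlier; this is just the basic estimator, not the approximation or sparsification schemes developed in the thesis.

    import numpy as np
    from scipy.spatial import cKDTree

    def distance_to_measure(points, queries, m=0.05):
        # Root mean squared distance to the k nearest sample points, k = ceil(m * n).
        k = max(1, int(np.ceil(m * len(points))))
        d, _ = cKDTree(points).query(queries, k=k)
        d = np.atleast_2d(d)
        return np.sqrt(np.mean(d ** 2, axis=1))

    rng = np.random.default_rng(0)
    circle = rng.normal(size=(300, 2))
    circle /= np.linalg.norm(circle, axis=1, keepdims=True)   # samples on the unit circle
    queries = np.vstack([circle[:1], [[5.0, 5.0]]])           # one inlier, one far outlier
    print(distance_to_measure(circle, queries))               # small value, then a large one
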
31

Simonetto, Andrea. "Indagini in Deep Inference." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amslaurea.unibo.it/1455/.

Abstract:
The thesis is a study of some aspects of the new "deep inference" methodology, combined with a revisitation of the classical concepts of proof theory and with some original results aimed at a better understanding of the subject as well as at practical applications. The first chapter introduces, following a formalist approach (with some personal contributions), the basic concepts of structural proof theory, that is, the theory that uses combinatorial (or "finitistic") tools to study the properties of proofs. The second chapter focuses on classical propositional logic, first introducing the sequent calculus and proving Gentzen's Hauptsatz, and then moving to the calculus of structures (system SKS), for which a cut-elimination theorem, specially adapted by the author, is also proved. Finally, the locality property of system SKS is discussed and proved. An analogous path is traced in the third and final chapter for linear logic. The linear sequent calculus is defined and motivated, and its counterpart in the calculus of structures is discussed. The attention here is directed mainly to the problem of defining non-commutative operators, which place these systems in close relation with process algebras.
32

Binch, Adam. "Perception as Bayesian inference." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/7618/.

33

Fallis, Don. "Goldman on Probabilistic Inference." Springer, 2002. http://hdl.handle.net/10150/105286.

Abstract:
In his latest book, Knowledge in a Social World, Alvin Goldman claims to have established that if a reasoner starts with accurate estimates of the reliability of new evidence and conditionalizes on this evidence, then this reasoner is objectively likely to end up closer to the truth. In this paper, I argue that Goldman's result is not nearly as philosophically significant as he would have us believe. First, accurately estimating the reliability of evidence, in the sense that Goldman requires, is not quite as easy as it might sound. Second, being objectively likely to end up closer to the truth, in the sense that Goldman establishes, is not quite as valuable as it might sound.
34

Lin, Lizhen. "Nonparametric Inference for Bioassay." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/222849.

Abstract:
This thesis proposes some new model independent or nonparametric methods for estimating the dose-response curve and the effective dosage curve in the context of bioassay. The research problem is also of importance in environmental risk assessment and other areas of health sciences. It is shown in the thesis that our new nonparametric methods while bearing optimal asymptotic properties also exhibit strong finite sample performance. Although our specific emphasis is on bioassay and environmental risk assessment, the methodology developed in this dissertation applies broadly to general order restricted inference.
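
A hedged sketch of one standard nonparametric ingredient for dose-response estimation, isotonic regression under a monotonicity constraint, with invented dose-response counts; the thesis's proposed estimators are not reproduced here.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
    n_subjects = np.array([20, 20, 20, 20, 20, 20])
    n_responders = np.array([1, 3, 2, 9, 14, 19])                 # invented counts

    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    p_hat = iso.fit_transform(doses, n_responders / n_subjects)   # monotone response curve
    print(dict(zip(doses.tolist(), np.round(p_hat, 3).tolist())))
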
35

Pelawa, Watagoda Lasanthi Chathurika Ranasinghe. "INFERENCE AFTER VARIABLE SELECTION." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1424.

Abstract:
This thesis presents inference for the multiple linear regression model Y = beta_1 x_1 + ... + beta_p x_p + e after model or variable selection, including prediction intervals for a future value of the response variable Y_f, and testing hypotheses with the bootstrap. If n is the sample size, most results are for n/p large, but prediction intervals are developed that may increase in average length slowly as p increases for fixed n if the model is sparse: k predictors have nonzero coefficients beta_i where n/k is large.
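
A minimal sketch in the spirit of the topic, with synthetic sparse data (illustrative only, not the thesis's intervals): a percentile bootstrap prediction interval for Y_f in which lasso variable selection is redone inside every bootstrap replicate.

    import numpy as np
    from sklearn.linear_model import LassoCV, LinearRegression

    rng = np.random.default_rng(0)
    n, p = 200, 20
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:3] = [2.0, -1.0, 1.5]                           # sparse truth: 3 active predictors
    y = X @ beta + rng.normal(size=n)
    x_f = rng.normal(size=p)                              # covariates of the future case

    preds = []
    for _ in range(100):                                  # bootstrap replicates
        idx = rng.integers(0, n, n)
        Xb, yb = X[idx], y[idx]
        sel = np.flatnonzero(LassoCV(cv=5).fit(Xb, yb).coef_)   # selection redone each time
        if sel.size == 0:
            preds.append(yb.mean() + rng.choice(yb - yb.mean()))
            continue
        ols = LinearRegression().fit(Xb[:, sel], yb)
        resid = yb - ols.predict(Xb[:, sel])
        preds.append(ols.predict(x_f[sel].reshape(1, -1))[0] + rng.choice(resid))

    print(np.percentile(preds, [2.5, 97.5]))              # rough 95% prediction interval for Y_f
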
36

TOZZO, VERONICA. "Generalised temporal network inference." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/986950.

Abstract:
Network inference is becoming increasingly central in the analysis of complex phenomena as it allows understandable models of entity interactions to be obtained. Among the many possible graphical models, Markov Random Fields are widely used as they are strictly connected to a probability distribution assumption that allows a variety of different data to be modelled. The inference of such models can be guided by two priors: sparsity and non-stationarity. In other words, only a few connections are necessary to explain the phenomenon under observation and, as the phenomenon evolves, the underlying connections that explain it may change accordingly. This thesis contains two general methods for the inference of temporal graphical models that deeply rely on the concept of temporal consistency, i.e., the underlying structure of the system is similar (i.e., consistent) in time points that model the same behaviour (i.e., are dependent). The first contribution is a model that allows flexibility in terms of probability assumption, temporal consistency, and dependency. The second contribution studies the previously introduced model in the presence of partially unobserved Gaussian data. Indeed, it is necessary to explicitly tackle the presence of unobserved data in order to avoid introducing misrepresentations into the inferred graphical model. All extensions are coupled with fast and non-trivial minimisation algorithms that are extensively validated on synthetic and real-world data. Such algorithms and experiments are implemented in a large and well-designed Python library that comprises many tools for the modelling of multivariate data. Lastly, all the presented models have many hyper-parameters that need to be tuned on data. In this regard, we analyse different model selection strategies, showing that a stability-based approach performs best in the presence of multi-networks and multiple hyper-parameters.
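
A hedged sketch of the static building block, sparse Gaussian graphical model inference via the graphical lasso on synthetic data; the temporally consistent, latent-variable extensions developed in the thesis are not shown.

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    x0 = rng.normal(size=500)
    X = np.column_stack([x0,                                     # a toy 4-variable system
                         0.9 * x0 + 0.4 * rng.normal(size=500),  # with one strong dependency
                         rng.normal(size=500),
                         rng.normal(size=500)])

    model = GraphicalLasso(alpha=0.1).fit(X)
    print(np.round(model.precision_, 2))   # (near-)zeros in the precision matrix = missing edges
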
37

ROSSI, FRANCESCA. "Inference for spatial data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/25536.

Abstract:
It is well known that econometric modelling and statistical inference are considerably complicated by the possibility of correlation across data recorded at different locations in space. A major branch of the spatial econometrics literature has focused on testing the null hypothesis of spatial independence in Spatial Autoregressions (SAR), and the asymptotic properties of standard test statistics have been widely considered. However, the finite sample properties of such tests have received relatively little consideration. Indeed, spatial datasets are likely to be small or moderately sized, and thus the derivation of finite sample corrections appears to be a crucially important task in order to obtain reliable tests. In this project we consider finite sample corrections based on formal Edgeworth expansions for the cumulative distribution function of some relevant test statistics. In Chapters 1 and 2 we present refined procedures for testing nullity of the spatial parameter in pure SAR based on ordinary least squares and Gaussian maximum likelihood, respectively. In both cases, the Edgeworth-corrected tests are compared with those obtained by a bootstrap procedure, which is supposed to have similar properties. The practical performance of the new tests is assessed with Monte Carlo simulations and two empirical examples. In Chapter 3 we propose finite sample corrections for Lagrange Multiplier statistics, which are computationally particularly convenient as estimation of the spatial parameter is not required. Monte Carlo simulations and the numerical implementation of Imhof's procedure confirm that the corrected tests outperform standard ones.
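
A hedged sketch of the uncorrected, first-order Lagrange Multiplier statistic for spatial error dependence that such finite-sample work refines; the weight matrix and data below are toy assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 50
    W = (rng.random((n, n)) < 0.1).astype(float)
    np.fill_diagonal(W, 0.0)
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1.0)        # row-standardised weights

    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)   # generated with no spatial dependence

    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]          # OLS residuals
    sigma2 = e @ e / n
    T = np.trace(W.T @ W + W @ W)
    LM = (e @ W @ e / sigma2) ** 2 / T                        # first-order LM statistic
    print(LM, 1.0 - stats.chi2.cdf(LM, df=1))                 # compared with chi-squared(1)
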
38

Vallès, Català Toni. "Network inference based on stochastic block models: model extensions, inference approaches and applications." Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/399539.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The study of real-world networks has pushed forward the understanding of complex systems in a wide range of fields, such as molecular and cell biology, anatomy, neuroscience, ecology, economics and sociology. However, the available knowledge about most real systems is still limited, so the predictive power of network science must be enhanced to narrow the gap between knowledge and information. To address this we work with the family of Stochastic Block Models (SBMs), generative models that have recently gained considerable interest due to their adaptability to any kind of network structure. The goal of this thesis is to develop novel SBM-based inference approaches that improve our understanding of complex networks. First, we investigate to what extent sampling over models significantly improves predictive power compared with considering a single optimal set of parameters. Once we know which model best describes a given network, we apply the method to a particular real-world case: a network based on the interactions/sutures between bones in newborn human skulls. Notably, we find that sutures fused due to a pathological condition in human newborns were less likely, from a morphological point of view, than sutures fused under normal development. Recent research on multilayer networks has concluded that single-layer networks behave differently from multilayer ones; nevertheless, real-world networks are presented to us as single-layer networks. The last part of the thesis is devoted to designing a novel approach in which two separate SBMs simultaneously describe a given single-layer network; we find that it predicts missing and spurious links better than the single-SBM approach.
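As background (standard SBM notation assumed here, not quoted from the thesis), the generative likelihood that such sampling-based inference averages over is:

```latex
% Standard stochastic block model: node i belongs to group g_i and Q_{rs} is
% the probability of a link between groups r and s; link prediction averages
% the posterior of Q_{g_i g_j} over partitions g.
P(A \mid g, Q) = \prod_{i<j} Q_{g_i g_j}^{\,A_{ij}}\,
                 \bigl(1 - Q_{g_i g_j}\bigr)^{\,1 - A_{ij}}
```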
39

Jenkins, David Russell. "How inference isn't blind : self-conscious inference and its role in doxastic agency." Thesis, King's College London (University of London), 2019. https://kclpure.kcl.ac.uk/portal/en/theses/how-inference-isnt-blind-selfconscious-inference-and-its-role-in-doxastic-agency(49a94363-0be9-4721-a974-333aa1608401).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis brings together two concerns. The first is the nature of inference—what it is to infer—where inference is understood as a distinctive kind of conscious and self-conscious occurrence. The second concern is the possibility of doxastic agency. To be capable of doxastic agency is to be capable of directly exercising agency over one's beliefs, in a way which does not amount to mere self-manipulation. Subjects who can exercise doxastic agency can settle questions for themselves. A challenge to the possibility of doxastic agency stems from the fact that we cannot believe or come to believe "at will", where this in turn seems to be so because belief "aims at truth". It must be explained how we are capable of doxastic agency despite the fact that we cannot believe or come to believe at will. On the orthodox 'causalist' conception of inference, for an inference to occur is for one act of acceptance to cause another in some specifiable "right way". This conception of inference prevents its advocates from adequately seeing how reasoning could be a means of exercising doxastic agency, as it is natural to think it is. Suppose, for instance, that one reasons and concludes by inferring, where one's inference yields belief in what one infers. Such an inference cannot be performed at will. We cannot infer at will when inference yields belief any more than we can believe or come to believe at will. When it comes to understanding the extent to which one could be exercising agency in such a case, the causalist conception of inference suggests that we must look to the causal history of one's concluding act of acceptance, the nature of the act being determined by the way in which it is caused. What results is a picture on which such reasoning as a whole cannot be action. We are at best capable of actions of a kind which lead causally to belief fixation through "mental ballistics". The causalist account of inference, I argue, is in fact either inadequate or unmotivated. It either fails to accommodate the self-consciousness of inference or is not best placed to play the very explanatory role which it is put forward to play. On the alternative I develop, when one infers, one's inference is the conscious event which is one's act of accepting that which one is inferring. The act's being an inference is determined not by the way it is caused, but by the self-knowledge which it constitutively involves. This corrected understanding of inference makes the move from the challenge to the possibility of doxastic agency to the above ballistics picture no longer tempting. It also yields an account of how we are capable of exercising doxastic agency by reasoning despite being unable to believe or come to believe at will. In order to see how such reasoning could amount to the exercise of doxastic agency, it needs to be conceived of appropriately. I suggest that paradigm reasoning which potentially amounts to the exercise of doxastic agency ought to be conceived of as primarily epistemic agency—agency whose aim is knowledge. With inference conceived as suggested, I argue, it can be seen how engaging in such reasoning can just be the successful exercise of such agency.
40

Hershey, John R. "Perceptual inference in generative models." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3181786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2005.
Title from first page of PDF file (viewed October 21, 2005). Available via UMI ProQuest Digital Dissertations. Vita. Includes bibliographical references (leaves 162-179).
41

Moreno Dávila, Julio. "Mathematical programming for logic inference." [S.l.] : [s.n.], 1990. http://library.epfl.ch/theses/?nr=784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Fleissner, Roland. "Sequence alignment and phylogenetic inference." Berlin : Logos Verlag, 2004. http://diss.ub.uni-duesseldorf.de/ebib/diss/file?dissid=769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Robinson, Thomas Lewis. "Incremental on-line type inference." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA280991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Fleissner, Roland. "Sequence alignment and phylogenetic inference." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=971844704.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ozbozkurt, Pelin. "Bayesian Inference In Anova Models." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611532/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Estimation of location and scale parameters from a random sample of size n is of paramount importance in statistics. An estimator is called fully efficient if it attains the Cramer-Rao minimum variance bound besides being unbiased. The method that yields such estimators, at any rate for large n, is the method of modified maximum likelihood estimation. Apparently, such estimators cannot be made more efficient by using sample-based classical methods. That makes room for the Bayesian method of estimation, which combines prior distributions and likelihood functions. The formal combination of the prior knowledge and the sample information is called the posterior distribution. The posterior distribution is maximised with respect to the unknown parameter(s); that gives the HPD (highest probability density) estimator(s). Locating the maximum of the posterior distribution is, however, enormously difficult (computationally and analytically) in most situations. To alleviate these difficulties, we use the modified likelihood function in the posterior distribution instead of the likelihood function. We have derived the HPD estimators of the location and scale parameters of distributions in the Generalized Logistic family. We have extended the work to experimental design, namely one-way ANOVA. We have obtained the HPD estimators of the block effects and the scale parameter (in the distribution of errors); they have beautiful algebraic forms. We have shown that they are highly efficient. We have given real-life examples to illustrate the usefulness of our results. Thus, the enormous computational and analytical difficulties of the traditional Bayesian method of estimation are circumvented, at any rate in the context of experimental design.
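Schematically, and in generic notation not taken from the thesis, the estimation scheme described above replaces the exact likelihood with the modified likelihood inside the posterior and maximises the result:

```latex
% Generic sketch: \pi(\theta) is the prior, L^*(\theta \mid x) the modified
% likelihood, and the HPD estimator maximises the resulting posterior density.
\pi^*(\theta \mid x) \propto \pi(\theta)\, L^*(\theta \mid x),
\qquad \hat{\theta}_{\mathrm{HPD}} = \arg\max_{\theta}\, \pi^*(\theta \mid x)
```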
46

Mumm, Lennart. "Reject Inference in Online Purchases." Thesis, KTH, Matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As accurately as possible, creditors wish to determine whether a potential debtor will repay the borrowed sum. To achieve this, mathematical models known as credit scorecards, which quantify the risk of default, are used. In this study it is investigated whether the scorecard can be improved by using reject inference, thereby including the characteristics of the rejected population when refining the scorecard. The reject inference method used is parcelling. Logistic regression is used to estimate the probability of default based on applicant characteristics. Two models, one with and one without reject inference, are compared using the Gini coefficient and estimated profitability. The results show that the model with reject inference has both a slightly higher Gini coefficient and an increase in profitability. Thus, this study suggests that reject inference does improve the predictive power of the scorecard, but in order to verify the results additional testing on a larger calibration set is needed.
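A minimal sketch of the comparison described above, assuming synthetic data and assumed per-band default rates for the parcelling step (none of the variable names, band edges or rates come from the thesis):

```python
# Illustrative sketch only: compares a scorecard trained on accepted applicants
# with one augmented by a parcelling-style reject inference step. The synthetic
# data, band edges and assumed per-band default rates are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic applicants: X holds characteristics, y is default (1) / repaid (0).
X_acc = rng.normal(size=(1000, 5))
y_acc = rng.integers(0, 2, size=1000)
X_rej = rng.normal(loc=0.5, size=(300, 5))   # rejected applicants, no outcome

base = LogisticRegression(max_iter=1000).fit(X_acc, y_acc)

# Parcelling: band the rejects by predicted risk, then draw labels within each
# band using an assumed (typically more pessimistic) default rate per band.
p_rej = base.predict_proba(X_rej)[:, 1]
band_edges = np.quantile(p_rej, [0.25, 0.5, 0.75])
band = np.digitize(p_rej, band_edges)
assumed_rates = np.array([0.10, 0.20, 0.40, 0.60])
y_rej = rng.binomial(1, assumed_rates[band])

aug = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_acc, X_rej]), np.concatenate([y_acc, y_rej])
)

# Gini coefficient = 2 * AUC - 1, evaluated here on the accepted sample only.
for name, model in [("accepted-only", base), ("with reject inference", aug)]:
    gini = 2 * roc_auc_score(y_acc, model.predict_proba(X_acc)[:, 1]) - 1
    print(f"{name}: Gini = {gini:.3f}")
```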
47

Park, June Young. "Data-driven Building Metadata Inference." Research Showcase @ CMU, 2016. http://repository.cmu.edu/theses/127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Building technology has developed alongside improvements in information technology. Specifically, building operation can now be controlled and monitored through a large number of sensors and actuators installed on virtually every element of a building. The resulting large stream of building data allows both quantitative and qualitative improvements. However, there are still limitations in mapping between physical building elements and the cyber system. To address this mapping issue, a text mining methodology was developed last summer as part of a project conducted by the Consortium for Building Energy Innovation. Building data were extracted from building 661 in Philadelphia, PA. The ground truth semantic information for each building data point was labelled by manual inspection, and a Support Vector Machine was used to investigate the relationship between data point names and semantic information. This algorithm achieves 93% accuracy on unseen building 661 data points. The techniques and lessons gained from that project were used to develop a framework for analysing building data from the Gates Hillman Center (GHC) building in Pittsburgh, PA. The new framework consists of two stages. In the first stage, we initially tried to cluster data points with similar semantic information using hierarchical clustering. However, the effectiveness and accuracy of the clustering method proved inadequate for this framework, so a filtering and classification model was developed to identify the semantic information of the data points. The filtering and classification method correctly identifies damper-position and supply-air duct-pressure data points with 90% accuracy using daily statistical features. Given the semantic information from the first stage, the second stage works out the relationship between Variable Air Volume (VAV) terminal units and Air Handling Units (AHUs). The intuitive thermal and flow relationships between VAVs and AHUs were investigated first, applying statistical-feature clustering to the VAV discharge temperature data; however, the building's control strategy makes this relationship invisible. We therefore compared the similarity between the damper position at the VAVs and the supply air duct pressure at the AHUs by computing their cross-correlation. This similarity-scoring method achieved 80% accuracy in mapping the relationship between VAVs and AHUs. The suggested framework will guide users in finding desired information, such as the VAV-AHU relationship, from a large number of heterogeneous sensor networks using a data-driven methodology.
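The cross-correlation-based VAV-to-AHU mapping can be illustrated with a short sketch; the point names, signals and scoring rule below are illustrative assumptions, not the thesis' actual implementation:

```python
# Illustrative sketch of the similarity scoring described above: each VAV's
# damper-position series is compared with every AHU's supply-air duct-pressure
# series via normalised cross-correlation, and the VAV is assigned to the AHU
# with the highest score. Point names and signals are synthetic placeholders.
import numpy as np

def zscore(x):
    return (x - x.mean()) / (x.std() + 1e-12)

def max_xcorr(a, b):
    """Maximum absolute normalised cross-correlation over all lags."""
    a, b = zscore(a), zscore(b)
    return np.max(np.abs(np.correlate(a, b, mode="full"))) / len(a)

def map_vavs_to_ahus(vav_damper, ahu_pressure):
    """vav_damper, ahu_pressure: dicts mapping point names to 1-D arrays."""
    mapping = {}
    for vav_id, damper in vav_damper.items():
        scores = {ahu_id: max_xcorr(damper, pressure)
                  for ahu_id, pressure in ahu_pressure.items()}
        mapping[vav_id] = max(scores, key=scores.get)
    return mapping

# Toy usage: VAV-A is driven by the same pattern as AHU-1.
rng = np.random.default_rng(1)
t = np.linspace(0, 6 * np.pi, 500)
ahu = {"AHU-1": np.sin(t) + 0.1 * rng.normal(size=t.size),
       "AHU-2": np.cos(2 * t) + 0.1 * rng.normal(size=t.size)}
vav = {"VAV-A": 0.8 * np.sin(t) + 0.2 * rng.normal(size=t.size)}
print(map_vavs_to_ahus(vav, ahu))   # expected: {'VAV-A': 'AHU-1'}
```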
48

Hennig, Philipp. "Approximate inference in graphical models." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/237251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Probability theory provides a mathematically rigorous yet conceptually flexible calculus of uncertainty, allowing the construction of complex hierarchical models for real-world inference tasks. Unfortunately, exact inference in probabilistic models is often computationally expensive or even intractable. A close inspection of such situations often reveals that computational bottlenecks are confined to certain aspects of the model, which can be circumvented by approximations without having to sacrifice the model's interesting aspects. The conceptual framework of graphical models provides an elegant means of representing probabilistic models and deriving both exact and approximate inference algorithms in terms of local computations. This makes graphical models an ideal aid in the development of generalizable approximations. This thesis contains a brief introduction to approximate inference in graphical models (Chapter 2), followed by three extensive case studies in which approximate inference algorithms are developed for challenging applied inference problems. Chapter 3 derives the first probabilistic game tree search algorithm. Chapter 4 provides a novel expressive model for inference in psychometric questionnaires. Chapter 5 develops a model for the topics of large corpora of text documents, conditional on document metadata, with a focus on computational speed. In each case, graphical models help in two important ways: they first provide important structural insight into the problem, and then suggest practical approximations to the exact probabilistic solution.
49

Thabane, Lehana. "Contributions to Bayesian statistical inference." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq31133.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ding, Keyue. "Inference problems after CUSUM tests." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0030/NQ46830.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography