Academic literature on the topic "Bayesian Simulated Inference"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Bayesian Simulated Inference".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Bayesian Simulated Inference"

1

Beaumont, Mark A., Wenyang Zhang, and David J. Balding. "Approximate Bayesian Computation in Population Genetics." Genetics 162, no. 4 (December 1, 2002): 2025–35. http://dx.doi.org/10.1093/genetics/162.4.2025.

Abstract
We propose a new method for approximate Bayesian statistical inference on the basis of summary statistics. The method is suited to complex problems that arise in population genetics, extending ideas developed in this setting by earlier authors. Properties of the posterior distribution of a parameter, such as its mean or density curve, are approximated without explicit likelihood calculations. This is achieved by fitting a local-linear regression of simulated parameter values on simulated summary statistics, and then substituting the observed summary statistics into the regression equation. The method combines many of the advantages of Bayesian statistical inference with the computational efficiency of methods based on summary statistics. A key advantage of the method is that the nuisance parameters are automatically integrated out in the simulation step, so that the large numbers of nuisance parameters that arise in population genetics problems can be handled without difficulty. Simulation results indicate computational and statistical efficiency that compares favorably with those of alternative methods previously proposed in the literature. We also compare the relative efficiency of inferences obtained using methods based on summary statistics with those obtained directly from the data using MCMC.
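
The simulate-reject-adjust loop described in this abstract is compact enough to sketch in code. Below is a minimal, illustrative Python version; the toy Gaussian model, uniform prior, and 1% acceptance quantile are demonstration assumptions, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summary(theta, n=100):
    # Toy generative model: the summary statistic is the sample mean of
    # n draws from N(theta, 1); a stand-in for a population-genetics
    # simulator that outputs summary statistics.
    return rng.normal(theta, 1.0, size=n).mean()

s_obs = simulate_summary(2.0)  # "observed" summary from true theta = 2

# 1. Draw parameters from the prior and simulate their summaries.
theta = rng.uniform(-5, 5, size=50_000)
s_sim = np.array([simulate_summary(t) for t in theta])

# 2. Rejection step: keep the draws whose summaries are closest to s_obs.
eps = np.quantile(np.abs(s_sim - s_obs), 0.01)
keep = np.abs(s_sim - s_obs) <= eps
theta_acc, s_acc = theta[keep], s_sim[keep]

# 3. Local-linear regression adjustment: regress theta on s among the
#    accepted draws, then shift each draw to the observed summary.
X = np.column_stack([np.ones_like(s_acc), s_acc])
slope = np.linalg.lstsq(X, theta_acc, rcond=None)[0][1]
theta_adj = theta_acc - slope * (s_acc - s_obs)

print("adjusted posterior mean:", theta_adj.mean())
```

The adjustment is what distinguishes this method from plain rejection ABC: it corrects each accepted draw for the discrepancy between its simulated summary and the observed one, which allows a larger tolerance for the same accuracy.
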
2

Creel, Michael. "Inference Using Simulated Neural Moments." Econometrics 9, no. 4 (September 24, 2021): 35. http://dx.doi.org/10.3390/econometrics9040035.

Abstract
This paper studies method of simulated moments (MSM) estimators that are implemented using Bayesian methods, specifically Markov chain Monte Carlo (MCMC). Motivation and theory for the methods are provided by Chernozhukov and Hong (2003). The paper shows, experimentally, that confidence intervals using these methods may have coverage which is far from the nominal level, a result which has parallels in the literature that studies overidentified GMM estimators. A neural network may be used to reduce the dimension of an initial set of moments to the minimum number that maintains identification, as in Creel (2017). When MSM-MCMC estimation and inference are based on such moments, and use a continuously updating criterion function, confidence intervals have statistically correct coverage in all cases studied. The methods are illustrated by application to several test models, including a small DSGE model, and to a jump-diffusion model for returns of the S&P 500 index.
3

Flury, Thomas, and Neil Shephard. "Bayesian Inference Based Only on Simulated Likelihood: Particle Filter Analysis of Dynamic Economic Models." Econometric Theory 27, no. 5 (May 17, 2011): 933–56. http://dx.doi.org/10.1017/s0266466610000599.

Abstract
We note that likelihood inference can be based on an unbiased simulation-based estimator of the likelihood when it is used inside a Metropolis–Hastings algorithm. This result has recently been introduced in statistics literature by Andrieu, Doucet, and Holenstein (2010, Journal of the Royal Statistical Society, Series B, 72, 269–342) and is perhaps surprising given the results on maximum simulated likelihood estimation. Bayesian inference based on simulated likelihood can be widely applied in microeconomics, macroeconomics, and financial econometrics. One way of generating unbiased estimates of the likelihood is through a particle filter. We illustrate these methods on four problems, producing rather generic methods. Taken together, these methods imply that if we can simulate from an economic model, we can carry out likelihood–based inference using its simulations.
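
The construction noted in this abstract, an unbiased simulation-based likelihood estimate used inside a Metropolis–Hastings accept/reject step, can be sketched for a linear Gaussian state-space model with one unknown parameter. Everything below (model, particle count, prior, step size) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_loglik(y, phi, sx=1.0, sy=1.0, N=200):
    # Bootstrap particle filter for x_t = phi*x_{t-1} + sx*w_t,
    # y_t = x_t + sy*v_t. The likelihood estimate (not its log) is
    # unbiased, which is what pseudo-marginal MCMC requires.
    x = rng.normal(0.0, sx, N)
    ll = 0.0
    for yt in y:
        logw = -0.5 * ((yt - x) / sy) ** 2 - np.log(sy * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())
        idx = rng.choice(N, N, p=w / w.sum())      # resample
        x = phi * x[idx] + rng.normal(0.0, sx, N)  # propagate
    return ll

# Simulate data from phi = 0.8.
T, x, y = 200, 0.0, []
for _ in range(T):
    x = 0.8 * x + rng.normal()
    y.append(x + rng.normal())
y = np.array(y)

# Pseudo-marginal random-walk Metropolis with a flat prior on (-1, 1).
# Note the estimate for the current point is reused, never refreshed.
phi, ll, chain = 0.5, particle_loglik(y, 0.5), []
for _ in range(2000):
    prop = phi + 0.05 * rng.normal()
    if abs(prop) < 1.0:
        ll_prop = particle_loglik(y, prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            phi, ll = prop, ll_prop
    chain.append(phi)

print("posterior mean of phi:", np.mean(chain[500:]))
```
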
4

Hu, Zheng Dong, Liu Xin Zhang, Fei Yue Zhou, and Zhi Jun Li. "Statistic Inference for Inertial Instrumentation Error Model Using Bayesian Network." Applied Mechanics and Materials 392 (September 2013): 719–24. http://dx.doi.org/10.4028/www.scientific.net/amm.392.719.

Abstract
For the parameter estimation problem of inertial instrumentation error models, this paper builds a Bayesian network to fuse calibration data and draw statistical inferences about the error coefficients. The fundamentals of Bayesian networks are stated first, and the construction of a network for a typical inertial instrumentation error-coefficient estimation case is then illustrated. Because the difficult high-dimensional integration over the model parameters can be avoided, the MCMC-based WinBUGS software is used for computation and inference. Simulation results show that using a Bayesian network for statistical inference on inertial instrumentation error models is reasonable and effective.
5

Jeffrey, Niall, and Filipe B. Abdalla. "Parameter inference and model comparison using theoretical predictions from noisy simulations." Monthly Notices of the Royal Astronomical Society 490, no. 4 (October 18, 2019): 5749–56. http://dx.doi.org/10.1093/mnras/stz2930.

Abstract
When inferring unknown parameters or comparing different models, data must be compared to underlying theory. Even if a model has no closed-form solution to derive summary statistics, it is often still possible to simulate mock data in order to generate theoretical predictions. For realistic simulations of noisy data, this is identical to drawing realizations of the data from a likelihood distribution. Though the estimated summary statistic from simulated data vectors may be unbiased, the estimator has variance that should be accounted for. We show how to correct the likelihood in the presence of an estimated summary statistic by marginalizing over the true summary statistic in the framework of a Bayesian hierarchical model. For Gaussian likelihoods where the covariance must also be estimated from simulations, we present an alteration to the Sellentin–Heavens corrected likelihood. We show that excluding the proposed correction leads to an incorrect estimate of the Bayesian evidence with Joint Light-Curve Analysis data. The correction is highly relevant for cosmological inference that relies on simulated data for theory (e.g. weak lensing peak statistics and simulated power spectra) and can reduce the number of simulations required.
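
For context on the corrected likelihoods this abstract discusses, the sketch below contrasts a fixed-covariance Gaussian log-likelihood with the baseline Sellentin–Heavens form used when the covariance is estimated from a finite number of simulations: the Gaussian is replaced by a multivariate t. The paper's further alteration for noisy theory predictions is not reproduced here, and both functions drop parameter-independent constants.

```python
import numpy as np

def log_like_gauss(x, mu, C):
    # Gaussian log-likelihood with a known covariance (up to constants).
    r = x - mu
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ np.linalg.solve(C, r) + logdet)

def log_like_sellentin_heavens(x, mu, C_hat, n_sims):
    # Multivariate-t replacement when C_hat comes from n_sims simulations.
    r = x - mu
    chi2 = r @ np.linalg.solve(C_hat, r)
    _, logdet = np.linalg.slogdet(C_hat)
    return -0.5 * logdet - 0.5 * n_sims * np.log1p(chi2 / (n_sims - 1.0))

# With many simulations the two forms nearly agree; with few, the
# t-distribution's heavier tails penalize outliers less severely.
rng = np.random.default_rng(0)
sims = rng.normal(size=(500, 4))          # 500 simulations, 4-dim summary
C_hat = np.cov(sims, rowvar=False)
x, mu = rng.normal(size=4), np.zeros(4)
print(log_like_gauss(x, mu, C_hat),
      log_like_sellentin_heavens(x, mu, C_hat, 500))
```
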
6

de Campos, Luis M., José A. Gámez, and Serafín Moral. "Partial abductive inference in Bayesian belief networks by simulated annealing." International Journal of Approximate Reasoning 27, no. 3 (September 2001): 263–83. http://dx.doi.org/10.1016/s0888-613x(01)00043-3.

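
The search strategy named in this title, simulated annealing over candidate configurations of the explanation variables, follows the generic annealing skeleton below. The Ising-style toy objective is a stand-in for the network score, which the paper evaluates by probabilistic propagation in the Bayesian network.

```python
import math, random

random.seed(0)
n = 12
w = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

def score(a):
    # Stand-in for log P(explanation = a, evidence): a toy pairwise
    # agreement objective over binary assignments.
    return sum(w[i][j] * (1 if a[i] == a[j] else -1)
               for i in range(n) for j in range(i + 1, n))

cur = [random.randint(0, 1) for _ in range(n)]
cur_s = score(cur)
best, best_s = cur[:], cur_s
T = 2.0
for step in range(5000):
    cand = cur[:]
    cand[random.randrange(n)] ^= 1          # flip one variable
    cand_s = score(cand)
    # Accept improvements always, worsenings with Boltzmann probability.
    if cand_s >= cur_s or random.random() < math.exp((cand_s - cur_s) / T):
        cur, cur_s = cand, cand_s
        if cur_s > best_s:
            best, best_s = cur[:], cur_s
    T *= 0.999                              # geometric cooling
print("best assignment:", best, "score:", round(best_s, 2))
```
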
7

Cardial, Marcílio Ramos Pereira, Juliana Betini Fachini-Gomes, and Eduardo Yoshio Nakano. "Exponentiated Discrete Weibull Distribution for Censored Data." Revista Brasileira de Biometria 38, no. 1 (March 28, 2020): 35. http://dx.doi.org/10.28951/rbb.v38i1.425.

Abstract
This paper further develops the statistical inference procedure of the exponentiated discrete Weibull (EDW) distribution for data in the presence of censoring. This generalization of the discrete Weibull distribution has the advantage of being suitable for modeling non-monotone failure rates, such as bathtub-shaped and unimodal rates. Inferences about the EDW distribution are presented using both frequentist and Bayesian approaches. In addition, the classical likelihood ratio test and a Full Bayesian Significance Test (FBST) were performed to test the parameters of the EDW distribution. The method is applied to simulated data and illustrated with a real dataset of patients diagnosed with head and neck cancer.
8

Üstündağ, Dursun, and Mehmet Cevri. "Recovering Sinusoids from Noisy Data Using Bayesian Inference with Simulated Annealing." Mathematical and Computational Applications 16, no. 2 (August 1, 2011): 382–91. http://dx.doi.org/10.3390/mca16020382.

9

Pascual-Izarra, C., and G. García. "Simulated annealing and Bayesian inference applied to experimental stopping force determination." Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 228, no. 1–4 (January 2005): 388–91. http://dx.doi.org/10.1016/j.nimb.2004.10.076.

10

Sha, Naijun, and Hao Yang Teng. "A Bayes Inference for Step-Stress Accelerated Life Testing." International Journal of Statistics and Probability 6, no. 6 (September 15, 2017): 1. http://dx.doi.org/10.5539/ijsp.v6n6p1.

Abstract
In this article, we present a Bayesian analysis with convex tent priors for step-stress accelerated life testing (SSALT) using a proportional hazard (PH) model. The PH model is as flexible as the cumulative exposure (CE) model in fitting step-stress data, and its attractive mathematical properties make Bayesian inference much more accessible than under the CE model. Two sampling methods based on Markov chain Monte Carlo algorithms are employed for posterior inference of the parameters. The performance of the methodology is investigated using both simulated and real data sets.

Theses on the topic "Bayesian Simulated Inference"

1

Mezzavilla, Marco. "Advanced Resource Management Techniques for Next Generation Wireless Networks". Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423728.

Abstract
The increasing penetration of mobile devices in everyday life is posing a broad range of research challenges to meet such a massive data demand. Mobile users seek connectivity "anywhere, at anytime". In addition, killer applications with multimedia contents, like video transmissions, require larger amounts of resources to cope with tight quality constraints. Spectrum scarcity and interference issues represent the key aspects of next generation wireless networks. Consequently, designing proper resource management solutions is critical. To this aim, we first propose a model to better assess the performance of Orthogonal Frequency-Division Multiple Access (OFDMA)-based simulated cellular networks. A link abstraction of the downlink data transmission can provide an accurate performance metric at a low computational cost. Our model combines Mutual Information-based multi-carrier compression metrics with Link-Level performance profiles, thus expressing the dependency of the transmitted data Block Error Rate (BLER) on the SINR values and on the modulation and coding scheme (MCS) being assigned. In addition, we aim at evaluating the impact of Jumboframes transmission in LTE networks, which are packets breaking the 1500-byte legacy value. A comparative evaluation is performed based on diverse network configuration criteria, thus highlighting specific limitations. In particular, we observed rapid buffer saturation under certain circumstances, due to the transmission of oversized packets with scarce radio resources. A novel cross-layer approach is proposed to prevent saturation, and thus tune the transmitted packet size with the instantaneous channel conditions, fed back through standard CQI-based procedures. Recent advances in wireless networking introduce the concept of resource sharing as one promising way to enhance the performance of radio communications. As the wireless spectrum is a scarce resource, and its usage is often found to be inefficient, it may be meaningful to design solutions where multiple operators join their efforts, so that wireless access takes place on shared, rather than proprietary to a single operator, frequency bands. In spite of the conceptual simplicity of this idea, the resulting mathematical analysis may be very complex, since it involves analytical representation of multiple wireless channels. Thus, we propose an evaluative tool for spectrum sharing techniques in OFDMA-based wireless networks, where multiple sharing policies can be easily integrated and, consequently, evaluated. On the other hand, relatively to contention-based broadband wireless access, we target an important issue in mobile ad hoc networks: the intrinsic inefficiency of the standard transmission control protocol (TCP), which presents degraded performance mainly due to mechanisms such as congestion control and avoidance. In fact, TCP was originally designed for wired networks, where packet losses indicate congestion. Conversely, channels in wireless networks might vary rapidly, thus most loss events are due to channel errors or link layer contention. We aim at designing a light-weight cross-layer framework which, differently from many other works in the literature, is based on the cognitive network paradigm. 
It includes an observation phase, i.e., a training set in which the network parameters are collected; a learning phase, in which the information to be used is extracted from the data; a planning phase, in which we define the strategies to trigger; an acting phase, which corresponds to dynamically applying such strategies during network simulations. The next generation mobile infrastructure frontier relies on the concept of heterogeneous networks. However, the existence of multiple types of access nodes poses new challenges such as more stringent interference constraints due to node densification and self-deployed access. Here, we propose methods that aim at extending femto cells coverage range by enabling idle User Equipments (UE) to serve as relays. This way, UEs otherwise connected to macro cells can be offloaded to femto cells through UE relays. A joint resource allocation and user association scheme based on the solutions of a convex optimization problem is proposed. Another challenging issue to be addressed in such scenarios is admission control, which is in charge of ensuring that, when a new resource reservation is accepted, previously connected users continue having their QoS guarantees honored. Thus, we consider different approaches to compute the aggregate projected capacity in OFDMA-based networks, and propose the E-Diophantine solution, whose mathematical foundation is provided along with the performance improvements to be expected, both in accuracy and computational terms.
2

Kolba, Mark Philip. "Information-Based Sensor Management for Static Target Detection Using Real and Simulated Data". Diss., 2009. http://hdl.handle.net/10161/1313.

Abstract

In the modern sensing environment, large numbers of sensor tasking decisions must be made using an increasingly diverse and powerful suite of sensors in order to best fulfill mission objectives in the presence of situationally-varying resource constraints. Sensor management algorithms allow the automation of some or all of the sensor tasking process, meaning that sensor management approaches can either assist or replace a human operator as well as ensure the safety of the operator by removing that operator from a dangerous operational environment. Sensor managers also provide improved system performance over unmanaged sensing approaches through the intelligent control of the available sensors. In particular, information-theoretic sensor management approaches have shown promise for providing robust and effective sensor manager performance.

This work develops information-theoretic sensor managers for a general static target detection problem. Two types of sensor managers are developed. The first considers a set of discrete objects, such as anomalies identified by an anomaly detector or grid cells in a gridded region of interest. The second considers a continuous spatial region in which targets may be located at any point in continuous space. In both types of sensor managers, the sensor manager uses a Bayesian, probabilistic framework to model the environment and tasks the sensor suite to make new observations that maximize the expected information gain for the system. The sensor managers are compared to unmanaged sensing approaches using simulated data and using real data from landmine detection and unexploded ordnance (UXO) discrimination applications, and it is demonstrated that the sensor managers consistently outperform the unmanaged approaches, enabling targets to be detected more quickly using the sensor managers. The performance improvement represented by the rapid detection of targets is of crucial importance in many static target detection applications, resulting in higher rates of advance and reduced costs and resource consumption in both military and civilian applications.


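
In the simplest discrete case, the information-theoretic tasking rule described in this abstract reduces to observing the cell whose measurement maximizes the expected reduction in posterior entropy. A minimal sketch, assuming one binary "target present" variable per cell and a sensor with fixed detection and false-alarm probabilities (all values are illustrative, not from the dissertation):

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_info_gain(p, pd=0.9, pf=0.1):
    # Expected drop in the binary entropy of "target present" after one
    # look by a sensor with detection prob pd and false-alarm prob pf.
    p_alarm = pd * p + pf * (1 - p)
    post_alarm = pd * p / p_alarm
    post_quiet = (1 - pd) * p / (1 - p_alarm)
    h_after = (p_alarm * entropy(post_alarm)
               + (1 - p_alarm) * entropy(post_quiet))
    return entropy(p) - h_after

prior = np.array([0.05, 0.5, 0.2, 0.9, 0.35])   # assumed cell priors
print("task sensor on cell:", int(np.argmax(expected_info_gain(prior))))
```

As expected, the greedy rule favors the most uncertain cell (prior near 0.5), where one observation is most informative.
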
3

Viscardi, Cecilia. "Approximate Bayesian Computation and Statistical Applications to Anonymized Data: an Information Theoretic Perspective". Doctoral thesis, 2021. http://hdl.handle.net/2158/1236316.

Abstract
Realistic statistical modelling of complex phenomena often leads to considering several latent variables and nuisance parameters. In such cases, the Bayesian approach to inference requires the computation of challenging integrals or summations over high dimensional spaces. Monte Carlo methods are a class of widely used algorithms for performing simulated inference. In this thesis, we consider the problem of sample degeneracy in Monte Carlo methods, focusing on Approximate Bayesian Computation (ABC), a class of likelihood-free algorithms allowing inference when the likelihood function is analytically intractable or computationally demanding to evaluate. In the ABC framework, sample degeneracy arises when proposed values of the parameters, once given as input to the generative model, rarely lead to simulations resembling the observed data and are hence discarded. Such "poor" parameter proposals, i.e., parameter values having an (exponentially) small probability of producing simulation outcomes close to the observed data, do not contribute at all to the representation of the parameter's posterior distribution. This leads to a very large number of required simulations and/or a waste of computational resources, as well as to distortions in the computed posterior distribution. To mitigate this problem, we propose two algorithms, referred to as the Large Deviations Approximate Bayesian Computation algorithms (LD-ABC), where the ABC typical rejection step is avoided altogether. We adopt an information theoretic perspective resorting to the Method of Types formulation of Large Deviations, thus first restricting our attention to models for i.i.d. discrete random variables and then extending the method to parametric finite state Markov chains. We experimentally evaluate our method through proof-of-concept implementations. Furthermore, we consider statistical applications to anonymized data. We adopt the point of view of an evaluator interested in publishing data about individuals in an anonymized form that allows balancing the learner's utility against the risk posed by an attacker, potentially targeting individuals in the dataset. Accordingly, we present a unified Bayesian model applying to data anonymized employing group-based schemes and a related MCMC method to learn the population parameters. This allows relative threat analysis, i.e., an analysis of the risk for any individual in the dataset to be linked to a specific sensitive value beyond what is implied for the general population. Finally, we show the performance of the ABC methods in this setting and test LD-ABC at work on a real-world obfuscated dataset.

Book chapters on the topic "Bayesian Simulated Inference"

1

Lin, Luan, and Jun Zhu. "Using Simulated Data to Evaluate Bayesian Network Approach for Integrating Diverse Data." In Gene Network Inference, 119–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-45161-4_8.

2

Harrow, Aram W., and Annie Y. Wei. "Adaptive Quantum Simulated Annealing for Bayesian Inference and Estimating Partition Functions." In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 193–212. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2020. http://dx.doi.org/10.1137/1.9781611975994.12.

3

Santner, Thomas J., Brian J. Williams, and William I. Notz. "Bayesian Inference for Simulator Output." In Springer Series in Statistics, 115–43. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4939-8847-1_4.

4

Stewart, Donal, Stephen Gilmore, and Michael A. Cousin. "FM-Sim: A Hybrid Protocol Simulator of Fluorescence Microscopy Neuroscience Assays with Integrated Bayesian Inference." In Hybrid Systems Biology, 159–74. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27656-4_10.

5

Radde, Nicole, and Lars Kaderali. "A Bayes Regularized Ordinary Differential Equation Model for the Inference of Gene Regulatory Networks." In Handbook of Research on Computational Methodologies in Gene Regulatory Networks, 139–68. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-685-3.ch006.

Abstract
Differential equation models provide a detailed, quantitative description of transcription regulatory networks. However, due to the large number of model parameters, they are usually applicable to small networks only, with at most a few dozen genes. Moreover, they are not well suited to deal with noisy data. In this chapter, we show how to circumvent these limitations by integrating an ordinary differential equation model into a stochastic framework. The resulting model is then embedded into a Bayesian learning approach. We integrate the biologically motivated expectation of sparse connectivity in the network into the inference process using a specifically defined prior distribution on model parameters. The approach is evaluated on simulated data and a dataset of the transcriptional network governing the yeast cell cycle.
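
The MAP version of the chapter's approach, ODE dynamics fitted under a sparsity-inducing Laplace prior, amounts to L1-penalized least squares on the interaction parameters. The sketch below uses a toy linear three-gene system and an untuned derivative-free optimizer; the chapter's kinetic model, stochastic framework, and full posterior inference are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rhs(t, x, W):
    # Toy linear regulatory dynamics dx/dt = W x for three genes.
    return W @ x

W_true = np.array([[-1.0, 0.0, 0.8],
                   [0.9, -1.0, 0.0],
                   [0.0, 0.0, -0.5]])      # sparse ground truth
ts = np.linspace(0, 4, 20)
x0 = np.array([1.0, 0.2, 0.5])
sol = solve_ivp(rhs, (0, 4), x0, t_eval=ts, args=(W_true,))
data = sol.y + 0.02 * rng.normal(size=sol.y.shape)

def neg_log_post(w_flat, lam=0.5, sd=0.02):
    # Gaussian likelihood + Laplace prior on W's entries: the MAP
    # objective is L1-penalized least squares.
    W = w_flat.reshape(3, 3)
    sim = solve_ivp(rhs, (0, 4), x0, t_eval=ts, args=(W,))
    if not sim.success or sim.y.shape != data.shape:
        return 1e6
    return (np.sum((sim.y - data) ** 2) / (2 * sd**2)
            + lam * np.abs(w_flat).sum())

res = minimize(neg_log_post, np.zeros(9), method="Nelder-Mead",
               options={"maxiter": 5000})
print(np.round(res.x.reshape(3, 3), 2))
```
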

Conference papers on the topic "Bayesian Simulated Inference"

1

Yousefian, Sajjad, Gilles Bourque, Sandeep Jella, Philippe Versailles, and Rory F. D. Monaghan. "A Stochastic and Bayesian Inference Toolchain for Uncertainty and Risk Quantification of Rare Autoignition Events in DLE Premixers." In ASME Turbo Expo 2022: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/gt2022-83667.

Abstract
Quantification of aleatoric uncertainties due to the inherent variabilities in operating conditions and fuel composition is essential for designing and improving premixers in dry low-emissions (DLE) combustion systems. Advanced stochastic simulation tools require a large number of evaluations in order to perform this type of uncertainty quantification (UQ) analysis. This task is computationally prohibitive using high-fidelity computational fluid dynamics (CFD) approaches such as large eddy simulation (LES). In this paper, we describe a novel and computationally efficient toolchain for stochastic modelling using minimal input from LES, to perform uncertainty and risk quantification of a DLE system. More specifically, high-fidelity LES, a chemical reactor network (CRN) model, a beta mixture model, Bayesian inference, and sequential Monte Carlo (SMC) are integrated into the toolchain. The methodology is applied to a practical premixer of a low-emission combustion system with dimethyl ether (DME)/methane-air mixtures to simulate autoignition events at different engine conditions. First, the benchmark premixer is simulated using a set of LESs for a methane/air mixture at elevated pressure and temperature conditions. A partitioning approach is employed to generate a set of deterministic chemical reactor network (CRN) models from the LES results. These CRN models are then solved at the volume-averaged conditions and validated against the LES results. A mixture modelling approach using the expectation-method of moment (EMM) is carried out to generate a set of beta mixture models and characterise uncertainties for the LES-predicted temperature distributions. These beta mixture models and a normal distribution for the DME volume fraction are used to simulate a set of stochastic CRN models. The Bayesian inference approach through the sequential Monte Carlo (SMC) method is then applied to the temperature distributions from the stochastic CRN models to estimate the probability of autoignition in the benchmark premixer. The results show very satisfactory performance of the stochastic toolchain in computing the autoignition propensity for a few events with a particular combination of inlet temperature and DME volume fraction. Characterisation of these rare events is computationally prohibitive with conventional deterministic methods such as high-fidelity LES.
2

Palencia, O. G., A. P. Teixeira, and C. Guedes Soares. "Safety of Pipelines Subjected to Deterioration Processes Modelled Through Dynamic Bayesian Networks." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61969.

Abstract
The paper studies the application of Dynamic Bayesian Networks for modelling degradation processes in oil and gas pipelines. A DBN tool consisting of a Matlab code has been developed for performing inference on models. The tool is then applied for probabilistic modelling of the burst pressure of a pipe subjected to corrosion degradation and for safety assessment. The burst pressure is evaluated using the ASME B31G design method and other empirical formulas. A model for corrosion prediction in pipelines and its governing parameters are explicitly included into the probabilistic framework. Different sets of simulated corrosion measurements are used to increase the accuracy of the model predictions. Several parametric studies are conducted to investigate how changes in the observed corrosion (depth and length) and in the frequency of inspections affect the pipe reliability.
3

Losi, Enzo, Mauro Venturini, and Lucrezia Manservigi. "Gas Turbine Health State Prognostics by Means of Bayesian Hierarchical Models." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-90054.

Abstract
The prediction of the time evolution of gas turbine performance is an emerging requirement of modern prognostics and health management (PHM), aimed at improving system reliability and availability, while reducing life cycle costs. In this work, a data-driven Bayesian Hierarchical Model (BHM) is employed to perform a probabilistic prediction of gas turbine future health state thanks to its capability to deal with fleet data from multiple units. First, the theoretical background of the predictive methodology is outlined to highlight the inference mechanism and data processing for estimating BHM predicted outputs. Then, BHM is applied to both simulated and field data representative of gas turbine degradation to assess its prediction reliability and grasp some rules of thumb for minimizing BHM prediction error. For the considered field data, the average values of the prediction errors were found to be lower than 1.0 % or 1.7 % for single- or multi-step prediction, respectively.
4

Liu, Mengchen, Liu Jiang, Junlin Liu, Xiting Wang, Jun Zhu, and Shixia Liu. "Improving Learning-from-Crowds through Expert Validation." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/324.

Abstract
Although several effective learning-from-crowd methods have been developed to infer correct labels from noisy crowdsourced labels, a method for post-processed expert validation is still needed. This paper introduces a semi-supervised learning algorithm that is capable of selecting the most informative instances and maximizing the influence of expert labels. Specifically, we have developed a complete uncertainty assessment to facilitate the selection of the most informative instances. The expert labels are then propagated to similar instances via regularized Bayesian inference. Experiments on both real-world and simulated datasets indicate that given a specific accuracy goal (e.g., 95%) our method reduces expert effort from 39% to 60% compared with the state-of-the-art method.
5

Di Francesco, Domenic, Marios Chryssanthopoulos, Michael Havbro Faber, and Ujjwal Bharadwaj. "Bayesian Multi-Level Modelling for Improved Prediction of Corrosion Growth Rate." In ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/omae2020-18744.

Abstract
In pipelines, pressure vessels and various other steel structures, the remaining thickness of a corroding ligament can be measured directly and repeatedly over time. Statistical analysis of these measurements is a common approach for estimating the rate of corrosion growth, where the uncertainties associated with the inspection activity are taken into account. An additional source of variability in such calculations is the epistemic uncertainty associated with the limited number of measurements that are available to engineers at any point in time. Traditional methods face challenges in fitting models to limited or missing datasets. In such cases, deterministic upper bound values, as recommended in industrial guidance, are sometimes assumed for the purpose of integrity management planning. In this paper, Bayesian inference is proposed as a means for representing available information in consistency with evidence. This, in turn, facilitates decision support in the context of risk-informed integrity management. Aggregating inspection data from multiple locations does not account for the possible variability between the locations, and creating fully independent models can result in excessive levels of uncertainty at locations with limited data. Engineers intuitively acknowledge that the areas with more sites of corrosion should, to some extent, inform estimates of growth rates in other locations. Bayesian multi-level (hierarchical) models provide a mathematical basis for achieving this by means of the appropriate pooling of information, based on the homogeneity of the data. Included in this paper is an outline of the process of fitting a Bayesian multi-level model and a discussion of the benefits and challenges of pooling inspection data between distinct locations, using example calculations and simulated data.
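
Partial pooling of the kind described here can be illustrated with the textbook normal hierarchical model and a Gibbs sampler: per-location growth rates theta_j are drawn from a population distribution N(mu, tau^2), so locations with few measurements are shrunk toward the fleet-wide mean. The measurements, known noise level, and priors below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Corrosion growth-rate measurements (mm/yr) at four locations, with
# deliberately unequal and limited data (made-up values).
y = [np.array([0.11, 0.14, 0.09]),
     np.array([0.22, 0.19]),
     np.array([0.16]),
     np.array([0.08, 0.10, 0.07, 0.09])]
sigma = 0.03                   # assumed known measurement sd
J = len(y)

# Model: theta_j ~ N(mu, tau^2), y_ij ~ N(theta_j, sigma^2).
# Gibbs updates with a flat prior on mu, Inv-Gamma(2, 0.01) on tau^2.
mu, tau2 = 0.1, 0.01
theta = np.array([yi.mean() for yi in y])
draws = []
for it in range(5000):
    for j in range(J):         # theta_j | rest (conjugate normal)
        n = len(y[j])
        prec = n / sigma**2 + 1 / tau2
        mean = (y[j].sum() / sigma**2 + mu / tau2) / prec
        theta[j] = rng.normal(mean, np.sqrt(1 / prec))
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))   # mu | rest
    a = 2 + J / 2              # tau^2 | rest (conjugate inverse-gamma)
    b = 0.01 + 0.5 * np.sum((theta - mu) ** 2)
    tau2 = b / rng.gamma(a)
    draws.append(theta.copy())

post = np.array(draws[1000:])
print("partially pooled rate estimates:", post.mean(axis=0).round(3))
```

The single-measurement location borrows strength from the others: its posterior mean sits between its lone observation and the population mean, exactly the behavior the paper motivates.
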
6

Profir, B., M. H. Eres, J. P. Scanlan, and R. Bates. "Investigation of Fan Blade off Events Using a Bayesian Framework." In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-63431.

Abstract
This paper illustrates a probabilistic method of studying Fan Blade Off (FBO) events which is based upon Bayesian inference. Investigating this case study is of great interest from the point of view of the engineering team responsible for the dynamic modelling of the fan. The reason is that subsequent to an FBO event, the fan loses its axisymmetry, and as a result severe impacting can occur between the blades and the inner casing of the engine. The mechanical modelling (which is beyond the scope of this paper) involves studying the oscillation modes of the fan at various release speeds (defined as the speed at which an FBO event occurs) and at various amounts of damage (defined as the percentage of blade which is released during an FBO event). However, it is virtually infeasible to perform the vibrational analysis for all combinations of release speed and damage. Consequently, the Bayesian updating which forms the foundation of the framework presented in this paper is used to identify the combinations most likely to occur after an FBO event, which are then used for the mechanical analysis. The Bayesian inference engine presented here makes use of expert judgements which are updated using in-service data (which for the purposes of this paper are fictitious). The resulting inputs are then passed through 1,000,000 Monte Carlo iterations (which from a physical standpoint represent the number of simulated FBO events) in order to check which combinations of release speed and blade damage are most common, so as to report back to the mechanical engineering team. The scope of the project outlined in this paper is therefore to create a flexible model which changes every time data become available, in order to reflect both the original expert judgements it was based on and the real data itself. The features of interest of the posterior distributions which can be seen in the Results section are the peaks of the probability distributions. The reason for this has already been outlined: only the most likely FBO events (i.e., the peaks of the distributions) are of interest for the purposes of the dynamics analysis. Even though the differences between prior and posterior distributions are not pronounced, this is due to the particular data set used for the update; using another data set, or adding to the existing one, will produce different distributions.
7

Hou, Danlin, Chang Shu, Lili Ji, Ibrahim Galal Hassan, and Liangzhu (Leon) Wang. "Bayesian Calibrating Educational Building Thermal Models to Hourly Indoor Air Temperature: Methodology and Case Study." In ASME 2021 Verification and Validation Symposium. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/vvs2021-65268.

Abstract
With the increase in the frequency and duration of heatwaves and extreme temperatures, global warming has become one of the most critical environmental issues. Heatwaves pose significant threats to human health, including heat-related diseases and deaths, especially for vulnerable groups. One such heatwave, during the summer of 2018 in Montreal, Canada, caused up to 53 deaths, most of them among people living in buildings without access to air conditioning. Unlike building energy models, which mainly focus on energy performance, building thermal models emphasize indoor thermal performance without a mechanical system. They require an understanding of complex, dynamic building thermal physics, in which detailed building parameters need to be specified but are challenging to determine in real life. Uncertainty assessment of the parameter estimates can make the results more reliable. Therefore, in this paper, a Bayesian calibration procedure is presented and applied to an educational building. First, the building was modeled in EnergyPlus based on an on-site visit and related information collection. Second, a sensitivity analysis was performed to identify the significant parameters affecting the errors between simulated and monitored indoor air temperatures. Then, a meta-model was developed and used during the calibration process instead of the original EnergyPlus model to reduce the required computing load and time. Subsequently, Bayesian inference was employed to calibrate the model against hourly indoor air temperatures in summer. Finally, the model was validated. It is shown that the Bayesian calibration procedure not only calibrates the model within the performance tolerance required by international building standards and codes but also predicts future thermal performance with a confidence interval, which makes it more reliable.
8

Gnanasekaran, N., and C. Balaji. "A Correlation for Nusselt Number Under Turbulent Mixed Convection Using Transient Heat Transfer Experiments." In 2010 14th International Heat Transfer Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/ihtc14-22428.

Abstract
This paper reports the results of an experimental investigation of transient, turbulent mixed convection in a vertical channel in which one of the walls is heated and the other is adiabatic. The goal is to simultaneously estimate the constants in a Nusselt number correlation whose form is assumed a priori by synergistically marrying the experimental results with repeated numerical calculations that assume guess values of the constants. The convective heat transfer coefficient "h" is replaced by the Nusselt number (Nu), which is then assumed to have the form Nu = a (1 + Ri_D)^b Re_D^c, where a, b, and c are the constants to be evaluated. From the experimentally obtained temperature time history and the simulated temperature time history based on guess values of a, b, and c, one can define the objective function, or residue, as the sum of the squares of the differences between experimentally obtained and simulated temperatures. Using Bayesian inference driven by the Markov chain Monte Carlo method, one, several, or all of the constants a, b, and c are retrieved together with the uncertainty involved in these estimates. Additionally, the estimated parameters are compared with experimental benchmarks.
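
The estimation step described in this abstract (retrieving a, b, and c by MCMC) can be sketched with a random-walk Metropolis sampler. For brevity, the sketch fits the correlation directly to synthetic noisy Nu data rather than through the transient temperature histories used in the paper; the truth values, noise level, flat priors, and step sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements": Nu = a (1 + Ri_D)^b Re_D^c plus 5% noise.
a_t, b_t, c_t = 0.023, 0.3, 0.8
Re = rng.uniform(5e3, 5e4, 40)
Ri = rng.uniform(0.1, 2.0, 40)
Nu_obs = a_t * (1 + Ri)**b_t * Re**c_t * (1 + 0.05 * rng.normal(size=40))

def log_post(p):
    a, b, c = p
    if a <= 0:                 # flat priors; a must stay positive
        return -np.inf
    resid = Nu_obs - a * (1 + Ri)**b * Re**c
    return -0.5 * np.sum((resid / (0.05 * Nu_obs)) ** 2)

p = np.array([0.02, 0.5, 0.75])          # starting guess
lp = log_post(p)
steps = np.array([0.002, 0.05, 0.01])    # untuned proposal scales
chain = []
for _ in range(20000):
    cand = p + steps * rng.normal(size=3)
    lc = log_post(cand)
    if np.log(rng.uniform()) < lc - lp:
        p, lp = cand, lc
    chain.append(p.copy())

a, b, c = np.mean(chain[5000:], axis=0)
print(f"a = {a:.4f}, b = {b:.2f}, c = {c:.3f}")
```

The posterior spread of the chain, not just its mean, is the point of the Bayesian treatment: it quantifies the uncertainty in the retrieved constants.
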
9

Bisinotto, Gustavo A., Lucas P. Cotrim, Fabio G. Cozman, and Eduardo A. Tannuri. "Assessment of Sea State Estimation With Convolutional Neural Networks Based on the Motion of a Moored FPSO Subjected to High-Frequency Wave Excitation." In ASME 2022 41st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/omae2022-78603.

Abstract
Motion-based wave inference has been extensively discussed over the past years to estimate sea state parameters from the measured motions of a vessel. Most of those methods rely on the linearity assumption between waves and ship response and present a limitation related to high-frequency waves, whose first-order excitation is mostly filtered by the vessel. In a previous study in this project, the motion of a spread-moored FPSO platform, associated with a dataset of environmental conditions, was used to train convolutional neural network models so as to estimate sea state parameters, displaying good results, even for high-frequency waves. This paper further explores this supervised learning inference method, focusing on the estimation of unimodal high-frequency waves along with an evaluation of particular features related to the approach. The analysis is performed by training estimation models under different circumstances. First, models are obtained from the simulated platform response out of a dataset with synthetic sea state parameters, which are uniformly distributed. Then, a second dataset of metocean conditions, with unimodal waves observed at a Brazilian Offshore Basin, is considered to verify the behavior of the models with data that have different distributions of wave parameters. Next, the input time series are filtered to separate first-order response and slow drift motion, allowing the derivation of distinct models and the determination of the contribution of each motion component to the estimation. Finally, a comparison between the outcomes of the neural network approach evaluated under those conditions and the results obtained by traditional Bayesian modeling is carried out, to assess the performance of the proposed models and their applicability to one of the classical issues in motion-based wave inference.
10

Kelly, Dana L. "Using Copulas to Model Dependence in Simulation Risk Assessment." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-41284.

Abstract
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
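
The core copula trick proposed here is easy to demonstrate: draw correlated normals, push them through the normal CDF to obtain dependent uniforms, then through the inverse marginal CDFs to obtain dependent failure times. The rates and correlation below are made-up illustration values, and the Bayesian estimation of the copula parameter discussed in the paper is not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

rho = 0.7                                 # Gaussian-copula correlation
lam1, lam2 = 1 / 1000.0, 1 / 1500.0       # assumed failure rates (1/hr)
n = 100_000

cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)                     # dependent uniform marginals
t1 = stats.expon.ppf(u[:, 0], scale=1 / lam1)  # exponential failure times
t2 = stats.expon.ppf(u[:, 1], scale=1 / lam2)

# Dependence matters for a redundant pair that fails only if both fail.
both = np.mean((t1 < 500) & (t2 < 500))
indep = (stats.expon.cdf(500, scale=1 / lam1)
         * stats.expon.cdf(500, scale=1 / lam2))
print(f"P(both fail by 500 h): copula {both:.4f} vs independent {indep:.4f}")
```

The copula sample gives a noticeably higher joint-failure probability than the independence assumption, which is precisely the common-cause effect the paper targets.
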