Ready-made bibliography on the topic "Computational inference method"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Computational inference method".

An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the work's metadata.

Journal articles on the topic "Computational inference method"

1

Jha, Kunal, Tuan Anh Le, Chuanyang Jin, Yen-Ling Kuo, Joshua B. Tenenbaum, and Tianmin Shu. "Neural Amortized Inference for Nested Multi-Agent Reasoning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 530–37. http://dx.doi.org/10.1609/aaai.v38i1.27808.

Abstract:
Multi-agent interactions, such as communication, teaching, and bluffing, often rely on higher-order social inference, i.e., understanding how others infer oneself. Such intricate reasoning can be effectively modeled through nested multi-agent reasoning. Nonetheless, the computational complexity escalates exponentially with each level of reasoning, posing a significant challenge. However, humans effortlessly perform complex social inferences as part of their daily lives. To bridge the gap between human-like inference capabilities and computational limitations, we propose a novel approach: leveraging neural networks to amortize high-order social inference, thereby expediting nested multi-agent reasoning. We evaluate our method in two challenging multi-agent interaction domains. The experimental results demonstrate that our method is computationally efficient while exhibiting minimal degradation in accuracy.
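The amortization idea in this abstract can be illustrated in a toy conjugate-Gaussian setting. This is a sketch of generic amortized inference, not the authors' nested multi-agent model: a one-parameter linear map stands in for their neural network, and the model is an illustrative Gaussian location problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: mu ~ N(0, 1), x_1..x_n ~ N(mu, 1).
# The exact posterior mean given the sample mean xbar is n * xbar / (n + 1).
n = 5
n_sims = 200_000

# Amortization step: simulate (mu, xbar) pairs once, up front...
mu = rng.normal(0.0, 1.0, n_sims)
xbar = mu + rng.normal(0.0, 1.0 / np.sqrt(n), n_sims)

# ...and fit a cheap predictor of mu from xbar. A 1-parameter linear map
# stands in for the neural network used in the paper.
slope = np.cov(mu, xbar)[0, 1] / np.var(xbar)

def amortized_posterior_mean(xbar_obs):
    # At test time, "inference" is a single function evaluation.
    return slope * xbar_obs

print(slope, n / (n + 1))  # the learned map recovers the Bayes shrinkage factor
```

The upfront simulation cost is paid once; afterwards each inference query is a single forward evaluation, which is what makes the approach attractive when the same inference has to be repeated at every level of nested reasoning.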
2

Martina Perez, Simon, Heba Sailem, and Ruth E. Baker. "Efficient Bayesian inference for mechanistic modelling with high-throughput data". PLOS Computational Biology 18, no. 6 (June 21, 2022): e1010191. http://dx.doi.org/10.1371/journal.pcbi.1010191.

Abstract:
Bayesian methods are routinely used to combine experimental data with detailed mathematical models to obtain insights into physical phenomena. However, the computational cost of Bayesian computation with detailed models has been a notorious problem. Moreover, while high-throughput data presents opportunities to calibrate sophisticated models, comparing large amounts of data with model simulations quickly becomes computationally prohibitive. Inspired by the method of Stochastic Gradient Descent, we propose a minibatch approach to approximate Bayesian computation. Through a case study of a high-throughput imaging scratch assay experiment, we show that reliable inference can be performed at a fraction of the computational cost of a traditional Bayesian inference scheme. By applying a detailed mathematical model of single cell motility, proliferation and death to a data set of 118 gene knockdowns, we characterise functional subgroups of gene knockdowns, each displaying its own typical combination of local cell density-dependent and -independent motility and proliferation patterns. By comparing these patterns to experimental measurements of cell counts and wound closure, we find that density-dependent interactions play a crucial role in the process of wound healing.
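A minimal sketch of the minibatch idea, assuming a toy Gaussian location model rather than the paper's mechanistic scratch-assay model: each proposed parameter is compared against a random minibatch of the data instead of the full dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a high-throughput dataset: many draws from N(theta_true, 1).
theta_true = 2.0
data = rng.normal(theta_true, 1.0, 100_000)

def minibatch_abc(data, n_samples=500, batch=200, eps=0.1):
    """Rejection ABC that, for each proposed parameter, compares a random
    minibatch of the data with an equally sized simulation, rather than
    simulating and comparing the full dataset."""
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(-5.0, 5.0)          # draw from a flat prior
        obs = rng.choice(data, size=batch)      # minibatch of observed data
        sim = rng.normal(theta, 1.0, batch)     # simulate a same-size batch
        if abs(obs.mean() - sim.mean()) < eps:  # summary-statistic distance
            accepted.append(theta)
    return np.array(accepted)

post = minibatch_abc(data)
print(post.mean())  # close to theta_true
```

The per-proposal cost is proportional to the minibatch size rather than the dataset size, mirroring how stochastic gradient descent trades a little extra noise for a large reduction in per-step cost.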
3

Zhang, Chendong, and Ting Chen. "Bayesian slip inversion with automatic differentiation variational inference". Geophysical Journal International 229, no. 1 (October 29, 2021): 546–65. http://dx.doi.org/10.1093/gji/ggab438.

Abstract:
The Bayesian slip inversion offers a powerful tool for modelling the earthquake source mechanism. It can provide a fully probabilistic result and thus permits us to quantitatively assess the inversion uncertainty. The Bayesian problem is usually solved with Monte Carlo methods, but they are computationally expensive and are inapplicable for high-dimensional and large-scale problems. Variational inference is an alternative solver to the Bayesian problem. It turns Bayesian inference into an optimization task and thus enjoys better computational performances. In this study, we introduce a general variational inference algorithm, automatic differentiation variational inference (ADVI), to the Bayesian slip inversion and compare it with the classic Metropolis–Hastings (MH) sampling method. The synthetic test shows that the two methods generate nearly identical mean slip distributions and standard deviation maps. In the real case study, the two methods produce highly consistent mean slip distributions, but the ADVI-derived standard deviation map differs from that produced by the MH method, possibly because of the limitation of the Gaussian approximation in the ADVI method. In both cases, ADVI can give comparable results to the MH method but with a significantly lower computational cost. Our results show that ADVI is a promising and competitive method for the Bayesian slip inversion.
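Both algorithms compared in this abstract can be sketched side by side on a toy one-dimensional posterior (a hypothetical N(1, 0.5²) target, not the slip-inversion model): ADVI fits a Gaussian by stochastic gradient ascent on the ELBO, while random-walk MH samples the same target.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy unnormalised log-posterior: a N(1, 0.5^2) target.
def log_p(theta):
    return -2.0 * (theta - 1.0) ** 2

def dlog_p(theta):
    return -4.0 * (theta - 1.0)

# ADVI: fit q = N(mu, sigma^2) by stochastic gradient ascent on the ELBO,
# using the reparameterisation theta = mu + sigma * z with z ~ N(0, 1).
mu, log_sig = 0.0, 0.0
lr, batch = 0.02, 64
for _ in range(4000):
    z = rng.normal(size=batch)
    theta = mu + np.exp(log_sig) * z
    g = dlog_p(theta)
    mu += lr * g.mean()                                       # dELBO / dmu
    log_sig += lr * ((g * z * np.exp(log_sig)).mean() + 1.0)  # + entropy term

# Random-walk Metropolis-Hastings on the same target, for comparison.
theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + rng.normal()
    if np.log(rng.uniform()) < log_p(prop) - log_p(theta):
        theta = prop
    chain.append(theta)
chain = np.array(chain[2000:])  # discard burn-in

print(mu, np.exp(log_sig))        # ADVI: roughly 1.0 and 0.5
print(chain.mean(), chain.std())  # MH:   roughly 1.0 and 0.5
```

On a Gaussian target the two agree exactly; the abstract's observation is that on real, non-Gaussian posteriors the ADVI standard deviations can deviate because the Gaussian family cannot represent the true shape.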
4

Koblents, Eugenia, Inés P. Mariño, and Joaquín Míguez. "Bayesian Computation Methods for Inference in Stochastic Kinetic Models". Complexity 2019 (January 20, 2019): 1–15. http://dx.doi.org/10.1155/2019/7160934.

Abstract:
In this paper we investigate Monte Carlo methods for the approximation of the posterior probability distributions in stochastic kinetic models (SKMs). SKMs are multivariate Markov jump processes that model the interactions among species in biological systems according to a set of usually unknown parameters. The tracking of the species populations together with the estimation of the interaction parameters is a Bayesian inference problem for which Markov chain Monte Carlo (MCMC) methods have been a typical computational tool. Specifically, the particle MCMC (pMCMC) method has been shown to be an effective, though computationally demanding, method for this problem. Recently, it has been shown that an alternative approach to Bayesian computation, namely, the class of adaptive importance samplers, may be more efficient than classical MCMC-like schemes, at least for certain applications. For example, the nonlinear population Monte Carlo (NPMC) algorithm has yielded promising results with a low-dimensional SKM (the classical predator-prey model). In this paper we explore the application of both pMCMC and NPMC to analyze complex autoregulatory feedback networks modelled by SKMs. We demonstrate numerically how the populations of the relevant species in the network can be tracked and their interaction rates estimated, even in scenarios with partial observations. NPMC schemes attain an appealing trade-off between accuracy and computational cost that can make them advantageous in many practical applications.
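The adaptive importance-sampling idea behind NPMC, iteratively re-fitting the proposal from nonlinearly transformed (clipped) importance weights, can be sketched on a toy target. The Gaussian proposal family, the clipping constant, and the iteration counts here are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(theta):
    # stand-in posterior: N(2, 1), known only up to a constant
    return -0.5 * (theta - 2.0) ** 2

def npmc(n_iters=50, n_samples=5000, n_clip=100):
    mu, sigma = -5.0, 3.0  # deliberately poor initial proposal
    for _ in range(n_iters):
        theta = rng.normal(mu, sigma, n_samples)
        log_q = -0.5 * ((theta - mu) / sigma) ** 2 - np.log(sigma)
        lr = log_target(theta) - log_q
        w = np.exp(lr - lr.max())
        # Nonlinear step: clip the largest weights to fight degeneracy
        # (the key idea of the nonlinear population Monte Carlo scheme).
        w = np.minimum(w, np.sort(w)[-n_clip])
        w /= w.sum()
        mu = np.sum(w * theta)                          # adapt proposal mean
        sigma = np.sqrt(np.sum(w * (theta - mu) ** 2))  # adapt proposal scale
    return mu, sigma

mu, sigma = npmc()
print(mu, sigma)  # approximately 2 and 1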
5

Beaumont, Mark A., Wenyang Zhang, and David J. Balding. "Approximate Bayesian Computation in Population Genetics". Genetics 162, no. 4 (December 1, 2002): 2025–35. http://dx.doi.org/10.1093/genetics/162.4.2025.

Abstract:
We propose a new method for approximate Bayesian statistical inference on the basis of summary statistics. The method is suited to complex problems that arise in population genetics, extending ideas developed in this setting by earlier authors. Properties of the posterior distribution of a parameter, such as its mean or density curve, are approximated without explicit likelihood calculations. This is achieved by fitting a local-linear regression of simulated parameter values on simulated summary statistics, and then substituting the observed summary statistics into the regression equation. The method combines many of the advantages of Bayesian statistical inference with the computational efficiency of methods based on summary statistics. A key advantage of the method is that the nuisance parameters are automatically integrated out in the simulation step, so that the large numbers of nuisance parameters that arise in population genetics problems can be handled without difficulty. Simulation results indicate computational and statistical efficiency that compares favorably with those of alternative methods previously proposed in the literature. We also compare the relative efficiency of inferences obtained using methods based on summary statistics with those obtained directly from the data using MCMC.
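The simulate-reject-adjust recipe the abstract describes can be sketched end-to-end on a toy Gaussian model (the prior, summary statistic, and acceptance fraction here are illustrative, not a population-genetics setup):

```python
import numpy as np

rng = np.random.default_rng(6)

# Observed data: n i.i.d. draws from N(theta, 1); summary = sample mean.
n = 10
s_obs = 1.0  # observed sample mean

# Step 1: simulate (parameter, summary) pairs from the prior and the model.
n_sims = 50_000
theta = rng.uniform(-3.0, 5.0, n_sims)  # flat prior
s = theta + rng.normal(0.0, 1.0 / np.sqrt(n), n_sims)

# Step 2: rejection - keep the simulations whose summary is closest to s_obs.
keep = np.argsort(np.abs(s - s_obs))[:2000]
th_acc, s_acc = theta[keep], s[keep]

# Step 3: local-linear regression adjustment - regress theta on s among the
# accepted draws, then project every accepted draw onto s = s_obs.
b = np.polyfit(s_acc, th_acc, 1)[0]  # regression slope
th_adj = th_acc - b * (s_acc - s_obs)

print(th_adj.mean(), th_adj.std())  # approximates the posterior N(1, 1/sqrt(10))
```

The adjustment in step 3 is what lets a relatively loose acceptance tolerance still yield an accurate posterior approximation, since the residual dependence of the parameter on the summary is regressed away.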
6

Li, Ziyue, Kan Ren, Yifan Yang, Xinyang Jiang, Yuqing Yang, and Dongsheng Li. "Towards Inference Efficient Deep Ensemble Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8711–19. http://dx.doi.org/10.1609/aaai.v37i7.26048.

Abstract:
Ensemble methods can deliver surprising performance gains but also bring significantly higher computational costs, e.g., up to 2048× in large-scale ensemble tasks. However, we found that the majority of computations in ensemble methods are redundant. For instance, over 77% of samples in the CIFAR-100 dataset can be correctly classified with only a single ResNet-18 model, which indicates that only around 23% of the samples need an ensemble of extra models. To this end, we propose an inference-efficient ensemble learning method to simultaneously optimize for effectiveness and efficiency in ensemble learning. More specifically, we regard the ensemble of models as a sequential inference process and learn the optimal halting event for inference on a specific sample. At each timestep of the inference process, a common selector judges whether the current ensemble has reached sufficient effectiveness and should halt further inference; otherwise, it forwards this challenging sample to the subsequent models to conduct a more powerful ensemble. Both the base models and the common selector are jointly optimized to dynamically adjust ensemble inference for different samples with varying hardness, through novel optimization goals including sequential ensemble boosting and computation saving. Experiments with different backbones on real-world datasets illustrate that our method can bring up to 56% inference cost reduction while maintaining comparable performance to the full ensemble, achieving significantly better ensemble utility than other baselines. Code and supplemental materials are available at https://seqml.github.io/irene.
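A rough sketch of the early-halting idea on synthetic data: a fixed confidence threshold stands in for the paper's learned selector, and single-feature voters stand in for the trained base models (all thresholds and model choices here are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic binary task: 8 noisy "views" of the same signal per sample.
n = 5000
y = rng.integers(0, 2, n)
x = rng.normal((2.0 * y - 1.0)[:, None], 1.5, (n, 8))

def early_exit_ensemble(x, tau=0.8, min_models=3):
    n, m = x.shape
    votes = np.zeros(n)
    used = np.zeros(n, dtype=int)
    active = np.ones(n, dtype=bool)
    for k in range(m):
        votes[active] += (x[active, k] > 0)  # base model k: threshold one view
        used[active] = k + 1
        p = votes / np.maximum(used, 1)      # running estimate of P(y = 1)
        # "Selector": halt inference once the running vote is confident enough.
        confident = ((p > tau) | (p < 1 - tau)) & (used >= min_models)
        active &= ~confident
        if not active.any():
            break
    return (p > 0.5).astype(int), used

pred, used = early_exit_ensemble(x)
print((pred == y).mean(), used.mean())  # accuracy near full ensemble, < 8 models/sample
```

Easy samples exit after a few base models while hard ones consume the full ensemble, which is exactly the redundancy the abstract quantifies on CIFAR-100.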
7

Li, Benchong, Shoufeng Cai, and Jianhua Guo. "A computational algebraic-geometry method for conditional-independence inference". Frontiers of Mathematics in China 8, no. 3 (March 25, 2013): 567–82. http://dx.doi.org/10.1007/s11464-013-0295-9.

8

Springer, Sebastian, Heikki Haario, Jouni Susiluoto, Aleksandr Bibov, Andrew Davis, and Youssef Marzouk. "Efficient Bayesian inference for large chaotic dynamical systems". Geoscientific Model Development 14, no. 7 (July 9, 2021): 4319–33. http://dx.doi.org/10.5194/gmd-14-4319-2021.

Abstract:
Estimating parameters of chaotic geophysical models is challenging due to their inherent unpredictability. These models cannot be calibrated with standard least squares or filtering methods if observations are temporally sparse. Obvious remedies, such as averaging over temporal and spatial data to characterize the mean behavior, do not capture the subtleties of the underlying dynamics. We perform Bayesian inference of parameters in high-dimensional and computationally demanding chaotic dynamical systems by combining two approaches: (i) measuring model–data mismatch by comparing chaotic attractors and (ii) mitigating the computational cost of inference by using surrogate models. Specifically, we construct a likelihood function suited to chaotic models by evaluating a distribution over distances between points in the phase space; this distribution defines a summary statistic that depends on the geometry of the attractor, rather than on pointwise matching of trajectories. This statistic is computationally expensive to simulate, compounding the usual challenges of Bayesian computation with physical models. Thus, we develop an inexpensive surrogate for the log likelihood with the local approximation Markov chain Monte Carlo method, which in our simulations reduces the time required for accurate inference by orders of magnitude. We investigate the behavior of the resulting algorithm with two smaller-scale problems and then use a quasi-geostrophic model to demonstrate its large-scale application.
9

Zhang, Xinfang, Miao Li, Bomin Wang, and Zexian Li. "A Parameter Correction method of CFD based on the Approximate Bayesian Computation technique". Journal of Physics: Conference Series 2569, no. 1 (August 1, 2023): 012076. http://dx.doi.org/10.1088/1742-6596/2569/1/012076.

Abstract:
Numerical simulation and modeling techniques are becoming the primary research tools for aerodynamic analysis and design. However, various uncertainties in physical modeling and numerical simulation seriously affect the credibility of Computational Fluid Dynamics (CFD) simulation results. Therefore, CFD models need to be adjusted and modified with consideration of uncertainties to improve the prediction accuracy and confidence level of CFD numerical simulations. This paper presents a parameter correction method of CFD for aerodynamic analysis by making full use of the advantages of the Approximate Bayesian Computation (ABC) technique in dealing with the analysis and inference of complex statistical models, in which the parameters of turbulence models for CFD are inferred. The proposed parameter correction method is applied to the aerodynamic prediction of the NACA0012 airfoil. The results show the feasibility and effectiveness of the proposed approach in improving CFD prediction accuracy.
10

Zhang, Chi, Yilun Wang, Lili Zhang, and Huicheng Zhou. "A fuzzy inference method based on association rule analysis with application to river flood forecasting". Water Science and Technology 66, no. 10 (November 1, 2012): 2090–98. http://dx.doi.org/10.2166/wst.2012.420.

Abstract:
In this paper, a computationally efficient version of the widely used Takagi-Sugeno (T-S) fuzzy reasoning method is proposed, and applied to river flood forecasting. It is well known that the number of fuzzy rules of traditional fuzzy reasoning methods exponentially increases as the number of input parameters increases, often causing prohibitive computational burden. The proposed method greatly reduces the number of fuzzy rules by making use of the association rule analysis on historical data, and therefore achieves computational efficiency for the cases of a large number of input parameters. In the end, we apply this new method to a case study of river flood forecasting, which demonstrates that the proposed fuzzy reasoning engine can achieve better prediction accuracy than the widely used Muskingum–Cunge scheme.
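The underlying Takagi-Sugeno inference step (before any rule pruning) can be sketched with two rules on a single input. Note that with m membership functions on each of d inputs a full rule base has m^d rules; that exponential growth is the explosion the paper's association-rule analysis is meant to avoid. The rules and membership functions below are illustrative.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ts_infer(x):
    """Zero-order Takagi-Sugeno inference with two rules:
         IF x is LOW  THEN y = 1
         IF x is HIGH THEN y = 3
    The output is the firing-strength-weighted average of the consequents.
    (Assumes x lies where at least one membership is non-zero.)"""
    w_low = tri(x, -1.0, 0.0, 1.0)
    w_high = tri(x, 0.0, 1.0, 2.0)
    return (w_low * 1.0 + w_high * 3.0) / (w_low + w_high)

print(ts_infer(0.0))  # 1.0 (only LOW fires)
print(ts_infer(0.5))  # 2.0 (both rules fire equally)
print(ts_infer(1.0))  # 3.0 (only HIGH fires)
```

Pruning the rule base with association rules changes only which (antecedent, consequent) pairs appear in the weighted average; the inference step itself stays this simple.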

Doctoral dissertations on the topic "Computational inference method"

1

Bergmair, Richard. "Monte Carlo semantics : robust inference and logical pattern processing with natural language text". Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609713.

2

Guo, Wenbin. "Computational analysis and method development for high throughput transcriptomics and transcriptional regulatory inference in plants". Thesis, University of Dundee, 2018. https://discovery.dundee.ac.uk/en/studentTheses/3f14dd8e-0c6c-4b46-adb0-bbb10b0cbe19.

Abstract:
RNA sequencing (RNA-seq) technologies facilitate the characterisation of genes and transcripts in different cell types as well as their expression analysis across various conditions. Due to its ability to provide in-depth insights into transcription and post-transcription mechanisms, RNA-seq has been extensively used in functional genetics and transcriptomics, systems biology and developmental biology in animals, plants, diseases, etc. The aim of this project is to use mathematical and computational models to integrate big genomic and transcriptomic data from high-throughput technologies in plant biology and develop new methods to identify which genes or transcripts have significant expression variation across experimental conditions of interest, then to interpret the regulatory causalities of these expression changes by distinguishing the effects from transcription and alternative splicing. We performed a high-resolution ultra-deep RNA-seq time-course experiment to study Arabidopsis in response to cold treatment, where plants were grown at 20°C and then the temperature was reduced to 4°C. We have developed a high-quality Arabidopsis thaliana Reference Transcript Dataset (AtRTD2) transcriptome for accurate transcript and gene quantification. This high-quality time-series dataset was used as the benchmark for novel method development and downstream expression analysis. The main outcomes of this project include three parts. i) A pipeline for differential expression (DE) and differential alternative splicing (DAS) analysis at both gene and transcript levels. Firstly, we implemented data pre-processing to reduce the noise/low expression, batch effects and technical biases of read counts. Then we used the limma-voom pipeline to compare the expression at corresponding time-points of 4°C to the time-points of 20°C. We identified 8,949 genes with altered expression, of which 2,442 showed significant DAS and 1,647 were only regulated by AS.
Compared with current publications, 3,039 of these genes were novel cold-responsive genes. In addition, we identified 4,008 differential transcript usage (DTU) transcripts of which the expression changes were significantly different to their cognate DAS genes. ii) A TSIS R package for time-series transcript isoform switch (IS) analysis was developed. IS refers to the time-points when a pair of transcript isoforms from the same gene reverse their relative expression abundances. By using a five-metric scheme to robustly evaluate the quality of each switch point, we identified 892 significant ISs between the high-abundance transcripts in the DAS genes, and about 57% of these switches occurred very rapidly between 0–6 h following transfer to 4°C. iii) An RLowPC R package for co-expression network construction was generated. The RLowPC method uses a two-step approach to select the high-confidence edges: it first reduces the search space by picking only the top-ranked genes from an initial partial correlation analysis, and then computes the partial correlations in the confined search space by removing only the linear dependencies from the shared neighbours, largely ignoring the genes showing lower association. In future work, we will construct dynamic transcriptional and AS regulatory networks to interpret the causalities of DE and DAS. We will study the coupling and de-coupling of expression rhythmicity to the Arabidopsis circadian clock in response to cold. We will develop new methods to improve the statistical power of expression comparative analysis, such as by taking into account the missing values of expression and by distinguishing the technical and biological variabilities.
3

Strid, Ingvar. "Computational methods for Bayesian inference in macroeconomic models". Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-1118.

Abstract:
The New Macroeconometrics may succinctly be described as the application of Bayesian analysis to the class of macroeconomic models called Dynamic Stochastic General Equilibrium (DSGE) models. A prominent local example from this research area is the development and estimation of the RAMSES model, the main macroeconomic model in use at Sveriges Riksbank.   Bayesian estimation of DSGE models is often computationally demanding. In this thesis fast algorithms for Bayesian inference are developed and tested in the context of the state space model framework implied by DSGE models. The algorithms discussed in the thesis deal with evaluation of the DSGE model likelihood function and sampling from the posterior distribution. Block Kalman filter algorithms are suggested for likelihood evaluation in large linearised DSGE models. Parallel particle filter algorithms are presented for likelihood evaluation in nonlinearly approximated DSGE models. Prefetching random walk Metropolis algorithms and adaptive hybrid sampling algorithms are suggested for posterior sampling. The generality of the algorithms, however, suggest that they should be of interest also outside the realm of macroeconometrics.
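The likelihood evaluations this abstract is concerned with can be illustrated on a toy linear-Gaussian state space model, where a bootstrap particle filter's likelihood estimate can be checked against the exact Kalman filter value. This is a generic sketch of the two filters, not the thesis's DSGE models or its block/parallel algorithms.

```python
import numpy as np

rng = np.random.default_rng(8)

# Linear-Gaussian state space model (exact likelihood available from a
# Kalman filter, giving a ground truth for the particle filter):
#   x_t = phi * x_{t-1} + N(0, q),   y_t = x_t + N(0, r)
phi, q, r, T = 0.8, 1.0, 1.0, 100
x = np.zeros(T); y = np.zeros(T)
x[0] = rng.normal(0, np.sqrt(q / (1 - phi**2)))  # stationary initial state
for t in range(T):
    if t > 0:
        x[t] = phi * x[t-1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

def kalman_loglik(y):
    m, P, ll = 0.0, q / (1 - phi**2), 0.0
    for yt in y:
        S = P + r                                  # predictive variance of y_t
        ll += -0.5 * (np.log(2 * np.pi * S) + (yt - m)**2 / S)
        K = P / S
        m, P = m + K * (yt - m), (1 - K) * P       # measurement update
        m, P = phi * m, phi**2 * P + q             # time update
    return ll

def particle_loglik(y, n=2000):
    xs = rng.normal(0, np.sqrt(q / (1 - phi**2)), n)
    ll = 0.0
    for yt in y:
        w = np.exp(-0.5 * (yt - xs)**2 / r) / np.sqrt(2 * np.pi * r)
        ll += np.log(w.mean())                        # unbiased likelihood factor
        xs = xs[rng.choice(n, n, p=w / w.sum())]      # resample
        xs = phi * xs + rng.normal(0, np.sqrt(q), n)  # propagate
    return ll

print(kalman_loglik(y), particle_loglik(y))  # nearly equal
```

For linearised DSGE models the Kalman recursion is exact (and is what the thesis's block algorithms accelerate); for nonlinear approximations only the particle filter applies, which is why its parallelisation matters.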
4

Warne, David James. "Computational inference in mathematical biology: Methodological developments and applications". Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/202835/1/David_Warne_Thesis.pdf.

Abstract:
Complexity in living organisms occurs on multiple spatial and temporal scales. The function of tissues depends on interactions of cells, and in turn, cell dynamics depends on intercellular and intracellular biochemical networks. A diverse range of mathematical modelling frameworks are applied in quantitative biology. Effective application of models in practice depends upon reliable statistical inference methods for experimental design, model calibration and model selection. In this thesis, new results are obtained for quantification of contact inhibition and cell motility mechanisms in prostate cancer cells, and novel computationally efficient inference algorithms suited for the study of biochemical systems are developed.
5

Dahlin, Johan. "Accelerating Monte Carlo methods for Bayesian inference in dynamical models". Doctoral thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125992.

Abstract:
Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computational prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era. 
Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.
Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a particular disease? How can Netflix and Spotify know which films and music I will want to watch or listen to next? These three problems are examples of questions where statistical models can provide support and a basis for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, say, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications like these, and many others, make statistical models important for many parts of society. One way of building statistical models is to update a model continuously as more information is collected. This approach is called Bayesian statistics and is particularly useful when one has good prior insight into the model or only a small amount of historical data with which to build it. A drawback of Bayesian statistics is that the computations required to update the model with new information are often very complicated. In such situations one can instead simulate the outcomes of millions of variants of the model and compare them against the historical observations at hand. One can then average over the variants that gave the best results to arrive at a final model. Building a model can therefore sometimes take days or weeks. The problem becomes especially severe with more advanced models that could give better forecasts but take too long to build. In this thesis we use a number of strategies to ease or improve these simulations.
For example, we propose taking more insights about the system into account and thereby reducing the number of model variants that need to be examined: some models can be ruled out from the start, because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, exploring the space of all possible models more efficiently. We propose several combinations and modifications of existing methods to speed up fitting the model to the observations, and we show that the computation time can in some cases be reduced from several days to about an hour. We hope that this will eventually make it practical to use more advanced models, which in turn will lead to better forecasts and better decisions.
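The particle Metropolis-Hastings algorithm that this thesis accelerates is an instance of pseudo-marginal MCMC: the exact likelihood is replaced by any unbiased estimate, and the chain still targets the exact posterior. A minimal sketch, with an artificial unbiased noise term standing in for a particle-filter likelihood estimate (the target here is an illustrative conjugate-Gaussian model, not a DSGE model):

```python
import numpy as np

rng = np.random.default_rng(5)

y_obs = 1.0

def log_post(theta):
    # N(0, 1) prior, N(theta, 1) likelihood -> exact posterior N(0.5, 0.5)
    return -0.5 * theta**2 - 0.5 * (y_obs - theta)**2

def noisy_log_post(theta, s=0.5):
    """Unbiased estimate on the density scale: exp(noise) with
    noise ~ N(-s^2/2, s^2) has mean exactly 1."""
    return log_post(theta) + rng.normal(-0.5 * s**2, s)

def pseudo_marginal_mh(n_iter=50_000, step=1.0):
    theta, lp = 0.0, noisy_log_post(0.0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = noisy_log_post(prop)  # fresh estimate for the proposal
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop   # crucially, keep the accepted estimate
        out[i] = theta
    return out

draws = pseudo_marginal_mh()
print(draws.mean(), draws.var())  # approximately 0.5 and 0.5
```

The strategies the abstract lists (gradient/Hessian-informed proposals, correlated estimates) all aim at the same bottleneck visible here: the noise and cost of `noisy_log_post`, which in PMH is a full particle-filter run per proposal.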
6

Lienart, Thibaut. "Inference on Markov random fields : methods and applications". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:3095b14c-98fb-4bda-affc-a1fa1708f628.

Abstract:
This thesis considers the problem of performing inference on undirected graphical models with continuous state spaces. These models represent conditional independence structures that can appear in the context of Bayesian Machine Learning. In the thesis, we focus on computational methods and applications. The aim of the thesis is to demonstrate that the factorisation structure corresponding to the conditional independence structure present in high-dimensional models can be exploited to decrease the computational complexity of inference algorithms. First, we consider the smoothing problem on Hidden Markov Models (HMMs) and discuss novel algorithms that have sub-quadratic computational complexity in the number of particles used. We show they perform on par with existing state-of-the-art algorithms with a quadratic complexity. Further, a novel class of rejection free samplers for graphical models known as the Local Bouncy Particle Sampler (LBPS) is explored and applied on a very large instance of the Probabilistic Matrix Factorisation (PMF) problem. We show the method performs slightly better than Hamiltonian Monte Carlo methods (HMC). It is also the first such practical application of the method to a statistical model with hundreds of thousands of dimensions. In a second part of the thesis, we consider approximate Bayesian inference methods and in particular the Expectation Propagation (EP) algorithm. We show it can be applied as the backbone of a novel distributed Bayesian inference mechanism. Further, we discuss novel variants of the EP algorithms and show that a specific type of update mechanism, analogous to the mirror descent algorithm outperforms all existing variants and is robust to Monte Carlo noise. Lastly, we show that EP can be used to help the Particle Belief Propagation (PBP) algorithm in order to form cheap and adaptive proposals and significantly outperform classical PBP.
7

Wang, Tengyao. "Spectral methods and computational trade-offs in high-dimensional statistical inference". Thesis, University of Cambridge, 2016. https://www.repository.cam.ac.uk/handle/1810/260825.

Abstract:
Spectral methods have become increasingly popular in designing fast algorithms for modern high-dimensional datasets. This thesis looks at several problems in which spectral methods play a central role. In some cases, we also show that such procedures have essentially the best performance among all randomised polynomial time algorithms by exhibiting statistical and computational trade-offs in those problems. In the first chapter, we prove a useful variant of the well-known Davis–Kahan theorem, which is a spectral perturbation result that allows us to bound the distance between population eigenspaces and their sample versions. We then propose a semi-definite programming algorithm for the sparse principal component analysis (PCA) problem, and analyse its theoretical performance using the perturbation bounds we derived earlier. It turns out that the parameter regime in which our estimator is consistent is strictly smaller than the consistency regime of a minimax optimal (yet computationally intractable) estimator. We show through reduction from a well-known hard problem in computational complexity theory that the difference in consistency regimes is unavoidable for any randomised polynomial time estimator, hence revealing subtle statistical and computational trade-offs in this problem. Such computational trade-offs also exist in the problem of restricted isometry certification. Certifiers for restricted isometry properties can be used to construct design matrices for sparse linear regression problems. Similar to the sparse PCA problem, we show that there is also an intrinsic gap between the class of matrices certifiable using unrestricted algorithms and using polynomial time algorithms. Finally, we consider the problem of high-dimensional changepoint estimation, where we estimate the time of change in the mean of a high-dimensional time series with piecewise constant mean structure.
Motivated by real world applications, we assume that changes only occur in a sparse subset of all coordinates. We apply a variant of the semi-definite programming algorithm in sparse PCA to aggregate the signals across different coordinates in a near optimal way so as to estimate the changepoint location as accurately as possible. Our statistical procedure shows superior performance compared to existing methods in this problem.
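The Davis–Kahan-type perturbation bound mentioned in the first chapter can be checked numerically on a small synthetic matrix. The sketch below uses the variant sin θ ≤ 2‖E‖_op / (λ₁ − λ₂) from Yu, Wang and Samworth (2015), which may differ in constants from the thesis's own variant.

```python
import numpy as np

rng = np.random.default_rng(7)

# Symmetric matrix with a clear eigengap, plus a small symmetric perturbation.
A = np.diag([5.0, 2.0, 1.0])
E = rng.normal(0, 0.05, (3, 3))
E = (E + E.T) / 2

def top_eigvec(M):
    # np.linalg.eigh returns eigenvalues in ascending order
    return np.linalg.eigh(M)[1][:, -1]

v = top_eigvec(A)          # population top eigenvector
v_hat = top_eigvec(A + E)  # its perturbed ("sample") version

c = abs(v @ v_hat)  # |cos(theta)|; abs handles the sign ambiguity
sin_theta = np.sqrt(max(0.0, 1.0 - c**2))

# Davis-Kahan-type bound: sin(theta) <= 2 * ||E||_op / (lambda_1 - lambda_2)
bound = 2 * np.linalg.norm(E, 2) / (5.0 - 2.0)
print(sin_theta, bound)  # the observed angle respects the bound
```

The key quantity is the eigengap in the denominator: as λ₁ − λ₂ shrinks, the same perturbation can rotate the leading eigenvector much further, which is what makes such bounds delicate in high-dimensional settings.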
Style APA, Harvard, Vancouver, ISO etc.
8

Pardo, Jérémie. "Méthodes d'inférence de cibles thérapeutiques et de séquences de traitement". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG011.

Full text source
Abstract:
Network controllability is a major challenge in network medicine. It consists in finding a way to rewire molecular networks to reprogram the cell fate. The reprogramming action is typically represented as the action of a control. In this thesis, we extended the single control action method by investigating the sequential control of Boolean networks. We present a theoretical framework for the formal study of control sequences. We consider freeze controls, under which the variables can only be frozen to 0, 1 or unfrozen. We define a model of controlled dynamics where the modification of the control only occurs at a stable state in the synchronous update mode. We refer to the inference problem of finding a control sequence modifying the dynamics to evolve towards a desired state or property as CoFaSe. In this problem, a set of variables is uncontrollable. We prove that this problem is PSPACE-hard. We know from the complexity of CoFaSe that finding a minimal sequence of control by exhaustively exploring all possible control sequences is not practically tractable. By studying the dynamical properties of the CoFaSe problem, we found that the dynamical properties that make a control sequence necessary emerge from the update functions of uncontrollable variables. We found that the length of a minimal control sequence cannot be larger than twice the number of profiles of the uncontrollable variables. From this result, we built two algorithms inferring minimal control sequences under synchronous dynamics. Finally, the study of the interdependencies between sequential control and the topology of the interaction graph of the Boolean network allowed us to investigate the causal relationships that exist between structure and control. Furthermore, accounting for the topological properties of the network gives additional tools for tightening the upper bounds on sequence length.
This work sheds light on the key importance of non-negative cycles in the interaction graph for the emergence of minimal sequences of control of size greater than or equal to two.
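The freeze-control setting described in this abstract can be sketched on a toy synchronous Boolean network (the three update functions below are hypothetical, not taken from the thesis): one variable is frozen until the controlled dynamics reach a stable state, then the control is released, giving a control sequence of length two.

```python
# Minimal sketch of freeze control of a synchronous Boolean network,
# in the spirit of the CoFaSe setting (toy network, not from the thesis).

def step(state, control):
    """One synchronous update; frozen variables keep their forced value."""
    x, y, z = state
    nxt = (x and y, x or y, x)            # hypothetical update functions
    # control maps a variable index to its frozen Boolean value
    return tuple(control.get(i, v) for i, v in enumerate(nxt))

def run_to_stable(state, control, limit=64):
    """Iterate the controlled dynamics until a fixed point (if any)."""
    for _ in range(limit):
        nxt = step(state, control)
        if nxt == state:
            return state
        state = nxt
    return None  # no fixed point reached within the limit

# Control sequence: freeze x=1 until stability, then release all controls.
s = run_to_stable((False, False, False), {0: True})
s = run_to_stable(s, {})                  # released network stays stable
```

Here the uncontrolled network has the all-False state as a fixed point, so freezing x first is what steers it to the all-True fixed point, a small instance of why a sequence of controls can achieve what a single control cannot.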
Style APA, Harvard, Vancouver, ISO etc.
9

Angulo, Rafael Villa. "Computational methods for haplotype inference with application to haplotype block characterization in cattle". Fairfax, VA: George Mason University, 2009. http://hdl.handle.net/1920/4558.

Full text source
Abstract:
Thesis (Ph.D.)--George Mason University, 2009.
Vita: p. 123. Thesis director: John J. Grefenstette. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Bioinformatics and Computational Biology. Title from PDF t.p. (viewed Sept. 8, 2009). Includes bibliographical references (p. 114-122). Also issued in print.
Style APA, Harvard, Vancouver, ISO etc.
10

Ruli, Erlis. "Recent Advances in Approximate Bayesian Computation Methods". Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423529.

Full text source
Abstract:
The Bayesian approach to statistical inference is fundamentally probabilistic. Exploiting the internal consistency of the probability framework, the posterior distribution extracts the relevant information in the data, and provides a complete and coherent summary of post-data uncertainty. However, summarising the posterior distribution often requires the calculation of awkward multidimensional integrals. A further complication with the Bayesian approach arises when the likelihood function is unavailable. In this respect, promising advances have been made by the theory of Approximate Bayesian Computation (ABC). This thesis focuses on computational methods for the approximation of posterior distributions, and it discusses six original contributions. The first contribution concerns the approximation of marginal posterior distributions for scalar parameters. By combining higher-order tail-area approximation with inverse transform sampling, we define the HOTA algorithm, which draws independent random samples from the approximate marginal posterior. The second discusses the HOTA algorithm with pseudo-posterior distributions, e.g., posterior distributions obtained by combining a pseudo-likelihood with a prior within Bayes' rule. The third contribution extends the use of tail-area approximations to contexts with multidimensional parameters, and proposes a method which gives approximate Bayesian credible regions with good sampling coverage properties. The fourth presents an improved Laplace approximation which can be used for computing marginal likelihoods. The fifth contribution discusses a model-based procedure for choosing good summary statistics for ABC, by using composite score functions. Lastly, the sixth contribution discusses the choice of a default proposal distribution for ABC that is based on the notion of quasi-likelihood.
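For orientation, the basic ABC idea that this thesis builds on can be sketched with a plain rejection sampler (an illustrative toy example; the HOTA and composite-score methods above are considerably more refined): prior draws are kept whenever data simulated from them produce a summary statistic close to the observed one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend the likelihood of this normal model is unavailable; we can
# only simulate from it. The summary statistic is the sample mean.
observed = rng.normal(2.0, 1.0, size=100)
s_obs = float(observed.mean())

def abc_rejection(n_draws=20000, eps=0.05):
    """Keep prior draws whose simulated summary lands within eps of s_obs."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)        # draw from the prior
        sim = rng.normal(theta, 1.0, size=100)  # simulate pseudo-data
        if abs(sim.mean() - s_obs) < eps:     # distance between summaries
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection()                   # approximate posterior sample
```

The accepted draws concentrate around the observed summary, approximating the posterior without ever evaluating the likelihood.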
Style APA, Harvard, Vancouver, ISO etc.

Books on the topic "Computational inference method"

1

Istrail, Sorin, Michael Waterman, and Andrew Clark, eds. Computational Methods for SNPs and Haplotype Inference. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/b96286.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
2

Learning and inference in computational systems biology. Cambridge, Mass: MIT Press, 2010.

Find full text source
Style APA, Harvard, Vancouver, ISO etc.
3

Lawrence, Neil, ed. Learning and inference in computational systems biology. Cambridge, MA: MIT Press, 2010.

Find full text source
Style APA, Harvard, Vancouver, ISO etc.
4

Heard, Nick. An Introduction to Bayesian Inference, Methods and Computation. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82808-0.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
5

Lo, Andrew W. Foundations of technical analysis: Computational algorithms, statistical inference, and empirical implementation. Cambridge, MA: National Bureau of Economic Research, 2000.

Find full text source
Style APA, Harvard, Vancouver, ISO etc.
6

Workshop for Dialogue on Reverse Engineering Assessment and Methods (2006 New York, N.Y.). Reverse engineering biological networks: Opportunities and challenges in computational methods for pathway inference. Boston, Mass: Published by Blackwell Publishing on behalf of the New York Academy of Sciences, 2007.

Find full text source
Style APA, Harvard, Vancouver, ISO etc.
7

Istrail, Sorin, Michael S. Waterman, and Andrew G. Clark, eds. Computational methods for SNPs and Haplotype inference: DIMACS/RECOMB satellite workshop, Piscataway, NJ, USA, November 21-22, 2002: revised papers. Berlin: Springer-Verlag, 2004.

Find full text source
Style APA, Harvard, Vancouver, ISO etc.
8

Diagrams 2010 (2010 Portland, Or.). Diagrammatic representation and inference: 6th international conference, Diagrams 2010, Portland, OR, USA, August 9-11, 2010 : proceedings. Berlin: Springer, 2010.

Find full text source
Style APA, Harvard, Vancouver, ISO etc.
9

Varlamov, Oleg. Mivar databases and rules. ru: INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/1508665.

Full text source
Abstract:
The multidimensional open epistemological active network MOGAN is the basis for the transition to a qualitatively new level of creating logical artificial intelligence. Mivar databases and rules became the foundation for the creation of MOGAN. The results of the analysis and generalization of data representation structures of various data models are presented: from relational to "Entity-Relationship" (ER-model). On the basis of this generalization, a new model of data and rules is created: the mivar information space "Thing-Property-Relation". The logic-computational processing of data in this new model of data and rules is shown, which has linear computational complexity relative to the number of rules. MOGAN is a development of Rule-Based Systems and allows you to quickly and easily design algorithms and work with logical reasoning in the "If..., Then..." format. An example of creating a mivar expert system for solving problems in the model area "Geometry" is given. Mivar databases and rules can be used to model cause-and-effect relationships in different subject areas and to create knowledge bases of new-generation applied artificial intelligence systems and real-time mivar expert systems with the transition to "Big Knowledge". The textbook in the field of training "Computer Science and Computer Engineering" is intended for students, bachelors, undergraduates, postgraduates studying artificial intelligence methods used in information processing and management systems, as well as for users and specialists who create mivar knowledge models, expert systems, automated control systems and decision support systems.
Keywords: cybernetics, artificial intelligence, mivar, mivar networks, databases, data models, expert system, intelligent systems, multidimensional open epistemological active network, MOGAN, MIPRA, KESMI, Wi!Mi, Razumator, knowledge bases, knowledge graphs, knowledge networks, Big knowledge, products, logical inference, decision support systems, decision-making systems, autonomous robots, recommendation systems, universal knowledge tools, expert system designers, logical artificial intelligence.
Style APA, Harvard, Vancouver, ISO etc.
10

Desmarais, Bruce A., and Skyler J. Cranmer. Statistical Inference in Political Networks Research. Edited by Jennifer Nicoll Victor, Alexander H. Montgomery, and Mark Lubell. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780190228217.013.8.

Full text source
Abstract:
Researchers interested in statistically modeling network data have a well-established and quickly growing set of approaches from which to choose. Several of these methods have been regularly applied in research on political networks, while others have yet to permeate the field. This chapter reviews the most prominent methods of inferential network analysis for both cross-sectionally and longitudinally observed networks, including (temporal) exponential random graph models, latent space models, the quadratic assignment procedure, and stochastic actor oriented models. For each method, the chapter summarizes its analytic form, identifies prominent published applications in political science, and discusses computational considerations. It concludes with a set of guidelines for selecting a method for a given application.
Style APA, Harvard, Vancouver, ISO etc.

Book chapters on the topic "Computational inference method"

1

Revell, Jeremy, and Paolo Zuliani. "Stochastic Rate Parameter Inference Using the Cross-Entropy Method". In Computational Methods in Systems Biology, 146–64. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99429-1_9.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
2

Lilik, Ferenc, and László T. Kóczy. "The Determination of the Bitrate on Twisted Pairs by Mamdani Inference Method". In Studies in Computational Intelligence, 59–74. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03206-1_5.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
3

Izumi, Satoru, Yusuke Kobayashi, Hideyuki Takahashi, Takuo Suganuma, Tetsuo Kinoshita, and Norio Shiratori. "An Effective Inference Method Using Sensor Data for Symbiotic Healthcare Support System". In Computational Science and Its Applications – ICCSA 2010, 152–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12189-0_14.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
4

Bumee, Somkid, Chalothorn Liamwirat, Treenut Saithong, and Asawin Meechai. "Extended Constraint-Based Boolean Analysis: A Computational Method in Genetic Network Inference". In Communications in Computer and Information Science, 71–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16750-8_7.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
5

Zolkepli, Maslina Binti, and Teh Noranis Binti Mohd Aris. "Cross Domain Recommendations Based on the Application of Fuzzy AHP and Fuzzy Inference Method in Establishing Transdisciplinary Collaborations". In Computational and Experimental Simulations in Engineering, 397–412. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27053-7_36.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
6

Heard, Nick. "Computational Inference". In An Introduction to Bayesian Inference, Methods and Computation, 39–60. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82808-0_5.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
7

Alvo, Mayer. "Bayesian Computation Methods". In Statistical Inference and Machine Learning for Big Data, 385–410. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06784-6_13.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
8

Long, Quan. "Computational Haplotype Inference from Pooled Samples". In Methods in Molecular Biology, 309–19. New York, NY: Springer New York, 2017. http://dx.doi.org/10.1007/978-1-4939-6750-6_15.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
9

Klingner, Marvin, and Tim Fingscheidt. "Improved DNN Robustness by Multi-task Training with an Auxiliary Self-Supervised Task". In Deep Neural Networks and Data for Automated Driving, 149–70. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4_5.

Full text source
Abstract:
While deep neural networks for environment perception tasks in autonomous driving systems often achieve impressive performance on clean and well-prepared images, their robustness under real conditions, i.e., on images perturbed with noise patterns or adversarial attacks, often degrades significantly. In this chapter, we address this problem for the task of semantic segmentation by proposing multi-task training with the additional task of depth estimation, with the goal of improving DNN robustness. This method has very wide potential applicability, as the additional depth estimation task can be trained in a self-supervised fashion, relying only on unlabeled image sequences during training. The final trained segmentation DNN is, however, still applicable on a single-image basis during inference without additional computational overhead compared to the single-task model. Additionally, our evaluation introduces a measure which allows for a meaningful comparison between different noise and attack types. We show the effectiveness of our approach on the Cityscapes and KITTI datasets, where our method improves the DNN performance w.r.t. the single-task baseline in terms of robustness against multiple noise and adversarial attack types, which is supplemented by an improved absolute prediction performance of the resulting DNN.
Style APA, Harvard, Vancouver, ISO etc.
10

Clark, Andrew G., Emmanouil T. Dermitzakis, and Stylianos E. Antonarakis. "Trisomic Phase Inference". In Computational Methods for SNPs and Haplotype Inference, 1–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24719-7_1.

Full text source
Style APA, Harvard, Vancouver, ISO etc.

Conference abstracts on the topic "Computational inference method"

1

Kimura, Shuhei, Masato Tokuhisa, and Mariko Okada-Hatakeyama. "Simultaneous execution method of gene clustering and network inference". In 2016 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB). IEEE, 2016. http://dx.doi.org/10.1109/cibcb.2016.7758123.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
2

Vu, Luong H., Benjamin N. Passow, Daniel Paluszczyszyn, Lipika Deka, and Eric Goodyer. "Neighbouring link travel time inference method using artificial neural network". In 2017 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2017. http://dx.doi.org/10.1109/ssci.2017.8285221.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
3

Zhou, Lina, Yin Qing, Liehui Jiang, Wenjian Yin, and Tieming Liu. "A Method of Type Inference Based on Dataflow Analysis for Decompilation". In 2009 International Conference on Computational Intelligence and Software Engineering. IEEE, 2009. http://dx.doi.org/10.1109/cise.2009.5362985.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
4

Koul, Nimrita. "Method for Feature Selection Based on Inference of Gene Regulatory Networks". In 2023 2nd International Conference on Computational Systems and Communication (ICCSC). IEEE, 2023. http://dx.doi.org/10.1109/iccsc56913.2023.10143012.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
5

Kimura, Shuhei, Kazuki Sota, and Masato Tokuhisa. "Inference of Genetic Networks using Random Forests: A Quantitative Weighting Method for Gene Expression Data". In 2022 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB). IEEE, 2022. http://dx.doi.org/10.1109/cibcb55180.2022.9863035.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
6

Liu, Xiaohong, Xianyi Zeng, Yang Xu, and Ludovic Koehl. "A method of experiment design based on IOWA operator inference in sensory evaluation". In Multiconference on "Computational Engineering in Systems Applications". IEEE, 2006. http://dx.doi.org/10.1109/cesa.2006.4281644.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
7

Xianzhi, Tang, and Wang Qingnian. "Driving Intention Intelligent Identification Method for Hybrid Vehicles Based on Fuzzy Logic Inference". In 2010 3rd International Symposium on Computational Intelligence and Design (ISCID). IEEE, 2010. http://dx.doi.org/10.1109/iscid.2010.11.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
8

Guan, Yu. "Regularization Method for Rule Reduction in Belief Rule-based System". In 8th International Conference on Computational Science and Engineering (CSE 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101705.

Full text source
Abstract:
Belief rule-based inference system introduces a belief distribution structure into the conventional rule-based system, which can effectively synthesize incomplete and fuzzy information. In order to optimize reasoning efficiency and reduce redundant rules, this paper proposes a rule reduction method based on regularization. This method controls the distribution of rules by setting corresponding regularization penalties in different learning steps and reduces redundant rules. This paper first proposes the use of the Gaussian membership function to optimize the structure and activation process of the belief rule base, and the corresponding regularization penalty construction method. Then, a step-by-step training method is used to set a different objective function for each step to control the distribution of belief rules, and a reduction threshold is set according to the distribution information of the belief rule base to perform rule reduction. Two experiments will be conducted based on the synthetic classification data set and the benchmark classification data set to verify the performance of the reduced belief rule base.
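The Gaussian-membership activation step described above can be sketched as follows (a hypothetical one-antecedent rule base; the paper's regularized training and reduction steps are omitted): each rule's referential value is matched against the input with a Gaussian membership function, and the resulting activations are normalised.

```python
import math

def gaussian_membership(x, center, width):
    """Degree to which input x matches a referential value."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def activation_weights(x, rules):
    """Normalised activation of each rule for a one-antecedent input x.

    Each rule is a (center, width, rule_weight) triple; the weights sum to 1.
    """
    raw = [w * gaussian_membership(x, c, s) for (c, s, w) in rules]
    total = sum(raw)
    return [r / total for r in raw]

# Three hypothetical rules with evenly spaced referential values.
rules = [(0.0, 1.0, 1.0), (2.0, 1.0, 1.0), (4.0, 1.0, 1.0)]
weights = activation_weights(1.0, rules)   # input halfway between rules 1 and 2
```

Rules whose activation weight stays near zero across the training data are the natural candidates for the kind of reduction the paper's regularization penalty encourages.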
Style APA, Harvard, Vancouver, ISO etc.
9

Wright, Stephen, Avinash Ravikumar, Laura Redmond, Benjamin Lawler, Matthew Castanier, Eric Gingrich, and Michael Tess. "Data Reduction Methods to Improve Computation Time for Calibration of Piston Thermal Models". In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0112.

Full text source
Abstract:
Fatigue analysis of pistons is reliant on an accurate representation of the high temperatures to which they are exposed. It can be difficult to represent this accurately, because instrumented tests to validate piston thermal models typically include only measurements near the piston crown and there are many unknown backside heat transfer coefficients (HTCs). Previously, a methodology was proposed to aid in the estimation of HTCs for backside convection boundary conditions of a stratified charge compression ignition (SCCI) piston. This methodology relies on Bayesian inference of backside HTC using a co-simulation between computational fluid dynamics (CFD) and finite element analysis (FEA) solvers. Although this methodology primarily utilizes the more computationally efficient FEA model for the iterations in the calibration, this can still be a computationally expensive process. In this paper, several data reduction methods, such as principal component analysis, data clustering and resampling, sensor reduction, and uniform bin sampling are investigated to improve computation time while minimizing reduction in accuracy of the inference results. Each data reduction method is compared to a control case to determine change in accuracy and improvement in run time. Results indicate that most reduction methods were no more effective than using a smaller Latin hypercube design to inform the Gaussian process within the Bayesian inference code. Reduced error was observed for the structured sensor reduction method, indicating that further studies on the value of individual sensor locations to the overall calibration might be a viable path to reduce the computation time of the calibration methodology without compromising accuracy.
Style APA, Harvard, Vancouver, ISO etc.
10

Guan, Jiaqi, Yang Liu, Qiang Liu, and Jian Peng. "Energy-efficient Amortized Inference with Cascaded Deep Classifiers". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/302.

Full text source
Abstract:
Deep neural networks have been remarkably successful in various AI tasks, but they often incur high computation and energy costs in energy-constrained applications such as mobile sensing. We address this problem by proposing a novel framework that optimizes the prediction accuracy and energy cost simultaneously, thus enabling effective cost-accuracy trade-offs at test time. In our framework, each data instance is pushed into a cascade of deep neural networks with increasing sizes, and a selection module is used to sequentially determine when a sufficiently accurate classifier can be used for this data instance. The cascade of neural networks and the selection module are jointly trained in an end-to-end fashion by the REINFORCE algorithm to optimize a trade-off between the computational cost and the predictive accuracy. Our method is able to simultaneously improve the accuracy and efficiency by learning to assign easy instances to fast yet sufficiently accurate classifiers to save computation and energy cost, while assigning harder instances to deeper and more powerful classifiers to ensure satisfactory accuracy. Moreover, we demonstrate our method's effectiveness with extensive experiments on CIFAR-10/100, ImageNet32x32 and the original ImageNet dataset.
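The cascade-with-early-exit idea can be sketched with a fixed confidence threshold (the paper instead learns the exit policy with REINFORCE; the stand-in models and threshold below are hypothetical): each instance stops at the first model that is confident enough, and only hard instances reach the expensive last stage.

```python
# Toy sketch of cascaded inference with early exit: cheap models answer
# confident instances, expensive ones handle the rest.

def cascade_predict(x, models, threshold=0.9):
    """Return (prediction, stage) from the first sufficiently confident model.

    Each model maps x to a (label, confidence) pair; the last model always
    answers regardless of its confidence.
    """
    for stage, model in enumerate(models):
        label, confidence = model(x)
        if confidence >= threshold or stage == len(models) - 1:
            return label, stage
    raise AssertionError("unreachable: the last stage always returns")

# Hypothetical stand-ins for a fast weak model and a slow strong one.
fast = lambda x: (x > 0, 0.95 if abs(x) > 1 else 0.6)
slow = lambda x: (x > 0, 0.99)

easy = cascade_predict(2.0, [fast, slow])    # confidently handled at stage 0
hard = cascade_predict(0.3, [fast, slow])    # falls through to stage 1
```

The energy saving comes from the fraction of instances that exit at the cheap early stages, which is exactly the quantity the learned selection module trades off against accuracy.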
Style APA, Harvard, Vancouver, ISO etc.

Organizational reports on the topic "Computational inference method"

1

Giacomini, Raffaella, Toru Kitagawa, and Matthew Read. Identification and Inference under Narrative Restrictions. Reserve Bank of Australia, October 2023. http://dx.doi.org/10.47688/rdp2023-07.

Full text source
Abstract:
We consider structural vector autoregressions subject to narrative restrictions, which are inequalities involving structural shocks in specific time periods (e.g. shock signs in given quarters). Narrative restrictions are used widely in the empirical literature. However, under these restrictions, there are no formal results on identification or the properties of frequentist approaches to inference, and existing Bayesian methods can be sensitive to prior choice. We provide formal results on identification, propose a computationally tractable robust Bayesian method that eliminates prior sensitivity, and show that it is asymptotically valid from a frequentist perspective. Using our method, we find that inferences about the output effects of US monetary policy obtained under restrictions related to the Volcker episode are sensitive to prior choice. Under a richer set of restrictions, there is robust evidence that output falls following a positive monetary policy shock.
Style APA, Harvard, Vancouver, ISO etc.
2

Arthur, Jennifer Ann. Subcritical Neutron Multiplication Inference Benchmarks for Nuclear Data and Computational Methods Validation. Office of Scientific and Technical Information (OSTI), December 2018. http://dx.doi.org/10.2172/1485365.

Full text source
Style APA, Harvard, Vancouver, ISO etc.
3

Koop, Gary, Jamie Cross, and Aubrey Poon. Introduction to Bayesian Econometrics in MATLAB. Instats Inc., 2022. http://dx.doi.org/10.61700/t3wrch7yujr7a469.

Full text source
Abstract:
This seminar provides an introduction to Bayesian econometrics. It covers the general theory underlying Bayesian econometrics and Bayesian inference in the linear regression model including an introduction of Bayesian machine learning methods for Big Data regression. Bayesian computational methods such as Gibbs sampling and the Metropolis-Hastings algorithm will be covered, with hands-on lab sections run using real-world data so that you will be able to apply these methods in your ongoing research. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, each seminar offers 2 ECTS Equivalent points.
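As a flavor of the computational methods covered, here is a minimal Gibbs sampler for the mean and precision of normal data under conjugate priors (an illustrative sketch, not material from the seminar): each parameter is drawn in turn from its full conditional distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(5.0, 2.0, size=200)      # synthetic data
n, ybar = y.size, float(y.mean())

mu, tau = 0.0, 1.0                      # initial values (tau = precision)
draws = []
for it in range(2000):
    # mu | tau, y ~ Normal(ybar, 1/(n*tau))  (flat prior on mu)
    mu = rng.normal(ybar, 1.0 / np.sqrt(n * tau))
    # tau | mu, y ~ Gamma(a, 1/b)  (Gamma(1, 1) prior on tau)
    a = 1.0 + n / 2.0
    b = 1.0 + 0.5 * float(np.sum((y - mu) ** 2))
    tau = rng.gamma(a, 1.0 / b)
    if it >= 500:                        # discard burn-in draws
        draws.append((mu, tau))

post_mu = float(np.mean([d[0] for d in draws]))  # posterior mean of mu
```

Alternating these two conditional draws produces a Markov chain whose stationary distribution is the joint posterior, the same logic that scales up to the regression models covered in the seminar.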
Style APA, Harvard, Vancouver, ISO etc.
4

Koop, Gary, Jamie Cross, and Aubrey Poon. Introduction to Bayesian Econometrics in MATLAB. Instats Inc., 2023. http://dx.doi.org/10.61700/aebi3thp50fr3469.

Full text source
Abstract:
This seminar provides an introduction to Bayesian econometrics. It covers the general theory underlying Bayesian econometrics and Bayesian inference in the linear regression model including an introduction of Bayesian machine learning methods for Big Data regression. Bayesian computational methods such as Gibbs sampling and the Metropolis-Hastings algorithm will be covered, with hands-on lab sections run using real-world data so that you will be able to apply these methods in your ongoing research. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, each seminar offers 2 ECTS Equivalent points.
Style APA, Harvard, Vancouver, ISO etc.
5

de Kemp, E. A., H. A. J. Russell, B. Brodaric, D. B. Snyder, M. J. Hillier, M. St-Onge, C. Harrison, et al. Initiating transformative geoscience practice at the Geological Survey of Canada: Canada in 3D. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/331097.

Full text source
Abstract:
Application of 3D technologies to the wide range of Geosciences knowledge domains is well underway. These have been operationalized in workflows of the hydrocarbon sector for a half-century, and now in mining for over two decades. In Geosciences, algorithms, structured workflows and data integration strategies can support compelling Earth models; however, challenges remain to meet the standards of geological plausibility required for most geoscientific studies. There are also missing links in the institutional information infrastructure supporting operational multi-scale 3D data and model development. Canada in 3D (C3D) is a vision and road map for transforming the Geological Survey of Canada's (GSC) work practice by leveraging emerging 3D technologies: primarily the transformation from 2D geological mapping to a well-structured 3D modelling practice that is both data-driven and knowledge-driven. It is tempting to imagine that advanced 3D computational methods, coupled with Artificial Intelligence and Big Data tools, will automate the bulk of this process. To effectively apply these methods there is a need, however, for data to be in a well-organized, classified, georeferenced (3D) format embedded with key information, such as spatial-temporal relations and earth process knowledge. Another key challenge for C3D is the relative infancy of 3D geoscience technologies for geological inference and 3D modelling using sparse and heterogeneous regional geoscience information, while preserving the insights and expertise of geoscientists and maintaining the scientific integrity of digital products. In most geological surveys, there remain considerable educational and operational challenges to achieve this balance of digital automation and expert knowledge. Emerging from the last two decades of research are more efficient workflows, transitioning from cumbersome, explicit (manual) to reproducible implicit semi-automated methods.
They are characterized by integrated and iterative, forward and reverse geophysical modelling, coupled with stratigraphic and structural approaches. The full impact of research and development with these 3D tools, geophysical-geological integration and simulation approaches is perhaps unpredictable, but the expectation is that they will produce predictive, instructive models of Canada's geology that will be used to educate, prioritize and influence sustainable policy for stewarding our natural resources. On the horizon are 3D geological modelling methods spanning the gulf between local and frontier or green-field settings, as well as deep crustal characterization. These are key components of mineral systems understanding, integrated and coupled hydrological modelling, and energy transition applications, e.g. carbon sequestration, in-situ hydrogen mining, and geothermal exploration. Presented are some case study examples at a range of scales from our efforts in C3D.
APA, Harvard, Vancouver, ISO, and other styles
6

de Kemp, E. A., H. A. J. Russell, B. Brodaric, D. B. Snyder, M. J. Hillier, M. St-Onge, C. Harrison et al. Initiating transformative geoscience practice at the Geological Survey of Canada: Canada in 3D. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331871.

Full text source
Abstract:
Application of 3D technologies to the wide range of geoscience knowledge domains is well underway. These have been operationalized in hydrocarbon-sector workflows for half a century, and in mining for over two decades. In the geosciences, algorithms, structured workflows and data integration strategies can support compelling Earth models; however, challenges remain in meeting the standards of geological plausibility required for most geoscientific studies. There are also missing links in the institutional information infrastructure supporting operational multi-scale 3D data and model development. Canada in 3D (C3D) is a vision and road map for transforming the Geological Survey of Canada's (GSC) work practice by leveraging emerging 3D technologies, primarily the transformation from 2D geological mapping to a well-structured 3D modelling practice that is both data-driven and knowledge-driven. It is tempting to imagine that advanced 3D computational methods, coupled with Artificial Intelligence and Big Data tools, will automate the bulk of this process. To apply these methods effectively, however, data must be in a well-organized, classified, georeferenced (3D) format embedded with key information such as spatial-temporal relations and earth-process knowledge. Another key challenge for C3D is the relative infancy of 3D geoscience technologies for geological inference and 3D modelling from sparse and heterogeneous regional geoscience information, while preserving the insights and expertise of geoscientists and maintaining the scientific integrity of digital products. In most geological surveys, considerable educational and operational challenges remain in achieving this balance of digital automation and expert knowledge. Emerging from the last two decades of research are more efficient workflows, transitioning from cumbersome, explicit (manual) methods to reproducible, implicit, semi-automated ones. They are characterized by integrated and iterative forward and inverse geophysical modelling, coupled with stratigraphic and structural approaches. The full impact of research and development with these 3D tools, geophysical-geological integration and simulation approaches is perhaps unpredictable, but the expectation is that they will produce predictive, instructive models of Canada's geology that will be used to educate, prioritize and influence sustainable policy for stewarding our natural resources. On the horizon are 3D geological modelling methods spanning the gulf between local and frontier or greenfield settings, as well as deep crustal characterization. These are key components of mineral-systems understanding, integrated and coupled hydrological modelling, and energy-transition applications, e.g. carbon sequestration, in-situ hydrogen mining, and geothermal exploration. Presented are some case-study examples, at a range of scales, from our efforts in C3D.
APA, Harvard, Vancouver, ISO, and other styles
