A selection of scholarly literature on the topic "Variational Inference"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Variational Inference".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a publication as a .pdf file and read its abstract online, where these are available in the metadata.

Journal articles on the topic "Variational Inference":

1

Sun, Yun-Shan, Hong-Yan Xu, and Yan-Qin Li. "Missing Data Interpolation with Variational Bayesian Inference for Socio-economic Statistics Applications." 電腦學刊 33, no. 2 (April 2022): 169–76. http://dx.doi.org/10.53106/199115992022043302015.

Abstract:
Information integrity is needed to solve socio-economic statistical problems. However, it is compromised by missing data arising from various subjective and objective causes, so missing-data interpolation is used to fill in the missing values. In this paper, missing-data interpolation with variational Bayesian inference is proposed. The method is combined with a Gaussian model to approximate the posterior distribution and obtain complete data. The experiments cover two datasets (an artificial dataset and a real dataset), each under three missing ratios. The interpolation performance of the variational Bayesian method is compared with mean interpolation and K-nearest-neighbor interpolation in terms of MSE (Mean Square Error) and MAPE (Mean Absolute Percentage Error). The experimental results show that the proposed variational Bayesian method performs better on both MSE and MAPE.
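For orientation, the baselines and metrics in this comparison are straightforward to reproduce. Below is a minimal sketch (not the paper's variational Bayesian method) of mean and K-nearest-neighbor imputation scored by MSE and MAPE; the toy data, the 20% missing ratio, and the scikit-learn imputers are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X_true = rng.normal(loc=10.0, size=(200, 5))   # toy complete data (illustrative)
mask = rng.random(X_true.shape) < 0.2          # one of several missing ratios
X_miss = np.where(mask, np.nan, X_true)

def mse(a, b, m):                              # error on the imputed cells only
    return float(np.mean((a[m] - b[m]) ** 2))

def mape(a, b, m):
    return float(np.mean(np.abs((a[m] - b[m]) / a[m]))) * 100

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("kNN", KNNImputer(n_neighbors=5))]:
    X_hat = imputer.fit_transform(X_miss)
    print(name, "MSE:", mse(X_true, X_hat, mask), "MAPE:", mape(X_true, X_hat, mask))
```

The paper's variational Bayesian approach would replace these imputers with estimates drawn from an approximate Gaussian posterior over the missing cells.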
2

Jaakkola, T. S., and M. I. Jordan. "Variational Probabilistic Inference and the QMR-DT Network." Journal of Artificial Intelligence Research 10 (May 1, 1999): 291–322. http://dx.doi.org/10.1613/jair.583.

Abstract:
We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the `Quick Medical Reference' (QMR) network. The QMR network is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method.
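For context, the deterministic bound underlying variational methods of this kind is the standard evidence lower bound (a generic identity, not specific to QMR-DT):

```latex
\log p(x) \;=\; \log \int p(x, z)\, dz
\;\ge\; \mathbb{E}_{q(z)}\big[\log p(x, z)\big] \;-\; \mathbb{E}_{q(z)}\big[\log q(z)\big],
```

with equality when q(z) = p(z | x); maximizing the right-hand side over a tractable family q yields deterministic approximations to the marginal and conditional probabilities of interest.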
3

Unlu, Ali, and Laurence Aitchison. "Gradient Regularization as Approximate Variational Inference." Entropy 23, no. 12 (December 3, 2021): 1629. http://dx.doi.org/10.3390/e23121629.

Abstract:
We developed Variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without the need for stochastic sampling of the neural-network weights. The Variational Laplace objective is simple to evaluate, as it is the log-likelihood plus weight-decay, plus a squared-gradient regularizer. Variational Laplace gave better test performance and expected calibration errors than maximum a posteriori inference and standard sampling-based variational inference, despite using the same variational approximate posterior. Finally, we emphasize the care needed in benchmarking standard VI, as there is a risk of stopping before the variance parameters have converged. We show that early-stopping can be avoided by increasing the learning rate for the variance parameters.
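To make the shape of that objective concrete, here is a hedged JAX sketch for logistic regression: the log-likelihood at the mean weights, minus weight decay, minus a squared-gradient penalty. The model, data, and scaling constants are illustrative assumptions, not the exact objective derived in the paper.

```python
import jax
import jax.numpy as jnp

def log_lik(w, X, y):
    logits = X @ w
    return jnp.sum(y * jax.nn.log_sigmoid(logits)
                   + (1.0 - y) * jax.nn.log_sigmoid(-logits))

def vl_objective(w, X, y, weight_decay=1e-2, sigma2=1e-2):
    g = jax.grad(log_lik)(w, X, y)             # gradient of the log-likelihood
    return (log_lik(w, X, y)
            - weight_decay * jnp.sum(w ** 2)   # weight decay (Gaussian prior term)
            - 0.5 * sigma2 * jnp.sum(g ** 2))  # squared-gradient regularizer

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (100, 3))
y = (X @ jnp.array([1.0, -2.0, 0.5]) > 0).astype(jnp.float32)
print(vl_objective(jnp.zeros(3), X, y))        # objective to be maximized in w
```

Standard sampling-based VI would instead estimate the expected log-likelihood with Monte Carlo samples of the weights; the point of the squared-gradient form is to avoid that sampling.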
4

Merlo, A., A. Pavone, D. Böckenhoff, E. Pasch, G. Fuchert, K. J. Brunner, K. Rahbarnia, et al. "Accelerated Bayesian inference of plasma profiles with self-consistent MHD equilibria at W7-X via neural networks." Journal of Instrumentation 18, no. 11 (November 1, 2023): P11012. http://dx.doi.org/10.1088/1748-0221/18/11/p11012.

Abstract:
High-β operations require a fast and robust inference of plasma parameters with a self-consistent magnetohydrodynamic (MHD) equilibrium. Precalculated MHD equilibria are usually employed at Wendelstein 7-X (W7-X) due to the high computational cost. To address this, we couple a physics-regularized artificial neural network (NN) model that approximates the ideal-MHD equilibrium with the Bayesian modeling framework Minerva. We show the fast and robust inference of plasma profiles (electron temperature and density) with a self-consistent MHD equilibrium approximated by the NN model. We investigate the robustness of the inference across diverse synthetic W7-X plasma scenarios. The inferred plasma parameters and their uncertainties are compatible with the parameters inferred using the variational moments equilibrium code (VMEC), and the inference time is reduced by more than two orders of magnitude. This work suggests that MHD self-consistent inferences of plasma parameters can be performed between shots.
5

Becker, McCoy R., Alexander K. Lew, Xiaoyan Wang, Matin Ghavami, Mathieu Huot, Martin C. Rinard, and Vikash K. Mansinghka. "Probabilistic Programming with Programmable Variational Inference." Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 2123–47. http://dx.doi.org/10.1145/3656463.

Abstract:
Compared to the wide array of advanced Monte Carlo methods supported by modern probabilistic programming languages (PPLs), PPL support for variational inference (VI) is less developed: users are typically limited to a predefined selection of variational objectives and gradient estimators, which are implemented monolithically (and without formal correctness arguments) in PPL backends. In this paper, we propose a more modular approach to supporting variational inference in PPLs, based on compositional program transformation. In our approach, variational objectives are expressed as programs that may employ first-class constructs for computing densities of and expected values under user-defined models and variational families. We then transform these programs systematically into unbiased gradient estimators for optimizing the objectives they define. Our design makes it possible to prove unbiasedness by reasoning modularly about many interacting concerns in PPL implementations of variational inference, including automatic differentiation, density accumulation, tracing, and the application of unbiased gradient estimation strategies. Additionally, relative to existing support for VI in PPLs, our design increases expressiveness along three axes: (1) it supports an open-ended set of user-defined variational objectives, rather than a fixed menu of options; (2) it supports a combinatorial space of gradient estimation strategies, many not automated by today’s PPLs; and (3) it supports a broader class of models and variational families, because it supports constructs for approximate marginalization and normalization (previously introduced for Monte Carlo inference). We implement our approach in an extension to the Gen probabilistic programming system (genjax.vi, implemented in JAX), and evaluate our automation on several deep generative modeling tasks, showing minimal performance overhead vs. hand-coded implementations and performance competitive with well-established open-source PPLs.
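Two of the gradient-estimation strategies such a system must compose are the score-function (REINFORCE) and reparameterization estimators. The sketch below, in plain JAX and independent of the genjax.vi implementation, estimates the gradient of E_q[f(z)] with respect to the mean of a diagonal Gaussian q both ways.

```python
import jax
import jax.numpy as jnp

f = lambda z: jnp.sum(z ** 2)  # toy integrand whose expectation we differentiate

def logq(z, mu, log_sig):
    sig = jnp.exp(log_sig)
    return jnp.sum(-0.5 * ((z - mu) / sig) ** 2 - log_sig - 0.5 * jnp.log(2.0 * jnp.pi))

def score_grad(mu, log_sig, key, n=10_000):
    # REINFORCE / score-function estimator: E[f(z) * grad_mu log q(z)]
    z = mu + jnp.exp(log_sig) * jax.random.normal(key, (n, mu.size))
    g = jax.vmap(jax.grad(logq, argnums=1), in_axes=(0, None, None))(z, mu, log_sig)
    return jnp.mean(jax.vmap(f)(z)[:, None] * g, axis=0)

def reparam_grad(mu, log_sig, key, n=10_000):
    # reparameterization estimator: differentiate through z = mu + sigma * eps
    eps = jax.random.normal(key, (n, mu.size))
    objective = lambda m: jnp.mean(jax.vmap(f)(m + jnp.exp(log_sig) * eps))
    return jax.grad(objective)(mu)

mu, log_sig = jnp.array([1.0, -1.0]), jnp.zeros(2)
key = jax.random.PRNGKey(0)
print(score_grad(mu, log_sig, key))    # both estimate d/dmu E[|z|^2] = 2*mu
print(reparam_grad(mu, log_sig, key))
```

Systems like the one described automate the choice and composition of such estimators; here the two are written out by hand for a single Gaussian.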
6

Fourment, Mathieu, and Aaron E. Darling. "Evaluating probabilistic programming and fast variational Bayesian inference in phylogenetics." PeerJ 7 (December 18, 2019): e8272. http://dx.doi.org/10.7717/peerj.8272.

Abstract:
Recent advances in statistical machine learning techniques have led to the creation of probabilistic programming frameworks. These frameworks enable probabilistic models to be rapidly prototyped and fit to data using scalable approximation methods such as variational inference. In this work, we explore the use of the Stan language for probabilistic programming in application to phylogenetic models. We show that many commonly used phylogenetic models including the general time reversible substitution model, rate heterogeneity among sites, and a range of coalescent models can be implemented using a probabilistic programming language. The posterior probability distributions obtained via the black box variational inference engine in Stan were compared to those obtained with reference implementations of Markov chain Monte Carlo (MCMC) for phylogenetic inference. We find that black box variational inference in Stan is less accurate than MCMC methods for phylogenetic models, but requires far less compute time. Finally, we evaluate a custom implementation of mean-field variational inference on the Jukes–Cantor substitution model and show that a specialized implementation of variational inference can be two orders of magnitude faster and more accurate than a general purpose probabilistic implementation.
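As a sketch of the workflow being evaluated, Stan exposes its black-box variational engine (ADVI) and its MCMC sampler through the same model object in cmdstanpy; the Stan file name and data dictionary below are placeholders, not artifacts from the paper.

```python
from cmdstanpy import CmdStanModel

# "jc69.stan" stands in for a hypothetical Stan file implementing a
# phylogenetic likelihood such as the Jukes-Cantor model; data keys
# are placeholders for that model's data block.
model = CmdStanModel(stan_file="jc69.stan")
data = {"N": 4, "L": 100}

vb_fit = model.variational(data=data)   # black-box (ADVI) variational inference
mcmc_fit = model.sample(data=data)      # NUTS MCMC reference run
```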
7

Frank, Philipp, Reimar Leike, and Torsten A. Enßlin. "Geometric Variational Inference." Entropy 23, no. 7 (July 2, 2021): 853. http://dx.doi.org/10.3390/e23070853.

Abstract:
Efficiently accessing the information contained in non-linear and high dimensional probability distributions remains a core challenge in modern statistics. Traditionally, estimators that go beyond point estimates are either categorized as Variational Inference (VI) or Markov-Chain Monte-Carlo (MCMC) techniques. While MCMC methods that utilize the geometric properties of continuous probability distributions to increase their efficiency have been proposed, VI methods rarely use the geometry. This work aims to fill this gap and proposes geometric Variational Inference (geoVI), a method based on Riemannian geometry and the Fisher information metric. It is used to construct a coordinate transformation that relates the Riemannian manifold associated with the metric to Euclidean space. The distribution, expressed in the coordinate system induced by the transformation, takes a particularly simple form that allows for an accurate variational approximation by a normal distribution. Furthermore, the algorithmic structure allows for an efficient implementation of geoVI which is demonstrated on multiple examples, ranging from low-dimensional illustrative ones to non-linear, hierarchical Bayesian inverse problems in thousands of dimensions.
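The Fisher information metric at the heart of the construction is the standard one (stated generically here, not geoVI's full coordinate transformation):

```latex
g_{ij}(\theta) \;=\; \mathbb{E}_{p(x \mid \theta)}\!\left[
\frac{\partial \log p(x \mid \theta)}{\partial \theta_i}\,
\frac{\partial \log p(x \mid \theta)}{\partial \theta_j}
\right],
```

geoVI constructs coordinates in which the manifold induced by this metric looks approximately Euclidean, so that a normal distribution becomes an accurate variational approximation.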
8

Kiselev, Igor. "Variational BEJG Solvers for Marginal-MAP Inference with Accurate Approximation of B-Conditional Entropy." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9957–58. http://dx.doi.org/10.1609/aaai.v33i01.33019957.

Abstract:
Previously proposed variational techniques for approximate MMAP inference in complex graphical models of high-order factors relax a dual variational objective function to obtain its tractable approximation, and further perform MMAP inference in the resulting simplified graphical model, where the sub-graph with decision variables is assumed to be a disconnected forest. In contrast, we developed novel variational MMAP inference algorithms and proximal convergent solvers, where we can improve the approximation accuracy while better preserving the original MMAP query by designing such a dual variational objective function that an upper bound approximation is applied only to the entropy of decision variables. We evaluate the proposed algorithms on both simulated synthetic datasets and diagnostic Bayesian networks taken from the UAI inference challenge, and our solvers outperform other variational algorithms in a majority of reported cases. Additionally, we demonstrate the important real-life application of the proposed variational approaches to solve complex tasks of policy optimization by MMAP inference, and performance of the implemented approximation algorithms is compared. Here, we demonstrate that the original task of optimizing POMDP controllers can be approached by its reformulation as the equivalent problem of marginal-MAP inference in a novel single-DBN generative model, which guarantees that the control policies computed by probabilistic inference over this model are optimal in the traditional sense. Our motivation for approaching the planning problem through probabilistic inference in graphical models is explained by the fact that by transforming a Markovian planning problem into the task of probabilistic inference (a marginal MAP problem) and applying belief propagation techniques in generative models, we can achieve a computational complexity reduction from PSPACE-complete or NEXP-complete to NP^PP-complete in comparison to solving the POMDP and Dec-POMDP models respectively (search vs. dynamic programming).
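For reference, the marginal-MAP query these solvers approximate maximizes over decision variables after summing out the rest:

```latex
x^{*} \;=\; \arg\max_{x} \; \sum_{z} p(x, z),
```

which is generally harder than pure MAP or pure marginalization because the maximization and summation do not commute.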
9

Chi, Jinjin, Zhichao Zhang, Zhiyao Yang, Jihong Ouyang, and Hongbin Pei. "Generalized Variational Inference via Optimal Transport." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11534–42. http://dx.doi.org/10.1609/aaai.v38i10.29035.

Abstract:
Variational Inference (VI) has gained popularity as a flexible approximate inference scheme for computing posterior distributions in Bayesian models. Original VI methods use Kullback-Leibler (KL) divergence to construct variational objectives. However, KL divergence has zero-forcing behavior and is completely agnostic to the metric of the underlying data distribution, resulting in bad approximations. To alleviate this issue, we propose a new variational objective by using Optimal Transport (OT) distance, which is a metric-aware divergence, to measure the difference between approximate posteriors and priors. The superior performance of OT distance enables us to learn more accurate approximations. We further enhance the objective by gradually including the OT term using a hyperparameter λ for over-parameterized models. We develop a Variational inference method with OT (VOT) which presents a gradient-based black-box framework for solving Bayesian models, even when the density function of approximate distribution is not available. We provide the consistency analysis of approximate posteriors and demonstrate the practical effectiveness on Bayesian neural networks and variational autoencoders.
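The zero-forcing behavior mentioned above is a property of the reverse KL divergence that standard VI minimizes:

```latex
\mathrm{KL}(q \,\|\, p) \;=\; \int q(z)\, \log \frac{q(z)}{p(z)}\, dz,
```

which penalizes q heavily wherever q > 0 but p ≈ 0, so the optimizer drives q to zero in low-density regions of the target and ignores how far apart the two distributions' mass sits; metric-aware OT distances avoid both issues.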

Dissertations on the topic "Variational Inference":

1

Rouillard, Louis. "Bridging Simulation-based Inference and Hierarchical Modeling : Applications in Neuroscience." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG024.

Abstract:
Neuroimaging investigates the brain's architecture and function using magnetic resonance imaging (MRI). To make sense of the complex observed signal, neuroscientists posit explanatory models governed by interpretable parameters. This thesis tackles statistical inference: guessing which parameters could have yielded the signal through the model. Inference in neuroimaging is complicated by at least three hurdles: a large dimensionality, a large uncertainty, and the hierarchical structure of the data. We look into variational inference (VI) as an optimization-based method to tackle this regime. Specifically, we combine structured stochastic VI and normalizing flows (NFs) to design expressive yet scalable variational families. We apply those techniques in diffusion and functional MRI, on tasks including individual parcellation, microstructure inference, and directional coupling estimation. Through these applications, we underline the interplay between the forward and reverse Kullback-Leibler (KL) divergences as complementary tools for inference. We also demonstrate the ability of automatic VI (AVI) as a reliable and scalable inference method to tackle the challenges of model-driven neuroscience.
2

Calabrese, Chris. "Distributed inference: combining variational inference with distributed computing." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85407.

Abstract:
The study of inference techniques and their use for solving complicated models has taken off in recent years, but as the models we attempt to solve become more complex, there is a worry that our inference techniques will be unable to produce results. Many problems are difficult to solve using current approaches because it takes too long for our implementations to converge on useful values. While coming up with more efficient inference algorithms may be the answer, we believe that an alternative approach to solving this complicated problem involves leveraging the computation power of multiple processors or machines with existing inference algorithms. This thesis describes the design and implementation of such a system by combining a variational inference implementation (Variational Message Passing) with a high-level distributed framework (Graphlab) and demonstrates that inference is performed faster on a few large graphical models when using this system.
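The locality that makes such a distributed design work can be read off the generic coordinate-ascent mean-field update that Variational Message Passing implements (stated generically, not the thesis's GraphLab-specific scheme):

```latex
\log q_j^{*}(z_j) \;=\; \mathbb{E}_{q_{-j}}\!\big[\log p(x, z)\big] \;+\; \text{const},
```

each variable's update depends only on expectations over its Markov blanket, so updates for well-separated parts of the graph can run on different processors.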
3

Lawrence, Neil David. "Variational inference in probabilistic models." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621104.

4

Beal, Matthew James. "Variational algorithms for approximate Bayesian inference." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404387.

5

Wang, Pengyu. "Collapsed variational inference for computational linguistics." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:13c08f60-1441-4ea5-b52f-7ffd0d7a744f.

Abstract:
Bayesian modelling is a natural fit for tasks in computational linguistics, since it can provide interpretable structures, useful prior controls, and coherent management of uncertainty. However, exact Bayesian inference is intractable for many models of practical interest. Developing both accurate and efficient approximate Bayesian inference algorithms remains a fundamental challenge, especially for the field of computational linguistics where datasets are large and growing and models consist of complex latent structures. Collapsed variational inference (CVI) is an important milestone that combines the efficiency of variational inference (VI) and the accuracy of Markov chain Monte Carlo (MCMC) (Teh et al., 2006). However, its previous applications were limited to bag-of-words models whose hidden variables are conditionally independent given the parameters, whereas in computational linguistics, the hidden variable dependencies are crucial for modelling the underlying syntactic and semantic relations. To enlarge the application domain of CVI as well as to address the above Bayesian inference challenge, we investigate the applications of collapsed variational inference to computational linguistics. In this thesis, our contributions are three-fold. First, we solve a number of inference challenges arising from the hidden variable dependencies and derive a set of new CVI algorithms for the two ubiquitous and foundational models in computational linguistics, namely hidden Markov models (HMMs) and probabilistic context free grammars. We also propose CVI for hierarchical Dirichlet process (HDP) HMMs that are Bayesian nonparametric extensions of HMMs. Second, along the way we propose a set of novel algorithmic techniques, which are generally applicable to a wide variety of probabilistic graphical models in the conjugate exponential family and computational linguistic models using non-conjugate HDP constructions. Therefore, our work represents one step in bridging the gap between increasingly richer Bayesian models in computational linguistics and recent advances in approximate Bayesian inference. Third, we empirically evaluate our proposed CVI algorithms and their stochastic versions in a range of computational linguistic tasks, such as part-of-speech induction, grammar induction and many others. Experimental results consistently demonstrate that, using our techniques for handling the hidden variable dependencies, the empirical advantages of both VI and MCMC can be combined in a much larger domain of CVI applications.
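Schematically, collapsed variational inference integrates out the parameters θ exactly before the mean-field restriction is imposed (generic form; the thesis's contribution is handling the hidden-variable dependencies this leaves behind):

```latex
\log p(x) \;\ge\; \mathbb{E}_{q(z)}\!\left[\log \int p(x, z, \theta)\, d\theta \right] \;+\; \mathbb{H}\big[q(z)\big],
```

a bound at least as tight as the uncollapsed one (Teh et al., 2006), at the cost of coupling the hidden variables z, which are no longer conditionally independent.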
6

Mamikonyan, Arsen. "Variational inference for non-stationary distributions." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113125.

Abstract:
In this thesis, I look at multiple Variational Inference algorithms, transform Kalman Variational Bayes and Stochastic Variational Inference into streaming algorithms, and try to identify whether any of them work with non-stationary distributions. I conclude that Kalman Variational Bayes does as well as any other algorithm for stationary distributions, and tracks non-stationary distributions better than any other algorithm in question.
7

Thorpe, Matthew. "Variational methods for geometric statistical inference." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/74241/.

Abstract:
Estimating multiple geometric shapes such as tracks or surfaces creates significant mathematical challenges, particularly in the presence of unknown data association. In particular, problems of this type have two major challenges. The first is that, typically, the object of interest is infinite dimensional whilst the data is finite dimensional. As a result the inverse problem is ill-posed without regularization. The second is that the data association makes the likelihood function highly oscillatory. The focus of this thesis is on techniques to validate approaches to estimating problems in geometric statistical inference. We use convergence of the large data limit as an indicator of robustness of the methodology. One particular advantage of our approach is that we can prove convergence under modest conditions on the data generating process. This allows one to apply the theory where very little is known about the data. This indicates a robustness in applications to real world problems. The results of this thesis therefore concern the asymptotics for a selection of statistical inference problems. We construct our estimates as the minimizer of an appropriate functional and look at what happens in the large data limit. In each case we will show our estimates converge to a minimizer of a limiting functional. In certain cases we also add rates of convergence. The emphasis is on problems which contain a data association or classification component. More precisely we study a generalized version of the k-means method which is suitable for estimating multiple trajectories from unlabeled data which combines data association with spline smoothing. Another problem considered is a graphical approach to estimating the labeling of data points. Our approach uses minimizers of the Ginzburg-Landau functional on a suitably defined graph. In order to study these problems we use variational techniques and in particular Γ-convergence. This is the natural framework to use for studying sequences of minimization problems. A key advantage of this approach is that it allows us to deal with infinite dimensional and highly oscillatory functionals.
8

Challis, E. A. L. "Variational approximate inference in latent linear models." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1414228/.

Abstract:
Latent linear models are core to much of machine learning and statistics. Specific examples of this model class include Bayesian generalised linear models, Gaussian process regression models and unsupervised latent linear models such as factor analysis and principal components analysis. In general, exact inference in this model class is computationally and analytically intractable. Approximations are thus required. In this thesis we consider deterministic approximate inference methods based on minimising the Kullback-Leibler (KL) divergence between a given target density and an approximating `variational' density. First we consider Gaussian KL (G-KL) approximate inference methods where the approximating variational density is a multivariate Gaussian. Regarding this procedure we make a number of novel contributions: sufficient conditions for which the G-KL objective is differentiable and convex are described, constrained parameterisations of Gaussian covariance that make G-KL methods fast and scalable are presented, the G-KL lower-bound to the target density's normalisation constant is proven to dominate those provided by local variational bounding methods. We also discuss complexity and model applicability issues of G-KL and other Gaussian approximate inference methods. To numerically validate our approach we present results comparing the performance of G-KL and other deterministic Gaussian approximate inference methods across a range of latent linear model inference problems. Second we present a new method to perform KL variational inference for a broad class of approximating variational densities. Specifically, we construct the variational density as an affine transformation of independently distributed latent random variables. The method we develop extends the known class of tractable variational approximations for which the KL divergence can be computed and optimised and enables more accurate approximations of non-Gaussian target densities to be obtained.
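The G-KL bound discussed here has a compact generic form: for an unnormalized target p̃(w) with normalizer Z and a Gaussian q = N(m, S),

```latex
\log Z \;\ge\; \mathbb{E}_{\mathcal{N}(w;\, m, S)}\!\big[\log \tilde{p}(w)\big]
\;+\; \tfrac{1}{2}\log\det\big(2\pi e\, S\big),
```

where the second term is the Gaussian entropy; the thesis's conditions for differentiability and convexity, and its constrained parameterizations of S, all concern this objective.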
9

Matthews, Alexander Graeme de Garis. "Scalable Gaussian process inference using variational methods." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/278022.

Abstract:
Gaussian processes can be used as priors on functions. The need for a flexible, principled, probabilistic model of functional relations is common in practice. Consequently, such an approach is demonstrably useful in a large variety of applications. Two challenges of Gaussian process modelling are often encountered. These are dealing with the adverse scaling with the number of data points and the lack of closed form posteriors when the likelihood is non-Gaussian. In this thesis, we study variational inference as a framework for meeting these challenges. An introductory chapter motivates the use of stochastic processes as priors, with a particular focus on Gaussian process modelling. A section on variational inference reviews the general definition of Kullback-Leibler divergence. The concept of prior conditional matching that is used throughout the thesis is contrasted to classical approaches to obtaining tractable variational approximating families. Various theoretical issues arising from the application of variational inference to the infinite dimensional Gaussian process setting are settled decisively. From this theory we are able to give a new argument for existing approaches to variational regression that settles debate about their applicability. This view on these methods justifies the principled extensions found in the rest of the work. The case of scalable Gaussian process classification is studied, both for its own merits and as a case study for non-Gaussian likelihoods in general. Using the resulting algorithms we find credible results on datasets of a scale and complexity that was not possible before our work. An extension to include Bayesian priors on model hyperparameters is studied alongside a new inference method that combines the benefits of variational sparsity and MCMC methods. The utility of such an approach is shown on a variety of example modelling tasks. We describe GPflow, a new Gaussian process software library that uses TensorFlow. Implementations of the variational algorithms discussed in the rest of the thesis are included as part of the software. We discuss the benefits of GPflow when compared to other similar software. Increased computational speed is demonstrated in relevant, timed, experimental comparisons.
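The variational bound that underlies the scalable classification results, in its generic inducing-variable form with u the inducing outputs, is

```latex
\mathcal{L} \;=\; \sum_{i=1}^{N} \mathbb{E}_{q(f_i)}\!\big[\log p(y_i \mid f_i)\big]
\;-\; \mathrm{KL}\big(q(u) \,\|\, p(u)\big),
```

the sum over data points admits minibatching, and the likelihood terms need not be Gaussian, which is what allows non-Gaussian likelihoods and large datasets to be handled in a single framework.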
10

Maestrini, Luca. "On variational approximations for frequentist and bayesian inference." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3424936.

Abstract:
Variational approximations are approximate inference techniques for complex statistical models providing fast, deterministic alternatives to conventional methods that, however accurate, take much longer to run. We extend recent work concerning variational approximations, developing and assessing some variational tools for likelihood-based and Bayesian inference. In particular, the first part of this thesis employs a Gaussian variational approximation strategy to handle frequentist generalized linear mixed models with general design random effects matrices such as those including spline basis functions. This method involves approximation to the distributions of random effects vectors, conditional on the responses, via a Gaussian density. The second thread is concerned with a particular class of variational approximations, known as mean field variational Bayes, which is based upon a nonparametric product density restriction on the approximating density. Algorithms for inference and fitting for models with elaborate responses and structures are developed adopting the variational message passing perspective. The modularity of variational message passing is such that extensions to models with more involved likelihood structures and scalability to big datasets are relatively simple. We also derive algorithms for models containing higher level random effects and non-normal responses, which are streamlined in support of computational efficiency. Numerical studies and illustrations are provided, including comparisons with a Markov chain Monte Carlo benchmark.

Books on the topic "Variational Inference":

1

Quah, Danny. Exploiting cross section variation for unit root inference in dynamic data. London: London School of Economics, Financial Markets Group, 1994.

2

Quah, Danny. Exploiting cross section variation for unit root inference in dynamic data. Stockholm: Stockholm University, Institute for International Economic Studies, 1993.

3

Bartholomew, David J. Statistics without Mathematics. London, UK: SAGE Publications Ltd, 2015.

4

United States. National Aeronautics and Space Administration, ed. Compositional variation in Apollo 16 impact-melt breccias and inferences for the geology and bombardment history of the central highlands of the moon. Washington, DC: National Aeronautics and Space Administration, 1994.

5

Wainwright, Martin J., and Michael I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Now Publishers, 2008.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
6

Sekhon, Jasjeet. The Neyman-Rubin Model of Causal Inference and Estimation Via Matching Methods. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0011.

Abstract:
This article presents a detailed discussion of the Neyman-Rubin model of causal inference. Additionally, it describes under what conditions ‘matching’ approaches can lead to valid inferences, and what kinds of compromises sometimes have to be made with respect to generalizability to ensure valid causal inferences. Moreover, the article summarizes Mill's first three canons and shows the importance of taking chance into account and comparing conditional probabilities when chance variations cannot be ignored. The significance of searching for causal mechanisms is often overestimated by political scientists and this sometimes leads to an underestimate of the importance of comparing conditional probabilities. The search for causal mechanisms is probably especially useful when working with observational data. Machine learning algorithms can be applied to the matching problem.
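In the Neyman-Rubin framework the target of inference is a contrast of potential outcomes, most commonly the average treatment effect

```latex
\mathrm{ATE} \;=\; \mathbb{E}\big[\,Y_i(1) - Y_i(0)\,\big],
```

and matching methods attempt to construct treated and control groups similar enough that this contrast is identified under the stated conditioning assumptions.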
7

Bortone, Pietro. Language and Nationality: Social Inferences, Cultural Differences, and Linguistic Misconceptions. Bloomsbury Academic & Professional, 2023.

8

Bortone, Pietro. Language and Nationality: Social Inferences, Cultural Differences, and Linguistic Misconceptions. Bloomsbury Publishing Plc, 2021.

9

Schadt, Eric E. Network Methods for Elucidating the Complexity of Common Human Diseases. Edited by Dennis S. Charney, Eric J. Nestler, Pamela Sklar, and Joseph D. Buxbaum. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190681425.003.0002.

Abstract:
The life sciences are now a significant contributor to the ever expanding digital universe of data, and stand poised to lead in both the generation of big data and the realization of dramatic benefit from it. We can now score variations in DNA across whole genomes; RNA levels and alternative isoforms, metabolite levels, protein levels, and protein state information across the transcriptome, metabolome and proteome; methylation status across the methylome; and construct extensive protein–protein and protein–DNA interaction maps, all in a comprehensive fashion and at the scale of populations of individuals. This chapter describes a number of analytical approaches aimed at inferring causal relationships among variables in very large-scale datasets by leveraging DNA variation as a systematic perturbation source. The causal inference procedures are also demonstrated to enhance the ability to reconstruct truly predictive, probabilistic causal gene networks that reflect the biological processes underlying complex phenotypes like disease.

Book chapters on the topic "Variational Inference":

1

Cohen, Shay. "Variational Inference." In Synthesis Lectures on Human Language Technologies, 131–49. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-031-02161-9_6.

2

Cohen, Shay. "Variational Inference." In Bayesian Analysis in Natural Language Processing, 135–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-031-02170-1_6.

3

Jiang, Di, Chen Zhang, and Yuanfeng Song. "Variational Inference." In Probabilistic Topic Models, 79–93. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2431-8_6.

4

Erwig, Martin, and Karl Smeltzer. "Variational Pictures." In Diagrammatic Representation and Inference, 55–70. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91376-6_9.

5

Drori, Iddo. "Deep Variational Inference." In Handbook of Variational Methods for Nonlinear Geometric Data, 361–76. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-31351-7_12.

6

Egorov, Evgenii, Kirill Neklydov, Ruslan Kostoev, and Evgeny Burnaev. "MaxEntropy Pursuit Variational Inference." In Advances in Neural Networks – ISNN 2019, 409–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22796-8_43.

7

Mohamad, Saad, Abdelhamid Bouchachia, and Moamar Sayed-Mouchaweh. "Asynchronous Stochastic Variational Inference." In Proceedings of the International Neural Networks Society, 296–308. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16841-4_31.

8

Longford, Nicholas T. "Inference about variation." In Models for Uncertainty in Educational Testing, 1–15. New York, NY: Springer New York, 1995. http://dx.doi.org/10.1007/978-1-4613-8463-2_1.

9

Ayabe, Hiroaki, Emmanuel Manalo, Mari Fukuda, and Norihiro Sadato. "What Diagrams Are Considered Useful for Solving Mathematical Word Problems in Japan?" In Diagrammatic Representation and Inference, 79–83. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_8.

Abstract:
Previous studies have shown that diagram use is effective in mathematical word problem solving. However, they have also revealed that students manifest many problems in using diagrams for such purposes. A possible reason is an inadequacy in students’ understanding of variations in types of problems and the corresponding kinds of diagrams appropriate to use. In the present study, a preliminary investigation was undertaken of how such correspondences between problem types and kinds of diagrams are represented in textbooks. One government-approved textbook series for elementary school level in Japan was examined for the types of mathematical word problems, and the kinds of diagrams presented with those problems. The analyses revealed significant differences in association between kinds of diagrams and types of problems. More concrete diagrams were included with problems involving change, combination, variation, and visualization of quantities; while number lines were more often used with comparison and variation problems. Tables and graphs corresponded to problems requiring organization of quantities; and more concrete diagrams and graphs to problems involving quantity visualization. These findings are considered in relation to the crucial role of textbooks and other teaching materials in facilitating strategy knowledge acquisition in students.
10

McGrory, Clare A. "Variational Bayesian Inference for Mixture Models." In Case Studies in Bayesian Statistical Modelling and Analysis, 388–402. Chichester, UK: John Wiley & Sons, Ltd, 2012. http://dx.doi.org/10.1002/9781118394472.ch23.


Conference papers on the topic "Variational Inference":

1

Gianniotis, Nikolaos. "Mixed Variational Inference." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852348.

2

Bouchard, Guillaume, and Onno Zoeter. "Split variational inference." In Proceedings of the 26th Annual International Conference on Machine Learning (ICML). New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553382.

3

Chen, Yuqiao, Yibo Yang, Sriraam Natarajan, and Nicholas Ruozzi. "Lifted Hybrid Variational Inference." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/585.

Abstract:
Lifted inference algorithms exploit model symmetry to reduce computational cost in probabilistic inference. However, most existing lifted inference algorithms operate only over discrete domains or continuous domains with restricted potential functions. We investigate two approximate lifted variational approaches that apply to domains with general hybrid potentials, and are expressive enough to capture multi-modality. We demonstrate that the proposed variational methods are highly scalable and can exploit approximate model symmetries even in the presence of a large amount of continuous evidence, outperforming existing message-passing-based approaches in a variety of settings. Additionally, we present a sufficient condition for the Bethe variational approximation to yield a non-trivial estimate over the marginal polytope.
4

Hu, Pingbo, and Yang Weng. "Accelerated Stochastic Variational Inference." In 2019 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom). IEEE, 2019. http://dx.doi.org/10.1109/ispa-bdcloud-sustaincom-socialcom48970.2019.00183.

5

Xu, Xiaopeng, Chuancai Liu, and Xiaochun Zhang. "Laplacian Black Box Variational Inference." In the International Conference. New York, New York, USA: ACM Press, 2017. http://dx.doi.org/10.1145/3175684.3175700.

6

Chantas, Giannis, Nikolaos Galatsanos, Rafael Molina, and Aggelos Katsaggelos. "Variational Bayesian inference image restoration using a product of total variation-like image priors." In 2010 2nd International Workshop on Cognitive Information Processing (CIP). IEEE, 2010. http://dx.doi.org/10.1109/cip.2010.5604259.

7

Dresdner, Gideon, Saurav Shekhar, Fabian Pedregosa, Francesco Locatello, and Gunnar Rätsch. "Boosting Variational Inference With Locally Adaptive Step-Sizes." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/322.

Abstract:
Variational Inference makes a trade-off between the capacity of the variational family and the tractability of finding an approximate posterior distribution. Instead, Boosting Variational Inference allows practitioners to obtain increasingly good posterior approximations by spending more compute. The main obstacle to widespread adoption of Boosting Variational Inference is the amount of resources necessary to improve over a strong Variational Inference baseline. In our work, we trace this limitation back to the global curvature of the KL-divergence. We characterize how the global curvature impacts time and memory consumption, address the problem with the notion of local curvature, and provide a novel approximate backtracking algorithm for estimating local curvature. We give new theoretical convergence rates for our algorithms and provide experimental validation on synthetic and real-world datasets.
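Schematically, boosting VI grows a mixture approximation one component at a time with a Frank-Wolfe-style update (generic form; choosing the step size γ_t is where the paper's local-curvature analysis enters):

```latex
s_t \;=\; \arg\min_{s \in \mathcal{Q}} \big\langle \nabla_{q}\, \mathrm{KL}(q_t \,\|\, p),\; s \big\rangle,
\qquad
q_{t+1} \;=\; (1 - \gamma_t)\, q_t \;+\; \gamma_t\, s_t.
```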
8

Xiu, Zidi, Chenyang Tao, and Ricardo Henao. "Variational learning of individual survival distributions." In ACM CHIL '20: ACM Conference on Health, Inference, and Learning. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3368555.3384454.

9

Aziz, Wilker, and Philip Schulz. "Variational Inference and Deep Generative Models." In Proceedings of ACL 2018, Tutorial Abstracts. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/p18-5003.

10

Plotz, Tobias, Anne S. Wannenwetsch, and Stefan Roth. "Stochastic Variational Inference with Gradient Linearization." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00169.


Organizational reports on the topic "Variational Inference":

1

Chertkov, Michael, Sungsoo Ahn, and Jinwoo Shin. Gauging Variational Inference. Office of Scientific and Technical Information (OSTI), May 2017. http://dx.doi.org/10.2172/1360686.

2

Teh, Yee W., David Newman, and Max Welling. A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation. Fort Belvoir, VA: Defense Technical Information Center, September 2007. http://dx.doi.org/10.21236/ada629956.

3

Walker, David. Developing variational Bayesian inference for applications to gene expression data. Ames (Iowa): Iowa State University, January 2021. http://dx.doi.org/10.31274/cc-20240624-535.

4

Roberson, Madeleine, Kathleen Inman, Ashley Carey, Isaac Howard, and Jameson Shannon. Probabilistic neural networks that predict compressive strength of high strength concrete in mass placements using thermal history. Engineer Research and Development Center (U.S.), June 2022. http://dx.doi.org/10.21079/11681/44483.

Abstract:
This study explored the use of artificial neural networks to predict UHPC compressive strengths given thermal history and key mix components. The model developed herein employs Bayesian variational inference using Monte Carlo dropout to convey prediction uncertainty, using 735 datapoints on seven UHPC mixtures collected using a variety of techniques. Datapoints contained a measured compressive strength along with three curing inputs (specimen maturity, maximum temperature experienced during curing, time of maximum temperature) and five mixture inputs to distinguish each UHPC mixture (cement type, silicon dioxide content, mix type, water to cementitious material ratio, and admixture dosage rate). Input analysis concluded that predictions were more sensitive to curing inputs than mixture inputs. On average, 8.2% of experimental results in the final model fell outside of the predicted range, with 67.9% of these cases conservatively underpredicting. The results support that this model methodology is able to make sufficient probabilistic predictions within the scope of the provided dataset but is not suited to extrapolating beyond the training data. In addition, the model was vetted using various datasets obtained from literature to assess its versatility. Overall, this model is a promising advancement towards predicting mechanical properties of high strength concrete with known uncertainties.
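A numpy-only sketch of the Monte Carlo dropout mechanism the abstract relies on (a toy two-layer network with made-up weights, not the study's model): keeping dropout active at prediction time and averaging repeated forward passes yields a predictive mean and an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)) * 0.3, np.zeros(16)  # toy "trained" weights
W2, b2 = rng.normal(size=(16, 1)) * 0.3, np.zeros(1)

def forward(x, p_drop=0.1):
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop      # dropout stays ON at prediction time
    h = h * mask / (1.0 - p_drop)
    return h @ W2 + b2

x = rng.normal(size=(1, 8))                   # one input vector (e.g., curing history)
samples = np.stack([forward(x) for _ in range(200)])
print("predictive mean:", samples.mean(), "uncertainty (std):", samples.std())
```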
5

Lewin, Alex, Karla Diaz-Ordaz, Chris Bonell, James Hargreaves, and Edoardo Masset. Machine learning for impact evaluation in CEDIL-funded studies: an ex ante lesson learning paper. Centre for Excellence and Development Impact and Learning (CEDIL), April 2023. http://dx.doi.org/10.51744/llp3.

Abstract:
The Centre of Excellence for Development Impact and Learning (CEDIL) has recently funded several studies that use machine learning methods to enhance the inferences made from impact evaluations. These studies focus on assessing the impact of complex development interventions, which can be expected to have impacts in different domains, possibly over an extended period of time. These studies therefore involve study participants being followed up at multiple time-points after the intervention, and typically collect large numbers of variables at each follow-up. The hope is that machine learning approaches can uncover new insights into the variation in responses to interventions, and possible causal mechanisms, which in turn can highlight potential areas that policy and planning can focus on. This paper describes these studies using machine learning methods, gives an overview of the common aims and methodological approaches used in impact evaluations, and highlights some lessons and important caveats.
6

Sadowski, Dieter. Board-Level Codetermination in Germany - The Importance and Economic Impact of Fiduciary Duties. Association Inter-University Centre Dubrovnik, May 2021. http://dx.doi.org/10.53099/ntkd4304.

Abstract:
The empirical accounts of the costs and benefits of quasi-parity codetermined supervisory boards, a very special German institution, have long been inconclusive. A valid economic analysis of a particular legal regulation must take the legal specificities seriously, otherwise it will be easily lost in economic fictions of functional equivalence. At its core the corporate actor “supervisory board” has no a priori objective function to be maximised – the corner stone of the theory of the firm – but its objective function will only be brought about a posteriori – should negotiations result in an agreement (E. Fraenkel). With this understanding, the paper presents six recent quasi-experimental studies on the economic (dis)advantageousness of the German codetermination laws that try to follow the rules of causal inference despite the lack of random variation. By and large they refute the hold-up model of codetermination by showing positive or nonnegative effects even on shareholder wealth – and a far-reaching improvement of the well-being of the core workforce. In conclusion, indications are offered that the shareholder primacy movement has only weakened, but not dissolved the “Deutschland AG”.
7

Chen, Z., S. E. Grasby, C. Deblonde, and X. Liu. AI-enabled remote sensing data interpretation for geothermal resource evaluation as applied to the Mount Meager geothermal prospective area. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330008.

Abstract:
The objective of this study is to search for features and indicators from the identified geothermal resource sweet spot in the south Mount Meager area that are applicable to other volcanic complexes in the Garibaldi Volcanic Belt. A Landsat 8 multi-spectral band dataset, comprising a total of 57 images ranging from visible through infrared to thermal infrared frequency channels and covering different years and seasons, was selected. Specific features that are indicative of high geothermal heat flux, fractured permeable zones, and groundwater circulation, the three key elements in exploring for geothermal resource, were extracted. The thermal infrared images from different seasons show occurrence of high temperature anomalies and their association with volcanic and intrusive bodies, and reveal the variation in location and intensity of the anomalies with time over four seasons, allowing inference of specific heat transfer mechanisms. Linear features automatically extracted from various frequency bands using AI/ML algorithms developed for computer vision show various linear segment groups that are likely surface expressions associated with local volcanic activity, regional deformation, and slope failure. In conjunction with regional structural models and field observations, the anomalies and features from remotely sensed images were interpreted to provide new insights for improving our understanding of the Mount Meager geothermal system and its characteristics. After validation, the methods developed and indicators identified in this study can be applied to other volcanic complexes in the Garibaldi or other volcanic belts for geothermal resource reconnaissance.
8

DeJaeghere, Joan, Bich-Hang Duong, and Vu Dao. Teaching Practices That Support and Promote Learning: Qualitative Evidence from High and Low Performing Classes in Vietnam. Research on Improving Systems of Education (RISE), January 2021. http://dx.doi.org/10.35489/bsg-rise-ri_2021/024.

Abstract:
This Insight Note contributes to the growing body of knowledge on teaching practices that foster student learning and achievement by analysing in-depth qualitative data from classroom observations and teacher interviews. Much of the research on teachers and teaching in development literature focuses on observable and quantified factors, including qualifications and training. But simply being qualified (with a university degree in education or subject areas), or trained in certain ways (e.g., coaching versus in-service) explains very little of the variation in learning outcomes (Kane and Staiger, 2008; Wößmann, 2003; Das and Bau, 2020). Teaching is a complex set of practices that draw on teachers’ beliefs about learning, their prior experiences, their content and pedagogical knowledge and repertoire, and their commitment and personality. Recent research in the educational development literature has turned to examining teaching practices, including content knowledge, pedagogical practices, and teacher-student interactions, primarily through quantitative data from knowledge tests and classroom observations of practices (see Bruns, De Gregorio and Taut, 2016; Filmer, Molina and Wane, 2020; Glewwe et al., in progress). Other studies, such as TIMSS, the OECD and a few World Bank studies, have used classroom videos to further explain high-inference factors of teachers’ practices (Gallimore and Hiebert, 2000; Tomáš and Seidel, 2013). In this Note, we ask the question: What are the teaching practices that support and foster high levels of learning? Vietnam is a useful case to examine because student learning outcomes based on international tests are high, and most students pass the basic learning levels (Dang, Glewwe, Lee and Vu, 2020). But considerable variation exists between learning outcomes, particularly at the secondary level, where high achieving students will continue to upper-secondary and lower achieving students will drop out at Grade 9 (Dang and Glewwe, 2018). So what differentiates teaching for those who achieve these high learning outcomes and those who don’t? Some characteristics of teachers, such as qualifications and professional commitment, do not vary greatly because most Vietnamese teachers meet the national standards in terms of qualifications (have a college degree) and have a high level of professionalism (Glewwe et al., in progress). Other factors that influence teaching, such as using lesson plans and teaching the national curriculum, are also highly regulated. Therefore, to explain how teaching might affect student learning outcomes, it is important to examine more closely teachers’ practices in the classroom.
