Journal articles on the topic 'Computational inference method'


Consult the top 50 journal articles for your research on the topic 'Computational inference method.'


1

Jha, Kunal, Tuan Anh Le, Chuanyang Jin, Yen-Ling Kuo, Joshua B. Tenenbaum, and Tianmin Shu. "Neural Amortized Inference for Nested Multi-Agent Reasoning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 530–37. http://dx.doi.org/10.1609/aaai.v38i1.27808.

Abstract:
Multi-agent interactions, such as communication, teaching, and bluffing, often rely on higher-order social inference, i.e., understanding how others infer oneself. Such intricate reasoning can be effectively modeled through nested multi-agent reasoning. Nonetheless, the computational complexity escalates exponentially with each level of reasoning, posing a significant challenge. However, humans effortlessly perform complex social inferences as part of their daily lives. To bridge the gap between human-like inference capabilities and computational limitations, we propose a novel approach: leveraging neural networks to amortize high-order social inference, thereby expediting nested multi-agent reasoning. We evaluate our method in two challenging multi-agent interaction domains. The experimental results demonstrate that our method is computationally efficient while exhibiting minimal degradation in accuracy.
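The amortization idea above — train a regressor offline on simulated (latent, observation) pairs so that, at run time, posterior inference is a single cheap forward pass — can be sketched in a minimal form. A linear fit stands in for the paper's neural networks, and the toy conjugate-Gaussian model is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: latent theta ~ N(0, 1), observation x ~ N(theta, 0.5^2).
n_sims = 20_000
theta = rng.normal(0.0, 1.0, n_sims)
x = rng.normal(theta, 0.5)

# Amortization: fit a cheap regressor from x to theta once, up front
# (a linear least-squares fit here; the paper uses neural networks).
A = np.column_stack([x, np.ones(n_sims)])
w, b = np.linalg.lstsq(A, theta, rcond=None)[0]

def amortized_posterior_mean(x_obs):
    """Approximate posterior mean for a new observation at negligible cost."""
    return w * x_obs + b

# For this conjugate model the exact posterior mean is 0.8 * x_obs,
# so the learned map should recover roughly that slope.
approx = amortized_posterior_mean(1.0)
```

Once fitted, the amortized map answers every new inference query without re-running any sampler — the cost of inference is paid once, during training.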
2

Martina Perez, Simon, Heba Sailem, and Ruth E. Baker. "Efficient Bayesian inference for mechanistic modelling with high-throughput data." PLOS Computational Biology 18, no. 6 (June 21, 2022): e1010191. http://dx.doi.org/10.1371/journal.pcbi.1010191.

Abstract:
Bayesian methods are routinely used to combine experimental data with detailed mathematical models to obtain insights into physical phenomena. However, the computational cost of Bayesian computation with detailed models has been a notorious problem. Moreover, while high-throughput data presents opportunities to calibrate sophisticated models, comparing large amounts of data with model simulations quickly becomes computationally prohibitive. Inspired by the method of Stochastic Gradient Descent, we propose a minibatch approach to approximate Bayesian computation. Through a case study of a high-throughput imaging scratch assay experiment, we show that reliable inference can be performed at a fraction of the computational cost of a traditional Bayesian inference scheme. By applying a detailed mathematical model of single cell motility, proliferation and death to a data set of 118 gene knockdowns, we characterise functional subgroups of gene knockdowns, each displaying its own typical combination of local cell density-dependent and -independent motility and proliferation patterns. By comparing these patterns to experimental measurements of cell counts and wound closure, we find that density-dependent interactions play a crucial role in the process of wound healing.
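A minimal sketch of the minibatch idea — comparing each simulation against a random minibatch of the observations rather than the full data set — under an illustrative toy model (a normal mean with a uniform prior, not the paper's cell-motility model):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" high-throughput data: 100,000 samples from N(2, 1).
data = rng.normal(2.0, 1.0, 100_000)

def minibatch_abc(data, n_draws=5000, batch=200, eps=0.05):
    """ABC rejection sampling where each accept/reject comparison uses a
    random minibatch of the observations instead of the full data set."""
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-5.0, 5.0)             # draw from the prior
        sim = rng.normal(mu, 1.0, batch)        # simulate a small batch
        obs = rng.choice(data, size=batch)      # minibatch of real data
        if abs(sim.mean() - obs.mean()) < eps:  # summary-statistic distance
            accepted.append(mu)
    return np.array(accepted)

post = minibatch_abc(data)  # approximate posterior samples for mu
```

Each iteration touches only `batch` observations, so the cost per draw is independent of the data-set size — the property that makes the approach attractive for high-throughput data.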
3

Zhang, Chendong, and Ting Chen. "Bayesian slip inversion with automatic differentiation variational inference." Geophysical Journal International 229, no. 1 (October 29, 2021): 546–65. http://dx.doi.org/10.1093/gji/ggab438.

Abstract:
The Bayesian slip inversion offers a powerful tool for modelling the earthquake source mechanism. It can provide a fully probabilistic result and thus permits us to quantitatively assess the inversion uncertainty. The Bayesian problem is usually solved with Monte Carlo methods, but they are computationally expensive and are inapplicable for high-dimensional and large-scale problems. Variational inference is an alternative solver to the Bayesian problem. It turns Bayesian inference into an optimization task and thus enjoys better computational performances. In this study, we introduce a general variational inference algorithm, automatic differentiation variational inference (ADVI), to the Bayesian slip inversion and compare it with the classic Metropolis–Hastings (MH) sampling method. The synthetic test shows that the two methods generate nearly identical mean slip distributions and standard deviation maps. In the real case study, the two methods produce highly consistent mean slip distributions, but the ADVI-derived standard deviation map differs from that produced by the MH method, possibly because of the limitation of the Gaussian approximation in the ADVI method. In both cases, ADVI can give comparable results to the MH method but with a significantly lower computational cost. Our results show that ADVI is a promising and competitive method for the Bayesian slip inversion.
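The core mechanic behind ADVI — fit a Gaussian q(θ) by stochastic gradient ascent on the ELBO via the reparameterisation trick — can be sketched in one dimension. The target density and all settings below are illustrative, not the paper's slip model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in target posterior: N(1.0, 0.5^2). Only its score function is needed.
mu0, s0 = 1.0, 0.5
grad_logp = lambda th: -(th - mu0) / s0**2

# Mean-field Gaussian q(theta) = N(m, exp(ls)^2), optimised by stochastic
# gradient ascent on the ELBO using reparameterised samples theta = m + s*eps.
m, ls = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    eps = rng.normal(size=32)
    s = np.exp(ls)
    th = m + s * eps
    g = grad_logp(th)
    m += lr * g.mean()                       # d ELBO / d m
    ls += lr * ((g * eps).mean() * s + 1.0)  # d ELBO / d log s (+1 from entropy)
```

Because the update is an ordinary stochastic-gradient step, it scales like optimization rather than like sampling — the source of the computational advantage the abstract reports over Metropolis–Hastings.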
4

Koblents, Eugenia, Inés P. Mariño, and Joaquín Míguez. "Bayesian Computation Methods for Inference in Stochastic Kinetic Models." Complexity 2019 (January 20, 2019): 1–15. http://dx.doi.org/10.1155/2019/7160934.

Abstract:
In this paper we investigate Monte Carlo methods for the approximation of the posterior probability distributions in stochastic kinetic models (SKMs). SKMs are multivariate Markov jump processes that model the interactions among species in biological systems according to a set of usually unknown parameters. The tracking of the species populations together with the estimation of the interaction parameters is a Bayesian inference problem for which Markov chain Monte Carlo (MCMC) methods have been a typical computational tool. Specifically, the particle MCMC (pMCMC) method has been shown to be an effective, yet computationally demanding, method for this problem. Recently, it has been shown that an alternative approach to Bayesian computation, namely the class of adaptive importance samplers, may be more efficient than classical MCMC-like schemes, at least for certain applications. For example, the nonlinear population Monte Carlo (NPMC) algorithm has yielded promising results with a low-dimensional SKM (the classical predator–prey model). In this paper we explore the application of both pMCMC and NPMC to analyze complex autoregulatory feedback networks modelled by SKMs. We demonstrate numerically how the populations of the relevant species in the network can be tracked and their interaction rates estimated, even in scenarios with partial observations. NPMC schemes attain an appealing trade-off between accuracy and computational cost that can make them advantageous in many practical applications.
5

Beaumont, Mark A., Wenyang Zhang, and David J. Balding. "Approximate Bayesian Computation in Population Genetics." Genetics 162, no. 4 (December 1, 2002): 2025–35. http://dx.doi.org/10.1093/genetics/162.4.2025.

Abstract:
We propose a new method for approximate Bayesian statistical inference on the basis of summary statistics. The method is suited to complex problems that arise in population genetics, extending ideas developed in this setting by earlier authors. Properties of the posterior distribution of a parameter, such as its mean or density curve, are approximated without explicit likelihood calculations. This is achieved by fitting a local-linear regression of simulated parameter values on simulated summary statistics, and then substituting the observed summary statistics into the regression equation. The method combines many of the advantages of Bayesian statistical inference with the computational efficiency of methods based on summary statistics. A key advantage of the method is that the nuisance parameters are automatically integrated out in the simulation step, so that the large numbers of nuisance parameters that arise in population genetics problems can be handled without difficulty. Simulation results indicate computational and statistical efficiency that compares favorably with those of alternative methods previously proposed in the literature. We also compare the relative efficiency of inferences obtained using methods based on summary statistics with those obtained directly from the data using MCMC.
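The regression-adjustment step at the heart of this method is compact enough to sketch. Under an illustrative toy model (a normal mean with a uniform prior, not a population-genetics model): simulate (parameter, summary) pairs from the prior, keep the simulations whose summaries fall closest to the observed one, then apply the local-linear correction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pretend field data: the summary statistic is the mean of 50 draws, true theta = 4.
s_obs = rng.normal(4.0, 1.0, 50).mean()

# Step 1: simulate (theta, summary) pairs from the prior.
theta = rng.uniform(0.0, 10.0, 50_000)
s_sim = rng.normal(theta, 1.0 / np.sqrt(50))   # sampling dist. of the mean

# Step 2: keep the 2% of simulations whose summaries are closest to s_obs.
d = np.abs(s_sim - s_obs)
keep = d < np.quantile(d, 0.02)

# Step 3: local-linear adjustment theta* = theta - b * (s_sim - s_obs), where
# b is the slope of a regression of accepted theta on accepted summaries.
X = np.column_stack([s_sim[keep] - s_obs, np.ones(keep.sum())])
b, a = np.linalg.lstsq(X, theta[keep], rcond=None)[0]
theta_adj = theta[keep] - b * (s_sim[keep] - s_obs)

posterior_mean = theta_adj.mean()  # approximate posterior mean of theta
```

The adjustment projects every accepted draw to where it "would have been" had its summary equalled the observed one, which lets a much looser acceptance window deliver a sharp posterior approximation.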
6

Li, Ziyue, Kan Ren, Yifan Yang, Xinyang Jiang, Yuqing Yang, and Dongsheng Li. "Towards Inference Efficient Deep Ensemble Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8711–19. http://dx.doi.org/10.1609/aaai.v37i7.26048.

Abstract:
Ensemble methods can deliver surprising performance gains but also bring significantly higher computational costs, e.g., up to 2048X in large-scale ensemble tasks. However, we found that the majority of computations in ensemble methods are redundant. For instance, over 77% of samples in the CIFAR-100 dataset can be correctly classified with only a single ResNet-18 model, which indicates that only around 23% of the samples need an ensemble of extra models. To this end, we propose an inference-efficient ensemble learning method that simultaneously optimizes for effectiveness and efficiency in ensemble learning. More specifically, we regard an ensemble of models as a sequential inference process and learn the optimal halting event for inference on a specific sample. At each timestep of the inference process, a common selector judges whether the current ensemble has reached sufficient effectiveness and halts further inference; otherwise, it passes the challenging sample on to subsequent models for a more powerful ensemble. Both the base models and the common selector are jointly optimized to dynamically adjust ensemble inference for samples of varying hardness, through novel optimization goals including sequential ensemble boosting and computation saving. Experiments with different backbones on real-world datasets illustrate that our method can bring up to 56% inference cost reduction while maintaining performance comparable to the full ensemble, achieving significantly better ensemble utility than other baselines. Code and supplemental materials are available at https://seqml.github.io/irene.
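A minimal sketch of the halting mechanic: run ensemble members one at a time and stop as soon as the running prediction is confident enough. A fixed confidence threshold stands in for the paper's learned selector, and the toy "models" are illustrative:

```python
import numpy as np

n_classes = 3

def sequential_ensemble_predict(models, sample, threshold=0.9):
    """Run ensemble members sequentially; halt as soon as the averaged
    prediction is confident enough (a simple threshold stand-in for the
    learned selector described in the paper)."""
    probs = np.zeros(n_classes)
    for k, model in enumerate(models, start=1):
        probs += model(sample)
        avg = probs / k
        if avg.max() >= threshold:           # confident: halt early
            return int(avg.argmax()), k
    return int(avg.argmax()), len(models)    # fell through: full ensemble

def make_model(confidence):
    """Toy base model that predicts the true label with fixed confidence."""
    def model(sample):
        p = np.full(n_classes, (1.0 - confidence) / (n_classes - 1))
        p[sample["label"]] = confidence
        return p
    return model

sample = {"label": 1}

# "Easy" case: a strong first model is already confident, so one model suffices.
strong_first = [make_model(0.95), make_model(0.8), make_model(0.8)]
pred_easy, used_easy = sequential_ensemble_predict(strong_first, sample)

# "Hard" case: a weak first model forces the full ensemble to run.
weak_first = [make_model(0.5), make_model(0.85), make_model(0.85)]
pred_hard, used_hard = sequential_ensemble_predict(weak_first, sample)
```

Easy samples exit after one model while hard samples consume the whole ensemble, which is exactly where the reported inference-cost savings come from.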
7

Li, Benchong, Shoufeng Cai, and Jianhua Guo. "A computational algebraic-geometry method for conditional-independence inference." Frontiers of Mathematics in China 8, no. 3 (March 25, 2013): 567–82. http://dx.doi.org/10.1007/s11464-013-0295-9.

8

Springer, Sebastian, Heikki Haario, Jouni Susiluoto, Aleksandr Bibov, Andrew Davis, and Youssef Marzouk. "Efficient Bayesian inference for large chaotic dynamical systems." Geoscientific Model Development 14, no. 7 (July 9, 2021): 4319–33. http://dx.doi.org/10.5194/gmd-14-4319-2021.

Abstract:
Estimating parameters of chaotic geophysical models is challenging due to their inherent unpredictability. These models cannot be calibrated with standard least squares or filtering methods if observations are temporally sparse. Obvious remedies, such as averaging over temporal and spatial data to characterize the mean behavior, do not capture the subtleties of the underlying dynamics. We perform Bayesian inference of parameters in high-dimensional and computationally demanding chaotic dynamical systems by combining two approaches: (i) measuring model–data mismatch by comparing chaotic attractors and (ii) mitigating the computational cost of inference by using surrogate models. Specifically, we construct a likelihood function suited to chaotic models by evaluating a distribution over distances between points in the phase space; this distribution defines a summary statistic that depends on the geometry of the attractor, rather than on pointwise matching of trajectories. This statistic is computationally expensive to simulate, compounding the usual challenges of Bayesian computation with physical models. Thus, we develop an inexpensive surrogate for the log likelihood with the local approximation Markov chain Monte Carlo method, which in our simulations reduces the time required for accurate inference by orders of magnitude. We investigate the behavior of the resulting algorithm with two smaller-scale problems and then use a quasi-geostrophic model to demonstrate its large-scale application.
9

Zhang, Xinfang, Miao Li, Bomin Wang, and Zexian Li. "A Parameter Correction method of CFD based on the Approximate Bayesian Computation technique." Journal of Physics: Conference Series 2569, no. 1 (August 1, 2023): 012076. http://dx.doi.org/10.1088/1742-6596/2569/1/012076.

Abstract:
Numerical simulation and modeling techniques are becoming the primary research tools for aerodynamic analysis and design. However, various uncertainties in physical modeling and numerical simulation seriously affect the credibility of Computational Fluid Dynamics (CFD) simulation results. Therefore, CFD models need to be adjusted and modified with consideration of uncertainties to improve the prediction accuracy and confidence level of CFD numerical simulations. This paper presents a parameter correction method of CFD for aerodynamic analysis that makes full use of the advantages of the Approximate Bayesian Computation (ABC) technique in the analysis and inference of complex statistical models; here, the parameters of the turbulence models for CFD are inferred. The proposed parameter correction method is applied to the aerodynamic prediction of the NACA0012 airfoil. The results show the feasibility and effectiveness of the proposed approach in improving CFD prediction accuracy.
10

Zhang, Chi, Yilun Wang, Lili Zhang, and Huicheng Zhou. "A fuzzy inference method based on association rule analysis with application to river flood forecasting." Water Science and Technology 66, no. 10 (November 1, 2012): 2090–98. http://dx.doi.org/10.2166/wst.2012.420.

Abstract:
In this paper, a computationally efficient version of the widely used Takagi-Sugeno (T-S) fuzzy reasoning method is proposed, and applied to river flood forecasting. It is well known that the number of fuzzy rules of traditional fuzzy reasoning methods exponentially increases as the number of input parameters increases, often causing prohibitive computational burden. The proposed method greatly reduces the number of fuzzy rules by making use of the association rule analysis on historical data, and therefore achieves computational efficiency for the cases of a large number of input parameters. In the end, we apply this new method to a case study of river flood forecasting, which demonstrates that the proposed fuzzy reasoning engine can achieve better prediction accuracy than the widely used Muskingum–Cunge scheme.
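For reference, the underlying zero-order Takagi-Sugeno inference step — the part whose rule base the association-rule analysis prunes — is a weighted average of rule consequents, with weights given by antecedent memberships. The rules below are illustrative toy rules, not flood-forecasting rules:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ts_infer(x1, x2, rules):
    """Zero-order Takagi-Sugeno inference: output is the weighted average of
    rule consequents, weight = product of the antecedent memberships."""
    num = den = 0.0
    for m1, m2, z in rules:
        w = m1(x1) * m2(x2)   # rule firing strength
        num += w * z
        den += w
    return num / den if den > 0 else 0.0

# Two toy rules: (low, low) -> 1.0 and (high, high) -> 3.0.
low = lambda x: tri(x, -1.0, 0.0, 1.0)
high = lambda x: tri(x, 0.0, 1.0, 2.0)
rules = [(low, low, 1.0), (high, high, 3.0)]

y_mid = ts_infer(0.5, 0.5, rules)  # halfway between the rules -> 2.0
y_low = ts_infer(0.0, 0.0, rules)  # only the "low" rule fires  -> 1.0
```

Because every combination of input fuzzy sets normally yields a rule, the rule count grows exponentially with the number of inputs; mining historical data for the rules that actually fire is what keeps the proposed engine tractable.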
11

Seki, Hirosato, Hiroaki Ishii, and Masaharu Mizumoto. "On the Monotonicity of Fuzzy-Inference Methods Related to T–S Inference Method." IEEE Transactions on Fuzzy Systems 18, no. 3 (June 2010): 629–34. http://dx.doi.org/10.1109/tfuzz.2010.2046668.

12

Chen, Mengzhao, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei Chao, and Rongrong Ji. "CF-ViT: A General Coarse-to-Fine Method for Vision Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7042–52. http://dx.doi.org/10.1609/aaai.v37i6.25860.

Abstract:
Vision Transformers (ViT) have made many breakthroughs in computer vision tasks. However, considerable redundancy arises in the spatial dimension of an input image, leading to massive computational costs. Therefore, we propose a coarse-to-fine vision transformer (CF-ViT) in this paper to relieve the computational burden while retaining performance. Our proposed CF-ViT is motivated by two important observations in modern ViT models: (1) the coarse-grained patch splitting can locate informative regions of an input image, and (2) most images can be well recognized by a ViT model with a small-length token sequence. Therefore, our CF-ViT implements network inference in a two-stage manner. At the coarse inference stage, an input image is split into a small-length patch sequence for a computationally economical classification. If not well recognized, the informative patches are identified and further re-split at a fine-grained granularity. Extensive experiments demonstrate the efficacy of our CF-ViT. For example, without any compromise on performance, CF-ViT reduces the FLOPs of LV-ViT by 53% and also achieves 2.01x throughput. Code of this project is at https://github.com/ChenMnZ/CF-V
13

Dou, Rui, Feilong Ding, Xi Chen, Jian Wang, Deyong Yu, and Yuangui Tang. "Grid-less wideband direction of arrival estimation based on variational Bayesian inference." Journal of the Acoustical Society of America 155, no. 3 (March 1, 2024): 2087–98. http://dx.doi.org/10.1121/10.0025284.

Abstract:
Many recent works have addressed the problem of wideband direction of arrival (DOA) estimation using grid-less sparse techniques, and these methods have been shown to outperform the traditional wideband DOA estimation methods. However, these methods often suffer from the problem of requiring manual parameter tuning or high computational complexity, which reduces their practicality. To alleviate this problem, a grid-less wideband DOA estimation method based on variational Bayesian inference is proposed in this paper. The method approximates the posterior probability density function of DOA with the help of variational Bayesian inference, which does not require manual adjustment of parameters and can obtain accurate DOA estimation results with low computational complexity. Numerical simulations and real measurement data processing show that the proposed method has a higher DOA estimation accuracy than other grid-less wideband methods while providing higher computational speed.
14

Qi, Zhen, and Eberhard O. Voit. "Inference of cancer mechanisms through computational systems analysis." Molecular BioSystems 13, no. 3 (2017): 489–97. http://dx.doi.org/10.1039/c6mb00672h.

15

Yu, Tingting, Lang Wu, and Peter B. Gilbert. "A joint model for mixed and truncated longitudinal data and survival data, with application to HIV vaccine studies." Biostatistics 19, no. 3 (September 23, 2017): 374–90. http://dx.doi.org/10.1093/biostatistics/kxx047.

Abstract:
In HIV vaccine studies, a major research objective is to identify immune response biomarkers measured longitudinally that may be associated with risk of HIV infection. This objective can be assessed via joint modeling of longitudinal and survival data. Joint models for HIV vaccine data are complicated by the following issues: (i) left truncations of some longitudinal data due to lower limits of quantification; (ii) mixed types of longitudinal variables; (iii) measurement errors and missing values in longitudinal measurements; (iv) computational challenges associated with likelihood inference. In this article, we propose a joint model of complex longitudinal and survival data and a computationally efficient method for approximate likelihood inference to address the foregoing issues simultaneously. In particular, our model does not make unverifiable distributional assumptions for truncated values, which is different from methods commonly used in the literature. The parameters are estimated based on the h-likelihood method, which is computationally efficient and offers approximate likelihood inference. Moreover, we propose a new approach to estimate the standard errors of the h-likelihood based parameter estimates by using an adaptive Gauss–Hermite method. Simulation studies show that our methods perform well and are computationally efficient. A comprehensive data analysis is also presented.
16

Ekmekci, Canberk, and Mujdat Cetin. "Model-Based Bayesian Deep Learning Architecture for Linear Inverse Problems in Computational Imaging." Electronic Imaging 2021, no. 15 (January 18, 2021): 201–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.15.coimg-201.

Abstract:
We propose a neural network architecture combined with specific training and inference procedures for linear inverse problems arising in computational imaging to reconstruct the underlying image and to represent the uncertainty about the reconstruction. The proposed architecture is built from the model-based reconstruction perspective, which enforces data consistency and eliminates the artifacts in an alternating manner. The training and the inference procedures are based on performing approximate Bayesian analysis on the weights of the proposed network using a variational inference method. The proposed architecture with the associated inference procedure is capable of characterizing uncertainty while performing reconstruction with a model-based approach. We tested the proposed method on a simulated magnetic resonance imaging experiment. We showed that the proposed method achieved an adequate reconstruction capability and provided reliable uncertainty estimates in the sense that the regions having high uncertainty provided by the proposed method are likely to be the regions where reconstruction errors occur.
17

Singer, Amit, and Fred J. Sigworth. "Computational Methods for Single-Particle Electron Cryomicroscopy." Annual Review of Biomedical Data Science 3, no. 1 (July 20, 2020): 163–90. http://dx.doi.org/10.1146/annurev-biodatasci-021020-093826.

Abstract:
Single-particle electron cryomicroscopy (cryo-EM) is an increasingly popular technique for elucidating the three-dimensional (3D) structure of proteins and other biologically significant complexes at near-atomic resolution. It is an imaging method that does not require crystallization and can capture molecules in their native states. In single-particle cryo-EM, the 3D molecular structure needs to be determined from many noisy 2D tomographic projections of individual molecules, whose orientations and positions are unknown. The high level of noise and the unknown pose parameters are two key elements that make reconstruction a challenging computational problem. Even more challenging is the inference of structural variability and flexible motions when the individual molecules being imaged are in different conformational states. This review discusses computational methods for structure determination by single-particle cryo-EM and their guiding principles from statistical inference, machine learning, and signal processing, which also play a significant role in many other data science applications.
18

Wang, De-Gang, Yan-Ping Meng, and Hong-Xing Li. "A fuzzy similarity inference method for fuzzy reasoning." Computers & Mathematics with Applications 56, no. 10 (November 2008): 2445–54. http://dx.doi.org/10.1016/j.camwa.2008.03.054.

19

Fulk, M., and S. Jain. "Approximate Inference and Scientific Method." Information and Computation 114, no. 2 (November 1994): 179–91. http://dx.doi.org/10.1006/inco.1994.1084.

20

Chen, Xiaoxu, Xiangdong Xu, and Chao Yang. "Trip mode inference from mobile phone signaling data using Logarithm Gaussian Mixture Model." Journal of Transport and Land Use 13, no. 1 (November 12, 2020): 429–45. http://dx.doi.org/10.5198/jtlu.2020.1554.

Abstract:
Trip mode inference plays an important role in transportation planning and management. Most studies in the field have focused on the methods based on GPS data collected from mobile devices. While these methods can achieve relatively high accuracy, they also have drawbacks in data quantity, coverage, and computational complexity. This paper develops a trip mode inference method based on mobile phone signaling data. The method mainly consists of three parts: activity-nodes recognition, travel-time computation, and clustering using the Logarithm Gaussian Mixed Model. Moreover, we compare two other methods (i.e., Gaussian Mixed Model and K-Means) with the Logarithm Gaussian Mixed Model. We conduct experiments using real mobile phone signaling data in Shanghai and the results show that the proposed method can obtain acceptable accuracy overall. This study provides an important opportunity to infer trip mode from the aspect of probability using mobile phone signaling data.
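The clustering step can be sketched with a two-component Gaussian mixture fitted to log travel times by plain EM, with the components then read as candidate trip modes. The data below are toy draws, and the sketch omits the paper's activity-node recognition and travel-time computation stages:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy travel times (minutes): slow trips vs. fast trips, both lognormal.
slow = rng.lognormal(mean=3.4, sigma=0.2, size=500)   # ~30 min (e.g., walking)
fast = rng.lognormal(mean=2.3, sigma=0.2, size=500)   # ~10 min (e.g., driving)
t = np.concatenate([slow, fast])

# "Logarithm GMM": fit a 2-component Gaussian mixture to log travel time by EM.
x = np.log(t)
mu = np.array([x.min(), x.max()])      # crude initialisation
var = np.array([x.var(), x.var()])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each trip
    dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means, and variances
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    pi = n / len(x)
```

Working on the log scale turns the right-skewed travel-time distribution into roughly Gaussian clusters, which is what makes a plain Gaussian mixture a reasonable mode-separation device here.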
21

Sprevak, Mark. "Not All Computational Methods Are Effective Methods." Philosophies 7, no. 5 (October 10, 2022): 113. http://dx.doi.org/10.3390/philosophies7050113.

Abstract:
An effective method is a computational method that might, in principle, be executed by a human. In this paper, I argue that there are methods for computing that are not effective methods. The examples I consider are taken primarily from quantum computing, but these are only meant to be illustrative of a much wider class. Quantum inference and quantum parallelism involve steps that might be implemented in multiple physical systems, but cannot be implemented, or at least not at will, by an idealised human. Recognising that not all computational methods are effective methods is important for at least two reasons. First, it is needed to correctly state the results of Turing and other founders of computation theory. Turing is sometimes said to have offered a replacement for the informal notion of an effective method with the formal notion of a Turing machine. I argue that such a view only holds under limited circumstances. Second, not distinguishing between computational methods and effective methods can lead to mistakes when quantifying over the class of all possible computational methods. Such quantification is common in philosophy of mind in the context of thought experiments that explore the limits of computational functionalism. I argue that these ‘homuncular’ thought experiments should not be treated as valid.
22

Musgrove, Donald R., John Hughes, and Lynn E. Eberly. "Fast, fully Bayesian spatiotemporal inference for fMRI data." Biostatistics 17, no. 2 (April 1, 2016): 291–303. http://dx.doi.org/10.1093/biostatistics/kxv044.

Abstract:
We propose a spatial Bayesian variable selection method for detecting blood oxygenation level dependent activation in functional magnetic resonance imaging (fMRI) data. Typical fMRI experiments generate large datasets that exhibit complex spatial and temporal dependence. Fitting a full statistical model to such data can be so computationally burdensome that many practitioners resort to fitting oversimplified models, which can lead to lower quality inference. We develop a full statistical model that permits efficient computation. Our approach eases the computational burden in two ways. We partition the brain into 3D parcels, and fit our model to the parcels in parallel. Voxel-level activation within each parcel is modeled as regressions located on a lattice. Regressors represent the magnitude of change in blood oxygenation in response to a stimulus, while a latent indicator for each regressor represents whether the change is zero or non-zero. A sparse spatial generalized linear mixed model captures the spatial dependence among indicator variables within a parcel and for a given stimulus. The sparse SGLMM permits considerably more efficient computation than does the spatial model typically employed in fMRI. Through simulation we show that our parcellation scheme performs well in various realistic scenarios. Importantly, indicator variables on the boundary between parcels do not exhibit edge effects. We conclude by applying our methodology to data from a task-based fMRI experiment.
23

de Oliveira Peres, Marcos Vinicius, Ricardo Puziol de Oliveira, Edson Zangiacomi Martinez, and Jorge Alberto Achcar. "Different inference approaches for the estimators of the sushila distribution." Model Assisted Statistics and Applications 16, no. 4 (December 20, 2021): 251–60. http://dx.doi.org/10.3233/mas-210539.

Abstract:
In this paper, we evaluate, via Monte Carlo simulations, the sample properties of the estimators for the Sushila distribution introduced by Shanker et al. (2013). We consider estimates obtained by six estimation methods: the well-known approaches of maximum likelihood, moments, and the Bayesian method, and the less traditional methods of L-moments, ordinary least squares, and weighted least squares. As comparison criteria, the biases and the roots of the mean-squared errors were computed across nine scenarios with sample sizes ranging from 30 to 300 (in increments of 30). In addition, we consider a simulation and a real data application to illustrate the applicability of the proposed estimators as well as the computation time needed to obtain the estimates; in this case, the Bayesian method was also considered. The aim of the study was to find an estimation method that can be considered a better alternative to, or at least interchangeable with, the traditional maximum likelihood method for small or large sample sizes and at low computational cost.
24

Pantho, Md Jubaer Hossain, Pankaj Bhowmik, and Christophe Bobda. "Towards an Efficient CNN Inference Architecture Enabling In-Sensor Processing." Sensors 21, no. 6 (March 10, 2021): 1955. http://dx.doi.org/10.3390/s21061955.

Abstract:
The astounding development of optical sensing imaging technology, coupled with the impressive improvements in machine learning algorithms, has increased our ability to understand and extract information from scenic events. In most cases, Convolutional neural networks (CNNs) are largely adopted to infer knowledge due to their surprising success in automation, surveillance, and many other application domains. However, the overwhelming computation demand of the convolution operations has somewhat limited their use in remote sensing edge devices. In these platforms, real-time processing remains a challenging task due to the tight constraints on resources and power. Here, the transfer and processing of non-relevant image pixels act as a bottleneck on the entire system. It is possible to overcome this bottleneck by exploiting the high bandwidth available at the sensor interface and designing a CNN inference architecture near the sensor. This paper presents an attention-based pixel processing architecture to facilitate CNN inference near the image sensor. We propose an efficient computation method that reduces dynamic power by decreasing the overall computation of the convolution operations. The proposed method reduces redundancies by using a hierarchical optimization approach. The approach minimizes power consumption for convolution operations by exploiting the spatio-temporal redundancies found in the incoming feature maps and performs computations only on selected regions based on their relevance score. The proposed design addresses problems related to the mapping of computations onto an array of processing elements (PEs) and introduces a suitable network structure for communication. The PEs are highly optimized to provide low latency and power for CNN applications. While designing the model, we exploit concepts from biological vision systems to reduce computation and energy. We prototype the model on a Virtex UltraScale+ FPGA and implement it as an Application-Specific Integrated Circuit (ASIC) using the TSMC 90 nm technology library. The results suggest that the proposed architecture significantly reduces dynamic power consumption and achieves a high speed-up, surpassing the computational capabilities of existing embedded processors.
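The relevance-score idea in this abstract, performing convolution work only on regions deemed relevant, can be illustrated with a toy sketch (tile size, threshold, and scoring function are illustrative assumptions, not the paper's architecture):

```python
def relevance(img, r0, c0, size):
    """Hypothetical relevance score: mean absolute intensity of a tile."""
    total = sum(abs(img[r][c])
                for r in range(r0, r0 + size)
                for c in range(c0, c0 + size))
    return total / (size * size)

def select_tiles(img, size=4, threshold=0.1):
    """Return origins of tiles whose score passes the threshold;
    only these tiles would be sent through the convolution pipeline."""
    return [(r0, c0)
            for r0 in range(0, len(img), size)
            for c0 in range(0, len(img[0]), size)
            if relevance(img, r0, c0, size) >= threshold]
```

Skipped tiles contribute no multiply-accumulate work, which is where the dynamic-power savings in such schemes come from.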
APA, Harvard, Vancouver, ISO, and other styles
25

Sesta, Luca, Andrea Pagnani, Jorge Fernandez-de-Cossio-Diaz, and Guido Uguzzoni. "Inference of annealed protein fitness landscapes with AnnealDCA." PLOS Computational Biology 20, no. 2 (February 20, 2024): e1011812. http://dx.doi.org/10.1371/journal.pcbi.1011812.

Full text
Abstract:
The design of proteins with specific tasks is a major challenge in molecular biology with important diagnostic and therapeutic applications. High-throughput screening methods have been developed to systematically evaluate protein activity, but only a small fraction of possible protein variants can be tested using these techniques. Computational models that explore the sequence space in silico to identify the fittest molecules for a given function are needed to overcome this limitation. In this article, we propose AnnealDCA, a machine-learning framework to learn the protein fitness landscape from sequencing data derived from a broad range of experiments that use selection and sequencing to quantify protein activity. We demonstrate the effectiveness of our method by applying it to antibody Rep-Seq data of immunized mice and screening experiments, assessing the quality of the fitness landscape reconstructions. Our method can be applied to many experimental settings where a population of protein variants undergoes various rounds of selection and sequencing, without relying on the computation of variant enrichment ratios, and thus can be used even in cases of disjoint sequence samples.
26

Wonkap, Stephanie Kamgnia, and Gregory Butler. "BENIN: Biologically enhanced network inference." Journal of Bioinformatics and Computational Biology 18, no. 03 (June 2020): 2040007. http://dx.doi.org/10.1142/s0219720020400077.

Full text
Abstract:
Gene regulatory network inference is one of the central problems in computational biology. We need models that integrate the variety of available data in order to use their complementary information to overcome the issues of noisy and limited data. BENIN: Biologically Enhanced Network INference is our proposal to integrate data and infer more accurate networks. BENIN is a general framework that jointly considers different types of prior knowledge with expression datasets to improve network inference. The method casts network inference as a feature selection problem and solves it using a popular penalized regression method, the Elastic Net, combined with bootstrap resampling. BENIN significantly outperforms state-of-the-art methods on the simulated data from the DREAM 4 challenge when combining genome-wide location data, knockout gene expression data, and time-series expression data.
27

Stoltz, Marnus, Boris Baeumer, Remco Bouckaert, Colin Fox, Gordon Hiscott, and David Bryant. "Bayesian Inference of Species Trees using Diffusion Models." Systematic Biology 70, no. 1 (July 6, 2020): 145–61. http://dx.doi.org/10.1093/sysbio/syaa051.

Full text
Abstract:
We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 fresh water turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]
28

Stamatakis, Alexandros, and Michael Ott. "Efficient computation of the phylogenetic likelihood function on multi-gene alignments and multi-core architectures." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1512 (October 7, 2008): 3977–84. http://dx.doi.org/10.1098/rstb.2008.0163.

Full text
Abstract:
The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on ‘gappy’ multi-gene alignments. By ‘gappy’ we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAxML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
29

Xu, Chongxin, Chunwei Zhou, Gang Liu, and Shengming Liao. "Modified Fuzzy Inference Method for Heat Flux Inversion of Geothermal Reservoir Heat Source." E3S Web of Conferences 350 (2022): 02009. http://dx.doi.org/10.1051/e3sconf/202235002009.

Full text
Abstract:
The key to the performance of fuzzy inference inversion is selecting a reasonable domain; however, there is currently no universal method for doing so. Based on the characteristics of the heat flux of geothermal heat sources and a study of the fuzzy inference inversion process, this paper modifies the fuzzy inference method in two respects: domain setting and the iteration termination condition. A recommended domain and selection scheme for solving the geothermal heat flux problem are given, and the modified fuzzy inference inversion method is applied to the Rucheng geothermal field for verification. The results show that the modified method overcomes the problem of the traditional method's solution falling into a cycle, with the relative error of the verification term below 5%. Compared with the traditional method, the modified method greatly improves computational efficiency, reducing the number of iterations to only 7. This method has good application prospects for geothermal heat source inversion and resource evaluation.
30

Mao, Jiachen, Qing Yang, Ang Li, Kent W. Nixon, Hai Li, and Yiran Chen. "Toward Efficient and Adaptive Design of Video Detection System with Deep Neural Networks." ACM Transactions on Embedded Computing Systems 21, no. 3 (May 31, 2022): 1–21. http://dx.doi.org/10.1145/3484946.

Full text
Abstract:
In the past decade, Deep Neural Networks (DNNs), e.g., Convolutional Neural Networks, achieved human-level performance in vision tasks such as object classification and detection. However, DNNs are known to be computationally expensive and thus hard to deploy in real-time and edge applications. Many previous works have focused on DNN model compression to obtain smaller parameter sizes and, consequently, less computational cost. Such methods, however, often introduce noticeable accuracy degradation. In this work, we optimize Deep Feature Flow (DFF), a state-of-the-art DNN-based video detection framework, from the cloud end using three proposed ideas. First, we propose Asynchronous DFF (ADFF) to asynchronously execute the neural networks. Second, we propose a Video-based Dynamic Scheduling (VDS) method that decides the detection frequency based on the magnitude of movement between video frames. Last, we propose Spatial Sparsity Inference (SSI), which performs inference on only part of the video frame and thus reduces the computation cost. According to our experimental results, ADFF reduces the bottleneck latency from 89 to 19 ms, VDS increases the detection accuracy by 0.6% mAP without increasing computation cost, and SSI further saves 0.2 ms with a 0.6% mAP degradation of detection accuracy.
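The dynamic-scheduling idea described in this abstract, running the expensive detector only when inter-frame change is large, reduces to a simple rule that can be sketched as follows (toy frames and a hypothetical threshold; not the authors' implementation):

```python
def frame_change(prev, cur):
    """Mean absolute pixel difference between two equally sized frames."""
    n = len(prev) * len(prev[0])
    return sum(abs(a - b)
               for ra, rb in zip(prev, cur)
               for a, b in zip(ra, rb)) / n

def schedule_detection(frames, threshold=0.5):
    """Return indices of frames on which the (expensive) detector would run.
    Frame 0 is always detected; later frames only when change is large."""
    run_on = [0]
    last = frames[0]
    for i, f in enumerate(frames[1:], start=1):
        if frame_change(last, f) >= threshold:
            run_on.append(i)
            last = f  # new reference frame for subsequent comparisons
    return run_on
```

Frames below the change threshold reuse the previous detection result, which is what trades a small accuracy cost for reduced computation.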
31

Hou, Shou Ming, Li Juan He, Wen Peng Xu, Hua Tao Fan, and Zhong Qi Sheng. "Computation Model of Case Similarity Based on Uneven Weight Distance Coefficient." Applied Mechanics and Materials 44-47 (December 2010): 3965–69. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.3965.

Full text
Abstract:
To solve the problem of retrieving design examples in rapid response design, a model for computing the similarity among design examples is given. The attribute weights of design examples are computed using the uneven weight distance coefficient method, and the similarity between the product structure model that meets customer requirements and existing design examples is computed using the combination weight method. This approach has been applied to the example-inference-based conceptual design of rolling guides; the results indicate that the proposed weight computation method is simple and reliable and can solve the intelligent retrieval problem of design examples in rapid response design.
32

Marku, Malvina, and Vera Pancaldi. "From time-series transcriptomics to gene regulatory networks: A review on inference methods." PLOS Computational Biology 19, no. 8 (August 10, 2023): e1011254. http://dx.doi.org/10.1371/journal.pcbi.1011254.

Full text
Abstract:
Inference of gene regulatory networks has been an active area of research for around 20 years, leading to the development of sophisticated inference algorithms based on a variety of assumptions and approaches. With the ever-increasing demand for more accurate and powerful models, the inference problem remains of broad scientific interest. The abstract representation of biological systems through gene regulatory networks represents a powerful method to study such systems, encoding different amounts and types of information. In this review, we summarize the different types of inference algorithms specifically based on time-series transcriptomics, giving an overview of the main applications of gene regulatory networks in computational biology. This review is intended to give an updated reference on regulatory network inference tools for biologists and researchers new to the topic, guiding them in selecting the inference method that best fits their questions, aims, and experimental data.
33

Li, Z., W. Zhou, X. S. Zhang, and L. Chen. "A parsimonious tree-grow method for haplotype inference." Bioinformatics 21, no. 17 (July 7, 2005): 3475–81. http://dx.doi.org/10.1093/bioinformatics/bti572.

Full text
34

Rojas-Guzmán, Carlos, and Mark A. Kramer. "An Evolutionary Computing Approach to Probabilistic Reasoning on Bayesian Networks." Evolutionary Computation 4, no. 1 (March 1996): 57–85. http://dx.doi.org/10.1162/evco.1996.4.1.57.

Full text
Abstract:
Bayesian belief networks can be used to represent and to reason about complex systems with uncertain or incomplete information. Bayesian networks are graphs capable of encoding and quantifying probabilistic dependence and conditional independence among variables. Diagnostic reasoning, also referred to as abductive inference, determining the most probable explanation (MPE), or finding the maximum a posteriori instantiation (MAP), involves determining the global most probable system description given the values of any subset of variables. In some cases abductive inference can be performed with exact algorithms using distributed network computations, but the problem is NP-hard, and complexity increases significantly with the presence of undirected cycles, the number of discrete states per variable, and the number of variables in the network. This paper describes an approximate method composed of a graph-based evolutionary algorithm that uses nonbinary alphabets, graphs instead of strings, and graph operators to perform abductive inference on multiply connected networks for which systematic search methods are not feasible. The motivation, basis, and adequacy of the method are discussed, and experimental results are presented.
35

Bauer, Alexander, Shinichi Nakajima, and Klaus-Robert Müller. "Polynomial-Time Constrained Message Passing for Exact MAP Inference on Discrete Models with Global Dependencies." Mathematics 11, no. 12 (June 8, 2023): 2628. http://dx.doi.org/10.3390/math11122628.

Full text
Abstract:
Considering the worst-case scenario, the junction-tree algorithm remains the most general solution for exact MAP inference with polynomial run-time guarantees. Unfortunately, its main tractability assumption requires the treewidth of a corresponding MRF to be bounded, strongly limiting the range of admissible applications. In fact, many practical problems in the area of structured prediction require modeling global dependencies by either directly introducing global factors or enforcing global constraints on the prediction variables. However, this always results in a fully connected graph, making exact inference by means of this algorithm intractable. Previous works focusing on the problem of loss-augmented inference have demonstrated how efficient inference can be performed on models with specific global factors representing non-decomposable loss functions within the training regime of SSVMs. Observing that the same fundamental idea can be applied to solve a broader class of computational problems, in this paper we adjust the framework for efficient exact inference proposed in previous work to allow much finer interactions between the energy of the core model and the sufficient statistics of the global terms. As a result, we greatly increase the range of admissible applications and strongly improve upon the theoretical guarantees of computational efficiency. We illustrate the applicability of our method in several use cases, including one that is not covered by the previous problem formulation. Furthermore, we propose a new graph transformation technique via node cloning, which ensures a polynomial run-time for solving our target problem. In particular, the overall computational complexity of our constrained message-passing algorithm depends only on form-independent quantities such as the treewidth of a corresponding graph (without global connections) and the image size of the sufficient statistics of the global terms.
36

Bu, Diyue, Xiaofeng Wang, and Haixu Tang. "Haplotype-based membership inference from summary genomic data." Bioinformatics 37, Supplement_1 (July 1, 2021): i161–i168. http://dx.doi.org/10.1093/bioinformatics/btab305.

Full text
Abstract:
Motivation: The availability of human genomic data, together with the enhanced capacity to process them, is leading to transformative technological advances in biomedical science and engineering. However, the public dissemination of such data has been difficult due to privacy concerns. Specifically, it has been shown that the presence of a human subject in a case group can be inferred from the shared summary statistics of the group, e.g. the allele frequencies, or even the presence/absence of genetic variants (e.g. shared by the Beacon project) in the group. These methods rely on the availability of the target's genome, i.e. the DNA profile of a target human subject, and thus are often referred to as membership inference methods.
Results: In this article, we demonstrate that haplotypes, i.e. sequences of single nucleotide variations (SNVs) showing strong genetic linkages in human genome databases, may be inferred from the summary of genomic data without using a target's genome. Furthermore, novel haplotypes that did not appear in the database may be reconstructed solely from the allele frequencies of genomic datasets. These reconstructed haplotypes can be used by a haplotype-based membership inference algorithm to identify target subjects in a case group with greater power than existing methods based on SNVs.
Availability and implementation: The implementation of the membership inference algorithms is available at https://github.com/diybu/Haplotype-based-membership-inferences.
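For contrast with the haplotype-based approach, the classical SNV-based membership statistic that this work improves upon can be sketched as a per-variant log-likelihood ratio (toy frequencies; a simplified model assuming independent biallelic SNVs, not the paper's algorithm):

```python
import math

def membership_lr(genotype, pool_freqs, ref_freqs):
    """Toy log-likelihood-ratio membership statistic over biallelic SNVs.
    genotype: alt-allele counts in {0, 1, 2} per SNV;
    pool_freqs/ref_freqs: alt-allele frequencies in case group / reference."""
    score = 0.0
    for g, p, q in zip(genotype, pool_freqs, ref_freqs):
        # Clamp frequencies away from 0 and 1 to keep the logs finite.
        p = min(max(p, 1e-6), 1 - 1e-6)
        q = min(max(q, 1e-6), 1 - 1e-6)
        score += g * math.log(p / q) + (2 - g) * math.log((1 - p) / (1 - q))
    return score
```

A positive score favours membership in the case group over the reference population; the haplotype-based test in the paper gains power by exploiting the linkage structure that this independent-SNV model ignores.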
37

Ouellette, Tom W., and Philip Awadalla. "Inferring ongoing cancer evolution from single tumour biopsies using synthetic supervised learning." PLOS Computational Biology 18, no. 4 (April 28, 2022): e1010007. http://dx.doi.org/10.1371/journal.pcbi.1010007.

Full text
Abstract:
Variant allele frequencies (VAF) encode ongoing evolution and subclonal selection in growing tumours. However, existing methods that utilize VAF information for cancer evolutionary inference are compressive, slow, or incorrectly specify the underlying cancer evolutionary dynamics. Here, we provide a proof-of-principle synthetic supervised learning method, TumE, that integrates simulated models of cancer evolution with Bayesian neural networks, to infer ongoing selection in bulk-sequenced single tumour biopsies. Analyses in synthetic and patient tumours show that TumE significantly improves both accuracy and inference time per sample when detecting positive selection, deconvoluting selected subclonal populations, and estimating subclone frequency. Importantly, we show how transfer learning can leverage stored knowledge within TumE models for related evolutionary inference tasks—substantially reducing data and computational time for further model development and providing a library of recyclable deep learning models for the cancer evolution community. This extensible framework provides a foundation and future directions for harnessing progressive computational methods for the benefit of cancer genomics and, in turn, the cancer patient.
38

Xie, Yanxi, Yuewen Li, Zhijie Xia, Ruixia Yan, and Dongqing Luan. "A Penalized h-Likelihood Variable Selection Algorithm for Generalized Linear Regression Models with Random Effects." Complexity 2020 (September 15, 2020): 1–13. http://dx.doi.org/10.1155/2020/8941652.

Full text
Abstract:
Reinforcement learning is one of the paradigms and methodologies of machine learning developed in the computational intelligence community, and reinforcement learning algorithms have recently presented a major challenge in complex dynamics. From the perspective of variable selection, we often encounter situations where too many variables are included in the full model at the initial stage of modeling. Due to the high-dimensional and intractable integrals of longitudinal data, likelihood inference is computationally challenging; computationally intensive methods can suffer from very slow convergence or even nonconvergence. Recently, hierarchical likelihood (h-likelihood) has played an important role in inference for models with unobservable or unobserved random variables. This paper focuses on linear models with random effects in the mean structure and proposes a penalized h-likelihood algorithm that incorporates variable selection procedures in the setting of mean modeling via h-likelihood. The penalized h-likelihood method avoids messy integration over the random effects and is computationally efficient. Furthermore, it demonstrates good performance in relevant-variable selection. Theoretical analysis and simulations confirm that the penalized h-likelihood algorithm produces good fixed-effect estimates and can identify zero regression coefficients in modeling the mean structure.
39

Wang, Y., N. L. Zhang, and T. Chen. "Latent Tree Models and Approximate Inference in Bayesian Networks." Journal of Artificial Intelligence Research 32 (August 26, 2008): 879–900. http://dx.doi.org/10.1613/jair.2530.

Full text
Abstract:
We propose a novel method for approximate inference in Bayesian networks (BNs). The idea is to sample data from a BN, learn a latent tree model (LTM) from the data offline, and when online, make inference with the LTM instead of the original BN. Because LTMs are tree-structured, inference takes linear time. In the meantime, they can represent complex relationship among leaf nodes and hence the approximation accuracy is often good. Empirical evidence shows that our method can achieve good approximation accuracy at low online computational cost.
40

Gadosey, Pius Kwao, Yujian Li, Enock Adjei Agyekum, Ting Zhang, Zhaoying Liu, Peter T. Yamak, and Firdaous Essaf. "SD-UNet: Stripping down U-Net for Segmentation of Biomedical Images on Platforms with Low Computational Budgets." Diagnostics 10, no. 2 (February 18, 2020): 110. http://dx.doi.org/10.3390/diagnostics10020110.

Full text
Abstract:
During image segmentation tasks in computer vision, achieving high accuracy while requiring fewer computations and faster inference is a big challenge. This is especially important in medical imaging tasks, but one metric is usually compromised for the other. To address this problem, this paper presents an extremely fast, small, and computationally effective deep neural network called Stripped-Down UNet (SD-UNet), designed for the segmentation of biomedical data on devices with limited computational resources. By using depthwise separable convolutions throughout the network, we design a lightweight deep convolutional neural network architecture inspired by the widely adopted U-Net model. In order to recover the expected performance degradation in the process, we introduce a weight standardization algorithm with the group normalization method. We demonstrate that SD-UNet has three major advantages: (i) smaller model size (23x smaller than U-Net); (ii) 8x fewer parameters; and (iii) faster inference time with a computational complexity lower than 8M floating point operations (FLOPs). Experiments on the benchmark dataset of the International Symposium on Biomedical Imaging (ISBI) challenge for segmentation of neuronal structures in electron microscopic (EM) stacks and the Medical Segmentation Decathlon (MSD) challenge brain tumor segmentation (BraTS) dataset show that the proposed model achieves comparable and sometimes better results compared to the current state of the art.
41

Zhai, Yao, Wei Liu, Yunzhi Jin, and Yanqing Zhang. "Variational Bayesian Variable Selection for High-Dimensional Hidden Markov Models." Mathematics 12, no. 7 (March 27, 2024): 995. http://dx.doi.org/10.3390/math12070995.

Full text
Abstract:
The Hidden Markov Model (HMM) is a crucial probabilistic modeling technique for sequence data processing and statistical learning that has been extensively utilized in various engineering applications. Traditionally, the EM algorithm is employed to fit HMMs, but academics and professionals currently exhibit growing enthusiasm for Bayesian inference. In the Bayesian context, Markov Chain Monte Carlo (MCMC) methods are commonly used for inferring HMMs, but they can be computationally demanding for high-dimensional covariate data. As a rapid substitute, variational approximation has become a noteworthy and effective approximate inference approach, particularly in recent years, for representation learning in deep generative models. However, there has been limited exploration of variational inference for HMMs with high-dimensional covariates. In this article, we develop a mean-field Variational Bayesian method with a double-exponential shrinkage prior to fit high-dimensional HMMs whose hidden states are of discrete types. The proposed method offers the advantage of simultaneously fitting the model and investigating the specific factors that impact changes in the response variable. In addition, since it is based on the Variational Bayesian framework, the proposed method avoids the huge memory requirements and intensive computational cost typical of traditional Bayesian methods. In simulation studies, we demonstrate that the proposed method can quickly and accurately estimate the posterior distributions of the parameters with good performance. We analyzed the Beijing Multi-Site Air-Quality data and predicted PM2.5 values via the fitted HMMs.
42

Sashittal, Palash, and Mohammed El-Kebir. "Sampling and summarizing transmission trees with multi-strain infections." Bioinformatics 36, Supplement_1 (July 1, 2020): i362–i370. http://dx.doi.org/10.1093/bioinformatics/btaa438.

Full text
Abstract:
Motivation: The combination of genomic and epidemiological data holds the potential to enable accurate pathogen transmission history inference. However, the inference of outbreak transmission histories remains challenging due to various factors such as within-host pathogen diversity and multi-strain infections. Current computational methods ignore within-host diversity and/or multi-strain infections, often failing to accurately infer the transmission history. Thus, there is a need for efficient computational methods for transmission tree inference that accommodate the complexities of real data.
Results: We formulate the direct transmission inference (DTI) problem for inferring transmission trees that support multi-strain infections given a timed phylogeny and additional epidemiological data. We establish hardness for the decision and counting versions of the DTI problem. We introduce Transmission Tree Uniform Sampler (TiTUS), a method that uses SATISFIABILITY to almost uniformly sample from the space of transmission trees. We introduce criteria that prioritize parsimonious transmission trees, which we subsequently summarize using a novel consensus tree approach. We demonstrate TiTUS's ability to accurately reconstruct transmission trees on simulated data as well as a documented HIV transmission chain.
Availability and implementation: https://github.com/elkebir-group/TiTUS.
Supplementary information: Supplementary data are available at Bioinformatics online.
43

Fernandez-de-Cossio-Diaz, Jorge, Guido Uguzzoni, and Andrea Pagnani. "Unsupervised Inference of Protein Fitness Landscape from Deep Mutational Scan." Molecular Biology and Evolution 38, no. 1 (August 8, 2020): 318–28. http://dx.doi.org/10.1093/molbev/msaa204.

Full text
Abstract:
The recent technological advances underlying the screening of large combinatorial libraries in high-throughput mutational scans deepen our understanding of adaptive protein evolution and boost its applications in protein design. Nevertheless, the large number of possible genotypes requires suitable computational methods for data analysis, the prediction of mutational effects, and the generation of optimized sequences. We describe a computational method that, trained on sequencing samples from multiple rounds of a screening experiment, provides a model of the genotype–fitness relationship. We tested the method on five large-scale mutational scans, yielding accurate predictions of the mutational effects on fitness. The inferred fitness landscape is robust to experimental and sampling noise and exhibits high generalization power in terms of broader sequence space exploration and higher fitness variant predictions. We investigate the role of epistasis and show that the inferred model provides structural information about the 3D contacts in the molecular fold.
44

Smith, Rory J. E., Gregory Ashton, Avi Vajpeyi, and Colm Talbot. "Massively parallel Bayesian inference for transient gravitational-wave astronomy." Monthly Notices of the Royal Astronomical Society 498, no. 3 (August 19, 2020): 4492–502. http://dx.doi.org/10.1093/mnras/staa2483.

Full text
Abstract:
Understanding the properties of transient gravitational waves (GWs) and their sources is of broad interest in physics and astronomy. Bayesian inference is the standard framework for astrophysical measurement in transient GW astronomy. Usually, stochastic sampling algorithms are used to estimate posterior probability distributions over the parameter spaces of models describing experimental data. The most physically accurate models typically come with a large computational overhead which can render data analysis extremely time consuming, or possibly even prohibitive. In some cases highly specialized optimizations can mitigate these issues, though they can be difficult to implement, as well as to generalize to arbitrary models of the data. Here, we investigate an accurate, flexible, and scalable method for astrophysical inference: parallelized nested sampling. The reduction in the wall-time of inference scales almost linearly with the number of parallel processes running on a high-performance computing cluster. By utilizing a pool of several hundreds or thousands of CPUs in a high-performance cluster, the large wall times of many astrophysical inferences can be alleviated while simultaneously ensuring that any GW signal model can be used ‘out of the box’, i.e. without additional optimization or approximation. Our method will be useful to both the LIGO-Virgo-KAGRA collaborations and the wider scientific community performing astrophysical analyses on GWs. An implementation is available in the open source gravitational-wave inference library pBilby (parallel bilby).
45

Cheng, Huayang, Yunchao Ding, and Lu Yang. "Real-Time Motion Detection Network Based on Single Linear Bottleneck and Pooling Compensation." Applied Sciences 12, no. 17 (August 29, 2022): 8645. http://dx.doi.org/10.3390/app12178645.

Full text
Abstract:
Motion (change) detection is a basic preprocessing step in video processing with many application scenarios. One challenge is that deep learning-based methods require high computational power to improve their accuracy. In this paper, we introduce a novel lightweight semantic segmentation network for motion detection, called Real-time Motion Detection Network Based on Single Linear Bottleneck and Pooling Compensation (MDNet-LBPC). In the feature extraction stage, the most computationally expensive CNN block is replaced with our single linear bottleneck operator to reduce the computational cost. During the decoder stage, our pooling compensation mechanism supplements useful motion detection information. To the best of our knowledge, this is the first work to use a lightweight operator to solve the motion detection task. We show that the acceleration performance of the single linear bottleneck is 5% higher than that of the linear bottleneck, making it more suitable for improving the efficiency of model inference. On the CDNet2014 dataset, MDNet-LBPC increases the frames per second (FPS) metric by 123 compared to the suboptimal method FgSegNet_v2, ranking first in inference speed. Meanwhile, MDNet-LBPC achieves 95.74% on the accuracy metric, comparable to the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
46

Jiang, Kai, Jianghao Su, and Juan Zhang. "A Data-Driven Parameter Prediction Method for HSS-Type Methods." Mathematics 10, no. 20 (October 14, 2022): 3789. http://dx.doi.org/10.3390/math10203789.

Full text
Abstract:
Some matrix-splitting iterative methods for solving systems of linear equations contain parameters that need to be specified in advance, and the choice of these parameters directly affects the efficiency of the corresponding iterative methods. This paper uses a Bayesian inference-based Gaussian process regression (GPR) method to predict the relatively optimal parameters of some HSS-type iteration methods and provides extensive numerical experiments comparing the prediction performance of the GPR method with other existing methods. Numerical results show that using GPR to predict the parameters of matrix-splitting iterative methods offers the advantages of lower computational effort, better predicted parameters, and wider applicability compared to the currently available methods for finding the parameters of HSS-type iteration methods.
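A minimal GP regression in the spirit of the paper can be sketched in plain NumPy. The training targets use the classical result that, for the model problem, the optimal HSS shift is √(λ_min·λ_max) of the Hermitian part (here λ_min is fixed at 1); the feature choice, kernel length scale, and grid are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential (RBF) kernel between two 1-D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gpr_predict(x_train, y_train, x_test, ell=1.0, noise=1e-6):
    """Posterior mean of a GP regressor -- the Bayesian inference step."""
    K = rbf(x_train, x_train, ell) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_test, x_train, ell) @ alpha

# Training set: largest eigenvalue of the Hermitian part -> optimal shift,
# with lambda_min = 1 so the target is sqrt(lambda_max).
lam_max = np.linspace(1.0, 10.0, 20)
alpha_opt = np.sqrt(lam_max)
pred = gpr_predict(lam_max, alpha_opt, np.array([5.5]))
```

Once fitted, the GP prediction replaces the expensive trial-and-error search over the HSS parameter, which is where the "smaller computational effort" claim comes from.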
APA, Harvard, Vancouver, ISO, and other styles
47

Enomoto, Shohei, and Takeharu Eda. "Learning to Cascade: Confidence Calibration for Improving the Accuracy and Computational Cost of Cascade Inference Systems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7331–39. http://dx.doi.org/10.1609/aaai.v35i8.16900.

Full text
Abstract:
Recently, deep neural networks have come to be used in a variety of applications. While the accuracy of deep neural networks is increasing, the confidence score, which indicates the reliability of the prediction results, is becoming more important. Deep neural networks are seen as highly accurate but are known to be overconfident, making it important to calibrate the confidence score. Many studies have been conducted on confidence calibration. They calibrate the confidence score of the model to match its accuracy, but it is not clear whether these confidence scores can improve the performance of systems that use them. This paper focuses on cascade inference systems, one kind of system that uses confidence scores, and discusses the confidence score desired to improve system performance in terms of inference accuracy and computational cost. Based on this discussion, we propose a new confidence calibration method, Learning to Cascade. Learning to Cascade is a simple but novel method that optimizes the loss term for confidence calibration simultaneously with the original loss term. Experiments conducted on two datasets, CIFAR-100 and ImageNet, in two system settings show that naive application of existing calibration methods to cascade inference systems sometimes performs worse, whereas Learning to Cascade always achieves a better trade-off between inference accuracy and computational cost. The simplicity of Learning to Cascade allows it to be easily applied to improve the performance of existing systems.
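The cascade inference pattern the paper targets can be sketched in a few lines. This is a generic threshold cascade; the Learning-to-Cascade contribution itself — training the small model's calibration jointly with the task loss — is not reproduced here, and the toy models and threshold are assumptions:

```python
def argmax(probs):
    return max(range(len(probs)), key=probs.__getitem__)

def cascade_predict(x, small_model, large_model, threshold=0.9):
    """Run the cheap model first; pay for the large model only when the
    cheap model's confidence score falls below the threshold."""
    probs = small_model(x)
    if max(probs) >= threshold:
        return argmax(probs), "small"
    return argmax(large_model(x)), "large"

# Toy stand-ins: the small model is confident only on "easy" inputs.
small = lambda x: [0.95, 0.05] if x == "easy" else [0.55, 0.45]
large = lambda x: [0.10, 0.90]
```

Because the routing decision depends entirely on the small model's confidence score, a miscalibrated score sends either too many inputs to the large model (wasting compute) or too few (losing accuracy) — which is exactly the trade-off the calibration loss is meant to fix.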
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Xiangxiang, Wenkai Hu, and Fan Yang. "Detection of Cause-Effect Relations Based on Information Granulation and Transfer Entropy." Entropy 24, no. 2 (January 28, 2022): 212. http://dx.doi.org/10.3390/e24020212.

Full text
Abstract:
Causality inference is a process to infer cause-effect relations between variables in, typically, complex systems, and it is commonly used for root cause analysis in large-scale process industries. Transfer entropy (TE), as a non-parametric causality inference method, is effective at detecting cause-effect relations in both linear and nonlinear processes. However, a major drawback of transfer entropy lies in its high computational complexity, which hinders its real-world application, especially in systems with strict requirements for real-time estimation. Motivated by this problem, this study proposes an improved method for causality inference based on transfer entropy and information granulation. The calculation of transfer entropy is improved with a new framework that integrates information granulation as a critical preceding step; moreover, a window-length determination method is proposed based on delay estimation, so as to conduct appropriate data compression using information granulation. The effectiveness of the proposed method is demonstrated by both a numerical example and an industrial case with a two-tank simulation model. As the results show, the proposed method can reduce the computational complexity significantly while maintaining a strong capability for accurate causality detection.
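A histogram-based sketch of the two ingredients follows (plain NumPy). Window-averaging stands in for the paper's information granulation, and the lag-1, quantile-binned TE estimator is a simplification of the real thing:

```python
import numpy as np
from collections import Counter

def granulate(series, window):
    """Compress a series by averaging non-overlapping windows -- a crude
    stand-in for the paper's information granulation step."""
    n = len(series) // window
    return series[: n * window].reshape(n, window).mean(axis=1)

def discretize(series, bins=4):
    # Quantile binning so every symbol is roughly equally likely
    edges = np.quantile(series, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(series, edges)

def transfer_entropy(x, y, bins=4):
    """Histogram estimate of TE from y to x at lag 1, in bits."""
    xs, ys = discretize(x, bins), discretize(y, bins)
    trip = Counter(zip(xs[1:], xs[:-1], ys[:-1]))     # (x_{t+1}, x_t, y_t)
    pair_xx = Counter(zip(xs[1:], xs[:-1]))
    pair_xy = Counter(zip(xs[:-1], ys[:-1]))
    single = Counter(xs[:-1])
    n = len(xs) - 1
    te = 0.0
    for (x1, x0, y0), c in trip.items():
        p_x1_given_x0y0 = c / pair_xy[(x0, y0)]
        p_x1_given_x0 = pair_xx[(x1, x0)] / single[x0]
        te += (c / n) * np.log2(p_x1_given_x0y0 / p_x1_given_x0)
    return te
```

Granulating before estimating TE shrinks the series length (and hence the dominant cost of the histogram pass) by the window factor, which is the source of the computational savings the abstract describes.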
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Tianling, Bin He, and Yangyang Zheng. "Research and Implementation of High Computational Power for Training and Inference of Convolutional Neural Networks." Applied Sciences 13, no. 2 (January 11, 2023): 1003. http://dx.doi.org/10.3390/app13021003.

Full text
Abstract:
Algorithms and computing power have consistently been the two driving forces behind the development of artificial intelligence. The computational power of a platform has a significant impact on the implementation cost, performance, power consumption, and flexibility of an algorithm. Currently, AI models are mainly trained on high-performance GPU platforms, and their inference can be implemented on GPUs, CPUs, and FPGAs. On the one hand, due to its high power consumption and cost, the GPU is not suitable for power- and cost-sensitive application scenarios. On the other hand, because the training and inference of a neural network use different computing platforms, the model's data must be transmitted between platforms with varying computing power, which limits the data processing capability of the network and degrades its real-time performance and flexibility. This paper focuses on a high computing power implementation method that integrates convolutional neural network (CNN) training and inference, and proposes to implement the CNN training and inference process using high-performance heterogeneous architecture (HA) devices with a field programmable gate array (FPGA) as the core. The numerous repeated multiplication and accumulation operations in CNN training and inference are implemented in programmable logic (PL), which significantly improves the speed of CNN training and inference and reduces the overall power consumption, thus providing a modern implementation method for neural networks in application fields that are sensitive to power, cost, and footprint. First, based on the data stream covering the CNN training and inference process, this study investigates methods to merge the training and inference data streams. Second, a high-level language was used to describe the merged data stream structure; the high-level description was converted to a hardware register transfer level (RTL) description by a high-level synthesis (HLS) tool, and the intellectual property (IP) core was generated. The processing system (PS) was used for overall control, data preprocessing, and result analysis, and was connected to the IP core via an on-chip AXI bus interface in the HA device. Finally, the integrated implementation method was tested and validated on a Xilinx HA device using the MNIST handwritten digit validation set. According to the test results, compared with a GPU, the model trained in the HA device's PL achieves the same convergence rate in only 78.04 percent of the training time. With processing times of only 3.31 ms and 0.65 ms for a single frame image, an average recognition accuracy of 95.697%, and an overall power consumption of only 3.22 W @ 100 MHz, the two convolutional neural networks presented in this paper are suitable for deployment in lightweight domains with limited power budgets.
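The "numerous repeated multiplication and accumulation operations" are exactly the loop nest below: a behavioural Python sketch of the valid cross-correlation a CNN layer performs (no kernel flip, as is conventional in CNNs). In the paper, this kind of nest is what the HLS tool unrolls and pipelines into PL fabric; the Python here is only for illustrating the computation, and the sizes are arbitrary:

```python
def conv2d_mac(image, kernel):
    """Direct multiply-accumulate form of a valid 2-D cross-correlation."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for r in range(ih - kh + 1):
        for c in range(iw - kw + 1):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]  # one MAC
            out[r][c] = acc
    return out
```

Because every output pixel is an independent chain of MACs, the inner loops can be fully unrolled into parallel DSP slices on the FPGA, which is the main source of the speed and power advantages the abstract reports.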
APA, Harvard, Vancouver, ISO, and other styles
50

Jeon, Hyojin, Seungcheol Park, Jin-Gee Kim, and U. Kang. "PET: Parameter-efficient Knowledge Distillation on Transformer." PLOS ONE 18, no. 7 (July 6, 2023): e0288060. http://dx.doi.org/10.1371/journal.pone.0288060.

Full text
Abstract:
Given a large Transformer model, how can we obtain a small and computationally efficient model that maintains the performance of the original? Transformers have shown significant performance improvements on many NLP tasks in recent years. However, their large size, expensive computational cost, and long inference time make them challenging to deploy on resource-constrained devices. Existing Transformer compression methods mainly focus on reducing the size of the encoder, ignoring the fact that the decoder accounts for the major portion of the long inference time. In this paper, we propose PET (Parameter-Efficient knowledge distillation on Transformer), an efficient Transformer compression method that reduces the size of both the encoder and decoder. In PET, we identify and exploit pairs of parameter groups for efficient weight sharing, and employ a warm-up process using a simplified task to increase the gain from Knowledge Distillation. Extensive experiments on five real-world datasets show that PET outperforms existing methods on machine translation tasks. Specifically, on the IWSLT’14 EN→DE task, PET reduces memory usage by 81.20% and accelerates inference by 45.15% compared to the uncompressed model, with a minor BLEU score decrease of 0.27.
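The knowledge-distillation objective PET builds on can be sketched as follows (plain NumPy). This is the generic Hinton-style soft-target loss with temperature; PET's parameter-group weight sharing and warm-up task are paper-specific and not reproduced here:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax; higher T flattens the distribution
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation loss: cross-entropy between the teacher's
    and student's temperature-softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -T * T * float((p * np.log(q)).sum())
```

The loss is minimized when the student's softened distribution matches the teacher's, so it can transfer the teacher's "dark knowledge" (relative probabilities of wrong classes) to the smaller shared-weight model.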
APA, Harvard, Vancouver, ISO, and other styles