Journal articles on the topic 'Maximum a posteriori (MAP) framework'


Consult the top 50 journal articles for your research on the topic 'Maximum a posteriori (MAP) framework.'


1

Pennec, X., J. Ehrhardt, N. Ayache, H. Handels, and H. Hufnagel. "Computation of a Probabilistic Statistical Shape Model in a Maximum-a-posteriori Framework." Methods of Information in Medicine 48, no. 04 (2009): 314–19. http://dx.doi.org/10.3414/me9228.

Abstract:
Objectives: When analyzing shapes and shape variabilities, the first step is bringing those shapes into correspondence. This is a fundamental problem even when solved by manually determining exact correspondences such as landmarks. We developed a method to represent a mean shape and a variability model for a training data set based on probabilistic correspondences computed between the observations. Methods: First, the observations are matched onto each other with an affine transformation found by Expectation-Maximization Iterative-Closest-Point (EM-ICP) registration. We then propose a maximum-a-posteriori (MAP) framework in order to compute the statistical shape model (SSM) parameters that result in an optimal adaptation of the model to the observations. The MAP optimization is carried out with respect to the observation parameters and the generative model parameters in a global criterion and leads to very efficient, closed-form solutions for (almost) all parameters. Results: We compared our probabilistic SSM to an SSM based on one-to-one correspondences and PCA (the classical SSM). Experiments on synthetic data served to test performance on non-convex shapes (15 training shapes), which have proved difficult in terms of proper correspondence determination. We then computed the SSMs for real putamen data (21 training shapes). The evaluation measured the generalization ability as well as the specificity of both SSMs and showed that shape-detail differences in particular are better modeled by the probabilistic SSM (Hausdorff distance in generalization ability ≈ 25% smaller). Conclusions: The experimental outcome shows the efficiency and advantages of the new approach, as the probabilistic SSM performs better in modeling shape details and differences.
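
The closed-form character of such MAP solutions has a simple generic analogue. The following Python sketch is our illustration, not the authors' SSM code, and every dimension and variable name in it is invented: with a Gaussian prior on the shape parameters and Gaussian observation noise, the MAP estimate falls out of one linear solve.

import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 5))          # hypothetical shape-model basis
L = np.diag([4.0, 2.0, 1.0, 0.5, 0.25])   # prior covariance (e.g. PCA eigenvalues)
s2 = 0.1                                  # observation noise variance
b_true = rng.standard_normal(5)
y = H @ b_true + np.sqrt(s2) * rng.standard_normal(50)

# MAP estimate: argmax p(b | y) = (H^T H / s2 + L^-1)^-1 H^T y / s2
b_map = np.linalg.solve(H.T @ H / s2 + np.linalg.inv(L), H.T @ y / s2)
print(b_map)
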
2

Coene, W. "A practical algorithm for maximum-likelihood HREM image reconstruction." Proceedings, annual meeting, Electron Microscopy Society of America 50, no. 2 (August 1992): 986–87. http://dx.doi.org/10.1017/s0424820100129565.

Abstract:
Reconstruction in HREM of the complex wave function at the exit face of a crystal foil from a focal series of HREM images, with correction for the microscope's aberrations, can be performed with a variety of algorithms, depending on the approximations involved in HREM image formation. The maximum-a-posteriori (MAP) recursive reconstruction algorithm of Kirkland is the most general one, with the full benefit of the effects of non-linear imaging and partial coherence, which are correctly treated in terms of a transmission-cross-coefficient (TCC). However, routine application of the Kirkland algorithm has thus far been hampered by its enormous computational demands, especially when large image frame sizes (512 × 512) and a large number of HREM images (≥20) are aimed at. In this paper, we present a modified version of the Kirkland method within a maximum-likelihood (MAL) framework, with a new numerical implementation yielding a workable algorithm with much higher computational efficiency.
3

Qi, Hong, Yaobin Qiao, Shuangcheng Sun, Yuchen Yao, and Liming Ruan. "Image Reconstruction of Two-Dimensional Highly Scattering Inhomogeneous Medium Using MAP-Based Estimation." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/412315.

Abstract:
A maximum a posteriori (MAP) estimation based on a Bayesian framework is applied to image reconstruction of a two-dimensional highly scattering inhomogeneous medium. The finite difference method (FDM) and the conjugate gradient (CG) algorithm serve as the forward and inverse solving models, respectively. The generalized Gaussian Markov random field (GGMRF) model is used as the regularization, and the influence of measurement errors and initial distributions is investigated. Through the test cases, the MAP estimation algorithm is demonstrated to greatly improve the reconstruction of the optical coefficients.
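
As a rough 1-D analogue of this kind of MAP reconstruction (a toy stand-in, not the paper's FDM forward model or GGMRF prior), a quadratic smoothness penalty turns the MAP problem into normal equations that a conjugate gradient solver handles directly.

import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((120, n)) / np.sqrt(n)        # toy forward model
x_true = np.zeros(n); x_true[40:60] = 1.0             # "inhomogeneity" profile
y = A @ x_true + 0.01 * rng.standard_normal(120)      # noisy measurements

lam = 0.5                                             # regularization weight
D = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # smoothness (MRF-like) prior
# MAP estimate solves (A^T A + lam * D) x = A^T y
x_map, info = cg(A.T @ A + lam * D, A.T @ y)
assert info == 0                                      # CG converged
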
4

Wang, Zhongli, Litong Fan, and Baigen Cai. "A 3D Relative-Motion Context Constraint-Based MAP Solution for Multiple-Object Tracking Problems." Sensors 18, no. 7 (July 20, 2018): 2363. http://dx.doi.org/10.3390/s18072363.

Abstract:
Multi-object tracking (MOT), especially with a moving monocular camera, is a very challenging task in the field of visual object tracking. Traditional tracking-by-detection methods are heavily dependent on detection results: occlusions and mis-detections often lead to fragmented tracklets or drifting. In this paper, the tasks of MOT and camera motion estimation are formulated as finding a maximum a posteriori (MAP) solution of a joint probability and are solved synchronously in a unified framework. To improve performance, we incorporate a three-dimensional (3D) relative-motion model into a sequential Bayesian framework to track multiple objects and estimate the camera's ego-motion. The 3D relative-motion model, which describes spatial relations among objects, is exploited for predicting object states robustly and recovering objects when occlusions and mis-detections occur. Reversible jump Markov chain Monte Carlo (RJMCMC) particle filtering is applied to solve the posterior estimation problem. Both quantitative and qualitative experiments with benchmark datasets and video collected on campus were conducted, confirming that the proposed method outperforms competing approaches on many evaluation metrics.
5

Cui, Yan Qiu, Tao Zhang, Shuang Xu, and Hou Jie Li. "Bayesian Image Denoising Using an Anisotropic Markov Random Field Model." Key Engineering Materials 467-469 (February 2011): 2018–23. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.2018.

Abstract:
This paper presents a Bayesian denoising method based on an anisotropic Markov random field (MRF) model in the wavelet domain, with the aim of improving image denoising performance and reducing computational complexity. The classical single-resolution image restoration method using MRFs and maximum a posteriori (MAP) estimation is extended to the wavelet domain. To obtain an accurate MAP estimate, a novel anisotropic MRF model is proposed under this framework. Compared to a simple isotropic MRF model, the new model captures the intrascale dependencies of wavelet coefficients significantly better. Simulation results demonstrate that the proposed method achieves good denoising performance while reducing computational complexity.
6

Xu, Feng, Tanghuai Fan, Chenrong Huang, Xin Wang, and Lizhong Xu. "Block-Based MAP Superresolution Using Feature-Driven Prior Model." Mathematical Problems in Engineering 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/508357.

Abstract:
In the field of image super-resolution reconstruction (SRR), a prior can be employed to solve the ill-posed problem. However, the prior model is usually selected empirically and characterizes the entire image, so that local image features cannot be represented accurately. This paper proposes a feature-driven prior model that relies on features of the image and introduces a block-based maximum a posteriori (MAP) framework under which the image is split into several blocks to perform SRR. The local features of the image can therefore be characterized more accurately, which results in better SRR. In the process of recombining the super-resolution blocks, we also design a border-expansion strategy to remove a byproduct, namely cross artifacts. Experimental results show that the proposed method is effective.
7

YANG, WENJIA, LIHUA DOU, and JUAN ZHAN. "A MULTI-HISTOGRAM CLUSTERING APPROACH TOWARD MARKOV RANDOM FIELD FOR FOREGROUND SEGMENTATION." International Journal of Image and Graphics 11, no. 01 (January 2011): 65–81. http://dx.doi.org/10.1142/s0219467811003993.

Abstract:
This paper presents a Bayesian approach for foreground segmentation in monocular image sequences. To overcome the limitations of pixel-wise background modeling, spatial coherence and temporal persistency are formulated together with the background model under a maximum a posteriori probability (MAP)–Markov random field (MRF) statistical framework. A fuzzy clustering factor is introduced into the prior energy of the MRF in the new implementation scheme, so that contextual constraints can be adaptively adjusted in terms of feature cues. Experimental results for several image sequences are provided to demonstrate the effectiveness of the proposed approach.
8

Cui, Wenchao, Yi Wang, Tao Lei, Yangyu Fan, and Yan Feng. "Level Set Segmentation of Medical Images Based on Local Region Statistics and Maximum a Posteriori Probability." Computational and Mathematical Methods in Medicine 2013 (2013): 1–12. http://dx.doi.org/10.1155/2013/570635.

Abstract:
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. This local objective function is then integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Image segmentation and bias field estimation are therefore simultaneously achieved via a level set evolution process. Experimental results on synthetic and real images show the desirable performance of our method.
9

Pillow, Jonathan W., Yashar Ahmadian, and Liam Paninski. "Model-Based Decoding, Information Estimation, and Change-Point Detection Techniques for Multineuron Spike Trains." Neural Computation 23, no. 1 (January 2011): 1–45. http://dx.doi.org/10.1162/neco_a_00058.

Abstract:
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
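
Item (1) reduces, for a generic point-process encoding model, to maximizing a concave log posterior. Below is a compact sketch with an assumed exponential-nonlinearity Poisson model and a standard-normal stimulus prior (our stand-in, not the paper's code); because the objective is concave, a generic optimizer finds the global MAP estimate.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T, d = 200, 10
K = 0.2 * rng.standard_normal((T, d))     # hypothetical linear receptive fields
x_true = rng.standard_normal(d)           # stimulus to decode
spikes = rng.poisson(np.exp(K @ x_true))  # Poisson spike counts

def neg_log_post(x):                      # Poisson NLL + N(0, I) prior
    eta = K @ x
    return np.sum(np.exp(eta) - spikes * eta) + 0.5 * x @ x

def grad(x):
    return K.T @ (np.exp(K @ x) - spikes) + x

x_map = minimize(neg_log_post, np.zeros(d), jac=grad, method="BFGS").x
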
10

Liu, Jia, Mingyu Zhang, Chaoyong Wang, Rongjun Chen, Xiaofeng An, and Yufei Wang. "Upper Bound on the Bit Error Probability of Systematic Binary Linear Codes via Their Weight Spectra." Discrete Dynamics in Nature and Society 2020 (January 29, 2020): 1–11. http://dx.doi.org/10.1155/2020/1469090.

Abstract:
In this paper, an upper bound on the probability of maximum a posteriori (MAP) decoding error for systematic binary linear codes over additive white Gaussian noise (AWGN) channels is proposed. The bound on the bit error probability is derived within the framework of Gallager's first bounding technique (GFBT), where the Gallager region is defined to be an irregular high-dimensional geometry obtained by using a list decoding algorithm. The proposed bound requires only knowledge of the weight spectra, which is helpful when the input-output weight enumerating function (IOWEF) is not available. Numerical results show that the proposed bound on the bit error probability matches well with the maximum-likelihood (ML) decoding simulation approach, especially in the high signal-to-noise ratio (SNR) region, and is tighter than the recently proposed Ma bound.
11

Wu, Si, Danmei Chen, Mahesan Niranjan, and Shun-ichi Amari. "Sequential Bayesian Decoding with a Population of Neurons." Neural Computation 15, no. 5 (May 1, 2003): 993–1012. http://dx.doi.org/10.1162/089976603765202631.

Abstract:
Population coding is a simplified model of distributed information processing in the brain. This study investigates the performance and implementation of a sequential Bayesian decoding (SBD) paradigm in the framework of population coding. In the first step of decoding, when no prior knowledge is available, maximum likelihood inference is used; the result forms the prior knowledge of the stimulus for the second step of decoding. Estimates are propagated sequentially to apply maximum a posteriori (MAP) decoding, in which the prior knowledge for any step is taken from the estimates of the previous step. Not only do we analyze the performance of SBD, obtaining the optimal form of prior knowledge that achieves the best estimation result, but we also investigate its possible biological realization, in the sense that all operations are performed by the dynamics of a recurrent network. In order to achieve MAP, a crucial point is to identify a mechanism that propagates prior knowledge. We find that this can be achieved by short-term adaptation of network weights according to the Hebbian learning rule. Simulation results on both constant and time-varying stimuli support the analysis.
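
The propagation step at the heart of SBD — each step's posterior becoming the next step's prior — has a simple Gaussian caricature (ours; the paper realizes it with recurrent network dynamics and Hebbian weight adaptation rather than explicit update equations):

import numpy as np

rng = np.random.default_rng(3)
stim = 1.0                                # constant stimulus
prior_mu, prior_var = 0.0, 1e6            # step 1: flat prior, so MAP = ML
for step in range(10):
    obs = stim + 0.5 * rng.standard_normal()     # noisy population readout
    obs_var = 0.25
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs / obs_var)
    prior_mu, prior_var = post_mu, post_var      # posterior becomes the next prior
print(post_mu, post_var)                  # variance shrinks as evidence accumulates
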
12

Kiranagi, Manasi, Devika Dhoble, Madeeha Tahoor, and Dr Rekha Patil. "Finding Optimal Path and Privacy Preserving for Wireless Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 10 (October 31, 2022): 360–65. http://dx.doi.org/10.22214/ijraset.2022.46949.

Abstract:
Privacy-preserving routing protocols in wireless networks frequently utilize additional artificial traffic to hide the source-destination identities of the communicating pair. Usually, the artificial traffic is added heuristically, with no guarantee that the transmission cost, latency, etc., are optimized for every network topology. We explicitly examine the privacy-utility trade-off problem for wireless networks and develop a novel privacy-preserving routing algorithm called the Optimal Privacy Enhancing Routing Algorithm (OPERA). OPERA uses a statistical decision-making framework to optimize the privacy of the routing protocol given a utility (or cost) constraint. We consider global adversaries with both lossless and lossy observations that use the Bayesian maximum-a-posteriori (MAP) estimation strategy. We formulate the privacy-utility trade-off problem as a linear program which can be efficiently solved.
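
The linear-program formulation can be sketched generically with scipy; all sizes, costs, and constraints below are invented placeholders rather than OPERA's actual variables:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n = 6                              # hypothetical candidate routes
leak = rng.random(n)               # toy per-route MAP-adversary success probability
cost = 10.0 * rng.random(n)        # toy per-route transmission cost
budget = 1.2 * cost.min()          # toy utility (cost) budget, chosen feasible

res = linprog(leak,                                 # minimize expected privacy leakage
              A_ub=cost[None, :], b_ub=[budget],    # stay within the cost budget
              A_eq=np.ones((1, n)), b_eq=[1.0],     # route probabilities sum to 1
              bounds=[(0.0, 1.0)] * n)
print(res.x)                       # optimal randomized routing distribution
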
13

Duan, Hai Jun, Guang Min Wu, Dan Liu, John D. Mai, and Jian Ming Chen. "Influence of Clique Potential Parameters on Classification Using Bayesian MRF Model for Remote Sensing Image in Dali Erhai Basin." Advanced Materials Research 658 (January 2013): 508–12. http://dx.doi.org/10.4028/www.scientific.net/amr.658.508.

Abstract:
Image classification of remote sensing data is an important topic and a long-standing task in applications [1]. Markov random fields (MRFs) have clear advantages in processing contextual information [2]. Since the Bayesian approach enables the incorporation of a prior model and a likelihood distribution, this paper formulates a Bayesian-MRF classification model based on the MAP-ICM framework. It uses the Potts model in the label field and assumes Gaussian distributions in the observation field. According to the maximum a posteriori (MAP) criterion, each new class label is obtained by minimizing the energy using the Iterated Conditional Modes (ICM) algorithm. Classification is then carried out with the Bayesian-MRF model. Experimental results show that: (1) Clique potential parameters affect classification greatly; when the parameter is 0.5, the classification accuracy reaches its maximum, giving the best classification result for the study area of the Dali Erhai Lake basin using Landsat TM data. (2) The Bayesian MRF model has clear advantages in classifying neighbouring pixels, separating the Shadow class from the Water class even though shadow in mountain areas is spectrally very similar to water. In this case study, the best classification accuracy reaches 95.8%. The approach and results provide an important reference for applications such as land use/cover classification and environmental/ecological monitoring.
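
A bare-bones version of the MAP-ICM loop with a Potts prior and Gaussian observation field (a toy reimplementation, not the study's remote sensing pipeline) looks like this:

import numpy as np

rng = np.random.default_rng(5)
K, beta, size = 3, 0.5, 32
means = np.array([0.0, 1.0, 2.0])                     # per-class means
truth = rng.integers(0, K, (size, size))
img = means[truth] + 0.4 * rng.standard_normal((size, size))

labels = np.abs(img[..., None] - means).argmin(-1)    # init with ML labels
for _ in range(5):                                    # ICM sweeps
    for i in range(size):
        for j in range(size):
            nbrs = [labels[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < size and 0 <= b < size]
            data = 0.5 * (img[i, j] - means) ** 2     # Gaussian data energy
            potts = beta * np.array([sum(n != k for n in nbrs) for k in range(K)])
            labels[i, j] = int((data + potts).argmin())
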
14

Price, M. A., J. D. McEwen, L. Pratley, and T. D. Kitching. "Sparse Bayesian mass-mapping with uncertainties: Full sky observations on the celestial sphere." Monthly Notices of the Royal Astronomical Society 500, no. 4 (November 17, 2020): 5436–52. http://dx.doi.org/10.1093/mnras/staa3563.

Abstract:
To date, weak gravitational lensing surveys have typically been restricted to small fields of view, such that the flat-sky approximation has been sufficiently satisfied. However, with Stage IV surveys (e.g. LSST and Euclid) imminent, extending mass-mapping techniques to the sphere is a fundamental necessity. As such, we extend the sparse hierarchical Bayesian mass-mapping formalism presented in previous work to the spherical sky. For the first time, this allows us to construct maximum a posteriori spherical weak lensing dark-matter mass-maps, with principled Bayesian uncertainties, without imposing or assuming Gaussianity. We solve the spherical mass-mapping inverse problem in the analysis setting, adopting a sparsity-promoting Laplace-type wavelet prior, though this theoretical framework supports all log-concave posteriors. Our spherical mass-mapping formalism facilitates principled statistical interpretation of reconstructions. We apply our framework to convergence reconstruction on high-resolution N-body simulations with pseudo-Euclid masking, polluted with a variety of realistic noise levels, and show a significant increase in reconstruction fidelity compared to standard approaches. Furthermore, we perform the largest joint reconstruction to date of the majority of publicly available shear observational data sets (combining DESY1, KiDS450, and CFHTLens) and find that our formalism recovers a convergence map with significantly enhanced small-scale detail. Within our Bayesian framework we validate, in a statistically rigorous manner, the community's intuition regarding the need to smooth spherical Kaiser-Squires estimates to provide physically meaningful convergence maps. Such approaches cannot reveal the small-scale physical structures that we recover within our framework.
15

Sun, Xiaoqiang, Yulin Wang, and Weiwei Hu. "Estimation of Longitudinal Force, Sideslip Angle and Yaw Rate for Four-Wheel Independent Actuated Autonomous Vehicles Based on PWA Tire Model." Sensors 22, no. 9 (April 29, 2022): 3403. http://dx.doi.org/10.3390/s22093403.

Abstract:
This article introduces an efficient and high-precision estimation framework for four-wheel independently actuated (FWIA) autonomous vehicles based on a novel tire model and an adaptive square-root cubature Kalman filter (SCKF) estimation strategy. Firstly, a reliable and concise tire model that considers the tire's nonlinear mechanical characteristics under combined conditions, identified through the piecewise affine (PWA) method, is established to improve the accuracy of the lateral dynamics model of FWIA autonomous vehicles. On this basis, the longitudinal relaxation length of each tire is integrated into the lateral dynamics modeling of the FWIA autonomous vehicle, and a novel nonlinear state function including the PWA tire model is proposed. To reduce the impact of uncertain noise statistics on estimation accuracy, an adaptive SCKF algorithm based on the maximum a posteriori (MAP) criterion is proposed within the estimation framework. Finally, the estimation accuracy and stability of the adaptive SCKF algorithm are verified by co-simulation of CarSim and Simulink. The simulation results show that when the statistical characteristics of the noise are unknown and the target state changes suddenly under critical maneuvers, the proposed estimation framework still maintains high accuracy and stability.
16

Yang, Jungang, Tian Jin, Chao Xiao, and Xiaotao Huang. "Compressed Sensing Radar Imaging: Fundamentals, Challenges, and Advances." Sensors 19, no. 14 (July 13, 2019): 3100. http://dx.doi.org/10.3390/s19143100.

Abstract:
In recent years, sparsity-driven regularization and compressed sensing (CS)-based radar imaging methods have attracted significant attention. This paper provides an introduction to the fundamental concepts of this area. In addition, we describe both sparsity-driven regularization and CS-based radar imaging methods, along with other approaches, in a unified mathematical framework. This provides readers with a systematic overview of radar imaging theories and methods from a clear mathematical viewpoint. The methods presented in this paper include minimum variance unbiased estimation, least squares (LS) estimation, Bayesian maximum a posteriori (MAP) estimation, matched filtering, regularization, and CS reconstruction. The characteristics of these methods and their connections are also analyzed. Sparsity-driven regularization and CS-based radar imaging methods represent an active research area; there are still many unsolved or open problems, such as the sampling scheme, computational complexity, sparse representation, influence of clutter, and model error compensation. We summarize these challenges as well as recent advances related to these issues.
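
For the Bayesian MAP entry in that list, a Laplacian prior yields the familiar l1-regularized reconstruction problem. A minimal ISTA sketch (generic, with an invented measurement matrix rather than a radar operator):

import numpy as np

rng = np.random.default_rng(6)
m, n = 60, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)      # toy measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)   # sparse scene
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(300):
    g = x - step * (A.T @ (A @ x - y))            # gradient step on the data term
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold (prior)
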
17

Babaniyi, Olalekan, Ruanui Nicholson, Umberto Villa, and Noémi Petra. "Inferring the basal sliding coefficient field for the Stokes ice sheet model under rheological uncertainty." Cryosphere 15, no. 4 (April 9, 2021): 1731–50. http://dx.doi.org/10.5194/tc-15-1731-2021.

Abstract:
We consider the problem of inferring the basal sliding coefficient field for an uncertain Stokes ice sheet forward model from synthetic surface velocity measurements. The uncertainty in the forward model stems from unknown (or uncertain) auxiliary parameters (e.g., rheology parameters). This inverse problem is posed within the Bayesian framework, which provides a systematic means of quantifying uncertainty in the solution. To account for the associated model uncertainty (error), we employ the Bayesian approximation error (BAE) approach to approximately premarginalize simultaneously over both the noise in measurements and uncertainty in the forward model. We also carry out approximative posterior uncertainty quantification based on a linearization of the parameter-to-observable map centered at the maximum a posteriori (MAP) basal sliding coefficient estimate, i.e., by taking the Laplace approximation. The MAP estimate is found by minimizing the negative log posterior using an inexact Newton conjugate gradient method. The gradient and Hessian actions on vectors are efficiently computed using adjoints. Sampling from the approximate covariance is made tractable by invoking a low-rank approximation of the data misfit component of the Hessian. We study the performance of the BAE approach in the context of three numerical examples in two and three dimensions. For each example, the basal sliding coefficient field is the parameter of primary interest which we seek to infer, and the rheology parameters (e.g., the flow rate factor or the Glen's flow law exponent coefficient field) represent so-called nuisance (secondary uncertain) parameters. Our results indicate that accounting for model uncertainty stemming from the presence of nuisance parameters is crucial. Namely, our findings suggest that using nominal values for these parameters, as is often done in practice, without taking into account the resulting modeling error, can lead to overconfident and heavily biased results. We also show that the BAE approach can be used to account for the additional model uncertainty at no additional cost at the online stage.
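
The MAP-then-Laplace workflow has a compact generic analogue. The toy linear-Gaussian problem below stands in for the Stokes model and its adjoint-based gradient and Hessian actions:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
F = rng.standard_normal((30, 4))          # toy parameter-to-observable map
y = rng.standard_normal(30)               # synthetic observations

def neg_log_post(m):                      # data misfit + Gaussian prior
    r = F @ m - y
    return 0.5 * r @ r + 0.5 * m @ m

def grad(m):                              # in the paper, computed via adjoints
    return F.T @ (F @ m - y) + m

def hessp(m, v):                          # Hessian action on a vector
    return F.T @ (F @ v) + v

m_map = minimize(neg_log_post, np.zeros(4), jac=grad, hessp=hessp,
                 method="Newton-CG").x
H = F.T @ F + np.eye(4)                   # Hessian at the MAP point
post_cov = np.linalg.inv(H)               # Laplace approximation of the posterior covariance
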
18

Wu, Lizhen, Yifeng Niu, and Lincheng Shen. "Contextual Hierarchical Part-Driven Conditional Random Field Model for Object Category Detection." Mathematical Problems in Engineering 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/671397.

Abstract:
Even though several promising approaches have been proposed in the literature, generic category-level object detection is still challenging due to high intraclass variability and ambiguity in appearance among different object instances. From the view of constructing object models, the balance between flexibility and discrimination must be taken into consideration. Motivated by these demands, we propose a novel contextual hierarchical part-driven conditional random field (CRF) model, which is based not only on the appearance of individual object parts but also models contextual interactions among the parts simultaneously. By using a latent two-layer hierarchical formulation of labels and a weighted neighborhood structure, the model can effectively encode the dependencies among object parts. Meanwhile, beta-stable local features are introduced as observed data to ensure the discriminativeness and robustness of the part description. The object category detection problem is solved in a probabilistic framework using a supervised learning method based on maximum a posteriori (MAP) estimation. The benefits of the proposed model are demonstrated on a standard dataset and satellite images.
19

Oliva, A., S. Pulicani, V. Lefort, L. Bréhélin, O. Gascuel, and S. Guindon. "Accounting for ambiguity in ancestral sequence reconstruction." Bioinformatics 35, no. 21 (April 12, 2019): 4290–97. http://dx.doi.org/10.1093/bioinformatics/btz249.

Abstract:
Motivation: The reconstruction of ancestral genetic sequences from the analysis of contemporaneous data is a powerful tool to improve our understanding of molecular evolution. Various statistical criteria defined in a phylogenetic framework can be used to infer nucleotide, amino-acid or codon states at internal nodes of the tree, for every position along the sequence. These criteria generally select the state that maximizes (or minimizes) a given criterion. Although perfectly sensible from a statistical perspective, that strategy fails to convey useful information about the level of uncertainty associated with the inference. Results: The present study introduces a new criterion for ancestral sequence reconstruction, the minimum posterior expected error (MPEE), that selects a single state whenever the signal conveyed by the data is strong, and a combination of multiple states otherwise. We also assess the performance of a criterion based on the Brier scoring scheme which, like MPEE, does not rely on any tuning parameters. The precision and accuracy of several other criteria that involve arbitrarily set tuning parameters are also evaluated. Large-scale simulations demonstrate the benefits of using the MPEE and Brier-based criteria, with a substantial increase in the accuracy of the inference of past sequences compared to the standard approach and realistic compromises on the precision of the solutions returned. Availability and implementation: The software package PhyML (https://github.com/stephaneguindon/phyml) provides an implementation of the maximum a posteriori (MAP) and MPEE criteria for reconstructing ancestral nucleotide and amino-acid sequences.
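
The contrast between a single-state MAP call and a set-valued call can be made concrete in a few lines. Note that the set-building rule below is a simplified stand-in: the actual MPEE criterion minimizes a posterior expected error rather than applying a cumulative-mass cutoff.

import numpy as np

states = np.array(["A", "C", "G", "T"])
post = np.array([0.45, 0.40, 0.10, 0.05])     # posterior at one node and site

map_state = states[post.argmax()]              # MAP: always a single state

# Set-valued call: smallest set of states reaching a target posterior mass
order = post.argsort()[::-1]
k = int(np.searchsorted(post[order].cumsum(), 0.8)) + 1
ambiguous_call = states[order[:k]]
print(map_state, ambiguous_call)               # A  vs  ['A' 'C']
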
20

BAYAR, BELHASSEN, NIDHAL BOUAYNAYA, and ROMAN SHTERENBERG. "PROBABILISTIC NON-NEGATIVE MATRIX FACTORIZATION: THEORY AND APPLICATION TO MICROARRAY DATA ANALYSIS." Journal of Bioinformatics and Computational Biology 12, no. 01 (January 28, 2014): 1450001. http://dx.doi.org/10.1142/s0219720014500012.

Abstract:
Non-negative matrix factorization (NMF) has proven to be a useful decomposition technique for multivariate data, where the non-negativity constraint is necessary to have a meaningful physical interpretation. NMF reduces the dimensionality of non-negative data by decomposing it into two smaller non-negative factors with physical interpretation for class discovery. The NMF algorithm, however, assumes a deterministic framework. In particular, the effect of data noise on the stability of the factorization and the convergence of the algorithm are unknown. Collected data, on the other hand, is stochastic in nature due to measurement noise and sometimes inherent variability in the physical process. This paper presents new theoretical and applied developments for the NMF problem. First, we generalize the deterministic NMF algorithm to include a general class of update rules that converges towards an optimal non-negative factorization. Second, we extend the NMF framework to the probabilistic case (PNMF). We show that the maximum a posteriori (MAP) estimate of the non-negative factors is the solution to a weighted regularized non-negative matrix factorization problem. We subsequently derive update rules that converge towards an optimal solution. Third, we apply the PNMF to cluster and classify DNA microarray data. The proposed PNMF is shown to outperform the deterministic NMF and the sparse NMF algorithms in clustering stability and classification accuracy.
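
The MAP-as-regularized-factorization result can be illustrated with standard multiplicative updates plus a Tikhonov term (our simplified variant; the paper derives its own weighted regularized update rules):

import numpy as np

rng = np.random.default_rng(8)
V = rng.random((40, 30))                  # non-negative data (e.g. expression levels)
k, lam, eps = 5, 0.1, 1e-9                # rank, regularization weight, safeguard
W, H = rng.random((40, k)), rng.random((k, 30))

for _ in range(200):                      # regularized multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + lam * H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + lam * W + eps)
print(np.linalg.norm(V - W @ H))          # residual of the learned factorization
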
21

Chen, Chi-Kan. "Inference of gene networks from gene expression time series using recurrent neural networks and sparse MAP estimation." Journal of Bioinformatics and Computational Biology 16, no. 04 (August 2018): 1850009. http://dx.doi.org/10.1142/s0219720018500099.

Abstract:
Background: The inference of genetic regulatory networks (GRNs) provides insight into the cellular responses to signals. A class of recurrent neural networks (RNNs) capturing the dynamics of GRNs has been used as a basis for inferring small-scale GRNs from gene expression time series. The Bayesian framework facilitates incorporating the hypothesis of a GRN into the model estimation to improve the accuracy of GRN inference. Results: We present new methods for inferring small-scale GRNs based on RNNs. The wire weights of the RNN represent the strengths of gene-to-gene regulatory interactions. We use a class of automatic relevance determination (ARD) priors to enforce sparsity in the maximum a posteriori (MAP) estimates of the wire weights, and a particle swarm optimization (PSO) is integrated as an optimization engine into the MAP estimation process. Likely networks of genes generated from the estimated wire weights are combined using the majority rule to determine a final estimated GRN. As an alternative, a class of [Formula: see text]-norm ([Formula: see text]) priors is used to attain sparse MAP estimates of the wire weights. We also infer the GRN using maximum likelihood (ML) estimates of the wire weights. The RNN-based GRN inference algorithms, ARD-RNN, [Formula: see text]-RNN, and ML-RNN, are tested on simulated and experimental E. coli and yeast time series containing 6–11 genes and 7–19 data points. Published GRN inference algorithms based on regressions and mutual information networks are run on the benchmark datasets for performance comparison. Conclusion: ARD and [Formula: see text]-norm priors are used for the estimation of the wire weights. GRN inference experiments show that ARD-RNN and [Formula: see text]-RNN have similar best accuracies on the simulated time series. ARD-RNN is more accurate than [Formula: see text]-RNN and ML-RNN, and mostly more accurate than the reference algorithms, on the experimental time series. The effectiveness of ARD-RNN for inferring small-scale GRNs from gene expression time series of limited length is empirically verified.
22

Zhang, Peng, Shuyu Zhou, Peng Liu, and Mengwei Li. "Distributed Ellipsoidal Intersection Fusion Estimation for Multi-Sensor Complex Systems." Sensors 22, no. 11 (June 6, 2022): 4306. http://dx.doi.org/10.3390/s22114306.

Abstract:
This paper investigates the problem of distributed ellipsoidal intersection (DEI) fusion estimation for linear time-varying multi-sensor complex systems with unknown input disturbances and measurement data transmission delays. A non-informative prior distribution is used to model the external unknown input disturbance signals. A set of independent random variables obeying the Bernoulli distribution describes the measurement data transmission delays caused by network channel congestion, and appropriate buffer areas are added at the link nodes to retrieve the delayed transmission data values. For multi-sensor systems with such complications, a minimum mean square error (MMSE) local estimator is designed in a Bayesian framework based on the maximum a posteriori (MAP) estimation criterion. To deal with the unknown correlations among the local estimators and to obtain a fusion estimator with lower computational complexity, the fusion estimator is designed using the ellipsoidal intersection (EI) fusion technique, and the consistency of the estimator is demonstrated. The differences between DEI fusion and both distributed covariance intersection (DCI) fusion and centralized fusion estimation are analyzed through a numerical example, demonstrating the superiority of the DEI fusion method.
23

Leung, Raymond, Alexander Lowe, Anna Chlingaryan, Arman Melkumyan, and John Zigman. "Bayesian Surface Warping Approach for Rectifying Geological Boundaries Using Displacement Likelihood and Evidence from Geochemical Assays." ACM Transactions on Spatial Algorithms and Systems 8, no. 1 (March 31, 2022): 1–23. http://dx.doi.org/10.1145/3476979.

Abstract:
This article presents a Bayesian framework for manipulating mesh surfaces with the aim of improving the positional integrity of the geological boundaries that they seek to represent. The assumption is that these surfaces, created initially using sparse data, capture the global trend and provide a reasonable approximation of the stratigraphic, mineralization, and other types of boundaries for mining exploration, but they are locally inaccurate at scales typically required for grade estimation. The proposed methodology makes local spatial corrections automatically to maximize the agreement between the modeled surfaces and observed samples. Where possible, vertices on a mesh surface are moved to provide a clear delineation, for instance, between ore and waste material across the boundary based on spatial and compositional analysis using assay measurements collected from densely spaced, geo-registered blast holes. The maximum a posteriori (MAP) solution ultimately considers the chemistry observation likelihood in a given domain. Furthermore, it is guided by an a priori spatial structure that embeds geological domain knowledge and determines the likelihood of a displacement estimate. The results demonstrate that increasing surface fidelity can significantly improve grade estimation performance based on large-scale model validation.
24

Han, J., S. L. Zhang, and Z. Ye. "COMBINED PATCH-WISE MINIMAL-MAXIMAL PIXELS REGULARIZATION FOR DEBLURRING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2020 (August 3, 2020): 17–23. http://dx.doi.org/10.5194/isprs-annals-v-1-2020-17-2020.

Abstract:
Deblurring is a vital image pre-processing procedure for improving the quality of images; it is a classical ill-posed problem. A new blind deblurring method based on an image sparsity prior is proposed here. The proposed prior combines patch-wise minimal and maximal pixels of the latent image, and gradually increases image sparsity during deblurring. An algorithm that differs from the half-quadratic splitting algorithm is applied under the maximum a posteriori (MAP) framework. Experimental results demonstrate that the proposed method keeps more subtle texture and sharper edges and reduces visual artefacts, and the corresponding evaluation indexes perform favourably against those of state-of-the-art methods on synthesized, natural and remote sensing images (RSI).
25

Jin, Qiwen, Yong Ma, Erting Pan, Fan Fan, Jun Huang, Hao Li, Chenhong Sui, and Xiaoguang Mei. "Hyperspectral Unmixing with Gaussian Mixture Model and Spatial Group Sparsity." Remote Sensing 11, no. 20 (October 20, 2019): 2434. http://dx.doi.org/10.3390/rs11202434.

Abstract:
In recent years, endmember variability has received much attention in the field of hyperspectral unmixing. To address the inaccuracy of endmember signatures, the endmembers are usually modeled as following a statistical distribution. However, such distribution-based methods use the spectral information alone and do not fully exploit possible local spatial correlation. When pixels lie in an inhomogeneous region, the abundances of neighboring pixels do not share the same prior constraints. Thus, in this paper, to achieve better abundance estimation performance, a method based on the Gaussian mixture model (GMM) and a spatial group sparsity constraint is proposed. To fully exploit the group structure, we use superpixel segmentation (SS) as preprocessing to generate the spatial groups. Then, we use the GMM to model the endmember distribution, incorporating the spatial group sparsity as a mixed-norm regularization into the objective function. Finally, under the Bayesian framework, the conditional density function leads to a standard maximum a posteriori (MAP) problem, which can be solved using generalized expectation-maximization (GEM). Experiments on simulated and real hyperspectral data demonstrate that the proposed algorithm has higher unmixing precision than other state-of-the-art methods.
26

Xu, Aidong, Wenqi Huang, Peng Li, Huajun Chen, Jiaxiao Meng, and Xiaobin Guo. "Mechanical Vibration Signal Denoising Using Quantum-Inspired Standard Deviation Based on Subband Based Gaussian Mixture Model." Shock and Vibration 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/5169070.

Abstract:
Aiming at improving the noise reduction effect for mechanical vibration signals, a subband-based Gaussian mixture model (SGMM) and a quantum-inspired standard deviation (QSD) are proposed and applied to a denoising method using a thresholding function in the wavelet domain. Firstly, the SGMM is presented and utilized as a local distribution to approximate the wavelet coefficient distribution in each subband. Then, within a Bayesian framework, the maximum a posteriori (MAP) estimator is employed to derive a thresholding function with a conventional standard deviation (CSD), which is calculated by the expectation-maximization (EM) algorithm. However, the CSD has the disadvantage of ignoring the interscale dependency between wavelet coefficients. Considering this limitation of the CSD, quantum theory is adopted to analyze the interscale dependency between coefficients in adjacent subbands, and the QSD for noise-free wavelet coefficients is presented based on quantum mechanics. Next, the QSD is substituted for the CSD in the thresholding function to shrink noisy coefficients. Finally, an application to mechanical vibration signal processing is used to illustrate the denoising technique. The experimental study shows that the SGMM can model the distribution of wavelet coefficients accurately and the QSD can depict the interscale dependency of the wavelet coefficients of the true signal quite successfully. Therefore, the denoising method utilizing the SGMM and QSD performs better than others.
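
The role a standard deviation plays inside a wavelet thresholding function can be seen in a minimal BayesShrink-style sketch — a classical baseline, not the SGMM/QSD estimator itself:

import numpy as np

rng = np.random.default_rng(9)
clean = np.sin(np.linspace(0, 8 * np.pi, 256))            # toy "vibration" signal
subband = clean + 0.3 * rng.standard_normal(256)          # pretend: one noisy subband

noise_var = 0.3 ** 2                                      # assumed known here
sig_std = np.sqrt(max(subband.var() - noise_var, 1e-12))  # signal std from subband stats
T = noise_var / sig_std                                   # MAP-derived (BayesShrink) threshold
denoised = np.sign(subband) * np.maximum(np.abs(subband) - T, 0.0)
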
27

Sumaiya, M. N., and R. Shantha Selva Kumari. "Comparative study of statistical modeling of wavelet coefficients for SAR image despeckling." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 01 (January 2017): 1750003. http://dx.doi.org/10.1142/s0219691317500035.

Abstract:
This work concentrates on achieving good despeckling performance under a non-homomorphic framework by employing simple distributions to obtain closed-form maximum-a-posteriori (MAP) solutions. To estimate the noise-free wavelet coefficients, distributions with fewer parameters are employed: a Laplacian/Gaussian probability density function (pdf) for the signal reflectivity and a Rayleigh/Gaussian pdf for the noisy signal. Four despeckling methods are thus formed, namely LRMAP, GRMAP, LGMAP and GGMAP, to despeckle SAR images and study the effectiveness of the different distributions. The despeckling method is made adaptive by estimating the local variance of the high-frequency image using wavelet sub-band coefficient statistics. Moreover, the parameter space used in the proposed methods does not involve initialization, iterative search, or convergence problems. The performance of the proposed methods is evaluated in terms of the Equivalent Number of Looks (ENL), Peak Signal-to-Noise Ratio (PSNR), Edge Preservation ([Formula: see text]), and Mean Structural Similarity Index Measure (MSSIM). Experimental results show that LRMAP yields the best results among all the methods, while GGMAP does not perform well for all images in terms of all quality metrics. The proposed methods also yield good quality metrics in less computing time compared with methods available in the literature.
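
Why simple priors give closed forms is easiest to see in the Gaussian-Gaussian case (a generic toy computation, not the LRMAP/GGMAP derivations themselves): the MAP estimate is a precision-weighted blend with no iterative search.

import numpy as np

mu, sig2, noise2 = 0.5, 1.0, 0.25                   # toy prior mean/variance, noise variance
y = np.array([0.1, 0.9, 2.0])                       # noisy coefficients (toy values)
x_map = (sig2 * y + noise2 * mu) / (sig2 + noise2)  # closed-form Gaussian-Gaussian MAP
print(x_map)
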
28

Kittisuwan, Pichid. "Speckle Noise Reduction of Medical Imaging via Logistic Density in Redundant Wavelet Domain." International Journal on Artificial Intelligence Tools 27, no. 02 (March 2018): 1850006. http://dx.doi.org/10.1142/s0218213018500069.

Abstract:
In the digital world, artificial intelligence tools and machine learning algorithms are widely applied to the analysis of medical images for identifying diseases and making diagnoses, for example through recognition and classification. Speckle noise affects all medical imaging systems. Reducing the corrupting speckle noise is therefore very important, since it deteriorates the quality of medical images and makes tasks such as recognition and classification difficult. Most existing denoising algorithms have been developed for additive white Gaussian noise (AWGN); however, speckle is not AWGN. Therefore, this work presents a novel speckle noise removal algorithm within the framework of Bayesian estimation and wavelet analysis. The research focuses on Bayesian wavelet-based noise reduction because it provides good denoising efficiency with short processing time. The subband decomposition of a logarithmically transformed image is best described by a family of heavy-tailed densities such as the Logistic distribution. This research therefore proposes a maximum a posteriori (MAP) estimator assuming Logistic random vectors for each parent-child wavelet coefficient pair of the noise-free log-transformed data and a log-normal density for the speckle noise. Moreover, a redundant wavelet transform, i.e., the cycle-spinning method, is applied in the proposed methods. In our experiments, the proposed methods give promising denoising results.
29

Rafsanjani, Seyed Hadi Hashemi, and Saeed Ghazi Maghrebi. "A new framework for utilizing side information in sparse representation." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 1 (October 1, 2021): 403. http://dx.doi.org/10.11591/ijeecs.v24.i1.pp403-409.

Abstract:
An underdetermined system of linear equations has infinitely many solutions. To find a specific solution, a regularization method is used. For this purpose, we define a cost function based on the desired features of the solution, and the answer that best matches this function is selected as the desired solution. In the case of a sparse solution, the zero-norm function is selected as the cost function. In many engineering problems, however, there is side information that is discarded because of the zero-norm function. Finding a way to overcome this limitation of the zero-norm function helps improve the estimation of the desired parameter. In this regard, we utilize maximum a posteriori (MAP) estimation and modify the prior information such that both sparsity and side information are exploited. As a consequence, a framework for incorporating side information into sparse representation algorithms is proposed. We also test the proposed framework on the orthogonal frequency division multiplexing (OFDM) sparse channel estimation problem, which indicates that, by utilizing the proposed framework, the performance of the system improves and fewer resources are required for estimating the channel.
30

Shokrollahi, Yasin, Pengfei Dong, Mehmet Kaya, Donny W. Suh, and Linxia Gu. "Rapid Prediction of Retina Stress and Strain Patterns in Soccer-Related Ocular Injury: Integrating Finite Element Analysis with Machine Learning Approach." Diagnostics 12, no. 7 (June 23, 2022): 1530. http://dx.doi.org/10.3390/diagnostics12071530.

Abstract:
Soccer-related ocular injuries, especially retinal injuries, have attracted increasing attention. The mechanics of a flying soccer ball induce abnormally high retinal stresses and strains, and their correlation with retinal injuries has been characterized using the finite element (FE) method. However, FE simulations demand solid mechanical expertise and extensive computational time, both of which are difficult to accommodate in clinical settings. This study proposes a framework that combines FE analysis with a machine learning (ML) approach for the fast prediction of retina mechanics. Different impact scenarios were simulated using the FE method to obtain the von Mises stress map and the maximum principal strain map in the posterior retina. These stress and strain patterns, along with their input parameters, were used to train and test a partial least squares regression (PLSR) model to predict soccer-induced retina stress and strain in terms of distributions and peak magnitudes. The peak von Mises stress and maximum principal strain prediction errors were 3.03% and 9.94% for the frontal impact and 9.08% and 16.40% for the diagonal impact, respectively. The average prediction errors of von Mises stress and maximum principal strain were 15.62% and 21.15% for frontal impacts and 10.77% and 21.78% for diagonal impacts, respectively. This work provides a surrogate model of FE analysis for the fast prediction of the dynamic mechanics of the retina in response to soccer impact, which could be further utilized to develop a diagnostic tool for soccer-related ocular trauma.
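
The surrogate-model step can be sketched with scikit-learn on synthetic data; every input and output below is an invented stand-in for the FE impact parameters and the flattened stress/strain maps:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(10)
X = rng.random((80, 3))                   # e.g. ball speed, angle, inflation (invented)
B = rng.standard_normal((3, 500))
Y = X @ B + 0.05 * rng.standard_normal((80, 500))   # e.g. flattened stress/strain maps

pls = PLSRegression(n_components=3).fit(X, Y)
Y_new = pls.predict(rng.random((5, 3)))   # near-instant prediction for unseen impacts
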
31

Panda, Susmita, and Pradipta Kumar Nanda. "MRF Model-Based Estimation of Camera Parameters and Detection of Underwater Moving Objects." International Journal of Cognitive Informatics and Natural Intelligence 14, no. 4 (October 2020): 1–29. http://dx.doi.org/10.4018/ijcini.2020100101.

Abstract:
The detection of underwater objects in a video is a challenging problem, particularly when both the camera and the objects are in motion. In this article, this problem is conceived as an incomplete-data problem and is hence formulated in an expectation-maximization (EM) framework. In the E-step, the frame labels are the maximum a posteriori (MAP) estimates, which are obtained using simulated annealing (SA) and the iterated conditional modes (ICM) algorithm. In the M-step, the camera model parameters, both intrinsic and extrinsic, are estimated. For parameter estimation, features are extracted at coarse and fine scales. In order to continuously detect the object across video frames, the EM algorithm is repeated for each frame. The performance of the proposed scheme has been compared with other algorithms, and the proposed algorithm is found to outperform them.
32

Tan, Ke, Xingyu Lu, Jianchao Yang, Weimin Su, and Hong Gu. "A Novel Bayesian Super-Resolution Method for Radar Forward-Looking Imaging Based on Markov Random Field Model." Remote Sensing 13, no. 20 (October 14, 2021): 4115. http://dx.doi.org/10.3390/rs13204115.

Abstract:
Super-resolution technology is considered an efficient approach to improving the image quality of forward-looking imaging radar. However, super-resolution is inherently an ill-conditioned problem whose solution is quite susceptible to noise. Bayesian methods can efficiently alleviate this issue by utilizing prior knowledge of the imaging process, in which scene prior information plays a significant role in ensuring imaging accuracy. In this paper, we propose a novel Bayesian super-resolution method based on a Markov random field (MRF) model. Compared with traditional super-resolution methods focused on one-dimensional (1-D) echo processing, the MRF model adopted in this study strives to exploit the two-dimensional (2-D) prior information of the scene. Using the MRF model, the 2-D spatial structural characteristics of the imaging scene can be well described and utilized through the nth-order neighborhood system. The imaging objective function is then constructed through the maximum a posteriori (MAP) framework. Finally, an accelerated iterative thresholding/shrinkage method is utilized to optimize the objective function. Validation experiments using both synthetic echoes and measured data demonstrate that the new MAP-MRF method exceeds other benchmark approaches in terms of artifact suppression and contour recovery.
33

Chow, J. C. K. "DRIFT-FREE INDOOR NAVIGATION USING SIMULTANEOUS LOCALIZATION AND MAPPING OF THE AMBIENT HETEROGENEOUS MAGNETIC FIELD." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 339–44. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-339-2017.

Abstract:
In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems), Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy-inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution with millimetre-level accuracy can be achieved in an indoor environment. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem, and it naturally performs loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in use. The only restriction of this method is the need for magnetic disturbances (which are typically not an issue in indoor environments); no assumptions are required about the general motion of the sensor (e.g. static periods).
34

Yi, Nengjun, and Shizhong Xu. "Bayesian Mapping of Quantitative Trait Loci Under the Identity-by-Descent-Based Variance Component Model." Genetics 156, no. 1 (September 1, 2000): 411–22. http://dx.doi.org/10.1093/genetics/156.1.411.

Abstract:
Variance component analysis of quantitative trait loci (QTL) is an important strategy of genetic mapping for complex traits in humans. The method is robust because it can handle an arbitrary number of alleles with arbitrary modes of gene action. The variance component method is usually implemented using the proportion of alleles with identity-by-descent (IBD) shared by relatives. As a result, information about marker linkage phases in the parents is not required. The method has been studied extensively under either the maximum-likelihood framework or the sib-pair regression paradigm. However, virtually all investigations are limited to normally distributed traits under a single-QTL model. In this study, we develop a Bayes method to map multiple QTL. We also extend the Bayesian mapping procedure to identify QTL responsible for the variation of complex binary diseases in humans under a threshold model. The method can also treat the number of QTL as a parameter and infer its posterior distribution. We use the reversible jump Markov chain Monte Carlo method to infer the posterior distributions of parameters of interest. The Bayesian mapping procedure ends with an estimation of the joint posterior distribution of the number of QTL and the locations and variances of the identified QTL. Utilities of the method are demonstrated using a simulated population consisting of multiple full-sib families.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Jun, Yu Liu, Kai Dong, Ziran Ding, and You He. "A Novel Distributed State Estimation Algorithm with Consensus Strategy." Sensors 19, no. 9 (May 8, 2019): 2134. http://dx.doi.org/10.3390/s19092134.

Full text
Abstract:
Owing to its high fault tolerance and scalability, the consensus-based paradigm has attracted immense popularity for distributed state estimation. If a target is observed neither by a certain node nor by its neighbors, this node is naive about the target. Some existing algorithms have considered the presence of naive nodes, but they require many consensus iterations to achieve satisfactory performance. In practical applications, because of constrained energy and communication resources, only a limited number of iterations are allowed, and the performance of these algorithms therefore deteriorates. By fusing the measurements as well as the prior estimates of each node and its neighbors, a local optimal estimate is obtained based on the proposed distributed local maximum a posteriori (MAP) estimator. With some approximations of the cross-covariance matrices and a consensus protocol incorporated into the estimation framework, a novel distributed hybrid information weighted consensus filter (DHIWCF) is proposed. Then, theoretical analysis of the guaranteed stability of the proposed DHIWCF is performed. Finally, the effectiveness and superiority of the proposed DHIWCF are evaluated. Simulation results indicate that the proposed DHIWCF can achieve an acceptable estimation performance even with a single consensus iteration.
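
A minimal sketch of the information-form fusion and consensus idea follows. It is not the exact DHIWCF update (the paper's cross-covariance approximations are omitted); each node simply forms a local MAP estimate by adding measurement information to its prior, then averages information pairs with its neighbours for a single consensus iteration.

```python
import numpy as np

rng = np.random.default_rng(2)

x_true = np.array([1.0, -2.0])                       # static 2-D state
H, R = np.eye(2), 0.5 * np.eye(2)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])      # 3-node path graph

Y = [0.1 * np.eye(2) for _ in range(3)]              # prior information matrices
y = [np.zeros(2) for _ in range(3)]                  # prior information vectors
Rinv = np.linalg.inv(R)
for i in range(3):
    z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
    Y[i] = Y[i] + H.T @ Rinv @ H                     # local MAP fusion in information form
    y[i] = y[i] + H.T @ Rinv @ z

eps = 0.3                                            # consensus step size (< 1/max degree)
for _ in range(1):                                   # a single consensus iteration
    Y = [Y[i] + eps * sum(A[i, j] * (Y[j] - Y[i]) for j in range(3)) for i in range(3)]
    y = [y[i] + eps * sum(A[i, j] * (y[j] - y[i]) for j in range(3)) for i in range(3)]

for i in range(3):
    print("node", i, "estimate:", np.linalg.solve(Y[i], y[i]))
```

Because state estimates are recovered as Y⁻¹y, averaging the information pairs rather than the estimates themselves keeps naive nodes (small Y) from dragging well-informed neighbours toward their priors.
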
APA, Harvard, Vancouver, ISO, and other styles
36

Stankiewicz, Olgierd, and Marek Domański. "Depth Map Estimation based on Maximum a Posteriori Probability." IEIE Transactions on Smart Processing & Computing 7, no. 1 (February 28, 2018): 49–61. http://dx.doi.org/10.5573/ieiespc.2018.7.1.049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Thiruvasagam, Priya, and Kalavathi Palanisamy. "Brain Tissue Segmentation from Magnetic Resonance Brain Images Using Histogram Based Swarm Optimization Techniques." Current Medical Imaging (Formerly Current Medical Imaging Reviews) 16, no. 6 (July 27, 2020): 752–65. http://dx.doi.org/10.2174/1573405615666190318154943.

Full text
Abstract:
Background and Objective: To reduce time complexity and improve computational efficiency in the diagnostic process, automated brain tissue segmentation for magnetic resonance brain images is proposed in this paper. Methods: The method comprises two stages: preprocessing, followed by segmentation of brain tissue using histogram-based swarm optimization techniques. The proposed method was investigated with T1-weighted images from twenty volumes of the Internet Brain Segmentation Repository (IBSR) and eighteen volumes of the Minimum Interval Resonance Imaging in Alzheimer's Disease (MIRIAD) dataset, together with T2-weighted real-time images collected from the SBC Scan Center, Dindigul. Results: The proposed technique was tested on three brain image datasets. Quantitative evaluation was carried out with the Jaccard (JC) and Dice (DC) coefficients, and the results were compared with existing swarm optimization techniques and with other methods such as Adaptive Maximum a posteriori Probability (AMAP), Biased Maximum a posteriori Probability (BMAP), Maximum a posteriori Probability (MAP), Maximum Likelihood (ML), and Tree-structure K-Means (TK-Means). Conclusion: The comparative analysis shows that the proposed Histogram-based Darwinian Particle Swarm Optimization (HDPSO) gives better results than the related variants, Histogram-based Particle Swarm Optimization (HPSO) and Histogram-based Fractional Order Darwinian Particle Swarm Optimization (HFODPSO), as well as the existing swarm optimization and model-based techniques listed above.
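
As a rough illustration of the "histogram-based swarm optimization" idea, the snippet below runs a generic single-threshold PSO on an image histogram with the Otsu between-class variance as the fitness function. It is not the authors' HDPSO or its Darwinian/fractional-order extensions, and the bimodal synthetic histogram is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic bimodal intensity data standing in for a brain image
img = np.clip(rng.normal(80, 15, 5000).astype(int), 0, 255)
img = np.concatenate([img, np.clip(rng.normal(170, 20, 5000).astype(int), 0, 255)])
hist = np.bincount(img, minlength=256) / img.size

def between_class_variance(t):
    # Otsu criterion evaluated on the histogram only (no pixel scans)
    t = int(np.clip(t, 1, 254))
    w0, w1 = hist[:t].sum(), hist[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t) * hist[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

# particle swarm over the 1-D threshold space
pos = rng.uniform(1, 254, 20)
vel = np.zeros(20)
pbest = pos.copy()
pval = np.array([between_class_variance(p) for p in pos])
gbest = pbest[pval.argmax()]
for _ in range(50):
    r1, r2 = rng.random(20), rng.random(20)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1, 254)
    val = np.array([between_class_variance(p) for p in pos])
    improved = val > pval
    pbest[improved], pval[improved] = pos[improved], val[improved]
    gbest = pbest[pval.argmax()]
print("PSO threshold:", int(gbest))
```

Working on the histogram rather than the raw voxels is what gives these methods their speed: each fitness evaluation costs O(256) regardless of image size.
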
APA, Harvard, Vancouver, ISO, and other styles
38

Tolpin, David, and Frank Wood. "Maximum a Posteriori Estimation by Search in Probabilistic Programs." Proceedings of the International Symposium on Combinatorial Search 6, no. 1 (September 1, 2021): 201–5. http://dx.doi.org/10.1609/socs.v6i1.18369.

Full text
Abstract:
We introduce an approximate search algorithm for fast maximum a posteriori probability estimation in probabilistic programs, which we call Bayesian ascent Monte Carlo (BaMC). Probabilistic programs represent probabilistic models with a varying number of mutually dependent finite, countable, and continuous random variables. BaMC is an anytime MAP search algorithm applicable to any combination of random variables and dependencies. We compare BaMC to other MAP estimation algorithms and show that BaMC is faster and more robust on a range of probabilistic models.
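
BaMC's specifics (bandit-style value estimates over program traces) go beyond this excerpt, but the anytime search flavour can be suggested with a generic stochastic-ascent MAP search on a one-parameter model. Everything below is an invented stand-in, not the BaMC algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# invented one-parameter model: data ~ N(mu, 1), prior mu ~ N(0, 10)
data = rng.normal(2.0, 1.0, 50)

def log_posterior(mu):
    return stats.norm.logpdf(mu, 0.0, 10.0) + stats.norm.logpdf(data, mu, 1.0).sum()

mu = 0.0
lp = log_posterior(mu)
for _ in range(2000):
    cand = mu + rng.normal(0.0, 0.5)         # random local proposal
    lp_cand = log_posterior(cand)
    if lp_cand > lp:                         # ascend whenever the posterior improves;
        mu, lp = cand, lp_cand               # the current best is usable at any time
print("MAP estimate of mu:", round(mu, 3))   # lands near the shrunken data mean
```

The anytime property is the point: interrupting the loop at any step still yields the best MAP candidate found so far.
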
APA, Harvard, Vancouver, ISO, and other styles
39

Gaser, C. "Partial Volume Segmentation with Adaptive Maximum A Posteriori (MAP) Approach." NeuroImage 47 (July 2009): S121. http://dx.doi.org/10.1016/s1053-8119(09)71151-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lingasubramanian, Karthikeyan, Syed M. Alam, and Sanjukta Bhanja. "Maximum error modeling for fault-tolerant computation using maximum a posteriori (MAP) hypothesis." Microelectronics Reliability 51, no. 2 (February 2011): 485–501. http://dx.doi.org/10.1016/j.microrel.2010.07.156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Huang, Guoquan, Ke Zhou, Nikolas Trawny, and Stergios I. Roumeliotis. "A Bank of Maximum A Posteriori (MAP) Estimators for Target Tracking." IEEE Transactions on Robotics 31, no. 1 (February 2015): 85–103. http://dx.doi.org/10.1109/tro.2014.2378432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Jakubiec, Felicia Y., and Alejandro Ribeiro. "D-MAP: Distributed Maximum a Posteriori Probability Estimation of Dynamic Systems." IEEE Transactions on Signal Processing 61, no. 2 (January 2013): 450–66. http://dx.doi.org/10.1109/tsp.2012.2222398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Fan, Wanshu, Hongyan Wang, Yan Wang, and Zhixun Su. "Blind Deconvolution with Scale Ambiguity." Applied Sciences 10, no. 3 (January 31, 2020): 939. http://dx.doi.org/10.3390/app10030939.

Full text
Abstract:
Recent years have witnessed significant advances in single-image deblurring due to the increasing popularity of electronic imaging equipment. Most existing blind image deblurring algorithms focus on designing distinctive image priors for blur kernel estimation, which usually play regularization roles in the deconvolution formulation. However, little research effort has been devoted to the relative scale ambiguity between the latent image and the blur kernel. The well-known L1-norm normalization constraint, i.e., fixing the sum of all the kernel weights to one, is commonly selected to remove this ambiguity. In contrast to this arbitrary choice, in this paper we introduce an Lp-norm normalization constraint on the blur kernel associated with a hyper-Laplacian prior. We show that the employed hyper-Laplacian regularizer can be transformed into a joint regularized prior based on a scale factor. We quantitatively show that the proper choice of p makes the joint prior sufficient to favor sharp solutions over the trivial ones (the blurred input and the delta kernel). This facilitates kernel estimation within the conventional maximum a posteriori (MAP) framework. We carry out numerical experiments on several synthesized datasets and find that the proposed method with p = 2 generates the highest average kernel similarity, the highest average PSNR, and the lowest average error ratio; based on these numerical results, we set p = 2 in our experiments. The evaluation on real blurred images demonstrates that the results of the proposed method are visually better than those of state-of-the-art deblurring methods.
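
The normalization choice under discussion is easy to state in code. The snippet below (illustrative constants only) contrasts the classical sum-to-one L1 constraint with a unit-L2 alternative, and evaluates a hyper-Laplacian penalty of the kind used as the image prior; it does not reproduce the paper's joint prior derivation.

```python
import numpy as np

def normalise(kernel, p=2):
    # rescale a blur kernel by its Lp norm; p=1 recovers the classic sum-to-one choice
    return kernel / np.linalg.norm(kernel.ravel(), ord=p)

def hyper_laplacian_penalty(img, lam=0.05, alpha=0.8):
    # negative log prior up to a constant: lam * sum |grad img|^alpha, with 0 < alpha < 1
    gx, gy = np.diff(img, axis=1), np.diff(img, axis=0)
    return lam * ((np.abs(gx) ** alpha).sum() + (np.abs(gy) ** alpha).sum())

k = np.ones((5, 5))
print("L1-normalised kernel sums to:", normalise(k, 1).sum())
print("L2-normalised kernel norm:", np.linalg.norm(normalise(k, 2)))
img = np.random.default_rng(5).random((32, 32))
print("hyper-Laplacian penalty:", round(hyper_laplacian_penalty(img), 2))
```
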
APA, Harvard, Vancouver, ISO, and other styles
44

Jameel, Shoaib, Zihao Fu, Bei Shi, Wai Lam, and Steven Schockaert. "Word Embedding as Maximum A Posteriori Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6562–69. http://dx.doi.org/10.1609/aaai.v33i01.33016562.

Full text
Abstract:
The GloVe word embedding model relies on solving a global optimization problem, which can be reformulated as a maximum likelihood estimation problem. In this paper, we propose to generalize this approach to word embedding by considering parametrized variants of the GloVe model and incorporating priors on these parameters. To demonstrate the usefulness of this approach, we consider a word embedding model in which each context word is associated with a corresponding variance, intuitively encoding how informative it is. Using our framework, we can then learn these variances together with the resulting word vectors in a unified way. We experimentally show that the resulting word embedding models outperform GloVe, as well as many popular alternatives.
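
The MAP reading of a GloVe-style objective can be made concrete as weighted least squares on log co-occurrences plus a Gaussian prior on the embeddings. The sketch below is an illustrative reformulation only: it omits the bias terms and the per-context-word variances that the paper actually learns, and solves the biconvex problem by alternating exact ridge updates.

```python
import numpy as np

rng = np.random.default_rng(6)

V, d, lam = 20, 5, 1e-2                             # vocab size, dim, prior precision
X = rng.poisson(3.0, (V, V)) + 1                    # toy co-occurrence counts
logX = np.log(X)
f = np.minimum((X / X.max()) ** 0.75, 1.0)          # GloVe-style weighting function
W = 0.1 * rng.standard_normal((V, d))               # word vectors
C = 0.1 * rng.standard_normal((V, d))               # context vectors
for _ in range(30):                                 # alternating exact MAP solves
    for i in range(V):
        Ai = (C.T * f[i]) @ C + lam * np.eye(d)     # weighted normal equations
        W[i] = np.linalg.solve(Ai, C.T @ (f[i] * logX[i]))
    for j in range(V):
        Aj = (W.T * f[:, j]) @ W + lam * np.eye(d)
        C[j] = np.linalg.solve(Aj, W.T @ (f[:, j] * logX[:, j]))
E = W @ C.T - logX
obj = (f * E ** 2).sum() + lam * ((W ** 2).sum() + (C ** 2).sum())
print("MAP objective:", round(obj, 3))
```

Setting lam to zero recovers the maximum-likelihood reading of the GloVe objective; the prior term is precisely what turns the fit into MAP estimation.
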
APA, Harvard, Vancouver, ISO, and other styles
45

Pivato, Marcus. "A STATISTICAL APPROACH TO EPISTEMIC DEMOCRACY." Episteme 9, no. 2 (June 2012): 115–37. http://dx.doi.org/10.1017/epi.2012.4.

Full text
Abstract:
We briefly review Condorcet's and Young's epistemic interpretations of preference aggregation rules as maximum likelihood estimators. We then develop a general framework for interpreting epistemic social choice rules as maximum likelihood estimators, maximum a posteriori estimators, or expected utility maximizers. We illustrate this framework with several examples. Finally, we critique this program.
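
A worked instance of the MAP reading of majority voting, in Condorcet's own setting: two alternatives, each of n voters independently correct with probability p. The MAP estimate of the better alternative given the vote tally is the majority winner whenever p > 1/2. The numbers below are a toy example, not drawn from the paper.

```python
from math import comb

def posterior_A(k, n, p=0.6, prior_A=0.5):
    # posterior probability that alternative A is correct given k of n votes for A
    like_A = comb(n, k) * p ** k * (1 - p) ** (n - k)        # likelihood if A is correct
    like_B = comb(n, k) * (1 - p) ** k * p ** (n - k)        # likelihood if B is correct
    return prior_A * like_A / (prior_A * like_A + (1 - prior_A) * like_B)

for k in (4, 6, 8):
    print(f"{k}/11 votes for A -> P(A correct) = {posterior_A(k, 11):.3f}")
```

With a uniform prior the MAP and maximum-likelihood answers coincide; a non-uniform prior is exactly where the two epistemic interpretations reviewed above come apart.
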
APA, Harvard, Vancouver, ISO, and other styles
46

Shi, Yuying, Zijin Liu, Xiaoying Wang, and Jinping Zhang. "Edge detection with mixed noise based on maximum a posteriori approach." Inverse Problems & Imaging 15, no. 5 (2021): 1223. http://dx.doi.org/10.3934/ipi.2021035.

Full text
Abstract:
Edge detection is an important problem in image processing, especially under mixed noise. In this work, we propose a variational edge detection model for mixed noise using a Maximum A Posteriori (MAP) approach. The novel model is formed with regularization terms and data fidelity terms that feature the different mixed noise components. Furthermore, we adopt the alternating direction method of multipliers (ADMM) to solve the proposed model. Numerical experiments on a variety of gray and color images demonstrate the efficiency of the proposed model.
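
The ADMM splitting pattern the abstract relies on can be shown on a stand-in problem: 1-D total-variation denoising, where the recovered jumps play the role of edges. This is our own illustrative example; the paper's model couples several fidelity terms for mixed noise, which ADMM handles by the same split-and-alternate recipe.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 200
y = np.repeat([0.0, 1.0, 0.3], [70, 60, 70]) + 0.1 * rng.standard_normal(n)
D = np.diff(np.eye(n), axis=0)                       # discrete gradient operator
lam, rho = 0.5, 1.0                                  # regularisation and ADMM penalty
x, z, u = y.copy(), D @ y, np.zeros(n - 1)
lhs = np.eye(n) + rho * D.T @ D                      # constant quadratic subproblem matrix
for _ in range(100):
    x = np.linalg.solve(lhs, y + rho * D.T @ (z - u))          # data-fidelity step
    v = D @ x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)    # soft threshold (L1 prox)
    u = u + D @ x - z                                          # dual ascent
print("two strongest edges at:", np.sort(np.argsort(np.abs(D @ x))[-2:]))
```

The printed indices sit at the two true jump locations of the piecewise-constant signal, which is the edge-detection behaviour the variational model generalises to images.
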
APA, Harvard, Vancouver, ISO, and other styles
47

Maik, Vivek, S. N. Rani Aishwarya, and Joonki Paik. "Blind deconvolution using maximum a posteriori (MAP) estimation with directional edge based priori." Optik 157 (March 2018): 1129–42. http://dx.doi.org/10.1016/j.ijleo.2017.03.041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Jalali, Shirin, and Arian Maleki. "New approach to Bayesian high-dimensional linear regression." Information and Inference: A Journal of the IMA 7, no. 4 (January 2, 2018): 605–55. http://dx.doi.org/10.1093/imaiai/iax016.

Full text
Abstract:
Consider the problem of estimating parameters $X^n \in \mathbb{R}^n$ from $m$ response variables $Y^m = AX^n+Z^m$, under the assumption that the distribution of $X^n$ is known. The lack of computationally feasible algorithms that employ generic prior distributions and provide a good estimate of $X^n$ has limited the set of distributions researchers use to model the data. To address this challenge, in this article a new estimation scheme named quantized maximum a posteriori (Q-MAP) is proposed. The new method has the following properties: (i) in the noiseless setting, it has similarities to maximum a posteriori (MAP) estimation; (ii) in the noiseless setting, when $X_1,\ldots,X_n$ are independent and identically distributed, asymptotically, as $n$ grows to infinity, its required sampling rate ($m/n$) for an almost zero-distortion recovery approaches the fundamental limits; (iii) it scales favorably with the dimensions of the problem and is therefore applicable to high-dimensional setups; (iv) the solution of the Q-MAP optimization can be found via a proposed iterative algorithm that is provably robust to error (noise) in the response variables.
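
For orientation, here is the textbook special case that Q-MAP generalizes away from: the MAP estimator for the linear model above under a Gaussian prior on $X^n$, which reduces to ridge regression. This is a baseline sketch for comparison, not the quantized scheme itself.

```python
import numpy as np

rng = np.random.default_rng(8)

n, m, sigma = 50, 30, 0.1                           # underdetermined: m < n
X = rng.standard_normal(n)                          # X^n drawn from the N(0, I) prior
A = rng.standard_normal((m, n)) / np.sqrt(m)
Y = A @ X + sigma * rng.standard_normal(m)

# MAP under the N(0, I) prior: argmin ||Y - Ax||^2 / sigma^2 + ||x||^2
X_map = np.linalg.solve(A.T @ A / sigma ** 2 + np.eye(n), A.T @ Y / sigma ** 2)
print("relative error:", round(np.linalg.norm(X_map - X) / np.linalg.norm(X), 3))
```

A Gaussian prior is one of the few cases where this optimization is trivially convex; the article's contribution is making MAP-like estimation tractable for generic known priors.
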
APA, Harvard, Vancouver, ISO, and other styles
49

LÓPEZ, ANTONIO, RAFAEL MOLINA, JAVIER MATEOS, and AGGELOS K. KATSAGGELOS. "SPECT IMAGE RECONSTRUCTION USING COMPOUND PRIOR MODELS." International Journal of Pattern Recognition and Artificial Intelligence 16, no. 03 (May 2002): 317–30. http://dx.doi.org/10.1142/s0218001402001708.

Full text
Abstract:
We propose a new iterative method for Maximum a Posteriori (MAP) reconstruction of SPECT (Single Photon Emission Computed Tomography) images. The method uses Compound Gauss Markov Random Fields (CGMRF) as the prior model and is stochastic for the line process and deterministic for the reconstruction. Synthetic and real images are used to compare the new method with existing ones.
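
A rough feel for MAP emission reconstruction can be given with a one-step-late MAP-EM sketch under a plain quadratic smoothness prior, standing in for the compound GMRF prior (which additionally carries a stochastic line process). The 1-D phantom, system matrix, and constants are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(9)

n_pix, n_det = 32, 48
A = rng.random((n_det, n_pix)); A /= A.sum(0)        # toy system matrix, unit sensitivity
x_true = np.full(n_pix, 10.0); x_true[10:20] = 50.0  # 1-D phantom with a hot region
y = rng.poisson(A @ x_true)                          # Poisson emission counts

beta, x = 0.01, np.ones(n_pix)
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-9)
    smooth = np.convolve(x, [0.5, 0.0, 0.5], mode="same")      # neighbour average
    grad_prior = 2.0 * (x - smooth)                            # quadratic penalty gradient
    # one-step-late MAP-EM: prior gradient evaluated at the current iterate
    x = x * (A.T @ ratio) / np.maximum(A.sum(0) + beta * grad_prior, 1e-9)
print("hot-region mean:", round(x[10:20].mean(), 1),
      "background mean:", round(np.r_[x[:10], x[20:]].mean(), 1))
```

A quadratic prior smooths across the hot-region boundary; the line process in the CGMRF model exists precisely to switch smoothing off at such discontinuities.
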
APA, Harvard, Vancouver, ISO, and other styles
50

Di, Ruohai, Peng Wang, Chuchao He, and Zhigao Guo. "Constrained Adjusted Maximum a Posteriori Estimation of Bayesian Network Parameters." Entropy 23, no. 10 (September 30, 2021): 1283. http://dx.doi.org/10.3390/e23101283.

Full text
Abstract:
Maximum a posteriori (MAP) estimation with a Dirichlet prior has been shown to be effective in improving the parameter learning of Bayesian networks when the available data are insufficient. Given no extra domain knowledge, a uniform prior is often considered for regularization. However, when the underlying parameter distribution is non-uniform or skewed, a uniform prior does not work well, and a more informative prior is required. In reality, unless the domain experts are extremely unfamiliar with the network, they will be able to provide some reliable knowledge about it. With that knowledge, informative priors can be refined automatically and a reasonable equivalent sample size (ESS) selected. In this paper, considering the parameter constraints transformed from the domain knowledge, we propose a Constrained adjusted Maximum a Posteriori (CaMAP) estimation method, which features two novel techniques. First, to draw an informative prior distribution (the prior shape), we present a novel sampling method that constructs the prior distribution from the constraints. Then, to find the optimal ESS (the prior strength), we derive constraints on the ESS from the parameter constraints and select the optimal ESS by cross-validation. Numerical experiments show that the proposed method is superior to other learning algorithms.
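
The shape/strength decomposition the abstract describes is easy to see in the standard Dirichlet MAP formula, where the prior shape is scaled by the ESS. The sketch below takes the informative shape as given (CaMAP would derive it and the ESS from expert constraints) and shows how the ESS trades off data against prior.

```python
import numpy as np

counts = np.array([3, 1, 0])                         # sparse data for a 3-state variable
prior_shape = np.array([0.5, 0.3, 0.2])              # informative prior shape (given here)
for ess in (10, 50, 200):
    alpha = ess * prior_shape                        # Dirichlet pseudo-counts
    # posterior mode; the formula assumes counts + alpha >= 1 in every state
    theta = (counts + alpha - 1) / (counts.sum() + ess - len(counts))
    print(f"ESS={ess:3d} ->", np.round(theta, 3))
```

As the ESS grows, the estimate moves from the empirical frequencies toward the prior shape, which is why selecting the ESS by cross-validation matters as much as choosing the shape.
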
APA, Harvard, Vancouver, ISO, and other styles