Academic literature on the topic 'Maximum a posteriori (MAP) framework'



Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Maximum a posteriori (MAP) framework.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Maximum a posteriori (MAP) framework"

1

Pennec, X., J. Ehrhardt, N. Ayache, H. Handels, and H. Hufnagel. "Computation of a Probabilistic Statistical Shape Model in a Maximum-a-posteriori Framework." Methods of Information in Medicine 48, no. 04 (2009): 314–19. http://dx.doi.org/10.3414/me9228.

Full text
Abstract:
Objectives: When analyzing shapes and shape variabilities, the first step is bringing those shapes into correspondence. This is a fundamental problem even when solved by manually determining exact correspondences such as landmarks. We developed a method to represent a mean shape and a variability model for a training data set based on probabilistic correspondence computed between the observations. Methods: First, the observations are matched to each other with an affine transformation found by the Expectation-Maximization Iterative-Closest-Points (EM-ICP) registration. We then propose a maximum-a-posteriori (MAP) framework in order to compute the statistical shape model (SSM) parameters which result in an optimal adaptation of the model to the observations. The optimization of the MAP explanation is realized with respect to the observation parameters and the generative model parameters in a global criterion and leads to very efficient and closed-form solutions for (almost) all parameters. Results: We compared our probabilistic SSM to an SSM based on one-to-one correspondences and PCA (the classical SSM). Experiments on synthetic data served to test performance on non-convex shapes (15 training shapes), which have proved difficult in terms of proper correspondence determination. We then computed the SSMs for real putamen data (21 training shapes). The evaluation was done by measuring the generalization ability as well as the specificity of both SSMs and showed that especially shape detail differences are better modeled by the probabilistic SSM (Hausdorff distance in generalization ability ≈ 25% smaller). Conclusions: The experimental outcome shows the efficiency and advantages of the new approach, as the probabilistic SSM performs better in modeling shape details and differences.
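
For orientation across the entries that follow: the MAP criterion they share is the posterior mode. In generic notation (ours, not any single paper's), with observations x and parameters θ,

    \hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \, p(\theta \mid x)
                                = \arg\max_{\theta} \, \bigl[ \log p(x \mid \theta) + \log p(\theta) \bigr].

Under a flat prior this reduces to maximum likelihood; the works below differ mainly in the likelihood model, the prior, and the optimizer used to locate the mode.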
2

Coene, W. "A practical algorithm for maximum-likelihood HREM image reconstruction." Proceedings, annual meeting, Electron Microscopy Society of America 50, no. 2 (August 1992): 986–87. http://dx.doi.org/10.1017/s0424820100129565.

Full text
Abstract:
Reconstruction in HREM of the complex wave function at the exit face of a crystal foil from a focal series of HREM images, with correction for the microscope's aberrations, can be performed with a variety of algorithms depending on the approximations involved in HREM image formation. The maximum-a-posteriori (MAP) recursive reconstruction algorithm of Kirkland is the most general one, with the full benefit of the effects of non-linear imaging and partial coherence, which are correctly treated in terms of a transmission-cross-coefficient (TCC). However, the routine application of the Kirkland algorithm has thus far been hampered by its enormous computational demands, especially when large image frame sizes (512²) and a large number of HREM images (≥20) are aimed at. In this paper, we present a modified version of the Kirkland method within a maximum-likelihood (MAL) framework, with a new numerical implementation yielding a workable algorithm with much higher computational efficiency.
3

Qi, Hong, Yaobin Qiao, Shuangcheng Sun, Yuchen Yao, and Liming Ruan. "Image Reconstruction of Two-Dimensional Highly Scattering Inhomogeneous Medium Using MAP-Based Estimation." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/412315.

Full text
Abstract:
A maximum a posteriori (MAP) estimation based on Bayesian framework is applied to image reconstruction of two-dimensional highly scattering inhomogeneous medium. The finite difference method (FDM) and conjugate gradient (CG) algorithm serve as the forward and inverse solving models, respectively. The generalized Gaussian Markov random field model (GGMRF) is treated as the regularization, and finally the influence of the measurement errors and initial distributions is investigated. Through the test cases, the MAP estimate algorithm is demonstrated to greatly improve the reconstruction results of the optical coefficients.
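
As a hedged illustration of this kind of MAP formulation, the Python sketch below solves a toy 1-D deblurring problem with a generalized-Gaussian (GGMRF-style) neighbor prior by conjugate gradients. The blur matrix, noise level, and exponent p are invented for the example; the paper's radiative-transfer forward model and FDM solver are not reproduced.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, sigma, lam, p = 64, 0.01, 0.1, 1.2        # toy sizes; 1 < p <= 2 for GGMRF
    idx = np.arange(n)
    A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)            # toy Gaussian blur as forward model
    x_true = np.zeros(n); x_true[20:30] = 1.0
    y = A @ x_true + sigma * rng.standard_normal(n)

    def neg_log_posterior(x):
        data = 0.5 * np.sum((A @ x - y) ** 2) / sigma ** 2     # Gaussian likelihood
        prior = lam * np.sum(np.abs(np.diff(x)) ** p)          # GGMRF neighbor term
        return data + prior

    x_map = minimize(neg_log_posterior, np.zeros(n), method="CG").x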
4

Wang, Zhongli, Litong Fan, and Baigen Cai. "A 3D Relative-Motion Context Constraint-Based MAP Solution for Multiple-Object Tracking Problems." Sensors 18, no. 7 (July 20, 2018): 2363. http://dx.doi.org/10.3390/s18072363.

Full text
Abstract:
Multi-object tracking (MOT), especially with a moving monocular camera, is a very challenging task in the field of visual object tracking. The traditional tracking-by-detection approach is heavily dependent on detection results, and occlusions and mis-detections often lead to fragmented tracklets or drifting. In this paper, the tasks of MOT and camera motion estimation are formulated as finding a maximum a posteriori (MAP) solution of a joint probability and solved synchronously in a unified framework. To improve performance, we incorporate a three-dimensional (3D) relative-motion model into a sequential Bayesian framework to track multiple objects and estimate the camera's ego-motion. The 3D relative-motion model, which describes spatial relations among objects, is exploited to predict object states robustly and to recover objects when occlusions and mis-detections occur. Reversible-jump Markov chain Monte Carlo (RJMCMC) particle filtering is applied to solve the posterior estimation problem. Both quantitative and qualitative experiments with benchmark datasets and video collected on campus confirm that the proposed method outperforms competing approaches on many evaluation metrics.
5

Cui, Yan Qiu, Tao Zhang, Shuang Xu, and Hou Jie Li. "Bayesian Image Denoising Using an Anisotropic Markov Random Field Model." Key Engineering Materials 467-469 (February 2011): 2018–23. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.2018.

Full text
Abstract:
This paper presents a Bayesian denoising method based on an anisotropic Markov random field (MRF) model in the wavelet domain, in order to improve image denoising performance and reduce computational complexity. The classical single-resolution image restoration method using MRFs and the maximum a posteriori (MAP) estimation is extended to the wavelet domain. To obtain an accurate MAP estimate, a novel anisotropic MRF model is proposed under this framework. Compared to a simple isotropic MRF model, the new model captures the intrascale dependencies of wavelet coefficients significantly better. Simulation results demonstrate that the proposed method achieves good denoising performance while reducing computational complexity.
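
For context, the simplest wavelet-domain MAP estimate, an i.i.d. Laplacian prior on coefficients under Gaussian noise rather than the paper's anisotropic MRF, reduces to soft thresholding; a minimal numpy sketch (the noise level and prior scale are invented):

    import numpy as np

    def map_soft_threshold(w, sigma_noise, b):
        # MAP of a coefficient under Gaussian noise (std sigma_noise) and a
        # Laplacian prior with scale b: soft thresholding at sigma_noise**2 / b.
        t = sigma_noise ** 2 / b
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    print(map_soft_threshold(np.array([-2.0, -0.1, 0.05, 0.8, 3.0]),
                             sigma_noise=0.5, b=1.0))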
6

Xu, Feng, Tanghuai Fan, Chenrong Huang, Xin Wang, and Lizhong Xu. "Block-Based MAP Superresolution Using Feature-Driven Prior Model." Mathematical Problems in Engineering 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/508357.

Full text
Abstract:
In the field of image superresolution reconstruction (SRR), a prior can be employed to solve the ill-posed problem. However, the prior model is usually selected empirically and characterizes the entire image, so local image features cannot be represented accurately. This paper proposes a feature-driven prior model that relies on the features of the image and introduces a block-based maximum a posteriori (MAP) framework under which the image is split into several blocks to perform SRR. The local features of the image can therefore be characterized more accurately, which results in better SRR. In the process of recombining the superresolution blocks, we also design a border-expansion strategy to remove a byproduct, namely cross artifacts. Experimental results show that the proposed method is effective.
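
A hedged sketch of the block-splitting idea only, on a 1-D signal with a simple Gaussian smoothness prior per block; this is restoration rather than true superresolution, and the paper's feature-driven prior and border-expansion strategy are not reproduced:

    import numpy as np

    def map_block_restore(y, block=16, lam=0.5):
        # Per-block MAP under a smoothness prior: x = argmin ||x - y||^2 + lam ||D x||^2.
        out = np.empty_like(y)
        for s in range(0, len(y), block):
            yb = y[s:s + block]
            n = len(yb)
            D = np.diff(np.eye(n), axis=0)                    # first-difference operator
            out[s:s + block] = np.linalg.solve(np.eye(n) + lam * D.T @ D, yb)
        return out

    y = np.sin(np.linspace(0, 6, 128)) + 0.2 * np.random.default_rng(1).standard_normal(128)
    x_hat = map_block_restore(y)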
7

Yang, Wenjia, Lihua Dou, and Juan Zhan. "A Multi-Histogram Clustering Approach toward Markov Random Field for Foreground Segmentation." International Journal of Image and Graphics 11, no. 01 (January 2011): 65–81. http://dx.doi.org/10.1142/s0219467811003993.

Full text
Abstract:
This paper presents a Bayesian approach to foreground segmentation in monocular image sequences. To overcome the limitations of pixel-wise background modeling, spatial coherence and temporal persistency are formulated together with the background model under a maximum a posteriori probability (MAP)-Markov random field (MRF) statistical framework. A fuzzy clustering factor is introduced into the prior energy of the MRF in the new implementation scheme, in which contextual constraints can be adaptively adjusted according to feature cues. Experimental results for several image sequences demonstrate the effectiveness of the proposed approach.
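
As a hedged baseline for this kind of MAP-MRF labeling, the sketch below runs iterated conditional modes with Gaussian likelihoods and an Ising/Potts prior, a standard textbook scheme rather than the paper's fuzzy-clustering formulation; the class means, variance, and beta are invented:

    import numpy as np

    def icm_segment(img, mu=(0.2, 0.8), sigma=0.15, beta=1.5, iters=5):
        # Approximate MAP labels for a 2-class MRF by iterated conditional modes.
        labels = (img > np.mean(mu)).astype(int)
        for _ in range(iters):
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    costs = []
                    for k in (0, 1):
                        data = (img[i, j] - mu[k]) ** 2 / (2 * sigma ** 2)
                        nb = [labels[i2, j2]
                              for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                              if 0 <= i2 < img.shape[0] and 0 <= j2 < img.shape[1]]
                        smooth = beta * sum(k != l for l in nb)   # Potts penalty
                        costs.append(data + smooth)
                    labels[i, j] = int(np.argmin(costs))
        return labels

    img = np.clip(0.8 * (np.arange(36).reshape(6, 6) > 20) + 0.2
                  + 0.05 * np.random.default_rng(3).standard_normal((6, 6)), 0, 1)
    print(icm_segment(img))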
8

Cui, Wenchao, Yi Wang, Tao Lei, Yangyu Fan, and Yan Feng. "Level Set Segmentation of Medical Images Based on Local Region Statistics and Maximum a Posteriori Probability." Computational and Mathematical Methods in Medicine 2013 (2013): 1–12. http://dx.doi.org/10.1155/2013/570635.

Full text
Abstract:
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. This local objective function is then integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions, which represent a partition of the image domain, and a bias field, which accounts for the intensity inhomogeneity of the image. Image segmentation and bias field estimation are therefore achieved simultaneously via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.
9

Pillow, Jonathan W., Yashar Ahmadian, and Liam Paninski. "Model-Based Decoding, Information Estimation, and Change-Point Detection Techniques for Multineuron Spike Trains." Neural Computation 23, no. 1 (January 2011): 1–45. http://dx.doi.org/10.1162/neco_a_00058.

Full text
Abstract:
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
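
A minimal sketch of item (1), MAP stimulus decoding, for a toy population with linear filters, an exponential nonlinearity, Poisson spike counts, and a standard Gaussian stimulus prior; all numbers are invented. The concavity of the log posterior noted in the abstract is what makes this optimization well behaved:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    d, n_cells = 20, 8
    K = rng.standard_normal((n_cells, d)) / np.sqrt(d)   # linear receptive fields
    s_true = rng.standard_normal(d)
    counts = rng.poisson(np.exp(K @ s_true))             # observed spike counts

    def neg_log_posterior(s):
        eta = K @ s
        loglik = np.sum(counts * eta - np.exp(eta))      # Poisson log likelihood
        logprior = -0.5 * np.sum(s ** 2)                 # standard Gaussian prior
        return -(loglik + logprior)

    s_map = minimize(neg_log_posterior, np.zeros(d)).x   # MAP stimulus estimate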
10

Liu, Jia, Mingyu Zhang, Chaoyong Wang, Rongjun Chen, Xiaofeng An, and Yufei Wang. "Upper Bound on the Bit Error Probability of Systematic Binary Linear Codes via Their Weight Spectra." Discrete Dynamics in Nature and Society 2020 (January 29, 2020): 1–11. http://dx.doi.org/10.1155/2020/1469090.

Full text
Abstract:
In this paper, an upper bound on the probability of maximum a posteriori (MAP) decoding error for systematic binary linear codes over additive white Gaussian noise (AWGN) channels is proposed. The proposed bound on the bit error probability is derived within the framework of Gallager's first bounding technique (GFBT), where the Gallager region is defined as an irregular high-dimensional geometry by using a list decoding algorithm. The proposed bound requires only knowledge of the weight spectra, which is helpful when the input-output weight enumerating function (IOWEF) is not available. Numerical results show that the proposed bound matches well with maximum-likelihood (ML) decoding simulations, especially in the high signal-to-noise-ratio (SNR) region, and is tighter than the recently proposed Ma bound.

Dissertations / Theses on the topic "Maximum a posteriori (MAP) framework"

1

Carmo, F. L. "Melhoria da Convergência do Método Ica-Map para Remoção de Ruído em Sinal de Voz." Universidade Federal do Espírito Santo, 2013. http://repositorio.ufes.br/handle/10/9621.

Full text
Abstract:
The source separation problem consists of recovering a latent signal from a set of observable mixtures. In denoising problems, which can be viewed as source separation problems, an unobserved speech signal must be extracted from a signal contaminated by noise. In that case, an important approach is based on independent component analysis (ICA models). In this sense, the use of ICA with the maximum a posteriori (MAP) algorithm is known as ICA-MAP. Employing two individual transforms, one for the speech signal and one for the noise, can provide a better estimate within a linear framework. This work presents a modification to the ICA-MAP algorithm in order to improve its convergence. It was observed through tests that it is possible to limit the magnitude of the gradient vector used to estimate the parameters of the denoising model and thus improve the stability of the algorithm. This adaptation can be understood as a constraint on the original optimization problem. Another proposed approach is to approximate the derivative of the GGM (generalized Gaussian model) around zero by a spline. To speed up the algorithm, a variable step size is applied in the gradient algorithm. Comparative tests were performed using standard speech (male and female) and noise databases. Finally, the results obtained are compared with classical techniques in order to highlight the advantages of the method.
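
A hedged sketch of the gradient-magnitude limit described above, shown in isolation: the ICA-MAP model itself is not reproduced, and grad_fn, the clipping threshold, and the decaying step schedule are placeholders.

    import numpy as np

    def clipped_gradient_ascent(theta, grad_fn, steps=200, step0=0.1, max_norm=1.0):
        # Gradient ascent with the gradient magnitude limited to max_norm and a
        # decaying (variable) step size, in the spirit of the modified ICA-MAP iteration.
        for t in range(steps):
            g = grad_fn(theta)
            norm = np.linalg.norm(g)
            if norm > max_norm:                          # constrain the update step
                g = g * (max_norm / norm)
            theta = theta + (step0 / (1 + 0.01 * t)) * g
        return theta

    # toy usage: ascend -||theta||^2 / 2, whose maximizer is the origin
    print(clipped_gradient_ascent(np.array([10.0, -8.0]), lambda th: -th))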
2

Bacharach, Lucien. "Caractérisation des limites fondamentales de l'erreur quadratique moyenne pour l'estimation de signaux comportant des points de rupture." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS322/document.

Full text
Abstract:
This thesis deals with the study of estimator performance in signal processing, focusing on lower bounds on the mean square error (MSE) for abrupt change-point estimation. Such tools help characterize the behavior of the maximum likelihood estimator in the frequentist context, as well as the maximum a posteriori and conditional mean estimators in the Bayesian context. The main difficulty comes from the fact that, when dealing with sampled signals, the parameters of interest (i.e., the change points) lie on a discrete space. Consequently, classical large-sample results (e.g., the asymptotic normality of the maximum likelihood estimator) and the Cramér-Rao bound do not apply. Some results concerning the asymptotic distribution of the maximum likelihood estimator are available in the mathematics literature but are of limited use for practical signal processing problems. When the MSE of estimators is chosen as the performance criterion, a substantial body of work on lower bounds on the MSE has appeared in recent years. Several studies have proposed new inequalities leading to tighter lower bounds than the Cramér-Rao bound; these bounds require weaker regularity conditions and can handle estimator MSE behavior in both the asymptotic and non-asymptotic regimes. The goal of this thesis is to complete previous results on lower bounds in the asymptotic region (i.e., when the number of samples and/or the signal-to-noise ratio is high) for change-point estimation, and also to provide an analysis in the non-asymptotic region. The tools used here are the lower bounds of the Weiss-Weinstein family, which are already known in signal processing to outperform the Cramér-Rao bound for applications such as spectral analysis or array processing. A closed-form expression of this family is provided for single and multiple change points, with extensions to the case where the distribution parameters on each segment are unknown. An analysis of robustness with respect to the influence of the prior on our models is also provided. Finally, we apply our results to specific problems involving Gaussian, Poisson, and exponentially distributed data.
3

Karlsson, Fredrik. "Matting of Natural Image Sequences using Bayesian Statistics." Thesis, Linköping University, Department of Science and Technology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2355.

Full text
Abstract:

The problem of separating a non-rectangular foreground image from a background image is a classical problem in image processing and analysis, known as matting or keying. A common example is a film frame where an actor is extracted from the background to later be placed on a different background. Compositing of these objects against a new background is one of the most common operations in the creation of visual effects. When the original background is of non-constant color, the matting becomes an underdetermined problem, for which a unique solution cannot be found.

This thesis describes a framework for computing mattes from images with backgrounds of non-constant color, using Bayesian statistics. Foreground and background color distributions are modeled as oriented Gaussians and optimal color and opacity values are determined using a maximum a posteriori approach. Together with information from optical flow algorithms, the framework produces mattes for image sequences without needing user input for each frame.

The approach used in this thesis differs from previous research in a few areas. The optimal order of processing is determined in a different way and sampling of color values is changed to work more efficiently on high-resolution images. Finally a gradient-guided local smoothness constraint can optionally be used to improve results for cases where the normal technique produces poor results.
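
For the compositing model underlying such matting frameworks, C = alpha*F + (1 - alpha)*B, the MAP alternation fixes the foreground and background colors and solves for the opacity in closed form. A hedged sketch of that sub-step only; the oriented-Gaussian color models and the optical-flow propagation are not shown:

    import numpy as np

    def alpha_given_FB(C, F, B):
        # Least-squares/MAP opacity for fixed foreground F and background B:
        # alpha = (C - B).(F - B) / ||F - B||^2, clipped to [0, 1].
        d = F - B
        a = float(np.dot(C - B, d) / np.dot(d, d))
        return min(max(a, 0.0), 1.0)

    print(alpha_given_FB(np.array([0.5, 0.4, 0.3]),
                         np.array([0.9, 0.8, 0.6]),
                         np.array([0.1, 0.1, 0.1])))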

4

Sekhi, Ikram. "Développement d'un alphabet structural intégrant la flexibilité des structures protéiques." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC084/document.

Full text
Abstract:
The purpose of this PhD is to provide a Structural Alphabet (SA) for more accurate characterization of protein three-dimensional (3D) structures, as well as to integrate the increasing protein 3D structure information currently available in the Protein Data Bank (PDB). The SA also takes into consideration the logic behind the structural fragment sequence by using a hidden Markov model (HMM). In this PhD, we describe a new structural alphabet, improving the existing HMM-SA27 structural alphabet, called SAFlex (Structural Alphabet Flexibility), which takes into account the uncertainty of the data (missing data in PDB files) and the redundancy of protein structures. The new SAFlex structural alphabet therefore offers a rigorous and robust encoding model. This encoding takes the uncertainty into account by providing three encoding options: the maximum a posteriori (MAP), the marginal posterior distribution (POST), and the effective number of letters at each given position (NEFF). SAFlex also builds a consensus encoding from different replicates (multiple chains, monomers, and homomers) of a single protein, thus allowing the detection of structural variability between chains. The methodological advances and the SAFlex alphabet itself are the main contributions of this PhD. We also present a new PDB parser (SAFlex-PDB) and demonstrate that it is of interest in both qualitative (detection of various errors) and quantitative (program optimization and parallelization) terms by comparing it with two parsers well known in bioinformatics (Biopython and BioJava). The SAFlex structural alphabet is made available to the scientific community through a web server, whose functions and interfaces we describe; the SAFlex-PDB parser is an important contribution to the proper functioning of that web server. SAFlex can be used in various ways on a protein tertiary structure in PDB format: encoding the 3D structure and identifying and predicting missing data. To date, it is the only alphabet able to encode and predict missing data in a 3D protein structure. These improvements are promising for exploring the increasing redundancy of protein data and for obtaining useful quantifications of protein flexibility.
5

McGarry, Gregory John. "Model-based mammographic image analysis." Thesis, Queensland University of Technology, 2002.

Find full text
6

Samarasinghe, Devanarayanage Pradeepa. "Efficient methodologies for real-time image restoration." PhD thesis, 2011. http://hdl.handle.net/1885/9859.

Full text
Abstract:
In this thesis we investigate the problem of image restoration. The main focus of our research is to come up with novel algorithms and enhance existing techniques in order to deliver efficient and effective methodologies applicable in real-time image restoration scenarios. Our research starts with a literature review, which identifies the gaps in existing techniques and leads us to a novel classification of image restoration that integrates and discusses more recent developments in the area. With this classification, we identified three major areas that need attention. The first developments relate to non-blind image restoration. The two most widely used techniques, namely deterministic linear algorithms and stochastic nonlinear algorithms, are compared and contrasted. Under deterministic linear algorithms, we develop a class of more effective novel quadratic linear regularization models, which outperform the existing linear regularization models. In addition, looking from a new perspective, we evaluate and compare the performance of deterministic and stochastic restoration algorithms and explore the validity of the performance claims made so far for those algorithms. Further, we critically challenge the necessity of some complex mechanisms in the maximum a posteriori (MAP) technique under stochastic image deconvolution algorithms. The next developments focus on blind image restoration, which is claimed to be more challenging. The Constant Modulus Algorithm (CMA) is one of the most popular, computationally simple, tested, and best-performing blind equalization algorithms in the signal processing domain. In our research, we extend the use of CMA to image restoration and develop a broad class of blind image deconvolution algorithms, in particular algorithms for blurring kernels with a separable property. These algorithms show significantly faster convergence than conventional algorithms. Although the CMA method has a proven record in signal processing applications related to data communications systems, no research has been carried out on the applicability of CMA to image restoration in practice. In filling this gap, and taking into account the differences of signal processing in image processing and data communications contexts, we extend our research to the applicability of CMA deconvolution under assumptions on the ground truth image properties. By analyzing the main assumptions of ground truth image properties being zero-mean, independent, and uniformly distributed, which characterize the convergence of CMA deconvolution, we develop a novel technique to overcome the effects of image source correlation based on segmentation and higher-order moments of the source. Multichannel image restoration techniques have recently gained much attention over single-channel image restoration due to the benefits of diversity and redundancy of the information between the channels. Exploiting these benefits in real-time applications is often restricted due to the unavailability of multiple copies of the same image. To overcome this limitation, as the last area of our research, we develop a novel multichannel blind restoration model with a single image, which eliminates the constraint of requiring multiple copies of the blurred image. We consider this a major contribution, one that could be extended to wider areas of research integrating multiple disciplines such as demosaicing.
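
A hedged sketch of the constant-modulus update at the heart of the CMA family discussed above, written as a baseband symbol-spaced blind equalizer; the tap length, step size, modulus target, and QPSK source are invented, and the thesis's separable-kernel image extension is not reproduced:

    import numpy as np

    def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
        # Adapt taps w so |y|^2 -> R2 without access to the transmitted symbols.
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0                       # center-spike initialization
        y_out = np.empty(len(x) - n_taps, dtype=complex)
        for n in range(len(y_out)):
            u = x[n:n + n_taps][::-1]
            y = np.dot(w, u)
            e = y * (np.abs(y) ** 2 - R2)          # CMA error term
            w = w - mu * e * np.conj(u)            # stochastic-gradient tap update
            y_out[n] = y
        return w, y_out

    rng = np.random.default_rng(4)
    sym = ((rng.integers(0, 2, 4000) * 2 - 1) + 1j * (rng.integers(0, 2, 4000) * 2 - 1)) / np.sqrt(2)
    w, y = cma_equalize(np.convolve(sym, [1.0, 0.4 + 0.2j]))   # mild channel distortion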
7

Carvajal, Rodrigo. "EM-based channel estimation for Multicarrier communication systems." Thesis, 2013. http://hdl.handle.net/1959.13/938545.

Full text
Abstract:
This thesis addresses the general problem of channel estimation in Multicarrier communication systems. This estimation problem, inter-alia, includes the joint estimation of channel noise variance, carrier frequency offset and phase noise bandwidth. A general state-space model is developed for multicarrier systems that represents any modulation scheme, by separating the signals into their real and imaginary parts. The approach presented in this thesis relies on the statistical representation of the signals of interest. The approach is valid for any statistical representation. In particular, we present a linear and Gaussian structure associated with the transmitted signal, which is exploited by utilizing the Kalman filter. For nonlinear signals, nonlinear filtering is carried out by utilizing sequential Monte Carlo techniques. The estimation problem is solved by using Maximum Likelihood (ML) and Maximum a Posteriori (MAP) estimation, for which the Expectation-Maximization (EM) algorithm is considered. For ML estimation, a novel selection of hidden variables and parameters is proposed, whilst the maximization step is carried out by concentrating the cost in one variable (carrier frequency offset). For MAP estimation, the prior terms are expressed as variance-mean Gaussian mixtures. In this case, the channel estimate can be obtained in closed form within the EM framework. In the maximization step of the EM algorithm, the cost function is also concentrated in one variable (carrier frequency offset). For sparse channel estimation, an l1-norm regularization is considered. An Elastic Net penalty is also considered, which accounts for the different nature that communication channels can exhibit in a variety of environments. It is also shown that the utilization of variance-mean Gaussian mixtures present a general method for MAP estimation, which encompasses different penalizations and optimization methods, such as the Lasso, Group-Lasso, and local-linear/local-quadratic approximation for the Lasso, among others. The MAP estimation approach proposed in this thesis is illustrated with not only examples in MC communication systems, but also for sparse estimation with quantized data. Finally, it is also shown that the estimation of the channel noise variance is not straightforward, and that some modifications to the standard methods should be considered. It is shown that, in the proposed MAP estimation approach, those modifications can be included in a simple manner. The thesis also considers the impact of different levels of training on the overall parameter estimation problem. In particular, it is shown that the estimates of phase noise bandwidth are generally poor, and, hence, that high levels of training are required to obtain accurate channel estimates.
8

Αγγελόπουλος, Aπόστολος. "Επαναληπτική αποκωδικοποίηση χωροχρονικών κωδικών (space-time codes) σε συστήματα ορθογώνιας πολυπλεξίας φερουσών: αναπαράσταση δεδομένων και πολυπλοκότητα." Thesis, 2006. http://nemertes.lis.upatras.gr/jspui/handle/10889/482.

Full text
Abstract:
The use of multiple antennas is an essential issue in telecommunications nowadays, and multiple-input multiple-output (MIMO) systems have attracted a lot of attention in wireless research. It has been shown that the capacity of wireless communication systems can be improved by using antenna diversity, that is, independent channels between transmitter and receiver. In this thesis, we study coding techniques that exploit spatial diversity by using space-time codes. In particular, we focus on space-time block coding (STBC) at the transmitter, because of its implementation simplicity and its ability to support multiple antennas at the base station. The analysis considers systems that use orthogonal frequency-division multiplexing (OFDM), chosen because it supports high data rates and behaves very well in frequency-selective fading channels. Moreover, we study iterative decoding algorithms, focusing on the well-known maximum a posteriori (MAP) algorithm; we analyze its steps as well as its modifications and improvements. Iterative decoding algorithms are a cornerstone of decoding forward error correction codes, such as convolutional codes, bringing system performance close to the Shannon limit. Finally, several implementations are presented that combine iterative decoding algorithms with space-time block coding over multiple antennas and OFDM. We compare the performance of the corresponding systems and investigate modifications that keep the implementation complexity low. For a thorough evaluation, we also compare implementations using fixed-point arithmetic and draw a series of conclusions from the resulting experimental measurements.

Book chapters on the topic "Maximum a posteriori (MAP) framework"

1

Evensen, Geir, Femke C. Vossepoel, and Peter Jan van Leeuwen. "Maximum a Posteriori Solution." In Springer Textbooks in Earth Sciences, Geography and Environment, 27–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96709-3_3.

Full text
Abstract:
We will now introduce a fundamental approximation used in most practical data-assimilation methods, namely the definition of Gaussian priors. This approximation simplifies the Bayesian posterior, which allows us to compute the maximum a posteriori (MAP) estimate and sample from the posterior pdf. This chapter will introduce the Gaussian approximation and then discuss the Gauss–Newton method for finding the MAP estimate. This method is the starting point for many of the data-assimilation algorithms discussed in the following chapters.
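
A hedged sketch of the Gauss-Newton iteration for the MAP estimate under Gaussian priors, on an invented two-dimensional toy problem; the observation operator g, its Jacobian G, and the covariances are ours, not the book's data-assimilation setup:

    import numpy as np

    def gauss_newton_map(xb, Binv, y, Rinv, g, G, x0, iters=20):
        # Minimize J(x) = 0.5 (x-xb)' Binv (x-xb) + 0.5 (y-g(x))' Rinv (y-g(x)).
        x = x0.copy()
        for _ in range(iters):
            Gk = G(x)                                    # Jacobian of g at x
            H = Binv + Gk.T @ Rinv @ Gk                  # Gauss-Newton Hessian
            grad = Gk.T @ Rinv @ (y - g(x)) - Binv @ (x - xb)
            x = x + np.linalg.solve(H, grad)
        return x

    g = lambda x: np.array([x[0] ** 2, x[0] + x[1]])     # toy nonlinear observation
    G = lambda x: np.array([[2 * x[0], 0.0], [1.0, 1.0]])
    y = g(np.array([1.0, 0.5])) + 0.05                   # slightly perturbed data
    x_map = gauss_newton_map(np.zeros(2), np.eye(2), y, 10.0 * np.eye(2), g, G, np.ones(2))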
2

"Maximum A Posteriori (MAP)." In Encyclopedia of Biometrics, 963. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-73003-5_1071.

Full text
3

S., Shiyamala, Vijay Soorya J., Sanjay P. S., and Sathappan K. "Network-on-Chip for Low Power MAP Decoder Using Folded Technique and CORDIC Algorithm for 5G Network." In Design Methodologies and Tools for 5G Network Development and Application, 96–108. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4610-9.ch005.

Full text
Abstract:
With different constraint lengths (K), time scales, and code rates, a modified MAP (maximum a posteriori) decoder architecture using a folding technique, which has a linear lifetime chart, is developed, and dedicated turbo codes are placed in a network-on-chip for various wireless applications. The folding technique reduces the number of latches used in the interleaving and deinterleaving units to M-2, where M is the number of rows, by adopting a forward and backward resource-utilization method, and the end-to-end delay is reduced to 2M. Replacing the conventional full adder with a high-speed adder based on a 2 x 1 multiplexer for computing the forward and backward state metrics reduces power consumption effectively. Similarly, the CORDIC (COordinate Rotation DIgital Computer) algorithm is used to calculate the LLR value and yields a highly precise result with low computational complexity by means of shift-and-add operations only.
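
For reference, the rotation-mode CORDIC recurrence mentioned here computes cosine and sine with shift-and-add operations; a floating-point Python sketch in which the multiplications by 2**-i stand in for the hardware shifts (the chapter's fixed-point, folded design is not reproduced):

    import math

    def cordic_cos_sin(theta, n=24):
        # Drive the residual angle z to zero by micro-rotations; K undoes the gain.
        K = 1.0
        for i in range(n):
            K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = K, 0.0, theta
        for i in range(n):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * math.atan(2.0 ** -i)
        return x, y       # ~ (cos(theta), sin(theta)) for |theta| < ~1.74 rad

    print(cordic_cos_sin(0.6), (math.cos(0.6), math.sin(0.6)))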
4

Zribi, Amin, Sonia Zaibi, Ramesh Pyndiah, and Ammar Bouallègue. "Chase-Like Decoding of Arithmetic Codes with Applications." In Intelligent Computer Vision and Image Processing, 27–41. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3906-5.ch003.

Full text
Abstract:
Motivated by recent results in joint source/channel (JSC) coding and decoding, this paper addresses the problem of soft-input decoding of arithmetic codes (AC). A new length-constrained scheme for JSC decoding of these codes is proposed based on the maximum a posteriori (MAP) sequence estimation criterion. The new decoder, called the Chase-like arithmetic decoder, is assumed to know the lengths of the source symbol sequence and of the compressed bit-stream. First, packet error rates (PER) for transmission over an additive white Gaussian noise (AWGN) channel are investigated; compared to classical arithmetic decoding, the Chase-like decoder shows significant improvements. Results are then provided for Chase-like decoding in image compression and transmission over an AWGN channel, where both lossy and lossless image compression schemes are studied. As a final application, the serial concatenation of an AC with a convolutional code is considered; iterative decoding between the two decoders shows substantial performance improvement over the iterations.
5

Batini, Carlo, Anisa Rula, Monica Scannapieco, and Gianluigi Viscusi. "From Data Quality to Big Data Quality." In Big Data, 1934–56. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9840-6.ch089.

Full text
Abstract:
This chapter investigates the evolution of data quality issues from traditional structured data managed in relational databases to Big Data. In particular, it examines the nature of the relationship between data quality and several research coordinates that are relevant in Big Data, such as the variety of data types, data sources, and application domains, focusing on maps, semi-structured texts, linked open data, sensors and sensor networks, and official statistics. A set of structural characteristics is identified, and a systematization of the a posteriori correlation between them and quality dimensions is provided. Finally, Big Data quality issues are considered within a conceptual framework suitable for mapping the evolution of the quality paradigm according to three core coordinates that are significant in the Big Data phenomenon: the data type considered, the source of data, and the application domain. The framework thus allows ascertaining the relevant changes in data quality emerging with the Big Data phenomenon through an integrative and theoretical literature review.

Conference papers on the topic "Maximum a posteriori (MAP) framework"

1

Liu, Risheng, Zi Li, Yuxi Zhang, Xin Fan, and Zhongxuan Luo. "Bi-level Probabilistic Feature Learning for Deformable Image Registration." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/101.

Full text
Abstract:
We address the challenging issue of deformable registration that robustly and efficiently builds dense correspondences between images. Traditional approaches based on iterative energy optimization typically incur an expensive computational load. Recent learning-based methods can efficiently predict deformation maps by incorporating learnable deep networks. Unfortunately, these deep networks are designed to learn deterministic features for classification tasks, which are not necessarily optimal for registration. In this paper, we propose a novel bi-level optimization model that enables jointly learning deformation maps and features for image registration. The bi-level model takes the energy for deformation computation as the upper-level optimization and formulates the maximum a posteriori (MAP) estimation for features as the lower-level optimization. Further, we design learnable deep networks to simultaneously optimize the cooperative bi-level model, yielding robust and efficient registration. The deep networks derived from our bi-level optimization constitute an unsupervised end-to-end framework for learning both features and deformations. Extensive experiments on image-to-atlas and image-to-image deformable registration with 3D brain MR datasets demonstrate that we achieve state-of-the-art performance in terms of accuracy, efficiency, and robustness.
2

Chen, Yan, Oscar Au, Xiaopeng Fan, Liwei Guo, and Peter H. W. Wong. "Maximum a Posteriori Based (MAP-Based) Video Denoising via Rate Distortion Optimization." In 2007 IEEE International Conference on Multimedia and Expo. IEEE, 2007. http://dx.doi.org/10.1109/icme.2007.4285054.

Full text
3

Chakraborty, Debamitra, Bradley N. Mills, Jing Cheng, Scott A. Gerber, and Roman Sobolewski. "Maximum A-Posteriori Probability (MAP) Terahertz Parameter Extraction for Pancreatic Ductal Adenocarcinoma." In 2022 47th International Conference on Infrared, Millimeter and Terahertz Waves (IRMMW-THz). IEEE, 2022. http://dx.doi.org/10.1109/irmmw-thz50927.2022.9895919.

Full text
4

Poore, Aubrey B., Benjamin J. Slocumb, Brian J. Suchomel, Fritz H. Obermeyer, Shawn M. Herman, and Sabino M. Gadaleta. "Batch maximum likelihood (ML) and maximum a posteriori (MAP) estimation with process noise for tracking applications." In Optical Science and Technology, SPIE's 48th Annual Meeting, edited by Oliver E. Drummond. SPIE, 2003. http://dx.doi.org/10.1117/12.506442.

Full text
5

Bao, Q., B. Bai, Q. Li, A. M. Smith, N. Vu, and A. Chatziioannou. "Evaluation of the maximum a posteriori (MAP) reconstruction on a microPET Focus220 scanner." In 2007 IEEE Nuclear Science Symposium Conference Record. IEEE, 2007. http://dx.doi.org/10.1109/nssmic.2007.4436904.

Full text
6

Zhou, Yiqing. "Radio environment map based maximum a posteriori Doppler shift estimation for LTE-R." In 2014 International Workshop on High Mobility Wireless Communications (HMWC). IEEE, 2014. http://dx.doi.org/10.1109/hmwc.2014.7000241.

Full text
7

Pooja, S., M. Vivek, and Joonki Paik. "Space invariant deconvolution using Maximum A Posteriori (MAP) Estimation for imaging inverse problem." In 2016 International Conference on Electronics, Information, and Communications (ICEIC). IEEE, 2016. http://dx.doi.org/10.1109/elinfocom.2016.7562936.

Full text
8

Ozcelik, Taner, and Aggelos K. Katsaggelos. "Low bit-rate video compression based on maximum a posteriori (MAP) recovery techniques." In Visual Communications and Image Processing '94, edited by Aggelos K. Katsaggelos. SPIE, 1994. http://dx.doi.org/10.1117/12.185883.

Full text
9

Blunt, Shannon D., and K. C. Ho. "An iterative maximum a posteriori (MAP) estimator for multiuser detection in synchronous CDMA systems." In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-02). IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.1005146.

Full text
10

Blunt, Shannon D., and K. C. Ho. "An iterative maximum a posteriori (MAP) estimator for multiuser detection in synchronous CDMA systems." In Proceedings of ICASSP '02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.5745108.

Full text

Reports on the topic "Maximum a posteriori (MAP) framework"

1

Wilson, Gregory L., Andrew C. Lindgren, Thomas M. Fitzgerald, Pamela S. Smith, and Russell C. Hardie. Maximum a Posteriori (MAP) Estimates for Hyperspectral Image Enhancement. Fort Belvoir, VA: Defense Technical Information Center, September 2004. http://dx.doi.org/10.21236/ada429581.

Full text
2

Anderson, Timothy, A. R. Aminzadeh, Jennifer Drexler, and Wade Shen. Improved Phrase Translation Modeling Using Maximum A-Posteriori (MAP) Adaptation. Fort Belvoir, VA: Defense Technical Information Center, July 2013. http://dx.doi.org/10.21236/ada604450.

Full text
3

Sinclair, Samantha, and Sandra LeGrand. Reproducibility assessment and uncertainty quantification in subjective dust source mapping. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41523.

Full text
Abstract:
Accurate dust-source characterizations are critical for effectively modeling dust storms. A previous study developed an approach to manually map dust plume-head point sources in a geographic information system (GIS) framework using Moderate Resolution Imaging Spectroradiometer (MODIS) imagery processed through dust-enhancement algorithms. With this technique, the location of a dust source is digitized and recorded if an analyst observes an unobscured plume head in the imagery. Because airborne dust must be sufficiently elevated for overland dust-enhancement algorithms to work, this technique may include up to 10 km in digitized dust-source location error due to downwind advection. However, the potential for error in this method due to analyst subjectivity has never been formally quantified. In this study, we evaluate a version of the methodology adapted to better enable reproducibility assessments amongst multiple analysts to determine the role of analyst subjectivity on recorded dust source location error. Four analysts individually mapped dust plumes in Southwest Asia and Northwest Africa using five years of MODIS imagery collected from 15 May to 31 August. A plume-source location is considered reproducible if the maximum distance between the analyst point-source markers for a single plume is ≤10 km. Results suggest analyst marker placement is reproducible; however, additional analyst subjectivity-induced error (7 km determined in this study) should be considered to fully characterize locational uncertainty. Additionally, most of the identified plume heads (> 90%) were not marked by all participating analysts, which indicates dust source maps generated using this technique may differ substantially between users.
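
A hedged sketch of the reproducibility test described here, treating analyst marker coordinates as planar kilometers for brevity; real use would require geodesic distances between latitude/longitude markers:

    import numpy as np
    from itertools import combinations

    def reproducible(markers_km, tol_km=10.0):
        # A plume source counts as reproducible if the maximum pairwise
        # distance between analyst markers is <= tol_km.
        dmax = max(np.linalg.norm(np.subtract(a, b))
                   for a, b in combinations(markers_km, 2))
        return dmax <= tol_km

    print(reproducible([(0.0, 0.0), (4.0, 3.0), (7.0, 2.0)]))   # True: dmax ~ 7.3 km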
4

Sinclair, Samantha, and Sandra LeGrand. Reproducibility assessment and uncertainty quantification in subjective dust source mapping. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41542.

Full text
Abstract:
Accurate dust-source characterizations are critical for effectively modeling dust storms. A previous study developed an approach to manually map dust plume-head point sources in a geographic information system (GIS) framework using Moderate Resolution Imaging Spectroradiometer (MODIS) imagery processed through dust-enhancement algorithms. With this technique, the location of a dust source is digitized and recorded if an analyst observes an unobscured plume head in the imagery. Because airborne dust must be sufficiently elevated for overland dust-enhancement algorithms to work, this technique may include up to 10 km in digitized dust-source location error due to downwind advection. However, the potential for error in this method due to analyst subjectivity has never been formally quantified. In this study, we evaluate a version of the methodology adapted to better enable reproducibility assessments amongst multiple analysts to determine the role of analyst subjectivity on recorded dust source location error. Four analysts individually mapped dust plumes in Southwest Asia and Northwest Africa using five years of MODIS imagery collected from 15 May to 31 August. A plume-source location is considered reproducible if the maximum distance between the analyst point-source markers for a single plume is ≤10 km. Results suggest analyst marker placement is reproducible; however, additional analyst subjectivity-induced error (7 km determined in this study) should be considered to fully characterize locational uncertainty. Additionally, most of the identified plume heads (> 90%) were not marked by all participating analysts, which indicates dust source maps generated using this technique may differ substantially between users.