Ready-made bibliography on the topic "A posteriori probability decoding"

Create accurate references in the APA, MLA, Chicago, Harvard, and many other styles

Select the source type:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "A posteriori probability decoding".

An "Add to bibliography" button is available next to every work in the listing. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, whenever the relevant parameters are available in the metadata.

Journal articles on the topic "A posteriori probability decoding"

1

Arar, Maher, Claude D'Amours, and Abbas Yongacoglu. "Simplified LLRs for the Decoding of Single Parity Check Turbo Product Codes Transmitted Using 16QAM". Research Letters in Communications 2007 (2007): 1–4. http://dx.doi.org/10.1155/2007/53517.

Abstract:
Iterative soft-decision decoding algorithms require channel log-likelihood ratios (LLRs) which, under 16QAM modulation, are computationally intensive to obtain. We therefore derive four simple approximate LLR expressions. When the maximum a posteriori probability algorithm is used for decoding single parity check turbo product codes (SPC/TPCs), these LLRs can be simplified even further. We show through computer simulations that the bit-error-rate performance of (8,7)^2 and (8,7)^3 SPC/TPCs, transmitted using 16QAM and decoded using the maximum a posteriori algorithm with our simplified LLRs, is nearly identical to that achieved using the exact LLRs.
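
The max-log shortcut behind such simplified LLRs is easy to illustrate. The following Python sketch, assuming a generic Gray-mapped 16QAM constellation (not the paper's specific mapping or its four approximations), compares the exact log-sum-exp bit LLRs with their max-log approximations:

```python
import numpy as np
from itertools import product

# Gray-coded 16QAM built from two Gray-coded 4-PAM axes:
# bits (b0, b1) select the in-phase amplitude, (b2, b3) the quadrature one.
PAM = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
POINTS = {bits: complex(PAM[bits[:2]], PAM[bits[2:]])
          for bits in product((0, 1), repeat=4)}

def bit_llrs(y, sigma2, exact=True):
    """Per-bit channel LLRs log P(b=0 | y) / P(b=1 | y) for one sample y."""
    metric = {b: -abs(y - s) ** 2 / (2 * sigma2) for b, s in POINTS.items()}
    llrs = []
    for i in range(4):
        m0 = [m for b, m in metric.items() if b[i] == 0]
        m1 = [m for b, m in metric.items() if b[i] == 1]
        if exact:   # exact LLR: log-sum-exp over the 8 points of each set
            llrs.append(np.logaddexp.reduce(m0) - np.logaddexp.reduce(m1))
        else:       # max-log approximation: nearest point of each set only
            llrs.append(max(m0) - max(m1))
    return llrs

y = 2.7 - 0.9j                               # a noisy received sample
print(bit_llrs(y, sigma2=1.0, exact=True))   # exact LLRs
print(bit_llrs(y, sigma2=1.0, exact=False))  # max-log approximations
```
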
2

Liu, Jia, Mingyu Zhang, Chaoyong Wang, Rongjun Chen, Xiaofeng An, and Yufei Wang. "Upper Bound on the Bit Error Probability of Systematic Binary Linear Codes via Their Weight Spectra". Discrete Dynamics in Nature and Society 2020 (29 January 2020): 1–11. http://dx.doi.org/10.1155/2020/1469090.

Abstract:
In this paper, an upper bound on the probability of maximum a posteriori (MAP) decoding error for systematic binary linear codes over additive white Gaussian noise (AWGN) channels is proposed. The bound on the bit error probability is derived within the framework of Gallager's first bounding technique (GFBT), where the Gallager region is defined as an irregular high-dimensional geometry by means of a list decoding algorithm. The proposed bound requires only knowledge of the weight spectra, which is helpful when the input-output weight enumerating function (IOWEF) is not available. Numerical results show that the proposed bound on the bit error probability matches the maximum-likelihood (ML) decoding simulations well, especially in the high signal-to-noise ratio (SNR) region, and is tighter than the recently proposed Ma bound.
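
For context, the simplest bound computable from a weight spectrum alone is the classical union bound on ML decoding over the AWGN channel, the baseline that GFBT-style bounds tighten. A minimal sketch, using the standard Hamming(7,4) weight spectrum as an example:

```python
import math

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(spectrum, rate, ebno_db):
    """Union bound on ML word-error probability, BPSK over AWGN:
    P_e <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    return sum(a_d * q(math.sqrt(2 * d * rate * ebno))
               for d, a_d in spectrum.items())

# Weight spectrum of the Hamming(7,4) code: 7 codewords of weight 3,
# 7 of weight 4, and 1 of weight 7 (the all-zero codeword is excluded).
hamming74 = {3: 7, 4: 7, 7: 1}
for snr_db in (2, 4, 6, 8):
    print(snr_db, union_bound(hamming74, rate=4 / 7, ebno_db=snr_db))
```
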
3

Shrestha, Rahul, and Roy Paily. "Memory-Reduced Maximum A Posteriori Probability Decoding for High-Throughput Parallel Turbo Decoders". Circuits, Systems, and Signal Processing 35, no. 8 (22 September 2015): 2832–54. http://dx.doi.org/10.1007/s00034-015-0168-4.

4

Shafieipour, Mohammad, Heng-Siong Lim, and Teong-Chee Chuah. "Decoding of Turbo Codes in Symmetric Alpha-Stable Noise". ISRN Signal Processing 2011 (29 March 2011): 1–7. http://dx.doi.org/10.5402/2011/683972.

Abstract:
This paper investigates the decoding of turbo codes in impulsive symmetric α-stable (SαS) noise. Because no closed-form expression exists for the probability density function (pdf) of α-stable processes, a numerically computed SαS pdf is used to derive the branch transition probability (btp) for the maximum a posteriori turbo decoder. Results show that in Gaussian noise the turbo decoder achieves similar performance using both the conventional and the proposed btps, but in impulsive channels the turbo decoder with the proposed btp substantially outperforms the one using the conventional btp. Results also confirm that the turbo decoder incorporating the proposed btp outperforms the existing Cauchy-based turbo decoder in non-Cauchy impulsive noise, while the two decoders perform similarly in Cauchy noise.
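
How strongly the branch metric depends on the noise model can be seen in the α = 1 (Cauchy) case, the one impulsive SαS case with a closed-form pdf. A sketch for antipodal signalling with illustrative parameters (not the paper's numerical SαS construction):

```python
import numpy as np

def llr_gauss(y, sigma2=1.0):
    """log p(y|+1)/p(y|-1) for antipodal signalling in Gaussian noise."""
    return 2 * y / sigma2

def llr_cauchy(y, gamma=1.0):
    """Same LLR when the noise is Cauchy (S-alpha-S with alpha = 1)."""
    return np.log((gamma ** 2 + (y + 1) ** 2) / (gamma ** 2 + (y - 1) ** 2))

# Large outliers: the Gaussian metric grows without bound and can dominate
# the decoder, while the Cauchy metric saturates, which makes it robust.
for y in (0.5, 2.0, 10.0):
    print(y, llr_gauss(y), llr_cauchy(y))
```
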
5

Wang, Li Na, and Xiao Liu. "Improved BP Decoding Algorithm for LDPC Codes". Advanced Materials Research 846-847 (November 2013): 925–28. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.925.

Abstract:
In this paper, an improved belief propagation (BP) decoding algorithm is proposed for low-density parity-check codes. In the proposed decoding process, error bits can be detected again after the hard decision of the conventional BP decoding algorithm. The detection criterion is based on the characteristics of the check matrix and on the difference between the prior and posterior probabilities. Simulation results demonstrate that the improved BP decoding algorithm outperforms the conventional BP decoding algorithm.
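
For reference, the conventional sum-product (BP) baseline that such schemes modify, with the usual hard decision and syndrome check, looks as follows. This is a toy sketch: the parity-check matrix is illustrative, and the paper's re-detection step is not included.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],           # toy parity-check matrix:
              [0, 1, 1, 0, 1, 0],           # 3 checks on 6 bits
              [1, 0, 0, 0, 1, 1]])

def bp_decode(llr_ch, iters=20):
    m, n = H.shape
    checks = [np.flatnonzero(H[c]) for c in range(m)]    # bits in each check
    vars_ = [np.flatnonzero(H[:, v]) for v in range(n)]  # checks on each bit
    msg_vc = {(c, v): llr_ch[v] for c in range(m) for v in checks[c]}
    msg_cv = {}
    for _ in range(iters):
        for c in range(m):                   # check-node (tanh-rule) update
            for v in checks[c]:
                others = [msg_vc[c, u] for u in checks[c] if u != v]
                p = np.prod(np.tanh(np.array(others) / 2.0))
                msg_cv[c, v] = 2.0 * np.arctanh(np.clip(p, -0.999999, 0.999999))
        post = np.array([llr_ch[v] + sum(msg_cv[c, v] for c in vars_[v])
                         for v in range(n)]) # a posteriori LLR of each bit
        hard = (post < 0).astype(int)        # hard decision
        if not (H @ hard % 2).any():         # stop once all checks hold
            break
        for c in range(m):                   # variable-node update
            for v in checks[c]:
                msg_vc[c, v] = post[v] - msg_cv[c, v]
    return hard, post

print(bp_decode(np.array([2.1, -0.4, 1.3, 0.9, -1.7, 1.1])))
```
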
6

PARK, HEE-SEON, BONG-KEE SIN, JONGSUB MOON, and SEONG-WHAN LEE. "A 2-D HMM METHOD FOR OFFLINE HANDWRITTEN CHARACTER RECOGNITION". International Journal of Pattern Recognition and Artificial Intelligence 15, no. 01 (February 2001): 91–105. http://dx.doi.org/10.1142/s0218001401000757.

Abstract:
In this paper we consider a hidden Markov mesh random field (HMMRF) for character recognition. The model consists of a "hidden" Markov mesh random field (MMRF) and an overlying probabilistic observation function of the MMRF. Just like the 1-D HMM, the hidden layer is characterized by the initial and transition probability distributions, and the observation layer is defined by distribution functions for vector-quantized (VQ) observations. The HMMRF-based method consists of two phases: decoding and training. The decoding and training algorithms are developed using dynamic programming and maximum likelihood estimation methods. To accelerate the computation in both phases, we employed a look-ahead scheme based on a maximum marginal a posteriori probability criterion for a third-order HMMRF. Tested on a large-set handwritten Korean Hangul character database, the model showed a promising result: up to 87.2% recognition rate with an 8-state HMMRF and 128 VQ levels.
7

Kim, Sang Wu, Taha Khalaf, and Sangmun Kim. "MAP Detection of Misbehaving Relay in Wireless Multiple Access Relay Networks". IEEE Communications Letters 15, no. 3 (March 2011): 340–42. http://dx.doi.org/10.1109/lcomm.2011.012511.101323.

Abstract:
We propose the maximum a posteriori (MAP) detection of the misbehaving relay that injects false data or adds channel errors into the network encoder in multiple access relay networks. The proposed scheme does not require sending extra bits at the source and is optimal in the sense of minimizing the probability of incorrect detection. We derive the probability of false alarm and misdetection, taking into account the lossy nature of wireless links. The side information regarding the presence of relay misbehavior is exploited at the decoder to mitigate the relay misbehavior and enhance the reliability of decoding.
8

Ravi Kumar, Ch, and K. Padmaraju. "Hard Decision Decoding Performance Improved Using Turbo Product Codes". International Journal of Engineering & Technology 7, no. 3.12 (20 July 2018): 228. http://dx.doi.org/10.14419/ijet.v7i3.12.16030.

Abstract:
The performance of soft decision decoding, whose design is complex, is superior to that of hard decision decoding. In this paper, we propose a turbo product code with a bit-flip algorithm to improve the performance of hard decision decoding. The performance of hard decision decoding is improved with low complexity using multidimensional turbo product codes. The reliability of decoding in a communication system to detect and correct errors is discussed. Maximum a posteriori probability (MAP) decoding is employed to improve the hard decision performance of turbo product codes with multiple dimensions. Our results include comparisons across multiple dimensions (2D, 3D, 4D, and 5D) and across the number of iterations in soft and hard decision decoding.
9

Raza, Hasnain, Syed Azhar Ali Zaidi, Aamir Rashid, and Shafiq Haider. "An area efficient and high throughput implementation of layered min-sum iterative construction a posteriori probability LDPC decoder". PLOS ONE 16, no. 3 (29 March 2021): e0249269. http://dx.doi.org/10.1371/journal.pone.0249269.

Abstract:
Area-efficient and high-speed forward error correction decoders are in demand for many high-speed next-generation communication standards. This paper explores a low-complexity decoding algorithm for low-density parity-check codes, called min-sum iterative construction a posteriori probability (MS-IC-APP), for this purpose. We analyze the error performance of MS-IC-APP for a (648,1296) regular QC-LDPC code and propose an area- and throughput-optimized hardware implementation of MS-IC-APP. We propose layered scheduling of MS-IC-APP and perform further architecture-level optimizations to reduce the area and increase the throughput of the decoder. Synthesis results show 6.95 times less area and 4 times higher throughput compared to the standard min-sum decoder. The area and throughput are also comparable to improved variants of hard-decision bit-flipping (BF) decoders, whereas simulation results show a coding gain of 2.5 over the best implementation of the BF decoder in terms of error performance.
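
The "min-sum" ingredient is the standard check-node simplification that replaces the exact tanh rule with a sign-and-minimum computation. A sketch of that generic update (of generic min-sum, not of the MS-IC-APP construction itself):

```python
import numpy as np

def minsum_check_update(in_llrs):
    """Outgoing check-node message on each edge from the other incoming LLRs:
    product of signs times minimum magnitude (the min-sum approximation)."""
    in_llrs = np.asarray(in_llrs, dtype=float)
    out = np.empty_like(in_llrs)
    for i in range(len(in_llrs)):
        others = np.delete(in_llrs, i)   # exclude the edge's own message
        out[i] = np.prod(np.sign(others)) * np.abs(others).min()
    return out

print(minsum_check_update([2.1, -0.4, 1.3]))   # -> [-0.4  1.3 -0.4]
```
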
10

Chen, Xianqing, and Lenan Wu. "Nonlinear Demodulation and Channel Coding in EBPSK Scheme". Scientific World Journal 2012 (2012): 1–7. http://dx.doi.org/10.1100/2012/180469.

Abstract:
Extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it harder to obtain the posterior probability needed for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach to nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, is used to obtain PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyze the effect of obtaining the posterior probability with different methods and different sampling rates. We show that the SVM method has more advantages under bad conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining posterior probabilities for LDPC decoding.

Doctoral dissertations on the topic "A posteriori probability decoding"

1

Griffiths, Wayne Bradley. "On a posteriori probability decoding of linear block codes over discrete channels". University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0156.

Abstract:
One of the facets of the mobile or wireless environment is that errors quite often occur in bursts. Thus, strong codes are required to provide protection against such errors. This in turn motivates the employment of decoding algorithms which are simple to implement, yet are still able to attempt to take the dependence or memory of the channel model into account in order to give optimal decoding estimates. Furthermore, such algorithms should be able to be applied for a variety of channel models and signalling alphabets. The research presented within this thesis describes a number of algorithms which can be used with linear block codes. Given the received word, these algorithms determine the symbol which was most likely transmitted, on a symbol-by-symbol basis. Due to their relative simplicity, a collection of algorithms for memoryless channels is reported first. This is done to establish the general style and principles of the overall collection. The concept of matrix diagonalisation may or may not be applied, resulting in two different types of procedure. Ultimately, it is shown that the choice between them should be motivated by whether storage space or computational complexity has the higher priority. As with all other procedures explained herein, the derivation is first performed for a binary signalling alphabet and then extended to fields of prime order. These procedures form the paradigm for algorithms used in conjunction with finite state channel models, where errors generally occur in bursts. In such cases, the necessary information is stored in matrices rather than as scalars. Finally, by analogy with the weight polynomials of a code and its dual as characterised by the MacWilliams identities, new procedures are developed for particular types of Gilbert-Elliott channel models. Here, the calculations are derived from three parameters which profile the occurrence of errors in those models. The decoding is then carried out using polynomial evaluation rather than matrix multiplication. Complementing this theory are several examples detailing the steps required to perform the decoding, as well as a collection of simulation results demonstrating the practical value of these algorithms.
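
The memoryless baseline such symbol-by-symbol algorithms start from can be made concrete with brute-force APP decoding of a tiny binary linear code over a BSC. In the sketch below the generator matrix is illustrative, and exhaustive enumeration of the codebook is only viable for toy codes; the point of the thesis is to do this efficiently, and for channels with memory.

```python
import numpy as np
from itertools import product

G = np.array([[1, 0, 0, 1, 1],              # toy (5,3) generator matrix
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]])
codebook = [tuple(np.dot(u, G) % 2) for u in product((0, 1), repeat=3)]

def app_decode(r, p):
    """P(c_i = 1 | r) for each position i, over a BSC with crossover p,
    assuming a uniform prior over the codewords."""
    posts = np.zeros(len(r))
    total = 0.0
    for c in codebook:
        d = sum(ci != ri for ci, ri in zip(c, r))   # Hamming distance
        lik = p ** d * (1 - p) ** (len(r) - d)      # BSC likelihood
        total += lik
        posts += lik * np.array(c, dtype=float)
    return posts / total

r = (1, 1, 0, 0, 1)                          # received word
probs = app_decode(r, p=0.1)
print(probs, (probs > 0.5).astype(int))      # APPs and symbol-wise MAP decisions
```
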
2

Blad, Anton. "Efficient Decoding Algorithms for Low-Density Parity-Check Codes". Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2794.

Abstract:
Low-density parity-check codes have recently received much attention because of their excellent performance and the availability of a simple iterative decoder. The decoder, however, requires large amounts of memory, which makes memory consumption a problem.

We investigate a new decoding scheme for low-density parity-check codes to address this problem. The basic idea is to define a reliability measure and a threshold, and to stop updating the messages for a bit whenever its reliability is higher than the threshold. We also consider some modifications to this scheme, including a dynamic threshold more suitable for codes with cycles, and a scheme with soft thresholds which allows the possibility of removing a decision that has proved wrong.

By exploiting the bits' different rates of convergence we are able to achieve an efficiency of up to 50% at a bit error rate of less than 10^-5. The efficiency should roughly correspond to the power consumption of a hardware implementation of the algorithm.

3

Blasco-Serrano, Ricardo, Ragnar Thobaben, Vishwambhar Rathi, and Mikael Skoglund. "Polar codes for compress-and-forward in binary relay channels". KTH, Kommunikationsteori, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-43727.

Abstract:
We construct polar codes for binary relay channels with orthogonal receiver components. We show that polar codes achieve the cut-set bound when the channels are symmetric and the relay-destination link supports compress-and-forward relaying based on Slepian-Wolf coding. More generally, we show that a particular version of the compress-and-forward rate is achievable using polar codes for Wyner-Ziv coding. In both cases the block error probability can be bounded as O(2^(-N^β)) for 0 < β < 1/2 and sufficiently large block length N.


4

Xie, Li. "Finite horizon robust state estimation for uncertain finite-alphabet hidden Markov models". Awarded by: University of New South Wales - Australian Defence Force Academy, School of Information Technology and Electrical Engineering, 2004. http://handle.unsw.edu.au/1959.4/38664.

Abstract:
In this thesis, we consider a robust state estimation problem for discrete-time, homogeneous, first-order, finite-state finite-alphabet hidden Markov models (HMMs). Based on Kolmogorov's Theorem on the existence of a process, we first present the Kolmogorov model for the HMMs under consideration. A new change of measure is introduced. The statistical properties of the Kolmogorov representation of an HMM are discussed on the canonical probability space. A special Kolmogorov measure is constructed. Meanwhile, the ergodicity of two expanded Markov chains is investigated. In order to describe the uncertainty of HMMs, we study probability distance problems based on the Kolmogorov model of HMMs. Using a change of measure technique, the relative entropy and the relative entropy rate as probability distances between HMMs, are given in terms of the HMM parameters. Also, we obtain a new expression for a probability distance considered in the existing literature such that we can use an information state method to calculate it. Furthermore, we introduce regular conditional relative entropy as an a posteriori probability distance to measure the discrepancy between HMMs when a realized observation sequence is given. A representation of the regular conditional relative entropy is derived based on the Radon-Nikodym derivative. Then a recursion for the regular conditional relative entropy is obtained using an information state method. Meanwhile, the well-known duality relationship between free energy and relative entropy is extended to the case of regular conditional relative entropy given a sub-σ-algebra. Finally, regular conditional relative entropy constraints are defined based on the study of the probability distance problem. Using a Lagrange multiplier technique and the duality relationship for regular conditional relative entropy, a finite horizon robust state estimator for HMMs with regular conditional relative entropy constraints is derived. A complete characterization of the solution to the robust state estimation problem is also presented.
5

Elfverson, Daniel. "Multiscale Methods and Uncertainty Quantification". Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-262354.

Abstract:
In this thesis we consider two great challenges in computer simulations of partial differential equations: multiscale data, varying over multiple scales in space and time, and data uncertainty, due to lack of or inexact measurements. We develop a multiscale method based on a coarse scale correction, using localized fine scale computations. We prove that the error in the solution produced by the multiscale method decays independently of the fine scale variation in the data or the computational domain. We consider the following aspects of multiscale methods: continuous and discontinuous underlying numerical methods, adaptivity, convection-diffusion problems, Petrov-Galerkin formulation, and complex geometries. For uncertainty quantification problems we consider the estimation of p-quantiles and failure probability. We use spatial a posteriori error estimates to develop and improve variance reduction techniques for Monte Carlo methods. We improve standard Monte Carlo methods for computing p-quantiles and multilevel Monte Carlo methods for computing failure probability.
6

Guenda, Kenza. "On algebraic geometric codes and some related codes". Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2033.

Abstract:
Thesis (MSc (Mathematics))--University of Stellenbosch, 2006.
The main topic of this thesis is the construction of algebraic geometric codes (Goppa codes) and their decoding by list-decoding, which allows one to correct beyond half of the minimum distance. We also consider the list-decoding of Reed–Solomon codes, as they are a subclass of the Goppa codes, and the determination of the parameters of non-primitive BCH codes. AMS Subject Classification: 4B05, 94B15, 94B35, 94B27, 11T71, 94B65, B70. Keywords: Linear codes, cyclic codes, BCH codes, Reed–Solomon codes, list-decoding, Algebraic Geometric codes, decoding, bounds on codes, error probability.
7

Tamatoro, Johng-Ay. "Approche stochastique de l'analyse du « residual moveout » pour la quantification de l'incertitude dans l'imagerie sismique". Thesis, Pau, 2014. http://www.theses.fr/2014PAUU3044/document.

Abstract:
The main goal of seismic imaging for oil exploration and production, as it is performed today, is to provide a representative image of the first few kilometres of the subsurface, allowing the localization of the geological structures that form the reservoirs in which hydrocarbon resources are trapped and an accurate estimation of those resources. To characterize these reservoirs and allow the production of hydrocarbons, the geophysicist uses depth migration, a seismic imaging tool that converts the time data recorded during seismic acquisition surveys into depth images, which are then exploited by the reservoir engineer with the help of the seismic interpreter and the geologist. During depth migration, seismic events (reflectors, diffractions, faults, ...) are moved to their correct positions in space. A relevant depth migration requires an accurate velocity model. The accuracy of the velocity model used for a migration is judged through the horizontal alignment of the events present on the Common Image Gathers (CIGs); often the CIGs are computed in the surface-offset (distance between shot point and receiver) domain, and their flatness serves as the criterion of velocity-model correctness. Non-horizontal events (residual moveout, RMO) on the CIGs are due to the ratio of the migration velocity model to the effective velocity of the medium; RMO analysis evaluates this ratio in order to judge the adequacy of the velocity model and to allow its updating. The CIGs that serve as data for the RMO analysis are solutions of ill-posed inverse problems and are corrupted by noise, so an uncertainty analysis is necessary to improve the evaluation of the results; the lack of uncertainty-analysis tools is the weakness of RMO analysis, and analysing and quantifying this uncertainty could support decisions with important socio-economic impacts. This thesis contributes to the analysis and quantification of uncertainty in the parameters computed during seismic processing, and particularly in RMO analysis. After reviewing the geophysical concepts necessary to understand the problem (the organization of seismic reflection data, and the mathematical and methodological tools used) and presenting the methods and tools of conventional RMO analysis, we give a statistical interpretation of the conventional analysis and propose a stochastic approach. This approach consists of a hierarchical statistical model whose parameters are: the variance expressing the noise level in the data, estimated by a wavelet-based method; a functional parameter expressing the coherency of the amplitudes along events, estimated by data-smoothing methods; and the ratio, treated as a random variable rather than as an unknown fixed parameter as in the conventional approach, and estimated by Markov chain Monte Carlo simulations of its posterior distribution. The quantiles of these simulations provide as many maps of the parameter values as desired. The proposed methodology is validated by application to synthetic data and to real data, and a sensitivity study of the parameter estimation is carried out. The use of the uncertainty of this parameter to quantify the uncertainty in the spatial positions of reflectors is also presented in this thesis.
8

Erdogan, Ahmet Yasin. "Analysis of the effects of phase noise and frequency offset in orthogonal frequency division multiplexing (OFDM) systems". Thesis, Monterey, California: Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1712.

Abstract:
Approved for public release, distribution is unlimited
Orthogonal frequency division multiplexing (OFDM) is being successfully used in numerous applications. It was chosen for the IEEE 802.11a wireless local area network (WLAN) standard, and it is being considered for fourth-generation mobile communication systems. Along with its many attractive features, OFDM has some principal drawbacks, of which sensitivity to frequency errors is the most dominant. In this thesis, the frequency offset and phase noise effects on OFDM-based communication systems are investigated under a variety of channel conditions covering both indoor and outdoor environments. The simulation performance results of the OFDM system for these channels are presented.
Lieutenant Junior Grade, Turkish Navy
9

Hatefi, Atoosa. "Distributed Coding for Wireless Cooperative Networks". Thesis, Supélec, 2012. http://www.theses.fr/2012SUPL0015/document.

Abstract:
With the rapid growth of wireless technologies, devices, and mobile applications, the quest for high throughput and ubiquitous connectivity in wireless communications is growing rapidly as well. Relaying is undoubtedly a key concept for providing coverage extension and capacity increase in wireless networks. Network coding, which allows intermediate nodes to share their computation capabilities in addition to their resources and power, has attracted significant research attention since its inception in information theory and has become an attractive candidate for promising performance improvements, especially in throughput, in relay-based cellular networks. Substantial research efforts currently focus on the theoretical analysis, implementation, and evaluation of network coding from a physical-layer perspective. The question is what the most efficient and practical way to use network coding in wireless relay-based networks is, and whether it is beneficial to exploit the broadcast and multiple-access properties of the wireless medium to perform network coding. It is in this context that this thesis proceeds. In the first part of the thesis, the problem of Joint Network-Channel Coding (JNCC) for a Multiple Access Relay Channel (MARC) is investigated in the presence of multiple-access interference and for both relay operating modes, half-duplex and full-duplex. To this end, three new classes of MARC, referred to as Half-Duplex Semi-Orthogonal MARC (HD-SOMARC), Half-Duplex Non-Orthogonal MARC (HD-NOMARC), and Full-Duplex Non-Orthogonal MARC (FD-NOMARC), are introduced and studied. The relaying function in all classes is based on a Selective Decode-and-Forward (SDF) strategy implemented individually for each source: the relay forwards only a deterministic function of the error-free decoded messages. For each class, an information-theoretic analysis is conducted, and practical coding and decoding techniques are proposed. The proposed coding schemes perform very close to the outage limit for both HD-SOMARC and HD-NOMARC; for HD-NOMARC, the optimal allocation of transmission time to the relay is also considered. It is further verified that exploiting multiple-access interference, either partially or totally, yields considerable gains for the MARC compared with existing interference-avoiding structures, even in the case of a single receive antenna. In the second part of the thesis, the network model is extended to multiple relays helping multiple sources communicate with a destination. A new class of Multiple Access Multiple Relay Channel (MAMRC), referred to as Half-Duplex Semi-Orthogonal MAMRC (HD-SOMAMRC), is proposed and analyzed from both an information-theoretic and a code-design perspective. New practical JNCC schemes combining binary channel coding with non-binary network coding are proposed and shown to perform very close to the outage limit; the optimal allocation of transmission time to the sources and relays is also considered. Finally, in the third part of the thesis, different ways of implementing cooperation, including practical relaying protocols, are investigated for the half-duplex MARC with a semi-orthogonal transmission protocol in the JNCC setting. The hard SDF approach is compared with two Soft Decode-and-Forward (SoDF) relaying functions: one based on log a posteriori probability ratios (LAPPRs) and the other based on a mean square error (MSE) estimate. It is shown that SDF works well in most configurations, and only in some extreme cases do soft relaying functions (based on LAPPRs or MSE estimates) slightly outperform the hard selective one.
10

"Use of tone information in Cantonese LVCSR based on generalized character posterior probability decoding". Thesis, 2005. http://library.cuhk.edu.hk/record=b6074029.

Abstract:
Automatic recognition of Cantonese tones has long been regarded as a difficult task. Cantonese has one of the most complicated tone systems among all languages in the world. This thesis presents a novel approach to modeling Cantonese tones. We propose the use of supra-tone models. Each supra-tone unit covers a number of syllables in succession. The supra-tone model characterizes not only the tone contours of individual syllables but also the transitions among them. By including multiple tone contours in one modeling unit, the relative heights of the tones are captured explicitly. This is especially important for the discrimination among the level tones of Cantonese.
The decoding in conventional LVCSR systems aims at finding the sentence hypothesis, i.e. the string of words, which has the maximum a posteriori (MAP) probability in comparison with other hypotheses. However, in most applications, recognition performance is measured in terms of word error rate (or word accuracy). In Chinese languages, given that "word" is a rather ambiguous concept, speech recognition performance is usually measured in terms of the character error rate. In this thesis, we develop a decoding algorithm that can minimize the character error rate. The algorithm is applied to a reduced search space, e.g. a word graph or the N-best sentence list, resulting from the first pass of the search, and the generalized character posterior probability (GCPP) is maximized. (Abstract shortened by UMI.)
This thesis addresses two major problems of existing large vocabulary continuous speech recognition (LVCSR) technology: (1) inadequate exploitation of alternative linguistic and acoustic information; and (2) the mismatch between the decoding (recognition) criterion and the performance evaluation. The study is focused on Cantonese, one of the major Chinese dialects, which is both monosyllabic and tonal. Tone is indispensable for lexical access and the disambiguation of homonyms in Cantonese. However, incorporating tone information into Cantonese LVCSR requires effective tone recognition as well as a seamless integration algorithm.
Qian Yao.
"July 2005."
Adviser: Tan Lee.
Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 4009.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (p. 100-110).
Abstract in English and Chinese.

Books on the topic "A posteriori probability decoding"

1

Lamel, Lori, and Jean-Luc Gauvain. Speech Recognition. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0016.

Abstract:
Speech recognition is concerned with converting the speech waveform, an acoustic signal, into a sequence of words. Today's approaches are based on statistical modelling of the speech signal. This article provides an overview of the main topics addressed in speech recognition: acoustic-phonetic modelling, lexical representation, language modelling, decoding, and model adaptation. Language models are used in speech recognition to estimate the probability of word sequences. The main components of a generic speech recognition system are the main knowledge sources, the feature analysis, the acoustic and language models, which are estimated in a training phase, and the decoder. The focus of this article is on methods used in state-of-the-art speaker-independent, large-vocabulary continuous speech recognition (LVCSR). Primary application areas for such technology are dictation, spoken language dialogue, and transcription for information archival and retrieval systems. Finally, this article discusses issues and directions of future research.

Book chapters on the topic "A posteriori probability decoding"

1

Stankiewicz, Olgierd, Krzysztof Wegner, and Marek Domanski. "Depth Estimation Based on Maximization of a Posteriori Probability". In Computer Vision and Graphics, 253–65. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46418-3_23.

2

Spiteri, Trevor, and Victor Buttigieg. "Maximum a Posteriori Decoding of Arithmetic Codes in Joint Source-Channel Coding". In Communications in Computer and Information Science, 363–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-25206-8_24.

3

Frego, Marco. "On the Probability of Incorrect Decoding for Linear Codes". In Cryptography and Coding, 103–15. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71045-7_5.

4

Katsman, Gregory L., Michael A. Tsfasman, and Serge G. Vladut. "Spectra of linear codes and error probability of decoding". In Lecture Notes in Mathematics, 82–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/bfb0087994.

5

Battail, Gérard. "Weighted decoding as a means for reestimating a probability distribution (abstract)". In Coding Theory and Applications, 150–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/3-540-19368-5_16.

6

Walrand, Jean. "Digital Link—A". W Probability in Electrical Engineering and Computer Science, 115–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49995-2_7.

Abstract:
In a digital link, a transmitter converts bits into signals and a receiver converts the signals it receives into bits. The receiver faces a decision problem that we study in Sect. 7.1. The main tool is Bayes' Rule. The key notions are maximum a posteriori and maximum likelihood estimates. Transmission systems use codes to reduce the number of bits they need to transmit. Section 7.2 explains the Huffman codes that minimize the expected number of bits needed to transmit symbols; the idea is to use fewer bits for more likely symbols. Section 7.3 explores a commonly used model of a communication channel: the binary symmetric channel. It explains how to calculate the probability of errors. Section 7.4 studies a more complex modulation scheme employed by most smartphones and computers: QAM. Section 7.5 is devoted to a central problem in decision making: how to infer which situation is in force from observations. Does a test reveal the presence of a disease; how to balance the probability of false positive and that of false negative? The main result of that section is the Neyman–Pearson Theorem that the section illustrates with many examples.
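
A worked one-observation instance of the MAP/ML distinction (our illustration, not taken from the chapter): for a binary symmetric channel with crossover probability ε < 1/2, prior P(X=1) = p, and observation Y = 1,

```latex
\[
\hat{x}_{\mathrm{MAP}}
  = \arg\max_{x\in\{0,1\}} P(Y=1 \mid X=x)\,P(X=x)
  = \begin{cases} 1, & p\,(1-\varepsilon) \ge (1-p)\,\varepsilon,\\
                  0, & \text{otherwise,} \end{cases}
\qquad
\hat{x}_{\mathrm{ML}}
  = \arg\max_{x\in\{0,1\}} P(Y=1 \mid X=x) = 1.
\]
```

The two estimates therefore differ exactly when the prior is skewed enough to outweigh the channel evidence, i.e. when p(1-ε) < (1-p)ε.
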
7

Na, Seung-Hoon, In-Su Kang, and Jong-Hyeok Lee. "Improving Relevance Feedback in Language Modeling Approach: Maximum a Posteriori Probability Criterion and Three-Component Mixture Model". In Natural Language Processing – IJCNLP 2004, 130–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30211-7_14.

8

Walrand, Jean. "Speech Recognition: A". W Probability in Electrical Engineering and Computer Science, 205–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49995-2_11.

Abstract:
Speech recognition can be formulated as the problem of guessing a sequence of words that produces a sequence of sounds. The human brain is remarkably good at solving this problem, even though the same words correspond to many different sounds, because of accents or characteristics of the voice. Moreover, the environment is always noisy, so that listeners hear a corrupted version of the speech. Computers are getting much better at speech recognition, and voice command systems are now common for smartphones (Siri), automobiles (GPS, music, and climate control), call centers, and dictation systems. In this chapter, we explain the main ideas behind the algorithms for speech recognition and for related applications. The starting point is a model of the random sequence (e.g., words) to be recognized and of how this sequence is related to the observation (e.g., voice). The main model is called a hidden Markov chain. The idea is that the successive parts of speech form a Markov chain and that each word maps randomly to some sounds. The same model is used to decode strings of symbols in communication systems. Section 11.1 is a general discussion of learning. The hidden Markov chain model used in speech recognition and in error decoding is introduced in Sect. 11.2. That section explains the Viterbi algorithm. Section 11.3 discusses expectation maximization and clustering algorithms. Section 11.4 covers learning for hidden Markov chains.
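
A compact version of the Viterbi algorithm the chapter describes, run on a toy two-state hidden Markov chain (all model parameters below are invented for illustration):

```python
import numpy as np

pi = np.array([0.6, 0.4])        # initial state distribution
A = np.array([[0.7, 0.3],        # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],        # P(observation | state)
              [0.2, 0.8]])

def viterbi(obs):
    """Most likely hidden state sequence given the observations."""
    T, S = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])  # best log-prob ending in each state
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)      # extend every path by one step
        back[t] = cand.argmax(axis=0)         # remember the best predecessor
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack the winning sequence
        path.append(back[t, path[-1]])
    return path[::-1]

print(viterbi([0, 0, 1, 1, 0]))
```
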
9

Tolhuizen, L. M. G. M. "A Universal Upper Bound on the Miscorrection Probability with Bounded Distance Decoding for a Code Used on an Error-Value Symmetric Channel". In Eurocode ’92, 313–20. Vienna: Springer Vienna, 1993. http://dx.doi.org/10.1007/978-3-7091-2786-5_26.

10

Labghough, Souad, Fouad Ayoub, and Mostafa Belkasmi. "A Closed Form Expression for the Bit Error Probability for Majority Logic Decoding of CSOC Codes over ΓΓ Channels". In Advances in Intelligent Systems and Computing, 200–210. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76354-5_18.


Conference papers on the topic "A posteriori probability decoding"

1

Yoshikawa, H. "Theoretical analysis of bit error probability for maximum a posteriori probability decoding". In IEEE International Symposium on Information Theory, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/isit.2003.1228291.

2

Sadjadpour, Hamid R. "Maximum a posteriori decoding algorithms for turbo codes". In AeroSense 2000, edited by Raghuveer M. Rao, Soheil A. Dianat, and Michael D. Zoltowski. SPIE, 2000. http://dx.doi.org/10.1117/12.394081.

3

Popescu, Theodor D. "Signal segmentation using maximum a posteriori probability estimator". In 2013 7th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2013. http://dx.doi.org/10.1109/icaict.2013.6722734.

4

Ratnayake, Ruwan, Aleksandar Kavcic, and Gu-Yeon Wei. "A High-Throughput Maximum a posteriori Probability Detector". In 2007 IEEE 29th Custom Integrated Circuits Conference. IEEE, 2007. http://dx.doi.org/10.1109/cicc.2007.4405772.

5

Ono, Y., and S. Nakagawa. "Estimation of a posteriori probability using neural network". In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170497.

6

Horne, B., and D. Hush. "Estimating a posteriori probability with P-type nodes". In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137649.

7

Kanda, Naoyuki, Xugang Lu, and Hisashi Kawai. "Maximum a posteriori Based Decoding for CTC Acoustic Models". In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-71.

8

White, L. B. "Maximum a posteriori probability line tracking for nonstationary processes". In [Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1991. http://dx.doi.org/10.1109/icassp.1991.150128.

9

Hamamura, T., T. Akagi, and B. Irie. "An Analytic Word Recognition Algorithm Using a Posteriori Probability". In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007) Vol 2. IEEE, 2007. http://dx.doi.org/10.1109/icdar.2007.4376999.

10

Blackhall, Lachlan, and Michael Rotkowitz. "Maximum a posteriori vs maximum probability recursive sparse estimation". In 2009 European Control Conference (ECC). IEEE, 2009. http://dx.doi.org/10.23919/ecc.2009.7074447.

