Dissertations / Theses on the topic 'Joint equalisation and decoding'

Consult the top 50 dissertations / theses for your research on the topic 'Joint equalisation and decoding.'


1

Yeap, Bee Leong. "Turbo equalisation algorithms for full and partial response modulation." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shaheem, Asri. "Iterative detection for wireless communications." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0223.

Abstract:
[Truncated abstract] The transmission of digital information over a wireless communication channel gives rise to a number of issues which can detract from the system performance. Propagation effects such as multipath fading and intersymbol interference (ISI) can result in significant performance degradation. Recent developments in the field of iterative detection have led to a number of powerful strategies that can be effective in mitigating the detrimental effects of wireless channels. In this thesis, iterative detection is considered for use in two distinct areas of wireless communications. The first considers the iterative decoding of concatenated block codes over slow flat fading wireless channels, while the second considers the problem of detection for a coded communications system transmitting over highly dispersive frequency-selective wireless channels. The iterative decoding of concatenated codes over slow flat fading channels with coherent signalling requires knowledge of the fading amplitudes, known as the channel state information (CSI). The CSI is combined with statistical knowledge of the channel to form channel reliability metrics for use in the iterative decoding algorithm. When the CSI is unknown to the receiver, the existing literature suggests the use of simple approximations to the channel reliability metric. However, these works generally consider low-rate concatenated codes with strong error correcting capabilities. In some situations, the error correcting capability of the channel code must be traded for other requirements, such as higher spectral efficiency, lower end-to-end latency and lower hardware cost. ... In particular, when the error correcting capability of the concatenated code is weak, the conventional metrics are observed to fail, whereas the proposed metrics are shown to perform well regardless of the error correcting capabilities of the code.
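For context, the channel reliability metric referred to in this abstract takes a simple closed form in the textbook case: for BPSK over a coherently detected flat Rayleigh-fading channel with noise spectral density N0, the log-likelihood ratio of a received sample y with fading amplitude a is L = 4·a·y/N0. The sketch below (an illustrative assumption, not the thesis's proposed metric) contrasts the perfect-CSI metric with a common approximation that simply substitutes the mean fading amplitude:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
es_n0 = 10 ** (6.0 / 10)          # Es/N0 of 6 dB, with Es normalised to 1
n0 = 1.0 / es_n0

bits = rng.integers(0, 2, n)
x = 1.0 - 2.0 * bits              # BPSK mapping: 0 -> +1, 1 -> -1
a = rng.rayleigh(scale=np.sqrt(0.5), size=n)   # unit-power Rayleigh amplitudes
y = a * x + rng.normal(scale=np.sqrt(n0 / 2), size=n)

# Perfect-CSI channel reliability metric: L = 4 * a * y / N0
llr_csi = 4 * a * y / n0

# Simple approximation when the CSI is unknown: use E[a] for every symbol
e_a = np.sqrt(np.pi) / 2          # mean of a unit-power Rayleigh amplitude
llr_approx = 4 * e_a * y / n0
```

Since a > 0, both metrics produce the same hard decisions; the approximation only distorts the reliability magnitudes, which is exactly what feeds the iterative decoder and where, as the abstract reports, weakly protected codes suffer.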
The effects of ISI caused by a frequency-selective wireless channel environment can also be mitigated using iterative detection. When the channel can be viewed as a finite impulse response (FIR) filter, the state-of-the-art iterative receiver is the maximum a posteriori probability (MAP) based turbo equaliser. However, the complexity of this receiver's MAP equaliser increases exponentially with the length of the FIR channel. Consequently, this scheme is restricted for use in systems where the channel length is relatively short. In this thesis, the use of a channel shortening prefilter in conjunction with the MAP-based turbo equaliser is considered in order to allow its use with arbitrarily long channels. The prefilter shortens the effective channel, thereby reducing the number of equaliser states. A consequence of channel shortening is that residual ISI appears at the input to the turbo equaliser and the noise becomes coloured. In order to account for the ensuing performance loss, two simple enhancements to the scheme are proposed. The first is a feedback path which is used to cancel residual ISI, based on decisions from past iterations. The second is the use of a carefully selected value for the variance of the noise assumed by the MAP-based turbo equaliser. Simulations are performed over a number of highly dispersive channels and it is shown that the proposed enhancements result in considerable performance improvements. Moreover, these performance benefits are achieved with very little additional complexity with respect to the unmodified channel shortened turbo equaliser.
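As a concrete illustration of the channel-shortening idea (a simplified least-squares sketch, not the prefilter design used in the thesis), the prefilter below compresses a 7-tap channel towards a 1-tap effective response; the small leftover taps are precisely the residual ISI that the feedback path described above is meant to cancel:

```python
import numpy as np

h = 0.6 ** np.arange(7)           # long FIR channel: [1, 0.6, 0.36, ...]
lw = 4                            # prefilter length

# Convolution matrix so that conv(h, w) = H @ w
n_out = len(h) + lw - 1
H = np.zeros((n_out, lw))
for i in range(lw):
    H[i:i + len(h), i] = h

# Least-squares prefilter: drive the effective channel towards a
# single tap at delay 0 (the chosen target response)
target = np.zeros(n_out)
target[0] = 1.0
w, *_ = np.linalg.lstsq(H, target, rcond=None)

h_eff = np.convolve(h, w)         # shortened effective channel
residual_isi = np.sum(h_eff[1:] ** 2) / np.sum(h_eff ** 2)
# residual_isi is small but nonzero: the MAP equaliser now only has to
# model one tap, at the cost of residual ISI and coloured noise
```

The residual-ISI energy ratio here is tiny but not zero, which is why the two enhancements described above (residual-ISI feedback and a tuned noise-variance assumption) pay off.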
3

Wang, Jin. "Iterative source decoding, channel decoding and channel equalisation." Thesis, University of Southampton, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435723.

4

Irizar, José María Zabalegui. "Combined equalisation and decoding for OFDM over wireless fading channels." Thesis, Staffordshire University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272580.

5

Abdulrazaq, Muhammad Bashir. "Reducing the complexity of equalisation and decoding of shingled writing." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9332.

Abstract:
Shingled Magnetic Recording (SMR) is an important technology for expanding magnetic hard disk capacity beyond the limit of current disk technology, as it requires the least change from current technology among the contending alternatives. Robust, easy-to-implement Digital Signal Processing (DSP) techniques are needed to realise the potential of SMR. Currently proposed DSP techniques centre on the use of Two Dimensional Magnetic Recording (TDMR) techniques for equalisation and detection, coupled with iterative error correction codes such as Low Density Parity Check (LDPC) codes. Maximum Likelihood (ML) algorithms are normally used in TDMR detection; their shortcoming is a complexity that grows exponentially with the number of bits considered. Reducing the complexity of these processes in SMR media is therefore essential if the technology is to reach personal computers in the near future. This research investigated means of reducing the complexity of equalisation and detection. Linear equalisers were found to be adequate for low-density situations. Combining an ML detector across-track with a linear equaliser along-track was found to provide a lower-complexity, better-performing alternative to a linear equaliser across-track with ML along-track, provided that density is relaxed along-track and increased across-track. A gain of up to 10 dB was achieved. For high density in both dimensions, full two-dimensional (2D) detectors perform better. A low-complexity full 2D detector was formed by serially concatenating two ML detectors, one for each direction, instead of the single 2D ML detector used in other literature. This reduces the complexity with respect to side interference from exponential to linear.
The use of a single-bit parity code acting simultaneously as a run-length-limited code and an error correction code is also presented, with a small gain of about 1 dB at a BER of 10^-5 recorded for the high-density situation.
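To make the complexity point concrete, here is a deliberately naive exhaustive ML detector for a short 1-D ISI response (an illustrative toy, not the detectors developed in the thesis): it evaluates all 2^N bit hypotheses, which is exactly the exponential cost that motivates splitting the work between a linear equaliser in one direction and an ML detector in the other.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
h = np.array([1.0, 0.6])          # 2-tap ISI response
N = 8                             # block of 8 bits -> 2^8 = 256 hypotheses

bits = rng.integers(0, 2, N)
x = 1.0 - 2.0 * bits              # BPSK levels
y = np.convolve(x, h) + rng.normal(scale=0.1, size=N + 1)

best_metric, best = np.inf, None
for cand in product((1.0, -1.0), repeat=N):       # exhaustive search: 2^N
    metric = np.sum((y - np.convolve(cand, h)) ** 2)
    if metric < best_metric:
        best_metric, best = metric, cand

detected = np.array(best)
# At this mild noise level the exhaustive detector recovers the block,
# but its cost doubles with every extra bit in the block
```

Doubling N from 8 to 16 multiplies the search by 256, which is why the thesis's serial concatenation of two one-directional detectors (linear in the side-interference span) matters.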
6

Iwaza, Lana. "Joint Source-Network Coding & Decoding." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00855787.

Abstract:
While network data transmission was traditionally accomplished via routing, network coding (NC) broke this rule by allowing network nodes to perform linear combinations of the incoming data packets. Network operations are performed in a specific Galois field of fixed size q. Decoding only involves a Gaussian elimination on the received network-coded packets. However, in practical wireless environments, NC can be susceptible to transmission errors caused by noise, fading, or interference. This drawback is quite problematic for real-time applications, such as multimedia content delivery, where timing constraints may lead to the reception of an insufficient number of packets and consequently to difficulties in decoding the transmitted sources. At best, some packets can be recovered, while in the worst case, the receiver is unable to recover any of the transmitted packets. In this thesis, we propose joint source-network coding and decoding schemes with the aim of providing an approximate reconstruction of the source in situations where perfect decoding is not possible. The main motivation comes from the fact that source redundancy can be exploited at the decoder in order to estimate the transmitted packets, even when some of them are missing. The redundancy can be either natural, i.e., already existing, or artificial, i.e., externally introduced. Regarding artificial redundancy, we choose multiple description coding (MDC) as a way of introducing structured correlation among uncorrelated packets. By combining MDC and NC, we aim to ensure a reconstruction quality that improves gradually with the number of received network-coded packets. We consider two different approaches for generating descriptions. The first technique consists in generating multiple descriptions via a real-valued frame expansion applied at the source before quantization. Data recovery is then achieved via the solution of a mixed integer linear problem.
The second technique uses a correlating transform in some Galois field in order to generate descriptions, and decoding involves a simple Gaussian elimination. Such schemes are particularly interesting for multimedia content delivery, such as video streaming, where quality increases with the number of received descriptions. Another application of such schemes would be multicasting or broadcasting data towards mobile terminals experiencing different channel conditions. The channel is modeled as a binary symmetric channel (BSC) and we study the effect on the decoding quality for both proposed schemes. A performance comparison with a traditional NC scheme is also provided. Concerning natural redundancy, a typical scenario would be a wireless sensor network, where geographically distributed sources capture spatially correlated measures. We propose a scheme that aims at exploiting this spatial redundancy, and provide an estimation of the transmitted measurement samples via the solution of an integer quadratic problem. The obtained reconstruction quality is compared with that provided by a classical NC scheme.
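The basic NC machinery this abstract builds on is compact enough to sketch directly. Below, four source packets are mixed into six coded packets over GF(2) (a toy field choice; practical systems often use larger fields such as GF(2^8)); two coded packets are lost in transit, and the receiver still recovers all sources by Gaussian elimination on the surviving coefficient vectors. The coefficient matrix is hand-picked for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
K, plen = 4, 8
src = rng.integers(0, 2, (K, plen))          # 4 source packets of 8 bits

# Coefficient vectors over GF(2), normally carried in each packet header
coeffs_sent = np.array([[1, 0, 0, 0],
                        [0, 1, 0, 0],
                        [0, 0, 1, 0],
                        [0, 0, 0, 1],
                        [1, 1, 0, 0],
                        [0, 1, 1, 1]])
coded_sent = coeffs_sent @ src % 2           # network-coded payloads

received = [0, 2, 4, 5]                      # packets 1 and 3 are lost
A = coeffs_sent[received].copy()
P = coded_sent[received].copy()

# Gaussian elimination over GF(2) on the augmented matrix [A | P]
M = np.hstack([A, P])
row = 0
for col in range(K):
    piv = next((r for r in range(row, len(M)) if M[r, col]), None)
    if piv is None:
        continue                              # rank deficiency in this column
    M[[row, piv]] = M[[piv, row]]
    for r in range(len(M)):
        if r != row and M[r, col]:
            M[r] ^= M[row]                    # GF(2) row operation = XOR
    row += 1

decoded = M[:K, K:]                           # equals src when rank(A) = K
```

When fewer than K independent combinations arrive, the elimination stalls and only part of the unknowns resolve, which is exactly the partial-decoding regime the thesis's approximate-reconstruction schemes target.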
7

Iwaza, Lana. "Joint Source-Network Coding & Decoding." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112048/document.

Abstract:
In traditional networks, data streams were transmitted by routing packets from the source to the destination(s). Network coding (NC) allows the intermediate nodes of the network to form linear combinations of the data packets arriving on their incoming links. The coding operations take place in a Galois field of finite size q. At the destinations, decoding is performed by Gaussian elimination on the received network-coded packets. However, in wireless networks, network coding must often cope with transmission errors caused by noise, erasures and interference. This is particularly problematic for real-time applications, such as the transmission of multimedia content, where delay constraints can lead to the reception of an insufficient number of packets, and consequently to difficulties in decoding the transmitted packets. In the best case, some packets can still be decoded; in the worst case, none can. In this thesis, we propose joint source-network coding schemes whose objective is to provide an approximate reconstruction of the source in situations where perfect decoding is impossible. The idea is to exploit the redundancy of the source at the decoder in order to estimate the transmitted packets, even when some of them are lost after network coding. The redundancy can be either natural, i.e., already present, or introduced artificially. Regarding artificial redundancy, multiple description coding (MDC) is chosen as the means of introducing structured correlation between otherwise uncorrelated packets.
By combining multiple description coding with network coding, we seek a reconstruction quality that improves gradually with the number of network-coded packets received. We consider two different approaches for generating the descriptions. The first generates the descriptions through a frame expansion applied to the source before quantization; the data are then reconstructed by solving a mixed quadratic optimization problem. The second technique uses a transform matrix in a given Galois field to generate the descriptions, and decoding reduces to a simple Gaussian elimination. Such schemes are particularly interesting for the transmission of multimedia content, such as video streaming, where quality improves with the number of descriptions received. A second application of such schemes is the broadcasting of data to mobile terminals over transmission channels with varying conditions. In this context, we study the decoding quality obtained for each of the two proposed coding schemes, and compare the results with those provided by a classical network coding scheme. As for natural redundancy, a typical scenario is that of a sensor network, in which geographically distributed sources collect spatially correlated measurements. We propose a scheme whose objective is to exploit this spatial redundancy in order to provide an estimate of the transmitted measurement samples by solving an integer quadratic optimization problem. The resulting reconstruction quality is compared with that obtained through classical network decoding.
8

Palanivelu, Arul Durai Murugan. "Tree search algorithms for joint detection and decoding." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1145039374.

9

Vu, Xuan Thang. "Joint Network / Channel Decoding over Noisy Wireless Networks." PhD thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01060330.

Abstract:
Network coding (NC) has gained much research attention as a potential candidate to meet the demand for higher spectral efficiency in modern wireless communications. Many research papers have investigated the performance of NC-aided networks in terms of throughput and outage capacity. However, the analysis of NC in practical systems, where NC is combined with other techniques such as channel coding, is still too immature to fully understand its potential performance. In this thesis, we aim to design high-performance receivers and analyze their performance for network-coded cooperative networks in practical scenarios. Firstly, we propose two Iterative Network/Channel Decoding (INCD) algorithms for the Multiple-Access Relay Channel (MARC) with two notable relaying schemes named Decode-and-Forward (DF) and Demodulate-and-Forward (DMF). The INCD algorithm operates based on turbo-like decoding methods and reduces the impact of the error propagation problem with the aid of a channel-aware receiver design. Both perfect Channel State Information (CSI) and imperfect CSI at the receiver side are investigated. We propose a practical method that forwards a quantized version of the relay decoding errors to the destination. It is shown that the proposed algorithms achieve full diversity gain and significantly outperform solutions which do not account for error propagation. We also show that the number of pilot symbols affects only the coding gain but has a negligible impact on the diversity order, while the quantization level affects both the diversity and coding gain. Secondly, we propose a Near Optimal Joint Network/Channel Decoding (NOJNCD) algorithm for the MARC that allows us to analyze the system Bit Error Rate (BER). The NOJNCD algorithm performs network decoding and channel decoding in one decoding step of the super code, which comprises all trellis states of the individual codes at the sources via NC.
Furthermore, NC combined with Relay Selection (RS) is considered and the achievable diversity order is studied with the aid of outage analysis. We analytically show that Single Relay Selection (SRS) always achieves a diversity order of two and Multiple Relay Selection (MRS) can achieve full diversity gain only when the number of selected relays exceeds the number of sources. Last but not least, we propose a so-called partial relaying protocol to improve the spectral efficiency of channel-coding-assisted relay networks. Closed-form expressions of the BER and the system diversity order are computed for partial relaying. We show, by analysis and simulations, that with a proper Convolutional Code (CC), partial relaying can achieve full diversity gain and the same coding gain as the classical (full) relaying protocol in the finite signal-to-noise ratio region, while obtaining better spectrum usage. Moreover, we propose a new protocol based on partial relaying in opportunistic relaying cooperative networks and show that this protocol significantly outperforms NC-based cooperation in some circumstances.
10

Li, Si 1981. "Joint synchronization, channel estimation and decoding techniques in OFDM systems." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99522.

Abstract:
Due to its high data transmission capability and robustness against multi-path propagation, Orthogonal Frequency Division Multiplexing (OFDM) has become increasingly popular for both wire-line and wireless communications. In signal recovery, the efficient and accurate estimation and correction of the symbol time offset (STO), carrier frequency offset (CFO), sampling frequency offset (SFO) and channel distortion are extremely important for the receiver to achieve good system performance.
In this thesis, we study and develop joint synchronization, channel estimation and decoding schemes to provide high system performance at a relatively low complexity for uncoded and coded OFDM systems.
We first investigate and evaluate the performance of a low-complexity time-domain joint synchronization and channel estimation scheme suitable for uncoded OFDM systems. The proposed scheme can operate with a large initial CFO range (up to +/-100% of the carrier spacing). Its complexity is reduced by using a special FFT block for time-to-frequency channel response conversion and a track-and-hold (TAH) estimation strategy based on mid-ambles to eliminate the additional IFFT block required by time-domain estimation.
We then apply the turbo concept to develop an iterative joint synchronization, channel estimation and decoding scheme for coded OFDM systems operating at very low signal-to-noise ratios (SNRs). Instead of hard decisions, the estimator uses soft decisions of the transmitted data obtained from the previous soft-input soft-output (SISO) decoding pass and consequently produces better estimates of the unknown parameters. These estimates then help the data detector to generate more reliable soft inputs to the decoder. The whole process is performed in an iterative manner, and good system performance can be achieved with only a few iterations for moderate initial synchronization errors.
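The "soft decisions" mentioned above have a standard closed form for BPSK: given a decoder log-likelihood ratio L for a symbol x in {+1, -1}, the soft symbol estimate is E[x] = tanh(L/2). A minimal sketch with illustrative numbers (not taken from the thesis):

```python
import numpy as np

# A priori LLRs delivered by the SISO decoder for five BPSK symbols
llr = np.array([8.0, -6.5, 0.3, -0.1, 4.2])

# Soft symbol estimates: E[x] = P(x=+1) - P(x=-1) = tanh(L / 2)
x_soft = np.tanh(llr / 2)

# Confident symbols contribute almost their full amplitude to the
# estimator, while unreliable ones are automatically down-weighted
# towards zero; a hard-decision estimator would instead commit
# fully to sign(L) and propagate any wrong decisions.
x_hard = np.sign(llr)
```

Feeding `x_soft` rather than `x_hard` into the channel/offset estimator is what lets the iterative receiver improve its estimates from one iteration to the next at very low SNR.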
11

Vital, Juan Carlos Serrato. "Single and joint iterative decoding for higher order modulation schemes." Thesis, University of Leeds, 2008. http://etheses.whiterose.ac.uk/730/.

Abstract:
The research project described in this thesis concentrates on the study and application of specific channel coding techniques, in particular low-density parity-check (LDPC) codes, iterative decoding on Tanner graphs, and their application in previously proposed joint iterative receivers based on the turbo principle. The construction of random LDPC codes that fulfil certain desirable characteristics, such as large girth, specific ρ and γ values, and acceptable BER and FER performance at short code lengths, traditionally requires a high degree of processing power (i.e., CPU cycles) to run stochastic routines that first search all possible combinations for those matching the desired characteristics of the LDPC matrix, and second determine the bit-error rate (BER) and frame-error rate (FER) performance. The construction of well-structured LDPC codes by means of algebraic methods has provided LDPC codes that achieve excellent performance, with desirable structure in their LDPC matrices. However, of the universe of LDPC matrices, those created through well-structured procedures form only a small group. Multiple procedures for modifying their characteristics, such as length and rate, have helped to increase the pool of LDPC codes based on well-structured procedures. This thesis studies the problem of constructing random LDPC codes with particular length, girth and column weight as design parameters, with reduced processing power, while providing, at the same time, a structure that allows efficient use of memory and of parallel processing capacity to reduce delay through efficient encoding and decoding. Based on previous studies that analysed the same problem, an algorithm is introduced to construct the Girth-Partition and Shift (GPS) LDPC codes, which are half-rate quasi-cyclic (QC) LDPC codes. Several GPS constructions are analysed over the AWGN channel and the flat-fading channel.
The effect of variations in their design parameters on the BER and FER performance is included in this study. This work also includes the BER and FER performance of the parallel concatenation of different LDPC codes, some of which are based on well-structured procedures, such as Euclidean Geometry (EG) and Projective Geometry (PG) codes and Margulis constructions based on the Cayley graph, while the rest are based on random procedures, such as Graphical Model (GM) and GPS-LDPC codes. The aims of analysing this scheme, combined with the referred LDPC code constructions, include improving the BER and FER performance at short code lengths and reducing the encoding complexity. The BER and FER performance achieved by the parallel concatenation of the previously mentioned LDPC codes is further analysed in a joint demapping, parallel channel decoding and source decoding system. The impact of each component on the overall system performance is also examined.
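The quasi-cyclic structure mentioned above can be illustrated with a toy parity-check matrix assembled from cyclically shifted identity blocks (an illustrative construction, not the GPS algorithm itself): the base array of shift values fixes the column weight γ and row weight ρ, and each circulant block can be stored as a single shift value and processed in parallel.

```python
import numpy as np

def circulant(p, shift):
    # p x p identity matrix with its columns cyclically shifted by `shift`
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

p = 7                               # circulant size
shifts = [[0, 1, 2, 3],             # 2 x 4 base array of shift values
          [0, 2, 4, 6]]

# Toy QC-LDPC parity-check matrix: column weight gamma = 2 and row
# weight rho = 4 are fixed entirely by the shape of the base array
H = np.block([[circulant(p, s) for s in row] for row in shifts])
```

Because every block is a circulant, the whole 14 x 28 matrix is described by eight shift values, which is what makes the memory-efficient encoding and parallel decoding referred to above possible.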
12

Pu, Lingling, Zhenyu Wu, Ali Bilgin, Michael W. Marcellin, and Bane Vasic. "LDPC-BASED ITERATIVE JOINT SOURCE/CHANNEL DECODING SCHEME FOR JPEG2000." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605781.

Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
This paper presents a joint source-channel decoding scheme based on a JPEG2000 source coder and an LDPC channel coder. At the encoder, JPEG2000 is used to perform source coding with certain error resilience (ER) modes, and LDPC codes are used to perform channel coding. At the decoder, after one iteration of LDPC decoding, the output codestream is then decoded by JPEG2000. With the error resilience modes switched on, the source decoder detects the position of the first error within each codeblock of the JPEG2000 codestream. This information is fed back to the channel decoder, and incorporated into the calculation of likelihood values of variable nodes for the next iteration of LDPC decoding. Our results indicate that the proposed method has significant gains over conventional separate channel and source decoding.
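The feedback step described above can be caricatured in a few lines (a deliberately simplified sketch; the paper's actual update of the variable-node likelihoods is more involved): bits preceding the first error position flagged by the JPEG2000 error-resilience decoder are treated as reliably decoded, so their LLR magnitudes are reinforced before the next LDPC iteration.

```python
import numpy as np

# LLRs for one codeblock's bits after an LDPC iteration (made-up values)
llr = np.array([1.2, -0.4, 2.1, 0.2, -1.8, 0.9, -0.3, 1.5])

# Suppose the JPEG2000 ER decoder flags the first decoding error at bit 5
first_error = 5

# Reinforce the likelihoods of the bits known to precede the error
# (the boost factor here is an arbitrary illustrative choice)
boost = 3.0
llr_next = llr.copy()
llr_next[:first_error] *= boost
```

The point is only the direction of information flow: the source decoder's error localisation hardens part of the channel decoder's input without changing any hard decisions.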
13

Luo, Yuhui. "A mixed cross-correlation and constant modulus adaptive algorithm for joint blind equalisation and source separation." Thesis, Imperial College London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270380.

14

Tsai, Meng-Ying (Brady). "Iterative joint detection and decoding of LDPC-Coded V-BLAST systems." Thesis, Kingston, Ont. : [s.n.], 2008. http://hdl.handle.net/1974/1304.

15

Zhu, Chuan. "Joint source-channel decoding and unequal error protection aided video transmission." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/400650/.

Abstract:
Shannon’s source and channel-coding separation theorem has been the basic philosophy followed by most state-of-the-art wireless transceivers, where all source signals are assumed to have the same importance and are protected equally by the channel encoder. However, owing to its assumptions of idealized Gaussian channels and potentially infinite encoding/decoding delay and complexity, Shannon’s separation theorem is not strictly applicable in practical wireless scenarios. Indeed, it is almost impossible to remove all the redundancy of the video source, so a certain amount of residual correlation exists. Moreover, real-life applications are often required to transmit source signals of unequal importance, such as the base layer and enhancement layers of layered video codecs. Hence joint source-channel coding (JSCC) was proposed for wireless scenarios, jointly exploiting the residual redundancy of the video source and the intentionally imposed redundancy of unequal error protection (UEP) techniques. A video codec suitable for lossless video compression and iterative source-channel decoding (ISCD) is proposed by further developing the Markov random field (MRF) soft source decoder for the transmission of video sequences, rather than still pictures. More explicitly, we used low-complexity frame-differencing (FD) for removing the temporal-domain inter-frame redundancy. The low-complexity FD technique does not rely on power-thirsty motion-vector-based motion compensation and, as a further benefit, it does not require the transmission of motion vectors. However, this results in more residual redundancy than the more complex motion-vector-based motion compensation.
Furthermore, variable-length code (VrLC) is used for removing the residual spatial redundancy of the FD signal, while exploiting the correlations amongst the FD pixels within the current frame with the aid of our MRF model based soft-in-soft-out (SISO) module. By adopting the MRF for modelling of the video pixels, we can infer maximum a posteriori (MAP) image estimates from the 2-D spatial redundancy between the video pixels, while simpler models like the Markov chain can only explore the 1-D spatial redundancy. Although the estimation of the MRF parameters is a challenge compared to the 1-D Markov models, we conceived novel extrinsic information transfer (EXIT)-chart-based estimation methods, which are shown to be effective. Moreover, a novel three-stage ISCD structure is proposed, which outperforms the two-stage architecture. Furthermore, we examined the convergence of the three-stage iterative decoding process using 3D EXIT charts. The performance of our system operating both with and without FD is compared to our benchmarker schemes. In support of inter-layer forward error correction (IL-FEC) coded layered video transmission, we conceived an adaptive truncated HARQ (ATHARQ) scheme for minimizing the video distortion under the constraint of a given total number of transmission time slots. More specifically, we investigated the merits of IL-FEC schemes in the context of truncated HARQ (THARQ) transmission schemes, where the packet scheduling arrangements were carefully designed for exploiting the specific characteristics of each IL-FEC coded packet. Furthermore, we developed a method of on-line optimization for our IL-ATHARQ transmission scheme, in order to find the most appropriate FEC code rate distribution among the video layers that reduced the video distortion. Type-I HARQ relying on Convolutional Codes (CC) was used for simplicity, because our focus was on the design of the scheduling schemes. 
The performance of our IL-ATHARQ scheme as well as of the rate-optimized IL-ATHARQ scheme using a RSC codec was compared to that of the benchmarkers using different video sequences, followed by characterizing both the effects of the delay and of the channel-quality prediction errors on the attainable system performance. Finally, we conceived a UEP scheme for layered video transmission in the downlink of a visible light communication system. Explicitly, we proposed a hierarchical colour-shift keying (HCSK) modulation scheme based on the standardized colour-shift keying (CSK), which is capable of conveying inter-dependent layers of video signals. More specifically, we proposed the Type I and Type II HCSK arrangements, which can be flexibly configured according to the channel quality, video quality, etc., where the Type II constellation allows us to use a wider range of FERs for the higher layers upon varying the constellation-shaping parameter d_l. Our simulation results show that Type II HCSK provides a high degree of flexibility in terms of both its system configuration and its optimization. Furthermore, we provided a detailed design example for the employment of HCSK for transmitting scalable video sources with the aid of a RSC code. An optimisation method was conceived in order to enhance the UEP and to improve the quality of the received video. The performance of our optimised M-HCSK-RSC video transmission system using different HCSK constellation sizes was compared to that of the relevant benchmarker schemes using different video sequences.
APA, Harvard, Vancouver, ISO, and other styles
16

Minallah, Nasru. "Low-bit-rate joint source-channel decoding aided wireless video communications." Thesis, University of Southampton, 2010. https://eprints.soton.ac.uk/159195/.

Full text
Abstract:
Detailed wireless video structures employing novel channel coding schemes for enhancing the achievable performance are designed. Although there is a plethora of papers on robust video transmission, iterative detection and video telephony, there is a paucity of up-to-date research on the unified treatment of near-capacity multimedia communication systems using iterative-detection-aided joint source-channel decoding in conjunction with sophisticated transmission techniques. Therefore, in this thesis we focus our attention not only on the source and channel coding but also on their iterative decoding and transmission. Initially, we investigated the H.264 codec’s error sensitivity. The perceptually more important bits were provided with stronger protection than the less important bits using Unequal Error Protection (UEP), achieved by applying different-rate Recursive Systematic Convolutional (RSC) codes. We then further improved the attainable performance of a Data-Partitioned (DP) H.264 coded video transmission system using UEP-based IrRegular Convolutional Codes (IRCCs). An iterative-detection-aided combination of IRCCs and a rate-1 precoder was used to improve the overall BER performance and to enhance the objective video quality expressed in terms of the Peak Signal-to-Noise Ratio (PSNR). More specifically, we exploited the innate UEP capability and high design flexibility of IRCCs, which are constituted by different-rate subcodes capable of maintaining an excellent iterative decoding performance. In contrast to regular convolutional codes, which encode the entire segment of the source signal using the same code, the IRCCs introduced encode the source signal by splitting it into segments having specifically designed lengths, each of which is encoded by a subcode having an appropriately designed code-rate.
A novel Extrinsic Information Transfer (EXIT) chart matching procedure was used for the design of our specific IRCC, which allowed us to design near-capacity schemes. Additionally, we developed a novel Unequal Source-Symbol Probability Aided (USSPA) design, which is capable of further enhancing the subjective video quality by exploiting the residual redundancy that remains in the source-coded stream after encoding. Furthermore, we proposed a family of Short Block Codes (SBCs) designed for guaranteed convergence in Iterative Source-Channel Decoding (ISCD). The DP H.264 source-coded video stream was used to evaluate the performance of our system using SBCs in conjunction with RSCs for transmission over correlated narrowband Rayleigh fading channels. The effect of different SBC schemes having diverse minimum Hamming distances (dH,min) and code rates on the attainable system performance was quantified, when using iterative SBSD and channel decoding, while keeping the overall bit-rate budget constant by appropriately partitioning the total available bit-rate budget between the source and channel codecs. EXIT charts were used for analysing the attainable system performance, and it was observed from this analysis that the convergence behaviour of ISCD is substantially improved with the aid of SBCs. The above-mentioned investigations evolved further by designing more sophisticated non-coherent-detection-aided space-time coding based Multiple-Input Multiple-Output (MIMO) schemes for near-capacity video transmission without the need for any high-complexity MIMO channel estimation. Space-time coding constitutes an effective transmit diversity technique for compensating the effects of wireless channels by exploiting the independent fading of the signals transmitted from multiple antennas.
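The minimum Hamming distance dH,min used above as the SBC figure of merit is straightforward to compute exhaustively for short codes; the repetition code below is my own stand-in example, not one of the thesis's SBCs:

```python
# Toy illustration of dH,min for a short block code: the minimum Hamming
# distance over all pairs of distinct codewords. The (3,1) repetition code
# here is an invented example with dH,min = 3.
from itertools import combinations

def min_hamming_distance(codebook):
    return min(sum(a != b for a, b in zip(c1, c2))
               for c1, c2 in combinations(codebook, 2))

repetition = [(0, 0, 0), (1, 1, 1)]
assert min_hamming_distance(repetition) == 3
```

A larger dH,min gives the soft decoder more extrinsic information per code bit, which is why it governs the ISCD convergence behaviour analysed with EXIT charts.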
Space-time coding is capable of achieving a substantial diversity and power gain relative to its single-input single-output counterpart, which is attained without any bandwidth expansion. More specifically, we proposed a new near-capacity Sphere Packing (SP) modulation aided Differential Space Time Spreading (DSTS) design for the transmission of the video coded stream. SP modulation is a specific scheme that maintains the highest possible Euclidean distance between the modulated symbols, while DSTS constitutes a low-complexity MIMO technique that does not require any channel estimation, since it relies on non-coherent detection. Finally, in order to circumvent the BER floor imposed by conventional two-stage turbo-detection schemes, we considered jointly optimised three-stage source and channel decoding arrangements employing serially concatenated and iteratively decoded SBCs combined with a Unity-Rate Code (URC) and multi-dimensional SP modulation. The mutual information between the SBC, URC and SP constituent components is iteratively exploited in a turbo process in order to improve the overall BER and the objective video quality in terms of the PSNR. The resultant coded signal was transmitted using a non-coherently detected DSTS MIMO-aided transceiver designed for near-capacity JSCD. The performance of the system was evaluated by considering interactive video telephony using the H.264/AVC source codec. Again, the convergence behaviour of the advocated MIMO transceiver was investigated with the aid of EXIT charts.
APA, Harvard, Vancouver, ISO, and other styles
17

Singhal, Rohit. "Multiple symbol decoding of differential space-time codes." Texas A&M University, 2003. http://hdl.handle.net/1969.1/344.

Full text
Abstract:
Multiple-symbol detection of space-time differential codes (MS-STDC) decodes N consecutive space-time symbols using maximum likelihood (ML) sequence detection to gain in performance over the conventional differential detection scheme. However, its computational complexity is exponential in N. A fast algorithm for implementing the MS-STDC in block-fading channels with complexity O(N⁴) is developed. Its performance in both block-fading and symbol-by-symbol fading channels is demonstrated through simulations. Set partitioning in hierarchical trees (SPIHT) coupled with a rate-compatible punctured convolutional code (RCPC) and a cyclic redundancy check (CRC) is employed as a generalized multiple-description source coder with robustness to channel errors. We propose a serial concatenation of the above with a differential space-time code (STDC) and invoke an iterative joint source-channel decoding procedure for decoding differentially space-time coded multiple descriptions. Experiments show a gain of up to 5 dB in PSNR with four iterations for image transmission in the absence of channel state information (CSI) at the receiver. A serial concatenation of SPIHT + RCPC/CRC is also considered with space-time codes (STCs) instead of STDCs. Experiments show a gain of up to 7 dB with four iterations in the absence of CSI.
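The scalar analogue of the differential signalling above, differential PSK with conventional two-symbol detection, can be sketched briefly; the QPSK alphabet, phase offset and symbol sequence are my own toy choices:

```python
# Hedged sketch of differential PSK: information rides on the RATIO of
# consecutive transmitted symbols, so an unknown constant channel phase
# cancels and no CSI is needed at the receiver. Multiple-symbol detection
# would jointly decide N such ratios; here we do the conventional pairwise rule.
import cmath

QPSK = [cmath.exp(2j * cmath.pi * k / 4) for k in range(4)]

def diff_encode(info_syms, s0=1 + 0j):
    out = [s0]
    for v in info_syms:
        out.append(out[-1] * v)          # s_k = s_{k-1} * v_k
    return out

def diff_detect(received):
    """Two-symbol differential detection: compare each sample with the
    previous one and pick the closest reference symbol."""
    est = []
    for prev, cur in zip(received, received[1:]):
        est.append(min(QPSK, key=lambda v: abs(cur - prev * v)))
    return est

info = [QPSK[1], QPSK[3], QPSK[0]]
tx = diff_encode(info)
# noiseless channel with an unknown phase rotation (no CSI at the receiver)
rx = [s * cmath.exp(0.7j) for s in tx]
assert all(abs(a - b) < 1e-9 for a, b in zip(diff_detect(rx), info))
```

Extending the decision from one symbol ratio to N consecutive ratios is what yields the ML sequence-detection gain, at the exponential cost the fast algorithm addresses.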
APA, Harvard, Vancouver, ISO, and other styles
18

Radhakrishnan, Rathnakumar. "Detection and Decoding for Magnetic Storage Systems." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/194396.

Full text
Abstract:
The hard-disk storage industry is at a critical time, as the current technologies are incapable of achieving densities beyond 500 Gb/in², which will be reached in a few years. Many radically new storage architectures have been proposed, which along with advanced signal processing algorithms are expected to achieve much higher densities. In this dissertation, various signal processing algorithms are developed to improve the performance of current and next-generation magnetic storage systems. Low-density parity-check (LDPC) error correction codes are known to provide excellent performance in magnetic storage systems and are likely to replace or supplement currently used algebraic codes. Two methods are described to improve their performance in such systems. In the first method, the detector is modified to incorporate auxiliary LDPC parity checks. Using graph-theoretical algorithms, a method to incorporate the maximum number of such checks for a given complexity is provided. In the second method, a joint detection and decoding algorithm is developed that, unlike all other schemes, operates on the non-binary channel output symbols rather than the input bits. Though sub-optimal, it is shown to provide the best known decoding performance for channels with memory greater than 1, which are the most important in practice. This dissertation also proposes a ternary magnetic recording system from a signal processing perspective. The advantage of this novel scheme is that it is capable of making magnetic transitions with two different but predetermined gradients. By developing optimal signal processing components such as receivers, equalizers and detectors for this channel, the equivalence of this system to a two-track/two-head system is determined and its performance is analyzed. Consequently, it is shown that it is preferable to store information using this system, rather than using a binary system with inter-track interference.
Finally, this dissertation provides a number of insights into the unique characteristics of heat-assisted magnetic recording (HAMR) and two-dimensional magnetic recording (TDMR) channels. For HAMR channels, the effects of the laser spot on the transition characteristics and the non-linear transition shift are investigated. For TDMR channels, a suitable channel model is developed to investigate the two-dimensional nature of the noise.
APA, Harvard, Vancouver, ISO, and other styles
19

KOBAYASHI, Kentaro, Takaya YAMAZATO, Hiraku OKADA, and Masaaki KATAYAMA. "Joint Channel Decoding of Spatially and Temporally Correlated Data in Wireless Sensor Networks." IEEE, 2008. http://hdl.handle.net/2237/12086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Nangare, Nitin Ashok. "Non-iterative joint decoding and signal processing: universal coding approach for channels with memory." Texas A&M University, 2003. http://hdl.handle.net/1969.1/3899.

Full text
Abstract:
A non-iterative receiver is proposed to achieve near-capacity performance on intersymbol interference (ISI) channels. There are two main ingredients in the proposed design: i) the use of a novel BCJR-DFE equalizer which produces optimal soft estimates of the inputs to the ISI channel given all the observations from the channel and exact knowledge of the L past symbols, where L is the memory of the ISI channel; ii) the use of an encoder structure that ensures that the L past symbols can be used in the DFE in an error-free manner, through the use of a capacity-achieving code for a memoryless channel. The computational complexity of the proposed receiver structure is less than that of one iteration of the turbo receiver. We also provide a proof showing that the proposed receiver achieves the i.i.d. capacity of any constrained-input ISI channel. This DFE-based receiver has several advantages over an iterative (turbo) receiver, such as low complexity, the fact that codes optimized for memoryless channels can be used with channels with memory, and finally that the channel does not need to be known at the transmitter. The proposed coding scheme is universal in the sense that a single code of rate r, optimized for a memoryless channel, provides a small error probability uniformly across all AWGN-ISI channels of i.i.d. capacity less than r. This general principle of the proposed non-iterative receiver also applies to other signal processing functions, such as timing recovery, pattern-dependent noise whitening, joint demodulation and decoding, etc. This makes the proposed encoder and receiver structure a viable alternative to iterative signal processing. The results show significant complexity reduction and performance gain in the case of timing recovery and pattern-dependent noise whitening for magnetic recording channels.
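The decision-feedback idea underlying the BCJR-DFE can be illustrated with a much simpler hard-decision DFE; the two-tap channel and BPSK symbols below are my own toy assumptions, not the thesis's soft-output equalizer:

```python
# Minimal hard-decision feedback equalisation sketch (NOT the thesis's
# BCJR-DFE): on a two-tap ISI channel h = [1, 0.5], the past decision is fed
# back to cancel the channel memory before slicing the current symbol.

def isi_channel(x, h=(1.0, 0.5)):
    """Noiseless two-tap ISI channel: y_k = h0*x_k + h1*x_{k-1}."""
    y, prev = [], 0.0
    for s in x:
        y.append(h[0] * s + h[1] * prev)
        prev = s
    return y

def dfe_detect(y, h=(1.0, 0.5)):
    decisions, prev = [], 0.0
    for r in y:
        z = r - h[1] * prev          # cancel ISI using the past decision
        d = 1.0 if z >= 0 else -1.0  # BPSK slicer
        decisions.append(d)
        prev = d
    return decisions

bits = [1.0, -1.0, -1.0, 1.0, 1.0]
assert dfe_detect(isi_channel(bits)) == bits
```

The thesis's key point is that the fed-back symbols can be made error-free by the encoder structure, which is what lets this non-iterative chain approach the i.i.d. capacity.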
APA, Harvard, Vancouver, ISO, and other styles
21

Dubuc, Christian. "Joint iterative demodulation and decoding for turbo-coded PSK signals on flat fading channels." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0017/MQ57725.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Subbalakshmi, K. P. "Joint source-channel decoding of variable-length encoded sources with applications to image transmission." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0013/NQ61684.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Dubuc, Christian Carleton University Dissertation Engineering Systems and Computer. "Joint iterative demodulation and decoding for turbo-coded PSK signals on flat fading channels." Ottawa, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
24

Illangakoon, Chathura. "On joint source-channel decoding and interference cancellation in CDMA-based large-scale wireless sensor networks." IEEE, 2013. http://hdl.handle.net/1993/22019.

Full text
Abstract:
Motivated by potential applications in wireless sensor networks, this thesis considers the problem of communicating a large number of correlated analog sources over a Gaussian multiple-access channel using non-orthogonal code-division multiple-access (CDMA). A joint source-channel decoder is presented which can exploit the inter-source correlation for interference reduction in the CDMA channel. This decoder uses a linear minimum mean square error (MMSE) multi-user detector (MUD) in tandem with an MMSE joint source decoder (JSD) for multiple sources to achieve a computational complexity that scales with the number of sources. The MUD and the JSD then iteratively exchange extrinsic information to improve the interference cancellation. Experimental results show that, compared to a non-iterative decoder, the proposed iterative decoder is more robust against the potential performance degradation due to correlated channel interference and offers better near-far resistance.
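The linear MMSE MUD stage mentioned above has a closed form: for received chips y = S a + n with spreading matrix S, the estimate is (SᵀS + σ²I)⁻¹Sᵀy. The two-user, length-4 example below uses my own toy spreading codes and noise level:

```python
# Hedged sketch of a linear MMSE multi-user detector for non-orthogonal CDMA.
# Two users, four chips; the 2x2 normal equations are solved by Cramer's rule.

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def mmse_mud(S, y, sigma2):
    # Gram matrix S^T S plus noise regularisation sigma^2 I
    G = [[sum(S[c][i] * S[c][j] for c in range(len(S))) + (sigma2 if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    STy = [sum(S[c][i] * y[c] for c in range(len(S))) for i in range(2)]
    return solve2(G, STy)

S = [[1, 1], [1, -1], [1, 1], [1, 1]]   # non-orthogonal spreading codes (columns)
a = [1.0, -1.0]                          # transmitted user symbols
y = mat_vec(S, a)                        # noiseless received chips
est = mmse_mud(S, y, sigma2=0.01)
assert [1.0 if e >= 0 else -1.0 for e in est] == a
```

In the thesis's iterative decoder, the JSD's extrinsic information would additionally bias these estimates toward the correlated source statistics on each pass.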
APA, Harvard, Vancouver, ISO, and other styles
25

Kobayashi, Kentaro, Takaya Yamazato, Hiraku Okada, and Masaaki Katayama. "Iterative Joint Channel-Decoding Scheme Using the Correlation of Transmitted Information Sequences in Sensor Networks." IEEE, 2006. http://hdl.handle.net/2237/7755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Shiao, Fu-Sheng. "Combined Channel Estimation and Turbo Equalization for Wireless Channels." Thesis, University of Canterbury. Electrical and Computer Engineering, 2007. http://hdl.handle.net/10092/1189.

Full text
Abstract:
Single-carrier linear modulation techniques combined with frequency-domain equalization provide a viable alternative to multicarrier techniques for combating multipath fading in channels with large delay spread. Such modulations tolerate frequency offset and have a well-controlled peak-to-average power ratio. They have comparable complexity to orthogonal frequency division multiplexing (OFDM) systems, and are more robust to synchronization errors. If error correction coding is used, then information can be iteratively passed between the equalizer and the decoder to improve performance. This is referred to as turbo equalization. To date, several turbo equalization schemes have been proposed, but little work has been done to address the problem of channel estimation for the turbo equalization process. The work in this thesis considers frequency-domain turbo equalization with imperfect channel state information (CSI) at the receiver for different wireless channels. A receiver structure incorporating joint frequency-domain turbo equalization and time-domain channel estimation is developed. The novelty of this scheme lies in the combination of time-domain channel estimation and frequency-domain turbo equalization, and in its extension to high-level modulation formats. The performance of the system is investigated by a combination of analysis and computer simulation. It is found that the system performs well over a range of dispersive channels.
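The non-iterative core of such a receiver is one-tap frequency-domain MMSE equalization: with a cyclic prefix, the DFT diagonalises the channel, so each frequency bin is equalised independently. The block length, channel taps and noise level below are my own toy example, not the thesis's setup:

```python
# Sketch of one-tap frequency-domain MMSE equalisation for single-carrier
# block transmission with a cyclic prefix. Each bin k is scaled by
# conj(H_k) / (|H_k|^2 + sigma^2), then the block is returned to the time domain.
import cmath

def dft(x, inverse=False):
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(v * cmath.exp(s * 2j * cmath.pi * k * n / N) for n, v in enumerate(x))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def fd_mmse_equalise(y, h, sigma2):
    H = dft(list(h) + [0.0] * (len(y) - len(h)))   # channel frequency response
    Y = dft(y)
    X = [Hf.conjugate() * Yf / (abs(Hf) ** 2 + sigma2) for Hf, Yf in zip(H, Y)]
    return dft(X, inverse=True)

h = [1.0, 0.4]                       # two-tap dispersive channel
x = [1.0, -1.0, 1.0, 1.0]            # BPSK block
# circular convolution models the cyclic-prefix transmission
y = [sum(h[m] * x[(n - m) % 4] for m in range(2)) for n in range(4)]
est = fd_mmse_equalise(y, h, sigma2=0.001)
assert [1.0 if e.real >= 0 else -1.0 for e in est] == x
```

Turbo equalization wraps this step in a loop with the channel decoder, and the thesis additionally re-estimates h in the time domain between iterations.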
APA, Harvard, Vancouver, ISO, and other styles
27

España, Boquera Salvador. "Contributions to the joint segmentation and classification of sequences (My two cents on decoding and handwriting recognition)." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62215.

Full text
Abstract:
[EN] This work is focused on problems (like automatic speech recognition (ASR) and handwritten text recognition (HTR)) that: 1) can be represented (at least approximately) in terms of one-dimensional sequences, and 2) are solved by breaking the observed sequence down into segments which are associated with units taken from a finite repertoire. The required segmentation and classification tasks are so intrinsically interrelated ("Sayre's Paradox") that they have to be performed jointly. We have been inspired by what some works call the "successful trilogy", which refers to the synergistic improvements obtained when considering: - a good formalization framework and powerful algorithms; - a clever design and implementation taking best advantage of the hardware; - an adequate preprocessing and a careful tuning of all heuristics. We describe and study "two-stage generative models" (TSGMs) comprising two stacked probabilistic generative stages without reordering. This model class includes not only Hidden Markov Models (HMMs) but also "segmental models" (SMs). "Two-stage decoders" may be deduced by simply running a TSGM in reverse, introducing non-determinism when required: 1) a directed acyclic graph (DAG) is generated, and 2) it is used together with a language model (LM). One-pass decoders constitute a particular case. A formalization of parsing and decoding in terms of semiring values and language equations motivates the use of recurrent transition networks (RTNs) as a normal form for Context-Free Grammars (CFGs), using them in a parsing-as-composition paradigm, so that parsing CFGs becomes a slight extension of parsing regular languages. Novel transducer composition algorithms have been proposed that can work with RTNs and can deal with null transitions and non-idempotent semirings without resorting to filter composition.
A review of LMs is provided, together with some contributions mainly focused on LM interfaces, LM representation and the evaluation of Neural Network LMs (NNLMs). A review of SMs includes the combination of generative and discriminative segmental models, as well as a general scheme of frame-emission types and another of SM types. Some fast, cache-friendly, specialized Viterbi lexicon decoders taking advantage of particular HMM topologies are proposed. They are able to manage sets of active states without requiring dictionary look-ups (e.g. hashing). A dataflow architecture allowing the design of flexible and diverse recognition systems from a small repertoire of components has been proposed, including a novel DAG serialization protocol. DAG generators can take over-segmentation constraints into account, make use of SMs other than HMMs, take advantage of the specialized decoders proposed in this work, and use a transducer model to control their behaviour, making it possible, for instance, to use context-dependent units. As for DAG decoders, they build on a general LM interface that can be extended to deal with RTNs. Some improvements for one-pass decoders are proposed by combining the specialized lexicon decoders and the "bunch" extension of the LM interface, including an adequate parallelization. The experimental part is mainly focused on HTR tasks with different input modalities (offline, bimodal). We have proposed some novel preprocessing techniques for offline HTR which replace classical geometrical heuristics and make use of automatic learning techniques (neural networks). Experiments conducted on the IAM database using this new preprocessing and HMMs hybridized with Multilayer Perceptrons (MLPs) have obtained some of the best results reported for this reference database.
Among other HTR experiments described in this work, we have used over-segmentation information, tried lexicon-free approaches, performed bimodal experiments and experimented with the combination of hybrid HMMs with holistic classifiers.
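The dynamic-programming recursion that the specialised lexicon decoders above accelerate is ordinary Viterbi decoding over an HMM; the two-state model and its probabilities below are invented for illustration, not taken from the thesis:

```python
# Generic textbook Viterbi decoding (NOT the thesis's specialised lexicon
# decoder): log-domain max-product recursion with back-pointers, returning
# the most likely state path for an observation sequence.
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = V[t - 1][prev] + math.log(trans_p[prev][s] * emit_p[s][obs[t]])
            back[t][s] = prev
    path = [max(states, key=lambda s: V[-1][s])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("a", "b")
start_p = {"a": 0.9, "b": 0.1}
trans_p = {"a": {"a": 0.6, "b": 0.4}, "b": {"a": 0.1, "b": 0.9}}
emit_p = {"a": {"x": 0.8, "y": 0.2}, "b": {"x": 0.2, "y": 0.8}}
assert viterbi(["x", "x", "y", "y"], states, start_p, trans_p, emit_p) == ["a", "a", "b", "b"]
```

The thesis's contribution is to exploit left-to-right lexicon topologies so that the set of active states per frame stays small and cache-resident, avoiding the dictionary look-ups a generic implementation needs.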
España Boquera, S. (2016). Contributions to the joint segmentation and classification of sequences (My two cents on decoding and handwriting recognition) [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62215
APA, Harvard, Vancouver, ISO, and other styles
28

Lu, P. S. (Pen-Shun). "Decoding and lossy forwarding based multiple access relaying." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526207742.

Full text
Abstract:
Abstract The goal of this thesis is to provide a unified concept of lossy forwarding, from theoretical analysis to practical scheme design, for the decode-and-forward-based multiple access relay channel (MARC) system. To improve the performance of MARC with the relay subject to resource or/and time constraints, the erroneous estimates output by simple detection schemes at the relay are forwarded and exploited. A correlation is then established between two sequences: one is the network-coded sequence sent from the relay, and the other is the corresponding exclusive-OR-ed information sequence of the sources. Several joint network-channel coding (JNCC) techniques are provided in which this correlation is utilized to update the log-likelihood ratio sequences during the iterative decoding process at the destination. As a result, the bit error rate (BER) and frame error rate (FER) are improved compared with those of MARC using a selective DF strategy (SDF-MARC). The MARC proposed above is referred to as erroneous-estimates-exploiting MARC (e-MARC). To investigate the achievable FER performance of the e-MARC system, the outage probability for e-MARC with two source nodes is theoretically derived. We re-formulate the e-MARC system and identify its admissible rate region according to the Slepian-Wolf theorem with a helper. Then, the outage probability is obtained by a set of integrals over the rate region with respect to the probability density functions of all the links' instantaneous signal-to-noise power ratios. It is found through simulations that, when one of the source nodes is far away from both the relay and the destination, e-MARC is superior to SDF-MARC in terms of outage performance. Furthermore, a joint adaptive network-channel coding (JANCC) technique is then proposed to support e-MARC with more source nodes.
A vector is constructed at the destination in JANCC to identify the indices of the incorrectly decoded source node(s), and transmitted back to the relay to request additional redundancy. Upon receiving the request, the relay performs network coding only over the estimates specified by the vector. Numerical results show that JANCC-aided e-MARC is superior to e-MARC in terms of FER and goodput efficiency. In addition, compared with SDF-MARC, where iterative decoding is performed at the relay, the use of differential detection with JANCC-aided e-MARC significantly reduces the computational complexity and latency with only a small loss in FER.
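The network-coding operation at the relay in this MARC setup is a bitwise exclusive-OR of the two source estimates; the frame length and bit values below are my own toy choices:

```python
# Hedged sketch of the relay's network coding in the MARC system: the relay
# forwards the XOR of its (possibly erroneous) estimates of the two source
# sequences. With correct estimates, the destination can recover either
# source from the other source plus the relay sequence.

def xor_network_code(est1, est2):
    return [b1 ^ b2 for b1, b2 in zip(est1, est2)]

src1 = [1, 0, 1, 1, 0]
src2 = [0, 0, 1, 0, 1]
relay = xor_network_code(src1, src2)     # error-free estimates in this sketch
# destination decodes src1 first, then recovers src2 via the relay sequence
recovered2 = xor_network_code(src1, relay)
assert recovered2 == src2
```

When the relay's estimates contain errors, the XOR relation holds only statistically, and it is precisely this residual correlation that the JNCC decoder exploits when updating the LLRs at the destination.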
Tiivistelmä Tämän väitöskirjan tarkoituksena on tuottaa yhtenäinen kokonaisuus häviöllisestä lähetyksestä pura-ja-lähetä (DF) -pohjaisessa monikäyttörelejärjestelmässä (MARC) sekä teoreettisesta että käytännöllisestä näkökulmasta. Parantaakseen resurssi- tai aikarajoitetun MARC-järjestelmän suorituskykyä, vastaanotin hyödyntää riippuvuussuhdetta releen välittämien informaatiosekvenssien virheellisten estimaattien ja suoraan lähteestä tulevien informaatiosekvenssien välillä (e-MARC). Työssä ehdotetaan useita yhdistetyn verkko -ja kanavakoodauksen menetelmiä (JNCC), joissa log-uskottavuussuhdesekvenssit iteratiivisen purkamisprosessin aikana päivitetään hyödyntämällä sekvenssien riippuvuussuhdetta vastaanottimessa. Tämän tuloksena sekä bittivirhe- että kehysvirhesuhdetta saadaan parannettua verrattuna selektiiviseen pura-ja-lähetä menetelmää käyttävään MARC-strategiaan (SDF-MARC). Kehysvirheen suorituskyvyn tarkastelua varten työssä johdetaan teoreettinen epäkäytettävyyden todennäköisyys e-MARC-menetelmälle kahden lähettimen tapauksessa. Lisäksi e-MARC-menetelmälle määritetään tiedonsiirtonopeusalue Slepian-Wolf -teoreeman mukaisesti. Tämän jälkeen saadaan epäkäytettävyyden todennäköisyys kaikkien linkkien signaalikohinasuhteen todennäköisyystiheysfunktion integraalina tiedonsiirtonopeusalueen yli. Simulointitulokset osoittavat e-MARC-menetelmän paremman epäkäytettävyyden todennäköisyyden verrattuna SDF-MARC-menetelmään silloin kun yksi lähettimistä on kaukana sekä releestä että vastaanottimesta. Mahdollistaakseen useamman lähteen käytön e-MARC-menetelmässä, työssä ehdotetaan lisäksi adaptiivinen yhdistetyn verkko-ja kanavakoodauksen menetelmä (JANCC). Siinä vastaanotin määrittää väärin purettujen sekvenssien lähettimet ja ilmoittaa ne vektorimuodossa takaisin releelle pyytääkseen näiden lähettimien informaation uudelleenlähetystä. Tämän jälkeen rele suorittaa verkkokoodauksen vain tunnistusvektorin määrittämien informaatiosekvenssien estimaatteihin perustuen. 
Tulokset näyttävät, että JANCC-menetelmää käyttävä e-MARC saavuttaa paremman kehysvirheen ja hyödyllisen läpäisyn tehokkuuden verrattuna e-MARC-menetelmään
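As a rough illustration of how a relay's decoding errors can be folded into the destination's decoder, the following sketch applies the standard BSC-equivalent LLR update: the network-coded bit forwarded by the relay is modeled as the true exclusive-OR bit flipped with probability `p_err`. This is a generic textbook construction, not the thesis's exact JNCC update rule.

```python
import math

def update_llr(llr, p_err):
    """LLR of the true XOR bit X when the relay may decode wrongly:
    the observed bit is X' = X xor E with P(E = 1) = p_err, i.e. a
    virtual BSC sits between the ideal network-coded bit and the relay output."""
    e = math.exp(llr)
    return math.log(((1.0 - p_err) * e + p_err) / (p_err * e + (1.0 - p_err)))
```

With `p_err = 0` the channel LLR passes through unchanged; with `p_err = 0.5` the relay observation carries no information and the update collapses to zero, matching the intuition that a fully unreliable relay should be ignored.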
APA, Harvard, Vancouver, ISO, and other styles
29

Veaux, Christophe. "Étude de traitements en réception pour l'amélioration de la qualité de la parole : application au GSM." Paris, ENST, 2005. http://www.theses.fr/2005ENST0006.

Full text
Abstract:
Dans les communications radio-mobiles, la qualité de la parole est fortement impactée par les erreurs résiduelles en sortie du décodeur canal. Celles-ci résultent des paquets d'erreurs introduits par le canal radio alors que les contraintes de complexité limitent les performances du codage canal. Parallèlement, ces mêmes contraintes font qu'il subsiste une redondance résiduelle en sortie du codeur de parole. Cette redondance résiduelle fournit une information a priori qui peut être exploitée au niveau du décodeur de parole pour minimiser l'erreur d'estimation des paramètres transmis, ou au niveau du décodeur canal pour minimiser le taux d'erreurs résiduelles. Nous étudions chacune de ces approches ainsi que leur combinaison dans le cas du GSM EFR. Au niveau du décodeur parole, nous modélisons la redondance résiduelle des paramètres du codeur à l'aide de mélanges de gaussiennes ce qui permet de réduire la complexité de l'estimation par rapport aux méthodes existantes et délivre une information plus riche sur la redondance résiduelle. Au décodeur canal, nous améliorons la prise en compte de la redondance résiduelle entre les bits d'une même trame par une technique de prédiction le long des chemins du treillis. Ces approches sont évaluées sur des simulations de transmission par l'algorithme PESQ de mesure objective de la qualité vocale. Le décodeur de parole exploitant la redondance résiduelle apparaît supérieur au décodeur classique du GSM EFR avec masquage d'erreur. En revanche, les performances du décodeur canal exploitant la redondance résiduelle sont sensibles à la correspondance choisie entre les éléments du dictionnaire de quantification et leur étiquetage binaire.
In digital mobile radio systems, the speech quality can be severely degraded if the channel decoder produces residual bit errors due to heavy burst errors on the radio path. On the other hand, in most speech coding schemes some residual redundancy remains because of complexity and delay limitations. A solution for improving the speech quality consists in using a priori information based on the source residual redundancy at the receiver side. In one approach, the a priori information is used at the source decoder to perform optimal estimation of the transmitted codec parameters. In another approach, it is used at the channel decoder in order to minimize the residual bit error rate. We study both approaches and apply them to the GSM EFR. At the speech decoder level, we model the residual redundancy of speech codec parameters by using Gaussian mixtures. This model reduces the complexity when compared to state-of-the-art methods and provides a better representation of the residual redundancy. At the channel decoder level, we introduce an efficient method to exploit intra-frame bit redundancy during the channel decoding process. In this method, the value of the bit being decoded is predicted from the bits already decoded along the trellis path. These methods are evaluated on simulations of radio transmission by an objective measure of speech quality. In all cases, the performance of the speech decoder using our residual redundancy model turns out to be better than that of the GSM EFR decoder with the standard error concealment procedure. The performance of the channel decoder using residual redundancy is strongly dependent on the index assignment used by the quantizer.
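The first approach above, optimal parameter estimation at the source decoder, can be sketched in miniature. The codebook, bit labels, and prior below are hypothetical toy values standing in for a speech codec parameter and its residual redundancy; the thesis itself models the redundancy with Gaussian mixtures rather than a fixed prior table.

```python
# A 2-bit scalar quantizer codebook with a (hypothetical) non-uniform prior
# standing in for the residual redundancy of a codec parameter.
codebook = [-1.5, -0.5, 0.5, 1.5]
labels   = [(0, 0), (0, 1), (1, 0), (1, 1)]
prior    = [0.1, 0.4, 0.4, 0.1]

def mmse_estimate(soft_bits):
    """soft_bits[k] = P(bit k = 1 | channel observation).
    Combine bitwise channel probabilities with the source prior and
    return the posterior-mean (MMSE) estimate of the parameter."""
    post = []
    for idx, bits in enumerate(labels):
        like = 1.0
        for k, b in enumerate(bits):
            like *= soft_bits[k] if b == 1 else 1.0 - soft_bits[k]
        post.append(like * prior[idx])
    z = sum(post)
    return sum(p / z * c for p, c in zip(post, codebook))
```

When the channel gives no information (all soft bits at 0.5), the estimate falls back to the prior mean; as the bits become reliable, it moves toward the corresponding codebook entry.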
APA, Harvard, Vancouver, ISO, and other styles
30

Wernersson, Niklas. "Source-channel coding for wireless networks." Licentiate thesis, Stockholm : Communication Theory, School of Electrical Engineering, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Wernersson, Niklas. "Source-Channel Coding in Networks." Doctoral thesis, Stockholm : Communication Systems, Kommunikationssystem, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Bayramoglu, Muhammet Fatih. "Sub-graph Approach In Iterative Sum-product Algorithm." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12606550/index.pdf.

Full text
Abstract:
The sum-product algorithm can be employed for obtaining the marginal probability density functions from a given joint probability density function (p.d.f.). The sum-product algorithm operates on a factor graph which represents the dependencies of the random variables whose joint p.d.f. is given. The sum-product algorithm cannot be operated directly on factor graphs that contain loops; for these factor graphs, the iterative sum-product algorithm is used. A factor graph which contains loops can be divided into loop-free sub-graphs. The sum-product algorithm can be operated on these loop-free sub-graphs, and the results of the sub-graphs can be combined to obtain the result for the whole factor graph in an iterative manner. This method may increase the convergence rate of the algorithm significantly while keeping the complexity of an iteration and the accuracy of the output constant. A useful by-product of this research, also introduced in this thesis, is a good approximation to the message calculation in the factor nodes of inter-symbol interference (ISI) factor graphs. This approximation has a complexity that is linearly, instead of exponentially, proportional to the number of neighbors. Using this approximation and the sub-graph idea, we have designed and simulated a joint decoding-equalization (turbo equalization) algorithm and obtained good results together with low complexity.
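The loop-free case the abstract builds on can be verified in a few lines: on a chain-shaped factor graph, one factor-to-variable message per neighbor yields the exact marginal. All factor values below are made up for illustration.

```python
# A loop-free chain factor graph: x0 --f01-- x1 --f12-- x2, binary variables.
g0 = [0.9, 0.1]; g1 = [0.4, 0.6]; g2 = [0.7, 0.3]   # unary factors
f01 = [[0.8, 0.2], [0.3, 0.7]]                      # pairwise factor on (x0, x1)
f12 = [[0.5, 0.5], [0.1, 0.9]]                      # pairwise factor on (x1, x2)

# Sum-product: x1's marginal is its unary factor times the two incoming
# factor-to-variable messages, each of which marginalizes out one neighbor.
m_left  = [sum(g0[a] * f01[a][b] for a in range(2)) for b in range(2)]
m_right = [sum(g2[c] * f12[b][c] for c in range(2)) for b in range(2)]
marg = [g1[b] * m_left[b] * m_right[b] for b in range(2)]
z = sum(marg); marg = [m / z for m in marg]

# Brute-force check by enumerating the full joint distribution.
bf = [0.0, 0.0]
for a in range(2):
    for b in range(2):
        for c in range(2):
            bf[b] += g0[a] * g1[b] * g2[c] * f01[a][b] * f12[b][c]
zz = sum(bf); bf = [m / zz for m in bf]
```

On a graph with loops this equivalence breaks down, which is exactly why the iterative (or sub-graph) variants discussed in the abstract are needed.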
APA, Harvard, Vancouver, ISO, and other styles
33

Mhamdi, Maroua. "Méthodes de transmission d'images optimisées utilisant des techniques de communication numériques avancées pour les systèmes multi-antennes." Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2281/document.

Full text
Abstract:
Cette thèse est consacrée à l'amélioration des performances de codage/décodage de systèmes de transmission d'images fixes sur des canaux bruités et réalistes. Nous proposons, à cet effet, le développement de méthodes de transmission d'images optimisées en se focalisant sur les deux couches application et physique des réseaux sans fil. Au niveau de la couche application et afin d'assurer une bonne qualité de service, on utilise des algorithmes de compression efficaces permettant au récepteur de reconstruire l'image avec un maximum de fidélité (JPEG2000 et JPWL). Afin d'assurer une transmission sur des canaux sans fil avec un minimum de TEB à la réception, des techniques de transmission, de codage et de modulation avancées sont utilisées au niveau de la couche physique (système MIMO-OFDM, modulation adaptative, CCE, etc). Dans un premier temps, nous proposons un système de transmission robuste d'images codées JPWL intégrant un schéma de décodage conjoint source-canal basé sur des techniques de décodage à entrées pondérées. On considère, ensuite, l'optimisation d'une chaîne de transmission d'images sur un canal MIMO-OFDM sans fil réaliste. La stratégie de transmission d'images optimisée s'appuie sur des techniques de décodage à entrées pondérées et une approche d'adaptation de lien. Ainsi, le schéma de transmission proposé offre la possibilité de mettre en oeuvre conjointement de l'UEP, de l'UPA, de la modulation adaptative, du codage de source adaptatif et de décodage conjoint pour améliorer la qualité de l'image à la réception. Dans une seconde partie, nous proposons un système robuste de transmission de flux progressifs basé sur le principe de turbo décodage itératif de codes concaténés offrant une stratégie de protection inégale de données. Ainsi, l'originalité de cette étude consiste à proposer des solutions performantes d'optimisation globale d'une chaîne de communication numérique pour améliorer la qualité de transmission
This work is devoted to improving the coding/decoding performance of image transmission schemes over noisy and realistic channels. For this purpose, we propose the development of optimized image transmission methods by focusing on both the application and physical layers of wireless networks. In order to ensure a better quality of service, efficient compression algorithms (JPEG2000 and JPWL) are used at the application layer, enabling the receiver to reconstruct the images with maximum fidelity. Furthermore, to ensure transmission on wireless channels with a minimum BER at reception, transmission, coding and advanced modulation techniques are used in the physical layer (MIMO-OFDM system, adaptive modulation, FEC, etc). First, we propose a robust transmission system for JPWL encoded images integrating a joint source-channel decoding scheme based on soft input decoding techniques. Next, the optimization of an image transmission scheme on a realistic MIMO-OFDM channel is considered. The optimized image transmission strategy is based on soft input decoding techniques and a link adaptation approach. The proposed transmission scheme offers the possibility of jointly implementing UEP, UPA, adaptive modulation, adaptive source coding and joint decoding strategies, in order to improve the image visual quality at the reception. Then, we propose a robust transmission system for embedded bit streams based on a concatenated block coding mechanism offering an unequal error protection strategy. Thus, the novelty of this study consists in proposing efficient solutions for the global optimization of the wireless communication system to improve transmission quality.
APA, Harvard, Vancouver, ISO, and other styles
34

Ali, Usman. "WiBOX - Une passerelle pour une réception robuste de vidéo diffusée via WIMAX et une rediffusion indoor via WIFI." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00576262.

Full text
Abstract:
This thesis studies a number of tools (gathered in the WiBox) required for reliable reception of video broadcast over WiMAX and rebroadcast over WiFi. The goal is to provide WiMAX services to WiFi users with reasonable video reception quality, even with a very weak WiMAX signal. To this end, joint decoding techniques for erroneous packets are essential in order to limit the delays associated with retransmissions. In the first part of this thesis, we consider the problem of delineating packets aggregated into macro-packets. This aggregation operation is performed in many protocols to improve the header-to-payload ratio of communication systems. Several delineation methods are proposed. They exploit soft information coming from the lower protocol layers as well as the redundancy present in the packets to be separated. The set of possible successions of packets within a macro-packet is described by a trellis. The delineation problem is turned into the problem of estimating the state of a Markov random variable, for which many algorithms (BCJR, Viterbi) are available. This technique is very effective but complex. Moreover, it requires the reception of the entire macro-packet, which may induce significant latency. In a second step, we propose a technique in which decoding is performed on a sliding window containing part of the macro-packet, this time using a sliding trellis. The window size allows a trade-off between complexity and decoding efficiency. Finally, an on-the-fly decoding method exploiting a three-state automaton and Bayesian hypothesis tests achieves less efficient delineation, but without latency. These methods are compared on the problem of delineating MAC packets within PHY macro-packets in WiMAX.
In the second part of the thesis, we propose soft decoding of the block codes used in certain layers of multimedia protocol stacks. Soft outputs are generated in order to allow joint decoding of the headers and the payload at the upper layers. In particular, we studied soft decoding tools for the RTP FEC standard, and compared the performance of the proposed decoder with classical decoding approaches. In summary, the proposed joint decoding techniques reduce the number of lost packets and increase the number of packets delivered to the application layers, where joint source-channel decoders can be used to improve the quality of the received video.
APA, Harvard, Vancouver, ISO, and other styles
35

Jabaian, Bassam. "Systèmes de compréhension et de traduction de la parole : vers une approche unifiée dans le cadre de la portabilité multilingue des systèmes de dialogue." Phd thesis, Université d'Avignon, 2012. http://tel.archives-ouvertes.fr/tel-00818970.

Full text
Abstract:
The widespread use of human-machine dialogue systems increases the need for rapid development of the various components of these systems. Dialogue systems can be designed for different application domains and in different languages. The need for rapid production for new languages remains an open and crucial problem for which effective solutions are required. Our work focuses in particular on the spoken language understanding module and proposes approaches for its fast, low-cost portability. Statistical methods have shown good performance for building spoken language understanding modules for the semantic labelling of dialogue turns. However, these methods require large corpora to be trained, and collecting such corpora is costly in time and human expertise. In this thesis, we propose several approaches to port an understanding system from one language to another using machine translation techniques. The first experiments apply machine translation at several levels of the porting process in order to reduce the cost of producing new training data. Experimental results show that the use of machine translation yields well-performing systems with a minimum of human contribution. This thesis therefore deals with both machine translation and spoken language understanding. We carried out an in-depth comparison of the methods used for each task and proposed a joint decoding based on a discriminative method that simultaneously translates a sentence and assigns it its semantic labels. This decoding is obtained through a graph-based approach that composes a translation graph with an understanding graph.
This representation can be generalized to allow rich information exchange between the components of the dialogue system.
APA, Harvard, Vancouver, ISO, and other styles
36

Hatefi, Atoosa. "Distributed Coding for Wireless Cooperative Networks." Thesis, Supélec, 2012. http://www.theses.fr/2012SUPL0015/document.

Full text
Abstract:
Cette thèse est consacrée à l'étude théorique et à la conception pratique de schémas de codage conjoint réseau/canal adaptés à différents scénarii de communications dans les réseaux sans fil. Contrairement aux hypothèses conventionnelles retenues dans la littérature (accès multiple orthogonal, absence d'erreurs sur certains liens), les caractéristiques de diffusion et de superposition des signaux propres au canal radio et la présence d'évanouissements lents et de bruit sur tous les liens sont prises en compte dans la formulation du problème et exploitées. Différentes stratégies de coopération au niveau du ou des relais sont examinées et comparées. Le point commun entre toutes ces stratégies est que le système doit fonctionner même en absence de coopération. Seuls le ou les relais et la destination sont informés d'une coopération. Ni les sources, ni le ou les relais ne connaissent l'état du canal à l'émission. Le premier volet de la thèse porte sur le canal à accès multiple avec relais unique (slow fading MARC). Le problème du codage et décodage conjoint canal/réseau (JNCC/JNCD) est étudié sur un plan théorique et pratique. Différentes hypothèses au niveau de l'accès multiple (semi-orthogonal et non-orthogonal) et différents modes de fonctionnement du relais (half-duplex et full-duplex) sont envisagés. Une nouvelle stratégie de coopération adaptative (SDF pour selective decode and forward) est définie dans laquelle le relais calcule et retransmet une fonction déterministe des messages de sources qu'il a pu décoder sans erreur. Le ré-encodage, défini sur un corps fini (corps binaire), est également conçu de manière à assurer que la performance finale au niveau de la destination atteint bien un ordre de diversité 2. Le modèle de canal MARC est par la suite étendu à plusieurs relais (slow fading MAMRC). Une analyse théorique est conduite et des nouveaux schémas JNCC/JNCD permettant de s'approcher des limites théoriques sont décrits. 
Afin d'assurer la diversité pleine, nous proposons de combiner un codage canal binaire et un codage réseau non-binaire. Pour les deux types de canaux, nous montrons que l'interférence naturellement induite par la diffusion des signaux dans un environnement sans fil, n'est pas un inconvénient mais bien un avantage dès lors qu'on est en mesure de la traiter via des techniques de codage et de décodage sophistiquées (turbo codes et leur décodage, turbo détection). Les gains en termes de capacité (rapportée à une certaine probabilité de coupure) obtenus avec un accès multiple semi-orthogonal ou non-orthogonal sont substantiels comparés à un accès multiple orthogonal (référence). Dans la dernière partie de la thèse, la stratégie de coopération SDF est comparée à deux autres stratégies de coopération s'appuyant sur un procédé de décodage-et-retransmission "souple" (sans prise de décisions intermédiaires) : l'une basée sur les rapports logarithmiques de probabilité a posteriori sur les bits codés et l'autre basée sur l'estimation de l'erreur quadratique moyenne (MSE). Nous vérifions que la stratégie de coopération SDF fonctionne bien dans la plupart des configurations, les stratégies de coopération souples n'améliorant légèrement les performances que dans certains cas extrêmes
With the rapid growth of wireless technologies, devices and mobile applications, the quest of high throughput and ubiquitous connectivity in wireless communications increases rapidly as well. Relaying is undoubtedly a key concept to provide coverage extension and capacity increase in wireless networks. Network coding, which allows the intermediate nodes to share their computation capabilities in addition to their resource and their power, has grabbed a significant research attention since its inception in information theory. It has become an attractive candidate to bring promising performance improvement, especially in terms of throughput, in relay-based cellular networks. Substantial research efforts are currently focused on theoretical analysis, implementation and evaluation of network coding from a physical layer perspective. The question is, what is the most efficient and practical way to use network coding in wireless relay-based networks, and whether it is beneficial to exploit the broadcast and multiple-access properties of the wireless medium to perform network coding. It is in such a context, that this thesis proceeds. In the first part of the thesis, the problem of Joint Network-Channel Coding (JNCC) for a Multiple Access Relay Channel (MARC) is investigated in the presence of multiple access interferences and for both of the relay operating modes, namely, half-duplex and full-duplex. To this end, three new classes of MARC, referred to as Half-Duplex Semi-Orthogonal MARC (HD-SOMARC), Half-Duplex Non-Orthogonal MARC (HD-NOMARC), and Full-Duplex Non-Orthogonal MARC (FD-NOMARC) have been introduced and studied. The relaying function in all of the classes is based on a Selective Decode-and-Forward (SDF) strategy, which is individually implemented for each source, i.e, the relay forwards only a deterministic function of the error-free decoded messages. 
For each class, an information-theoretic analysis is conducted, and practical coding and decoding techniques are proposed. The proposed coding schemes perform very close to the outage limit for both cases of HD-SOMARC and HD-NOMARC. Besides, in the case of HD-NOMARC, the optimal allocation of the transmission time to the relay is considered. It is also verified that exploiting multiple access interferences, either partially or totally, results in considerable gains for MARC compared to the existing interference-avoiding structures, even in the case of a single receive antenna. In the second part of the thesis, the network model is extended by considering multiple relays which help multiple sources to communicate with a destination. A new class of Multiple Access Multiple Relay Channel (MAMRC), referred to as Half-Duplex Semi-Orthogonal MAMRC (HD-SOMAMRC), is then proposed and analyzed from both the information-theoretic and the code design perspective. New practical JNCC schemes are proposed, in which binary channel coding and non-binary network coding are combined, and they are shown to perform very close to the outage limit. Moreover, the optimal allocation of the transmission time to the sources and relays is considered. Finally, in the third part of the thesis, different ways of implementing cooperation, including practical relaying protocols, are investigated for the half-duplex MARC with a semi-orthogonal transmission protocol and in the case of JNCC. The hard SDF approach is compared with two Soft Decode and Forward (SoDF) relaying functions: one based on log a posteriori probability ratios (LAPPRs) and the other based on the Mean Square Error (MSE) estimate. It is then shown that SDF works well in most of the configurations, and only in some extreme cases can soft relaying functions (based on the LAPPR or MSE estimate) slightly outperform the hard selective one.
APA, Harvard, Vancouver, ISO, and other styles
37

Kim, Byung Hak. "Joint Equalization and Decoding via Convex Optimization." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10755.

Full text
Abstract:
The unifying theme of this dissertation is the development of new solutions for decoding and inference problems based on convex optimization methods. The first part considers the joint detection and decoding problem for low-density parity-check (LDPC) codes on finite-state channels (FSCs). Hard-disk drives (or magnetic recording systems), where the required error rate (after decoding) is too low to be verifiable by simulation, are among the most important applications of this research. Recently, LDPC codes have attracted a lot of attention in the magnetic storage industry and some hard-disk drives have started using iterative decoding. Despite progress in the area of reduced-complexity detection and decoding algorithms, there has been some resistance to the deployment of turbo-equalization (TE) structures (with iterative detectors/decoders) in magnetic-recording systems because of error floors and the difficulty of accurately predicting performance at very low error rates. To address this problem for channels with memory, such as FSCs, we propose a new decoding algorithm based on a well-defined convex optimization problem. In particular, it is based on the linear-programming (LP) formulation of the joint decoding problem for LDPC codes over FSCs. It exhibits two favorable properties: provable convergence and predictable error floors (via pseudo-codeword analysis). Since general-purpose LP solvers are too complex to make the joint LP decoder feasible for practical purposes, we develop an efficient iterative solver for the joint LP decoder by taking advantage of its dual-domain structure. The main advantage of this approach is that it combines the predictability and superior performance of joint LP decoding with the computational complexity of TE. The second part of this dissertation considers the matrix completion problem for the recovery of a data matrix from incomplete, or even corrupted, entries of an unknown matrix.
Recommender systems are good representatives of this problem, and this research is important for the design of information retrieval systems which require very high scalability. We show that our IMP algorithm reduces the well-known cold-start problem associated with collaborative filtering systems in practice.
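The LP formulation mentioned above relies on Feldman-style parity inequalities that describe each check's local codeword polytope. The sketch below only constructs those inequalities for the (7,4) Hamming code and checks that every codeword is feasible; it omits the actual LP solve and the finite-state-channel part of the joint decoder.

```python
from itertools import combinations, product

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]   # parity checks of the (7,4) Hamming code

def parity_inequalities(H):
    """Feldman-style inequalities for each check's parity polytope:
    for every odd-sized subset F of a check's neighborhood N,
        sum_{i in F} x_i - sum_{i in N\\F} x_i <= |F| - 1."""
    ineqs = []
    for row in H:
        N = [i for i, h in enumerate(row) if h]
        for r in range(1, len(N) + 1, 2):
            for F in combinations(N, r):
                ineqs.append((set(F), set(N) - set(F), len(F) - 1))
    return ineqs

def satisfies(x, ineqs):
    """True if point x lies inside all parity-polytope inequalities."""
    return all(sum(x[i] for i in F) - sum(x[i] for i in G) <= b
               for F, G, b in ineqs)

ineqs = parity_inequalities(H)
codewords = [c for c in product([0, 1], repeat=7)
             if all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H)]
```

An LP decoder would minimize a per-bit cost (derived from channel LLRs) over this polytope; fractional optima correspond to the pseudo-codewords that the dissertation uses to predict error floors.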
APA, Harvard, Vancouver, ISO, and other styles
38

Samarawickrama, Upul. "Joint source-channel decoding of multiple descriptions." 2005. http://hdl.handle.net/1993/18097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Huang, Chun-Chia, and 黃俊嘉. "Content-aware Joint Video Retargeting and Decoding in Compressed Domain." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/62965561984608106225.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Electrical Engineering
101
Since display devices have diverse resolution sizes and aspect ratios, videos sometimes need to be resized before being displayed on a variety of devices. Content-aware video retargeting aims to provide a better way than traditional simple cropping and uniform scaling: it squeezes or discards the regions which are less important and preserves the regions of interest. Most existing retargeting algorithms work in the pixel domain. When a compressed video is sent from one device to another, the bitstream is first decoded into a video sequence, and the video sequence is then retargeted to the desired size if the receiver has a different resolution or aspect ratio. This work presents a method to retarget compressed video in the compressed domain, where video decoding and retargeting are performed in one step. Experimental results show that the proposed method can save up to 60% of the computation time compared to pixel-domain video retargeting. Furthermore, the retargeted video quality is also better than that of traditional uniform scaling.
APA, Harvard, Vancouver, ISO, and other styles
40

Cruz, Susana Pereira Bulas. "Decoder design and decoding models for joint source-network coding." Dissertação, 2010. http://hdl.handle.net/10216/62125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Zhi-Xian, and 林志賢. "Joint source-channel decoding for convolutionally encoded variable length codes." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/52681082637156508108.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Communications Engineering
100
Reliable transmission of convolutionally encoded source codec parameters over noisy channels can benefit from the use of joint source-channel decoding (JSCD). This thesis focuses on the design of a symbol-based JSCD for convolutionally encoded variable-length codes (VLC). We also investigate bidirectional decoding schemes by using reversible variable-length codes (RVLC). Our work starts with a sectionalized code trellis whose two-dimensional states are created by combining the symbol-based VLC trellis and the state transitions of a channel encoder. Then, we derive the recursive implementation of the JSCD algorithm according to the three-dimensional sectionalized code trellis. Simulation results on both Gauss-Markov sources and ECG signals demonstrate the error-resilience capabilities of symbol-based joint decoding using the first-order source correlation.
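A stripped-down version of symbol-based VLC decoding on a bit-level trellis might look as follows. The code table and priors are invented, and the channel-code dimension of the thesis's three-dimensional trellis is omitted, so this is only the VLC half of the joint decoder.

```python
import math

code  = {'a': '0', 'b': '10', 'c': '11'}   # toy variable-length code
prior = {'a': 0.5, 'b': 0.3, 'c': 0.2}     # residual source statistics

def jscd_viterbi(p1):
    """Bit-level Viterbi over the VLC tree (no channel code, for brevity).
    State = current partial codeword; p1[t] = P(bit t = 1 | channel).
    Returns the MAP symbol sequence ending on a symbol boundary."""
    inv = {c: s for s, c in code.items()}
    prefixes = {c[:i] for c in code.values() for i in range(1, len(c))}
    paths = {'': (0.0, [])}                # state -> (log-metric, symbols)
    for p in p1:
        nxt = {}
        for state, (m, syms) in paths.items():
            for bit, pb in ((0, 1.0 - p), (1, p)):
                if pb <= 0.0:
                    continue
                pref, metric = state + str(bit), m + math.log(pb)
                if pref in inv:            # a codeword completes here
                    sym = inv[pref]
                    cand, key = (metric + math.log(prior[sym]), syms + [sym]), ''
                elif pref in prefixes:     # still inside a codeword
                    cand, key = (metric, syms), pref
                else:
                    continue               # not a valid codeword prefix
                if key not in nxt or cand[0] > nxt[key][0]:
                    nxt[key] = cand
        paths = nxt
    return paths[''][1] if '' in paths else None
```

Even when one received bit is unreliable, the source prior and the codeword-boundary constraint pull the decoder back to the most plausible symbol sequence.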
APA, Harvard, Vancouver, ISO, and other styles
42

Cruz, Susana Pereira Bulas. "Decoder design and decoding models for joint source-network coding." Master's thesis, 2010. http://hdl.handle.net/10216/62125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Wu, Wei-I., and 吳威怡. "Bit- and Trellis- Based Joint Huffman and Convolutional Sequential Decoding Algorithms." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/05464860396541934297.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
96
According to Shannon's separation theorem, the performance of the overall system is optimal when the source coding and the channel coding are separately optimized. However, due to constraints on complexity and delay, the performance of separate decoding is usually not optimal. In the past, in order to further improve the performance of separate decoding, the residual redundancy left after compression, the source a priori information, and the channel statistical information have been exploited and fully utilized in so-called joint source-channel decoding (JSCD) schemes. Traditionally, the trellis adopted in the Viterbi decoding algorithm becomes tremendously large when all the source and channel information is utilized. Although the decoding performance is optimal, the decoding complexity becomes quite expensive; therefore, it is not practical. In this work, a new maximum a posteriori probability (MAP) metric with lower computational complexity is derived first, and then we propose a bit- and trellis-based joint sequential decoding algorithm along with a suboptimal solution. Simulation results indicate that the suboptimal method can provide nearly the same performance as the optimal scheme while exhibiting a significantly lower complexity.
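The sequential (best-first) flavor of such a decoder can be imitated with a priority queue over partial source sequences: since every branch cost (negative log prior plus negative log channel likelihood) is non-negative, the first complete path popped is the MAP one. The toy VLC below is hypothetical and, unlike the thesis, no convolutional code is included.

```python
import heapq, math

code  = {'a': '0', 'b': '10', 'c': '11'}   # toy variable-length code
prior = {'a': 0.5, 'b': 0.3, 'c': 0.2}     # source a priori information

def sequential_decode(p1):
    """Best-first (stack-algorithm-style) search over the source tree.
    p1[t] = P(bit t = 1 | channel). With non-negative edge costs, the
    first complete path popped from the heap is the MAP sequence."""
    heap = [(0.0, 0, [])]                  # (cost, bits consumed, symbols)
    while heap:
        cost, pos, syms = heapq.heappop(heap)
        if pos == len(p1):
            return syms
        for s, bits in code.items():
            if pos + len(bits) > len(p1):
                continue
            c, ok = cost - math.log(prior[s]), True
            for i, b in enumerate(bits):
                pb = p1[pos + i] if b == '1' else 1.0 - p1[pos + i]
                if pb <= 0.0:
                    ok = False
                    break
                c -= math.log(pb)
            if ok:
                heapq.heappush(heap, (c, pos + len(bits), syms + [s]))
    return None
```

Unlike a full trellis search, the queue expands only the promising paths, which is the complexity advantage sequential decoding trades against worst-case search effort.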
APA, Harvard, Vancouver, ISO, and other styles
44

Tang, Fu-Qiao, and 湯復喬. "On Outage Analysis for Full-Duplex Relay Networks with Joint Decoding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/54st97.

Full text
Abstract:
Master's thesis
National Central University
Department of Communication Engineering
105
As public demand for mobile data increases year by year, and with spectrum resources limited, improving spectrum utilization has become a hot topic. The full-duplex system has been widely discussed in recent years because of its capability to send and receive messages at the same time and in the same frequency band, and it is the system model adopted in this thesis. This thesis investigates the outage performance of full-duplex relay channels that adopt the amplify-and-forward and decode-and-forward relay protocols. Previous works mostly assumed that the destination treats the self-interference signal purely as interference. We provide a new system model in which the self-interference signal assists information decoding at the destination, and we derive an approximate, yet accurate, closed-form expression for the end-to-end outage probability. In addition, even when the processing delay is larger than one symbol, as in practice, the proposed closed-form outage probability expression remains valid. Finally, we corroborate our theoretical results with simulations; the results indicate that the proposed system model can outperform traditional full-duplex and half-duplex systems.
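As a sanity-check companion to such closed-form outage analysis, the sketch below compares a Monte Carlo estimate against the textbook point-to-point Rayleigh-fading outage expression. This is deliberately a far simpler setup than the thesis' full-duplex relay model; the rate and SNR values are arbitrary:

```python
import math
import random

def outage_mc(snr_lin, rate_bps, n_trials=200_000, seed=1):
    """Monte Carlo outage probability of a point-to-point Rayleigh link:
    outage when log2(1 + snr * |h|^2) < rate, with |h|^2 ~ Exp(1)."""
    rng = random.Random(seed)
    thr = (2.0 ** rate_bps - 1.0) / snr_lin
    hits = sum(rng.expovariate(1.0) < thr for _ in range(n_trials))
    return hits / n_trials

def outage_exact(snr_lin, rate_bps):
    """Closed form for the same link: P_out = 1 - exp(-(2^R - 1)/snr)."""
    return 1.0 - math.exp(-(2.0 ** rate_bps - 1.0) / snr_lin)
```

The same cross-validation pattern, simulating the end-to-end SINR and checking it against the derived closed form, is how an approximate relay outage expression like the one in this thesis is typically corroborated.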
APA, Harvard, Vancouver, ISO, and other styles
45

Jing-ShiunLin and 林敬衒. "High-performance Joint Iterative Detection and Decoding Design in Coded-MIMO Systems." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/92297209659102558260.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Department of Electrical Engineering
103
Multiple-input multiple-output (MIMO) techniques have been widely used to increase the transmission rate and improve the signal quality in modern wireless communication systems. To further ensure transmission reliability, error-correcting codes such as turbo codes, with performance close to the theoretical Shannon limit, are adopted in wireless communication systems. Many MIMO detection algorithms have been proposed in the literature to recover the transmitted signals from the received noisy signals. In particular, soft-output MIMO detection algorithms can provide a priori information about the codeword to the subsequent soft-input soft-output (SISO) decoders, such as the turbo decoder in coded-MIMO systems. In some application scenarios with Rayleigh fading channels, unreliable a priori information might result in system performance degradation. Exploring high-performance coded-MIMO systems thus becomes a challenging and crucial research area. In this dissertation, we have developed a high-performance joint iterative detection and turbo decoding design for coded-MIMO systems. First, a low-complexity, high-throughput complex-valued QR factorization (CQRF) design is presented. Based on coordinate rotation digital computer (CORDIC) arithmetic, a massively parallel array architecture consisting of pipelined and folded CORDIC modules was developed to enhance the throughput. The chip implementation results indicate that the design, with an equivalent gate count of 192.1K, can operate at 200 MHz and achieve a 3-Gb/s data rate in 4×4 MIMO systems, the highest reported. The proposed design also outperforms related designs in two compound performance indices: data rate normalized with respect to gate count and with respect to power consumption. Second, we developed a mean-aided early-pruned scheme for a MIMO detector based on the sort-free fixed-complexity sphere decoding algorithm.
The modified MIMO detector can reduce the number of node computations while maintaining the BER performance of the original sort-free algorithm. Experimental results show that the proposed reconfigurable detector design with a 4×4 antenna array achieves a higher normalized throughput than existing detectors using the same system configuration. Third, an efficient joint iterative MIMO detection and turbo decoding design was developed. For the turbo decoding, a modified parallel-window MAP algorithm was proposed to reduce the warm-up computation. In the initial iteration, the reliability of the soft information from the MIMO detector is greatly improved by applying the proposed extended tree search scheme. A modified updating strategy is presented to acquire highly reliable extrinsic information from the soft-output MIMO detector in the iterative system. Compared to the list sphere decoding (LSD) algorithm, about a 94% reduction in the memory requirement of the log-likelihood ratio (LLR) computation can be achieved by using the proposed candidate-node selection strategy. Finally, based on the analysis of the iteration profile, the overall system performance can be maintained with fewer searched nodes than in existing works.
APA, Harvard, Vancouver, ISO, and other styles
46

Onwuatuelo, Daniel Obinna. "Joint decoding of parallel power line communication and visible light communication systems." Thesis, 2018. https://hdl.handle.net/10539/26917.

Full text
Abstract:
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science in Engineering, June 2018
Many indoor applications operate at narrow-band (3 kHz to 148.5 kHz) speeds, and for such applications power line communication (PLC) and visible light communication (VLC) networks can be naturally connected and adapted to complement each other in order to gain overall system performance in terms of bit error rate (BER) and computational complexity. In this research, the joint decoding of parallel PLC and VLC systems is proposed and its BER performance is compared to that of the PLC and the VLC systems. The joint decoding is applied either at the inner (Viterbi) or at the outer (Reed-Solomon) decoder. The proposed system is adopted according to the PLC G3 physical-layer specification, but direct-current optical orthogonal frequency division multiplexing (DCO-OFDM) is used in the VLC system to ensure that only positive (unipolar) signals are transmitted. A realistic VLC channel model is adopted in this research by considering the VLC channel as an additive white Gaussian noise (AWGN) channel affected by attenuation in terms of the angle of orientation between the source and the receiver and the effective surface area of the receiver. Furthermore, the PLC channel is modeled as an AWGN channel with background and impulsive noise generated using the Middleton Class A noise distribution model. It is shown through simulation results and analysis that the proposed joint decoded system outperforms the PLC and the VLC systems in terms of BER performance, depending on the distance of separation between the source and the receiver. Key words: Power line communication (PLC), Visible light communication (VLC), Bit error rate (BER), Joint decoding, Orthogonal frequency division multiplexing (OFDM), DC optical OFDM (DCO-OFDM), Additive white Gaussian noise (AWGN).
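The DCO-OFDM step mentioned above, Hermitian symmetry on the subcarriers followed by a DC bias to obtain a unipolar waveform, can be sketched as follows; the FFT size, bias level, and normalization are illustrative assumptions, not the dissertation's parameters:

```python
import numpy as np

def dco_ofdm_symbol(qam, n_fft=64, bias_db=7.0):
    """Build one real, non-negative DCO-OFDM time-domain symbol.

    Hermitian symmetry on the subcarriers forces a real IFFT output;
    a DC bias is then added and residual negative excursions clipped."""
    n_data = n_fft // 2 - 1
    assert len(qam) == n_data
    X = np.zeros(n_fft, dtype=complex)
    X[1:n_data + 1] = qam                    # data on positive subcarriers
    X[-n_data:] = np.conj(qam[::-1])         # Hermitian-symmetric mirror
    x = np.fft.ifft(X).real * n_fft ** 0.5   # real-valued by construction
    bias = 10 ** (bias_db / 20.0) * x.std()  # DC bias relative to signal RMS
    return np.clip(x + bias, 0.0, None)      # unipolar: clip what is left
```

Half the subcarriers carry data and one is sacrificed to DC, which is the spectral-efficiency price DCO-OFDM pays for producing an intensity-modulation-compatible signal.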
APA, Harvard, Vancouver, ISO, and other styles
47

Lin, Kai-Liang, and 林凱亮. "Study on Joint Detection and Decoding Algorithm Based on 5G NR Specification." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/anza7p.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Institute of Communications Engineering
107
The fifth-generation mobile network (5G), also known as New Radio (NR), is expected to be commercialized in 2020. 5G NR operates in both the sub-6 GHz frequency bands and the millimeter-wave bands. In the non-standalone mode, the millimeter-wave bands are operated with the aid of the sub-6 GHz bands; in the standalone mode, the sub-6 GHz and millimeter-wave bands can be operated independently. Time Division Duplexing (TDD) is usually adopted in the standalone mode, and effectively operating the communications between the base station and the user equipment in TDD mode is a challenging issue. In order to measure the channel environments of different over-the-air transmissions, we developed a multiple-input multiple-output (MIMO) measurement platform for TDD mode. The platform designs its signal frame based on the 5G NR specification, and we study ways to improve signal demodulation and decoding performance under limited bandwidth conditions in pursuit of higher signal quality. We design a joint detection and decoding (JDD) algorithm in this thesis. The transmitted waveform adopts low-density parity-check (LDPC) encoding based on the 5G NR specification. At the receiver, the LDPC decoder feeds the output information bits back to the detector; JDD thus trades computational complexity for the reliability of the information bits. Compared to the case without joint decoding, the JDD algorithm improves the effective signal-to-noise ratio (ESNR) by at least 3.29 dB. To further increase execution efficiency, the LDPC decoder and detector use a single-instruction multiple-data (SIMD) architecture, which increases online throughput. In this thesis, we design and simulate the JDD algorithm over the channel and show that it reduces the systematic bit error rate. The JDD algorithm is also included in the TDD MIMO measurement platform. Through over-the-air testing, we demonstrate that JDD can reduce the bit error rate and increase system throughput.
Keywords: 5G NR, Joint Detection and Decoding, LDPC, MIMO, TDD.
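The decoder-to-detector feedback described above relies on the LDPC decoder producing corrected hard bits. As a minimal stand-in for the 5G NR LDPC decoder, the sketch below runs Gallager-style bit flipping on a toy (7,4) parity-check matrix; the real 5G NR base graphs are vastly larger and are decoded with soft-message passing:

```python
import numpy as np

# Toy parity-check matrix (a (7,4) Hamming code, NOT a 5G NR base graph).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(hard_bits, max_iters=20):
    """Gallager-style bit flipping: repeatedly flip the bit that
    participates in the most unsatisfied parity checks until the
    syndrome is all-zero or the iteration budget runs out."""
    c = hard_bits.copy()
    for _ in range(max_iters):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c, True  # valid codeword reached
        votes = H.T @ syndrome  # per-bit count of failing checks
        c[np.argmax(votes)] ^= 1
    return c, False
```

In a JDD loop, the `True` flag signals that the decoder's hard output is a valid codeword and is therefore reliable enough to feed back to the detector.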
APA, Harvard, Vancouver, ISO, and other styles
48

Chao, Pi-Chung, and 趙必昌. "Joint Channel Estimation, Symbol Detection and LDPC Decoding in Time-Varying Fading Channels." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/81878965891603577571.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Communications Engineering
97
An iterative receiver structure for joint channel estimation, symbol detection, and channel decoding is proposed for the non-coherent decoding of low-density parity-check codes in Rayleigh fading channels. The performance of the proposed algorithm is studied for both flat and frequency-selective fading channels without using any pilot or training symbols. In flat fading channels, simulation results show that the performance of the non-coherent algorithm is only half a decibel inferior to the coherent one, which matches the analysis using the extrinsic information transfer chart; in frequency-selective fading channels, the performance gap against the coherent one is still large, which requires further investigation.
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Yu-Hao, and 張育豪. "Generalized Piecewise Integer Mapping Based Arithmetic Coding and its Joint Source-Channel Decoding." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/y2daq8.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
101
Arithmetic coding (AC) is an efficient data compression technique widely adopted in image and video compression standards. However, the high computational complexity of arithmetic coding is a bottleneck that many researchers endeavor to overcome. Piecewise integer mapping based arithmetic coding can eliminate all multiplicative operations in both the encoder and the decoder by replacing them with comparisons and additions; it reduces the computational complexity with a small penalty in compression loss. A generalized piecewise integer mapping arithmetic coding can provide higher compression efficiency while maintaining almost the same coding speed. We apply iterative decoding to a communication scheme in which an arithmetic code with a forbidden symbol is used for source coding and a recursive systematic convolutional code is used for channel coding. This system adopts a low-complexity SISO technique (a modified SOVA algorithm) for the arithmetic code, which is modeled as a finite state machine and can then be decoded using a trellis structure. Experimental results show that the piecewise integer mapping based AC and its generalized variant not only eliminate all multiplicative operations to lower the computational complexity, but also possess some error-correcting capability. Moreover, their compression efficiency sometimes exceeds that of traditional integer AC. In general, integer AC with a forbidden symbol outperforms integer AC without a forbidden symbol in terms of error-correcting performance. Furthermore, the EXIT chart is also presented for analyzing the iterative decoding.
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Chien-Hao, and 黃建豪. "Tree Search Based Configurable Joint Detection and Decoding Algorithms for MIMO Wireless Communications." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/69645524744701606845.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
103
Multiple-input multiple-output (MIMO) technology plays an essential role in modern wireless communication systems owing to its capability to significantly improve communication quality and efficiency without drastically increasing the occupied bandwidth. However, such lucrative features come with the prominent drawback of significantly increased system complexity, especially on the receiver side. It has been shown that the joint MIMO detection and channel decoding (JDD) scheme can effectively reduce the complexity and improve the signal quality of the MIMO receiver. However, the previously proposed structure can only be applied to systems using a specific code rate and modulation scheme, so it cannot be utilized in practical wireless systems. In this thesis, we present the Configurable Joint Detection-Decoding (CJDD) scheme, which can be used with several combinations of system settings and thus resolves the previous limitation. Moreover, since the JDD/CJDD approach is based on tree search algorithms, and many such methods with various characteristics exist, it is of great importance to understand the trade-offs between the tree search algorithms and their impact on different aspects of system performance. In this thesis, we investigate three types of tree search algorithms: breadth-first approaches, depth-first approaches, and the Fano algorithm. We explore each of these algorithms on the basis of timing complexity, area complexity, and run-time variation. Our aim is to shed more light on the design and application of tree-search-type approaches in MIMO wireless communication systems for various system considerations.
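A depth-first tree search of the kind compared in this thesis can be sketched as a classic sphere decoder with Schnorr-Euchner enumeration over a real-valued channel model; the matrix size, alphabet, and lack of channel coding are illustrative assumptions rather than the thesis' CJDD setting:

```python
import numpy as np

def sphere_decode(y, H, alphabet=(-3, -1, 1, 3)):
    """Depth-first sphere decoding for y = H s + n (real-valued model).

    After QR-decomposing H, the tree is searched from the last antenna
    upward; branches whose partial Euclidean distance already exceeds the
    best full-path distance found so far are pruned."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    best = {'d': np.inf, 's': None}

    def descend(level, s, pd):
        if pd >= best['d']:
            return  # prune: partial distance already too large
        if level < 0:
            best['d'], best['s'] = pd, s.copy()  # new best leaf
            return
        resid = z[level] - R[level, level + 1:] @ s[level + 1:]
        # Schnorr-Euchner ordering: try symbols closest to the residual first.
        for x in sorted(alphabet, key=lambda a: abs(resid - R[level, level] * a)):
            s[level] = x
            descend(level - 1, s, pd + (resid - R[level, level] * x) ** 2)

    descend(n - 1, np.zeros(n), 0.0)
    return best['s']
```

The depth-first strategy finds a full-length candidate quickly and then tightens the pruning radius, which gives low average complexity but highly variable run time, exactly the breadth-first versus depth-first versus Fano trade-off this thesis examines.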
APA, Harvard, Vancouver, ISO, and other styles