To see the other types of publications on this topic, follow the link: Digital communications Reliability Mathematics.

Dissertations / Theses on the topic 'Digital communications Reliability Mathematics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 18 dissertations / theses for your research on the topic 'Digital communications Reliability Mathematics.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Britton, Matthew Scott. "Stochastic task scheduling in time-critical information delivery systems." Title page, contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phb8629.pdf.

Full text
Abstract:
"January 2003" Includes bibliographical references (leaves 120-129) Presents performance analyses of dynamic, stochastic task scheduling policies for a real- time-communications system where tasks lose value as they are delayed in the system.
APA, Harvard, Vancouver, ISO, and other styles
2

Ellis, David Lambert. "The reliability and efficiency of serial digital data in industrial communications." Thesis, Liverpool John Moores University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hassanien, Mohamed A. M. "Error rate performance metrics for digital communications systems." Thesis, Swansea University, 2011. https://cronfa.swan.ac.uk/Record/cronfa42497.

Full text
Abstract:
In this thesis, novel error rate performance metrics and transmission solutions are investigated for delay-limited communication systems and for co-channel interference scenarios. The following four research problems in particular were considered. The first research problem is devoted to analysis of the higher order ergodic moments of error rates for digital communication systems with time-unlimited ergodic transmissions, and the statistics of the conditional error rates of digital modulations over fading channels are considered. The probability density function and the higher order moments of the conditional error rates are obtained. Non-monotonic behavior of the moments of the conditional bit error rates versus some channel model parameters is observed for a Ricean distributed channel fading amplitude at the detector input. Properties and possible applications of the second central moments are proposed. The second research problem is the non-ergodic error rate analysis and signaling design for communication systems processing a single finite length received sequence. A framework to analyze the error rate properties of non-ergodic transmissions is established. Bayesian credible intervals are used to estimate the instantaneous bit error rate. A novel degree of ergodicity measure is introduced using the credible interval estimates to quantify the level of ergodicity of the received sequence with respect to the instantaneous bit error rate and to describe the transition of the data detector from the non-ergodic to the ergodic zone of operation. The developed non-ergodic analysis is used to define adaptive forward error correction control and adaptive power control policies that can guarantee, with a given probability, the worst case instantaneous bit error rate performance of the detector in its transition from the non-ergodic to the ergodic zone of operation. In the third research problem, novel retransmission schemes are developed for delay-limited retransmissions. The proposed scheme relies on a reliable reverse link for error-free feedback message delivery. Unlike conventional automatic repeat request schemes, the proposed scheme does not require the use of cyclic redundancy check bits for error detection. In the proposed scheme, random permutations are exploited to locate the bits for retransmission in a predefined window within the packet. The retransmitted bits are combined using maximal-ratio combining. The complexity-performance trade-offs of the proposed scheme are investigated by mathematical analysis as well as computer simulations. The bit error rate of the proposed scheme is independent of the packet length, while the throughput is dependent on the packet length. Three practical techniques suitable for implementation are proposed. The performance of the proposed retransmission scheme was compared to the block repetition code corresponding to a conventional ARQ retransmission strategy. It was shown that, for the same number of retransmissions and the same packet length, the proposed scheme always outperforms such repetition coding, and, in some scenarios, the performance improvement is found to be significant. Most of our analysis has been done for the case of the AWGN channel; however, the case of a slow Rayleigh block fading channel was also investigated. The proposed scheme appears to provide the throughput and BER reduction gains only for medium to large SNR values.
Finally, the last research problem investigates the link error rate performance with a single co-channel interference. A novel metric to assess whether the standard Gaussian approximation of a single interferer underestimates or overestimates the link bit error rate is derived. This metric is a function of the interference channel fading statistics. However, it is otherwise independent of the statistics of the desired signal. The key step in derivation of the proposed metric is to construct the standard Gaussian approximation of the interference by a non-linear transformation. A closed form expression of the metric is obtained for a Nakagami distributed interference fading amplitude. Numerical results for the case of Nakagami and lognormal distributed interference fading amplitude confirm the validity of the proposed metric. The higher moments, interval estimators and non-linear transformations were investigated to evaluate the error rate performance for different wireless communication scenarios. The synchronization channel is also used jointly with the communication link to form a transmission diversity and subsequently, to improve the error rate performance.
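To make the first research problem above concrete, here is a minimal Monte Carlo sketch (ours, not the thesis's derivation) of the higher-order moments of the conditional bit error rate over a Ricean fading channel. It assumes coherent BPSK, so the conditional BER given instantaneous SNR gamma is Q(sqrt(2*gamma)); the names K_FACTOR, MEAN_SNR_DB and N_SAMPLES are illustrative choices of our own.

```python
import numpy as np
from scipy.special import erfc

# Illustrative parameters (our own choices, not taken from the thesis).
K_FACTOR = 4.0        # Ricean K-factor: ratio of line-of-sight to scattered power
MEAN_SNR_DB = 10.0    # mean SNR per bit at the detector input
N_SAMPLES = 1_000_000

rng = np.random.default_rng(0)

# Ricean fading amplitude normalised so that E[a^2] = 1.
nu = np.sqrt(K_FACTOR / (K_FACTOR + 1.0))
sigma = np.sqrt(1.0 / (2.0 * (K_FACTOR + 1.0)))
a = np.hypot(nu + sigma * rng.standard_normal(N_SAMPLES),
             sigma * rng.standard_normal(N_SAMPLES))

gamma = 10.0 ** (MEAN_SNR_DB / 10.0) * a**2   # instantaneous SNR per bit

# Conditional BER of coherent BPSK given the instantaneous SNR:
# Pb(gamma) = Q(sqrt(2*gamma)) = 0.5 * erfc(sqrt(gamma)).
p_cond = 0.5 * erfc(np.sqrt(gamma))

for n in (1, 2, 3):
    print(f"E[Pb^{n}] over the fading: {np.mean(p_cond ** n):.3e}")
print(f"Var[Pb] (second central moment): {np.var(p_cond):.3e}")
```

Sweeping K_FACTOR or MEAN_SNR_DB in this sketch is one simple way to look for the kind of non-monotonic behaviour of the higher moments that the abstract mentions.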
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Cheng, and 李鋮. "Carrier frequency offset estimation for multicarrier communications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29725331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yi, Eun-jik. "Applications of wavelets to nonlinear wave analysis and digital communication /." Digital version accessible at:, 2000. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Giddens, Spencer. "Applications of Mathematical Optimization Methods to Digital Communications and Signal Processing." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8601.

Full text
Abstract:
Mathematical optimization is applicable to nearly every scientific discipline. This thesis specifically focuses on optimization applications to digital communications and signal processing. Within the digital communications framework, the channel encoder attempts to encode a message from a source (the sender) in such a way that the channel decoder can utilize the encoding to correct errors in the message caused by the transmission over the channel. Low-density parity-check (LDPC) codes are an especially popular code for this purpose. Following the channel encoder in the digital communications framework, the modulator converts the encoded message bits to a physical waveform, which is sent over the channel and converted back to bits at the demodulator. The modulator and demodulator present special challenges for what is known as the two-antenna problem. The main results of this work are two algorithms related to the development of optimization methods for LDPC codes and the two-antenna problem. Current methods for optimization of LDPC codes analyze the degree distribution pair asymptotically as block length approaches infinity. This effectively ignores the discrete nature of the space of valid degree distribution pairs for LDPC codes of finite block length. While large codes are likely to conform reasonably well to the infinite block length analysis, shorter codes have no such guarantee. Chapter 2 more thoroughly introduces LDPC codes, and Chapter 3 presents and analyzes an algorithm for completely enumerating the space of all valid degree distribution pairs for a given block length, code rate, maximum variable node degree, and maximum check node degree. This algorithm is then demonstrated on an example LDPC code of finite block length. Finally, we discuss how the result of this algorithm can be utilized by discrete optimization routines to form novel methods for the optimization of small block length LDPC codes. In order to solve the two-antenna problem, which is introduced in greater detail in Chapter 2, it is necessary to obtain reliable estimates of the timing offset and channel gains caused by the transmission of the signal through the channel. The timing offset estimator can be formulated as an optimization problem, and an optimization method used to solve it was previously developed. However, this optimization method does not utilize gradient information, and as a result is inefficient. Chapter 4 presents and analyzes an improved gradient-based optimization method that solves the two-antenna problem much more efficiently.
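As an illustration of the search space described in the abstract, the brute-force sketch below enumerates every valid degree-distribution pair for a toy LDPC code, requiring only that the variable-side and check-side edge counts agree. This is not the Chapter 3 algorithm itself, just a naive enumeration under assumed parameters; N, RATE, DV_MAX, DC_MAX and MIN_DEG are our own illustrative choices.

```python
def compositions(total, parts):
    """All tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

N = 12                        # variable nodes (block length) -- tiny, illustration only
RATE = 0.5
M = round(N * (1 - RATE))     # check nodes
DV_MAX, DC_MAX = 4, 6         # maximum variable / check node degrees
MIN_DEG = 2                   # avoid degree-1 nodes

valid_pairs = []
for var_counts in compositions(N, DV_MAX - MIN_DEG + 1):
    var_edges = sum(d * c for d, c in zip(range(MIN_DEG, DV_MAX + 1), var_counts))
    for chk_counts in compositions(M, DC_MAX - MIN_DEG + 1):
        chk_edges = sum(d * c for d, c in zip(range(MIN_DEG, DC_MAX + 1), chk_counts))
        if var_edges == chk_edges:          # edge counts must match on both sides
            valid_pairs.append((var_counts, chk_counts))

print(f"{len(valid_pairs)} valid degree-distribution pairs for n={N}, rate={RATE}")
print("example pair (variable-degree counts 2..4, check-degree counts 2..6):")
print(valid_pairs[0])
```

Even at this toy size the space is discrete and finite, which is the property a finite-block-length enumeration can exploit where the asymptotic analysis cannot.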
APA, Harvard, Vancouver, ISO, and other styles
7

Shepherd, David. "Optimisation of iterative multi-user receivers using analytical tools /." View thesis entry in Australian Digital Theses Program, 2008. http://thesis.anu.edu.au/public/adt-ANU20081114.221408/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vafi, Sina. "On the design of turbo codes with convolutional interleavers." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060515.092832/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shaheem, Asri. "Iterative detection for wireless communications." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0223.

Full text
Abstract:
[Truncated abstract] The transmission of digital information over a wireless communication channel gives rise to a number of issues which can detract from the system performance. Propagation effects such as multipath fading and intersymbol interference (ISI) can result in significant performance degradation. Recent developments in the field of iterative detection have led to a number of powerful strategies that can be effective in mitigating the detrimental effects of wireless channels. In this thesis, iterative detection is considered for use in two distinct areas of wireless communications. The first considers the iterative decoding of concatenated block codes over slow flat fading wireless channels, while the second considers the problem of detection for a coded communications system transmitting over highly-dispersive frequency-selective wireless channels. The iterative decoding of concatenated codes over slow flat fading channels with coherent signalling requires knowledge of the fading amplitudes, known as the channel state information (CSI). The CSI is combined with statistical knowledge of the channel to form channel reliability metrics for use in the iterative decoding algorithm. When the CSI is unknown to the receiver, the existing literature suggests the use of simple approximations to the channel reliability metric. However, these works generally consider low rate concatenated codes with strong error correcting capabilities. In some situations, the error correcting capability of the channel code must be traded for other requirements, such as higher spectral efficiency, lower end-to-end latency and lower hardware cost. ... In particular, when the error correcting capabilities of the concatenated code are weak, the conventional metrics are observed to fail, whereas the proposed metrics are shown to perform well regardless of the error correcting capabilities of the code. The effects of ISI caused by a frequency-selective wireless channel environment can also be mitigated using iterative detection. When the channel can be viewed as a finite impulse response (FIR) filter, the state-of-the-art iterative receiver is the maximum a posteriori probability (MAP) based turbo equaliser. However, the complexity of this receiver's MAP equaliser increases exponentially with the length of the FIR channel. Consequently, this scheme is restricted for use in systems where the channel length is relatively short. In this thesis, the use of a channel shortening prefilter in conjunction with the MAP-based turbo equaliser is considered in order to allow its use with arbitrarily long channels. The prefilter shortens the effective channel, thereby reducing the number of equaliser states. A consequence of channel shortening is that residual ISI appears at the input to the turbo equaliser and the noise becomes coloured. In order to account for the ensuing performance loss, two simple enhancements to the scheme are proposed. The first is a feedback path which is used to cancel residual ISI, based on decisions from past iterations. The second is the use of a carefully selected value for the variance of the noise assumed by the MAP-based turbo equaliser. Simulations are performed over a number of highly dispersive channels and it is shown that the proposed enhancements result in considerable performance improvements. Moreover, these performance benefits are achieved with very little additional complexity with respect to the unmodified channel shortened turbo equaliser.
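For context on the channel reliability metric mentioned above, the sketch below computes the conventional metric for coherent BPSK over flat Rayleigh fading: with perfect CSI the log-likelihood ratio of a received sample y is 2*a*y/sigma^2, and a crude CSI-free approximation simply replaces the fading amplitude a by its mean. This is only a baseline illustration under our own assumed parameters, not the metrics proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

EBN0_DB = 6.0
N_BITS = 8
es_n0 = 10.0 ** (EBN0_DB / 10.0)
noise_var = 1.0 / (2.0 * es_n0)           # per-dimension noise variance for Es = 1

bits = rng.integers(0, 2, N_BITS)
symbols = 1.0 - 2.0 * bits                # BPSK mapping: 0 -> +1, 1 -> -1
a = rng.rayleigh(scale=np.sqrt(0.5), size=N_BITS)   # flat Rayleigh fading, E[a^2] = 1
y = a * symbols + rng.normal(0.0, np.sqrt(noise_var), N_BITS)

# Channel reliability (LLR) for coherent BPSK over flat fading.
llr_known_csi = 2.0 * a * y / noise_var        # uses the true fading amplitude (CSI)
llr_no_csi = 2.0 * np.mean(a) * y / noise_var  # crude approximation: replace a by its mean

print("bits         :", bits)
print("LLR with CSI :", np.round(llr_known_csi, 2))
print("LLR w/o CSI  :", np.round(llr_no_csi, 2))
```

The gap between the two LLR vectors in deep fades is exactly where approximate metrics start to matter, which is the regime the thesis examines for weakly protected codes.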
APA, Harvard, Vancouver, ISO, and other styles
10

Pratt, Brian Hogan. "Analysis and Mitigation of SEU-induced Noise in FPGA-based DSP Systems." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2482.

Full text
Abstract:
This dissertation studies the effects of radiation-induced single-event upsets (SEUs) on digital signal processing (DSP) systems designed for field-programmable gate arrays (FPGAs). It presents a novel method for evaluating the effects of radiation on DSP and digital communication systems. By using an application-specific measurement of performance in the presence of SEUs, this dissertation demonstrates that only 5-15% of SEUs affecting a communications receiver (i.e. 5-15% of sensitive SEUs) cause critical performance loss. It also reports that the most critical SEUs are those that affect the clock, global reset, and most significant bits (MSBs) of computation. This dissertation also demonstrates reduced-precision redundancy (RPR) as an effective and efficient alternative to the popular triple modular redundancy (TMR) for FPGA-based communications systems. Fault injection experiments show that RPR can improve the failure rate of a communications system by over 20 times relative to the unmitigated system, at a cost less than half that of TMR, by focusing on the critical SEUs. This dissertation contrasts the cost and performance of three different variations of RPR, one of which is a novel variation developed here, and concludes that the variation referred to as "Threshold RPR" is superior to the others for FPGA systems. Finally, this dissertation presents several methods for applying Threshold RPR to a system with the goal of reducing mitigation cost and increasing the system performance in the presence of SEUs. Additional fault injection experiments show that optimizing the application of RPR can result in a decrease in critical SEUs by as much as 65% at no additional hardware cost.
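The sketch below is a software toy model of the Threshold RPR idea (ours, not the dissertation's FPGA implementation): a reduced-precision replica recomputes the same function, and the full-precision output is kept only while it stays within a threshold of the replica; otherwise the replica's value is used. The function, word lengths and threshold are illustrative assumptions.

```python
def quantize(x: float, bits: int) -> float:
    """Crude fixed-point quantizer standing in for the reduced-precision replica."""
    step = 2.0 ** -(bits - 1)
    return round(x / step) * step

def threshold_rpr(full_precision_out: float, x: float,
                  replica_bits: int = 4, threshold: float = 0.25) -> float:
    """Toy Threshold RPR check for a unit computing y = 2*x + 1.

    The reduced-precision replica recomputes the same function with few bits.
    If the full-precision result strays from the replica by more than the
    threshold, we assume an upset corrupted it and output the replica instead.
    """
    replica_out = quantize(2.0 * quantize(x, replica_bits) + 1.0, replica_bits + 2)
    if abs(full_precision_out - replica_out) > threshold:
        return replica_out          # full-precision unit presumed hit by an SEU
    return full_precision_out       # normal operation: keep the accurate result

# Fault-free case: the full-precision unit computes y = 2*x + 1 correctly.
print(threshold_rpr(2.0 * 0.3 + 1.0, 0.3))          # ~1.6, passed through
# Upset case: an SEU flips a high-order bit of the full-precision result.
print(threshold_rpr((2.0 * 0.3 + 1.0) + 8.0, 0.3))  # falls back to the replica
```

The appeal of the approach, as the abstract notes, is that the replica only needs enough precision to catch the large, critical errors (MSB upsets), so it costs far less than a full triplicated copy.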
APA, Harvard, Vancouver, ISO, and other styles
11

Dua, Akshay. "Trust-but-Verify: Guaranteeing the Integrity of User-generated Content in Online Applications." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1425.

Full text
Abstract:
Online applications that are open to participation lack reliable methods to establish the integrity of user-generated information. Users may unknowingly own compromised devices, or intentionally publish forged information. In these scenarios, applications need some way to determine the "correctness" of autonomously generated information. Towards that end, this thesis presents a "trust-but-verify" approach that enables open online applications to independently verify the information generated by each participant. In addition to enabling independent verification, our framework allows an application to verify less information from more trustworthy users and verify more information from less trustworthy ones. Thus, an application can trade-off performance for more integrity, or vice versa. We apply the trust-but-verify approach to three different classes of online applications and show how it can enable 1) high-integrity, privacy-preserving, crowd-sourced sensing 2) non-intrusive cheat detection in online games, and 3) effective spam prevention in online messaging applications.
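A minimal sketch of the trust-performance trade-off described above, assuming a probabilistic spot-checking policy in which the fraction of a user's submissions that is independently verified shrinks as the user's trust score grows. The class, scoring rule and parameters are our own illustrative choices, not the framework from the thesis.

```python
import random

class TrustButVerify:
    """Toy spot-checker: the lower a user's trust, the more of their submissions
    get independently verified. Illustration only; the thesis describes
    application-specific verification mechanisms, not this class."""

    def __init__(self, min_rate=0.05, seed=42):
        self.trust = {}               # user -> trust score in [0, 1]
        self.min_rate = min_rate      # floor so even trusted users get audited
        self.rng = random.Random(seed)

    def submit(self, user, claim, verify_fn):
        score = self.trust.setdefault(user, 0.5)
        verify_prob = max(self.min_rate, 1.0 - score)
        if self.rng.random() < verify_prob:
            ok = verify_fn(claim)     # costly independent verification
            # Nudge trust up on success, cut it sharply on a caught forgery.
            self.trust[user] = min(1.0, score + 0.05) if ok else score * 0.25
            return ok
        return True                   # accepted without verification this time

checker = TrustButVerify()
for _ in range(20):
    checker.submit("alice", claim=41, verify_fn=lambda c: c <= 100)
print("alice's trust after 20 honest submissions:", round(checker.trust["alice"], 2))
```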
APA, Harvard, Vancouver, ISO, and other styles
12

Oliveira, Lucas de. "Impacto da utilização de tecnicas de lifting explorando diferença entre ferramentas interpretada e compilada na velocidade computacional da codificação wavelet de imagens estaticas." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260073.

Full text
Abstract:
Advisors: Yuzo Iano, Ana Lucia Mendes Cruz Silvestre da Silva
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The field of image analysis and compression has recently received special attention from the scientific community because it encompasses the main digital TV, satellite communication, and mobile communication systems. In this context, the Discrete Wavelet Transform (DWT) emerges as a powerful tool, and its many advantages have led to its inclusion in important standards such as JPEG2000, MPEG-4, and MPEG-7. The lifting method computes the DWT by subsampling the signal before filtering, which yields fast processing by exploiting the similarities between the high-pass and low-pass filters and speeds up the computation of the coefficients. In the literature, theoretical arithmetic gains of various magnitudes over the traditional method have been reported, notably in the work of Daubechies and Sweldens and of Reichel. The objective of this research is to extend this discussion of theoretical results to implementations in interpreted and compiled tools, focusing on the processing time required for the decomposition and reconstruction of different static images using wavelets from the Daubechies, Symlet, and biorthogonal families. PSNR measurements were used to confirm the correct implementation of the lifting scheme, since it must not introduce any degradation in the reconstructed image.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
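The lifting idea benchmarked in the dissertation above can be illustrated with the simplest case: one level of the Haar wavelet computed as split/predict/update lifting steps, followed by exact reconstruction (so the PSNR check reports a perfect result). This sketch assumes an even-length 1-D signal and uses the Haar wavelet only for brevity; the dissertation itself times Daubechies, Symlet, and biorthogonal wavelets on 2-D images.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: odd samples predicted by even ones
    approx = even + detail / 2.0   # update: preserve the running average
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([4, 6, 10, 12, 8, 6, 5, 5], dtype=float)
approx, detail = haar_lifting_forward(signal)
reconstructed = haar_lifting_inverse(approx, detail)

mse = np.mean((signal - reconstructed) ** 2)
psnr = np.inf if mse == 0 else 10 * np.log10(signal.max() ** 2 / mse)
print("approximation:", approx)
print("detail       :", detail)
print("PSNR of reconstruction:", psnr)   # perfect reconstruction -> inf
```

Because every lifting step is trivially invertible, the reconstruction is exact regardless of the wavelet family, which is why the dissertation uses PSNR only as a sanity check while the real comparison is processing time.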
APA, Harvard, Vancouver, ISO, and other styles
13

"Combating channels with long impulse response using combined turbo equalization and turbo decoding." 2000. http://library.cuhk.edu.hk/record=b5890402.

Full text
Abstract:
by Chan Yiu Tong.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (leaves 56-[59]).
Abstracts in English and Chinese.
Table of contents:
1 Introduction
1.1 Communications and Coding Technology
1.2 The Emergence of Turbo Codes
1.3 The Extension of the Turbo Principle
1.4 Receiver Structures for Practical Situations
1.5 Thesis Overview
2 ISI Channel Model and Channel Equalization
2.1 A Discrete-Time ISI Channel Model
2.1.1 Optimum Maximum Likelihood Receiver
2.1.2 The Whitened Matched Filter
2.2 Equalization Techniques for Combating ISI
2.2.1 Linear MMSE Equalizer
2.2.2 MLSE Equalizer via the Viterbi Algorithm
3 An Overview of Turbo Codes
3.1 The Turbo Encoder
3.2 The Turbo Interleaver
3.3 The Iterative Decoder
3.3.1 The MAP Algorithm
3.3.2 The Max-Log MAP Algorithm
3.3.3 The Log-MAP Algorithm
4 Receivers for Channels with Long Impulse Responses
4.1 Shortcomings of the Existing Models
4.2 Proposed System Architecture
4.2.1 Optimized Model for the Channel Shortening Filter
4.2.2 Method One - Separate Trellises for EQ and DEC
4.2.3 Method Two - Combined Trellises for EQ and DEC
5 Performance Analysis
5.1 Simulation Model and Settings
5.2 Performance Expectations
5.3 Simulation Results and Discussions
6 Concluding Remarks
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
14

Ogundile, Olanyika Olaolu. "Symbol level decoding of Reed-Solomon codes with improved reliability information over fading channels." Thesis, 2016. http://hdl.handle.net/10539/21060.

Full text
Abstract:
A thesis submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy in the School of Electrical and Information Engineering, 2016
Reliable and efficient data transmission has been the subject of current research, most especially in realistic channels such as Rayleigh fading channels. The focus of every new technique is to improve the transmission reliability and to increase the transmission capacity of the communication links for more information to be transmitted. Modulation schemes such as M-ary Quadrature Amplitude Modulation (M-QAM) and Orthogonal Frequency Division Multiplexing (OFDM) were developed to increase the transmission capacity of communication links without additional bandwidth expansion, and to reduce the design complexity of communication systems. On the other hand, due to the varying nature of communication channels, the message transmission reliability is subject to a number of factors. These factors include the channel estimation techniques and Forward Error Correction (FEC) schemes used in improving the message reliability. Innumerable channel estimation techniques have been proposed independently, and in combination with different FEC schemes, in order to improve the message reliability. The emphasis has been to improve the channel estimation performance, bandwidth and power consumption, and the implementation time complexity of the estimation techniques. Of particular interest, FEC schemes such as Reed-Solomon (RS) codes, Turbo codes, Low Density Parity Check (LDPC) codes, Hamming codes, and Permutation codes are proposed to improve the message transmission reliability of communication links. Turbo and LDPC codes have been used extensively to combat the varying nature of communication channels, most especially in joint iterative channel estimation and decoding receiver structures. In this thesis, attention is focused on using RS codes to improve the message reliability of a communication link because RS codes have good capability of correcting random and burst errors, and are useful in different wireless applications. This study concentrates on symbol level soft decision decoding of RS codes. In this regard, a novel symbol level iterative soft decision decoder for RS codes based on parity-check equations is developed. This Parity-check matrix Transformation Algorithm (PTA) is based on the soft reliability information derived from the channel output in order to perform syndrome checks in an iterative process. Performance analysis verifies that this developed PTA outperforms the conventional RS hard decision decoding algorithms and the symbol level Koetter and Vardy (KV) RS soft decision decoding algorithm. In addition, this thesis develops an improved Distance Metric (DM) method of deriving reliability information over Rayleigh fading channels for combined demodulation with symbol level RS soft decision decoding algorithms. The newly proposed DM method incorporates the channel state information in deriving the soft reliability information over Rayleigh fading channels. Analysis verifies that this developed metric enhances the performance of symbol level RS soft decision decoders in comparison with the conventional method. Although, in this thesis, the performance of the developed DM method of deriving soft reliability information over Rayleigh fading channels is only verified for symbol level RS soft decision decoders, it is applicable to any symbol level soft decision decoding FEC scheme. Besides, the performance of all FEC decoding schemes plummets as a result of Rayleigh fading channels.
This engenders the development of joint iterative channel estimation and decoding receiver structures in order to improve the message reliability, most especially with Turbo and LDPC codes as the FEC schemes. As such, this thesis develops the first joint iterative channel estimation and Reed-Solomon decoding receiver structure. Essentially, the joint iterative channel estimation and RS decoding receiver is developed based on the existing symbol level soft decision KV algorithm. Consequently, the joint iterative channel estimation and RS decoding receiver is extended to the developed RS parity-check matrix transformation algorithm. The PTA provides design ease and flexibility, and lower computational time complexity in an iterative receiver structure in comparison with the KV algorithm. Generally, the findings of this thesis are relevant to improving the message transmission reliability of a communication link with RS codes. For instance, it is pertinent to numerous data transmission technologies such as Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), Digital Subscriber Line (DSL), WiMAX, and long distance satellite communications. Equally, the developed, less computationally intensive, and performance efficient symbol level decoding algorithm for RS codes can be used in consumer technologies like compact discs and digital versatile discs.
GS2016
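As a rough illustration of the role channel state information plays in the symbol-level reliabilities discussed in the abstract above, the sketch below compares two ways of scoring QPSK symbols received over a flat Rayleigh fading channel: a conventional Euclidean metric on the equalized sample versus a metric on the raw sample against the faded constellation, which automatically de-weights decisions made in deep fades. This is not the thesis's DM derivation; the constellation, SNR and scaling are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Unit-energy QPSK constellation (symbols 0..3).
constellation = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

snr_db = 8.0
noise_var = 10.0 ** (-snr_db / 10.0)

tx_index = 2                                         # transmit symbol "2"
h = rng.normal(0, np.sqrt(0.5)) + 1j * rng.normal(0, np.sqrt(0.5))  # Rayleigh fading gain
noise = np.sqrt(noise_var / 2) * (rng.normal() + 1j * rng.normal())
r = h * constellation[tx_index] + noise

# Conventional metric: distance from the equalized sample to the clean constellation,
# ignoring how the fade rescales the effective noise.
d_conventional = np.abs(r / h - constellation) ** 2 / noise_var
# CSI-aware metric: distance from the raw sample to the faded constellation.
d_csi = np.abs(r - h * constellation) ** 2 / noise_var

def to_reliability(metric):
    """Turn distance metrics into normalized symbol probabilities."""
    p = np.exp(-metric)
    return p / p.sum()

print("fading gain magnitude     :", round(abs(h), 3))
print("conventional reliabilities:", np.round(to_reliability(d_conventional), 3))
print("CSI-aware reliabilities   :", np.round(to_reliability(d_csi), 3))
```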
APA, Harvard, Vancouver, ISO, and other styles
15

Britton, Matthew Scott. "Stochastic task scheduling in time-critical information delivery systems / Matthew Britton." Thesis, 2003. http://hdl.handle.net/2440/21899.

Full text
Abstract:
"January 2003"
Includes bibliographical references (leaves 120-129)
x, 129 leaves : ill. ; 30 cm.
Presents performance analyses of dynamic, stochastic task scheduling policies for a real- time-communications system where tasks lose value as they are delayed in the system.
Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 2003
APA, Harvard, Vancouver, ISO, and other styles
16

Tong, Jian. "Active optical filters based integrated photonic circuits /." 2006. http://proquest.umi.com/pqdweb?did=1225139861&sid=1&Fmt=2&clientId=10361&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

BENACEM, Lucien. "Cooperative DVB-H: Raptor-Network Coding Protocols for Reliable and Energy Efficient Multimedia Communications." Thesis, 2010. http://hdl.handle.net/1974/5955.

Full text
Abstract:
Reliable and energy-efficient delivery of multimedia to mobile terminals in dynamic networks is a very challenging problem. In this thesis, we focus on a cooperative extension to the Digital Video Broadcasting – Handheld (DVB-H) standard, forming a cooperative broadcast network whereby terminal-to-terminal cooperation creates a distributed form of multi-input-multi-output (MIMO) that supplements existing fixed network infrastructure. First, we develop a novel and computationally-efficient hierarchical Markov model that is able to accurately perform a cross-layer packet error mapping between the physical and transport layers of the DVB-H/IPDC (IP DataCast) protocol stack. We then construct a discrete-event simulator in MATLAB® that incorporates all of the necessary modules to conduct dynamic multiterminal network simulations. Next, the convergence of cooperative wireless communication, Raptor application layer forward error correction (AL-FEC) and Network Coding (NC) is examined. Originally proposed for broadcasting over the Internet, the application of Raptor codes to wireless cooperative communications networks has been limited to date, but they have been mandated for use in DVB-H. Network coding is used to reduce energy consumption by opportunistically recombining and rebroadcasting required combinations of packets. Two novel coding-enabled cooperative relaying protocols are developed for multicast and multiple unicast file distribution scenarios that are transparent, fully distributed, and backwards compatible with today's systems. Our protocols are able to exploit several different forms of diversity inherent to modern wireless networks, including spatial diversity, radio interface diversity, and symbol diversity. Extensive simulations show that our protocols simultaneously achieve breakthroughs in network energy efficiency and reliability for different terminal classes and densities, allowing greatly improved user experiences.
Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2010.
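The opportunistic packet recombination that network coding contributes to the protocols described above can be shown with the textbook two-receiver example: a cooperating terminal broadcasts the XOR of two packets once, and each neighbour recovers the packet it is missing by XORing with the one it already holds. This is a generic illustration, not the thesis's Raptor-NC protocols; the packet contents are made up.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two packets from a broadcast carousel; terminal A missed p2, terminal B missed p1.
p1 = b"segment-0001:video"
p2 = b"segment-0002:audio"

# A cooperating terminal that holds both broadcasts a single coded packet.
coded = xor_bytes(p1, p2)

# Each neighbour XORs the coded packet with the packet it already has.
recovered_at_A = xor_bytes(coded, p1)   # A holds p1, recovers p2
recovered_at_B = xor_bytes(coded, p2)   # B holds p2, recovers p1

assert recovered_at_A == p2 and recovered_at_B == p1
print("one coded transmission repaired two different losses")
```

Replacing two repair transmissions with one is precisely the kind of energy saving the cooperative protocols exploit, on top of the erasure protection provided by the Raptor AL-FEC layer.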
APA, Harvard, Vancouver, ISO, and other styles
18

Rangwala, Mohammed M. "Secure Digital Provenance: Challenges and a New Design." Thesis, 2014. http://hdl.handle.net/1805/6051.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Derived from the field of art curation, digital provenance is an unforgeable record of a digital object's chain of successive custody and sequence of operations performed on the object. It plays an important role in assessing the trustworthiness of the object, verifying its reliability and conducting audit trails of its lineage. Digital provenance forms an immutable directed acyclic graph (DAG) structure. Since the history of an object cannot be changed, once a provenance chain has been created it must be protected in order to guarantee its reliability. Provenance can face attacks against the integrity of records and the confidentiality of user information, making security an important trait required for digital provenance. The digital object and its associated provenance can have different security requirements, and this makes the security of provenance different from that of traditional data. Research on digital provenance has primarily focused on provenance generation, storage and management frameworks in different fields. Security of digital provenance has also gained attention in recent years, particularly as more and more data is migrated to cloud environments which are distributed and are not under the complete control of data owners. However, there is still no viable secure digital provenance scheme that can provide comprehensive security for digital provenance, particularly for generic and dynamic provenance. In this work, we address two important aspects of secure digital provenance that have not been investigated thoroughly in existing works: 1) capturing the DAG structure of provenance and 2) supporting dynamic information sharing. We propose a scheme that uses signature-based mutual agreements between successive users to clearly delineate the transition of responsibility of the digital object as it is passed along the chain of users. In addition to preserving the properties of confidentiality, immutability and availability for a digital provenance chain, it supports the representation of DAG structures of provenance. Our scheme supports dynamic information sharing scenarios where the sequence of users who have custody of the document is not predetermined. Security analysis and empirical results indicate that our scheme improves the security of the typical secure provenance schemes with comparable performance.
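A minimal sketch of a hash-chained provenance record, in the spirit of the scheme described above: each custodian's entry is bound to the previous record and authenticated before being appended. For simplicity the chain is linear rather than a DAG, and the public-key signatures of the thesis's mutual-agreement scheme are replaced by HMACs over made-up per-user secrets so the sketch runs on the standard library alone.

```python
import hashlib, hmac, json

def add_provenance_record(chain, user, operation, obj_digest, user_key: bytes):
    """Append a provenance record chained to the previous entry.

    A real scheme (like the one proposed in the thesis) would use public-key
    signatures and a mutual agreement with the next custodian; the HMAC here
    is only a stand-in so the sketch needs no external crypto library.
    """
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    body = {"user": user, "operation": operation,
            "object_digest": obj_digest, "prev_hash": prev_hash}
    payload = json.dumps(body, sort_keys=True).encode()
    record = dict(body,
                  auth_tag=hmac.new(user_key, payload, hashlib.sha256).hexdigest(),
                  record_hash=hashlib.sha256(payload).hexdigest())
    chain.append(record)
    return record

document = b"quarterly-report-v1"
digest = hashlib.sha256(document).hexdigest()

chain = []
add_provenance_record(chain, "alice", "created", digest, b"alice-secret")
add_provenance_record(chain, "bob", "reviewed", digest, b"bob-secret")

# Each record commits to its predecessor, so altering an earlier entry
# invalidates every hash that follows it.
print(chain[1]["prev_hash"] == chain[0]["record_hash"])   # True
```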
APA, Harvard, Vancouver, ISO, and other styles
