Dissertations / Theses on the topic 'Information theory and compression'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Information theory and compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Presnell, Stuart. "Minimal resources in quantum information theory : compression and measurement." Thesis, University of Bristol, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reid, Mark Montgomery. "Path-dictated, lossless volumetric data compression." Thesis, University of Ulster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shih, An-Zen. "Fractal compression analysis of superdeformed nucleus data." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hong, Edwin S. "Group testing for image compression /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/6900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zemouri, Rachid. "Data compression of speech using sub-band coding." Thesis, University of Newcastle Upon Tyne, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tang, P. S. "Data compression for high precision digital waveform recording." Thesis, City University London, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jiang, Jianmin. "Multi-media data compression and real-time novel architectures implementation." Thesis, University of Southampton, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239417.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Reeder, Brian Martin. "Application of artificial neural networks for spacecraft instrument data compression." Thesis, University of Sussex, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yamazato, Takaya, Iwao Sasase, and Shinsaku Mori. "Interlace Coding System Involving Data Compression Code, Data Encryption Code and Error Correcting Code." IEICE, 1992. http://hdl.handle.net/2237/7844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Beirami, Ahmad. "Network compression via network memory: fundamental performance limits." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53448.

Full text
Abstract:
The amount of information that is churned out daily around the world is staggering, and hence future technological advancements are contingent upon the development of scalable acquisition, inference, and communication mechanisms for this massive data. This Ph.D. dissertation draws upon mathematical tools from information theory and statistics to understand the fundamental performance limits of universal compression of this massive data at the packet level, applied just above layer 3 of the network, when the intermediate network nodes are enabled with the capability of memorizing the previous traffic. Universality of compression imposes an inevitable redundancy (overhead) on the compression performance of universal codes, due to the learning of the unknown source statistics. In this work, the previous asymptotic results about the redundancy of universal compression are generalized to characterize the performance of universal compression in the finite-length regime (which is applicable to small network packets). Further, network compression via memory is proposed as a solution for the compression of relatively small network packets whenever the network nodes (i.e., the encoder and the decoder) are equipped with memory and have access to massive amounts of previous communication. In a nutshell, network compression via memory learns the patterns and statistics of the packet payloads and uses them for compression and reduction of the traffic. At the cost of increased computational overhead in the network nodes, network compression via memory significantly reduces the transmission cost in the network. This leads to a huge performance improvement, as the cost of transmitting one bit is far greater than the cost of processing it.
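The memory-assisted idea can be pictured with a small sketch (not taken from the dissertation): zlib's preset-dictionary feature stands in for the memory of previously seen traffic shared by encoder and decoder nodes, and the packet contents below are hypothetical.

```python
import zlib

# Hypothetical shared "memory": payloads the encoder and decoder nodes have both
# seen before (the role played by memorized traffic in the dissertation's setup).
memory = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
          b"Accept: text/html\r\nUser-Agent: demo\r\n\r\n") * 4

# A new, small packet: in isolation a universal coder must relearn the statistics.
packet = (b"GET /style.css HTTP/1.1\r\nHost: example.com\r\n"
          b"Accept: text/css\r\nUser-Agent: demo\r\n\r\n")

def compress(data: bytes, zdict: bytes = b"") -> bytes:
    c = zlib.compressobj(9, zdict=zdict) if zdict else zlib.compressobj(9)
    return c.compress(data) + c.flush()

def decompress(data: bytes, zdict: bytes = b"") -> bytes:
    d = zlib.decompressobj(zdict=zdict) if zdict else zlib.decompressobj()
    return d.decompress(data)

memoryless = compress(packet)            # no access to previous traffic
memory_aided = compress(packet, memory)  # encoder and decoder share the memory
assert decompress(memory_aided, memory) == packet

print(len(packet), len(memoryless), len(memory_aided))
# The memory-aided output is typically much shorter for small, correlated packets.
```

A real memory-assisted scheme would maintain and synchronize this memory across nodes and use a statistical model rather than a fixed dictionary; the sketch only shows why shared context helps short packets.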
APA, Harvard, Vancouver, ISO, and other styles
11

Zhao, Jing. "Information theoretic approach for low-complexity adaptive motion estimation." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Russ, Samuel H. "An information-theoretic approach to analysis of computer architectures and compression of instruction memory usage." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/13357.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Boström, Kim Joris. "Lossless quantum data compression and secure direct communication : new concepts and methods for quantum information theory /." Saarbrücken : VDM-Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3022795&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Floor, Pål Anders. "On the Theory of Shannon-Kotel'nikov Mappings in Joint Source-Channel Coding." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2193.

Full text
Abstract:

In this thesis an approach to joint source-channel coding using direct source-to-channel mappings is studied. The system studied communicates i.i.d. Gaussian sources over a point-to-point memoryless Gaussian channel with limited feedback (supporting channel state information at most). The mappings, named Shannon-Kotel'nikov (SK) mappings, are memoryless mappings between the source space of dimension M and the channel space of dimension N. Such mappings can be used for error control when M<N, called dimension expansion, and for bandwidth compression when M>N, called dimension reduction. The SK-mappings operate on amplitude-continuous and time-discrete signals (meaning that no bits are involved) through (piecewise) continuous curves or, in general, hypersurfaces.

The reason for studying SK-mappings is that they are delay free, robust against varying channel conditions, and have quite good performance at low complexity.

First, a theory for determining and categorizing the distortion when SK-mappings are used for communication is introduced and developed. This theory is further used to show that SK-mappings can reach the information-theoretic bound, the optimal performance theoretically attainable (OPTA), as their dimensions approach infinity.

One problem is to determine the overall optimal geometry of the SK-mappings. Indications on the overall geometry can be found by studying the codebooks and channel constellations of power constrained channel optimized vector quantizers (PCCOVQ). The PCCOVQ algorithm will find the optimal placing of quantizer representation vectors in the source space and channel symbols in the channel space. A PCCOVQ algorithm giving well performing mappings for the dimension reduction case has been found in the past. In this thesis the PCCOVQ algorithm is modified to give well performing dimension expanding mappings for scalar sources, and 1:2 and 1:3 PCCOVQ examples are given.

Some example SK-mappings are proposed and analyzed. 2:1 and 1:2 PCCOVQ mappings are used as inspiration for making 2:1 and 1:2 SK-mappings based on the Archimedean spiral. Further 3:1, 4:1, 3:2 and 2:3 SK-mappings are found and analyzed. All example SK-mappings are modeled mathematically using the proposed theory on SK-mappings. These mathematical models are further used to find the optimal coefficients for all the proposed SK-mappings as a function of the channel signal-to-noise ratio (CSNR), making adaptations to varying channel conditions simple.
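As a rough illustration of the 2:1 spiral mapping described above, the following sketch maps Gaussian source pairs onto a double Archimedean spiral and decodes from a noisy arc parameter. The spiral spacing, search grid, and noise level are assumptions for the example, not the thesis's optimized coefficients, and the power scaling a real system would use is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.25   # assumed spacing between spiral arms (not an optimized coefficient)

def spiral(theta):
    """Double Archimedean spiral: each real theta gives one point in the source plane."""
    t = np.asarray(theta, dtype=float)
    a = (delta / np.pi) * t
    return np.stack([a * np.cos(np.abs(t)), a * np.sin(np.abs(t))], axis=-1)

thetas = np.linspace(-12 * np.pi, 12 * np.pi, 20001)   # encoder search grid
points = spiral(thetas)

def encode(pair):
    """2:1 mapping: send the arc parameter of the spiral point nearest the source pair."""
    return thetas[np.argmin(np.sum((points - pair) ** 2, axis=1))]

# i.i.d. Gaussian source pairs; AWGN acts directly on the transmitted parameter
# (illustrative ~40 dB parameter-domain SNR; a real system tunes delta to the CSNR).
src = rng.normal(0.0, 0.3, size=(2000, 2))
tx = np.array([encode(s) for s in src])
rx = tx + rng.normal(0.0, np.std(tx) * 10 ** (-40 / 20), size=tx.shape)
rec = spiral(rx)
print("per-sample MSE:", float(np.mean((rec - src) ** 2)))
```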

APA, Harvard, Vancouver, ISO, and other styles
15

Mather, Paul M. "An evaluation of an extant proposed new theory of computing based on information-theoretic principles and data compression." Thesis, Bangor University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Bluhm, Andreas [Verfasser], Michael M. [Akademischer Betreuer] Wolf, Michael M. [Gutachter] Wolf, and Matthias [Gutachter] Christandl. "Compression and measurements in quantum information theory / Andreas Bluhm ; Gutachter: Michael M. Wolf, Matthias Christandl ; Betreuer: Michael M. Wolf." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1193650496/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Devulapalli, Venkata Lakshmi Narasimha. "Application of Huffman Data Compression Algorithm in Hashing Computation." TopSCHOLAR®, 2018. https://digitalcommons.wku.edu/theses/2614.

Full text
Abstract:
Cryptography is the art of protecting information by encrypting the original message into an unreadable format. A cryptographic hash function takes a text message of arbitrary length as input and converts it into a fixed-length string of characters that is infeasible to invert. The values returned by the hash function are called the message digest or simply hash values. Because of their versatility, hash functions are used in many applications such as message authentication, digital signatures, and password hashing [Thomsen and Knudsen, 2005]. The purpose of this study is to apply the Huffman data compression algorithm to the SHA-1 hash function in cryptography. Huffman coding is an optimal prefix-code compression algorithm in which the frequencies of the letters are used to compress the data [Huffman, 1952]. An integrated approach is applied to obtain a new compressed hash function by integrating Huffman compressed codes into the core hashing computation of the original hash function.
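For readers unfamiliar with the Huffman component, here is a minimal, self-contained sketch of frequency-based Huffman code construction. It is only the standard algorithm; the thesis's actual contribution, integrating such codes into SHA-1's computation, is not reproduced here, and the sample string is arbitrary.

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Build an optimal prefix code from symbol frequencies (Huffman, 1952)."""
    freq = Counter(text)
    # Each heap entry: (weight, tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol source
        return {s: "0" for s in heap[0][2]}
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # merge the two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, i, merged))
        i += 1
    return heap[0][2]

sample = "this is an example of a huffman tree"
codes = huffman_code(sample)
encoded = "".join(codes[s] for s in sample)
print(codes)
print(len(encoded), "bits vs", 8 * len(sample), "bits uncompressed")
```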
APA, Harvard, Vancouver, ISO, and other styles
18

Baghali, Khanian Zahra. "From Quantum Source Compression to Quantum Thermodynamics." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671034.

Full text
Abstract:
This thesis addresses problems in the field of quantum information theory, specifically, quantum Shannon theory. The first part of the thesis is opened with concrete definitions of general quantum source models and their compression, and each subsequent chapter addresses the compression of a specific source model as a special case of the initially defined general models. First, we find the optimal compression rate of a general mixed state source which includes as special cases all the previously studied models such as Schumacher’s pure and ensemble sources and other mixed state ensemble models. For an interpolation between the visible and blind Schumacher’s ensemble model, we find the optimal compression rate region for the entanglement and quantum rates. Later, we comprehensively study the classical-quantum variation of the celebrated Slepian-Wolf problem and find the optimal rates considering per-copy fidelity; with block fidelity we find single letter achievable and converse bounds which match up to continuity of a function appearing in the bounds. The first part of the thesis is closed with a chapter on the ensemble model of quantum state redistribution for which we find the optimal compression rate considering per-copy fidelity and single-letter achievable and converse bounds matching up to continuity of a function which appears in the bounds. The second part of the thesis revolves around information theoretical perspective of quantum thermodynamics. We start with a resource theory point of view of a quantum system with multiple non-commuting charges where the objects and allowed operations are thermodynamically meaningful; using tools from quantum Shannon theory we classify the objects and find explicit quantum operations which map the objects of the same class to one another. Subsequently, we apply this resource theory framework to study a traditional thermodynamics setup with multiple non-commuting conserved quantities consisting of a main system, a thermal bath and batteries to store various conserved quantities of the system. We state the laws of the thermodynamics for this system, and show that a purely quantum effect happens in some transformations of the system, that is, some transformations are feasible only if there are quantum correlations between the final state of the system and the thermal bath.
APA, Harvard, Vancouver, ISO, and other styles
19

Sinha, Anurag R. "Optimization of a new digital image compression algorithm based on nonlinear dynamical systems /." Online version of thesis, 2008. http://hdl.handle.net/1850/5544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Marka, Madhavi. "Object-based unequal error protection." Thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-06242002-152555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Dupraz, Elsa. "Codage de sources avec information adjacente et connaissance incertaine des corrélations." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00955100.

Full text
Abstract:
In this thesis, we are interested in the problem of source coding with side information available at the decoder only. More precisely, we consider the case where the joint distribution between the source and the side information is not well known. In this context, for a lossless coding problem, we first carry out a performance analysis using tools from information theory. We then propose a practical coding scheme that remains efficient despite the lack of knowledge of the joint probability distribution. This coding scheme relies on non-binary LDPC codes and on an Expectation-Maximization algorithm. The difficulty with the proposed scheme is that the non-binary LDPC codes used must perform well, that is, they must be constructed from degree distributions that allow a rate close to the theoretical performance to be reached. We therefore propose a method for optimizing the degree distributions of the LDPC codes. Finally, we consider a lossy coding case. We assume that the correlation model between the source and the side information is described by a hidden Markov model with Gaussian emissions. For this model, we also carry out a performance analysis and then propose a practical coding scheme. This coding scheme relies on non-binary LDPC codes and on MMSE reconstruction. Both components exploit the memory structure of the hidden Markov model.
APA, Harvard, Vancouver, ISO, and other styles
22

Cai, Jianfei. "Robust error control and optimal bit allocation for image and video transmission over wireless channels /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3052158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Su, Yong. "Mathematical modeling with applications in high-performance coding." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1127139848.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xiv, 130 p.; also includes graphics (some col.). Includes bibliographical references (p. 125-130). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
24

Edler, Daniel. "Interactive map generator for simplifying and highlighting important structures in large networks." Thesis, Umeå University, Department of Physics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-34816.

Full text
Abstract:

Understanding the structure of a network is an essential part of understanding the behavior of the system it represents, but as the system becomes very large, a visualization of the full network loses its potential to reveal important structural relationships. We then need ways to simplify and highlight the important structures of the network while the details are filtered out, just as good maps do. We have developed an interactive application that uses mathematical methods based on network and information theory to reveal the important patterns hidden in huge amounts of interaction data. The application gives the professional as well as the non-professional user the ability to load his or her own file containing the network data, from mobile phone networks and social online networks to transport networks and financial networks, and to explore the data and generate a customized map which highlights the influential patterns in the data. A demo application is also developed to demonstrate the mathematical and information-theoretical principles behind the map generation.



APA, Harvard, Vancouver, ISO, and other styles
25

Yaginuma, Karina Yuriko. "Compressão de dados baseada nos modelos de Markov minimos." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307243.

Full text
Abstract:
Advisor: Jesus Enrique Garcia
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: In this dissertation we propose a methodology for data compression using Minimal Markov Models (MMM). To this end we study Variable Length Markov Chains (VLMC) and MMM, and present an application of MMM to linguistic data. In parallel we study the Minimum Description Length (MDL) principle and the problem of data compression. We propose a data compression method using MMM and present an algorithm suitable for compression with MMM. Through simulation and application to real data, we compare the characteristics of data compression using complete Markov chains, VLMC and MMM.
Master's degree
Probability and Statistics
Master in Statistics
APA, Harvard, Vancouver, ISO, and other styles
26

Flores, Rodriguez Andrea Carolina 1987. "Compressão de dados de demanda elétrica em Smart Metering." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259131.

Full text
Abstract:
Advisor: Gustavo Fraidenraich
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Data compression of recorded residential electricity consumption becomes extremely necessary in Smart Metering in order to solve the problem of the large volumes of data generated by meters. The main contribution of this thesis is a scheme for the theoretical representation of the recorded information in its most compact form, suggesting a way to reach the fundamental compression limit, set by the entropy of the source, for any compression technique available in the meter. The proposal consists of a transformation of the data encoding based on processing by segmentation: in time, at registration rates from 1/900 Hz to 1 Hz, and in the values of residential electricity consumption. The latter is subdivided into compression by amplitude, changing its granularity, and digital data compression to represent consumption with as few bits as possible, using PCM-Huffman, DPCM-Huffman and entropy encoding assuming different orders of the source distribution. The scheme is applied to data modelled by inhomogeneous Markov chains representing the activities of household members that influence electricity consumption, and to publicly available real data. The scheme is assessed by analyzing the compression trade-off between high registration rates, the distortion resulting from the digitization of the data, and the exploitation of the correlation between consecutive samples. Several examples are presented to illustrate the efficiency of the compression limits. The analysis reveals that better data compression schemes can be found by exploiting the correlation among the samples.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
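A small sketch of the entropy-limit comparison that motivates schemes like the one in the abstract above, using a synthetic consumption trace (the trace, sampling rate and 1 W granularity are illustrative assumptions, not the thesis's data): modelling the quantized samples as memoryless gives one compression limit, while DPCM-style differencing exploits the correlation between consecutive samples and lowers it.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_entropy(symbols):
    """Empirical entropy in bits/sample: the compression limit under a memoryless model."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical consumption trace (W), one sample per second: a slowly varying
# base load plus small fluctuations, quantized to a 1 W granularity.
t = np.arange(3600)
load = 200 + 80 * np.sin(2 * np.pi * t / 3600) + rng.normal(0, 2, t.size)
q = np.round(load).astype(int)            # PCM-style amplitude quantization

h_pcm = empirical_entropy(q)              # treat samples as i.i.d.
h_dpcm = empirical_entropy(np.diff(q))    # exploit correlation via differences
print(f"PCM entropy  : {h_pcm:.2f} bits/sample")
print(f"DPCM entropy : {h_dpcm:.2f} bits/sample  (lower means a better compression limit)")
```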
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Jian Electrical Engineering Australian Defence Force Academy UNSW. "Error resilience for video coding services over packet-based networks." Awarded by:University of New South Wales - Australian Defence Force Academy. School of Electrical Engineering, 1999. http://handle.unsw.edu.au/1959.4/38652.

Full text
Abstract:
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data or in packets of data being completely lost. Consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation. To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be transmitted simultaneously on a single channel. The inclusion of the impact of multiplexing schemes, such as MPEG 2 Systems, in error resilience studies is also an important consideration. It is shown that error resilience measurements including the effect of the Systems Layer differ significantly from those based only on the Video Layer. Two major issues of error resilience are investigated within this thesis: resynchronisation after error detection, and error concealment. Results for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes are provided. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not included in the MPEG 2 standard. The performance of the adaptive slice size scheme, however, is similar to that of the macroblock resynchronisation scheme, and this approach is compatible with the MPEG 2 standard. The most important contribution of this thesis is a new concealment technique, namely Decoder Motion Vector Estimation (DMVE), with which the decoded video quality can be improved significantly. Basically, this technique utilises the temporal redundancy between the current and the previous frames, and the correlation between lost macroblocks and their surrounding pixels. Motion estimation can therefore be applied again to search the previous picture for a match to the lost macroblocks. The process is similar to that performed by the encoder, but it takes place in the decoder. The integration of techniques such as DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated. This provides an overview of the performance produced by individual techniques compared to the combined techniques. Results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG 2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is appropriate tuning of encoders and effective concealment in decoders.
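A simplified sketch of the DMVE idea described above: block matching at the decoder using the intact pixels surrounding a lost macroblock. The block size, border width and search range are illustrative, and a real decoder would operate on motion-compensated, DCT-coded data rather than raw frames.

```python
import numpy as np

def conceal_dmve(prev, cur, top, left, bs=16, border=4, search=8):
    """Conceal a lost bs x bs macroblock at (top, left) in `cur` by decoder-side
    motion estimation: match the ring of pixels around the lost block against
    candidate positions in the previous frame, then copy the best block."""
    def ring(frame, r, c):
        patch = frame[r - border:r + bs + border, c - border:c + bs + border].astype(float)
        inner = patch[border:-border, border:-border].copy()
        patch[border:-border, border:-border] = np.nan   # ignore the block itself
        return patch, inner

    target, _ = ring(cur, top, left)
    best, best_cost = None, np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = top + dr, left + dc
            if (r - border < 0 or c - border < 0 or
                    r + bs + border > prev.shape[0] or c + bs + border > prev.shape[1]):
                continue
            cand, inner = ring(prev, r, c)
            cost = np.nansum(np.abs(cand - target))       # SAD over the ring only
            if cost < best_cost:
                best_cost, best = cost, inner
    out = cur.copy()
    out[top:top + bs, left:left + bs] = best.astype(out.dtype)
    return out

rng = np.random.default_rng(2)
prev = rng.integers(0, 256, (64, 64)).astype(np.uint8)
cur = np.roll(prev, (1, 2), axis=(0, 1))          # simple global motion between frames
cur_lost = cur.copy()
cur_lost[24:40, 24:40] = 0                        # simulate a lost macroblock
rec = conceal_dmve(prev, cur_lost, 24, 24)
err = np.abs(rec[24:40, 24:40].astype(int) - cur[24:40, 24:40].astype(int))
print("mean absolute error of concealed block:", err.mean())
```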
APA, Harvard, Vancouver, ISO, and other styles
28

Viola, Márcio Luis Lanfredi 1978. "Tópicos em seleção de modelos markovianos." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307242.

Full text
Abstract:
Advisor: Jesus Enrique Garcia
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: This work addresses two statistical problems involving the selection of a Markovian model of finite order. First, we propose a procedure to choose a context tree from independent samples, with more than half of them being realizations of the same finite-memory Markovian process with finite alphabet and law P. Our model selection strategy is based on estimating relative entropies in order to select a subset of samples that are realizations of the same law. We define the asymptotic breakdown point for a model selection procedure and derive the asymptotic breakdown point of our procedure. Moreover, we study the robustness of the procedure through simulations and apply it to linguistic data. The aim of the second problem is to develop a consistent methodology for obtaining the finest partition of the coordinates of a multivariate stationary Markovian process such that its elements are conditionally independent. The proposed method is based on the Bayesian Information Criterion (BIC). However, when the number of coordinates of the process grows, the computational cost of the BIC criterion becomes excessive. In this case, we propose a more efficient algorithm and demonstrate its consistency. It is tested by simulations and applied to linguistic data.
Doctorate
Statistics
Doctor in Statistics
APA, Harvard, Vancouver, ISO, and other styles
29

Doshi, Vishal D. (Vishal Devendra). "Functional compression : theory and application." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43038.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and, (S.M. in Technology and Policy)--Massachusetts Institute of Technology Engineering Systems Division, Technology and Policy Program, 2008.
Includes bibliographical references (p. 75-77).
We consider the problem of functional compression. The objective is to separately compress possibly correlated discrete sources such that an arbitrary deterministic function of those sources can be computed given the compressed data from each source. This is motivated by problems in sensor networks and database privacy. Our architecture gives a quantitative definition of privacy for database statistics. Further, we show that it can provide significant coding gains in sensor networks. We consider both the lossless and lossy computation of a function. Specifically, we present results of the rate regions for three instances of the problem where there are two sources: 1) lossless computation where one source is available at the decoder, 2) under a special condition, lossless computation where both sources are separately encoded, and 3) lossy computation where one source is available at the decoder. Wyner and Ziv (1976) considered the third problem for the special case f(X, Y) = X and derived a rate distortion function. Yamamoto (1982) extended this result to a general function. Both of these results are in terms of an auxiliary random variable. Orlitsky and Roche (2001), for the zero distortion case, gave this variable a precise interpretation in terms of the properties of the characteristic graph; this led to a particular coding scheme. We extend that result by providing an achievability scheme that is based on the coloring of the characteristic graph. This suggests a layered architecture where the functional layer controls the coloring scheme, and the data layer uses existing distributed source coding schemes. We extend this graph coloring method to provide algorithms and rates for all three problems.
by Vishal D. Doshi.
S.M.
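The graph-coloring approach mentioned in the abstract above can be made concrete with a toy sketch. The alphabet, joint support and function below are invented for illustration, not taken from the thesis: build the characteristic graph of X, colour it, and let the encoder send colours instead of source symbols.

```python
import itertools

# Toy distributed setup: the decoder already knows Y; the encoder must send just
# enough about X for the decoder to compute f(X, Y).
X = range(4)
Y = range(3)
support = {(x, y) for x in X for y in Y if (x + y) % 5 != 4}   # jointly possible pairs
f = lambda x, y: (x + y) % 2

# Characteristic graph: connect x1, x2 if some y makes both possible while f differs,
# i.e. the decoder could not tell them apart without more information.
edges = {
    (x1, x2)
    for x1, x2 in itertools.combinations(X, 2)
    for y in Y
    if (x1, y) in support and (x2, y) in support and f(x1, y) != f(x2, y)
}

# Greedy colouring; the encoder transmits the colour of x instead of x itself.
color = {}
for x in X:
    used = {color[u] for u in color if (u, x) in edges or (x, u) in edges}
    color[x] = next(c for c in itertools.count() if c not in used)

print("colors:", color)   # fewer colours than symbols means a coding gain

# Decoder check: a colour together with y determines f(X, Y) uniquely.
for y in Y:
    for c in set(color.values()):
        vals = {f(x, y) for x in X if color[x] == c and (x, y) in support}
        assert len(vals) <= 1
```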
APA, Harvard, Vancouver, ISO, and other styles
30

Ibikunle, John Olutayo. "Projection domain compression of image information." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/47614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Maniccam, Suchindran S. "Image-video compression, encryption and information hiding /." Online version via UMI:, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hoque, Abu Sayed Md Latiful. "Compression of structured and semi-structured information." Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405329.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hong, Lihao. "Pavement Information System: Detection, Classification and Compression." Connect to full text in OhioLINK ETD Center, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=toledo1260901996.

Full text
Abstract:
Thesis (M.S.)--University of Toledo, 2009.
Typescript. "Submitted as partial fulfillment of the requirements for The Master of Science Degree in Engineering." "A thesis entitled"--at head of title. Bibliography: leaves 55-56.
APA, Harvard, Vancouver, ISO, and other styles
34

Rezazadeh, Arezou. "Error exponent analysis for the multiple access channel with correlated sources." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/667611.

Full text
Abstract:
Due to delay constraints of modern communication systems, studying reliable communication with finite-length codewords is much needed. Error exponents are one approach to study the finite-length regime from the information-theoretic point of view. In this thesis, we study the achievable exponent for single-user communication and also multiple-access channel with both independent and correlated sources. By studying different coding schemes including independent and identically distributed, independent and conditionally distributed, message-dependent, generalized constant-composition and conditional constant-composition ensembles, we derive a number of achievable exponents for both single-user and multi-user communication, and we analyze them.
APA, Harvard, Vancouver, ISO, and other styles
35

Bond, Rachael Louise. "Relational information theory." Thesis, University of Sussex, 2018. http://sro.sussex.ac.uk/id/eprint/76664/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Sahai, Anant. "Anytime information theory." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8770.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 171-175).
We study the reliable communication of delay-sensitive bit streams through noisy channels. To bring the issues into sharp focus, we will focus on the specific problem of communicating the values of an unstable real-valued discrete-time Markov random process through a finite capacity noisy channel so as to have finite average squared error from end-to-end. On the source side, we give a coding theorem for such unstable processes that shows that we can achieve the rate-distortion bound even in the infinite horizon case if we are willing to tolerate bounded delays in encoding and decoding. On the channel side, we define a new parametric notion of capacity called anytime capacity that corresponds to a sense of reliable transmission that is stronger than the traditional Shannon capacity sense but is less demanding than the sense underlying zero-error capacity. We show that anytime capacity exists for memoryless channels without feedback and is connected to standard random coding error exponents. The main result of the thesis is a new source/channel separation theorem that encompasses unstable processes and establishes that the stronger notion of anytime capacity is required to be able to deal with delay-sensitive bit streams. This theorem is then applied in the control systems context to show that anytime capacity is also required to evaluate channels if we intend to use them as part of a feedback link from sensing to actuation. Finally, the theorem is used to shed light on the concept of "quality of service requirements" by examining a toy mathematical example for which we prove the absolute necessity of differentiated service without appealing to human preferences.
by Anant Sahai.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
37

Schumann, Robert Helmut. "Quantum information theory." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51892.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2000
ENGLISH ABSTRACT: What are the information processing capabilities of physical systems? As recently as the first half of the 20th century this question did not even have a definite meaning. What is information, and how would one process it? It took the development of theories of computing (in the 1930s) and information (late in the 1940s) for us to formulate mathematically what it means to compute or communicate. Yet these theories were abstract, based on axiomatic mathematics: what did physical systems have to do with these axioms? Rolf Landauer had the essential insight - "Information is physical" - that information is always encoded in the state of a physical system, whose dynamics on a microscopic level are well-described by quantum physics. This means that we cannot discuss information without discussing how it is represented, and how nature dictates it should behave. Wigner considered the situation from another perspective when he wrote about "the unreasonable effectiveness of mathematics in the natural sciences". Why are the computational techniques of mathematics so astonishingly useful in describing the physical world [1]? One might begin to suspect foul play in the universe's operating principles. Interesting insights into the physics of information accumulated through the 1970s and 1980s - most sensationally in the proposal for a "quantum computer". If we were to mark a particular year in which an explosion of interest took place in information physics, that year would have to be 1994, when Shor showed that a problem of practical interest (factorisation of integers) could be solved easily on a quantum computer. But the applications of information in physics - and vice versa - have been far more widespread than this popular discovery. These applications range from improved experimental technology, more sophisticated measurement techniques, methods for characterising the quantum/classical boundary, tools for quantum chaos, and deeper insight into quantum theory and nature. In this thesis I present a short review of ideas in quantum information theory. The first chapter contains introductory material, sketching the central ideas of probability and information theory. Quantum mechanics is presented at the level of advanced undergraduate knowledge, together with some useful tools for quantum mechanics of open systems. In the second chapter I outline how classical information is represented in quantum systems and what this means for agents trying to extract information from these systems. The final chapter presents a new resource: quantum information. This resource has some bewildering applications which have been discovered in the last ten years, and continually presents us with unexpected insights into quantum theory and the universe.
APA, Harvard, Vancouver, ISO, and other styles
38

Baylon, David M. "Video compression with complete information for pre-recorded sources." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/81520.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 123-130).
by David Michael Baylon.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
39

Yun, Hee Cheol. "Compression of computer animation frames." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13070.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Toufie, Moegamat Zahir. "Real-time loss-less data compression." Thesis, Cape Technikon, 2000. http://hdl.handle.net/20.500.11838/1367.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Technikon, Cape Town, 2000
Data stored on disks generally contains significant redundancy. A mechanism or algorithm that recodes the data to lessen the data size could possibly double or triple the effective data that could be stored on the media. One mechanism of doing this is data compression. Many compression algorithms currently exist, but each one has its own advantages as well as disadvantages. The objective of this study is to formulate a new compression algorithm that could be implemented in real-time mode in any file system. The new compression algorithm should also execute as fast as possible, so as not to cause a lag in the file system's performance. This study focuses on binary data of any type, whereas previous articles such as (Huffman, 1952:1098), (Ziv & Lempel, 1977:337; 1978:530), (Storer & Szymanski, 1982:928) and (Welch, 1984:8) have placed particular emphasis on text compression in their discussions of compression algorithms for computer data. The compression algorithm formulated by this study is Lempel-Ziv-Toufie (LZT). LZT is basically an LZ77 (Ziv & Lempel, 1977:337) encoder with a buffer size equal to that of the data block of the file system in question. LZT does not make this distinction; it discards the sliding buffer principle and uses each data block of the entire input stream as one big buffer on which compression can be performed. LZT also handles the encoding of a match slightly differently to LZ77. An LZT match is encoded by two bit streams, the first specifying the position of the match and the other specifying the length of the match. This combination is commonly referred to as a pair. To encode the position portion of the pair, we make use of a sliding scale method. The sliding scale method works as follows. Let the position in the input buffer of the current character to be compressed be held by inpos, where inpos is initially set to 3. It is then only possible for a match to occur at position 1 or 2. Hence the position of a match will never be greater than 2, and therefore the position portion can be encoded using only 1 bit. As inpos is incremented as each character is encoded, the match position range increases and therefore more bits will be required to encode the match position. The reason why a decimal 2 can be encoded using only 1 bit can be explained as follows. When decimal values are converted to binary values, we get decimal 0 = binary 0, decimal 1 = binary 1, decimal 2 = binary 10, etc. As a position of 0 will never be used, it is possible to develop a coding scheme where a decimal value of 1 can be represented by a binary value of 0, and a decimal value of 2 can be represented by a binary value of 1. Only 1 bit is therefore needed to encode match position 1 and match position 2. In general, any decimal value n can be represented by the binary equivalent of (n - 1). The number of bits needed to encode (n - 1) indicates the number of bits needed to encode the match position. The length portion of the pair is encoded using a variable length coding (vlc) approach. The vlc method performs its encoding by using binary blocks. The first binary block is 3 bits long, where binary values 000 through 110 represent decimal values 1 through 7.
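A direct reading of the sliding-scale position rule described above, as a short sketch (the helper names are ours, and the length portion and the rest of the encoder are omitted):

```python
def position_bits(inpos: int) -> int:
    """Bits needed for the match-position field when encoding the character at
    1-based position `inpos`: positions 1 .. inpos-1 are stored as (p - 1)."""
    return max(1, (inpos - 2).bit_length())

def encode_position(p: int, inpos: int) -> str:
    """Encode match position p (1 <= p < inpos) as (p - 1) in position_bits(inpos) bits."""
    assert 1 <= p < inpos
    return format(p - 1, "0{}b".format(position_bits(inpos)))

# The field width grows with the sliding scale as more of the block has been seen.
for inpos in (3, 4, 5, 9, 17, 4096):
    print(inpos, position_bits(inpos), encode_position(inpos - 1, inpos))
```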
APA, Harvard, Vancouver, ISO, and other styles
41

Safar, Felix G. "Signal compression and reconstruction using multiple bases representation." Thesis, This resource online, 1988. http://scholar.lib.vt.edu/theses/available/etd-06112009-063321/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Huang, Shao-Lun Ph D. Massachusetts Institute of Technology. "Euclidean network information theory." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84888.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 121-123).
Many network information theory problems face the same difficulty of single-letterization. We argue that this is due to the lack of a geometric structure on the space of probability distributions. In this thesis, we develop such a structure by assuming that the distributions of interest are all close to each other. Under this assumption, the Kullback-Leibler (K-L) divergence is reduced to the squared Euclidean metric in a Euclidean space. In addition, we construct notions of coordinates and inner products, which facilitate solving communication problems. We present the application of this approach to point-to-point channels, general broadcast channels (BC), multiple access channels (MAC) with common sources, interference channels, and multi-hop layered communication networks with or without feedback. It can be shown that with this approach, information theory problems, such as single-letterization, can be reduced to linear algebra problems. Solving these linear algebra problems, we show that for general broadcast channels, transmitting the common message to receivers can be formulated as a trade-off between linear systems; we also provide an example to visualize this trade-off in a geometric way. For the MAC with common sources, we observe a coherent combining gain due to the cooperation between transmitters, and this gain can be obtained quantitatively by applying our technique. In addition, the developments for the broadcast and multiple access channels suggest a trade-off between generating common messages for multiple users and transmitting them as common sources to exploit the coherent combining gain when optimizing the throughput of communication networks. To study the structure of this trade-off and understand its role in optimizing the network throughput, we construct, with our local approach, a deterministic model that captures the critical channel parameters and models the network well. With this deterministic model, for multi-hop layered networks, we analyze the optimal network throughputs and illustrate what kinds of common messages should be generated to achieve them. Our results provide insight into how users in a network should cooperate with each other to transmit information efficiently.
by Shao-Lun Huang.
Ph.D.
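The local approximation referred to in this abstract is the standard second-order behaviour of the Kullback-Leibler divergence: for nearby distributions it reduces to half a weighted squared Euclidean distance, D(p||q) ≈ (1/2) Σ_i (p_i - q_i)² / q_i. A minimal numerical check of this fact (an illustration, not material from the thesis):

```python
# For distributions that are close to each other, the K-L divergence is well
# approximated by half a weighted squared Euclidean distance (chi-squared form).
import numpy as np

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

def local_approximation(p, q):
    return 0.5 * float(np.sum((p - q) ** 2 / q))

q = np.array([0.5, 0.3, 0.2])
eps = 0.01 * np.array([1.0, -0.5, -0.5])      # small perturbation summing to zero
p = q + eps

print(kl_divergence(p, q))                    # ~2.05e-4
print(local_approximation(p, q))              # ~2.04e-4, agreeing to leading order
```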
APA, Harvard, Vancouver, ISO, and other styles
43

Reusens, Emmanuel. "Visual data compression using the fractal theory and dynamic coding /." Lausanne : Ecole polytechnique fédérale, 1997. http://library.epfl.ch/theses/?nr=1590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Coben, Muhammed Z. "Region-based subband coding of image sequences." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kheirkhahzadeh, Antonio. "On the performance of markup language compression." Thesis, University of West London, 2015. https://repository.uwl.ac.uk/id/eprint/1266/.

Full text
Abstract:
Data compression is used in our everyday life to improve computer interaction or simply for storage purposes. Lossless data compression refers to those techniques that are able to compress a file in such a way that the decompressed output is a replica of the original. These techniques, which differ from lossy data compression, are necessary and heavily used in order to reduce resource usage and improve storage and transmission speeds. Prior research led to huge improvements in compression performance and efficiency for general-purpose tools, which are mainly based on statistical and dictionary encoding techniques. Extensible Markup Language (XML) is based on redundant data which is parsed as normal text by general-purpose compressors. Several tools for compressing XML data have been developed, resulting in improvements in compression size and speed using different compression techniques. These tools are mostly based on algorithms that rely on variable-length encoding. XML Schema is a language used to define the structure and data types of an XML document. As a result, it provides XML compression tools with additional information that can be used to improve compression efficiency. In addition, XML Schema is also used for validating XML data. For document compression there is a need to generate the schema dynamically for each XML file; this solution can be applied to improve the efficiency of XML compressors. This research investigates a dynamic approach to compressing XML data using a hybrid compression tool. This model allows the compression of XML data using variable- and fixed-length encoding techniques when their best use cases are triggered. The aim of this research is to investigate the use of fixed-length encoding techniques to support general-purpose XML compressors. The results demonstrate the possibility of improving compression size when a fixed-length encoder is used to compress most XML data types.
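As a hypothetical illustration of the fixed-length encoding idea (not code from the thesis), a value constrained by an XML Schema enumeration can be mapped to a fixed-width bit code instead of being compressed as free text; the element name and values below are invented for the example.

```python
# Hypothetical example: a schema-declared enumeration lets a compressor replace the
# textual content of an element with a fixed-width code.
from math import ceil, log2

def fixed_length_code(enumeration):
    """Assign each allowed value a fixed-width bit string derived from the schema."""
    width = max(1, ceil(log2(len(enumeration))))
    return {value: format(i, f"0{width}b") for i, value in enumerate(enumeration)}

# Invented enumeration for a <status> element.
codes = fixed_length_code(["open", "closed", "pending", "archived"])
print(codes["pending"])   # '10': 2 bits instead of 7 characters of text
```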
APA, Harvard, Vancouver, ISO, and other styles
46

Sun, Wei. "Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2890.

Full text
Abstract:
In digital watermarking, a watermark is embedded into a covertext in such a way that the resulting watermarked signal is robust to certain distortions caused by either standard data processing in a friendly environment or malicious attacks in an unfriendly environment. The watermarked signal can then be used for different purposes ranging from copyright protection, data authentication, and fingerprinting to information hiding. In this thesis, digital watermarking is investigated from both an information-theoretic viewpoint and a numerical computation viewpoint.

From the information-theoretic viewpoint, we first study a new digital watermarking scenario, in which watermarks and covertexts are generated from a joint memoryless watermark and covertext source. The configuration of this scenario is different from that treated in existing digital watermarking works, where watermarks are assumed independent of covertexts. In the case of public watermarking, where the covertext is not accessible to the watermark decoder, a necessary and sufficient condition is determined under which the watermark can be fully recovered with high probability at the end of watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel. Moreover, by using similar techniques, a combined source coding and Gel'fand-Pinsker channel coding theorem is established, and an open problem proposed recently by Cox et al. is solved. Interestingly, from the necessary and sufficient condition we can show that, owing to the correlation between the watermark and covertext, watermarks can still be fully recovered with high probability even if the entropy of the watermark source is strictly above the standard public watermarking capacity.

We then extend the above watermarking scenario to a case of joint compression and watermarking, where the watermark and covertext are correlated, and the watermarked signal has to be further compressed. Given an additional constraint of the compression rate of the watermarked signals, a necessary and sufficient condition is determined again under which the watermark can be fully recovered with high probability at the end of public watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel.

The above two joint compression and watermarking models are further investigated under a less stringent environment where the reproduced watermark at the end of decoding is allowed to be within certain distortion of the original watermark. Sufficient conditions are determined in both cases, under which the original watermark can be reproduced with distortion less than a given distortion level after the watermarked signal is disturbed by a fixed memoryless attack channel and the covertext is not available to the watermark decoder.

Watermarking capacities and joint compression and watermarking rate regions are often characterized and/or presented as optimization problems in information-theoretic research. However, that does not mean they can be calculated easily. In this thesis we first derive closed forms of the watermarking capacities of private Laplacian watermarking systems with the magnitude-error distortion measure under a fixed additive Laplacian attack and a fixed arbitrary additive attack, respectively. Then, based on the idea of the Blahut-Arimoto algorithm for computing channel capacities and rate distortion functions, two iterative algorithms are proposed for calculating private watermarking capacities and the compression and watermarking rate regions of joint compression and private watermarking systems with finite alphabets. Finally, iterative algorithms are developed for calculating public watermarking capacities and the compression and watermarking rate regions of joint compression and public watermarking systems with finite alphabets, based on the Blahut-Arimoto algorithm and Shannon's strategy.
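For reference, the classical Blahut-Arimoto iteration on which these algorithms build can be sketched as follows; this is a minimal sketch of the textbook channel-capacity version only, not of the watermarking extensions developed in the thesis.

```python
# Textbook Blahut-Arimoto iteration for the capacity of a discrete memoryless channel.
import numpy as np

def blahut_arimoto(W, iterations=500):
    """W[x, y] = P(y | x). Returns (capacity in bits, capacity-achieving input law)."""
    n_inputs = W.shape[0]
    p = np.full(n_inputs, 1.0 / n_inputs)             # start from the uniform input
    capacity = 0.0
    for _ in range(iterations):
        q = p @ W                                      # induced output distribution q(y)
        ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
        c = np.exp(np.sum(W * np.log(ratio), axis=1))  # c(x) = exp D(W(.|x) || q)
        capacity = np.log2(p @ c)                      # lower bound, tight at convergence
        p = p * c / (p @ c)                            # multiplicative update of the input
    return capacity, p

# Binary symmetric channel with crossover probability 0.1: capacity = 1 - H(0.1).
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
C, p_star = blahut_arimoto(bsc)
print(round(C, 3), p_star)   # 0.531 bits, optimal input ~ [0.5, 0.5]
```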
APA, Harvard, Vancouver, ISO, and other styles
47

Bell, Timothy. "A unifying theory and improvements for existing approaches to text compression." Thesis, University of Canterbury. Computer Science, 1986. http://hdl.handle.net/10092/8411.

Full text
Abstract:
More than 40 different schemes for performing text compression have been proposed in the literature. Many of these schemes appear to use quite different approaches, such as Huffman coding, dictionary substitution, predictive modelling, and modelling with Finite State Automata (FSA). From the many schemes in the literature, a representative sample has been selected to include all schemes of current interest (i.e. schemes which are in popular use, or those which have been proposed recently). The main result given in the thesis is that each of these schemes disguises some form of variable-order Markov model (VOMM), which is a relatively inexact model for text. In a variable-order Markov model, each symbol is predicted using a finite number of directly preceding symbols as a context. An important class of FSAs, called Finite Context Automata (FCAs), is defined, and it is shown that FCAs implement a form of variable-order Markov modelling. Informally, an FCA is an FSA where the current state is determined by some finite number of immediately preceding input symbols. Three types of proof are used to show that text compression schemes use variable-order Markov modelling: (1) some schemes, such as Cleary and Witten's "Prediction by Partial Matching", use a VOMM by definition; (2) Cormack and Horspool's "Dynamic Markov Compression" scheme uses an FSA for prediction, and it is shown that the FSAs generated will always be FCAs; (3) a class of compression schemes called Greedy Macro (GM) schemes is defined, and a wide range of compression schemes, including Ziv-Lempel (LZ) coding, are shown to belong to that class. A construction is then given to generate an FSA equivalent to any GM scheme, and the FSA is shown to implement a form of variable-order Markov modelling. Because variable-order Markov models are only a crude model for text, the main conclusion of the thesis is that more powerful models, such as Pushdown Automata, combined with arithmetic coding, offer better compression than any existing schemes and should be explored further. However, there is room for improvement in the compression and speed of some existing schemes, and this is explored as follows. The LZ schemes are currently regarded as the most practical, in that they achieve good compression, are usually very fast, and require relatively little memory to perform well. To study these schemes more closely, an explicit probabilistic symbol-wise model is given which is equivalent to one of the LZ schemes, LZ77. This model is suitable for providing probabilities for character-by-character Huffman or arithmetic coding. Using the insight gained by examining the symbol-wise model, improvements have been found which can be reflected in LZ schemes, resulting in a scheme called LZB, which offers improved compression and for which the choice of parameters is less critical. Experiments verify that LZB gives better compression than competing LZ schemes for a large number of texts. Although the time complexity for encoding using LZB and similar schemes is O(n) for a text of n characters, straightforward implementations are very slow. The time-consuming step of these algorithms is a search for the longest string match. An algorithm is given which uses a binary search tree to find the longest string match, and experiments show that this results in a dramatic increase in encoding speed.
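As a point of reference only (not code from the thesis), the longest-string-match step that dominates encoding time can be written as a naive scan over the window; the thesis replaces such a scan with a binary search tree.

```python
# Naive longest-match search over a window, the step that a binary search tree
# accelerates. This is an illustration, not the thesis's algorithm.
def longest_match(window: bytes, lookahead: bytes):
    """Return (offset, length) of the longest prefix of `lookahead` found in `window`."""
    best_offset, best_length = 0, 0
    for start in range(len(window)):
        length = 0
        while (length < len(lookahead)
               and start + length < len(window)
               and window[start + length] == lookahead[length]):
            length += 1
        if length > best_length:
            best_offset, best_length = start, length
    return best_offset, best_length

print(longest_match(b"abracadabra", b"abrasive"))   # (0, 4): the prefix 'abra'
```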
APA, Harvard, Vancouver, ISO, and other styles
48

Thompson, Luke Francis. "Through-thickness compression testing and theory of carbon fibre composite materials." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/throughthickness-compression-testing-and-theory-of-carbon-fibre-composite-materials(02ad7cfa-b779-4e69-9361-3c5bb44c6114).html.

Full text
Abstract:
This study investigates the through-thickness behaviour of carbon/epoxy laminates. A through-thickness compression test regime was conducted utilising three specimen designs: waisted, hollow cylindrical and cubic specimens. An assessment and comparison of each specimen is given regarding their advantages and disadvantages in characterising the through-thickness response of [+45/-45/90/0]s quasi-isotropic AS4/8552 carbon/epoxy laminates. A finite element (FE) study of the three specimens is presented which results in specimen geometries that provide a macroscopically uniform stress response throughout the gauge length whilst also minimising other features such as stress concentrations. Further to the final geometries being presented, the method of manufacture for the laminate and the machining processes for each of the specimens are given. A mesoscopic FE study is presented relating to the free-edge effects induced by through-thickness loading in quasi-isotropic laminates. The results presented show that free-edge effects will be present in the test specimens and will have a larger overall impact on the hollow cylindrical specimen. The free-edge effects also increase the stress concentrations present in the corners of the waisted and cubic specimens. Characteristic stress-strain curves are presented for each specimen, with strain data taken from post-yield strain gauges attached to the specimens. The extracted initial Young's modulus Ez and Poisson's ratios vzx and vzy show a small variation between specimens. The strength values for the three specimens vary greatly, with the waisted specimen being the strongest and the cylindrical specimen the weakest, indicating that the chosen specimen geometry dominates failure. The experimental data will be used for test case 12 in the Second World Wide Failure Exercise (WWFE-II). A study is presented to predict the effective elastic properties of Z-pinned laminates. The materials under consideration are UD and [0/90]s cross-ply AS4/3501-6 carbon/epoxy laminates. Estimates of the effective properties are provided by two FE approaches and two analytical bounding approaches, namely the Voigt and Reuss bounds and Walpole's bounding theory. The two FE approaches are based on extreme assumptions about the in-plane fibre volume fraction in the presence of Z-pins and provide a tight range of values in which the real result should lie. Furthermore, whilst the bounding methods are simple and, in the case of Young's moduli, produce very wide bounds, the selection of the suitable bound can lead to a good estimate in comparison with the FE data. Typically the best bounding-method result for each elastic property is within 10% of the FE predictions.
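For context, the Voigt and Reuss bounds mentioned above are the simplest of the bounding rules: a volume-weighted arithmetic mean and harmonic mean of the constituent moduli, respectively. A minimal sketch with placeholder constituent properties (not data from the thesis):

```python
# Voigt (iso-strain, upper) and Reuss (iso-stress, lower) bounds on an effective
# modulus of a two-phase composite. The numbers below are placeholders, not
# AS4/3501-6 data from the thesis.
def voigt_bound(vf, E_fibre, E_matrix):
    return vf * E_fibre + (1 - vf) * E_matrix          # arithmetic mixing rule

def reuss_bound(vf, E_fibre, E_matrix):
    return 1.0 / (vf / E_fibre + (1 - vf) / E_matrix)  # harmonic mixing rule

vf = 0.6                          # fibre volume fraction (placeholder)
E_fibre, E_matrix = 230.0, 3.5    # constituent moduli in GPa (placeholders)
print(voigt_bound(vf, E_fibre, E_matrix))   # ~139.4 GPa
print(reuss_bound(vf, E_fibre, E_matrix))   # ~8.6 GPa: the bounds can be very wide
```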
APA, Harvard, Vancouver, ISO, and other styles
49

Faghfoor, Maghrebi Mohammad. "Information gain in quantum theory." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2724.

Full text
Abstract:
In this thesis I address the fundamental question of how information gain is possible in the realm of quantum mechanics, where a single measurement alters the state of the system. I study in detail an ensemble of particles in some unknown (but product) state, suggest an optimal way of gaining the maximum information, and quantify the corresponding information exactly. We find a rather novel result which is quite different from other well-known definitions of information gain in quantum theory.
APA, Harvard, Vancouver, ISO, and other styles
50

Vedral, Vlatko. "Quantum information theory of entanglement." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299786.

Full text
APA, Harvard, Vancouver, ISO, and other styles