Academic literature on the topic 'Coding, information theory and compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Coding, information theory and compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Coding, information theory and compression"

1

Kunt, Murat. "Progress in High Compression Image Coding." International Journal of Pattern Recognition and Artificial Intelligence 2, no. 3 (September 1988): 387–405. http://dx.doi.org/10.1142/s0218001488000236.

Full text
Abstract:
The digital representation of an image requires a very large number of bits. The goal of image coding is to reduce this number as much as possible and to reconstruct a faithful duplicate of the original picture. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanisms of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway, combined with the separate processing of contours and textures, has led to a new class of coding methods capable of achieving compression ratios as high as 100:1. This paper presents recent progress on some of the main avenues of object-based methods. These second-generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics, and scene analysis.
APA, Harvard, Vancouver, ISO, and other styles
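As a quick orientation for the ratios quoted in the abstract above, here is a worked example in Python; the 512x512, 8-bit greyscale image is a hypothetical, not taken from the paper:

width = height = 512                                # hypothetical greyscale image
bits_per_pixel = 8
raw_bits = width * height * bits_per_pixel          # 2,097,152 bits uncompressed
for ratio in (10, 100):                             # the plateau vs. second-generation figures
    compressed_bits = raw_bits / ratio
    bpp = bits_per_pixel / ratio                    # bits per pixel after coding
    print(f"{ratio}:1 -> {compressed_bits:,.0f} bits ({bpp:.2f} bpp)")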
2

Wei, Dahuan, and Gang Feng. "Compression and Storage Algorithm of Key Information of Communication Data Based on Backpropagation Neural Network." Mathematical Problems in Engineering 2022 (April 14, 2022): 1–9. http://dx.doi.org/10.1155/2022/2885735.

Full text
Abstract:
This paper presents a backpropagation neural network algorithm for data compression and data storage. Modeling the data and then coding it is the most basic idea of traditional data compression. Traditional designs are mainly based on reducing the redundancy in the information and focus on the coding design, and their compression ratios have hovered around a few tens of percent. After information has been compressed by traditional coding, it is difficult to compress it further by similar methods. To solve this problem, information that takes up less signal space can be used to represent information that takes up more signal space, realizing data compression. This design idea breaks through the traditional limitation of relying only on coding to reduce data redundancy and achieves a higher compression ratio; at the same time, information compressed in this way can be compressed repeatedly, with very good performance. This is the basic idea behind the combination of neural networks and data compression introduced in this paper. Based on the theory of multiobjective function optimization, the paper puts forward a theoretical model of a multiobjective optimization neural network and studies a multiobjective data compression method based on neural networks. According to changes in the data characteristics, the method automatically adjusts the structural parameters of the neural network (connection weights and bias values) to obtain the largest amount of data compression at the cost of a small information loss. The method offers strong adaptability, parallel processing, distributed storage of knowledge, and anti-interference. Experimental results show that, compared with other methods, the proposed method has significant advantages in performance, compression time, compression effect, efficiency, and robustness.
APA, Harvard, Vancouver, ISO, and other styles
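The abstract above describes a backpropagation network whose weights are adjusted to trade a small information loss for compression. A minimal sketch of that general idea (not the authors' algorithm) is an autoencoder whose narrow hidden layer carries the compressed representation; the toy data, layer sizes, and learning rate below are assumptions:

import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((256, 3))
X = latent @ rng.standard_normal((3, 8)) * 0.5     # toy blocks with 3 true degrees of freedom

W1 = rng.standard_normal((8, 3)) * 0.1             # encoder weights (8 -> 3: compression)
b1 = np.zeros(3)
W2 = rng.standard_normal((3, 8)) * 0.1             # decoder weights (3 -> 8: reconstruction)
b2 = np.zeros(8)
lr = 0.05

for _ in range(3000):                              # plain batch backpropagation
    H = np.tanh(X @ W1 + b1)                       # hidden activations = compressed signal
    Y = H @ W2 + b2                                # reconstruction
    err = Y - X
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)                 # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - X) ** 2).mean())
print(f"8 values stored as 3 hidden activations; reconstruction MSE = {mse:.4f}")

Because the toy blocks genuinely have only three degrees of freedom, the 8-to-3 bottleneck loses little information, which is the trade-off the abstract describes.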
3

Zhou, Dale, Christopher W. Lynn, Zaixu Cui, Rastko Ciric, Graham L. Baum, Tyler M. Moore, David R. Roalf, et al. "Efficient coding in the economics of human brain connectomics." Network Neuroscience 6, no. 1 (2022): 234–74. http://dx.doi.org/10.1162/netn_a_00223.

Full text
Abstract:
In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8–23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior—beyond the conventional network efficiency metric—for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
APA, Harvard, Vancouver, ISO, and other styles
4

Romeo, August, Enrique Gaztañaga, Jose Barriga, and Emilio Elizalde. "Information Content in Uniformly Discretized Gaussian Noise: Optimal Compression Rates." International Journal of Modern Physics C 10, no. 4 (June 1999): 687–716. http://dx.doi.org/10.1142/s0129183199000528.

Full text
Abstract:
We approach the theoretical problem of compressing a signal dominated by Gaussian noise. We present expressions for the compression ratio which can be reached, in the light of Shannon's noiseless coding theorem, for a linearly quantized stochastic Gaussian signal (noise). The compression ratio decreases logarithmically with the amplitude of the frequency spectrum P(f) of the noise. Entropy values and compression rates are shown to depend on the shape of this power spectrum, given different normalizations. The cases of white noise (w.n.), 1/f^p power-law noise (including 1/f noise), (w.n. + 1/f) noise, and piecewise (w.n. + 1/f | w.n. + 1/f^2) noise are discussed, and quantitative behaviors and useful approximations are provided.
APA, Harvard, Vancouver, ISO, and other styles
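A minimal numerical sketch of the central quantity above: the empirical entropy of a linearly (uniformly) quantized unit-variance Gaussian, and the compression ratio that Shannon's noiseless coding theorem allows over fixed-length coding. The quantizer range and bit depth are assumptions, not the paper's settings:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)                  # unit-variance Gaussian noise
n_bits = 8                                          # fixed-length code: 8 bits/sample
edges = np.linspace(-4, 4, 2**n_bits + 1)           # uniform quantizer over +/- 4 sigma
counts, _ = np.histogram(x, bins=edges)
p = counts[counts > 0] / counts.sum()               # empirical bin probabilities
H = -(p * np.log2(p)).sum()                         # entropy in bits per sample
print(f"entropy ~ {H:.2f} bits/sample; "
      f"noiseless-coding ratio ~ {n_bits / H:.2f}:1 vs. {n_bits}-bit fixed-length")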
5

Sgarro, Andrea, and Liviu Petrişor Dinu. "Possibilistic Entropies and the Compression of Possibilistic Data." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10, no. 6 (December 2002): 635–53. http://dx.doi.org/10.1142/s0218488502001697.

Full text
Abstract:
We revisit the possibilistic model for information sources recently put forward by the first author, as opposed to the standard probabilistic models of information theory. Based on an interpretation of possibilistic source coding inspired by utility functions, we define a notion of possibilistic entropy for a suitable class of interactive possibilistic sources and compare it with the possibilistic entropy of stationary non-interactive sources. Both entropies have a coding-theoretic nature, being obtained as limit values for the rates of optimal compression codes. We list properties of the two entropies which might support their use as measures of "possibilistic ignorance".
APA, Harvard, Vancouver, ISO, and other styles
6

Silverstein, Steven M., Michael Wibral, and William A. Phillips. "Implications of Information Theory for Computational Modeling of Schizophrenia." Computational Psychiatry 1 (December 2017): 82–101. http://dx.doi.org/10.1162/cpsy_a_00004.

Full text
Abstract:
Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory—such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio—can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.
APA, Harvard, Vancouver, ISO, and other styles
7

Xu, Jie, Xiao Lin Jiang, and Xiao Yang Yu. "Based on Compression Sensing of Video Coding in Robot Servo System." Applied Mechanics and Materials 241-244 (December 2012): 1913–17. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.1913.

Full text
Abstract:
Robot visual servo control is currently a main research direction in robot control. This design is based on an analysis of the theory of compressed sensing, distributed video decoding, and image super-resolution reconstruction. Simulation and experimental results show that applying compressed sensing theory to image super-resolution reconstruction lets the high-resolution reconstruction fully exploit the structural characteristics of the original low-resolution image, preserving its edge details and other information. Compared with the traditional calibration approach, the reconstructed high-resolution image improves the reconstruction of edges, textures, and other detailed characteristics, and improves recognition precision.
APA, Harvard, Vancouver, ISO, and other styles
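The abstract above applies compressed sensing to super-resolution. As background, here is a minimal sketch of the core compressed-sensing step it builds on: recovering a sparse signal from few random measurements via orthogonal matching pursuit. The dimensions and sparsity are illustrative assumptions, not the paper's setup:

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                      # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                  # random sensing matrix
y = A @ x                                                     # compressed measurements

support, r = [], y.copy()
for _ in range(k):                        # greedy OMP iterations
    support.append(int(np.argmax(np.abs(A.T @ r))))           # most correlated atom
    sub = A[:, support]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)            # least-squares refit
    r = y - sub @ coef                                        # update residual

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", float(np.linalg.norm(x - x_hat)))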
8

Arora, H. D., and Anjali Dhiman. "Comparative Study of Generalized Quantitative-Qualitative Inaccuracy Fuzzy Measures for Noiseless Coding Theorem and 1:1 Codes." International Journal of Mathematics and Mathematical Sciences 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/258675.

Full text
Abstract:
In coding theory, we study various properties of codes for application in data compression, cryptography, error correction, and network coding. The study of codes is pursued in information theory, electrical engineering, mathematics, and computer science for the transmission of data through reliable and efficient methods. We have to consider how the coding of messages can be done efficiently so that the maximum number of messages can be sent over a noiseless channel in a given time. Thus, the minimum value of the mean codeword length subject to a given constraint on codeword lengths has to be found. In this paper, we introduce the mean codeword length of order α and type β for 1:1 codes and analyze the relationship between the average codeword length and fuzzy information measures for binary 1:1 codes. Further, a noiseless coding theorem associated with the fuzzy information measure is established.
APA, Harvard, Vancouver, ISO, and other styles
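For readers new to the quantities discussed above, the sketch below computes the entropy lower bound and the mean codeword length of a classical Huffman code for an assumed toy source; the paper's order-α, type-β fuzzy measures generalize exactly this entropy-versus-mean-length comparison:

import heapq
import math

p = {"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}    # hypothetical source distribution

# Build Huffman code lengths with a heap of (probability, tiebreaker, symbols).
heap = [(q, i, [s]) for i, (s, q) in enumerate(p.items())]
heapq.heapify(heap)
lengths = {s: 0 for s in p}
counter = len(heap)
while len(heap) > 1:
    q1, _, s1 = heapq.heappop(heap)
    q2, _, s2 = heapq.heappop(heap)
    for s in s1 + s2:
        lengths[s] += 1                            # merged symbols go one level deeper
    heapq.heappush(heap, (q1 + q2, counter, s1 + s2))
    counter += 1

H = -sum(q * math.log2(q) for q in p.values())     # entropy lower bound
L = sum(p[s] * lengths[s] for s in p)              # mean codeword length
print(f"H = {H:.3f} bits, Huffman mean length L = {L:.3f} bits (H <= L < H + 1)")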
9

Megala, G., et al. "State-of-the-Art in Video Processing: Compression, Optimization and Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Full text
Abstract:
Video compression plays a vital role in modern social media networking, with its plethora of multimedia applications. It enables the transmission medium to transfer videos competently and enables resources to store video efficiently. Nowadays, high-resolution video data are transferred through high-bit-rate communication channels in order to send multiple compressed videos. There have been many advances in transmission capability and in efficient ways of storing compressed video, where compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts a long sequence of raw video bits into a small, compact one, achieving a high compression ratio with good perceptual video quality. Removing redundant information is the main task in video sequence compression. A survey of various block matching algorithms, quantization, and entropy coding is presented. It is found that many of the methods have computational complexities that need improvement through optimization.
APA, Harvard, Vancouver, ISO, and other styles
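A minimal sketch of the block-matching step surveyed above: exhaustive full-search motion estimation with the sum of absolute differences (SAD) criterion. The frame content, block size, and search range are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64)).astype(np.int32)   # reference frame
curr = np.roll(prev, shift=(2, -3), axis=(0, 1))         # current frame: known shift

B, R = 16, 4                                             # block size, search range
by, bx = 24, 24                                          # top-left of block in curr
block = curr[by:by + B, bx:bx + B]

best, best_mv = None, (0, 0)
for dy in range(-R, R + 1):                              # full search over the window
    for dx in range(-R, R + 1):
        cand = prev[by + dy:by + dy + B, bx + dx:bx + dx + B]
        sad = int(np.abs(block - cand).sum())            # sum of absolute differences
        if best is None or sad < best:
            best, best_mv = sad, (dy, dx)

print("motion vector:", best_mv, "SAD:", best)           # expect (-2, 3) with SAD 0

The fast algorithms the survey compares (three-step search, diamond search, etc.) cut down this exhaustive window scan, which is where the computational complexity it criticizes comes from.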
10

Voges, Jan, Tom Paridaens, Fabian Müntefering, Liudmila S. Mainzer, Brian Bliss, Mingyu Yang, Idoia Ochoa, Jan Fostier, Jörn Ostermann, and Mikel Hernaez. "GABAC: an arithmetic coding solution for genomic data." Bioinformatics 36, no. 7 (December 12, 2019): 2275–77. http://dx.doi.org/10.1093/bioinformatics/btz922.

Full text
Abstract:
Motivation: In an effort to provide a response to the ever-expanding generation of genomic data, the International Organization for Standardization (ISO) is designing a new solution for the representation, compression and management of genomic sequencing data: the Moving Picture Experts Group (MPEG)-G standard. This paper discusses the first implementation of an MPEG-G compliant entropy codec: GABAC. GABAC combines proven coding technologies, such as context-adaptive binary arithmetic coding, binarization schemes and transformations, into a straightforward solution for the compression of sequencing data.
Results: We demonstrate that GABAC outperforms well-established (entropy) codecs in a significant set of cases and thus can serve as an extension for existing genomic compression solutions, such as CRAM.
Availability and implementation: The GABAC library is written in C++. We also provide a command line application which exercises all features provided by the library. GABAC can be downloaded from https://github.com/mitogen/gabac.
Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
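Not GABAC itself, but a minimal sketch of the principle it combines: a context-adaptive probability model over a binarized stream, charged the ideal arithmetic-coding cost of -log2 p bits per symbol. The Krichevsky-Trofimov estimator and the one-bit context are assumptions for illustration:

import math

bits = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1]   # toy binarized stream
counts = {}                                  # per-context (0, 1) occurrence counts
cost = 0.0                                   # ideal arithmetic-coded length in bits
ctx = 0                                      # context = previous bit (assumption)
for b in bits:
    n0, n1 = counts.get(ctx, (0, 0))
    p1 = (n1 + 0.5) / (n0 + n1 + 1.0)        # KT estimate of P(bit = 1 | context)
    cost += -math.log2(p1 if b else 1 - p1)  # arithmetic coding pays -log2 p bits
    counts[ctx] = (n0 + (b == 0), n1 + (b == 1))
    ctx = b                                  # update the context for the next bit

print(f"{len(bits)} input bits -> {cost:.1f} ideal coded bits")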

Dissertations / Theses on the topic "Coding, information theory and compression"

1

Zemouri, Rachid. "Data compression of speech using sub-band coding." Thesis, University of Newcastle Upon Tyne, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reid, Mark Montgomery. "Path-dictated, lossless volumetric data compression." Thesis, University of Ulster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shih, An-Zen. "Fractal compression analysis of superdeformed nucleus data." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tang, P. S. "Data compression for high precision digital waveform recording." Thesis, City University London, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yamazato, Takaya, Iwao Sasase, and Shinsaku Mori. "Interlace Coding System Involving Data Compression Code, Data Encryption Code and Error Correcting Code." IEICE, 1992. http://hdl.handle.net/2237/7844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jiang, Jianmin. "Multi-media data compression and real-time novel architectures implementation." Thesis, University of Southampton, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239417.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Reeder, Brian Martin. "Application of artificial neural networks for spacecraft instrument data compression." Thesis, University of Sussex, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Beirami, Ahmad. "Network compression via network memory: fundamental performance limits." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53448.

Full text
Abstract:
The amount of information that is churned out daily around the world is staggering, and hence future technological advancements are contingent upon the development of scalable acquisition, inference, and communication mechanisms for this massive data. This Ph.D. dissertation draws upon mathematical tools from information theory and statistics to understand the fundamental performance limits of universal compression of this massive data at the packet level, applied just above layer 3 of the network, when the intermediate network nodes are enabled with the capability of memorizing the previous traffic. Universality of compression imposes an inevitable redundancy (overhead) on the compression performance of universal codes, which is due to the learning of the unknown source statistics. In this work, the previous asymptotic results about the redundancy of universal compression are generalized to consider the performance of universal compression in the finite-length regime (applicable to small network packets). Further, network compression via memory is proposed as a compression-based solution for relatively small network packets whenever the network nodes (i.e., the encoder and the decoder) are equipped with memory and have access to massive amounts of previous communication. In a nutshell, network compression via memory learns the patterns and statistics of the packet payloads and uses them for compression and reduction of the traffic. Network compression via memory, at the cost of increased computational overhead in the network nodes, significantly reduces the transmission cost in the network. This leads to a huge performance improvement, as the cost of transmitting one bit is by far greater than the cost of processing it.
APA, Harvard, Vancouver, ISO, and other styles
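The packet-level point in the abstract above can be made concrete with the classical leading-order redundancy of universal compression for a memoryless source over an alphabet of size k, roughly ((k-1)/2) log2 n bits for an n-symbol input. The numbers below are purely illustrative, not the dissertation's results:

import math

k = 256                                        # byte alphabet (assumption)
for n in (1_500, 64_000, 10_000_000):          # packet-sized vs. bulk inputs
    redundancy = 0.5 * (k - 1) * math.log2(n)  # leading-term overhead in bits
    print(f"n = {n:>10,} symbols: overhead ~ {redundancy / n:.3f} bits/symbol")

For a 1,500-byte packet the overhead is close to a bit per symbol, while for bulk inputs it vanishes, which is why memorizing previous traffic helps precisely at the packet scale.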
9

Floor, Pål Anders. "On the Theory of Shannon-Kotel'nikov Mappings in Joint Source-Channel Coding." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2193.

Full text
Abstract:

In this thesis an approach to joint source-channel coding using direct source-to-channel mappings is studied. The system studied communicates i.i.d. Gaussian sources on a point-to-point Gaussian memoryless channel with limited feedback (supporting channel state information at most). The mappings, named Shannon-Kotel'nikov (SK) mappings, are memoryless mappings between the source space of dimension M and the channel space of dimension N. Such mappings can be used for error control when M < N, called dimension expansion, and for bandwidth compression when M > N, called dimension reduction. The SK-mappings operate on amplitude-continuous and time-discrete signals (meaning that no bits are involved) through (piecewise) continuous curves or hyper surfaces in general.

The reason for studying SK-mappings is that they are delay free, robust against varying channel conditions, and have quite good performance at low complexity.

First, a theory for determining and categorizing the distortion when using SK-mappings for communication is introduced and developed. This theory is further used to show that SK-mappings can reach the information-theoretic bound, the optimal performance theoretically attainable (OPTA), as their dimensions approach infinity.

One problem is to determine the overall optimal geometry of the SK-mappings. Indications on the overall geometry can be found by studying the codebooks and channel constellations of power constrained channel optimized vector quantizers (PCCOVQ). The PCCOVQ algorithm will find the optimal placing of quantizer representation vectors in the source space and channel symbols in the channel space. A PCCOVQ algorithm giving well performing mappings for the dimension reduction case has been found in the past. In this thesis the PCCOVQ algorithm is modified to give well performing dimension expanding mappings for scalar sources, and 1:2 and 1:3 PCCOVQ examples are given.

Some example SK-mappings are proposed and analyzed. 2:1 and 1:2 PCCOVQ mappings are used as inspiration for making 2:1 and 1:2 SK-mappings based on the Archimedean spiral. Further, 3:1, 4:1, 3:2 and 2:3 SK-mappings are found and analyzed. All example SK-mappings are modeled mathematically using the proposed theory on SK-mappings. These mathematical models are further used to find the optimal coefficients for all the proposed SK-mappings as a function of the channel signal-to-noise ratio (CSNR), making adaptation to varying channel conditions simple.

APA, Harvard, Vancouver, ISO, and other styles
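A minimal numerical sketch of the 2:1 Archimedean-spiral idea described above: two source samples are replaced by the nearest point on a double spiral, and only that point's signed angle is transmitted, halving the dimension. The spiral spacing and the brute-force nearest-point search are assumptions for illustration, not the thesis's optimized construction:

import numpy as np

DELTA = 0.5                                     # radial spacing between spiral arms
theta = np.linspace(0.01, 12 * np.pi, 20000)    # sample points along each arm
arms = [(+1, theta), (-1, theta)]               # mirrored arm fills the gaps

def spiral(sign, th):
    r = DELTA / np.pi * th                      # Archimedean radius grows with angle
    return sign * r * np.cos(th), sign * r * np.sin(th)

def encode(x1, x2):
    best = None
    for sign, th in arms:                       # nearest point on the curve
        sx, sy = spiral(sign, th)
        d2 = (sx - x1) ** 2 + (sy - x2) ** 2
        i = int(np.argmin(d2))
        if best is None or d2[i] < best[0]:
            best = (d2[i], sign * th[i])        # signed angle is the channel symbol
    return best[1]

def decode(z):
    return spiral(np.sign(z), np.array([abs(z)]))

x = (0.8, -0.6)
z = encode(*x)                                  # one channel dimension for two samples
x_hat = decode(z)
print("sent angle:", round(float(z), 3), "decoded:",
      (round(float(x_hat[0][0]), 3), round(float(x_hat[1][0]), 3)))

The residual error here is the mapping's approximation noise; in the thesis the spacing DELTA is optimized against the CSNR so channel noise moves the decoded point along, rather than between, spiral arms.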
10

Zhao, Jing. "Information theoretic approach for low-complexity adaptive motion estimation." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013068.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Coding, information theory and compression"

1

Moffat, Alistair. Compression and Coding Algorithms. Boston, MA: Springer US, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Turpin, Andrew, ed. Compression and Coding Algorithms. Boston: Kluwer Academic Publishers, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

The mathematics of coding theory: Information, compression, error correction, and finite fields. Upper Saddle River, NJ: Pearson Prentice Hall, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Manstetten, Dietrich. Schranken für die Redundanz des Huffman-Codes [Bounds on the redundancy of the Huffman code]. Heidelberg: A. Hüthig, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Krichevsky, Rafail. Universal Compression and Retrieval. Dordrecht: Springer Netherlands, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wu shi zhen xin yuan bian ma jiu cuo yi ma li lun yu ji shu [Lossless source coding and error-correction decoding: theory and techniques]. Beijing: Guo fang gong ye chu ban she (National Defense Industry Press), 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Merhav, Neri. Fast inverse motion compensation algorithms for MPEG-2 and for partial DCT information. Palo Alto, CA: Hewlett-Packard Laboratories, Technical Publications Department, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rao, K. Ramamohan, and SPIE (the International Society for Optical Engineering), eds. Standards and common interfaces for video information systems: Proceedings of a conference held 25-26 October 1995, Philadelphia, Pennsylvania. Bellingham, Wash., USA: SPIE Optical Engineering Press, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Schuster, Guido M. Rate-distortion based video compression: Optimal video frame compression and object boundary encoding. Boston: Kluwer Academic Publishers, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hamming, R. W. Coding and information theory. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Coding, information theory and compression"

1

Caire, Giuseppe, Shlomo Shamai, Amin Shokrollahi, and Sergio Verdú. "Fountain codes for lossless data compression." In Algebraic Coding Theory and Information Theory, 1–20. Providence, Rhode Island: American Mathematical Society, 2005. http://dx.doi.org/10.1090/dimacs/068/01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ivaniš, Predrag, and Dušan Drajić. "Data Compression (Source Encoding)." In Information Theory and Coding - Solved Problems, 45–90. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49370-1_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gibson, Jerry. "Lossless Source Coding." In Information Theory and Rate Distortion Theory for Communications and Compression, 29–48. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-031-01680-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gibson, Jerry. "Rate Distortion Theory and Lossy Source Coding." In Information Theory and Rate Distortion Theory for Communications and Compression, 81–102. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-031-01680-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gray, Robert M. "Source Coding Theorems." In Entropy and Information Theory, 241–75. New York, NY: Springer New York, 1990. http://dx.doi.org/10.1007/978-1-4757-3982-4_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gray, Robert M. "Source Coding Theorems." In Entropy and Information Theory, 295–334. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-1-4419-7970-4_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Flueckiger, Gerald E. "Information Theory and Coding." In Control, Information, and Technological Change, 101–18. Dordrecht: Springer Netherlands, 1995. http://dx.doi.org/10.1007/978-94-011-0377-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hayashi, Masahito. "Source Coding in Quantum Systems." In Quantum Information Theory, 569–605. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-662-49725-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Borda, Monica. "Source Coding." In Fundamentals in Information Theory and Coding, 53–119. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20347-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Borda, Monica. "Channel Coding." In Fundamentals in Information Theory and Coding, 209–387. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20347-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Coding, information theory and compression"

1

Yang, Yuxiang, Ge Bai, Giulio Chiribella, and Masahito Hayashi. "Compression for quantum population coding." In 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017. http://dx.doi.org/10.1109/isit.2017.8006874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Yang, and Zixiang Xiong. "Distributed source coding without Slepian-Wolf compression." In 2009 Information Theory and Applications Workshop (ITA). IEEE, 2009. http://dx.doi.org/10.1109/ita.2009.5044974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yang, Yang, and Zixiang Xiong. "Distributed source coding without Slepian-Wolf compression." In 2009 IEEE International Symposium on Information Theory - ISIT. IEEE, 2009. http://dx.doi.org/10.1109/isit.2009.5205621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

"Source Coding and Data Compression." In 2021 IEEE 3rd International Conference on Advanced Trends in Information Theory (ATIT). IEEE, 2021. http://dx.doi.org/10.1109/atit54053.2021.9678788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Khalid, Syed Usama, and Muhammad Asim Noor. "UR coding - a novel algorithm for data compression." In 2010 IEEE International Conference on Information Theory and Information Security (ICITIS). IEEE, 2010. http://dx.doi.org/10.1109/icitis.2010.5688740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hucke, Danny, and Markus Lohrey. "Universal tree source coding using grammar-based compression." In 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017. http://dx.doi.org/10.1109/isit.2017.8006830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Asnani, Himanshu, Ilan Shomorony, A. Salman Avestimehr, and Tsachy Weissman. "Operational extremality of Gaussianity in network compression, communication, and coding." In 2013 IEEE Information Theory Workshop (ITW 2013). IEEE, 2013. http://dx.doi.org/10.1109/itw.2013.6691220.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ho, Siu-Wai, Lifeng Lai, and Alex Grant. "On the separation of encryption and compression in secure distributed source coding." In 2011 IEEE Information Theory Workshop (ITW). IEEE, 2011. http://dx.doi.org/10.1109/itw.2011.6089527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Si, Hongbo, O. Ozan Koyluoglu, and Sriram Vishwanath. "Lossy compression of exponential and Laplacian sources using expansion coding." In 2014 IEEE International Symposium on Information Theory (ISIT). IEEE, 2014. http://dx.doi.org/10.1109/isit.2014.6875395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Runxin, Junya Honda, Hirosuke Yamamoto, and Rongke Liu. "FV polar coding for lossy compression with an improved exponent." In 2015 IEEE International Symposium on Information Theory (ISIT). IEEE, 2015. http://dx.doi.org/10.1109/isit.2015.7282709.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Coding, information theory and compression"

1

Moran, William. Coding Theory, Information Theory and Radar. Fort Belvoir, VA: Defense Technical Information Center, September 2005. http://dx.doi.org/10.21236/ada456510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Calderbank, Arthur R. Coding Theory, Information Theory and Radar. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada434253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
