Journal articles on the topic 'Coding, information theory and compression'




Consult the top 50 journal articles for your research on the topic 'Coding, information theory and compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

KUNT, MURAT. "PROGRESS IN HIGH COMPRESSION IMAGE CODING." International Journal of Pattern Recognition and Artificial Intelligence 02, no. 03 (September 1988): 387–405. http://dx.doi.org/10.1142/s0218001488000236.

Full text
Abstract:
The digital representation of an image requires a very large number of bits. The goal of image coding is to reduce this number, as much as possible, and to reconstruct a faithful duplicate of the original picture. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1. This paper presents recent progress on some of the main avenues of object-based methods. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics, and scene analysis.
APA, Harvard, Vancouver, ISO, and other styles
2

Wei, Dahuan, and Gang Feng. "Compression and Storage Algorithm of Key Information of Communication Data Based on Backpropagation Neural Network." Mathematical Problems in Engineering 2022 (April 14, 2022): 1–9. http://dx.doi.org/10.1155/2022/2885735.

Full text
Abstract:
This paper presents a backpropagation neural network algorithm for data compression and data storage. Establishing a model and then coding is the most basic idea of traditional data compression. Traditional designs mainly aim at reducing the redundancy in the information and focus on coding design, and their compression ratios have hovered around tens of percent. After information has been compressed by traditional coding, it is difficult to compress it further with similar methods. To solve this problem, information that takes up less signal space can be used to represent information that takes up more signal space, realizing data compression. This new design idea breaks through the traditional limitation of relying only on coding to reduce data redundancy and achieves a higher compression ratio. At the same time, information compressed in this way can be compressed repeatedly, with very good performance. This is the basic idea behind the combination of neural networks and data compression introduced in this paper. Based on the theory of multiobjective function optimization, this paper puts forward a theoretical model of a multiobjective optimization neural network and studies a multiobjective data compression method based on the neural network. According to changes in data characteristics, this method automatically adjusts the structural parameters of the neural network (connection weights and bias values) to obtain the largest amount of data compression at the cost of a small information loss. The method is characterized by strong adaptability, parallel processing, distributed knowledge storage, and interference resistance. Experimental results show that, compared with other methods, the proposed method has significant advantages in performance indices, compression time, compression effect, efficiency, and robustness.
APA, Harvard, Vancouver, ISO, and other styles
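The abstract above describes compressing data by letting a backpropagation network adjust its connection weights and bias values so that a representation occupying less signal space stands in for one occupying more. As a rough illustration of that idea (not the authors' implementation), the sketch below trains a small bottleneck network by backpropagation on synthetic data; all dimensions and hyperparameters are assumptions.

```python
import numpy as np

# Toy bottleneck ("autoencoder") compressor trained by backpropagation.
# Shapes and hyperparameters are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))          # 256 samples, 16 values each
n_in, n_hid = 16, 4                      # compress 16 values into 4

W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

lr = 0.01
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)             # encoder: compressed representation
    Y = H @ W2 + b2                      # decoder: reconstruction
    err = Y - X                          # reconstruction error (information loss)
    # Backpropagation: adjust weights and biases to reduce the loss.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    gH = err @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH / len(X); gb1 = gH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("mean squared reconstruction error:", float((err ** 2).mean()))
print("compression factor (values kept):", n_in / n_hid)
```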
3

Zhou, Dale, Christopher W. Lynn, Zaixu Cui, Rastko Ciric, Graham L. Baum, Tyler M. Moore, David R. Roalf, et al. "Efficient coding in the economics of human brain connectomics." Network Neuroscience 6, no. 1 (2022): 234–74. http://dx.doi.org/10.1162/netn_a_00223.

Full text
Abstract:
Abstract In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8–23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior—beyond the conventional network efficiency metric—for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
APA, Harvard, Vancouver, ISO, and other styles
4

ROMEO, AUGUST, ENRIQUE GAZTAÑAGA, JOSE BARRIGA, and EMILIO ELIZALDE. "INFORMATION CONTENT IN UNIFORMLY DISCRETIZED GAUSSIAN NOISE: OPTIMAL COMPRESSION RATES." International Journal of Modern Physics C 10, no. 04 (June 1999): 687–716. http://dx.doi.org/10.1142/s0129183199000528.

Full text
Abstract:
We approach the theoretical problem of compressing a signal dominated by Gaussian noise. We present expressions for the compression ratio which can be reached, in the light of Shannon's noiseless coding theorem, for a linearly quantized stochastic Gaussian signal (noise). The compression ratio decreases logarithmically with the amplitude of the frequency spectrum P(f) of the noise. Entropy values and compression rates are shown to depend on the shape of this power spectrum, given different normalizations. The cases of white noise (w.n.), 1/f^p power-law noise (including 1/f noise), (w.n. + 1/f) noise, and piecewise (w.n. + 1/f | w.n. + 1/f^2) noise are discussed, and quantitative behaviors and useful approximations are provided.
APA, Harvard, Vancouver, ISO, and other styles
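Shannon's noiseless coding theorem, invoked in the abstract above, bounds the achievable compression of a linearly quantized Gaussian signal by its entropy. The sketch below is a numerical illustration under assumed parameters (unit-variance white Gaussian noise, an arbitrary quantization step, a 16-bit raw word length), not the paper's analytical expressions.

```python
import numpy as np

# Entropy of a linearly (uniformly) quantized Gaussian signal and the
# compression ratio implied by Shannon's noiseless coding theorem.
# Variance, step and word length are illustrative assumptions.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=1_000_000)      # white Gaussian noise, sigma = 1
step = 0.05                                   # linear quantization step
q = np.round(x / step).astype(int)            # quantized symbols

values, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
H = -(p * np.log2(p)).sum()                   # empirical entropy, bits/sample

bits_fixed = 16                               # assumed raw word length per sample
print(f"entropy ~ {H:.2f} bits/sample")
print(f"compression ratio vs {bits_fixed}-bit words ~ {bits_fixed / H:.2f}:1")
# Halving sigma (or doubling the step) lowers the entropy by about 1 bit/sample,
# i.e. the achievable ratio changes logarithmically with the noise amplitude.
```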
5

SGARRO, ANDREA, and LIVIU PETRIŞOR DINU. "POSSIBILISTIC ENTROPIES AND THE COMPRESSION OF POSSIBILISTIC DATA." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10, no. 06 (December 2002): 635–53. http://dx.doi.org/10.1142/s0218488502001697.

Full text
Abstract:
We revisit the possibilistic model for information sources recently put forward by the first author, as opposed to the standard probabilistic models of information theory. Based on an interpretation of possibilistic source coding inspired by utility functions, we define a notion of possibilistic entropy for a suitable class of interactive possibilistic sources and compare it with the possibilistic entropy of stationary non-interactive sources. Both entropies have a coding-theoretic nature, being obtained as limit values of the rates of optimal compression codes. We list properties of the two entropies, which might support their use as measures of "possibilistic ignorance".
APA, Harvard, Vancouver, ISO, and other styles
6

Silverstein, Steven M., Michael Wibral, and William A. Phillips. "Implications of Information Theory for Computational Modeling of Schizophrenia." Computational Psychiatry 1 (December 2017): 82–101. http://dx.doi.org/10.1162/cpsy_a_00004.

Full text
Abstract:
Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory—such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio—can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.
APA, Harvard, Vancouver, ISO, and other styles
7

Xu, Jie, Xiao Lin Jiang, and Xiao Yang Yu. "Based on Compression Sensing of Video Coding in Robot Servo System." Applied Mechanics and Materials 241-244 (December 2012): 1913–17. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.1913.

Full text
Abstract:
Robot visual servo control is currently a main research direction in robot control. This design is based on an analysis of compressive sensing theory, distributed video decoding, and image super-resolution reconstruction. Simulation and experimental results show that applying compressive sensing theory to image super-resolution reconstruction allows the reconstruction to make full use of the structural characteristics of the original low-resolution image and thus preserve information such as its edge details. Compared with the traditional calibration approach, the reconstructed high-resolution image improves the reconstruction of edges, textures, and other details, and improves recognition precision.
APA, Harvard, Vancouver, ISO, and other styles
8

Arora, H. D., and Anjali Dhiman. "Comparative Study of Generalized Quantitative-Qualitative Inaccuracy Fuzzy Measures for Noiseless Coding Theorem and 1:1 Codes." International Journal of Mathematics and Mathematical Sciences 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/258675.

Full text
Abstract:
In coding theory, we study various properties of codes for application in data compression, cryptography, error correction, and network coding. The study of codes is introduced in information theory, electrical engineering, mathematics, and computer science for the transmission of data through reliable and efficient methods. We have to consider how the coding of messages can be done efficiently so that the maximum number of messages can be sent over a noiseless channel in a given time. Thus, the minimum value of the mean codeword length subject to a given constraint on codeword lengths has to be found. In this paper, we introduce the mean codeword length of order α and type β for 1:1 codes and analyze the relationship between average codeword length and fuzzy information measures for binary 1:1 codes. Further, a noiseless coding theorem associated with the fuzzy information measure is established.
APA, Harvard, Vancouver, ISO, and other styles
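The abstract above concerns minimizing mean codeword length subject to a constraint on codeword lengths. The sketch below illustrates only the classical case, the Kraft inequality and the noiseless-coding bound L ≥ H for a toy source with an assumed probability distribution; the paper's generalized order-α, type-β fuzzy measures are not reproduced.

```python
import math

# Mean codeword length versus entropy for a toy memoryless source.
# The generalized (order-alpha, type-beta) fuzzy measures of the paper are
# not reproduced; this only illustrates the classical noiseless bound.
p = [0.5, 0.25, 0.125, 0.125]            # assumed symbol probabilities
lengths = [1, 2, 3, 3]                   # codeword lengths of a prefix code
                                         # e.g. 0, 10, 110, 111

kraft = sum(2 ** -l for l in lengths)    # Kraft inequality: must be <= 1
L_mean = sum(pi * li for pi, li in zip(p, lengths))
H = -sum(pi * math.log2(pi) for pi in p)

print(f"Kraft sum         = {kraft}")
print(f"mean codeword len = {L_mean} bits/symbol")
print(f"entropy H         = {H} bits/symbol   (noiseless bound: L >= H)")
```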
9

Megala, G., et al. "State-Of-The-Art In Video Processing: Compression, Optimization And Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Full text
Abstract:
Video compression plays a vital role in modern social media networking with a plethora of multimedia applications. It empowers the transmission medium to transfer videos competently and enables resources to store the video efficiently. Nowadays, high-resolution video data are transferred through communication channels with a high bit rate in order to send multiple compressed videos. There are many advances in transmission capability and efficient ways of storing these compressed videos, where compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts the large raw bits of a video sequence into a small compact one, achieving a high compression ratio with good video perceptual quality. Removing redundant information is the main task in video sequence compression. A survey of various block matching algorithms, quantization, and entropy coding is presented. It is found that many of the methods have high computational complexity and need improvement through optimization.
APA, Harvard, Vancouver, ISO, and other styles
10

Voges, Jan, Tom Paridaens, Fabian Müntefering, Liudmila S. Mainzer, Brian Bliss, Mingyu Yang, Idoia Ochoa, Jan Fostier, Jörn Ostermann, and Mikel Hernaez. "GABAC: an arithmetic coding solution for genomic data." Bioinformatics 36, no. 7 (December 12, 2019): 2275–77. http://dx.doi.org/10.1093/bioinformatics/btz922.

Full text
Abstract:
Motivation: In an effort to provide a response to the ever-expanding generation of genomic data, the International Organization for Standardization (ISO) is designing a new solution for the representation, compression and management of genomic sequencing data: the Moving Picture Experts Group (MPEG)-G standard. This paper discusses the first implementation of an MPEG-G compliant entropy codec: GABAC. GABAC combines proven coding technologies, such as context-adaptive binary arithmetic coding, binarization schemes and transformations, into a straightforward solution for the compression of sequencing data. Results: We demonstrate that GABAC outperforms well-established (entropy) codecs in a significant set of cases and thus can serve as an extension for existing genomic compression solutions, such as CRAM. Availability and implementation: The GABAC library is written in C++. We also provide a command line application which exercises all features provided by the library. GABAC can be downloaded from https://github.com/mitogen/gabac. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
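GABAC builds on context-adaptive binary arithmetic coding. The sketch below illustrates only the context-adaptive probability-model part: per-context counters whose ideal code length, −log2 p per bit, approximates what a binary arithmetic coder would spend. It is a generic illustration, not the GABAC code base or the MPEG-G bitstream format.

```python
import math
from collections import defaultdict

# Context-adaptive binary model: each context keeps (zeros, ones) counts, and
# the ideal cost of a bit is -log2 of its current estimated probability.
# This approximates a binary arithmetic coder's spend; it is an illustration,
# not the GABAC/MPEG-G implementation.
def adaptive_cost(bits, context_of):
    counts = defaultdict(lambda: [1, 1])        # Laplace-smoothed counters
    total = 0.0
    for i, b in enumerate(bits):
        ctx = context_of(bits, i)
        c0, c1 = counts[ctx]
        p = (c1 if b else c0) / (c0 + c1)
        total += -math.log2(p)                  # ideal code length for this bit
        counts[ctx][b] += 1                     # adapt the context model
    return total

bits = [1, 1, 1, 0, 1, 1, 1, 0] * 200           # highly repetitive toy input
prev_bit_ctx = lambda bs, i: bs[i - 1] if i else 2   # context = previous bit
print("raw size       :", len(bits), "bits")
print("adaptive model : %.1f bits" % adaptive_cost(bits, prev_bit_ctx))
```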
11

Li, Su Ying. "Redundant Use of Visual Images of Different Domain Information Steganographic Technique." Advanced Materials Research 912-914 (April 2014): 1327–30. http://dx.doi.org/10.4028/www.scientific.net/amr.912-914.1327.

Full text
Abstract:
Safely embedding secret information by making full use of human visual redundancy is the goal of information hiding. Existing image steganography techniques embed information in many kinds of image data. This paper applies the theory and methods of quantized signal compression coding to information hiding in cover images, proposes a steganographic method that makes full use of human visual redundancy in different domains of the image, and designs three new error correction methods to ensure accurate blind extraction and recovery of the ciphertext. Experiments show that the algorithm changes the spatial statistical properties of the image data only slightly, while providing larger hiding capacity and better security.
APA, Harvard, Vancouver, ISO, and other styles
12

Han, Bo, and Bolang Li. "Lossless Compression of Data Tables in Mobile Devices by Using Co-clustering." International Journal of Computers Communications & Control 11, no. 6 (October 17, 2016): 776. http://dx.doi.org/10.15837/ijccc.2016.6.2554.

Full text
Abstract:
Data tables have been widely used for storage of a collection of related records in a structured format in many mobile applications. The lossless compression of data tables not only brings benefits for storage, but also reduces network transmission latencies and energy costs in batteries. In this paper, we propose a novel lossless compression approach by combining co-clustering and information coding theory. It reorders table columns and rows simultaneously for shaping homogeneous blocks and further optimizes alignment within a block to expose redundancy, such that standard lossless encoders can significantly improve compression ratios. We tested the approach on a synthetic dataset and ten UCI real-life datasets by using the standard compressor 7Z. The extensive experimental results suggest that, compared with direct table compression without co-clustering and within-block alignment, our approach can boost compression rates by at least 21% and up to 68%. The results also show that the compression time cost of the co-clustering approach is linearly proportional to the data table size. In addition, since the inverse transform of co-clustering is just an exchange of rows and columns according to recorded indexes, the decompression procedure runs very fast and its time cost is similar to the counterpart without co-clustering. Thereby, our approach is suitable for lossless compression of data tables in mobile devices with constrained resources.
APA, Harvard, Vancouver, ISO, and other styles
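The approach above reorders rows and columns so that homogeneous blocks are exposed to a standard lossless encoder. The sketch below conveys the same intuition with crude stand-ins: lexicographic row sorting instead of co-clustering, zlib instead of 7Z, and an assumed synthetic table.

```python
import zlib
import numpy as np

# Reordering a table so that similar rows and columns sit next to each other
# exposes redundancy to a standard lossless compressor. Lexicographic row
# sorting and column sorting by mean are crude stand-ins for the paper's
# co-clustering; zlib stands in for 7Z.
rng = np.random.default_rng(2)
proto = rng.integers(0, 2, size=(2, 40))                 # two row prototypes
table = proto[rng.integers(0, 2, size=400)]              # 400 rows drawn from them
table = np.where(rng.random(table.shape) < 0.02, 1 - table, table)  # bit noise
table = table[rng.permutation(len(table))]               # shuffled storage order

def compressed_size(t):
    return len(zlib.compress(np.ascontiguousarray(t, dtype=np.uint8).tobytes(), 9))

rows = np.lexsort(table.T[::-1])                         # group similar rows
cols = np.argsort(table.mean(axis=0))                    # group similar columns
reordered = table[rows][:, cols]

print("original  :", compressed_size(table), "bytes")
print("reordered :", compressed_size(reordered), "bytes")   # typically smaller
# Decompression only needs the recorded row/column indexes to restore the layout.
```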
13

Song, Yuxuan, Minkai Xu, Lantao Yu, Hao Zhou, Shuo Shao, and Yong Yu. "Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5834–41. http://dx.doi.org/10.1609/aaai.v34i04.6041.

Full text
Abstract:
Although Shannon theory states that it is asymptotically optimal to separate the source and channel coding as two independent processes, in many practical communication scenarios this decomposition is limited by the finite bit-length and computational power for decoding. Recently, neural joint source-channel coding (NECST) (Choi et al. 2018) is proposed to sidestep this problem. While it leverages the advancements of amortized inference and deep learning (Kingma and Welling 2013; Grover and Ermon 2018) to improve the encoding and decoding process, it still cannot always achieve compelling results in terms of compression and error correction performance due to the limited robustness of its learned coding networks. In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme. More specifically, on the encoder side, we propose to explicitly maximize the mutual information between the codeword and data; while on the decoder side, the amortized reconstruction is regularized within an adversarial framework. Extensive experiments conducted on various real-world datasets evidence that our IABF can achieve state-of-the-art performances on both compression and error correction benchmarks and outperform the baselines by a significant margin.
APA, Harvard, Vancouver, ISO, and other styles
14

Wolff, J. Gerard. "Information Compression as a Unifying Principle in Human Learning, Perception, and Cognition." Complexity 2019 (February 20, 2019): 1–38. http://dx.doi.org/10.1155/2019/1879746.

Full text
Abstract:
This paper reviews evidence for the idea that much of human learning, perception, and cognition may be understood as information compression and often more specifically as “information compression via the matching and unification of patterns” (ICMUP). Evidence includes the following: information compression can mean selective advantage for any creature; the storage and utilisation of the relatively enormous quantities of sensory information would be made easier if the redundancy of incoming information was to be reduced; content words in natural languages, with their meanings, may be seen as ICMUP; other techniques for compression of information—such as class-inclusion hierarchies, schema-plus-correction, run-length coding, and part-whole hierarchies—may be seen in psychological phenomena; ICMUP may be seen in how we merge multiple views to make one, in recognition, in binocular vision, in how we can abstract object concepts via motion, in adaptation of sensory units in the eye of Limulus, the horseshoe crab, and in other examples of adaptation; the discovery of the segmental structure of language (words and phrases), grammatical inference, and the correction of over- and undergeneralisations in learning may be understood in terms of ICMUP; information compression may be seen in the perceptual constancies; there is indirect evidence for ICMUP in human cognition via kinds of redundancy such as the decimal expansion of π which are difficult for people to detect; much of the structure and workings of mathematics—an aid to human thinking—may be understood in terms of ICMUP; and there is additional evidence via the SP Theory of Intelligence and its realisation in the SP Computer Model. Three objections to the main thesis of this paper are described, with suggested answers. These ideas may be seen to be part of a “Big Picture” with six components, outlined in the paper.
APA, Harvard, Vancouver, ISO, and other styles
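Run-length coding, cited in the abstract above as one of the techniques interpretable as information compression via the matching and unification of patterns (ICMUP), can be sketched in a few lines: repeated matches of the same symbol are unified into a single (symbol, count) pair.

```python
# Run-length coding: repeated matches of the same symbol are unified into
# one (symbol, count) pair, a simple instance of compression via the
# matching and unification of patterns (ICMUP).
def run_length_encode(s):
    runs, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1                      # extend the matched run
        runs.append((s[i], j - i))      # unify the run into one pair
        i = j
    return runs

def run_length_decode(runs):
    return "".join(sym * n for sym, n in runs)

msg = "aaaaabbbbbbbbccd"
encoded = run_length_encode(msg)
print(encoded)                           # [('a', 5), ('b', 8), ('c', 2), ('d', 1)]
assert run_length_decode(encoded) == msg
```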
15

Nakahara, Yuta, and Toshiyasu Matsushima. "A Stochastic Model for Block Segmentation of Images Based on the Quadtree and the Bayes Code for It." Entropy 23, no. 8 (July 30, 2021): 991. http://dx.doi.org/10.3390/e23080991.

Full text
Abstract:
In information theory, lossless compression of general data is based on an explicit assumption of a stochastic generative model on target data. However, in lossless image compression, researchers have mainly focused on the coding procedure that outputs the coded sequence from the input image, and the assumption of the stochastic generative model is implicit. In these studies, there is a difficulty in discussing the difference between the expected code length and the entropy of the stochastic generative model. We solve this difficulty for a class of images, in which they have non-stationarity among segments. In this paper, we propose a novel stochastic generative model of images by redefining the implicit stochastic generative model in a previous coding procedure. Our model is based on the quadtree so that it effectively represents the variable block size segmentation of images. Then, we construct the Bayes code optimal for the proposed stochastic generative model. It requires the summation of all possible quadtrees weighted by their posterior. In general, its computational cost increases exponentially for the image size. However, we introduce an efficient algorithm to calculate it in the polynomial order of the image size without loss of optimality. As a result, the derived algorithm has a better average coding rate than that of JBIG.
APA, Harvard, Vancouver, ISO, and other styles
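The model above represents variable-block-size segmentation of an image with a quadtree. The toy sketch below splits a block into four sub-blocks whenever it is not uniform; this simple split criterion stands in for the paper's Bayes-weighted sum over all quadtrees and is only an illustration.

```python
import numpy as np

# Variable block size segmentation with a quadtree: a block is kept whole if
# it is uniform, otherwise it is split into four sub-blocks. The uniformity
# test stands in for the paper's Bayes-weighted model; it is only a sketch.
def quadtree(img, y=0, x=0, size=None):
    size = size if size is not None else img.shape[0]
    block = img[y:y + size, x:x + size]
    if size == 1 or block.min() == block.max():
        return [(y, x, size)]                       # leaf: one uniform block
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree(img, y + dy, x + dx, half)
    return leaves

img = np.zeros((8, 8), dtype=int)
img[0:4, 0:4] = 1                                   # one uniform quadrant
img[6, 6] = 1                                       # a single deviating pixel
blocks = quadtree(img)
print(len(blocks), "leaf blocks, e.g.", blocks[:3])
```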
16

Mondelli, Marco, S. Hamed Hassani, and Rüdiger Urbanke. "A New Coding Paradigm for the Primitive Relay Channel." Algorithms 12, no. 10 (October 18, 2019): 218. http://dx.doi.org/10.3390/a12100218.

Full text
Abstract:
We consider the primitive relay channel, where the source sends a message to the relay and to the destination, and the relay helps the communication by transmitting an additional message to the destination via a separate channel. Two well-known coding techniques have been introduced for this setting: decode-and-forward and compress-and-forward. In decode-and-forward, the relay completely decodes the message and sends some information to the destination; in compress-and-forward, the relay does not decode, and it sends a compressed version of the received signal to the destination using Wyner–Ziv coding. In this paper, we present a novel coding paradigm that provides an improved achievable rate for the primitive relay channel. The idea is to combine compress-and-forward and decode-and-forward via a chaining construction. We transmit over pairs of blocks: in the first block, we use compress-and-forward; and, in the second block, we use decode-and-forward. More specifically, in the first block, the relay does not decode, it compresses the received signal via Wyner–Ziv, and it sends only part of the compression to the destination. In the second block, the relay completely decodes the message, it sends some information to the destination, and it also sends the remaining part of the compression coming from the first block. By doing so, we are able to strictly outperform both compress-and-forward and decode-and-forward. Note that the proposed coding scheme can be implemented with polar codes. As such, it has the typical attractive properties of polar coding schemes, namely, quasi-linear encoding and decoding complexity, and error probability that decays at super-polynomial speed. As a running example, we take into account the special case of the erasure relay channel, and we provide a comparison between the rates achievable by our proposed scheme and the existing upper and lower bounds.
APA, Harvard, Vancouver, ISO, and other styles
17

Riznyk, V. V. "FORMALIZATION CODING METHODS OF INFORMATION UNDER TOROIDAL COORDINATE SYSTEMS." Radio Electronics, Computer Science, Control, no. 2 (July 7, 2021): 144–53. http://dx.doi.org/10.15588/1607-3274-2021-2-15.

Full text
Abstract:
Contents. Coding and processing large information content actualizes the problem of formalization of interdependence between information parameters of vector data coding systems on a single mathematical platform. Objective. The formalization of relationships between information parameters of vector data coding systems in the optimized basis of toroidal coordinate systems with the achievement of a favorable compromise between contradictory goals. Method. The method involves establishing a harmonious mutual penetration of symmetry and asymmetry as the remarkable property of real space, which allows the use of decoded information for forming the mathematical principle relating to the optimal placement of structural elements in spatially or temporally distributed systems, using novel designs based on the concept of Ideal Ring Bundles (IRBs). IRBs are cyclic sequences of positive integers which divide a symmetric sphere about the center of symmetry. The sums of connected sub-sequences of an IRB enumerate the set of partitions of a sphere exactly R times. Two- and multidimensional IRBs, namely the “Glory to Ukraine Stars”, are sets of t-dimensional vectors; each of them, as well as all their modular sums, enumerates the set of node points of the grid of a toroidal coordinate system with the corresponding sizes and dimensionality exactly R times. Moreover, we require that each indexed vector data item “category-attribute” corresponds one-to-one to the point with the eponymous set of coordinates of the coordinate system. Besides, a combination of binary code with vector weight digits of the database is allowed, and the set of all values of indexed vector data sets is the same as a set of numerical values. The underlying mathematical principle relates to the optimal placement of structural elements in spatially and/or temporally distributed systems, using novel designs based on t-dimensional “star” combinatorial configurations, including the appropriate algebraic theory of cyclic groups, number theory, modular arithmetic, and IRB geometric transformations. Results. The relationships of vector code information parameters (capacity, code size, dimensionality, number of encoding vectors) with geometric parameters of the coordinate system (dimension, dimensionality, and grid sizes) and vector data characteristics (number of attributes and number of categories, entity-attribute-value size list) have been formalized. The formula system is derived as a functional dependency between the above parameters, which allows achieving a favorable compromise between the contradictory goals (for example, the performance and reliability of the coding method). A theorem with corresponding corollaries about the maximum vector code size of conversion methods for t-dimensional indexed “category-attribute” data sets is proved. Theoretically, the existence of an infinitely large number of minimized bases, which give rise to numerous varieties of multidimensional “star” coordinate systems that can find practical application in modern and future multidimensional information technologies, is substantiated. Conclusions. The formalization provides, essentially, a new conceptual model of information systems for optimal coding and processing of big vector data, using novel designs based on the remarkable properties and structural perfection of the “Glory to Ukraine Stars” combinatorial configurations. Moreover, the optimization has been embedded in the underlying combinatorial models. The favorable qualities of the combinatorial structures can be applied to the vector data coded design of multidimensional signals, signal compression and reconstruction for communications and radar, and other areas in which the GUS model can be useful. There are many opportunities to apply them to numerous branches of science and advanced systems engineering, including information technologies under toroidal coordinate systems. Perfection, harmony and beauty exist not only in abstract models but in the real world as well.
APA, Harvard, Vancouver, ISO, and other styles
18

Mogheer, Hussein Shakor, and Khamees Khalaf Hasan. "Implementation of Clock Gating for Power Optimizing in Synchronous Design." Tikrit Journal of Engineering Sciences 25, no. 3 (August 8, 2018): 12–18. http://dx.doi.org/10.25130/tjes.25.3.03.

Full text
Abstract:
Huffman coding is a very important technique in information theory. Compression is a technique for reducing the amount of data used to represent any content without decreasing its quality. Furthermore, clock gating is an effective method for decreasing power consumption in a sequential design. It saves more power by dividing the main clock and distributing the clock to the logic blocks only when those blocks need to be activated. This paper aims to design the Huffman coding and decoding process and proposes a novel clock gating method to achieve low power consumption. The Huffman design is executed using ASIC design procedures. In order to implement the encoder and decoder structures, 130 nm typical cell technology libraries are utilized for the ASIC implementation. The simulations are completed using the ModelSim tool. The design of the coding and decoding process has been made using the Verilog HDL language. Moreover, it is carried out using Quartus II 14.1 Web Edition (64-Bit).
APA, Harvard, Vancouver, ISO, and other styles
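As a software reference for the Huffman coding that the paper implements in hardware (the Verilog encoder/decoder and the clock-gating logic themselves are not reproduced here), the following sketch builds a Huffman code from symbol frequencies and reports the encoded size.

```python
import heapq
from collections import Counter

# Reference model of Huffman code construction (software only; the paper's
# Verilog implementation and clock-gating scheme are not reproduced here).
def huffman_codes(text):
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)             # two least frequent trees
        f2, i, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))  # merge into one tree
    return heap[0][2]

text = "this is an example of huffman coding"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(f"{len(text) * 8} bits raw -> {len(encoded)} bits encoded")
```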
19

Hayashi, M., and K. Matsumoto. "Simple construction of quantum universal variable-length source coding." Quantum Information and Computation 2, Special (November 2002): 519–29. http://dx.doi.org/10.26421/qic2.s-2.

Full text
Abstract:
We give a simple construction of a quantum universal variable-length source code in which, independently of the information source, both the average error and the probability that the coding rate is greater than the entropy rate H(ρ̄_p) tend to 0. If H(ρ̄_p) is estimated, we can compress the coding rate to the admissible rate H(ρ̄_p) with a probability close to 1. However, when we perform a naive measurement for the estimation of H(ρ̄_p), the input state is demolished. By smearing the measurement, we successfully treat the trade-off between the estimation of H(ρ̄_p) and the non-demolition of the input state. Our protocol can be used not only for Schumacher's scheme but also for the compression of entangled states.
APA, Harvard, Vancouver, ISO, and other styles
20

Sajjad, Muhammad, Tariq Shah, Robinson-Julian Serna, Zagalo Enrique Suárez Aguilar, and Omaida Sepúlveda Delgado. "Fundamental Results of Cyclic Codes over Octonion Integers and Their Decoding Algorithm." Computation 10, no. 12 (December 14, 2022): 219. http://dx.doi.org/10.3390/computation10120219.

Full text
Abstract:
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection, error correction, data transmission, and data storage. Codes are studied by various scientific disciplines, such as information theory, electrical engineering, mathematics, linguistics, and computer science, to design efficient and reliable data transmission methods. Many authors in the previous literature have discussed codes over finite fields, Gaussian integers, quaternion integers, etc. In this article, the author defines octonion integers, fundamental theorems related to octonion integers, encoding, and decoding of cyclic codes over the residue class of octonion integers with respect to the octonion Mannheim weight one. The comparison of primes, lengths, cardinality, dimension, and code rate with respect to Quaternion Integers and Octonion Integers will be discussed.
APA, Harvard, Vancouver, ISO, and other styles
21

Liu, Yuansheng, Limsoon Wong, and Jinyan Li. "Allowing mutations in maximal matches boosts genome compression performance." Bioinformatics 36, no. 18 (June 17, 2020): 4675–81. http://dx.doi.org/10.1093/bioinformatics/btaa572.

Full text
Abstract:
Motivation: A maximal match between two genomes is a contiguous non-extendable sub-sequence common to the two genomes. DNA bases mutate very often from the genome of one individual to another. When a mutation occurs in a maximal match, it breaks the maximal match into shorter match segments. The coding cost of using these broken segments for reference-based genome compression is much higher than that of using a maximal match which is allowed to contain mutations. Results: We present memRGC, a novel reference-based genome compression algorithm that leverages mutation-containing matches (MCMs) for genome encoding. MemRGC detects maximal matches between two genomes using a coprime double-window k-mer sampling search scheme; the method then extends these matches to cover mismatches (mutations) and their neighbouring maximal matches to form long MCMs. Experiments reveal that memRGC boosts the compression performance by an average of 27% in reference-based genome compression. MemRGC is also better than the best state-of-the-art methods on all of the benchmark datasets, sometimes better by 50%. Moreover, memRGC uses much less memory and fewer decompression resources, while providing comparable compression speed. These advantages are of significant benefit to genome data storage and transmission. Availability and implementation: https://github.com/yuansliu/memRGC. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
22

Gomathi, R., and A. Vincent Antony Kumar. "A Multiresolution Image Completion Algorithm for Compressing Digital Color Images." Journal of Applied Mathematics 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/757318.

Full text
Abstract:
This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of the portions purposefully. At the same time, edges are extracted from the input image and they are passed to the decoder in a compressed manner. The edges which are transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Textural synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still keeping edges as well as the TV model. In the proposed scheme, a neural network is employed to enhance the compression ratio for image coding. Test results are compared with JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.
APA, Harvard, Vancouver, ISO, and other styles
23

Zenil, Hector, Fernando Soler-Toscano, Jean-Paul Delahaye, and Nicolas Gauvrit. "Two-dimensional Kolmogorov complexity and an empirical validation of the Coding theorem method by compressibility." PeerJ Computer Science 1 (September 30, 2015): e23. http://dx.doi.org/10.7717/peerj-cs.23.

Full text
Abstract:
We propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimations to the complexity of the generated patterns. Experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that results are in agreement with the results obtained using lossless compression algorithms when both methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of Elementary Cellular Automata.
APA, Harvard, Vancouver, ISO, and other styles
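The paper above compares its algorithmic-probability estimates against lossless compression. The sketch below shows the compression-based side of that comparison: a standard lossless compressor (zlib, as an arbitrary choice) used as a crude upper bound on the complexity of strings of varying regularity.

```python
import random
import zlib

# Crude upper bound on algorithmic (Kolmogorov) complexity via lossless
# compression: more regular strings compress to fewer bytes. This is the
# compression-based baseline that CTM estimates are compared against;
# zlib stands in for any standard lossless compressor.
def compressed_length(s):
    return len(zlib.compress(s.encode(), 9))

random.seed(0)
samples = {
    "periodic": "01" * 512,
    "nested  ": ("0011" * 16 + "0101" * 16) * 16,
    "random  ": "".join(random.choice("01") for _ in range(1024)),
}
for name, s in samples.items():
    print(name, len(s), "symbols ->", compressed_length(s), "bytes")
```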
24

Planton, Samuel, Timo van Kerkoerle, Leïla Abbih, Maxime Maheu, Florent Meyniel, Mariano Sigman, Liping Wang, Santiago Figueira, Sergio Romano, and Stanislas Dehaene. "A theory of memory for binary sequences: Evidence for a mental compression algorithm in humans." PLOS Computational Biology 17, no. 1 (January 19, 2021): e1008598. http://dx.doi.org/10.1371/journal.pcbi.1008598.

Full text
Abstract:
Working memory capacity can be improved by recoding the memorized information in a condensed form. Here, we tested the theory that human adults encode binary sequences of stimuli in memory using an abstract internal language and a recursive compression algorithm. The theory predicts that the psychological complexity of a given sequence should be proportional to the length of its shortest description in the proposed language, which can capture any nested pattern of repetitions and alternations using a limited number of instructions. Five experiments examine the capacity of the theory to predict human adults’ memory for a variety of auditory and visual sequences. We probed memory using a sequence violation paradigm in which participants attempted to detect occasional violations in an otherwise fixed sequence. Both subjective complexity ratings and objective violation detection performance were well predicted by our theoretical measure of complexity, which simply reflects a weighted sum of the number of elementary instructions and digits in the shortest formula that captures the sequence in our language. While a simpler transition probability model, when tested as a single predictor in the statistical analyses, accounted for significant variance in the data, the goodness-of-fit with the data significantly improved when the language-based complexity measure was included in the statistical model, while the variance explained by the transition probability model largely decreased. Model comparison also showed that shortest description length in a recursive language provides a better fit than six alternative previously proposed models of sequence encoding. The data support the hypothesis that, beyond the extraction of statistical knowledge, human sequence coding relies on an internal compression using language-like nested structures.
APA, Harvard, Vancouver, ISO, and other styles
25

Sunil Kumar, M., et al. "Channel State Information (CSI) based Sparse Reconstruction for Biomedical Applications Using hybrid mm-WAVE MIMO System." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 1557–68. http://dx.doi.org/10.17762/turcomat.v12i3.965.

Full text
Abstract:
Channel State Information (CSI) is essential in a hybrid mm-WAVE Multiple Input Multiple Output (MIMO) system due to its direct impact on the medium capacity and energy efficiency of a network. Therefore, a Channel State Information (CSI)-based Sparse Reconstruction (CSISR) technique is adopted for effective evaluation of CSI for future 5G cellular network implementation. A hybrid mm-WAVE MIMO communication system is also employed for effective bandwidth spectrum utilization. Furthermore, a joint sparse coding algorithm is introduced to study the channel matrices of the hybrid mm-WAVE MIMO system. The proposed CSISR technique ensures proficient signal reconstruction, signal compression and resource reduction by exploiting the sparsity of the channel matrix. The proposed CSISR technique also operates under low SNR conditions for the hybrid mm-WAVE MIMO system, with optimization of pre-processors and combiners. The throughput performance of the proposed CSISR technique is measured against conventional algorithms considering power consumption, Normalized Mean Square Error (NMSE) and spectral efficiency of the mm-Wave MIMO system. The superiority of the proposed CSISR technique is concluded based on simulations considering different system configurations and performance metrics.
APA, Harvard, Vancouver, ISO, and other styles
26

Painsky, Amichai, Meir Feder, and Naftali Tishby. "Nonlinear Canonical Correlation Analysis:A Compressed Representation Approach." Entropy 22, no. 2 (February 12, 2020): 208. http://dx.doi.org/10.3390/e22020208.

Full text
Abstract:
Canonical Correlation Analysis (CCA) is a linear representation learning method that seeks maximally correlated variables in multi-view data. Nonlinear CCA extends this notion to a broader family of transformations, which are more powerful in many real-world applications. Given the joint probability, the Alternating Conditional Expectation (ACE) algorithm provides an optimal solution to the nonlinear CCA problem. However, it suffers from limited performance and an increasing computational burden when only a finite number of samples is available. In this work, we introduce an information-theoretic compressed representation framework for the nonlinear CCA problem (CRCCA), which extends the classical ACE approach. Our suggested framework seeks compact representations of the data that allow a maximal level of correlation. This way, we control the trade-off between the flexibility and the complexity of the model. CRCCA provides theoretical bounds and optimality conditions, as we establish fundamental connections to rate-distortion theory, the information bottleneck and remote source coding. In addition, it allows a soft dimensionality reduction, as the compression level is determined by the mutual information between the original noisy data and the extracted signals. Finally, we introduce a simple implementation of the CRCCA framework, based on lattice quantization.
APA, Harvard, Vancouver, ISO, and other styles
27

Fushing, Hsieh, and Tania Roy. "Complexity of possibly gapped histogram and analysis of histogram." Royal Society Open Science 5, no. 2 (February 2018): 171026. http://dx.doi.org/10.1098/rsos.171026.

Full text
Abstract:
We demonstrate that gaps and distributional patterns embedded within real-valued measurements are inseparable biological and mechanistic information contents of the system. Such patterns are discovered through data-driven possibly gapped histogram, which further leads to the geometry-based analysis of histogram (ANOHT). Constructing a possibly gapped histogram is a complex problem of statistical mechanics due to the ensemble of candidate histograms being captured by a two-layer Ising model. This construction is also a distinctive problem of Information Theory from the perspective of data compression via uniformity. By defining a Hamiltonian (or energy) as a sum of total coding lengths of boundaries and total decoding errors within bins, this issue of computing the minimum energy macroscopic states is surprisingly resolved by applying the hierarchical clustering algorithm. Thus, a possibly gapped histogram corresponds to a macro-state. And then the first phase of ANOHT is developed for simultaneous comparison of multiple treatments, while the second phase of ANOHT is developed based on classical empirical process theory for a tree-geometry that can check the authenticity of branches of the treatment tree. The well-known Iris data are used to illustrate our technical developments. Also, a large baseball pitching dataset and a heavily right-censored divorce data are analysed to showcase the existential gaps and utilities of ANOHT.
APA, Harvard, Vancouver, ISO, and other styles
28

Balonin, Nikolay, Alexander Sergeev, and Olga Sinitshina. "Finite field and group algorithms for orthogonal sequence search." Information and Control Systems, no. 4 (September 13, 2021): 2–17. http://dx.doi.org/10.31799/1684-8853-2021-4-2-17.

Full text
Abstract:
Introduction: Hadamard matrices consisting of elements 1 and –1 are an ideal object for a visual application of finite dimensional mathematics operating with a finite number of addresses for –1 elements. The notation systems of abstract algebra methods, in contrast to the conventional matrix algebra, have been changing intensively, without being widely spread, leading to the necessity to revise and systematize the accumulated experience. Purpose: To describe the algorithms of finite fields and groups in a uniform notation in order to facilitate the perception of the extensive knowledge necessary for finding orthogonal and suborthogonal sequences. Results: Formulas have been proposed for calculating relatively unknown algorithms (or their versions) developed by Scarpis, Singer, Szekeres, Goethal — Seidel, and Noboru Ito, as well as polynomial equations used to prove the theorems about the existence of finite-dimensional solutions. This replenished the significant lack of information both in the domestic literature (most of these issues are published here for the first time) and abroad. Practical relevance: Orthogonal sequences and methods for their effective finding via the theory of finite fields and groups are of direct practical importance for noise-immune coding, compression and masking of video data.
APA, Harvard, Vancouver, ISO, and other styles
29

Balonin, Nikolay, Mikhail Sergeev, and Anton Vostrikov. "Prime Fermat numbers and maximum determinant matrix conjecture." Information and Control Systems, no. 2 (April 20, 2020): 2–9. http://dx.doi.org/10.31799/1684-8853-2020-2-2-9.

Full text
Abstract:
Purpose: Solution to the problem of optimizing the determinants of matrices with a modulus of entries < 1. Developing a theory of such matrices based on preliminary research results. Methods: Extreme solutions (in terms of the determinant) are found by minimizing the absolute values of orthogonal matrix elements, and their subsequent classification. Results: Matrices of orders equal to prime Fermat numbers have been found. They are special, as their absolute determinant maximums can be reached on a simple structure. We provide a precise evaluation of the determinant maximum for these matrices and formulate a conjecture about it. We discuss the close relation between the solutions of extremal problems with the limitation on the matrix column orthogonality and without it. It has been shown that relative maximums of orthogonality-limited matrix determinants correspond to absolute maximums of orthogonality-unlimited matrix determinants. We also discuss the ways to build extremal matrix families for the orders equal to Mersenne numbers. Practical relevance: Maximum determinant matrices are used extensively in the problems of error-free coding, compression and masking of video information. Programs for maximum determinant matrix search and a library of constructed matrices are used in the mathematical network “mathscinet.ru” along with executable online algorithms.
APA, Harvard, Vancouver, ISO, and other styles
30

Kobozeva, Alla, and Ivan Bobok. "DEVELOPMENT OF A STEGANOGRAPHIC METHOD RESISTANT TO ATTACKS AGAINST BUILT-IN MESSAGE." Information systems and technologies security, no. 1 (2) (2020): 16–22. http://dx.doi.org/10.17721/ists.2020.1.16-22.

Full text
Abstract:
Features of modern network communications make it necessary to use, when organizing hidden-channel communication, steganographic algorithms that are resistant to lossy compression, which keeps the task of developing new effective steganographic methods relevant. The paper develops a new block steganographic method which is resistant to attacks against the embedded message, including strong attacks. Owing to the mathematical basis used, the method preserves the reliability of perception of the formed stego image. It is based on a general approach to the analysis of the state and functioning technology of information systems, matrix analysis, and perturbation theory. A digital image is treated as a container. The bandwidth of a hidden channel built using the developed method is equal to n-2 bpp, where n×n is the size of the container blocks obtained by the standard partition of its matrix. Such bandwidth is achieved with any algorithmic implementation of the method. The additional information is a binary sequence resulting from the pre-coding of the information to be hidden. The additional information is embedded using formal container matrix parameters that are insensitive to perturbation, namely the singular values of its small blocks (n≤8). Increasing the maximum singular value of the block, which occurs when embedding additional information, makes the method stable against the perturbing action and ensures the reliability of perception of the stego image. The magnitude of the increase in the maximum singular value is determined using the values obtained by raising the singular values of the block to a natural power k. Algorithmic implementation of the method requires additional studies to determine the parameter k.
APA, Harvard, Vancouver, ISO, and other styles
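The method above embeds bits by increasing the maximum singular value of small container blocks (n ≤ 8). The sketch below shows only that core linear-algebra step with an assumed fixed increment; the paper's rule based on singular values raised to a power k, the pre-coding, and the extraction procedure are not reproduced.

```python
import numpy as np

# Core step of SVD-based block embedding: increase the largest singular value
# of an 8x8 block and rebuild it. The fixed increment is an assumed placeholder
# for the paper's rule (derived from singular values raised to a power k);
# pre-coding and extraction are omitted.
rng = np.random.default_rng(3)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one container block

U, s, Vt = np.linalg.svd(block)
s_marked = s.copy()
s_marked[0] += 25.0                        # embed a bit by raising sigma_max
marked = U @ np.diag(s_marked) @ Vt

print("sigma_max before:", round(s[0], 1), " after:", round(s_marked[0], 1))
print("max pixel change:", round(np.abs(marked - block).max(), 2))
# Because sigma_max is insensitive to perturbation, the embedded change tends
# to survive moderate lossy compression while pixel changes stay small.
```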
31

Balonin, Nikolay, and Mikhail Sergeev. "Odin and Shadow Cretan matrices accompanying primes and their powers." Information and Control Systems, no. 1 (March 2, 2022): 2–7. http://dx.doi.org/10.31799/1684-8853-2022-1-2-7.

Full text
Abstract:
Introduction: Cretan matrices – orthogonal matrices consisting of the elements 1 and –b (a real number) – are an ideal object for the visual application of finite-dimensional mathematics. These matrices include, in particular, the Hadamard matrices and, with the expansion of the number of elements, the conference matrices. The most convenient research apparatus is the use of field theory and multiplicative Galois groups, which is especially important for new types of Cretan matrices. Purpose: To study the symmetries of the Cretan matrices and to investigate two new types of matrices of odd and even orders, distinguished by their symmetries, which differ significantly from the previously known Mersenne, Euler and Fermat matrices. Results: Formulas for the levels are given, and the symmetries of the new Cretan matrices are described: Odin bicycles (with a border) of orders 4t – 1 and 4t – 3, and shadow matrices of orders 4t – 2 and 4t – 4. For odd character sizes equal to prime numbers and powers of primes, the existence of special types of matrix symmetries, doubly symmetric ones consisting of skew-symmetric (with respect to the signs of elements) and symmetric cyclic blocks, is proved. It is shown that the previously distinguished Cretan matrices are their special case: Mersenne matrices of orders 4t – 1 and Euler matrices of orders 4t – 2, existing in the absence of symmetry for all selected orders without exception. Practical relevance: Orthogonal sequences and methods of their effective finding by the theory of finite fields and groups are of direct practical importance for the problems of noise-immune coding, compression and masking of video information.
APA, Harvard, Vancouver, ISO, and other styles
32

Balonin, Nikolay, and Alexander Sergeev. "Hadamard matrices as a result of Scarpis product without cyclic shifts." Information and Control Systems, no. 3 (June 24, 2022): 2–8. http://dx.doi.org/10.31799/1684-8853-2022-3-2-8.

Full text
Abstract:
Introduction: Orthogonal Hadamard matrices consisting of elements 1 and –1 (real number) exist for orders that are multiples of 4. The study considers the product of an orthogonal Hadamard matrix and its core, which is called the Scarpis product, and is similar in meaning to the Kronecker product. Purpose: To show by revealing the symmetries of the block Hadamard matrices that their observance contributes to a product that generalizes the Scarpis method to the nonexistence of a finite field. Results: The study demonstrates that orthogonality is an invariant of the product under discussion, subject to the two conditions: one of the multipliers is inserted into the other one, the sign of the elements of the second multiplier taken into account (the Kronecker product), but with a selective action of the sign on the elements and, most importantly, with the cyclic permutation of the core which depends on the insertion location. The paper shows that such shifts can be completely avoided by using symmetries that are characteristic of the universal forms of Hadamard matrices. In addition, this technique is common for many varieties of adjustable Kronecker products. Practical relevance: Orthogonal sequences and effective methods for their finding by the theory of finite fields and groups are of direct practical importance for the problems of noiseless coding, video compression and visual masking.
APA, Harvard, Vancouver, ISO, and other styles
33

Dvornikov, Sergey, Sergey Dvornikov, and Andrew Ustinov. "Analysis of the Correlation Properties of the Wavelet Transform Coefficients of Typical Images." Informatics and Automation 21, no. 5 (September 28, 2022): 983–1015. http://dx.doi.org/10.15622/ia.21.5.6.

Full text
Abstract:
The increasing flow of photo and video information transmitted through the channels of infocommunication systems and complexes stimulates the search for effective compression algorithms that can significantly reduce the volume of transmitted traffic, while maintaining its quality. In the general case, the compression algorithms are based on the operations of converting the correlated brightness values of the pixels of the image matrix into their uncorrelated parameters, followed by encoding the obtained conversion coefficients. Since the main known decorrelating transformations are quasi-optimal, the task of finding transformations that take into account changes in the statistical characteristics of compressed video data is still relevant. These circumstances determined the direction of the study, related to the analysis of the decorrelating properties of the generated wavelet coefficients obtained as a result of multi-scale image transformation. The main result of the study was to establish the fact that the wavelet coefficients of the multi-scale transformation have the structure of nested matrices defined as submatrices. Therefore, it is advisable to carry out the correlation analysis of the wavelet transformation coefficients separately for the elements of each submatrix at each level of decomposition. The main theoretical result is the proof that the core of each subsequent level of the multi-scale transformation is a matrix consisting of the wavelet coefficients of the previous level of decomposition. It is this fact that makes it possible to draw a conclusion about the dependence of the corresponding elements of neighboring levels. In addition, it has been found that there is a linear relationship between the wavelet coefficients within a local area of the image with a size of 8×8 pixels. In this case, the maximum correlation of submatrix elements is directly determined by the form of their representation, and is observed between neighboring elements located, respectively, in a row, column or diagonally, which is confirmed by the nature of the scatter. The obtained results were confirmed by the analysis of samples from more than two hundred typical images. At the same time, it is substantiated that between the low-frequency wavelet coefficients of the multi-scale transformation of the upper level of the expansion, approximately the same dependences are preserved uniformly in all directions. The practical significance of the study is determined by the fact that all the results obtained in the course of its implementation confirm the presence of characteristic dependencies between the wavelet transform coefficients at different levels of image decomposition. This fact indicates the possibility of achieving higher compression ratios of video data in the course of their encoding. The authors associate further research with the development of a mathematical model for adaptive arithmetic coding of video data and images, which takes into account the correlation properties of wavelet coefficients of a multi-scale transformation.
APA, Harvard, Vancouver, ISO, and other styles
34

Vostrikov, Anton. "Matrix vitrages and regular Hadamard matrices." Information and Control Systems, no. 5 (October 26, 2021): 2–9. http://dx.doi.org/10.31799/1684-8853-2021-5-2-9.

Full text
Abstract:
Introduction: The Kronecker product of Hadamard matrices, in which a matrix of order n replaces each element of another matrix of order m and inherits the sign of the replaced element, is a basis for obtaining orthogonal matrices of order nm. The matrix insertion operation in which not only signs but also structural elements (ornamental patterns of matrix portraits) are inherited provides a more general result called a "vitrage". Vitrages based on typical quasi-orthogonal Mersenne (M), Seidel (S) or Euler (E) matrices, in addition to inheriting the sign and pattern, inherit the value of elements other than unity (in amplitude) in a different way, making it necessary to revise and systematize the accumulated experience. Purpose: To describe new algorithms for the generalized product of matrices, highlighting the constructions that produce regular high-order Hadamard matrices. Results: We have proposed an algorithm for obtaining matrix vitrages by inserting Mersenne matrices into Seidel matrices, which makes it possible to expand the additive chains of matrices of the form M-E-M-E-… and S-E-M-E-… obtained by doubling the orders and adding an edge. The operation of forming a matrix vitrage makes it possible to obtain matrices of high orders while keeping the ornamental pattern as an important invariant of the structure. We have shown that the formation of a matrix vitrage inherits the logic of the Scarpis product but cannot be reduced to it, since a nonzero distance in order between the multiplicands M and S simplifies the ornamental pattern of the final regular matrix due to the absence of cyclic displacements. The alternation of M and S matrices makes it possible to extend the multiplicative chains up to the known gaps in the S matrices. This sheds new light on the theory of regular Hadamard matrices as products of Mersenne and Seidel matrices. Practical relevance: Orthogonal sequences with floating levels and efficient algorithms for finding regular Hadamard matrices with certain useful properties are of direct practical importance for the problems of noise-proof coding, compression and masking of video data.
APA, Harvard, Vancouver, ISO, and other styles
35

Balonin, N. A., and M. B. Sergeev. "Helping Hadamard conjecture to become a theorem. Part 1." Information and Control Systems, no. 6 (December 18, 2018): 2–13. http://dx.doi.org/10.31799/1684-8853-2018-6-2-13.

Full text
Abstract:
Introduction: The Hadamard conjecture about the existence of specific square matrices was formulated not by Hadamard but by other mathematicians in the early 20th century. Later, this problem was revised by Ryser together with Bruck and Chowla, and also by Hall, one of the founders of discrete mathematics. This is a problem of the boundary mixed type, as it includes both continuous and discrete components. The combinatorial approach used in the framework of the discrete component had run its course by the end of the century. The article discusses an alternative based on both concepts. Purpose: To analyze the reasons why the conjecture about the existence of Hadamard matrices of all orders n = 4t is considered unproven, and to propose possible ways to prove it. Methods: Transition, by lowering the order to n = 4t – 2, to two-level quasi-orthogonal matrices with elements 1 and –b, whose existence on all specified orders is not a difficult problem due to the possible irrationality of their entries; subsequent construction of a chain of transformations to matrix orders n = 4t – 1, n = 4t, n = 4t + 1. Results: It is proved that Gauss points on the spheroid x² + 2y² + z² = n are in one-to-one correspondence with symmetric Hadamard matrices (constructed on the basis of the Balonin — Seberry arrays), covering the gaps on the unsolvable orders 140, 112, etc. known in Williamson's array theory. Solution tables are found and systematized, which include the so-called «best» three-block matrices L(p, q), where p ≥ q is the number of non-conjugated symmetric matrices of the order in question, and q is the number of block-symmetric matrices which coincide with Williamson's solutions. The iterative Procrustes algorithm, which reduces the norm of the maximum entry in a matrix, is proposed for obtaining Hadamard matrices by searching for local and global conditional extremes of the determinant. Practical relevance: The obtained Hadamard matrices and quasi-orthogonal matrices of orders n = 4t – 2, n = 4t – 1, n = 4t + 1 are of immediate practical importance for the problems of noise-resistant coding, compression and masking of video information.
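The Gauss points mentioned in the results can be enumerated directly for small orders. The brute-force sketch below simply lists integer triples (x, y, z) with x² + 2y² + z² = n; it illustrates the lattice-point count only and does not construct the corresponding symmetric Hadamard matrices.

```python
def gauss_points(n):
    """All integer triples (x, y, z) with x^2 + 2*y^2 + z^2 == n (brute force)."""
    pts = set()
    r = int(n ** 0.5)
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            rest = n - x * x - 2 * y * y
            if rest < 0:
                continue
            z = int(round(rest ** 0.5))
            if z * z == rest:
                pts.add((x, y, z))
                pts.add((x, y, -z))
    return sorted(pts)

# Example: count lattice points on the spheroid for a few small orders n = 4t.
for n in (4, 12, 20):
    print(n, len(gauss_points(n)))
```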
APA, Harvard, Vancouver, ISO, and other styles
36

Balonina, N. A., and M. B. Sergeeva. "Helping Hadamard conjecture to become a theorem. Part 2." Information and Control Systems, no. 1 (February 19, 2019): 2–10. http://dx.doi.org/10.31799/1684-8853-2019-1-2-10.

Full text
Abstract:
Introduction: The Hadamard conjecture about the existence of specific square matrices was formulated not by Hadamard but by other mathematicians in the early 20th century. Later, this problem was revised by Ryser together with Bruck and Chowla, and also by Hall, one of the founders of discrete mathematics. This is a problem of the boundary mixed type, as it includes both continuous and discrete components. The combinatorial approach used in the framework of the discrete component had run its course by the end of the century. The article discusses an alternative based on both concepts. Purpose: To analyze the reasons why the conjecture about the existence of Hadamard matrices of all orders n = 4t is considered unproven, and to propose possible ways to prove it. Methods: Transition, by lowering the order to n = 4t – 2, to two-level quasi-orthogonal matrices with elements 1 and –b, whose existence on all specified orders is not a difficult problem due to the possible irrationality of their entries; subsequent construction of a chain of transformations to matrix orders n = 4t – 1, n = 4t, n = 4t + 1. Results: It is proved that Gauss points on the spheroid x² + 2y² + z² = n are in one-to-one correspondence with symmetric Hadamard matrices (constructed on the basis of the Balonin — Seberry arrays), covering the gaps on the unsolvable orders 140, 112, etc. known in Williamson's array theory. Solution tables are found and systematized, which include the so-called «best» three-block matrices L(p, q), where p ≥ q is the number of non-conjugated symmetric matrices of the order in question, and q is the number of block-symmetric matrices which coincide with Williamson's solutions. The iterative Procrustes algorithm, which reduces the norm of the maximum entry in a matrix, is proposed for obtaining Hadamard matrices by searching for local and global conditional extremes of the determinant. Practical relevance: The obtained Hadamard matrices and quasi-orthogonal matrices of orders n = 4t – 2, n = 4t – 1, n = 4t + 1 are of immediate practical importance for the problems of noise-resistant coding, compression and masking of video information.
APA, Harvard, Vancouver, ISO, and other styles
37

Yang, Yuxiang, Ge Bai, Giulio Chiribella, and Masahito Hayashi. "Compression for Quantum Population Coding." IEEE Transactions on Information Theory 64, no. 7 (July 2018): 4766–83. http://dx.doi.org/10.1109/tit.2017.2788407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Baylis, John, Gareth A. Jones, and J. Mary Jones. "Information and Coding Theory." Mathematical Gazette 85, no. 503 (July 2001): 377. http://dx.doi.org/10.2307/3622076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jayant, Nikil. "Signal Compression." International Journal of High Speed Electronics and Systems 08, no. 01 (March 1997): 1–12. http://dx.doi.org/10.1142/s0129156497000020.

Full text
Abstract:
This article is an introduction to a special issue on signal coding and compression. We begin by defining the concepts of digital coding and audiovisual signal compression. We then describe the four dimensions of coding performance: bit rate, signal quality, processing delay, and complexity. We illustrate the two basic principles of audiovisual coding, the removal of signal redundancy and the matching of the quantizing system to the properties of the human perceptual system, with specific recent examples of coding algorithms. We then summarize standards for, and applications of, audiovisual signal compression. A fast-emerging application is the internetworking of audiovisual information, a field too recent to be covered in the articles in this collection. We conclude by presenting our views on future research directions in the field.
APA, Harvard, Vancouver, ISO, and other styles
40

Borst, Alexander, and Frédéric E. Theunissen. "Information theory and neural coding." Nature Neuroscience 2, no. 11 (November 1999): 947–57. http://dx.doi.org/10.1038/14731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Abliz, Wayit, Hao Wu, Maihemuti Maimaiti, Jiamila Wushouer, Kahaerjiang Abiderexiti, Tuergen Yibulayin, and Aishan Wumaier. "A Syllable-Based Technique for Uyghur Text Compression." Information 11, no. 3 (March 23, 2020): 172. http://dx.doi.org/10.3390/info11030172.

Full text
Abstract:
To improve the utilization of text storage resources and the efficiency of data transmission, we proposed two syllable-based Uyghur text compression coding schemes. First, according to statistics of syllable coverage of the corpus text, we constructed 12-bit and 16-bit syllable code tables and added commonly used symbols, such as punctuation marks and ASCII characters, to the tables. To enable the coding scheme to process Uyghur texts mixed with symbols of other languages, we introduced a flag code in the compression process to distinguish Unicode encodings that are not in the code table. The experiments showed that the 12-bit coding scheme had an average compression ratio of 0.3 on Uyghur text less than 4 KB in size and that the 16-bit coding scheme had an average compression ratio of 0.5 on text less than 2 KB in size. Our compression schemes outperformed GZip, BZip2, and the LZW algorithm on short text and can be effectively applied to the compression of Uyghur short text for storage and applications.
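A toy sketch of the general idea, a fixed-width code table for frequent units plus an escape flag for symbols outside the table, is given below. The table, the 4-bit code width, and the digram units are purely hypothetical; the paper's actual 12- and 16-bit Uyghur syllable tables and corpus statistics are not reproduced.

```python
# Toy fixed-width code-table scheme with an escape flag (illustrative only).

TABLE = ["th", "he", "in", "er", "an"]        # hypothetical frequent units
CODE = {unit: i for i, unit in enumerate(TABLE)}
WIDTH = 4                                      # bits per table index
ESCAPE = (1 << WIDTH) - 1                      # flag: next 16 bits are a raw code point

def encode(text):
    bits, i = [], 0
    while i < len(text):
        unit = text[i:i + 2]
        if unit in CODE:                       # emit a short table index
            bits.append(format(CODE[unit], f"0{WIDTH}b"))
            i += 2
        else:                                  # escape flag + raw 16-bit code point
            bits.append(format(ESCAPE, f"0{WIDTH}b"))
            bits.append(format(ord(text[i]), "016b"))
            i += 1
    return "".join(bits)

def decode(bits):
    out, i = [], 0
    while i < len(bits):
        idx = int(bits[i:i + WIDTH], 2)
        i += WIDTH
        if idx == ESCAPE:                      # read the escaped raw character
            out.append(chr(int(bits[i:i + 16], 2)))
            i += 16
        else:
            out.append(TABLE[idx])
    return "".join(out)

text = "the cat in the hat"
bits = encode(text)
assert decode(bits) == text
print(len(bits), "bits vs", 16 * len(text), "bits raw UTF-16")
```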
APA, Harvard, Vancouver, ISO, and other styles
42

Redinbo, G. R. "Protecting data compression: arithmetic coding." IEE Proceedings - Computers and Digital Techniques 147, no. 4 (2000): 221. http://dx.doi.org/10.1049/ip-cdt:20000530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chuang, I. L., and D. S. Modha. "Reversible arithmetic coding for quantum data compression." IEEE Transactions on Information Theory 46, no. 3 (May 2000): 1104–16. http://dx.doi.org/10.1109/18.841192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Reichenbach, Stephen E., Zia-Ur Rahman, and Ramkumar Narayanswamy. "Transform-Coding Image Compression for Information Efficiency and Restoration." Journal of Visual Communication and Image Representation 4, no. 3 (September 1993): 215–24. http://dx.doi.org/10.1006/jvci.1993.1020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

van Lint, J. H. "Coding theory introduction." IEEE Transactions on Information Theory 34, no. 5 (September 1988): 1274–75. http://dx.doi.org/10.1109/tit.1988.8862503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ko, Hyung-Hwa. "Enhanced Binary MQ Arithmetic Coder with Look-Up Table." Information 12, no. 4 (March 26, 2021): 143. http://dx.doi.org/10.3390/info12040143.

Full text
Abstract:
Binary MQ arithmetic coding is widely used as a basic entropy coder in multimedia coding systems. The MQ coder offers high compression efficiency and is used in JBIG2 and JPEG2000. The importance of arithmetic coding has increased since it was adopted as the sole entropy coder in the HEVC standard. In the binary MQ coder, a multiplication-free arithmetic approximation is used in the recursive subdivision of the range interval. Because of the MPS/LPS exchange activity that happens in the MQ coder, the number of output bytes tends to increase. This paper proposes an enhanced binary MQ arithmetic coder that uses a look-up table (LUT) for (A × Qe), built by quantization, to improve coding efficiency. Multi-level quantization using 2-level, 4-level and 8-level look-up tables is proposed. Experimental results on binary documents show about a 3% improvement for basic context-free binary arithmetic coding. For the JBIG2 bi-level image compression standard, compression efficiency improves by about 0.9%. In addition, for lossless JPEG2000 compression, the compressed size decreases by 1.5% with the 8-level LUT. For lossy JPEG2000 coding, the gain is smaller: about a 0.3% improvement in PSNR at the same rate.
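The quantization idea behind the proposed LUT can be sketched as follows: the interval register A is quantized to a few representative levels and the product A × Qe is read from a precomputed table instead of being multiplied out. The probability values, the range of A, and the nearest-level lookup are illustrative assumptions; the real MQ coder's integer state machine, MPS/LPS exchange, and renormalization are omitted.

```python
import numpy as np

# Illustrative sketch of the look-up idea only: quantize the interval register A
# to a few representative levels and read a precomputed A_q * Qe from a table,
# instead of multiplying (or using the MQ coder's A*Qe ~ Qe approximation).

QE = np.array([0.503, 0.34, 0.21, 0.10, 0.05])     # example LPS probability estimates
A_MIN, A_MAX = 0.75, 1.5                            # assumed range of A after renormalization

def build_lut(levels):
    """Precompute A_q * Qe for `levels` representative values of A."""
    a_levels = np.linspace(A_MIN, A_MAX, levels)
    return a_levels, np.outer(a_levels, QE)         # shape: (levels, len(QE))

def lut_product(a, qe_index, a_levels, lut):
    """Approximate a * QE[qe_index] using the nearest quantized level of A."""
    i = int(np.argmin(np.abs(a_levels - a)))
    return lut[i, qe_index]

a_levels, lut = build_lut(levels=8)                 # 8-level table, as in the paper's best case
a, k = 1.23, 1
approx = lut_product(a, k, a_levels, lut)
exact = a * QE[k]
print(f"exact {exact:.4f}  LUT {approx:.4f}  error {abs(exact - approx):.4f}")
```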
APA, Harvard, Vancouver, ISO, and other styles
47

Wolff, J. Gerard. "How the SP System May Promote Sustainability in Energy Consumption in IT Systems." Sustainability 13, no. 8 (April 20, 2021): 4565. http://dx.doi.org/10.3390/su13084565.

Full text
Abstract:
The SP System (SPS), referring to the SP Theory of Intelligence and its realisation as the SP Computer Model, has the potential to reduce demands for energy from IT, especially in AI applications and in the processing of big data, in addition to reductions in CO2 emissions when the energy comes from the burning of fossil fuels. The biological foundations of the SPS suggest that with further development, the SPS may approach the extraordinarily low (20 W) energy demands of the human brain. Some of these savings may arise in the SPS because, like people, the SPS may learn usable knowledge from a single exposure or experience. As a comparison, deep neural networks (DNNs) need many repetitions, with much consumption of energy, for the learning of one concept. Another potential saving with the SPS is that, like people, it can incorporate old learning in new. This contrasts with DNNs, where new learning wipes out old learning ('catastrophic forgetting'). Other ways in which the mature SPS is likely to prove relatively parsimonious in its demands for energy arise from the central role of information compression (IC) in the organisation and workings of the system: by making data smaller, there is less to process; the efficiency of searching for matches between patterns can be improved by exploiting probabilities that arise from the intimate connection between IC and probabilities; and, with SPS-derived 'Model-Based Codings' of data, there can be substantial reductions in the demand for energy in transmitting data from one place to another.
APA, Harvard, Vancouver, ISO, and other styles
48

Syuhada, Ibnu. "Implementasi Algoritma Arithmetic Coding dan Sannon-Fano Pada Kompresi Citra PNG." TIN: Terapan Informatika Nusantara 2, no. 9 (February 25, 2022): 527–32. http://dx.doi.org/10.47065/tin.v2i9.1027.

Full text
Abstract:
The rapid development of technology plays an important role in the rapid exchange of information. Sending information in the form of images still poses problems, partly because of large image sizes, so compression is a natural solution. This thesis implements and compares the performance of the Arithmetic Coding and Shannon-Fano algorithms by measuring the compression ratio, compressed file size, and compression and decompression speed. Across all tests, the Arithmetic Coding algorithm produces an average compression ratio of 62.88% and Shannon-Fano a ratio of 61.73%; the average compression time is 0.072449 seconds for Arithmetic Coding and 0.077838 seconds for Shannon-Fano. For decompression, the Shannon-Fano algorithm averages 0.028946 seconds and the Arithmetic Coding algorithm 0.034169 seconds. The images decompressed by both algorithms match the originals. It can be concluded from the test results that the Arithmetic Coding algorithm is more efficient at compressing *.png images than the Shannon-Fano algorithm, although Shannon-Fano is slightly faster at decompression.
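For readers unfamiliar with the second algorithm in the comparison, a minimal sketch of Shannon-Fano code construction and of a compression-ratio measurement is shown below. It operates on a raw byte string rather than a PNG image, uses one common definition of the ratio (compressed bits over original bits), and is not the thesis's implementation.

```python
from collections import Counter

def shannon_fano(symbols_with_freq):
    """Build Shannon-Fano codes: recursively split the frequency-sorted list into
    two halves of roughly equal total frequency, appending '0'/'1' to the codes."""
    if len(symbols_with_freq) == 1:
        return {symbols_with_freq[0][0]: "0"}
    codes = {s: "" for s, _ in symbols_with_freq}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(f for _, f in group)
        running, cut = 0, 1
        for i, (_, f) in enumerate(group[:-1], start=1):
            running += f
            cut = i
            if running >= total / 2:
                break
        for s, _ in group[:cut]:
            codes[s] += "0"
        for s, _ in group[cut:]:
            codes[s] += "1"
        split(group[:cut])
        split(group[cut:])

    split(symbols_with_freq)
    return codes

data = b"this is an example of shannon-fano coding on raw bytes"
freq = sorted(Counter(data).items(), key=lambda kv: kv[1], reverse=True)
codes = shannon_fano(freq)
compressed_bits = sum(len(codes[b]) for b in data)
ratio = compressed_bits / (8 * len(data))           # compressed size / original size
print(f"compression ratio: {ratio:.2%}")
```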
APA, Harvard, Vancouver, ISO, and other styles
49

Nurasyiah. "Perancangan Aplikasi Kompresi File Audio dengan Algoritma Aritmetic Coding." JUKI : Jurnal Komputer dan Informatika 3, no. 1 (May 29, 2021): 25–34. http://dx.doi.org/10.53842/juki.v3i1.38.

Full text
Abstract:
Information exchange today requires fast transmission, and the transmission speed depends on the size of the information. One solution to this problem is compression. Many data compression methods are available, but this thesis discusses the working principles of the Arithmetic Coding algorithm with an implementation in Visual Basic 6.0. The performance analysis aims to determine how the algorithm behaves on *.MP3 and *.WAV audio files. The system has compression and decompression stages: the compression stage reduces the audio file size, while the decompression stage restores the audio file to its original size.
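The interval-narrowing principle of arithmetic coding mentioned here can be sketched in a few lines for short messages, using a fixed symbol model and floating-point arithmetic. A practical coder, such as the one described in the cited work, would use integer renormalization, bit output, and an adaptive model; the probabilities and message below are illustrative.

```python
def build_intervals(probs):
    """Cumulative sub-intervals of [0, 1) for each symbol."""
    intervals, low = {}, 0.0
    for sym, p in probs.items():
        intervals[sym] = (low, low + p)
        low += p
    return intervals

def ac_encode(message, probs):
    """Narrow [low, high) once per symbol; any number inside identifies the message."""
    intervals = build_intervals(probs)
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_low, s_high = intervals[sym]
        low, high = low + span * s_low, low + span * s_high
    return (low + high) / 2

def ac_decode(code, length, probs):
    intervals = build_intervals(probs)
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in intervals.items():
            if s_low <= code < s_high:
                out.append(sym)
                code = (code - s_low) / (s_high - s_low)   # rescale and continue
                break
    return "".join(out)

probs = {"a": 0.5, "b": 0.3, "c": 0.2}                     # toy fixed model
msg = "abacab"
code = ac_encode(msg, probs)
assert ac_decode(code, len(msg), probs) == msg
print(f"{msg!r} -> {code!r}")
```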
APA, Harvard, Vancouver, ISO, and other styles
50

Xin Zhang, Jun Chen, S. B. Wicker, and T. Berger. "Successive Coding in Multiuser Information Theory." IEEE Transactions on Information Theory 53, no. 6 (June 2007): 2246–54. http://dx.doi.org/10.1109/tit.2007.896857.

Full text
APA, Harvard, Vancouver, ISO, and other styles