Journal articles on the topic 'Information theory and compression'


Consult the top 50 journal articles for your research on the topic 'Information theory and compression.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lawton, Wayne. "Information theory, wavelets, and image compression." International Journal of Imaging Systems and Technology 7, no. 3 (1996): 180–90. http://dx.doi.org/10.1002/(sici)1098-1098(199623)7:3<180::aid-ima4>3.0.co;2-4.

2

Gibson, Jerry. "Information Theory and Rate Distortion Theory for Communications and Compression." Synthesis Lectures on Communications 6, no. 1 (December 31, 2013): 1–127. http://dx.doi.org/10.2200/s00556ed1v01y201312com009.

3

Bookstein, Abraham, and Shmuel T. Klein. "Compression, information theory, and grammars: a unified approach." ACM Transactions on Information Systems 8, no. 1 (January 3, 1990): 27–49. http://dx.doi.org/10.1145/78915.78917.

4

Cai, Mingjie, and Qingguo Li. "Compression of Dynamic Fuzzy Relation Information Systems." Fundamenta Informaticae 142, no. 1-4 (December 9, 2015): 285–306. http://dx.doi.org/10.3233/fi-2015-1295.

5

ROMEO, AUGUST, ENRIQUE GAZTAÑAGA, JOSE BARRIGA, and EMILIO ELIZALDE. "INFORMATION CONTENT IN UNIFORMLY DISCRETIZED GAUSSIAN NOISE: OPTIMAL COMPRESSION RATES." International Journal of Modern Physics C 10, no. 04 (June 1999): 687–716. http://dx.doi.org/10.1142/s0129183199000528.

Abstract:
We approach the theoretical problem of compressing a signal dominated by Gaussian noise. We present expressions for the compression ratio which can be reached, under the light of Shannon's noiseless coding theorem, for a linearly quantized stochastic Gaussian signal (noise). The compression ratio decreases logarithmically with the amplitude of the frequency spectrum P(f) of the noise. Entropy values and compression rates are shown to depend on the shape of this power spectrum, given different normalizations. The cases of white noise (w.n.), fnp power-law noise (including 1/f noise), ( w.n. +1/f) noise, and piecewise ( w.n. +1/f | w.n. +1/f2) noise are discussed, while quantitative behaviors and useful approximations are provided.
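As a rough illustration of the kind of calculation this abstract describes (my own sketch, not the authors' code; signal length and quantization step are arbitrary choices), the following Python snippet estimates the Shannon entropy of a linearly quantized Gaussian signal and the lossless compression ratio implied by the noiseless coding theorem:

    import numpy as np

    # Sketch: entropy of a linearly quantized Gaussian signal and the implied
    # lossless compression ratio under Shannon's noiseless coding theorem.
    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 1.0, size=1_000_000)   # white Gaussian noise
    step = 0.05                                     # quantization step (assumed)
    quantized = np.round(signal / step).astype(int)

    values, counts = np.unique(quantized, return_counts=True)
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log2(p))               # bits per sample
    fixed_bits = np.ceil(np.log2(values.size))      # fixed-length code for the observed alphabet

    print(f"entropy ~ {entropy:.2f} bits/sample")
    print(f"compression ratio ~ {fixed_bits / entropy:.2f}")

Making the quantization finer or the spectrum flatter changes the entropy per sample and hence the achievable ratio, which is the dependence the paper quantifies.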
6

Franz, Arthur, Oleksandr Antonenko, and Roman Soletskyi. "A theory of incremental compression." Information Sciences 547 (February 2021): 28–48. http://dx.doi.org/10.1016/j.ins.2020.08.035.

7

Bu, Yuheng, Weihao Gao, Shaofeng Zou, and Venugopal Veeravalli. "Information-Theoretic Understanding of Population Risk Improvement with Model Compression." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3300–3307. http://dx.doi.org/10.1609/aaai.v34i04.5730.

Abstract:
We show that model compression can improve the population risk of a pre-trained model, by studying the tradeoff between the decrease in the generalization error and the increase in the empirical risk with model compression. We first prove that model compression reduces an information-theoretic bound on the generalization error; this allows for an interpretation of model compression as a regularization technique to avoid overfitting. We then characterize the increase in empirical risk with model compression using rate distortion theory. These results imply that the population risk could be improved by model compression if the decrease in generalization error exceeds the increase in empirical risk. We show through a linear regression example that such a decrease in population risk due to model compression is indeed possible. Our theoretical results further suggest that the Hessian-weighted K-means clustering compression approach can be improved by regularizing the distance between the clustering centers. We provide experiments with neural networks to support our theoretical assertions.
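For context, and as an assumption on my part since the abstract does not state which bound is used, information-theoretic generalization bounds of the kind referred to here typically take the mutual-information form (Xu and Raginsky) for a sigma-sub-Gaussian loss and a training set S of n samples:

    \[
    \bigl|\,\mathbb{E}\!\left[L_\mu(W) - L_S(W)\right]\bigr| \;\le\; \sqrt{\frac{2\sigma^2\, I(S;W)}{n}}
    \]

Under this reading, replacing the learned weights W by a compressed version f(W) can only decrease I(S; f(W)) by the data-processing inequality, which is one way such a bound shrinks under compression.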
8

Chen, Yen-Liang, and Fang-Chi Chi. "Summarization of information systems based on rough set theory." Journal of Intelligent & Fuzzy Systems 40, no. 1 (January 4, 2021): 1001–15. http://dx.doi.org/10.3233/jifs-201160.

Abstract:
In the rough set theory proposed by Pawlak, the concept of reduct is very important. The reduct is the minimum attribute set that preserves the partition of the universe. A great deal of research in the past has attempted to reduce the representation of the original table. The advantage of using a reduced representation table is that it can summarize the original table so that it retains the original knowledge without distortion. However, using reduct to summarize tables may encounter the problem of the table still being too large, so users will be overwhelmed by too much information. To solve this problem, this article considers how to further reduce the size of the table without causing too much distortion to the original knowledge. Therefore, we set an upper limit for information distortion, which represents the maximum degree of information distortion we allow. Under this upper limit of distortion, we seek to find the summary table with the highest compression. This paper proposes two algorithms. The first is to find all summary tables that satisfy the maximum distortion constraint, while the second is to further select the summary table with the greatest degree of compression from these tables.
9

Marzen, Sarah E., and Simon DeDeo. "The evolution of lossy compression." Journal of The Royal Society Interface 14, no. 130 (May 2017): 20170166. http://dx.doi.org/10.1098/rsif.2017.0166.

Abstract:
In complex environments, there are costs to both ignorance and perception. An organism needs to track fitness-relevant information about its world, but the more information it tracks, the more resources it must devote to perception. As a first step towards a general understanding of this trade-off, we use a tool from information theory, rate–distortion theory, to study large, unstructured environments with fixed, randomly drawn penalties for stimuli confusion (‘distortions’). We identify two distinct regimes for organisms in these environments: a high-fidelity regime where perceptual costs grow linearly with environmental complexity, and a low-fidelity regime where perceptual costs are, remarkably, independent of the number of environmental states. This suggests that in environments of rapidly increasing complexity, well-adapted organisms will find themselves able to make, just barely, the most subtle distinctions in their environment.
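For reference, the rate-distortion function underlying this line of work (standard information theory, not specific to this paper) is

    \[
    R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X}),
    \]

the minimum number of bits per symbol needed to represent X so that the expected distortion does not exceed D.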
10

Maslov, V. P., and V. E. Nazaikinskii. "Remark on the notion of optimal data compression in information theory." Mathematical Notes 99, no. 3-4 (March 2016): 616–18. http://dx.doi.org/10.1134/s0001434616030378.

11

Daggubati, Siva Phanindra, Venkata Rao Kasukurthi, and Prasad Reddy PVGD. "Cryptography and Reference Sequence Based DNA/RNA Sequence Compression Algorithms." Ingénierie des systèmes d information 27, no. 3 (June 30, 2022): 509–14. http://dx.doi.org/10.18280/isi.270319.

Abstract:
This paper proposes two methods for the compression of biological sequences like DNA/RNA. Although many algorithms both lossy and lossless exist in the literature, they vary by the compression ratio. Moreover, existing algorithms show different compression ratios for different inputs. Our proposed methods exhibit nearly constant compression ratio which helps us to know the amount of storage needed in advance. For the first method, we call it CryptoCompress, we use a blend of Cryptographic hash function and partition theory to achieve this compression. The second method, we call it RefCompress, uses a reference DNA for compression. This paper showcases that the proposed methods have constant compression ratio compared to most of the existing methods.
12

Krichevsky, R. E. "Information compression and Varshamov-Gilbert bound." Information and Computation 74, no. 1 (July 1987): 1–14. http://dx.doi.org/10.1016/0890-5401(87)90008-3.

13

Marsden, Alan. "New Prospects for Information Theory in Arts Research." Leonardo 53, no. 3 (May 2020): 274–80. http://dx.doi.org/10.1162/leon_a_01860.

Abstract:
Information Theory provoked the interest of arts researchers from its inception in the mid-twentieth century but failed to produce the expected impact, partly because the data and computing systems required were not available. With the modern availability of data from public collections and sophisticated software, there is renewed interest in Information Theory. Successful application in the analysis of music implies potential success in other art forms also. The author gives an illustrative example, applying the Information-Theoretic similarity measure normalized compression distance with the aim of ranking paintings in a large collection by their conventionality.
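The normalized compression distance mentioned here has a standard definition, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is a compressed length. A minimal Python sketch using zlib as a stand-in compressor (the paper works on images, and any real compressor can be substituted):

    import zlib

    def clen(data: bytes) -> int:
        """Compressed length in bytes (zlib here; any real-world compressor works)."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance between two byte strings."""
        cx, cy, cxy = clen(x), clen(y), clen(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Similar inputs score lower than unrelated ones.
    a = b"abcabcabcabc" * 50
    b = b"abcabcabcabd" * 50
    noise = bytes(range(256)) * 3
    print(ncd(a, b), ncd(a, noise))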
14

Jankowski, C., D. Reda, M. Mańkowski, and G. Borowik. "Discretization of data using Boolean transformations and information theory based evaluation criteria." Bulletin of the Polish Academy of Sciences Technical Sciences 63, no. 4 (December 1, 2015): 923–32. http://dx.doi.org/10.1515/bpasts-2015-0105.

Abstract:
Abstract Discretization is one of the most important parts of decision table preprocessing. Transforming continuous values of attributes into discrete intervals influences further analysis using data mining methods. In particular, the accuracy of generated predictions is highly dependent on the quality of discretization. The paper contains a description of three new heuristic algorithms for discretization of numeric data, based on Boolean reasoning. Additionally, an entropy-based evaluation of discretization is introduced to compare the results of the proposed algorithms with the results of leading university software for data analysis. Considering the discretization as a data compression method, the average compression ratio achieved for databases examined in the paper is 8.02 while maintaining the consistency of databases at 100%.
15

Klöwer, Milan, Miha Razinger, Juan J. Dominguez, Peter D. Düben, and Tim N. Palmer. "Compressing atmospheric data into its real information content." Nature Computational Science 1, no. 11 (November 2021): 713–24. http://dx.doi.org/10.1038/s43588-021-00156-2.

Abstract:
Hundreds of petabytes are produced annually at weather and climate forecast centers worldwide. Compression is essential to reduce storage and to facilitate data sharing. Current techniques do not distinguish the real from the false information in data, leaving the level of meaningful precision unassessed. Here we define the bitwise real information content from information theory for the Copernicus Atmospheric Monitoring Service (CAMS). Most variables contain fewer than 7 bits of real information per value and are highly compressible due to spatio-temporal correlation. Rounding bits without real information to zero facilitates lossless compression algorithms and encodes the uncertainty within the data itself. All CAMS data are 17× compressed relative to 64-bit floats, while preserving 99% of real information. Combined with four-dimensional compression, factors beyond 60× are achieved. A data compression Turing test is proposed to optimize compressibility while minimizing information loss for the end use of weather and climate forecast data.
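A simplified, hedged sketch of the core idea (zeroing mantissa bits that carry no real information so that a standard lossless compressor shrinks the data; the paper uses round-to-nearest and its own information criterion, whereas this merely truncates a fixed number of bits on synthetic data):

    import numpy as np
    import zlib

    def truncate_mantissa(x: np.ndarray, keep_bits: int) -> np.ndarray:
        """Keep only `keep_bits` of the 23 float32 mantissa bits (simple truncation)."""
        drop = 23 - keep_bits
        mask = np.uint32((0xFFFFFFFF >> drop) << drop)
        return (x.astype(np.float32).view(np.uint32) & mask).view(np.float32)

    rng = np.random.default_rng(1)
    field = rng.normal(size=100_000).astype(np.float32).cumsum()  # smooth, correlated toy field
    for keep in (23, 7, 3):
        rounded = truncate_mantissa(field, keep)
        print(keep, "mantissa bits ->", len(zlib.compress(rounded.tobytes(), 9)), "bytes")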
16

Bu, Yuheng, Weihao Gao, Shaofeng Zou, and Venugopal V. Veeravalli. "Population Risk Improvement with Model Compression: An Information-Theoretic Approach." Entropy 23, no. 10 (September 27, 2021): 1255. http://dx.doi.org/10.3390/e23101255.

Abstract:
It has been reported in many recent works on deep model compression that the population risk of a compressed model can be even better than that of the original model. In this paper, an information-theoretic explanation for this population risk improvement phenomenon is provided by jointly studying the decrease in the generalization error and the increase in the empirical risk that results from model compression. It is first shown that model compression reduces an information-theoretic bound on the generalization error, which suggests that model compression can be interpreted as a regularization technique to avoid overfitting. The increase in empirical risk caused by model compression is then characterized using rate distortion theory. These results imply that the overall population risk could be improved by model compression if the decrease in generalization error exceeds the increase in empirical risk. A linear regression example is presented to demonstrate that such a decrease in population risk due to model compression is indeed possible. Our theoretical results further suggest a way to improve a widely used model compression algorithm, i.e., Hessian-weighted K-means clustering, by regularizing the distance between the clustering centers. Experiments with neural networks are provided to validate our theoretical assertions.
17

Permuter, Haim H., Young-Han Kim, and Tsachy Weissman. "Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing." IEEE Transactions on Information Theory 57, no. 6 (June 2011): 3248–59. http://dx.doi.org/10.1109/tit.2011.2136270.

18

Weijs, S. V., N. van de Giesen, and M. B. Parlange. "Data compression to define information content of hydrological time series." Hydrology and Earth System Sciences 17, no. 8 (August 6, 2013): 3171–87. http://dx.doi.org/10.5194/hess-17-3171-2013.

Abstract:
Abstract. When inferring models from hydrological data or calibrating hydrological models, we are interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory, (A)IT, to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction and learning (understanding is compression). The analysis is performed on time series of a set of catchments. We discuss both the deeper foundation from algorithmic information theory, some practical results and the inherent difficulties in answering the following question: "How much information is contained in this data set?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) information about which unknown quantities? and (2) what is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in the current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively without specifying prior beliefs.
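A minimal sketch of the general idea of using compressed size as a proxy for information content (my own toy example, not the catchment data or compressors used in the paper): quantize a time series, compress it, and compare against the same values with their temporal structure destroyed.

    import numpy as np
    import zlib

    def compressed_bits_per_sample(x: np.ndarray, levels: int = 255) -> float:
        """Quantize to `levels` bins and report zlib-compressed size per sample."""
        q = np.digitize(x, np.linspace(x.min(), x.max(), levels)).astype(np.uint8)
        return 8 * len(zlib.compress(q.tobytes(), 9)) / x.size

    rng = np.random.default_rng(2)
    t = np.arange(5000)
    series = np.sin(2 * np.pi * t / 365) + 0.1 * rng.normal(size=t.size)  # toy "seasonal streamflow"
    shuffled = rng.permutation(series)                                    # same values, structure removed

    print("structured:", compressed_bits_per_sample(series))
    print("shuffled:  ", compressed_bits_per_sample(shuffled))

The structured series compresses to fewer bits per sample, which is the sense in which compression performance serves as a time series analysis tool here.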
19

Chanda, Pritam, Eduardo Costa, Jie Hu, Shravan Sukumar, John Van Hemert, and Rasna Walia. "Information Theory in Computational Biology: Where We Stand Today." Entropy 22, no. 6 (June 6, 2020): 627. http://dx.doi.org/10.3390/e22060627.

Abstract:
“A Mathematical Theory of Communication” was published in 1948 by Claude Shannon to address the problems in the field of data compression and communication over (noisy) communication channels. Since then, the concepts and ideas developed in Shannon’s work have formed the basis of information theory, a cornerstone of statistical learning and inference, and has been playing a key role in disciplines such as physics and thermodynamics, probability and statistics, computational sciences and biological sciences. In this article we review the basic information theory based concepts and describe their key applications in multiple major areas of research in computational biology—gene expression and transcriptomics, alignment-free sequence comparison, sequencing and error correction, genome-wide disease-gene association mapping, metabolic networks and metabolomics, and protein sequence, structure and interaction analysis.
20

Tang, Jun Fang. "Research on Information Applied Technology with Video Compression Algorithms Based on the Optimal Multi-Band Haar Wavelet Transform." Advanced Materials Research 886 (January 2014): 633–36. http://dx.doi.org/10.4028/www.scientific.net/amr.886.633.

Abstract:
Video playback has been one of the most important online communication ways. With the application of stereo video, large amount of video data need to be stored and transported so that fluency and clarity of demand system, and how to efficiently conduct compressed encoding for stereoscopic video data becomes a hot topic currently. In view of this problem, this paper puts forward the video-on-demand compression algorithm based on the optimal multi-band Haar wavelet transform, through the research on wavelet transform algorithm model to reinforce the algorithm secondly, strengthening from the binary wavelet theory into octal wavelet system theory to get better compression capability. The simulation experiments show that video-on-demand compression algorithm based on the optimal multi-band Haar wavelet transform proposed in this paper has a good compression performance not only under medium and high bit- rate conditions, and also reaches the H. 263 under low bit-rate condition.
21

Kowalski, Tomasz M., and Szymon Grabowski. "PgRC: pseudogenome-based read compressor." Bioinformatics 36, no. 7 (December 9, 2019): 2082–89. http://dx.doi.org/10.1093/bioinformatics/btz919.

Abstract:
Motivation: The amount of sequencing data from high-throughput sequencing technologies grows at a pace exceeding the one predicted by Moore’s law. One of the basic requirements is to efficiently store and transmit such huge collections of data. Despite significant interest in designing FASTQ compressors, they are still imperfect in terms of compression ratio or decompression resources. Results: We present Pseudogenome-based Read Compressor (PgRC), an in-memory algorithm for compressing the DNA stream, based on the idea of building an approximation of the shortest common superstring over high-quality reads. Experiments show that PgRC wins in compression ratio over its main competitors, SPRING and Minicom, by up to 15 and 20% on average, respectively, while being comparably fast in decompression. Availability and implementation: PgRC can be downloaded from https://github.com/kowallus/PgRC. Supplementary information: Supplementary data are available at Bioinformatics online.
22

Weijs, S. V., N. van de Giesen, and M. B. Parlange. "Data compression to define information content of hydrological time series." Hydrology and Earth System Sciences Discussions 10, no. 2 (February 14, 2013): 2029–65. http://dx.doi.org/10.5194/hessd-10-2029-2013.

Abstract:
Abstract. When inferring models from hydrological data or calibrating hydrological models, we might be interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory (AIT) to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction, and learning (understanding is compression). The analysis is performed on time series of a set of catchments, searching for the mechanisms behind compressibility. We discuss both the deeper foundation from algorithmic information theory, some practical results and the inherent difficulties in answering the question: "How much information is contained in this data?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) Information about which unknown quantities? (2) What is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively, without specifying prior beliefs. These beliefs are related to the maximum complexity one is willing to accept as a law and what is considered as random.
23

Zhang, HongBo, Wei Zheng, Jie Wu, and GuoJian Tang. "Investigation on single-star stellar-inertial guidance principle using equivalent information compression theory." Science in China Series E: Technological Sciences 52, no. 10 (September 12, 2009): 2924–29. http://dx.doi.org/10.1007/s11431-009-0190-5.

24

Teal, Tracy K., and Charles E. Taylor. "Effects of Compression on Language Evolution." Artificial Life 6, no. 2 (April 2000): 129–43. http://dx.doi.org/10.1162/106454600568366.

Abstract:
Abstract For many adaptive complex systems information about the environment is not simply recorded in a look-up table, but is rather encoded in a theory, schema, or model, which compresses information. The grammar of a language can be viewed as such a schema or theory. In a prior study [Teal et al., 1999] we proposed several conjectures about the learning and evolution of language that should follow from these observations: (C1) compression aids in generalization; (C2) compression occurs more easily in a “smooth”, as opposed to a “rugged”, problem space; and (C3) constraints from compression make it likely that natural languages evolve towards smooth string spaces. This previous work found general, if not complete support for these three conjectures. Here we build on that study to clarify the relationship between Minimum Description Length (MDL) and error in our model and examine evolution of certain languages in more detail. Our results suggest a fourth conjecture: that all else being equal, (C4) more complex languages change more rapidly during evolution.
25

Du, Xin, Katayoun Farrahi, and Mahesan Niranjan. "Information Bottleneck Theory Based Exploration of Cascade Learning." Entropy 23, no. 10 (October 18, 2021): 1360. http://dx.doi.org/10.3390/e23101360.

Abstract:
In solving challenging pattern recognition problems, deep neural networks have shown excellent performance by forming powerful mappings between inputs and targets, learning representations (features) and making subsequent predictions. A recent tool to help understand how representations are formed is based on observing the dynamics of learning on an information plane using mutual information, linking the input to the representation (I(X;T)) and the representation to the target (I(T;Y)). In this paper, we use an information theoretical approach to understand how Cascade Learning (CL), a method to train deep neural networks layer-by-layer, learns representations, as CL has shown comparable results while saving computation and memory costs. We observe that performance is not linked to information–compression, which differs from observation on End-to-End (E2E) learning. Additionally, CL can inherit information about targets, and gradually specialise extracted features layer-by-layer. We evaluate this effect by proposing an information transition ratio, I(T;Y)/I(X;T), and show that it can serve as a useful heuristic in setting the depth of a neural network that achieves satisfactory accuracy of classification.
26

Wei, Dahuan, and Gang Feng. "Compression and Storage Algorithm of Key Information of Communication Data Based on Backpropagation Neural Network." Mathematical Problems in Engineering 2022 (April 14, 2022): 1–9. http://dx.doi.org/10.1155/2022/2885735.

Abstract:
This paper presents a backpropagation neural network algorithm for data compression and data storage. Data compression or establishing model ten coding is the most basic idea of traditional data compression. The traditionally designed ideas are mainly based on reducing the redundancy in the information and focus on the coding design, and its compression ratio has been hovering around dozens of percent. After the traditional coding compression of information, it is difficult to further compress by similar methods. In order to solve the above problems, the information that takes up less signal space can be used to represent the information that takes up more signal space to realize data compression. This new design idea of data compression breaks through the traditional limitation of relying only on coding to reduce data redundancy and achieves a higher compression ratio. At the same time, the information after such compression can be repeatedly compressed, and it has a very good performance. This is the basic idea of the combination of neural network and data compression introduced in this paper. According to the theory of multiobjective function optimization, this paper puts forward the theoretical model of multiobjective optimization neural network and studies a multiobjective data compression method based on neural network. According to the change of data characteristics, this method automatically adjusts the structural parameters (connection weight and bias value) of neural network to obtain the largest amount of data compression at the cost of small information loss. This method has the characteristics of strong adaptability, parallel processing, knowledge distributed storage, and anti-interference. Experimental results show that, compared with other methods, the proposed method has significant advantages in performance index, compression time and compression effect, high efficiency. and high-quality robustness.
27

Mishra, Ishani, and Sanjay Jain. "Soft computing based compressive sensing techniques in signal processing: A comprehensive review." Journal of Intelligent Systems 30, no. 1 (September 11, 2020): 312–26. http://dx.doi.org/10.1515/jisys-2019-0215.

Abstract:
Abstract In this modern world, a massive amount of data is processed and broadcasted daily. This includes the use of high energy, massive use of memory space, and increased power use. In a few applications, for example, image processing, signal processing, and possession of data signals, etc., the signals included can be viewed as light in a few spaces. The compressive sensing theory could be an appropriate contender to manage these limitations. “Compressive Sensing theory” preserves extremely helpful while signals are sparse or compressible. It very well may be utilized to recoup light or compressive signals with less estimation than customary strategies. Two issues must be addressed by CS: plan of the estimation framework and advancement of a proficient sparse recovery calculation. The essential intention of this work expects to audit a few ideas and utilizations of compressive sensing and to give an overview of the most significant sparse recovery calculations from every class. The exhibition of acquisition and reconstruction strategies is examined regarding the Compression Ratio, Reconstruction Accuracy, Mean Square Error, and so on.
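For readers unfamiliar with the notation, the standard compressive sensing measurement model that this review builds on (general background, not a claim about the paper's specific formulation) is

    \[
    y = \Phi x, \qquad \Phi \in \mathbb{R}^{m \times n}, \quad m \ll n,
    \]

where x is k-sparse in some dictionary and can, under suitable conditions on \Phi such as the restricted isometry property, be recovered from roughly m = O(k log(n/k)) measurements by solving min ||x||_1 subject to y = \Phi x.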
28

Collins, Benoît, Motohisa Fukuda, and Ping Zhong. "Estimates for compression norms and additivity violation in quantum information." International Journal of Mathematics 26, no. 01 (January 2015): 1550002. http://dx.doi.org/10.1142/s0129167x15500020.

Abstract:
The free contraction norm (or the (t)-norm) was introduced by Belinschi, Collins and Nechita as a tool to compute the typical location of the collection of singular values associated to a random subspace of the tensor product of two Hilbert spaces. In turn, it was used again by them in order to obtain sharp bounds for the violation of the additivity of the minimum output entropy (MOE) for random quantum channels with Bell states. This free contraction norm, however, is difficult to compute explicitly. The purpose of this note is to give a good estimate for this norm. Our technique is based on results of super convergence in the context of free probability theory. As an application, we give a new, simple and conceptual proof of the violation of the additivity of the MOE.
29

Barnsley, Michael F., Anca Deliu, and Ruifeng Xie. "Stationary Stochastic Processes and Fractal Data Compression." International Journal of Bifurcation and Chaos 07, no. 03 (March 1997): 551–67. http://dx.doi.org/10.1142/s021812749700039x.

Abstract:
It is shown that the invariant measure of a stationary nonatomic stochastic process yields an iterated function system with probabilities and an associated dynamical system that provide the basis for optimal lossless data compression algorithms. The theory is illustrated for the case of finite-order Markov processes: For a zero-order process, it produces the arithmetic compression method; while for higher order processes it yields dynamical systems, constructed from piecewise affine mappings from the interval [0, 1] into itself, that may be used to store information efficiently. The theory leads to a new geometrical approach to the development of compression algorithms.
30

Bowins, Brad. "Sliding Scale Theory of Attention and Consciousness/Unconsciousness." Behavioral Sciences 12, no. 2 (February 10, 2022): 43. http://dx.doi.org/10.3390/bs12020043.

Abstract:
Attention defined as focusing on a unit of information plays a prominent role in both consciousness and the cognitive unconscious, due to its essential role in information processing. Existing theories of consciousness invariably address the relationship between attention and conscious awareness, ranging from attention is not required to crucial. However, these theories do not adequately or even remotely consider the contribution of attention to the cognitive unconscious. A valid theory of consciousness must also be a robust theory of the cognitive unconscious, a point rarely if ever considered. Current theories also emphasize human perceptual consciousness, primarily visual, despite evidence that consciousness occurs in diverse animal species varying in cognitive capacity, and across many forms of perceptual and thought consciousness. A comprehensive and parsimonious perspective applicable to the diversity of species demonstrating consciousness and the various forms—sliding scale theory of attention and consciousness/unconsciousness—is proposed with relevant research reviewed. Consistent with the continuous organization of natural events, attention occupies a sliding scale in regards to time and space compression. Unconscious attention in the form of the “cognitive unconscious” is time and spaced diffused, whereas conscious attention is tightly time and space compressed to the present moment. Due to the special clarity derived from brief and concentrated signals, the tight time and space compression yields conscious awareness as an emergent property. The present moment enhances the time and space compression of conscious attention, and contributes to an evolutionary explanation of conscious awareness.
31

Lynn, Christopher W., and Danielle S. Bassett. "Quantifying the compressibility of complex networks." Proceedings of the National Academy of Sciences 118, no. 32 (August 4, 2021): e2023473118. http://dx.doi.org/10.1073/pnas.2023473118.

Abstract:
Many complex networks depend upon biological entities for their preservation. Such entities, from human cognition to evolution, must first encode and then replicate those networks under marked resource constraints. Networks that survive are those that are amenable to constrained encoding—or, in other words, are compressible. But how compressible is a network? And what features make one network more compressible than another? Here, we answer these questions by modeling networks as information sources before compressing them using rate-distortion theory. Each network yields a unique rate-distortion curve, which specifies the minimal amount of information that remains at a given scale of description. A natural definition then emerges for the compressibility of a network: the amount of information that can be removed via compression, averaged across all scales. Analyzing an array of real and model networks, we demonstrate that compressibility increases with two common network properties: transitivity (or clustering) and degree heterogeneity. These results indicate that hierarchical organization—which is characterized by modular structure and heterogeneous degrees—facilitates compression in complex networks. Generally, our framework sheds light on the interplay between a network’s structure and its capacity to be compressed, enabling investigations into the role of compression in shaping real-world networks.
32

Kryukov, Kirill, Mahoko Takahashi Ueda, So Nakagawa, and Tadashi Imanishi. "Nucleotide Archival Format (NAF) enables efficient lossless reference-free compression of DNA sequences." Bioinformatics 35, no. 19 (February 25, 2019): 3826–28. http://dx.doi.org/10.1093/bioinformatics/btz144.

Abstract:
Summary: DNA sequence databases use compression such as gzip to reduce the required storage space and network transmission time. We describe Nucleotide Archival Format (NAF)—a new file format for lossless reference-free compression of FASTA and FASTQ-formatted nucleotide sequences. Nucleotide Archival Format compression ratio is comparable to the best DNA compressors, while providing dramatically faster decompression. We compared our format with DNA compressors: DELIMINATE and MFCompress, and with general purpose compressors: gzip, bzip2, xz, brotli and zstd. Availability and implementation: NAF compressor and decompressor, as well as format specification are available at https://github.com/KirillKryukov/naf. Format specification is in public domain. Compressor and decompressor are open source under the zlib/libpng license, free for nearly any use. Supplementary information: Supplementary data are available at Bioinformatics online.
33

Last, Cadell. "Human Metasystem Transition (HMST) Theory." Journal of Ethics and Emerging Technologies 25, no. 1 (January 1, 2015): 1–16. http://dx.doi.org/10.55613/jeet.v25i1.36.

Abstract:
Metasystem transitions are events representing the evolutionary emergence of a higher level of organization through the integration of subsystems into a higher “metasystem” (A1+A2+A3=B). Such events have occurred several times throughout the history of life (e.g., emergence of life, multicellular life, sexual reproduction). The emergence of new levels of organization has occurred within the human system three times, and has resulted in three broadly defined levels of higher control, producing three broadly defined levels of group selection (e.g., band/tribe, chiefdom/kingdom, nation-state/international). These are “Human Metasystem Transitions” (HMST). Throughout these HMST several common system-level patterns have manifested that are fundamental to understanding the nature and evolution of the human system, as well as our potential future development. First, HMST have been built around the control of three mostly distinct primary energy sources (e.g., hunting, agriculture, industry). Second, the control of new energy sources has always been achieved and stabilized by utilizing the evolutionary emergence of a more powerful information-processing medium (e.g., language, writing, printing press). Third, new controls emerge with the capability of organizing energy flows over larger expanses of space in shorter durations of time: bands/tribes controlled regional space and stabilized for hundreds of thousand of years, chiefdoms/kingdoms controlled semi-continental expanses of space and stabilized for thousands of years, and nation-states control continental expanses of space and have stabilized for centuries. This space-time component of hierarchical metasystem emergence can be conceptualized as the active compression of space-time-energy-matter (STEM compression) enabled by higher informational and energetic properties within the human system, which allow for more complex organization (i.e., higher subsystem integration). In this framework, increased information-energy control and feedback, and the consequent metasystem compression of space-time, represent the theoretical pillars of HMST theory. Most importantly, HMST theory may have practical application in modeling the future of the human system and the nature of the next human metasystem.
34

Asnaoui, Khalid El. "Image Compression Based on Block SVD Power Method." Journal of Intelligent Systems 29, no. 1 (April 2, 2019): 1345–59. http://dx.doi.org/10.1515/jisys-2018-0034.

Abstract:
Abstract In recent years, the important and fast growth in the development and demand of multimedia products is contributing to an insufficiency in the bandwidth of devices and network storage memory. Consequently, the theory of data compression becomes more significant for reducing data redundancy in order to allow more transfer and storage of data. In this context, this paper addresses the problem of lossy image compression. Indeed, this new proposed method is based on the block singular value decomposition (SVD) power method that overcomes the disadvantages of MATLAB’s SVD function in order to make a lossy image compression. The experimental results show that the proposed algorithm has better compression performance compared with the existing compression algorithms that use MATLAB’s SVD function. In addition, the proposed approach is simple in terms of implementation and can provide different degrees of error resilience, which gives, in a short execution time, a better image compression.
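A hedged sketch of plain truncated-SVD image compression for orientation, using numpy.linalg.svd rather than the block SVD power method proposed in the paper:

    import numpy as np

    def svd_compress(image: np.ndarray, rank: int) -> np.ndarray:
        """Best rank-`rank` approximation of a grayscale image via truncated SVD."""
        u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
        return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

    img = (np.add.outer(np.arange(256), np.arange(256)) % 255).astype(float)  # synthetic test image
    rank = 20
    approx = svd_compress(img, rank)
    stored = rank * (img.shape[0] + img.shape[1] + 1)   # numbers kept instead of img.size pixels
    print("relative error   :", np.linalg.norm(img - approx) / np.linalg.norm(img))
    print("compression ratio:", img.size / stored)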
35

Hussein, Amr, Hossam Kasem, and Mohamed Adel. "Efficient spectrum sensing technique based on energy detector, compressive sensing, and de-noising techniques." International Journal of Engineering & Technology 6, no. 1 (December 7, 2016): 1. http://dx.doi.org/10.14419/ijet.v6i1.6672.

Abstract:
High data rate cognitive radio (CR) systems require high speed Analog-to-Digital Converters (ADC). This requirement imposes many restrictions on the realization of the CR systems. The necessity of high sampling rate can be significantly alleviated by utilizing analog to information converter (AIC). AIC is inspired by the recent theory of Compressive Sensing (CS), which states that a discrete signal has a sparse representation in some dictionary, which can be recovered from a small number of linear projections of that signal. This paper proposes an efficient spectrum sensing technique based on energy detection, compression sensing, and de-noising techniques. De-noising filters are utilized to enhance the traditional Energy Detector performance through Signal-to-Noise (SNR) boosting. On the other hand, the ordinary sampling provides an ideal performance at a given conditions. A near optimal performance can be achieved by applying compression sensing. Compression sensing allows signal to be sampled at sampling rates much lower than the Nyquist rate. The system performance and ADC speed can be easily controlled by adjusting the compression ratio. In addition, a proposed energy detector technique is introduced by using an optimum compression ratio. The optimum compression ratio is determined using a Genetic Algorithm (GA) optimization tool. Simulation results revealed that the proposed techniques enhanced system performance.
36

Han, Bo, and Bolang Li. "Lossless Compression of Data Tables in Mobile Devices by Using Co-clustering." International Journal of Computers Communications & Control 11, no. 6 (October 17, 2016): 776. http://dx.doi.org/10.15837/ijccc.2016.6.2554.

Abstract:
Data tables have been widely used for storage of a collection of related records in a structured format in many mobile applications. The lossless compression of data tables not only brings benefits for storage, but also reduces network transmission latencies and energy costs in batteries. In this paper, we propose a novel lossless compression approach by combining co-clustering and information coding theory. It reorders table columns and rows simultaneously for shaping homogeneous blocks and further optimizes alignment within a block to expose redundancy, such that standard lossless encoders can significantly improve compression ratios. We tested the approach on a synthetic dataset and ten UCI real-life datasets by using a standard compressor 7Z. The extensive experimental results suggest that compared with the direct table compression without co-clustering and within-block alignment, our approach can boost compression rates at least 21% and up to 68%. The results also show that the compression time cost of the co-clustering approach is linearly proportional to a data table size. In addition, since the inverse transform of co-clustering is just exchange of rows and columns according to recorded indexes, the decompression procedure runs very fast and the decompression time cost is similar to the counterpart without using co-clustering. Thereby, our approach is suitable for lossless compression of data tables in mobile devices with constrained resources.
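A much simpler stand-in for the idea described here (reordering a table so that similar rows sit next to each other before handing it to a standard compressor; the paper's co-clustering reorders rows and columns jointly and also realigns within blocks):

    import numpy as np
    import zlib

    rng = np.random.default_rng(4)
    # Toy table: rows drawn from two templates plus a little noise, stored as bytes.
    templates = np.array([[10, 200, 10, 200], [200, 10, 200, 10]], dtype=np.uint8)
    rows = templates[rng.integers(0, 2, size=5000)] ^ rng.integers(0, 2, size=(5000, 4)).astype(np.uint8)

    def zsize(a: np.ndarray) -> int:
        return len(zlib.compress(a.tobytes(), 9))

    order = np.lexsort(rows.T[::-1])   # sort rows lexicographically to group similar ones
    print("original order:", zsize(rows))
    print("rows reordered:", zsize(rows[order]))

As in the paper, exposing homogeneity before encoding is what lets the downstream compressor do better; the reordering itself is lossless as long as the permutation is stored.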
37

Koepnick, Lutz. "Reading in the Age of Compression." Poetics Today 42, no. 2 (June 1, 2021): 193–206. http://dx.doi.org/10.1215/03335372-8883206.

Abstract:
Abstract Compression is often considered a royal road to process data in ever-shorter time and to cater to our desire to outspeed the accelerating transmission of information in the digital age. This article explores how different techniques of accelerated text dissemination and reading, such as consonant writing, speed-reading apps, and the PDF file format, borrow from the language of compression yet, precisely in so doing, obscure the constitutive multilayered temporality of reading and the embodied role of the reader. While discussing different methods aspiring to compress textual objects and processes of reading, the author illuminates hidden assumptions that accompany the rhetoric of text compression and compressed reading.
38

Shah, Stark, and Bauch. "Coarsely Quantized Decoding and Construction of Polar Codes Using the Information Bottleneck Method." Algorithms 12, no. 9 (September 10, 2019): 192. http://dx.doi.org/10.3390/a12090192.

Abstract:
The information bottleneck method is a generic clustering framework from the field of machine learning which allows compressing an observed quantity while retaining as much of the mutual information it shares with the quantity of primary relevance as possible. The framework was recently used to design message-passing decoders for low-density parity-check codes in which all the arithmetic operations on log-likelihood ratios are replaced by table lookups of unsigned integers. This paper presents, in detail, the application of the information bottleneck method to polar codes, where the framework is used to compress the virtual bit channels defined in the code structure and show that the benefits are twofold. On the one hand, the compression restricts the output alphabet of the bit channels to a manageable size. This facilitates computing the capacities of the bit channels in order to identify the ones with larger capacities. On the other hand, the intermediate steps of the compression process can be used to replace the log-likelihood ratio computations in the decoder with table lookups of unsigned integers. Hence, a single procedure produces a polar encoder as well as its tailored, quantized decoder. Moreover, we also use a technique called message alignment to reduce the space complexity of the quantized decoder obtained using the information bottleneck framework.
39

SGARRO, ANDREA, and LIVIU PETRIŞOR DINU. "POSSIBILISTIC ENTROPIES AND THE COMPRESSION OF POSSIBILISTIC DATA." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10, no. 06 (December 2002): 635–53. http://dx.doi.org/10.1142/s0218488502001697.

Abstract:
We re-take the possibilistic model for information sources recently put forward by the first author, as opposed to the standard probabilistic models of information theory. Based on an interpretation of possibilistic source coding inspired by utility functions, we define a notion of possibilistic entropy for a suitable class of interactive possibilistic sources, and compare it with the possibilistic entropy of stationary non-interactive sources. Both entropies have a coding-theoretic nature, being obtained as limit values for the rates of optimal compression codes. We list properties of the two entropies, which might support their use as measures of "possibilistic ignorance".
40

HOU, YU. "A COMPACTLY SUPPORTED, SYMMETRICAL AND QUASI-ORTHOGONAL WAVELET." International Journal of Wavelets, Multiresolution and Information Processing 08, no. 06 (November 2010): 931–40. http://dx.doi.org/10.1142/s0219691310003900.

Abstract:
Based on the wavelet theory and optimization method, a class of single wavelets with compact support, symmetry and quasi-orthogonality are designed and constructed. Some mathematical properties of the wavelets, such as orthogonality, linear phase property and vanishing moments and so on, are studied. A speech compression experiment is implemented in order to investigate the performance of signal reconstruction and speech compression for the proposed wavelets. Comparison with some conventional wavelets shows that the proposed wavelets have a very good performance of signal reconstruction and speech compression.
41

Chen, Shanxiong, Maoling Peng, Hailing Xiong, and Xianping Yu. "SVM Intrusion Detection Model Based on Compressed Sampling." Journal of Electrical and Computer Engineering 2016 (2016): 1–6. http://dx.doi.org/10.1155/2016/3095971.

Abstract:
Intrusion detection needs to deal with a large amount of data; particularly, the technology of network intrusion detection has to detect all of network data. Massive data processing is the bottleneck of network software and hardware equipment in intrusion detection. If we can reduce the data dimension in the stage of data sampling and directly obtain the feature information of network data, efficiency of detection can be improved greatly. In the paper, we present a SVM intrusion detection model based on compressive sampling. We use compressed sampling method in the compressed sensing theory to implement feature compression for network data flow so that we can gain refined sparse representation. After that SVM is used to classify the compression results. This method can realize detection of network anomaly behavior quickly without reducing the classification accuracy.
42

Zhou, Dale, Christopher W. Lynn, Zaixu Cui, Rastko Ciric, Graham L. Baum, Tyler M. Moore, David R. Roalf, et al. "Efficient coding in the economics of human brain connectomics." Network Neuroscience 6, no. 1 (2022): 234–74. http://dx.doi.org/10.1162/netn_a_00223.

Abstract:
Abstract In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8–23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior—beyond the conventional network efficiency metric—for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
43

Lee, Sungyeop, and Junghyo Jo. "Information Flows of Diverse Autoencoders." Entropy 23, no. 7 (July 5, 2021): 862. http://dx.doi.org/10.3390/e23070862.

Abstract:
Deep learning methods have had outstanding performances in various fields. A fundamental query is why they are so effective. Information theory provides a potential answer by interpreting the learning process as the information transmission and compression of data. The information flows can be visualized on the information plane of the mutual information among the input, hidden, and output layers. In this study, we examine how the information flows are shaped by the network parameters, such as depth, sparsity, weight constraints, and hidden representations. Here, we adopt autoencoders as models of deep learning, because (i) they have clear guidelines for their information flows, and (ii) they have various species, such as vanilla, sparse, tied, variational, and label autoencoders. We measured their information flows using Rényi’s matrix-based α-order entropy functional. As learning progresses, they show a typical fitting phase where the amounts of input-to-hidden and hidden-to-output mutual information both increase. In the last stage of learning, however, some autoencoders show a simplifying phase, previously called the “compression phase”, where input-to-hidden mutual information diminishes. In particular, the sparsity regularization of hidden activities amplifies the simplifying phase. However, tied, variational, and label autoencoders do not have a simplifying phase. Nevertheless, all autoencoders have similar reconstruction errors for training and test data. Thus, the simplifying phase does not seem to be necessary for the generalization of learning.
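As a hedged illustration of the quantities plotted on the information plane (a simple histogram estimator of mutual information rather than the Rényi matrix-based functional used in the paper):

    import numpy as np

    def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 30) -> float:
        """Histogram-based estimate of I(X;Y) in bits."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(3)
    x = rng.normal(size=10_000)
    t = np.tanh(2 * x) + 0.1 * rng.normal(size=x.size)   # toy "hidden representation" of x
    print("I(X;T) ~", mutual_information(x, t), "bits")

Tracking estimates like I(X;T) and I(T;Y) over training epochs is what produces the fitting and simplifying phases described in the abstract.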
44

Fredriksson, Kimmo, and Fedor Nikitin. "Simple Random Access Compression." Fundamenta Informaticae 92, no. 1-2 (2009): 63–81. http://dx.doi.org/10.3233/fi-2009-0066.

45

Chandak, Shubham, Kedar Tatwawadi, Idoia Ochoa, Mikel Hernaez, and Tsachy Weissman. "SPRING: a next-generation compressor for FASTQ data." Bioinformatics 35, no. 15 (December 7, 2018): 2674–76. http://dx.doi.org/10.1093/bioinformatics/bty1015.

Abstract:
Motivation: High-Throughput Sequencing technologies produce huge amounts of data in the form of short genomic reads, associated quality values and read identifiers. Because of the significant structure present in these FASTQ datasets, general-purpose compressors are unable to completely exploit much of the inherent redundancy. Although there has been a lot of work on designing FASTQ compressors, most of them lack in support of one or more crucial properties, such as support for variable length reads, scalability to high coverage datasets, pairing-preserving compression and lossless compression. Results: In this work, we propose SPRING, a reference-free compressor for FASTQ files. SPRING supports a wide variety of compression modes and features, including lossless compression, pairing-preserving compression, lossy compression of quality values, long read compression and random access. SPRING achieves substantially better compression than existing tools, for example, SPRING compresses 195 GB of 25× whole genome human FASTQ from Illumina’s NovaSeq sequencer to less than 7 GB, around 1.6× smaller than previous state-of-the-art FASTQ compressors. SPRING achieves this improvement while using comparable computational resources. Availability and implementation: SPRING can be downloaded from https://github.com/shubhamchandak94/SPRING. Supplementary information: Supplementary data are available at Bioinformatics online.
46

Merhav, Neri, and Igal Sason. "An Integral Representation of the Logarithmic Function with Applications in Information Theory." Entropy 22, no. 1 (December 30, 2019): 51. http://dx.doi.org/10.3390/e22010051.

Abstract:
We explore a well-known integral representation of the logarithmic function, and demonstrate its usefulness in obtaining compact, easily computable exact formulas for quantities that involve expectations and higher moments of the logarithm of a positive random variable (or the logarithm of a sum of i.i.d. positive random variables). The integral representation of the logarithm is proved useful in a variety of information-theoretic applications, including universal lossless data compression, entropy and differential entropy evaluations, and the calculation of the ergodic capacity of the single-input, multiple-output (SIMO) Gaussian channel with random parameters (known to both transmitter and receiver). This integral representation and its variants are anticipated to serve as a useful tool in additional applications, as a rigorous alternative to the popular (but non-rigorous) replica method (at least in some situations).
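The integral representation in question is presumably the classical Frullani-type identity (stated here from standard references, not copied from the paper):

    \[
    \ln x \;=\; \int_0^{\infty} \frac{e^{-t} - e^{-t x}}{t}\, dt, \qquad x > 0,
    \]

which turns expectations of ln X into integrals of the Laplace transform E[e^{-tX}], the mechanism behind the closed-form moment calculations described in the abstract.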
47

Silverstein, Steven M., Michael Wibral, and William A. Phillips. "Implications of Information Theory for Computational Modeling of Schizophrenia." Computational Psychiatry 1 (December 2017): 82–101. http://dx.doi.org/10.1162/cpsy_a_00004.

Abstract:
Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory—such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio—can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.
48

Dufort y Álvarez, Guillermo, Gadiel Seroussi, Pablo Smircich, José Sotelo, Idoia Ochoa, and Álvaro Martín. "ENANO: Encoder for NANOpore FASTQ files." Bioinformatics 36, no. 16 (May 29, 2020): 4506–7. http://dx.doi.org/10.1093/bioinformatics/btaa551.

Abstract:
Motivation: The amount of genomic data generated globally is seeing explosive growth, leading to increasing needs for processing, storage and transmission resources, which motivates the development of efficient compression tools for these data. Work so far has focused mainly on the compression of data generated by short-read technologies. However, nanopore sequencing technologies are rapidly gaining popularity due to the advantages offered by the large increase in the average size of the produced reads, the reduction in their cost and the portability of the sequencing technology. We present ENANO (Encoder for NANOpore), a novel lossless compression algorithm especially designed for nanopore sequencing FASTQ files. Results: The main focus of ENANO is on the compression of the quality scores, as they dominate the size of the compressed file. ENANO offers two modes, Maximum Compression and Fast (default), which trade-off compression efficiency and speed. We tested ENANO, the current state-of-the-art compressor SPRING and the general compressor pigz on several publicly available nanopore datasets. The results show that the proposed algorithm consistently achieves the best compression performance (in both modes) on every considered nanopore dataset, with an average improvement over pigz and SPRING of more than 24.7% and 6.3%, respectively. In addition, in terms of encoding and decoding speeds, ENANO is 2.9× and 1.7× times faster than SPRING, respectively, with memory consumption up to 0.2 GB. Availability and implementation: ENANO is freely available for download at: https://github.com/guilledufort/EnanoFASTQ. Supplementary information: Supplementary data are available at Bioinformatics online.
49

Park, Seok-Hwan, Osvaldo Simeone, Onur Sahin, and Shlomo Shamai Shitz. "Fronthaul Compression for Cloud Radio Access Networks: Signal processing advances inspired by network information theory." IEEE Signal Processing Magazine 31, no. 6 (November 2014): 69–79. http://dx.doi.org/10.1109/msp.2014.2330031.

50

Di Martino, Ferdinando, and Salvatore Sessa. "A Multilevel Fuzzy Transform Method for High Resolution Image Compression." Axioms 11, no. 10 (October 13, 2022): 551. http://dx.doi.org/10.3390/axioms11100551.

Abstract:
The Multilevel Fuzzy Transform technique (MF-tr) is a hierarchical image compression method based on Fuzzy Transform, which is successfully used to compress images and manage the information loss of the reconstructed image. Unlike other lossy image compression methods, it ensures that the quality of the reconstructed image is not lower than a prefixed threshold. However, this method is not suitable for compressing massive images due to the high processing times and memory usage. In this paper, we propose a variation of MF-tr for the compression of massive images. The image is divided into tiles, each of which is individually compressed using MF-tr; thereafter, the image is reconstructed by merging the decompressed tiles. Comparative tests performed on remote sensing images show that the proposed method provides better performance than MF-tr in terms of compression rate and CPU time. Moreover, comparison tests show that our method reconstructs the image with CPU times that are at least two times less than those obtained using the MF-tr algorithm.