Journal articles on the topic 'Information bottleneck theory'

To see the other types of publications on this topic, follow the link: Information bottleneck theory.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Information bottleneck theory.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Nguyen, Thanh Tang, and Jaesik Choi. "Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks." Entropy 21, no. 10 (October 6, 2019): 976. http://dx.doi.org/10.3390/e21100976.

Full text
Abstract:
While rate-distortion theory compresses data under a distortion constraint, the information bottleneck (IB) generalizes rate-distortion theory to learning problems by replacing the distortion constraint with a constraint on relevant information. In this work, we further extend IB to multiple Markov bottlenecks (i.e., latent variables that form a Markov chain), namely the Markov information bottleneck (MIB), which fits the context of stochastic neural networks (SNNs) better than the original IB does. We show that Markov bottlenecks cannot simultaneously achieve their information optimality in a non-collapse MIB, and thus devise an optimality compromise. With MIB, we take the novel perspective that each layer of an SNN is a bottleneck whose learning goal is to encode relevant information in a compressed form from the data. The inference from a hidden layer to the output layer is then interpreted as a variational approximation to the layer’s decoding of relevant information in the MIB. As a consequence of this perspective, the maximum likelihood estimate (MLE) principle in the context of SNNs becomes a special case of the variational MIB. We show that, compared to MLE, the variational MIB can encourage better information flow in SNNs in both principle and practice, and empirically improves performance in classification, adversarial robustness, and multi-modal learning on MNIST.
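The single-bottleneck objective that MIB extends can be summarized by the standard IB Lagrangian (a textbook formulation, not quoted from the paper): for input X, relevant variable Y, and bottleneck representation T,

```latex
\min_{p(t \mid x)} \; \mathcal{L} \;=\; I(X;T) \;-\; \beta\, I(T;Y)
```

where the multiplier β trades compression of X against preservation of information about Y; MIB applies this trade-off jointly to a Markov chain of bottlenecks, one per layer.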
APA, Harvard, Vancouver, ISO, and other styles
2

LIU, YONGLI, YUANXIN OUYANG, and ZHANG XIONG. "INCREMENTAL CLUSTERING USING INFORMATION BOTTLENECK THEORY." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 05 (August 2011): 695–712. http://dx.doi.org/10.1142/s0218001411008622.

Full text
Abstract:
Document clustering is one of the most effective techniques for organizing documents in an unsupervised manner. In this paper, an Incremental method for document Clustering based on Information Bottleneck theory (ICIB) is presented. The ICIB is designed to improve the accuracy and efficiency of document clustering, and to resolve the issue that an arbitrary choice of document similarity measure could produce an inaccurate clustering result. In our approach, document similarity is calculated using information bottleneck theory and documents are grouped incrementally. A first document is selected randomly and classified as one cluster; then each remaining document is processed incrementally according to the mutual information loss introduced by merging the document with each existing cluster. If the minimum value of mutual information loss is below a certain threshold, the document is added to its closest cluster; otherwise it is classified as a new cluster. This incremental clustering process is low-precision and order-dependent, which cannot guarantee accurate clustering results. Therefore, an improved sequential clustering algorithm (SIB) is proposed to adjust the intermediate clustering results. In order to test the effectiveness of the ICIB method, ten independent document subsets are constructed based on the 20NewsGroup and Reuters-21578 corpora. Experimental results show that our ICIB method achieves higher accuracy and better time performance than the K-Means, AIB and SIB algorithms.
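The threshold rule described in this abstract can be sketched in a few lines. The following is a toy illustration under my own assumptions (documents represented as word-frequency distributions; the merge cost is the standard agglomerative-IB information-loss term, a weighted Jensen-Shannon divergence), not the authors' implementation:

```python
import numpy as np

def merge_cost(p_c, w_c, p_d, w_d):
    """Mutual-information loss from merging cluster c (weight w_c) with
    document d (weight w_d): the weighted Jensen-Shannon divergence."""
    w = w_c + w_d
    pi_c, pi_d = w_c / w, w_d / w
    p_bar = pi_c * p_c + pi_d * p_d          # merged word distribution

    def kl(p, q):
        mask = p > 0                          # 0 * log 0 = 0 convention
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

    return w * (pi_c * kl(p_c, p_bar) + pi_d * kl(p_d, p_bar))

def icib(docs, threshold):
    """Incremental clustering: add each document to the cluster whose merger
    loses the least mutual information, or open a new cluster."""
    clusters = []   # each entry: [centroid distribution, weight]
    labels = []
    for p in docs:
        if not clusters:
            clusters.append([p.copy(), 1.0])
            labels.append(0)
            continue
        costs = [merge_cost(c, w, p, 1.0) for c, w in clusters]
        j = int(np.argmin(costs))
        if costs[j] < threshold:              # below threshold: merge
            c, w = clusters[j]
            clusters[j] = [(w * c + p) / (w + 1.0), w + 1.0]
            labels.append(j)
        else:                                 # otherwise: new cluster
            clusters.append([p.copy(), 1.0])
            labels.append(len(clusters) - 1)
    return labels
```

With two well-separated groups of word distributions and a small threshold, the sketch assigns each group to its own cluster; like the paper's ICIB, the result depends on document order.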
APA, Harvard, Vancouver, ISO, and other styles
3

Zhou, Xichuan, Kui Liu, Cong Shi, Haijun Liu, and Ji Liu. "Optimizing Information Theory Based Bitwise Bottlenecks for Efficient Mixed-Precision Activation Quantization." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3590–98. http://dx.doi.org/10.1609/aaai.v35i4.16474.

Full text
Abstract:
Recent research on information theory sheds new light on continuing attempts to open the black box of neural signal encoding. Inspired by the problem of lossy signal compression for wireless communication, this paper presents a Bitwise Bottleneck approach for quantizing and encoding neural network activations. Based on rate-distortion theory, the Bitwise Bottleneck attempts to determine the most significant bits in the activation representation by assigning and approximating the sparse coefficients associated with different bits. Given the constraint of a limited average code rate, the bottleneck minimizes the distortion for optimal activation quantization in a flexible layer-by-layer manner. Experiments on ImageNet and other datasets show that, by minimizing the quantization distortion of each layer, the neural network with bottlenecks achieves state-of-the-art accuracy with low-precision activations. Meanwhile, by reducing the code rate, the proposed method can improve memory and computational efficiency by more than six times compared with a deep neural network using the standard single-precision representation. The source code is available on GitHub: https://github.com/CQUlearningsystemgroup/BitwiseBottleneck.
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Junjie, and Ding Liu. "Information Bottleneck Theory on Convolutional Neural Networks." Neural Processing Letters 53, no. 2 (February 18, 2021): 1385–400. http://dx.doi.org/10.1007/s11063-021-10445-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Geiger, Bernhard C., and Gernot Kubin. "Information Bottleneck: Theory and Applications in Deep Learning." Entropy 22, no. 12 (December 14, 2020): 1408. http://dx.doi.org/10.3390/e22121408.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Du, Xin, Katayoun Farrahi, and Mahesan Niranjan. "Information Bottleneck Theory Based Exploration of Cascade Learning." Entropy 23, no. 10 (October 18, 2021): 1360. http://dx.doi.org/10.3390/e23101360.

Full text
Abstract:
In solving challenging pattern recognition problems, deep neural networks have shown excellent performance by forming powerful mappings between inputs and targets, learning representations (features) and making subsequent predictions. A recent tool to help understand how representations are formed is based on observing the dynamics of learning on an information plane using mutual information, linking the input to the representation (I(X;T)) and the representation to the target (I(T;Y)). In this paper, we use an information theoretical approach to understand how Cascade Learning (CL), a method to train deep neural networks layer-by-layer, learns representations, as CL has shown comparable results while saving computation and memory costs. We observe that performance is not linked to information–compression, which differs from observation on End-to-End (E2E) learning. Additionally, CL can inherit information about targets, and gradually specialise extracted features layer-by-layer. We evaluate this effect by proposing an information transition ratio, I(T;Y)/I(X;T), and show that it can serve as a useful heuristic in setting the depth of a neural network that achieves satisfactory accuracy of classification.
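The transition ratio proposed in this abstract is a simple function of two mutual informations. Below is a minimal sketch, using my own toy estimator over discrete joint probability tables (the paper estimates these quantities differently, from continuous activations):

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    joint = joint / joint.sum()               # normalize to a distribution
    pa = joint.sum(axis=1, keepdims=True)     # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)     # marginal p(b)
    mask = joint > 0                          # 0 * log 0 = 0 convention
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa * pb)[mask])))

def transition_ratio(joint_xt, joint_ty):
    """The heuristic I(T;Y) / I(X;T) for a hidden representation T."""
    return mutual_information(joint_ty) / mutual_information(joint_xt)
```

For a perfectly predictive noiseless layer (T a copy of both X and Y over a uniform binary joint) the ratio is 1; a layer that keeps information about X that is irrelevant to Y drives the ratio below 1.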
APA, Harvard, Vancouver, ISO, and other styles
7

Saxe, Andrew M., Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D. Tracey, and David D. Cox. "On the information bottleneck theory of deep learning." Journal of Statistical Mechanics: Theory and Experiment 2019, no. 12 (December 20, 2019): 124020. http://dx.doi.org/10.1088/1742-5468/ab3985.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ke, Qiao, Jiangshe Zhang, H. M. Srivastava, Wei Wei, and Guang-Sheng Chen. "Independent Component Analysis Based on Information Bottleneck." Abstract and Applied Analysis 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/386201.

Full text
Abstract:
This paper mainly establishes the equivalence of two algorithms for independent component analysis (ICA) based on the information bottleneck (IB). From the viewpoint of information theory, we attempt to explain the two classical ICA algorithms via the information bottleneck. Furthermore, through numerical experiments with synthetic data, sonic data, and images, ICA is shown to be an instructive way to solve blind source separation (BSS) by relying on information theory. Finally, two realistic numerical experiments are conducted via FastICA in order to illustrate the efficiency and practicality of the algorithm, as well as its drawbacks in recovering images from the mixed images.
APA, Harvard, Vancouver, ISO, and other styles
9

Sun, Qingyun, Jianxin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, and Philip S. Yu. "Graph Structure Learning with Variational Information Bottleneck." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4165–74. http://dx.doi.org/10.1609/aaai.v36i4.20335.

Full text
Abstract:
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications. Most empirical studies of GNNs directly take the observed graph as input, assuming the observed structure perfectly depicts the accurate and complete relations between nodes. However, graphs in the real world are inevitably noisy or incomplete, which can even degrade the quality of graph representations. In this work, we propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL, from the perspective of information theory. VIB-GSL is the first attempt to advance the Information Bottleneck (IB) principle for graph structure learning, providing a more elegant and universal framework for mining underlying task-relevant relations. VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks. VIB-GSL deduces a variational approximation for irregular graph data to form a tractable IB objective function, which facilitates training stability. Extensive experimental results demonstrate the superior effectiveness and robustness of the proposed VIB-GSL.
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Z. "INFORMATION THEORY OF CARTOGRAPHY: A FRAMEWORK FOR ENTROPY-BASED CARTOGRAPHIC COMMUNICATION THEORY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2020 (August 24, 2020): 11–16. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2020-11-2020.

Full text
Abstract:
Abstract. A map is an effective means of communication. It carries and transmits spatial information about spatial objects and phenomena, from map makers to map users. Therefore, cartography can be regarded as a communication system. Efforts have been made to apply the Shannon information theory developed for digital communication to cartography, in order to establish an information theory of cartography, or simply a cartographic information theory (or map information theory). There was a boom during the period from the late 1960s to the early 1980s. Since the late 1980s, researchers have almost given up the dream of establishing an information theory of cartography because they met a bottleneck problem: Shannon entropy is only able to characterize the statistical information of map symbols, but is not capable of characterizing the spatial configuration (patterns) of map symbols. Fortunately, breakthroughs have been made, i.e. the building of entropy models for metric and thematic information as well as a feasible computational model for Boltzmann entropy. This paper reviews the evolutionary process, examines the bottleneck problems and their solutions, and finally proposes a framework for the information theory of cartography. It is expected that such a theory will become the most fundamental theory of cartography in the big data era.
APA, Harvard, Vancouver, ISO, and other styles
11

Gu, Junhua, Zichen Zheng, Wenmiao Zhou, Yajuan Zhang, Zhengjun Lu, and Liang Yang. "Self-Supervised Graph Representation Learning via Information Bottleneck." Symmetry 14, no. 4 (March 24, 2022): 657. http://dx.doi.org/10.3390/sym14040657.

Full text
Abstract:
Graph representation learning has become a mainstream method for processing network-structured data, and most graph representation learning methods rely heavily on labeling information for downstream tasks. Since labeled information is rare in the real world, adopting self-supervised learning to solve the graph neural network problem is a significant challenge. Currently, existing graph neural network approaches attempt to maximize mutual information for self-supervised learning, which leads to a large amount of redundant information in the graph representation and thus affects the performance of downstream tasks. Therefore, the self-supervised graph information bottleneck (SGIB) proposed in this paper uses the symmetry and asymmetry of graphs to establish contrastive learning and introduces information bottleneck theory as the loss for training the model. This model extracts the common features of both views and the independent features of each view by maximizing the mutual information estimation between the local high-level representation of one view and the global summary vector of the other view. It also removes redundant information not relevant to the target task by minimizing the mutual information between the local high-level representations of the two views. Extensive experimental results on three public datasets and two large-scale datasets show that the SGIB model can learn higher-quality node representations, and that several classical network analysis tasks, such as node classification and node clustering, can be improved compared to existing models in an unsupervised environment. In addition, an in-depth network experiment is designed for deeper analysis, and the results show that the SGIB model can also alleviate the over-smoothing problem to a certain extent. Therefore, we can infer from the different network analysis experiments that introducing information bottleneck theory to remove redundant information effectively improves the performance of downstream tasks.
APA, Harvard, Vancouver, ISO, and other styles
12

Xiao, Yong Liang, and Shao Ping Zhu. "Key Frame Extraction Algorithm Based on Information Theory." Advanced Materials Research 129-131 (August 2010): 95–98. http://dx.doi.org/10.4028/www.scientific.net/amr.129-131.95.

Full text
Abstract:
Key frames play a very important role in video indexing and retrieval. In this paper, we propose a novel method to extract key frames based on information theory. We use an improved Bayesian Information Criterion to determine the number of key frames, and then automatically extract key frames to represent the video shot based on an information bottleneck clustering method. Experimental results, and comparisons with other methods on various types of video sequences, show the effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
13

Zuo, Lianrui, Blake E. Dewey, Yihao Liu, Yufan He, Scott D. Newsome, Ellen M. Mowry, Susan M. Resnick, Jerry L. Prince, and Aaron Carass. "Unsupervised MR harmonization by learning disentangled representations using information bottleneck theory." NeuroImage 243 (November 2021): 118569. http://dx.doi.org/10.1016/j.neuroimage.2021.118569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sun, Zhanquan, Geoffrey Fox, Weidong Gu, and Zhao Li. "A parallel clustering method combined information bottleneck theory and centroid-based clustering." Journal of Supercomputing 69, no. 1 (April 4, 2014): 452–67. http://dx.doi.org/10.1007/s11227-014-1174-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Wei, Lesong, Xiucai Ye, Tetsuya Sakurai, Zengchao Mu, and Leyi Wei. "ToxIBTL: prediction of peptide toxicity based on information bottleneck and transfer learning." Bioinformatics 38, no. 6 (January 6, 2022): 1514–24. http://dx.doi.org/10.1093/bioinformatics/btac006.

Full text
Abstract:
Abstract Motivation Recently, peptides have emerged as a promising class of pharmaceuticals for the treatment of various diseases, poised between traditional small-molecule drugs and therapeutic proteins. However, one of the key bottlenecks preventing them from becoming therapeutic peptides is their toxicity toward human cells, and few available algorithms for predicting toxicity are specially designed for short-length peptides. Results We present ToxIBTL, a novel deep learning framework utilizing the information bottleneck principle and transfer learning to predict the toxicity of peptides as well as proteins. Specifically, we use evolutionary information and physicochemical properties of peptide sequences and integrate the information bottleneck principle into a feature representation learning scheme, by which relevant information is retained and redundant information is minimized in the obtained features. Moreover, transfer learning is introduced to transfer the common knowledge contained in proteins to peptides, aiming to improve the feature representation capability. Extensive experimental results demonstrate that ToxIBTL not only achieves higher prediction performance than state-of-the-art methods on the peptide dataset, but also has competitive performance on the protein dataset. Furthermore, a user-friendly online web server is established as the implementation of the proposed ToxIBTL. Availability and implementation The proposed ToxIBTL and data can be freely accessed at http://server.wei-group.net/ToxIBTL. Our source code is available at https://github.com/WLYLab/ToxIBTL. Supplementary information Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
16

Tan, Andrew K., Max Tegmark, and Isaac L. Chuang. "Pareto-Optimal Clustering with the Primal Deterministic Information Bottleneck." Entropy 24, no. 6 (May 30, 2022): 771. http://dx.doi.org/10.3390/e24060771.

Full text
Abstract:
At the heart of both lossy compression and clustering is a trade-off between the fidelity and size of the learned representation. Our goal is to map out and study the Pareto frontier that quantifies this trade-off. We focus on the optimization of the Deterministic Information Bottleneck (DIB) objective over the space of hard clusterings. To this end, we introduce the primal DIB problem, which we show results in a much richer frontier than its previously studied Lagrangian relaxation when optimized over discrete search spaces. We present an algorithm for mapping out the Pareto frontier of the primal DIB trade-off that is also applicable to other two-objective clustering problems. We study general properties of the Pareto frontier, and we give both analytic and numerical evidence for logarithmic sparsity of the frontier in general. We provide evidence that our algorithm has polynomial scaling despite the super-exponential search space, and additionally we propose a modification to the algorithm that can be used where sampling noise is expected to be significant. Finally, we use our algorithm to map the DIB frontier of three different tasks: compressing the English alphabet, extracting informative color classes from natural images, and compressing a group-theory-inspired dataset, revealing interesting features of the frontier and demonstrating how the structure of the frontier can be used for model selection, with a focus on points previously hidden by the cloak of the convex hull.
APA, Harvard, Vancouver, ISO, and other styles
17

Zhang, Jiangshe, Cong Ma, Junmin Liu, and Guang Shi. "Penetrating the influence of regularizations on neural network based on information bottleneck theory." Neurocomputing 393 (June 2020): 76–82. http://dx.doi.org/10.1016/j.neucom.2020.02.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Sachdeva, Vedant, Thierry Mora, Aleksandra M. Walczak, and Stephanie E. Palmer. "Optimal prediction with resource constraints using the information bottleneck." PLOS Computational Biology 17, no. 3 (March 8, 2021): e1008743. http://dx.doi.org/10.1371/journal.pcbi.1008743.

Full text
Abstract:
Responding to stimuli requires that organisms encode information about the external world. Not all parts of the input are important for behavior, and resource limitations demand that signals be compressed. Prediction of the future input is widely beneficial in many biological systems. We compute the trade-offs between representing the past faithfully and predicting the future using the information bottleneck approach, for input dynamics with different levels of complexity. For motion prediction, we show that, depending on the parameters in the input dynamics, velocity or position information is more useful for accurate prediction. We show which motion representations are easiest to re-use for accurate prediction in other motion contexts, and identify and quantify those with the highest transferability. For non-Markovian dynamics, we explore the role of long-term memory in shaping the internal representation. Lastly, we show that prediction in evolutionary population dynamics is linked to clustering allele frequencies into non-overlapping memories.
APA, Harvard, Vancouver, ISO, and other styles
19

Liao, Hongpeng, Jianwu Xu, and Zhuliang Yu. "Novel Convolutional Neural Network with Variational Information Bottleneck for P300 Detection." Entropy 23, no. 1 (December 29, 2020): 39. http://dx.doi.org/10.3390/e23010039.

Full text
Abstract:
In the area of brain-computer interfaces (BCI), the detection of P300 is a very important technique with many applications. Although this problem has been studied for decades, it is still a tough problem in electroencephalography (EEG) signal processing owing to its high-dimensional features and low signal-to-noise ratio (SNR). Recently, neural networks, such as convolutional neural networks (CNNs), have shown excellent performance in many applications. However, standard convolutional neural networks suffer performance degradation when dealing with noisy data or data with too much redundant information. In this paper, we propose a novel convolutional neural network with a variational information bottleneck for P300 detection. With the CNN architecture and information bottleneck, the proposed network, termed P300-VIB-Net, can remove redundant information from data effectively. The experimental results on BCI competition data sets show that P300-VIB-Net achieves cutting-edge character recognition performance. Furthermore, the proposed model is capable of adaptively restricting the flow of irrelevant information in the network from the perspective of information theory. The experimental results show that P300-VIB-Net is a promising tool for P300 detection.
APA, Harvard, Vancouver, ISO, and other styles
20

Dong, Lihong, Xirong Wang, Beizhan Liu, Tianwei Zheng, and Zheng Wang. "Information Acquisition Incentive Mechanism Based on Evolutionary Game Theory." Wireless Communications and Mobile Computing 2021 (August 17, 2021): 1–11. http://dx.doi.org/10.1155/2021/5525791.

Full text
Abstract:
Based on evolutionary game theory, this paper proposes a new information acquisition incentive mechanism for intelligent mine construction, which solves the problem of incomplete information acquisition in the construction of new intelligent mining areas and reduces the difficulty of information acquisition. Based on the evolutionary game model, a group-based perceptual incentive model is established. The reliability of information collection is ensured by sharing and modifying the information collector. Analysis of the simulation results shows that the regional coverage model based on cooperation in game theory and evolutionary game theory is effective in solving the bottleneck problem of current intelligent mining areas. This paper sheds light on the optimization of the mine information acquisition system: through its improvement, the working efficiency of the information acquisition terminal can be effectively increased by 6%.
APA, Harvard, Vancouver, ISO, and other styles
21

Yu, Tao, F. Xiong, J. B. Du, and Guo Qing Qu. "Research of Digital Nervous System Based on the Game Mechanism." Applied Mechanics and Materials 743 (March 2015): 758–64. http://dx.doi.org/10.4028/www.scientific.net/amm.743.758.

Full text
Abstract:
To address the current bottleneck in the development of enterprises, we combine artificial neural networks and game-management theory to develop a new set of future information management systems, and elaborate the various aspects of its structure, function modules, system features, and so on.
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Chun Yan, and Zhu Lin Liu. "Philosophy Applying in Information Engineering." Advanced Materials Research 403-408 (November 2011): 2127–30. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2127.

Full text
Abstract:
As computer software and technology continuously improve, and because of various influencing factors, computer engineers find it very difficult to improve the quality of software products; this is a bottleneck problem we must solve. We believe that combining information engineering with philosophical thought would greatly enlighten engineers' ideas. We find a method and a model for solving software engineering problems from a philosophical angle, and, standing at this vantage point of information engineering, put forward the importance of information philosophy in the study of information engineering. As a new field, information philosophy provides a unified, convergent theoretical frame that can satisfy the requirements of further specialization. Information philosophy will become the most exciting and productive field of philosophical research in our era.
APA, Harvard, Vancouver, ISO, and other styles
23

Franzese, Giulio, and Monica Visintin. "Probabilistic Ensemble of Deep Information Networks." Entropy 22, no. 1 (January 14, 2020): 100. http://dx.doi.org/10.3390/e22010100.

Full text
Abstract:
We describe a classifier made of an ensemble of decision trees, designed using information theory concepts. In contrast to algorithms such as C4.5 or ID3, each tree is built from the leaves instead of the root. Each tree is made of nodes trained independently of the others, to minimize a local cost function (an information bottleneck). The trained tree outputs the estimated probabilities of the classes given the input datum, and the outputs of many trees are combined to decide the class. We show that the system is able to provide results comparable to those of the tree classifier in terms of accuracy, while it shows many advantages in terms of modularity, reduced complexity, and memory requirements.
APA, Harvard, Vancouver, ISO, and other styles
24

LAMARCHE-PERRIN, ROBIN, SVEN BANISCH, and ECKEHARD OLBRICH. "THE INFORMATION BOTTLENECK METHOD FOR OPTIMAL PREDICTION OF MULTILEVEL AGENT-BASED SYSTEMS." Advances in Complex Systems 19, no. 01n02 (February 2016): 1650002. http://dx.doi.org/10.1142/s0219525916500028.

Full text
Abstract:
Because the dynamics of complex systems is the result of both decisive local events and reinforced global effects, the prediction of such systems cannot do without a genuine multilevel approach. This paper proposes to found such an approach on information theory. Starting from a complete microscopic description of the system dynamics, we look for observables of the current state that allow us to efficiently predict future observables. Using the framework of the information bottleneck (IB) method, we relate optimality to two aspects: the complexity and the predictive capacity of the retained measurement. Then, with a focus on agent-based models (ABMs), we analyze the solution space of the resulting optimization problem in a generic fashion. We show that, when dealing with a class of feasible measurements that are consistent with the agent structure, this solution space has interesting algebraic properties that can be exploited to efficiently solve the problem. We then present results of this general framework for the voter model (VM) with several topologies and show that, especially when predicting the state of some sub-part of the system, multilevel measurements turn out to be the optimal predictors.
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Gongliang, and Wenjing Kang. "IDMA-Based Compressed Sensing for Ocean Monitoring Information Acquisition with Sensor Networks." Mathematical Problems in Engineering 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/430275.

Full text
Abstract:
The ocean monitoring sensor network is a typically energy-limited and bandwidth-limited system, and the technical bottleneck of which is the asymmetry between the demand for large-scale and high-resolution information acquisition and the limited network resources. The newly arising compressed sensing theory provides a chance for breaking through the bottleneck. In view of this and considering the potential advantages of the emerging interleave-division multiple access (IDMA) technology in underwater channels, this paper proposes an IDMA-based compressed sensing scheme in underwater sensor networks with applications to environmental monitoring information acquisition. Exploiting the sparse property of the monitored objects, only a subset of sensors is required to measure and transmit the measurements to the monitoring center for accurate information reconstruction, reducing the requirements for channel bandwidth and energy consumption significantly. Furthermore, with the aid of the semianalytical technique of IDMA, the optimal sensing probability of each sensor is determined to minimize the reconstruction error of the information map. Simulation results with real oceanic monitoring data validate the efficiency of the proposed scheme.
APA, Harvard, Vancouver, ISO, and other styles
26

Jin, Bo, and Xinghua Lu. "Identifying informative subsets of the Gene Ontology with information bottleneck methods." Bioinformatics 26, no. 19 (August 11, 2010): 2445–51. http://dx.doi.org/10.1093/bioinformatics/btq449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

SUN, XIAO-YAN, RUI JIANG, QIAO-MING WANG, and BING-HONG WANG. "INFLUENCE OF TRAFFIC BOTTLENECK ON TWO-ROUTE SCENARIO WITH MEAN VELOCITY INFORMATION FEEDBACK." International Journal of Modern Physics C 21, no. 05 (May 2010): 695–707. http://dx.doi.org/10.1142/s0129183110015427.

Full text
Abstract:
In this paper, traffic bottleneck is introduced on one of the routes (say route A) in a two-route scenario with mean velocity information feedback. The simulations show that four different system states, i.e. zero state (no dynamic vehicle chooses route A), periodic oscillation state (mean velocity on route A is in periodic oscillations), alternation state (alternation of zero state and oscillation state), and equal velocity state (mean velocities on the two routes are equal), are identified. Complex nonlinear changing behavior of critical vehicle arrival probability λc depending on bottleneck length and location as well as dynamic vehicle ratio is revealed. Our work is expected to be useful for designing Advanced Traveller Information Systems.
APA, Harvard, Vancouver, ISO, and other styles
28

Amjad, Rana Ali, and Bernhard C. Geiger. "Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle." IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 9 (September 1, 2020): 2225–39. http://dx.doi.org/10.1109/tpami.2019.2909031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Fang, Min, Yi Min Liu, Wan Liu, and Hui Chen. "The Study of Image Reconstruction Based on Compressed Sensing Theory." Applied Mechanics and Materials 127 (October 2011): 32–35. http://dx.doi.org/10.4028/www.scientific.net/amm.127.32.

Full text
Abstract:
Compressed sensing (CS) is a new theory of data-acquisition technology. For sparse or compressible signals, it can capture and represent the signal at a rate significantly below the Nyquist rate, using non-adaptive linear projections to preserve the information and structure of the original signal, and then reconstructs the original signal accurately by solving an optimization problem. Compressed sensing breaks the bottleneck of the Nyquist–Shannon sampling theorem because it cuts down the costs of storage and transmission in data transfer. This paper briefly describes the theoretical framework and the key techniques of CS theory, focuses on its application to reconstructing image information, and presents a MATLAB simulation. As expected, the simulation results show that CS can reconstruct the original signal accurately under certain conditions.
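The recipe this abstract describes — sub-Nyquist non-adaptive random projections followed by sparse reconstruction — can be sketched in a few lines. This is a generic illustration, not the paper's code; the dimensions, the Gaussian measurement matrix, and the greedy solver (orthogonal matching pursuit rather than convex optimization) are illustrative choices.

```python
import numpy as np

# Generic compressed-sensing sketch (not the cited paper's method):
# recover a k-sparse length-n signal from m << n random measurements.
rng = np.random.default_rng(0)
n, m, k = 128, 60, 5                      # signal length, measurements, sparsity

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)           # k-sparse ground truth

A = rng.normal(size=(m, n)) / np.sqrt(m)  # non-adaptive linear projection
y = A @ x                                 # m measurements, far below Nyquist

def omp(A, y, k):
    """Orthogonal matching pursuit: pick the column most correlated
    with the residual, then re-fit by least squares on that support."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x - x_hat))          # near-zero with high probability
```

With a Gaussian measurement matrix and m comfortably above the sparsity level, recovery is exact with high probability, which is the "under certain conditions" caveat in the abstract.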
APA, Harvard, Vancouver, ISO, and other styles
30

Shah, Stark, and Bauch. "Coarsely Quantized Decoding and Construction of Polar Codes Using the Information Bottleneck Method." Algorithms 12, no. 9 (September 10, 2019): 192. http://dx.doi.org/10.3390/a12090192.

Full text
Abstract:
The information bottleneck method is a generic clustering framework from the field of machine learning which allows compressing an observed quantity while retaining as much of the mutual information it shares with the quantity of primary relevance as possible. The framework was recently used to design message-passing decoders for low-density parity-check codes in which all the arithmetic operations on log-likelihood ratios are replaced by table lookups of unsigned integers. This paper presents, in detail, the application of the information bottleneck method to polar codes, where the framework is used to compress the virtual bit channels defined in the code structure and show that the benefits are twofold. On the one hand, the compression restricts the output alphabet of the bit channels to a manageable size. This facilitates computing the capacities of the bit channels in order to identify the ones with larger capacities. On the other hand, the intermediate steps of the compression process can be used to replace the log-likelihood ratio computations in the decoder with table lookups of unsigned integers. Hence, a single procedure produces a polar encoder as well as its tailored, quantized decoder. Moreover, we also use a technique called message alignment to reduce the space complexity of the quantized decoder obtained using the information bottleneck framework.
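The core operation the abstract relies on, compressing a channel's output alphabet while keeping as much mutual information with the input as possible, can be illustrated with a toy greedy merger. This is a hypothetical sketch of the idea, not the authors' construction: the channel is random, and the greedy pairwise merge stands in for the more sophisticated information bottleneck algorithms used for polar code design.

```python
import numpy as np

def mi(p_xy):
    """Mutual information (bits) of a discrete joint distribution."""
    px = p_xy.sum(1, keepdims=True)
    py = p_xy.sum(0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px * py)[nz])).sum())

def ib_quantize(p_xy, t):
    """Greedily merge output symbols, always taking the pair whose
    merge loses the least mutual information with the input."""
    cols = [p_xy[:, j] for j in range(p_xy.shape[1])]
    while len(cols) > t:
        base = mi(np.stack(cols, 1))
        best, best_loss = None, np.inf
        for a in range(len(cols)):
            for b in range(a + 1, len(cols)):
                merged = [c for i, c in enumerate(cols) if i not in (a, b)]
                merged.append(cols[a] + cols[b])   # pooled cluster
                loss = base - mi(np.stack(merged, 1))
                if loss < best_loss:
                    best, best_loss = merged, loss
        cols = best
    return np.stack(cols, 1)

# Toy binary-input channel with 8 outputs and a uniform input
rng = np.random.default_rng(0)
p_y_given_x = rng.dirichlet(np.ones(8), size=2)
p_xy = 0.5 * p_y_given_x
q = ib_quantize(p_xy, 4)          # restrict the output alphabet to 4 symbols
print(mi(p_xy), mi(q))            # quantized MI can only decrease
```

Restricting the alphabet this way is what makes the bit-channel capacities computable and lets the decoder work with small integer lookup tables.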
APA, Harvard, Vancouver, ISO, and other styles
31

Funk, Christopher, Benjamin Noack, and Uwe Hanebeck. "Conservative Quantization of Covariance Matrices with Applications to Decentralized Information Fusion." Sensors 21, no. 9 (April 28, 2021): 3059. http://dx.doi.org/10.3390/s21093059.

Full text
Abstract:
Information fusion in networked systems poses challenges with respect to both theory and implementation. Limited available bandwidth can become a bottleneck when high-dimensional estimates and associated error covariance matrices need to be transmitted. Compression of estimates and covariance matrices can endanger desirable properties like unbiasedness and may lead to unreliable fusion results. In this work, quantization methods for estimates and covariance matrices are presented and their usage with the optimal fusion formulas and covariance intersection is demonstrated. The proposed quantization methods significantly reduce the bandwidth required for data transmission while retaining unbiasedness and conservativeness of the considered fusion methods. Their performance is evaluated using simulations, showing their effectiveness even in the case of substantial data reduction.
APA, Harvard, Vancouver, ISO, and other styles
32

Xie, Cuijie, Haijuan Wang, and Jianhong Jiao. "Cross-Border E-Commerce Logistics Collaboration Model Based on Supply Chain Theory." Security and Communication Networks 2022 (April 28, 2022): 1–13. http://dx.doi.org/10.1155/2022/1498765.

Full text
Abstract:
With the rapid development of cross-border e-commerce, logistics has become a bottleneck in the development of cross-border electricity traders. Studies on cross-border e-commerce logistics are still few, and the relevant theoretical research is not yet mature. As cross-border e-commerce occupies a growing share of foreign trade, its influence grows accordingly. In order to eliminate bottlenecks in the cross-border logistics of electric enterprises, it is of great importance to systematically study the issues of synergy in the supply chain logistics of a cross-border electric enterprise and to examine how cross-border traders and cross-border logistics work together from the perspective of the cross-border e-commerce ecosystem. At the same time, an analysis of the need for collaboration between cross-border electricity traders and cross-border logistics is carried out, together with an in-depth study of the synergy mechanisms between them from a cross-border ecosystem perspective. The empirical results show that cross-border logistics function service capability, the level of cross-border logistics information sharing, the capability to optimize and allocate cross-border logistics resources, and the openness of the cross-border logistics environment contribute differently to the efficiency of cross-border e-commerce logistics. Among them, the level of cross-border logistics information exchange has the most significant influence on the logistics efficiency of cross-border traders, followed by cross-border logistics functional service capabilities, then the openness of the cross-border logistics environment, and finally the ability to optimally allocate cross-border logistics resources.
APA, Harvard, Vancouver, ISO, and other styles
33

CHEN, BOKUI, XIAOYAN SUN, HUA WEI, CHUANFEI DONG, and BINGHONG WANG. "PIECEWISE FUNCTION FEEDBACK STRATEGY IN INTELLIGENT TRAFFIC SYSTEMS WITH A SPEED LIMIT BOTTLENECK." International Journal of Modern Physics C 22, no. 08 (August 2011): 849–60. http://dx.doi.org/10.1142/s0129183111016658.

Full text
Abstract:
The road capacity can be greatly improved if an appropriate and effective information feedback strategy is adopted in the traffic system. In this paper, a strategy called the piecewise function feedback strategy (PFFS) is introduced and applied to an asymmetrical two-route scenario with a speed limit bottleneck, in which dynamic information can be generated and displayed on an information board to guide road users in making a choice. Meanwhile, the velocity-dependent randomization (VDR) mechanism is adopted, which reflects the dynamic behavior of vehicles in the system better than the NS mechanism. Simulation results show that PFFS is highly efficient in controlling the spatial distribution of traffic patterns compared with previous strategies.
APA, Harvard, Vancouver, ISO, and other styles
34

Sun, Xiao-Yan, Zhong-Jun Ding, and Guo-Hua Huang. "Effect of density feedback on the two-route traffic scenario with bottleneck." International Journal of Modern Physics C 27, no. 06 (May 13, 2016): 1650058. http://dx.doi.org/10.1142/s0129183116500583.

Full text
Abstract:
In this paper, we investigate the effect of density feedback on the two-route scenario with a bottleneck. Simulation and theoretical analysis show that there exist two critical vehicle entry probabilities [Formula: see text] and [Formula: see text]. When the vehicle entry probability [Formula: see text], four different states, i.e. the free flow state, transition state, maximum current state and congestion state, are identified in the system, corresponding to three critical reference densities. However, in the interval [Formula: see text], the free flow and transition states disappear, and there is only the congestion state when [Formula: see text]. According to these results, a traffic control center can adjust the reference density so that the system stays in the maximum current state. In this case, the capacity of the traffic system reaches its maximum, so that drivers can make full use of the roads. We hope that these results can provide good advice for alleviating traffic jams and be useful to traffic control centers designing advanced traveller information systems.
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Bin, Zhi Jian Wang, Rong Zhi Qi, and Xin Lv. "Key Performance Information Collection Architecture Based on Cloud Computing." Applied Mechanics and Materials 509 (February 2014): 182–88. http://dx.doi.org/10.4028/www.scientific.net/amm.509.182.

Full text
Abstract:
Cloud computing has become another buzzword in recent years. Following the popular research and use of cloud systems, performance has become the bottleneck of this newcomer, and more and more researchers are turning their attention to analyzing the performance of cloud services. However, it is hard to extract accurate information from the different types of cloud components, such as datacenters, hosts and virtual machines (VMs). Thus, it is important to collect sufficient raw data from cloud systems for performance analysis. In this paper, we describe an analysis framework to evaluate comprehensive performance guidelines for a cloud computing center. The analysis architecture is built on the performance agent and server interface method (PASI), which consists of a performance client (PMC), performance agent (PMA) and performance server (PMS), and we put forward a mathematical model based on the PASI information and queueing theory to forecast the idle rate and availability of the cloud environment. It is shown that the PASI architecture correctly and effectively evaluates the performance of each cloud component and of the whole cloud environment.
APA, Harvard, Vancouver, ISO, and other styles
36

Grazioli, Filippo, Raman Siarheyeu, Israa Alqassem, Andreas Henschel, Giampaolo Pileggi, and Andrea Meiser. "Microbiome-based disease prediction with multimodal variational information bottlenecks." PLOS Computational Biology 18, no. 4 (April 11, 2022): e1010050. http://dx.doi.org/10.1371/journal.pcbi.1010050.

Full text
Abstract:
Scientific research is shedding light on the interaction of the gut microbiome with the human host and on its role in human health. Existing machine learning methods have shown great potential in discriminating healthy from diseased microbiome states. Most of them leverage shotgun metagenomic sequencing to extract gut microbial species-relative abundances or strain-level markers. Each of these gut microbial profiling modalities showed diagnostic potential when tested separately; however, no existing approach combines them in a single predictive framework. Here, we propose the Multimodal Variational Information Bottleneck (MVIB), a novel deep learning model capable of learning a joint representation of multiple heterogeneous data modalities. MVIB achieves competitive classification performance while being faster than existing methods. Additionally, MVIB offers interpretable results. Our model adopts an information theoretic interpretation of deep neural networks and computes a joint stochastic encoding of different input data modalities. We use MVIB to predict whether human hosts are affected by a certain disease by jointly analysing gut microbial species-relative abundances and strain-level markers. MVIB is evaluated on human gut metagenomic samples from 11 publicly available disease cohorts covering 6 different diseases. We achieve high performance (0.80 < ROC AUC < 0.95) on 5 cohorts and at least medium performance on the remaining ones. We adopt a saliency technique to interpret the output of MVIB and identify the most relevant microbial species and strain-level markers to the model’s predictions. We also perform cross-study generalisation experiments, where we train and test MVIB on different cohorts of the same disease, and overall we achieve comparable results to the baseline approach, i.e. the Random Forest. Further, we evaluate our model by adding metabolomic data derived from mass spectrometry as a third input modality. 
Our method is scalable with respect to input data modalities and has an average training time of < 1.4 seconds. The source code and the datasets used in this work are publicly available.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Xinmeng, Jia Cui, Jingqi Song, Mingyu Jia, Zhenxing Zou, Guocheng Ding, and Yuanjie Zheng. "Contextual Features and Information Bottleneck-Based Multi-Input Network for Breast Cancer Classification from Contrast-Enhanced Spectral Mammography." Diagnostics 12, no. 12 (December 12, 2022): 3133. http://dx.doi.org/10.3390/diagnostics12123133.

Full text
Abstract:
In computer-aided diagnosis methods for breast cancer, deep learning has been shown to be an effective method to distinguish whether lesions are present in tissues. However, traditional methods only classify masses as benign or malignant, according to their presence or absence, without considering the contextual features between them and their adjacent tissues. Furthermore, for contrast-enhanced spectral mammography, the existing studies have only performed feature extraction on a single image per breast. In this paper, we propose a multi-input deep learning network for automatic breast cancer classification. Specifically, we simultaneously input four images of each breast with different feature information into the network. Then, we processed the feature maps in both horizontal and vertical directions, preserving the pixel-level contextual information within the neighborhood of the tumor during the pooling operation. Furthermore, we designed a novel loss function according to the information bottleneck theory to optimize our multi-input network and ensure that the common information in the multiple input images could be fully utilized. Our experiments on 488 images (256 benign and 232 malignant images) from 122 patients show that the method’s accuracy, precision, sensitivity, specificity, and f1-score values are 0.8806, 0.8803, 0.8810, 0.8801, and 0.8806, respectively. The qualitative, quantitative, and ablation experiment results show that our method significantly improves the accuracy of breast cancer classification and reduces the false positive rate of diagnosis. It can reduce misdiagnosis rates and unnecessary biopsies, helping doctors determine accurate clinical diagnoses of breast cancer from multiple CESM images.
APA, Harvard, Vancouver, ISO, and other styles
38

Abel, David, Dilip Arumugam, Kavosh Asadi, Yuu Jinnai, Michael L. Littman, and Lawson L. S. Wong. "State Abstraction as Compression in Apprenticeship Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3134–42. http://dx.doi.org/10.1609/aaai.v33i01.33013134.

Full text
Abstract:
State abstraction can give rise to models of environments that are both compressed and useful, thereby enabling efficient sequential decision making. In this work, we offer the first formalism and analysis of the trade-off between compression and performance made in the context of state abstraction for Apprenticeship Learning. We build on Rate-Distortion theory, the classic Blahut-Arimoto algorithm, and the Information Bottleneck method to develop an algorithm for computing state abstractions that approximate the optimal tradeoff between compression and performance. We illustrate the power of this algorithmic structure to offer insights into effective abstraction, compression, and reinforcement learning through a mixture of analysis, visuals, and experimentation.
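For context on the classical Blahut–Arimoto algorithm that this abstract builds on, here is a minimal sketch of its rate-distortion iteration for a small discrete source. The source distribution, distortion matrix, and trade-off parameter are illustrative choices, not taken from the paper.

```python
import numpy as np

# Classical Blahut-Arimoto iteration for the rate-distortion problem:
# alternate between the optimal conditional given the reproduction
# marginal, and the marginal induced by that conditional.
def blahut_arimoto(p_x, d, beta, iters=200):
    """p_x: source distribution (n,); d: distortion matrix (n, m);
    beta: trade-off multiplier. Returns q(x_hat | x)."""
    n, m = d.shape
    q_xhat = np.full(m, 1.0 / m)                 # marginal over reproductions
    for _ in range(iters):
        q = q_xhat[None, :] * np.exp(-beta * d)  # optimal conditional
        q /= q.sum(axis=1, keepdims=True)
        q_xhat = p_x @ q                         # re-estimate the marginal
    return q

p_x = np.array([0.5, 0.5])      # uniform binary source
d = 1.0 - np.eye(2)             # Hamming distortion
q = blahut_arimoto(p_x, d, beta=5.0)
print(q)  # nearly diagonal: little distortion is tolerated at large beta
```

Sweeping beta traces out the compression-performance trade-off curve; the paper's contribution is to replace the distortion term with a performance objective for apprenticeship learning.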
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Qiucen, Zedong Du, Zhikui Chen, Xiaodi Huang, and Qiu Li. "Multiview Deep Forest for Overall Survival Prediction in Cancer." Computational and Mathematical Methods in Medicine 2023 (January 18, 2023): 1–12. http://dx.doi.org/10.1155/2023/7931321.

Full text
Abstract:
Overall survival (OS) in cancer is crucial for cancer treatment. Many machine learning methods have been applied to predict OS, but there are still the challenges of dealing with multiview data and overfitting. To overcome these problems, we propose a multiview deep forest (MVDF) in this paper. MVDF can learn the features of each view and fuse them with integrated learning and multiple kernel learning. Then, a gradient boost forest based on the information bottleneck theory is proposed to reduce redundant information and avoid overfitting. In addition, a pruning strategy for a cascaded forest is used to limit the impact of outlier data. Comprehensive experiments have been carried out on a data set from West China Hospital of Sichuan University and two public data sets. Results have demonstrated that our method outperforms the compared methods in predicting overall survival.
APA, Harvard, Vancouver, ISO, and other styles
40

Lee-Post, Anita. "Developing numeracy and problem-solving skills by overcoming learning bottlenecks." Journal of Applied Research in Higher Education 11, no. 3 (July 1, 2019): 398–414. http://dx.doi.org/10.1108/jarhe-03-2018-0049.

Full text
Abstract:
Purpose – The purpose of this paper is to present an educational approach to elevating problem-solving and numeracy competencies of business undergraduates to meet workplace demand. The approach is grounded in the theory of constraints following the Decoding the Discipline model. The authors investigated a cognitive bottleneck involving problem modeling and an affective bottleneck concerning low self-efficacy of numeracy and designed specific interventions to address both bottlenecks simultaneously. The authors implemented the proposed approach in an introductory level analytics course in business operations. Design/methodology/approach – The authors use an empirical study to evaluate the effectiveness of the proposed approach in addressing deficiency in numeracy and problem-solving skills. Cognitive and affective learning interventions were introduced in an undergraduate core course in analytics. The perceived effectiveness of the interventions was evaluated with the use of a survey at the end of the course. To further investigate the effectiveness of the proposed interventions beyond self-reporting, the impact of the interventions on actual learning was evaluated by comparing the exam scores between classes with and without the interventions. Findings – Students who underwent the interventions successfully overcame both learning bottlenecks and indicated a positive change in attitude toward the analytics discipline as well as achieved higher exam scores in the analytics course. Research limitations/implications – This study succeeds in strengthening the body of research in teaching and learning. The authors also offer a holistic treatment of cognitive and affective learning bottlenecks, and provide empirical evidence to support the effectiveness of the proposed approach in elevating numeracy and problem-solving competencies of business undergraduates.
Practical implications – The proposed approach is useful for business educators to improve business students’ quantitative modeling skill and attitude. Researchers can also extend the approach to other courses and settings to build up the body of research in learning and skill development. Educational policy makers may consider promoting promising approaches to improve students’ quantitative skill development. They can also set a high standard for higher education institutions to assess students’ numeracy and problem-solving competencies. Employers will find college graduates bring to their initial positions the high levels of numeracy and problem-solving skills demanded for knowledge work to sustain business growth and innovation. Social implications – As students’ numeracy and problem-solving skills are raised, they will develop an aptitude for quantitative-oriented coursework that equips them with the set of quantitative information-processing skills needed to succeed in the twenty-first century society and global economy. Originality/value – The proposed approach provides a goal-oriented three-step process to improve learning by overcoming learning bottlenecks as constraints of a learning process. The integral focus on identifying learning bottlenecks, creating learning interventions and assessing learning outcomes in the proposed approach is instrumental in introducing manageable interventions to address challenges in student learning, thereby elevating students’ numeracy and problem-solving competencies.
APA, Harvard, Vancouver, ISO, and other styles
41

Wagener, Thorsten, Patrick Reed, Kathryn van Werkhoven, Yong Tang, and Zhenxing Zhang. "Advances in the identification and evaluation of complex environmental systems models." Journal of Hydroinformatics 11, no. 3-4 (July 1, 2009): 266–81. http://dx.doi.org/10.2166/hydro.2009.040.

Full text
Abstract:
Advances in our ability to model complex environmental systems are currently driven by at least four needs: (1) the need for the inclusion of uncertainty in monitoring, modelling and decision-making; (2) the need to provide environmental predictions everywhere; (3) the need to predict the impacts of environmental change; and (4) the need to adaptively evolve observation networks to better resolve environmental systems and embrace sensing innovations. Satisfying these needs will require improved theory, improved models and improved frameworks for making and evaluating predictions. All of these improvements should result in the long-term evolution and improvement of observation systems. In the context of this paper we discuss current bottlenecks and opportunities for advancing environmental modelling with and without local observations of system response. More realistic representations of real-world thresholds, nonlinearities and feedbacks motivate the use of more complex models as well as the consequent need for more rigorous evaluations of model performance. In the case of gauged systems, we find that global sensitivity analysis provides a widely underused tool for evaluating models' assumptions and estimating the information content of data. In the case of ungauged systems, including the modelling of environmental change impacts, we propose that the definition of constraints on the expected system response provides a promising way forward. Examples of our own work are included to support the conclusions of this discussion paper. Overall, we conclude that an important bottleneck currently limiting environmental predictions lies in how our model evaluation and identification approaches are extracting, using and evolving the information available for environmental systems at the watershed scale.
APA, Harvard, Vancouver, ISO, and other styles
42

Slavova, Angela, and Ventsislav Ignatov. "Edge of Chaos in Memristor Cellular Nonlinear Networks." Mathematics 10, no. 8 (April 12, 2022): 1288. http://dx.doi.org/10.3390/math10081288.

Full text
Abstract:
Information processing in the brain takes place in a dense network of neurons connected through synapses. The collaborative work between these two components (Synapses and Neurons) allows for basic brain functions such as learning and memorization. The so-called von Neumann bottleneck, which limits the information processing capability of conventional systems, can be overcome by the efficient emulation of these computational concepts. To this end, mimicking the neuronal architectures with silicon-based circuits, on which neuromorphic engineering is based, is accompanied by the development of new devices with neuromorphic functionalities. We shall study different memristor cellular nonlinear networks models. The rigorous mathematical analysis will be presented based on local activity theory, and the edge of chaos domain will be determined in the models under consideration. Simulations of these models working on the edge of chaos will show the generation of static and dynamic patterns.
APA, Harvard, Vancouver, ISO, and other styles
43

da Fonseca, María, and Inés Samengo. "Derivation of Human Chromatic Discrimination Ability from an Information-Theoretical Notion of Distance in Color Space." Neural Computation 28, no. 12 (December 2016): 2628–55. http://dx.doi.org/10.1162/neco_a_00903.

Full text
Abstract:
The accuracy with which humans detect chromatic differences varies throughout color space. For example, we are far more precise when discriminating two similar orange stimuli than two similar green stimuli. In order for two colors to be perceived as different, the neurons representing chromatic information must respond differently, and the difference must be larger than the trial-to-trial variability of the response to each separate color. Photoreceptors constitute the first stage in the processing of color information; many more stages are required before humans can consciously report whether two stimuli are perceived as chromatically distinguishable. Therefore, although photoreceptor absorption curves are expected to influence the accuracy of conscious discriminability, there is no reason to believe that they should suffice to explain it. Here we develop information-theoretical tools based on the Fisher metric that demonstrate that photoreceptor absorption properties explain about 87% of the variance of human color discrimination ability, as tested by previous behavioral experiments. In the context of this theory, the bottleneck in chromatic information processing is determined by photoreceptor absorption characteristics. Subsequent encoding stages modify only marginally the chromatic discriminability at the photoreceptor level.
APA, Harvard, Vancouver, ISO, and other styles
44

Soflaei, Masoumeh, Hongyu Guo, Ali Al-Bashabsheh, Yongyi Mao, and Richong Zhang. "Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5810–17. http://dx.doi.org/10.1609/aaai.v34i04.6038.

Full text
Abstract:
We consider the problem of learning a neural network classifier. Under the information bottleneck (IB) principle, we associate with this classification problem a representation learning problem, which we call “IB learning”. We show that IB learning is, in fact, equivalent to a special class of the quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a “vector quantization” approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted by some variational techniques, results in a novel learning framework, “Aggregated Learning”, for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks.
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Shanxiong, Maoling Peng, Hailing Xiong, and Xianping Yu. "SVM Intrusion Detection Model Based on Compressed Sampling." Journal of Electrical and Computer Engineering 2016 (2016): 1–6. http://dx.doi.org/10.1155/2016/3095971.

Full text
Abstract:
Intrusion detection needs to deal with a large amount of data; in particular, network intrusion detection has to examine all network data. Massive data processing is the bottleneck for network software and hardware in intrusion detection. If we can reduce the data dimension at the sampling stage and directly obtain the feature information of network data, detection efficiency can be improved greatly. In this paper, we present an SVM intrusion detection model based on compressive sampling. We use the compressed sampling method from compressed sensing theory to compress the features of network data flows so as to obtain a refined sparse representation. An SVM is then used to classify the compression results. This method can detect anomalous network behavior quickly without reducing classification accuracy.
APA, Harvard, Vancouver, ISO, and other styles
46

Su, Yixin, Rui Zhang, Sarah Erfani, and Zhenghua Xu. "Detecting Beneficial Feature Interactions for Recommender Systems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4357–65. http://dx.doi.org/10.1609/aaai.v35i5.16561.

Full text
Abstract:
Feature interactions are essential for achieving high accuracy in recommender systems. Many studies take into account the interaction between every pair of features. However, this is suboptimal because some feature interactions may not be that relevant to the recommendation result and taking them into account may introduce noise and decrease recommendation accuracy. To make the best out of feature interactions, we propose a graph neural network approach to effectively model them, together with a novel technique to automatically detect those feature interactions that are beneficial in terms of recommendation accuracy. The automatic feature interaction detection is achieved via edge prediction with an L0 activation regularization. Our proposed model is proved to be effective through the information bottleneck principle and statistical interaction theory. Experimental results show that our model (i) outperforms existing baselines in terms of accuracy, and (ii) automatically identifies beneficial feature interactions.
APA, Harvard, Vancouver, ISO, and other styles
47

Hetzel, Sara, Pay Giesselmann, Knut Reinert, Alexander Meissner, and Helene Kretzmer. "RLM: fast and simplified extraction of read-level methylation metrics from bisulfite sequencing data." Bioinformatics 37, no. 21 (October 2, 2021): 3934–35. http://dx.doi.org/10.1093/bioinformatics/btab663.

Full text
Abstract:
Summary: Bisulfite sequencing data provide value beyond the straightforward methylation assessment by analyzing single-read patterns. Over the past years, various metrics have been established to explore this layer of information. However, limited compatibility with alignment tools, reference genomes or the measurements they provide presents a bottleneck for most groups to routinely perform read-level analysis. To address this, we developed RLM, a fast and scalable tool for the computation of several frequently used read-level methylation statistics. RLM supports standard alignment tools, works independently of the reference genome and handles most sequencing experiment designs. RLM can process large input files with a billion reads in just a few hours on common workstations. Availability and implementation: https://github.com/sarahet/RLM. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
48

Painsky, Amichai, Meir Feder, and Naftali Tishby. "Nonlinear Canonical Correlation Analysis:A Compressed Representation Approach." Entropy 22, no. 2 (February 12, 2020): 208. http://dx.doi.org/10.3390/e22020208.

Full text
Abstract:
Canonical Correlation Analysis (CCA) is a linear representation learning method that seeks maximally correlated variables in multi-view data. Nonlinear CCA extends this notion to a broader family of transformations, which are more powerful in many real-world applications. Given the joint probability, the Alternating Conditional Expectation (ACE) algorithm provides an optimal solution to the nonlinear CCA problem. However, it suffers from limited performance and an increasing computational burden when only a finite number of samples is available. In this work, we introduce an information-theoretic compressed representation framework for the nonlinear CCA problem (CRCCA), which extends the classical ACE approach. Our suggested framework seeks compact representations of the data that allow a maximal level of correlation. This way, we control the trade-off between the flexibility and the complexity of the model. CRCCA provides theoretical bounds and optimality conditions, as we establish fundamental connections to rate-distortion theory, the information bottleneck and remote source coding. In addition, it allows a soft dimensionality reduction, as the compression level is determined by the mutual information between the original noisy data and the extracted signals. Finally, we introduce a simple implementation of the CRCCA framework, based on lattice quantization.
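The linear special case this abstract generalizes is easy to make concrete. The sketch below is an illustrative linear-CCA baseline on synthetic two-view data (the data, dimensions, and noise level are made up): whiten each view, then read the canonical correlations off the singular values of the whitened cross-covariance.

```python
import numpy as np

# Two views sharing a latent signal z; each view adds its own noise
# dimensions. Linear CCA should find z as the maximally correlated pair.
rng = np.random.default_rng(1)
z = rng.normal(size=(500, 1))                       # shared latent signal
X = np.hstack([z, rng.normal(size=(500, 2))])       # view 1
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])          # view 2

def cca_first_corr(X, Y):
    """First canonical correlation: whiten each view, then take the
    top singular value of the whitened cross-covariance."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / len(X)
    Cyy = Yc.T @ Yc / len(Y)
    Cxy = Xc.T @ Yc / len(X)
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))     # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    return float(np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)[0])

rho = cca_first_corr(X, Y)
print(rho)  # close to 1: the shared latent dominates both views
```

Nonlinear CCA, and the CRCCA framework above, replace the fixed linear maps with learned transformations while additionally constraining the compactness of the extracted representations.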
APA, Harvard, Vancouver, ISO, and other styles
49

Helberger, Natali, Katharina Kleinen-von Königslöw, and Rob van der Noll. "Regulating the new information intermediaries as gatekeepers of information diversity." info 17, no. 6 (September 14, 2015): 50–71. http://dx.doi.org/10.1108/info-05-2015-0034.

Full text
Abstract:
Purpose – This paper addresses two questions: given that search engines, social networks and app stores are often referred to as gatekeepers to diverse information access, what evidence substantiates these gatekeeper concerns, and to what extent are existing regulatory solutions to control gatekeeper power suitable to address new diversity concerns? The paper also maps the different gatekeeper concerns about media diversity as evidenced in existing research, critically analyses some of the currently discussed regulatory approaches against the background of network gatekeeping theory, and develops the contours of a more user-centric approach to gatekeeper control and media diversity. Design/methodology/approach – This is conceptual research based on desk research into the relevant communications science, economic and legal academic literature, as well as the relevant laws and public policy documents. Drawing on the existing evidence and on insights from network gatekeeping theory, the paper critically reviews the existing legal and policy discourse and identifies elements for an alternative approach. Findings – The paper finds that, for search engines, social networks and app stores, many concerns about the influence of the new information intermediaries on media diversity do not stem from control over critical resources or access to information, as they do for traditional gatekeepers. Instead, the real bottleneck is access to the user, and the way the relationship between social network, search engine or app platforms and users is given form. Based on this observation, the paper concludes that regulatory initiatives in this area would need to pay more attention to the dynamic relationship between gatekeeper and gated.
Research limitations/implications – As a conceptual piece based on desk research, its assumptions and conclusions have not been validated by the authors' own empirical research. Although the authors have conducted the literature review as broadly and comprehensively as possible, given the breadth of the issue and the diversity of research outlets, it cannot be excluded that one or another publication has been overlooked. Practical implications – The paper makes a number of concrete suggestions on how to approach potential challenges from the new information intermediaries to media diversity. Social implications – The societal implications of search engines, social networks and app stores for media diversity cannot be overestimated. And yet, it is the position of users, and their exposure to diverse information, that is often neglected in the current dialogue. By drawing attention to the dynamic relationship between gatekeeper and gated, the paper highlights the importance of this relationship for diverse exposure to information. Originality/value – While there is currently much discussion about the possible challenges from search engines, social networks and app stores for media diversity, the scholarly literature still lacks a comprehensive overview of the evidence that actually exists. And while most of the regulatory solutions still depart from a pre-networked, static understanding of the "gatekeeper", the analysis here is built on a more dynamic approach that takes into account the fluid and interactive relationship between the roles of "gatekeeper" and "gated". Seen from this perspective, the regulatory solutions discussed so far appear in a very different light.
APA, Harvard, Vancouver, ISO, and other styles
50

Grover, Amit, et al. "Diverse Congestion Control Schemes for Wireless Sensor Networks." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 5, 2021): 2380–89. http://dx.doi.org/10.17762/turcomat.v12i6.5401.

Full text
Abstract:
Wireless Sensor Networks (WSNs) comprise battery-operated sensor nodes that collect data from their neighbor nodes and transmit the aggregated information to the sink node or Base Station (BS). This can cause congestion near the BS and lead to a bottleneck in the network. In this paper, an extensive study of previously reported congestion control techniques, specifically algorithm-based and layer-based techniques, is carried out, and a recommendation is drawn from their performance comparison. Furthermore, contemporary strategies such as Pro-AODV, CC-AODV, EDAPR, ED-AODV and PCC-AODV are demonstrated and evaluated in terms of delay, packet delivery ratio (PDR) and packet loss ratio (PLR). A congestion strategy is then recommended based on the comparison of the demonstrated schemes.
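The evaluation metrics named in the abstract are standard network measures. As a plain illustration (not code from the paper; the function name and example counts are hypothetical), they can be computed from packet counts and per-packet delays:

```python
def congestion_metrics(sent, received, delays):
    """Packet delivery ratio (PDR), packet loss ratio (PLR) and
    average end-to-end delay, the three measures used to compare
    the AODV-based congestion schemes.

    `sent`/`received` are packet counts at source and sink;
    `delays` lists per-packet end-to-end delays in seconds.
    """
    pdr = received / sent            # fraction of packets delivered
    plr = (sent - received) / sent   # fraction of packets lost
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    return pdr, plr, avg_delay
```

For example, 950 of 1000 packets delivered gives a PDR of 0.95 and a PLR of 0.05; a scheme is preferred when it raises PDR and lowers PLR and delay simultaneously.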
APA, Harvard, Vancouver, ISO, and other styles
