Academic literature on the topic 'Information bottleneck theory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Information bottleneck theory.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Information bottleneck theory"

1. Nguyen, Thanh Tang, and Jaesik Choi. "Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks." Entropy 21, no. 10 (October 6, 2019): 976. http://dx.doi.org/10.3390/e21100976.

Abstract:
While rate distortion theory compresses data under a distortion constraint, the information bottleneck (IB) generalizes rate distortion theory to learning problems by replacing the distortion constraint with a constraint on relevant information. In this work, we further extend IB to multiple Markov bottlenecks (i.e., latent variables that form a Markov chain), namely the Markov information bottleneck (MIB), which fits the context of stochastic neural networks (SNNs) better than the original IB. We show that Markov bottlenecks cannot simultaneously achieve their information optimality in a non-collapse MIB, and thus devise an optimality compromise. With MIB, we take the novel perspective that each layer of an SNN is a bottleneck whose learning goal is to encode relevant information in a compressed form from the data. The inference from a hidden layer to the output layer is then interpreted as a variational approximation to the layer's decoding of relevant information in the MIB. As a consequence of this perspective, the maximum likelihood estimate (MLE) principle in the context of SNNs becomes a special case of the variational MIB. We show that, compared to MLE, the variational MIB can encourage better information flow in SNNs in both principle and practice, and empirically improves performance in classification, adversarial robustness, and multi-modal learning on MNIST.
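
For orientation, the classical IB objective that MIB extends can be written (in the standard notation of Tishby, Pereira, and Bialek, not notation taken from this paper) as a trade-off between compressing the input X into a representation T and preserving information about the target Y:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y),
\qquad \text{subject to the Markov chain } Y \leftrightarrow X \leftrightarrow T,
```

where β > 0 sets the compression-relevance trade-off. MIB, as the abstract describes, stacks several such bottlenecks along a Markov chain of latent variables, one per SNN layer.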

2. Liu, Yongli, Yuanxin Ouyang, and Zhang Xiong. "Incremental Clustering Using Information Bottleneck Theory." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 05 (August 2011): 695–712. http://dx.doi.org/10.1142/s0218001411008622.

Abstract:
Document clustering is one of the most effective techniques for organizing documents in an unsupervised manner. In this paper, an Incremental method for document Clustering based on Information Bottleneck theory (ICIB) is presented. ICIB is designed to improve the accuracy and efficiency of document clustering, and to resolve the issue that an arbitrary choice of document similarity measure can produce inaccurate clustering results. In our approach, document similarity is calculated using information bottleneck theory and documents are grouped incrementally. A first document is selected randomly and classified as one cluster; then each remaining document is processed incrementally according to the mutual information loss introduced by merging the document with each existing cluster. If the minimum mutual information loss is below a certain threshold, the document is added to its closest cluster; otherwise it is classified as a new cluster. The incremental clustering process is low-precision and order-dependent, which cannot guarantee accurate clustering results, so an improved sequential clustering algorithm (SIB) is proposed to adjust the intermediate clustering results. In order to test the effectiveness of the ICIB method, ten independent document subsets are constructed based on the 20NewsGroup and Reuters-21578 corpora. Experimental results show that ICIB achieves higher accuracy and better time performance than the K-Means, AIB, and SIB algorithms.
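
To make the assignment rule concrete, the sketch below implements a sequential-IB-style step in Python as we read it from the abstract: the mutual-information loss of merging a document into a cluster is the mass-weighted Jensen-Shannon divergence between their word distributions. The data layout, field names, and threshold logic are our illustrative assumptions, not code from the paper.

```python
import numpy as np

def js_divergence(p, q, w1, w2):
    """Weighted Jensen-Shannon divergence between two distributions."""
    m = w1 * p + w2 * q
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return w1 * kl(p, m) + w2 * kl(q, m)

def assign(doc, clusters, threshold):
    """Merge doc into the cluster with minimal mutual-information loss,
    or open a new cluster if even the best merger loses too much."""
    best, best_loss = None, np.inf
    for c in clusters:
        w = doc["mass"] + c["mass"]  # merger cost: (p(x)+p(c)) * JS(...)
        loss = w * js_divergence(doc["py"], c["py"],
                                 doc["mass"] / w, c["mass"] / w)
        if loss < best_loss:
            best, best_loss = c, loss
    if best is not None and best_loss < threshold:
        w = doc["mass"] + best["mass"]
        best["py"] = (doc["mass"] * doc["py"] + best["mass"] * best["py"]) / w
        best["mass"] = w
    else:
        clusters.append(dict(doc))  # the document becomes a new cluster
```

Here each document carries a prior mass p(x) and a conditional word distribution p(y|x); the SIB refinement pass the paper adds would revisit these assignments afterwards.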

3. Zhou, Xichuan, Kui Liu, Cong Shi, Haijun Liu, and Ji Liu. "Optimizing Information Theory Based Bitwise Bottlenecks for Efficient Mixed-Precision Activation Quantization." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3590–98. http://dx.doi.org/10.1609/aaai.v35i4.16474.

Abstract:
Recent research on information theory sheds new light on continuing attempts to open the black box of neural signal encoding. Inspired by the problem of lossy signal compression in wireless communication, this paper presents a Bitwise Bottleneck approach to quantizing and encoding neural network activations. Based on rate-distortion theory, the bitwise bottleneck attempts to determine the most significant bits in the activation representation by assigning and approximating the sparse coefficients associated with different bits. Given the constraint of a limited average code rate, the bottleneck minimizes the distortion for optimal activation quantization in a flexible, layer-by-layer manner. Experiments on ImageNet and other datasets show that, by minimizing the quantization distortion of each layer, a neural network with bottlenecks achieves state-of-the-art accuracy with low-precision activations. Meanwhile, by reducing the code rate, the proposed method can improve memory and computational efficiency by over six times compared with a deep neural network using standard single-precision representations. The source code is available on GitHub: https://github.com/CQUlearningsystemgroup/BitwiseBottleneck.
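
The paper's method assigns sparse coefficients to bit planes; as a rough stand-in only (a greedy heuristic, not the authors' algorithm), the following sketch lowers per-layer activation bit widths until an average code-rate budget is met, always taking the smallest increase in quantization distortion:

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantizer at a given bit width."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def allocate_bits(layer_acts, avg_bits, lo=2, hi=8):
    """Greedy mixed-precision allocation under an average-rate budget."""
    bits = {name: hi for name in layer_acts}
    while sum(bits.values()) / len(bits) > avg_bits:
        costs = {}  # distortion increase for dropping one bit per layer
        for name, x in layer_acts.items():
            if bits[name] > lo:
                d0 = np.mean((x - quantize(x, bits[name])) ** 2)
                d1 = np.mean((x - quantize(x, bits[name] - 1)) ** 2)
                costs[name] = d1 - d0
        if not costs:
            break  # every layer is already at the precision floor
        bits[min(costs, key=costs.get)] -= 1
    return bits

rng = np.random.default_rng(0)
acts = {f"layer{i}": rng.normal(scale=s, size=1024)
        for i, s in enumerate([1.0, 0.3, 2.0])}
print(allocate_bits(acts, avg_bits=5))
```

Layers whose activations tolerate coarser codes give up bits first, which is the intuition behind mixed-precision quantization.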

4. Li, Junjie, and Ding Liu. "Information Bottleneck Theory on Convolutional Neural Networks." Neural Processing Letters 53, no. 2 (February 18, 2021): 1385–400. http://dx.doi.org/10.1007/s11063-021-10445-6.

5. Geiger, Bernhard C., and Gernot Kubin. "Information Bottleneck: Theory and Applications in Deep Learning." Entropy 22, no. 12 (December 14, 2020): 1408. http://dx.doi.org/10.3390/e22121408.

6. Du, Xin, Katayoun Farrahi, and Mahesan Niranjan. "Information Bottleneck Theory Based Exploration of Cascade Learning." Entropy 23, no. 10 (October 18, 2021): 1360. http://dx.doi.org/10.3390/e23101360.

Abstract:
In solving challenging pattern recognition problems, deep neural networks have shown excellent performance by forming powerful mappings between inputs and targets, learning representations (features) and making subsequent predictions. A recent tool to help understand how representations are formed is based on observing the dynamics of learning on an information plane using mutual information, linking the input to the representation (I(X;T)) and the representation to the target (I(T;Y)). In this paper, we use an information-theoretical approach to understand how Cascade Learning (CL), a method to train deep neural networks layer-by-layer, learns representations, as CL has shown comparable results while saving computation and memory costs. We observe that performance is not linked to information compression, which differs from observations of End-to-End (E2E) learning. Additionally, CL can inherit information about targets and gradually specialise extracted features layer-by-layer. We evaluate this effect by proposing an information transition ratio, I(T;Y)/I(X;T), and show that it can serve as a useful heuristic for setting the depth of a neural network that achieves satisfactory classification accuracy.
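
A crude way to estimate the paper's transition ratio on one's own network is histogram binning plus a plug-in mutual-information estimate. Everything below (the binning scheme, the toy data, the use of scikit-learn's mutual_info_score) is our illustrative assumption; serious information-plane analyses use more careful estimators.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def transition_ratio(x_bins, t_bins, y):
    """Information transition ratio I(T;Y) / I(X;T) from binned codes."""
    return mutual_info_score(t_bins, y) / mutual_info_score(x_bins, t_bins)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                  # stand-in for inputs
t = x + 0.5 * rng.normal(size=1000)        # stand-in for a layer's output
y = (x > 0).astype(int)                    # stand-in for labels
edges = np.linspace(-2, 2, 10)
print(transition_ratio(np.digitize(x, edges), np.digitize(t, edges), y))
```

In the paper this ratio serves as a heuristic for choosing network depth.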

7. Saxe, Andrew M., Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D. Tracey, and David D. Cox. "On the information bottleneck theory of deep learning." Journal of Statistical Mechanics: Theory and Experiment 2019, no. 12 (December 20, 2019): 124020. http://dx.doi.org/10.1088/1742-5468/ab3985.

8. Ke, Qiao, Jiangshe Zhang, H. M. Srivastava, Wei Wei, and Guang-Sheng Chen. "Independent Component Analysis Based on Information Bottleneck." Abstract and Applied Analysis 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/386201.

Abstract:
The paper establishes the equivalence of two algorithms for independent component analysis (ICA) based on the information bottleneck (IB). From the viewpoint of information theory, we attempt to explain the two classical ICA algorithms through the information bottleneck. Furthermore, via numerical experiments with synthetic data, sound data, and images, ICA is shown to be an instructive way to solve blind source separation (BSS) successfully by relying on information theory. Finally, two realistic numerical experiments are conducted via FastICA in order to illustrate the efficiency and practicality of the algorithm, as well as its drawbacks in recovering the source images from the mixed images.
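
Since the experiments rely on FastICA, here is a self-contained blind-source-separation toy using scikit-learn's FastICA (our own example data, not the paper's). The recovered sources come back only up to permutation and scaling, which is the usual ICA ambiguity.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]  # smooth + square wave
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.2]])
mixed = sources @ mixing.T            # observed linear mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)  # estimated sources
```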

9. Sun, Qingyun, Jianxin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, and Philip S. Yu. "Graph Structure Learning with Variational Information Bottleneck." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4165–74. http://dx.doi.org/10.1609/aaai.v36i4.20335.

Abstract:
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications. Most empirical studies of GNNs directly take the observed graph as input, assuming the observed structure perfectly depicts the accurate and complete relations between nodes. However, graphs in the real world are inevitably noisy or incomplete, which can degrade the quality of graph representations. In this work, we propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL, from the perspective of information theory. VIB-GSL is the first attempt to advance the Information Bottleneck (IB) principle for graph structure learning, providing a more elegant and universal framework for mining underlying task-relevant relations. VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks. VIB-GSL deduces a variational approximation for irregular graph data to form a tractable IB objective function, which facilitates training stability. Extensive experimental results demonstrate the superior effectiveness and robustness of the proposed VIB-GSL.
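
VIB-GSL's exact objective is given in the paper; as generic background only, a variational IB loss with a Gaussian encoder and a standard-normal prior looks like this PyTorch sketch (our simplification, not the VIB-GSL code):

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Sample t ~ q(t|x) differentiably via the reparameterization trick."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def vib_loss(mu, logvar, logits, labels, beta=0.1):
    """Task term plus a beta-weighted KL(q(t|x) || N(0, I)) compression term."""
    task = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return task + beta * kl
```

In VIB-GSL the compressed object is the graph structure itself rather than a node-feature code, but the compression-versus-task trade-off has the same shape.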

10. Li, Z. "Information Theory of Cartography: A Framework for Entropy-Based Cartographic Communication Theory." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2020 (August 24, 2020): 11–16. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2020-11-2020.

Abstract:
A map is an effective means of communication. It carries and transmits spatial information about spatial objects and phenomena, from map makers to map users. Therefore, cartography can be regarded as a communication system. Efforts have been made to apply the Shannon information theory developed in digital communication to cartography, so as to establish an information theory of cartography, or simply a cartographic information theory (or map information theory). There was a boom in such work from the late 1960s to the early 1980s. Since the late 1980s, researchers had almost given up the dream of establishing an information theory of cartography because they met a bottleneck problem: Shannon entropy is only able to characterize the statistical information of map symbols, but not the spatial configuration (patterns) of map symbols. Fortunately, a breakthrough has been made, namely the building of entropy models for metric and thematic information as well as a feasible computational model for Boltzmann entropy. This paper reviews this evolution, examines the bottleneck problems and their solutions, and finally proposes a framework for the information theory of cartography. It is expected that such a theory will become the most fundamental theory of cartography in the big data era.
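
The bottleneck the author describes is easy to see in code: Shannon entropy computed from symbol frequencies is identical for every spatial arrangement of those symbols. A minimal sketch with made-up frequencies:

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (in bits) of map-symbol frequencies."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log2(p)))

# Four symbol classes: the result is the same whether the symbols are
# clustered on the map or scattered, which is exactly the limitation
# the Boltzmann-entropy models discussed in the paper aim to overcome.
print(shannon_entropy([40, 30, 20, 10]))
```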

Books on the topic "Information bottleneck theory"

1. Mole, Christopher. Attention. Edited by Eric Margolis, Richard Samuels, and Stephen P. Stich. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780195309799.013.0009.

Abstract:
The article focuses on Broadbent's approach to the explanation of attention. Broadbent shows that one's information-processing resources have sufficient capacity to encode the simple physical properties of all the stimuli one is presented with, but only a limited capacity for encoding the semantic properties of those stimuli. The resulting model depicts perceptual processing as proceeding in two stages. In the first stage, a large-capacity sensory system processes the physical features of all stimuli in parallel. A subset of the representations generated by the large-capacity system is selected and passed on to a second perceptual system, which has a smaller processing capacity and which has the job of processing the stimuli's semantic properties. Broadbent's theory would explain that pre-bottleneck processing is responsible for the detection of simple physical features, and also for own-name detection. The phenomenology of one's shifting awareness in conditions of binocular rivalry is naturally described as the manifestation of a competition, and perhaps of a biased competition.

2. Mehta, Vaishali, Dolly Sharma, Monika Mangla, Anita Gehlot, Rajesh Singh, and Sergio Márquez Sánchez, eds. Challenges and Opportunities for Deep Learning Applications in Industry 4.0. Bentham Science Publishers, 2022. http://dx.doi.org/10.2174/97898150360601220101.

Abstract:
The potential of deep learning for the automation and manufacturing sector has received remarkable attention in recent times. The manufacturing industry has recently experienced revolutionary advances, despite several issues. One of the limitations on technical progress is the bottleneck caused by the enormous increase in the volume of data to be processed, comprising various formats, semantics, qualities, and features. Deep learning enables the detection of meaningful features that are difficult to extract using traditional methods. The book takes the reader on a technological voyage through the Industry 4.0 space. Chapters highlight recent applications of deep learning and the challenges and opportunities they present for automating industrial processes and smart applications. Chapters introduce the reader to a broad range of topics in deep learning and machine learning. Several deep learning techniques used by industrial professionals are covered, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical project methodology. Readers will find information on the value of deep learning in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. The book also discusses prospective research directions focusing on the theory and practical applications of deep learning in industrial automation. The book thus aims to serve as a comprehensive reference guide for industrial consultants interested in Industry 4.0, and as a handbook for beginners in data science and for advanced computer science courses.

3. Goodin, Robert E., and Kai Spiekermann. An Epistemic Theory of Democracy. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198823452.001.0001.

Abstract:
One attractive feature of democracy is its ability to track the truth through information aggregation. The formal support for this claim goes back to Condorcet's famous jury theorem. However, the theorem has often been dismissed as a mathematical curiosity because the assumptions on which it is based are demanding. Such quick dismissals tend to misunderstand the original theorem. They also fail to appreciate how Condorcet's assumptions can be weakened to obtain jury theorems that are readily applicable in the real world. The first part of the book explains the original theorem and its various extensions and introduces results that deal with the challenge of voter dependence. Part II considers opportunities to make democracies perform better in epistemic terms by improving voter competence and diversity, by dividing epistemic labour, and by preceding voting with deliberation. In the third part, political practices are examined through an epistemic lens, focusing on the influence of tradition, the following of opinion leaders or cues, and settings in which the electorate falls into diverging factions. Part IV analyses the implications for the structures of government: while the book argues against epistocracy, the use of deliberation and expert advice in representative democracy can lead to improved truth-tracking, provided epistemic bottlenecks are avoided. The final part summarizes the results and explores how epistemic democracy might be undermined, using the Trump and Brexit campaigns as case studies.
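
The truth-tracking claim rests on Condorcet's jury theorem. A short computation in its standard textbook form (not code from the book) shows majority accuracy climbing with the number of independent voters once individual competence p exceeds 1/2:

```python
from math import comb

def majority_correct(n, p):
    """P(majority verdict is right) for n independent voters, each
    correct with probability p; n is taken odd so there are no ties."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range((n + 1) // 2, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.51), 4))  # rises toward 1
```

The book's point is that this idealization survives, in weakened form, once the demanding independence and competence assumptions are relaxed.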

4. Yu, Angela J. Bayesian Models of Attention. Edited by Anna C. (Kia) Nobre and Sabine Kastner. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199675111.013.025.

Abstract:
Traditionally, attentional selection has been thought of as arising naturally from resource limitations, with a focus on what might be the most apt metaphor, e.g. whether it is a 'bottleneck' or a 'spotlight'. However, these simple metaphors cannot account for the specificity, flexibility, and heterogeneity of the ways attentional selection manifests itself in different behavioural contexts. A recent body of theoretical work has taken a different approach, focusing on the computational needs of selective processing relative to environmental constraints and behavioural goals. These models typically adopt a normative computational framework, incorporating Bayes-optimal algorithms for information processing and action selection. This chapter reviews some of this recent modelling work, specifically in the context of attention for learning, covert spatial attention, and overt spatial attention.

5. Pawley, Andrew. Linguistic Evidence as a Window into the Prehistory of Oceania. Edited by Ethan E. Cochrane and Terry L. Hunt. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199925070.013.006.

Abstract:
Historical linguistics is a key witness in reconstructing the prehistory of Oceania. The extraordinary number of Papuan (non-Austronesian) language families in Near Oceania is consistent with archaeological evidence that this region was settled over 40,000 years ago. One family, Trans New Guinea, is exceptional in its wide distribution, suggesting that its expansion was underpinned by technological advances. Most Austronesian languages of Oceania fall into a single branch of the family, Oceanic, indicating that they stem from a bottleneck in the Austronesian expansion into the southwest Pacific, associated with the formation of Proto Oceanic (POc). The final stages of this formative period almost certainly took place in the Bismarck Archipelago and the subsequent rapid dispersal of Oceanic languages across the southwest Pacific can be connected with the region's colonization by bearers of the Lapita archaeological culture. The reconstructed lexicon of POc provides information about early Lapita material culture and social organization.

6. Hilgurt, S. Ya, and O. A. Chemerys. Reconfigurable Signature-Based Information Security Tools of Computer Systems. PH “Akademperiodyka”, 2022. http://dx.doi.org/10.15407/akademperiodyka.458.297.

Abstract:
The book is devoted to the research and development of methods for combining computational structures in reconfigurable signature-based information protection tools for computer systems and networks, in order to increase their efficiency. Network security tools based on such AI approaches as deep neural networks, despite the great progress shown in recent years, still suffer from a nonzero recognition error probability. Even a low probability of such an error in a critical infrastructure can be disastrous. Therefore, signature-based recognition methods, with their theoretically exact matching, are still relevant when creating information security systems such as network intrusion detection, antivirus, anti-spam, and worm-containment systems. The real-time multi-pattern string matching task has been a major performance bottleneck in such systems. To speed up recognition, developers use a reconfigurable hardware platform based on FPGA devices. Such a platform provides almost software-level flexibility and near-ASIC performance. The most important component of a signature-based information security system in terms of efficiency is the recognition module, in which the multi-pattern matching task is directly solved. It must not only check each byte of input data at speeds of tens and hundreds of gigabits per second against hundreds of thousands or even millions of patterns in a signature database, but also change its structure every time a new signature appears or the operating conditions of the protected system change. From the analysis of numerous examples of the development of reconfigurable information security systems, three most promising approaches to the construction of hardware recognition modules were identified: content-addressable memory based on digital comparators, Bloom filters, and Aho–Corasick finite automata. A method for fast quantification of the components of the recognition module and of the entire system is proposed; it makes it possible to exclude resource-intensive procedures for synthesizing digital circuits on FPGAs when building complex reconfigurable information security systems and their components. To improve the efficiency of the systems under study, structural-level combination methods are proposed, which allow several matching schemes, built on different approaches and their modifications, to be combined into a single recognition device in such a way that their advantages are enhanced and their disadvantages are eliminated. To achieve the maximum benefit from combining, optimization methods are used. Methods of parallel combining, sequential cascading, and vertical junction have been formulated and investigated, and the principle of multi-level combining of these methods is also considered. Algorithms implementing the proposed combining methods have been developed, and software has been created for conducting experiments with the developed methods and tools. Quantitative estimates are obtained of the increase in efficiency of recognition modules resulting from the combining methods. The issue of optimizing reconfigurable devices described in hardware description languages is also considered, and a modification of the method of affine transformations is presented that allows the parallelization of loops that cannot be optimized by other methods.
In order to facilitate the practical application of the developed methods and tools, a web service using high-performance grid and cloud computing technologies is considered. The proposed methods for increasing the efficiency of the matching procedure can also be used to solve important problems in other fields of science, such as data mining and the analysis of DNA molecules.
Keywords: information security, signature, multi-pattern matching, FPGA, structural combining, efficiency, optimization, hardware description language.
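
Of the three recognition approaches the book compares, the Aho–Corasick automaton is the easiest to sketch in software. The Python below is illustrative only; the book's contribution concerns FPGA hardware circuits and methods for combining such recognizers, not this code.

```python
from collections import deque

def build(patterns):
    """Aho-Corasick automaton: trie plus failure links plus output sets."""
    nodes = [{"next": {}, "fail": 0, "out": []}]
    for pat in patterns:
        cur = 0
        for ch in pat:
            if ch not in nodes[cur]["next"]:
                nodes.append({"next": {}, "fail": 0, "out": []})
                nodes[cur]["next"][ch] = len(nodes) - 1
            cur = nodes[cur]["next"][ch]
        nodes[cur]["out"].append(pat)
    queue = deque(nodes[0]["next"].values())  # depth-1 nodes keep fail = 0
    while queue:
        u = queue.popleft()
        for ch, v in nodes[u]["next"].items():
            queue.append(v)
            f = nodes[u]["fail"]
            while f and ch not in nodes[f]["next"]:
                f = nodes[f]["fail"]
            cand = nodes[f]["next"].get(ch, 0)
            nodes[v]["fail"] = cand if cand != v else 0
            nodes[v]["out"] += nodes[nodes[v]["fail"]]["out"]
    return nodes

def search(nodes, text):
    """Scan text once, reporting (position, pattern) for every match."""
    cur, hits = 0, []
    for i, ch in enumerate(text):
        while cur and ch not in nodes[cur]["next"]:
            cur = nodes[cur]["fail"]
        cur = nodes[cur]["next"].get(ch, 0)
        for pat in nodes[cur]["out"]:
            hits.append((i - len(pat) + 1, pat))
    return hits

ac = build(["virus", "rus", "signature"])
print(search(ac, "antivirus signature"))
```

The single pass over the input regardless of how many patterns are loaded is what makes this family of automata attractive for line-rate signature matching.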

7. Barker, Richard. Bioscience - Lost in Translation? Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198737780.001.0001.

Abstract:
Medical innovation as it stands today is fundamentally unsustainable. There is a widening gap between what biomedical research promises and its current impact in terms of patient benefit and health system improvement. This book highlights the global problem, analyses underlying causes, and provides powerful prescriptions for change to close the gap. It contrasts progress in biomedicine with other areas of science and technology, such as information technology, in which there are faster, more reliable returns for society from scientific advance. It questions whether society is right to expect so much from biomedicine and why we have become accustomed to such poor returns. It focuses on four specific ‘gaps in translation’ between bioscience breakthroughs and ultimate patient benefit, and explains how unhelpful mental models and differing perceptions of value, risk, and uncertainty contribute to stifling progress. Specific examples are examined in which these bottlenecks have prevented promised progress (e.g. antibiotic-resistant infections), and others in which these barriers have been overcome, as a result of patient pressure (e.g. HIV treatment) or a sense of impending crisis (e.g. pandemic influenza).

Book chapters on the topic "Information bottleneck theory"

1. Hu, Chunping, Jianfeng Liu, Yilin Ma, and Jing Wang. "An Analysis on the Park-and-Ride Travel Selection from the Perspective of Bottleneck Theory." In Communications in Computer and Information Science, 241–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22418-8_34.

2. Carboni, Roberto. "Characterization and Modeling of Spin-Transfer Torque (STT) Magnetic Memory for Computing Applications." In Special Topics in Information Technology, 51–62. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-62476-7_5.

Abstract:
With the ubiquitous diffusion of mobile computing and the Internet of Things (IoT), the amount of data exchanged and processed over the internet is increasing every day, demanding secure data communication/storage and new computing primitives. Although computing systems based on microelectronics have steadily improved over the past 50 years thanks to aggressive technological scaling, their improvement is now hindered by excessive power consumption and the inherent performance limitations associated with the conventional computer architecture (the von Neumann bottleneck). In this scenario, emerging memory technologies are gaining interest thanks to their non-volatility and low-power/fast operation. In this chapter, experimental characterization and modeling of spin-transfer torque magnetic memory (STT-MRAM) are presented, with particular focus on cycling endurance and switching variability, which both present a challenge for STT-based memory applications. The switching variability in STT-MRAM is then exploited for hardware security and computing primitives, such as a true random number generator (TRNG) and a stochastic spiking neuron for neuromorphic and stochastic computing.

3. Pery, Andrew, Majid Rafiei, Michael Simon, and Wil M. P. van der Aalst. "Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities." In Lecture Notes in Business Information Processing, 395–407. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_29.

Abstract:
The premise of this paper is that compliance with Trustworthy AI governance best practices and regulatory frameworks is an inherently fragmented process spanning diverse organizational units, external stakeholders, and systems of record, resulting in process uncertainties and compliance gaps that may expose organizations to reputational and regulatory risks. Moreover, there are complexities associated with meeting the specific dimensions of Trustworthy AI best practices, such as data governance, conformance testing, quality assurance of AI model behaviors, transparency, accountability, and confidentiality requirements. These processes involve multiple steps, hand-offs, re-works, and human-in-the-loop oversight. In this paper, we demonstrate that process mining can provide a useful framework for gaining fact-based visibility into AI compliance process execution, surfacing compliance bottlenecks, and providing an automated approach to analyze, remediate, and monitor uncertainty in AI regulatory compliance processes.

4. Valori, Marcello, Vito Basile, Simone Pio Negri, Paolo Scalmati, Chiara Renghini, and Irene Fassi. "Towards the Automated Coverlay Assembly in FPCB Manufacturing: Concept and Preliminary Tests." In IFIP Advances in Information and Communication Technology, 36–50. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72632-4_3.

Abstract:
In modern electronics, flexible and rigid-flex PCBs are widely used due to their intrinsic versatility and performance, allowing designers to increase the available volume or to connect otherwise unconstrained components. Rigid-flex PCBs consist of rigid board portions with flexible interconnections and are commonly used in a wide variety of industrial applications. However, the assembly process of these devices still has some bottlenecks. Specifically, they require the application of cover layers (namely, coverlays) to provide insulation and protection of the flexible circuits. Due to the variability in planar shape and dimensions, coverlay application is still performed manually, requiring troublesome manipulation steps and resulting in unpredictable cycle times and precision. This paper aims to improve the industrial process currently performed by proposing an approach for the automation of Kapton coverlay manipulation and application. Since these products are commercially provided as a film with a protective layer to be removed, the peeling issue is addressed, representing a challenging step of the automated process; the results of a systematic series of tests, performed in order to validate the peeling strategy, are reported in the paper. The overall assembly strategy relies on the development of a customized multi-hole vacuum gripper, whose concept is presented and contextualized in the proposed assembly process by outlining a suitable workcell architecture.

5. Agbona, Afolabi, Prasad Peteti, Béla Teeken, Olamide Olaosebikan, Abolore Bello, Elizabeth Parkes, Ismail Rabbi, Lukas Mueller, Chiedozie Egesi, and Peter Kulakow. "Data Management in Multi-disciplinary African RTB Crop Breeding Programs." In Towards Responsible Plant Data Linkage: Data Challenges for Agricultural Research and Development, 85–103. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13276-6_5.

Abstract:
Quality phenotype and genotype data are important for the success of a breeding program. Like most programs, African breeding programs generate large multi-disciplinary phenotypic and genotypic datasets from several locations, which must be carefully managed through the use of an appropriate database management system (DBMS) in order to generate reliable and accurate information for breeding decisions. A DBMS is essential for data collection, storage, retrieval, validation, curation, and analysis in plant breeding programs, supporting the ultimate goal of increasing genetic gain. The International Institute of Tropical Agriculture (IITA), working on roots, tubers, and banana (RTB) crops such as cassava, yam, banana, and plantain, has deployed a FAIR-compliant (Findable, Accessible, Interoperable, Reusable) database, BREEDBASE. The functionalities of this database in data management and analysis have been instrumental in achieving breeding goals. Standard Operating Procedures (SOPs) for each breeding process have been developed to allow a cognitive walkthrough for users. This has further helped to increase usage and enhance the acceptability of the system. The wide acceptance gained among breeders in global cassava research programs has resulted in improvements in the precision and quality of genotype and phenotype data, and subsequent improvement in the achievement of breeding program goals. Several innovative gender-responsive approaches and initiatives have identified users and their preferences, which have informed improved customer and product profiles. A remaining bottleneck is the effective linking of data on preferences and social information of crop users with technical breeding data to make this process more effective.

6. Rocheva, Anna, Evgeni Varshaver, and Nataliya Ivanova. "Targeting on Social Networking Sites as Sampling Strategy for Online Migrant Surveys: The Challenge of Biases and Search for Possible Solutions." In IMISCOE Research Series, 35–57. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01319-5_3.

Abstract:
Choosing a methodology for migrant surveys is usually a complicated issue for a number of reasons, including the lack of information about sampling frames and migrants' status as a hard-to-reach population. The spread of social media usage among migrants has led researchers to look at the potential that Social Networking Sites (SNS) have for migration studies with respect to extracting and analyzing big data, conducting ethnography online, and reaching migrant respondents through SNS advertising. While the advantages of sampling migrants using SNS and surveying them online are clear, the drawbacks of this method—and, even more so, the potential solutions—constitute an almost unexplored field. In this chapter, we address one of the most significant challenges of using this strategy by exploring the biases it may present and the possible ways to resolve them. We use data from five SNS-based migrant surveys conducted during 2016–2018 with various groups of migrants and their adult children (second-generation migrants) from Central Asian and Transcaucasian countries in Russia (with N varying from 302 to 12,524). After describing the procedure of surveying migrants with targeting on SNS, we outline the major biases, delineate possible solutions, and demonstrate how some of them—namely weighting based on dropout analysis and external validation—can work using the material from one of the surveys. We conclude that, at present, the range of biases remains more considerable than our opportunities to adjust for them, and so it may be time to concede this and instead direct research efforts to exploring other approaches to data analysis and presentation that are more suitable for contexts of uncertainty—for example, fuzzy set theory and Bayesian statistics. This chapter contributes to the advancement of the emerging field of “tech-savvy” migration studies while signposting its bottlenecks and gains, as well as laying out directions for future research.

7. "A New Multi-Instance Learning Scheme for Scene Categorization Using Information Bottleneck Theory." In International Conference on Instrumentation, Measurement, Circuits and Systems (ICIMCS 2011), 799–803. ASME Press, 2011. http://dx.doi.org/10.1115/1.859902.paper179.

8. Balasundaram, Prashobh. "Effective Open-Source Performance Analysis Tools." In Handbook of Research on Computational Science and Engineering, 98–118. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-61350-116-0.ch005.

Abstract:
This chapter presents a study of leading open-source performance analysis tools for high-performance computing (HPC). The first section motivates the necessity of open-source tools for performance analysis. Background information on performance analysis of computational software is presented, discussing the various performance-critical components of computers. Metrics useful for analyzing common performance bottleneck patterns observed in computational codes are enumerated, followed by an evaluation of open-source tools for extracting these metrics. Each tool's features are analyzed from the perspective of an end user. Important factors are discussed, such as the portability of tuning applied after identification of performance bottlenecks, the hardware/software requirements of the tools, the need for additional metrics for novel hardware features, and the identification of these new metrics and techniques for measuring them. This chapter focuses on open-source tools since they are freely available to anyone at no cost.

9. Yang, Mei, Jian Kang, and Junyao Zhang. "Overcoming detection rate bottlenecks in new QoS violation with combining HMM and information fusion theory." In Electronics, Communications and Networks IV, 1759–63. CRC Press, 2015. http://dx.doi.org/10.1201/b18592-320.

10. Renuka Devi, K., and K. Balasamy. "Securing Clinical Information Through Multimedia Watermarking Techniques." In Advances in Healthcare Information Systems and Administration, 86–109. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-4580-8.ch005.

Abstract:
Today, in this technological world, available information can be easily copied, manipulated, and broadcast across different channels, and it therefore should be protected. In the medical field, the reliability and authenticity of e-medical data are among the most serious concerns. A patient's data might be utilized by intruders without the user's knowledge. Medical data are highly valuable for the purposes of diagnosis and treatment. Due to increased internet usage, copyright protection, content authentication, and identity theft are considered bottleneck issues for content proprietors. To resolve these issues, the authors focus on watermarking technology, which is used to improve the security of data and protect information from unauthorized access. Multimedia watermarking plays a predominant role in preserving information from unauthorized access. This chapter aims to provide a detailed review of various watermarking techniques, their applications for security purposes, and the e-verification of medical information.
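
As a deliberately naive instance of the techniques reviewed, spatial-domain LSB watermarking hides bits in an image's least-significant-bit plane. The sketch below is our own illustration; robust clinical schemes embed in transform domains (DCT/DWT) precisely because raw LSBs do not survive compression or manipulation.

```python
import numpy as np

def embed_lsb(image, bits):
    """Overwrite the first len(bits) least significant bits of an 8-bit image."""
    flat = image.ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n):
    """Read back the first n watermark bits."""
    return image.ravel()[:n] & 1

img = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stamped = embed_lsb(img, mark)
assert np.array_equal(extract_lsb(stamped, 8), mark)
```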

Conference papers on the topic "Information bottleneck theory"

1. Vera, Matias, Leonardo Rey Vega, and Pablo Piantanida. "Distributed cooperative information bottleneck." In 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017. http://dx.doi.org/10.1109/isit.2017.8006620.

2. Mahvari, Mohammad Mahdi, Mari Kobayashi, and Abdellatif Zaidi. "Scalable Vector Gaussian Information Bottleneck." In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9517720.

3. Pichler, Georg, and Gunther Koliander. "Information Bottleneck on General Alphabets." In 2018 IEEE International Symposium on Information Theory (ISIT). IEEE, 2018. http://dx.doi.org/10.1109/isit.2018.8437714.

4. Dikshtein, Michael, Nir Weinberger, and Shlomo Shamai Shitz. "The Compound Information Bottleneck Program." In 2022 IEEE International Symposium on Information Theory (ISIT). IEEE, 2022. http://dx.doi.org/10.1109/isit50566.2022.9834812.

5. Yang, Qianqian, Pablo Piantanida, and Deniz Gunduz. "The multi-layer information bottleneck problem." In 2017 IEEE Information Theory Workshop (ITW). IEEE, 2017. http://dx.doi.org/10.1109/itw.2017.8278006.

6. Dikshtein, Michael, Nir Weinberger, and Shlomo Shamai Shitz. "On Information Bottleneck for Gaussian Processes." In 2022 IEEE Information Theory Workshop (ITW). IEEE, 2022. http://dx.doi.org/10.1109/itw54588.2022.9965939.

7. Hsu, Hsiang, Shahab Asoodeh, Salman Salamatian, and Flavio P. Calmon. "Generalizing Bottleneck Problems." In 2018 IEEE International Symposium on Information Theory (ISIT). IEEE, 2018. http://dx.doi.org/10.1109/isit.2018.8437632.

8. Dikshtein, Michael, Or Ordentlich, and Shlomo Shamai Shitz. "The Double-Sided Information-Bottleneck Function." In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9517899.

9. Vera, Matias, Leonardo Rey Vega, and Pablo Piantanida. "The two-way cooperative Information Bottleneck." In 2015 IEEE International Symposium on Information Theory (ISIT). IEEE, 2015. http://dx.doi.org/10.1109/isit.2015.7282832.

10. Li, Binglin, Shuangqing Wei, Yue Wang, and Jian Yuan. "Chernoff information of bottleneck Gaussian trees." In 2016 IEEE International Symposium on Information Theory (ISIT). IEEE, 2016. http://dx.doi.org/10.1109/isit.2016.7541443.

Reports on the topic "Information bottleneck theory"

1. Tanksley, Steven D., and Dani Zamir. Development and Testing of a Method for the Systematic Discovery and Utilization of Novel QTLs in the Production of Improved Crop Varieties: Tomato as a Model System. United States Department of Agriculture, June 1995. http://dx.doi.org/10.32747/1995.7570570.bard.

Abstract:
Modern cultivated varieties carry only a small fraction of the variation present in the gene pool. The narrow genetic basis of modern crop plants is a result of genetic bottlenecks imposed during early domestication and modern plant breeding. The wild ancestors of most crop plants can still be found in their natural habitats, and germplasm centers have been established to collect and maintain this material. These wild and unadapted resources can potentially fuel crop plant improvement efforts for many years into the future (Tanksley and McCouch 1997). Unfortunately, scientists have been unable to exploit the majority of the genetic potential warehoused in germplasm repositories. This is especially true as regards the improvement of quantitative traits like yield and quality. One of the major problems is that much of the wild germplasm is inferior to modern cultivars for many of the quantitative traits that breeders would like to improve. Our research, focusing on the tomato as a model system, has shown that despite their inferior phenotypes, wild species are likely to contain QTLs that can substantially increase the yield and quality of elite cultivars (de Vicente and Tanksley 1992; Eshed and Zamir 1994; Eshed et al. 1996). Using novel population structures of introgression lines (ILs; Eshed and Zamir 1995) and advanced backcross lines (AB; Tanksley et al. 1996), we identified and introduced valuable QTLs from unadapted germplasm into elite processing tomato varieties. Populations involving crosses with five Lycopersicon species (L. pennellii (Eshed and Zamir 1994; Eshed et al. 1996; Eshed and Zamir 1996), L. hirsutum (Bernacchi et al. 1998), L. pimpinellifolium (Tanksley et al. 1996), L. parviflorum (unpub.), and L. peruvianum (Fulton et al. 1997)) have been field- and laboratory-tested in a number of locations around the world. QTLs from the wild parent were identified that improve one or more of the key quantitative traits for processing tomatoes (yield, brix, sugar and acid composition, and earliness) by as much as 10-30%. Nearly isogenic lines (QTL-NILs) have been generated for a subset of these QTLs. Each QTL-NIL contains the entire genome of the elite cultivated parent except for a segment (5-40 cM) of the wild species genome corresponding to a specific QTL. The genetic material and information developed in this program are presently used by American and Israeli seed companies for the breeding of superior varieties. We expect that in the next few years these varieties will make a difference in the marketplace.

2. Mapping the extent to which performance-based financing (PBF) programs reflect quality, informed choice and voluntarism and implications for family planning services: A review of indicators. Population Council, 2018. http://dx.doi.org/10.31899/sbsr2018.1009.

Abstract:
Expanding access to and use of voluntary family planning (FP) services is a well-established global health goal: it is a specific target under the Sustainable Development Goal (SDG) of good health and well-being, an integral component of Every Woman Every Child (EWEC), and the overall objective of the Family Planning 2020 (FP2020) partnership, among other initiatives.

One promising approach for achieving global voluntary FP goals is performance-based financing (PBF), which deploys financial incentives to the health system to improve service availability, utilization, and quality, as well as addressing some public financial management bottlenecks by directly targeting resources to facilities based on performance.

Setting global voluntary FP goals implies following a rights-based approach to family planning, which uses a set of standards and principles to guide program assessment, planning, implementation, monitoring, and evaluation that enables individuals and couples to decide freely and responsibly the number and spacing of their children, to have the information and services to do so, and to be treated equitably and free of discrimination.

While both PBF, which uses financial disbursements to incentivize health service delivery and quality, and rights-based programming have informed efforts to strengthen and scale FP services, there are gaps in understanding the linkages between PBF and a rights-based approach (RBA) to FP services. To address this gap, a review of PBF operations manuals was undertaken together with an analysis of PBF indicators relevant to FP services. This and another report (Mapping the extent to which performance-based financing (PBF) programs reflect quality, informed choice, and voluntarism and implications for family planning services: A review of PBF operational manuals) assess whether existing FP indicators are sensitive to the principles associated with an RBA.

3. Mapping the extent to which performance-based financing (PBF) programs reflect quality, informed choice, and voluntarism and implications for family planning services: A review of PBF operational manuals. Population Council, 2018. http://dx.doi.org/10.31899/sbsr2018.1010.

Abstract:
Expanding access to and use of voluntary family planning (FP) services is a well-established global health goal: it is a specific target under the Sustainable Development Goal (SDG) of good health and well-being, an integral component of Every Woman Every Child (EWEC), and the overall objective of the Family Planning 2020 (FP2020) partnership, among other initiatives.

One promising approach for achieving global voluntary FP goals is performance-based financing (PBF), which deploys financial incentives to the health system to improve service availability, utilization, and quality, as well as addressing some public financial management bottlenecks by directly targeting resources to facilities based on performance.

Setting global voluntary FP goals implies following a rights-based approach to family planning, which uses a set of standards and principles to guide program assessment, planning, implementation, monitoring, and evaluation that enables individuals and couples to decide freely and responsibly the number and spacing of their children, to have the information and services to do so, and to be treated equitably and free of discrimination.

While both PBF, which uses financial disbursements to incentivize health service delivery and quality, and rights-based programming have informed efforts to strengthen and scale FP services, there are gaps in understanding the linkages between PBF and a rights-based approach (RBA) to FP services. To address this gap, a review of performance-based financing (PBF) operations manuals was undertaken together with an analysis of PBF indicators relevant to FP services. This and another report (Mapping the extent to which performance-based financing (PBF) programs reflect quality, informed choice and voluntarism and implications for family planning services: A review of indicators) assess whether existing FP indicators are sensitive to the principles associated with an RBA.

4. African Open Science Platform Part 1: Landscape Study. Academy of Science of South Africa (ASSAf), 2019. http://dx.doi.org/10.17159/assaf.2019/0047.

Abstract:
This report maps the African landscape of Open Science, with a focus on Open Data as a subset of Open Science. Data to inform the landscape study were collected through a variety of methods, including surveys, desk research, engagement with a community of practice, networking with stakeholders, participation in conferences, case study presentations, and workshops. Although the majority of African countries (35 of 54) demonstrate commitment to science through investment in research and development (R&D), academies of science, ministries of science and technology, policies, recognition of research, and participation in the Science Granting Councils Initiative (SGCI), the following countries demonstrate the highest commitment and political willingness to invest in science: Botswana, Ethiopia, Kenya, Senegal, South Africa, Tanzania, and Uganda. In addition to existing policies in Science, Technology and Innovation (STI), the following countries have made progress towards Open Data policies: Botswana, Kenya, Madagascar, Mauritius, South Africa, and Uganda. Only two African countries (Kenya and South Africa) at this stage contribute 0.8% of their GDP (Gross Domestic Product) to R&D, which is the closest to the AU's (African Union's) suggested 1%. Countries such as Lesotho and Madagascar ranked at 0%, while the R&D expenditure for 24 African countries is unknown. In addition, science globally has become fully dependent on stable ICT (Information and Communication Technologies) infrastructure, which includes connectivity/bandwidth, high-performance computing facilities, and data services. This is especially applicable since countries globally find themselves in the midst of the 4th Industrial Revolution (4IR), which is not only "about" data but which "is" data. According to an article by Alan Marcus (2015) (Senior Director, Head of Information Technology and Telecommunications Industries, World Economic Forum), "At its core, data represents a post-industrial opportunity. Its uses have unprecedented complexity, velocity and global reach. As digital communications become ubiquitous, data will rule in a world where nearly everyone and everything is connected in real time. That will require a highly reliable, secure and available infrastructure at its core, and innovation at the edge." Every industry is affected by this revolution, including science. An important component of the digital transformation is trust: people must be able to trust that governments and all other industries (including the science sector) adequately handle and protect their data. This requires accountability on a global level, and digital industries must embrace the change and go for a higher standard of protection. "This will reassure consumers and citizens, benefitting the whole digital economy," says Marcus. A stable and secure information and communication technologies (ICT) infrastructure, currently provided by the National Research and Education Networks (NRENs), is key to advancing collaboration in science. The AfricaConnect project, having established connectivity between NRENs through AfricaConnect (2012–2014) and AfricaConnect2 (2016–2018), is planning to roll out AfricaConnect3 by the end of 2019.

The concern, however, is that selected African governments (with the exception of a few countries such as South Africa, Mozambique, and Ethiopia) have low awareness of the impact the Internet has today on all societal levels, of how much ICT (and the 4th Industrial Revolution) has affected research, and of the added value an NREN can bring to higher education and research in addressing the respective needs, which is far more complex than simply providing connectivity. Apart from more commitment to and investment in R&D, African governments, to become and remain part of the 4th Industrial Revolution, have no option other than to acknowledge and commit to the role NRENs play in advancing science towards addressing the SDGs (Sustainable Development Goals). For successful collaboration and direction, it is fundamental that policies within one country are aligned with one another, and alignment at the continental level is crucial for the future Pan-African Open Science Platform to be successful. Both the HIPSSA (Harmonization of ICT Policies in Sub-Saharan Africa) project and WATRA (the West Africa Telecommunications Regulators Assembly) have made progress towards the regulation of the telecom sector, and in particular of bottlenecks which curb the development of competition among ISPs. A study under HIPSSA identified potential bottlenecks in access, at an affordable price, to the international capacity of submarine cables, and suggested means and tools used by regulators to remedy them. Work on the recommended measures and on making them operational continues in collaboration with WATRA. In addition to sufficient bandwidth and connectivity, high-performance computing facilities and services in support of data sharing are also required. The South African National Integrated Cyberinfrastructure System (NICIS) has made great progress in planning and setting up a cyberinfrastructure ecosystem in support of collaborative science and data sharing. The regional Southern African Development Community (SADC) Cyberinfrastructure Framework provides a valuable roadmap towards high-speed Internet, developing human capacity and skills in ICT technologies, high-performance computing, and more. The following countries have been identified as having high-performance computing facilities, some as a result of the Square Kilometre Array (SKA) partnership: Botswana, Ghana, Kenya, Madagascar, Mozambique, Mauritius, Namibia, South Africa, Tunisia, and Zambia. More and more NRENs, especially the Level 6 NRENs (Algeria, Egypt, Kenya, South Africa, and recently Zambia), are exploring offering additional services, also in support of data sharing and transfer. The following NRENs already allow for running data-intensive applications and sharing of high-end computing assets, bio-modelling, and computation on high-performance/supercomputers: KENET (Kenya), TENET (South Africa), RENU (Uganda), ZAMREN (Zambia), EUN (Egypt), and ARN (Algeria). Fifteen higher education training institutions from eight African countries (Botswana, Benin, Kenya, Nigeria, Rwanda, South Africa, Sudan, and Tanzania) have been identified as offering formal courses on data science. In addition to formal degrees, a number of international short courses have been developed, and free international online courses are also available as an option to build capacity and integrate into curricula.

The small number of higher education or research-intensive institutions offering data science is, however, insufficient, and there is a desperate need for more training in data science. The CODATA-RDA Schools of Research Data Science aim at addressing the continental need for foundational data skills across all disciplines, along with training conducted by The Carpentries programme (specifically Data Carpentry). Thus far, CODATA-RDA schools in collaboration with AOSP, integrating content from Data Carpentry, were presented in Rwanda (in 2018) and, during 17-29 June 2019, in Ethiopia. Awareness regarding Open Science (including Open Data) is evident through the 12 Open Science-related Open Access/Open Data/Open Science declarations and agreements endorsed or signed by African governments; 200 Open Access journals from Africa registered on the Directory of Open Access Journals (DOAJ); 174 Open Access institutional research repositories registered on openDOAR (Directory of Open Access Repositories); 33 Open Access/Open Science policies registered on ROARMAP (Registry of Open Access Repository Mandates and Policies); 24 data repositories registered with the Registry of Data Repositories (re3data.org) (although the pilot project identified 66 research data repositories); and one data repository assigned the CoreTrustSeal. Although this is a start, far more needs to be done to align African data curation and research practices with global standards. Funding to conduct research remains a challenge. African researchers mostly fund their own research, and there are few incentives for them to make their research and accompanying datasets openly accessible. Funding and peer recognition, along with an enabling research environment conducive to research, are regarded as major incentives. The landscape report concludes with a number of concerns regarding sharing research data openly, as well as challenges in terms of Open Data policy, ICT infrastructure supportive of data sharing, capacity building, lack of skills, and the need for incentives. Although great progress has been made in terms of Open Science and Open Data practices, more awareness needs to be created and further advocacy efforts are required for buy-in from African governments. A federated African Open Science Platform (AOSP) will not only encourage more collaboration among researchers in addressing the SDGs, but will also benefit the many stakeholders identified as part of the pilot phase. The time is now for governments in Africa to acknowledge the important role of science in general, but specifically Open Science and Open Data, by developing and aligning the relevant policies, investing in an ICT infrastructure conducive to data sharing, committing funding to making NRENs financially sustainable, incentivising open research practices by scientists, and creating opportunities for more scientists and stakeholders across all disciplines to be trained in data management.