Academic literature on the topic 'Machine learning compositionality'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Machine learning compositionality.'

Journal articles on the topic "Machine learning compositionality"

1

Lannelongue, K., M. De Milly, R. Marcucci, S. Selevarangame, A. Supizet, and A. Grincourt. "Compositional Grounded Language for Agent Communication in Reinforcement Learning Environment." Journal of Autonomous Intelligence 2, no. 3 (November 15, 2019): 1. http://dx.doi.org/10.32629/jai.v2i3.56.

Full text
Abstract:
In a context of constant evolution of technologies for scientific, economic, and social purposes, Artificial Intelligence (AI) and the Internet of Things (IoT) have seen significant progress over the past few years. As much as human-machine interaction is needed and task automation is undeniable, it is important that electronic devices (computers, cars, sensors, etc.) communicate with humans as well as they communicate with one another. The emergence of automated training and neural networks marked the beginning of a new conversational capability for machines, illustrated by chatbots. Nonetheless, this technology alone is not sufficient, as chatbots often give inappropriate or unrelated answers, usually when the subject changes. To improve on it, the problem of defining a communication language constructed from scratch is addressed, with the intention of giving machines the possibility to create a new, adapted exchange channel between them. By equipping each machine with a sound-emitting system that accompanies each individual or collective goal accomplishment, the convergence toward a common "language" is analyzed, exactly as it is supposed to have happened for humans in the past. By constraining the language to satisfy two main properties of human language, groundedness and compositionality, a rapidly converging evolution of syntactic communication is obtained, opening the way to a meaningful language between machines.
2

Lannelongue, K., M. De Milly, R. Marcucci, S. Selevarangame, A. Supizet, and A. Grincourt. "Compositional Grounded Language for Agent Communication in Reinforcement Learning Environment." Journal of Autonomous Intelligence 2, no. 1 (May 9, 2022): 72. http://dx.doi.org/10.32629/jai.v2i1.56.

Full text
Abstract:
In a context of constant evolution of technologies for scientific, economic, and social purposes, Artificial Intelligence (AI) and the Internet of Things (IoT) have seen significant progress over the past few years. As much as human-machine interaction is needed and task automation is undeniable, it is important that electronic devices (computers, cars, sensors, etc.) communicate with humans as well as they communicate with one another. The emergence of automated training and neural networks marked the beginning of a new conversational capability for machines, illustrated by chatbots. Nonetheless, this technology alone is not sufficient, as chatbots often give inappropriate or unrelated answers, usually when the subject changes. To improve on it, the problem of defining a communication language constructed from scratch is addressed, with the intention of giving machines the possibility to create a new, adapted exchange channel between them. By equipping each machine with a sound-emitting system that accompanies each individual or collective goal accomplishment, the convergence toward a common "language" is analyzed, exactly as it is supposed to have happened for humans in the past. By constraining the language to satisfy two main properties of human language, groundedness and compositionality, a rapidly converging evolution of syntactic communication is obtained, opening the way to a meaningful language between machines.
3

Pavlovic, Dusko. "Lambek pregroups are Frobenius spiders in preorders." Compositionality 4 (April 13, 2022): 1. http://dx.doi.org/10.32408/compositionality-4-1.

Full text
Abstract:
"Spider" is a nickname of special Frobenius algebras, a fundamental structure from mathematics, physics, and computer science. Pregroups are a fundamental structure from linguistics. Pregroups and spiders have been used together in natural language processing: one for syntax, the other for semantics. It turns out that pregroups themselves can be characterized as pointed spiders in the category of preordered relations, where they naturally arise from grammars. The other way around, preordered spider algebras in general can be characterized as unions of pregroups. This extends the characterization of relational spider algebras as disjoint unions of groups. The compositional framework that emerged with the results suggests new ways to understand and apply the basis structures in machine learning and data analysis.
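The pregroup reductions this abstract builds on can be illustrated with a standard textbook example (not taken from the article itself): assign Lambek pregroup types to a simple transitive sentence and contract adjoint pairs, with $n$ the noun type and $s$ the sentence type.

```latex
% Pregroup axioms: every type a has a left adjoint a^l and a right
% adjoint a^r satisfying
%   a^l a \le 1 \le a\,a^l   and   a\,a^r \le 1 \le a^r a.
% Typing "John likes Mary" as  n, (n^r s n^l), n  and contracting:
\[
  \underbrace{n}_{\text{John}}\;
  \underbrace{(n^r\, s\, n^l)}_{\text{likes}}\;
  \underbrace{n}_{\text{Mary}}
  \;=\; (n\, n^r)\, s\, (n^l\, n)
  \;\le\; 1 \cdot s \cdot 1 \;=\; s
\]
```

The reduction to the sentence type $s$ is the grammaticality check; the article's contribution is characterizing such pregroups as Frobenius "spiders" in preordered relations.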
4

McNamee, Daniel C., Kimberly L. Stachenfeld, Matthew M. Botvinick, and Samuel J. Gershman. "Compositional Sequence Generation in the Entorhinal–Hippocampal System." Entropy 24, no. 12 (December 8, 2022): 1791. http://dx.doi.org/10.3390/e24121791.

Full text
Abstract:
Neurons in the medial entorhinal cortex exhibit multiple, periodically organized, firing fields which collectively appear to form an internal representation of space. Neuroimaging data suggest that this grid coding is also present in other cortical areas such as the prefrontal cortex, indicating that it may be a general principle of neural functionality in the brain. In a recent analysis through the lens of dynamical systems theory, we showed how grid coding can lead to the generation of a diversity of empirically observed sequential reactivations of hippocampal place cells corresponding to traversals of cognitive maps. Here, we extend this sequence generation model by describing how the synthesis of multiple dynamical systems can support compositional cognitive computations. To empirically validate the model, we simulate two experiments demonstrating compositionality in space or in time during sequence generation. Finally, we describe several neural network architectures supporting various types of compositionality based on grid coding and highlight connections to recent work in machine learning leveraging analogous techniques.
5

Busato, Sebastiano, Max Gordon, Meenal Chaudhari, Ib Jensen, Turgut Akyol, Stig Andersen, and Cranos Williams. "Compositionality, sparsity, spurious heterogeneity, and other data-driven challenges for machine learning algorithms within plant microbiome studies." Current Opinion in Plant Biology 71 (February 2023): 102326. http://dx.doi.org/10.1016/j.pbi.2022.102326.

Full text
6

Harel, David, Assaf Marron, Ariel Rosenfeld, Moshe Vardi, and Gera Weiss. "Labor Division with Movable Walls: Composing Executable Specifications with Machine Learning and Search (Blue Sky Idea)." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9770–74. http://dx.doi.org/10.1609/aaai.v33i01.33019770.

Full text
Abstract:
Artificial intelligence (AI) techniques, including, e.g., machine learning, multi-agent collaboration, planning, and heuristic search, are emerging as ever-stronger tools for solving hard problems in real-world applications. Executable specification techniques (ES), including, e.g., Statecharts and scenario-based programming, is a promising development approach, offering intuitiveness, ease of enhancement, compositionality, and amenability to formal analysis. We propose an approach for integrating AI and ES techniques in developing complex intelligent systems, which can greatly simplify agile/spiral development and maintenance processes. The approach calls for automated detection of whether certain goals and sub-goals are met; a clear division between sub-goals solved with AI and those solved with ES; compositional and incremental addition of AI-based or ES-based components, each focusing on a particular gap between a current capability and a well-stated goal; and, iterative refinement of sub-goals solved with AI into smaller sub-sub-goals where some are solved with ES, and some with AI. We describe the principles of the approach and its advantages, as well as key challenges and suggestions for how to tackle them.
7

Günther, Fritz, Luca Rinaldi, and Marco Marelli. "Vector-Space Models of Semantic Representation From a Cognitive Perspective: A Discussion of Common Misconceptions." Perspectives on Psychological Science 14, no. 6 (September 10, 2019): 1006–33. http://dx.doi.org/10.1177/1745691619861372.

Full text
Abstract:
Models that represent meaning as high-dimensional numerical vectors—such as latent semantic analysis (LSA), hyperspace analogue to language (HAL), bound encoding of the aggregate language environment (BEAGLE), topic models, global vectors (GloVe), and word2vec—have been introduced as extremely powerful machine-learning proxies for human semantic representations and have seen an explosive rise in popularity over the past 2 decades. However, despite their considerable advancements and spread in the cognitive sciences, one can observe problems associated with the adequate presentation and understanding of some of their features. Indeed, when these models are examined from a cognitive perspective, a number of unfounded arguments tend to appear in the psychological literature. In this article, we review the most common of these arguments and discuss (a) what exactly these models represent at the implementational level and their plausibility as a cognitive theory, (b) how they deal with various aspects of meaning such as polysemy or compositionality, and (c) how they relate to the debate on embodied and grounded cognition. We identify common misconceptions that arise as a result of incomplete descriptions, outdated arguments, and unclear distinctions between theory and implementation of the models. We clarify and amend these points to provide a theoretical basis for future research and discussions on vector models of semantic representation.
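A minimal sketch of the vector-space approach this article reviews: relatedness is measured by cosine similarity between word vectors, and a phrase can be composed additively from its constituent vectors. The toy embeddings below are invented for illustration; real models (word2vec, GloVe) learn hundreds of dimensions from corpora, and additive composition is only one of the composition methods debated in this literature.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional embeddings (made up for the example).
w = {
    "black": [0.9, 0.1, 0.0, 0.2],
    "cat":   [0.1, 0.8, 0.3, 0.0],
    "dog":   [0.2, 0.7, 0.4, 0.1],
}

# Additive composition: the phrase vector is the sum of its word vectors.
phrase = [a + b for a, b in zip(w["black"], w["cat"])]

print(round(cosine(w["cat"], w["dog"]), 3))  # related words score high
print(round(cosine(phrase, w["cat"]), 3))    # phrase stays close to its head noun
```

Note that addition is commutative, so this composition ignores word order; that limitation motivates the matrix-space models discussed elsewhere in this selection.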
8

Li, Yue, Bjørn Holmedal, Boyu Liu, Hongxiang Li, Linzhong Zhuang, Jishan Zhang, Qiang Du, and Jianxin Xie. "Towards high-throughput microstructure simulation in compositionally complex alloys via machine learning." Calphad 72 (March 2021): 102231. http://dx.doi.org/10.1016/j.calphad.2020.102231.

Full text
9

Fan, Angela, Jack Urbanek, Pratik Ringshia, Emily Dinan, Emma Qian, Siddharth Karamcheti, Shrimai Prabhumoye, et al. "Generating Interactive Worlds with Text." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1693–700. http://dx.doi.org/10.1609/aaai.v34i02.5532.

Full text
Abstract:
Procedurally generating cohesive and interesting game environments is challenging and time-consuming. In order for the relationships between the game elements to be natural, common-sense has to be encoded into arrangement of the elements. In this work, we investigate a machine learning approach for world creation using content from the multi-player text adventure game environment LIGHT (Urbanek et al. 2019). We introduce neural network based models to compositionally arrange locations, characters, and objects into a coherent whole. In addition to creating worlds based on existing elements, our models can generate new game content. Humans can also leverage our models to interactively aid in worldbuilding. We show that the game environments created with our approach are cohesive, diverse, and preferred by human evaluators compared to other machine learning based world construction algorithms.
10

Nagy, Péter, Bálint Kaszás, István Csabai, Zoltán Hegedűs, Johann Michler, László Pethö, and Jenő Gubicza. "Machine Learning-Based Characterization of the Nanostructure in a Combinatorial Co-Cr-Fe-Ni Compositionally Complex Alloy Film." Nanomaterials 12, no. 24 (December 10, 2022): 4407. http://dx.doi.org/10.3390/nano12244407.

Full text
Abstract:
A novel artificial intelligence-assisted evaluation of the X-ray diffraction (XRD) peak profiles was elaborated for the characterization of the nanocrystallite microstructure in a combinatorial Co-Cr-Fe-Ni compositionally complex alloy (CCA) film. The layer was produced by a multiple beam sputtering physical vapor deposition (PVD) technique on a Si single crystal substrate with a diameter of about 10 cm. This new processing technique is able to produce combinatorial CCA films where the elemental concentrations vary in a wide range on the disk surface. The most important benefit of the combinatorial sample is that it can be used for the study of the correlation between the chemical composition and the microstructure on a single specimen. The microstructure can be characterized quickly at many points on the disk surface using synchrotron XRD. However, the evaluation of the diffraction patterns for the crystallite size and the density of lattice defects (e.g., dislocations and twin faults) using X-ray line profile analysis (XLPA) is not possible in a reasonable amount of time due to the large number (hundreds) of XRD patterns. In the present study, a machine learning-based X-ray line profile analysis (ML-XLPA) was developed and tested on the combinatorial Co-Cr-Fe-Ni film. The new method is able to produce maps of the characteristic parameters of the nanostructure (crystallite size, defect densities) on the disk surface very quickly. Since the novel technique was developed and tested only for face-centered cubic (FCC) structures, additional work is required for the extension of its applicability to other materials. Nevertheless, to the knowledge of the authors, this is the first ML-XLPA evaluation method in the literature, which can pave the way for further development of this methodology.

Dissertations / Theses on the topic "Machine learning compositionality"

1

Lake, Brenden M. "Towards more human-like concept learning in machines : compositionality, causality, and learning-to-learn." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95856.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 211-220).
People can learn a new concept almost perfectly from just a single example, yet machine learning algorithms typically require hundreds or thousands of examples to perform similarly. People can also use their learned concepts in richer ways than conventional machine learning systems, for action, imagination, and explanation, suggesting that concepts are far more than a set of features, exemplars, or rules, the most popular forms of representation in machine learning and traditional models of concept learning. For those interested in better understanding this human ability, or in closing the gap between humans and machines, the key computational questions are the same: How do people learn new concepts from just one or a few examples? And how do people learn such abstract, rich, and flexible representations? An even greater puzzle arises by putting these two questions together: How do people learn such rich concepts from just one or a few examples? This thesis investigates concept learning as a form of Bayesian program induction, where learning involves selecting a structured procedure that best generates the examples from a category. I introduce a computational framework that utilizes the principles of compositionality, causality, and learning-to-learn to learn good programs from just one or a handful of examples of a new concept. New conceptual representations can be learned compositionally from pieces of related concepts, where the pieces reflect real part structure in the underlying causal process that generates category examples. This approach is evaluated on a number of natural concept learning tasks where humans and machines can be compared side-by-side. Chapter 2 introduces a large-scale data set of novel, simple visual concepts for studying concept learning from sparse data. People were asked to produce new examples of over 1600 novel categories, revealing consistent structure in the generative programs that people used.
Initial experiments also show that this structure is useful for one-shot classification. Chapter 3 introduces the computational framework called Hierarchical Bayesian Program Learning, and Chapters 4 and 5 compare humans and machines on six tasks that cover a range of natural conceptual abilities. On a challenging one-shot classification task, the computational model achieves human-level performance while also outperforming several recent deep learning models. Visual "Turing test" experiments were used to compare humans and machines on more creative conceptual abilities, including generating new category examples, predicting latent causal structure, generating new concepts from related concepts, and freely generating new concepts. In each case, fewer than twenty-five percent of judges could reliably distinguish the human behavior from the machine behavior, showing that the model can generalize in ways similar to human performance. A range of comparisons with lesioned models and alternative modeling frameworks reveal that three key ingredients - compositionality, causality, and learning-to-learn - contribute to performance in each of the six tasks. This conclusion is further supported by the results of Chapter 6, where a computational model using only two of these three principles was evaluated on the one-shot learning of new spoken words. Learning programs with these ingredients is a promising route towards more humanlike concept learning in machines.
2

Asaadi, Shima. "Compositional Matrix-Space Models: Learning Methods and Evaluation." 2020. https://tud.qucosa.de/id/qucosa%3A72439.

Full text
Abstract:
There has been a lot of research on machine-readable representations of words for natural language processing (NLP). One mainstream paradigm for the word meaning representation comprises vector-space models obtained from the distributional information of words in the text. Machine learning techniques have been proposed to produce such word representations for computational linguistic tasks. Moreover, the representation of multi-word structures, such as phrases, in vector space can arguably be achieved by composing the distributional representation of the constituent words. To this end, mathematical operations have been introduced as composition methods in vector space. An alternative approach to word representation and semantic compositionality in natural language has been compositional matrix-space models. In this thesis, two research directions are considered. In the first, considering compositional matrix-space models, we explore word meaning representations and semantic composition of multi-word structures in matrix space. The main motivation for working on these models is that they have shown superiority over vector-space models regarding several properties. The most important property is that the composition operation in matrix-space models can be defined as standard matrix multiplication; in contrast to common vector space composition operations, this is sensitive to word order in language. We design and develop machine learning techniques that induce continuous and numeric representations of natural language in matrix space. The main goal in introducing representation models is enabling NLP systems to understand natural language to solve multiple related tasks. Therefore, first, different supervised machine learning approaches to train word meaning representations and capture the compositionality of multi-word structures using the matrix multiplication of words are proposed. 
The performance of matrix representation models learned by machine learning techniques is investigated in solving two NLP tasks, namely, sentiment analysis and compositionality detection. Then, learning techniques for learning matrix-space models are proposed that introduce generic task-agnostic representation models, also called word matrix embeddings. In these techniques, word matrices are trained using the distributional information of words in a given text corpus. We show the effectiveness of these models in the compositional representation of multi-word structures in natural language. The second research direction in this thesis explores effective approaches for evaluating the capability of semantic composition methods in capturing the meaning representation of compositional multi-word structures, such as phrases. A common evaluation approach is examining the ability of the methods in capturing the semantic relatedness between linguistic units. The underlying assumption is that the more accurately a method of semantic composition can determine the representation of a phrase, the more accurately it can determine the relatedness of that phrase with other phrases. To apply the semantic relatedness approach, gold standard datasets have been introduced. In this thesis, we identify the limitations of the existing datasets and develop a new gold standard semantic relatedness dataset, which addresses the issues of the existing datasets. The proposed dataset allows us to evaluate meaning composition in vector- and matrix-space models.
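The key property claimed for matrix-space models in this abstract, that composition by matrix multiplication is sensitive to word order, can be checked directly. The sketch below uses tiny made-up 2x2 "word matrices" (real models would be learned from corpora); it only demonstrates the non-commutativity that distinguishes this composition operation from vector addition.

```python
def matmul(A, B):
    """Multiply two 2x2 matrices represented as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical word matrices (invented for the example).
not_ = [[0.0, 1.0], [1.0, 0.0]]   # "not"
very = [[2.0, 0.0], [0.0, 0.5]]   # "very"

# Composition is matrix multiplication, so word order matters:
print(matmul(not_, very))  # "not very"
print(matmul(very, not_))  # "very not", a different matrix
```

By contrast, composing the same two words with element-wise addition would give the same result in either order, which is why matrix-space models are argued to capture word order where additive vector composition cannot.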

Book chapters on the topic "Machine learning compositionality"

1

Fabi, Sarah, Sebastian Otte, Jonas Gregor Wiese, and Martin V. Butz. "Investigating Efficient Learning and Compositionality in Generative LSTM Networks." In Artificial Neural Networks and Machine Learning – ICANN 2020, 143–54. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61609-0_12.

Full text