A selection of scholarly literature on the topic "Machine learning compositionality"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Machine learning compositionality".
Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the selected work in your preferred citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication in .pdf format and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Machine learning compositionality"
Lannelongue, K., M. De Milly, R. Marcucci, S. Selevarangame, A. Supizet, and A. Grincourt. "Compositional Grounded Language for Agent Communication in Reinforcement Learning Environment." Journal of Autonomous Intelligence 2, no. 3 (November 15, 2019): 1. http://dx.doi.org/10.32629/jai.v2i3.56.
Lannelongue, K., M. De Milly, R. Marcucci, S. Selevarangame, A. Supizet, and A. Grincourt. "Compositional Grounded Language for Agent Communication in Reinforcement Learning Environment." Journal of Autonomous Intelligence 2, no. 1 (May 9, 2022): 72. http://dx.doi.org/10.32629/jai.v2i1.56.
Pavlovic, Dusko. "Lambek pregroups are Frobenius spiders in preorders." Compositionality 4 (April 13, 2022): 1. http://dx.doi.org/10.32408/compositionality-4-1.
McNamee, Daniel C., Kimberly L. Stachenfeld, Matthew M. Botvinick, and Samuel J. Gershman. "Compositional Sequence Generation in the Entorhinal–Hippocampal System." Entropy 24, no. 12 (December 8, 2022): 1791. http://dx.doi.org/10.3390/e24121791.
Busato, Sebastiano, Max Gordon, Meenal Chaudhari, Ib Jensen, Turgut Akyol, Stig Andersen, and Cranos Williams. "Compositionality, sparsity, spurious heterogeneity, and other data-driven challenges for machine learning algorithms within plant microbiome studies." Current Opinion in Plant Biology 71 (February 2023): 102326. http://dx.doi.org/10.1016/j.pbi.2022.102326.
Harel, David, Assaf Marron, Ariel Rosenfeld, Moshe Vardi, and Gera Weiss. "Labor Division with Movable Walls: Composing Executable Specifications with Machine Learning and Search (Blue Sky Idea)." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9770–74. http://dx.doi.org/10.1609/aaai.v33i01.33019770.
Günther, Fritz, Luca Rinaldi, and Marco Marelli. "Vector-Space Models of Semantic Representation From a Cognitive Perspective: A Discussion of Common Misconceptions." Perspectives on Psychological Science 14, no. 6 (September 10, 2019): 1006–33. http://dx.doi.org/10.1177/1745691619861372.
Li, Yue, Bjørn Holmedal, Boyu Liu, Hongxiang Li, Linzhong Zhuang, Jishan Zhang, Qiang Du, and Jianxin Xie. "Towards high-throughput microstructure simulation in compositionally complex alloys via machine learning." Calphad 72 (March 2021): 102231. http://dx.doi.org/10.1016/j.calphad.2020.102231.
Fan, Angela, Jack Urbanek, Pratik Ringshia, Emily Dinan, Emma Qian, Siddharth Karamcheti, Shrimai Prabhumoye, et al. "Generating Interactive Worlds with Text." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1693–700. http://dx.doi.org/10.1609/aaai.v34i02.5532.
Nagy, Péter, Bálint Kaszás, István Csabai, Zoltán Hegedűs, Johann Michler, László Pethö, and Jenő Gubicza. "Machine Learning-Based Characterization of the Nanostructure in a Combinatorial Co-Cr-Fe-Ni Compositionally Complex Alloy Film." Nanomaterials 12, no. 24 (December 10, 2022): 4407. http://dx.doi.org/10.3390/nano12244407.
Повний текст джерелаДисертації з теми "Machine learning compositionality"
Lake, Brenden M. "Towards more human-like concept learning in machines : compositionality, causality, and learning-to-learn." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95856.
Повний текст джерелаCataloged from PDF version of thesis.
Includes bibliographical references (pages 211-220).
People can learn a new concept almost perfectly from just a single example, yet machine learning algorithms typically require hundreds or thousands of examples to perform similarly. People can also use their learned concepts in richer ways than conventional machine learning systems - for action, imagination, and explanation - suggesting that concepts are far more than a set of features, exemplars, or rules, the most popular forms of representation in machine learning and traditional models of concept learning. For those interested in better understanding this human ability, or in closing the gap between humans and machines, the key computational questions are the same: How do people learn new concepts from just one or a few examples? And how do people learn such abstract, rich, and flexible representations? An even greater puzzle arises by putting these two questions together: How do people learn such rich concepts from just one or a few examples? This thesis investigates concept learning as a form of Bayesian program induction, where learning involves selecting a structured procedure that best generates the examples from a category. I introduce a computational framework that uses the principles of compositionality, causality, and learning-to-learn to learn good programs from just one or a handful of examples of a new concept. New conceptual representations can be learned compositionally from pieces of related concepts, where the pieces reflect real part structure in the underlying causal process that generates category examples. This approach is evaluated on a number of natural concept learning tasks where humans and machines can be compared side-by-side. Chapter 2 introduces a large-scale data set of novel, simple visual concepts for studying concept learning from sparse data. People were asked to produce new examples of over 1600 novel categories, revealing consistent structure in the generative programs that people used.
Initial experiments also show that this structure is useful for one-shot classification. Chapter 3 introduces the computational framework, called Hierarchical Bayesian Program Learning, and Chapters 4 and 5 compare humans and machines on six tasks that cover a range of natural conceptual abilities. On a challenging one-shot classification task, the computational model achieves human-level performance while also outperforming several recent deep learning models. Visual "Turing test" experiments were used to compare humans and machines on more creative conceptual abilities, including generating new category examples, predicting latent causal structure, generating new concepts from related concepts, and freely generating new concepts. In each case, fewer than twenty-five percent of judges could reliably distinguish the human behavior from the machine behavior, showing that the model can generalize in ways similar to human performance. A range of comparisons with lesioned models and alternative modeling frameworks reveals that three key ingredients - compositionality, causality, and learning-to-learn - contribute to performance in each of the six tasks. This conclusion is further supported by the results of Chapter 6, where a computational model using only two of these three principles was evaluated on the one-shot learning of new spoken words. Learning programs with these ingredients is a promising route towards more human-like concept learning in machines.
by Brenden M. Lake.
Ph.D.
Asaadi, Shima. "Compositional Matrix-Space Models: Learning Methods and Evaluation." 2020. https://tud.qucosa.de/id/qucosa%3A72439.
Book chapters on the topic "Machine learning compositionality"
Fabi, Sarah, Sebastian Otte, Jonas Gregor Wiese, and Martin V. Butz. "Investigating Efficient Learning and Compositionality in Generative LSTM Networks." In Artificial Neural Networks and Machine Learning – ICANN 2020, 143–54. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61609-0_12.