Journal articles on the topic 'Machine learning compositionality'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 24 journal articles for your research on the topic 'Machine learning compositionality.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lannelongue, K., M. De Milly, R. Marcucci, S. Selevarangame, A. Supizet, and A. Grincourt. "Compositional Grounded Language for Agent Communication in Reinforcement Learning Environment." Journal of Autonomous Intelligence 2, no. 3 (November 15, 2019): 1. http://dx.doi.org/10.32629/jai.v2i3.56.

Full text
Abstract:
In a context of constant evolution of technologies for scientific, economic and social purposes, Artificial Intelligence (AI) and the Internet of Things (IoT) have seen significant progress over the past few years. As much as human-machine interactions are needed and task automation is undeniable, it is important that electronic devices (computers, cars, sensors…) can also communicate with humans just as well as they communicate with one another. The emergence of automated training and neural networks marked the beginning of a new conversational capability for machines, illustrated by chat-bots. Nonetheless, this technology is not sufficient on its own, as chat-bots often give inappropriate or unrelated answers, usually when the subject changes. To improve on this, the problem of defining a communication language constructed from scratch is addressed, with the intention of giving machines the possibility to create a new and adapted exchange channel between them. Equipping each machine with a sound-emitting system that accompanies each individual or collective goal accomplishment, the convergence toward a common "language" is analyzed, exactly as it is supposed to have happened for humans in the past. By constraining the language to satisfy the two main human-language properties of groundedness and compositionality, a rapidly converging evolution of syntactic communication is obtained, opening the way to a meaningful language between machines.
APA, Harvard, Vancouver, ISO, and other styles
2

Lannelongue, K., M. De Milly, R. Marcucci, S. Selevarangame, A. Supizet, and A. Grincourt. "Compositional Grounded Language for Agent Communication in Reinforcement Learning Environment." Journal of Autonomous Intelligence 2, no. 1 (May 9, 2022): 72. http://dx.doi.org/10.32629/jai.v2i1.56.

Full text
Abstract:
In a context of constant evolution of technologies for scientific, economic and social purposes, Artificial Intelligence (AI) and the Internet of Things (IoT) have seen significant progress over the past few years. As much as human-machine interactions are needed and task automation is undeniable, it is important that electronic devices (computers, cars, sensors…) can also communicate with humans just as well as they communicate with one another. The emergence of automated training and neural networks marked the beginning of a new conversational capability for machines, illustrated by chat-bots. Nonetheless, this technology is not sufficient on its own, as chat-bots often give inappropriate or unrelated answers, usually when the subject changes. To improve on this, the problem of defining a communication language constructed from scratch is addressed, with the intention of giving machines the possibility to create a new and adapted exchange channel between them. Equipping each machine with a sound-emitting system that accompanies each individual or collective goal accomplishment, the convergence toward a common "language" is analyzed, exactly as it is supposed to have happened for humans in the past. By constraining the language to satisfy the two main human-language properties of groundedness and compositionality, a rapidly converging evolution of syntactic communication is obtained, opening the way to a meaningful language between machines.
APA, Harvard, Vancouver, ISO, and other styles
3

Pavlovic, Dusko. "Lambek pregroups are Frobenius spiders in preorders." Compositionality 4 (April 13, 2022): 1. http://dx.doi.org/10.32408/compositionality-4-1.

Full text
Abstract:
"Spider" is a nickname of special Frobenius algebras, a fundamental structure from mathematics, physics, and computer science. Pregroups are a fundamental structure from linguistics. Pregroups and spiders have been used together in natural language processing: one for syntax, the other for semantics. It turns out that pregroups themselves can be characterized as pointed spiders in the category of preordered relations, where they naturally arise from grammars. The other way around, preordered spider algebras in general can be characterized as unions of pregroups. This extends the characterization of relational spider algebras as disjoint unions of groups. The compositional framework that emerged with the results suggests new ways to understand and apply the basis structures in machine learning and data analysis.
APA, Harvard, Vancouver, ISO, and other styles
4

McNamee, Daniel C., Kimberly L. Stachenfeld, Matthew M. Botvinick, and Samuel J. Gershman. "Compositional Sequence Generation in the Entorhinal–Hippocampal System." Entropy 24, no. 12 (December 8, 2022): 1791. http://dx.doi.org/10.3390/e24121791.

Full text
Abstract:
Neurons in the medial entorhinal cortex exhibit multiple, periodically organized, firing fields which collectively appear to form an internal representation of space. Neuroimaging data suggest that this grid coding is also present in other cortical areas such as the prefrontal cortex, indicating that it may be a general principle of neural functionality in the brain. In a recent analysis through the lens of dynamical systems theory, we showed how grid coding can lead to the generation of a diversity of empirically observed sequential reactivations of hippocampal place cells corresponding to traversals of cognitive maps. Here, we extend this sequence generation model by describing how the synthesis of multiple dynamical systems can support compositional cognitive computations. To empirically validate the model, we simulate two experiments demonstrating compositionality in space or in time during sequence generation. Finally, we describe several neural network architectures supporting various types of compositionality based on grid coding and highlight connections to recent work in machine learning leveraging analogous techniques.
APA, Harvard, Vancouver, ISO, and other styles
5

Busato, Sebastiano, Max Gordon, Meenal Chaudhari, Ib Jensen, Turgut Akyol, Stig Andersen, and Cranos Williams. "Compositionality, sparsity, spurious heterogeneity, and other data-driven challenges for machine learning algorithms within plant microbiome studies." Current Opinion in Plant Biology 71 (February 2023): 102326. http://dx.doi.org/10.1016/j.pbi.2022.102326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Harel, David, Assaf Marron, Ariel Rosenfeld, Moshe Vardi, and Gera Weiss. "Labor Division with Movable Walls: Composing Executable Specifications with Machine Learning and Search (Blue Sky Idea)." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9770–74. http://dx.doi.org/10.1609/aaai.v33i01.33019770.

Full text
Abstract:
Artificial intelligence (AI) techniques, including, e.g., machine learning, multi-agent collaboration, planning, and heuristic search, are emerging as ever-stronger tools for solving hard problems in real-world applications. Executable specification techniques (ES), including, e.g., Statecharts and scenario-based programming, are a promising development approach, offering intuitiveness, ease of enhancement, compositionality, and amenability to formal analysis. We propose an approach for integrating AI and ES techniques in developing complex intelligent systems, which can greatly simplify agile/spiral development and maintenance processes. The approach calls for automated detection of whether certain goals and sub-goals are met; a clear division between sub-goals solved with AI and those solved with ES; compositional and incremental addition of AI-based or ES-based components, each focusing on a particular gap between a current capability and a well-stated goal; and iterative refinement of sub-goals solved with AI into smaller sub-sub-goals where some are solved with ES, and some with AI. We describe the principles of the approach and its advantages, as well as key challenges and suggestions for how to tackle them.
APA, Harvard, Vancouver, ISO, and other styles
7

Günther, Fritz, Luca Rinaldi, and Marco Marelli. "Vector-Space Models of Semantic Representation From a Cognitive Perspective: A Discussion of Common Misconceptions." Perspectives on Psychological Science 14, no. 6 (September 10, 2019): 1006–33. http://dx.doi.org/10.1177/1745691619861372.

Full text
Abstract:
Models that represent meaning as high-dimensional numerical vectors—such as latent semantic analysis (LSA), hyperspace analogue to language (HAL), bound encoding of the aggregate language environment (BEAGLE), topic models, global vectors (GloVe), and word2vec—have been introduced as extremely powerful machine-learning proxies for human semantic representations and have seen an explosive rise in popularity over the past 2 decades. However, despite their considerable advancements and spread in the cognitive sciences, one can observe problems associated with the adequate presentation and understanding of some of their features. Indeed, when these models are examined from a cognitive perspective, a number of unfounded arguments tend to appear in the psychological literature. In this article, we review the most common of these arguments and discuss (a) what exactly these models represent at the implementational level and their plausibility as a cognitive theory, (b) how they deal with various aspects of meaning such as polysemy or compositionality, and (c) how they relate to the debate on embodied and grounded cognition. We identify common misconceptions that arise as a result of incomplete descriptions, outdated arguments, and unclear distinctions between theory and implementation of the models. We clarify and amend these points to provide a theoretical basis for future research and discussions on vector models of semantic representation.
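To make the compositionality discussion concrete, one scheme commonly considered in this literature is simple additive composition of word vectors. The toy sketch below (in Python, with invented three-dimensional vectors rather than vectors from any of the models named above) composes a phrase vector by summation and compares it to a candidate word by cosine similarity.

import numpy as np

# Toy three-dimensional "word vectors" (invented values, purely for illustration).
vectors = {
    "black": np.array([0.9, 0.1, 0.0]),
    "bird":  np.array([0.1, 0.8, 0.3]),
    "crow":  np.array([0.6, 0.7, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

phrase = vectors["black"] + vectors["bird"]   # additive composition of the phrase
print(cosine(phrase, vectors["crow"]))        # similarity of "black bird" to "crow"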
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Yue, Bjørn Holmedal, Boyu Liu, Hongxiang Li, Linzhong Zhuang, Jishan Zhang, Qiang Du, and Jianxin Xie. "Towards high-throughput microstructure simulation in compositionally complex alloys via machine learning." Calphad 72 (March 2021): 102231. http://dx.doi.org/10.1016/j.calphad.2020.102231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Fan, Angela, Jack Urbanek, Pratik Ringshia, Emily Dinan, Emma Qian, Siddharth Karamcheti, Shrimai Prabhumoye, et al. "Generating Interactive Worlds with Text." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1693–700. http://dx.doi.org/10.1609/aaai.v34i02.5532.

Full text
Abstract:
Procedurally generating cohesive and interesting game environments is challenging and time-consuming. In order for the relationships between the game elements to be natural, common sense has to be encoded into the arrangement of the elements. In this work, we investigate a machine learning approach for world creation using content from the multi-player text adventure game environment LIGHT (Urbanek et al. 2019). We introduce neural network based models to compositionally arrange locations, characters, and objects into a coherent whole. In addition to creating worlds based on existing elements, our models can generate new game content. Humans can also leverage our models to interactively aid in worldbuilding. We show that the game environments created with our approach are cohesive, diverse, and preferred by human evaluators compared to other machine learning based world construction algorithms.
APA, Harvard, Vancouver, ISO, and other styles
10

Nagy, Péter, Bálint Kaszás, István Csabai, Zoltán Hegedűs, Johann Michler, László Pethö, and Jenő Gubicza. "Machine Learning-Based Characterization of the Nanostructure in a Combinatorial Co-Cr-Fe-Ni Compositionally Complex Alloy Film." Nanomaterials 12, no. 24 (December 10, 2022): 4407. http://dx.doi.org/10.3390/nano12244407.

Full text
Abstract:
A novel artificial intelligence-assisted evaluation of the X-ray diffraction (XRD) peak profiles was elaborated for the characterization of the nanocrystallite microstructure in a combinatorial Co-Cr-Fe-Ni compositionally complex alloy (CCA) film. The layer was produced by a multiple beam sputtering physical vapor deposition (PVD) technique on a Si single crystal substrate with a diameter of about 10 cm. This new processing technique is able to produce combinatorial CCA films where the elemental concentrations vary in a wide range on the disk surface. The most important benefit of the combinatorial sample is that it can be used for the study of the correlation between the chemical composition and the microstructure on a single specimen. The microstructure can be characterized quickly at many points on the disk surface using synchrotron XRD. However, the evaluation of the diffraction patterns for the crystallite size and the density of lattice defects (e.g., dislocations and twin faults) using X-ray line profile analysis (XLPA) is not possible in a reasonable amount of time due to the large number (hundreds) of XRD patterns. In the present study, a machine learning-based X-ray line profile analysis (ML-XLPA) was developed and tested on the combinatorial Co-Cr-Fe-Ni film. The new method is able to produce maps of the characteristic parameters of the nanostructure (crystallite size, defect densities) on the disk surface very quickly. Since the novel technique was developed and tested only for face-centered cubic (FCC) structures, additional work is required for the extension of its applicability to other materials. Nevertheless, to the knowledge of the authors, this is the first ML-XLPA evaluation method in the literature, which can pave the way for further development of this methodology.
APA, Harvard, Vancouver, ISO, and other styles
11

Kuhn, Stephen, Matthew J. Cracknell, Anya M. Reading, and Stephanie Sykora. "Identification of intrusive lithologies in volcanic terrains in British Columbia by machine learning using random forests: The value of using a soft classifier." GEOPHYSICS 85, no. 6 (November 1, 2020): B249–B258. http://dx.doi.org/10.1190/geo2019-0461.1.

Full text
Abstract:
Identifying the location of intrusions is a key component in exploration for porphyry Cu ± Mo ± Au deposits. In typical porphyry terrains, in the absence of outcrop, intrusions can be difficult to discriminate from the compositionally similar volcanic and volcanoclastic sedimentary rocks in which they are emplaced. The ability to produce lithological maps at an early exploration stage can significantly reduce costs by assisting in planning and prioritization of detailed mapping and sampling. Additionally, a data-driven strategy provides opportunity for the discovery of intrusions not identified during conventional mapping and interpretation. We used random forests (RF), a supervised machine-learning algorithm, to classify rock types throughout the Kliyul porphyry prospect in British Columbia, Canada. Rock types determined at geochemical sampling sites were used as training data. Airborne magnetic and radiometric data, geochemistry, and topographic data were used in classification. Results were validated using First Quantum Minerals’ geologic map, which includes additional detail from targeted location and transect mapping. The petrophysical and compositional similarity of rock types resulted in a noisy classification. Intrusions, particularly the more discrete, were inconsistently predicted, likely due to their limited extent relative to data sampling intervals. Closer examination of class membership probabilities (CMPs) identified locations where the probability of an intrusion being present was elevated significantly above the background. Indeed, a large proportion of mapped intrusions correspond to areas of elevated probability and, importantly, areas were highlighted as potential intrusions that were not identified in geologic mapping. The RF classification produced a reasonable lithological map, if lacking in resolution, but more significantly, great benefit comes from the insights drawn from the RF CMPs. Mapping the spatial distribution of elevated intrusion CMP, a soft classifier approach, produced a map product that can target intrusions and prioritize detailed mapping for mineral exploration.
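As a rough illustration of the soft-classifier idea described above (mapping class-membership probabilities rather than hard labels), the following Python sketch trains a scikit-learn random forest on synthetic data and flags cells whose intrusion probability is well above background; the feature set, class names, and two-standard-deviation threshold are hypothetical and are not the authors' actual workflow.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training table: rows = geochemical sample sites,
# columns = airborne magnetic, radiometric, geochemical and topographic features.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 8))
y = rng.choice(["intrusion", "volcanic", "sediment"], size=400, p=[0.15, 0.55, 0.30])

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=42).fit(X, y)

# "Soft classifier": map the class-membership probability of the intrusion class
# instead of the hard label, and flag cells well above the background level.
grid = rng.normal(size=(1000, 8))   # stand-in for gridded survey cells
p_intrusion = rf.predict_proba(grid)[:, list(rf.classes_).index("intrusion")]
targets = np.where(p_intrusion > p_intrusion.mean() + 2 * p_intrusion.std())[0]
print(f"{targets.size} cells flagged as possible intrusions")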
APA, Harvard, Vancouver, ISO, and other styles
12

Fraser, Benjamin T., and Russell G. Congalton. "A Comparison of Methods for Determining Forest Composition from High-Spatial-Resolution Remotely Sensed Imagery." Forests 12, no. 9 (September 21, 2021): 1290. http://dx.doi.org/10.3390/f12091290.

Full text
Abstract:
Remotely sensed imagery has been used to support forest ecology and management for decades. In modern times, the propagation of high-spatial-resolution image analysis techniques and automated workflows have further strengthened this synergy, leading to the inquiry into more complex, local-scale, ecosystem characteristics. To appropriately inform decisions in forestry ecology and management, the most reliable and efficient methods should be adopted. For this reason, our research compares visual interpretation to digital (automated) processing for forest plot composition and individual tree identification. During this investigation, we qualitatively and quantitatively evaluated the process of classifying species groups within complex, mixed-species forests in New England. This analysis included a comparison of three high-resolution remotely sensed imagery sources: Google Earth, National Agriculture Imagery Program (NAIP) imagery, and unmanned aerial system (UAS) imagery. We discovered that, although the level of detail afforded by the UAS imagery spatial resolution (3.02 cm average pixel size) improved the visual interpretation results (7.87–9.59%), the highest thematic accuracy was still only 54.44% for the generalized composition groups. Our qualitative analysis of the uncertainty for visually interpreting different composition classes revealed the persistence of mislabeled hardwood compositions (including an early successional class) and an inability to consistently differentiate between ‘pure’ and ‘mixed’ stands. The results of digitally classifying the same forest compositions produced a higher level of accuracy for both detecting individual trees (93.9%) and labeling them (59.62–70.48%) using machine learning algorithms including classification and regression trees, random forest, and support vector machines. These results indicate that digital, automated, classification produced an increase in overall accuracy of 16.04% over visual interpretation for generalized forest composition classes. Other studies, which incorporate multitemporal, multispectral, or data fusion approaches provide evidence for further widening this gap. Further refinement of the methods for individual tree detection, delineation, and classification should be developed for structurally and compositionally complex forests to supplement the critical deficiency in local-scale forest information around the world.
APA, Harvard, Vancouver, ISO, and other styles
13

Keller, Chandler R., Yang Hu, Kelsey F. Ruud, Anika E. VanDeen, Steve R. Martinez, Barry T. Kahn, Zhiwu Zhang, Roland K. Chen, and Weimin Li. "Human Breast Extracellular Matrix Microstructures and Protein Hydrogel 3D Cultures of Mammary Epithelial Cells." Cancers 13, no. 22 (November 22, 2021): 5857. http://dx.doi.org/10.3390/cancers13225857.

Full text
Abstract:
Tissue extracellular matrix (ECM) is a structurally and compositionally unique microenvironment within which native cells can perform their natural biological activities. Cells grown on artificial substrata differ biologically and phenotypically from those grown within their native tissue microenvironment. Studies examining human tissue ECM structures and the biology of human tissue cells in their corresponding tissue ECM are lacking. Such investigations will improve our understanding about human pathophysiological conditions for better clinical care. We report here human normal breast tissue and invasive ductal carcinoma tissue ECM structural features. For the first time, a hydrogel was successfully fabricated using whole protein extracts of human normal breast ECM. Using immunofluorescence staining of type I collagen (Col I) and machine learning of its fibrous patterns in the polymerized human breast ECM hydrogel, we have defined the microstructural characteristics of the hydrogel and compared the microstructures with those of other native ECM hydrogels. Importantly, the ECM hydrogel supported 3D growth and cell-ECM interaction of both normal and cancerous mammary epithelial cells. This work represents further advancement toward full reconstitution of the human breast tissue microenvironment, an accomplishment that will accelerate the use of human pathophysiological tissue-derived matrices for individualized biomedical research and therapeutic development.
APA, Harvard, Vancouver, ISO, and other styles
14

Xu, Xueli, Zhongming Xie, Zhenyu Yang, Dongfang Li, and Ximing Xu. "A t-SNE Based Classification Approach to Compositional Microbiome Data." Frontiers in Genetics 11 (December 14, 2020). http://dx.doi.org/10.3389/fgene.2020.620143.

Full text
Abstract:
As a data-driven dimensionality reduction and visualization tool, t-distributed stochastic neighborhood embedding (t-SNE) has been successfully applied to a variety of fields. In recent years, it has also received increasing attention for classification and regression analysis. This study presented a t-SNE based classification approach for compositional microbiome data, which enabled us to build classifiers and classify new samples in the reduced dimensional space produced by t-SNE. The Aitchison distance was employed to modify the conditional probabilities in t-SNE to account for the compositionality of microbiome data. To classify a new sample, its low-dimensional features were obtained as the weighted mean vector of its nearest neighbors in the training set. Using the low-dimensional features as input, three commonly used machine learning algorithms, logistic regression (LR), support vector machine (SVM), and decision tree (DT) were considered for classification tasks in this study. The proposed approach was applied to two disease-associated microbiome datasets, achieving better classification performance compared with the classifiers built in the original high-dimensional space. The analytic results also showed that t-SNE with Aitchison distance led to improvement of classification accuracy in both datasets. In conclusion, we have developed a t-SNE based classification approach that is suitable for compositional microbiome data and may also serve as a baseline for more complex classification models.
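A minimal sketch of the core ingredient, the Aitchison distance (the Euclidean distance between centred log-ratio transformed compositions) used as the metric for t-SNE, is given below; the pseudocount, perplexity, and synthetic Dirichlet data are assumptions for illustration, and the paper itself modifies the conditional probabilities inside t-SNE rather than simply passing a precomputed distance matrix.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import TSNE

def clr(counts, pseudo=1e-6):
    # Centred log-ratio transform of compositional rows (pseudocount handles zeros).
    x = counts + pseudo
    x = x / x.sum(axis=1, keepdims=True)
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(50), size=200)   # 200 samples x 50 taxa, rows sum to 1

D = squareform(pdist(clr(X)))              # Aitchison distance = Euclidean in clr space
emb = TSNE(n_components=2, metric="precomputed", init="random",
           perplexity=30, random_state=0).fit_transform(D)
print(emb.shape)                           # (200, 2) low-dimensional features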
APA, Harvard, Vancouver, ISO, and other styles
15

Lee, Scott, Drew Levin, Jason Thomas, Patrick Finley, and Charles Heilig. "Exploring the Value of Learned Representations for Automated Syndromic Definitions." Online Journal of Public Health Informatics 10, no. 1 (May 22, 2018). http://dx.doi.org/10.5210/ojphi.v10i1.8326.

Full text
Abstract:
Objective: To better define and automate biosurveillance syndrome categorization using modern unsupervised vector embedding techniques. Introduction: Comprehensive medical syndrome definitions are critical for outbreak investigation, disease trend monitoring, and public health surveillance. However, because current definitions are based on keyword string-matching, they may miss important distributional information in free text and medical codes that could be used to build a more general classifier. Here, we explore the idea that individual ICD codes can be categorized by examining their contextual relationships across all other ICD codes. We extend previous work in representation learning with medical data [1] by generating dense vector embeddings of these ICD codes found in emergency department (ED) visit records. The resulting representations capture information about disease co-occurrence that would typically require SME involvement and support the development of more robust syndrome definitions. Methods: We evaluate our method on anonymized ED visit records obtained from the New York City Department of Health and Mental Hygiene. The data set consists of approximately 3 million records spanning January 2016 to December 2016, each containing from one to ten ICD-9 or ICD-10 codes. We use these data to embed each ICD code into a high-dimensional vector space following techniques described in Mikolov et al. [2], colloquially known as word2vec. We define an individual code's context window as the entirety of its current health record. Final vector embeddings are generated using the gensim machine learning library in Python. We generate 300-dimensional embeddings using a skip-gram network for qualitative evaluation. We use the TensorFlow Embedding Projector to visualize the resulting embedding space. We generate a three-dimensional t-SNE visualization with a perplexity of 32 and a learning rate of 10, run for 1,000 iterations (Figure 1). Finally, we use cosine distance to measure the nearest neighbors of common ICD-10 codes to evaluate the consistency of the generated vector embeddings (Table 1). Results: t-SNE visualization of the generated vector embeddings confirms our hypothesis that ICD codes can be contextually grouped into distinct syndrome clusters (Figure 1). Manual examination of the resulting embeddings confirms consistency across codes from the same top-level category but also reveals cross-category relationships that would be missed from a strictly hierarchical analysis (Table 1). For example, not only does the method appropriately discover the close relationship between influenza codes J10.1 and A49.2, it also reveals a link between asthma code J45.20 and obesity code E66.09. We believe these learned relationships will be useful both for refining existing syndrome categories and developing new ones. Conclusions: The embedding structure supports the hypothesis of distinct syndrome clusters, and nearest-neighbor results expose relationships between categorically unrelated codes (appropriate upon examination). The method works automatically without the need for SME analysis and it provides an objective, data-driven baseline for the development of syndrome definitions and their refinement. References: [1] Choi Y, Chiu CY-I, Sontag D. Learning Low-Dimensional Representations of Medical Concepts. AMIA Summits on Translational Science Proceedings. 2016;2016:41-50. [2] Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems 2013 (pp. 3111-3119).
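For orientation, a minimal gensim sketch of the skip-gram embedding step described in the Methods above is shown below; the toy visit lists and all hyperparameters other than the 300-dimensional skip-gram setting are assumptions, not the study's data or configuration.

from gensim.models import Word2Vec

# Each "sentence" is the list of ICD codes on one ED visit (toy records, not the NYC data).
visits = [
    ["J10.1", "A49.2", "R50.9"],
    ["J45.20", "E66.09"],
    ["J10.1", "R05"],
    ["A49.2", "J45.20"],
]

# Skip-gram, 300-dimensional, window wide enough to cover a whole record.
model = Word2Vec(visits, vector_size=300, window=10, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("J10.1", topn=3))   # cosine nearest neighbours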
APA, Harvard, Vancouver, ISO, and other styles
16

Khakurel, Hrishabh, M. F. N. Taufique, Ankit Roy, Ganesh Balasubramanian, Gaoyuan Ouyang, Jun Cui, Duane D. Johnson, and Ram Devanathan. "Machine learning assisted prediction of the Young’s modulus of compositionally complex alloys." Scientific Reports 11, no. 1 (August 25, 2021). http://dx.doi.org/10.1038/s41598-021-96507-0.

Full text
Abstract:
We identify compositionally complex alloys (CCAs) that offer exceptional mechanical properties for elevated temperature applications by employing machine learning (ML) in conjunction with rapid synthesis and testing of alloys for validation to accelerate alloy design. The advantages of this approach are scalability, rapidity, and reasonably accurate predictions. ML tools were implemented to predict Young’s modulus of refractory-based CCAs by employing different ML models. Our results, in conjunction with experimental validation, suggest that average valence electron concentration, the difference in atomic radius, a geometrical parameter λ and melting temperature of the alloys are the key features that determine the Young’s modulus of CCAs and refractory-based CCAs. The Gradient Boosting model provided the best predictive capabilities (mean absolute error of 6.15 GPa) among the models studied. Our approach integrates high-quality validation data from experiments, literature data for training machine-learning models, and feature selection based on physical insights. It opens a new avenue to optimize the desired materials property for different engineering applications.
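A hedged sketch of the kind of model the abstract describes, gradient boosting on the four named features (valence electron concentration, atomic-radius difference, the geometrical parameter λ, and melting temperature), is shown below with synthetic data; the value ranges and toy target function are invented, and the reported 6.15 GPa error refers to the authors' real dataset, not to this sketch.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: valence electron concentration, atomic-radius difference, lambda, melting T (K).
X = rng.uniform([4.0, 0.01, 0.1, 1800.0], [6.0, 0.08, 0.9, 3200.0], size=(300, 4))
y = 40 * X[:, 0] - 900 * X[:, 1] + 0.02 * X[:, 3] + rng.normal(0, 5, 300)   # toy modulus (GPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE (GPa):", mean_absolute_error(y_te, model.predict(X_te)))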
APA, Harvard, Vancouver, ISO, and other styles
17

Li, Jiaheng, Yingbo Zhang, Xinyu Cao, Qi Zeng, Ye Zhuang, Xiaoying Qian, and Hui Chen. "Accelerated discovery of high-strength aluminum alloys by machine learning." Communications Materials 1, no. 1 (October 12, 2020). http://dx.doi.org/10.1038/s43246-020-00074-2.

Full text
Abstract:
Aluminum alloys are attractive for a number of applications due to their high specific strength, and developing new compositions is a major goal in the structural materials community. Here, we investigate the Al-Zn-Mg-Cu alloy system (7xxx series) by machine learning-based composition and process optimization. The discovered optimized alloy is compositionally lean with a high ultimate tensile strength of 952 MPa and 6.3% elongation following a cost-effective processing route. We find that the Al8Cu4Y phase in wrought 7xxx-T6 alloys exists in the form of a nanoscale network structure along sub-grain boundaries besides the common irregular-shaped particles. Our study demonstrates the feasibility of using machine learning to search for 7xxx alloys with good mechanical performance.
APA, Harvard, Vancouver, ISO, and other styles
18

Kovacs, Alexander, Johann Fischbacher, Harald Oezelt, Alexander Kornell, Qais Ali, Markus Gusenbauer, Masao Yano, et al. "Physics-informed machine learning combining experiment and simulation for the design of neodymium-iron-boron permanent magnets with reduced critical-elements content." Frontiers in Materials 9 (January 18, 2023). http://dx.doi.org/10.3389/fmats.2022.1094055.

Full text
Abstract:
Rare-earth elements like neodymium, terbium and dysprosium are crucial to the performance of permanent magnets used in various green-energy technologies like hybrid or electric cars. To address the supply risk of those elements, we applied machine-learning techniques to design magnetic materials with reduced neodymium content and without terbium and dysprosium. However, the performance of the magnet intended to be used in electric motors should be preserved. We developed machine-learning methods that assist materials design by integrating physical models to bridge the gap between length scales, from atomistic to the micrometer-sized granular microstructure of neodymium-iron-boron permanent magnets. Through data assimilation, we combined data from experiments and simulations to build machine-learning models which we used to optimize the chemical composition and the microstructure of the magnet. We applied techniques that help to understand and interpret the results of machine learning predictions. The variable importance shows how the main design variables influence the magnetic properties. High-throughput measurements on compositionally graded sputtered films are a systematic way to generate data for machine data analysis. Using the machine learning models, we show how high-performance, Nd-lean magnets can be realized.
APA, Harvard, Vancouver, ISO, and other styles
19

Lake, Brenden M., Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. "Building machines that learn and think like people." Behavioral and Brain Sciences 40 (November 24, 2016). http://dx.doi.org/10.1017/s0140525x16001837.

Full text
Abstract:
Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
APA, Harvard, Vancouver, ISO, and other styles
20

Mao, Keyou S., Tyler J. Gerczak, Jason M. Harp, Casey S. McKinney, Timothy G. Lach, Omer Karakoc, Andrew T. Nelson, Kurt A. Terrani, Chad M. Parish, and Philip D. Edmondson. "Identifying chemically similar multiphase nanoprecipitates in compositionally complex non-equilibrium oxides via machine learning." Communications Materials 3, no. 1 (April 19, 2022). http://dx.doi.org/10.1038/s43246-022-00244-4.

Full text
Abstract:
Characterizing oxide nuclear fuels is difficult due to complex fission products, which result from time-evolving system chemistry and extreme operating environments. Here, we report a machine learning-enhanced approach that accelerates the characterization of spent nuclear fuels and improves the accuracy of identifying nanophase fission products and bubbles. We apply this approach to commercial, high-burnup, irradiated light-water reactor fuels, demonstrating relationships between fission product precipitates and gases. We also gain understanding of the fission versus decay pathways of precipitates across the radius of a fuel pellet. An algorithm is provided for quantifying the chemical segregation of the fission products with respect to the high-burnup structure, which enhances our ability to process large amounts of microscopy data, including approaching the atomistic-scale. This may provide a faster route for achieving physics-based fuel performance modeling.
APA, Harvard, Vancouver, ISO, and other styles
21

Zhou, Ziqing, Yeju Zhou, Quanfeng He, Zhaoyi Ding, Fucheng Li, and Yong Yang. "Machine learning guided appraisal and exploration of phase design for high entropy alloys." npj Computational Materials 5, no. 1 (December 2019). http://dx.doi.org/10.1038/s41524-019-0265-1.

Full text
Abstract:
High entropy alloys (HEAs) and compositionally complex alloys (CCAs) have recently attracted great research interest because of their remarkable mechanical and physical properties. Although many useful HEAs or CCAs were reported, the rules of phase design, if there are any, which could guide alloy screening are still an open issue. In this work, we made a critical appraisal of the existing design rules commonly used by the academic community with different machine learning (ML) algorithms. Based on the artificial neural network algorithm, we were able to derive and extract a sensitivity matrix from the ML modeling, which enabled the quantitative assessment of how to tune a design parameter for the formation of a certain phase, such as solid solution, intermetallic, or amorphous phase. Furthermore, we explored the use of an extended set of new design parameters, which had not been considered before, for phase design in HEAs or CCAs with the ML modeling. To verify our ML-guided design rule, we performed various experiments and designed a series of alloys out of the Fe-Cr-Ni-Zr-Cu system. The outcomes of our experiments agree reasonably well with our predictions, which suggests that the ML-based techniques could be a useful tool in the future design of HEAs or CCAs.
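The sensitivity-matrix idea can be illustrated with a small sketch: train a neural network on design parameters and estimate by finite differences how the predicted probability of a given phase changes as each parameter is perturbed. The toy data, network size, and restriction to a single phase (one row of the matrix) below are assumptions, not the authors' derivation.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))   # toy design parameters (e.g. mixing entropy, size mismatch, VEC, ...)
y = (X @ rng.normal(size=5) + rng.normal(0, 0.3, 500) > 0).astype(int)   # toy phase label

net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1).fit(X, y)

def sensitivity_row(model, X, eps=1e-3):
    # Finite-difference estimate of d P(phase=1) / d parameter_j, averaged over the dataset.
    base = model.predict_proba(X)[:, 1]
    out = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        out.append(np.mean((model.predict_proba(Xp)[:, 1] - base) / eps))
    return np.array(out)

print(sensitivity_row(net, X))   # one row of the sensitivity matrix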
APA, Harvard, Vancouver, ISO, and other styles
22

Fung, Victor, Jiaxin Zhang, Eric Juarez, and Bobby G. Sumpter. "Benchmarking graph neural networks for materials chemistry." npj Computational Materials 7, no. 1 (June 3, 2021). http://dx.doi.org/10.1038/s41524-021-00554-0.

Full text
Abstract:
Graph neural networks (GNNs) have received intense interest as a rapidly expanding class of machine learning models remarkably well-suited for materials applications. To date, a number of successful GNNs have been proposed and demonstrated for systems ranging from crystal stability to electronic property prediction and to surface chemistry and heterogeneous catalysis. However, a consistent benchmark of these models remains lacking, hindering the development and consistent evaluation of new models in the materials field. Here, we present a workflow and testing platform, MatDeepLearn, for quickly and reproducibly assessing and comparing GNNs and other machine learning models. We use this platform to optimize and evaluate a selection of top performing GNNs on several representative datasets in computational materials chemistry. From our investigations we note the importance of hyperparameter selection and find roughly similar performances for the top models once optimized. We identify several strengths in GNNs over conventional models in cases with compositionally diverse datasets and in its overall flexibility with respect to inputs, due to learned rather than defined representations. Meanwhile several weaknesses of GNNs are also observed including high data requirements, and suggestions for further improvement for applications in materials chemistry are discussed.
APA, Harvard, Vancouver, ISO, and other styles
23

Soni, Vinay Kumar, S. Sanyal, K. Raja Rao, and Sudip K. Sinha. "A review on phase prediction in high entropy alloys." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, May 13, 2021, 095440622110089. http://dx.doi.org/10.1177/09544062211008935.

Full text
Abstract:
The formation of a single-phase solid solution in high entropy alloys (HEAs) is essential for the properties of the alloys; therefore, numerous approaches have been proposed by many researchers to predict the stability of a single-phase solid solution in HEAs. The present review examines some of the recent developments in using computational intelligence techniques such as the parametric approach, CALPHAD, machine learning, etc. for the prediction of various phase formations in multicomponent high entropy alloys. A detailed study of these data-driven approaches, pertaining to the understanding of the structural and phase-formation behaviour of this new class of compositionally complex alloys, is carried out in the present investigation. The advantages and drawbacks of the various computational approaches are also discussed. Finally, this review aims at understanding several computational modeling tools complying with the thermodynamic criteria for phase formation of novel HEAs, which could possibly deliver superior mechanical properties for advanced engineering applications.
APA, Harvard, Vancouver, ISO, and other styles
24

Mayer, Francis D., Pooya Hosseini-Benhangi, Carlos M. Sánchez-Sánchez, Edouard Asselin, and Előd L. Gyenge. "Scanning electrochemical microscopy screening of CO2 electroreduction activities and product selectivities of catalyst arrays." Communications Chemistry 3, no. 1 (November 6, 2020). http://dx.doi.org/10.1038/s42004-020-00399-6.

Full text
Abstract:
The electroreduction of CO2 is one of the most investigated reactions and involves testing a large number and variety of catalysts. The majority of experimental electrocatalysis studies use conventional one-sample-at-a-time methods without providing spatially resolved catalytic activity information. Herein, we present the application of scanning electrochemical microscopy (SECM) for simultaneous screening of different catalysts forming an array. We demonstrate the potential of this method for electrocatalytic assessment of an array consisting of three Sn/SnOx catalysts for CO2 reduction to formate (CO2RF). Simultaneous SECM scans with fast scan (1 V s⁻¹) cyclic voltammetry detection of products (HCOO⁻, CO and H2) at the Pt ultramicroelectrode tip were performed. We were able to consistently distinguish the electrocatalytic activities of the three compositionally and morphologically different Sn/SnOx catalysts. Further development of this technique for larger catalyst arrays and matrices coupled with machine learning based algorithms could greatly accelerate the CO2 electroreduction catalyst discovery.
APA, Harvard, Vancouver, ISO, and other styles
