Academic literature on the topic 'Neural-symbolic'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural-symbolic.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Neural-symbolic"

1. Zhang, Jing, Bo Chen, Lingxi Zhang, Xirui Ke, and Haipeng Ding. "Neural, symbolic and neural-symbolic reasoning on knowledge graphs." AI Open 2 (2021): 14–35. http://dx.doi.org/10.1016/j.aiopen.2021.03.001.

2. Venkatraman, Vinod, Daniel Ansari, and Michael W. L. Chee. "Neural correlates of symbolic and non-symbolic arithmetic." Neuropsychologia 43, no. 5 (January 2005): 744–53. http://dx.doi.org/10.1016/j.neuropsychologia.2004.08.005.

3. Neto, João Pedro, Hava T. Siegelmann, and J. Félix Costa. "Symbolic processing in neural networks." Journal of the Brazilian Computer Society 8, no. 3 (April 2003): 58–70. http://dx.doi.org/10.1590/s0104-65002003000100005.

4. Do, Quan, and Michael E. Hasselmo. "Neural circuits and symbolic processing." Neurobiology of Learning and Memory 186 (December 2021): 107552. http://dx.doi.org/10.1016/j.nlm.2021.107552.

5. Smolensky, Paul. "Symbolic functions from neural computation." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, no. 1971 (July 28, 2012): 3543–69. http://dx.doi.org/10.1098/rsta.2011.0334.

Abstract: Is thought computation over ideas? Turing, and many cognitive scientists since, have assumed so, and formulated computational systems in which meaningful concepts are encoded by symbols which are the objects of computation. Cognition has been carved into parts, each a function defined over such symbols. This paper reports on a research program aimed at computing these symbolic functions without computing over the symbols. Symbols are encoded as patterns of numerical activation over multiple abstract neurons, each neuron simultaneously contributing to the encoding of multiple symbols. Computation is carried out over the numerical activation values of such neurons, which individually have no conceptual meaning. This is massively parallel numerical computation operating within a continuous computational medium. The paper presents an axiomatic framework for such a computational account of cognition, including a number of formal results. Within the framework, a class of recursive symbolic functions can be computed. Formal languages defined by symbolic rewrite rules can also be specified, the subsymbolic computations producing symbolic outputs that simultaneously display central properties of both facets of human language: universal symbolic grammatical competence and statistical, imperfect performance.

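The encoding idea in the abstract above, symbols stored as distributed activation patterns via filler-role binding, can be sketched in a few lines. This is a hedged toy illustration, not the paper's code: the role and filler vectors below are made up, and plain lists stand in for neural activations.

```python
# Toy tensor-product binding: a two-slot structure is stored as a sum of
# filler-role outer products; every entry of `state` helps encode both symbols.

def outer(f, r):
    # Outer product of a filler vector and a role vector.
    return [[fi * rj for rj in r] for fi in f]

def add(m1, m2):
    # Elementwise sum of two equally-shaped matrices.
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

def unbind(state, role):
    # Matrix-vector product; with orthonormal roles this recovers the filler.
    return [sum(s * r for s, r in zip(row, role)) for row in state]

roles = {"left": [1.0, 0.0], "right": [0.0, 1.0]}   # made-up orthonormal roles
fillers = {"A": [1.0, 2.0], "B": [3.0, 1.0]}        # made-up symbol patterns

state = add(outer(fillers["A"], roles["left"]),
            outer(fillers["B"], roles["right"]))

assert unbind(state, roles["left"]) == fillers["A"]
assert unbind(state, roles["right"]) == fillers["B"]
```

Computation then operates on the numerical entries of `state`, none of which individually stands for "A" or "B".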
6. Shavlik, Jude W. "Combining symbolic and neural learning." Machine Learning 14, no. 3 (March 1994): 321–31. http://dx.doi.org/10.1007/bf00993982.

7. Setiono, R., and Huan Liu. "Symbolic representation of neural networks." Computer 29, no. 3 (March 1996): 71–77. http://dx.doi.org/10.1109/2.485895.

8. Tsamoura, Efthymia, Timothy Hospedales, and Loizos Michael. "Neural-Symbolic Integration: A Compositional Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 5051–60. http://dx.doi.org/10.1609/aaai.v35i6.16639.

Abstract: Despite significant progress in the development of neural-symbolic frameworks, the question of how to integrate a neural and a symbolic system in a compositional manner remains open. Our work seeks to fill this gap by treating these two systems as black boxes to be integrated as modules into a single architecture, without making assumptions on their internal structure and semantics. Instead, we expect only that each module exposes certain methods for accessing the functions that the module implements: the symbolic module exposes a deduction method for computing the function's output on a given input, and an abduction method for computing the function's inputs for a given output; the neural module exposes a deduction method for computing the function's output on a given input, and an induction method for updating the function given input-output training instances. We are, then, able to show that a symbolic module (with any choice for syntax and semantics, as long as the deduction and abduction methods are exposed) can be cleanly integrated with a neural module, and facilitate the latter's efficient training, achieving empirical performance that exceeds that of previous work.

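The module interface described in this abstract can be sketched concretely. This is a minimal illustration under assumed details, not the authors' code: the addition function and digit range are made-up stand-ins for whatever the symbolic module actually implements.

```python
# Black-box symbolic module exposing the two methods named in the abstract:
# deduction (input -> output) and abduction (output -> candidate inputs).
# A neural module would pair `deduce` with an `induce` (training) method.

class SymbolicAdder:
    """Symbolic module whose function is two-digit addition."""

    def deduce(self, a, b):
        # Deduction: compute the function's output on a given input.
        return a + b

    def abduce(self, total):
        # Abduction: all digit pairs that explain an observed output; such
        # pairs can serve as training targets for an upstream neural module.
        return [(a, total - a) for a in range(10) if 0 <= total - a <= 9]

sym = SymbolicAdder()
assert sym.deduce(3, 4) == 7
assert all(a + b == 7 for a, b in sym.abduce(7))
```

The point of the compositional view is that only these methods are visible; the module's internal syntax and semantics never leak into the integration.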
9. Taha, I. A., and J. Ghosh. "Symbolic interpretation of artificial neural networks." IEEE Transactions on Knowledge and Data Engineering 11, no. 3 (1999): 448–63. http://dx.doi.org/10.1109/69.774103.

10. Bookman, Lawrence A., and Ron Sun. "Editorial: Integrating Neural and Symbolic Processes." Connection Science 5, no. 3–4 (January 1993): 203–4. http://dx.doi.org/10.1080/09540099308915699.

Dissertations / Theses on the topic "Neural-symbolic"

1. Bader, Sebastian. "Neural-Symbolic Integration." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-25468.

Abstract: In this thesis, we discuss different techniques to bridge the gap between two different approaches to artificial intelligence: the symbolic and the connectionist paradigm. Both approaches have quite contrasting advantages and disadvantages. Research in the area of neural-symbolic integration aims at bridging the gap between them. Starting from a human readable logic program, we construct connectionist systems, which behave equivalently. Afterwards, those systems can be trained, and later the refined knowledge be extracted.

2. Townsend, Joseph Paul. "Artificial development of neural-symbolic networks." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/15162.

Abstract: Artificial neural networks (ANNs) and logic programs have both been suggested as means of modelling human cognition. While ANNs are adaptable and relatively noise resistant, the information they represent is distributed across various neurons and is therefore difficult to interpret. On the contrary, symbolic systems such as logic programs are interpretable but less adaptable. Human cognition is performed in a network of biological neurons and yet is capable of representing symbols, and therefore an ideal model would combine the strengths of the two approaches. This is the goal of Neural-Symbolic Integration [4, 16, 21, 40], in which ANNs are used to produce interpretable, adaptable representations of logic programs and other symbolic models. One neural-symbolic model of reasoning is SHRUTI [89, 95], argued to exhibit biological plausibility in that it captures some aspects of real biological processes. SHRUTI's original developers also suggest that further biological plausibility can be ascribed to the fact that SHRUTI networks can be represented by a model of genetic development [96, 120]. The aims of this thesis are to support the claims of SHRUTI's developers by producing the first such genetic representation for SHRUTI networks and to explore biological plausibility further by investigating the evolvability of the proposed SHRUTI genome. The SHRUTI genome is developed and evolved using principles from Generative and Developmental Systems and Artificial Development [13, 105], in which genomes use indirect encoding to provide a set of instructions for the gradual development of the phenotype just as DNA does for biological organisms. This thesis presents genomes that develop SHRUTI representations of logical relations and episodic facts so that they are able to correctly answer questions on the knowledge they represent.

The evolvability of the SHRUTI genomes is limited in that an evolutionary search was able to discover genomes for simple relational structures that did not include conjunction, but could not discover structures that enabled conjunctive relations or episodic facts to be learned. Experiments were performed to understand the SHRUTI fitness landscape and demonstrated that this landscape is unsuitable for navigation using an evolutionary search. Complex SHRUTI structures require that necessary substructures must be discovered in unison and not individually in order to yield a positive change in objective fitness that informs the evolutionary search of their discovery. The requirement for multiple substructures to be in place before fitness can be improved is probably owed to the localist representation of concepts and relations in SHRUTI. Therefore this thesis concludes by making a case for switching to more distributed representations as a possible means of improving evolvability in the future.

3. Xiao, Chunyang. "Neural-Symbolic Learning for Semantic Parsing." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0268/document.

Abstract: Our goal in this thesis is to build a system that answers a natural language question (NL) by representing its semantics as a logical form (LF) and then computing the answer by executing the LF over a knowledge base. The core part of such a system is the semantic parser that maps questions to logical forms. Our focus is how to build high-performance semantic parsers by learning from (NL, LF) pairs. We propose to combine recurrent neural networks (RNNs) with symbolic prior knowledge expressed through context-free grammars (CFGs) and automata. By integrating CFGs over LFs into the RNN training and inference processes, we guarantee that the generated logical forms are well-formed; by integrating, through weighted automata, prior knowledge over the presence of certain entities in the LF, we further enhance the performance of our models. Experimentally, we show that our approach achieves better performance than previous semantic parsers not using neural networks as well as RNNs not informed by such prior knowledge.

4. Mann, Jordyn (Jordyn L.). "Neural Bayesian goal inference for symbolic planning domains." Thesis (M. Eng.), Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2021. https://hdl.handle.net/1721.1/130701.

Abstract: There are several reasons for which one may aim to infer the short- and long-term goals of agents in diverse physical domains. As increasingly powerful autonomous systems come into development, it is conceivable that they may eventually need to accurately infer the goals of humans. There are also more immediate reasons for which this sort of inference may be desirable, such as in the use case of intelligent personal assistants. This thesis introduces a neural Bayesian approach to goal inference in multiple symbolic planning domains and compares the results of this approach to the results of a recently developed Monte Carlo Bayesian inference method known as Sequential Inverse Plan Search (SIPS). SIPS is based on sequential Monte Carlo inference for Bayesian inversion of probabilistic plan search in Planning Domain Definition Language (PDDL) domains. In addition to the neural architectures, the thesis also introduces approaches for converting PDDL predicate state representations to numerical arrays and vectors suitable for input to the neural networks. The experimental results presented indicate that for the domains investigated, in cases where the training set is representative of the test set, the neural approach provides similar accuracy results to SIPS in the later portions of the observation sequences with a far shorter amortized time cost. However, in earlier timesteps of those observation sequences and in cases where the training set is less similar to the testing set, SIPS outperforms the neural approach in terms of accuracy. These results indicate that a model-based inference method where SIPS uses a neural proposal based on the neural networks designed in this thesis could have the potential to combine the advantages of both goal inference approaches by improving the speed of SIPS inference while maintaining generalizability and high accuracy throughout the timesteps of the observation sequences.

5. Noda, Itsuki. "Neural Networks that Learn Symbolic and Structured Representation of Information." Doctoral thesis (Doctor of Engineering), Department of Electrical Engineering, Graduate School of Engineering, Kyoto University, 1995. http://hdl.handle.net/2433/154663.

Note: The full text is a PDF converted from image files produced by the National Diet Library's FY2010 digitization of doctoral dissertations. Examiners: Professor Makoto Nagao, Professor Katsuo Ikeda, Professor Shuzo Yajima.

6. Chichlowski, Kazimierz O. "Modelling and recognition of continuous and symbolic data using artificial neural networks." Thesis, University of East Anglia, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320829.

7. Chen, Hsinchun, P. Buntin, Linlin She, S. Sutjahjo, C. Sommer, and D. Neely. "Expert Prediction, Symbolic Learning, and Neural Networks: An Experiment on Greyhound Racing." IEEE, 1994. http://hdl.handle.net/10150/105472.

Abstract (Artificial Intelligence Lab, Department of MIS, University of Arizona): For our research, we investigated a different problem-solving scenario called game playing, which is unstructured, complex, and seldom-studied. We considered several real-life game-playing scenarios and decided on greyhound racing. The large amount of historical information involved in the search poses a challenge for both human experts and machine-learning algorithms. The questions then become: Can machine-learning techniques reduce the uncertainty in a complex game-playing scenario? Can these methods outperform human experts in prediction? Our research sought to answer these questions.

8. Chen, Hsinchun. "Machine Learning for Information Retrieval: Neural Networks, Symbolic Learning, and Genetic Algorithms." Wiley Periodicals, Inc, 1995. http://hdl.handle.net/10150/106427.

Abstract (Artificial Intelligence Lab, Department of MIS, University of Arizona): Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to "intelligent" information retrieval and indexing. More recently, information science researchers have turned to other newer artificial-intelligence-based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms. These newer techniques, which are grounded on diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information storage and retrieval systems. In this article, we first provide an overview of these newer techniques and their use in information science research. To familiarize readers with these techniques, we present three popular methods: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution-based genetic algorithms. We discuss their knowledge representations and algorithms in the context of information retrieval. Sample implementation and testing results from our own research are also provided for each technique. We believe these techniques are promising in their ability to analyze user queries, identify users' information needs, and suggest alternatives for search. With proper user-system interactions, these methods can greatly complement the prevailing full-text, keyword-based, probabilistic, and knowledge-based techniques.

9. Tang, Zibin. "A new design approach for numeric-to-symbolic conversion using neural networks." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/4242.

Abstract: A new approach is proposed which uses a combination of a Backprop paradigm neural network along with some perceptron processing elements performing logic operations to construct a numeric-to-symbolic converter. The design approach proposed herein is capable of implementing a decision region defined by a multi-dimensional, non-linear boundary surface. By defining a "two-valued" subspace of the boundary surface, a Backprop paradigm neural network is used to model the boundary surface. An input vector is tested by the neural network boundary model (along with perceptron logic gates) to determine whether the incoming vector point is within the decision region or not. Experiments with two qualitatively different kinds of nonlinear surface were carried out to test and demonstrate the design approach.

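The converter described in this abstract reduces to a simple pipeline: a learned model of the boundary surface followed by a threshold element emitting the symbol. A hedged sketch, with a hand-written stand-in for the trained Backprop network and a made-up circular decision region:

```python
def boundary_model(x, y):
    # Stand-in for the trained network: signed value that is negative inside
    # the decision region (here, the unit disc) and positive outside it.
    return x * x + y * y - 1.0

def numeric_to_symbolic(x, y):
    # Perceptron-style threshold element: numeric input in, symbol out.
    return "inside" if boundary_model(x, y) < 0 else "outside"

assert numeric_to_symbolic(0.1, 0.2) == "inside"
assert numeric_to_symbolic(2.0, 0.0) == "outside"
```

In the thesis the boundary model is learned from the "two-valued" subspace rather than written by hand; only the thresholding step is shown faithfully here.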
10. Carmantini, Giovanni Sirio. "Dynamical systems theory for transparent symbolic computation in neuronal networks." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/8647.

Abstract: In this thesis, we explore the interface between symbolic and dynamical system computation, with particular regard to dynamical system models of neuronal networks. In doing so, we adhere to a definition of computation as the physical realization of a formal system, where we say that a dynamical system performs a computation if a correspondence can be found between its dynamics on a vectorial space and the formal system's dynamics on a symbolic space. Guided by this definition, we characterize computation in a range of neuronal network models. We first present a constructive mapping between a range of formal systems and Recurrent Neural Networks (RNNs), through the introduction of a Versatile Shift and a modular network architecture supporting its real-time simulation. We then move on to more detailed models of neural dynamics, characterizing the computation performed by networks of delay-pulse-coupled oscillators supporting the emergence of heteroclinic dynamics. We show that a correspondence can be found between these networks and Finite-State Transducers, and use the derived abstraction to investigate how noise affects computation in this class of systems, unveiling a surprising facilitatory effect on information transmission. Finally, we present a new dynamical framework for computation in neuronal networks based on the slow-fast dynamics paradigm, and discuss the consequences of our results for future work, specifically for what concerns the fields of interactive computation and Artificial Intelligence.

Books on the topic "Neural-symbolic"

1. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. Neural-Symbolic Learning Systems. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3.

2. Lamb, Luís C., and Dov M. Gabbay, eds. Neural-Symbolic Cognitive Reasoning. Berlin: Springer, 2009.

3. Hitzler, Pascal, and SpringerLink (Online service), eds. Perspectives of Neural-Symbolic Integration. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2007.

4. Hammer, Barbara, and Pascal Hitzler, eds. Perspectives of Neural-Symbolic Integration. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73954-8.

5. Sun, Ron, and Lawrence A. Bookman, eds. Computational Architectures Integrating Neural And Symbolic Processes. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/b102608.

6. Garcez, Artur S. D'Avila. Neural-Symbolic Learning Systems: Foundations and Applications. London: Springer London, 2002.

7. International School on Neural Nets "E.R. Caianiello," Fifth Course (2002, Erice, Italy). From Synapses to Rules: Discovering Symbolic Rules from Neural Processed Data. New York: Kluwer Academic/Plenum Publishers, 2002.

8. Apolloni, Bruno. From Synapses to Rules: Discovering Symbolic Rules from Neural Processed Data. Boston, MA: Springer US, 2002.

9. Dong, Tiansi. A Geometric Approach to the Unification of Symbolic Structures and Neural Networks. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-56275-5.

10. Ruan, Da. Intelligent Hybrid Systems: Fuzzy Logic, Neural Networks, and Genetic Algorithms. Boston, MA: Springer US, 1997.

Book chapters on the topic "Neural-symbolic"

1. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Introduction and Overview." In Neural-Symbolic Learning Systems, 1–12. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_1.

2. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Background." In Neural-Symbolic Learning Systems, 13–40. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_2.

3. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Theory Refinement in Neural Networks." In Neural-Symbolic Learning Systems, 43–85. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_3.

4. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Experiments on Theory Refinement." In Neural-Symbolic Learning Systems, 87–110. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_4.

5. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Knowledge Extraction from Trained Networks." In Neural-Symbolic Learning Systems, 113–58. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_5.

6. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Experiments on Knowledge Extraction." In Neural-Symbolic Learning Systems, 159–79. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_6.

7. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Handling Inconsistencies in Neural Networks." In Neural-Symbolic Learning Systems, 183–208. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_7.

8. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Experiments on Handling Inconsistencies." In Neural-Symbolic Learning Systems, 209–33. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_8.

9. d'Avila Garcez, Artur S., Krysia B. Broda, and Dov M. Gabbay. "Neural-Symbolic Integration: The Road Ahead." In Neural-Symbolic Learning Systems, 235–52. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0211-3_9.

10. Lee, Matthew K. O. "Neural Networks and Symbolic A.I." In Neurocomputing, 117–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-76153-9_14.

Conference papers on the topic "Neural-symbolic"

1. Lamb, Luís C., Artur d'Avila Garcez, Marco Gori, Marcelo O. R. Prates, Pedro H. C. Avelar, and Moshe Y. Vardi. "Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/679.

Abstract: Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with widespread application of GNNs in combinatorial optimization, constraint satisfaction, relational reasoning and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state-of-the-art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.

2. Shen, Shiqi, Shweta Shinde, Soundarya Ramesh, Abhik Roychoudhury, and Prateek Saxena. "Neuro-Symbolic Execution: Augmenting Symbolic Execution with Neural Constraints." In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2019. http://dx.doi.org/10.14722/ndss.2019.23530.

3. Perotti, Alan, Artur d'Avila Garcez, and Guido Boella. "Neural-symbolic monitoring and adaptation." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280713.

4. Tran, Son N. "Compositional Neural Logic Programming." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/421.

Abstract: This paper introduces Compositional Neural Logic Programming (CNLP), a framework that integrates neural networks and logic programming for symbolic and sub-symbolic reasoning. We adopt the idea of compositional neural networks to represent first-order logic predicates and rules. A voting backward-forward chaining algorithm is proposed for inference with both symbolic and sub-symbolic variables in an argument-retrieval style. The framework is highly flexible in that it can be constructed incrementally with new knowledge, and it also supports batch reasoning in certain cases. In the experiments, we demonstrate the advantages of CNLP in discriminative tasks and generative tasks.

5. Yang, Zhun, Adam Ishay, and Joohyung Lee. "NeurASP: Embracing Neural Networks into Answer Set Programming." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/243.

Abstract: We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.

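The core semantics in this abstract, treating neural network outputs as probability distributions over atomic facts and filtering joint assignments with a symbolic rule, can be shown in miniature. This is a hedged sketch, not NeurASP itself: the two digit distributions below are invented softmax outputs, and exhaustive enumeration stands in for answer set solving.

```python
import itertools

# Invented "neural atom" distributions: each network emits a probability
# distribution over the values of an atomic fact (a digit in 0-2).
p_digit1 = {0: 0.7, 1: 0.2, 2: 0.1}
p_digit2 = {0: 0.1, 1: 0.6, 2: 0.3}

def prob_sum_equals(target):
    # Probability mass of all joint assignments that satisfy the symbolic
    # rule digit1 + digit2 == target (independence assumed between networks).
    return sum(p_digit1[a] * p_digit2[b]
               for a, b in itertools.product(p_digit1, p_digit2)
               if a + b == target)

# Pairs (0,1) and (1,0) satisfy sum == 1: 0.7*0.6 + 0.2*0.1 = 0.44.
assert abs(prob_sum_equals(1) - 0.44) < 1e-9
```

Symbolic reasoning then improves perception: conditioning on an observed sum reweights the digit distributions, which is what the paper exploits for training.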
6. Converse, Hayes, Antonio Filieri, Divya Gopinath, and Corina S. Pasareanu. "Probabilistic Symbolic Analysis of Neural Networks." In 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE). IEEE, 2020. http://dx.doi.org/10.1109/issre5003.2020.00023.

7. Manhaeve, Robin, Giuseppe Marra, and Luc De Raedt. "Approximate Inference for Neural Probabilistic Logic Programming." In 18th International Conference on Principles of Knowledge Representation and Reasoning (KR-2021). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/kr.2021/45.

Abstract: DeepProbLog is a neural-symbolic framework that integrates probabilistic logic programming and neural networks. It is realized by providing an interface between the probabilistic logic and the neural networks. Inference in probabilistic neural symbolic methods is hard, since it combines logical theorem proving with probabilistic inference and neural network evaluation. In this work, we make the inference more efficient by extending an approximate inference algorithm from the field of statistical-relational AI. Instead of considering all possible proofs for a certain query, the system searches for the best proof. However, training a DeepProbLog model using approximate inference introduces additional challenges, as the best proof is unknown at the start of training, which can lead to convergence towards a local optimum. To be able to apply DeepProbLog on larger tasks, we propose: 1) a method for approximate inference using an A*-like search, called DPLA*, 2) an exploration strategy for proving in a neural-symbolic setting, and 3) a parametric heuristic to guide the proof search. We empirically evaluate the performance and scalability of the new approach, and also compare the resulting approach to other neural-symbolic systems. The experiments show that DPLA* achieves a speed up of up to 2-3 orders of magnitude in some cases.

8. Aasted, Christopher M., Sunwook Lim, and Rahmat A. Shoureshi. "Vehicle Health Inferencing Using Feature-Based Neural-Symbolic Networks." In ASME 2013 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/dscc2013-3831.

Abstract: In order to optimize the use of fault tolerant controllers for unmanned or autonomous aerial vehicles, a health diagnostics system is being developed. To autonomously determine the effect of damage on global vehicle health, a feature-based neural-symbolic network is utilized to infer vehicle health using historical data. Our current system is able to accurately characterize the extent of vehicle damage with 99.2% accuracy when tested on prior incident data. Based on the results of this work, neural-symbolic networks appear to be a useful tool for diagnosis of global vehicle health based on features of subsystem diagnostic information.

9. Chrupała, Grzegorz, and Afra Alishahi. "Correlating Neural and Symbolic Representations of Language." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1283.

10. Kumar, Arjun, and Tim Oates. "Connecting deep neural networks with symbolic knowledge." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7966309.

Reports on the topic "Neural-symbolic"

1. Gardner, Daniel. Symbolic Processor Based Models of Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, May 1988. http://dx.doi.org/10.21236/ada200200.

2. Burton, Robert M., Jr. Topics in Stochastics, Symbolic Dynamics and Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, December 1996. http://dx.doi.org/10.21236/ada336426.

3. Tang, Zibin. A new design approach for numeric-to-symbolic conversion using neural networks. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6126.