Journal articles on the topic 'Neural-symbolic'

To see the other types of publications on this topic, follow the link: Neural-symbolic.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Neural-symbolic.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1. Zhang, Jing, Bo Chen, Lingxi Zhang, Xirui Ke, and Haipeng Ding. "Neural, symbolic and neural-symbolic reasoning on knowledge graphs." AI Open 2 (2021): 14–35. http://dx.doi.org/10.1016/j.aiopen.2021.03.001.

2. Venkatraman, Vinod, Daniel Ansari, and Michael W. L. Chee. "Neural correlates of symbolic and non-symbolic arithmetic." Neuropsychologia 43, no. 5 (January 2005): 744–53. http://dx.doi.org/10.1016/j.neuropsychologia.2004.08.005.

3. Neto, João Pedro, Hava T. Siegelmann, and J. Félix Costa. "Symbolic processing in neural networks." Journal of the Brazilian Computer Society 8, no. 3 (April 2003): 58–70. http://dx.doi.org/10.1590/s0104-65002003000100005.

4. Do, Quan, and Michael E. Hasselmo. "Neural circuits and symbolic processing." Neurobiology of Learning and Memory 186 (December 2021): 107552. http://dx.doi.org/10.1016/j.nlm.2021.107552.

5. Smolensky, Paul. "Symbolic functions from neural computation." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, no. 1971 (July 28, 2012): 3543–69. http://dx.doi.org/10.1098/rsta.2011.0334.
Abstract:
Is thought computation over ideas? Turing, and many cognitive scientists since, have assumed so, and formulated computational systems in which meaningful concepts are encoded by symbols which are the objects of computation. Cognition has been carved into parts, each a function defined over such symbols. This paper reports on a research program aimed at computing these symbolic functions without computing over the symbols. Symbols are encoded as patterns of numerical activation over multiple abstract neurons, each neuron simultaneously contributing to the encoding of multiple symbols. Computation is carried out over the numerical activation values of such neurons, which individually have no conceptual meaning. This is massively parallel numerical computation operating within a continuous computational medium. The paper presents an axiomatic framework for such a computational account of cognition, including a number of formal results. Within the framework, a class of recursive symbolic functions can be computed. Formal languages defined by symbolic rewrite rules can also be specified, the subsymbolic computations producing symbolic outputs that simultaneously display central properties of both facets of human language: universal symbolic grammatical competence and statistical, imperfect performance.
6. Shavlik, Jude W. "Combining symbolic and neural learning." Machine Learning 14, no. 3 (March 1994): 321–31. http://dx.doi.org/10.1007/bf00993982.

7. Setiono, R., and Huan Liu. "Symbolic representation of neural networks." Computer 29, no. 3 (March 1996): 71–77. http://dx.doi.org/10.1109/2.485895.

8. Tsamoura, Efthymia, Timothy Hospedales, and Loizos Michael. "Neural-Symbolic Integration: A Compositional Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 5051–60. http://dx.doi.org/10.1609/aaai.v35i6.16639.
Abstract:
Despite significant progress in the development of neural-symbolic frameworks, the question of how to integrate a neural and a symbolic system in a compositional manner remains open. Our work seeks to fill this gap by treating these two systems as black boxes to be integrated as modules into a single architecture, without making assumptions on their internal structure and semantics. Instead, we expect only that each module exposes certain methods for accessing the functions that the module implements: the symbolic module exposes a deduction method for computing the function's output on a given input, and an abduction method for computing the function's inputs for a given output; the neural module exposes a deduction method for computing the function's output on a given input, and an induction method for updating the function given input-output training instances. We are, then, able to show that a symbolic module --- with any choice for syntax and semantics, as long as the deduction and abduction methods are exposed --- can be cleanly integrated with a neural module, and facilitate the latter's efficient training, achieving empirical performance that exceeds that of previous work.
9. Taha, I. A., and J. Ghosh. "Symbolic interpretation of artificial neural networks." IEEE Transactions on Knowledge and Data Engineering 11, no. 3 (1999): 448–63. http://dx.doi.org/10.1109/69.774103.

10. Bookman, Lawrence A., and Ron Sun. "Editorial: Integrating Neural and Symbolic Processes." Connection Science 5, no. 3-4 (January 1993): 203–4. http://dx.doi.org/10.1080/09540099308915699.

11. Besold, Tarek R., Artur d'Avila Garcez, Kai-Uwe Kühnberger, and Terrence C. Stewart. "Neural-symbolic networks for cognitive capacities." Biologically Inspired Cognitive Architectures 9 (July 2014): iii–iv. http://dx.doi.org/10.1016/s2212-683x(14)00061-9.

12. Setiono, Rudy, James Y. L. Thong, and Chee-Sing Yap. "Symbolic rule extraction from neural networks." Information & Management 34, no. 2 (September 1998): 91–101. http://dx.doi.org/10.1016/s0378-7206(98)00048-2.

13. Garcez, Artur S. d'Avila, Dov M. Gabbay, Oliver Ray, and John Woods. "Abductive reasoning in neural-symbolic systems." Topoi 26, no. 1 (April 5, 2007): 37–49. http://dx.doi.org/10.1007/s11245-006-9005-5.

14. Borges, Rafael V., Artur S. d'Avila Garcez, and Luis C. Lamb. "A neural-symbolic perspective on analogy." Behavioral and Brain Sciences 31, no. 4 (July 29, 2008): 379–80. http://dx.doi.org/10.1017/s0140525x08004482.
Abstract:
The target article criticises neural-symbolic systems as inadequate for analogical reasoning and proposes a model of analogy as transformation (i.e., learning). We accept the importance of learning, but we argue that, instead of conflicting, integrated reasoning and learning would model analogy much more adequately. In this new perspective, modern neural-symbolic systems become the natural candidates for modelling analogy.
15. Ban, Jung-Chao. "Neural network equations and symbolic dynamics." International Journal of Machine Learning and Cybernetics 6, no. 4 (March 19, 2014): 567–79. http://dx.doi.org/10.1007/s13042-014-0244-2.

16. Achler, Tsvi. "Symbolic neural networks for cognitive capacities." Biologically Inspired Cognitive Architectures 9 (July 2014): 71–81. http://dx.doi.org/10.1016/j.bica.2014.07.001.

17. Winters, Thomas, Giuseppe Marra, Robin Manhaeve, and Luc De Raedt. "DeepStochLog: Neural Stochastic Logic Programming." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10090–100. http://dx.doi.org/10.1609/aaai.v36i9.21248.
Abstract:
Recent advances in neural-symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural-symbolic framework based on stochastic definite clause grammars, a kind of stochastic logic program. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs. Furthermore, the experimental evaluation shows that DeepStochLog achieves state-of-the-art results on challenging neural-symbolic learning tasks.
18. d'Avila Garcez, Artur S., Luís C. Lamb, Krysia Broda, and Dov M. Gabbay. "Applying Connectionist Modal Logics to Distributed Knowledge Representation Problems." International Journal on Artificial Intelligence Tools 13, no. 01 (March 2004): 115–39. http://dx.doi.org/10.1142/s0218213004001442.
Abstract:
Neural-Symbolic Systems concern the integration of the symbolic and connectionist paradigms of Artificial Intelligence. Distributed knowledge representation is traditionally seen under a symbolic perspective. In this paper, we show how neural networks can represent distributed symbolic knowledge, acting as multi-agent systems with learning capability (a key feature of neural networks). We apply the framework of Connectionist Modal Logics to well-known testbeds for distributed knowledge representation formalisms, namely the muddy children and the wise men puzzles. Finally, we sketch a full solution to these problems by extending our approach to deal with knowledge evolution over time.
19. Oh, Jinyoung, and Jeong-Won Cha. "Relation Extraction based on Neural-Symbolic Structure." Journal of KIISE 48, no. 5 (May 31, 2021): 533–38. http://dx.doi.org/10.5626/jok.2021.48.5.533.

20. Hitzler, Pascal, Federico Bianchi, Monireh Ebrahimi, and Md Kamruzzaman Sarker. "Neural-symbolic integration and the Semantic Web." Semantic Web 11, no. 1 (January 31, 2020): 3–11. http://dx.doi.org/10.3233/sw-190368.

21. Lyons, Ian M., and Sian L. Beilock. "Characterizing the neural coding of symbolic quantities." NeuroImage 178 (September 2018): 503–18. http://dx.doi.org/10.1016/j.neuroimage.2018.05.062.

22. Lesher, S., Li Guan, and A. H. Cohen. "Symbolic time-series analysis of neural data." Neurocomputing 32-33 (June 2000): 1073–81. http://dx.doi.org/10.1016/s0925-2312(00)00281-2.

23. Hatzilygeroudis, Ioannis, and Jim Prentzas. "Symbolic-neural rule based reasoning and explanation." Expert Systems with Applications 42, no. 9 (June 2015): 4595–609. http://dx.doi.org/10.1016/j.eswa.2015.01.068.

24. Huynh, T. Q., and J. A. Reggia. "Symbolic Representation of Recurrent Neural Network Dynamics." IEEE Transactions on Neural Networks and Learning Systems 23, no. 10 (October 2012): 1649–58. http://dx.doi.org/10.1109/tnnls.2012.2210242.

25. O'Dwyer, Carl, and Daniel Richardson. "Spiking neural nets with symbolic internal state." Information Processing Letters 95, no. 6 (September 2005): 529–36. http://dx.doi.org/10.1016/j.ipl.2005.05.020.

26. Vankov, Ivan I., and Jeffrey S. Bowers. "Training neural networks to encode symbols enables combinatorial generalization." Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1791 (December 16, 2019): 20190309. http://dx.doi.org/10.1098/rstb.2019.0309.
Abstract:
Combinatorial generalization—the ability to understand and produce novel combinations of already familiar elements—is considered to be a core capacity of the human mind and a major challenge to neural network models. A significant body of research suggests that conventional neural networks cannot solve this problem unless they are endowed with mechanisms specifically engineered for the purpose of representing symbols. In this paper, we introduce a novel way of representing symbolic structures in connectionist terms—the vectors approach to representing symbols (VARS), which allows training standard neural architectures to encode symbolic knowledge explicitly at their output layers. In two simulations, we show that neural networks not only can learn to produce VARS representations, but in doing so they achieve combinatorial generalization in their symbolic and non-symbolic output. This adds to other recent work that has shown improved combinatorial generalization under some training conditions, and raises the question of whether specific mechanisms or training routines are needed to support symbolic processing. This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.
27. Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.
Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaption to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample-basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn mapping from neural networks to symbolic representations.
28. Pacheco, Maria Leonor, and Dan Goldwasser. "Modeling Content and Context with Deep Relational Learning." Transactions of the Association for Computational Linguistics 9 (February 2021): 100–119. http://dx.doi.org/10.1162/tacl_a_00357.
Abstract:
Building models for realistic natural language tasks requires dealing with long texts and accounting for complicated structural dependencies. Neural-symbolic representations have emerged as a way to combine the reasoning capabilities of symbolic methods, with the expressiveness of neural networks. However, most of the existing frameworks for combining neural and symbolic representations have been designed for classic relational learning tasks that work over a universe of symbolic entities and relations. In this paper, we present DRaiL, an open-source declarative framework for specifying deep relational models, designed to support a variety of NLP scenarios. Our framework supports easy integration with expressive language encoders, and provides an interface to study the interactions between representation, inference and learning.
29. Burattini, Ernesto, Antonio de Francesco, and Massimo de Gregorio. "NSL: A Neuro–Symbolic Language for a Neuro–Symbolic Processor (NSP)." International Journal of Neural Systems 13, no. 02 (April 2003): 93–101. http://dx.doi.org/10.1142/s0129065703001480.
Abstract:
A Neuro–Symbolic Language for monotonic and non–monotonic parallel logical inference by means of artificial neural networks (ANNs) is presented. Both the language and its compiler have been designed and implemented in order to translate the neural representation of a given problem into a VHDL software, which in turn can set devices such as FPGA. The result of this operation leads to an electronic circuit that we call NSP (Neuro–Symbolic Processor).
30. Tian, Jidong, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, and Yaohui Jin. "Weakly Supervised Neural Symbolic Learning for Cognitive Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5888–96. http://dx.doi.org/10.1609/aaai.v36i5.20533.
Abstract:
Despite the recent success of end-to-end deep neural networks, there are growing concerns about their lack of logical reasoning abilities, especially on cognitive tasks with perception and reasoning processes. A solution is the neural symbolic learning (NeSyL) method that can effectively utilize pre-defined logic rules to constrain the neural architecture making it perform better on cognitive tasks. However, it is challenging to apply NeSyL to these cognitive tasks because of the lack of supervision, the non-differentiable manner of the symbolic system, and the difficulty to probabilistically constrain the neural network. In this paper, we propose WS-NeSyL, a weakly supervised neural symbolic learning model for cognitive tasks with logical reasoning. First, WS-NeSyL employs a novel back search algorithm to sample the possible reasoning process through logic rules. This sampled process can supervise the neural network as the pseudo label. Based on this algorithm, we can backpropagate gradients to the neural network of WS-NeSyL in a weakly supervised manner. Second, we introduce a probabilistic logic regularization into WS-NeSyL to help the neural network learn probabilistic logic. To evaluate WS-NeSyL, we have conducted experiments on three cognitive datasets, including temporal reasoning, handwritten formula recognition, and relational reasoning datasets. Experimental results show that WS-NeSyL not only outperforms the end-to-end neural model but also beats the state-of-the-art neural symbolic learning models.
31. Yang, Zhong, and Hai Fei Si. "Integration of Neural Network and Symbolic Inference and its Application to Detection of Foundation Piles." Advanced Materials Research 383-390 (November 2011): 4977–81. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.4977.
Abstract:
An integrated system for neural network and symbolic inference is presented. In the system the two intelligent functions, neural network and symbolic inference, can work together to make greater contributions. By applying it to the computer simulation for detection of foundation piles, the system is proved to be effective.
32. Demeter, David, and Doug Downey. "Just Add Functions: A Neural-Symbolic Language Model." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7634–42. http://dx.doi.org/10.1609/aaai.v34i05.6264.
Abstract:
Neural network language models (NNLMs) have achieved ever-improving accuracy due to more sophisticated architectures and increasing amounts of training data. However, the inductive bias of these models (formed by the distributional hypothesis of language), while ideally suited to modeling most running text, results in key limitations for today's models. In particular, the models often struggle to learn certain spatial, temporal, or quantitative relationships, which are commonplace in text and are second-nature for human readers. Yet, in many cases, these relationships can be encoded with simple mathematical or logical expressions. How can we augment today's neural models with such encodings? In this paper, we propose a general methodology to enhance the inductive bias of NNLMs by incorporating simple functions into a neural architecture to form a hierarchical neural-symbolic language model (NSLM). These functions explicitly encode symbolic deterministic relationships to form probability distributions over words. We explore the effectiveness of this approach on numbers and geographic locations, and show that NSLMs significantly reduce perplexity in small-corpus language modeling, and that the performance improvement persists for rare tokens even on much larger corpora. The approach is simple and general, and we discuss how it can be applied to other word classes beyond numbers and geography.
33. Bae, Yongjin, and Kong-Joo Lee. "Hybrid Passage Retrieval Combing Neural and Symbolic Methods." Journal of Korean Institute of Information Technology 20, no. 7 (July 31, 2022): 11–17. http://dx.doi.org/10.14801/jkiit.2022.20.7.11.

34. Dathathri, Sumanth, Sicun Gao, and Richard M. Murray. "Inverse Abstraction of Neural Networks Using Symbolic Interpolation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3437–44. http://dx.doi.org/10.1609/aaai.v33i01.33013437.
Abstract:
Neural networks in real-world applications have to satisfy critical properties such as safety and reliability. The analysis of such properties typically requires extracting information through computing pre-images of the network transformations, but it is well-known that explicit computation of pre-images is intractable. We introduce new methods for computing compact symbolic abstractions of pre-images by computing their overapproximations and underapproximations through all layers. The abstraction of pre-images enables formal analysis and knowledge extraction without affecting standard learning algorithms. We use inverse abstractions to automatically extract simple control laws and compact representations for pre-images corresponding to unsafe outputs. We illustrate that the extracted abstractions are interpretable and can be used for analyzing complex properties.
35. Kollia, Ilianna, Nikolaos Simou, Andreas Stafylopatis, and Stefanos Kollias. "Semantic Image Analysis Using a Symbolic Neural Architecture." Image Analysis & Stereology 29, no. 3 (November 1, 2010): 159. http://dx.doi.org/10.5566/ias.v29.p159-172.
Abstract:
Image segmentation and classification are basic operations in image analysis and multimedia search which have gained great attention over the last few years due to the large increase of digital multimedia content. A recent trend in image analysis aims at incorporating symbolic knowledge representation systems and machine learning techniques. In this paper, we examine interweaving of neural network classifiers and fuzzy description logics for the adaptation of a knowledge base for semantic image analysis. The proposed approach includes a formal knowledge component, which, assisted by a reasoning engine, generates the a-priori knowledge for the image analysis problem. This knowledge is transferred to a kernel based connectionist system, which is then adapted to a specific application field through extraction and use of MPEG-7 image descriptors. Adaptation of the knowledge base can be achieved next. Combined segmentation and classification of images, or video frames, of summer holidays, is the field used to illustrate the good performance of the proposed approach.
36. Carpenter, Gail A., and Ah-Hwee Tan. "Rule Extraction: From Neural Architecture to Symbolic Representation." Connection Science 7, no. 1 (January 1995): 3–27. http://dx.doi.org/10.1080/09540099508915655.

37. Lewis, John E., and Leon Glass. "Nonlinear Dynamics and Symbolic Dynamics of Neural Networks." Neural Computation 4, no. 5 (September 1992): 621–42. http://dx.doi.org/10.1162/neco.1992.4.5.621.
Abstract:
A piecewise linear equation is proposed as a method of analysis of mathematical models of neural networks. A symbolic representation of the dynamics in this equation is given as a directed graph on an N-dimensional hypercube. This provides a formal link with discrete neural networks such as the original Hopfield models. Analytic criteria are given to establish steady states and limit cycle oscillations independent of network dimension. Model networks that display multiple stable limit cycles and chaotic dynamics are discussed. The results show that such equations are a useful and efficient method of investigating the behavior of neural networks.
38. Townsend, Joe, Ed Keedwell, and Antony Galton. "Artificial Development of Biologically Plausible Neural-Symbolic Networks." Cognitive Computation 6, no. 1 (April 13, 2013): 18–34. http://dx.doi.org/10.1007/s12559-013-9217-0.

39. Shavlik, Jude W., Raymond J. Mooney, and Geoffrey G. Towell. "Symbolic and neural learning algorithms: An experimental comparison." Machine Learning 6, no. 2 (March 1991): 111–43. http://dx.doi.org/10.1007/bf00114160.

40. Ueberla, Joerg P., and Arun Jagota. "Integrating Neural and Symbolic Approaches: A Symbolic Learning Scheme for a Connectionist Associative Memory." Connection Science 5, no. 3-4 (January 1993): 377–93. http://dx.doi.org/10.1080/09540099308915706.

41. Garcez, Artur S. d'Avila, and Luís C. Lamb. "A Connectionist Computational Model for Epistemic and Temporal Reasoning." Neural Computation 18, no. 7 (July 2006): 1711–38. http://dx.doi.org/10.1162/neco.2006.18.7.1711.
Abstract:
The importance of the efforts to bridge the gap between the connectionist and symbolic paradigms of artificial intelligence has been widely recognized. The merging of theory (background knowledge) and data learning (learning from examples) into neural-symbolic systems has indicated that such a learning system is more effective than purely symbolic or purely connectionist systems. Until recently, however, neural-symbolic systems were not able to fully represent, reason, and learn expressive languages other than classical propositional and fragments of first-order logic. In this article, we show that nonclassical logics, in particular propositional temporal logic and combinations of temporal and epistemic (modal) reasoning, can be effectively computed by artificial neural networks. We present the language of a connectionist temporal logic of knowledge (CTLK). We then present a temporal algorithm that translates CTLK theories into ensembles of neural networks and prove that the translation is correct. Finally, we apply CTLK to the muddy children puzzle, which has been widely used as a testbed for distributed knowledge representation. We provide a complete solution to the puzzle with the use of simple neural networks, capable of reasoning about knowledge evolution in time and of knowledge acquisition through learning.
42. Prentzas, Jim, and Ioannis Hatzilygeroudis. "Neurules and connectionist expert systems: Unexplored neuro-symbolic reasoning aspects." Intelligent Decision Technologies 15, no. 4 (January 10, 2022): 761–77. http://dx.doi.org/10.3233/idt-210211.
Abstract:
Neuro-symbolic approaches combine neural and symbolic methods. This paper explores aspects regarding the reasoning mechanisms of two neuro-symbolic approaches, that is, neurules and connectionist expert systems. Both provide reasoning and explanation facilities. Neurules are a type of neuro-symbolic rules tightly integrating the neural and symbolic components, giving pre-eminence to the symbolic component. Connectionist expert systems give pre-eminence to the connectionist component. This paper explores reasoning aspects about neurules and connectionist expert systems that have not been previously addressed. As far as neurules are concerned, an aspect playing a role in conflict resolution (i.e., order of neurules) is explored. Experimental results show an improvement in reasoning efficiency. As far as connectionist expert systems are concerned, variations of the reasoning mechanism are explored. Experimental results are presented for them as well showing that one of the variations generally performs better than the others.
43. Yamauchi, Yukari, and Shun'ichi Tano. "Analysis of Symbol Generation and Integration in a Unified Model Based on a Neural Network." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 3 (May 20, 2005): 297–303. http://dx.doi.org/10.20965/jaciii.2005.p0297.
Abstract:
The computational (numerical information) and symbolic (knowledge-based) processing used in intelligent processing has advantages and disadvantages. A simple model integrating symbols into a neural network was proposed as a first step toward fusing computational and symbolic processing. To verify the effectiveness of this model, we first analyze the trained neural network and generate symbols manually. Then we discuss generation methods that are able to discover effective symbols during training of the neural network. We evaluated these through simulations of reinforcement learning in simple football games. Results indicate that the integration of symbols into the neural network improved the performance of player agents.
44. Martin, R. John, and Sujatha Sujatha. "Symbolic-Connectionist Representational Model for Optimizing Decision Making Behavior in Intelligent Systems." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (February 1, 2018): 326. http://dx.doi.org/10.11591/ijece.v8i1.pp326-332.
Abstract:
Modeling higher order cognitive processes like human decision making comes in three representational approaches, namely symbolic, connectionist, and symbolic-connectionist. Many connectionist neural network models have evolved over the decades for optimizing decision making behaviors, and their agents are also in place. There have been attempts to implement symbolic structures within connectionist architectures with distributed representations. Our work was aimed at proposing an enhanced connectionist approach to optimizing decisions within the framework of a symbolic cognitive model. The action selection module of this framework is at the forefront in evolving intelligent agents through a variety of soft computing models. As a continuous effort, a Connectionist Cognitive Model (CCN) had been evolved by bringing a traditional symbolic cognitive process model proposed by LIDA as an inspiration to a feed-forward neural network model for optimizing decision making behaviors in intelligent agents. Significant progress was observed while comparing its performance with other variants.
APA, Harvard, Vancouver, ISO, and other styles
45

Blutner, Reinhard. "Neural networks, penalty logic and optimality theory." ZAS Papers in Linguistics 51 (January 1, 2009): 53–94. http://dx.doi.org/10.21248/zaspil.51.2009.374.

Full text
Abstract:
Ever since the discovery of neural networks, there has been a controversy between two modes of information processing. On the one hand, symbolic systems have proven indispensable for our understanding of higher intelligence, especially when cognitive domains like language and reasoning are examined. On the other hand, it is a matter of fact that intelligence resides in the brain, where computation appears to be organized by numerical and statistical principles and where a parallel distributed architecture is appropriate. The present claim is in line with researchers like Paul Smolensky and Peter Gärdenfors and suggests that this controversy can be resolved by a unified theory of cognition – one that integrates both aspects of cognition and assigns the proper roles to symbolic computation and numerical neural computation. The overall goal in this contribution is to discuss formal systems that are suitable for grounding the formal basis for such a unified theory. It is suggested that the instruments of modern logic and model theoretic semantics are appropriate for analyzing certain aspects of dynamical systems like inferring and learning in neural networks. Hence, I suggest that an active dialogue between the traditional symbolic approaches to logic, information and language and the connectionist paradigm is possible and fruitful. An essential component of this dialogue refers to Optimality Theory (OT) – taken as a theory that likewise aims to overcome the gap between symbolic and neuronal systems. In the light of the proposed logical analysis notions like recoverability and bidirection are explained, and likewise the problem of founding a strict constraint hierarchy is discussed. Moreover, a claim is made for developing an "embodied" OT closing the gap between symbolic representation and embodied cognition.
APA, Harvard, Vancouver, ISO, and other styles
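The penalty-logic idea discussed in this abstract bridges symbolic and numerical computation: each formula carries a numeric weight, and the preferred interpretation is the truth assignment that minimizes the total penalty of violated formulas. A minimal sketch of that selection mechanism (the constraints and penalty values are invented for illustration, using the classic penguin example rather than anything from the paper):

```python
from itertools import product

# Each constraint: (name, penalty, predicate over a truth assignment).
# In penalty logic, a violated formula contributes its weight to the
# cost of an interpretation.
constraints = [
    ("birds fly",           2.0, lambda w: (not w["bird"]) or w["flies"]),
    ("penguins are birds", 10.0, lambda w: (not w["penguin"]) or w["bird"]),
    ("penguins don't fly", 10.0, lambda w: (not w["penguin"]) or not w["flies"]),
]

def cost(world):
    """Sum the penalties of all constraints the world violates."""
    return sum(p for _, p, sat in constraints if not sat(world))

# Enumerate all truth assignments consistent with the fact "penguin".
variables = ["bird", "flies"]
worlds = [dict(zip(variables, vals), penguin=True)
          for vals in product([True, False], repeat=len(variables))]
best = min(worlds, key=cost)
print(best["bird"], best["flies"])  # → True False (cost 2.0)
```

The cheapest world gives up only the weak defeasible rule "birds fly", mirroring how weighted numerical constraints resolve conflicts that Optimality Theory would handle by strict ranking.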
46

MILARÉ, CLAUDIA R., ANDRÉ C. P. DE L. F. DE CARVALHO, and MARIA C. MONARD. "AN APPROACH TO EXPLAIN NEURAL NETWORKS USING SYMBOLIC ALGORITHMS." International Journal of Computational Intelligence and Applications 02, no. 04 (December 2002): 365–76. http://dx.doi.org/10.1142/s1469026802000695.

Full text
Abstract:
Although Artificial Neural Networks have been satisfactorily employed in several problems, such as clustering, pattern recognition, dynamic systems control and prediction, they still suffer from significant limitations. One of them is that the induced concept representation is not usually comprehensible to humans. Several techniques have been suggested to extract meaningful knowledge from trained networks. This paper proposes the use of symbolic learning algorithms, commonly used by the Machine Learning community, such as C4.5, C4.5rules and CN2, to extract symbolic representations from trained networks. The approach proposed is similar to that used by the Trepan algorithm, which extracts symbolic representations, expressed as decision trees, from trained networks. Experimental results are presented and discussed in order to compare the knowledge extracted from Artificial Neural Networks using the proposed approach and the Trepan approach. Results are compared regarding two aspects: fidelity and comprehensibility.
APA, Harvard, Vancouver, ISO, and other styles
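The pedagogical extraction strategy behind Trepan and the approach above can be sketched in a few lines: treat the trained network as a black-box oracle, label a sample of inputs with its predictions, and induce a symbolic model from those labels; fidelity measures how often the symbolic model agrees with the network. In this toy sketch the "network" is a hand-coded sigmoid unit (its weights are invented) and a one-level decision stump stands in for C4.5/CN2:

```python
import math

def network(x):
    """Stand-in for a trained network: a steep sigmoid over a weighted sum,
    effectively implementing the hidden rule x[0] > 0.6 (feature 1 ignored)."""
    z = 50.0 * x[0] + 0.0 * x[1] - 30.0
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

# Sample the input space and label each point with the network's prediction.
samples = [(i / 20.0, j / 20.0) for i in range(21) for j in range(21)]
labels = [network(x) for x in samples]

def fidelity(f, t):
    """Fraction of samples on which the stump 'IF x[f] > t THEN 1' agrees
    with the network."""
    return sum((1 if x[f] > t else 0) == y
               for x, y in zip(samples, labels)) / len(samples)

# Induce the stump with the highest fidelity to the network.
best_f, best_t = max(
    ((f, t) for f in (0, 1) for t in sorted({x[f] for x in samples})),
    key=lambda ft: fidelity(*ft))
print(best_f, best_t, fidelity(best_f, best_t))  # recovers feature 0, t = 0.6
```

The extracted rule is perfectly faithful here because the oracle really is axis-aligned; on a real network, fidelity below 1.0 is the price of a comprehensible representation, which is exactly the trade-off the paper measures.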
47

ROLI, F., S. B. SERPICO, and G. VERNAZZA. "IMAGE RECOGNITION BY INTEGRATION OF CONNECTIONIST AND SYMBOLIC APPROACHES." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 03 (June 1995): 485–515. http://dx.doi.org/10.1142/s0218001495000493.

Full text
Abstract:
This paper presents a methodology for integrating connectionist and symbolic approaches to 2D image recognition. The proposed integration paradigm exploits the synergy of the two approaches for both the training and the recognition phases of an image recognition system. In the training phase, a symbolic module provides an approximate solution to a given image-recognition problem in terms of symbolic models. Such models are hierarchically organized into different abstraction levels, and include contextual descriptions. After mapping such models into a complex neural architecture, a neural training process is carried out to optimize the solution of the recognition problem. The so-obtained neural networks are used during the recognition phase for pattern classification. In this phase, the role of symbolic modules consists of managing complex aspects of information processing: abstraction levels, contextual information, and global recognition hypotheses. A hybrid system implementing the proposed integration paradigm is presented, and its advantages over single approaches are assessed. Results on Magnetic Resonance image recognition are reported, and comparisons with some well-known classifiers are made.
APA, Harvard, Vancouver, ISO, and other styles
48

Sonwane, Atharv, Gautam Shroff, Lovekesh Vig, Ashwin Srinivasan, and Tirtharaj Dash. "Solving Visual Analogies Using Neural Algorithmic Reasoning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13055–56. http://dx.doi.org/10.1609/aaai.v36i11.21664.

Full text
Abstract:
We consider a class of visual analogical reasoning problems that involve discovering the sequence of transformations by which pairs of input/output images are related, so as to analogously transform future inputs. This program synthesis task can be easily solved via symbolic search. Using a variation of the ‘neural analogical reasoning’ approach, we instead search for a sequence of elementary neural network transformations that manipulate distributed representations derived from a symbolic space, to which input images are directly encoded. We evaluate the extent to which our ‘neural reasoning’ approach generalises for images with unseen shapes and positions.
APA, Harvard, Vancouver, ISO, and other styles
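The symbolic-search baseline mentioned in the abstract is easy to make concrete: breadth-first search over sequences of elementary grid transformations until one maps the example input to the example output, then replay that sequence on a new input. A minimal sketch with two invented operations (the paper itself instead searches over neural transformations of distributed representations):

```python
from collections import deque

# Elementary transformations on a grid (a tuple of row tuples).
OPS = {
    "rot90": lambda g: tuple(zip(*g[::-1])),           # rotate clockwise
    "flip_h": lambda g: tuple(row[::-1] for row in g), # mirror left-right
}

def solve_analogy(src, dst, max_depth=3):
    """BFS for the shortest operation sequence transforming src into dst."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        grid, seq = queue.popleft()
        if grid == dst:
            return seq
        if len(seq) < max_depth:
            for name, op in OPS.items():
                nxt = op(grid)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, seq + [name]))
    return None

# A : B :: C : ?  — discover the program from the example pair, reuse it on C.
A = ((1, 2), (3, 4))
B = ((3, 1), (4, 2))             # A rotated clockwise
program = solve_analogy(A, B)    # → ["rot90"]
C = ((5, 6), (7, 8))
answer = C
for name in program:
    answer = OPS[name](answer)
print(program, answer)           # → ['rot90'] ((7, 5), (8, 6))
```
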
49

Bak, Stanley. "Symbolic and Numeric Challenges in Neural Network Verification Methods." Electronic Proceedings in Theoretical Computer Science 361 (July 10, 2022): 5. http://dx.doi.org/10.4204/eptcs.361.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Segwang, Hyoungwook Nam, Joonyoung Kim, and Kyomin Jung. "Neural Sequence-to-grid Module for Learning Symbolic Rules." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8163–71. http://dx.doi.org/10.1609/aaai.v35i9.16994.

Full text
Abstract:
Logical reasoning tasks over symbols, such as learning arithmetic operations and computer program evaluations, have become challenges to deep learning. In particular, even state-of-the-art neural networks fail to achieve out-of-distribution (OOD) generalization of symbolic reasoning tasks, whereas humans can easily extend learned symbolic rules. To resolve this difficulty, we propose a neural sequence-to-grid (seq2grid) module, an input preprocessor that automatically segments and aligns an input sequence into a grid. As our module outputs a grid via a novel differentiable mapping, any neural network structure taking a grid input, such as ResNet or TextCNN, can be jointly trained with our module in an end-to-end fashion. Extensive experiments show that neural networks having our module as an input preprocessor achieve OOD generalization on various arithmetic and algorithmic problems including number sequence prediction problems, algebraic word problems, and computer program evaluation problems while other state-of-the-art sequence transduction models cannot. Moreover, we verify that our module enhances TextCNN to solve the bAbI QA tasks without external memory.
APA, Harvard, Vancouver, ISO, and other styles
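The intuition behind a sequence-to-grid preprocessor can be illustrated without the learned, differentiable mapping of the paper: aligning the operands of an arithmetic expression into a right-justified grid puts corresponding digits in the same column, which is exactly the local spatial structure a grid-based network such as ResNet or TextCNN can exploit. A hand-written (non-learned) stand-in for the module:

```python
def seq_to_grid(tokens, pad="0"):
    """Split an 'a+b' token sequence at the operator and right-align the
    operands in a 2-row grid, padding on the left (a hand-crafted stand-in
    for the learned seq2grid module)."""
    rows = "".join(tokens).split("+")
    width = max(len(r) for r in rows)
    return [list(r.rjust(width, pad)) for r in rows]

grid = seq_to_grid(list("123+47"))
for row in grid:
    print("".join(row))   # "123" over "047": digits line up column-wise

# With aligned columns, schoolbook addition with carries becomes a local,
# convolution-friendly operation over each column:
carry, digits = 0, []
for col in range(len(grid[0]) - 1, -1, -1):
    s = sum(int(row[col]) for row in grid) + carry
    digits.append(str(s % 10))
    carry = s // 10
result = ("" if carry == 0 else str(carry)) + "".join(reversed(digits))
print(result)  # → 170
```
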