Journal articles on the topic 'Artificial symbol learning'

To see the other types of publications on this topic, follow the link: Artificial symbol learning.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Artificial symbol learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Pollack, Courtney. "Same-different judgments with alphabetic characters: The case of literal symbol processing." Journal of Numerical Cognition 5, no. 2 (August 22, 2019): 241–59. http://dx.doi.org/10.5964/jnc.v5i2.163.

Full text
Abstract:
Learning mathematics requires fluency with symbols that convey numerical magnitude. Algebra and higher-level mathematics involve literal symbols, such as "x", that often represent numerical magnitude. Compared to other symbols, such as Arabic numerals, literal symbols may require more complex processing because they have strong pre-existing associations in literacy. The present study tested this notion using same-different tasks that produce less efficient judgments for different magnitudes that are closer together compared to farther apart (i.e., same-different distance effects). Twenty-four adolescents completed three same-different tasks using Arabic numerals, literal symbols, and artificial symbols. All three symbolic formats produced same-different distance effects, showing literal and artificial symbol processing of numerical magnitude. Importantly, judgments took longer for literal symbols than artificial symbols on average, suggesting a cost specific to literal symbol processing. Taken together, results suggest that literal symbol processing differs from processing of other symbols that represent numerical magnitude.
APA, Harvard, Vancouver, ISO, and other styles
2

Ahmetoglu, Alper, M. Yunus Seker, Justus Piater, Erhan Oztop, and Emre Ugur. "DeepSym: Deep Symbol Generation and Rule Learning for Planning from Unsupervised Robot Interaction." Journal of Artificial Intelligence Research 75 (November 6, 2022): 709–45. http://dx.doi.org/10.1613/jair.1.13754.

Full text
Abstract:
Symbolic planning and reasoning are powerful tools for robots tackling complex tasks. However, the need to manually design the symbols restricts their applicability, especially for robots that are expected to act in open-ended environments. Therefore, symbol formation and rule extraction should be considered part of robot learning, which, when done properly, will offer scalability, flexibility, and robustness. Towards this goal, we propose a novel general method that finds action-grounded, discrete object and effect categories and builds probabilistic rules over them for non-trivial action planning. Our robot interacts with objects using an initial action repertoire that is assumed to be acquired earlier and observes the effects it can create in the environment. To form action-grounded object, effect, and relational categories, we employ a binary bottleneck layer in a predictive, deep encoder-decoder network that takes the image of the scene and the action applied as input, and generates the resulting effects in the scene in pixel coordinates. After learning, the binary latent vector represents action-driven object categories based on the interaction experience of the robot. To distill the knowledge represented by the neural network into rules useful for symbolic reasoning, a decision tree is trained to reproduce its decoder function. Probabilistic rules are extracted from the decision paths of the tree and are represented in the Probabilistic Planning Domain Definition Language (PPDDL), allowing off-the-shelf planners to operate on the knowledge extracted from the sensorimotor experience of the robot. The deployment of the proposed approach for a simulated robotic manipulator enabled the discovery of discrete representations of object properties such as ‘rollable’ and ‘insertable’. In turn, the use of these representations as symbols allowed the generation of effective plans for achieving goals, such as building towers of the desired height, demonstrating the effectiveness of the approach for multi-step object manipulation. Finally, we demonstrate that the system is not restricted to the robotics domain by assessing its applicability to the MNIST 8-puzzle domain, in which learned symbols allow for the generation of plans that move the empty tile into any given position.
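The DeepSym abstract above hinges on a predictive encoder-decoder whose bottleneck is forced to be binary so that its bits can serve as discrete symbols. The sketch below is an editorial illustration of that idea only, assuming a straight-through estimator for the binarization step; the layer sizes, the MSE objective, and the random input are placeholders, not the authors' architecture or training setup.

```python
# Illustrative sketch (not the authors' code): an encoder-decoder with a
# binary bottleneck, using a straight-through estimator for the rounding step.
import torch
import torch.nn as nn

class BinaryBottleneckAE(nn.Module):
    def __init__(self, input_dim=64, code_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        logits = self.encoder(x)
        probs = torch.sigmoid(logits)
        # Straight-through estimator: hard 0/1 codes forward, soft gradients backward.
        codes = (probs > 0.5).float() + probs - probs.detach()
        return self.decoder(codes), codes

model = BinaryBottleneckAE()
x = torch.randn(16, 64)                  # stand-in for (scene, action) features
recon, codes = model(x)
loss = nn.functional.mse_loss(recon, x)  # predict effects from the discrete codes
loss.backward()
```

After training, each distinct binary code can be treated as a candidate symbol over which rules are learned, which is the role the decision tree plays in the paper.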
APA, Harvard, Vancouver, ISO, and other styles
3

Baek, Sung-Bum, Jin-Gon Shon, and Ji-Su Park. "CAC: A Learning Context Recognition Model Based on AI for Handwritten Mathematical Symbols in e-Learning Systems." Mathematics 10, no. 8 (April 12, 2022): 1277. http://dx.doi.org/10.3390/math10081277.

Full text
Abstract:
The e-learning environment should support the handwriting of mathematical expressions and accurately recognize inputted handwritten mathematical expressions. To this end, expression-related information should be fully utilized in e-learning environments. However, pre-existing handwritten mathematical expression recognition models mainly utilize the shape of handwritten mathematical symbols, thus limiting their ability to improve the recognition accuracy of vaguely represented symbols. Therefore, in this paper, a context-aided correction (CAC) model is proposed that adjusts the output of handwritten mathematical symbol (HMS) recognition by additionally utilizing information related to the HMS in an e-learning system. The CAC model collects learning contextual data associated with the HMS and converts them into learning contextual information. Next, the contextual information is recognized through artificial intelligence to adjust the recognition output of the HMS. Finally, the CAC model is trained and tested using a dataset similar to that of a real learning situation. The experimental results show that the recognition accuracy of handwritten mathematical symbols is improved when using the CAC model.
APA, Harvard, Vancouver, ISO, and other styles
4

YANG, DERSHUNG, LARRY A. RENDELL, JULIE L. WEBSTER, DORIS S. SHAW, and JAMES H. GARRETT. "SYMBOL RECOGNITION IN A CAD ENVIRONMENT USING A NEURAL NETWORK." International Journal on Artificial Intelligence Tools 03, no. 02 (June 1994): 157–85. http://dx.doi.org/10.1142/s0218213094000091.

Full text
Abstract:
A new neural network called AUGURS is designed to assist a user of a Computer-Aided Design system in utilizing standard graphic symbols. With AUGURS, the CAD user can avoid searching for standard symbols in a large library and rely on AUGURS to automatically retrieve those symbols resembling the user's drawing. More specifically, AUGURS takes as input a bitmap image normalized with respect to location, size, and orientation, and outputs a list of standard symbols ranked by its assessment of the similarity between each symbol and the input image. Only the top-ranked symbols are presented to the user for selection. AUGURS encodes geometric knowledge into its network structure and carefully balances its discriminant power and noise tolerance. The encoded knowledge enables AUGURS to learn reasonably well despite the limited number of training examples, the most serious challenge for the CAD domain. We have compared AUGURS with the Zipcode Net, a traditional layered feed-forward network with an unconstrained structure, and a network that takes as input either Zernike or pseudo-Zernike moments. The experimental results show that AUGURS achieves the best recognition performance among the networks compared, with reasonable recognition and learning efficiency.
APA, Harvard, Vancouver, ISO, and other styles
5

Crespo, Kimberly, and Margarita Kaushanskaya. "The Role of Attention, Language Ability, and Language Experience in Children's Artificial Grammar Learning." Journal of Speech, Language, and Hearing Research 65, no. 4 (April 4, 2022): 1574–91. http://dx.doi.org/10.1044/2021_jslhr-21-00112.

Full text
Abstract:
Purpose: The current study examined the role of attention and language ability in nonverbal rule induction performance in a demographically diverse sample of school-age children. Method: The participants included 43 English-speaking monolingual and 65 Spanish–English bilingual children between the ages of 5 and 9 years. Core Language Index standard scores from the Clinical Evaluation of Language Fundamentals–Fourth Edition indexed children's language skills. Rule induction was measured via a visual artificial grammar learning task. Two equally complex finite-state artificial grammars were used. Children learned one grammar in a low attention condition (where children were exposed to symbol sequences with no distractors) and another grammar in a high attention condition (where distractor symbols were presented around the perimeter of the target symbol sequences). Results: Overall, performance in the high attention condition was significantly worse than performance in the low attention condition. Children with robust language skills performed significantly better in the high attention condition than children with weaker language skills. Despite group differences in socioeconomic status, English language skills, and nonverbal intelligence, monolingual and bilingual children performed similarly to each other in both conditions. Conclusion: The results suggest that the ability to extract rules from visual input is attenuated by the presence of competing visual information and that language ability, but not bilingualism, may influence rule induction.
APA, Harvard, Vancouver, ISO, and other styles
6

KITANI, KRIS M., YOICHI SATO, and AKIHIRO SUGIMOTO. "RECOVERING THE BASIC STRUCTURE OF HUMAN ACTIVITIES FROM NOISY VIDEO-BASED SYMBOL STRINGS." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 08 (December 2008): 1621–46. http://dx.doi.org/10.1142/s0218001408006776.

Full text
Abstract:
In recent years stochastic context-free grammars have been shown to be effective in modeling human activities because of the hierarchical structures they represent. However, most of the research in this area has yet to address the issue of learning the activity grammars from a noisy input source, namely, video. In this paper, we present a framework for identifying noise and recovering the basic activity grammar from a noisy symbol string produced by video. We identify the noise symbols by finding the set of non-noise symbols that optimally compresses the training data, where the optimality of compression is measured using an MDL criterion. We show the robustness of our system to noise and its effectiveness in learning the basic structure of human activity, through experiments with artificial data and a real video sequence from a local convenience store.
APA, Harvard, Vancouver, ISO, and other styles
7

Inkelas, Sharon, Keith Johnson, Charles Lee, Emil Minas, George Mulcaire, Gek Yong Keng, and Tomomi Yuasa. "Testing the Learnability of Writing Systems." Annual Meeting of the Berkeley Linguistics Society 39, no. 1 (December 16, 2013): 75. http://dx.doi.org/10.3765/bls.v39i1.3871.

Full text
Abstract:
In lieu of an abstract, here is a brief excerpt: The world's sound-based writing systems differ according to the size of the typical speech chunk which is mapped to a symbol: the phone, in so-called alphabetic writing systems, and the mora, demisyllable or syllable, in so-called syllabaries. This paper reports the results of an artificial learning study designed to test whether the acoustic stability of the speech chunks mapped to symbols is a factor in subjects' ability to learn a novel writing system.
APA, Harvard, Vancouver, ISO, and other styles
8

Latapie, Hugo, Ozkan Kilic, Gaowen Liu, Ramana Kompella, Adam Lawrence, Yuhong Sun, Jayanth Srinivasa, Yan Yan, Pei Wang, and Kristinn R. Thórisson. "A Metamodel and Framework for Artificial General Intelligence From Theory to Practice." Journal of Artificial Intelligence and Consciousness 08, no. 02 (April 22, 2021): 205–27. http://dx.doi.org/10.1142/s2705078521500119.

Full text
Abstract:
This paper introduces a new metamodel-based knowledge representation that significantly improves autonomous learning and adaptation. While interest in hybrid machine learning/symbolic AI systems leveraging, for example, reasoning and knowledge graphs is gaining popularity, we find there remains a need for both a clear definition of knowledge and a metamodel to guide the creation and manipulation of knowledge. Some of the benefits of the metamodel we introduce in this paper include a solution to the symbol grounding problem, cumulative learning, and federated learning. We have applied the metamodel to problems ranging from time series analysis to computer vision and natural language understanding, and have found that the metamodel enables a wide variety of learning mechanisms, ranging from machine learning to graph network analysis and learning by reasoning engines, to interoperate in a highly synergistic way. Our metamodel-based projects have consistently exhibited unprecedented accuracy, performance, and ability to generalize. This paper is inspired by the state-of-the-art approaches to AGI, recent AGI-aspiring work, the granular computing community, as well as Alfred Korzybski's general semantics. One surprising consequence of the metamodel is that it not only enables a new level of autonomous learning and optimal functioning for machine intelligences, but may also shed light on a path to better understanding how to improve human cognition.
APA, Harvard, Vancouver, ISO, and other styles
9

Raue, Federico, Andreas Dengel, Thomas M. Breuel, and Marcus Liwicki. "Symbol Grounding Association in Multimodal Sequences with Missing Elements." Journal of Artificial Intelligence Research 61 (April 11, 2018): 787–806. http://dx.doi.org/10.1613/jair.5736.

Full text
Abstract:
In this paper, we extend a symbolic association framework to handle missing elements in multimodal sequences. The general scope of the work is the symbolic association of object-word mappings as it happens in language development in infants. In other words, two different representations of the same abstract concept can be associated in both directions. This scenario has long been of interest in Artificial Intelligence, Psychology, and Neuroscience. In this work, we extend a recent approach for multimodal sequences (visual and audio) to also cope with missing elements in one or both modalities. Our method uses two parallel Long Short-Term Memories (LSTMs) with a learning rule based on the EM algorithm. It aligns both LSTM outputs via Dynamic Time Warping (DTW). We propose to include an extra step that combines the outputs with the max operation to exploit the common elements between both sequences. The motivation behind this is that the combination acts as a condition selector for choosing the best representation from both LSTMs. We evaluated the proposed extension in the following scenarios: missing elements in one modality (visual or audio) and missing elements in both modalities (visual and sound). The performance of our extension reaches better results than the original model and similar results to individual LSTMs trained on each modality.
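As an editorial aside to the alignment step mentioned in the abstract above, the following is a minimal dynamic time warping routine in plain NumPy; the random sequences stand in for per-frame network outputs and are not the authors' data or model.

```python
# Illustrative sketch (editorial): plain dynamic time warping used to align
# two feature sequences of possibly different lengths.
import numpy as np

def dtw_distance(a, b):
    """a: (n, d) array, b: (m, d) array; returns the DTW alignment cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

visual_seq = np.random.rand(12, 10)   # e.g. per-frame outputs from one modality
audio_seq = np.random.rand(9, 10)     # e.g. per-frame outputs from the other
print(dtw_distance(visual_seq, audio_seq))
```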
APA, Harvard, Vancouver, ISO, and other styles
10

Kocabas, S. "A review of learning." Knowledge Engineering Review 6, no. 3 (September 1991): 195–222. http://dx.doi.org/10.1017/s0269888900005804.

Full text
Abstract:
Learning is one of the important research fields in artificial intelligence. This paper begins with an outline of the definitions of learning and intelligence, followed by a discussion of the aims of machine learning as an emerging science, and an historical outline of machine learning. The paper then examines the elements and various classifications of learning, and introduces a new classification of learning based on the levels of representation and learning, namely knowledge-, symbol-, and device-level learning. Similarity- and explanation-based generalization and conceptual clustering are described as knowledge-level learning methods. Learning in classifiers, genetic algorithms, and classifier systems is described as symbol-level learning, and neural networks are described as device-level systems. In accordance with this classification, methods of learning are described in terms of inputs, learning algorithms or devices, and outputs. There then follows a discussion on the relationships between knowledge representation and learning, and a discussion on the limits of learning in knowledge systems. The paper concludes with a summary of the results drawn from this review.
APA, Harvard, Vancouver, ISO, and other styles
11

Yamauchi, Yukari, and Shun'ichi Tano. "Analysis of Symbol Generation and Integration in a Unified Model Based on a Neural Network." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 3 (May 20, 2005): 297–303. http://dx.doi.org/10.20965/jaciii.2005.p0297.

Full text
Abstract:
The computational (numerical information) and symbolic (knowledge-based) processing used in intelligent systems each have advantages and disadvantages. A simple model integrating symbols into a neural network was proposed as a first step toward fusing computational and symbolic processing. To verify the effectiveness of this model, we first analyze the trained neural network and generate symbols manually. Then we discuss generation methods that are able to discover effective symbols during training of the neural network. We evaluated these through simulations of reinforcement learning in simple football games. The results indicate that the integration of symbols into the neural network improved the performance of player agents.
APA, Harvard, Vancouver, ISO, and other styles
12

Lyre, Holger. "The State Space of Artificial Intelligence." Minds and Machines 30, no. 3 (September 2020): 325–47. http://dx.doi.org/10.1007/s11023-020-09538-3.

Full text
Abstract:
The goal of the paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another dimension is recognized as generalization, the possibility to go over from specific to more general types of problems. A third dimension is semantic grounding. Our overall analysis connects to a number of known foundational issues in the philosophy of mind and cognition: the blockhead objection, the Turing test, the symbol grounding problem, the Chinese room argument, and use theories of meaning. It shall finally be argued that the dimension of grounding decomposes into three sub-dimensions. And the dimension of self-learning turns out as only one of a whole range of “self-x-capacities” (based on ideas of organic computing) that span the self-x-subspace of the full AI state space.
APA, Harvard, Vancouver, ISO, and other styles
13

Kull, Kalevi. "Steps towards the natural meronomy and taxonomy of semiosis: Emon between index and symbol?" Sign Systems Studies 47, no. 1/2 (August 8, 2019): 88–104. http://dx.doi.org/10.12697/sss.2019.47.1-2.03.

Full text
Abstract:
The main aim of this brief and purposely radical essay is to investigate further possibilities for empirical research in natural classification of semiosis (signs as wholes). Before introducing emon – a missing term in the taxonomy of signs – we make a distinction between the natural and artificial, and between the taxonomic and meronomic classifications of signs. Natural classifications or typologies are empirically based, while artificial classifications do not require empirical test. Meronomy describes the relational or functional structure of the whole (for instance triadic, circular, etc. composition of sign), while taxonomy categorizes individuals (individual signs). We argue that a natural taxonomy of signs can be based on the existence of different complexity of operations during semiosis, which implies different mechanisms of learning. We add into the taxonomy a particular type of signs – emonic signs, which are at work in imitation and social learning, while being more complex than indexes and less complex than symbols. Icons are related to imprinting, indexes to conditioning, emons to imitating, and symbols to conventions or naming. We also argue that the semiotic typologies could undergo large changes after the discovery of the proper mechanisms or workings of semiosis.
APA, Harvard, Vancouver, ISO, and other styles
14

HOLLAND, HANS, MIROSLAV KUBAT, and JAN ŽIŽKA. "HANDLING AMBIGUOUS VALUES IN INSTANCE-BASED CLASSIFIERS." International Journal on Artificial Intelligence Tools 17, no. 03 (June 2008): 449–63. http://dx.doi.org/10.1142/s0218213008003996.

Full text
Abstract:
In an attempt to automate evaluation of network intrusion detection systems, we encountered the problem of ambiguously described learning examples. For instance, an attribute's value, or a class label, in a given example was known to be a or b but definitely not c or d. Previous research in machine learning usually either “disambiguated” the value (by giving preference to a or b) or replaced it with a “don't-know” symbol. Neither approach is satisfactory: while the former distorts the available information by pretending precise knowledge, the latter ignores the fact that at least something is known. Our experiments confirm the intuition that classification performance is indeed impaired if the ambiguities are not handled properly. In the research reported here, we limited ourselves to the realm of the relatively simple nearest-neighbor classifiers and investigated a few alternative solutions. The paper describes the techniques we used and their behavior in experimental domains.
APA, Harvard, Vancouver, ISO, and other styles
15

Haumahu, John Pierre. "Recognition of Beam's Music Notation Patterns Using Artificial Neural Networks with The Backpropagation Method." JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 3, no. 1 (July 25, 2019): 41. http://dx.doi.org/10.31289/jite.v3i1.2557.

Full text
Abstract:
Beam notation is officially used as the standard for international music notation and is often found in scores for both musical instruments and vocals. In Indonesia, numerical notation is more widely used and understood, because learning beam notation is not easy and takes time to introduce each symbol and its meaning. Pattern recognition technology makes it possible to recognize the patterns of beam notation. The software used for system development is Matlab, utilizing an artificial neural network with the backpropagation method to recognize the patterns of beam notation. Backpropagation is a supervised learning method in which the system is trained first, after which it can understand and identify patterns based on the knowledge gained. The final result shows that the system is able to recognize patterns from previously studied notations with a best accuracy of 91.20%.
APA, Harvard, Vancouver, ISO, and other styles
16

Aberšek, Boris, Bojan Borstner, and Janez Bregant. "THE VIRTUAL SCIENCE TEACHER AS A HYBRID SYSTEM: COGNITIVE SCIENCE HAND IN HAND WITH CYBERNETIC PEDAGOGY." Journal of Baltic Science Education 13, no. 1 (February 25, 2014): 75–90. http://dx.doi.org/10.33225/jbse/14.13.75.

Full text
Abstract:
The findings of cybernetic pedagogy and didactics developed in the 1970s, which were in those times limited due to poor technological capabilities, are taken as a starting point in this research. A revised version of cybernetic pedagogy is proposed and is used to develop the hybrid cognitive model presented. It is not based purely on the symbolic notation of the teaching algorithm alone, as done in the past, but also on the connectionist model of our cognition, which draws on the brain’s characteristics and their physiological and functional structure, such as parallel data processing, content associative memory and divided presentations. The learning process algorithm can be, on the basis of this idea, re-defined as a hybrid cognitive model, i.e. a combination of a symbol system and a neural network, and named mRKP. The article concludes that an intelligent artificial tutor can independently, i.e. without the need for reprogramming, accommodate the learning process to the needs and possibilities of an individual student if it uses mRKP as its basis, and can eventually replace a human tutor in some situations, but a human teacher must decide where and when it should be used. Key words: cybernetic pedagogy, cognitive science, hybrid systems, learning algorithm, e-learning material.
APA, Harvard, Vancouver, ISO, and other styles
17

Hanson, Stephen José, and David J. Burr. "What connectionist models learn: Learning and representation in connectionist networks." Behavioral and Brain Sciences 13, no. 3 (September 1990): 471–89. http://dx.doi.org/10.1017/s0140525x00079760.

Full text
Abstract:
Connectionist models provide a promising alternative to the traditional computational approach that has for several decades dominated cognitive science and artificial intelligence, although the nature of connectionist models and their relation to symbol processing remains controversial. Connectionist models can be characterized by three general computational features: distinct layers of interconnected units, recursive rules for updating the strengths of the connections during learning, and “simple” homogeneous computing elements. Using just these three features one can construct surprisingly elegant and powerful models of memory, perception, motor control, categorization, and reasoning. What makes the connectionist approach unique is not its variety of representational possibilities (including “distributed representations”) or its departure from explicit rule-based models, or even its preoccupation with the brain metaphor. Rather, it is that connectionist models can be used to explore systematically the complex interaction between learning and representation, as we try to demonstrate through the analysis of several large networks.
APA, Harvard, Vancouver, ISO, and other styles
18

Yan, Yuan. "Exploring the Philosophical Problems of Artificial Intelligence Based on ERP Experiment." Proceedings 47, no. 1 (June 4, 2020): 53. http://dx.doi.org/10.3390/proceedings2020047053.

Full text
Abstract:
To study the cognitive process of the human brain in dealing with philosophical issues, for the first time, from the perspective of scientific experiments, the issue of “relationship” in philosophy was verified. A set of algorithms combining physiology analysis and computer technology linearity and a nonlinear manifold learning algorithm were proposed. Two groups of auditory cognitive experiments were performed, and the concept expected effect was defined as the symbol of conceptual intervention. From the perspective of time, whether the concept was involved after the sensation arises was explored. EEG (electroencephalogram) physiology was used to analyze the data. The results showed that the concept induced a positive shift of the waveform after intervention. It has little effect on the early components, but it has a significant effect on the composition of the sensory components. Waveform changes before and after conceptual intervention have significant main effects. Perceptual production does not involve conceptual intervention, which verifies, in time, that “the relationship” exists.
APA, Harvard, Vancouver, ISO, and other styles
19

Yan, Yuan. "Exploring the Philosophical Problems of Artificial Intelligence Based on ERP Experiment." Proceedings 47, no. 1 (June 4, 2020): 53. http://dx.doi.org/10.3390/proceedings47010053.

Full text
Abstract:
To study the cognitive process of the human brain in dealing with philosophical issues, for the first time, from the perspective of scientific experiments, the issue of “relationship” in philosophy was verified. A set of algorithms combining physiology analysis and computer technology linearity and a nonlinear manifold learning algorithm were proposed. Two groups of auditory cognitive experiments were performed, and the concept expected effect was defined as the symbol of conceptual intervention. From the perspective of time, whether the concept was involved after the sensation arises was explored. EEG (electroencephalogram) physiology was used to analyze the data. The results showed that the concept induced a positive shift of the waveform after intervention. It has little effect on the early components, but it has a significant effect on the composition of the sensory components. Waveform changes before and after conceptual intervention have significant main effects. Perceptual production does not involve conceptual intervention, which verifies, in time, that “the relationship” exists.
APA, Harvard, Vancouver, ISO, and other styles
20

Zabala-Blanco, David, Marco Mora, Cesar A. Azurdia-Meza, and Ali Dehghan Firoozabadi. "Extreme Learning Machines to Combat Phase Noise in RoF-OFDM Schemes." Electronics 8, no. 9 (August 22, 2019): 921. http://dx.doi.org/10.3390/electronics8090921.

Full text
Abstract:
Radio-over-fiber (RoF) orthogonal frequency division multiplexing (OFDM) systems have emerged as a solution to support secure, cost-effective, and high-capacity wireless access for future telecommunication systems. Unfortunately, the bandwidth-distance product in these schemes is mainly limited by phase noise that comes from the laser linewidth, as well as by chromatic fiber dispersion. On the other hand, the single-hidden-layer feedforward neural network trained with the extreme learning machine (ELM) algorithm has been widely studied in regression and classification problems across research fields, because of its good generalization performance and extremely fast learning speed. In this work, ELMs in the real and complex domains for direct-detection OFDM-based RoF schemes are proposed for the first time. These artificial neural networks use pilot subcarriers as training samples and data subcarriers as testing samples, so their learning stage occurs in real time without decreasing the effective transmission rate. Compared with the feasible pilot-assisted equalization method, the effectiveness and simplicity of the ELM algorithm in the complex domain are highlighted by evaluating a QPSK-OFDM signal over an additive white Gaussian noise channel at diverse laser linewidths and chromatic fiber dispersion effects, taking into account several OFDM symbol periods. Considering diverse relationships between the fiber transmission distance and the radio frequency (for practical design purposes), and a single OFDM symbol duration of 64 ns, the fully complex ELM followed by the real ELM outperform pilot-based channel correction in terms of the system's performance tolerance against the signal-to-noise ratio and the laser linewidth.
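For readers unfamiliar with the ELM idea referenced above, here is a minimal editorial sketch of an extreme learning machine regressor: the hidden-layer weights are random and never trained, and only the output weights are solved in closed form. The toy pilot and data arrays are placeholders, not the paper's OFDM signal model.

```python
# Illustrative sketch (editorial): a basic extreme learning machine regressor --
# random hidden-layer weights, output weights solved by least squares.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=64):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: pilot subcarriers as training samples, data subcarriers as test.
X_pilot, Y_pilot = rng.normal(size=(100, 2)), rng.normal(size=(100, 2))
W, b, beta = elm_fit(X_pilot, Y_pilot)
Y_hat = elm_predict(rng.normal(size=(400, 2)), W, b, beta)
```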
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Na, and Ting Gong. "A Fuzzy Multicriteria Assessment Mechanism towards Musical Courses Using Deep Learning." Mathematical Problems in Engineering 2022 (June 13, 2022): 1–11. http://dx.doi.org/10.1155/2022/5830850.

Full text
Abstract:
Musical courses have become quite prevalent spiritual activities in both online and offline settings. However, their teaching quality varies and cannot easily be assessed by a general, nonprofessional audience. Given the limited number of experts, it is worth investigating intelligent mechanisms that can automatically assess the teaching quality of musical courses. To deal with this issue, the combination of artificial intelligence and conventional music knowledge is a promising approach. In this work, a fuzzy multicriteria assessment mechanism is applied to musical courses using a typical deep learning model, the convolutional neural network (CNN). Specifically, features inside the musical symbol sequences are extracted by a residual CNN structure. Next, multilevel features inside the musical notes are further fused with a neural computing structure, so that the feature abstraction of the initial musical objects can be improved. On this basis, notes are identified using a bidirectional recurrent unit structure in order to speed up the fitting efficiency of the whole assessment framework. A comprehensive experimental analysis comparing the proposed method with several baseline methods shows the good performance of the proposal.
APA, Harvard, Vancouver, ISO, and other styles
22

Xu, Zhiwu, Cheng Wen, Shengchao Qin, and Mengda He. "Extracting automata from neural networks using active learning." PeerJ Computer Science 7 (April 19, 2021): e436. http://dx.doi.org/10.7717/peerj-cs.436.

Full text
Abstract:
Deep learning is one of the most advanced forms of machine learning. Most modern deep learning models are based on artificial neural networks, and benchmarking studies reveal that neural networks have produced results comparable to, and in some cases superior to, human experts. However, the generated neural networks are typically regarded as incomprehensible black-box models, which not only limits their applications but also hinders testing and verification. In this paper, we present an active learning framework to extract automata from neural network classifiers, which can help users to understand the classifiers. In more detail, we use Angluin's L* algorithm as a learner and the neural network under learning as an oracle, employing abstract interpretation of the neural network for answering membership and equivalence queries. Our abstraction consists of value, symbol and word abstractions. The factors that may affect the abstraction are also discussed in the paper. We have implemented our approach in a prototype. To evaluate it, we have applied the prototype to an MNIST classifier and have identified that the abstraction with interval number 2 and block size 1 × 28 offers the best performance in terms of F1 score. We have also compared our extracted DFA against the DFAs learned via the passive learning algorithms provided in LearnLib, and the experimental results show that our DFA gives better performance on the MNIST dataset.
APA, Harvard, Vancouver, ISO, and other styles
23

Minesh, Patel. "A REVIEW ON IMPORTANCE OF ARTIFICIAL INTELLIGENCE IN PARKINSONS DISEASE & ITS FUTURE OUTCOMES FOR PARKINSONS DISEASE." International Journal of Advanced Research 9, no. 09 (September 30, 2021): 142–47. http://dx.doi.org/10.21474/ijar01/13387.

Full text
Abstract:
Over the centuries, increasingly sophisticated tools have been developed to serve humanity. In many ways, digital computers are just another tool: they can perform the same number and symbol operations as ordinary people, but faster and more reliably. This article provides an overview of artificial intelligence algorithms used in computer programs and applications, including knowledge-based systems and computational intelligence; artificial intelligence is the science of imitating human intelligence on computers. This can support doctors when making a medical diagnosis. A data-driven approach using molecular data, machine learning, and natural sources is applied to determine Parkinson's disease (PD) subtypes in two large, independent groups of newly diagnosed patients. Parkinson's disease causes difficulty in hand movement, which has been addressed in multiple studies using several methods at the same time. The treatment of Parkinson's disease is an evolving field, with new treatments and improvements over older pharmacological, surgical, and therapeutic methods to address the specific problems that patients present.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Yingxue, and Zhe Li. "Automatic Synthesis Technology of Music Teaching Melodies Based on Recurrent Neural Network." Scientific Programming 2021 (December 9, 2021): 1–10. http://dx.doi.org/10.1155/2021/1704995.

Full text
Abstract:
Computer music creation has broad application prospects. It generally relies on artificial intelligence (AI) and machine learning (ML) to generate a music score that matches the original mono-symbol score model, or to memorize and recognize the rhythms and beats of the music. However, there are very few music melody synthesis models based on artificial neural networks (ANNs), and some ANN-based models cannot adapt to the transposition invariance of the original rhythm training set. To overcome this defect, this paper develops an automatic synthesis technology for music teaching melodies based on a recurrent neural network (RNN). First, a strategy was proposed to extract the acoustic features from a music melody. Next, the sequence-to-sequence model was adopted to synthesize general music melodies. After that, an RNN was established to synthesize music melody together with singing melody, so as to find suitable singing segments for the music melody in a teaching scenario. The RNN can synthesize a music melody with a short delay based solely on static acoustic features, eliminating the need for dynamic features. The proposed model was validated through experiments.
APA, Harvard, Vancouver, ISO, and other styles
25

Feng, Rui, Cheng-Chen Huang, Kun Luo, and Hui-Jun Zheng. "Deciphering wintertime air pollution upon the West Lake of Hangzhou, China." Journal of Intelligent & Fuzzy Systems 40, no. 3 (March 2, 2021): 5215–23. http://dx.doi.org/10.3233/jifs-201964.

Full text
Abstract:
The West Lake of Hangzhou, a world-famous landscape and cultural symbol of China, suffered from severe air quality degradation in January 2015. In this work, Random Forest (RF) and Recurrent Neural Networks (RNN) are used to analyze and predict air pollutants on the central island of the West Lake. We quantitatively demonstrate that PM2.5 and PM10 were chiefly associated with the ups and downs of the gaseous air pollutants (SO2, NO2 and CO). Compared with the gaseous air pollutants, meteorological circumstances and regional transport played trivial roles in shaping PM. The predominant meteorological factor for SO2, NO2 and surface O3 was dew-point deficit. The proportion of sulfate in PM10 was higher than that in PM2.5. CO was strongly positively linked with PM. We find that machine learning can accurately predict daily average wintertime SO2, NO2, PM2.5 and PM10, casting new light on the forecast and early warning of high episodes of air pollutants in the future.
APA, Harvard, Vancouver, ISO, and other styles
26

Kaczmarek, I., A. Iwaniak, A. Świetlicka, M. Piwowarczyk, and F. Harvey. "SPATIAL PLANNING TEXT INFORMATION PROCESSING WITH USE OF MACHINE LEARNING METHODS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences VI-4/W2-2020 (September 15, 2020): 95–102. http://dx.doi.org/10.5194/isprs-annals-vi-4-w2-2020-95-2020.

Full text
Abstract:
Abstract. Spatial development plans provide important information on future land development capabilities. Unfortunately, at the moment, access to planning information in Poland is limited. Despite many initiatives taken to standardize planning documents, a standard for recording plans has not yet been developed. Each of the planning areas has a symbol and a category of land use, which differ in each of the plans. For this reason, it is very difficult to carry out an analysis that aggregates all areas with the same development function. The authors conduct experiments aimed at using machine learning methods for processing the text part of plans and classifying them. The main aim was to find the best method for grouping texts of zones with the same land use. The experiment consists of an attempt to automatically classify the texts of findings for individual areas into 10 defined categories of land use. This makes it possible to predict the future land-use function for a specific zone text regulation and to aggregate all zones with a specific land-use type. In the proposed solution to the classification problem of heterogeneous planning information, the authors used the k-means algorithm and artificial neural networks. The main challenge for this solution, however, was not the design of the classification tool but rather the preprocessing of the text. In this paper, an approach to text preprocessing as well as selected methods of text classification are presented. The results of the work indicate the greater usability of the CNN for solving the presented problem: k-means clustering produces clusters in which texts are not grouped according to land-use function, which is not useful in the context of zone aggregation.
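As a rough editorial illustration of the clustering route described above, the sketch below groups a few invented zone-regulation texts using TF-IDF features and k-means; the example texts, cluster count, and parameters are hypothetical, not the authors' data or pipeline.

```python
# Illustrative sketch (editorial): TF-IDF features plus k-means clustering of
# zone regulation texts; the texts below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

zone_texts = [
    "single-family housing with accompanying green areas",
    "industrial production, storage and warehouses",
    "arable land and agricultural production",
    "multi-family residential development",
]

X = TfidfVectorizer(lowercase=True).fit_transform(zone_texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # cluster id per zone text; ideally groups similar land uses
```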
APA, Harvard, Vancouver, ISO, and other styles
27

Dodig-Crnkovic, Gordana. "Cognition as Morphological/Morphogenetic Embodied Computation In Vivo." Entropy 24, no. 11 (October 31, 2022): 1576. http://dx.doi.org/10.3390/e24111576.

Full text
Abstract:
Cognition, historically considered a uniquely human capacity, has recently been found to be an ability of all living organisms, from single cells up. This study approaches cognition from an info-computational stance, in which structures in nature are seen as information, and processes (information dynamics) are seen as computation, from the perspective of a cognizing agent. Cognition is understood as a network of concurrent morphological/morphogenetic computations unfolding as a result of self-assembly, self-organization, and autopoiesis of physical, chemical, and biological agents. The present-day human-centric view of cognition still prevailing in major encyclopedias has a variety of open problems. This article considers recent research about morphological computation, morphogenesis, agency, basal cognition, extended evolutionary synthesis, the free energy principle, cognition as Bayesian learning, active inference, and related topics, offering new theoretical and practical perspectives on problems inherent to the old computationalist cognitive models, which were based on abstract symbol processing and unaware of the actual physical constraints and affordances of the embodiment of cognizing agents. A better understanding of cognition is centrally important for future artificial intelligence, robotics, medicine, and related fields.
APA, Harvard, Vancouver, ISO, and other styles
28

Prescott, Tony J., Daniel Camilleri, Uriel Martinez-Hernandez, Andreas Damianou, and Neil D. Lawrence. "Memory and mental time travel in humans and social robots." Philosophical Transactions of the Royal Society B: Biological Sciences 374, no. 1771 (March 11, 2019): 20180025. http://dx.doi.org/10.1098/rstb.2018.0025.

Full text
Abstract:
From neuroscience, brain imaging and the psychology of memory, we are beginning to assemble an integrated theory of the brain subsystems and pathways that allow the compression, storage and reconstruction of memories for past events and their use in contextualizing the present and reasoning about the future—mental time travel (MTT). Using computational models, embedded in humanoid robots, we are seeking to test the sufficiency of this theoretical account and to evaluate the usefulness of brain-inspired memory systems for social robots. In this contribution, we describe the use of machine learning techniques—Gaussian process latent variable models—to build a multimodal memory system for the iCub humanoid robot and summarize results of the deployment of this system for human–robot interaction. We also outline the further steps required to create a more complete robotic implementation of human-like autobiographical memory and MTT. We propose that generative memory models, such as those that form the core of our robot memory system, can provide a solution to the symbol grounding problem in embodied artificial intelligence. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
APA, Harvard, Vancouver, ISO, and other styles
29

Mary Joseph, Nisha, and C. Puttamadappa. "Estimation of channel distortion in orthogonal frequency division multiplexing system using pilot technique." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 1 (October 1, 2022): 106. http://dx.doi.org/10.11591/ijeecs.v28.i1.pp106-114.

Full text
Abstract:
Orthogonal frequency-division multiplexing (OFDM) is resistant to frequency-selective fading due to its longer symbol duration. However, in mobile applications, channel timing fluctuations within one OFDM signal cause intercarrier interference (ICI), which reduces performance. This research presents a support vector regression (SVR) based channel estimation technique for coherent optical communication systems. For the coherent optical OFDM (CO-OFDM) system, a channel model is developed that includes linear fibre dispersion effects, noise from optical amplifiers, and inter-carrier interference generated by laser phase noise. As a result, an accurate channel estimate is essential for such a system. Based on this concept, channel estimation and phase estimation are derived for the CO-OFDM system. The proposed method is tested and evaluated using MATLAB software. Computer simulation results for several standard methods, such as extreme learning machines (ELM) and artificial neural networks (ANN), validate the feasibility of the suggested methodology. The CO-OFDM system's transmission experiments and computer simulations show that the support vector machine-based model following pilot-assisted phase estimation gives the optimal performance. The results indicate that channel estimation using the SVR model performs better than the other methods, and thus the proposed model yields an accurate channel estimation process.
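To make the pilot-assisted idea above concrete, here is an editorial sketch that fits an SVR on noisy channel observations at pilot subcarriers and interpolates over the remaining subcarriers; the toy channel response, pilot spacing, and hyperparameters are assumptions, not the paper's system model.

```python
# Illustrative sketch (editorial): fitting SVR on pilot subcarriers and
# interpolating the channel response over data subcarriers; values are synthetic.
import numpy as np
from sklearn.svm import SVR

n_subcarriers = 256
pilot_idx = np.arange(0, n_subcarriers, 8)          # every 8th subcarrier is a pilot
true_channel = np.cos(2 * np.pi * np.arange(n_subcarriers) / 64)  # toy response

# Observed (noisy) channel estimates at the pilot positions only.
pilot_obs = true_channel[pilot_idx] + 0.05 * np.random.randn(len(pilot_idx))

svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
svr.fit(pilot_idx.reshape(-1, 1), pilot_obs)

# Predict (interpolate) the channel response at every subcarrier.
h_est = svr.predict(np.arange(n_subcarriers).reshape(-1, 1))
print(np.mean((h_est - true_channel) ** 2))          # rough estimation error
```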
APA, Harvard, Vancouver, ISO, and other styles
30

Ding, Zhe, Lihui Luo, Shaohui Guo, Qing Shen, Yueying Zheng, and Shengmei Zhu. "Non-Linear Association between Folate/Vitamin B12 Status and Cognitive Function in Older Adults." Nutrients 14, no. 12 (June 13, 2022): 2443. http://dx.doi.org/10.3390/nu14122443.

Full text
Abstract:
Although folate and vitamin B12 status have long been implicated in cognitive function, there is no consensus on the threshold of folate and vitamin B12 for assessing their impacts on cognition. The goal of this study was to detail the association of folate and vitamin B12 status with cognitive performance. We analyzed cross-sectional data of older adults (≥60 y; n = 2204) from the NHANES (National Health and Nutrition Examination Surveys) cohort from 2011–2014. The restricted cubic spline model was used to describe the associations between serum total folate, RBC folate, 5-methyltetrahydrofolate, and vitamin B12 and the Consortium to Establish a Registry for Alzheimer's Disease Word Learning (CERAD-WL) and Delayed Recall (CERAD-DR) tests, the Animal Fluency (AF) test, and the Digit Symbol Substitution Test (DSST), respectively. Older adults with different folate and vitamin B12 status were clustered by artificial intelligence unsupervised learning. Statistically significant non-linear relationships between the markers of folate or vitamin B12 status and cognitive function were found after adjustment for potential confounders. Inverse U-shaped associations between folate/vitamin B12 status and cognitive function were observed, and the estimated breakpoints are described. No statistically significant interaction between vitamin B12 and folate status on cognitive function was observed in the current models. In addition, based on the biochemical examination of these four markers, older adults could be assigned to three clusters representing relatively low, medium, and high folate/vitamin B12 status, with significantly different scores on the CERAD-DR and DSST. Low or high folate and vitamin B12 status affected selective domains of cognition and was associated with suboptimal cognitive test outcomes.
APA, Harvard, Vancouver, ISO, and other styles
31

Futia, Giuseppe, and Antonio Vetrò. "On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research." Information 11, no. 2 (February 22, 2020): 122. http://dx.doi.org/10.3390/info11020122.

Full text
Abstract:
Deep learning models have contributed to reaching unprecedented results in prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights on how a specific result was achieved. In contexts where the impact of AI on human life is relevant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property; it is, or in some cases will soon be, a legal requirement. Most of the available approaches to implement eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the recursive mathematical functions in deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are elements of a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while being less flexible and robust to noise compared to deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches existing in the literature, underlining their strengths and limitations, and we propose neural-symbolic integration as a cornerstone for designing AI that is closer to non-insiders' comprehension. Within such a general direction, we identify three specific challenges for future research: knowledge matching, cross-disciplinary explanations, and interactive explanations.
APA, Harvard, Vancouver, ISO, and other styles
32

Sarkar, Somwrita, Andy Dong, and John S. Gero. "Learning symbolic formulations in design: Syntax, semantics, and knowledge reification." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 24, no. 1 (January 29, 2010): 63–85. http://dx.doi.org/10.1017/s0890060409990175.

Full text
Abstract:
An artificial intelligence (AI) algorithm to automate symbolic design reformulation is an enduring challenge in design automation. Existing research shows that design tools either require high levels of knowledge engineering or large databases of training cases. To address these limitations, we present a singular value decomposition (SVD) and unsupervised clustering-based method that performs design reformulation by acquiring semantic knowledge from the syntax of design representations. The development of the method was analogically inspired by applications of SVD in statistical natural language processing and digital image processing. We demonstrate our method on an analytically formulated hydraulic cylinder design problem and an aeroengine design problem formulated using a nonanalytic design structure matrix form. Our results show that the method automates various design reformulation tasks on problems of varying sizes from different design domains, stated in analytic and nonanalytic representational forms. The behavior of the method presents observations that cannot be explained by pure symbolic AI approaches, including uncovering patterns of implicit knowledge that are not readily encoded as logical rules, and automating tasks that require the associative transformation of sets of inputs to experiences. As an explanation, we relate the structure and performance of our algorithm with findings in cognitive neuroscience, and present a set of theoretical postulates addressing an alternate perspective on how symbols may interact with each other in experiences to reify semantic knowledge in design representations.
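As a loose editorial sketch of the SVD-plus-clustering idea described above, the snippet below factorizes a synthetic symbol-by-equation occurrence matrix and clusters the low-rank coordinates; the matrix, rank, and cluster count are made up for illustration.

```python
# Illustrative sketch (editorial): truncated SVD of a symbol-by-equation
# occurrence matrix followed by clustering of the low-rank coordinates.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows: design variables/symbols; columns: equations they appear in (0/1).
occurrence = (rng.random((12, 8)) > 0.6).astype(float)

U, s, Vt = np.linalg.svd(occurrence, full_matrices=False)
k = 3
embedding = U[:, :k] * s[:k]          # low-rank "semantic" coordinates per symbol

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print(groups)                          # candidate groupings of related symbols
```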
APA, Harvard, Vancouver, ISO, and other styles
33

Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning." Journal of Artificial Intelligence Research 61 (January 31, 2018): 215–89. http://dx.doi.org/10.1613/jair.5575.

Full text
Abstract:
We consider the problem of constructing abstract representations for planning in high-dimensional, continuous environments. We assume an agent equipped with a collection of high-level actions, and construct representations provably capable of evaluating plans composed of sequences of those actions. We first consider the deterministic planning case, and show that the relevant computation involves set operations performed over sets of states. We define the specific collection of sets that is necessary and sufficient for planning, and use them to construct a grounded abstract symbolic representation that is provably suitable for deterministic planning. The resulting representation can be expressed in PDDL, a canonical high-level planning domain language; we construct such a representation for the Playroom domain and solve it in milliseconds using an off-the-shelf planner. We then consider probabilistic planning, which we show requires generalizing from sets of states to distributions over states. We identify the specific distributions required for planning, and use them to construct a grounded abstract symbolic representation that correctly estimates the expected reward and probability of success of any plan. In addition, we show that learning the relevant probability distributions corresponds to specific instances of probabilistic density estimation and probabilistic classification. We construct an agent that autonomously learns the correct abstract representation of a computer game domain, and rapidly solves it. Finally, we apply these techniques to create a physical robot system that autonomously learns its own symbolic representation of a mobile manipulation task directly from sensorimotor data---point clouds, map locations, and joint angles---and then plans using that representation. Together, these results establish a principled link between high-level actions and abstract representations, a concrete theoretical foundation for constructing abstract representations with provable properties, and a practical mechanism for autonomously learning abstract high-level representations.
APA, Harvard, Vancouver, ISO, and other styles
34

Elyan, Eyad, Laura Jamieson, and Adamu Ali-Gombe. "Deep learning for symbols detection and classification in engineering drawings." Neural Networks 129 (September 2020): 91–102. http://dx.doi.org/10.1016/j.neunet.2020.05.025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

van Bekkum, Michael, Maaike de Boer, Frank van Harmelen, André Meyer-Vitali, and Annette ten Teije. "Modular design patterns for hybrid learning and reasoning systems." Applied Intelligence 51, no. 9 (June 18, 2021): 6528–46. http://dx.doi.org/10.1007/s10489-021-02394-3.

Full text
Abstract:
The unification of statistical (data-driven) and symbolic (knowledge-driven) methods is widely recognized as one of the key challenges of modern AI. Recent years have seen a large number of publications on such hybrid neuro-symbolic AI systems. That rapidly growing literature is highly diverse, mostly empirical, and is lacking a unifying view of the large variety of these hybrid systems. In this paper, we analyze a large body of recent literature and we propose a set of modular design patterns for such hybrid, neuro-symbolic systems. We are able to describe the architecture of a very large number of hybrid systems by composing only a small set of elementary patterns as building blocks. The main contributions of this paper are: 1) a taxonomically organised vocabulary to describe both processes and data structures used in hybrid systems; 2) a set of 15+ design patterns for hybrid AI systems organized in a set of elementary patterns and a set of compositional patterns; 3) an application of these design patterns in two realistic use-cases for hybrid AI systems. Our patterns reveal similarities between systems that were not recognized until now. Finally, our design patterns extend and refine Kautz's earlier attempt at categorizing neuro-symbolic architectures.
APA, Harvard, Vancouver, ISO, and other styles
36

Dushkin, R. V. "Semantic Supervised Training for General Artificial Cognitive Agents." Siberian Journal of Philosophy 19, no. 2 (October 21, 2021): 51–64. http://dx.doi.org/10.25205/2541-7517-2021-19-2-51-64.

Full text
Abstract:
The article describes the author's approach to the construction of general-level artificial cognitive agents based on the so-called "semantic supervised learning", within which, in accordance with the hybrid paradigm of artificial intelligence, both machine learning methods and methods of the symbolic approach and knowledge-based systems are used ("good old-fashioned artificial intelligence"). A description of current problems with understanding of the general meaning and context of situations in which narrow AI agents are found is presented. The definition of semantic supervised learning is given and its relationship with other machine learning methods is described. In addition, a thought experiment is presented, which shows the essence and meaning of supervised semantic learning.
APA, Harvard, Vancouver, ISO, and other styles
37

Olague, Gustavo, Jose Armando Menendez-Clavijo, Matthieu Olague, Arturo Ocampo, Gerardo Ibarra-Vazquez, Rocio Ochoa, and Roberto Pineda. "Automated Design of Salient Object Detection Algorithms with Brain Programming." Applied Sciences 12, no. 20 (October 21, 2022): 10686. http://dx.doi.org/10.3390/app122010686.

Full text
Abstract:
Despite recent improvements in computer vision, artificial visual systems’ design is still daunting since an explanation of visual computing algorithms remains elusive. Salient object detection is one problem that is still open due to the difficulty of understanding the brain’s inner workings. Progress in this research area follows the traditional path of hand-made designs using neuroscience knowledge or, more recently, deep learning, a particular branch of machine learning. Recently, a different approach based on genetic programming appeared to enhance handcrafted techniques following two different strategies. The first method follows the idea of combining previous hand-made methods through genetic programming and fuzzy logic. The second approach improves the inner computational structures of basic hand-made models through artificial evolution. This research proposes expanding the artificial dorsal stream using a recent proposal based on symbolic learning to solve salient object detection problems following the second technique. This approach applies the fusion of visual saliency and image segmentation algorithms as a template. The proposed methodology discovers several critical structures in the template through artificial evolution. We present results on a benchmark designed by experts with outstanding results in an extensive comparison with the state of the art, including classical methods and deep learning approaches to highlight the importance of symbolic learning in visual saliency.
APA, Harvard, Vancouver, ISO, and other styles
38

Bickel, Sebastian, Benjamin Schleich, and Sandro Wartzack. "DETECTION AND CLASSIFICATION OF SYMBOLS IN PRINCIPLE SKETCHES USING DEEP LEARNING." Proceedings of the Design Society 1 (July 27, 2021): 1183–92. http://dx.doi.org/10.1017/pds.2021.118.

Full text
Abstract:
Data-driven methods from the field of Artificial Intelligence or Machine Learning are increasingly applied in mechanical engineering. This refers to the development of digital engineering in recent years, which aims to bring these methods into practice in order to realize cost and time savings. However, a necessary step towards the implementation of such methods is the utilization of existing data. This problem is essential because the mere availability of data does not automatically imply data usability. Therefore, this paper presents a method to automatically recognize symbols from principle sketches, which allows the generation of training data for machine learning algorithms. In this approach, the symbols are created randomly and their illustration varies with each generation. A deep learning network from the field of computer vision is used to test the generated data set and thus to recognize symbols on principle sketches. This type of drawing is especially interesting because the cost-saving potential is very high due to the application in the early phases of the product development process.
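A minimal sketch of the synthetic-data idea described above: paste a randomly varied symbol template into a blank canvas and record its bounding box as a detection label. Sizes, names, and the use of a noise patch in place of a drawn symbol are assumptions for illustration, not the paper's generation procedure.

# Sketch (assumed shapes/names): paste a small template into a blank canvas
# at a random position, recording the bounding box as a detection label.
# Repeating this yields synthetic training data for a symbol detector.
import numpy as np

rng = np.random.default_rng(1)

def make_sample(canvas_size=128, template_size=16):
    canvas = np.zeros((canvas_size, canvas_size), dtype=np.uint8)
    # Random noise patch standing in for a drawn symbol glyph.
    template = (rng.random((template_size, template_size)) > 0.5).astype(np.uint8) * 255
    x = rng.integers(0, canvas_size - template_size)
    y = rng.integers(0, canvas_size - template_size)
    canvas[y:y + template_size, x:x + template_size] = template
    bbox = (x, y, template_size, template_size)   # label for the detector
    return canvas, bbox

image, label = make_sample()
print(label)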
APA, Harvard, Vancouver, ISO, and other styles
39

Garcez, Artur S. d'Avila, and Luís C. Lamb. "A Connectionist Computational Model for Epistemic and Temporal Reasoning." Neural Computation 18, no. 7 (July 2006): 1711–38. http://dx.doi.org/10.1162/neco.2006.18.7.1711.

Full text
Abstract:
The importance of the efforts to bridge the gap between the connectionist and symbolic paradigms of artificial intelligence has been widely recognized. The merging of theory (background knowledge) and data learning (learning from examples) into neural-symbolic systems has indicated that such a learning system is more effective than purely symbolic or purely connectionist systems. Until recently, however, neural-symbolic systems were not able to fully represent, reason, and learn expressive languages other than classical propositional and fragments of first-order logic. In this article, we show that nonclassical logics, in particular propositional temporal logic and combinations of temporal and epistemic (modal) reasoning, can be effectively computed by artificial neural networks. We present the language of a connectionist temporal logic of knowledge (CTLK). We then present a temporal algorithm that translates CTLK theories into ensembles of neural networks and prove that the translation is correct. Finally, we apply CTLK to the muddy children puzzle, which has been widely used as a testbed for distributed knowledge representation. We provide a complete solution to the puzzle with the use of simple neural networks, capable of reasoning about knowledge evolution in time and of knowledge acquisition through learning.
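To give a concrete flavor of rule-to-network translation in general (a toy sketch only, not the CTLK algorithm of the article), a propositional rule such as A AND B -> C can be encoded as a single thresholded neuron:

# Toy sketch (not the paper's CTLK translation): encode the rule A AND B -> C
# as one threshold neuron. With unit weights and threshold 1.5, the neuron
# fires exactly when both antecedents are true.
def rule_neuron(a: bool, b: bool, weights=(1.0, 1.0), threshold=1.5) -> bool:
    activation = weights[0] * a + weights[1] * b
    return activation > threshold

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", rule_neuron(a, b))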
APA, Harvard, Vancouver, ISO, and other styles
40

Ai, Lun, Stephen H. Muggleton, Céline Hocquette, Mark Gromowski, and Ute Schmid. "Beneficial and harmful explanatory machine learning." Machine Learning 110, no. 4 (March 11, 2021): 695–721. http://dx.doi.org/10.1007/s10994-020-05941-0.

Full text
Abstract:
Given the recent successes of Deep Learning in AI there has been increased interest in the role and need for explanations in machine learned theories. A distinct notion in this context is that of Michie’s definition of ultra-strong machine learning (USML). USML is demonstrated by a measurable increase in human performance of a task following provision to the human of a symbolic machine learned theory for task performance. A recent paper demonstrates the beneficial effect of a machine learned logic theory for a classification task, yet no existing work to our knowledge has examined the potential harmfulness of machine’s involvement for human comprehension during learning. This paper investigates the explanatory effects of a machine learned theory in the context of simple two person games and proposes a framework for identifying the harmfulness of machine explanations based on the Cognitive Science literature. The approach involves a cognitive window consisting of two quantifiable bounds and it is supported by empirical evidence collected from human trials. Our quantitative and qualitative results indicate that human learning aided by a symbolic machine learned theory which satisfies a cognitive window has achieved significantly higher performance than human self-learning. Results also demonstrate that human learning aided by a symbolic machine learned theory that fails to satisfy this window leads to significantly worse performance than unaided human learning.
APA, Harvard, Vancouver, ISO, and other styles
41

Marra, Giuseppe. "Bridging symbolic and subsymbolic reasoning with minimax entropy models." Intelligenza Artificiale 15, no. 2 (February 4, 2022): 71–90. http://dx.doi.org/10.3233/ia-210088.

Full text
Abstract:
In this paper, we investigate MiniMax Entropy models, a class of neural symbolic models where symbolic and subsymbolic features are seamlessly integrated. We show how these models recover classical algorithms from both the deep learning and statistical relational learning scenarios. Novel hybrid settings are defined and experimentally explored, showing state-of-the-art performance in collective classification, knowledge base completion and graph (molecular) data generation.
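For orientation, the classical maximum-entropy form that such models build on can be stated as follows; this is a generic textbook formulation, not the paper's specific minimax objective:

% Generic maximum-entropy model with feature constraints (textbook form):
% the distribution maximizing entropy subject to matching feature expectations
% is an exponential family over those features.
\max_{p} \; H(p) = -\sum_{x} p(x)\log p(x)
\quad \text{s.t.} \quad \mathbb{E}_{p}[f_i(x)] = \hat{\mu}_i, \;\; \sum_{x} p(x) = 1
\;\;\Longrightarrow\;\;
p(x) = \frac{1}{Z(\mathbf{w})} \exp\Big(\sum_i w_i f_i(x)\Big)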
APA, Harvard, Vancouver, ISO, and other styles
42

Zehner, Brett. "A Liar’s Epistemology." Qui Parle 30, no. 1 (June 1, 2021): 119–57. http://dx.doi.org/10.1215/10418385-8955815.

Full text
Abstract:
This methodologically important essay aims to trace a genealogical account of Herbert Simon’s media philosophy and to contest the histories of artificial intelligence that overlook the organizational capacities of computational models. As Simon’s work demonstrates, humans’ subjection to large-scale organizations and divisions of labor is at the heart of artificial intelligence. As such, questions of procedures are key to understanding the power assumed by institutions wielding artificial intelligence. Most media-historical accounts of the development of contemporary artificial intelligence stem from the work of Warren S. McCulloch and Walter Pitts, especially the 1943 essay “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Yet Simon’s revenge is perhaps that reinforcement learning systems adopt his prescriptive approach to algorithmic procedures. Computer scientists criticized Simon for the performative nature of his artificially intelligent systems, mainly for his positivism, but he defended his positivism based on his belief that symbolic computation could stand in for any reality and in fact shape that reality. Simon was not looking to actually re-create human intelligence; he was using coercion, bad faith, and fraud as tactical weapons in the reordering of human decision-making. Artificial intelligence was the perfect medium for his explorations.
APA, Harvard, Vancouver, ISO, and other styles
43

Apprich, Clemens. "Secret Agents." Digital Culture & Society 4, no. 1 (March 1, 2018): 29–44. http://dx.doi.org/10.14361/dcs-2018-0104.

Full text
Abstract:
“Good Old-Fashioned Artificial Intelligence” (GOFAI), which was based on a symbolic information-processing model of the mind, has been superseded by neural-network models to describe and create intelligence. Rather than a symbolic representation of the world, the idea is to mimic the structure of the brain in electronic form, whereby artificial neurons draw their own connections during a self-learning process. Critiquing such a brain physiological model, the following article takes up the idea of a “psychoanalysis of things” and applies it to artificial intelligence and machine learning. This approach may help to reveal some of the hidden layers within the current A. I. debate and hints towards a central mechanism in the psycho-economy of our socio-technological world: The question of “Who speaks?”, central for the analysis of paranoia, becomes paramount at a time when algorithms, in the form of artificial neural networks, operate more and more as secret agents.
APA, Harvard, Vancouver, ISO, and other styles
44

MILARÉ, CLAUDIA R., ANDRÉ C. P. DE L. F. DE CARVALHO, and MARIA C. MONARD. "AN APPROACH TO EXPLAIN NEURAL NETWORKS USING SYMBOLIC ALGORITHMS." International Journal of Computational Intelligence and Applications 02, no. 04 (December 2002): 365–76. http://dx.doi.org/10.1142/s1469026802000695.

Full text
Abstract:
Although Artificial Neural Networks have been satisfactorily employed in several problems, such as clustering, pattern recognition, dynamic systems control and prediction, they still suffer from significant limitations. One of them is that the induced concept representation is not usually comprehensible to humans. Several techniques have been suggested to extract meaningful knowledge from trained networks. This paper proposes the use of symbolic learning algorithms, commonly used by the Machine Learning community, such as C4.5, C4.5rules and CN2, to extract symbolic representations from trained networks. The approach proposed is similar to that used by the Trepan algorithm, which extracts symbolic representations, expressed as decision trees, from trained networks. Experimental results are presented and discussed in order to compare the knowledge extracted from Artificial Neural Networks using the proposed approach and the Trepan approach. Results are compared regarding two aspects: fidelity and comprehensibility.
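The underlying pedagogical extraction idea can be sketched in a few lines: train a tree on the network's own predictions and measure fidelity. This is a hedged illustration using scikit-learn stand-ins, not the C4.5/C4.5rules/CN2 setup of the paper.

# Sketch of pedagogical rule extraction: fit a decision tree to the network's
# predictions rather than the true labels, then measure fidelity (agreement
# between tree and network). Dataset and model sizes are arbitrary.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

net_labels = net.predict(X)                       # the network as "oracle"
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, net_labels)

fidelity = accuracy_score(net_labels, tree.predict(X))
print(f"fidelity of extracted tree to the network: {fidelity:.2f}")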
APA, Harvard, Vancouver, ISO, and other styles
45

CRAVEN, MARK W., and JUDE W. SHAVLIK. "VISUALIZING LEARNING AND COMPUTATION IN ARTIFICIAL NEURAL NETWORKS." International Journal on Artificial Intelligence Tools 01, no. 03 (September 1992): 399–425. http://dx.doi.org/10.1142/s0218213092000260.

Full text
Abstract:
Scientific visualization is the process of using graphical images to form succinct and lucid representations of numerical data. Visualization has proven to be a useful method for understanding both learning and computation in artificial neural networks. While providing a powerful and general technique for inductive learning, artificial neural networks are difficult to comprehend because they form representations that are encoded by a large number of real-valued parameters. By viewing these parameters pictorially, a better understanding can be gained of how a network maps inputs into outputs. In this article, we survey a number of visualization techniques for understanding the learning and decision-making processes of neural networks. We also describe our work in knowledge-based neural networks and the visualization techniques we have used to understand these networks. In a knowledge-based neural network, the topology and initial weight values of the network are determined by an approximately-correct set of inference rules. Knowledge-based networks are easier to interpret than conventional networks because of the synergy between visualization methods and the relation of the networks to symbolic rules.
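A minimal, hedged sketch of one such visualization, a simple weight-matrix heatmap; the weights are random stand-ins and the plot is not one of the article's specific diagrams.

# Sketch: visualize a layer's weight matrix as a heatmap, a basic form of the
# weight-visualization idea discussed above. The matrix here is random.
import numpy as np
import matplotlib.pyplot as plt

weights = np.random.default_rng(0).normal(size=(10, 6))  # stand-in layer weights

plt.imshow(weights, cmap="coolwarm", aspect="auto")
plt.colorbar(label="weight value")
plt.xlabel("output unit")
plt.ylabel("input unit")
plt.title("Layer weights (illustrative)")
plt.show()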
APA, Harvard, Vancouver, ISO, and other styles
46

Pocius, Rey, Lawrence Neal, and Alan Fern. "Strategic Tasks for Explainable Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10007–8. http://dx.doi.org/10.1609/aaai.v33i01.330110007.

Full text
Abstract:
Commonly used sequential decision making tasks such as the games in the Arcade Learning Environment (ALE) provide rich observation spaces suitable for deep reinforcement learning. However, they consist mostly of low-level control tasks which are of limited use for the development of explainable artificial intelligence (XAI) due to the fine temporal resolution of the tasks. Many of these domains also lack built-in high level abstractions and symbols. Existing tasks that provide for both strategic decision-making and rich observation spaces are either difficult to simulate or are intractable. We provide a set of new strategic decision-making tasks specialized for the development and evaluation of explainable AI methods, built as constrained mini-games within the StarCraft II Learning Environment.
APA, Harvard, Vancouver, ISO, and other styles
47

SIROMONEY, RANI, K. G. SUBRAMANIAN, and LISA MATHEW. "LEARNING OF PATTERN AND PICTURE LANGUAGES." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 02n03 (August 1992): 275–84. http://dx.doi.org/10.1142/s0218001492000163.

Full text
Abstract:
A method of learning pattern languages in time polynomial in the length of the pattern is introduced. The learning of certain picture languages can then be done by considering them as an interpretation of pattern languages. The learning of Tabled Regular k-Matrix languages describing arrays of symbols is also examined.
APA, Harvard, Vancouver, ISO, and other styles
48

Kim, Seonho, Jungjoon Kim, and Hong-Woo Chun. "Wave2Vec: Vectorizing Electroencephalography Bio-Signal for Prediction of Brain Disease." International Journal of Environmental Research and Public Health 15, no. 8 (August 15, 2018): 1750. http://dx.doi.org/10.3390/ijerph15081750.

Full text
Abstract:
Interest in research involving health-medical information analysis based on artificial intelligence, especially for deep learning techniques, has recently been increasing. Most of the research in this field has been focused on searching for new knowledge for predicting and diagnosing disease by revealing the relation between disease and various information features of data. These features are extracted by analyzing various clinical pathology data, such as EHR (electronic health records), and academic literature using the techniques of data analysis, natural language processing, etc. However, still needed are more research and interest in applying the latest advanced artificial intelligence-based data analysis technique to bio-signal data, which are continuous physiological records, such as EEG (electroencephalography) and ECG (electrocardiogram). Unlike the other types of data, applying deep learning to bio-signal data, which is in the form of time series of real numbers, has many issues that need to be resolved in preprocessing, learning, and analysis. Such issues include leaving feature selection, learning parts that are black boxes, difficulties in recognizing and identifying effective features, high computational complexities, etc. In this paper, to solve these issues, we provide an encoding-based Wave2vec time series classifier model, which combines signal-processing and deep learning-based natural language processing techniques. To demonstrate its advantages, we provide the results of three experiments conducted with EEG data of the University of California Irvine, which are a real-world benchmark bio-signal dataset. After converting the bio-signals (in the form of waves), which are a real number time series, into a sequence of symbols or a sequence of wavelet patterns that are converted into symbols, through encoding, the proposed model vectorizes the symbols by learning the sequence using deep learning-based natural language processing. The models of each class can be constructed through learning from the vectorized wavelet patterns and training data. The implemented models can be used for prediction and diagnosis of diseases by classifying the new data. The proposed method enhanced data readability and intuition of feature selection and learning processes by converting the time series of real number data into sequences of symbols. In addition, it facilitates intuitive and easy recognition, and identification of influential patterns. Furthermore, real-time large-capacity data analysis is facilitated, which is essential in the development of real-time analysis diagnosis systems, by drastically reducing the complexity of calculation without deterioration of analysis performance by data simplification through the encoding process.
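A hedged sketch of the encoding step described above: a real-valued signal is discretized into a symbol sequence so that sequence-learning methods can be applied. The quantile binning used here is a simple illustrative stand-in, not the paper's wavelet-based Wave2vec encoding.

# Sketch: turn a real-valued signal into a sequence of discrete symbols by
# quantile binning, so that sequence models (e.g. word-embedding style
# learners) can be applied downstream. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.1 * rng.normal(size=400)

n_symbols = 8
edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
symbol_ids = np.digitize(signal, edges)                 # values 0..n_symbols-1
alphabet = [chr(ord("a") + i) for i in range(n_symbols)]
symbol_sequence = "".join(alphabet[i] for i in symbol_ids)

print(symbol_sequence[:60])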
APA, Harvard, Vancouver, ISO, and other styles
49

Wermter, S., and V. Weber. "SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks." Journal of Artificial Intelligence Research 6 (January 1, 1997): 35–85. http://dx.doi.org/10.1613/jair.282.

Full text
Abstract:
Previous approaches of analyzing spontaneously spoken language often have been based on encoding syntactic and semantic knowledge manually and symbolically. While there has been some progress using statistical or connectionist language models, many current spoken-language systems still use a relatively brittle, hand-coded symbolic grammar or symbolic semantic component. In contrast, we describe a so-called screening approach for learning robust processing of spontaneously spoken language. A screening approach is a flat analysis which uses shallow sequences of category representations for analyzing an utterance at various syntactic, semantic and dialog levels. Rather than using a deeply structured symbolic analysis, we use a flat connectionist analysis. This screening approach aims at supporting speech and language processing by using (1) data-driven learning and (2) robustness of connectionist networks. In order to test this approach, we have developed the SCREEN system which is based on this new robust, learned and flat analysis. In this paper, we focus on a detailed description of SCREEN's architecture, the flat syntactic and semantic analysis, the interaction with a speech recognizer, and a detailed evaluation analysis of the robustness under the influence of noisy or incomplete input. The main result of this paper is that flat representations allow more robust processing of spontaneous spoken language than deeply structured representations. In particular, we show how the fault-tolerance and learning capability of connectionist networks can support a flat analysis for providing more robust spoken-language processing within an overall hybrid symbolic/connectionist framework.
APA, Harvard, Vancouver, ISO, and other styles
50

d'AVILA GARCEZ, ARTUR S., LUÍS C. LAMB, KRYSIA BRODA, and DOV M. GABBAY. "APPLYING CONNECTIONIST MODAL LOGICS TO DISTRIBUTED KNOWLEDGE REPRESENTATION PROBLEMS." International Journal on Artificial Intelligence Tools 13, no. 01 (March 2004): 115–39. http://dx.doi.org/10.1142/s0218213004001442.

Full text
Abstract:
Neural-Symbolic Systems concern the integration of the symbolic and connectionist paradigms of Artificial Intelligence. Distributed knowledge representation is traditionally seen under a symbolic perspective. In this paper, we show how neural networks can represent distributed symbolic knowledge, acting as multi-agent systems with learning capability (a key feature of neural networks). We apply the framework of Connectionist Modal Logics to well-known testbeds for distributed knowledge representation formalisms, namely the muddy children and the wise men puzzles. Finally, we sketch a full solution to these problems by extending our approach to deal with knowledge evolution over time.
APA, Harvard, Vancouver, ISO, and other styles
