Journal articles on the topic 'Supervised neural network'

Consult the top 50 journal articles for your research on the topic 'Supervised neural network.'

1

Tian, Jidong, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, and Yaohui Jin. "Weakly Supervised Neural Symbolic Learning for Cognitive Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5888–96. http://dx.doi.org/10.1609/aaai.v36i5.20533.

Abstract:
Despite the recent success of end-to-end deep neural networks, there are growing concerns about their lack of logical reasoning abilities, especially on cognitive tasks with perception and reasoning processes. A solution is the neural symbolic learning (NeSyL) method, which can effectively utilize pre-defined logic rules to constrain the neural architecture, making it perform better on cognitive tasks. However, it is challenging to apply NeSyL to these cognitive tasks because of the lack of supervision, the non-differentiable nature of the symbolic system, and the difficulty of probabilistically constraining the neural network. In this paper, we propose WS-NeSyL, a weakly supervised neural symbolic learning model for cognitive tasks with logical reasoning. First, WS-NeSyL employs a novel back search algorithm to sample the possible reasoning process through logic rules. This sampled process can supervise the neural network as a pseudo label. Based on this algorithm, we can backpropagate gradients to the neural network of WS-NeSyL in a weakly supervised manner. Second, we introduce a probabilistic logic regularization into WS-NeSyL to help the neural network learn probabilistic logic. To evaluate WS-NeSyL, we have conducted experiments on three cognitive datasets, covering temporal reasoning, handwritten formula recognition, and relational reasoning. Experimental results show that WS-NeSyL not only outperforms the end-to-end neural model but also beats state-of-the-art neural symbolic learning models.
2

Ito, Toshio. "Supervised Learning Methods of Bilinear Neural Network Systems Using Discrete Data." International Journal of Machine Learning and Computing 6, no. 5 (October 2016): 235–40. http://dx.doi.org/10.18178/ijmlc.2016.6.5.604.

3

Verma, Vikas, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, and Jian Tang. "GraphMix: Improved Training of GNNs for Semi-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10024–32. http://dx.doi.org/10.1609/aaai.v35i11.17203.

Abstract:
We present GraphMix, a regularization method for Graph Neural Network based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization. Further, we provide a theoretical analysis of how GraphMix improves the generalization bounds of the underlying graph neural network, without making any assumptions about the "aggregation" layer or the depth of the graph neural networks. We experimentally validate this analysis by applying GraphMix to various architectures such as Graph Convolutional Networks, Graph Attention Networks and Graph-U-Net. Despite its simplicity, we demonstrate that GraphMix can consistently improve or closely match state-of-the-art performance using even simpler architectures such as Graph Convolutional Networks, across three established graph benchmarks: Cora, Citeseer and Pubmed citation network datasets, as well as three newly proposed datasets: Cora-Full, Co-author-CS and Co-author-Physics.
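The interpolation-based regularization at the heart of GraphMix is mixup-style blending of features and labels. A minimal sketch of that ingredient, assuming generic node feature vectors and one-hot labels (illustrative only, not the authors' implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    def mixup(x1, y1, x2, y2, alpha=1.0):
        # Blend two (feature, one-hot label) pairs with a Beta-distributed weight.
        lam = rng.beta(alpha, alpha)
        return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

    x1, x2 = rng.normal(size=16), rng.normal(size=16)   # two node feature vectors
    y1, y2 = np.eye(7)[2], np.eye(7)[5]                 # one-hot labels, 7 classes
    x_mix, y_mix = mixup(x1, y1, x2, y2)                # augmented training pair

In GraphMix, pairs like this train the fully connected network whose parameters are shared with the graph neural network.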
4

Hu, Jinghan. "Semi-supervised Blindness Detection with Neural Network Ensemble." Highlights in Science, Engineering and Technology 12 (August 26, 2022): 171–76. http://dx.doi.org/10.54097/hset.v12i.1448.

Abstract:
Diabetic retinopathy (DR), a common complication of diabetes mellitus, is a major cause of visual loss among the working-age population. Since DR vision loss is irreversible, early detection of DR is crucial for preventing vision loss in patients. However, manual detection of DR remains time-consuming and inefficient. In this paper, six pre-trained neural networks (including EfficientNets, ResNet, and Inception) are combined into an ensemble. The compatibility of different networks is tested by creating different combinations of networks and evaluating their relative performance. Pseudo-labelling is used to further increase accuracy. With a limited training data set of only 5592 images, the final neural network ensemble achieved an accuracy of 0.864.
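A minimal sketch of the two ingredients named here, ensemble averaging and pseudo-labelling, with random stand-in probabilities (model count, class count, and the confidence cutoff are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for per-model softmax outputs: 6 networks, 100 images, 5 DR grades.
    probs = rng.dirichlet(np.ones(5), size=(6, 100))

    ensemble = probs.mean(axis=0)            # average the six networks' outputs
    preds = ensemble.argmax(axis=1)          # ensemble prediction per image

    confident = ensemble.max(axis=1) > 0.9   # hypothetical confidence cutoff
    pseudo_idx = np.where(confident)[0]      # unlabelled images worth relabelling
    pseudo_labels = preds[pseudo_idx]        # extra targets for the next round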
5

Hindarto, Djarot, and Handri Santoso. "Performance Comparison of Supervised Learning Using Non-Neural Network and Neural Network." Jurnal Nasional Pendidikan Teknik Informatika (JANAPATI) 11, no. 1 (April 6, 2022): 49. http://dx.doi.org/10.23887/janapati.v11i1.40768.

Abstract:
Currently, the development of mobile phones and mobile applications based on the Android operating system is increasing rapidly. Many new companies and startups are transforming digitally by using mobile apps to provide disruptive digital services that replace old-fashioned ones. This transformation has prompted attackers to create malicious software (malware) using sophisticated methods to target Android phone users. The purpose of this study is to identify Android APK files by classifying them with an Artificial Neural Network (ANN) and Non-Neural Network (NNN) methods: the ANN is a Multi-Layer Perceptron Classifier (MLPC), while the NNN methods are KNN, SVM, and Decision Tree. The study compares the performance of the Non-Neural Network methods with that of the Neural Network. Classification with the Non-Neural Network algorithms suffers from decreasing performance: accuracy often drops when training on a larger dataset. To address this decrease in model performance, the Artificial Neural Network algorithm is used, namely the Multi-Layer Perceptron Classifier (MLPC). Using the Non-Neural Network algorithms, K-Nearest Neighbor achieves 91.2% accuracy when trained on the 600-APK dataset, but its accuracy decreases to 88% on the 14170-APK dataset. The Support Vector Machine achieves 99.1% accuracy on the 600-APK dataset, decreasing to 90.5% on the 14170-APK dataset. The Decision Tree achieves 99.2% accuracy on the 600-APK dataset, decreasing to 90.8% on the 14170-APK dataset. The Multi-Layer Perceptron Classifier, by contrast, improves with more data: it achieves 99% accuracy on the 600-APK dataset and reaches 100% on the 14170-APK dataset.
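The reported comparison is straightforward to mirror in scikit-learn. A sketch with synthetic data standing in for the APK feature vectors (dataset shape and hyper-parameters are assumptions, not the study's setup):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    models = {
        "KNN": KNeighborsClassifier(),
        "SVM": SVC(),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "MLPC": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                              random_state=0),
    }
    for name, model in models.items():
        print(name, model.fit(X_tr, y_tr).score(X_te, y_te))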
6

Liu, Chenghua, Zhuolin Liao, Yixuan Ma, and Kun Zhan. "Stationary Diffusion State Neural Estimation for Multiview Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7542–49. http://dx.doi.org/10.1609/aaai.v36i7.20719.

Abstract:
Although many graph-based clustering methods attempt to model the stationary diffusion state in their objectives, their performance is limited by the use of a predefined graph. We argue that the estimation of the stationary diffusion state can be achieved by gradient descent over neural networks. We specifically design the Stationary Diffusion State Neural Estimation (SDSNE) method to exploit multiview structural graph information for co-supervised learning. We explore how to design a graph neural network specifically for unsupervised multiview learning and integrate multiple graphs into a unified consensus graph with a shared self-attentional module. The view-shared self-attentional module utilizes the graph structure to learn a view-consistent global graph. Meanwhile, instead of using an auto-encoder as most unsupervised-learning graph neural networks do, SDSNE uses a co-supervised strategy with structure information to supervise the model learning. The co-supervised strategy, as the loss function, guides SDSNE in achieving the stationary state. With the help of the loss and the self-attentional module, we learn to obtain a graph in which the nodes in each connected component are fully connected by the same weight. Experiments on several multiview datasets demonstrate the effectiveness of SDSNE in terms of six clustering evaluation metrics.
7

Cho, Myoung Won. "Supervised learning in a spiking neural network." Journal of the Korean Physical Society 79, no. 3 (July 20, 2021): 328–35. http://dx.doi.org/10.1007/s40042-021-00254-4.

8

Wang, Juexin, Anjun Ma, Qin Ma, Dong Xu, and Trupti Joshi. "Inductive inference of gene regulatory network using supervised and semi-supervised graph neural networks." Computational and Structural Biotechnology Journal 18 (2020): 3335–43. http://dx.doi.org/10.1016/j.csbj.2020.10.022.

9

Nobukawa, Sou, Haruhiko Nishimura, and Teruya Yamanishi. "Pattern Classification by Spiking Neural Networks Combining Self-Organized and Reward-Related Spike-Timing-Dependent Plasticity." Journal of Artificial Intelligence and Soft Computing Research 9, no. 4 (October 1, 2019): 283–91. http://dx.doi.org/10.2478/jaiscr-2019-0009.

Abstract:
Many recent studies have applied spiking neural networks with spike-timing-dependent plasticity (STDP) to machine learning problems. The learning abilities of dopamine-modulated STDP (DA-STDP) for reward-related synaptic plasticity have also been gathering attention. Following these studies, we hypothesize that a network structure combining self-organized STDP and reward-related DA-STDP can solve the machine learning problem of pattern classification. Therefore, we studied the ability of a network, in which recurrent spiking neural networks are combined with STDP for unsupervised learning and an output layer is joined by DA-STDP for supervised learning, to perform pattern classification. We confirmed that this network could perform pattern classification using the STDP effect for emphasizing features of the input spike pattern and DA-STDP supervised learning. Therefore, our proposed spiking neural network may prove to be a useful approach for machine learning problems.
10

Zhao, Shijie, Yan Cui, Linwei Huang, Li Xie, Yaowu Chen, Junwei Han, Lei Guo, Shu Zhang, Tianming Liu, and Jinglei Lv. "Supervised Brain Network Learning Based on Deep Recurrent Neural Networks." IEEE Access 8 (2020): 69967–78. http://dx.doi.org/10.1109/access.2020.2984948.

11

Fu, Chentao, Shuiying Xiang, Yanan Han, Ziwei Song, and Yue Hao. "Multilayer Photonic Spiking Neural Networks: Generalized Supervised Learning Algorithm and Network Optimization." Photonics 9, no. 4 (March 25, 2022): 217. http://dx.doi.org/10.3390/photonics9040217.

Abstract:
We propose a generalized supervised learning algorithm for multilayer photonic spiking neural networks (SNNs) by combining the spike-timing dependent plasticity (STDP) rule and the gradient descent mechanism. A vertical-cavity surface-emitting laser with an embedded saturable absorber (VCSEL-SA) is employed as a photonic leaky-integrate-and-fire (LIF) neuron. The temporal coding strategy is employed to transform information into the precise firing time. With the modified supervised learning algorithm, the trained multilayer photonic SNN successfully solves the XOR problem and performs well on the Iris and Wisconsin breast cancer datasets. This indicates that a generalized supervised learning algorithm is realized for multilayer photonic SNN. In addition, network optimization is performed by considering different network sizes.
12

Tan, Junyang, Dan Xia, Shiyun Dong, Honghao Zhu, and Binshi Xu. "Research On Pre-Training Method and Generalization Ability of Big Data Recognition Model of the Internet of Things." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 5 (July 20, 2021): 1–15. http://dx.doi.org/10.1145/3433539.

Abstract:
The Internet of Things and big data are currently hot concepts and research fields. The mining, classification, and recognition of big data in Internet of Things systems are key issues of wide current concern. The artificial neural network is beneficial for multi-dimensional data classification and recognition because of its strong feature extraction and self-learning ability. Pre-training is an effective method to address the gradient diffusion problem in deep neural networks and can result in better generalization. This article focuses on the performance of supervised pre-training that uses labelled data. In particular, this pre-training procedure is a simulation that shows how judgment patterns progress from primary to mature within the human brain. In this article, the state of the art of neural network pre-training is reviewed. Then, the principles of the auto-encoder and supervised pre-training are introduced in detail. Furthermore, an extended structure of supervised pre-training is proposed. A set of experiments is carried out to compare the performance of different pre-training methods. These experiments include a comparison between the original and pre-trained networks as well as a comparison between networks with two types of sub-network structures. In addition, a homemade database is established to analyze the influence of pre-training on the generalization ability of neural networks. Finally, an ordinary convolutional neural network is used to verify the applicability of supervised pre-training.
13

Javidi, Mohammad Masoud. "Network Attacks Detection by Hierarchical Neural Network." Computer Engineering and Applications Journal 4, no. 2 (June 18, 2015): 119–32. http://dx.doi.org/10.18495/comengapp.v4i2.108.

Abstract:
Intrusion detection is an emerging area of research in computer security and networks, given the growing usage of the internet in everyday life. Most intrusion detection systems (IDSs) use a single classifier algorithm to classify network traffic data as normal or anomalous. However, these single-classifier systems fail to provide the best possible attack detection rate with a low false alarm rate. In this paper, we propose a hybrid intelligent approach that uses a combination of classifiers to make the decision intelligently, so that the overall performance of the resultant model is enhanced. The general procedure is to first apply supervised or unsupervised data filtering with a classifier or clustering method on the whole training dataset, and then pass the output to another classifier to classify the data. In this research, we applied neural networks with supervised and unsupervised learning in order to implement the intrusion detection system. Moreover, we used parallelization across the system's processors in a real-time application to detect intrusions, which enhanced the speed of intrusion detection. The NSL-KDD database was used to train and test the neural network. Creating several different intrusion detection systems, each of which is considered a single agent, we proceeded with signature-based intrusion detection of the network. In the proposed design, the attacks are classified into four groups, and each group is detected by an agent equipped with an intrusion detection system (IDS). These agents act independently and report intrusion or non-intrusion in the system; the results achieved by the agents are studied by the final analyst, who reports whether or not there has been an intrusion in the system.
Keywords: Intrusion Detection, Multi-layer Perceptron, False Positives, Signature-based Intrusion Detection, Decision Tree, Naïve Bayes Classifier
14

Floreano, Dario. "An internal teacher for neural computation." Behavioral and Brain Sciences 20, no. 4 (December 1997): 687–88. http://dx.doi.org/10.1017/s0140525x97261601.

Abstract:
Contextual signals might supervise the discovery of coherently varying information between cortical modules computing different functions of their receptive field input. This hypothesis is explored in two sets of computational experiments, one studying the effects on learning of long-range unidirectional contextual signals mediated by intervening processors, and the other showing contextually supervised discovery of a high-order variable in a multilayer network.
15

Sarukkai, Ramesh R. "Supervised Networks That Self-Organize Class Outputs." Neural Computation 9, no. 3 (March 1, 1997): 637–48. http://dx.doi.org/10.1162/neco.1997.9.3.637.

Abstract:
Supervised neural network learning algorithms have proved very successful at solving a variety of learning problems; however, they suffer from a common requirement for explicit output labels. In this article, it is shown that pattern classification can be achieved in a multilayered feedforward neural network, without explicit output labels, by a process of supervised self-organization. The class projection is achieved by optimizing appropriate within-class uniformity and between-class discernibility criteria. The mapping function and the class labels are developed together iteratively using the derived self-organizing backpropagation algorithm. The ability of the self-organizing network to generalize on unseen data is also experimentally evaluated on real data sets and compares favorably with traditional labeled supervision of neural networks. In addition, interesting features emerge from the proposed self-organizing supervision that are absent in conventional approaches.
16

Wu, Wei, Guangmin Hu, and Fucai Yu. "Ricci Curvature-Based Semi-Supervised Learning on an Attributed Network." Entropy 23, no. 3 (February 27, 2021): 292. http://dx.doi.org/10.3390/e23030292.

Abstract:
In recent years, drawing lessons from traditional neural network models, people have been paying more and more attention to the design of neural network architectures for processing graph-structured data, which are called graph neural networks (GNNs). Graph convolutional networks (GCNs) are neural network models within the GNN family. A GCN extends the convolution operation from traditional data (such as images) to graph data; it is essentially a feature extractor that aggregates the features of neighborhood nodes into those of target nodes. In the process of aggregating features, the GCN uses the Laplacian matrix to assign different importance to the nodes in the neighborhood of the target nodes. Since graph-structured data are inherently non-Euclidean, we seek to use a non-Euclidean mathematical tool, namely Riemannian geometry, to analyze graphs (networks). In this paper, we present a novel model for semi-supervised learning called the Ricci curvature-based graph convolutional neural network, i.e., RCGCN. The aggregation pattern of RCGCN is inspired by that of the GCN. We regard the network as a discrete manifold and then use Ricci curvature to assign different importance to the nodes within the neighborhood of the target nodes. Ricci curvature is related to the optimal transport distance, which can well reflect the geometric structure of the underlying space of the network. The node importance given by Ricci curvature can better reflect the relationships between the target node and the nodes in its neighborhood. The proposed model scales linearly with the number of edges in the network. Experiments demonstrated that RCGCN achieves a significant performance gain over baseline methods on benchmark datasets.
17

Sun, Li, Zhongbao Zhang, Junda Ye, Hao Peng, Jiawei Zhang, Sen Su, and Philip S. Yu. "A Self-Supervised Mixed-Curvature Graph Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4146–55. http://dx.doi.org/10.1609/aaai.v36i4.20333.

Abstract:
Graph representation learning has received increasing attention in recent years. Most of the existing methods ignore the complexity of graph structures and restrict graphs to a single constant-curvature representation space, which is only suitable for particular kinds of graph structure. Additionally, these methods follow the supervised or semi-supervised learning paradigm, which notably limits their deployment on the unlabeled graphs found in real applications. To address these limitations, we make the first attempt to study self-supervised graph representation learning in mixed-curvature spaces. In this paper, we present a novel Self-Supervised Mixed-Curvature Graph Neural Network (SelfMGNN). To capture complex graph structures, we construct a mixed-curvature space via the Cartesian product of multiple Riemannian component spaces and design hierarchical attention mechanisms for learning and fusing graph representations across these component spaces. To enable self-supervised learning, we propose a novel dual contrastive approach. The constructed mixed-curvature space actually provides multiple Riemannian views for contrastive learning. We introduce a Riemannian projector to reveal these views, and utilize a well-designed Riemannian discriminator for single-view and cross-view contrastive learning within and across the Riemannian views. Finally, extensive experiments show that SelfMGNN captures complex graph structures and outperforms state-of-the-art baselines.
18

Shin, Sungho, Jongwon Kim, Yeonguk Yu, Seongju Lee, and Kyoobin Lee. "Self-Supervised Transfer Learning from Natural Images for Sound Classification." Applied Sciences 11, no. 7 (March 29, 2021): 3043. http://dx.doi.org/10.3390/app11073043.

Abstract:
We propose the implementation of transfer learning from natural images to audio-based images using self-supervised learning schemes. Through self-supervised learning, convolutional neural networks (CNNs) can learn the general representation of natural images without labels. In this study, a convolutional neural network was pre-trained with natural images (ImageNet) via self-supervised learning; subsequently, it was fine-tuned on the target audio samples. Pre-training with the self-supervised learning scheme significantly improved the sound classification performance when validated on the following benchmarks: ESC-50, UrbanSound8k, and GTZAN. The network pre-trained via self-supervised learning achieved a similar level of accuracy to networks pre-trained with supervised methods that require labels. Therefore, we demonstrated that transfer learning from natural images contributes to improvements in audio-related tasks, and that self-supervised learning with natural images is an adequate pre-training scheme in terms of simplicity and effectiveness.
19

Gou, Ruru, Wenzhu Yang, Zifei Luo, Yunfeng Yuan, and Andong Li. "Tohjm-Trained Multiscale Spatial Temporal Graph Convolutional Neural Network for Semi-Supervised Skeletal Action Recognition." Electronics 11, no. 21 (October 28, 2022): 3498. http://dx.doi.org/10.3390/electronics11213498.

Abstract:
In recent years, spatial-temporal graph convolutional networks have played an increasingly important role in skeleton-based human action recognition. However, there are still three major limitations to most ST-GCN-based approaches: (1) They only use a single joint scale to extract action features, or process joint and skeletal information separately. As a result, action features cannot be extracted dynamically through the mutual directivity between the scales. (2) These models treat the contributions of all joints equally in training, which neglects the problem that some joints with difficult loss-reduction are critical joints in network training. (3) These networks rely heavily on a large amount of labeled data, which remains costly. To address these problems, we propose a Tohjm-trained multiscale spatial-temporal graph convolutional neural network for semi-supervised action recognition, which contains three parts: encoder, decoder and classifier. The encoder’s core is a correlated joint–bone–body-part fusion spatial-temporal graph convolutional network that allows the network to learn more stable action features between coarse and fine scales. The decoder uses a self-supervised training method with a motion prediction head, which enables the network to extract action features using unlabeled data so that the network can achieve semi-supervised learning. In addition, the network is also capable of fully supervised learning with the encoder, decoder and classifier. Our proposed time-level online hard joint mining strategy is also used in the decoder training process, which allows the network to focus on hard training joints and improve the overall network performance. Experimental results on the NTU-RGB + D dataset and the Kinetics-skeleton dataset show that the improved model achieves good performance for action recognition based on semi-supervised training, and is also applicable to the fully supervised approach.
20

Hamdan, Baida Abdulredha. "Neural Network Principles and its Application." Webology 19, no. 1 (January 20, 2022): 3955–70. http://dx.doi.org/10.14704/web/v19i1/web19261.

Abstract:
Neural networks, also known as artificial neural networks, are a computing technique formed and designed to simulate the human brain and to serve as a problem-solving method. Artificial neural networks gain their abilities through training or learning; each training case has a certain input and an output, also called the result. Learning works by forming probability-weighted associations between inputs and results, which are stored within the net's data structure. The training process depends on identifying the difference between the processed output, which is usually a prediction, and the real target output, which constitutes an error; a series of adjustments is then made to achieve a proper learning result. This process is called supervised learning. Artificial neural networks have proved themselves in many applications across a variety of fields due to their capacity to recreate and simulate nonlinear phenomena: system identification and control (process control, vehicle control, quantum chemistry, trajectory prediction, natural resource management, etc.), as well as face recognition, where they have proved very effective. The neural network has shown itself to be a very promising technique in many fields due to its accuracy and problem-solving properties.
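The supervised learning loop described here (predict, measure the error against the target, adjust the weights) reduces to a few lines for a single linear neuron; a toy sketch, not from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    w_true = np.array([1.5, -2.0, 0.5])
    y = X @ w_true                      # known target outputs

    w, lr = np.zeros(3), 0.05
    for _ in range(100):
        error = X @ w - y               # processed output minus real target
        w -= lr * X.T @ error / len(X)  # the series of weight adjustments
    print(w)                            # approaches w_true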
21

Zhang, Yinan, and Wenyu Chen. "Incorporating Siamese Network Structure into Graph Neural Network." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012023. http://dx.doi.org/10.1088/1742-6596/2171/1/012023.

Abstract:
Siamese networks play an important role in many artificial intelligence domains, but applying the Siamese structure to graph neural networks requires more exploration. This paper proposes a novel framework that incorporates the Siamese network structure into a Graph Neural Network (Siam-GNN). We use DropEdge as a graph augmentation technique to generate new graphs. Besides, the strategy of constructing the Siamese network's paired inputs is also studied in our work. Notably, stopping gradient backpropagation on one side of the Siam-GNN is an important factor affecting the performance of the model. We equip several graph neural networks with the Siamese structure and evaluate these Siam-GNNs on several standard semi-supervised node classification datasets, achieving surprising improvements over almost every original graph neural network.
22

Ali Humayun, Mohammad, Ibrahim Hameed, Syed Muslim Shah, Sohaib Hassan Khan, Irfan Zafar, Saad Bin Ahmed, and Junaid Shuja. "Regularized Urdu Speech Recognition with Semi-Supervised Deep Learning." Applied Sciences 9, no. 9 (May 13, 2019): 1956. http://dx.doi.org/10.3390/app9091956.

Abstract:
Automatic Speech Recognition (ASR) has achieved the best results for English with end-to-end neural-network-based supervised models. These supervised models need huge amounts of labeled speech data for good generalization, which can be quite a challenge to obtain for low-resource languages like Urdu. Most models proposed for Urdu ASR are based on Hidden Markov Models (HMMs). This paper proposes an end-to-end neural network model for Urdu ASR, regularized with dropout, ensemble averaging, and Maxout units. Dropout and ensembles are averaging techniques over multiple neural network models, while Maxout units adapt their activation functions. Due to limited labeled data, Semi-Supervised Learning (SSL) techniques are also incorporated to improve model generalization. Speech features are transformed into a lower-dimensional manifold using an unsupervised dimensionality-reduction technique called Locally Linear Embedding (LLE). The transformed data, along with higher-dimensional features, is used to train the neural networks. The proposed model also utilizes label-propagation-based self-training of initially trained models and achieves a Word Error Rate (WER) 4% lower than the benchmark reported on the same Urdu corpus using HMMs. The decrease in WER after incorporating SSL is more significant with an increased validation data size.
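Of the three regularizers, Maxout is the least standard: each unit outputs the maximum over k learned affine pieces, so the activation shape itself is trained. A sketch assuming plain dense layers (shapes are illustrative):

    import numpy as np

    def maxout(x, W, b):
        # x: (n, d); W: (d, m, k); b: (m, k)  ->  output (n, m)
        z = np.einsum("nd,dmk->nmk", x, W) + b
        return z.max(axis=-1)           # max over the k affine pieces

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 10))
    W = rng.normal(size=(10, 8, 3))     # 8 output units, k = 3 pieces each
    b = np.zeros((8, 3))
    print(maxout(x, W, b).shape)        # (4, 8)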
23

Tomasov, Adrian, Martin Holik, Vaclav Oujezsky, Tomas Horvath, and Petr Munster. "GPON PLOAMd Message Analysis Using Supervised Neural Networks." Applied Sciences 10, no. 22 (November 18, 2020): 8139. http://dx.doi.org/10.3390/app10228139.

Abstract:
This paper discusses the possibility of analyzing the orchestration protocol used in gigabit-capable passive optical networks (GPONs). Because a GPON is defined by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) as a set of recommendations, implementations across device vendors may exhibit slight differences, which complicates analysis of such protocols. Therefore, machine learning techniques (e.g., neural networks) are used to evaluate differences in GPONs among various device vendors. As a result, this paper compares three neural network models based on different types of recurrent cells and discusses their suitability for such analysis.
24

Sayed, Mohamed, and Faris Baker. "E-Learning Optimization Using Supervised Artificial Neural-Network." Journal of Software Engineering and Applications 08, no. 01 (2015): 26–34. http://dx.doi.org/10.4236/jsea.2015.81004.

25

Saleem, Nasir, Muhammad Irfan Khattak, and Abdul Baser Qazi. "Supervised speech enhancement based on deep neural network." Journal of Intelligent & Fuzzy Systems 37, no. 4 (October 25, 2019): 5187–201. http://dx.doi.org/10.3233/jifs-190047.

26

Issa, Riham J., and Yusra F. Al-Irhaym. "Audio source separation using supervised deep neural network." Journal of Physics: Conference Series 1879, no. 2 (May 1, 2021): 022077. http://dx.doi.org/10.1088/1742-6596/1879/2/022077.

27

Khashman, Adnan, and Claudia Georgeta Carstea. "Oil price prediction using a supervised neural network." International Journal of Oil, Gas and Coal Technology 20, no. 3 (2019): 360. http://dx.doi.org/10.1504/ijogct.2019.098458.

28

Carstea, Claudia Georgeta, and Adnan Khashman. "Oil price prediction using a supervised neural network." International Journal of Oil, Gas and Coal Technology 20, no. 3 (2019): 360. http://dx.doi.org/10.1504/ijogct.2019.10020004.

29

Salam, F. M. A., and S. Bai. "A new feedback neural network with supervised learning." IEEE Transactions on Neural Networks 2, no. 1 (1991): 170–73. http://dx.doi.org/10.1109/72.80309.

30

KHASHMAN, ADNAN, and BORAN SEKEROGLU. "DOCUMENT IMAGE BINARISATION USING A SUPERVISED NEURAL NETWORK." International Journal of Neural Systems 18, no. 05 (October 2008): 405–18. http://dx.doi.org/10.1142/s0129065708001671.

Abstract:
Advances in digital technologies have allowed us to generate more images than ever. Scanned documents are examples of the images that form a vital part of digital libraries and archives. Scanned degraded documents contain background noise and varying contrast and illumination; therefore, document image binarisation must be performed in order to separate the foreground from the background layer. Image binarisation is performed using either local adaptive thresholding or global thresholding, with local thresholding generally considered the more successful. This paper presents a novel approach to global thresholding, where a neural network is trained using local threshold values of an image in order to determine an optimum global threshold value, which is then used to binarise the whole image. The proposed method is compared with five local thresholding methods, and the experimental results indicate that our method is computationally cost-effective and capable of binarising scanned degraded documents with superior results.
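A sketch of the local-versus-global contrast the method builds on: local thresholds are computed per block, and a single global value binarises the whole image. A plain mean stands in for the trained network that maps local thresholds to the optimum global value (block size and the mean statistic are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in scan

    # Split into 8x8 tiles and take one mean-based local threshold per tile.
    tiles = img.reshape(8, 8, 8, 8).transpose(0, 2, 1, 3).reshape(64, 64)
    local_thresholds = tiles.mean(axis=1)

    global_threshold = local_thresholds.mean()   # placeholder for the NN output
    binary = img > global_threshold              # foreground/background split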
31

Zhou, Yu, Lian Tian, and Linfei Liu. "Improved Extension Neural Network and Its Applications." Mathematical Problems in Engineering 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/593021.

Abstract:
The extension neural network (ENN) is a new neural network that combines extension theory and the artificial neural network (ANN). The learning algorithm of the ENN is based on supervised learning. An important issue in classification and recognition with the ENN is how to achieve the best possible classifier with a small amount of labeled training data. Training data selection is an effective approach to this issue. In this work, in order to improve the supervised learning performance and expand the range of engineering applications of the ENN, we use a novel data selection method based on shadowed sets to refine the training data set of the ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and around the borders between clusters (boundary data) are selected as training data. Lastly, we use the selected data to train the ENN. Compared with the traditional ENN, the proposed improved ENN (IENN) has better performance. Moreover, IENN is independent of the supervised learning algorithm and the initial labeled data. Experimental results verify the effectiveness and applicability of the proposed work.
32

Deolia, Vinay Kumar, and Manoj Kumar. "A Novel Medical Image Retrieval based on Deep Convolutional Neural Network and Supervised Hashing." Journal of Advanced Research in Dynamical and Control Systems 11, no. 11-SPECIAL ISSUE (November 29, 2019): 208–14. http://dx.doi.org/10.5373/jardcs/v11sp11/20192949.

33

Hou, Xue Mei. "Noise-Robust Speech Recognition Based on RBF Neural Network." Advanced Materials Research 217-218 (March 2011): 413–18. http://dx.doi.org/10.4028/www.scientific.net/amr.217-218.413.

Abstract:
Considering the state of current speech recognition and the characteristics of the RBF neural network, a noise-robust speech recognition system based on an RBF neural network with the entire-supervised algorithm is proposed. When the traditional clustering algorithm is employed, there is a flaw: the node centers of the hidden layer are always sensitive to the initial values. With the entire-supervised algorithm, this flaw does not arise, and the classification ability of the RBF network is enhanced. Experimental results show that, compared with the traditional clustering algorithm, the entire-supervised algorithm achieves a higher recognition rate across different SNRs.
34

Li, Jitao, Chugui Xu, Yongquan Liang, Gengkun Wu, and Zhao Liang. "An RBF Neural Network Clustering Algorithm Based on K-Nearest Neighbor." Mathematical Problems in Engineering 2022 (August 24, 2022): 1–9. http://dx.doi.org/10.1155/2022/1083961.

Abstract:
The neural network is a supervised classification algorithm that can handle highly complex, nonlinear data analysis. A supervised algorithm needs some known labels in the training process and then corrects its parameters through the backpropagation method. However, due to the lack of labels, the existing literature mostly uses auto-encoders to reduce the dimensionality of the data when facing clustering problems. This paper proposes an RBF (Radial Basis Function) neural network clustering algorithm based on K-nearest neighbors theory, which first uses the K-means algorithm for pre-classification and then constructs self-supervised labels based on K-nearest neighbors theory for backpropagation. The proposed algorithm is a self-supervised neural network clustering algorithm, and it gives the neural network a genuine ability for self-decision-making and self-optimization. Experimental results on artificial data sets and UCI data sets show that the proposed algorithm has excellent adaptability and robustness.
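A compact sketch of the two-stage idea, K-means pre-classification followed by K-nearest-neighbour self-supervised labels, using scikit-learn estimators in place of the authors' RBF network (cluster count and k are assumptions):

    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    pre_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    knn = KNeighborsClassifier(n_neighbors=7).fit(X, pre_labels)
    self_labels = knn.predict(X)   # self-supervised labels for backpropagation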
35

Rouček, Tomáš, Arash Sadeghi Amjadi, Zdeněk Rozsypálek, George Broughton, Jan Blaha, Keerthy Kusumam, and Tomáš Krajník. "Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation." Sensors 22, no. 8 (April 7, 2022): 2836. http://dx.doi.org/10.3390/s22082836.

Abstract:
The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day–night cycles and seasonal variations. However, deep learning of neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation efforts when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, codes and trained models online.
36

Kulathunga, Nalinda, Nishath Rajiv Ranasinghe, Daniel Vrinceanu, Zackary Kinsman, Lei Huang, and Yunjiao Wang. "Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks." Algorithms 14, no. 2 (February 5, 2021): 51. http://dx.doi.org/10.3390/a14020051.

Abstract:
The nonlinearity of the activation functions used in deep learning models is crucial for the success of predictive models. Several simple nonlinear functions, including the Rectified Linear Unit (ReLU) and Leaky ReLU (L-ReLU), are commonly used in neural networks to impose nonlinearity. In practice, these functions remarkably enhance model accuracy. However, there is limited insight into the effects of nonlinearity in neural networks on their performance. Here, we investigate the performance of neural network models as a function of nonlinearity using ReLU and L-ReLU activation functions in the context of different model architectures and data domains. We use entropy as a measure of randomness to quantify the effects of nonlinearity in different architecture shapes on the performance of neural networks. We show that the ReLU nonlinearity is a better choice of activation function mostly when the network has a sufficient number of parameters. However, we found that image classification models with transfer learning seem to perform well with L-ReLU in the fully connected layers. We show that the entropy of hidden-layer outputs in neural networks can fairly represent the fluctuations in information loss as a function of nonlinearity. Furthermore, we investigate the entropy profile of shallow neural networks as a way of representing their hidden-layer dynamics.
37

Aragon-Calvo, M. A., and J. C. Carvajal. "Self-supervised learning with physics-aware neural networks – I. Galaxy model fitting." Monthly Notices of the Royal Astronomical Society 498, no. 3 (September 7, 2020): 3713–19. http://dx.doi.org/10.1093/mnras/staa2228.

Abstract:
Estimating the parameters of a model describing a set of observations using a neural network is, in general, solved in a supervised way. In cases where we do not have access to the model's true parameters, this approach cannot be applied. Standard unsupervised learning techniques, on the other hand, do not produce meaningful or semantic representations that can be associated with the model's parameters. Here we introduce a novel self-supervised hybrid network architecture that combines traditional neural network elements with analytic or numerical models, which represent a physical process to be learned by the system. Self-supervised learning is achieved by generating an internal representation equivalent to the parameters of the physical model. This semantic representation is used to evaluate the model and compare it to the input data during training. The semantic autoencoder architecture described here shares the robustness of neural networks while including an explicit model of the data, learns in an unsupervised way, and estimates, by construction, parameters with direct physical interpretation. As an illustrative application, we perform unsupervised learning for 2D model fitting of exponential light profiles and evaluate the performance of the network as a function of network size and noise.
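The architecture is easy to caricature: an encoder regresses physical parameters, and an analytic model takes the place of a learned decoder, so the reconstruction error supervises training. A PyTorch sketch for a 1-D exponential light profile (the profile form, sizes, and optimizer settings are assumptions):

    import torch

    r = torch.linspace(0, 5, 50)

    def render(params):                    # analytic decoder: I(r) = A*exp(-r/h)
        A, h = params[:, :1], params[:, 1:].clamp(min=0.1)
        return A * torch.exp(-r / h)

    encoder = torch.nn.Sequential(
        torch.nn.Linear(50, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)

    data = render(torch.tensor([[2.0, 1.5]])).repeat(64, 1)  # toy observations
    for _ in range(200):
        params = encoder(data)             # internal, physically meaningful code
        loss = ((render(params) - data) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()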
38

Choi, Inkyu, Soo Hyun Bae, and Nam Soo Kim. "Deep Convolutional Neural Network with Structured Prediction for Weakly Supervised Audio Event Detection." Applied Sciences 9, no. 11 (June 4, 2019): 2302. http://dx.doi.org/10.3390/app9112302.

Abstract:
Audio event detection (AED) is a task of recognizing the types of audio events in an audio stream and estimating their temporal positions. AED is typically based on fully supervised approaches, requiring strong labels including both the presence and temporal position of each audio event. However, fully supervised datasets are not easily available due to the heavy cost of human annotation. Recently, weakly supervised approaches for AED have been proposed, utilizing large scale datasets with weak labels including only the occurrence of events in recordings. In this work, we introduce a deep convolutional neural network (CNN) model called DSNet based on densely connected convolution networks (DenseNets) and squeeze-and-excitation networks (SENets) for weakly supervised training of AED. DSNet alleviates the vanishing-gradient problem and strengthens feature propagation and models interdependencies between channels. We also propose a structured prediction method for weakly supervised AED. We apply a recurrent neural network (RNN) based framework and a prediction smoothness cost function to consider long-term contextual information with reduced error propagation. In post-processing, conditional random fields (CRFs) are applied to take into account the dependency between segments and delineate the borders of audio events precisely. We evaluated our proposed models on the DCASE 2017 task 4 dataset and obtained state-of-the-art results on both audio tagging and event detection tasks.
39

MAGOULAS, GEORGE D., and MICHAEL N. VRAHATIS. "ADAPTIVE ALGORITHMS FOR NEURAL NETWORK SUPERVISED LEARNING: A DETERMINISTIC OPTIMIZATION APPROACH." International Journal of Bifurcation and Chaos 16, no. 07 (July 2006): 1929–50. http://dx.doi.org/10.1142/s0218127406015805.

Abstract:
Networks of neurons can perform computations that even modern computers find very difficult to simulate. Most existing artificial neurons and artificial neural networks are considered biologically unrealistic; nevertheless, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms have been proposed in the literature. This paper provides an overview of approaches to backpropagation training, emphasizing first-order adaptive learning algorithms that build on the theory of nonlinear optimization, and proposes a framework for their analysis in the context of deterministic optimization.
40

Lin, Yi-Nan, Tsang-Yen Hsieh, Cheng-Ying Yang, Victor RL Shen, Tony Tong-Ying Juang, and Wen-Hao Chen. "Deep Petri nets of unsupervised and supervised learning." Measurement and Control 53, no. 7-8 (June 9, 2020): 1267–77. http://dx.doi.org/10.1177/0020294020923375.

Abstract:
Artificial intelligence is one of the hottest research topics in computer science. In general, when deep learning is required, the most intuitive implementation method is to use a neural network. But neural networks have two shortcomings. First, they are not easy to understand: implementing a neural network often requires a lot of supporting research effort. Second, the structure is complex: when constructing a complete learning structure with fully defined connections between nodes, the overall structure becomes complicated, and it is hard for developers to track the parameter changes inside. Therefore, the goal of this article is to provide a more streamlined method for performing deep learning. A modified high-level fuzzy Petri net, called a deep Petri net, is used to perform deep learning, in an attempt to propose a simple and easy structure, to make parameter changes trackable, and to run faster than the deep neural network. The experimental results have shown that the deep Petri net performs better than the deep neural network.
41

Zhang, Hong, and Hongying Huang. "Neural Network for Solving Nonlinear Parabolic Differential Equations." Journal of Physics: Conference Series 2381, no. 1 (December 1, 2022): 012050. http://dx.doi.org/10.1088/1742-6596/2381/1/012050.

Abstract:
We develop a cell-average-based neural network (CANN) method to compute nonlinear differential equations. Using feedforward networks, we can train the average solutions at t₀ + Δt from the initial values. In order to find the optimal parameters for the network, we use a BP algorithm in combination with supervised training. With the trained network, we may compute the approximate solutions at time tₙ₊₁ from the ones at time tₙ. Numerical results show that the CANN method permits a very large time step size for solution evolution.
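A sketch of the cell-average one-step map for the linear advection equation u_t + u_x = 0, whose exact solution u = sin(x − t) makes training pairs cheap to generate (stencil width, network size, and Δt are assumptions; the paper targets nonlinear parabolic problems):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    nx, dt = 64, 0.5                       # dt far exceeds an explicit CFL step
    edges = np.linspace(0, 2 * np.pi, nx + 1)

    def cell_averages(t):                  # exact cell averages of sin(x - t)
        return (np.cos(edges[:-1] - t) - np.cos(edges[1:] - t)) / np.diff(edges)

    X, Y = [], []                          # stencil at t_n -> centre cell at t_n+dt
    for t in np.arange(0, 6, 0.1):
        u0, u1 = cell_averages(t), cell_averages(t + dt)
        X.append(np.stack([np.roll(u0, 1), u0, np.roll(u0, -1)], axis=1))
        Y.append(u1)

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(np.concatenate(X), np.concatenate(Y))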
42

Li, Jianhe, and Suohai Fan. "GRNN: Graph-Retraining Neural Network for Semi-Supervised Node Classification." Algorithms 16, no. 3 (February 22, 2023): 126. http://dx.doi.org/10.3390/a16030126.

Abstract:
In recent years, graph neural networks (GNNs) have played an important role in graph representation learning and have achieved excellent results in semi-supervised classification. However, these GNNs often neglect global smoothing over the graph because it is incompatible with node classification: a cluster of nodes in the graph often contains a small number of nodes from other classes. To address this issue, we propose a graph-retraining neural network (GRNN) model that performs smoothing over the graph by alternating between a learning procedure and an inference procedure, based on the key idea of the expectation-maximization algorithm. Moreover, the global smoothing error is combined with the cross-entropy error to form the loss function of GRNN, which effectively solves the problem. The experiments show that GRNN achieves high accuracy on the standard citation network datasets, including Cora, Citeseer, and PubMed, which proves the effectiveness of GRNN in semi-supervised node classification.
43

Jiang, Yi, and Hui Sun. "Top-K Pseudo Labeling for Semi-Supervised Image Classification." International Journal of Data Warehousing and Mining 19, no. 2 (December 30, 2022): 1–18. http://dx.doi.org/10.4018/ijdwm.316150.

Abstract:
In this paper, a top-k pseudo labeling method for semi-supervised self-learning is proposed. Pseudo labeling is a key technology in semi-supervised self-learning; briefly, the quality of the generated pseudo labels largely determines the convergence of the neural network and the accuracy obtained. The authors use a method called top-k pseudo labeling to generate pseudo labels during the training of a semi-supervised neural network model. The proposed labeling method helps a lot in learning features from unlabeled data. It is easy to implement and relies only on the neural network prediction and the hyper-parameter k. The experimental results show that the proposed method works well with semi-supervised learning on the CIFAR-10 and CIFAR-100 datasets. Also, a variant of top-k labeling for supervised learning, named top-k regulation, is proposed. The experimental results show that various models can achieve higher accuracy on the test set when trained with top-k regulation.
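One plausible instantiation of confidence-based pseudo-labelling with a hyper-parameter k (the paper's exact top-k rule may differ; this sketch keeps the k most confident unlabelled predictions per round and recycles them as labels):

    import numpy as np

    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(10), size=500)   # stand-in softmax outputs

    k = 50
    confidence = probs.max(axis=1)
    chosen = np.argsort(confidence)[-k:]           # k most confident samples
    pseudo_labels = probs[chosen].argmax(axis=1)   # labels for the next round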
44

HUNG, CHENG-AN, and SHENG-FUU LIN. "AN INCREMENTAL LEARNING NEURAL NETWORK FOR PATTERN CLASSIFICATION." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 06 (September 1999): 913–28. http://dx.doi.org/10.1142/s0218001499000501.

Abstract:
A neural network architecture that incorporates a supervised mechanism into a fuzzy adaptive Hamming net (FAHN) is presented. The FAHN constructs hyper-rectangles that represent template weights in an unsupervised learning paradigm. Learning in the FAHN consists of creating and adjusting hyper-rectangles in feature space. By aggregating multiple hyper-rectangles into a single class, we can build a classifier, to be henceforth termed as a supervised fuzzy adaptive Hamming net (SFAHN), that discriminates between nonconvex and even discontinuous classes. The SFAHN can operate at a fast-learning rate in online (incremental) or offline (batch) applications, without becoming unstable. The performance of the SFAHN is tested on the Fisher iris data and on an online character recognition problem.
45

Dapkus, Paulius, Liudas Mažeika, and Vytautas Sliesoraitis. "A study of supervised combined neural-network-based ultrasonic method for reconstruction of spatial distribution of material properties." Information Technology And Control 49, no. 3 (September 23, 2020): 381–94. http://dx.doi.org/10.5755/j01.itc.49.3.26792.

Abstract:
This paper examines the performance of commonly used neural-network-based classifiers for investigating structural noise in metals via grain size estimation. The core problem is to identify the grain size of an object's structure based on metal features or the structure itself. Once the structure data are obtained, a proposed feature extraction method is used to extract the features of the object. Afterwards, the extracted features are used as the inputs to the classifiers. This research focuses on using basic ultrasonic sensors to obtain the structural grain size of objects for use in a neural network. The performance of each neural-network-based classifier is evaluated on recognition accuracy for individual objects. Traditional neural networks, namely convolutional and fully connected dense networks, are presented as grain size estimation models. To evaluate the robustness property of the neural networks, the original sample data are mixed across three grain sizes. Experimental results show that combined convolutional and fully connected dense neural networks with classifiers outperform single neural networks on original samples with high signal-to-noise (SN) data. The dense neural network itself demonstrates the best robustness property when the object samples do not differ from the trained datasets.
46

Qin, Shanshan, Nayantara Mudur, and Cengiz Pehlevan. "Contrastive Similarity Matching for Supervised Learning." Neural Computation 33, no. 5 (April 13, 2021): 1300–1328. http://dx.doi.org/10.1162/neco_a_01374.

Abstract:
We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
47

Mane, Deepak T., Rahul Moorthy, Prashant Kumbharkar, Gopal Upadhye, Dipmala Salunke, and Rashmi Ashtagi. "Pattern Classification Using Supervised Hypersphere Neural Network." International Journal of Emerging Technology and Advanced Engineering 12, no. 8 (August 4, 2022): 214–22. http://dx.doi.org/10.46338/ijetae0822_25.

Abstract:
Pattern classification is in significant demand in the field of machine learning. Its applications range from a simple problem such as speech recognition to a complex and important problem such as medical diagnosis. Fuzzy-based algorithms have been among the most important methods contributing to the solution of the pattern classification problem. A customized fuzzy-based Supervised Hypersphere Neural Network (SHNN) is presented for pattern classification. Here, a modernized pattern classification method is presented, built on the fuzzy hypersphere neural network concept at the back end and using a modified version of the membership function, aiming to solve pattern classification problems and boost the performance of the algorithm. The proposed SHNN model creates supervised hyperspheres using measurements obtained from intra-class distance techniques along with individual class pattern choice, over the fuzzy membership function. Previous modified fuzzy approaches had the inherent drawbacks of ambiguous class assignment, losing the fuzzy nature when assigning classes in the testing phase, and overfitting the model during the training phase. The proposed approach solves this problem by adding a nonlinearity to the output of the membership function to maintain its fuzzy nature. Additionally, a new weighted Euclidean distance equation has been designed to enhance the performance of the algorithm. The performance of the proposed SHNN model has been examined on four standard datasets, namely Pima, Liver, Glass, and Monks-3. The results obtained were superior to previously proposed approaches, thus setting a new state-of-the-art result on these datasets.
48

Gaiduk, A. R., O. V. Martjanov, M. Yu Medvedev, V. Kh Pshikhopov, N. Hamdan, and A. Farhood. "Neural Network Based Control System for Robots Group Operating in 2-d Uncertain Environment." Mekhatronika, Avtomatizatsiya, Upravlenie 21, no. 8 (August 5, 2020): 470–79. http://dx.doi.org/10.17587/mau.21.470-479.

Abstract:
This study is devoted to the development of a neural-network-based control system for a group of robots. The control system performs estimation of the environment state, selection of the optimal path planning method, path planning itself, and trajectory modification via the robots' interaction. Deep learning neural networks implement the optimal path planning method selection and the path planning of the robots. The first neural network classifies the environment into two types: for the first type, a shortest-path planning method is used; for the second type, a safest-path planning method is used. Evaluation of the path planning algorithm is based on a multi-objective criterion, which includes the time of movement to the target point, the path length, and the minimal distance from the robot to obstacles. A new hybrid learning algorithm for the neural network is proposed, which includes elements of both supervised and unsupervised learning. The second neural network plans the shortest path, and the third neural network plans the safest path; a supervised algorithm is developed to train them. The second and third networks do not plan the whole path of the robot: their outputs are the direction of the robot's movement at step k. Thus, the whole path of the robot does not have to be recalculated at every step in a dynamic environment. Likewise, an algorithm for robot group formation in an unmapped, obstructed environment is developed. The results of simulation and experiments are presented.
49

FRASCONI, PAOLO, MARCO GORI, and GIOVANNI SODA. "DAPHNE: DATA PARALLELISM NEURAL NETWORK SIMULATOR." International Journal of Modern Physics C 04, no. 01 (February 1993): 17–28. http://dx.doi.org/10.1142/s0129183193000045.

Abstract:
In this paper we describe the guideline of Daphne, a parallel simulator for supervised recurrent neural networks trained by Backpropagation through time. The simulator has a modular structure, based on a parallel training kernel running on the CM-2 Connection Machine. The training kernel is written in CM Fortran in order to exploit some advantages of the slicewise execution model. The other modules are written in serial C code. They are used for designing and testing the network, and for interfacing with the training data. A dedicated language is available for defining the network architecture, which allows the use of linked modules. The implementation of the learning procedures is based on training example parallelism. This dimension of parallelism has been found to be effective for learning static patterns using feedforward networks. We extend training example parallelism for learning sequences with full recurrent networks. Daphne is mainly conceived for applications in the field of Automatic Speech Recognition, though it can also serve for simulating feedforward networks.
50

Kulkarni, Shruti R., and Bipin Rajendran. "Spiking neural networks for handwritten digit recognition—Supervised learning and network optimization." Neural Networks 103 (July 2018): 118–27. http://dx.doi.org/10.1016/j.neunet.2018.03.019.
