Journal articles on the topic 'Supervised neural networks'

To see the other types of publications on this topic, follow the link: Supervised neural networks.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Supervised neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Yeh, I.-Cheng, and Kuan-Cheng Lin. "Supervised Learning Probabilistic Neural Networks." Neural Processing Letters 34, no. 2 (July 22, 2011): 193–208. http://dx.doi.org/10.1007/s11063-011-9191-z.

2

Hush, D. R., and B. G. Horne. "Progress in supervised neural networks." IEEE Signal Processing Magazine 10, no. 1 (January 1993): 8–39. http://dx.doi.org/10.1109/79.180705.

3

Tomasov, Adrian, Martin Holik, Vaclav Oujezsky, Tomas Horvath, and Petr Munster. "GPON PLOAMd Message Analysis Using Supervised Neural Networks." Applied Sciences 10, no. 22 (November 18, 2020): 8139. http://dx.doi.org/10.3390/app10228139.

Abstract:
This paper discusses the possibility of analyzing the orchestration protocol used in gigabit-capable passive optical networks (GPONs). Because a GPON is defined by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) as a set of recommendations, implementations across device vendors may exhibit slight differences, which complicates the analysis of such protocols. Therefore, machine learning techniques (e.g., neural networks) are used to evaluate differences in GPONs among various device vendors. As a result, this paper compares three neural network models based on different types of recurrent cells and discusses their suitability for such analysis.
4

Hammer, Barbara. "Neural Smithing – Supervised Learning in Feedforward Artificial Neural Networks." Pattern Analysis & Applications 4, no. 1 (March 2001): 73–74. http://dx.doi.org/10.1007/s100440170029.

5

Sarukkai, Ramesh R. "Supervised Networks That Self-Organize Class Outputs." Neural Computation 9, no. 3 (March 1, 1997): 637–48. http://dx.doi.org/10.1162/neco.1997.9.3.637.

Abstract:
Supervised neural network learning algorithms have proved very successful at solving a variety of learning problems; however, they share the drawback of requiring explicit output labels. In this article, it is shown that pattern classification can be achieved in a multilayered feedforward neural network without explicit output labels, through a process of supervised self-organization. The class projection is achieved by optimizing appropriate within-class uniformity and between-class discernibility criteria. The mapping function and the class labels are developed together iteratively using the derived self-organizing backpropagation algorithm. The ability of the self-organizing network to generalize to unseen data is also evaluated experimentally on real data sets and compares favorably with traditional labeled supervision of neural networks. In addition, interesting features emerge from the proposed self-organizing supervision that are absent in conventional approaches.
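To make the abstract's two criteria concrete, here is a minimal, illustrative sketch of within-class uniformity and between-class discernibility measures computed on network outputs; the function names and formulas are generic placeholders of ours, not the paper's exact objective:

```python
import numpy as np

def self_organizing_criteria(outputs, labels):
    """Toy criteria in the spirit of supervised self-organization:
    within-class uniformity (outputs of one class should cluster) and
    between-class discernibility (class means should separate).
    Illustrative sketch, not the paper's exact objective."""
    classes = np.unique(labels)
    means = np.stack([outputs[labels == c].mean(axis=0) for c in classes])
    # Within-class uniformity: mean squared distance to the class mean.
    within = np.mean([
        np.mean(np.sum((outputs[labels == c] - means[i]) ** 2, axis=1))
        for i, c in enumerate(classes)
    ])
    # Between-class discernibility: mean squared distance between class means.
    diffs = means[:, None, :] - means[None, :, :]
    between = np.sum(diffs ** 2) / (len(classes) * (len(classes) - 1))
    return within, between

rng = np.random.default_rng(0)
outputs = rng.normal(size=(100, 4))    # network outputs (no explicit target labels)
labels = rng.integers(0, 3, size=100)  # class memberships used only to group outputs
w, b = self_organizing_criteria(outputs, labels)
print(f"within-class: {w:.3f}, between-class: {b:.3f}")  # minimize w, maximize b
```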
6

Doyle, J. R. "Supervised learning in N-tuple neural networks." International Journal of Man-Machine Studies 33, no. 1 (July 1990): 21–40. http://dx.doi.org/10.1016/s0020-7373(05)80113-0.

7

Secco, Jacopo, Mauro Poggio, and Fernando Corinto. "Supervised neural networks with memristor binary synapses." International Journal of Circuit Theory and Applications 46, no. 1 (January 2018): 221–33. http://dx.doi.org/10.1002/cta.2429.

8

Sporea, Ioana, and André Grüning. "Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 25, no. 2 (February 2013): 473–509. http://dx.doi.org/10.1162/neco_a_00396.

Abstract:
We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.
9

Wang, Juexin, Anjun Ma, Qin Ma, Dong Xu, and Trupti Joshi. "Inductive inference of gene regulatory network using supervised and semi-supervised graph neural networks." Computational and Structural Biotechnology Journal 18 (2020): 3335–43. http://dx.doi.org/10.1016/j.csbj.2020.10.022.

10

Xu, Jianqiao, Zhaolu Zuo, Danchao Wu, Bing Li, Xiaoni Li, and Deyi Kong. "Bearing Defect Detection with Unsupervised Neural Networks." Shock and Vibration 2021 (August 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/9544809.

Abstract:
Bearings often suffer from surface defects, such as scratches, black spots, and pits. These surface defects strongly affect the quality and service life of bearings, so defect detection has always been a focus of bearing quality control. Deep learning has been successfully applied to object detection owing to its excellent performance. However, automatic detection of bearing surface defects with data-driven deep learning is difficult because few samples of bearing defects are available on the actual production line. A sample preprocessing algorithm based on the normalized symmetry of the bearing is therefore adopted to greatly increase the number of samples. Two different convolutional neural networks, a supervised network and an unsupervised network, are tested separately for bearing defect detection. The first experiment adopts the supervised approach, with ResNet neural networks selected as the supervised networks. The result shows that the AUC of the model is 0.8567, which is too low for practical use; moreover, the positive and negative samples must be labelled manually. To improve the AUC of the model and the flexibility of sample labelling, a new unsupervised neural network based on autoencoder networks is proposed. Gradients of the unlabeled data are used as labels, and autoencoder networks are built with U-Net to predict the output. In the second experiment, the positive samples of the supervised experiment are used as the training set; the AUC of the resulting model is 0.9721. This AUC is higher than in the first experiment, but the positive samples must still be selected. To overcome this shortcoming, the dataset of the third experiment is the same as in the supervised experiment, with all positive and negative samples mixed together, so there is no need to label the samples. This experiment yields an AUC of 0.9623. Although this is slightly lower than in the second experiment, the AUC is high enough for practical use. The experimental results demonstrate the feasibility and superiority of the proposed unsupervised networks.
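As a hedged illustration of the unsupervised route described above, the following sketch scores defects by autoencoder reconstruction error. It uses a small dense autoencoder on random placeholder data, not the paper's U-Net with gradient-based labels; all names and sizes are our assumptions:

```python
import torch
import torch.nn as nn

# Minimal sketch of unsupervised defect scoring by reconstruction error.
# The cited work builds an autoencoder with U-Net and gradient labels;
# this stand-in uses a small dense autoencoder on flattened patches.
class TinyAutoencoder(nn.Module):
    def __init__(self, dim=64 * 64, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defect_score(model, patch):
    """Higher reconstruction error suggests a surface defect."""
    with torch.no_grad():
        recon = model(patch)
    return torch.mean((patch - recon) ** 2, dim=1)

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_patches = torch.rand(256, 64 * 64)  # placeholder for defect-free bearing patches
for _ in range(50):                        # train to reconstruct normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal_patches), normal_patches)
    loss.backward()
    opt.step()
print(defect_score(model, torch.rand(4, 64 * 64)))
```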
11

Zhao, Shijie, Yan Cui, Linwei Huang, Li Xie, Yaowu Chen, Junwei Han, Lei Guo, Shu Zhang, Tianming Liu, and Jinglei Lv. "Supervised Brain Network Learning Based on Deep Recurrent Neural Networks." IEEE Access 8 (2020): 69967–78. http://dx.doi.org/10.1109/access.2020.2984948.

12

Tan, Junyang, Dan Xia, Shiyun Dong, Honghao Zhu, and Binshi Xu. "Research On Pre-Training Method and Generalization Ability of Big Data Recognition Model of the Internet of Things." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 5 (July 20, 2021): 1–15. http://dx.doi.org/10.1145/3433539.

Abstract:
The Internet of Things and big data are currently hot concepts and research fields. The mining, classification, and recognition of big data in Internet of Things systems are key steps that currently attract wide attention. The artificial neural network is well suited to multi-dimensional data classification and recognition because of its strong feature extraction and self-learning ability. Pre-training is an effective method to address the gradient diffusion problem in deep neural networks and can result in better generalization. This article focuses on the performance of supervised pre-training, which uses labelled data. In particular, this pre-training procedure simulates how judgment patterns within the human brain change as they progress from primary to mature. In this article, the state of the art in neural network pre-training is reviewed. Then, the principles of the autoencoder and supervised pre-training are introduced in detail. Furthermore, an extended structure of supervised pre-training is proposed. A set of experiments is carried out to compare the performance of different pre-training methods, including a comparison between the original and pre-trained networks as well as between networks with two types of sub-network structures. In addition, a homemade database is established to analyze the influence of pre-training on the generalization ability of neural networks. Finally, an ordinary convolutional neural network is used to verify the applicability of supervised pre-training.
13

Nobukawa, Sou, Haruhiko Nishimura, and Teruya Yamanishi. "Pattern Classification by Spiking Neural Networks Combining Self-Organized and Reward-Related Spike-Timing-Dependent Plasticity." Journal of Artificial Intelligence and Soft Computing Research 9, no. 4 (October 1, 2019): 283–91. http://dx.doi.org/10.2478/jaiscr-2019-0009.

Abstract:
Many recent studies have applied spiking neural networks with spike-timing-dependent plasticity (STDP) to machine learning problems. The learning abilities of dopamine-modulated STDP (DA-STDP) for reward-related synaptic plasticity have also been attracting attention. Following these studies, we hypothesize that a network structure combining self-organized STDP and reward-related DA-STDP can solve the machine learning problem of pattern classification. Therefore, we studied the ability of a network in which recurrent spiking neural networks are combined with STDP for unsupervised learning, with an output layer joined by DA-STDP for supervised learning, to perform pattern classification. We confirmed that this network could perform pattern classification using the STDP effect to emphasize features of the input spike pattern together with DA-STDP supervised learning. Our proposed spiking neural network may therefore prove a useful approach for machine learning problems.
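For readers unfamiliar with STDP, here is a minimal sketch of the classic pair-based update and a reward-scaled variant in the spirit of DA-STDP; the amplitudes and time constants are placeholder values of ours, not those of the cited study:

```python
import numpy as np

# Minimal pair-based STDP weight update (illustrative; parameter values
# are placeholders, not those used in the cited study).
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre: depress
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# DA-STDP can be sketched by scaling the same update with a reward
# signal r, so rewarded pairings are reinforced.
def da_stdp_dw(t_pre, t_post, reward):
    return reward * stdp_dw(t_pre, t_post)

print(stdp_dw(10.0, 15.0))          # potentiation
print(da_stdp_dw(10.0, 15.0, 0.5))  # reward-scaled potentiation
```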
14

BELATRECHE, AMMAR, LIAM P. MAGUIRE, MARTIN MCGINNITY, and QING XIANG WU. "EVOLUTIONARY DESIGN OF SPIKING NEURAL NETWORKS." New Mathematics and Natural Computation 02, no. 03 (November 2006): 237–53. http://dx.doi.org/10.1142/s179300570600049x.

Abstract:
Unlike traditional artificial neural networks (ANNs), which use a high abstraction of real neurons, spiking neural networks (SNNs) offer a biologically plausible model of realistic neurons. They differ from classical artificial neural networks in that SNNs handle and communicate information by means of the timing of individual pulses, an important feature of neuronal systems that is ignored by models based on a rate coding scheme. However, in order to make the most of these realistic neuronal models, good training algorithms are required. Most existing learning paradigms tune the synaptic weights in an unsupervised way using an adaptation of the famous Hebbian learning rule, which is based on the correlation between pre- and post-synaptic neuron activity. Nonetheless, supervised learning is more appropriate when prior knowledge about the outcome of the network is available. In this paper, a new approach for supervised training is presented with a biologically plausible architecture. An adapted evolutionary strategy (ES) is used to adjust the synaptic strengths and delays, which underlie the learning and memory processes in the nervous system. The algorithm is applied to complex non-linearly separable problems, and the results show that the network is able to learn successfully by means of temporal encoding of the presented patterns.
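The evolutionary strategy described here can be sketched as a simple (mu + lambda) loop over a genome of synaptic weights and delays; the fitness function below is a placeholder standing in for an actual SNN simulation, and all sizes are illustrative assumptions:

```python
import numpy as np

# Sketch of a (mu + lambda) evolution strategy over synaptic weights and
# delays, in the spirit of the paper's approach; 'evaluate' is a
# placeholder fitness (in practice, the spiking network's performance).
rng = np.random.default_rng(1)
N_SYNAPSES, MU, LAMBDA, SIGMA = 20, 5, 20, 0.1

def evaluate(genome):
    weights, delays = genome[:N_SYNAPSES], genome[N_SYNAPSES:]
    # Placeholder fitness: in practice, simulate the SNN and score its output.
    return -np.sum((weights - 0.5) ** 2) - np.sum((delays - 2.0) ** 2)

population = rng.normal(0.0, 1.0, size=(MU, 2 * N_SYNAPSES))
for generation in range(100):
    parents = population[rng.integers(0, MU, size=LAMBDA)]
    offspring = parents + rng.normal(0.0, SIGMA, size=parents.shape)  # Gaussian mutation
    pool = np.vstack([population, offspring])
    fitness = np.array([evaluate(g) for g in pool])
    population = pool[np.argsort(fitness)[-MU:]]                      # keep the best mu
print("best fitness:", evaluate(population[-1]))
```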
15

Zenke, Friedemann, and Surya Ganguli. "SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 30, no. 6 (June 2018): 1514–41. http://dx.doi.org/10.1162/neco_a_01086.

Abstract:
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
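The surrogate gradient idea can be illustrated in a few lines of PyTorch: a hard threshold in the forward pass paired with a smooth pseudo-derivative in the backward pass. This is a generic sketch in the spirit of SuperSpike, with an arbitrary slope parameter, not the authors' exact three-factor rule:

```python
import torch

# Sketch of a surrogate gradient for a spiking threshold: the forward
# pass emits a hard spike, the backward pass uses a smooth
# fast-sigmoid-like derivative (the slope 'beta' is a placeholder).
class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()  # hard threshold spike

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        beta = 10.0
        surrogate = 1.0 / (1.0 + beta * u.abs()) ** 2  # smooth pseudo-derivative
        return grad_output * surrogate

u = torch.randn(5, requires_grad=True)  # membrane potentials relative to threshold
spikes = SurrogateSpike.apply(u)
spikes.sum().backward()                 # gradients flow despite the hard threshold
print(spikes, u.grad)
```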
16

Wettayaprasit, W., C. Lursinsap, and C. H. Chu. "Extracting linguistic quantitative rules from supervised neural networks." International Journal of Knowledge-based and Intelligent Engineering Systems 8, no. 3 (January 10, 2005): 161–70. http://dx.doi.org/10.3233/kes-2004-8304.

17

Song, Xingguo, Haibo Gao, Liang Ding, Pol D. Spanos, Zongquan Deng, and Zhijun Li. "Locally supervised neural networks for approximating terramechanics models." Mechanical Systems and Signal Processing 75 (June 2016): 57–74. http://dx.doi.org/10.1016/j.ymssp.2015.12.028.

18

Alshehhi, Rasha, Chris S. Hanson, Laurent Gizon, and Shravan Hanasoge. "Supervised neural networks for helioseismic ring-diagram inversions." Astronomy & Astrophysics 622 (February 2019): A124. http://dx.doi.org/10.1051/0004-6361/201834237.

Abstract:
Context. The inversion of ring fit parameters to obtain subsurface flow maps in ring-diagram analysis for eight years of SDO observations is computationally expensive, requiring ∼3200 CPU hours. Aims. In this paper we apply machine-learning techniques to the inversion step of the ring-diagram pipeline in order to speed up the calculations. Specifically, we train a predictor for subsurface flows using the mode fit parameters and the previous inversion results to replace future inversion requirements. Methods. We utilize artificial neural networks (ANNs) as a supervised learning method for predicting the flows in 15° ring tiles. We discuss each step of the proposed method to determine the optimal approach. In order to demonstrate that the machine-learning results still contain the subtle signatures key to local helioseismic studies, we use the machine-learning results to study the recently discovered solar equatorial Rossby waves. Results. The ANN is computationally efficient, able to make future flow predictions for an entire Carrington rotation in a matter of seconds, which is much faster than the current ∼31 CPU hours. Initial training of the networks requires ∼3 CPU hours. The trained ANN can achieve an RMS error equal to approximately half that reported for the velocity inversions, demonstrating the accuracy of the machine learning (and perhaps the overestimation of the original errors from the ring-diagram pipeline). We find the signature of equatorial Rossby waves in the machine-learning flows covering six years of data, demonstrating that small-amplitude signals are maintained. The recovery of Rossby waves in the machine-learning flow maps can be achieved with only one Carrington rotation (27.275 days) of training data. Conclusions. We show that machine learning can be applied to, and performs more efficiently than, the current ring-diagram inversion. The computational burden of the machine learning includes ∼3 CPU hours for initial training and then around 10^-4 CPU hours for future predictions.
19

Cheung, Man-Fung, Kevin M. Passino, and Stephen Yurkovich. "Supervised Training of Neural Networks via Ellipsoid Algorithms." Neural Computation 6, no. 4 (July 1994): 748–60. http://dx.doi.org/10.1162/neco.1994.6.4.748.

Abstract:
In this paper we show that two ellipsoid algorithms can be used to train single-layer neural networks with general staircase nonlinearities. The ellipsoid algorithms have several advantages over other conventional training approaches including (1) explicit convergence results and automatic determination of linear separability, (2) an elimination of problems with picking initial values for the weights, (3) guarantees that the trained weights are in some “acceptable region,” (4) certain “robustness” characteristics, and (5) a training approach for neural networks with a wider variety of activation functions. We illustrate the training approach by training the MAJ function and then by showing how to train a controller for a reaction chamber temperature control problem.
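For orientation, the central-cut ellipsoid update that such training schemes build on can be sketched as follows; how cuts are derived from training samples and staircase nonlinearities follows the paper, not this sketch:

```python
import numpy as np

# One central-cut ellipsoid update. The ellipsoid
# {x : (x-c)^T P^{-1} (x-c) <= 1} is shrunk using a violated
# half-space constraint with normal vector a (standard ellipsoid method).
def ellipsoid_step(c, P, a):
    n = len(c)
    a_norm = a / np.sqrt(a @ P @ a)
    c_new = c - (P @ a_norm) / (n + 1)
    P_new = (n ** 2 / (n ** 2 - 1.0)) * (
        P - (2.0 / (n + 1)) * np.outer(P @ a_norm, P @ a_norm)
    )
    return c_new, P_new

c = np.zeros(3)                  # center: current weight estimate
P = np.eye(3)                    # shape matrix of the ellipsoid
a = np.array([1.0, -1.0, 0.5])   # cut direction, e.g., from a misclassified sample
c, P = ellipsoid_step(c, P, a)
print(c)
```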
20

Sperduti, A., and A. Starita. "Supervised neural networks for the classification of structures." IEEE Transactions on Neural Networks 8, no. 3 (May 1997): 714–35. http://dx.doi.org/10.1109/72.572108.

21

Mazzatorta, Paolo, Marjan Vračko, Aneta Jezierska, and Emilio Benfenati. "Modeling Toxicity by Using Supervised Kohonen Neural Networks." Journal of Chemical Information and Computer Sciences 43, no. 2 (March 2003): 485–92. http://dx.doi.org/10.1021/ci0256182.

22

Gazula, S., and M. R. Kabuka. "Design of supervised classifiers using Boolean neural networks." IEEE Transactions on Pattern Analysis and Machine Intelligence 17, no. 12 (1995): 1239–46. http://dx.doi.org/10.1109/34.476519.

23

Smith, Alice E., and Cihan H. Dagli. "Controlling industrial processes through supervised, feedforward neural networks." Computers & Industrial Engineering 21, no. 1-4 (January 1991): 247–51. http://dx.doi.org/10.1016/0360-8352(91)90096-o.

24

Hu, Yaxian, Senlin Luo, Longfei Han, Limin Pan, and Tiemei Zhang. "Deep supervised learning with mixture of neural networks." Artificial Intelligence in Medicine 102 (January 2020): 101764. http://dx.doi.org/10.1016/j.artmed.2019.101764.

25

Qin, Shanshan, Nayantara Mudur, and Cengiz Pehlevan. "Contrastive Similarity Matching for Supervised Learning." Neural Computation 33, no. 5 (April 13, 2021): 1300–1328. http://dx.doi.org/10.1162/neco_a_01374.

Abstract:
We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
26

Wu, Wei, Guangmin Hu, and Fucai Yu. "Ricci Curvature-Based Semi-Supervised Learning on an Attributed Network." Entropy 23, no. 3 (February 27, 2021): 292. http://dx.doi.org/10.3390/e23030292.

Abstract:
In recent years, building on traditional neural network models, increasing attention has been paid to the design of neural network architectures for processing graph-structured data, called graph neural networks (GNNs). Graph convolutional networks (GCNs) are one family of GNN models. A GCN extends the convolution operation from traditional data (such as images) to graph data; it is essentially a feature extractor that aggregates the features of neighborhood nodes into those of target nodes. In the process of aggregating features, a GCN uses the Laplacian matrix to assign different importance to the nodes in the neighborhood of the target nodes. Since graph-structured data are inherently non-Euclidean, we seek to use a non-Euclidean mathematical tool, namely Riemannian geometry, to analyze graphs (networks). In this paper, we present a novel model for semi-supervised learning called the Ricci curvature-based graph convolutional neural network (RCGCN). The aggregation pattern of RCGCN is inspired by that of GCN. We regard the network as a discrete manifold and then use Ricci curvature to assign different importance to the nodes within the neighborhood of the target nodes. Ricci curvature is related to the optimal transport distance, which can well reflect the geometric structure of the underlying space of the network. The node importance given by Ricci curvature can better reflect the relationships between the target node and the nodes in its neighborhood. The proposed model scales linearly with the number of edges in the network. Experiments demonstrate that RCGCN achieves a significant performance gain over baseline methods on benchmark datasets.
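A minimal sketch of the general pattern: one graph-convolution layer whose aggregation is reweighted by per-edge importance scores. Here the scores are random placeholders standing in for RCGCN's Ricci-curvature-based weights:

```python
import numpy as np

# One graph-convolution layer with importance-reweighted aggregation.
# In RCGCN the importance derives from Ricci curvature; 'importance'
# below is a random placeholder matrix standing in for those weights.
def gcn_layer(H, A, importance, W):
    A_w = A * importance                                # reweight edges by importance
    A_hat = A_w + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)              # aggregate, transform, ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy graph
importance = rng.uniform(0.5, 1.5, size=A.shape)
importance = (importance + importance.T) / 2   # keep edge weights symmetric
H = rng.normal(size=(3, 4))                    # node features
W = rng.normal(size=(4, 2))                    # layer weights
print(gcn_layer(H, A, importance, W))
```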
27

Kulathunga, Nalinda, Nishath Rajiv Ranasinghe, Daniel Vrinceanu, Zackary Kinsman, Lei Huang, and Yunjiao Wang. "Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks." Algorithms 14, no. 2 (February 5, 2021): 51. http://dx.doi.org/10.3390/a14020051.

Abstract:
The nonlinearity of activation functions used in deep learning models is crucial for the success of predictive models. Several simple nonlinear functions, including the Rectified Linear Unit (ReLU) and Leaky-ReLU (L-ReLU), are commonly used in neural networks to impose nonlinearity. In practice, these functions remarkably enhance model accuracy. However, there is limited insight into the effects of nonlinearity in neural networks on their performance. Here, we investigate the performance of neural network models as a function of nonlinearity using the ReLU and L-ReLU activation functions in the context of different model architectures and data domains. We use entropy as a measure of randomness to quantify the effects of nonlinearity, in different architecture shapes, on the performance of neural networks. We show that ReLU nonlinearity is the better choice of activation function, mostly when the network has a sufficient number of parameters. However, we found that image classification models with transfer learning seem to perform well with L-ReLU in fully connected layers. We show that the entropy of hidden layer outputs in neural networks can fairly represent the fluctuations in information loss as a function of nonlinearity. Furthermore, we investigate the entropy profile of shallow neural networks as a way of representing their hidden layer dynamics.
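The entropy measurement the abstract refers to can be illustrated by binning hidden-layer outputs and computing a histogram entropy; the binning, data, and slope below are arbitrary placeholder choices, not the paper's protocol:

```python
import numpy as np

# Sketch: histogram entropy of hidden-layer outputs, used to compare
# ReLU against Leaky-ReLU activations on the same pre-activations.
def activation_entropy(values, bins=50):
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))  # Shannon entropy in bits

rng = np.random.default_rng(0)
pre_activations = rng.normal(size=(1000, 128)) @ rng.normal(size=(128, 64))
relu_out = np.maximum(pre_activations, 0.0)
leaky_out = np.where(pre_activations > 0, pre_activations, 0.01 * pre_activations)
print("ReLU entropy:  ", activation_entropy(relu_out.ravel()))
print("L-ReLU entropy:", activation_entropy(leaky_out.ravel()))
```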
28

MAGOULAS, GEORGE D., and MICHAEL N. VRAHATIS. "ADAPTIVE ALGORITHMS FOR NEURAL NETWORK SUPERVISED LEARNING: A DETERMINISTIC OPTIMIZATION APPROACH." International Journal of Bifurcation and Chaos 16, no. 07 (July 2006): 1929–50. http://dx.doi.org/10.1142/s0218127406015805.

Abstract:
Networks of neurons can perform computations that even modern computers find very difficult to simulate. Most of the existing artificial neurons and artificial neural networks are considered biologically unrealistic; nevertheless, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms have been proposed in the literature. This paper provides an overview of approaches to backpropagation training, emphasizing first-order adaptive learning algorithms that build on the theory of nonlinear optimization, and proposes a framework for their analysis in the context of deterministic optimization.
29

Shin, Sungho, Jongwon Kim, Yeonguk Yu, Seongju Lee, and Kyoobin Lee. "Self-Supervised Transfer Learning from Natural Images for Sound Classification." Applied Sciences 11, no. 7 (March 29, 2021): 3043. http://dx.doi.org/10.3390/app11073043.

Abstract:
We propose the implementation of transfer learning from natural images to audio-based images using self-supervised learning schemes. Through self-supervised learning, convolutional neural networks (CNNs) can learn the general representation of natural images without labels. In this study, a convolutional neural network was pre-trained on natural images (ImageNet) via self-supervised learning and subsequently fine-tuned on the target audio samples. Pre-training with the self-supervised learning scheme significantly improved sound classification performance when validated on the following benchmarks: ESC-50, UrbanSound8k, and GTZAN. The network pre-trained via self-supervised learning achieved a level of accuracy similar to those pre-trained using a supervised method that requires labels. Therefore, we demonstrated that transfer learning from natural images contributes to improvements in audio-related tasks, and that self-supervised learning with natural images is an adequate pre-training scheme in terms of simplicity and effectiveness.
30

Aragon-Calvo, M. A., and J. C. Carvajal. "Self-supervised learning with physics-aware neural networks – I. Galaxy model fitting." Monthly Notices of the Royal Astronomical Society 498, no. 3 (September 7, 2020): 3713–19. http://dx.doi.org/10.1093/mnras/staa2228.

Abstract:
Estimating the parameters of a model describing a set of observations using a neural network is, in general, solved in a supervised way. In cases where we do not have access to the model's true parameters, this approach cannot be applied. Standard unsupervised learning techniques, on the other hand, do not produce meaningful or semantic representations that can be associated with the model's parameters. Here we introduce a novel self-supervised hybrid network architecture that combines traditional neural network elements with analytic or numerical models, which represent a physical process to be learned by the system. Self-supervised learning is achieved by generating an internal representation equivalent to the parameters of the physical model. This semantic representation is used to evaluate the model and compare it to the input data during training. The semantic autoencoder architecture described here shares the robustness of neural networks while including an explicit model of the data, learns in an unsupervised way, and estimates, by construction, parameters with a direct physical interpretation. As an illustrative application, we perform unsupervised learning for 2D model fitting of exponential light profiles and evaluate the performance of the network as a function of network size and noise.
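The semantic autoencoder idea, a neural encoder feeding an analytic decoder, can be sketched as follows for a 1D exponential light profile; the shapes, optimizer, and profile parameterization are our illustrative assumptions, not the authors' 2D implementation:

```python
import torch
import torch.nn as nn

# Sketch of a "semantic autoencoder": the encoder predicts physically
# meaningful parameters, and the decoder is an analytic model, here a
# 1D exponential light profile I(r) = I0 * exp(-r / h), so training
# needs no ground-truth parameters.
class SemanticAutoencoder(nn.Module):
    def __init__(self, n_pixels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 64), nn.ReLU(), nn.Linear(64, 2), nn.Softplus()
        )
        self.register_buffer("r", torch.linspace(0.0, 10.0, n_pixels))

    def forward(self, profile):
        params = self.encoder(profile)       # (I0, h): amplitude and scale length
        I0, h = params[:, :1], params[:, 1:] + 1e-3
        return I0 * torch.exp(-self.r / h), params

model = SemanticAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
r = torch.linspace(0.0, 10.0, 64)
data = 2.0 * torch.exp(-r / 1.5).expand(32, -1) + 0.01 * torch.randn(32, 64)
for _ in range(200):  # self-supervised: compare the rendered model to the input
    opt.zero_grad()
    recon, params = model(data)
    loss = nn.functional.mse_loss(recon, data)
    loss.backward()
    opt.step()
print(params[0])  # learned (I0, h) have a direct physical interpretation
```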
31

Tang, Zheng, Xu Gang Wang, Hiroki Tamura, and Masahiro Ishii. "An Algorithm of Supervised Learning for Multilayer Neural Networks." Neural Computation 15, no. 5 (May 1, 2003): 1125–42. http://dx.doi.org/10.1162/089976603765202686.

Abstract:
A method of supervised learning for multilayer artificial neural networks to escape local minima is proposed. The learning model has two phases: a backpropagation phase and a gradient ascent phase. The backpropagation phase performs steepest descent on a surface in weight space whose height at any point is equal to an error measure, and it finds a set of weights minimizing this error measure. When backpropagation gets stuck in local minima, the gradient ascent phase attempts to fill up the valley by modifying gain parameters in a gradient ascent direction of the error measure. The two phases are repeated until the network gets out of local minima. The algorithm has been tested on benchmark problems, such as exclusive-or (XOR), parity, alphabetic character learning, recognition of Arabic numerals with noise, and a realistic real-world problem: classification of radar returns from the ionosphere. For all of these problems, the systems are shown to be capable of escaping from the backpropagation local minima and to converge faster when using the newly proposed method than when using simulated annealing techniques.
32

Zhang, Pengfei, and Xiaoming Ju. "Adversarial Sample Detection with Gaussian Mixture Conditional Generative Adversarial Networks." Mathematical Problems in Engineering 2021 (September 13, 2021): 1–18. http://dx.doi.org/10.1155/2021/8268249.

Abstract:
It is important to detect adversarial samples in the physical world that are far away from the training data distribution. Some adversarial samples can make a machine learning model generate a highly overconfident distribution in the testing stage. Thus, we propose a mechanism for detecting adversarial samples based on semi-supervised generative adversarial networks (GANs) with an encoder-decoder structure; this mechanism can be applied to any pretrained neural network without changing the network's structure. The semi-supervised GANs also give us insight into the behavior of adversarial samples and their flow through the layers of a deep neural network. In the supervised scenario, the latent features of the semi-supervised GAN and the target network's logit information are used as the input of an external support vector machine classifier to detect the adversarial samples. In the unsupervised scenario, we first propose a one-class classifier based on the semi-supervised Gaussian mixture conditional generative adversarial network (GM-CGAN) to fit the joint feature information of the normal data, and we then use a discriminator network to detect normal data and adversarial samples. In both the supervised and unsupervised scenarios, experimental results show that our method outperforms the latest methods.
33

Orukwo, Joy Oyinye, and Ledisi Giok Kabari. "Diagnosing Diabetes Using Artificial Neural Networks." European Journal of Engineering Research and Science 5, no. 2 (February 27, 2020): 221–24. http://dx.doi.org/10.24018/ejers.2020.5.2.1774.

Abstract:
Diabetes has always been a silent killer, and the number of people suffering from it has increased tremendously in the last few decades. More often than not, people continue with their normal lifestyle, unaware that their health is at severe risk and that with each passing day their diabetes goes undetected. Artificial neural networks have become extensively useful in medical diagnosis, as they provide a powerful tool to help analyze, model, and make sense of complex clinical data. This study developed a diabetes diagnosis system using a feed-forward neural network with a supervised learning algorithm. The neural network was systematically trained and tested, and a success rate of 90% was achieved.
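A feed-forward, supervised classifier of the kind described can be sketched with scikit-learn; the data below are synthetic placeholders for real clinical features, and the layer sizes are our assumptions, not the paper's architecture:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Sketch of a feed-forward diagnosis model. Random placeholder data
# stands in for real clinical features (e.g., glucose level, BMI, age);
# with real data one would load a dataset such as Pima Indians Diabetes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                  # 8 clinical features per patient
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic diagnosis labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```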
34

Dapkus, Paulius, Liudas Mažeika, and Vytautas Sliesoraitis. "A study of supervised combined neural-network-based ultrasonic method for reconstruction of spatial distribution of material properties." Information Technology And Control 49, no. 3 (September 23, 2020): 381–94. http://dx.doi.org/10.5755/j01.itc.49.3.26792.

Abstract:
This paper examines the performance of commonly used neural-network-based classifiers for investigating structural noise in metals, framed as grain size estimation. The central problem is to identify the grain size of an object's structure from metal features or from the structure itself. Once the structure data are obtained, a proposed feature extraction method is used to extract features of the object. Afterwards, the extracted features are used as inputs to the classifiers. This research focuses on using basic ultrasonic sensors to obtain the structural grain size of objects for use in a neural network. The performance of each neural-network-based classifier is evaluated by its recognition accuracy for individual objects. Traditional neural networks, namely convolutional and fully connected dense networks, are presented as grain size estimation models. To evaluate the robustness of the neural networks, the original sample data are mixed for three grain sizes. Experimental results show that convolutional networks combined with fully connected dense networks and classifiers outperform single neural networks on original samples with high-SNR data. The dense neural network itself demonstrates the best robustness when the object samples do not differ from the training datasets.
35

Hodges, Jaret, and Soumya Mohan. "Machine Learning in Gifted Education: A Demonstration Using Neural Networks." Gifted Child Quarterly 63, no. 4 (September 9, 2019): 243–52. http://dx.doi.org/10.1177/0016986219867483.

Abstract:
Machine learning algorithms are used in language processing, automated driving, and prediction. Though the theory of machine learning has existed since the 1950s, it was not until the advent of advanced computing that its potential began to be realized. Gifted education is a field where machine learning has yet to be utilized, even though one of the underlying problems of gifted education is classification, an area where learning algorithms have become exceptionally accurate. We provide a brief overview of machine learning with a focus on neural networks and supervised learning, followed by a demonstration using simulated data and neural networks for classification issues, with a practical explanation of the mechanics of the neural network and associated R code. Implications for gifted education are then discussed. Finally, the limitations of supervised learning are discussed. Code used in this article can be found at https://osf.io/4pa3b/
36

Zhang, Yingxue, Soumyasundar Pal, Mark Coates, and Deniz Ustebay. "Bayesian Graph Convolutional Neural Networks for Semi-Supervised Classification." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5829–36. http://dx.doi.org/10.1609/aaai.v33i01.33015829.

Abstract:
Recently, techniques for applying convolutional neural networks to graph-structured data have emerged. Graph convolutional neural networks (GCNNs) have been used to address node and graph classification and matrix completion. Although the performance has been impressive, the current implementations have limited capability to incorporate uncertainty in the graph structure. Almost all GCNNs process a graph as though it is a ground-truth depiction of the relationship between nodes, but often the graphs employed in applications are themselves derived from noisy data or modelling assumptions. Spurious edges may be included; other edges may be missing between nodes that have very strong relationships. In this paper we adopt a Bayesian approach, viewing the observed graph as a realization from a parametric family of random graphs. We then target inference of the joint posterior of the random graph parameters and the node (or graph) labels. We present the Bayesian GCNN framework and develop an iterative learning procedure for the case of assortative mixed-membership stochastic block models. We present the results of experiments that demonstrate that the Bayesian formulation can provide better performance when there are very few labels available during the training process.
37

Ben Boubaker, Ourida. "APPLYING NEURAL NETWORKS FOR SUPERVISED LEARNING OF MEDICAL DATA." International Journal of Data Mining & Knowledge Management Process 09, no. 03 (May 31, 2019): 29–38. http://dx.doi.org/10.5121/ijdkp.2019.9303.

38

Hu, Yipeng, Marc Modat, Eli Gibson, Wenqi Li, Nooshin Ghavami, Ester Bonmati, Guotai Wang, et al. "Weakly-supervised convolutional neural networks for multimodal image registration." Medical Image Analysis 49 (October 2018): 1–13. http://dx.doi.org/10.1016/j.media.2018.07.002.

39

Zhang, Malu, Hong Qu, Xiurui Xie, and Jürgen Kurths. "Supervised learning in spiking neural networks with noise-threshold." Neurocomputing 219 (January 2017): 333–49. http://dx.doi.org/10.1016/j.neucom.2016.09.044.

40

Ding, Zhengming, Nasser M. Nasrabadi, and Yun Fu. "Semi-supervised Deep Domain Adaptation via Coupled Neural Networks." IEEE Transactions on Image Processing 27, no. 11 (November 2018): 5214–24. http://dx.doi.org/10.1109/tip.2018.2851067.

41

Gyer, M. S. "Adjuncts and alternatives to neural networks for supervised classification." IEEE Transactions on Systems, Man, and Cybernetics 22, no. 1 (1992): 35–46. http://dx.doi.org/10.1109/21.141309.

42

Černá, Lenka, and Milan Chytrý. "Supervised classification of plant communities with artificial neural networks." Journal of Vegetation Science 16, no. 4 (February 24, 2005): 407–14. http://dx.doi.org/10.1111/j.1654-1103.2005.tb02380.x.

43

Amorim, Willian Paraguassu, Gustavo Henrique Rosa, Rogério Thomazella, José Eduardo Cogo Castanho, Fábio Romano Lofrano Dotto, Oswaldo Pons Rodrigues Júnior, Aparecido Nilceu Marana, and João Paulo Papa. "Semi-supervised learning with connectivity-driven convolutional neural networks." Pattern Recognition Letters 128 (December 2019): 16–22. http://dx.doi.org/10.1016/j.patrec.2019.08.012.

44

Ludwig, Oswaldo, and Urbano Nunes. "Novel Maximum-Margin Training Algorithms for Supervised Neural Networks." IEEE Transactions on Neural Networks 21, no. 6 (June 2010): 972–84. http://dx.doi.org/10.1109/tnn.2010.2046423.

45

López-Vázquez, G., M. Ornelas-Rodriguez, A. Espinal, J. A. Soria-Alcaraz, A. Rojas-Domínguez, H. J. Puga-Soberanes, J. M. Carpio, and H. Rostro-Gonzalez. "Evolutionary Spiking Neural Networks for Solving Supervised Classification Problems." Computational Intelligence and Neuroscience 2019 (March 28, 2019): 1–13. http://dx.doi.org/10.1155/2019/4182639.

Abstract:
This paper presents a grammatical evolution (GE)-based methodology to automatically design third generation artificial neural networks (ANNs), also known as spiking neural networks (SNNs), for solving supervised classification problems. The proposal performs the SNN design by exploring the search space of three-layered feedforward topologies with configured synaptic connections (weights and delays) so that no explicit training is carried out. Besides, the designed SNNs have partial connections between input and hidden layers which may contribute to avoid redundancies and reduce the dimensionality of input feature vectors. The proposal was tested on several well-known benchmark datasets from the UCI repository and statistically compared against a similar design methodology for second generation ANNs and an adapted version of that methodology for SNNs; also, the results of the two methodologies and the proposed one were improved by changing the fitness function in the design process. The proposed methodology shows competitive and consistent results, and the statistical tests support the conclusion that the designs produced by the proposal perform better than those produced by other methodologies.
46

Keyan, M. Karthi. "Semi Supervised Document Classification Model Using Artificial Neural Networks." International Journal of Computer Trends and Technology 34, no. 1 (April 25, 2016): 52–58. http://dx.doi.org/10.14445/22312803/ijctt-v34p109.

47

Tang, Rongxin, Hualin Liu, Jingbo Wei, and Wenchao Tang. "Supervised learning with convolutional neural networks for hyperspectral visualization." Remote Sensing Letters 11, no. 4 (February 6, 2020): 363–72. http://dx.doi.org/10.1080/2150704x.2020.1717014.

48

Bruzzone, L., and D. Fernández Prieto. "Supervised training technique for radial basis function neural networks." Electronics Letters 34, no. 11 (1998): 1115. http://dx.doi.org/10.1049/el:19980789.

49

Polikar, R., L. Upda, S. S. Upda, and V. Honavar. "Learn++: an incremental learning algorithm for supervised neural networks." IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews) 31, no. 4 (2001): 497–508. http://dx.doi.org/10.1109/5326.983933.

50

Jain, Lakhmi C., Manjeevan Seera, Chee Peng Lim, and P. Balasubramaniam. "A review of online learning in supervised neural networks." Neural Computing and Applications 25, no. 3-4 (December 31, 2013): 491–509. http://dx.doi.org/10.1007/s00521-013-1534-4.
