Academic literature on the topic 'Supervised neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Supervised neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Supervised neural network"

1

Tian, Jidong, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, and Yaohui Jin. "Weakly Supervised Neural Symbolic Learning for Cognitive Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5888–96. http://dx.doi.org/10.1609/aaai.v36i5.20533.

Full text
Abstract:
Despite the recent success of end-to-end deep neural networks, there are growing concerns about their lack of logical reasoning abilities, especially on cognitive tasks with perception and reasoning processes. A solution is the neural symbolic learning (NeSyL) method, which can effectively utilize pre-defined logic rules to constrain the neural architecture, making it perform better on cognitive tasks. However, it is challenging to apply NeSyL to these cognitive tasks because of the lack of supervision, the non-differentiable manner of the symbolic system, and the difficulty of probabilistically constraining the neural network. In this paper, we propose WS-NeSyL, a weakly supervised neural symbolic learning model for cognitive tasks with logical reasoning. First, WS-NeSyL employs a novel back search algorithm to sample the possible reasoning process through logic rules. This sampled process can supervise the neural network as the pseudo label. Based on this algorithm, we can backpropagate gradients to the neural network of WS-NeSyL in a weakly supervised manner. Second, we introduce a probabilistic logic regularization into WS-NeSyL to help the neural network learn probabilistic logic. To evaluate WS-NeSyL, we have conducted experiments on three cognitive datasets, including temporal reasoning, handwritten formula recognition, and relational reasoning datasets. Experimental results show that WS-NeSyL not only outperforms the end-to-end neural model but also beats the state-of-the-art neural symbolic learning models.
APA, Harvard, Vancouver, ISO, and other styles
2

Ito, Toshio. "Supervised Learning Methods of Bilinear Neural Network Systems Using Discrete Data." International Journal of Machine Learning and Computing 6, no. 5 (October 2016): 235–40. http://dx.doi.org/10.18178/ijmlc.2016.6.5.604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Verma, Vikas, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, and Jian Tang. "GraphMix: Improved Training of GNNs for Semi-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10024–32. http://dx.doi.org/10.1609/aaai.v35i11.17203.

Full text
Abstract:
We present GraphMix, a regularization method for Graph Neural Network based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization. Further, we provide a theoretical analysis of how GraphMix improves the generalization bounds of the underlying graph neural network, without making any assumptions about the "aggregation" layer or the depth of the graph neural networks. We experimentally validate this analysis by applying GraphMix to various architectures such as Graph Convolutional Networks, Graph Attention Networks and Graph-U-Net. Despite its simplicity, we demonstrate that GraphMix can consistently improve or closely match state-of-the-art performance using even simpler architectures such as Graph Convolutional Networks, across three established graph benchmarks: Cora, Citeseer and Pubmed citation network datasets, as well as three newly proposed datasets: Cora-Full, Co-author-CS and Co-author-Physics.
APA, Harvard, Vancouver, ISO, and other styles
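The interpolation-based regularization that GraphMix builds on is, at its core, the mixup idea: training on convex combinations of pairs of examples and their labels. A minimal NumPy sketch of that idea (an illustration only, not the authors' code; the function name and defaults are ours):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Interpolate a pair of examples and their one-hot labels.

    lam ~ Beta(alpha, alpha) controls the mixing ratio; the mixed
    sample lies on the segment between the two inputs, and the mixed
    label remains a valid probability distribution."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam

# toy example: mix an all-ones sample of class 0 with an all-zeros sample of class 1
x, y, lam = mixup(np.ones(4), np.array([1.0, 0.0]),
                  np.zeros(4), np.array([0.0, 1.0]))
```

GraphMix applies this kind of interpolation to the hidden states of a fully-connected network trained jointly with the GNN; the sketch above only shows the input-space form of the regularizer.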
4

Hu, Jinghan. "Semi-supervised Blindness Detection with Neural Network Ensemble." Highlights in Science, Engineering and Technology 12 (August 26, 2022): 171–76. http://dx.doi.org/10.54097/hset.v12i.1448.

Full text
Abstract:
Diabetic retinopathy (DR), a common complication of diabetes mellitus, is a major cause of visual loss among the working-age population. Since DR vision loss is irreversible, early detection of DR is crucial for preventing vision loss in patients. However, manual detection of DR remains time-costly and inefficient. In this paper, an ensemble of six pre-trained neural networks (including EfficientNets, ResNet, and Inception) is built. The compatibility of different networks is tested by creating different combinations of networks and evaluating their relative performance. Pseudo-labelling is used to further increase accuracy. With a limited training data set of only 5592 images, the final neural network ensemble achieved an accuracy of 0.864.
APA, Harvard, Vancouver, ISO, and other styles
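Ensemble averaging with pseudo-labelling, as the abstract describes it, is commonly implemented by averaging the member networks' class probabilities and keeping only high-confidence predictions as labels for unlabelled images. A hedged sketch of that scheme (not the paper's code; the threshold and helper name are illustrative):

```python
import numpy as np

def pseudo_label(prob_matrices, threshold=0.9):
    """Average the predicted class probabilities of several models
    (a simple ensemble) and keep only confident predictions as
    pseudo-labels; returns the labels and the indices they apply to."""
    avg = np.mean(prob_matrices, axis=0)       # (n_samples, n_classes)
    conf = avg.max(axis=1)                     # ensemble confidence
    labels = avg.argmax(axis=1)                # ensemble prediction
    keep = conf >= threshold
    return labels[keep], np.flatnonzero(keep)

# two toy "models": they agree on the first sample, disagree on the second
m1 = np.array([[0.95, 0.05], [0.60, 0.40]])
m2 = np.array([[0.93, 0.07], [0.30, 0.70]])
labels, idx = pseudo_label([m1, m2], threshold=0.9)
```

Only the first sample survives the confidence filter; in a full pipeline the kept samples would be added to the labelled training set and the ensemble retrained.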
5

Hindarto, Djarot, and Handri Santoso. "Performance Comparison of Supervised Learning Using Non-Neural Network and Neural Network." Jurnal Nasional Pendidikan Teknik Informatika (JANAPATI) 11, no. 1 (April 6, 2022): 49. http://dx.doi.org/10.23887/janapati.v11i1.40768.

Full text
Abstract:
Currently, the development of mobile phones and mobile applications based on the Android operating system is increasing rapidly. Many new companies and startups are digitally transforming by using mobile apps to provide disruptive digital services that replace old-fashioned services. This transformation has prompted attackers to create malicious software (malware) using sophisticated methods to target Android phone users. The purpose of this study is to identify Android APK files by classifying them using an Artificial Neural Network (ANN) and Non-Neural Network (NNN) methods. The ANN is a Multi-Layer Perceptron Classifier (MLPC), while the NNN methods are KNN, SVM, and Decision Tree. This study aims to compare the performance of the Non-Neural Network methods and the Neural Network. Classification with the Non-Neural Network algorithms suffers from decreasing performance, which often occurs when training on a larger dataset. To address this drop in model performance, the Artificial Neural Network algorithm is used, namely the Multi-Layer Perceptron Classifier (MLPC). Among the Non-Neural Network algorithms, K-Nearest Neighbor achieves 91.2% accuracy when trained on the 600-APK dataset, and its accuracy decreases to 88% on the 14170-APK dataset. The Support Vector Machine achieves 99.1% accuracy on the 600-APK dataset, decreasing to 90.5% on the 14170-APK dataset. The Decision Tree achieves 99.2% accuracy on the 600-APK dataset, decreasing to 90.8% on the 14170-APK dataset. Experiments using the Multi-Layer Perceptron Classifier show increased accuracy: 99% on the 600-APK dataset, rising to 100% when trained on the 14170-APK dataset.
APA, Harvard, Vancouver, ISO, and other styles
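A comparison like the one in this study, MLPC against KNN, SVM, and Decision Tree, can be sketched with scikit-learn; here synthetic data stands in for the APK features (an illustration only; the actual dataset, features, and settings differ):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a 600-sample APK feature dataset (hypothetical data).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "MLPC": MLPClassifier(max_iter=1000, random_state=0),
}
# fit each model and record its held-out accuracy
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

Repeating the same loop on a larger synthetic set would reproduce the kind of scaling comparison the paper reports, though the exact accuracy figures depend on the data.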
6

Liu, Chenghua, Zhuolin Liao, Yixuan Ma, and Kun Zhan. "Stationary Diffusion State Neural Estimation for Multiview Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7542–49. http://dx.doi.org/10.1609/aaai.v36i7.20719.

Full text
Abstract:
Although many graph-based clustering methods attempt to model the stationary diffusion state in their objectives, their performance is limited by the use of a predefined graph. We argue that the estimation of the stationary diffusion state can be achieved by gradient descent over neural networks. We specifically design the Stationary Diffusion State Neural Estimation (SDSNE) to exploit multiview structural graph information for co-supervised learning. We explore how to design a graph neural network specifically for unsupervised multiview learning and integrate multiple graphs into a unified consensus graph by a shared self-attentional module. The view-shared self-attentional module utilizes the graph structure to learn a view-consistent global graph. Meanwhile, instead of using an auto-encoder as in most unsupervised learning graph neural networks, SDSNE uses a co-supervised strategy with structure information to supervise the model learning. The co-supervised strategy as the loss function guides SDSNE in achieving the stationary state. With the help of the loss and the self-attentional module, we learn to obtain a graph in which the nodes in each connected component are fully connected with the same weight. Experiments on several multiview datasets demonstrate the effectiveness of SDSNE in terms of six clustering evaluation metrics.
APA, Harvard, Vancouver, ISO, and other styles
7

Cho, Myoung Won. "Supervised learning in a spiking neural network." Journal of the Korean Physical Society 79, no. 3 (July 20, 2021): 328–35. http://dx.doi.org/10.1007/s40042-021-00254-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Juexin, Anjun Ma, Qin Ma, Dong Xu, and Trupti Joshi. "Inductive inference of gene regulatory network using supervised and semi-supervised graph neural networks." Computational and Structural Biotechnology Journal 18 (2020): 3335–43. http://dx.doi.org/10.1016/j.csbj.2020.10.022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nobukawa, Sou, Haruhiko Nishimura, and Teruya Yamanishi. "Pattern Classification by Spiking Neural Networks Combining Self-Organized and Reward-Related Spike-Timing-Dependent Plasticity." Journal of Artificial Intelligence and Soft Computing Research 9, no. 4 (October 1, 2019): 283–91. http://dx.doi.org/10.2478/jaiscr-2019-0009.

Full text
Abstract:
Many recent studies have applied spiking neural networks with spike-timing-dependent plasticity (STDP) to machine learning problems. The learning abilities of dopamine-modulated STDP (DA-STDP) for reward-related synaptic plasticity have also been gathering attention. Following these studies, we hypothesize that a network structure combining self-organized STDP and reward-related DA-STDP can solve the machine learning problem of pattern classification. Therefore, we studied the ability of a network, in which recurrent spiking neural networks are combined with STDP for unsupervised learning and an output layer is joined by DA-STDP for supervised learning, to perform pattern classification. We confirmed that this network could perform pattern classification using the STDP effect for emphasizing features of the input spike pattern and DA-STDP supervised learning. Therefore, our proposed spiking neural network may prove to be a useful approach for machine learning problems.
APA, Harvard, Vancouver, ISO, and other styles
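The STDP mechanism this abstract relies on is usually modelled with an exponential pair-based window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A generic sketch of that textbook form (not the authors' exact rule; the constants are illustrative):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (ms): potentiate (positive dw) when the
    presynaptic spike precedes the postsynaptic one (dt > 0),
    depress (negative dw) otherwise; both decay exponentially in |dt|."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# pre-before-post (dt = +5 ms) vs post-before-pre (dt = -5 ms)
dw = stdp_dw(np.array([5.0, -5.0]))
```

DA-STDP variants then gate or scale this update by a reward signal; the window shape itself stays the same.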
10

Zhao, Shijie, Yan Cui, Linwei Huang, Li Xie, Yaowu Chen, Junwei Han, Lei Guo, Shu Zhang, Tianming Liu, and Jinglei Lv. "Supervised Brain Network Learning Based on Deep Recurrent Neural Networks." IEEE Access 8 (2020): 69967–78. http://dx.doi.org/10.1109/access.2020.2984948.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Supervised neural network"

1

Tran, Khanh-Hung. "Semi-supervised dictionary learning and Semi-supervised deep neural network." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASP014.

Full text
Abstract:
Since the 2010s, machine learning (ML) has been one of the topics that attract a lot of attention from scientific researchers. Many ML models have demonstrated their ability to produce excellent results in various fields such as Computer Vision, Natural Language Processing, Robotics... However, most of these models use supervised learning, which requires massive annotation. Therefore, the objective of this thesis is to study and to propose semi-supervised learning approaches that have many advantages over supervised learning. Instead of directly applying a semi-supervised classifier on the original representation of the data, we rather use models that integrate a representation learning stage before the classification stage, to better adapt to the non-linearity of the data. In the first step, we revisit tools that allow us to build our semi-supervised models. First, we present two types of model that possess representation learning in their architecture: dictionary learning and the neural network, as well as the optimization methods for each type of model. Moreover, in the case of neural networks, we specify the problem with adversarial examples. Then, we present the techniques that often accompany semi-supervised learning, such as manifold learning and pseudo-labeling. In the second part, we work on dictionary learning. We synthesize generally three steps to build a semi-supervised model from a supervised model. Then, we propose our semi-supervised model to deal with the classification problem, typically in the case of a low number of training samples (including both labelled and unlabelled samples). On the one hand, we apply the preservation of the data structure from the original space to the sparse code space (manifold learning), which is considered as a regularization for the sparse codes. On the other hand, we integrate a semi-supervised classifier in the sparse code space.
In addition, we perform sparse coding for test samples by also taking into account the preservation of the data structure. This method provides an improvement in accuracy compared to other existing methods. In the third step, we work on neural network models. We propose an approach called "manifold attack" which reinforces manifold learning. This approach is inspired by adversarial learning: finding virtual points that disrupt the cost function on manifold learning (by maximizing it) while fixing the model parameters; then the model parameters are updated by minimizing this cost function while fixing these virtual points. We also provide criteria for limiting the space to which the virtual points belong and a method for initializing them. This approach provides not only an improvement in accuracy but also significant robustness to adversarial examples. Finally, we analyze the similarities and differences, as well as the advantages and disadvantages, between dictionary learning and neural network models. We propose some perspectives on both types of models. In the case of semi-supervised dictionary learning, we propose some techniques inspired by neural network models. As for neural networks, we propose to integrate manifold attack into generative models.
APA, Harvard, Vancouver, ISO, and other styles
2

Morns, Ian Philip. "The novel dynamic supervised forward propagation neural network for handwritten character recognition." Thesis, University of Newcastle Upon Tyne, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Syrén, Grönfelt Natalie. "Pretraining a Neural Network for Hyperspectral Images Using Self-Supervised Contrastive Learning." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-179122.

Full text
Abstract:
Hyperspectral imaging is an expanding topic within the field of computer vision that uses images of high spectral granularity. Contrastive learning is a discriminative approach to self-supervised learning, a form of unsupervised learning where the network is trained using self-created pseudo-labels. This work combines these two research areas and investigates how a pretrained network based on contrastive learning can be used for hyperspectral images. The hyperspectral images used in this work are generated from simulated RGB images and spectra from a spectral library. The network is trained with a pretext task based on data augmentations, and is evaluated through transfer learning and fine-tuning for a downstream task. The goal is to determine the impact of the pretext task on the downstream task and to determine the required amount of labelled data. The results show that the downstream task (a classifier) based on the pretrained network barely performs better than a classifier without a pretrained network. In the end, more research needs to be done to confirm or reject the benefit of a pretrained network based on contrastive learning for hyperspectral images. Also, the pretrained network should be tested on real-world hyperspectral data and trained with a pretext task designed for hyperspectral images.
APA, Harvard, Vancouver, ISO, and other styles
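Contrastive pretraining of the SimCLR variety referenced here trains the network to pull two augmented views of the same image together and push other images apart, typically via the NT-Xent loss. A NumPy sketch of that loss (an assumption about the setup, not the thesis implementation):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss for a batch of paired views:
    z1[i] and z2[i] are embeddings of two augmentations of image i.
    Each embedding's positive is its paired view; all other embeddings
    in the batch act as negatives."""
    z = np.concatenate([z1, z2])                       # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()      # cross-entropy on positives

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = nt_xent(z, z)                     # identical views: easy positives
loss_random = nt_xent(z, rng.normal(size=(8, 16)))   # unrelated "views"
```

When the two views of each image agree, the positive sits at the maximal similarity and the loss is low; with unrelated pairs it rises, which is the signal the pretext task optimizes.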
4

Bylund, Andreas, Anton Erikssen, and Drazen Mazalica. "Hyperparameters impact in a convolutional neural network." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18670.

Full text
Abstract:
Machine learning and image recognition is a big and growing subject in today's society. The aim of this thesis is therefore to compare convolutional neural networks with different hyperparameter settings and see how the hyperparameters affect the networks' test accuracy in identifying images of traffic signs. The reason traffic signs are chosen as objects for evaluating hyperparameters is the authors' previous experience in the domain; the object used for image recognition does not itself matter, and any dataset of images can be used to observe the hyperparameters' effect. Grid search is used to create a large number of models with different width and depth, learning rate, and momentum. Convolution layers, activation functions, and batch size are all tested separately. These experiments make it possible to evaluate how the hyperparameters affect the networks' performance in recognizing images of traffic signs. The models are created using the Keras API and then trained and tested on the dataset Traffic Signs Preprocessed. The results show that hyperparameters affect test accuracy, some more than others. Configuring the learning rate and momentum can in some cases produce disastrous results if they are set too high or too low. The activation function also proves to be a crucial hyperparameter that in some cases produces terrible results.
APA, Harvard, Vancouver, ISO, and other styles
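The grid search the thesis describes, sweeping learning rate and momentum and comparing test accuracy, can be sketched as a plain loop over the parameter grid; here scikit-learn's MLP and the digits dataset stand in for the Keras models and traffic-sign data (both substitutions are ours):

```python
from itertools import product
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# digits stands in for the traffic-sign dataset (assumption)
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

grid = {"learning_rate_init": [0.001, 0.01], "momentum": [0.5, 0.9]}
results = {}
for lr, mom in product(grid["learning_rate_init"], grid["momentum"]):
    # one small SGD-trained network per grid point
    clf = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd",
                        learning_rate_init=lr, momentum=mom,
                        max_iter=50, random_state=0)
    clf.fit(X_tr, y_tr)
    results[(lr, mom)] = clf.score(X_te, y_te)

best = max(results, key=results.get)   # grid point with highest test accuracy
```

The same loop extends to width/depth, activation function, and batch size by adding axes to the grid, at a multiplicative cost in models trained.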
5

Schembri, Massimo. "Anomaly Prediction in Production Supercomputer with Convolution and Semi-supervised autoencoder." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22379/.

Full text
Abstract:
An HPC (High Performance Computing) system is a system with very high computational capabilities, suited to tasks that are very demanding in terms of resources. Some of the fundamental properties of such a system are certainly availability and reliability, which can be put at risk by hardware and software problems. In this thesis work, an anomaly detection system was built and its performance analysed in terms of its ability to detect and predict anomalies on various nodes of an HPC system, in particular using data from the MARCONI system of the CINECA consortium.
APA, Harvard, Vancouver, ISO, and other styles
6

Guo, Lilin. "A Biologically Plausible Supervised Learning Method for Spiking Neurons with Real-world Applications." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2982.

Full text
Abstract:
Learning is central to infusing intelligence to any biologically inspired system. This study introduces a novel Cross-Correlated Delay Shift (CCDS) learning method for spiking neurons with the ability to learn and reproduce arbitrary spike patterns in a supervised fashion with applicability to spatiotemporal information encoded at the precise timing of spikes. By integrating the cross-correlated term, axonal and synapse delays, the CCDS rule is proven to be both biologically plausible and computationally efficient. The proposed learning algorithm is evaluated in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters and classification performance. The results indicate that the proposed CCDS learning rule greatly improves classification accuracy when compared to the standards reached with the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. Network structure is the crucial part for any application domain of Artificial Spiking Neural Network (ASNN). Thus, temporal learning rules in multilayer spiking neural networks are investigated. As extensions of single-layer learning rules, the multilayer CCDS (MutCCDS) is also developed. Correlated neurons are connected through fine-tuned weights and delays. In contrast to the multilayer Remote Supervised Method (MutReSuMe) and multilayer tempotron rule (MutTmptr), the newly developed MutCCDS shows better generalization ability and faster convergence. The proposed multilayer rules provide an efficient and biologically plausible mechanism, describing how delays and synapses in the multilayer networks are adjusted to facilitate learning. Interictal spikes (IS) are morphologically defined brief events observed in electroencephalography (EEG) records from patients with epilepsy.
The detection of IS remains an essential task for 3D source localization as well as in developing algorithms for seizure prediction and guided therapy. In this work, we present a new IS detection method using the Wavelet Encoding Device (WED) method together with CCDS learning rule and a specially designed Spiking Neural Network (SNN) structure. The results confirm the ability of such SNN to achieve good performance for automatically detecting such events from multichannel EEG records.
APA, Harvard, Vancouver, ISO, and other styles
7

Hansen, Vedal Amund. "Comparing performance of convolutional neural network models on a novel car classification task." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-213468.

Full text
Abstract:
Recent neural network advances have led to models that can be used for a variety of image classification tasks, useful for many of today's media technology applications. In this paper, I train hallmark neural network architectures on a newly collected vehicle image dataset to do both coarse- and fine-grained classification of vehicle type. The results show that the neural networks can learn to distinguish both between many very different and between a few very similar classes, reaching accuracies of 50.8% on 28 classes and 61.5% on the most challenging 5, despite noisy images and labeling of the dataset.
APA, Harvard, Vancouver, ISO, and other styles
8

Karlsson, Erik, and Gilbert Nordhammar. "Naive semi-supervised deep learning med sammansättning av pseudo-klassificerare." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17177.

Full text
Abstract:
A common problem in supervised learning is the lack of labelled training data. Naive semi-supervised deep learning is a training technique that aims to mitigate this problem by generating pseudo-labelled data and then letting a neural network train on this together with a smaller amount of labelled data. This work investigates whether this technique can be improved through the use of voting. Several neural networks are trained with the proposed technique, naive semi-supervised deep learning, or supervised learning, and their accuracy is then evaluated. The results showed degradations almost everywhere voting was used. However, the conditions for voting do not appear to have been particularly favourable, which makes it difficult to draw a firm conclusion about its effects. Even though voting did not yield improvements, NSSDL has proven to be very effective. There are several application areas where the technique could be used with good results in the future.
APA, Harvard, Vancouver, ISO, and other styles
9

Flores, Quiroz Martín. "Descriptive analysis of the acquisition of the base form, third person singular, present participle regular past, irregular past, and past participle in a supervised artificial neural network and an unsupervised artificial neural network." Tesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/115653.

Full text
Abstract:
Thesis submitted for the degree of Master in Linguistics, English Language concentration.
Studying children's language acquisition in natural settings is neither cost- nor time-effective. Therefore, language acquisition may be studied in an artificial setting, reducing the costs related to this type of research. By artificial, I do not mean that children will be placed in an artificial setting, first because this would not be ethical and second because the problem of the time needed for this research would still be present. Rather, by artificial I mean that the simulation tools found in artificial intelligence can be used. Simulators such as artificial neural networks (ANNs) possess the capacity to simulate different human cognitive skills, such as pattern or speech recognition, and can be implemented in personal computers with software such as MATLAB, a numerical computing environment. ANNs are computer simulation models that try to resemble the neural processes behind several human cognitive skills. There are two main types of ANNs: supervised and unsupervised. The learning processes in the first are guided by the computer programmer, while the learning processes of the latter are random.
APA, Harvard, Vancouver, ISO, and other styles
10

Dabiri, Sina. "Semi-Supervised Deep Learning Approach for Transportation Mode Identification Using GPS Trajectory Data." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/86845.

Full text
Abstract:
Identification of travelers' transportation modes is a fundamental step for various problems that arise in the domain of transportation, such as travel demand analysis, transport planning, and traffic management. This thesis aims to identify travelers' transportation modes purely based on their GPS trajectories. First, a segmentation process is developed to partition a user's trip into GPS segments with only one transportation mode. A majority of studies have proposed mode inference models based on hand-crafted features, which might be vulnerable to traffic and environmental conditions. Furthermore, the classification task in almost all models has been performed in a supervised fashion, while a large amount of unlabeled GPS trajectories has remained unused. Accordingly, a deep SEmi-Supervised Convolutional Autoencoder (SECA) architecture is proposed to not only automatically extract relevant features from GPS segments but also exploit useful information in unlabeled data. The SECA integrates a convolutional-deconvolutional autoencoder and a convolutional neural network into a unified framework to concurrently perform supervised and unsupervised learning. The two components are simultaneously trained using both labeled and unlabeled GPS segments, which have already been converted into an efficient representation for the convolutional operation. An optimum schedule for varying the balancing parameter between reconstruction and classification errors is also implemented. The performance of the proposed SECA model, the trip segmentation, the method for converting a raw trajectory into a new representation, the hyperparameter schedule, and the model configuration are evaluated by comparing to several baselines and alternatives for various amounts of labeled and unlabeled data. The experimental results demonstrate the superiority of the proposed model over the state-of-the-art semi-supervised and supervised methods with respect to metrics such as accuracy and F-measure.
Master of Science
Identifying users' transportation modes (e.g., bike, bus, train, and car) is a key step towards many transportation-related problems including (but not limited to) transport planning, transit demand analysis, auto ownership, and transportation emissions analysis. Traditionally, the information for analyzing travelers' behavior in choosing transport mode(s) was obtained through travel surveys. High cost, low response rate, time-consuming manual data collection, and misreporting are the main demerits of the survey-based approaches. With the rapid growth of ubiquitous GPS-enabled devices (e.g., smartphones), a constant stream of users' trajectory data can be recorded. A user's GPS trajectory is a sequence of GPS points, recorded by means of a GPS-enabled device, in which a GPS point contains the device's geographic location at a particular moment. In this research, users' GPS trajectories, rather than traditional resources, are harnessed to predict their transportation mode by means of statistical models. With respect to the statistical models, a wide range of studies have developed travel mode detection models using hand-designed attributes and classical learning techniques. Nonetheless, hand-crafted features cause some major shortcomings, including vulnerability to traffic uncertainties and biased engineering justification in generating effective features. A potential solution to address these issues is to leverage deep learning frameworks that are capable of capturing abstract features from the raw input in an automated fashion. Thus, in this thesis, deep learning architectures are exploited in order to identify transport modes based only on raw GPS tracks. It is worth noting that a significant portion of trajectories in GPS data might not be annotated by a transport mode, and the acquisition of labeled data is a more expensive and labor-intensive task in comparison with collecting unlabeled data.
Thus, utilizing the unlabeled GPS trajectories (i.e., the GPS trajectories that have not been annotated by a transport mode) is a cost-effective approach for improving the prediction quality of the travel mode detection model. Therefore, the unlabeled GPS data are also leveraged by developing a novel deep-learning architecture that is capable of extracting information from both labeled and unlabeled data. The experimental results demonstrate the superiority of the proposed models over the state-of-the-art methods in the literature with respect to several performance metrics.
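The core idea of jointly minimizing a reconstruction error on all segments and a classification error on the labeled ones, with a schedule balancing the two, can be sketched in a few lines. This is a minimal numpy illustration; the function names and the linear schedule are invented here for clarity, not taken from the thesis.

```python
import numpy as np

def seca_style_loss(x, x_recon, y_true, y_prob, alpha, beta):
    """Joint objective in the spirit of SECA: a weighted sum of an
    unsupervised reconstruction error (computed on all segments) and a
    supervised cross-entropy (computed on labeled segments only)."""
    recon = np.mean((x - x_recon) ** 2)                       # autoencoder branch
    ce = -np.mean(np.log(y_prob[np.arange(len(y_true)), y_true] + 1e-12))
    return alpha * recon + beta * ce

def schedule(epoch, total=10):
    """Toy balancing schedule: shift weight from reconstruction toward
    classification as training progresses."""
    beta = epoch / total
    return 1.0 - beta, beta
```

With perfect reconstruction and confident correct predictions both terms vanish, so the combined loss is zero regardless of the schedule point.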
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Supervised neural network"

1

Marks, Robert J., ed. Neural smithing: Supervised learning in feedforward artificial neural networks. Cambridge, Mass: The MIT Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Suresh, Sundaram, Narasimhan Sundararajan, and Ramasamy Savitha. Supervised Learning with Complex-valued Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-29491-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24797-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Suresh, Sundaram. Supervised Learning with Complex-valued Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Singh, Surinder. Exploratory spatial data analysis using supervised neural networks. London: University of East London, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

SFI/CNLS Workshop on Formal Approaches to Supervised Learning (1992 Santa Fe, N.M.). The mathematics of generalization: The proceedings of the SFI/CNLS Workshop on Formal Approaches to Supervised Learning. Edited by Wolpert David H. Reading, Mass: Addison-Wesley Pub. Co., 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Supervised and unsupervised pattern recognition: Feature extraction and computational intelligence. Boca Raton, Fla: CRC Press, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Leung, Wing Kai. The specification, analysis and metrics of supervised feedforward artificial neural networks for applied science and engineering applications. Birmingham: University of Central England in Birmingham, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Supervised Learning With Complex-valued Neural Networks. Springer, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Supervised neural network"

1

Magrans de Abril, Ildefons, and Ann Nowé. "Supervised Neural Network Structure Recovery." In Neural Connectomics Challenge, 37–45. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-53070-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Muselli, Marco, and Sandro Ridella. "Supervised Learning Using a Genetic Algorithm." In International Neural Network Conference, 790. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_86.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Baldi, Pierre, Yves Chauvin, and Kurt Hornik. "Supervised and Unsupervised Learning in Linear Networks." In International Neural Network Conference, 825–28. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_99.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Battiti, Roberto, and Francesco Masulli. "BFGS Optimization for Faster and Automated Supervised Learning." In International Neural Network Conference, 757–60. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_68.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pandya, Abhijit S., and Raisa Szabo. "ALOPEX Algorithm for Supervised Learning in Layer Networks." In International Neural Network Conference, 791. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_88.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Midenet, S., and A. Grumbach. "Supervised Learning Based on Kohonen’s Self-Organising Feature Maps." In International Neural Network Conference, 773–76. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yusoff, Nooraini, and André Grüning. "Supervised Associative Learning in Spiking Neural Network." In Artificial Neural Networks – ICANN 2010, 224–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15819-3_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bianchini, Monica, and Marco Maggini. "Supervised Neural Network Models for Processing Graphs." In Intelligent Systems Reference Library, 67–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36657-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Suresh, Sundaram, Narasimhan Sundararajan, and Ramasamy Savitha. "Complex-valued Self-regulatory Resource Allocation Network (CSRAN)." In Supervised Learning with Complex-valued Neural Networks, 135–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-29491-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Magoulas, G. D., M. N. Vrahatis, T. N. Grapsa, and G. S. Androulakis. "Neural Network Supervised Training Based on a Dimension Reducing Method." In Mathematics of Neural Networks, 245–49. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6099-9_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Supervised neural network"

1

FILO, G. "Analysis of Neural Network Structure for Implementation of the Prescriptive Maintenance Strategy." In Terotechnology XII. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644902059-40.

Full text
Abstract:
This paper provides an initial analysis of neural network implementation possibilities in practical implementations of the prescriptive maintenance strategy. The main issues covered are the preparation and processing of input data, the choice of artificial neural network architecture, and the models of neurons used in each layer. Methods of categorisation of the input data, and of normalisation within each distinguished category, were proposed. Based on the normalisation results, the use of specific neuron activation functions was suggested. As part of the network structure, the applied solutions were analysed, including the number of neuron layers used and the number of neurons in each layer. In further work, the proposed structures of neural networks may undergo a process of supervised or partially supervised training to verify the accuracy and confidence level of the results they generate.
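The per-category normalisation step described in the abstract can be illustrated as follows; the function name and the [0, 1] target range (matching, e.g., a sigmoid activation) are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def normalise_by_category(values, categories):
    """Min-max normalisation carried out separately within each
    distinguished category, so each category's readings map to [0, 1]
    independently of the other categories' scales."""
    out = np.empty_like(values, dtype=float)
    for cat in set(categories):
        mask = np.asarray(categories) == cat
        v = values[mask]
        span = v.max() - v.min()
        out[mask] = (v - v.min()) / span if span else 0.0
    return out
```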
APA, Harvard, Vancouver, ISO, and other styles
2

Huynh, Alex V., John F. Walkup, and Thomas F. Krile. "Optical perceptron-based quadratic neural network." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.mii8.

Full text
Abstract:
Optical quadratic neural networks are currently being investigated because of their advantages over linear neural networks.1 Based on a quadratic neuron already constructed,2 an optical quadratic neural network utilizing four-wave mixing in photorefractive barium titanate (BaTiO3) has been developed. This network implements a feedback loop using a charge-coupled device camera, two monochrome liquid crystal televisions, a computer, and various optical elements. For training, the network employs the supervised quadratic Perceptron algorithm to associate binary-valued input vectors with specified target vectors. The training session is composed of epochs, each of which comprises an entire set of iterations for all input vectors. The network converges when the interconnection matrix remains unchanged for every successive epoch. Using a spatial multiplexing scheme for two bipolar neurons, the network can classify up to eight different input patterns. To the best of our knowledge, this proof-of-principle experiment represents one of the first working trainable optical quadratic networks utilizing a photorefractive medium.
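The supervised quadratic Perceptron algorithm mentioned in the abstract can be pictured as the classic Perceptron update applied to a quadratic feature expansion of the input. The sketch below is a generic numerical reconstruction of that idea, not the optical implementation's actual procedure.

```python
import numpy as np

def quad_features(x):
    """Quadratic expansion: bias, linear terms, and the upper triangle
    of all pairwise products (the 'quadratic neuron' inputs)."""
    return np.concatenate(([1.0], x, np.outer(x, x)[np.triu_indices(len(x))]))

def train_quadratic_perceptron(X, y, epochs=50):
    """Supervised Perceptron rule on quadratic features; training stops
    after an epoch with no weight changes (convergence), mirroring the
    'interconnection matrix unchanged for a full epoch' criterion."""
    w = np.zeros(len(quad_features(X[0])))
    for _ in range(epochs):
        changed = False
        for x, t in zip(X, y):              # targets t in {-1, +1}
            phi = quad_features(x)
            if np.sign(phi @ w) != t:
                w += t * phi                # classic Perceptron update
                changed = True
        if not changed:
            break
    return w
```

On XOR-labelled corners of the square, a task no linear perceptron can solve, the pairwise-product feature makes the classes separable, so the rule converges.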
APA, Harvard, Vancouver, ISO, and other styles
3

Zhao, Chengshuai, Shuai Liu, Feng Huang, Shichao Liu, and Wen Zhang. "CSGNN: Contrastive Self-Supervised Graph Neural Network for Molecular Interaction Prediction." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/517.

Full text
Abstract:
Molecular interactions are significant resources for analyzing sophisticated biological systems. Identification of multifarious molecular interactions attracts increasing attention in biomedicine, bioinformatics, and human healthcare communities. Recently, a plethora of methods have been proposed to reveal molecular interactions in one specific domain. However, existing methods heavily rely on features or structures involving molecules, which limits the capacity of transferring the models to other tasks. Therefore, generalized models for the multifarious molecular interaction prediction (MIP) are in demand. In this paper, we propose a contrastive self-supervised graph neural network (CSGNN) to predict molecular interactions. CSGNN injects a mix-hop neighborhood aggregator into a graph neural network (GNN) to capture high-order dependency in the molecular interaction networks and leverages a contrastive self-supervised learning task as a regularizer within a multi-task learning paradigm to enhance the generalization ability. Experiments on seven molecular interaction networks show that CSGNN outperforms classic and state-of-the-art models. Comprehensive experiments indicate that the mix-hop aggregator and the self-supervised regularizer can effectively facilitate the link inference in multifarious molecular networks.
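The mix-hop neighborhood aggregator can be pictured as concatenating node features propagated over several powers of a normalized adjacency matrix, so each node sees its k-hop neighborhoods side by side. The sketch below is a generic numpy rendering of that idea, not CSGNN's exact formulation.

```python
import numpy as np

def mixhop_aggregate(A, X, hops=(0, 1, 2)):
    """Concatenate features propagated over A_norm^k for each hop k.
    A is a binary adjacency matrix, X the node-feature matrix."""
    A_hat = A + np.eye(len(A))            # add self-loops
    A_norm = A_hat / A_hat.sum(axis=1)[:, None]   # row-normalize
    outs = []
    P = np.eye(len(A))                    # P holds A_norm^k
    for k in range(max(hops) + 1):
        if k in hops:
            outs.append(P @ X)            # k-hop propagated features
        P = A_norm @ P
    return np.concatenate(outs, axis=1)   # shape: (n_nodes, len(hops) * n_feats)
```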
APA, Harvard, Vancouver, ISO, and other styles
4

LEMPA, P. "Analysis of Neural Network Training Algorithms for Implementation of the Prescriptive Maintenance Strategy." In Terotechnology XII. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644902059-41.

Full text
Abstract:
This paper presents a proposal to combine supervised and semi-supervised training strategies to obtain a neural network for use in the prescriptive maintenance approach. This combination is required because the available data are only partially labelled for supervised learning and, additionally, the data set is expected to grow quickly. The main issue is deciding which training methodologies are suitable for supervised learning, bearing in mind that both this data and semi-supervised methods are to be used. The proposed methods of training neural networks with supervised and semi-supervised strategies will be tested and compared in further work to obtain the best results.
APA, Harvard, Vancouver, ISO, and other styles
5

Ahmed, Sultan Uddin, Md Shahjahan, and Kazuyuki Murase. "Chaotic dynamics of supervised neural network." In 2010 13th International Conference on Computer and Information Technology (ICCIT). IEEE, 2010. http://dx.doi.org/10.1109/iccitechn.2010.5723893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chunwei, Zhang, and Liu Haijiang. "A New Supervised Spiking Neural Network." In 2009 Second International Conference on Intelligent Computation Technology and Automation. IEEE, 2009. http://dx.doi.org/10.1109/icicta.2009.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ali, Rashid, and Iram Naim. "Neural network based supervised rank aggregation." In 2011 International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT). IEEE, 2011. http://dx.doi.org/10.1109/mspct.2011.6150439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yu, Francis T. S., Taiwei Lu, and Don A. Gregory. "Self-Learning Optical Neural Network." In Spatial Light Modulators and Applications. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/slma.1990.mb4.

Full text
Abstract:
One of the features of neural computing must be the adaptability to a changeable environment and the ability to recognize unknown objects. In general, there are two types of learning processes used in the human brain: supervised and unsupervised learning [1]. In a supervised learning process, the artificial neural network has to be taught when to learn and when to process the information. Nevertheless, if an unknown object is presented to the artificial neural network during processing, the network may produce an erroneous output. On the other hand, in unsupervised learning (also called self-learning), students learn by themselves, based on simple learning rules and their past experiences. Kohonen's model is one of the simplest self-organizing algorithms [1]; it is capable of performing statistical pattern recognition and classification, and it can be modified for optical neural network implementation. A compact optical neural network of 64 neurons using liquid crystal televisions is used for the unsupervised learning process [2].
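The self-organizing update at the heart of Kohonen's model can be sketched in two steps: find the best-matching unit (BMU), then pull it, and its grid neighbours via a Gaussian neighborhood, toward the input. This 1-D numpy version is illustrative only and makes no reference to the optical implementation.

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One Kohonen self-organizing update on a 1-D grid of units.
    Unsupervised: no target label is involved, only the input x."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))    # winner neuron
    grid = np.arange(len(weights))
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))     # neighborhood kernel
    return weights + lr * h[:, None] * (x - weights), bmu
```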
APA, Harvard, Vancouver, ISO, and other styles
9

Perumalla, Aniruddha, Ahmet Koru, and Eric Johnson. "Network Topology Identification using Supervised Pattern Recognition Neural Networks." In 13th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010231902580264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Salam, F. M. A., and S. Bai. "A feedback neural network with supervised learning." In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137855.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Supervised neural network"

1

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Full text
Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter-dependent unitary transformations which act on an input quantum state. For binary classification a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network's predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near-term gate model quantum computer, where its power can be explored beyond what can be explored with simulation.
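The predictor described here, parameterized unitaries followed by a Pauli-Z measurement on a readout qubit, can be mimicked classically for a single qubit. This numpy sketch uses only Y-rotations and is far simpler than the circuits in the report; it shows only the shape of the computation.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation, a parameter-dependent unitary."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def qnn_predict(state, thetas):
    """Toy single-qubit QNN predictor: apply a sequence of parameterized
    unitaries to the input state, then return the Pauli-Z expectation
    (a value in [-1, 1], thresholded at 0 for a binary label)."""
    for t in thetas:
        state = ry(t) @ state
    Z = np.diag([1.0, -1.0])
    return float(np.real(state.conj() @ Z @ state))
```

For instance, the untouched |0> state reads out +1, while a pi rotation flips it to |1>, which reads out -1.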
APA, Harvard, Vancouver, ISO, and other styles
2

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one in Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system.
Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine external quality of tomatoes based on visual information.
An improved color-sorting model, which is stable and does not require recalibration each season, was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities, for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Yunchong. Blind Denoising by Self-Supervised Neural Networks in Astronomical Datasets (Noise2Self4Astro). Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1614728.

Full text
APA, Harvard, Vancouver, ISO, and other styles