Scientific literature on the topic "Supervised neural network"
Create an accurate reference in APA, MLA, Chicago, Harvard, and various other citation styles
Contents
Browse the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Supervised neural network".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the source's metadata.
Journal articles on the topic "Supervised neural network"
Tian, Jidong, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, and Yaohui Jin. "Weakly Supervised Neural Symbolic Learning for Cognitive Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5888–96. http://dx.doi.org/10.1609/aaai.v36i5.20533.
Ito, Toshio. "Supervised Learning Methods of Bilinear Neural Network Systems Using Discrete Data." International Journal of Machine Learning and Computing 6, no. 5 (October 2016): 235–40. http://dx.doi.org/10.18178/ijmlc.2016.6.5.604.
Verma, Vikas, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, and Jian Tang. "GraphMix: Improved Training of GNNs for Semi-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10024–32. http://dx.doi.org/10.1609/aaai.v35i11.17203.
Hu, Jinghan. "Semi-supervised Blindness Detection with Neural Network Ensemble." Highlights in Science, Engineering and Technology 12 (August 26, 2022): 171–76. http://dx.doi.org/10.54097/hset.v12i.1448.
Hindarto, Djarot, and Handri Santoso. "Performance Comparison of Supervised Learning Using Non-Neural Network and Neural Network." Jurnal Nasional Pendidikan Teknik Informatika (JANAPATI) 11, no. 1 (April 6, 2022): 49. http://dx.doi.org/10.23887/janapati.v11i1.40768.
Liu, Chenghua, Zhuolin Liao, Yixuan Ma, and Kun Zhan. "Stationary Diffusion State Neural Estimation for Multiview Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7542–49. http://dx.doi.org/10.1609/aaai.v36i7.20719.
Cho, Myoung Won. "Supervised learning in a spiking neural network." Journal of the Korean Physical Society 79, no. 3 (July 20, 2021): 328–35. http://dx.doi.org/10.1007/s40042-021-00254-4.
Wang, Juexin, Anjun Ma, Qin Ma, Dong Xu, and Trupti Joshi. "Inductive inference of gene regulatory network using supervised and semi-supervised graph neural networks." Computational and Structural Biotechnology Journal 18 (2020): 3335–43. http://dx.doi.org/10.1016/j.csbj.2020.10.022.
Nobukawa, Sou, Haruhiko Nishimura, and Teruya Yamanishi. "Pattern Classification by Spiking Neural Networks Combining Self-Organized and Reward-Related Spike-Timing-Dependent Plasticity." Journal of Artificial Intelligence and Soft Computing Research 9, no. 4 (October 1, 2019): 283–91. http://dx.doi.org/10.2478/jaiscr-2019-0009.
Zhao, Shijie, Yan Cui, Linwei Huang, Li Xie, Yaowu Chen, Junwei Han, Lei Guo, Shu Zhang, Tianming Liu, and Jinglei Lv. "Supervised Brain Network Learning Based on Deep Recurrent Neural Networks." IEEE Access 8 (2020): 69967–78. http://dx.doi.org/10.1109/access.2020.2984948.
Theses on the topic "Supervised neural network"
Tran, Khanh-Hung. "Semi-supervised dictionary learning and semi-supervised deep neural network." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASP014.
Since the 2010s, machine learning (ML) has been one of the topics attracting the most attention from scientific researchers. Many ML models have demonstrated the ability to produce excellent results in fields such as computer vision, natural language processing, and robotics. However, most of these models rely on supervised learning, which requires massive annotation. The objective of this thesis is therefore to study and propose semi-supervised learning approaches, which have many advantages over supervised learning. Instead of applying a semi-supervised classifier directly to the original representation of the data, we use models that integrate a representation learning stage before the classification stage, to better adapt to the non-linearity of the data. In the first part, we revisit the tools used to build our semi-supervised models. We present two types of model that include representation learning in their architecture, dictionary learning and neural networks, together with the optimization methods for each; in the case of neural networks, we also describe the problem of adversarial examples. We then present techniques that often accompany semi-supervised learning, such as manifold learning and pseudo-labeling. In the second part, we work on dictionary learning. We distill three general steps for building a semi-supervised model from a supervised one, and propose a semi-supervised model for classification with few training samples (both labelled and unlabelled). On the one hand, we preserve the data structure from the original space in the sparse-code space (manifold learning), which acts as a regularizer for the sparse codes; on the other hand, we integrate a semi-supervised classifier in the sparse-code space. In addition, we perform sparse coding for test samples while also taking the preservation of the data structure into account. This method improves the accuracy rate compared with other existing methods. In the third part, we work on neural network models. We propose an approach called "manifold attack," which reinforces manifold learning. Inspired by adversarial learning, it finds virtual points that disrupt the manifold-learning cost function (by maximizing it) while the model parameters are fixed; the model parameters are then updated by minimizing this cost function while the virtual points are fixed. We also provide criteria for bounding the space in which the virtual points lie and a method for initializing them. This approach improves not only the accuracy rate but also robustness to adversarial examples. Finally, we analyze the similarities, differences, advantages, and disadvantages of dictionary learning and neural network models, and propose perspectives on both types of model: for semi-supervised dictionary learning, techniques inspired by neural network models; for neural networks, integrating the manifold attack into generative models.
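The min-max scheme described in this abstract can be illustrated with a small, self-contained sketch. Everything here is an invented toy (a linear "model", two hand-picked anchor samples, hand-derived gradients, and arbitrary learning rates), not the thesis's actual implementation: a virtual point is moved by gradient ascent to maximize a manifold-smoothness loss while the parameters are fixed, then the parameters are updated by gradient descent on the same loss while the virtual point is fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two anchor samples assumed to lie on the same manifold,
# and a linear "model" f(x) = w @ x whose outputs should vary smoothly.
x1, x2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
w = rng.normal(size=2)

def manifold_loss(w, v):
    # Smoothness penalty: the virtual point's output should stay close
    # to the outputs of both anchors.
    return (w @ v - w @ x1) ** 2 + (w @ v - w @ x2) ** 2

# Virtual point initialised near the segment between the anchors.
v = 0.5 * (x1 + x2) + np.array([0.05, 0.0])
loss_before = manifold_loss(w, v)

for _ in range(100):
    # Inner step (the "attack"): move v to MAXIMISE the manifold loss,
    # keeping it inside a bounded region, with the parameters fixed.
    grad_v = 2 * (w @ v - w @ x1) * w + 2 * (w @ v - w @ x2) * w
    v = np.clip(v + 0.1 * grad_v, 0.0, 1.0)
    # Outer step: update the parameters to MINIMISE the same loss,
    # with the virtual point fixed.
    grad_w = (2 * (w @ v - w @ x1) * (v - x1)
              + 2 * (w @ v - w @ x2) * (v - x2))
    w = w - 0.1 * grad_w
```

In this toy the descent step dominates and the smoothness loss shrinks despite the adversarial updates to the virtual point; the clipping step plays the role of the thesis's criteria bounding the space the virtual points may occupy.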
Morns, Ian Philip. "The novel dynamic supervised forward propagation neural network for handwritten character recognition." Thesis, University of Newcastle Upon Tyne, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285741.
Syrén Grönfelt, Natalie. "Pretraining a Neural Network for Hyperspectral Images Using Self-Supervised Contrastive Learning." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-179122.
Bylund, Andreas, Anton Erikssen, and Drazen Mazalica. "Hyperparameters impact in a convolutional neural network." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18670.
Schembri, Massimo. "Anomaly Prediction in Production Supercomputer with Convolution and Semi-supervised autoencoder." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22379/.
Guo, Lilin. "A Biologically Plausible Supervised Learning Method for Spiking Neurons with Real-world Applications." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2982.
Hansen Vedal, Amund. "Comparing performance of convolutional neural network models on a novel car classification task." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-213468.
Texte intégralNya neurala nätverksframsteg har lett till modeller som kan användas för en mängd olika bildklasseringsuppgifter, och är därför användbara många av dagens medietekniska applikationer. I detta projektet tränar jag moderna neurala nätverksarkitekturer på en nyuppsamlad bilbild-datasats för att göra både grov- och finkornad klassificering av fordonstyp. Resultaten visar att neurala nätverk kan lära sig att skilja mellan många mycket olika bilklasser, och även mellan några mycket liknande klasser. Mina bästa modeller nådde 50,8% träffsäkerhet vid 28 klasser och 61,5% på de mest utmanande 5, trots brusiga bilder och manuell klassificering av datasetet.
Karlsson, Erik, and Gilbert Nordhammar. "Naive semi-supervised deep learning med sammansättning av pseudo-klassificerare." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17177.
Flores Quiroz, Martín. "Descriptive analysis of the acquisition of the base form, third person singular, present participle, regular past, irregular past, and past participle in a supervised artificial neural network and an unsupervised artificial neural network." Tesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/115653.
Studying children's language acquisition in natural settings is neither cost- nor time-effective. Language acquisition may therefore be studied in an artificial setting, reducing the costs associated with this type of research. By artificial, I do not mean that children are placed in an artificial setting: first, this would not be ethical, and second, the problem of the time needed for the research would remain. Rather, by artificial I mean that the simulation tools of artificial intelligence can be used. Simulators such as artificial neural networks (ANNs) can simulate various human cognitive skills, such as pattern or speech recognition, and can be implemented on personal computers with software such as MATLAB, a numerical computing environment. ANNs are computer simulation models that mimic the neural processes behind several human cognitive skills. There are two main types of ANN: supervised and unsupervised. The learning process of the first is guided by the computer programmer, while the latter learns without externally provided targets.
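The supervised/unsupervised distinction drawn in this abstract can be sketched in a few lines. This is a toy illustration in Python/NumPy rather than the MATLAB setup the thesis describes, and the AND-gate data, learning rates, and cluster centres are all invented for the example: a perceptron learns from programmer-supplied targets, while a competitive network organizes unlabelled data on its own.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Supervised: a perceptron trained on labelled examples (AND gate) ---
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)   # targets supplied by the "teacher"
w, b = np.zeros(2), 0.0
for _ in range(20):
    for xi, ti in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += 0.1 * (ti - pred) * xi       # error-driven (supervised) update
        b += 0.1 * (ti - pred)

# --- Unsupervised: competitive learning discovers clusters without targets ---
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # cluster near the origin
                  rng.normal(1.0, 0.1, (20, 2))])  # cluster near (1, 1)
protos = rng.normal(0.5, 0.01, (2, 2))             # two prototype units
for _ in range(30):
    for x in rng.permutation(data):
        k = np.argmin(((protos - x) ** 2).sum(axis=1))  # winner takes all
        protos[k] += 0.05 * (x - protos[k])             # winner moves to input
```

After training, the perceptron reproduces the taught labels, while the two prototypes settle near the two cluster centres even though no labels were ever given, which is the contrast between guided and unguided learning the abstract describes.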
Dabiri, Sina. "Semi-Supervised Deep Learning Approach for Transportation Mode Identification Using GPS Trajectory Data." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/86845.
Master of Science
Identifying users' transportation modes (e.g., bike, bus, train, and car) is a key step toward many transportation-related problems including (but not limited to) transport planning, transit demand analysis, auto ownership, and transportation emissions analysis. Traditionally, the information for analyzing travelers' mode-choice behavior was obtained through travel surveys. High cost, low response rate, time-consuming manual data collection, and misreporting are the main drawbacks of survey-based approaches. With the rapid growth of ubiquitous GPS-enabled devices (e.g., smartphones), a constant stream of users' trajectory data can be recorded. A user's GPS trajectory is a sequence of GPS points, recorded by means of a GPS-enabled device, in which each GPS point contains the device's geographic location at a particular moment. In this research, users' GPS trajectories, rather than traditional resources, are harnessed to predict their transportation mode by means of statistical models. A wide range of studies have developed travel-mode detection models using hand-designed attributes and classical learning techniques. Nonetheless, hand-crafted features have major shortcomings, including vulnerability to traffic uncertainties and biased engineering justification in generating effective features. A potential solution is to leverage deep learning frameworks, which are capable of capturing abstract features from the raw input in an automated fashion. Thus, in this thesis, deep learning architectures are exploited to identify transport modes from raw GPS tracks alone. It is worth noting that a significant portion of the trajectories in GPS data may not be annotated with a transport mode, and acquiring labeled data is more expensive and labor-intensive than collecting unlabeled data.
Thus, utilizing unlabeled GPS trajectories (i.e., GPS trajectories that have not been annotated with a transport mode) is a cost-effective approach for improving the prediction quality of the travel-mode detection model. Therefore, the unlabeled GPS data are also leveraged by developing a novel deep-learning architecture capable of extracting information from both labeled and unlabeled data. The experimental results demonstrate the superiority of the proposed models over state-of-the-art methods in the literature with respect to several performance metrics.
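The idea of leveraging unlabelled trajectories that this abstract argues for is commonly realized through pseudo-labelling. Below is a deliberately minimal sketch, not the thesis's architecture: synthetic one-dimensional "speed" features and a nearest-centroid classifier stand in for real GPS segments and the deep model. Train on the few labelled points, label the unlabelled pool with the model's own predictions, then refit on the union.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for trajectory features: one speed-like scalar per
# segment, two modes (walk ~1.4 m/s, car ~15 m/s). Invented for illustration.
walk = rng.normal(1.4, 0.3, 100)
car = rng.normal(15.0, 3.0, 100)
X_lab = np.concatenate([walk[:5], car[:5]])      # only 10 labelled segments
y_lab = np.array([0] * 5 + [1] * 5)              # 0 = walk, 1 = car
X_unlab = np.concatenate([walk[5:], car[5:]])    # 190 unlabelled segments

def centroids(X, y):
    # Nearest-centroid "model": one mean feature value per mode.
    return np.array([X[y == k].mean() for k in (0, 1)])

# Step 1: fit on the small labelled set only.
c = centroids(X_lab, y_lab)

# Step 2: pseudo-label the unlabelled pool with the current model.
pseudo = np.argmin(np.abs(X_unlab[:, None] - c[None, :]), axis=1)

# Step 3: refit on labelled + pseudo-labelled data together.
c = centroids(np.concatenate([X_lab, X_unlab]),
              np.concatenate([y_lab, pseudo]))
```

The refit centroids are estimated from 200 segments instead of 10, which is the cost-effectiveness argument in miniature: the unlabelled pool sharpens the model without any additional annotation effort.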
Books on the topic "Supervised neural network"
Marks, Robert J., ed. Neural smithing: Supervised learning in feedforward artificial neural networks. Cambridge, Mass.: The MIT Press, 1999.
Suresh, Sundaram, Narasimhan Sundararajan, and Ramasamy Savitha. Supervised Learning with Complex-valued Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-29491-4.
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24797-2.
Singh, Surinder. Exploratory spatial data analysis using supervised neural networks. London: University of East London, 1994.
SFI/CNLS Workshop on Formal Approaches to Supervised Learning (1992: Santa Fe, N.M.). The mathematics of generalization: The proceedings of the SFI/CNLS Workshop on Formal Approaches to Supervised Learning. Edited by David H. Wolpert. Reading, Mass.: Addison-Wesley Pub. Co., 1995.
Supervised and unsupervised pattern recognition: Feature extraction and computational intelligence. Boca Raton, Fla.: CRC Press, 2000.
Leung, Wing Kai. The specification, analysis and metrics of supervised feedforward artificial neural networks for applied science and engineering applications. Birmingham: University of Central England in Birmingham, 2002.
Supervised Learning with Complex-valued Neural Networks. Springer, 2012.
Book chapters on the topic "Supervised neural network"
Magrans de Abril, Ildefons, and Ann Nowé. "Supervised Neural Network Structure Recovery." In Neural Connectomics Challenge, 37–45. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-53070-3_3.
Muselli, Marco, and Sandro Ridella. "Supervised Learning Using a Genetic Algorithm." In International Neural Network Conference, 790. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_86.
Baldi, Pierre, Yves Chauvin, and Kurt Hornik. "Supervised and Unsupervised Learning in Linear Networks." In International Neural Network Conference, 825–28. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_99.
Battiti, Roberto, and Francesco Masulli. "BFGS Optimization for Faster and Automated Supervised Learning." In International Neural Network Conference, 757–60. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_68.
Pandya, Abhijit S., and Raisa Szabo. "ALOPEX Algorithm for Supervised Learning in Layer Networks." In International Neural Network Conference, 791. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_88.
Midenet, S., and A. Grumbach. "Supervised Learning Based on Kohonen's Self-Organising Feature Maps." In International Neural Network Conference, 773–76. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_72.
Yusoff, Nooraini, and André Grüning. "Supervised Associative Learning in Spiking Neural Network." In Artificial Neural Networks – ICANN 2010, 224–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15819-3_30.
Bianchini, Monica, and Marco Maggini. "Supervised Neural Network Models for Processing Graphs." In Intelligent Systems Reference Library, 67–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36657-4_3.
Suresh, Sundaram, Narasimhan Sundararajan, and Ramasamy Savitha. "Complex-valued Self-regulatory Resource Allocation Network (CSRAN)." In Supervised Learning with Complex-valued Neural Networks, 135–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-29491-4_8.
Magoulas, G. D., M. N. Vrahatis, T. N. Grapsa, and G. S. Androulakis. "Neural Network Supervised Training Based on a Dimension Reducing Method." In Mathematics of Neural Networks, 245–49. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6099-9_41.
Conference papers on the topic "Supervised neural network"
Filo, G. "Analysis of Neural Network Structure for Implementation of the Prescriptive Maintenance Strategy." In Terotechnology XII. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644902059-40.
Huynh, Alex V., John F. Walkup, and Thomas F. Krile. "Optical perceptron-based quadratic neural network." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.mii8.
Zhao, Chengshuai, Shuai Liu, Feng Huang, Shichao Liu, and Wen Zhang. "CSGNN: Contrastive Self-Supervised Graph Neural Network for Molecular Interaction Prediction." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/517.
Lempa, P. "Analysis of Neural Network Training Algorithms for Implementation of the Prescriptive Maintenance Strategy." In Terotechnology XII. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644902059-41.
Ahmed, Sultan Uddin, Md Shahjahan, and Kazuyuki Murase. "Chaotic dynamics of supervised neural network." In 2010 13th International Conference on Computer and Information Technology (ICCIT). IEEE, 2010. http://dx.doi.org/10.1109/iccitechn.2010.5723893.
Chunwei, Zhang, and Liu Haijiang. "A New Supervised Spiking Neural Network." In 2009 Second International Conference on Intelligent Computation Technology and Automation. IEEE, 2009. http://dx.doi.org/10.1109/icicta.2009.13.
Ali, Rashid, and Iram Naim. "Neural network based supervised rank aggregation." In 2011 International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT). IEEE, 2011. http://dx.doi.org/10.1109/mspct.2011.6150439.
Yu, Francis T. S., Taiwei Lu, and Don A. Gregory. "Self-Learning Optical Neural Network." In Spatial Light Modulators and Applications. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/slma.1990.mb4.
Perumalla, Aniruddha, Ahmet Koru, and Eric Johnson. "Network Topology Identification using Supervised Pattern Recognition Neural Networks." In 13th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010231902580264.
Salam, F. M. A., and S. Bai. "A feedback neural network with supervised learning." In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137855.
Organization reports on the topic "Supervised neural network"
Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.
Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.
Zhang, Yunchong. Blind Denoising by Self-Supervised Neural Networks in Astronomical Datasets (Noise2Self4Astro). Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1614728.