Selected scientific literature on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Artificial Neural Networks and Recurrent Neutral Networks".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read the abstract (summary) of the work online, if it is available in the metadata.
Journal articles on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Prathibha, Dr G., Y. Kavya, P. Vinay Jacob, and L. Poojita. "Speech Emotion Recognition Using Deep Learning". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 07 (July 4, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem36262.
Ali, Ayesha, Ateeq Ur Rehman, Ahmad Almogren, Elsayed Tag Eldin, and Muhammad Kaleem. "Application of Deep Learning Gated Recurrent Unit in Hybrid Shunt Active Power Filter for Power Quality Enhancement". Energies 15, no. 20 (October 13, 2022): 7553. http://dx.doi.org/10.3390/en15207553.
Chaudhary, Pranav Kumar, Aakash Kishore Chotrani, Raja Mohan, Mythili Boopathi, Piyush Ranjan, and Madhavi Najana. "AI in Fraud Detection: Evaluating the Efficacy of Artificial Intelligence in Preventing Financial Misconduct". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1332–38. http://dx.doi.org/10.52783/jes.1508.
Nassif, Ali Bou, Ismail Shahin, Mohammed Lataifeh, Ashraf Elnagar, and Nawel Nemmour. "Empirical Comparison between Deep and Classical Classifiers for Speaker Verification in Emotional Talking Environments". Information 13, no. 10 (September 27, 2022): 456. http://dx.doi.org/10.3390/info13100456.
Lee, Hong Jae, and Tae Seog Kim. "Comparison and Analysis of SNN and RNN Results for Option Pricing and Deep Hedging Using Artificial Neural Networks (ANN)". Academic Society of Global Business Administration 20, no. 5 (October 30, 2023): 146–78. http://dx.doi.org/10.38115/asgba.2023.20.5.146.
Sutskever, Ilya, and Geoffrey Hinton. "Temporal-Kernel Recurrent Neural Networks". Neural Networks 23, no. 2 (March 2010): 239–43. http://dx.doi.org/10.1016/j.neunet.2009.10.009.
Wang, Rui. "Generalisation of Feed-Forward Neural Networks and Recurrent Neural Networks". Applied and Computational Engineering 40, no. 1 (February 21, 2024): 242–46. http://dx.doi.org/10.54254/2755-2721/40/20230659.
Poudel, Sushan, and Dr R. Anuradha. "Speech Command Recognition using Artificial Neural Networks". JOIV : International Journal on Informatics Visualization 4, no. 2 (May 26, 2020): 73. http://dx.doi.org/10.30630/joiv.4.2.358.
Turner, Andrew James, and Julian Francis Miller. "Recurrent Cartesian Genetic Programming of Artificial Neural Networks". Genetic Programming and Evolvable Machines 18, no. 2 (August 8, 2016): 185–212. http://dx.doi.org/10.1007/s10710-016-9276-6.
Ziemke, Tom. "Radar image segmentation using recurrent artificial neural networks". Pattern Recognition Letters 17, no. 4 (April 1996): 319–34. http://dx.doi.org/10.1016/0167-8655(95)00128-x.
Testo completoTesi sul tema "Artificial Neural Networks and Recurrent Neutral Networks"
Kolen, John F. "Exploring the computational capabilities of recurrent neural networks". The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487853913100192.
Shao, Yuanlong. "Learning Sparse Recurrent Neural Networks in Language Modeling". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398942373.
Gudjonsson, Ludvik. "Comparison of two methods for evolving recurrent artificial neural networks for". Thesis, University of Skövde, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-155.
Testo completon this dissertation a comparison of two evolutionary methods for evolving ANNs for robot control is made. The methods compared are SANE with enforced sub-population and delta-coding, and marker-based encoding. In an attempt to speed up evolution, marker-based encoding is extended with delta-coding. The task selected for comparison is the hunter-prey task. This task requires the robot controller to posess some form of memory as the prey can move out of sensor range. Incremental evolution is used to evolve the complex behaviour that is required to successfully handle this task. The comparison is based on computational power needed for evolution, and complexity, robustness, and generalisation of the resulting ANNs. The results show that marker-based encoding is the most efficient method tested and does not need delta-coding to increase the speed of evolution process. Additionally the results indicate that delta-coding does not increase the speed of evolution with marker-based encoding.
Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks: implications for nativism". Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.
Napoli, Christian. "A-I: Artificial intelligence". Doctoral thesis, Università degli studi di Catania, 2016. http://hdl.handle.net/20.500.11769/490996.
Kramer, Gregory Robert. "An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation". Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.
Rallabandi, Pavan Kumar. "Processing hidden Markov models using recurrent neural networks for biological applications". Thesis, University of the Western Cape, 2013. http://hdl.handle.net/11394/4525.
Testo completoIn this thesis, we present a novel hybrid architecture by combining the most popular sequence recognition models such as Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). Though sequence recognition problems could be potentially modelled through well trained HMMs, they could not provide a reasonable solution to the complicated recognition problems. In contrast, the ability of RNNs to recognize the complex sequence recognition problems is known to be exceptionally good. It should be noted that in the past, methods for applying HMMs into RNNs have been developed by other researchers. However, to the best of our knowledge, no algorithm for processing HMMs through learning has been given. Taking advantage of the structural similarities of the architectural dynamics of the RNNs and HMMs, in this work we analyze the combination of these two systems into the hybrid architecture. To this end, the main objective of this study is to improve the sequence recognition/classi_cation performance by applying a hybrid neural/symbolic approach. In particular, trained HMMs are used as the initial symbolic domain theory and directly encoded into appropriate RNN architecture, meaning that the prior knowledge is processed through the training of RNNs. Proposed algorithm is then implemented on sample test beds and other real time biological applications.
Salihoglu, Utku. "Toward a brain-like memory with recurrent neural networks". Doctoral thesis, Université Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210221.
Based on these assumptions, this thesis provides a computer model of a neural-network simulation of a brain-like memory. It first shows experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause but the consequence of the learning; moreover, it is a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves the semantics of the attractors to be associated with the incoming data unprescribed, while the other defines it a priori. Both algorithms show good results, though the first is more robust and has a greater storage capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this idea is not new, the mechanisms underlying the formation of cell assemblies are poorly understood and, so far, no biologically plausible algorithm explains how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining fast Hebbian/anti-Hebbian learning of the network's recurrent connections, which creates new cell assemblies, with a slower feedback signal that stabilizes the cell assemblies by learning the feedforward input connections. This last mechanism is inspired by the retroaxonal hypothesis.
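The core idea of storing information in attractors via a Hebbian rule can be illustrated with a minimal Hopfield-style sketch. This is a one-shot Hebbian rule rather than the thesis's iterative algorithms, and the two orthogonal ±1 patterns are invented for the example; the point is only that the recurrent dynamics fall back into a stored attractor from a corrupted cue.

```python
import numpy as np

# Two hand-picked orthogonal +/-1 patterns to store (invented).
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
N = len(p1)

# Hebbian storage: the weight matrix accumulates pattern outer products.
W = (np.outer(p1, p1) + np.outer(p2, p2)) / N
np.fill_diagonal(W, 0)          # no self-connections

def recall(cue, steps=5):
    """Iterate the recurrent dynamics; the state settles into the
    nearest stored attractor (sign ties broken toward +1)."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Corrupt one bit of p1 and let the dynamics clean it up.
cue = p1.copy()
cue[0] = -1
print(np.array_equal(recall(cue), p1))  # → True
```

Stored patterns are fixed points of the update, so a cue close to a pattern is pulled back onto it; this is the robust-attractor behaviour whose capacity limits the thesis's iterative algorithms are designed to push.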
Yang, Jidong. "Road crack condition performance modeling using recurrent Markov chains and artificial neural networks". [Tampa, Fla.]: University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000567.
Willmott, Devin. "Recurrent Neural Networks and Their Applications to RNA Secondary Structure Inference". UKnowledge, 2018. https://uknowledge.uky.edu/math_etds/58.
Testo completoLibri sul tema "Artificial Neural Networks and Recurrent Neutral Networks"
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Jain, Lakhmi C., and Larry Medsker. Recurrent Neural Networks: Design and Applications. Taylor & Francis Group, 1999.
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012.
Integration Of Swarm Intelligence And Artificial Neutral Network. World Scientific Publishing Company, 2011.
Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.
Zhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part II. Springer London, Limited, 2007.
Zhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part I. Springer London, Limited, 2007.
Liu, Derong, Shumin Fei, Zeng-Guang Hou, Huaguang Zhang, and Changyin Sun, eds. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part I (Lecture Notes in Computer Science). Springer, 2007.
Churchland, Paul M. The Engine of Reason, the Seat of the Soul. The MIT Press, 1995. http://dx.doi.org/10.7551/mitpress/2758.001.0001.
Testo completoCapitoli di libri sul tema "Artificial Neural Networks and Recurrent Neutral Networks"
da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Recurrent Hopfield Networks". In Artificial Neural Networks, 139–55. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_7.
Krauss, Patrick. "Recurrent Neural Networks". In Artificial Intelligence and Brain Research, 131–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68980-6_14.
Lynch, Stephen. "Recurrent Neural Networks". In Python for Scientific Computing and Artificial Intelligence, 267–84. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003285816-19.
Sharma, Arpana, Kanupriya Goswami, Vinita Jindal, and Richa Gupta. "A Road Map to Artificial Neural Network". In Recurrent Neural Networks, 3–21. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-2.
Kathirvel, A., Debashreet Das, Stewart Kirubakaran, M. Subramaniam, and S. Naveneethan. "Artificial Intelligence–Based Mobile Bill Payment System Using Biometric Fingerprint". In Recurrent Neural Networks, 233–45. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-16.
da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Forecast of Stock Market Trends Using Recurrent Networks". In Artificial Neural Networks, 221–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_13.
Lindgren, Kristian, Anders Nilsson, Mats G. Nordahl, and Ingrid Råde. "Evolving Recurrent Neural Networks". In Artificial Neural Nets and Genetic Algorithms, 55–62. Vienna: Springer Vienna, 1993. http://dx.doi.org/10.1007/978-3-7091-7533-0_9.
Schäfer, Anton Maximilian, and Hans Georg Zimmermann. "Recurrent Neural Networks Are Universal Approximators". In Artificial Neural Networks – ICANN 2006, 632–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_66.
Riaza, Ricardo, and Pedro J. Zufiria. "Time-Scaling in Recurrent Neural Learning". In Artificial Neural Networks — ICANN 2002, 1371–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_221.
Hammer, Barbara. "On the Generalization Ability of Recurrent Networks". In Artificial Neural Networks — ICANN 2001, 731–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_102.
Testo completoAtti di convegni sul tema "Artificial Neural Networks and Recurrent Neutral Networks"
Cao, Zhu, Linlin Wang, and Gerard de Melo. "Multiple-Weight Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/205.
Wu, Hao, Ziyang Chen, Weiwei Sun, Baihua Zheng, and Wei Wang. "Modeling Trajectories with Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/430.
Omar, Tarek A., Nabih E. Bedewi, and Azim Eskandarian. "Recurrent Artificial Neural Networks for Crashworthiness Analysis". In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-1190.
Mak, M. W. "Speaker identification using modular recurrent neural networks". In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950519.
Chen, Yuexing, and Jiarun Li. "Recurrent Neural Networks algorithms and applications". In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE). IEEE, 2021. http://dx.doi.org/10.1109/icbase53849.2021.00015.
Sharma, Shambhavi. "Emotion Recognition from Speech using Artificial Neural Networks and Recurrent Neural Networks". In 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence). IEEE, 2021. http://dx.doi.org/10.1109/confluence51648.2021.9377192.
Lee, Jinhyuk, Hyunjae Kim, Miyoung Ko, Donghee Choi, Jaehoon Choi, and Jaewoo Kang. "Name Nationality Classification with Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/289.
Argun, Aykut, Tobias Thalheim, Frank Cichos, and Giovanni Volpe. "Calibration of force fields using recurrent neural networks". In Emerging Topics in Artificial Intelligence 2020, edited by Giovanni Volpe, Joana B. Pereira, Daniel Brunner, and Aydogan Ozcan. SPIE, 2020. http://dx.doi.org/10.1117/12.2567931.
"INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS ARE SUPER-TURING". In International Conference on Agents and Artificial Intelligence. SciTePress - Science and Technology Publications, 2012. http://dx.doi.org/10.5220/0003740603280333.
Swanston, D. J. "Relative order defines a topology for recurrent networks". In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950564.
Testo completoRapporti di organizzazioni sul tema "Artificial Neural Networks and Recurrent Neutral Networks"
Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.