Academic literature on the topic 'Artificial Neural Networks and Recurrent Neural Networks'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial Neural Networks and Recurrent Neural Networks.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Artificial Neural Networks and Recurrent Neural Networks"
Prathibha, G., Y. Kavya, P. Vinay Jacob, and L. Poojita. "Speech Emotion Recognition Using Deep Learning." International Journal of Scientific Research in Engineering and Management 8, no. 7 (July 4, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem36262.
Ali, Ayesha, Ateeq Ur Rehman, Ahmad Almogren, Elsayed Tag Eldin, and Muhammad Kaleem. "Application of Deep Learning Gated Recurrent Unit in Hybrid Shunt Active Power Filter for Power Quality Enhancement." Energies 15, no. 20 (October 13, 2022): 7553. http://dx.doi.org/10.3390/en15207553.
Chaudhary, Pranav Kumar, Aakash Kishore Chotrani, Raja Mohan, Mythili Boopathi, Piyush Ranjan, and Madhavi Najana. "AI in Fraud Detection: Evaluating the Efficacy of Artificial Intelligence in Preventing Financial Misconduct." Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1332–38. http://dx.doi.org/10.52783/jes.1508.
Nassif, Ali Bou, Ismail Shahin, Mohammed Lataifeh, Ashraf Elnagar, and Nawel Nemmour. "Empirical Comparison between Deep and Classical Classifiers for Speaker Verification in Emotional Talking Environments." Information 13, no. 10 (September 27, 2022): 456. http://dx.doi.org/10.3390/info13100456.
Lee, Hong Jae, and Tae Seog Kim. "Comparison and Analysis of SNN and RNN Results for Option Pricing and Deep Hedging Using Artificial Neural Networks (ANN)." Academic Society of Global Business Administration 20, no. 5 (October 30, 2023): 146–78. http://dx.doi.org/10.38115/asgba.2023.20.5.146.
Sutskever, Ilya, and Geoffrey Hinton. "Temporal-Kernel Recurrent Neural Networks." Neural Networks 23, no. 2 (March 2010): 239–43. http://dx.doi.org/10.1016/j.neunet.2009.10.009.
Wang, Rui. "Generalisation of Feed-Forward Neural Networks and Recurrent Neural Networks." Applied and Computational Engineering 40, no. 1 (February 21, 2024): 242–46. http://dx.doi.org/10.54254/2755-2721/40/20230659.
Poudel, Sushan, and R. Anuradha. "Speech Command Recognition using Artificial Neural Networks." JOIV: International Journal on Informatics Visualization 4, no. 2 (May 26, 2020): 73. http://dx.doi.org/10.30630/joiv.4.2.358.
Turner, Andrew James, and Julian Francis Miller. "Recurrent Cartesian Genetic Programming of Artificial Neural Networks." Genetic Programming and Evolvable Machines 18, no. 2 (August 8, 2016): 185–212. http://dx.doi.org/10.1007/s10710-016-9276-6.
Ziemke, Tom. "Radar image segmentation using recurrent artificial neural networks." Pattern Recognition Letters 17, no. 4 (April 1996): 319–34. http://dx.doi.org/10.1016/0167-8655(95)00128-x.
Full textDissertations / Theses on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Kolen, John F. "Exploring the computational capabilities of recurrent neural networks." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487853913100192.
Shao, Yuanlong. "Learning Sparse Recurrent Neural Networks in Language Modeling." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398942373.
Full textGudjonsson, Ludvik. "Comparison of two methods for evolving recurrent artificial neural networks for." Thesis, University of Skövde, University of Skövde, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-155.
Full textn this dissertation a comparison of two evolutionary methods for evolving ANNs for robot control is made. The methods compared are SANE with enforced sub-population and delta-coding, and marker-based encoding. In an attempt to speed up evolution, marker-based encoding is extended with delta-coding. The task selected for comparison is the hunter-prey task. This task requires the robot controller to posess some form of memory as the prey can move out of sensor range. Incremental evolution is used to evolve the complex behaviour that is required to successfully handle this task. The comparison is based on computational power needed for evolution, and complexity, robustness, and generalisation of the resulting ANNs. The results show that marker-based encoding is the most efficient method tested and does not need delta-coding to increase the speed of evolution process. Additionally the results indicate that delta-coding does not increase the speed of evolution with marker-based encoding.
Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks : implications for nativism." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.
Full textNAPOLI, CHRISTIAN. "A-I: Artificial intelligence." Doctoral thesis, Università degli studi di Catania, 2016. http://hdl.handle.net/20.500.11769/490996.
Kramer, Gregory Robert. "An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.
Full textRallabandi, Pavan Kumar. "Processing hidden Markov models using recurrent neural networks for biological applications." Thesis, University of the Western Cape, 2013. http://hdl.handle.net/11394/4525.
Full textIn this thesis, we present a novel hybrid architecture by combining the most popular sequence recognition models such as Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). Though sequence recognition problems could be potentially modelled through well trained HMMs, they could not provide a reasonable solution to the complicated recognition problems. In contrast, the ability of RNNs to recognize the complex sequence recognition problems is known to be exceptionally good. It should be noted that in the past, methods for applying HMMs into RNNs have been developed by other researchers. However, to the best of our knowledge, no algorithm for processing HMMs through learning has been given. Taking advantage of the structural similarities of the architectural dynamics of the RNNs and HMMs, in this work we analyze the combination of these two systems into the hybrid architecture. To this end, the main objective of this study is to improve the sequence recognition/classi_cation performance by applying a hybrid neural/symbolic approach. In particular, trained HMMs are used as the initial symbolic domain theory and directly encoded into appropriate RNN architecture, meaning that the prior knowledge is processed through the training of RNNs. Proposed algorithm is then implemented on sample test beds and other real time biological applications.
Salihoglu, Utku. "Toward a brain-like memory with recurrent neural networks." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210221.
Based on these assumptions, this thesis provides a computer model of neural network simulation of a brain-like memory. It first shows experimentally that the more information is to be stored in robust cyclic attractors, the more chaos appears as a regime in the background, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause, but the consequence of the learning. However, it appears as a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves the semantics of the attractors to be associated with the feeding data unprescribed, while the other defines it a priori. Both algorithms show good results, even though the first one is more robust and has a greater storing capacity. Using these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this is not new, the mechanisms underlying their formation are poorly understood and, so far, there are no biologically plausible algorithms that can explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining a fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies, and a slower feedback signal which stabilizes the cell assemblies by learning the feedforward input connections. This last mechanism is inspired by the retroaxonal hypothesis.
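Hebbian storage of information as attractors of a recurrent network can be sketched in a few lines with the classical Hopfield construction — far simpler than the chaotic cyclic-attractor model the thesis develops, and with an assumed network size and illustrative patterns, but it shows the core idea: outer-product Hebbian learning turns stored patterns into fixed points, so a corrupted cue relaxes back to the nearest stored memory.

```python
N = 8  # illustrative network size
patterns = [  # two orthogonal ±1 patterns to store
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
]

# Hebbian outer-product learning of the recurrent weight matrix (no self-loops)
W = [[0.0] * N for _ in range(N)]
for p in patterns:
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += p[i] * p[j] / N

def recall(state, steps=5):
    """Synchronous recurrent updates; the state falls into an attractor."""
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
                 for i in range(N)]
    return state

# A corrupted cue (two flipped bits) converges back to the stored pattern
noisy = list(patterns[0])
noisy[0], noisy[3] = -noisy[0], -noisy[3]
print(recall(noisy) == patterns[0])  # prints True
```

The thesis's point-attractor-versus-cyclic-attractor distinction matters here: this sketch stores fixed points only, whereas the model described above stores information in robust cyclic attractors with chaotic itinerancy between them.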
Yang, Jidong. "Road crack condition performance modeling using recurrent Markov chains and artificial neural networks." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000567.
Willmott, Devin. "Recurrent Neural Networks and Their Applications to RNA Secondary Structure Inference." UKnowledge, 2018. https://uknowledge.uky.edu/math_etds/58.
Full textBooks on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Jain, Lakhmi C., and Larry Medsker. Recurrent Neural Networks: Design and Applications. Taylor & Francis Group, 1999.
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012.
Integration of Swarm Intelligence and Artificial Neural Network. World Scientific Publishing Company, 2011.
Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.
Full textZhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neutral Networks, ISNN 2007 Nanjing, China, June 3-7, 2007. Proceedings, Part II. Springer London, Limited, 2007.
Find full textZhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neutral Networks, ISNN 2007 Nanjing, China, June 3-7, 2007. Proceedings, Part I. Springer London, Limited, 2007.
Find full text(Editor), Derong Liu, Shumin Fei (Editor), Zeng-Guang Hou (Editor), Huaguang Zhang (Editor), and Changyin Sun (Editor), eds. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neutral Networks, ISNN 2007Nanjing, China, June 3-7, 2007. Proceedings, Part I (Lecture Notes in Computer Science). Springer, 2007.
Find full textChurchland, Paul M. The Engine of Reason, the Seat of the Soul. The MIT Press, 1995. http://dx.doi.org/10.7551/mitpress/2758.001.0001.
Full textBook chapters on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Recurrent Hopfield Networks." In Artificial Neural Networks, 139–55. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_7.
Krauss, Patrick. "Recurrent Neural Networks." In Artificial Intelligence and Brain Research, 131–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68980-6_14.
Lynch, Stephen. "Recurrent Neural Networks." In Python for Scientific Computing and Artificial Intelligence, 267–84. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003285816-19.
Sharma, Arpana, Kanupriya Goswami, Vinita Jindal, and Richa Gupta. "A Road Map to Artificial Neural Network." In Recurrent Neural Networks, 3–21. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-2.
Kathirvel, A., Debashreet Das, Stewart Kirubakaran, M. Subramaniam, and S. Naveneethan. "Artificial Intelligence–Based Mobile Bill Payment System Using Biometric Fingerprint." In Recurrent Neural Networks, 233–45. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-16.
da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Forecast of Stock Market Trends Using Recurrent Networks." In Artificial Neural Networks, 221–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_13.
Lindgren, Kristian, Anders Nilsson, Mats G. Nordahl, and Ingrid Råde. "Evolving Recurrent Neural Networks." In Artificial Neural Nets and Genetic Algorithms, 55–62. Vienna: Springer Vienna, 1993. http://dx.doi.org/10.1007/978-3-7091-7533-0_9.
Schäfer, Anton Maximilian, and Hans Georg Zimmermann. "Recurrent Neural Networks Are Universal Approximators." In Artificial Neural Networks – ICANN 2006, 632–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_66.
Riaza, Ricardo, and Pedro J. Zufiria. "Time-Scaling in Recurrent Neural Learning." In Artificial Neural Networks — ICANN 2002, 1371–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_221.
Hammer, Barbara. "On the Generalization Ability of Recurrent Networks." In Artificial Neural Networks — ICANN 2001, 731–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_102.
Full textConference papers on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Cao, Zhu, Linlin Wang, and Gerard de Melo. "Multiple-Weight Recurrent Neural Networks." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/205.
Wu, Hao, Ziyang Chen, Weiwei Sun, Baihua Zheng, and Wei Wang. "Modeling Trajectories with Recurrent Neural Networks." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/430.
Omar, Tarek A., Nabih E. Bedewi, and Azim Eskandarian. "Recurrent Artificial Neural Networks for Crashworthiness Analysis." In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-1190.
Mak, M. W. "Speaker identification using modular recurrent neural networks." In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950519.
Chen, Yuexing, and Jiarun Li. "Recurrent Neural Networks algorithms and applications." In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE). IEEE, 2021. http://dx.doi.org/10.1109/icbase53849.2021.00015.
Sharma, Shambhavi. "Emotion Recognition from Speech using Artificial Neural Networks and Recurrent Neural Networks." In 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence). IEEE, 2021. http://dx.doi.org/10.1109/confluence51648.2021.9377192.
Lee, Jinhyuk, Hyunjae Kim, Miyoung Ko, Donghee Choi, Jaehoon Choi, and Jaewoo Kang. "Name Nationality Classification with Recurrent Neural Networks." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/289.
Argun, Aykut, Tobias Thalheim, Frank Cichos, and Giovanni Volpe. "Calibration of force fields using recurrent neural networks." In Emerging Topics in Artificial Intelligence 2020, edited by Giovanni Volpe, Joana B. Pereira, Daniel Brunner, and Aydogan Ozcan. SPIE, 2020. http://dx.doi.org/10.1117/12.2567931.
Full text"INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS ARE SUPER-TURING." In International Conference on Agents and Artificial Intelligence. SciTePress - Science and and Technology Publications, 2012. http://dx.doi.org/10.5220/0003740603280333.
Swanston, D. J. "Relative order defines a topology for recurrent networks." In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950564.
Full textReports on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.