Selected scientific literature on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Artificial Neural Networks and Recurrent Neutral Networks".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Journal articles on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Prathibha, Dr G., Y. Kavya, P. Vinay Jacob, and L. Poojita. "Speech Emotion Recognition Using Deep Learning". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 07 (July 4, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem36262.
Ali, Ayesha, Ateeq Ur Rehman, Ahmad Almogren, Elsayed Tag Eldin, and Muhammad Kaleem. "Application of Deep Learning Gated Recurrent Unit in Hybrid Shunt Active Power Filter for Power Quality Enhancement". Energies 15, no. 20 (October 13, 2022): 7553. http://dx.doi.org/10.3390/en15207553.
Chaudhary, Pranav Kumar, Aakash Kishore Chotrani, Raja Mohan, Mythili Boopathi, Piyush Ranjan, and Madhavi Najana. "AI in Fraud Detection: Evaluating the Efficacy of Artificial Intelligence in Preventing Financial Misconduct". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1332–38. http://dx.doi.org/10.52783/jes.1508.
Nassif, Ali Bou, Ismail Shahin, Mohammed Lataifeh, Ashraf Elnagar, and Nawel Nemmour. "Empirical Comparison between Deep and Classical Classifiers for Speaker Verification in Emotional Talking Environments". Information 13, no. 10 (September 27, 2022): 456. http://dx.doi.org/10.3390/info13100456.
Lee, Hong Jae, and Tae Seog Kim. "Comparison and Analysis of SNN and RNN Results for Option Pricing and Deep Hedging Using Artificial Neural Networks (ANN)". Academic Society of Global Business Administration 20, no. 5 (October 30, 2023): 146–78. http://dx.doi.org/10.38115/asgba.2023.20.5.146.
Sutskever, Ilya, and Geoffrey Hinton. "Temporal-Kernel Recurrent Neural Networks". Neural Networks 23, no. 2 (March 2010): 239–43. http://dx.doi.org/10.1016/j.neunet.2009.10.009.
Wang, Rui. "Generalisation of Feed-Forward Neural Networks and Recurrent Neural Networks". Applied and Computational Engineering 40, no. 1 (February 21, 2024): 242–46. http://dx.doi.org/10.54254/2755-2721/40/20230659.
Poudel, Sushan, and Dr R. Anuradha. "Speech Command Recognition using Artificial Neural Networks". JOIV : International Journal on Informatics Visualization 4, no. 2 (May 26, 2020): 73. http://dx.doi.org/10.30630/joiv.4.2.358.
Turner, Andrew James, and Julian Francis Miller. "Recurrent Cartesian Genetic Programming of Artificial Neural Networks". Genetic Programming and Evolvable Machines 18, no. 2 (August 8, 2016): 185–212. http://dx.doi.org/10.1007/s10710-016-9276-6.
Ziemke, Tom. "Radar image segmentation using recurrent artificial neural networks". Pattern Recognition Letters 17, no. 4 (April 1996): 319–34. http://dx.doi.org/10.1016/0167-8655(95)00128-x.
Theses / dissertations on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Kolen, John F. "Exploring the computational capabilities of recurrent neural networks". The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487853913100192.
Shao, Yuanlong. "Learning Sparse Recurrent Neural Networks in Language Modeling". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398942373.
Gudjonsson, Ludvik. "Comparison of two methods for evolving recurrent artificial neural networks for". Thesis, University of Skövde, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-155.
Texto completo da fonten this dissertation a comparison of two evolutionary methods for evolving ANNs for robot control is made. The methods compared are SANE with enforced sub-population and delta-coding, and marker-based encoding. In an attempt to speed up evolution, marker-based encoding is extended with delta-coding. The task selected for comparison is the hunter-prey task. This task requires the robot controller to posess some form of memory as the prey can move out of sensor range. Incremental evolution is used to evolve the complex behaviour that is required to successfully handle this task. The comparison is based on computational power needed for evolution, and complexity, robustness, and generalisation of the resulting ANNs. The results show that marker-based encoding is the most efficient method tested and does not need delta-coding to increase the speed of evolution process. Additionally the results indicate that delta-coding does not increase the speed of evolution with marker-based encoding.
Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks: implications for nativism". Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.
Napoli, Christian. "A-I: Artificial intelligence". Doctoral thesis, Università degli studi di Catania, 2016. http://hdl.handle.net/20.500.11769/490996.
Kramer, Gregory Robert. "An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation". Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.
Rallabandi, Pavan Kumar. "Processing hidden Markov models using recurrent neural networks for biological applications". Thesis, University of the Western Cape, 2013. http://hdl.handle.net/11394/4525.
Texto completo da fonteIn this thesis, we present a novel hybrid architecture by combining the most popular sequence recognition models such as Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). Though sequence recognition problems could be potentially modelled through well trained HMMs, they could not provide a reasonable solution to the complicated recognition problems. In contrast, the ability of RNNs to recognize the complex sequence recognition problems is known to be exceptionally good. It should be noted that in the past, methods for applying HMMs into RNNs have been developed by other researchers. However, to the best of our knowledge, no algorithm for processing HMMs through learning has been given. Taking advantage of the structural similarities of the architectural dynamics of the RNNs and HMMs, in this work we analyze the combination of these two systems into the hybrid architecture. To this end, the main objective of this study is to improve the sequence recognition/classi_cation performance by applying a hybrid neural/symbolic approach. In particular, trained HMMs are used as the initial symbolic domain theory and directly encoded into appropriate RNN architecture, meaning that the prior knowledge is processed through the training of RNNs. Proposed algorithm is then implemented on sample test beds and other real time biological applications.
Salihoglu, Utku. "Toward a brain-like memory with recurrent neural networks". Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210221.
Based on these assumptions, this thesis provides a computer model of a neural-network simulation of a brain-like memory. It first shows experimentally that the more information is to be stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos appears to be not the cause but the consequence of the learning; however, it is a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves the semantics of the attractors to be associated with the feeding data unprescribed, while the other defines it a priori. Both algorithms show good results, even though the first one is more robust and has a greater storage capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this idea is not new, the mechanisms underlying the formation of cell assemblies are poorly understood and, so far, there are no biologically plausible algorithms that can explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining a fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies with a slower feedback signal that stabilizes the cell assemblies by learning the feedforward input connections. This last mechanism is inspired by the retroaxonal hypothesis.
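The core mechanism in this abstract — storing patterns as attractors of a recurrent network via a Hebbian rule — can be illustrated with a minimal Hopfield-style sketch. This uses fixed-point attractors rather than the thesis's cyclic ones, and the six-unit pattern is a made-up example:

```python
def train_hebbian(patterns):
    """Hebbian outer-product rule: W[i][j] accumulates p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=5):
    """Synchronous sign updates; a stored pattern is a fixed-point attractor."""
    s = list(state)
    n = len(s)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

pattern = [1, 1, -1, -1, 1, -1]   # hypothetical stored memory (+1/-1 units)
W = train_hebbian([pattern])
corrupted = [-1] + pattern[1:]    # flip the first bit
recovered = recall(W, corrupted)  # the dynamics fall back into the attractor
```

The thesis's contribution sits on top of dynamics like these: with many stored attractors, the background regime between them becomes chaotic, and the learning rules shape which attractors exist.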
Yang, Jidong. "Road crack condition performance modeling using recurrent Markov chains and artificial neural networks". [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000567.
Willmott, Devin. "Recurrent Neural Networks and Their Applications to RNA Secondary Structure Inference". UKnowledge, 2018. https://uknowledge.uky.edu/math_etds/58.
Books on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Jain, Lakhmi C., and Larry Medsker. Recurrent Neural Networks: Design and Applications. Taylor & Francis Group, 1999.
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012.
Integration Of Swarm Intelligence And Artificial Neutral Network. World Scientific Publishing Company, 2011.
Encontre o texto completo da fonteSangeetha, V., e S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.
Zhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neutral Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part II. Springer London, Limited, 2007.
Zhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neutral Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part I. Springer London, Limited, 2007.
Liu, Derong, Shumin Fei, Zeng-Guang Hou, Huaguang Zhang, and Changyin Sun, eds. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neutral Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part I (Lecture Notes in Computer Science). Springer, 2007.
Churchland, Paul M. The Engine of Reason, the Seat of the Soul. The MIT Press, 1995. http://dx.doi.org/10.7551/mitpress/2758.001.0001.
Book chapters on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Recurrent Hopfield Networks". In Artificial Neural Networks, 139–55. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_7.
Krauss, Patrick. "Recurrent Neural Networks". In Artificial Intelligence and Brain Research, 131–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68980-6_14.
Lynch, Stephen. "Recurrent Neural Networks". In Python for Scientific Computing and Artificial Intelligence, 267–84. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003285816-19.
Sharma, Arpana, Kanupriya Goswami, Vinita Jindal, and Richa Gupta. "A Road Map to Artificial Neural Network". In Recurrent Neural Networks, 3–21. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-2.
Kathirvel, A., Debashreet Das, Stewart Kirubakaran, M. Subramaniam, and S. Naveneethan. "Artificial Intelligence–Based Mobile Bill Payment System Using Biometric Fingerprint". In Recurrent Neural Networks, 233–45. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-16.
da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Forecast of Stock Market Trends Using Recurrent Networks". In Artificial Neural Networks, 221–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_13.
Lindgren, Kristian, Anders Nilsson, Mats G. Nordahl, and Ingrid Råde. "Evolving Recurrent Neural Networks". In Artificial Neural Nets and Genetic Algorithms, 55–62. Vienna: Springer Vienna, 1993. http://dx.doi.org/10.1007/978-3-7091-7533-0_9.
Schäfer, Anton Maximilian, and Hans Georg Zimmermann. "Recurrent Neural Networks Are Universal Approximators". In Artificial Neural Networks – ICANN 2006, 632–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_66.
Riaza, Ricardo, and Pedro J. Zufiria. "Time-Scaling in Recurrent Neural Learning". In Artificial Neural Networks — ICANN 2002, 1371–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_221.
Hammer, Barbara. "On the Generalization Ability of Recurrent Networks". In Artificial Neural Networks — ICANN 2001, 731–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_102.
Conference papers on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Cao, Zhu, Linlin Wang, and Gerard de Melo. "Multiple-Weight Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/205.
Wu, Hao, Ziyang Chen, Weiwei Sun, Baihua Zheng, and Wei Wang. "Modeling Trajectories with Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/430.
Omar, Tarek A., Nabih E. Bedewi, and Azim Eskandarian. "Recurrent Artificial Neural Networks for Crashworthiness Analysis". In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-1190.
Mak, M. W. "Speaker identification using modular recurrent neural networks". In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950519.
Chen, Yuexing, and Jiarun Li. "Recurrent Neural Networks algorithms and applications". In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE). IEEE, 2021. http://dx.doi.org/10.1109/icbase53849.2021.00015.
Sharma, Shambhavi. "Emotion Recognition from Speech using Artificial Neural Networks and Recurrent Neural Networks". In 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence). IEEE, 2021. http://dx.doi.org/10.1109/confluence51648.2021.9377192.
Lee, Jinhyuk, Hyunjae Kim, Miyoung Ko, Donghee Choi, Jaehoon Choi, and Jaewoo Kang. "Name Nationality Classification with Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/289.
Argun, Aykut, Tobias Thalheim, Frank Cichos, and Giovanni Volpe. "Calibration of force fields using recurrent neural networks". In Emerging Topics in Artificial Intelligence 2020, edited by Giovanni Volpe, Joana B. Pereira, Daniel Brunner, and Aydogan Ozcan. SPIE, 2020. http://dx.doi.org/10.1117/12.2567931.
"INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS ARE SUPER-TURING". In International Conference on Agents and Artificial Intelligence. SciTePress - Science and Technology Publications, 2012. http://dx.doi.org/10.5220/0003740603280333.
Swanston, D. J. "Relative order defines a topology for recurrent networks". In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950564.
Reports of organizations on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.