Ready-made bibliography on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the lists of relevant articles, books, dissertations, abstracts, and other scholarly sources on the topic "Artificial Neural Networks and Recurrent Neutral Networks".
Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read the abstract of the work online, whenever such details are provided in its metadata.
Journal articles on the topic "Artificial Neural Networks and Recurrent Neutral Networks"
Prathibha, G., Y. Kavya, P. Vinay Jacob, and L. Poojita. "Speech Emotion Recognition Using Deep Learning". International Journal of Scientific Research in Engineering and Management 8, no. 7 (July 4, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem36262.
Ali, Ayesha, Ateeq Ur Rehman, Ahmad Almogren, Elsayed Tag Eldin, and Muhammad Kaleem. "Application of Deep Learning Gated Recurrent Unit in Hybrid Shunt Active Power Filter for Power Quality Enhancement". Energies 15, no. 20 (October 13, 2022): 7553. http://dx.doi.org/10.3390/en15207553.
Chaudhary, Pranav Kumar, Aakash Kishore Chotrani, Raja Mohan, Mythili Boopathi, Piyush Ranjan, and Madhavi Najana. "AI in Fraud Detection: Evaluating the Efficacy of Artificial Intelligence in Preventing Financial Misconduct". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1332–38. http://dx.doi.org/10.52783/jes.1508.
Nassif, Ali Bou, Ismail Shahin, Mohammed Lataifeh, Ashraf Elnagar, and Nawel Nemmour. "Empirical Comparison between Deep and Classical Classifiers for Speaker Verification in Emotional Talking Environments". Information 13, no. 10 (September 27, 2022): 456. http://dx.doi.org/10.3390/info13100456.
Lee, Hong Jae, and Tae Seog Kim. "Comparison and Analysis of SNN and RNN Results for Option Pricing and Deep Hedging Using Artificial Neural Networks (ANN)". Academic Society of Global Business Administration 20, no. 5 (October 30, 2023): 146–78. http://dx.doi.org/10.38115/asgba.2023.20.5.146.
Sutskever, Ilya, and Geoffrey Hinton. "Temporal-Kernel Recurrent Neural Networks". Neural Networks 23, no. 2 (March 2010): 239–43. http://dx.doi.org/10.1016/j.neunet.2009.10.009.
Wang, Rui. "Generalisation of Feed-Forward Neural Networks and Recurrent Neural Networks". Applied and Computational Engineering 40, no. 1 (February 21, 2024): 242–46. http://dx.doi.org/10.54254/2755-2721/40/20230659.
Poudel, Sushan, and R. Anuradha. "Speech Command Recognition using Artificial Neural Networks". JOIV : International Journal on Informatics Visualization 4, no. 2 (May 26, 2020): 73. http://dx.doi.org/10.30630/joiv.4.2.358.
Turner, Andrew James, and Julian Francis Miller. "Recurrent Cartesian Genetic Programming of Artificial Neural Networks". Genetic Programming and Evolvable Machines 18, no. 2 (August 8, 2016): 185–212. http://dx.doi.org/10.1007/s10710-016-9276-6.
Ziemke, Tom. "Radar image segmentation using recurrent artificial neural networks". Pattern Recognition Letters 17, no. 4 (April 1996): 319–34. http://dx.doi.org/10.1016/0167-8655(95)00128-x.
Pełny tekst źródłaRozprawy doktorskie na temat "Artificial Neural Networks and Recurrent Neutral Networks"
Kolen, John F. "Exploring the computational capabilities of recurrent neural networks". The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487853913100192.
Pełny tekst źródłaShao, Yuanlong. "Learning Sparse Recurrent Neural Networks in Language Modeling". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398942373.
Pełny tekst źródłaGudjonsson, Ludvik. "Comparison of two methods for evolving recurrent artificial neural networks for". Thesis, University of Skövde, University of Skövde, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-155.
Pełny tekst źródłan this dissertation a comparison of two evolutionary methods for evolving ANNs for robot control is made. The methods compared are SANE with enforced sub-population and delta-coding, and marker-based encoding. In an attempt to speed up evolution, marker-based encoding is extended with delta-coding. The task selected for comparison is the hunter-prey task. This task requires the robot controller to posess some form of memory as the prey can move out of sensor range. Incremental evolution is used to evolve the complex behaviour that is required to successfully handle this task. The comparison is based on computational power needed for evolution, and complexity, robustness, and generalisation of the resulting ANNs. The results show that marker-based encoding is the most efficient method tested and does not need delta-coding to increase the speed of evolution process. Additionally the results indicate that delta-coding does not increase the speed of evolution with marker-based encoding.
Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks : implications for nativism". Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.
Pełny tekst źródłaNAPOLI, CHRISTIAN. "A-I: Artificial intelligence". Doctoral thesis, Università degli studi di Catania, 2016. http://hdl.handle.net/20.500.11769/490996.
Pełny tekst źródłaKramer, Gregory Robert. "An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation". Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.
Pełny tekst źródłaRallabandi, Pavan Kumar. "Processing hidden Markov models using recurrent neural networks for biological applications". Thesis, University of the Western Cape, 2013. http://hdl.handle.net/11394/4525.
Pełny tekst źródłaIn this thesis, we present a novel hybrid architecture by combining the most popular sequence recognition models such as Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). Though sequence recognition problems could be potentially modelled through well trained HMMs, they could not provide a reasonable solution to the complicated recognition problems. In contrast, the ability of RNNs to recognize the complex sequence recognition problems is known to be exceptionally good. It should be noted that in the past, methods for applying HMMs into RNNs have been developed by other researchers. However, to the best of our knowledge, no algorithm for processing HMMs through learning has been given. Taking advantage of the structural similarities of the architectural dynamics of the RNNs and HMMs, in this work we analyze the combination of these two systems into the hybrid architecture. To this end, the main objective of this study is to improve the sequence recognition/classi_cation performance by applying a hybrid neural/symbolic approach. In particular, trained HMMs are used as the initial symbolic domain theory and directly encoded into appropriate RNN architecture, meaning that the prior knowledge is processed through the training of RNNs. Proposed algorithm is then implemented on sample test beds and other real time biological applications.
Salihoglu, Utku. "Toward a brain-like memory with recurrent neural networks". Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210221.
Based on these assumptions, this thesis provides a computer model of a neural-network simulation of a brain-like memory. It first shows experimentally that the more information is to be stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos appears to be not the cause but the consequence of the learning; however, it is a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves unprescribed the semantics of the attractors to be associated with the feeding data, while the other defines it a priori. Both algorithms show good results, even though the first is more robust and has a greater storage capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this idea is not new, the mechanisms underlying the formation of cell assemblies are poorly understood, and so far there are no biologically plausible algorithms that can explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution by combining fast Hebbian/anti-Hebbian learning of the network's recurrent connections, which creates new cell assemblies, with a slower feedback signal that stabilizes the cell assemblies by learning the feed-forward input connections. This last mechanism is inspired by the retroaxonal hypothesis.
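The underlying idea (Hebbian learning writing patterns into the recurrent connections so that they become attractors of the network dynamics) has a classical minimal illustration: the Hopfield model. The sketch below shows only that textbook version, not the chaotic-attractor or cell-assembly algorithms developed in the thesis.

```python
import numpy as np

def hebbian_store(patterns):
    """One-shot Hebbian learning: W = (1/N) * sum_p x_p x_p^T, zero self-coupling.
    Each stored +/-1 pattern becomes a fixed-point attractor of the dynamics."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=20):
    """Iterate the sign dynamics s <- sign(W s); a noisy cue falls into
    the basin of the nearest stored attractor."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0.0, 1.0, -1.0)
    return s
```

Storing a couple of random patterns in a few dozen units and flipping a few bits of one of them, recall typically reconstructs the stored pattern; loading more patterns degrades retrieval, which is the capacity question the thesis revisits with richer cyclic attractors.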
Yang, Jidong. "Road crack condition performance modeling using recurrent Markov chains and artificial neural networks". [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000567.
Pełny tekst źródłaWillmott, Devin. "Recurrent Neural Networks and Their Applications to RNA Secondary Structure Inference". UKnowledge, 2018. https://uknowledge.uky.edu/math_etds/58.
Pełny tekst źródłaKsiążki na temat "Artificial Neural Networks and Recurrent Neutral Networks"
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Jain, Lakhmi C., and Larry Medsker. Recurrent Neural Networks: Design and Applications. Taylor & Francis Group, 1999.
Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012.
Integration of Swarm Intelligence and Artificial Neural Network. World Scientific Publishing Company, 2011.
Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.
Zhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part II. Springer London, Limited, 2007.
Zhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part I. Springer London, Limited, 2007.
Liu, Derong, Shumin Fei, Zeng-Guang Hou, Huaguang Zhang, and Changyin Sun, eds. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part I (Lecture Notes in Computer Science). Springer, 2007.
Churchland, Paul M. The Engine of Reason, the Seat of the Soul. The MIT Press, 1995. http://dx.doi.org/10.7551/mitpress/2758.001.0001.
Pełny tekst źródłaCzęści książek na temat "Artificial Neural Networks and Recurrent Neutral Networks"
da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Recurrent Hopfield Networks". In Artificial Neural Networks, 139–55. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_7.
Krauss, Patrick. "Recurrent Neural Networks". In Artificial Intelligence and Brain Research, 131–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68980-6_14.
Lynch, Stephen. "Recurrent Neural Networks". In Python for Scientific Computing and Artificial Intelligence, 267–84. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003285816-19.
Sharma, Arpana, Kanupriya Goswami, Vinita Jindal, and Richa Gupta. "A Road Map to Artificial Neural Network". In Recurrent Neural Networks, 3–21. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-2.
Kathirvel, A., Debashreet Das, Stewart Kirubakaran, M. Subramaniam, and S. Naveneethan. "Artificial Intelligence–Based Mobile Bill Payment System Using Biometric Fingerprint". In Recurrent Neural Networks, 233–45. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-16.
da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Forecast of Stock Market Trends Using Recurrent Networks". In Artificial Neural Networks, 221–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_13.
Lindgren, Kristian, Anders Nilsson, Mats G. Nordahl, and Ingrid Råde. "Evolving Recurrent Neural Networks". In Artificial Neural Nets and Genetic Algorithms, 55–62. Vienna: Springer Vienna, 1993. http://dx.doi.org/10.1007/978-3-7091-7533-0_9.
Schäfer, Anton Maximilian, and Hans Georg Zimmermann. "Recurrent Neural Networks Are Universal Approximators". In Artificial Neural Networks – ICANN 2006, 632–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_66.
Riaza, Ricardo, and Pedro J. Zufiria. "Time-Scaling in Recurrent Neural Learning". In Artificial Neural Networks — ICANN 2002, 1371–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_221.
Hammer, Barbara. "On the Generalization Ability of Recurrent Networks". In Artificial Neural Networks — ICANN 2001, 731–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_102.
Pełny tekst źródłaStreszczenia konferencji na temat "Artificial Neural Networks and Recurrent Neutral Networks"
Cao, Zhu, Linlin Wang, and Gerard de Melo. "Multiple-Weight Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/205.
Wu, Hao, Ziyang Chen, Weiwei Sun, Baihua Zheng, and Wei Wang. "Modeling Trajectories with Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/430.
Omar, Tarek A., Nabih E. Bedewi, and Azim Eskandarian. "Recurrent Artificial Neural Networks for Crashworthiness Analysis". In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-1190.
Mak, M. W. "Speaker identification using modular recurrent neural networks". In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950519.
Chen, Yuexing, and Jiarun Li. "Recurrent Neural Networks algorithms and applications". In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE). IEEE, 2021. http://dx.doi.org/10.1109/icbase53849.2021.00015.
Sharma, Shambhavi. "Emotion Recognition from Speech using Artificial Neural Networks and Recurrent Neural Networks". In 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence). IEEE, 2021. http://dx.doi.org/10.1109/confluence51648.2021.9377192.
Lee, Jinhyuk, Hyunjae Kim, Miyoung Ko, Donghee Choi, Jaehoon Choi, and Jaewoo Kang. "Name Nationality Classification with Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/289.
Argun, Aykut, Tobias Thalheim, Frank Cichos, and Giovanni Volpe. "Calibration of force fields using recurrent neural networks". In Emerging Topics in Artificial Intelligence 2020, edited by Giovanni Volpe, Joana B. Pereira, Daniel Brunner, and Aydogan Ozcan. SPIE, 2020. http://dx.doi.org/10.1117/12.2567931.
"INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS ARE SUPER-TURING". In International Conference on Agents and Artificial Intelligence. SciTePress - Science and Technology Publications, 2012. http://dx.doi.org/10.5220/0003740603280333.
Swanston, D. J. "Relative order defines a topology for recurrent networks". In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950564.
Pełny tekst źródłaRaporty organizacyjne na temat "Artificial Neural Networks and Recurrent Neutral Networks"
Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.