Academic literature on the topic "Reservoir computing networks"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Reservoir computing networks".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Reservoir computing networks"
Van der Sande, Guy, Daniel Brunner, and Miguel C. Soriano. "Advances in photonic reservoir computing". Nanophotonics 6, no. 3 (May 12, 2017): 561–76. http://dx.doi.org/10.1515/nanoph-2016-0132.
Ghosh, Sanjib, Kohei Nakajima, Tanjung Krisnanda, Keisuke Fujii, and Timothy C. H. Liew. "Quantum Neuromorphic Computing with Reservoir Computing Networks". Advanced Quantum Technologies 4, no. 9 (July 9, 2021): 2100053. http://dx.doi.org/10.1002/qute.202100053.
Röhm, André, Lina Jaurigue, and Kathy Lüdge. "Reservoir Computing Using Laser Networks". IEEE Journal of Selected Topics in Quantum Electronics 26, no. 1 (January 2020): 1–8. http://dx.doi.org/10.1109/jstqe.2019.2927578.
Damicelli, Fabrizio, Claus C. Hilgetag, and Alexandros Goulas. "Brain connectivity meets reservoir computing". PLOS Computational Biology 18, no. 11 (November 16, 2022): e1010639. http://dx.doi.org/10.1371/journal.pcbi.1010639.
Antonik, Piotr, Serge Massar, and Guy Van der Sande. "Photonic reservoir computing using delay dynamical systems". Photoniques, no. 104 (September 2020): 45–48. http://dx.doi.org/10.1051/photon/202010445.
Tran, Dat, and Christof Teuscher. "Computational Capacity of Complex Memcapacitive Networks". ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–25. http://dx.doi.org/10.1145/3445795.
Senn, Christoph Walter, and Itsuo Kumazawa. "Abstract Reservoir Computing". AI 3, no. 1 (March 10, 2022): 194–210. http://dx.doi.org/10.3390/ai3010012.
Hart, Joseph D., Laurent Larger, Thomas E. Murphy, and Rajarshi Roy. "Delayed dynamical systems: networks, chimeras and reservoir computing". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377, no. 2153 (July 22, 2019): 20180123. http://dx.doi.org/10.1098/rsta.2018.0123.
Pathak, Shantanu S., and D. Rajeswara Rao. "Reservoir Computing for Healthcare Analytics". International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 240. http://dx.doi.org/10.14419/ijet.v7i2.32.15576.
Andrecut, M. "Reservoir computing on the hypersphere". International Journal of Modern Physics C 28, no. 07 (July 2017): 1750095. http://dx.doi.org/10.1142/s0129183117500954.
Dissertations and theses on the topic "Reservoir computing networks"
Fu, Kaiwei. "Reservoir Computing with Neuro-memristive Nanowire Networks". Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25900.
Kulkarni, Manjari S. "Memristor-based Reservoir Computing". PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/899.
Texto completoCanaday, Daniel M. "Modeling and Control of Dynamical Systems with Reservoir Computing". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu157469471458874.
Texto completoDai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.
Texto completoAlomar, Barceló Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems". Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism, and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. In addition, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics, and sensor networks. However, the hardware realization of ANNs with large neuron counts, as in RC, is a challenging task due to the large resource requirements of the operations involved. Despite the potential benefits of digital hardware circuits for RC-based neural processing, most implementations are realized in software on sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the various alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed, and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
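The abstract above describes the defining trait of reservoir computing: a fixed random recurrent network generates rich dynamics, and only a linear readout is trained. As an illustration only (not taken from the thesis; the task, sizes, and all parameter values are arbitrary), a minimal echo state network can be sketched in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: these weights are never trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Scale the spectral radius below 1 (a common heuristic for fading memory).
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with the input series u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.2 * np.arange(500))
X = run_reservoir(u[:-1])   # reservoir states, shape (499, n_res)
y = u[1:]                   # targets: the next input sample

# Ridge-regression readout: the only trained part of the model.
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("training MSE:", np.mean((pred - y) ** 2))
```

Because training reduces to a linear least-squares problem, the expensive recurrent part stays fixed, which is precisely what makes RC attractive for the resource-constrained hardware implementations the thesis studies.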
Vincent-Lamarre, Philippe. "Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39960.
Almassian, Amin. "Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons". PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2724.
Texto completoBazzanella, Davide. "Microring Based Neuromorphic Photonics". Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/344624.
Röhm, André [Verfasser], Kathy [Akademischer Betreuer] Lüdge, Kathy [Gutachter] Lüdge, and Ingo [Gutachter] Fischer. "Symmetry-Breaking bifurcations and reservoir computing in regular oscillator networks / André Röhm ; Gutachter: Kathy Lüdge, Ingo Fischer ; Betreuer: Kathy Lüdge". Berlin: Technische Universität Berlin, 2019. http://d-nb.info/1183789491/34.
Enel, Pierre. "Représentation dynamique dans le cortex préfrontal : comparaison entre reservoir computing et neurophysiologie du primate". PhD thesis, Université Claude Bernard - Lyon I, 2014. http://tel.archives-ouvertes.fr/tel-01056696.
Texto completoLibros sobre el tema "Reservoir computing networks"
Brunner, Daniel, Miguel C. Soriano, and Guy Van der Sande. Photonic Reservoir Computing: Optical Recurrent Neural Networks. Walter de Gruyter GmbH, 2019.
Book chapters on the topic "Reservoir computing networks"
Sergio, Anderson Tenório, and Teresa B. Ludermir. "PSO for Reservoir Computing Optimization". In Artificial Neural Networks and Machine Learning – ICANN 2012, 685–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_86.
Verstraeten, David, and Benjamin Schrauwen. "On the Quantification of Dynamics in Reservoir Computing". In Artificial Neural Networks – ICANN 2009, 985–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_101.
Atencia, Miguel, Claudio Gallicchio, Gonzalo Joya, and Alessio Micheli. "Time Series Clustering with Deep Reservoir Computing". In Artificial Neural Networks and Machine Learning – ICANN 2020, 482–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61616-8_39.
Colla, Valentina, Ismael Matino, Stefano Dettori, Silvia Cateni, and Ruben Matino. "Reservoir Computing Approaches Applied to Energy Management in Industry". In Engineering Applications of Neural Networks, 66–79. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20257-6_6.
Kobayashi, Taisuke. "Practical Fractional-Order Neuron Dynamics for Reservoir Computing". In Artificial Neural Networks and Machine Learning – ICANN 2018, 116–25. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01424-7_12.
Schliebs, Stefan, Nikola Kasabov, Dave Parry, and Doug Hunt. "Towards a Wearable Coach: Classifying Sports Activities with Reservoir Computing". In Engineering Applications of Neural Networks, 233–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41013-0_24.
Hinaut, Xavier, and Peter F. Dominey. "On-Line Processing of Grammatical Structure Using Reservoir Computing". In Artificial Neural Networks and Machine Learning – ICANN 2012, 596–603. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_75.
Oyebode, Oluwaseun, and Josiah Adeyemo. "Reservoir Inflow Forecasting Using Differential Evolution Trained Neural Networks". In Advances in Intelligent Systems and Computing, 307–19. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07494-8_21.
Tietz, Stephan, Doreen Jirak, and Stefan Wermter. "A Reservoir Computing Framework for Continuous Gesture Recognition". In Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions, 7–18. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30493-5_1.
Locquet, Jean-Pierre. "Overview on the PHRESCO Project: PHotonic REServoir COmputing". In Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions, 149–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30493-5_14.
Conference papers on the topic "Reservoir computing networks"
Bacciu, Davide, Daniele Di Sarli, Pouria Faraji, Claudio Gallicchio, and Alessio Micheli. "Federated Reservoir Computing Neural Networks". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534035.
Heroux, Jean Benoit, Hidetoshi Numata, Naoki Kanazawa, and Daiju Nakano. "Optoelectronic Reservoir Computing with VCSEL". In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489757.
Dong, Jonathan, Erik Borve, Mushegh Rafayelyan, and Michael Unser. "Asymptotic Stability in Reservoir Computing". In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892302.
Wyffels, Francis, Benjamin Schrauwen, David Verstraeten, and Dirk Stroobandt. "Band-pass Reservoir Computing". In 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008 - Hong Kong). IEEE, 2008. http://dx.doi.org/10.1109/ijcnn.2008.4634252.
Fu, Kaiwei, Ruomin Zhu, Alon Loeffler, Joel Hochstetter, Adrian Diaz-Alvarez, Adam Stieg, James Gimzewski, Tomonobu Nakayama, and Zdenka Kuncic. "Reservoir Computing with Neuromemristive Nanowire Networks". In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207727.
Mujal, Pere, Johannes Nokkala, Rodrigo Martinez-Peña, Jorge Garcia Beni, Gian Luca Giorgi, Miguel C. Cornelles-Soriano, and Roberta Zambrini. "Quantum reservoir computing in bosonic networks". In Emerging Topics in Artificial Intelligence (ETAI) 2021, edited by Giovanni Volpe, Joana B. Pereira, Daniel Brunner, and Aydogan Ozcan. SPIE, 2021. http://dx.doi.org/10.1117/12.2596177.
Gallicchio, Claudio. "Sparsity in Reservoir Computing Neural Networks". In 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 2020. http://dx.doi.org/10.1109/inista49547.2020.9194611.
Zhu, Ruomin, Sam Lilak, Alon Loeffler, Joseph Lizier, Adam Stieg, James Gimzewski, and Zdenka Kuncic. "Reservoir Computing with Neuromorphic Nanowire Networks". In Neuromorphic Materials, Devices, Circuits and Systems. València: FUNDACIO DE LA COMUNITAT VALENCIANA SCITO, 2023. http://dx.doi.org/10.29363/nanoge.neumatdecas.2023.055.
Röhm, André, and Kathy Lüdge. "Reservoir computing with delay in structured networks". In Neuro-inspired Photonic Computing, edited by Marc Sciamanna and Peter Bienstman. SPIE, 2018. http://dx.doi.org/10.1117/12.2307159.
Laporte, Floris, Joni Dambre, and Peter Bienstman. "Reservoir computing with signal-mixing cavities". In 2017 19th International Conference on Transparent Optical Networks (ICTON). IEEE, 2017. http://dx.doi.org/10.1109/icton.2017.8024990.