Academic literature on the topic 'Hopfield network; Neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Hopfield network; Neural networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Hopfield network; Neural networks"

1

Wilson, Robert C. "Parallel Hopfield Networks." Neural Computation 21, no. 3 (March 2009): 831–50. http://dx.doi.org/10.1162/neco.2008.03-07-496.

Full text
Abstract:
We introduce a novel type of neural network, termed the parallel Hopfield network, that can simultaneously effect the dynamics of many different, independent Hopfield networks in parallel in the same piece of neural hardware. Numerically we find that under certain conditions, each Hopfield subnetwork has a finite memory capacity approaching that of the equivalent isolated attractor network, while a simple signal-to-noise analysis sheds qualitative, and some quantitative, insight into the workings (and failures) of the system.
APA, Harvard, Vancouver, ISO, and other styles
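Most entries in this list assume familiarity with the basic Hopfield recipe: store bipolar patterns with the Hebbian rule, then let asynchronous sign updates settle into an attractor. A minimal sketch of that standard model (a generic illustration, not Wilson's parallel variant):

```python
import numpy as np

def train_hebbian(patterns):
    """Hebbian weight matrix for a set of bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, sweeps=10):
    """Asynchronous sign updates until the state settles into an attractor."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-neuron pattern and recover it from a one-bit-corrupted probe.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hebbian(pattern[None, :])
probe = pattern.copy()
probe[0] = -probe[0]  # flip one bit
print(np.array_equal(recall(W, probe), pattern))  # → True
```

Under the conditions studied in the paper, each subnetwork of a parallel Hopfield network behaves roughly like an independent copy of this dynamics sharing the same hardware.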
2

Kobayashi, Masaki. "Storage Capacities of Twin-Multistate Quaternion Hopfield Neural Networks." Computational Intelligence and Neuroscience 2018 (November 1, 2018): 1–5. http://dx.doi.org/10.1155/2018/1275290.

Full text
Abstract:
A twin-multistate quaternion Hopfield neural network (TMQHNN) is a multistate Hopfield model and can store multilevel information, such as image data. Storage capacity is an important problem of Hopfield neural networks. Jankowski et al. approximated the crosstalk terms of complex-valued Hopfield neural networks (CHNNs) by the 2-dimensional normal distributions and evaluated their storage capacities. In this work, we evaluate the storage capacities of TMQHNNs based on their idea.
APA, Harvard, Vancouver, ISO, and other styles
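As background for the signal-to-noise style of argument the abstract builds on, here is the standard crosstalk decomposition for an ordinary real-valued Hopfield network with $N$ bipolar neurons storing $P$ random patterns $\xi^{\mu}$ under the Hebbian rule (a textbook derivation, not the paper's quaternionic analysis):

```latex
h_i = \underbrace{\xi_i^{1}}_{\text{signal}}
    + \underbrace{\frac{1}{N}\sum_{\mu=2}^{P}\sum_{j\neq i}
        \xi_i^{\mu}\,\xi_j^{\mu}\,\xi_j^{1}}_{\text{crosstalk}},
\qquad
\text{crosstalk} \approx \mathcal{N}\!\left(0,\ \tfrac{P}{N}\right).
```

Treating the crosstalk as Gaussian gives a bit-error probability of roughly $Q\big(\sqrt{N/P}\big)$, which is the kind of estimate Jankowski et al. adapted to the complex-valued case and the paper extends to twin-multistate quaternion networks.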
3

Antipova, E. S., and S. A. Rashkovskiy. "Autoassociative Hamming Neural Network." Nelineinaya Dinamika 17, no. 2 (2021): 175–93. http://dx.doi.org/10.20537/nd210204.

Full text
Abstract:
An autoassociative neural network is suggested which is based on the calculation of Hamming distances, while the principle of its operation is similar to that of the Hopfield neural network. Using standard patterns as an example, we compare the efficiency of pattern recognition for the autoassociative Hamming network and the Hopfield network. It is shown that the autoassociative Hamming network successfully recognizes standard patterns with a degree of distortion up to 40% and more than 60%, while the Hopfield network ceases to recognize the same patterns with a degree of distortion of more than 25% and less than 75%. A scheme of the autoassociative Hamming neural network based on McCulloch–Pitts formal neurons is proposed. It is shown that the autoassociative Hamming network can be considered as a dynamical system which has attractors that correspond to the reference patterns. The Lyapunov function of this dynamical system is found and the equations of its evolution are derived.
APA, Harvard, Vancouver, ISO, and other styles
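The decision rule at the heart of a Hamming-style associative memory is simply "return the stored pattern closest in Hamming distance to the probe". A toy sketch of that rule (the paper's network computes it with McCulloch–Pitts neurons; this direct computation is only an illustration):

```python
import numpy as np

def hamming_recall(stored, probe):
    # Hamming distance = number of disagreeing components.
    distances = np.sum(stored != probe, axis=1)
    return stored[np.argmin(distances)]

stored = np.array([[1, 1, -1, -1, 1, -1],
                   [-1, 1, 1, -1, -1, 1]])
probe = np.array([1, 1, -1, 1, 1, -1])   # first pattern with one flipped bit
print(np.array_equal(hamming_recall(stored, probe), stored[0]))  # → True
```

Because the decision depends only on distances, recall degrades gracefully as distortion grows, which is the property the comparison in the abstract exploits.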
4

ISOKAWA, TEIJIRO, HARUHIKO NISHIMURA, NAOTAKE KAMIURA, and NOBUYUKI MATSUI. "ASSOCIATIVE MEMORY IN QUATERNIONIC HOPFIELD NEURAL NETWORK." International Journal of Neural Systems 18, no. 02 (April 2008): 135–45. http://dx.doi.org/10.1142/s0129065708001440.

Full text
Abstract:
Associative memory networks based on quaternionic Hopfield neural network are investigated in this paper. These networks are composed of quaternionic neurons, and input, output, threshold, and connection weights are represented in quaternions, which is a class of hypercomplex number systems. The energy function of the network and the Hebbian rule for embedding patterns are introduced. The stable states and their basins are explored for the networks with three neurons and four neurons. It is clarified that there exist at most 16 stable states, called multiplet components, as the degenerated stored patterns, and each of these states has its basin in the quaternionic networks.
APA, Harvard, Vancouver, ISO, and other styles
5

SUKHASWAMI, M. B., P. SEETHARAMULU, and ARUN K. PUJARI. "RECOGNITION OF TELUGU CHARACTERS USING NEURAL NETWORKS." International Journal of Neural Systems 06, no. 03 (September 1995): 317–57. http://dx.doi.org/10.1142/s0129065795000238.

Full text
Abstract:
The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different “hands” in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
APA, Harvard, Vancouver, ISO, and other styles
6

Savin, Daniela, Sabah Alkass, and Paul Fazio. "Construction resource leveling using neural networks." Canadian Journal of Civil Engineering 23, no. 4 (August 1, 1996): 917–25. http://dx.doi.org/10.1139/l96-898.

Full text
Abstract:
A neural network model for construction resource leveling is developed and discussed. The model is derived by mapping an augmented Lagrangian multiplier optimization formulation of a resource leveling problem onto a discrete-time Hopfield net. The resulting neural network model consists of two main blocks. Specifically, it consists of a discrete-time Hopfield neural network block, and a control block for the adjustment of Lagrange multipliers in the augmented Lagrangian multiplier optimization, and for the computation of the new set of weights of the neural network block. An experimental verification of the proposed artificial neural network model is also provided. Key words: neural networks in construction, resource leveling, construction management, project management.
APA, Harvard, Vancouver, ISO, and other styles
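The mapping described here relies on the standard discrete Hopfield energy, which decreases under asynchronous updates and therefore acts as a Lyapunov function; a constrained objective is encoded by matching its terms to this quadratic form (generic notation, not the paper's specific formulation):

```latex
E(\mathbf{s}) \;=\; -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j
               \;+\; \sum_i \theta_i s_i,
\qquad s_i \in \{-1, +1\},\quad w_{ij} = w_{ji}.
```

Each asynchronous update $s_i \leftarrow \operatorname{sign}\big(\sum_j w_{ij} s_j - \theta_i\big)$ can only lower $E$, so the net converges to a local minimum; in an augmented-Lagrangian formulation such as this paper's, the objective and penalty terms determine $w_{ij}$ and $\theta_i$, and the control block re-derives them as the multipliers are adjusted.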
7

Fan, Tongke. "Research on Optimal CDMA Multiuser Detection Based on Stochastic Hopfield Neural Network." Recent Patents on Computer Science 12, no. 3 (May 8, 2019): 233–40. http://dx.doi.org/10.2174/2213275912666181210103742.

Full text
Abstract:
Background: Most common multi-user detection techniques suffer from heavy computation and slow operation. Hopfield neural networks offer high-speed search and parallel processing, but are prone to convergence to local minima. Objective: The stochastic Hopfield neural network avoids local convergence by introducing noise into the state variables and thereby achieves optimal detection. Methods: Based on a study of the CDMA communication model, this paper formulates and models the multi-user detection problem. A new stochastic Hopfield neural network is then obtained by introducing a stochastic disturbance into the traditional Hopfield neural network. Finally, CDMA multi-user detection is simulated. Conclusion: The results show that introducing a stochastic disturbance helps the network jump out of local minima, thus reaching the global minimum and improving the performance of the neural network.
APA, Harvard, Vancouver, ISO, and other styles
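The escape mechanism described in the abstract can be illustrated generically: add noise to each neuron's local field and shrink it over time, so the network can leave shallow minima early on and settle later. A small sketch with assumed parameters (an annealing-style schedule, not the paper's detector):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_update(W, state, T0=1.0, decay=0.9, sweeps=60):
    """Hopfield updates with additive Gaussian noise on the local field.
    The noise amplitude decays each sweep: early sweeps can kick the state
    out of shallow local minima, late quiet sweeps let it settle."""
    state = state.copy()
    T = T0
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            field = W[i] @ state + rng.normal(0.0, T)
            state[i] = 1 if field >= 0 else -1
        T *= decay
    return state

# One stored pattern: from a random start the annealed dynamics end up in
# the pattern or its mirror image (an equally deep minimum).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern) / len(pattern)
np.fill_diagonal(W, 0)
final = stochastic_update(W, rng.choice([-1, 1], size=8))
print(abs(final @ pattern) == len(pattern))  # → True
```

Once the noise has decayed, the last sweeps are effectively deterministic, so the run always terminates in an attractor.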
8

Kobayashi, Masaki. "Bicomplex Projection Rule for Complex-Valued Hopfield Neural Networks." Neural Computation 32, no. 11 (November 2020): 2237–48. http://dx.doi.org/10.1162/neco_a_01320.

Full text
Abstract:
A complex-valued Hopfield neural network (CHNN) with a multistate activation function is a multistate model of neural associative memory. The weight parameters need a lot of memory resources. Twin-multistate activation functions were introduced to quaternion- and bicomplex-valued Hopfield neural networks. Since their architectures are much more complicated than that of CHNN, the architecture should be simplified. In this work, the number of weight parameters is reduced by bicomplex projection rule for CHNNs, which is given by the decomposition of bicomplex-valued Hopfield neural networks. Computer simulations support that the noise tolerance of CHNN with a bicomplex projection rule is equal to or even better than that of quaternion- and bicomplex-valued Hopfield neural networks. By computer simulations, we find that the projection rule for hyperbolic-valued Hopfield neural networks in synchronous mode maintains a high noise tolerance.
APA, Harvard, Vancouver, ISO, and other styles
9

YAN, JUN-JUH, TEH-LU LIAO, JUI-SHENG LIN, and CHAO-JUNG CHENG. "SYNCHRONIZATION CONTROL OF NEURAL NETWORKS SUBJECT TO TIME-VARYING DELAYS AND INPUT NONLINEARITY." International Journal of Bifurcation and Chaos 16, no. 12 (December 2006): 3643–54. http://dx.doi.org/10.1142/s0218127406017038.

Full text
Abstract:
This paper investigates the synchronization problem for a particular class of neural networks subject to time-varying delays and input nonlinearity. Using the variable structure control technique, a memoryless decentralized control law is established which guarantees exponential synchronization even when input nonlinearity is present. The proposed controller is suitable for application in delayed cellular neural networks and Hopfield neural networks with no restriction on the derivative of the time-varying delays. A two-dimensional cellular neural network and a four-dimensional Hopfield neural network, both with time-varying delays, are presented as illustrative examples to demonstrate the effectiveness of the proposed synchronization scheme.
APA, Harvard, Vancouver, ISO, and other styles
10

Singh, Kirti, Suju M. George, and P. Rambabu. "Use of a System of Recurrent Neural Networks for Solving the Cell Placement Problem in VLSI Design." International Journal on Artificial Intelligence Tools 06, no. 01 (March 1997): 15–35. http://dx.doi.org/10.1142/s0218213097000037.

Full text
Abstract:
Cell placement in VLSI design is an NP-complete problem. In this paper, we have tried to solve the standard cell placement problem using the Hopfield neural network model. Furthermore, a new system of coupled recurrent neural networks, which was designed to eliminate the drawbacks of the Hopfield neural network, is introduced. The performance of Hopfield networks with discrete and graded neurons is also investigated. The energy function corresponding to the chosen representation is given and the weight matrix and the inputs needed for the network are also computed in this work. Several different combinations of parameters were examined to find the optimum set of parameters which results in better and faster convergence. To show the effectiveness of our methods, cell placement problems up to 30 cells are considered and the results are compared with the results obtained by a genetic algorithm. Our results show that a system of coupled neural networks could be an efficient way to overcome the limitation of recurrent neural networks which consider only bilinear forms of the energy function.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Hopfield network; Neural networks"

1

Brouwer, Roelof K. "Pattern recognition using a generalised discrete Hopfield network." Thesis, University of Warwick, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Smeda, Adel Abdullah. "Application of the Hopfield neural network in routing for computer networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39701.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tseng, Hung-Li. "Computational Complexity of Hopfield Networks." Thesis, University of North Texas, 1998. https://digital.library.unt.edu/ark:/67531/metadc278272/.

Full text
Abstract:
There are three main results in this dissertation. They are PLS-completeness of discrete Hopfield network convergence with eight different restrictions, (degree 3, bipartite and degree 3, 8-neighbor mesh, dual of the knight's graph, hypercube, butterfly, cube-connected cycles and shuffle-exchange), exponential convergence behavior of discrete Hopfield network, and simulation of Turing machines by discrete Hopfield Network.
APA, Harvard, Vancouver, ISO, and other styles
4

Cheng, Chih Kang. "Hardware implementation of the complex Hopfield neural network." CSUSB ScholarWorks, 1995. https://scholarworks.lib.csusb.edu/etd-project/1016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Breese, Amanda Jane. "Optics, optimisation and the Hopfield neural network." Thesis, University of Reading, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Halabian, Faezeh. "An Enhanced Learning for Restricted Hopfield Networks." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42271.

Full text
Abstract:
This research investigates developing a training method for the Restricted Hopfield Network (RHN), a subcategory of Hopfield networks. Hopfield networks are recurrent neural networks proposed in 1982 by John Hopfield. They are useful for applications such as pattern restoration, pattern completion/generalization, and pattern association. In this study, we propose an enhanced training method for RHN which not only improves the convergence of the training sub-routine, but is also shown to enhance the learning capability of the network. In particular, after describing the architecture and components of the model, we propose a modified variant of SPSA which, in conjunction with back-propagation through time, results in a training algorithm with enhanced convergence for RHN. The trained network is also shown to achieve better memory recall in the presence of noisy/distorted input. We perform several experiments, using various datasets, to verify the convergence of the training sub-routine, evaluate the impact of different parameters of the model, and compare the performance of the trained RHN in recreating distorted input patterns with that of a conventional RBM, a Hopfield network, and other training methods.
APA, Harvard, Vancouver, ISO, and other styles
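SPSA, which the thesis modifies, estimates a gradient from just two loss evaluations by perturbing all parameters simultaneously with a random ±1 vector. A plain (unmodified) SPSA step on a toy quadratic, with illustrative constant gains `a` and `c` (real SPSA schedules decay them over iterations):

```python
import numpy as np

rng = np.random.default_rng(1)

def spsa_step(loss, theta, a=0.1, c=0.1):
    """One SPSA update: two loss evaluations yield a stochastic gradient
    estimate for the whole parameter vector at once."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) * (1.0 / delta)
    return theta - a * g_hat

# Minimize a toy quadratic: the iterates drift toward the optimum at 3.
loss = lambda t: np.sum((t - 3.0) ** 2)
theta = np.zeros(4)
for _ in range(200):
    theta = spsa_step(loss, theta)
print(loss(theta) < 1e-2)  # → True
```

The appeal for training recurrent models is that the cost per step is two forward evaluations, independent of the number of parameters.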
7

Luo, Dexiang. "Occluded object discrimination by a modified Hopfield neural network." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0020/MQ55520.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Garcia, Garcia Núria 1958. "Radio Resource Management strategies based hopfield neural networks." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7556.

Full text
Abstract:
This doctoral thesis falls within the area of Radio Resource Management in 3G, B3G, and 4G mobile communication systems. As is widely known, radio resources are scarce, particularly in cellular mobile communication systems, and efficient management of them is therefore mandatory. From a practical point of view, although these systems rely on standardized radio access technologies, the same is not true of the algorithms underlying radio resource management, so new realizations of those algorithms that prove more convenient within a given standard are always possible. That is the direction this thesis takes, and I believe it succeeds, introducing new Radio Resource Management strategies within the framework of CDMA multiple access in 3G systems, TDMA/CDMA in B3G systems, and OFDMA in 4G systems. After identifying the most common radio resource management functions and describing them briefly, the thesis gives an essentially self-contained account of the most relevant aspects of the WCDMA and OFDMA multiple access systems, detailing operating mechanisms that are later used in the definition and specification of the resource management algorithms themselves. A brief overview of neural networks follows, ending with a more detailed exposition of Hopfield Neural Networks, which are the guiding thread of this thesis.
In particular, the equations characterizing these networks as dynamical systems are described, and their convergence conditions are established through Lyapunov stability theorems and the definition of the Energy function. From the combination of the particularities of the WCDMA, TDMA, and OFDMA access systems with Hopfield neural networks, a series of algorithms operating in single-cell scenarios is developed, which I consider novel and briefly enumerate below. Admission control in the WCDMA uplink, via optimized management of the transmission rates assigned to the users sharing the access, who are allowed different profiles; aspects of the algorithm's robustness, and in particular its convergence, are also detailed. Constraints of maximum network load, fair spectrum sharing, and maximum power available at the mobile terminals are assumed, together with a real-time service at variable rates; blocking probability is used to exhibit the algorithm's performance. Rate management for users already admitted in the WCDMA uplink, guaranteeing them a defined satisfaction probability above a given value, based on the actual transmission rates assigned; the same real-time service and constraints as in admission are assumed. Rate management for users already admitted in the WCDMA downlink, guaranteeing them a maximum packet delivery delay; fair spectrum sharing and maximum power available at the base station are assumed, with an interactive service based on a traffic model for WWW services, and a reference algorithm is introduced for comparison.
Loss probability is the parameter used to assess that algorithm's performance. Combined management of real-time and interactive services in the WCDMA downlink incorporates part of the algorithms above, keeping the same traffic models and constraints; satisfaction and loss probabilities are used to capture the aggregate transmission rate and the delay, respectively. A common radio resource management algorithm is given for a B3G scenario in which a user can be served by more than one access, here WCDMA and TDMA. Finally, rate management algorithms are given for users already admitted in the OFDMA downlink, guaranteeing them a maximum packet delivery delay. The thesis also points toward promising future lines of research that would exploit the methodology developed here, moving from centralized single-cell scenarios to distributed multicell environments, in particular for OFDMA systems, the basis of 4G access.
APA, Harvard, Vancouver, ISO, and other styles
9

Bireddy, Chakradhar. "Hopfield Networks as an Error Correcting Technique for Speech Recognition." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc5551/.

Full text
Abstract:
I experimented with Hopfield networks in the context of a voice-based, query-answering system. Hopfield networks are used to store and retrieve patterns. I used this technique to store queries represented as natural language sentences and I evaluated the accuracy of the technique for error correction in a spoken question-answering dialog between a computer and a user. I show that the use of an auto-associative Hopfield network helps make the speech recognition system more fault tolerant. I also looked at the available encoding schemes to convert a natural language sentence into a pattern of zeroes and ones that can be stored in the Hopfield network reliably, and I suggest scalable data representations which allow storing a large number of queries.
APA, Harvard, Vancouver, ISO, and other styles
10

Barron, Kenneth Falconer. "Optics in Hopfield content addressable memories." Thesis, Heriot-Watt University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Hopfield network; Neural networks"

1

Ally, Afshan. A Hopfield neural network decoder for convolutional codes. Ottawa: National Library of Canada = Bibliothèque nationale du Canada, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Keeler, James David. Comparison between sparsely distributed memory and Hopfield-type neural network models. [Moffett Field, Calif.]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Coughlin, James P. Neural computation in Hopfield networks and Boltzmann machines. Newark: University of Delaware Press, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Collier, N. H. English-Japanese lexical transfer using a Hopfield neural network. Manchester: UMIST, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rojas, Raúl. Neural networks: A systematic introduction. Berlin: Springer-Verlag, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dayhoff, Judith E. Neural network architectures: An introduction. New York, N.Y: Van Nostrand Reinhold, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bharath, Ramachandran. Neural network computing. New York: Windcrest/McGraw-Hill, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Demuth, Howard B., and Mark H. Beale, eds. Neural network design. Boston: PWS Pub., 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cierniak, Robert. Nowe algorytmy rekonstrukcji obrazu z projekcji z zastosowaniem sieci neuronowych typu Hopfielda. Częstochowa: Wydawn. Politechniki Częstochowskiej, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Neural network parallel computing. Boston: Kluwer Academic Publishers, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Hopfield network; Neural networks"

1

Abe, Shigeo. "The Hopfield Network." In Neural Networks and Fuzzy Systems, 7–43. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6253-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Davalo, Eric, and Patrick Naïm. "The Hopfield Model." In Neural Networks, 63–80. London: Macmillan Education UK, 1991. http://dx.doi.org/10.1007/978-1-349-12312-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rojas, Raúl. "The Hopfield Model." In Neural Networks, 335–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-61068-4_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yi, Zhang, and K. K. Tan. "Hopfield Recurrent Neural Networks." In Network Theory and Applications, 15–32. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Müller, Berndt, and Joachim Reinhardt. "The Hopfield Network for p/N → 0." In Neural Networks, 156–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-97239-3_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Müller, Berndt, and Joachim Reinhardt. "The Hopfield Network for Finite p/N." In Neural Networks, 165–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-97239-3_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Müller, Berndt, Joachim Reinhardt, and Michael T. Strickland. "The Hopfield Network for p/N → 0." In Neural Networks, 201–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-57760-4_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Müller, Berndt, Joachim Reinhardt, and Michael T. Strickland. "The Hopfield Network for Finite p/N." In Neural Networks, 209–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-57760-4_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ansari, Nirwan, and Edwin Hou. "Hopfield Neural Networks." In Computational Intelligence for Optimization, 27–45. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6331-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Recurrent Hopfield Networks." In Artificial Neural Networks, 139–55. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Hopfield network; Neural networks"

1

Kojic, Nenad S. "Implementations of Hopfield neural network in communication networks." In 2013 21st Telecommunications Forum Telfor (TELFOR). IEEE, 2013. http://dx.doi.org/10.1109/telfor.2013.6716253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ueda, Yasuhiro, Masakazu Kawahara, Takashi Inoue, Yoko Uwate, and Yoshifumi Nishio. "Space-varying cellular neural networks designed by Hopfield neural network." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ding, Jin, Yong-zhi Sun, Ping Tan, and Yong Ning. "Detecting Communities in Networks Using Competitive Hopfield Neural Network." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dai, Zuo, and Jianzhong Cha. "A Hybrid Approach of Heuristic and Neural Network for Packing Problems." In ASME 1994 Design Technical Conferences collocated with the ASME 1994 International Computers in Engineering Conference and Exhibition and the ASME 1994 8th Annual Database Symposium. American Society of Mechanical Engineers, 1994. http://dx.doi.org/10.1115/detc1994-0119.

Full text
Abstract:
Abstract Artificial Neural Networks, particularly the Hopfield-Tank network, have been effectively applied to the solution of a variety of tasks formulated as large-scale combinatorial optimization problems, such as the Travelling Salesman Problem and the N Queens Problem [1]. The problem of optimally packing a set of geometries into a space with finite dimensions arises frequently in many applications and is far more difficult than the general NP-complete problems listed in [2]. Until now, within accepted time limits, it could only be solved with heuristic methods for very simple cases (e.g. 2D layout). In this paper we propose a heuristic-based Hopfield neural network designed to solve rectangular packing problems in two dimensions, which are still NP-complete [3]. By comparing the adequacy and efficiency of the results with those obtained by several other exact and heuristic approaches, it is concluded that the proposed method has great potential for solving 2D packing problems.
APA, Harvard, Vancouver, ISO, and other styles
5

Vallejo, Jose Refugio, and Eduardo Bayro-Corrochano. "Clifford Hopfield Neural Networks." In 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008 - Hong Kong). IEEE, 2008. http://dx.doi.org/10.1109/ijcnn.2008.4634314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jagota and Jakubowicz. "Knowledge representation in a multilayered Hopfield network." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pinheiro, R. F., D. Colón, and L. L. R. Reinoso. "Relating Lurie's problem, Hopfield's network and Alzheimer's disease." In Congresso Brasileiro de Automática - 2020. sbabra, 2020. http://dx.doi.org/10.48011/asba.v2i1.1624.

Full text
Abstract:
Alzheimer's disease is a degenerative brain disorder that affects millions of people around the world and still has no cure. A very common application of Hopfield neural networks is to simulate a human memory as well as to evaluate problems of degeneration and memory loss. On the other hand, from the control area, one has Lurie's problem, which emerged in the 1940s and still does not have a general solution, although many works and results have come in attempts to solve it. In this paper, the Hopfield network is shown to be a particular case of Lurie's problem; then one of the consequences of Alzheimer's disease, memory failure, is modeled using Hopfield networks; and finally a recent result on Lurie's problem is applied to the computationally modeled disease to correct the problem of memory loss. The correction is made using a controller obtained via DK-iteration. Simulations are performed to validate the computational model of the disease and to demonstrate the effectiveness of the application of the recent theorem on Lurie's problem. Beyond the results presented, this work aims to encourage research in the area, so that better diagnostic and treatment conditions may be achieved in the future.
APA, Harvard, Vancouver, ISO, and other styles
8

Convertino, G., M. Brattoli, A. Branca, and A. Distante. "Hopfield neural network for motion understanding." In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374824.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Atkins. "Sorting by Hopfield net." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Huang, Kou-Yuan, and Jia-Rong Yang. "Hopfield neural network for seismic velocity picking." In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014. http://dx.doi.org/10.1109/ijcnn.2014.6889512.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Hopfield network; Neural networks"

1

Wieselthier, Jeffrey E., and Craig M. Barnhart. The Application of Hopfield Neural Network Techniques to Problems of Routing and Scheduling in Packet Radio Networks. Fort Belvoir, VA: Defense Technical Information Center, November 1990. http://dx.doi.org/10.21236/ada229039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sgurev, Vassil. Artificial Neural Networks as a Network Flow with Capacities. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, September 2018. http://dx.doi.org/10.7546/crabs.2018.09.12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.

Full text
Abstract:
We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model in runtime can be flexibly and directly set to different bit-width, by truncating the least significant bits, to support dynamic speed and accuracy trade-off. When all layers are set to low bits, we show that the model achieved accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where in practice trade-offs between model accuracy and runtime efficiency are often sought. Previous literature presents solutions to train models at each individual fixed efficiency/accuracy trade-off point. But how to produce a model flexible in runtime precision is largely unexplored. When the demand of efficiency/accuracy trade-off varies from time to time or even dynamically changes in runtime, it is infeasible to re-train models accordingly, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this achievement is agnostic to model architectures. We experimentally validated our method with different deep network backbones (AlexNet-small, ResNet-20, ResNet-50) on different datasets (SVHN, CIFAR-10, ImageNet) and observed consistent results.
APA, Harvard, Vancouver, ISO, and other styles
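The core mechanism in the abstract above, switching runtime precision by truncating least significant bits of quantized weights, can be illustrated with a toy uniform quantizer. The function names and the 8-to-4-bit setting are illustrative assumptions, not the authors' implementation or training method.

```python
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Uniformly quantize weights in [-w_max, w_max] to integer codes."""
    levels = 2 ** bits - 1
    scaled = np.clip((w + w_max) / (2 * w_max), 0.0, 1.0)   # map to [0, 1]
    return np.round(scaled * levels).astype(np.int64)

def truncate(codes, from_bits, to_bits):
    """Drop least significant bits: an 8-bit code becomes a 4-bit one."""
    return codes >> (from_bits - to_bits)

def dequantize(codes, bits, w_max=1.0):
    levels = 2 ** bits - 1
    return (codes / levels) * (2 * w_max) - w_max

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, size=1000)
codes8 = quantize(w, 8)
codes4 = truncate(codes8, 8, 4)          # runtime bit-width switch, no re-training
err8 = np.abs(dequantize(codes8, 8) - w).mean()
err4 = np.abs(dequantize(codes4, 4) - w).mean()
print(err4 > err8)   # True: lower precision trades accuracy for efficiency
```

The point of the toy is that one stored set of high-precision codes serves every lower bit-width, which is the storage argument the abstract makes against keeping multiple dedicated models.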
4

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Full text
Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter dependent unitary transformations which acts on an input quantum state. For binary classification a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network’s predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near term gate model quantum computer where its power can be explored beyond what can be explored with simulation.
APA, Harvard, Vancouver, ISO, and other styles
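The circuit structure described in the QNN abstract above (a parameter-dependent unitary acting on an encoded input state, followed by a Pauli readout whose expectation predicts the binary label) can be mimicked classically for a single qubit. This is a toy NumPy simulation under simplifying assumptions, not the authors' QNN; the data, loss, and finite-difference training are illustrative choices.

```python
import numpy as np

def ry(t):
    """Single-qubit rotation about the Y axis."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

Z = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli-Z readout operator

def predict(x, theta):
    """Encode x, apply the trainable unitary, read out <Z> in [-1, 1]."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return float(state @ Z @ state)

# Two toy classes: label +1 for inputs near 0, -1 for inputs near pi.
xs = np.array([0.1, 0.3, 2.9, 3.0])
labels = np.array([1.0, 1.0, -1.0, -1.0])

theta, eps = 0.8, 1e-4
for _ in range(200):                      # squared-loss gradient descent
    grad = 0.0
    for x, y in zip(xs, labels):
        d = (predict(x, theta + eps) - predict(x, theta - eps)) / (2 * eps)
        grad += 2 * (predict(x, theta) - y) * d
    theta -= 0.1 * grad / len(xs)

print(all(np.sign(predict(x, theta)) == y for x, y in zip(xs, labels)))
```

A real QNN replaces the exact expectation with repeated measurements of the readout qubit on hardware, which is why the abstract emphasizes near-term gate-model processors.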
5

Arhin, Stephen, Babin Manandhar, Hamdiat Baba Adam, and Adam Gatiba. Predicting Bus Travel Times in Washington, DC Using Artificial Neural Networks (ANNs). Mineta Transportation Institute, April 2021. http://dx.doi.org/10.31979/mti.2021.1943.

Full text
Abstract:
Washington, DC is ranked second among cities in terms of highest public transit commuters in the United States, with approximately 9% of the working population using the Washington Metropolitan Area Transit Authority (WMATA) Metrobuses to commute. Deducing accurate travel times of these metrobuses is an important task for transit authorities to provide reliable service to their patrons. This study, using Artificial Neural Networks (ANN), developed prediction models for transit buses to assist decision-makers to improve service quality and patronage. For this study, we used six months of Automatic Vehicle Location (AVL) and Automatic Passenger Counting (APC) data for six Washington Metropolitan Area Transit Authority (WMATA) bus routes operating in Washington, DC. We developed regression models and Artificial Neural Network (ANN) models for predicting travel times of buses for different peak periods (AM, Mid-Day and PM). Our analysis included variables such as number of served bus stops, length of route between bus stops, average number of passengers in the bus, average dwell time of buses, and number of intersections between bus stops. We obtained ANN models for travel times by using an approximation technique incorporating two separate algorithms: Quasi-Newton and Levenberg-Marquardt. The training strategy for neural network models involved feed-forward and error back-propagation processes that minimized the generated errors. We also evaluated the models with a comparison of the Normalized Squared Errors (NSE). From the results, we observed that the travel times of buses and the dwell times at bus stops generally increased over time of the day. We gathered travel time equations for buses for the AM, Mid-Day and PM Peaks. The lowest NSE for the AM, Mid-Day and PM Peak periods corresponded to training processes using the Quasi-Newton algorithm, which had 3, 2 and 5 perceptron layers, respectively.
These prediction models could be adapted by transit agencies to provide the patrons with accurate travel time information at bus stops or online.
APA, Harvard, Vancouver, ISO, and other styles
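The general workflow the abstract above describes, route features in, travel time out, trained by feed-forward passes and error back-propagation, can be sketched with a tiny network on synthetic data. Everything here is an assumption for illustration: the synthetic features and plain gradient descent stand in for the study's AVL/APC data and its Quasi-Newton and Levenberg-Marquardt training.

```python
import numpy as np

# Synthetic stand-ins for the study's features: stops served, inter-stop
# distance, mean passengers, mean dwell time, intersections between stops.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 5))
y = X @ np.array([2.0, 5.0, 1.0, 3.0, 0.5]) + 4.0 + rng.normal(0, 0.1, 200)

# One hidden tanh layer, trained with full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)               # feed-forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y)      # error back-propagation
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean(err ** 2))
print("final MSE:", round(mse, 3))         # well below the variance of y
```

After training, the fitted network beats the trivial predict-the-mean baseline on this synthetic data; the study evaluates its models analogously, via Normalized Squared Errors across peak periods.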