
Journal articles on the topic 'Hopfield network; Neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Hopfield network; Neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Wilson, Robert C. "Parallel Hopfield Networks." Neural Computation 21, no. 3 (March 2009): 831–50. http://dx.doi.org/10.1162/neco.2008.03-07-496.

Abstract:
We introduce a novel type of neural network, termed the parallel Hopfield network, that can simultaneously effect the dynamics of many different, independent Hopfield networks in parallel in the same piece of neural hardware. Numerically we find that under certain conditions, each Hopfield subnetwork has a finite memory capacity approaching that of the equivalent isolated attractor network, while a simple signal-to-noise analysis sheds qualitative, and some quantitative, insight into the workings (and failures) of the system.
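As background for this and several of the entries below, the classical binary Hopfield dynamics that the parallel subnetworks emulate — Hebbian outer-product storage followed by asynchronous sign updates — can be sketched in a few lines. The network size, pattern count, and noise level here are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                                      # neurons, states in {-1, +1}
patterns = rng.choice([-1, 1], size=(3, N))  # three random patterns to store

# Hebbian (outer-product) learning rule; self-connections are zeroed.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Asynchronous updates: each neuron aligns with its local field."""
    s = state.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 10% of the first pattern's bits, then let the dynamics relax.
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
print(np.array_equal(recall(probe), patterns[0]))
```

With only three patterns in one hundred neurons the probe sits well inside the basin of attraction, so the dynamics restore the stored pattern; the parallel construction in the paper runs many such attractor systems in the same weight matrix.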
2

Kobayashi, Masaki. "Storage Capacities of Twin-Multistate Quaternion Hopfield Neural Networks." Computational Intelligence and Neuroscience 2018 (November 1, 2018): 1–5. http://dx.doi.org/10.1155/2018/1275290.

Abstract:
A twin-multistate quaternion Hopfield neural network (TMQHNN) is a multistate Hopfield model and can store multilevel information, such as image data. Storage capacity is an important problem of Hopfield neural networks. Jankowski et al. approximated the crosstalk terms of complex-valued Hopfield neural networks (CHNNs) by the 2-dimensional normal distributions and evaluated their storage capacities. In this work, we evaluate the storage capacities of TMQHNNs based on their idea.
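The crosstalk-based capacity analysis this abstract builds on can be illustrated numerically for the classical real-valued model (the quaternionic case in the paper is more involved): with the Hebbian rule, each stored bit is stable only while the crosstalk from the other patterns stays below the signal term. The network size and pattern counts below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200  # neurons

def fraction_stable(p):
    """Store p random bipolar patterns with the Hebbian rule and return
    the fraction of stored bits that are stable (sign-aligned with their
    local field) -- a direct numerical check of the crosstalk term."""
    X = rng.choice([-1, 1], size=(p, N))
    W = X.T @ X / N
    np.fill_diagonal(W, 0)
    fields = X @ W.T          # local field at each bit of each pattern
    return np.mean(np.sign(fields) == X)

for p in (10, 28, 60):        # below, near, and above the ~0.138*N limit
    print(p, round(fraction_stable(p), 3))
```

Well below the classical capacity limit essentially every bit is stable; pushing the load past roughly 0.138·N makes a visible fraction of bits flip, which is the failure mode the storage-capacity analysis quantifies.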
3

Antipova, E. S., and S. A. Rashkovskiy. "Autoassociative Hamming Neural Network." Nelineinaya Dinamika 17, no. 2 (2021): 175–93. http://dx.doi.org/10.20537/nd210204.

Abstract:
An autoassociative neural network is suggested which is based on the calculation of Hamming distances, while the principle of its operation is similar to that of the Hopfield neural network. Using standard patterns as an example, we compare the efficiency of pattern recognition for the autoassociative Hamming network and the Hopfield network. It is shown that the autoassociative Hamming network successfully recognizes standard patterns with a degree of distortion up to 40% and more than 60%, while the Hopfield network ceases to recognize the same patterns with a degree of distortion of more than 25% and less than 75%. A scheme of the autoassociative Hamming neural network based on McCulloch–Pitts formal neurons is proposed. It is shown that the autoassociative Hamming network can be considered as a dynamical system which has attractors that correspond to the reference patterns. The Lyapunov function of this dynamical system is found and the equations of its evolution are derived.
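The nearest-pattern principle behind the Hamming network can be sketched minimally: recall returns the stored pattern at the smallest Hamming distance from the probe. The pattern length and count below are invented for the example, with a 40% distortion level matching the comparison above:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical reference patterns: five bipolar vectors of length 256
refs = rng.choice([-1, 1], size=(5, 256))

def hamming_recall(probe):
    """Return the stored pattern with the smallest Hamming distance
    to the probe -- the operating principle of the Hamming network."""
    d = np.sum(refs != probe, axis=1)    # Hamming distance to each pattern
    return refs[np.argmin(d)]

# Distort 40% of one pattern's bits, as in the comparison above.
probe = refs[2].copy()
flip = rng.choice(256, size=int(0.4 * 256), replace=False)
probe[flip] *= -1
print(np.array_equal(hamming_recall(probe), refs[2]))
```

Because unrelated random patterns sit near Hamming distance N/2 from the probe, a 40% distortion still leaves the true pattern the closest one, which is why the Hamming scheme tolerates heavier distortion than Hebbian Hopfield recall.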
4

ISOKAWA, TEIJIRO, HARUHIKO NISHIMURA, NAOTAKE KAMIURA, and NOBUYUKI MATSUI. "ASSOCIATIVE MEMORY IN QUATERNIONIC HOPFIELD NEURAL NETWORK." International Journal of Neural Systems 18, no. 02 (April 2008): 135–45. http://dx.doi.org/10.1142/s0129065708001440.

Abstract:
Associative memory networks based on the quaternionic Hopfield neural network are investigated in this paper. These networks are composed of quaternionic neurons, and their inputs, outputs, thresholds, and connection weights are represented as quaternions, a class of hypercomplex numbers. The energy function of the network and the Hebbian rule for embedding patterns are introduced. The stable states and their basins are explored for networks with three and with four neurons. It is clarified that there exist at most 16 stable states, called multiplet components, as the degenerated stored patterns, and each of these states has its own basin in the quaternionic networks.
5

SUKHASWAMI, M. B., P. SEETHARAMULU, and ARUN K. PUJARI. "RECOGNITION OF TELUGU CHARACTERS USING NEURAL NETWORKS." International Journal of Neural Systems 06, no. 03 (September 1995): 317–57. http://dx.doi.org/10.1142/s0129065795000238.

Abstract:
The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different “hands” in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
6

Savin, Daniela, Sabah Alkass, and Paul Fazio. "Construction resource leveling using neural networks." Canadian Journal of Civil Engineering 23, no. 4 (August 1, 1996): 917–25. http://dx.doi.org/10.1139/l96-898.

Abstract:
A neural network model for construction resource leveling is developed and discussed. The model is derived by mapping an augmented Lagrangian multiplier optimization formulation of a resource leveling problem onto a discrete-time Hopfield net. The resulting neural network model consists of two main blocks. Specifically, it consists of a discrete-time Hopfield neural network block, and a control block for the adjustment of Lagrange multipliers in the augmented Lagrangian multiplier optimization, and for the computation of the new set of weights of the neural network block. An experimental verification of the proposed artificial neural network model is also provided. Key words: neural networks in construction, resource leveling, construction management, project management.
7

Fan, Tongke. "Research on Optimal CDMA Multiuser Detection Based on Stochastic Hopfield Neural Network." Recent Patents on Computer Science 12, no. 3 (May 8, 2019): 233–40. http://dx.doi.org/10.2174/2213275912666181210103742.

Abstract:
Background: Most common multi-user detection techniques suffer from heavy computation and slow operation. Hopfield neural networks offer high-speed search and parallel processing, but they are prone to convergence to local minima. Objective: The stochastic Hopfield neural network avoids local convergence by introducing noise into the state variables and thereby achieves optimal detection. Methods: Based on a study of the CDMA communication model, this paper formulates and models the multi-user detection problem. A new stochastic Hopfield neural network is then obtained by introducing a stochastic disturbance into the traditional Hopfield neural network. Finally, CDMA multi-user detection is simulated. Conclusion: The results show that introducing a stochastic disturbance into the Hopfield neural network helps it jump out of local minima, thus reaching the global minimum and improving the performance of the network.
8

Kobayashi, Masaki. "Bicomplex Projection Rule for Complex-Valued Hopfield Neural Networks." Neural Computation 32, no. 11 (November 2020): 2237–48. http://dx.doi.org/10.1162/neco_a_01320.

Abstract:
A complex-valued Hopfield neural network (CHNN) with a multistate activation function is a multistate model of neural associative memory. Its weight parameters require substantial memory resources. Twin-multistate activation functions were introduced for quaternion- and bicomplex-valued Hopfield neural networks, but their architectures are much more complicated than that of a CHNN and should be simplified. In this work, the number of weight parameters is reduced by a bicomplex projection rule for CHNNs, which is derived from the decomposition of bicomplex-valued Hopfield neural networks. Computer simulations show that the noise tolerance of a CHNN with the bicomplex projection rule is equal to or even better than that of quaternion- and bicomplex-valued Hopfield neural networks. By computer simulations, we also find that the projection rule for hyperbolic-valued Hopfield neural networks in synchronous mode maintains a high noise tolerance.
9

YAN, JUN-JUH, TEH-LU LIAO, JUI-SHENG LIN, and CHAO-JUNG CHENG. "SYNCHRONIZATION CONTROL OF NEURAL NETWORKS SUBJECT TO TIME-VARYING DELAYS AND INPUT NONLINEARITY." International Journal of Bifurcation and Chaos 16, no. 12 (December 2006): 3643–54. http://dx.doi.org/10.1142/s0218127406017038.

Abstract:
This paper investigates the synchronization problem for a particular class of neural networks subject to time-varying delays and input nonlinearity. Using the variable structure control technique, a memoryless decentralized control law is established which guarantees exponential synchronization even when input nonlinearity is present. The proposed controller is suitable for application in delayed cellular neural networks and Hopfield neural networks with no restriction on the derivative of the time-varying delays. A two-dimensional cellular neural network and a four-dimensional Hopfield neural network, both with time-varying delays, are presented as illustrative examples to demonstrate the effectiveness of the proposed synchronization scheme.
10

Singh, Kirti, Suju M. George, and P. Rambabu. "Use of a System of Recurrent Neural Networks for Solving the Cell Placement Problem in VLSI Design." International Journal on Artificial Intelligence Tools 06, no. 01 (March 1997): 15–35. http://dx.doi.org/10.1142/s0218213097000037.

Abstract:
Cell placement in VLSI design is an NP-complete problem. In this paper, we have tried to solve the standard cell placement problem using the Hopfield neural network model. Furthermore, a new system of coupled recurrent neural networks, which was designed to eliminate the drawbacks of the Hopfield neural network, is introduced. The performance of Hopfield networks with discrete and graded neurons is also investigated. The energy function corresponding to the chosen representation is given and the weight matrix and the inputs needed for the network are also computed in this work. Several different combinations of parameters were examined to find the optimum set of parameters which results in better and faster convergence. To show the effectiveness of our methods, cell placement problems up to 30 cells are considered and the results are compared with the results obtained by a genetic algorithm. Our results show that a system of coupled neural networks could be an efficient way to overcome the limitation of recurrent neural networks which consider only bilinear forms of the energy function.
11

Massini, Giulia. "Hopfield Neural Network." Substance Use & Misuse 33, no. 2 (January 1998): 481–88. http://dx.doi.org/10.3109/10826089809115877.

12

Monurajan, P., A. Ruhanbevi, and J. Manjula. "Design of Memristive Hopfield Neural Network using Memristor Bridges." International Journal of Engineering & Technology 7, no. 3.12 (July 20, 2018): 652. http://dx.doi.org/10.14419/ijet.v7i3.12.16447.

Abstract:
Artificial neural networks (ANNs) are interconnections of neurons inspired by the biological neural network of the brain. ANNs are expected to play a growing role in the future and have spread to various areas of interest, such as optimization, information technology, cryptography, image processing, and even medical diagnosis. Some devices possess synaptic behaviour; one such device is the memristor. Bridge circuits of memristors can be combined to form neurons, and neurons can be assembled into a network with appropriate parameters to store data or images. Hopfield neural networks are chosen here to store the data in associative memory. They are a significant class of ANN, recurrent in nature, and are generally used as associative memory and for solving optimization problems such as the Travelling Salesman Problem. The paper deals with the construction of a memristive Hopfield neural network using a memristor bridge circuit and its application to associative memory. It also illustrates the experiment with mathematical equations and demonstrates the associative memory concept of the network using MATLAB.
13

Huang, Xia, Zhen Wang, and Yuxia Li. "Nonlinear Dynamics and Chaos in Fractional-Order Hopfield Neural Networks with Delay." Advances in Mathematical Physics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/657245.

Abstract:
A fractional-order two-neuron Hopfield neural network with delay is proposed based on the classic well-known Hopfield neural networks, and further, the complex dynamical behaviors of such a network are investigated. A great variety of interesting dynamical phenomena, including single-periodic, multiple-periodic, and chaotic motions, are found to exist. The existence of chaotic attractors is verified by the bifurcation diagram and phase portraits as well.
14

Sun, Yanxia, Zenghui Wang, and Barend Jacobus van Wyk. "Chaotic Hopfield Neural Network Swarm Optimization and Its Application." Journal of Applied Mathematics 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/873670.

Abstract:
A new neural network based optimization algorithm is proposed. The presented model is a discrete-time, continuous-state Hopfield neural network and the states of the model are updated synchronously. The proposed algorithm combines the advantages of traditional PSO, chaos and Hopfield neural networks: particles learn from their own experience and the experiences of surrounding particles, their search behavior is ergodic, and convergence of the swarm is guaranteed. The effectiveness of the proposed approach is demonstrated using simulations and typical optimization problems.
15

Kobayashi, Masaki. "Fast Recall for Complex-Valued Hopfield Neural Networks with Projection Rules." Computational Intelligence and Neuroscience 2017 (2017): 1–6. http://dx.doi.org/10.1155/2017/4894278.

Abstract:
Many models of neural networks have been extended to complex-valued neural networks. A complex-valued Hopfield neural network (CHNN) is a complex-valued version of a Hopfield neural network. Complex-valued neurons can represent multistates, so CHNNs are suitable for storing multilevel data, such as gray-scale images. However, CHNNs are often trapped in local minima, and their noise tolerance is low. Lee improved the noise tolerance of CHNNs by detecting and escaping local minima. In the present work, we propose a new recall algorithm that eliminates the local minima. Through computer simulations, we show that the proposed recall algorithm not only accelerates recall but also improves noise tolerance.
16

Wang, Li‐Xin, and Jerry M. Mendel. "Adaptive minimum prediction‐error deconvolution and source wavelet estimation using Hopfield neural networks." GEOPHYSICS 57, no. 5 (May 1992): 670–79. http://dx.doi.org/10.1190/1.1443281.

Abstract:
The massively parallel processing advantage of artificial neural networks makes them suitable for hardware implementations; therefore, using artificial neural networks for seismic signal processing problems has the potential of greatly speeding up seismic signal processing. A commonly used artificial neural network—Hopfield neural network—is used to implement a new adaptive minimum prediction‐error deconvolution (AMPED) procedure which decomposes deconvolution and wavelet estimation into three subprocesses: reflectivity location detection, reflectivity magnitude estimation, and source wavelet extraction. A random reflectivity model is not required. The basic idea of the approach is to relate the cost functions of the deconvolution and wavelet estimation problem with the energy functions of these Hopfield neural networks so that when these neural networks reach their stable states, for which the energy functions are locally minimized, the outputs of the networks give the solution to the deconvolution and wavelet estimation problem. Three Hopfield neural networks are constructed to implement the three subprocesses, respectively, and they are then connected in an iterative way to implement the entire deconvolution and wavelet estimation procedure. This approach is applied to synthetic and real seismic traces, and the results show that: (1) the Hopfield neural networks converge to their stable states in only one to four iterations; hence, this approach gives a solution to the deconvolution and wavelet estimation problem very quickly; (2) this approach works impressively well in the cases of low signal‐to‐noise ratio and nonminimum phase wavelets; and (3) this approach can treat backscatter either as noise or as useful signal.
17

Akhmet, Marat, and Ejaily Milad Alejaily. "Domain-Structured Chaos in a Hopfield Neural Network." International Journal of Bifurcation and Chaos 29, no. 14 (December 26, 2019): 1950205. http://dx.doi.org/10.1142/s0218127419502055.

Abstract:
In this paper, we provide a new method for constructing chaotic Hopfield neural networks. Our approach is based on structuring the domain to form a special set through the discrete evolution of the network state variables. In the chaotic regime, the formed set is invariant under the system governing the dynamics of the neural network. The approach can be viewed as an extension of the unimodality technique for one-dimensional map, thereby generating chaos from higher-dimensional systems. We show that the discrete Hopfield neural network considered is chaotic in the sense of Devaney, Li–Yorke, and Poincaré. Mathematical analysis and numerical simulation are provided to confirm the presence of chaos in the network.
18

Sousa, Miguel Angelo de Abreu de, Edson Lemos Horta, Sergio Takeo Kofuji, and Emilio Del-Moral-Hernandez. "Architecture Analysis of an FPGA-Based Hopfield Neural Network." Advances in Artificial Neural Systems 2014 (December 9, 2014): 1–10. http://dx.doi.org/10.1155/2014/602325.

Abstract:
Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field in order to meet several practical requirements, including decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper addresses several architectural characteristics of a Hopfield neural network implemented in FPGA, such as maximum operating frequency and chip-area occupancy as functions of network capacity. The FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is also presented in detail.
19

LIEBOVITCH, LARRY S., NIKITA D. ARNOLD, and LEV Y. SELECTOR. "NEURAL NETWORKS TO COMPUTE MOLECULAR DYNAMICS." Journal of Biological Systems 02, no. 02 (June 1994): 193–228. http://dx.doi.org/10.1142/s0218339094000155.

Abstract:
Large molecules such as proteins have many of the properties of neural networks. Hence, neural networks may serve as a natural and thus efficient method to compute the time dependent changes of the structure in large molecules. We describe how to encode the spatial conformation and energy structure of a molecule in a neural network. The dynamics of the molecule can then be computed from the dynamics of the corresponding neural network. As a detailed example, we formulated a Hopfield network to compute the molecular dynamics of a small molecule, cyclohexane. We used this network to determine the distribution of times spent in the twist and chair conformational states as the cyclohexane thermally switches between these two states.
20

Peng, Hao Yue, and Guo Hao Zhao. "Resource Industry Operation Study: Cooperation, Constraint and Sustainability." Advanced Materials Research 524-527 (May 2012): 2971–76. http://dx.doi.org/10.4028/www.scientific.net/amr.524-527.2971.

Abstract:
Human history is a history of utilizing natural resources. With economic development, security risk management for the natural resources industry has increasingly become an urgent international issue. Viewed from a systems perspective, resources industry risk control is a complex system, full of energy flows that can be measured by entropy. The Hopfield neural network is an important neural network model, and its use places additional systematic directionality constraints on such risk control. The objective function and constraints of resources industry security risk control, expressed in terms of negative entropy, are optimized by the Hopfield neural network energy function. As an effective first attempt, some conclusions on reducing resources industry security risk are also obtained.
21

Kobayashi, M. "Hyperbolic Hopfield Neural Networks." IEEE Transactions on Neural Networks and Learning Systems 24, no. 2 (February 2013): 335–41. http://dx.doi.org/10.1109/tnnls.2012.2230450.

22

Hu, Shigeng, Xiaoxin Liao, and Xuerong Mao. "Stochastic Hopfield neural networks." Journal of Physics A: Mathematical and General 36, no. 9 (February 19, 2003): 2235–49. http://dx.doi.org/10.1088/0305-4470/36/9/303.

23

Feild, William B., and Jainendra K. Navlakha. "On Hopfield neural networks." Neural Networks 1 (January 1988): 21. http://dx.doi.org/10.1016/0893-6080(88)90063-9.

24

Bharitkar, S., and J. M. Mendel. "The hysteretic Hopfield neural network." IEEE Transactions on Neural Networks 11, no. 4 (July 2000): 879–88. http://dx.doi.org/10.1109/72.857769.

25

Kai, Cui. "An ad-hoc network routing algorithm based on improved neural network under the influence of COVID-19." Journal of Intelligent & Fuzzy Systems 39, no. 6 (December 4, 2020): 8767–74. http://dx.doi.org/10.3233/jifs-189273.

Abstract:
Under the influence of COVID-19, an efficient ad-hoc network routing algorithm is required for epidemic prevention and control. Artificial neural networks have become an effective method for solving large-scale optimization problems, and it has been proved that an appropriate neural network can obtain the exact solution of a problem in real time. Based on the continuous Hopfield neural network (CHNN), this paper studies the optimal path algorithm for QoS routing in ad-hoc networks. A new Hopfield neural network model is proposed to solve the minimum-cost problem in ad-hoc networks with time delay. In the improved version of the path algorithm, the relationship between the parameters of the energy function is provided, and it is proved that, with properly selected parameters, the feasible solution of the network belongs to the category of asymptotic stability. A worked example shows that the solution is not affected by the initial value and that the global optimal solution can always be obtained. The algorithm is very effective for prevention and control of the COVID-19 epidemic.
26

Ionescu, Carmen, Emilian Panaitescu, and Mihai Stoicescu. "Neural Flows in Hopfield Network Approach." Annals of West University of Timisoara - Physics 57, no. 1 (December 1, 2013): 1–9. http://dx.doi.org/10.1515/awutp-2015-0101.

Abstract:
In most applications involving neural networks, the main problem consists in finding an optimal procedure for reducing the real neuron to simpler models which still express the biological complexity but allow highlighting the main characteristics of the system. We investigate a simple reduction procedure which leads from complex models of Hodgkin-Huxley type to very convenient binary models of Hopfield type. The reduction allows us to describe the neuron interconnections in a quite large network and to obtain information concerning its symmetry and stability. Both cases, homogeneous voltage across the membrane and inhomogeneous voltage along the axon, are tackled. A few numerical simulations of the neural flow based on the cable equation are also presented.
27

Burakov, M. V., V. F. Shishlakov, and A. S. Konovalov. "ADAPTIVE NEURAL NETWORK PID CONTROLLER." Issues of radio electronics, no. 10 (October 20, 2018): 86–92. http://dx.doi.org/10.21778/2218-5453-2018-10-86-92.

Abstract:
The problem of constructing an adaptive PID controller based on the Hopfield neural network for a linear dynamic plant of the second order is considered. A description of the plant in the form of a discrete transfer function is used, the coefficients of which are determined with the help of a neural network that minimizes the discrepancy between the outputs of the plant and the model. The neural network processes the current and delayed input and output signals of the plant, forming an output for estimating the coefficients of the model. Another neural network determines the PID regulator coefficients at which the dynamics of the system approach the dynamics of the reference process. The calculation of weights and displacements of neurons in Hopfield networks used for identification and control is based on the construction of Lyapunov functions. The proposed methodology can be used to organize adaptive control of a wide class of linear dynamic systems with variable parameters. The results of the simulation in the article show the effectiveness of the proposed method.
28

Shi, Jun You, Hong Yan Zhai, and Chuang Sheng Su. "Optimal Layout of the Irregular Parts with Neural Networks Hybrid Algorithm." Advanced Materials Research 97-101 (March 2010): 3514–18. http://dx.doi.org/10.4028/www.scientific.net/amr.97-101.3514.

Abstract:
An irregular parts optimal layout method based on artificial neural networks is proposed. The manufacturing process of parts is involved in the layout problem. Every side of shapes is expanded in consideration of the machining allowance. Self-Organizing Map (SOM) and Hopfield artificial neural network are integrated to complete the automatic layout. In the beginning, irregular parts are randomly distributed. Self-Organizing Map is used to look for the best position of the irregular parts by moving them. The overlapping area is gradually reduced to zero. Hopfield neural network is used to rotate each part, and each part's optimum rotating angle is obtained when the neural network is in stable state. The algorithm in this paper can solve the irregular parts layout problem and rectangular parts layout problem in the given region. Examples indicate that this algorithm is effective and practical.
29

Martinelli, Giuseppe. "A Hopfield neural network approach to decentralized self-synchronizing sensor networks." Neural Computing and Applications 19, no. 7 (May 12, 2010): 987–96. http://dx.doi.org/10.1007/s00521-010-0369-5.

30

SHEN, Y., and M. WANG. "Broadcast scheduling in wireless sensor networks using fuzzy Hopfield neural network." Expert Systems with Applications 34, no. 2 (February 2008): 900–907. http://dx.doi.org/10.1016/j.eswa.2006.10.024.

31

YEUNG, DANIEL S., SHENSHAN QIU, ERIC C. C. TSANG, and XIZHAO WANG. "A GENERAL UPDATING RULE FOR DISCRETE HOPFIELD-TYPE NEURAL NETWORK WITH TIME-DELAY AND THE CORRESPONDING SEARCH ALGORITHM." International Journal of Computational Intelligence and Applications 01, no. 04 (December 2001): 399–412. http://dx.doi.org/10.1142/s1469026801000329.

Abstract:
In this paper, the Hopfield neural network with delay (HNND) is studied from the standpoint of regarding it as an optimizing computational model. Two general updating rules for networks with delay (GURD) are given, based on Hopfield-type neural networks with delay for optimization problems and characterized by dynamic thresholds. It is proved that in any sequence of updating rule modes, the GURD converges monotonically to a stable state of the network. The diagonal elements of the connection matrix are shown to have an important influence on the convergence process, and they represent the relationship of the local maximum value of the energy function to the stable states of the network. All ordinary discrete Hopfield neural network (DHNN) algorithms are instances of the GURD. It can be shown that the convergence conditions of the GURD may be relaxed in the context of applications; for instance, the condition of nonnegative diagonal elements of the connection matrix can be removed from the original convergence theorem. A new updating rule mode and restrictive conditions can guarantee that the network achieves a local maximum of the energy function with a step-by-step algorithm. The convergence rate improves markedly compared with other methods. When the delay term is treated as a noise disturbance, the step-by-step algorithm demonstrates its efficiency and a high convergence rate. Experimental results support the proposed algorithm.
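The monotone convergence property discussed here can be checked numerically for the ordinary DHNN with symmetric weights and zero diagonal. The sketch below uses the equivalent energy-minimization convention (the paper tracks a local maximum of the energy, which is the same statement with the sign flipped); the weights and network size are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 50

# Random symmetric weight matrix with zero diagonal (the DHNN assumptions
# under which asynchronous updates never increase the energy).
A = rng.normal(size=(N, N))
W = (A + A.T) / 2
np.fill_diagonal(W, 0)

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], size=N)
energies = [energy(s)]
for _ in range(5):                       # a few asynchronous sweeps
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1
    energies.append(energy(s))

# The energy never increases along the update sequence.
assert all(b <= a + 1e-9 for a, b in zip(energies, energies[1:]))
print(energies[0], "->", energies[-1])
```

Each single-neuron flip changes the energy by a non-positive amount when the weights are symmetric with zero diagonal, which is exactly the Lyapunov argument that the GURD generalizes to networks with delay and dynamic thresholds.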
32

Kojic, Nenad, Irini Reljin, and Branimir Reljin. "Neural network for optimization of routing in communication networks." Facta universitatis - series: Electronics and Energetics 19, no. 2 (2006): 317–29. http://dx.doi.org/10.2298/fuee0602317k.

Full text
Abstract:
An efficient neural network algorithm for the optimization of routing in communication networks is suggested. As known from the literature, various optimization and ill-defined problems may be resolved using appropriately designed neural networks, owing to their high computational speed and their ability to work with uncertain data. Under some assumptions, routing in packet-switched communication networks may be considered an optimization problem, more precisely a shortest-path problem. The Hopfield-type neural network is a very efficient tool for solving such problems. The suggested routing algorithm is designed to find the optimal, i.e., shortest, path (if possible) while taking into account the traffic conditions: the incoming traffic flow, router occupancy, and link capacities, avoiding packet loss due to input buffer overflow. The applicability of the proposed model is demonstrated through computer simulations under different traffic conditions and for different fully connected networks with both symmetrical and non-symmetrical links.
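As a toy illustration of how an energy function can encode shortest-path routing, the sketch below scores candidate paths by link cost plus a penalty for invalid hops; this stands in for the constraint terms of a Hopfield routing network's energy and is not the authors' algorithm (the cost matrix is invented):

```python
import numpy as np

INF = 1e6  # marks a missing link in the invented 4-node cost matrix
C = np.array([[0.0, 1.0, 4.0, INF],
              [1.0, 0.0, 2.0, 6.0],
              [4.0, 2.0, 0.0, 3.0],
              [INF, 6.0, 3.0, 0.0]])

def path_energy(path, C, penalty=1e3):
    """Sum of traversed link costs, plus a penalty for hops over missing
    links (standing in for the constraint terms of the network energy)."""
    e = 0.0
    for a, b in zip(path, path[1:]):
        e += C[a, b] if C[a, b] < INF else penalty
    return e

# the network's stable state would encode the minimum-energy route;
# here we just compare candidate routes from node 0 to node 3
assert path_energy([0, 1, 2, 3], C) < path_energy([0, 2, 3], C)
```

In an actual Hopfield formulation the route would be encoded in neuron states and the penalty terms folded into the connection weights, so that gradient descent on the energy selects the route.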
APA, Harvard, Vancouver, ISO, and other styles
33

WU, WEI, BAO TONG CUI, and ZHIGANG ZENG. "IMPROVED SUFFICIENT CONDITIONS FOR GLOBAL EXPONENTIAL STABILITY OF RECURRENT NEURAL NETWORKS WITH DISTRIBUTED DELAYS." International Journal of Bifurcation and Chaos 18, no. 07 (July 2008): 2029–37. http://dx.doi.org/10.1142/s021812740802152x.

Full text
Abstract:
In this paper, the globally exponential stability of recurrent neural networks with continuously distributed delays is investigated. New theoretical results are presented in the presence of external stimuli. It is shown that the recurrent neural network is globally exponentially stable, and the estimated location of the equilibrium point can be obtained. As typical representatives, the Hopfield neural network (HNN) and the cellular neural network (CNN) are examined in detail. Comparison with previous results demonstrates the improvement achieved by ours.
APA, Harvard, Vancouver, ISO, and other styles
34

Mohd Kasihmuddin, Mohd Shareduwan, Mohd Asyraf Mansor, Md Faisal Md Basir, and Saratha Sathasivam. "Discrete Mutation Hopfield Neural Network in Propositional Satisfiability." Mathematics 7, no. 11 (November 19, 2019): 1133. http://dx.doi.org/10.3390/math7111133.

Full text
Abstract:
The dynamic behaviour of an artificial neural network (ANN) system depends strongly on its network structure. Thus, the output of ANNs has long suffered from a lack of interpretability and variation, which has severely limited the practical usability of logical rules in ANNs. This work presents an integrated representation of k-satisfiability (kSAT) in a mutation Hopfield neural network (MHNN). Neuron states of a Hopfield neural network converge to minimum energy, but the solutions produced are confined to a limited number of solution spaces. The MHNN incorporates the global search capability of estimation of distribution algorithms (EDAs), which typically explore various solution spaces. The main purpose is to estimate other possible neuron states that lead to global minimum energy through available output measurements. Furthermore, it is shown that the MHNN can retrieve various neuron states with the lowest minimum energy. Subsequent simulations performed on the MHNN reveal that the approach yields results that surpass the conventional hybrid HNN. This study thus provides a new paradigm in the field of neural networks by overcoming the overfitting issue.
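To illustrate how satisfiability maps onto an energy landscape, the toy sketch below uses the number of unsatisfied clauses as the energy a Hopfield-type network would minimise; the 2SAT instance is invented, and the exhaustive check stands in for the MHNN/EDA search:

```python
import numpy as np

# invented 2SAT instance: each clause is a pair of (variable, sign) literals
clauses = [((0, 1), (1, 1)), ((0, -1), (2, 1)), ((1, -1), (2, -1))]

def energy(state, clauses):
    """Number of unsatisfied clauses; a zero-energy state is a satisfying
    assignment. This plays the role of the energy the MHNN minimises."""
    return sum(1 for clause in clauses
               if not any(state[i] == s for i, s in clause))

# exhaustive check that a zero-energy (satisfying) state exists
states = [np.array([a, b, c]) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]
assert min(energy(s, clauses) for s in states) == 0
```

In the kSAT line of work, this clause-counting cost is instead compiled into Hopfield connection weights so that the network's energy minima coincide with satisfying assignments.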
APA, Harvard, Vancouver, ISO, and other styles
35

Gosti, Giorgio, Viola Folli, Marco Leonetti, and Giancarlo Ruocco. "Beyond the Maximum Storage Capacity Limit in Hopfield Recurrent Neural Networks." Entropy 21, no. 8 (July 25, 2019): 726. http://dx.doi.org/10.3390/e21080726.

Full text
Abstract:
In a neural network, an autapse is a particular kind of synapse that links a neuron onto itself. Autapses are almost never allowed in either artificial or biological neural networks. Moreover, redundant or similar stored states tend to interact destructively. This paper shows how autapses, together with stable-state redundancy, can improve the storage capacity of a recurrent neural network. Recent research shows that, in an N-node Hopfield neural network with autapses, the number of stored patterns (P) is not limited to the well-known bound 0.14N, as it is for networks without autapses. More precisely, as the number of stored patterns increases well over the 0.14N threshold, for P much greater than N, the retrieval error asymptotically approaches a value below unity. Consequently, the reduction of retrieval errors allows a number of stored memories that largely exceeds what was previously considered possible. Unfortunately, soon after, new results showed that, in the thermodynamic limit, given a network with autapses in this high-storage regime, the basin of attraction of the stored memories shrinks to a single state. This means that, for each stable state associated with a stored memory, even a single bit error in the initial pattern would lead the system to a stationary state associated with a different memory state, limiting the potential use of this kind of Hopfield network as an associative memory. This paper presents a strategy to overcome this limitation by improving the error-correcting characteristics of the Hopfield neural network. The proposed strategy allows us to form what we call an absorbing neighborhood of states surrounding each stored memory: a set defined by a Hamming distance around a network state, which is absorbing because, in the long-time limit, states inside it are absorbed by stable states in the set.
We show that this strategy allows the network to store an exponential number of memory patterns, each surrounded by an absorbing neighborhood of exponentially growing size.
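A minimal Hebbian sketch of the storage-capacity setting discussed above, at a load well below 0.14N and with both the autapse-keeping and conventional zero-diagonal weight matrices; all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                            # load P/N = 0.05, well below 0.14
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights; keeping the diagonal gives every neuron an autapse
W = patterns.T @ patterns / N
W_no_autapse = W - np.diag(np.diag(W))   # conventional zero-diagonal variant

def recall(W, x, steps=5):
    """A few synchronous sign updates from an initial state."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

# at this low load each stored pattern is, with high probability, (nearly)
# a fixed point of both variants
for p in patterns:
    assert np.mean(recall(W, p.copy()) == p) > 0.9
    assert np.mean(recall(W_no_autapse, p.copy()) == p) > 0.9
```

The regime the paper studies (P much greater than N) behaves very differently; this sketch only fixes the baseline notation.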
APA, Harvard, Vancouver, ISO, and other styles
36

Vamaraju, Janaki, and Mrinal K. Sen. "Unsupervised physics-based neural networks for seismic migration." Interpretation 7, no. 3 (August 1, 2019): SE189—SE200. http://dx.doi.org/10.1190/int-2018-0230.1.

Full text
Abstract:
We have developed a novel framework for combining physics-based forward models and neural networks to advance seismic processing and inversion algorithms. Migration is an effective tool in seismic data processing and imaging. Over the years, the scope of these algorithms has broadened; today, migration is a central step in the seismic data processing workflow. However, no single migration technique is suitable for all kinds of data and all styles of acquisition. There is always a compromise on the accuracy, cost, and flexibility of these algorithms. On the other hand, machine-learning algorithms and artificial intelligence methods have been found immensely successful in applications in which big data are available. The applicability of these algorithms is being extensively investigated in scientific disciplines such as exploration geophysics with the goal of reducing exploration and development costs. In this context, we have used a special kind of unsupervised recurrent neural network and its variants, Hopfield neural networks and the Boltzmann machine, to solve the problems of Kirchhoff and reverse time migrations. We use the network to migrate seismic data in a least-squares sense using simulated annealing to globally optimize the cost function of the neural network. The weights and biases of the neural network are derived from the physics-based forward models that are used to generate seismic data. The optimal configuration of the neural network after training corresponds to the minimum energy of the network and thus gives the reflectivity solution of the migration problem. Using synthetic examples, we determine that (1) Hopfield neural networks are fast and efficient and (2) they provide reflectivity images with mitigated migration artifacts and improved spatial resolution. Specifically, the presented approach minimizes the artifacts that arise from limited aperture, low subsurface illumination, coarse sampling, and gaps in the data.
APA, Harvard, Vancouver, ISO, and other styles
37

Chuang, Shang Jen, Chiung Hsing Chen, Chien Chih Kao, and Fang Tsung Liu. "Improving Pattern Recognition Rate by Gaussian Hopfield Neural Network." Advanced Materials Research 189-193 (February 2011): 2042–45. http://dx.doi.org/10.4028/www.scientific.net/amr.189-193.2042.

Full text
Abstract:
English letters cannot be recognized by the Hopfield Neural Network if they contain more than 50% noise. This paper proposes a new method to improve the recognition rate of the Hopfield Neural Network by adding a Gaussian distribution feature: a Gaussian filter is applied to eliminate noise before recognition. We use the English letters 'A' to 'Z' as training data; testing data with noise levels from 0% to 100% were generated randomly. The Gaussian filter is first used to eliminate noise, and the test pattern is then recognized by the Hopfield Neural Network. We found that letters containing between 50% and 53% noise exhibit a reverse phenomenon or cannot be recognized [6]. We therefore propose using multiple filters to improve the recognition rate for letters in this noise range.
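The filter-then-recall pipeline can be sketched as follows; the 'letter', noise level, and single-pattern Hopfield memory are invented toy choices, not the paper's setup:

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=2):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_denoise(img, sigma=1.0):
    """Separable Gaussian blur, then re-binarisation to {-1, +1}."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return np.where(out >= 0, 1, -1)

# invented 8x8 'letter' (left half +1, right half -1) stored in a Hopfield net
p = (np.where(np.arange(8) < 4, 1, -1) * np.ones((8, 8), dtype=int)).ravel()
W = np.outer(p, p) / p.size

# flip ~19% of the pixels, denoise with the Gaussian filter, then recall
noisy = p.copy()
flip = np.random.default_rng(1).choice(p.size, size=12, replace=False)
noisy[flip] *= -1
cleaned = gaussian_denoise(noisy.reshape(8, 8)).ravel()
recalled = np.sign(W @ cleaned)
assert np.array_equal(recalled, p)
```

With a single stored pattern, one synchronous step recovers the letter whenever the cleaned input agrees with it on more than half the pixels; the interesting regime the paper addresses is noise near and above 50%, where this simple pipeline fails.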
APA, Harvard, Vancouver, ISO, and other styles
38

Zheng, Pengsheng, Wansheng Tang, and Jianxiong Zhang. "Efficient Continuous-Time Asymmetric Hopfield Networks for Memory Retrieval." Neural Computation 22, no. 6 (June 2010): 1597–614. http://dx.doi.org/10.1162/neco.2010.05-09-1014.

Full text
Abstract:
A novel method based on m energy functions is adopted to analyze the retrieval properties of continuous-time asymmetric Hopfield neural networks. Sufficient conditions for the local and global asymptotic stability of the network are proposed. Moreover, an efficient systematic procedure for designing asymmetric networks is proposed, in which a given set of states can be assigned as locally asymptotically stable equilibrium points. Simulation examples show that the asymmetric network can act as an efficient associative memory and is almost free from the spurious memory problem.
APA, Harvard, Vancouver, ISO, and other styles
39

ABDI, H. "A NEURAL NETWORK PRIMER." Journal of Biological Systems 02, no. 03 (September 1994): 247–81. http://dx.doi.org/10.1142/s0218339094000179.

Full text
Abstract:
Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units integrates independently (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., like those used to model spin glasses). One of the main notions used with nonlinear unit networks is the notion of attractor. When the task of the network is to associate a response with some specific input patterns, the most popular nonlinear technique consists of using hidden layers of neurons trained with back-propagation of error. The nonlinear models presented are the Hopfield network, the Boltzmann machine, the back-propagation network and the radial basis function network.
APA, Harvard, Vancouver, ISO, and other styles
40

Cepl, Miroslav, and Jiří Šťastný. "Progressive optimization methods for applied in computer network." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 57, no. 6 (2009): 45–50. http://dx.doi.org/10.11118/actaun200957060045.

Full text
Abstract:
The standard core of communication networks consists of active elements that process the transmitted data units. Based on the results of this processing, the data are transmitted from sender to recipient. The hardest challenge for the active elements is to determine which unit should process the data, and at what time, so as to match the processing priority assigned to individual data units. Based on an analysis of the architecture and function of active network components and algorithms, artificial neural networks can be assumed to be effectively usable for managing network elements. This article focuses on the design and use of a selected type of artificial neural network (the Hopfield neural network) for the optimal management of a network switch.
APA, Harvard, Vancouver, ISO, and other styles
41

RUZAKOVA, Olga. "DEVELOPMENT OF A METHODOLOGY FOR FORMALIZING THE INVESTMENT DECISION-MAKING PROCESS BASED ON THE HOPFIELD NEURAL NETWORK." "EСONOMY. FINANСES. MANAGEMENT: Topical issues of science and practical activity", no. 6 (46) (August 13, 2019): 65–72. http://dx.doi.org/10.37128/2411-4413-2019-6-7.

Full text
Abstract:
The article presents a methodological approach to assessing the investment attractiveness of an enterprise based on the mathematical apparatus of the Hopfield neural network. An extended set of evaluation parameters for the investment process has been compiled, and an algorithm for formalizing the decision-making process regarding the investment attractiveness of the enterprise has been developed based on the mathematical apparatus of neural networks. The proposed approach takes into account constantly changing sets of quantitative and qualitative parameters, identifying the appropriate level of investment attractiveness of the enterprise with minimal expense of money and time by finding the Hopfield network standard most similar to the one characterizing the enterprise's activity. The developed formalization of the investment process allows investment decisions to be made under incomplete and heterogeneous information, based on the methodological tools of neural networks.
APA, Harvard, Vancouver, ISO, and other styles
42

Lewis, John E., and Leon Glass. "Nonlinear Dynamics and Symbolic Dynamics of Neural Networks." Neural Computation 4, no. 5 (September 1992): 621–42. http://dx.doi.org/10.1162/neco.1992.4.5.621.

Full text
Abstract:
A piecewise linear equation is proposed as a method of analysis of mathematical models of neural networks. A symbolic representation of the dynamics in this equation is given as a directed graph on an N-dimensional hypercube. This provides a formal link with discrete neural networks such as the original Hopfield models. Analytic criteria are given to establish steady states and limit cycle oscillations independent of network dimension. Model networks that display multiple stable limit cycles and chaotic dynamics are discussed. The results show that such equations are a useful and efficient method of investigating the behavior of neural networks.
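The symbolic-dynamics view can be illustrated on a discrete network by enumerating the directed graph that synchronous updates induce on the vertices of the N-cube; the 3-neuron weight matrix below is invented for illustration:

```python
import numpy as np
from itertools import product

# invented 3-neuron discrete network; each state is a vertex of the 3-cube
W = np.array([[0., 1., -1.], [1., 0., 1.], [-1., 1., 0.]])

def step(x):
    """Synchronous hard-threshold update, returned as a hashable vertex."""
    return tuple(int(v) for v in np.where(W @ np.array(x) > 0, 1, -1))

# the symbolic dynamics: a directed graph on the hypercube's vertices
graph = {v: step(v) for v in product((-1, 1), repeat=3)}

# steady states are vertices mapped to themselves; cycles show up as orbits
fixed = [v for v, w in graph.items() if v == w]
assert (-1, -1, -1) in fixed
```

For this particular W the graph also contains a period-2 orbit between (-1, 1, -1) and (1, -1, 1), the discrete analogue of the limit cycles the paper analyses in the piecewise linear setting.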
APA, Harvard, Vancouver, ISO, and other styles
43

KOBAYASHI, Masaki. "Global Hyperbolic Hopfield Neural Networks." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E99.A, no. 12 (2016): 2511–16. http://dx.doi.org/10.1587/transfun.e99.a.2511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Leblebicioğlu, Kemal, Ugur Halici, and Okay Çelebi. "Infinite dimensional Hopfield neural networks." Nonlinear Analysis: Theory, Methods & Applications 47, no. 9 (August 2001): 5807–13. http://dx.doi.org/10.1016/s0362-546x(01)00710-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kobayashi, Masaki. "Symmetric quaternionic Hopfield neural networks." Neurocomputing 240 (May 2017): 110–14. http://dx.doi.org/10.1016/j.neucom.2017.02.044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kobayashi, Masaki. "Diagonal rotor Hopfield neural networks." Neurocomputing 415 (November 2020): 40–47. http://dx.doi.org/10.1016/j.neucom.2020.07.041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Kobayashi, Masaki. "Dual-numbered Hopfield neural networks." IEEJ Transactions on Electrical and Electronic Engineering 13, no. 2 (August 24, 2017): 280–84. http://dx.doi.org/10.1002/tee.22524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

OHMI, Kazuo. "Neural network PTV using a Hopfield network." Journal of the Visualization Society of Japan 22, no. 1Supplement (2002): 209–12. http://dx.doi.org/10.3154/jvs.22.1supplement_209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

KOBAYASHI, Masaki. "Hybrid Quaternionic Hopfield Neural Network." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E98.A, no. 7 (2015): 1512–18. http://dx.doi.org/10.1587/transfun.e98.a.1512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Miao, Ning Bo Liao, and Chen Zhou. "Neural Networks Model for Optimization an Study on Isomorphism Discernment." Advanced Materials Research 156-157 (October 2010): 492–95. http://dx.doi.org/10.4028/www.scientific.net/amr.156-157.492.

Full text
Abstract:
An artificial neural network is composed of a large number of simple processing elements connected by direct links called connections; the benefits of neural networks extend beyond the high computation rates achieved through massive parallelism. Optimization problems can be mapped onto a feedback network, which interconnects the neurons with a feedback path. Graph isomorphism discernment is one of the most important and difficult issues in graph-theory-based structural design. To solve the problem, a Hopfield neural network (HNN) model is presented in this paper. The solution of the HNN is designed as a permutation matrix relating the two graphs, and some operators are improved to prevent premature convergence. It is concluded that the algorithm presented here is efficient for the large-scale graph isomorphism problem and other NP-complete optimization issues.
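As a toy stand-in for the HNN dynamics, the sketch below uses the mismatch between the two adjacency matrices under a candidate permutation as the energy whose zero minimum certifies isomorphism; the graphs are invented, and exhaustive search replaces the network:

```python
import numpy as np
from itertools import permutations

# invented example: a 4-node path graph and a relabelled copy
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
P0 = np.eye(4)[[2, 0, 3, 1]]        # relabelling permutation matrix
B = P0 @ A @ P0.T                   # A with its nodes relabelled

def energy(P, A, B):
    """Mismatch the HNN minimises; zero iff P maps B's labelling onto A's."""
    return np.sum((A - P @ B @ P.T) ** 2)

# the network's stable state is a zero-energy permutation matrix; here
# exhaustive search over permutations stands in for the HNN dynamics
best = min(energy(np.eye(4)[list(p)], A, B) for p in permutations(range(4)))
assert best == 0
```

The HNN formulation encodes the permutation matrix in neuron states and adds penalty terms forcing row and column sums to one, so that minimizing the energy by the network dynamics replaces the factorial search used here.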
APA, Harvard, Vancouver, ISO, and other styles
