Journal articles on the topic 'Neural network programming'

To see the other types of publications on this topic, follow the link: Neural network programming.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Neural network programming.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Qingshan, Jinde Cao, and Guanrong Chen. "A Novel Recurrent Neural Network with Finite-Time Convergence for Linear Programming." Neural Computation 22, no. 11 (November 2010): 2962–78. http://dx.doi.org/10.1162/neco_a_00029.

Abstract:
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
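
For illustration only, here is a minimal sketch (Python/NumPy) of the general idea behind such optimization networks: a gradient flow whose state settles at the solution of a small linear program. It uses a quadratic-penalty energy function on a made-up toy problem; the penalty weight rho, step size dt, and iteration count are illustrative assumptions, not the finite-time scheme of the cited letter.

# Gradient-flow "neural network" for the toy LP
#   minimize c^T x  subject to  A x = b, x >= 0,
# via Euler integration of dx/dt = -grad E(x) with a quadratic penalty.
import numpy as np

c = np.array([1.0, 2.0])        # toy objective
A = np.array([[1.0, 1.0]])      # toy constraint: x1 + x2 = 1
b = np.array([1.0])
rho, dt = 50.0, 1e-3            # penalty weight and step size (assumptions)

x = np.zeros(2)                 # network state
for _ in range(20000):
    grad = c + rho * A.T @ (A @ x - b) + rho * np.minimum(x, 0.0)
    x -= dt * grad
print(x)                        # near the optimum (1, 0), up to a small penalty bias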
2

FRANCELIN ROMERO, ROSELI A., JANUSZ KACPRZYK, and FERNANDO GOMIDE. "A BIOLOGICALLY INSPIRED NEURAL NETWORK FOR DYNAMIC PROGRAMMING." International Journal of Neural Systems 11, no. 06 (December 2001): 561–72. http://dx.doi.org/10.1142/s0129065701000965.

Abstract:
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of neural networks is presented. The method is based on Bellman's Optimality Principle and on the interchange of information that occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples, including shortest-path and fuzzy decision-making problems, are presented to show how this approach works.
3

Dölen, Melik, and Robert D. Lorenz. "General Methodologies for Neural Network Programming." International Journal of Smart Engineering System Design 4, no. 1 (January 2002): 63–73. http://dx.doi.org/10.1080/10255810210629.

4

Abdullah, Wan Ahmad Tajuddin Wan. "Logic programming on a neural network." International Journal of Intelligent Systems 7, no. 6 (August 1992): 513–19. http://dx.doi.org/10.1002/int.4550070604.

5

Razaqpur, A. G., A. O. Abd El Halim, and Hosny A. Mohamed. "Bridge management by dynamic programming and neural networks." Canadian Journal of Civil Engineering 23, no. 5 (October 1, 1996): 1064–69. http://dx.doi.org/10.1139/l96-913.

Abstract:
Bridges and pavements represent the major investment in a highway network. In addition, they are in constant need of maintenance, rehabilitation, and replacement. One of the problems related to highway infrastructure is that the cost of maintaining a network of bridges at an acceptable level of service exceeds the budgeted funds. For large bridge networks, traditional management practices have become inadequate for dealing with this serious problem. Bridge management systems are a relatively new approach developed to solve this problem, following the successful application of similar system concepts to pavement management. Priority setting schemes used in bridge management systems range from subjective rankings based on engineering judgement to very complex optimization models. However, currently used priority setting schemes cannot optimize the system benefits to obtain optimal solutions. This paper presents a network optimization model which allocates a limited budget to bridge projects. The objective of the model is to determine the best timing for carrying out these projects, and the spending level for each year of the analysis period, so as to minimize the losses of system benefits. A combined dynamic programming and neural network approach was used to formulate the model. The bridge problem has two dimensions: the time dimension and the bridge network dimension. The dynamic programming sets its stages in the time dimension, while the neural network handles the network dimension. Key words: bridge management, dynamic programming, neural networks, budget allocation.
6

NAZARI, ALI, and SHADI RIAHI. "COMPUTER-AIDED PREDICTION OF PHYSICAL AND MECHANICAL PROPERTIES OF HIGH STRENGTH CEMENTITIOUS COMPOSITE CONTAINING Cr2O3 NANOPARTICLES." Nano 05, no. 05 (October 2010): 301–18. http://dx.doi.org/10.1142/s1793292010002219.

Abstract:
In the present paper, two models based on artificial neural networks (ANN) and genetic programming (GEP) for predicting the flexural strength and percentage of water absorption of concretes containing Cr2O3 nanoparticles have been developed for different ages of curing. To build these models, training and testing were conducted using experimental results for 144 specimens produced with 16 different mixture proportions. The data used in the multilayer feed-forward neural network models and the input variables of the genetic programming models are arranged in a format of eight input parameters covering the cement content (C), nanoparticle content (N), aggregate type (AG), water content (W), the amount of superplasticizer (S), the type of curing medium (CM), the age of curing (AC), and the number of the testing trial (NT). From these input parameters, the neural network and genetic programming models predict the flexural strength and percentage of water absorption of concretes containing Cr2O3 nanoparticles. The training and testing results show that both models have strong potential for predicting these values. Although the neural network predicted better results, genetic programming is able to predict reasonable values with a simpler method.
7

SUTO, JOZSEF, and STEFAN ONIGA. "Testing artificial neural network for hand gesture recognition." Creative Mathematics and Informatics 22, no. 2 (2013): 223–28. http://dx.doi.org/10.37193/cmi.2013.02.12.

Abstract:
Neural networks are well suited to gesture recognition. In this article we present the results of an artificial feed-forward network for a simplified hand gesture recognition problem. In neural networks, the learning algorithm is very important because the performance of a neural network depends on it. One of the best-known learning algorithms is backpropagation. Mathematical software packages can provide acceptable results for a given problem with a backpropagation-based network, but in some cases a program implemented in a high-level programming language can provide a better solution. The main topics of the article cover the structure of the test environment, the mathematical background of the implemented methods, some programming remarks, and the test results.
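
As a concrete illustration of the kind of hand-written implementation the article discusses, the following minimal sketch trains a small feed-forward network with backpropagation on the XOR problem. The 2-4-1 topology, sigmoid activations, learning rate, and epoch count are illustrative assumptions, not the authors' test environment.

# Minimal feed-forward network trained with backpropagation on XOR (NumPy).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer: 4 units (assumed)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sig(X @ W1 + b1)                  # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backpropagate the squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(out.round(3))                       # typically close to [0, 1, 1, 0]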
8

Liu, Yan Hui, and Zhi Peng Wang. "Genetic Programming for Letters Identification Based on Neural Network." Applied Mechanics and Materials 734 (February 2015): 642–45. http://dx.doi.org/10.4028/www.scientific.net/amm.734.642.

Abstract:
To address the low accuracy of letter identification with neural networks, this paper designs an optimal neural network structure based on a genetic algorithm that optimizes the number of hidden-layer neurons. English letters can then be identified by the optimized network. The results obtained in the genetic programming optimizations are very satisfactory. Experiments show that the identification system has higher accuracy and achieves a good letter identification effect.
9

Gao, Wei. "New Evolutionary Neural Network Based on Continuous Ant Colony Optimization." Applied Mechanics and Materials 58-60 (June 2011): 1773–78. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1773.

Abstract:
An evolutionary neural network can be generated by combining an evolutionary optimization algorithm with a neural network. Based on an analysis of the shortcomings of previously proposed evolutionary neural networks, and combining the continuous ant colony optimization proposed by the author with the BP neural network, a new evolutionary neural network whose architecture and connection weights evolve simultaneously is proposed. Finally, on the typical XOR problem, the new evolutionary neural network is compared with the BP neural network and with traditional evolutionary neural networks based on genetic algorithms and evolutionary programming. The computing results show that the new network is better in both precision and efficiency.
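
The paper's continuous ant colony method is not reproduced here; as a generic illustration of evolving the weights of a fixed-topology network without gradients, in the spirit of evolutionary programming, here is a minimal (1+20) mutation-only sketch on XOR. The network size, mutation scale, and generation count are made-up choices.

# Evolving fixed-topology network weights on XOR with mutation-only selection.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss(w):                        # w packs a 2-4-1 network (17 parameters)
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    out = sig(sig(X @ W1 + b1) @ W2 + b2).ravel()
    return float(((out - y) ** 2).sum())

best = rng.normal(size=17)
for _ in range(3000):               # (1+20) evolution: keep the fittest individual
    kids = best + 0.3 * rng.normal(size=(20, 17))
    best = min(list(kids) + [best], key=loss)
print(loss(best))                   # should approach 0 if evolution succeeded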
10

Butaev, Mikhail M., Mikhail Yu. Babich, Igor I. Salnikov, Alexey I. Martyshkin, Dmitry V. Pashchenko, and Dmitry A. Trokoz. "Neural Network for Handwriting Recognition." Nexo Revista Científica 33, no. 02 (December 31, 2020): 623–37. http://dx.doi.org/10.5377/nexo.v33i02.10798.

Abstract:
Today, in the digital age, the problem of pattern recognition is very relevant. In particular, the task of text recognition is important in banking, for the automatic reading of documents and their control; in video surveillance systems, for example, to identify the license plate of a car that violated traffic rules; in security systems, for example, to check banknotes at an ATM; and in many other areas. A large number of methods are known for solving the problem of pattern recognition, but the main advantage of neural networks over other methods is their ability to learn. It is this feature that makes neural networks attractive to study. The article proposes a basic neural network model. The main algorithms are considered and a programming model is implemented in the Python programming language. In the course of the research, the following shortcomings of the basic model were revealed: a low learning rate (few correctly recognized digits in the first epochs of training); overfitting, in that the network had not learned to generalize the knowledge gained; and a low recognition accuracy of 95.13%. To address these shortcomings, various techniques were applied that increase accuracy and speed and reduce the effect of overfitting.
11

Lu, Jianjun, Shozo Tokinaga, and Yoshikazu Ikeda. "EXPLANATORY RULE EXTRACTION BASED ON THE TRAINED NEURAL NETWORK AND THE GENETIC PROGRAMMING." Journal of the Operations Research Society of Japan 49, no. 1 (2006): 66–82. http://dx.doi.org/10.15807/jorsj.49.66.

12

Singh, A. K., M. C. Deo, and V. Sanil Kumar. "Neural network–genetic programming for sediment transport." Proceedings of the Institution of Civil Engineers - Maritime Engineering 160, no. 3 (September 2007): 113–19. http://dx.doi.org/10.1680/maen.2007.160.3.113.

13

Wallace, Susan R., and F. Layne Wallace. "Two neural network programming assignments using arrays." ACM SIGCSE Bulletin 23, no. 1 (March 1991): 43–47. http://dx.doi.org/10.1145/107005.107014.

14

Li, Tao, and XiaoJie Liu. "An intelligent neural network programming system (NNPS)." ACM SIGPLAN Notices 35, no. 3 (March 2000): 65–72. http://dx.doi.org/10.1145/351159.351176.

15

Maa, C. Y., and M. A. Shanblatt. "Linear and quadratic programming neural network analysis." IEEE Transactions on Neural Networks 3, no. 4 (July 1992): 580–94. http://dx.doi.org/10.1109/72.143372.

16

Li, Hong-Xing, and Xu Li Da. "A neural network representation of linear programming." European Journal of Operational Research 124, no. 2 (July 2000): 224–34. http://dx.doi.org/10.1016/s0377-2217(99)00376-8.

17

Jian, Fang, and Xi Yugeng. "Neural network design based on evolutionary programming." Artificial Intelligence in Engineering 11, no. 2 (April 1997): 155–61. http://dx.doi.org/10.1016/s0954-1810(96)00025-8.

18

Filippelli, F. L., M. Forti, and S. Manetti. "New linear and quadratic programming neural network." Electronics Letters 30, no. 20 (September 29, 1994): 1693–94. http://dx.doi.org/10.1049/el:19941117.

19

CHIA, HENRY WAI-KIT, and CHEW-LIM TAN. "NEURAL LOGIC NETWORK LEARNING USING GENETIC PROGRAMMING." International Journal of Computational Intelligence and Applications 01, no. 04 (December 2001): 357–68. http://dx.doi.org/10.1142/s1469026801000299.

Abstract:
Neural Logic Networks or Neulonets are hybrids of neural networks and expert systems capable of representing complex human logic in decision making. Each neulonet is composed of rudimentary net rules which themselves depict a wide variety of fundamental human logic rules. An early methodology employed in neulonet learning for pattern classification involved weight adjustments during back-propagation training, which ultimately rendered the net rules incomprehensible. A new technique is now developed that allows the neulonet to learn by composing the net rules using genetic programming without the need to impose weight modifications, thereby maintaining the inherent logic of the net rules. Experimental results are presented to illustrate this new and exciting capability in capturing human decision logic from examples. The extraction and analysis of human logic net rules from an evolved neulonet are discussed. These extracted net rules are shown to provide an alternative perspective on the breadth of knowledge that can be expressed and discovered. Comparisons are also made to demonstrate the added advantage of using net rules over the standard boolean logic of negation, disjunction, and conjunction in the realm of evolutionary computation.
20

Tsai, Hsing-Chih, and Yong-Huang Lin. "Modular neural network programming with genetic optimization." Expert Systems with Applications 38, no. 9 (September 2011): 11032–39. http://dx.doi.org/10.1016/j.eswa.2011.02.147.

21

Mahdavi, Ali, Mohsen Najarchi, Emadoddin Hazaveie, Seyed Mohammad Mirhosayni Hazave, and Seyed Mohammad Mahdai Najafizadeh. "Comparison of neural networks and genetic algorithms to determine missing precipitation data (Case study: the city of Sari)." Revista de la Universidad del Zulia 11, no. 29 (February 8, 2020): 114–28. http://dx.doi.org/10.46925//rdluz.29.08.

Abstract:
Neural networks and genetic programming were investigated as new methods for predicting rainfall in the catchment area of the city of Sari. Various methods are used for such prediction, including time series models, artificial neural networks, fuzzy logic, neuro-fuzzy systems, and genetic programming. Results were evaluated using two statistical indicators, the root mean square error and the correlation coefficient. For the optimal genetic programming model, the correlation coefficient and root mean square error were 0.973 and 0.034 in training, versus 0.964 and 0.057 for the optimal neural network model. Genetic programming was thus more accurate than the artificial neural networks and is recommended as a good way to predict missing precipitation data accurately.
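
For reference, the two indicators used in this comparison can be computed as follows (a generic snippet with placeholder arrays, not the study's data):

# Root mean square error and correlation coefficient for predictions.
import numpy as np

obs = np.array([3.1, 4.0, 2.2, 5.5])    # placeholder observations
pred = np.array([3.0, 4.2, 2.0, 5.4])   # placeholder predictions

rmse = np.sqrt(np.mean((pred - obs) ** 2))
r = np.corrcoef(obs, pred)[0, 1]
print(rmse, r)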
22

Pandey, Sunil, Naresh Kumar Nagwani, and Shrish Verma. "Aspects of programming for implementation of convolutional neural networks on multisystem HPC architectures." Journal of Physics: Conference Series 2062, no. 1 (November 1, 2021): 012016. http://dx.doi.org/10.1088/1742-6596/2062/1/012016.

Abstract:
The training of deep learning convolutional neural networks is extremely compute intensive and takes a long time to complete on all but small datasets. This is a major limitation inhibiting the widespread adoption of convolutional neural networks in real-world applications despite their better image classification performance in comparison with other techniques. Multidirectional research and development efforts are therefore being pursued with the objective of boosting the computational performance of convolutional neural networks. Development of parallel and scalable deep learning convolutional neural network implementations for multisystem high performance computing architectures is important against this background. Prior analysis based on computational experiments indicates that a combination of pipeline and task parallelism results in significant convolutional neural network performance gains of up to 18 times. This paper discusses the aspects which are important for implementing parallel and scalable convolutional neural networks on CPU-based multisystem high performance computing architectures: computational pipelines, convolutional neural networks and their pipelines, multisystem high performance computing architectures, and parallel programming models.
23

Zhang, Yaling, and Hongwei Liu. "A new projection neural network for linear and convex quadratic second-order cone programming." Journal of Intelligent & Fuzzy Systems 42, no. 4 (March 4, 2022): 2925–37. http://dx.doi.org/10.3233/jifs-210164.

Abstract:
A new projection neural network approach is presented for linear and convex quadratic second-order cone programming. In the method, the optimality conditions of the linear and convex second-order cone program are equivalent to cone projection equations. A Lyapunov function is given based on the G-norm distance function. Based on the cone projection function, the descent direction of the Lyapunov function is used to design the new projection neural network. For the proposed neural network, we give the Lyapunov stability analysis and prove global convergence. Finally, some numerical examples and two kinds of grasping force optimization problems are used to test the efficiency of the proposed neural network. The simulation results show that the proposed neural network is efficient for solving some linear and convex quadratic second-order cone programming problems. In particular, the proposed neural network can overcome the oscillating trajectory of the existing projection neural network on some linear second-order cone programming examples and on the min-max grasping force optimization problem.
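
The basic building block of such networks is the projection onto the second-order cone, which has a simple closed form. A minimal sketch follows (Python/NumPy; the network dynamics built around this projection are omitted):

# Closed-form projection onto the second-order cone
#   K = { (x0, xbar) : ||xbar||_2 <= x0 }.
import numpy as np

def proj_soc(x):
    x0, xbar = x[0], x[1:]
    nrm = np.linalg.norm(xbar)
    if nrm <= x0:                    # already inside the cone
        return x.copy()
    if nrm <= -x0:                   # inside the polar cone: project to the origin
        return np.zeros_like(x)
    alpha = (x0 + nrm) / 2.0         # otherwise project to the cone boundary
    return np.concatenate(([alpha], alpha * xbar / nrm))

print(proj_soc(np.array([0.5, 3.0, 4.0])))   # -> [2.75, 1.65, 2.2]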
24

Matveeva, N. "Comparative analysis using neural networks programming on Java for signal recognition." System technologies 1, no. 138 (March 30, 2022): 185–91. http://dx.doi.org/10.34185/1562-9945-1-138-2022-18.

Abstract:
The results of a study of a multilayer perceptron and a radial basis function neural network for signal recognition are presented. The neural networks are implemented in Java in the NetBeans environment. The optimal number of neurons in the hidden layer is selected to build an effective architecture of the neural network. Experiments were performed to analyze MSE values, Euclidean distance, and accuracy.
25

WU, AI, and P. K. S. TAM. "A NEURAL NETWORK METHODOLOGY OF QUADRATIC OPTIMIZATION." International Journal of Neural Systems 09, no. 02 (April 1999): 87–93. http://dx.doi.org/10.1142/s0129065799000083.

Abstract:
According to the basic optimization principle of artificial neural networks, a novel kind of neural network model for solving the quadratic programming problem is presented. The methodology is based on Lagrange multiplier theory in optimization and seeks to provide solutions satisfying the necessary conditions of optimality. The equilibrium point of the network satisfies the Kuhn-Tucker condition for the problem. The stability and convergence of the neural network are investigated and the strategy of the neural optimization is discussed. The feasibility of the neural network method is verified with computational examples. Results of the simulation of the neural network to solve optimization problems are presented to illustrate the computational power of the neural network method.
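
As a rough illustration of Lagrange-multiplier-based network dynamics (not necessarily the authors' exact model), the following sketch Euler-integrates a gradient flow on the Lagrangian of a small equality-constrained quadratic program; the toy data and step size are assumptions.

# Lagrangian gradient-flow dynamics for
#   minimize 0.5 x^T Q x + c^T x  subject to  A x = b:
# descend in x, ascend in the multiplier lambda.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

x, lam, dt = np.zeros(2), np.zeros(1), 1e-2
for _ in range(20000):
    x -= dt * (Q @ x + c + A.T @ lam)   # primal descent
    lam += dt * (A @ x - b)             # dual ascent
print(x)                                # converges to the KKT point (0.75, 2.25)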
26

Xu, Shenghe, Shivendra S. Panwar, Murali Kodialam, and T. V. Lakshman. "Deep Neural Network Approximated Dynamic Programming for Combinatorial Optimization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1684–91. http://dx.doi.org/10.1609/aaai.v34i02.5531.

Abstract:
In this paper, we propose a general framework for combining deep neural networks (DNNs) with dynamic programming to solve combinatorial optimization problems. For problems that can be broken into smaller subproblems and solved by dynamic programming, we train a set of neural networks to replace value or policy functions at each decision step. Two variants of the neural network approximated dynamic programming (NDP) methods are proposed; in the value-based NDP method, the networks learn to estimate the value of each choice at the corresponding step, while in the policy-based NDP method the DNNs only estimate the best decision at each step. The training procedure of the NDP starts from the smallest problem size and a new DNN for the next size is trained to cooperate with previous DNNs. After all the DNNs are trained, the networks are fine-tuned together to further improve overall performance. We test NDP on the linear sum assignment problem, the traveling salesman problem and the talent scheduling problem. Experimental results show that NDP can achieve considerable computation time reduction on hard problems with reasonable performance loss. In general, NDP can be applied to reducible combinatorial optimization problems for the purpose of computation time reduction.
27

Zhang, Yaling, and Hongwei Liu. "A Projection Neural Network for Circular Cone Programming." Mathematical Problems in Engineering 2018 (June 10, 2018): 1–12. http://dx.doi.org/10.1155/2018/4607853.

Abstract:
A projection neural network method for circular cone programming is proposed. In the KKT condition for the circular cone programming, the complementary slack equation is transformed into an equivalent projection equation. The energy function is constructed by the distance function and the dynamic differential equation is given by the descent direction of the energy function. Since the projection on the circular cone is simple and costs less computation time, the proposed neural network requires less state variables and leads to low complexity. We prove that the proposed neural network is stable in the sense of Lyapunov and globally convergent. The simulation experiments show our method is efficient and effective.
28

Liu, Qingshan, and Jun Wang. "A One-Layer Recurrent Neural Network with a Discontinuous Activation Function for Linear Programming." Neural Computation 20, no. 5 (May 2008): 1366–83. http://dx.doi.org/10.1162/neco.2007.03-07-488.

Abstract:
A one-layer recurrent neural network with a discontinuous activation function is proposed for linear programming. The number of neurons in the neural network is equal to that of decision variables in the linear programming problem. It is proven that the neural network with a sufficiently high gain is globally convergent to the optimal solution. Its application to linear assignment is discussed to demonstrate the utility of the neural network. Several simulation examples are given to show the effectiveness and characteristics of the neural network.
29

Zhang, Quan-Ju, and Xiao Qing Lu. "A Recurrent Neural Network for Nonlinear Fractional Programming." Mathematical Problems in Engineering 2012 (2012): 1–18. http://dx.doi.org/10.1155/2012/807656.

Abstract:
This paper presents a novel recurrent time-continuous neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to demonstrate further the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.
30

Mezher, Liqaa Saadi. "Hamming neural network application with FPGA device." International Journal of Reconfigurable and Embedded Systems (IJRES) 10, no. 1 (March 1, 2021): 37. http://dx.doi.org/10.11591/ijres.v10.i1.pp37-46.

Abstract:
The Hamming neural network is a kind of artificial neural network consisting of two kinds of layers (a feedforward layer and a recurrent layer). In this study, two input patterns in binary form are used. In the first layer, two neurons with a pure linear transfer function were used; in the second layer, three neurons with a positive linear transfer function were used. The Hamming neural network algorithm was implemented with three design methods (a logic gate method, a program code description method, and a block diagram method). The study used VHDL programming and an FPGA device.
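
For orientation, a Hamming network can be sketched in a few lines: a feedforward layer scores the input against stored prototypes, and a recurrent MAXNET layer iteratively suppresses all but the strongest match. The bipolar toy patterns and the inhibition constant below are illustrative assumptions, not the study's VHDL design.

# Hamming network sketch: feedforward matching + MAXNET competition.
import numpy as np

protos = np.array([[1, -1, 1, -1],    # stored exemplar 0
                   [1, 1, -1, -1]])   # stored exemplar 1
x = np.array([1, -1, 1, 1])           # noisy input to classify

n = protos.shape[1]
a = (protos @ x + n) / 2.0            # feedforward layer: matching bits per prototype

eps = 1.0 / protos.shape[0] - 0.01    # MAXNET inhibition, must be < 1/(number of units)
for _ in range(100):                  # recurrent competition until one unit survives
    a = np.maximum(0.0, a - eps * (a.sum() - a))
print(np.argmax(a))                   # index of the winning prototype -> 0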
31

Singh, A. K., M. C. Deo, and V. Sanil Kumar. "Discussion: Neural network – genetic programming for sediment transport." Proceedings of the Institution of Civil Engineers - Maritime Engineering 163, no. 3 (September 2010): 135–36. http://dx.doi.org/10.1680/maen.2010.163.3.135.

32

Gao, X. B. "A Novel Neural Network for Nonlinear Convex Programming." IEEE Transactions on Neural Networks 15, no. 3 (May 2004): 613–21. http://dx.doi.org/10.1109/tnn.2004.824425.

33

Forti, M., P. Nistri, and M. Quincampoix. "Generalized Neural Network for Nonsmooth Nonlinear Programming Problems." IEEE Transactions on Circuits and Systems I: Regular Papers 51, no. 9 (September 2004): 1741–54. http://dx.doi.org/10.1109/tcsi.2004.834493.

34

Chiu, Chinchuan, Chia-Yiu Maa, and Michael A. Shanblatt. "AN ARTIFICIAL NEURAL NETWORK ALGORITHM FOR DYNAMIC PROGRAMMING." International Journal of Neural Systems 01, no. 03 (January 1990): 211–20. http://dx.doi.org/10.1142/s0129065790000114.

Abstract:
An artificial neural network (ANN) formulation for solving the dynamic programming problem (DPP) is presented. The DPP entails finding an optimal path from a source node to a destination node which minimizes (or maximizes) a performance measure of the problem. The optimization procedure is implemented and demonstrated using a modified Hopfield–Tank ANN. Simulations show that the ANN can provide a near-optimal solution during an elapsed time of only a few characteristic time constants of the circuit for DPPs with sizes as large as 64 stages with 64 states in each stage. An application of the proposed algorithm to an optimal control problem is presented. The proposed artificial neural network dynamic programming algorithm is attractive due to its radically improved speed over conventional techniques especially where real-time near-optimal solutions are required.
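
For reference, the stage-state recursion that such a network approximates is the ordinary backward Bellman sweep, shown below on a tiny made-up instance (the cited circuit solves the same problem in parallel analog dynamics):

# Backward dynamic programming over a stage-state graph:
#   V[k][i] = min_j ( cost[k][i][j] + V[k+1][j] ).
import numpy as np

rng = np.random.default_rng(2)
stages, states = 3, 2
cost = rng.integers(1, 9, size=(stages, states, states)).astype(float)

V = np.zeros(states)                 # terminal values
for k in reversed(range(stages)):
    Q = cost[k] + V                  # Q[i, j] = step cost + cost-to-go of successor j
    V = Q.min(axis=1)                # Bellman backup
print(V)                             # optimal cost-to-go from each starting state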
35

Xia, Yousen. "Neural network for solving extended linear programming problems." IEEE Transactions on Neural Networks 8, no. 3 (May 1997): 803–6. http://dx.doi.org/10.1109/72.572118.

36

Gen, Mitsuo, Kenichi Ida, and Reiko Kobuchi. "Neural network technique for fuzzy multiobjective linear programming." Computers & Industrial Engineering 35, no. 3-4 (December 1998): 543–46. http://dx.doi.org/10.1016/s0360-8352(98)00154-5.

37

Linko, P., and Y. H. Zhu. "Neural Network Programming in Bioprocess Estimation and Control." IFAC Proceedings Volumes 25, no. 2 (March 1992): 163–66. http://dx.doi.org/10.1016/s1474-6670(17)50344-4.

38

Li, Cuiping, Xingbao Gao, Yawei Li, and Rui Liu. "A new neural network for l1-norm programming." Neurocomputing 202 (August 2016): 98–103. http://dx.doi.org/10.1016/j.neucom.2016.03.042.

39

Wang, Jun. "A deterministic annealing neural network for convex programming." Neural Networks 7, no. 4 (January 1994): 629–41. http://dx.doi.org/10.1016/0893-6080(94)90041-8.

40

Chen, K. z., Y. Leung, K. S. Leung, and X. b. Gao. "A Neural Network for Solving Nonlinear Programming Problems." Neural Computing & Applications 11, no. 2 (October 23, 2002): 103–11. http://dx.doi.org/10.1007/s005210200022.

41

Nasira, G. M., S. Ashok Kumar, and T. S. S. Balaji. "Neural Network Implementation for Integer Linear Programming Problem." International Journal of Computer Applications 1, no. 18 (February 25, 2010): 98–102. http://dx.doi.org/10.5120/375-560.

42

Goncharov, Sergey, and Andrey Nechesov. "Polynomial-Computable Representation of Neural Networks in Semantic Programming." J 6, no. 1 (January 6, 2023): 48–57. http://dx.doi.org/10.3390/j6010004.

Abstract:
A lot of libraries for neural networks are written for Turing-complete programming languages such as Python, C++, PHP, and Java. However, at the moment there are no suitable libraries implemented for the p-complete logical programming language L. This paper investigates the issues of polynomial-computable representation of neural networks for this language, where the basic elements are hereditarily finite list elements, and programs are defined using special terms and formulas of mathematical logic. Such a representation has been shown to exist for multilayer feedforward fully connected neural networks with sigmoidal activation functions. To prove this fact, special p-iterative terms are constructed that simulate the operation of a neural network. This result plays an important role in the application of the p-complete logical programming language L to artificial intelligence algorithms.
43

Sharma, V., R. Jha, and R. Naresh. "Optimal multi-reservoir network control by augmented Lagrange programming neural network." Applied Soft Computing 7, no. 3 (June 2007): 783–90. http://dx.doi.org/10.1016/j.asoc.2005.07.006.

44

Sundaram, Arunachalam. "Applications of Artificial Neural Networks to Solve Ordinary Differential Equations." International Journal for Research in Applied Science and Engineering Technology 10, no. 2 (February 28, 2022): 882–88. http://dx.doi.org/10.22214/ijraset.2022.40413.

Abstract:
Applications of neural networks to numerical problems have gained increasing interest. Solving ordinary differential equations can be realized with simple artificial neural network architectures. In this paper, the first step is to train a neural network to satisfy the conditions required by a differential equation, and the second is to find a function whose derivative satisfies the ordinary differential equation condition. The method of solving differential equations is implemented in Python using the TensorFlow library. Keywords: Neural Network, Differential Equation, Loss function, Train function, Mean Squared Error, TensorFlow.
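
A minimal sketch of this trial-solution approach for the ODE dy/dt = -y with y(0) = 1: the network N enters through psi(t) = 1 + t * N(t), which satisfies the initial condition by construction, and training minimizes the squared ODE residual. The architecture, optimizer, and collocation grid are illustrative assumptions, not the paper's exact setup.

# Train psi(t) = 1 + t * N(t) so that d(psi)/dt + psi ~ 0 (TensorFlow).
import tensorflow as tf

net = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(1e-2)
t = tf.reshape(tf.linspace(0.0, 2.0, 50), (-1, 1))   # collocation points

for _ in range(2000):
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            inner.watch(t)
            psi = 1.0 + t * net(t)                 # trial solution, psi(0) = 1
        dpsi = inner.gradient(psi, t)              # d(psi)/dt at each collocation point
        loss = tf.reduce_mean((dpsi + psi) ** 2)   # mean squared ODE residual
    grads = outer.gradient(loss, net.trainable_variables)
    opt.apply_gradients(zip(grads, net.trainable_variables))
print(float(loss))                                 # small residual; psi approximates exp(-t)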
45

Zheng, Haoxin, Zhanqiang Chang, and Jiaxi Liu. "Deformation prediction programming based on MATLAB and BP neural network." E3S Web of Conferences 360 (2022): 01042. http://dx.doi.org/10.1051/e3sconf/202236001042.

Abstract:
In order to address land scarcity, high-rise buildings have become more and more common in modern big cities. Meanwhile, it is necessary to monitor the deformation of high-rise buildings for their safety. Based on deformation monitoring data, we can predict subsequent data and then analyze the safety situation of buildings. In this paper, we introduce a prediction method in which a BP neural network and a MATLAB program are applied. The BP neural network is an algorithm that simulates biological neurons to train network learning; the principle of an artificial neural network is to build a mathematical model of brain neural activity by imitating the neural activity of the human brain. The BP neural network has strong learning ability, generalization ability, nonlinear mapping ability, and fault tolerance.
46

CHEN, HONGXIN, SHYAM PRASAD ADHIKARI, HYONOK YOON, and HYONGSUK KIM. "IMPLEMENTATION OF THE COMPLEX PROCESSING OF DYNAMIC PROGRAMMING WITH NONLINEAR TEMPLATES OF CELLULAR NEURAL NETWORKS." International Journal of Bifurcation and Chaos 20, no. 07 (July 2010): 2109–21. http://dx.doi.org/10.1142/s0218127410026952.

Abstract:
A complex processing of dynamic programming is implemented with the parallel architecture of Cellular Neural Networks. Dynamic programming is an efficient algorithm for finding an optimal path, and a Cellular Neural Network is a parallel computation architecture composed of an array of identical computation cells with identical connections at each cell. By breaking the complex processing of dynamic programming down into a sequence of simple steps, the dynamic programming algorithm can be built with the nonlinear templates of Cellular Neural Networks. The procedure to break the complex computation down into the sequence of CNN building blocks is illustrated. To show the feasibility of the proposed method, the designed CNN-based dynamic programming is applied to detecting the traces of road boundaries. Edge information of the road image is extracted and assigned as local distance values accordingly; the dynamic programming algorithm is then implemented by a nonlinear CNN template. The proposed algorithm and its possible circuit structure are described, and simulation results are reported.
47

Chen, Jiahan, Michael A. Shanblatt, and Chia-Yiu Maa. "IMPROVED NEURAL NETWORKS FOR LINEAR AND NONLINEAR PROGRAMMING." International Journal of Neural Systems 02, no. 04 (January 1991): 331–39. http://dx.doi.org/10.1142/s0129065791000303.

Abstract:
A method for improving the performance of artificial neural networks for linear and nonlinear programming is presented. By analyzing the behavior of the conventional penalty function, the reason for the inherent degradation in accuracy is identified. Based on this, a new combination penalty function is proposed which can ensure that the equilibrium point is acceptably close to the optimal point. A known neural network model has been modified by using the new penalty function, and the corresponding circuit scheme is given. Simulation results show that the relative error for linear and nonlinear programming is substantially reduced by the new method.
48

Akil, Ibnu. "NEURAL NETWORK FUNDAMENTAL DAN IMPLEMENTASI DALAM PEMROGRAMAN." INTI Nusa Mandiri 14, no. 2 (February 1, 2020): 189–94. http://dx.doi.org/10.33480/inti.v14i2.1179.

Abstract:
Nowadays machine learning and deep learning are becoming a trend in the world of information systems. They are actually part of the artificial intelligence domain. However, many people do not realize that machine learning and deep learning are built on neural networks. Therefore, in order to understand how machine learning and deep learning work, we must first understand the basic concept of the neural network. In this article, the writer describes the basic theory and mathematical functions of a neural network, with an example implementation in the Java programming language. The writer hopes that this article may help readers understand the neural network, which is the core of machine learning and deep learning.
49

Gridin, V. N., I. A. Evdokimov, B. R. Salem, and V. I. Solodovnikov. "OPTIMIZATION PROCESS ANALYSIS FOR HYPERPARAMETERS OF NEURAL NETWORK DATA PROCESSING STRUCTURES." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 196 (October 2020): 3–10. http://dx.doi.org/10.14489/vkit.2020.10.pp.003-010.

Abstract:
An analysis of the key stages, implementation features, and functioning principles of neural networks, including deep neural networks, has been carried out. The problems of choosing the number of hidden elements, selecting the internal topology, and setting parameters are considered. It is shown that in the training and validation process it is possible to control the capacity of a neural network and evaluate the qualitative characteristics of the constructed model. The issues of automating the construction process and optimizing the hyperparameters of neural network structures are considered, depending on the user's tasks and the available source data. A number of approaches based on the use of probabilistic programming, evolutionary algorithms, and recurrent neural networks are presented.
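
The paper surveys several automation approaches; as a generic, self-contained illustration of hyperparameter optimization for a small neural network (plain random search rather than the evolutionary or probabilistic methods discussed), consider the following scikit-learn sketch. The search space and dataset are arbitrary assumptions.

# Random search over hidden-layer sizes and regularization with cross-validation.
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
search = RandomizedSearchCV(
    MLPClassifier(max_iter=500),
    param_distributions={
        "hidden_layer_sizes": [(16,), (32,), (64,), (32, 16)],
        "alpha": [1e-4, 1e-3, 1e-2],   # L2 regularization strengths
    },
    n_iter=6, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))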