A selection of scholarly literature on the topic "Neural network programming"


Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Neural network programming."

Next to each work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Neural network programming"

1

Liu, Qingshan, Jinde Cao, and Guanrong Chen. "A Novel Recurrent Neural Network with Finite-Time Convergence for Linear Programming." Neural Computation 22, no. 11 (November 2010): 2962–78. http://dx.doi.org/10.1162/neco_a_00029.

Abstract:
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
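The gradient-flow idea summarized in this abstract can be sketched in a few lines. This is a generic penalty-based gradient flow for a toy LP, not the authors' finite-time convergent model; the problem data, penalty weight, and step size below are illustrative assumptions:

```python
import numpy as np

# Toy LP: minimize c^T x  subject to  A x = b, x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

rho = 100.0   # penalty weight (assumed)
dt = 1e-3     # Euler integration step (assumed)
x = np.array([0.5, 0.5])

# Euler-integrate the gradient flow dx/dt = -grad E(x), where
# E(x) = c^T x + (rho/2)||Ax - b||^2 + (rho/2)||min(x, 0)||^2.
for _ in range(20000):
    grad = c + rho * A.T @ (A @ x - b) + rho * np.minimum(x, 0.0)
    x = x - dt * grad

print(x)  # close to the LP optimum (1, 0), up to penalty error
```

With a finite penalty weight the equilibrium only approximates the LP optimum; the paper's model is designed to reach the exact solution in finite time.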
2

FRANCELIN ROMERO, ROSELI A., JANUSZ KACPRZYK, and FERNANDO GOMIDE. "A BIOLOGICALLY INSPIRED NEURAL NETWORK FOR DYNAMIC PROGRAMMING." International Journal of Neural Systems 11, no. 06 (December 2001): 561–72. http://dx.doi.org/10.1142/s0129065701000965.

Abstract:
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of neural networks is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of the computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including shortest path and fuzzy decision making problems.
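The Bellman recursion that this abstract builds on can be illustrated directly (the weighted graph below is hypothetical):

```python
# Shortest path by backward dynamic programming (Bellman's principle):
# V(n) = min over edges (n -> m) of cost(n, m) + V(m).
graph = {                      # hypothetical weighted DAG
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 6},
    "C": {"D": 3},
    "D": {},
}

def shortest_cost(node, goal, memo=None):
    if memo is None:
        memo = {}
    if node == goal:
        return 0.0
    if node in memo:
        return memo[node]
    memo[node] = min(
        (w + shortest_cost(m, goal, memo) for m, w in graph[node].items()),
        default=float("inf"),
    )
    return memo[node]

print(shortest_cost("A", "D"))  # A -> B -> C -> D = 1 + 2 + 3 = 6.0
```

The memoized recursion evaluates each node once, which is exactly the overlap that dynamic programming exploits.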
3

Dölen, Melik, and Robert D. Lorenz. "General Methodologies for Neural Network Programming." International Journal of Smart Engineering System Design 4, no. 1 (January 2002): 63–73. http://dx.doi.org/10.1080/10255810210629.

4

Abdullah, Wan Ahmad Tajuddin Wan. "Logic programming on a neural network." International Journal of Intelligent Systems 7, no. 6 (August 1992): 513–19. http://dx.doi.org/10.1002/int.4550070604.

5

Razaqpur, A. G., A. O. Abd El Halim, and Hosny A. Mohamed. "Bridge management by dynamic programming and neural networks." Canadian Journal of Civil Engineering 23, no. 5 (October 1, 1996): 1064–69. http://dx.doi.org/10.1139/l96-913.

Abstract:
Bridges and pavements represent the major investment in a highway network. In addition, they are in constant need of maintenance, rehabilitation, and replacement. One of the problems related to highway infrastructure is that the cost of maintaining a network of bridges with an acceptable level-of-service is more than the budgeted funds. For large bridge networks, traditional management practices have become inadequate for dealing with this serious problem. Bridge management systems are a relatively new approach developed to solve the latter problem, following the successful application of similar system concepts to pavement management. Priority setting schemes used in bridge management systems range from subjective basis using engineering judgement to very complex optimization models. However, currently used priority setting schemes do not have the ability to optimize the system benefits in order to get optimal solutions. This paper presents a network optimization model which allocates a limited budget to bridge projects. The objective of the model is to determine the best timing for carrying out these projects and the spending level for each year of the analysis period in order to minimize the losses of the system benefits. A combined dynamic programming and neural network approach was utilized to formulate the model. The bridge problem has two dimensions: the time dimension and the bridge network dimension. The dynamic programming sets its stages in the time dimension, while the neural network handles the network dimension. Key words: bridge management, dynamic programming, neural networks, budget allocation.
6

NAZARI, ALI, and SHADI RIAHI. "COMPUTER-AIDED PREDICTION OF PHYSICAL AND MECHANICAL PROPERTIES OF HIGH STRENGTH CEMENTITIOUS COMPOSITE CONTAINING Cr2O3 NANOPARTICLES." Nano 05, no. 05 (October 2010): 301–18. http://dx.doi.org/10.1142/s1793292010002219.

Abstract:
In the present paper, two models based on artificial neural networks (ANN) and genetic programming (GEP) for predicting flexural strength and percentage of water absorption of concretes containing Cr2O3 nanoparticles have been developed at different ages of curing. For the purpose of building these models, training and testing using experimental results for 144 specimens produced with 16 different mixture proportions were conducted. The data used in the multilayer feed-forward neural network models and the input variables of the genetic programming models are arranged in a format of eight input parameters that cover the cement content (C), nanoparticle content (N), aggregate type (AG), water content (W), the amount of superplasticizer (S), the type of curing medium (CM), age of curing (AC) and number of testing try (NT). According to these input parameters, the neural network and genetic programming models predicted the flexural strength and percentage of water absorption values of concretes containing Cr2O3 nanoparticles. The training and testing results show that both models have strong potential for predicting these values. Although the neural network predicted better results, genetic programming is able to predict reasonable values with a simpler method.
7

SUTO, JOZSEF, and STEFAN ONIGA. "Testing artificial neural network for hand gesture recognition." Creative Mathematics and Informatics 22, no. 2 (2013): 223–28. http://dx.doi.org/10.37193/cmi.2013.02.12.

Abstract:
Neural networks are well suited to gesture recognition. In this article we present the results of an artificial feed-forward network for a simplified hand gesture recognition problem. In neural networks, the learning algorithm is very important because the performance of the network depends on it. One of the best-known learning algorithms is backpropagation. Some mathematical software packages provide acceptable results for a given problem with a backpropagation-based network, but in some cases a program implemented in a high-level programming language can provide a better solution. The main topics of the article cover the structure of the test environment, the mathematical background of the implemented methods, some programming remarks and the test results.
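As a reminder of what such a high-level-language implementation involves, here is a minimal backpropagation loop for a one-hidden-layer network. The toy regression task, layer sizes, and learning rate are assumptions, not the article's gesture-recognition setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed): learn y = 0.5 * (x1 + x2).
X = rng.uniform(-1, 1, size=(64, 2))
y = 0.5 * (X[:, 0] + X[:, 1])

# One hidden layer of tanh units, trained by backpropagation.
W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8,));   b2 = 0.0
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)

for _ in range(500):
    h, out = forward(X)
    err = out - y                        # dL/dout (up to a constant factor)
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)  # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out1 = forward(X)
loss1 = np.mean((out1 - y) ** 2)
print(loss1 < loss0)  # True: training reduced the squared error
```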
8

Liu, Yan Hui, and Zhi Peng Wang. "Genetic Programming for Letters Identification Based on Neural Network." Applied Mechanics and Materials 734 (February 2015): 642–45. http://dx.doi.org/10.4028/www.scientific.net/amm.734.642.

Abstract:
To address the low accuracy of letter identification with neural networks, this paper designs an optimal neural network structure based on a genetic algorithm that optimizes the number of hidden-layer nodes. English letters can then be identified by the optimized neural network. The results obtained in the genetic programming optimizations are very satisfactory. Experiments show that the identification system has higher accuracy and achieves a good letter identification effect.
9

Gao, Wei. "New Evolutionary Neural Network Based on Continuous Ant Colony Optimization." Applied Mechanics and Materials 58-60 (June 2011): 1773–78. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1773.

Abstract:
An evolutionary neural network can be generated by combining an evolutionary optimization algorithm with a neural network. Based on an analysis of the shortcomings of previously proposed evolutionary neural networks, and combining the continuous ant colony optimization proposed by the author with a BP neural network, a new evolutionary neural network whose architecture and connection weights evolve simultaneously is proposed. Finally, on the typical XOR problem, the new evolutionary neural network is compared with the BP neural network and with traditional evolutionary neural networks based on genetic algorithms and evolutionary programming. The computing results show that the new neural network is better in both precision and efficiency.
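The evolutionary-programming side of this comparison can be sketched as a simple mutate-and-select loop on the XOR task the abstract mentions. The network size, mutation scale, and population scheme are illustrative assumptions, not the author's continuous ant colony method:

```python
import numpy as np

rng = np.random.default_rng(1)

# The XOR task used in the comparison above.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def loss(w):
    # Decode a flat vector into a 2-3-1 network (sizes are assumptions).
    W1 = w[:6].reshape(2, 3); b1 = w[6:9]
    W2 = w[9:12]; b2 = w[12]
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    return np.mean((out - y) ** 2)

# (1 + lambda) evolutionary programming: mutate, keep the best.
best = rng.normal(0, 1, 13)
best_loss = loss(best)
start_loss = best_loss
for _ in range(300):
    for _ in range(10):                        # lambda = 10 offspring
        child = best + rng.normal(0, 0.2, 13)  # Gaussian mutation
        cl = loss(child)
        if cl < best_loss:
            best, best_loss = child, cl

print(best_loss <= start_loss)  # True: elitism never lets the loss increase
```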
10

Butaev, Mikhail M., Mikhail Yu. Babich, Igor I. Salnikov, Alexey I. Martyshkin, Dmitry V. Pashchenko, and Dmitry A. Trokoz. "Neural Network for Handwriting Recognition." Nexo Revista Científica 33, no. 02 (December 31, 2020): 623–37. http://dx.doi.org/10.5377/nexo.v33i02.10798.

Abstract:
Today, in the digital age, the problem of pattern recognition is very relevant. In particular, the task of text recognition is important in banking, for the automatic reading of documents and their control; in video control systems, for example, to identify the license plate of a car that violated traffic rules; in security systems, for example, to check banknotes at an ATM; and in many other areas. A large number of methods are known for solving the problem of pattern recognition, but the main advantage of neural networks over other methods is their learning ability. It is this feature that makes neural networks attractive to study. The article proposes a basic neural network model. The main algorithms are considered and a programming model is implemented in the Python programming language. In the course of research, the following shortcomings of the basic model were revealed: slow learning (few correctly recognized digits in the first epochs of training); overfitting - the network had not learned to generalize the knowledge gained; and a low recognition probability of 95.13%. To address these shortcomings, various techniques were used that increase the accuracy and speed of the network and reduce the effect of overfitting.

Dissertations on the topic "Neural network programming"

1

Howse, Samuel. "Dynamic programming problems, neural network solutions and economic applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0009/MQ60678.pdf.

2

Collins, Tamar L. "A methodology for engineering neural network systems." Thesis, University of Exeter, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284620.

3

Sulaiman, Md Nasir. "The design of a neural network compiler." Thesis, Loughborough University, 1994. https://dspace.lboro.ac.uk/2134/25628.

Abstract:
Computer simulation is a flexible and economical way for rapid prototyping and concept evaluation with Neural Network (NN) models. Increasing research on NNs has led to the development of several simulation programs. Not all simulations have the same scope: some allow only a fixed network model and some are more general. Designing a simulation program for general-purpose NN models has become a current trend because of its flexibility and efficiency. A programming language designed specifically for NN models is preferable, since existing high-level languages such as C suit only NN designers with a strong computing background. Program translation for NN languages is performed by an interpreter, a compiler, or a combination of the two. There are also various programming styles, such as procedural, functional, descriptive and object-oriented. The main focus of this thesis is to study the feasibility of using a compiler method for the development of a general-purpose simulator, NEUCOMP, which compiles a program written as a list of mathematical specifications of a particular NN model and translates it into a chosen target program. The language supported by NEUCOMP is procedural in style. The mathematical statements required by the NN model are written in the program as scalar, vector and matrix assignments, which NEUCOMP translates into actual program loops. NEUCOMP can compile a simulation program written in the NEUCOMP language for any NN model, contains graphical facilities such as portraying the NN architecture and displaying a graph of the results during training, and can produce a program that runs on a parallel shared-memory multi-processor system.
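The core NEUCOMP idea, translating a mathematical assignment into explicit program loops, can be mimicked in miniature. The one-rule specification format below is an assumption for illustration only:

```python
# A toy version of the NEUCOMP idea: take a matrix-vector assignment
# given as a specification and emit the explicit loops. The one-rule
# "language" handled here (only y = W * x) is an assumption.
def compile_matvec(out, mat, vec, rows, cols):
    src = [
        f"def kernel({mat}, {vec}):",
        f"    {out} = [0.0] * {rows}",
        f"    for i in range({rows}):",
        f"        for j in range({cols}):",
        f"            {out}[i] += {mat}[i][j] * {vec}[j]",
        f"    return {out}",
    ]
    namespace = {}
    exec("\n".join(src), namespace)   # translate the spec into runnable loops
    return namespace["kernel"]

matvec = compile_matvec("y", "W", "x", rows=2, cols=2)
print(matvec([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0]))  # [3.0, 7.0]
```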
4

Lukashev, A. "Basics of artificial neural networks (ANNs)." Thesis, Київський національний університет технологій та дизайну, 2018. https://er.knutd.edu.ua/handle/123456789/11353.

5

Sims, Pauline. "Turing's P-type machine and neural network hybrid systems." Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240712.

6

Haggett, Simon J. "Towards a multipurpose neural network approach to novelty detection." Thesis, University of Kent, 2008. https://kar.kent.ac.uk/24133/.

Abstract:
Novelty detection, the identification of data that is unusual or different in some way, is relevant in a wide number of real-world scenarios, ranging from identifying unusual weather conditions to detecting evidence of damage in mechanical systems. However, utilising novelty detection approaches in a particular scenario presents significant challenges to the non-expert user. They must first select an appropriate approach from the novelty detection literature for their scenario. Then, suitable values must be determined for any parameters of the chosen approach. These challenges are at best time consuming and at worst prohibitively difficult for the user. Worse still, if no suitable approach can be found from the literature, then the user is left with the impossible task of designing a novelty detector themselves. In order to make novelty detection more accessible, an approach is required which does not pose the above challenges. This thesis presents such an approach, which aims to automatically construct novelty detectors for specific applications. The approach combines a neural network model, recently proposed to explain a phenomenon observed in the neural pathways of the retina, with an evolutionary algorithm that is capable of simultaneously evolving the structure and weights of a neural network in order to optimise its performance in a particular task. The proposed approach was evaluated over a number of very different novelty detection tasks. It was found that, in each task, the approach successfully evolved novelty detectors which outperformed a number of existing techniques from the literature. A number of drawbacks with the approach were also identified, and suggestions were given on ways in which these may potentially be overcome.
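For contrast with the evolved detectors described above, a minimal statistical novelty detector looks like this (the Gaussian "normal" data and the 99th-percentile threshold are assumptions, not the thesis's approach):

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal" training data cluster; anything far from it is flagged as novel.
normal = rng.normal(0.0, 1.0, size=(200, 2))
mean = normal.mean(axis=0)

# Threshold = 99th percentile of training distances (a common heuristic).
dists = np.linalg.norm(normal - mean, axis=1)
threshold = np.quantile(dists, 0.99)

def is_novel(x):
    return np.linalg.norm(x - mean) > threshold

print(is_novel(np.array([8.0, 8.0])))   # far outlier -> True
print(is_novel(mean))                   # cluster centre -> False
```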
7

Heaton, Jeff T. "Automated Feature Engineering for Deep Neural Networks with Genetic Programming." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/994.

Abstract:
Feature engineering is a process that augments the feature vector of a machine learning model with calculated values that are designed to enhance the accuracy of a model’s predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefit from feature engineering. Expressions that combine one or more of the original features usually create these engineered features. The choice of the exact structure of an engineered feature is dependent on the type of machine learning model in use. Previous research demonstrated that various model families benefit from different types of engineered feature. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product based models to achieve on the same data set. This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. This dissertation algorithm faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. 
Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm’s engineered features.
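The search the algorithm performs can be caricatured by scoring a fixed pool of candidate engineered features instead of evolving expression trees, and by using correlation instead of a neural network as the fitness measure; both simplifications are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hidden relationship that raw features alone do not expose linearly:
x1 = rng.uniform(1, 2, 500)
x2 = rng.uniform(1, 2, 500)
y = x1 * x2

# A tiny, fixed candidate pool standing in for GP's evolved expressions.
candidates = {
    "x1 + x2": x1 + x2,
    "x1 - x2": x1 - x2,
    "x1 * x2": x1 * x2,
    "x1 / x2": x1 / x2,
}

def score(feature):
    # Fitness = squared correlation of the engineered feature with the target.
    return np.corrcoef(feature, y)[0, 1] ** 2

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # "x1 * x2"
```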
8

Gueddar, T. "Neural network and multi-parametric programming based approximation techniques for process optimisation." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1432138/.

Abstract:
In this thesis two approximation techniques are proposed: Artificial Neural Networks (ANN) and Multi-Parametric Programming. The usefulness of these techniques is demonstrated through process optimisation case studies. The oil refining industry mainly uses Linear Programming (LP) for refinery optimization and planning purposes on a daily basis. LPs are attractive from the computational-time point of view; however, they have limitations, such as not taking the nonlinearity of the refinery processes into account. The main aim of this work is to develop approximate models to replace the rigorous ones, providing good accuracy without compromising the computational time for refinery optimization. The data for deriving approximate models is generated from rigorous process models from a commercial software package which is extensively used in the refining industry. In this work we present three model reduction techniques. The first approach is based upon deriving an optimal configuration of artificial neural networks (ANN) for approximating the refinery models. The basic idea is to formulate the existence or not of the nodes and interconnections in the network using binary variables. This results in a Mixed Integer Nonlinear Programming formulation for Artificial Neural Networks (MIPANN). The second approach is concerned with the complexity associated with the large amounts of data that are usually available in refineries; a disaggregation-aggregation based approach is presented to address this complexity. The data is split (disaggregation) into smaller subsets and reduced ANN models are obtained for each subset. These ANN models are then combined (aggregation) to obtain an ANN model which represents the whole of the original data. The disaggregation step can be carried out within a parallel computing platform.
The third approach consists of combining the MIPANN and the disaggregation-aggregation reduction methods to handle medium and large scale training data, using a neural network that has already been reduced through nodes-and-interconnections optimization. Refinery optimization studies are carried out to demonstrate the applicability and the usefulness of these proposed model reduction approaches. Process synthesis and MIPANN problems are usually formulated as Mixed Integer Nonlinear Programming (MINLP) problems, requiring efficient algorithms for their solution. An approximate multi-parametric programming Branch and Bound (mpBB) algorithm is proposed. Approximate parametric solutions at the root node and other fractional nodes of the Branch and Bound (BB) tree are obtained and used to estimate the solution at the terminal nodes in different sections of the tree. These estimates are then used to guide the search in the BB tree, resulting in fewer nodes being evaluated and a reduction in the computational effort. Problems from the literature are solved using the proposed algorithm and compared with the other currently available algorithms for solving MINLP problems.
9

Myers, Catherine E. "Learning with delayed reinforcement in an exploratory probabilistic logic neural network." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46462.

10

Lategano, Antonio. "Image-based programming language recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/22208/.

Abstract:
This thesis addresses, for the first time, the problem of programming-language classification using image-based approaches. We used several Convolutional Neural Networks pre-trained on image-classification tasks, adapting them to classify images containing portions of source code written in 149 different programming languages. Our results showed that such models can learn, with good performance, the lexical features present in the text. By adding noise through modification of the characters in the images, we were able to understand which characters best allowed the models to discriminate between one class and another. The result, confirmed through visualization techniques such as Class Activation Mapping, is that the network learns low-level lexical features, focusing in particular on the symbols typical of each programming language (such as punctuation and brackets) rather than on alphanumeric characters.

Books on the topic "Neural network programming"

1

Radi, Amr Mohamed. Discovery of neural network learning rules using genetic programming. Birmingham: University of Birmingham, 2000.

2

Ellis, Robert W. Neural network programming techniques: With examples in C and C++. Englewood Cliffs, New Jersey: Prentice-Hall, 1993.

3

Neural network control of nonlinear discrete-time systems and industrial process. Boca Raton, FL: Taylor & Francis, 2006.

4

Segovia, Javier, Piotr S. Szczepaniak, and Marian Niedzwiedzinski, eds. E-commerce and intelligent methods. Heidelberg: Physica-Verlag, 2002.

5

A, Freeman James. Neural networks: Algorithms, applications, and programming techniques. Reading, Mass: Addison-Wesley, 1991.

6

Bertsekas, Dimitri P., and John N. Tsitsiklis. Neuro-dynamic programming. Belmont, Mass: Athena Scientific, 1996.

7

Programming neural networks with Encog 2 in Java. St. Louis, MO: Heaton Research, Inc., 2010.

8

Bey, I., ed. Neutral interfaces in design, simulation, and programming for robotics. Berlin: Springer-Verlag, 1994.

9

Rogers, Joey. Object-oriented neural networks in C++. San Diego, CA: Academic Press, 1997.

10

Souček, Branko, and IRIS Group, eds. Dynamic, genetic, and chaotic programming: The sixth-generation. New York: Wiley, 1992.


Book chapters on the topic "Neural network programming"

1

Vadla, Pradeep Kumar, Adarsha Ruwali, Kolla Bhanu Prakash, M. V. Prasanna Lakshmi, and G. R. Kanagachidambaresan. "Neural Network." In Programming with TensorFlow, 39–43. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-57077-4_5.

2

Watson, Mark. "Neural Network Library." In Programming in Scheme, 89–139. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-2394-8_6.

3

Nagapawan, Y. V. R., Kolla Bhanu Prakash, and G. R. Kanagachidambaresan. "Convolutional Neural Network." In Programming with TensorFlow, 45–51. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-57077-4_6.

4

Kanagachidambaresan, G. R., Adarsha Ruwali, Debrup Banerjee, and Kolla Bhanu Prakash. "Recurrent Neural Network." In Programming with TensorFlow, 53–61. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-57077-4_7.

5

Pinto, P., and M. Sette. "On neural network programming." In Trends in Artificial Intelligence, 420–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-54712-6_254.

6

Wang, Sun-Chong. "Artificial Neural Network." In Interdisciplinary Computing in Java Programming, 81–100. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-0377-4_5.

7

Angeniol, Bernard, Philip Treleaven, and Heidi Hackbarth. "Neural Network Programming & Applications." In ESPRIT ’90, 315–25. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0705-8_22.

8

Li, Xianneng, Wen He, and Kotaro Hirasawa. "Genetic Network Programming with Simplified Genetic Operators." In Neural Information Processing, 51–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42042-9_7.

9

Bessière, Pierre, Ali Chams, and Traian Muntean. "A Virtual Machine Model for Artificial Neural Network Programming." In International Neural Network Conference, 689–92. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_52.

10

Sher, Gene I. "The Unintentional Neural Network Programming Language." In Handbook of Neuroevolution Through Erlang, 143–50. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4463-3_5.


Conference papers on the topic "Neural network programming"

1

Yang, Zhun, Adam Ishay, and Joohyung Lee. "NeurASP: Embracing Neural Networks into Answer Set Programming." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/243.

Abstract:
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.
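The key NeurASP idea, treating network outputs as a probability distribution over atomic facts, can be shown with a two-digit toy query (the probabilities below are made up, and real NeurASP uses ASP solving rather than brute-force enumeration):

```python
from itertools import product

# Pretend softmax outputs of a digit classifier for two images
# (values are invented for illustration):
p_img1 = {0: 0.1, 1: 0.7, 2: 0.2}
p_img2 = {0: 0.2, 1: 0.3, 2: 0.5}

# Rule: sum(S) :- digit(img1, D1), digit(img2, D2), S = D1 + D2.
# P(sum = 3) = total mass of the worlds where D1 + D2 = 3.
prob = sum(
    p_img1[d1] * p_img2[d2]
    for d1, d2 in product(p_img1, p_img2)
    if d1 + d2 == 3
)
print(round(prob, 2))  # P(1,2) + P(2,1) = 0.7*0.5 + 0.2*0.3 = 0.41
```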
2

Diniz, Jessica Barbosa, Filipe R. Cordeiro, Pericles B. C. Miranda, and Laura A. Tomaz Da Silva. "A Grammar-based Genetic Programming Approach to Optimize Convolutional Neural Network Architectures." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4406.

Abstract:
Deep Learning is a research area under the spotlight in recent years due to its successful application to many domains, such as computer vision and image recognition. The most prominent technique derived from Deep Learning is the Convolutional Neural Network, which allows the network to automatically learn the representations needed for detection or classification tasks. However, Convolutional Neural Networks have some limitations, as designing these networks is not easy to master and requires expertise and insight. In this work, we present the use of a Genetic Algorithm associated with Grammar-based Genetic Programming to optimize Convolutional Neural Network architectures. To evaluate our proposed approach, we adopted the CIFAR-10 dataset to validate the evolution of the generated architectures, using accuracy to evaluate their classification performance on the test dataset. The results demonstrate that our method using Grammar-based Genetic Programming can easily produce optimized CNN architectures that are competitive and achieve high accuracy results.
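A grammar-based generator of architecture strings can be sketched as follows; the toy grammar, layer vocabulary, and depth cut-off are assumptions, not the paper's grammar:

```python
import random

# A toy grammar standing in for the paper's CNN-architecture grammar:
grammar = {
    "<net>":   [["<conv>", "<net>"], ["<dense>"]],
    "<conv>":  [["conv3x3-16"], ["conv3x3-32"], ["maxpool"]],
    "<dense>": [["dense-10"]],
}

def derive(symbol, rng, depth=0):
    if symbol not in grammar:            # terminal symbol
        return [symbol]
    options = grammar[symbol]
    # Force termination past a depth limit by taking the last production.
    rule = options[-1] if depth > 6 else rng.choice(options)
    out = []
    for s in rule:
        out.extend(derive(s, rng, depth + 1))
    return out

rng = random.Random(4)
arch = derive("<net>", rng)
print(arch)  # a list of layer names, always ending in "dense-10"
```

In the paper, such derivations form the genotypes that the genetic algorithm evolves; here we only sample one at random.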
3

Bavan, A. S. "NPS: a neural network programming system." In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137558.

4

Marcade, E., and B. Angeniol. "The GALATEA neural network programming system." In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374913.

5

Debnath, R., and H. Takahashi. "SVM Training: Second-Order Cone Programming versus Quadratic Programming." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246822.

6

Manhaeve, Robin, Giuseppe Marra, and Luc De Raedt. "Approximate Inference for Neural Probabilistic Logic Programming." In 18th International Conference on Principles of Knowledge Representation and Reasoning {KR-2021}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/kr.2021/45.

Abstract:
DeepProbLog is a neural-symbolic framework that integrates probabilistic logic programming and neural networks. It is realized by providing an interface between the probabilistic logic and the neural networks. Inference in probabilistic neural-symbolic methods is hard, since it combines logical theorem proving with probabilistic inference and neural network evaluation. In this work, we make inference more efficient by extending an approximate inference algorithm from the field of statistical relational AI. Instead of considering all possible proofs for a certain query, the system searches for the best proof. However, training a DeepProbLog model using approximate inference introduces additional challenges, as the best proof is unknown at the start of training, which can lead to convergence towards a local optimum. To be able to apply DeepProbLog to larger tasks, we propose: 1) a method for approximate inference using an A*-like search, called DPLA*; 2) an exploration strategy for proving in a neural-symbolic setting; and 3) a parametric heuristic to guide the proof search. We empirically evaluate the performance and scalability of the new approach and compare it to other neural-symbolic systems. The experiments show that DPLA* achieves a speed-up of up to 2-3 orders of magnitude in some cases.
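The best-proof idea behind DPLA* can be pictured as an A*-style search in negative-log-probability space, where the most probable proof is the cheapest path. The rule base, probabilities, and zero heuristic below are toy assumptions for illustration, not DeepProbLog's actual proof machinery:

```python
import heapq
import math

# Toy proof search: states are goals, edges are rule applications with
# probabilities; finding the best proof = cheapest path in -log space.
RULES = {
    "query":     [("subgoal_a", 0.9), ("subgoal_b", 0.5)],
    "subgoal_a": [("fact_1", 0.8)],
    "subgoal_b": [("fact_1", 0.95)],
    "fact_1":    [],  # no open subgoals: proved
}

def best_proof(start, heuristic=lambda state: 0.0):
    """A*-like search for the single most probable proof."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state in visited:
            continue
        visited.add(state)
        if not RULES[state]:                    # proof complete
            return math.exp(-cost), path
        for nxt, prob in RULES[state]:
            step = cost - math.log(prob)        # multiply probs = add -logs
            heapq.heappush(frontier, (step + heuristic(nxt), step, nxt, path + [nxt]))
    return 0.0, []

prob, proof = best_proof("query")   # picks the 0.9 * 0.8 branch
```

The parametric heuristic the paper proposes would replace the zero-valued `heuristic` above with a learned estimate of the remaining proof cost, pruning the frontier more aggressively.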
7

Ji, Yu, Youhui Zhang, Wenguang Chen, and Yuan Xie. "Bridge the Gap between Neural Networks and Neuromorphic Hardware with a Neural Network Compiler." In ASPLOS '18: Architectural Support for Programming Languages and Operating Systems. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3173162.3173205.

8

Nakayama, H., and Yeboon Yun. "Support Vector Regression Based on Goal Programming and Multi-objective Programming." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246821.

9

Johansson, U., T. Lofstrom, R. Konig, and L. Niklasson. "Building Neural Network Ensembles using Genetic Programming." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246836.

10

Wallace, Susan R., and F. Layne Wallace. "Two neural network programming assignments using arrays." In the twenty-second SIGCSE technical symposium. New York, New York, USA: ACM Press, 1991. http://dx.doi.org/10.1145/107004.107014.


Organization reports on the topic "Neural network programming"

1

Yaroshchuk, Svitlana O., Nonna N. Shapovalova, Andrii M. Striuk, Olena H. Rybalchenko, Iryna O. Dotsenko, and Svitlana V. Bilashenko. Credit scoring model for microfinance organizations. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3683.

Abstract:
The purpose of the work is the development and application of models for the scoring assessment of microfinance-institution borrowers. This model makes it possible to increase the efficiency of work in the field of credit. The object of the research is lending. The subject of the study is a direct scoring model for improving the quality of lending using machine learning methods. The objectives of the study: to determine the criteria for choosing a solvent borrower, to develop a model for early assessment, and to create software based on neural networks to determine the probability of loan-default risk. The research methods used include analysis of the literature on banking scoring; artificial intelligence methods for scoring; modeling of the scoring-estimation algorithm using neural networks; an empirical method for determining the optimal parameters of the training model; and object-oriented design and programming. The result of the work is a neural network scoring model with high accuracy of calculations and an implemented system of automatic customer lending.
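A minimal sketch of the kind of neural scoring model the abstract describes, reduced to a single logistic neuron trained by gradient descent. The borrower features, synthetic data, and hyperparameters below are illustrative assumptions, not the study's trained model:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, features):
    """Map borrower features to a default-risk probability in [0, 1]."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(z)

def train(data, lr=0.5, epochs=2000, seed=1):
    """Plain stochastic gradient descent on the log-loss."""
    rng = random.Random(seed)
    n = len(data[0][0])
    weights = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for features, label in data:
            p = predict(weights, bias, features)
            err = p - label          # gradient of log-loss w.r.t. pre-activation
            weights = [w - lr * err * x for w, x in zip(weights, features)]
            bias -= lr * err
    return weights, bias

# Synthetic borrowers: (normalized income, debt ratio) -> 1 means defaulted.
data = [([0.9, 0.1], 0), ([0.8, 0.2], 0), ([0.2, 0.9], 1), ([0.1, 0.8], 1)]
w, b = train(data)
risk = predict(w, b, [0.15, 0.85])   # low income, high debt: high risk
```

A production scoring system would use many more features, a held-out validation set, and a calibrated decision threshold, but the credit decision itself still reduces to thresholding a probability like `risk`.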