Selected scientific literature on the topic "Neural network"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Browse the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Neural network".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is included in the metadata.

Journal articles on the topic "Neural network"

1

Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach". Indian Journal Of Science And Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Abdelwahed, O. H., and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks". Indian Journal of Applied Research 3, no. 6 (October 1, 2011): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tran, Loc. "Directed Hypergraph Neural Network". Journal of Advanced Research in Dynamical and Control Systems 12, SP4 (March 31, 2020): 1434–41. http://dx.doi.org/10.5373/jardcs/v12sp4/20201622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Antipova, E. S., and S. A. Rashkovskiy. "Autoassociative Hamming Neural Network". Nelineinaya Dinamika 17, no. 2 (2021): 175–93. http://dx.doi.org/10.20537/nd210204.

Full text
Abstract (summary):
An autoassociative neural network is suggested which is based on the calculation of Hamming distances, while the principle of its operation is similar to that of the Hopfield neural network. Using standard patterns as an example, we compare the efficiency of pattern recognition for the autoassociative Hamming network and the Hopfield network. It is shown that the autoassociative Hamming network successfully recognizes standard patterns with a degree of distortion up to 40% and more than 60%, while the Hopfield network ceases to recognize the same patterns with a degree of distortion of more than 25% and less than 75%. A scheme of the autoassociative Hamming neural network based on McCulloch–Pitts formal neurons is proposed. It is shown that the autoassociative Hamming network can be considered as a dynamical system which has attractors that correspond to the reference patterns. The Lyapunov function of this dynamical system is found and the equations of its evolution are derived.
APA, Harvard, Vancouver, ISO, and other styles
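The recall principle this abstract describes can be sketched in a few lines: store reference patterns and return the one nearest to a distorted probe in Hamming distance. This is an illustrative sketch of the distance-based recall only, not the authors' McCulloch–Pitts network scheme or its dynamics; the patterns below are made up.

```python
def hamming_distance(a, b):
    """Number of positions where two binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def recall(probe, stored_patterns):
    """Return the stored pattern closest to the probe in Hamming distance."""
    return min(stored_patterns, key=lambda p: hamming_distance(probe, p))

# Hypothetical reference patterns and a probe distorted by two flipped bits.
patterns = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
]
probe = [1, 0, 1, 1, 1, 0, 1, 1]  # distortion of the first pattern
print(recall(probe, patterns))    # returns the first stored pattern
```

In the paper's terms, each stored pattern is an attractor; here the "dynamics" collapse to a single nearest-neighbour step.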
5

Perfetti, R. "A neural network to design neural networks". IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks". 網際網路技術學刊 23, no. 1 (January 2022): 033–44. http://dx.doi.org/10.53106/160792642022012301004.

Full text
Abstract (summary):
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of capsule network. Secondly, we analyze and summarize the improvements in the dynamic routing mechanism and network structure of the capsule network in recent years and the combination of the capsule network with other network structures. Finally, we compile the applications of capsule network in many fields, including computer vision, natural language, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
APA, Harvard, Vancouver, ISO, and other styles
7

D, Sreekanth. "Metro Water Fraudulent Prediction in Houses Using Convolutional Neural Network and Recurrent Neural Network". Revista Gestão Inovação e Tecnologias 11, no. 4 (July 10, 2021): 1177–87. http://dx.doi.org/10.47059/revistageintec.v11i4.2177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students' Performance". Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Full text
Abstract (summary):
Predicting students' academic performance is essential to producing high-quality students. The main goal is to continuously help students increase their ability in the learning process and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students' performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students from the UCI Machine Learning Repository Data Sets were used to train the neural networks. The neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while one target output is used, the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks, known as Network1, Network2 and Network3. Three types of training functions were selected in this study: Levenberg-Marquardt (TRAINLM), Gradient descent with momentum (TRAINGDM), and Gradient descent with adaptive learning rate (TRAINGDA). Each training function was compared based on performance value, correlation coefficient, gradient, and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students' performance because it has a higher correlation coefficient and a lower performance value.
APA, Harvard, Vancouver, ISO, and other styles
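The two-input/one-output setup in this abstract is easy to mimic outside MATLAB. The sketch below trains a single linear neuron by per-sample gradient descent on made-up (grade, grade, final-grade) triples; it stands in for, and is far simpler than, the nntool feed-forward networks and TRAINLM/TRAINGDM/TRAINGDA training functions compared in the paper.

```python
def train(data, lr=0.01, epochs=2000):
    """Fit final_grade ~ w1*g1 + w2*g2 + b by stochastic gradient descent."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, y in data:
            err = w1 * x1 + w2 * x2 + b - y   # prediction error
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def mse(data, w1, w2, b):
    """Mean squared error of the fitted neuron on the data."""
    return sum((w1 * x1 + w2 * x2 + b - y) ** 2 for x1, x2, y in data) / len(data)

# Hypothetical grades scaled to [0, 1]; not the UCI data used in the paper.
data = [(0.5, 0.6, 0.55), (0.8, 0.9, 0.85), (0.3, 0.4, 0.35), (0.7, 0.6, 0.65)]
w1, w2, b = train(data)
print(mse(data, w1, w2, b))  # should be close to zero
```

The paper's comparison of training functions amounts to swapping this update rule for Levenberg-Marquardt or adaptive-rate variants while keeping the same inputs and target.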
9

FUKUSHIMA, Kunihiko. "Neocognitron: Deep Convolutional Neural Network". Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 27, no. 4 (2015): 115–25. http://dx.doi.org/10.3156/jsoft.27.4_115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

CVS, Rajesh, and Nadikoppula Pardhasaradhi. "Analysis of Artificial Neural-Network". International Journal of Trend in Scientific Research and Development Volume-2, Issue-6 (October 31, 2018): 418–28. http://dx.doi.org/10.31142/ijtsrd18482.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Theses on the topic "Neural network"

1

Xu, Shuxiang, University of Western Sydney, and Faculty of Informatics, Science and Technology. "Neuron-adaptive neural network models and applications". THESIS_FIST_XXX_Xu_S.xml, 1999. http://handle.uws.edu.au:8081/1959.7/275.

Full text
Abstract (summary):
Artificial Neural Networks have been widely probed by researchers worldwide to cope with problems such as function approximation and data simulation. This thesis deals with Feed-forward Neural Networks (FNN's) with a new neuron activation function called Neuron-adaptive Activation Function (NAF), and Feed-forward Higher Order Neural Networks (HONN's) with this new neuron activation function. We have designed a new neural network model, the Neuron-Adaptive Neural Network (NANN), and mathematically proved that one NANN can approximate any piecewise continuous function to any desired accuracy. In the neural network literature only Zhang proved the universal approximation ability of FNN Group to any piecewise continuous function. Next, we have developed the approximation properties of Neuron Adaptive Higher Order Neural Networks (NAHONN's), a combination of HONN's and NAF, to any continuous function, functional and operator. Finally, we have created a software program called MASFinance which runs on the Solaris system for the approximation of continuous or discontinuous functions, and for the simulation of any continuous or discontinuous data (especially financial data). Our work distinguishes itself from previous work in the following ways: we use a new neuron-adaptive activation function, while the neuron activation functions in most existing work are all fixed and can't be tuned to adapt to different approximation problems; we use only one NANN to approximate any piecewise continuous function, while a neural network group must be utilised in previous research; we combine HONN's with NAF and investigate its approximation properties to any continuous function, functional, and operator; we present a new software program, MASFinance, for function approximation and data simulation.
Experiments running MASFinance indicate that the proposed NANN's present several advantages over traditional neuron-fixed networks (such as greatly reduced network size, faster learning, and lessened simulation errors), and that the suggested NANN's can effectively approximate piecewise continuous functions better than neural network groups. Experiments also indicate that NANN's are especially suitable for data simulation.
Doctor of Philosophy (PhD)
APA, Harvard, Vancouver, ISO, and other styles
2

Ellerbrock, Thomas M. "Multilayer neural networks learnability, network generation, and network simplification /". [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=958467897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Patterson, Raymond A. "Hybrid Neural networks and network design". Connect to resource, 1995. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1262707683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khliobas. "NEURAL NETWORK". Thesis, Kyiv, 2018. http://er.nau.edu.ua/handle/NAU/33752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rastogi, Preeti. "Assessing Wireless Network Dependability Using Neural Networks". Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1129134364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chambers, Mark Andrew. "Queuing network construction using artificial neural networks /". The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193665234291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dunn, Nathan A. "A Novel Neural Network Analysis Method Applied to Biological Neural Networks". Thesis, view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1251892251&sid=2&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract (summary):
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 122–131). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
8

Bruce, William, and Edvin von Otter. "Artificial Neural Network Autonomous Vehicle: Artificial Neural Network controlled vehicle". Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191192.

Full text
Abstract (summary):
This thesis aims to explain how an Artificial Neural Network algorithm can be used as a means of control for an autonomous vehicle. It describes the theory behind neural networks and autonomous vehicles, and how a prototype with a camera as its only input can be designed to test and evaluate the algorithm's capabilities, and also drive using it. The thesis shows that the Artificial Neural Network can, with an image resolution of 100 × 100 and a training set of 900 images, make decisions with a 0.78 confidence level.
APA, Harvard, Vancouver, ISO, and other styles
9

De Jongh, Albert. "Neural network ensembles". Thesis, Stellenbosch: Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50035.

Full text
Abstract (summary):
Thesis (MSc)--Stellenbosch University, 2004.
It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis explores diversity in ensembles and looks at the underlying theory and mechanisms employed to generate and combine ensemble members. Bagging and boosting are studied in detail and I explain their success in terms of well-known theoretical instruments. An empirical evaluation of their performance is conducted and I compare them to a single classifier and to each other in terms of accuracy and diversity.
APA, Harvard, Vancouver, ISO, and other styles
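The bagging procedure studied in this thesis can be outlined concretely: train each ensemble member on a bootstrap resample of the data, then combine predictions by majority vote. In the hedged sketch below the members are one-feature threshold stumps rather than neural networks, and the toy data are invented, purely to keep the example self-contained.

```python
import random

def train_stump(sample):
    """Choose the (feature, threshold, sign) stump with fewest errors on the sample."""
    best = None
    for f in range(len(sample[0][0])):
        for thr in sorted({x[f] for x, _ in sample}):
            for sign in (1, -1):
                errs = sum(1 for x, y in sample
                           if (1 if sign * (x[f] - thr) >= 0 else -1) != y)
                if best is None or errs < best[0]:
                    best = (errs, f, thr, sign)
    _, f, thr, sign = best
    return lambda x, f=f, thr=thr, sign=sign: 1 if sign * (x[f] - thr) >= 0 else -1

def bagged_ensemble(data, n_members=15, seed=0):
    """Train members on bootstrap resamples; predict by majority vote."""
    rng = random.Random(seed)
    members = [train_stump([rng.choice(data) for _ in data])
               for _ in range(n_members)]
    return lambda x: 1 if sum(m(x) for m in members) >= 0 else -1

# Toy one-feature data: class -1 below 0.5, class +1 above.
data = [([0.1], -1), ([0.2], -1), ([0.4], -1),
        ([0.6], 1), ([0.8], 1), ([0.9], 1)]
vote = bagged_ensemble(data)
print([vote(x) for x, _ in data])
```

The diversity the thesis analyzes comes from the bootstrap resampling: members see different samples, make different mistakes, and the vote averages those mistakes away.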
10

Simmen, Martin Walter. "Neural network optimization". Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/12942.

Full text
Abstract (summary):
Combinatorial optimization problems arise throughout science, industry, and commerce. The demonstration that analogue neural networks could, in principle, rapidly find near-optimal solutions to such problems - many of which appear computationally intractable - was important both for the novelty of the approach and because these networks are potentially implementable in parallel hardware. However, subsequent research, conducted largely on the travelling salesman problem, revealed problems regarding the original network's parameter sensitivity and tendency to give invalid states. Although this has led to improvements and new network designs which at least partly overcome the above problems, many issues concerning the performance of optimization networks remain unresolved. This thesis explores how to optimize the performance of two neural networks current in the literature: the elastic net, and the mean field Potts network, both of which are designed for the travelling salesman problem. Analytical methods elucidate issues of parameter sensitivity and enable parameter values to be chosen in a rational manner. Systematic numerical experiments on realistic size problems complement and support the theoretical analyses throughout. An existing analysis of how the elastic net algorithm may generate invalid solutions is reviewed and extended. A new analysis locates the parameter regime in which the net may converge to a second type of invalid solution. Combining the two analyses yields a prescription for setting the value of a key parameter optimally with respect to avoiding invalid solutions. The elastic net operates by minimizing a computational energy function. Several new forms of dynamics using locally adaptive step-sizes are developed, and shown to increase greatly the efficiency of the minimization process. Analytical work constraining the range of safe adaptation rates is presented.
A new form of dynamics, with a user-defined step-size, is introduced for the mean field Potts network. An analysis of the network's critical temperature under these dynamics is given, by generalizing a previous analysis valid for a special case of the dynamics.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Neural network"

1

De Wilde, Philippe. Neural Network Models. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-84628-614-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Taylor, J. G., E. R. Caianiello, R. M. J. Cotterill, and J. W. Clark, eds. Neural Network Dynamics. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-2001-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Taylor, J. G., ed. Neural Network Applications. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-2003-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Harvey, Robert L. Neural network principles. London: Prentice-Hall International, 1994.

Find the full text
APA, Harvard, Vancouver, ISO, and other styles
5

Demuth, Howard B., and Mark H. Beale, eds. Neural network design. Boston: PWS Pub., 1996.

Find the full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sánchez-Sinencio, Edgar, and Robert W. Newcomb, eds. Neural network hardware. New York: IEEE, 1992.

Find the full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bharath, Ramachandran. Neural network computing. New York: Windcrest/McGraw-Hill, 1994.

Find the full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fukuda, Toshio, ed. Neural network applications. New York: IEEE, 1992.

Find the full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shanmuganathan, Subana, and Sandhya Samarasinghe, eds. Artificial Neural Network Modelling. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28495-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Skrzypek, Josef, ed. Neural Network Simulation Environments. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2736-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Neural network"

1

D’Addona, Doriana Marilena. "Neural Network". In CIRP Encyclopedia of Production Engineering, 1–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-642-35950-7_6563-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

D’Addona, Doriana Marilena. "Neural Network". In CIRP Encyclopedia of Production Engineering, 1268–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-662-53120-4_6563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

D’Addona, Doriana Marilena. "Neural Network". In CIRP Encyclopedia of Production Engineering, 911–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-20617-7_6563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kim, Phil. "Neural Network". In MATLAB Deep Learning, 19–51. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2845-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chityala, Ravishankar, and Sridevi Pudipeddi. "Neural Network". In Image Processing and Acquisition using Python, 251–64. 2nd ed. Chapman & Hall/CRC The Python Series. Boca Raton: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429243370-11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Weik, Martin H. "neural network". In Computer Science and Communications Dictionary, 1095. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_12300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Burgos, José E. "Neural Network". In Encyclopedia of Animal Cognition and Behavior, 1–19. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-319-47829-6_775-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Burgos, José E. "Neural Network". In Encyclopedia of Animal Cognition and Behavior, 4634–51. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-319-55065-7_775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Liang, Jianxin Zhao, and Richard Mortier. "Neural Network". In Undergraduate Topics in Computer Science, 219–42. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97645-3_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tsai, Kao-Tai. "Neural Network". In Machine Learning for Knowledge Discovery with R, 155–72. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003205685-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference proceedings on the topic "Neural network"

1

Parto, Midya, Gordon H. Y. Li, Ryoto Sekine, Robert M. Gray, Luis L. Ledezma, James Williams, and Alireza Marandi. "An Optical Neural Network Based on Nanophotonic Optical Parametric Oscillators". In CLEO: Science and Innovations, STu3P.7. Washington, D.C.: Optica Publishing Group, 2024. http://dx.doi.org/10.1364/cleo_si.2024.stu3p.7.

Full text
Abstract (summary):
We experimentally demonstrate a recurrent optical neural network based on a nanophotonic optical parametric oscillator fabricated on thin-film lithium niobate. Our demonstration paves the way for realizing optical neural networks exhibiting ultra-low latencies.
APA, Harvard, Vancouver, ISO, and other styles
2

Zheng, Shengjie, Lang Qian, Pingsheng Li, Chenggang He, Xiaoqi Qin, and Xiaojian Li. "An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence". In 8th International Conference on Artificial Intelligence (ARIN 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121010.

Full text
Abstract (summary):
Stemming from the rapid development of artificial intelligence, which has gained expansive success in pattern recognition, robotics, and bioinformatics, neuroscience is also gaining tremendous progress. A kind of spiking neural network with biological interpretability is gradually receiving wide attention, and this kind of neural network is also regarded as one of the directions toward general artificial intelligence. This review summarizes the basic properties of artificial neural networks as well as spiking neural networks. Our focus is on the biological background and theoretical basis of spiking neurons, different neuronal models, and the connectivity of neural circuits. We also review the mainstream neural network learning mechanisms and network architectures. This review hopes to attract different researchers and advance the development of brain intelligence and artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
3

Yang, Zhun, Adam Ishay, and Joohyung Lee. "NeurASP: Embracing Neural Networks into Answer Set Programming". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/243.

Full text
Abstract (summary):
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.
APA, Harvard, Vancouver, ISO, and other styles
4

Huynh, Alex V., John F. Walkup, and Thomas F. Krile. "Optical perceptron-based quadratic neural network". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.mii8.

Full text
Abstract (summary):
Optical quadratic neural networks are currently being investigated because of their advantages over linear neural networks.1 Based on a quadratic neuron already constructed,2 an optical quadratic neural network utilizing four-wave mixing in photorefractive barium titanate (BaTiO3) has been developed. This network implements a feedback loop using a charge-coupled device camera, two monochrome liquid crystal televisions, a computer, and various optical elements. For training, the network employs the supervised quadratic Perceptron algorithm to associate binary-valued input vectors with specified target vectors. The training session is composed of epochs, each of which comprises an entire set of iterations for all input vectors. The network converges when the interconnection matrix remains unchanged for every successive epoch. Using a spatial multiplexing scheme for two bipolar neurons, the network can classify up to eight different input patterns. To the best of our knowledge, this proof-of-principle experiment represents one of the first working trainable optical quadratic networks utilizing a photorefractive medium.
APA, Harvard, Vancouver, ISO, and other styles
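The supervised quadratic Perceptron rule this abstract mentions has a compact software analogue (leaving the four-wave-mixing optics aside): the neuron outputs the sign of the quadratic form x^T W x, and the interconnection matrix W is corrected whenever the thresholded output disagrees with the target. The bipolar XOR-like data below are illustrative, not from the experiment.

```python
def output(W, x):
    """Quadratic neuron: thresholded value of the form x^T W x."""
    s = sum(W[i][j] * x[i] * x[j]
            for i in range(len(x)) for j in range(len(x)))
    return 1 if s >= 0 else -1

def train(samples, n, epochs=20):
    """Perceptron rule applied to the interconnection matrix W."""
    W = [[0.0] * n for _ in range(n)]
    for _ in range(epochs):
        converged = True
        for x, target in samples:
            if output(W, x) != target:
                converged = False
                for i in range(n):          # W += target * outer(x, x)
                    for j in range(n):
                        W[i][j] += target * x[i] * x[j]
        if converged:
            break
    return W

# Bipolar inputs with a constant bias component; the target x1*x2 (XOR-like)
# is beyond a linear perceptron but separable with quadratic terms.
samples = [([1, 1, 1], 1), ([1, 1, -1], -1),
           ([1, -1, 1], -1), ([1, -1, -1], 1)]
W = train(samples, 3)
print(all(output(W, x) == t for x, t in samples))  # prints True
```

This also illustrates the advantage over linear networks cited in the abstract: the cross terms of W carry the product features a linear neuron lacks.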
5

Shi, Weijia, Andy Shih, Adnan Darwiche, and Arthur Choi. "On Tractable Representations of Binary Neural Networks". In 17th International Conference on Principles of Knowledge Representation and Reasoning {KR-2020}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/kr.2020/91.

Full text
Abstract (summary):
We consider the compilation of a binary neural network’s decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs). Obtaining this function as an OBDD/SDD facilitates the explanation and formal verification of a neural network’s behavior. First, we consider the task of verifying the robustness of a neural network, and show how we can compute the expected robustness of a neural network, given an OBDD/SDD representation of it. Next, we consider a more efficient approach for compiling neural networks, based on a pseudo-polynomial time algorithm for compiling a neuron. We then provide a case study in a handwritten digits dataset, highlighting how two neural networks trained from the same dataset can have very high accuracies, yet have very different levels of robustness. Finally, in experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
APA, Harvard, Vancouver, ISO, and other styles
6

Bian, Shaoping, Kebin Xu, and Jing Hong. "Near neighbor neurons interconnected neural network". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.tht27.

Full text
Abstract (summary):
When the Hopfield neural network is extended to deal with a 2-D image composed of N×N pixels, the weight interconnection is a fourth-rank tensor with N4 elements. Each neuron is interconnected with all other neurons of the network. For an image, N will be large. So N4, the number of elements of the interconnection tensor, will be so large as to make the neural network's learning time (which corresponds to the precalculation of the interconnection tensor elements) too long. It is also difficult to implement the 2-D Hopfield neural network optically.
APA, Harvard, Vancouver, ISO, and other styles
7

Huynh, Alex V., John F. Walkup, and Thomas F. Krile. "Optical quadratic perceptron neural network". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/oam.1990.thy35.

Full text
Abstract (summary):
Optical quadratic neural networks are currently being investigated because of their advantages with respect to linear neural networks.1 A quadratic neuron has previously been implemented by using a photorefractive barium titanate crystal.2 This approach has been improved and enhanced to realize a neural network that implements the perceptron learning algorithm. The input matrix, which is an encoded version of the input vector, is placed on a mask, and the interconnection matrix is computer-generated on a monochrome liquid-crystal television. By performing the four-wave mixing operation, the barium titanate crystal effectively multiplies the light fields representing the input matrix by those representing the interconnection matrix to produce an analog output. This output is then digitized by a computer, thresholded, and compared to a specified target vector. An error signal representing the difference between the target and thresholded output is generated, and the interconnection matrix is iteratively modified until convergence occurs. The characteristics of this quadratic neural network will be presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

Pryor, Connor, Charles Dickens, Eriq Augustine, Alon Albalak, William Yang Wang, and Lise Getoor. "NeuPSL: Neural Probabilistic Soft Logic". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/461.

Full text
Abstract (summary):
In this paper, we introduce Neural Probabilistic Soft Logic (NeuPSL), a novel neuro-symbolic (NeSy) framework that unites state-of-the-art symbolic reasoning with the low-level perception of deep neural networks. To model the boundary between neural and symbolic representations, we propose a family of energy-based models, NeSy Energy-Based Models, and show that they are general enough to include NeuPSL and many other NeSy approaches. Using this framework, we show how to seamlessly integrate neural and symbolic parameter learning and inference in NeuPSL. Through an extensive empirical evaluation, we demonstrate the benefits of using NeSy methods, achieving upwards of 30% improvement over independent neural network models. On a well-established NeSy task, MNIST-Addition, NeuPSL demonstrates its joint reasoning capabilities by outperforming existing NeSy approaches by up to 10% in low-data settings. Furthermore, NeuPSL achieves a 5% boost in performance over state-of-the-art NeSy methods in a canonical citation network task with up to a 40 times speed up.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhan, Tiffany. "Hyper-Parameter Tuning in Deep Neural Network Learning". In 8th International Conference on Artificial Intelligence and Applications (AI 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121809.

Full text
Abstract (summary):
Deep learning has been increasingly used in various applications such as image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series. In deep learning, a convolutional neural network (CNN) is regularized versions of multilayer perceptrons. Multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The full connectivity of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include penalizing parameters during training or trimming connectivity. CNNs use relatively little pre-processing compared to other image classification algorithms. Given the rise in popularity and use of deep neural network learning, the problem of tuning hyperparameters is increasingly prominent tasks in constructing efficient deep neural networks. In this paper, the tuning of deep neural network learning (DNN) hyper-parameters is explored using an evolutionary based approach popularized for use in estimating solutions to problems where the problem space is too large to get an exact solution.
APA, Harvard, Vancouver, ISO and other styles
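The evolutionary idea can be sketched as follows. The search space, mutation scheme, and surrogate fitness function are illustrative assumptions, not the paper's actual setup; in practice, `fitness` would train a network and return its validation accuracy.

```python
import random

# Hypothetical hyper-parameter search space (names are illustrative).
SPACE = {
    "learning_rate": (1e-4, 1e-1),
    "dropout": (0.0, 0.6),
    "hidden_units": (16, 256),
}

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def mutate(ind, rate=0.3):
    child = dict(ind)
    for k, (lo, hi) in SPACE.items():
        if random.random() < rate:
            # Gaussian perturbation, clamped to the legal range.
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
    return child

def fitness(ind):
    # Stand-in for validation accuracy: peaked at lr=0.01, dropout=0.3, 128 units.
    return -((ind["learning_rate"] - 0.01) ** 2
             + (ind["dropout"] - 0.3) ** 2
             + ((ind["hidden_units"] - 128) / 256) ** 2)

def evolve(generations=30, pop_size=20, elite=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]          # elitism: keep the best unchanged
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

best = evolve()
```

Because the elite individuals survive each generation unchanged, the best fitness found never decreases, which is what makes such searches practical when each evaluation is an expensive training run.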
10

Mehdizadeh, Nasser S., Payam Sinaei and Ali L. Nichkoohi. "Modeling Jones’ Reduced Chemical Mechanism of Methane Combustion With Artificial Neural Network". In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-31186.

Full text
Abstract (summary):
The present work reports a way of using Artificial Neural Networks to model and integrate the governing chemical kinetics differential equations of Jones’ reduced chemical mechanism for methane combustion. The chemical mechanism is applicable to both diffusion and premixed laminar flames. A feed-forward multi-layer neural network is adopted as the network architecture. To generate input–output data for adapting the network’s synaptic weights in the training phase, a thermochemical analysis is embedded to compute the chemical species mole fractions. An analysis of computational performance, along with a comparison between the neural network approach and other conventional methods used to represent the chemistry, is presented, and the ability of neural networks to represent a non-linear chemical system is illustrated.
APA, Harvard, Vancouver, ISO and other styles
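A minimal sketch of the underlying idea: a small feed-forward network is fit to a nonlinear chemistry-like map. Here a simple exponential decay stands in for the actual species profiles, and the architecture, data, and training loop are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: species concentration decaying with time, y = exp(-t).
t = np.linspace(0.0, 3.0, 64).reshape(-1, 1)
y = np.exp(-t)

# One hidden layer of 16 tanh units.
W1 = rng.normal(0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1.0, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(t)
loss0 = np.mean((pred0 - y) ** 2)   # loss before training

for _ in range(2000):
    h, pred = forward(t)
    grad_out = 2 * (pred - y) / len(t)        # d(MSE)/d(pred)
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # back-propagate through tanh
    gW1 = t.T @ grad_h
    gb1 = grad_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(t)
loss = np.mean((pred - y) ** 2)     # loss after training
```

Once trained, evaluating the network is a handful of matrix products, which is the computational appeal over repeatedly integrating stiff kinetics equations.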

Reports by organizations on the topic "Neural network"

1

Pollack, Randy B. Neural Network Technologies. Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada262576.

Full text
APA, Harvard, Vancouver, ISO and other styles
2

Wilensky, Gregg, Narbik Manukian, Joseph Neuhaus and Natalie Rivetti. Neural Network Studies. Fort Belvoir, VA: Defense Technical Information Center, July 1993. http://dx.doi.org/10.21236/ada271593.

Full text
APA, Harvard, Vancouver, ISO and other styles
3

Tarasenko, Andrii O., Yuriy V. Yakimov and Vladimir N. Soloviev. Convolutional neural networks for image classification. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3682.

Full text
Abstract (summary):
This paper presents the theoretical basis for building convolutional neural networks for image classification and their application in practice. To achieve this goal, the main types of neural networks are considered, from the structure of a simple neuron up to the convolutional multilayer network needed to solve the problem. It describes how the training data are structured, the network's training cycle, and the calculation of recognition errors during training and validation. Finally, the results of network training, the recognition error, and the training accuracy are presented.
APA, Harvard, Vancouver, ISO and other styles
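The convolutional building block such networks start from can be sketched directly in NumPy: a single filter slid over the image, a ReLU nonlinearity, and a 2×2 max-pool. The image size and filter values below are illustrative, not taken from the report.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2D cross-correlation of one image with one filter."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling; crops any ragged border."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.random((28, 28))          # one MNIST-sized grayscale image
kernel = rng.normal(size=(3, 3))    # one 3x3 filter (random here, learned in practice)
feature = np.maximum(conv2d(img, kernel), 0.0)  # ReLU activation
pooled = max_pool(feature)          # 26x26 feature map pooled to 13x13
```

Stacking several such convolution/pooling stages, followed by fully connected layers, yields the multilayer convolutional classifier the report describes.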
4

Barto, Andrew. Adaptive Neural Network Architecture. Fort Belvoir, VA: Defense Technical Information Center, October 1987. http://dx.doi.org/10.21236/ada190114.

Full text
APA, Harvard, Vancouver, ISO and other styles
5

McDonnell, John R., and Don Waagen. Evolving Neural Network Architecture. Fort Belvoir, VA: Defense Technical Information Center, March 1993. http://dx.doi.org/10.21236/ada264802.

Full text
APA, Harvard, Vancouver, ISO and other styles
6

McDonnell, J. R., and D. Waagen. Evolving Neural Network Connectivity. Fort Belvoir, VA: Defense Technical Information Center, October 1993. http://dx.doi.org/10.21236/ada273134.

Full text
APA, Harvard, Vancouver, ISO and other styles
7

Saavedra, Gary, and Aidan Thompson. Neural Network Interatomic Potentials. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1678825.

Full text
APA, Harvard, Vancouver, ISO and other styles
8

Shao, Lu. Automatic Seizure Detection based on a Convolutional Neural Network-Recurrent Neural Network Model. Ames (Iowa): Iowa State University, May 2022. http://dx.doi.org/10.31274/cc-20240624-269.

Full text
APA, Harvard, Vancouver, ISO and other styles
9

Mark A. Rhode. Tampa Electric Neural Network Sootblowing. Office of Scientific and Technical Information (OSTI), March 2004. http://dx.doi.org/10.2172/900191.

Full text
APA, Harvard, Vancouver, ISO and other styles
10

Mark A. Rhode. Tampa Electric Neural Network Sootblowing. Office of Scientific and Technical Information (OSTI), June 2004. http://dx.doi.org/10.2172/900192.

Full text
APA, Harvard, Vancouver, ISO and other styles