Ready-made bibliography on the topic "Neural network"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

Choose a source type:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Neural network".

Next to every work listed in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in ".pdf" format and read its abstract online, provided the relevant details are available in the metadata.

Journal articles on the topic "Neural network"

1

Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach". Indian Journal Of Science And Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.

2

O. H. Abdelwahed and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks". Indian Journal of Applied Research 3, no. 6 (October 1, 2011): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.

3

Tran, Loc. "Directed Hypergraph Neural Network". Journal of Advanced Research in Dynamical and Control Systems 12, SP4 (March 31, 2020): 1434–41. http://dx.doi.org/10.5373/jardcs/v12sp4/20201622.

4

Antipova, E. S., and S. A. Rashkovskiy. "Autoassociative Hamming Neural Network". Nelineinaya Dinamika 17, no. 2 (2021): 175–93. http://dx.doi.org/10.20537/nd210204.

Abstract:
An autoassociative neural network is suggested which is based on the calculation of Hamming distances, while the principle of its operation is similar to that of the Hopfield neural network. Using standard patterns as an example, we compare the efficiency of pattern recognition for the autoassociative Hamming network and the Hopfield network. It is shown that the autoassociative Hamming network successfully recognizes standard patterns with a degree of distortion up to 40% and more than 60%, while the Hopfield network ceases to recognize the same patterns with a degree of distortion of more than 25% and less than 75%. A scheme of the autoassociative Hamming neural network based on McCulloch-Pitts formal neurons is proposed. It is shown that the autoassociative Hamming network can be considered as a dynamical system which has attractors that correspond to the reference patterns. The Lyapunov function of this dynamical system is found and the equations of its evolution are derived.
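
As an illustration of the recall principle this abstract describes (a reading aid only, not the authors' implementation), the nearest stored pattern under Hamming distance can be computed directly; the 4-bit reference patterns below are hypothetical:

    import numpy as np

    def hamming_recall(probe, patterns):
        # Return the stored reference pattern at the smallest Hamming
        # distance from the probe (binary 0/1 vectors). Sketch only.
        distances = [np.count_nonzero(probe != p) for p in patterns]
        return patterns[int(np.argmin(distances))]

    patterns = [np.array([1, 0, 1, 0]), np.array([0, 1, 1, 1])]
    probe = np.array([1, 0, 0, 0])          # pattern 0 with one bit flipped
    print(hamming_recall(probe, patterns))  # -> [1 0 1 0]
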
5

Perfetti, R. "A neural network to design neural networks". IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.

6

Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks". 網際網路技術學刊 23, no. 1 (January 2022): 033–44. http://dx.doi.org/10.53106/160792642022012301004.

Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of capsule network. Secondly, we analyze and summarize the improvements in the dynamic routing mechanism and network structure of the capsule network in recent years and the combination of the capsule network with other network structures. Finally, we compile the applications of capsule network in many fields, including computer vision, natural language, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
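
For readers new to the topic, the squashing nonlinearity from the original 2017 capsule-network paper, which overviews like this one build on, fits in a few lines (a sketch, not code from the article):

    import numpy as np

    def squash(s, eps=1e-9):
        # Shrinks short vectors toward zero and caps long vectors just
        # under unit length, preserving direction:
        # v = |s|^2 / (1 + |s|^2) * s / |s|.
        norm_sq = np.sum(s ** 2)
        return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

    print(np.linalg.norm(squash(np.array([0.1, 0.0]))))   # ~0.0099
    print(np.linalg.norm(squash(np.array([10.0, 0.0]))))  # ~0.990
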
7

D, Sreekanth. "Metro Water Fraudulent Prediction in Houses Using Convolutional Neural Network and Recurrent Neural Network". Revista Gestão Inovação e Tecnologias 11, no. 4 (July 10, 2021): 1177–87. http://dx.doi.org/10.47059/revistageintec.v11i4.2177.

8

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance". Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students’ academic performance is essential to producing high-quality students. The main goal is to continuously help students increase their ability in the learning process and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students, drawn from the UCI Machine Learning Repository data sets, were used to train the neural networks. The neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while one target output is used, the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks, known as Network1, Network2 and Network3. Three types of training functions were selected in this study: Levenberg-Marquardt (TRAINLM), Gradient descent with momentum (TRAINGDM) and Gradient descent with adaptive learning rate (TRAINGDA). Each training function is compared on Performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students’ performance because it has a higher correlation coefficient and a lower Performance value.
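
The study's two-inputs-one-target framing is easy to reproduce outside MATLAB; a minimal Python stand-in (synthetic grades and plain gradient descent on a linear model, not the study's nntool networks or training functions):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 20, size=(100, 2))   # hypothetical period grades G1, G2
    y = 0.3 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 1, 100)  # final grade

    w, b = np.zeros(2), 0.0
    for _ in range(2000):                   # plain gradient descent
        err = X @ w + b - y
        w -= 0.0005 * (X.T @ err) / len(y)
        b -= 0.0005 * err.mean()
    print(np.corrcoef(X @ w + b, y)[0, 1])  # correlation coefficient, as in the study
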
9

FUKUSHIMA, Kunihiko. "Neocognitron: Deep Convolutional Neural Network". Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 27, no. 4 (2015): 115–25. http://dx.doi.org/10.3156/jsoft.27.4_115.

10

CVS, Rajesh, and Nadikoppula Pardhasaradhi. "Analysis of Artificial Neural-Network". International Journal of Trend in Scientific Research and Development Volume-2, Issue-6 (October 31, 2018): 418–28. http://dx.doi.org/10.31142/ijtsrd18482.


Doctoral dissertations on the topic "Neural network"

1

Xu, Shuxiang, University of Western Sydney, Faculty of Informatics, Science and Technology. "Neuron-adaptive neural network models and applications". THESIS_FIST_XXX_Xu_S.xml, 1999. http://handle.uws.edu.au:8081/1959.7/275.

Abstract:
Artificial Neural Networks have been widely explored by researchers worldwide to cope with problems such as function approximation and data simulation. This thesis deals with Feed-forward Neural Networks (FNN's) with a new neuron activation function called Neuron-adaptive Activation Function (NAF), and Feed-forward Higher Order Neural Networks (HONN's) with this new neuron activation function. We have designed a new neural network model, the Neuron-Adaptive Neural Network (NANN), and mathematically proved that one NANN can approximate any piecewise continuous function to any desired accuracy. In the neural network literature only Zhang proved the universal approximation ability of FNN Group to any piecewise continuous function. Next, we have developed the approximation properties of Neuron Adaptive Higher Order Neural Networks (NAHONN's), a combination of HONN's and NAF, to any continuous function, functional and operator. Finally, we have created a software program called MASFinance which runs on the Solaris system for the approximation of continuous or discontinuous functions, and for the simulation of any continuous or discontinuous data (especially financial data). Our work distinguishes itself from previous work in the following ways: we use a new neuron-adaptive activation function, while the neuron activation functions in most existing work are all fixed and can't be tuned to adapt to different approximation problems; we only use one NANN to approximate any piecewise continuous function, while a neural network group must be utilised in previous research; we combine HONN's with NAF and investigate its approximation properties to any continuous function, functional, and operator; we present a new software program, MASFinance, for function approximation and data simulation. Experiments running MASFinance indicate that the proposed NANN's present several advantages over traditional neuron-fixed networks (such as greatly reduced network size, faster learning, and lessened simulation errors), and that the suggested NANN's can effectively approximate piecewise continuous functions better than neural network groups. Experiments also indicate that NANN's are especially suitable for data simulation.
Doctor of Philosophy (PhD)
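
The core idea of a neuron-adaptive activation function can be sketched as follows; the parametric form here is hypothetical, chosen only to show shape parameters being trainable rather than fixed (the thesis does not give this exact form):

    import numpy as np

    def naf(x, a, b):
        # A logistic sigmoid with trainable gain a and slope b; during
        # training, dLoss/da and dLoss/db adapt the shape per neuron.
        return a / (1.0 + np.exp(-b * x))

    # With a = 1 and b = 1 this reduces to the ordinary fixed sigmoid.
    print(naf(np.array([-2.0, 0.0, 2.0]), a=1.0, b=1.0))
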
2

Ellerbrock, Thomas M. "Multilayer neural networks: learnability, network generation, and network simplification". [S.l.: s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=958467897.

3

Patterson, Raymond A. "Hybrid Neural networks and network design". 1995. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1262707683.

4

Khliobas. "NEURAL NETWORK". Thesis, Kyiv, 2018. http://er.nau.edu.ua/handle/NAU/33752.

5

Rastogi, Preeti. "Assessing Wireless Network Dependability Using Neural Networks". Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1129134364.

6

Chambers, Mark Andrew. "Queuing network construction using artificial neural networks". The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193665234291.

7

Dunn, Nathan A. "A Novel Neural Network Analysis Method Applied to Biological Neural Networks". Thesis, 2006. http://proquest.umi.com/pqdweb?did=1251892251&sid=2&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 122–131). Also available for download via the World Wide Web; free to University of Oregon users.
8

Bruce, William, and Edvin von Otter. "Artificial Neural Network Autonomous Vehicle: Artificial Neural Network controlled vehicle". Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191192.

Abstract:
This thesis aims to explain how an Artificial Neural Network algorithm can be used as a means of control for an Autonomous Vehicle. It describes the theory behind neural networks and Autonomous Vehicles, and how a prototype with a camera as its only input can be designed to test and evaluate the algorithm's capabilities, and also drive using it. The thesis shows that the Artificial Neural Network can, with an image resolution of 100 × 100 and a training set of 900 images, make decisions with a 0.78 confidence level.
9

De Jongh, Albert. "Neural network ensembles". Thesis, Stellenbosch: Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50035.

Abstract:
Thesis (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis explores diversity in ensembles and looks at the underlying theory and mechanisms employed to generate and combine ensemble members. Bagging and boosting are studied in detail and I explain their success in terms of well-known theoretical instruments. An empirical evaluation of their performance is conducted and I compare them to a single classifier and to each other in terms of accuracy and diversity.
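
Of the two algorithms the thesis studies, bagging is the simpler to sketch: train each member on a bootstrap resample of the data, then aggregate. The toy members below are plain mean predictors, purely to keep the sketch short; real ensembles would train networks on each resample:

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(5.0, 2.0, size=200)        # hypothetical training data

    members = []
    for _ in range(25):                          # 25 ensemble members
        boot = rng.choice(data, size=len(data), replace=True)
        members.append(boot.mean())              # each member's "model"
    print("ensemble prediction:", np.mean(members))
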
10

Simmen, Martin Walter. "Neural network optimization". Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/12942.

Abstract:
Combinatorial optimization problems arise throughout science, industry, and commerce. The demonstration that analogue neural networks could, in principle, rapidly find near-optimal solutions to such problems - many of which appear computationally intractable - was important both for the novelty of the approach and because these networks are potentially implementable in parallel hardware. However, subsequent research, conducted largely on the travelling salesman problem, revealed problems regarding the original network's parameter sensitivity and tendency to give invalid states. Although this has led to improvements and new network designs which at least partly overcome the above problems, many issues concerning the performance of optimization networks remain unresolved. This thesis explores how to optimize the performance of two neural networks current in the literature: the elastic net, and the mean field Potts network, both of which are designed for the travelling salesman problem. Analytical methods elucidate issues of parameter sensitivity and enable parameter values to be chosen in a rational manner. Systematic numerical experiments on realistic size problems complement and support the theoretical analyses throughout. An existing analysis of how the elastic net algorithm may generate invalid solutions is reviewed and extended. A new analysis locates the parameter regime in which the net may converge to a second type of invalid solution. Combining the two analyses yields a prescription for setting the value of a key parameter optimally with respect to avoiding invalid solutions. The elastic net operates by minimizing a computational energy function. Several new forms of dynamics using locally adaptive step-sizes are developed, and shown to increase greatly the efficiency of the minimization process. Analytical work constraining the range of safe adaptation rates is presented. A new form of dynamics, with a user defined step-size, is introduced for the mean field Potts network. An analysis of the network's critical temperature under these dynamics is given, by generalizing a previous analysis valid for a special case of the dynamics.

Books on the topic "Neural network"

1

Harvey, Robert L. Neural network principles. Englewood Cliffs, NJ: Prentice Hall, 1994.

2

De Wilde, Philippe. Neural Network Models. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-84628-614-8.

3

Taylor, J. G., E. R. Caianiello, R. M. J. Cotterill, and J. W. Clark, eds. Neural Network Dynamics. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-2001-8.

4

Taylor, J. G., ed. Neural Network Applications. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-2003-2.

5

Harvey, Robert L. Neural network principles. London: Prentice-Hall International, 1994.

6

Sánchez-Sinencio, Edgar, and Robert W. Newcomb, eds. Neural network hardware. New York: IEEE, 1992.

7

Demuth, Howard B., and Mark H. Beale, eds. Neural network design. Boston: PWS Pub., 1996.

8

Bharath, Ramachandran. Neural network computing. New York: Windcrest/McGraw-Hill, 1994.

9

Fukuda, Toshio, ed. Neural network applications. New York: IEEE, 1992.

10

Shanmuganathan, Subana, and Sandhya Samarasinghe, eds. Artificial Neural Network Modelling. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28495-8.


Book chapters on the topic "Neural network"

1

D’Addona, Doriana Marilena. "Neural Network". In CIRP Encyclopedia of Production Engineering, 1–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-642-35950-7_6563-3.

2

D’Addona, Doriana Marilena. "Neural Network". In CIRP Encyclopedia of Production Engineering, 1268–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-662-53120-4_6563.

3

D’Addona, Doriana Marilena. "Neural Network". In CIRP Encyclopedia of Production Engineering, 911–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-20617-7_6563.

4

Kim, Phil. "Neural Network". In MATLAB Deep Learning, 19–51. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2845-6_2.

5

Chityala, Ravishankar, and Sridevi Pudipeddi. "Neural Network". In Image Processing and Acquisition using Python, 251–64. 2nd ed. Chapman & Hall/CRC The Python Series. Boca Raton: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429243370-11.

6

Weik, Martin H. "neural network". In Computer Science and Communications Dictionary, 1095. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_12300.

7

Burgos, José E. "Neural Network". In Encyclopedia of Animal Cognition and Behavior, 1–19. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-319-47829-6_775-1.

8

Burgos, José E. "Neural Network". In Encyclopedia of Animal Cognition and Behavior, 4634–51. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-319-55065-7_775.

9

Wang, Liang, Jianxin Zhao, and Richard Mortier. "Neural Network". In Undergraduate Topics in Computer Science, 219–42. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97645-3_11.

10

Tsai, Kao-Tai. "Neural Network". In Machine Learning for Knowledge Discovery with R, 155–72. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003205685-7.


Conference papers on the topic "Neural network"

1

Zheng, Shengjie, Lang Qian, Pingsheng Li, Chenggang He, Xiaoqi Qin, and Xiaojian Li. "An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence". In 8th International Conference on Artificial Intelligence (ARIN 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121010.

Abstract:
Stemming from the rapid development of artificial intelligence, which has gained expansive success in pattern recognition, robotics, and bioinformatics, neuroscience is also making tremendous progress. A kind of spiking neural network with biological interpretability is gradually receiving wide attention, and this kind of neural network is also regarded as one of the directions toward general artificial intelligence. This review summarizes the basic properties of artificial neural networks as well as spiking neural networks. Our focus is on the biological background and theoretical basis of spiking neurons, different neuronal models, and the connectivity of neural circuits. We also review the mainstream neural network learning mechanisms and network architectures. This review hopes to attract different researchers and advance the development of brain intelligence and artificial intelligence.
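
A worked example of the simplest spiking-neuron model that reviews like this cover, the leaky integrate-and-fire neuron (all parameter values below are hypothetical):

    import numpy as np

    V, V_rest, V_thresh, tau, dt = -65.0, -65.0, -50.0, 10.0, 1.0
    spikes = []
    current = np.r_[np.zeros(10), 20.0 * np.ones(40)]  # step input current
    for t, I in enumerate(current):
        V += dt / tau * (-(V - V_rest) + I)  # leaky integration of input I
        if V >= V_thresh:                    # threshold crossing emits a spike
            spikes.append(t)
            V = V_rest                       # reset membrane potential
    print("spike times:", spikes)
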
2

Yang, Zhun, Adam Ishay, and Joohyung Lee. "NeurASP: Embracing Neural Networks into Answer Set Programming". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/243.

Abstract:
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.
3

Huynh, Alex V., John F. Walkup, and Thomas F. Krile. "Optical perceptron-based quadratic neural network". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.mii8.

Abstract:
Optical quadratic neural networks are currently being investigated because of their advantages over linear neural networks [1]. Based on a quadratic neuron already constructed [2], an optical quadratic neural network utilizing four-wave mixing in photorefractive barium titanate (BaTiO3) has been developed. This network implements a feedback loop using a charge-coupled device camera, two monochrome liquid crystal televisions, a computer, and various optical elements. For training, the network employs the supervised quadratic Perceptron algorithm to associate binary-valued input vectors with specified target vectors. The training session is composed of epochs, each of which comprises an entire set of iterations for all input vectors. The network converges when the interconnection matrix remains unchanged for every successive epoch. Using a spatial multiplexing scheme for two bipolar neurons, the network can classify up to eight different input patterns. To the best of our knowledge, this proof-of-principle experiment represents one of the first working trainable optical quadratic networks utilizing a photorefractive medium.
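
The epoch-and-convergence structure described above is that of the classical perceptron rule; a sketch of the plain linear (non-optical, non-quadratic) version, with hypothetical training data for the AND function:

    import numpy as np

    def train_perceptron(X, targets, lr=1.0, max_epochs=100):
        w = np.zeros(X.shape[1])
        for _ in range(max_epochs):
            changed = False
            for x, t in zip(X, targets):
                y = 1 if w @ x >= 0 else 0
                if y != t:                # update the weights on each error
                    w += lr * (t - y) * x
                    changed = True
            if not changed:               # a full unchanged epoch: converged
                return w
        return w

    X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])  # bias + 2 inputs
    print(train_perceptron(X, np.array([0, 0, 0, 1])))          # learns AND
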
4

Bian, Shaoping, Kebin Xu, and Jing Hong. "Near neighbor neurons interconnected neural network". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.tht27.

Abstract:
When the Hopfield neural network is extended to deal with a 2-D image composed of N×N pixels, the weight interconnection is a fourth-rank tensor with N⁴ elements. Each neuron is interconnected with all other neurons of the network. For an image, N will be large. So N⁴, the number of elements of the interconnection tensor, will be so large as to make the neural network's learning time (which corresponds to the precalculation of the interconnection tensor elements) too long. It is also difficult to implement the 2-D Hopfield neural network optically.
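
The scaling the abstract points to is easy to verify: an N×N image has N² pixels, so full pairwise interconnection needs N² × N² = N⁴ weights. A quick check with illustrative sizes:

    # Fully interconnecting an N x N image: (N^2)^2 = N^4 weights.
    for N in (16, 64, 256):
        print(f"{N}x{N} image -> {(N ** 2) ** 2:,} interconnection weights")
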
5

Shi, Weijia, Andy Shih, Adnan Darwiche, and Arthur Choi. "On Tractable Representations of Binary Neural Networks". In 17th International Conference on Principles of Knowledge Representation and Reasoning {KR-2020}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/kr.2020/91.

Abstract:
We consider the compilation of a binary neural network’s decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs). Obtaining this function as an OBDD/SDD facilitates the explanation and formal verification of a neural network’s behavior. First, we consider the task of verifying the robustness of a neural network, and show how we can compute the expected robustness of a neural network, given an OBDD/SDD representation of it. Next, we consider a more efficient approach for compiling neural networks, based on a pseudo-polynomial time algorithm for compiling a neuron. We then provide a case study in a handwritten digits dataset, highlighting how two neural networks trained from the same dataset can have very high accuracies, yet have very different levels of robustness. Finally, in experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
6

Huynh, Alex V., John F. Walkup, and Thomas F. Krile. "Optical quadratic perceptron neural network". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/oam.1990.thy35.

Abstract:
Optical quadratic neural networks are currently being investigated because of their advantages with respect to linear neural networks [1]. A quadratic neuron has previously been implemented by using a photorefractive barium titanate crystal [2]. This approach has been improved and enhanced to realize a neural network that implements the perceptron learning algorithm. The input matrix, which is an encoded version of the input vector, is placed on a mask, and the interconnection matrix is computer-generated on a monochrome liquid-crystal television. By performing the four-wave mixing operation, the barium titanate crystal effectively multiplies the light fields representing the input matrix by those representing the interconnection matrix to produce an analog output. This output is then digitized by a computer, thresholded, and compared to a specified target vector. An error signal representing the difference between the target and thresholded output is generated, and the interconnection matrix is iteratively modified until convergence occurs. The characteristics of this quadratic neural network will be presented and discussed.
7

Pryor, Connor, Charles Dickens, Eriq Augustine, Alon Albalak, William Yang Wang, and Lise Getoor. "NeuPSL: Neural Probabilistic Soft Logic". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/461.

Abstract:
In this paper, we introduce Neural Probabilistic Soft Logic (NeuPSL), a novel neuro-symbolic (NeSy) framework that unites state-of-the-art symbolic reasoning with the low-level perception of deep neural networks. To model the boundary between neural and symbolic representations, we propose a family of energy-based models, NeSy Energy-Based Models, and show that they are general enough to include NeuPSL and many other NeSy approaches. Using this framework, we show how to seamlessly integrate neural and symbolic parameter learning and inference in NeuPSL. Through an extensive empirical evaluation, we demonstrate the benefits of using NeSy methods, achieving upwards of 30% improvement over independent neural network models. On a well-established NeSy task, MNIST-Addition, NeuPSL demonstrates its joint reasoning capabilities by outperforming existing NeSy approaches by up to 10% in low-data settings. Furthermore, NeuPSL achieves a 5% boost in performance over state-of-the-art NeSy methods in a canonical citation network task with up to a 40 times speed up.
8

Zhan, Tiffany. "Hyper-Parameter Tuning in Deep Neural Network Learning". In 8th International Conference on Artificial Intelligence and Applications (AI 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121809.

Abstract:
Deep learning has been increasingly used in various applications such as image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series. In deep learning, a convolutional neural network (CNN) is a regularized version of the multilayer perceptron. Multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The full connectivity of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include penalizing parameters during training or trimming connectivity. CNNs use relatively little pre-processing compared to other image classification algorithms. Given the rise in popularity and use of deep neural network learning, tuning hyper-parameters has become an increasingly prominent task in constructing efficient deep neural networks. In this paper, the tuning of deep neural network learning (DNN) hyper-parameters is explored using an evolutionary approach popularized for estimating solutions to problems where the problem space is too large to get an exact solution.
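
An evolutionary hyper-parameter search can be sketched generically (the paper's exact operators are not given here, and the fitness function below is a hypothetical stand-in for validation accuracy):

    import random

    def fitness(lr, n_hidden):          # stand-in for validation accuracy
        return -((lr - 0.01) ** 2) - 1e-6 * (n_hidden - 64) ** 2

    pop = [(random.uniform(1e-4, 0.1), random.randint(8, 256))
           for _ in range(10)]
    for _ in range(30):                 # generations
        pop.sort(key=lambda c: fitness(*c), reverse=True)
        parents = pop[:5]               # selection: keep the best half
        children = [(max(1e-5, lr * random.uniform(0.5, 2.0)),
                     max(2, n + random.randint(-16, 16)))
                    for lr, n in parents]   # mutation only, no crossover
        pop = parents + children
    print(max(pop, key=lambda c: fitness(*c)))
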
9

Mehdizadeh, Nasser S., Payam Sinaei, and Ali L. Nichkoohi. "Modeling Jones’ Reduced Chemical Mechanism of Methane Combustion With Artificial Neural Network". In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-31186.

Abstract:
The present work reports a way of using Artificial Neural Networks for modeling and integrating the governing chemical kinetics differential equations of Jones’ reduced chemical mechanism for methane combustion. The chemical mechanism is applicable to both diffusion and premixed laminar flames. A feed-forward multi-layer neural network is incorporated as neural network architecture. In order to find sets of input-output data, for adapting the neural network’s synaptic weights in the training phase, a thermochemical analysis is embedded to find the chemical species mole fractions. An analysis of computational performance along with a comparison between the neural network approach and other conventional methods, used to represent the chemistry, are presented and the ability of neural networks for representing a non-linear chemical system is illustrated.
10

Jennings, Andrew, Brian Kelly, John Hegarty, and Paul Horan. "Second Order Neural Network Implementation Using Asymmetric Fabry-Perot Modulators". In Optical Computing. Washington, D.C.: Optica Publishing Group, 1993. http://dx.doi.org/10.1364/optcomp.1993.owc.2.


Reports on the topic "Neural network"

1

Pollack, Randy B. Neural Network Technologies. Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada262576.

2

Wilensky, Gregg, Narbik Manukian, Joseph Neuhaus, and Natalie Rivetti. Neural Network Studies. Fort Belvoir, VA: Defense Technical Information Center, July 1993. http://dx.doi.org/10.21236/ada271593.

3

Tarasenko, Andrii O., Yuriy V. Yakimov, and Vladimir N. Soloviev. Convolutional neural networks for image classification. [n.p.], February 2020. http://dx.doi.org/10.31812/123456789/3682.

Abstract:
This paper shows the theoretical basis for the creation of convolutional neural networks for image classification and their application in practice. To achieve this goal, the main types of neural networks were considered, starting from the structure of a simple neuron up to the convolutional multilayer network necessary for solving this problem. It shows the stages of structuring the training data, the training cycle of the network, as well as the calculation of recognition errors at the training and verification stages. At the end of the work, the results of network training, the calculated recognition error, and the training accuracy are presented.
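
The building block behind the networks the report describes is small enough to show directly; a minimal valid 2-D convolution (strictly, cross-correlation, as in most frameworks), with a hypothetical edge-detector kernel:

    import numpy as np

    def conv2d(image, kernel):
        # Valid cross-correlation: slide the kernel over the image and
        # take the elementwise-product sum at each position.
        kh, kw = kernel.shape
        H, W = image.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    img = np.tile(np.r_[np.ones(4), np.zeros(4)], (8, 1))  # left half bright
    print(conv2d(img, np.array([[1.0, -1.0]])))            # peaks at the edge
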
4

Barto, Andrew. Adaptive Neural Network Architecture. Fort Belvoir, VA: Defense Technical Information Center, October 1987. http://dx.doi.org/10.21236/ada190114.

5

McDonnell, John R., and Don Waagen. Evolving Neural Network Architecture. Fort Belvoir, VA: Defense Technical Information Center, March 1993. http://dx.doi.org/10.21236/ada264802.

6

McDonnell, J. R., and D. Waagen. Evolving Neural Network Connectivity. Fort Belvoir, VA: Defense Technical Information Center, October 1993. http://dx.doi.org/10.21236/ada273134.

7

Saavedra, Gary, and Aidan Thompson. Neural Network Interatomic Potentials. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1678825.

8

Shao, Lu. Automatic Seizure Detection based on a Convolutional Neural Network-Recurrent Neural Network Model. Ames (Iowa): Iowa State University, May 2022. http://dx.doi.org/10.31274/cc-20240624-269.

9

Rhode, Mark A. Tampa Electric Neural Network Sootblowing. Office of Scientific and Technical Information (OSTI), March 2004. http://dx.doi.org/10.2172/900191.

10

Rhode, Mark A. Tampa Electric Neural Network Sootblowing. Office of Scientific and Technical Information (OSTI), June 2004. http://dx.doi.org/10.2172/900192.
