Scientific literature on the topic "Neural networks"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles.

Choose a source type:

Browse the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Neural networks".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Neural networks"

1. Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach." Indian Journal of Science and Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.
2. N, Vikram. "Artificial Neural Networks." International Journal of Research Publication and Reviews 4, no. 4 (April 23, 2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.
3. Abdelwahed, O. H., and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks." Indian Journal of Applied Research 3, no. 6 (October 1, 2011): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.
4. Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks." 網際網路技術學刊 23, no. 1 (January 2022): 033–44. http://dx.doi.org/10.53106/160792642022012301004.

Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of the capsule network. Secondly, we analyze and summarize the improvements in the dynamic routing mechanism and network structure of the capsule network in recent years, as well as the combination of the capsule network with other network structures. Finally, we compile the applications of the capsule network in many fields, including computer vision, natural language, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
5. Perfetti, R. "A neural network to design neural networks." IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.
6. Veselý, A. "Neural networks in data mining." Agricultural Economics (Zemědělská ekonomika) 49, no. 9 (March 2, 2012): 427–31. http://dx.doi.org/10.17221/5427-agricecon.

Abstract:
To possess relevant information is an inevitable condition for successful enterprising in modern business. Information can be divided into data and knowledge. How to gather, store and retrieve data is studied in database theory. Knowledge engineering, in turn, focuses on knowledge and studies the methods of its formalization and acquisition. Knowledge can be gained from experts, specialists in the area of interest, or it can be gained by induction from sets of data. Automatic induction of knowledge from data sets, usually stored in large databases, is called data mining. Classical methods of gaining knowledge from data sets are statistical methods. In data mining, new methods besides the statistical ones are used. These new methods have their origin in artificial intelligence. They look for unknown and unexpected relations, which can be uncovered by exploring the data in the database. In the article, the utilization of modern data-mining methods is described, and the methods based on neural network theory are pursued in particular. The advantages and drawbacks of applying multilayer feed-forward neural networks and Kohonen's self-organizing maps are discussed. Kohonen's self-organizing map is the most promising neural data-mining algorithm regarding its capability to visualize high-dimensional data.
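The abstract above singles out Kohonen's self-organizing map as the most promising neural data-mining tool for visualizing high-dimensional data. As a minimal illustrative sketch (not taken from the paper; the grid size, learning-rate schedule, and neighborhood width are arbitrary assumptions), the classical SOM update rule can be written in a few lines of NumPy:

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a Kohonen self-organizing map to `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))          # codebook vectors
    # (row, col) coordinates of every map unit, used for the neighborhood term
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # decay learning rate and neighborhood width over time
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            # best-matching unit = unit whose weight vector is closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Gaussian neighborhood around the BMU on the 2-D grid
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
            step += 1
    return weights

# Example: map 3-D points onto a 2-D grid for visualization
som = train_som(np.random.rand(500, 3))
```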
7. J, Joselin, Dinesh T, and Ashiq M. "A Review on Neural Networks." International Journal of Trend in Scientific Research and Development Volume-2, Issue-6 (October 31, 2018): 565–69. http://dx.doi.org/10.31142/ijtsrd18461.
8. Ziroyan, M. A., E. A. Tusova, A. S. Hovakimian, and S. G. Sargsyan. "Neural networks apparatus in biometrics." Contemporary problems of social work 1, no. 2 (June 30, 2015): 129–37. http://dx.doi.org/10.17922/2412-5466-2015-1-2-129-137.
9. Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model's adaptation to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
10. Rodriguez, Nathaniel, Eduardo Izquierdo, and Yong-Yeol Ahn. "Optimal modularity and memory capacity of neural reservoirs." Network Neuroscience 3, no. 2 (January 2019): 551–66. http://dx.doi.org/10.1162/netn_a_00082.

Abstract:
The neural network is a powerful computing framework that has been exploited by biological evolution and by humans for solving diverse problems. Although the computational capabilities of neural networks are determined by their structure, the current understanding of the relationships between a neural network’s architecture and function is still primitive. Here we reveal that a neural network’s modular architecture plays a vital role in determining the neural dynamics and memory performance of the network of threshold neurons. In particular, we demonstrate that there exists an optimal modularity for memory performance, where a balance between local cohesion and global connectivity is established, allowing optimally modular networks to remember longer. Our results suggest that insights from dynamical analysis of neural networks and information-spreading processes can be leveraged to better design neural networks and may shed light on the brain’s modular organization.

Theses on the topic "Neural networks"

1. Patterson, Raymond A. "Hybrid Neural networks and network design." Connect to resource, 1995. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1262707683.
2. Rastogi, Preeti. "Assessing Wireless Network Dependability Using Neural Networks." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1129134364.
3. Chambers, Mark Andrew. "Queuing network construction using artificial neural networks." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193665234291.
4. Dunn, Nathan A. "A Novel Neural Network Analysis Method Applied to Biological Neural Networks." Thesis, view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1251892251&sid=2&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 122–131). Also available for download via the World Wide Web; free to University of Oregon users.
5. Dong, Yue. "Higher Order Neural Networks and Neural Networks for Stream Learning." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35731.

Abstract:
The goal of this thesis is to explore some variations of neural networks. The thesis is mainly split into two parts: a variation of the shaping functions in neural networks and a variation of learning rules in neural networks. In the first part, we mainly investigate polynomial perceptrons - a perceptron with a polynomial shaping function instead of a linear one. We prove the polynomial perceptron convergence theorem and illustrate the notion by showing that a higher order perceptron can learn the XOR function through empirical experiments with implementation. In the second part, we propose three models (SMLP, SA, SA2) for stream learning and anomaly detection in streams. The main technique allowing these models to perform at a level comparable to the state-of-the-art algorithms in stream learning is the learning rule used. We employ mini-batch gradient descent algorithm and stochastic gradient descent algorithm to speed up the models. In addition, the use of parallel processing with multi-threads makes the proposed methods highly efficient in dealing with streaming data. Our analysis shows that all models have linear runtime and constant memory requirement. We also demonstrate empirically that the proposed methods feature high detection rate, low false alarm rate, and fast response. The paper on the first two models (SMLP, SA) is published in the 29th Canadian AI Conference and won the best paper award. The invited journal paper on the third model (SA2) for Computational Intelligence is under peer review.
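To illustrate the first part of the thesis: a degree-2 (higher-order) perceptron can learn XOR because the product term x1*x2 makes the classes linearly separable in the expanded feature space. A small sketch under that assumption (the feature map and training loop are illustrative, not the thesis's implementation):

```python
import numpy as np

# XOR data with labels in {-1, +1}
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])

def poly_features(x):
    """Degree-2 feature map: bias, x1, x2, x1*x2 (the cross term makes XOR separable)."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2])

w = np.zeros(4)
for epoch in range(20):                      # classical perceptron rule on expanded features
    for x, t in zip(X, y):
        phi = poly_features(x)
        if np.sign(phi @ w) != t:            # misclassified -> update
            w += t * phi

print([int(np.sign(poly_features(x) @ w)) for x in X])   # expected: [-1, 1, 1, -1]
```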
6. Xu, Shuxiang. "Neuron-adaptive neural network models and applications." PhD thesis, University of Western Sydney, Faculty of Informatics, Science and Technology, 1999. http://handle.uws.edu.au:8081/1959.7/275.

Abstract:
Artificial Neural Networks have been widely probed by worldwide researchers to cope with problems such as function approximation and data simulation. This thesis deals with Feed-forward Neural Networks (FNN's) with a new neuron activation function called Neuron-adaptive Activation Function (NAF), and Feed-forward Higher Order Neural Networks (HONN's) with this new neuron activation function. We have designed a new neural network model, the Neuron-Adaptive Neural Network (NANN), and mathematically proved that one NANN can approximate any piecewise continuous function to any desired accuracy. In the neural network literature only Zhang proved the universal approximation ability of FNN Groups to any piecewise continuous function. Next, we have developed the approximation properties of Neuron Adaptive Higher Order Neural Networks (NAHONN's), a combination of HONN's and NAF, to any continuous function, functional and operator. Finally, we have created a software program called MASFinance which runs on the Solaris system for the approximation of continuous or discontinuous functions, and for the simulation of any continuous or discontinuous data (especially financial data). Our work distinguishes itself from previous work in the following ways: we use a new neuron-adaptive activation function, while the neuron activation functions in most existing work are all fixed and can't be tuned to adapt to different approximation problems; we only use one NANN to approximate any piecewise continuous function, while a neural network group must be utilised in previous research; we combine HONN's with NAF and investigate its approximation properties to any continuous function, functional, and operator; we present a new software program, MASFinance, for function approximation and data simulation. Experiments running MASFinance indicate that the proposed NANN's present several advantages over traditional neuron-fixed networks (such as greatly reduced network size, faster learning, and lessened simulation errors), and that the suggested NANN's can effectively approximate piecewise continuous functions better than neural network groups. Experiments also indicate that NANN's are especially suitable for data simulation.
Doctor of Philosophy (PhD)
7. Allen, T. J. "Optoelectronic neural networks." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362900.
8. Sloan, Cooper Stokes. "Neural bus networks." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119711.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-68).
Bus schedules are unreliable, leaving passengers waiting and increasing commute times. This problem can be addressed by modeling the traffic network and delivering predicted arrival times to passengers. Previous attempts to model traffic networks use historical, statistical, and learning-based models, with learning-based models achieving the best results. This research compares several neural network architectures trained on historical data from Boston buses. Three models are trained: a multilayer perceptron, a convolutional neural network, and a recurrent neural network. Recurrent neural networks show the best performance when compared to feed-forward models. This indicates that neural time-series models are effective at modeling bus networks. The large amount of data available for training bus network models and the effectiveness of large neural networks at modeling this data show that great progress can be made in improving commutes for passengers.
by Cooper Stokes Sloan.
M. Eng.
9. Boychenko, I. V., and G. I. Litvinenko. "Artificial neural networks." Thesis, Вид-во СумДУ, 2009. http://essuir.sumdu.edu.ua/handle/123456789/17044.
10. Landry, Kenneth D. "Evolutionary neural networks." Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/51904.

Abstract:
To create neural networks that work, one needs to specify a structure and the interconnection weights between each pair of connected computing elements. The structure of a network can be selected by the designer depending on the application, although the selection of interconnection weights is a much larger problem. Algorithms have been developed to alter the weights slightly in order to produce the desired results. Learning algorithms such as Hebb's rule, the Delta rule and error propagation have been used, with success, to learn the appropriate weights. The major objection to this class of algorithms is that one cannot specify what is not desired in the network in addition to what is desired. An alternate method to learning the correct interconnection weights is to evolve a network in an environment that rewards "good" behavior and punishes "bad" behavior. This technique allows interesting networks to appear which otherwise may not be discovered by other methods of learning. In order to teach a network the correct weights, this approach simply needs a direction in which an acceptable solution can be obtained rather than a complete answer to the problem.
Master of Science
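The alternative described in the abstract, evolving weights from a scalar reward instead of explicit targets, can be sketched with a toy mutation-and-selection loop. The population size, mutation scale, and XOR fitness below are illustrative assumptions, not the thesis's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(params, x):
    """2-2-1 feed-forward network with sigmoid units; params is a flat vector of 9 weights."""
    W1, b1 = params[:4].reshape(2, 2), params[4:6]
    w2, b2 = params[6:8], params[8]
    h = 1 / (1 + np.exp(-(x @ W1.T + b1)))
    return 1 / (1 + np.exp(-(h @ w2 + b2)))

def fitness(params):
    """Reward 'good' behavior: negative squared error over the four XOR cases."""
    preds = np.array([forward(params, x) for x in X])
    return -np.sum((preds - y) ** 2)

pop = rng.normal(0, 1, size=(50, 9))                  # initial random population
for gen in range(300):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]             # keep the 10 fittest individuals
    # offspring = mutated copies of the elite (selection + mutation, no crossover)
    children = np.repeat(elite, 4, axis=0) + rng.normal(0, 0.3, size=(40, 9))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print(np.round([forward(best, x) for x in X], 2))     # should approach [0, 1, 1, 0]
```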

Books on the topic "Neural networks"

1. Valentin, Dominique, and Betty Edelman, eds. Neural Networks. Thousand Oaks, Calif.: Sage Publications, 1999.
2. Abdi, Hervé, Dominique Valentin, and Betty Edelman. Neural Networks. Thousand Oaks, CA: SAGE Publications, Inc., 1999. http://dx.doi.org/10.4135/9781412985277.
3. Davalo, Eric, and Patrick Naïm. Neural Networks. London: Macmillan Education UK, 1991. http://dx.doi.org/10.1007/978-1-349-12312-4.
4. Müller, Berndt, and Joachim Reinhardt. Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-97239-3.
5. Rojas, Raúl. Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-61068-4.
6. Müller, Berndt, Joachim Reinhardt, and Michael T. Strickland. Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-57760-4.
7. Almeida, Luis B., and Christian J. Wellekens, eds. Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52255-7.
8. Taylor, John, and UNICOM Seminars, eds. Neural Networks. Henley-on-Thames: A. Waller, 1995.
9. Picton, Philip. Neural Networks. New York: Palgrave, 2000.
10. Teh, H. H. Neural Logic Networks: A New Class of Neural Networks. Singapore: World Scientific, 1995.

Book chapters on the topic "Neural networks"

1. Bile, Alessandro. "Introduction to Neural Networks: Biological Neural Network." In Solitonic Neural Networks, 1–18. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48655-5_1.
2. Wüthrich, Mario V., and Michael Merz. "Recurrent Neural Networks." In Springer Actuarial, 381–406. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_8.

Abstract:
This chapter considers recurrent neural (RN) networks. These are special network architectures that are useful for time-series modeling, e.g., applied to time-series forecasting. We study the most popular RN networks, which are the long short-term memory (LSTM) networks and the gated recurrent unit (GRU) networks. We apply these networks to mortality forecasting.
3. Müller, Berndt, Joachim Reinhardt, and Michael T. Strickland. "Cybernetic Networks." In Neural Networks, 46–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-57760-4_5.
4. Rojas, Raúl. "Associative Networks." In Neural Networks, 309–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-61068-4_12.
5. Rojas, Raúl. "Stochastic Networks." In Neural Networks, 371–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-61068-4_14.
6. Rojas, Raúl. "Kohonen Networks." In Neural Networks, 389–410. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-61068-4_15.
7. Müller, Berndt, and Joachim Reinhardt. "Cybernetic Networks." In Neural Networks, 45–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-97239-3_5.
8. Hardy, Yorick, and Willi-Hans Steeb. "Neural Networks." In Classical and Quantum Computing, 261–312. Basel: Birkhäuser Basel, 2001. http://dx.doi.org/10.1007/978-3-0348-8366-5_14.
9. Bungay, Henry R. "Neural Networks." In Environmental Systems Engineering, 121–38. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5507-0_7.
10. Cios, Krzysztof J., Witold Pedrycz, and Roman W. Swiniarski. "Neural Networks." In Data Mining Methods for Knowledge Discovery, 309–74. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5589-6_7.

Conference papers on the topic "Neural networks"

1. Yang, Zhun, Adam Ishay, and Joohyung Lee. "NeurASP: Embracing Neural Networks into Answer Set Programming." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/243.

Abstract:
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.
2. Shi, Weijia, Andy Shih, Adnan Darwiche, and Arthur Choi. "On Tractable Representations of Binary Neural Networks." In 17th International Conference on Principles of Knowledge Representation and Reasoning {KR-2020}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/kr.2020/91.

Abstract:
We consider the compilation of a binary neural network’s decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs). Obtaining this function as an OBDD/SDD facilitates the explanation and formal verification of a neural network’s behavior. First, we consider the task of verifying the robustness of a neural network, and show how we can compute the expected robustness of a neural network, given an OBDD/SDD representation of it. Next, we consider a more efficient approach for compiling neural networks, based on a pseudo-polynomial time algorithm for compiling a neuron. We then provide a case study in a handwritten digits dataset, highlighting how two neural networks trained from the same dataset can have very high accuracies, yet have very different levels of robustness. Finally, in experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
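Since a binary neural network computes a Boolean function of its inputs, its complete decision function can, for small input sizes, be enumerated as a truth table, which is exactly the object an OBDD or SDD represents in tractable form. A toy sketch assuming signed binary ({-1, +1}) weights and sign-threshold units (this is only the underlying Boolean view, not the paper's compilation algorithm):

```python
import itertools
import numpy as np

# A toy binary neural network: 3 inputs -> 3 hidden neurons -> 1 output,
# with weights and activations in {-1, +1} and sign-threshold units.
W1 = np.array([[+1, -1, +1],
               [-1, +1, +1],
               [+1, +1, -1]])
W2 = np.array([+1, +1, -1])

def bnn(x):
    h = np.sign(W1 @ x)                  # hidden activations in {-1, +1} (sums of 3 odd terms, never 0)
    return int(np.sign(W2 @ h) > 0)      # final class in {0, 1}

# Enumerate the decision function as a truth table: the Boolean function that
# OBDD/SDD compilation would represent in a queryable, verifiable form.
for bits in itertools.product([0, 1], repeat=3):
    x = np.array([1 if b else -1 for b in bits])
    print(bits, "->", bnn(x))
```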
3. Najmon, Joel C., and Andres Tovar. "Comparing Derivatives of Neural Networks for Regression." In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-117571.

Abstract:
In the past decades, neural networks have rapidly grown in popularity as a way to model complex non-linear relationships. The computational efficiency and flexibility of neural networks have made them popular for machine learning-based optimization methods. As such, the derivative of a neural network's output is required for gradient-based optimization algorithms. Recently, there have been several works towards improving derivatives of neural network targets; however, a comparative study of the different methods for obtaining the derivative of a neural network's targets with respect to its input features has yet to be done. Consequently, this paper's objective is to implement and compare common methods for obtaining or approximating the derivative of neural network targets with respect to their inputs. The methods studied include analytical derivatives, finite differences, complex step approximation, and automatic differentiation. The methods are tested by training deep multilayer perceptrons for regression with several analytical functions. The derivatives of the neural network-derived methods are evaluated against the exact derivative of the test functions. Results show that all of the derivation methods provide the same derivative approximation to near working precision of the computer. The implementation of the study is done using the TensorFlow library in a provided Python code.
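The comparison can be reproduced in miniature: for a small network with an analytic activation, the analytical gradient, a central finite difference, and the complex-step approximation should agree to near machine precision. A self-contained NumPy sketch (the one-hidden-layer tanh network and step sizes are assumptions made here for illustration; the paper itself works in TensorFlow and also covers automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)     # 3 inputs -> 8 tanh units
w2, b2 = rng.normal(size=8), rng.normal()                # -> 1 scalar output

def f(x):
    """Scalar network output; works for real or complex x (needed for the complex step)."""
    return np.tanh(W1 @ x + b1) @ w2 + b2

def grad_analytic(x):
    h = np.tanh(W1 @ x + b1)
    return W1.T @ ((1 - h ** 2) * w2)        # chain rule through tanh

def grad_finite_diff(x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)   # central difference
    return g

def grad_complex_step(x, h=1e-20):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros(len(x), dtype=complex); e[i] = 1j * h
        g[i] = f(x + e).imag / h              # no subtractive cancellation
    return g

x = rng.normal(size=3)
print(grad_analytic(x))
print(grad_finite_diff(x))     # agrees to roughly 1e-9
print(grad_complex_step(x))    # agrees to machine precision
```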
4. Ozcan, Neyir. "New results for global stability of neutral-type delayed neural networks." In The 11th International Conference on Integrated Modeling and Analysis in Applied Control and Automation. CAL-TEK srl, 2018. http://dx.doi.org/10.46354/i3m.2018.imaaca.004.

Abstract:
This paper deals with the stability analysis of the class of neutral-type neural networks with constant time delay. By using a suitable Lyapunov functional, some delay-independent sufficient conditions are derived, which ensure the global asymptotic stability of the equilibrium point for this class of neutral-type neural networks with time delays with respect to Lipschitz activation functions. The presented stability results rely on checking certain properties of matrices. Therefore, it is easy to verify the validity of the constraint conditions on the network parameters of the neural system by simply using some basic facts from matrix theory.
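For orientation, stability analyses of this kind are usually stated for a model of the following standard neutral-type form (a common formulation assumed here for illustration; the paper's exact model and conditions may differ):

```latex
% Neutral-type delayed neural network (standard illustrative form):
%   x(t): neuron state vector;  A = diag(a_i), a_i > 0: self-feedback rates;
%   B, C: connection and delayed-connection weight matrices;  D: neutral-term matrix;
%   tau > 0: constant delay;  u: constant external input.
\[
\dot{x}(t) = -A\,x(t) + B\,f(x(t)) + C\,f(x(t-\tau)) + D\,\dot{x}(t-\tau) + u,
\qquad
\lvert f_i(u) - f_i(v) \rvert \le \ell_i \,\lvert u - v \rvert
\quad \text{(Lipschitz activations)}.
\]
```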
5. Benmaghnia, Hanane, Matthieu Martel, and Yassamine Seladji. "Fixed-Point Code Synthesis for Neural Networks." In 6th International Conference on Artificial Intelligence, Soft Computing and Applications (AISCA 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120202.

Abstract:
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous cars, etc. A problem is that these critical systems often have limited computing resources. Often, they use fixed-point arithmetic for its many advantages (speed, compatibility with small memory devices, etc.). In this article, a new technique is introduced to tune the formats (precision) of already trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only. The new optimized neural network computes the output with fixed-point numbers without degrading the accuracy beyond a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we do a preliminary analysis of our floating-point neural network to determine the worst cases, then we generate a system of linear constraints among integer variables that we can solve by linear programming. The solution of this system is the new fixed-point format of each neuron. The experimental results obtained show the efficiency of our method, which can ensure that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
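The core mechanism, evaluating a layer with integer arithmetic only once weights and inputs are given a fixed-point format, can be illustrated directly. A simplified sketch assuming a single Q format (number of fractional bits) for the whole layer; the paper instead derives per-neuron formats by linear programming and synthesizes the corresponding code:

```python
import numpy as np

def to_fixed(x, frac_bits):
    """Quantize a float array to integers representing a fixed-point Q format."""
    return np.round(x * (1 << frac_bits)).astype(np.int64)

def fixed_point_layer(W_fx, b_fx, x_fx, frac_bits):
    """One dense layer computed with integer arithmetic only.
    W_fx and x_fx carry `frac_bits` fractional bits, so their product carries
    2*frac_bits; an arithmetic shift rescales the result back."""
    acc = W_fx @ x_fx                      # int64 accumulator, 2*frac_bits fractional bits
    acc = acc + (b_fx << frac_bits)        # align the bias before adding
    return acc >> frac_bits                # back to frac_bits fractional bits

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=3)
FRAC = 12                                  # 12 fractional bits (illustrative choice)

y_float = W @ x + b
y_fixed = fixed_point_layer(to_fixed(W, FRAC), to_fixed(b, FRAC),
                            to_fixed(x, FRAC), FRAC) / (1 << FRAC)
print(np.max(np.abs(y_float - y_fixed)))   # small error, bounded by the chosen precision
```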
6. Adewusi, S. A., and B. O. Al-Bedoor. "Detection of Propagating Cracks in Rotors Using Neural Networks." In ASME 2002 Pressure Vessels and Piping Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/pvp2002-1518.

Abstract:
This paper presents the application of neural networks to rotor crack detection. The basic working principles of neural networks are presented. Experimental vibration signals of rotors with and without a propagating crack were used to train multilayer feed-forward neural networks using the back-propagation algorithm. The trained neural networks were tested with another set of vibration data. A simple two-layer feed-forward neural network with two neurons in the input layer and one neuron in the output layer, trained with the signals of a cracked rotor and a normal rotor without a crack, was found to be satisfactory in detecting a propagating crack. Trained three-layer networks were able to detect both propagating and non-propagating cracks. The FFTs of the vibration signals, showing the variation in amplitude of the harmonics as time progresses, are also presented for comparison.
7. Pryor, Connor, Charles Dickens, Eriq Augustine, Alon Albalak, William Yang Wang, and Lise Getoor. "NeuPSL: Neural Probabilistic Soft Logic." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/461.

Abstract:
In this paper, we introduce Neural Probabilistic Soft Logic (NeuPSL), a novel neuro-symbolic (NeSy) framework that unites state-of-the-art symbolic reasoning with the low-level perception of deep neural networks. To model the boundary between neural and symbolic representations, we propose a family of energy-based models, NeSy Energy-Based Models, and show that they are general enough to include NeuPSL and many other NeSy approaches. Using this framework, we show how to seamlessly integrate neural and symbolic parameter learning and inference in NeuPSL. Through an extensive empirical evaluation, we demonstrate the benefits of using NeSy methods, achieving upwards of 30% improvement over independent neural network models. On a well-established NeSy task, MNIST-Addition, NeuPSL demonstrates its joint reasoning capabilities by outperforming existing NeSy approaches by up to 10% in low-data settings. Furthermore, NeuPSL achieves a 5% boost in performance over state-of-the-art NeSy methods in a canonical citation network task with up to a 40 times speed up.
8. Owechko, Yuri. "Self-Pumped Optical Neural Networks." In Optical Computing. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/optcomp.1989.md4.

Abstract:
Neural network models for artificial intelligence offer an approach fundamentally different from conventional symbolic approaches, but the merits of the two paradigms cannot be fairly compared until neural network models with large numbers of "neurons" are implemented. Despite the attractiveness of neural networks for computing applications which involve adaptation and learning, most of the published demonstrations of neural network technology have involved relatively small numbers of "neurons". One reason for this is the poor match between conventional electronic serial or coarse-grained multiple-processor computers and the massive parallelism and communication requirements of neural network models. The self-pumped optical neural network (SPONN) described here is a fine-grained optical architecture which features massive parallelism and a much greater degree of interconnectivity than bus-oriented or hypercube electronic architectures. SPONN is potentially capable of implementing neural networks consisting of 10^5–10^6 neurons with 10^9–10^10 interconnections. The mapping of neural network models onto the architecture occurs naturally without the need for multiplexing neurons or dealing with contention, routing, and communication bottleneck problems. This simplifies the programming involved compared to electronic implementations.
9. Spall, James, Xianxin Guo, and A. I. Lvovsky. "Hybrid training of optical neural networks." In Frontiers in Optics. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/fio.2022.ftu6d.2.

Abstract:
Optical neural networks are often trained “in-silico” on digital simulators, but physical imperfections that cannot be modelled may lead to a “reality gap” between the simulator and the physical system. In this work we present hybrid training, where the weight matrix is trained by computing neuron values optically using the actual physical network.
10. Xie, Xuan, Kristian Kersting, and Daniel Neider. "Neuro-Symbolic Verification of Deep Neural Networks." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/503.

Abstract:
Formal verification has emerged as a powerful approach to ensure the safety and reliability of deep neural networks. However, current verification tools are limited to only a handful of properties that can be expressed as first-order constraints over the inputs and output of a network. While adversarial robustness and fairness fall under this category, many real-world properties (e.g., "an autonomous vehicle has to stop in front of a stop sign") remain outside the scope of existing verification technology. To mitigate this severe practical restriction, we introduce a novel framework for verifying neural networks, named neuro-symbolic verification. The key idea is to use neural networks as part of the otherwise logical specification, enabling the verification of a wide variety of complex, real-world properties, including the one above. A defining feature of our framework is that it can be implemented on top of existing verification infrastructure for neural networks, making it easily accessible to researchers and practitioners.

Reports on the topic "Neural networks"

1. Smith, Patrick I. Neural Networks. Office of Scientific and Technical Information (OSTI), September 2003. http://dx.doi.org/10.2172/815740.
2. Johnson, John L., and C. C. Sung. Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, January 1990. http://dx.doi.org/10.21236/ada222110.
3. Tarasenko, Andrii O., Yuriy V. Yakimov, and Vladimir N. Soloviev. Convolutional neural networks for image classification. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3682.

Abstract:
This paper shows the theoretical basis for the creation of convolutional neural networks for image classification and their application in practice. To achieve this goal, the main types of neural networks are considered, starting from the structure of a simple neuron and progressing to the convolutional multilayer network needed to solve this problem. The paper describes the stages of structuring the training data and the training cycle of the network, as well as the calculation of recognition errors during the training and verification stages. At the end of the work, the results of network training, the calculated recognition error and the training accuracy are presented.
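The pipeline the report describes (from a simple neuron to a convolutional multilayer network, with error and accuracy tracked during training and verification) reduces to a few lines in a modern framework. A generic Keras sketch for 28×28 grayscale images in 10 classes (the architecture and hyperparameters are illustrative assumptions, not the report's exact network):

```python
import tensorflow as tf

# A small convolutional network for 28x28 grayscale images, 10 classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training cycle: fit() reports training/validation loss (recognition error)
# and accuracy after every epoch, matching the stages described in the report.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```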
4. Holder, Nanette S. Introduction to Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, March 1992. http://dx.doi.org/10.21236/ada248258.
5. Abu-Mostafa, Yaser S., and Amir F. Atiya. Theory of Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, July 1991. http://dx.doi.org/10.21236/ada253187.
6. Alltop, W. O. Piecewise Linear Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, August 1992. http://dx.doi.org/10.21236/ada265031.
7. Wiggins, Vince L., Larry T. Looper, and Sheree K. Engquist. Neural Networks: A Primer. Fort Belvoir, VA: Defense Technical Information Center, May 1991. http://dx.doi.org/10.21236/ada235920.
8. Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.

Abstract:
We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model at runtime can be flexibly and directly set to different bit-widths, by truncating the least significant bits, to support a dynamic speed and accuracy trade-off. When all layers are set to low bits, we show that the model achieves accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where in practice trade-offs between model accuracy and runtime efficiency are often sought. Previous literature presents solutions to train models at each individual fixed efficiency/accuracy trade-off point. But how to produce a model flexible in runtime precision is largely unexplored. When the demand of efficiency/accuracy trade-off varies from time to time or even dynamically changes at runtime, it is infeasible to re-train models accordingly, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this achievement is agnostic to model architectures. We experimentally validated our method with different deep network backbones (AlexNet-small, ResNet-20, ResNet-50) on different datasets (SVHN, CIFAR-10, ImageNet) and observed consistent results.
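The runtime mechanism described above, selecting a lower bit-width by truncating the least significant bits of already-quantized values, can be shown directly. A small sketch assuming unsigned 8-bit codes with a uniform quantization grid (the training method that keeps every truncated width accurate is the paper's contribution and is not reproduced here):

```python
import numpy as np

def quantize_uint8(w):
    """Uniformly quantize float weights to 8-bit codes plus (scale, offset)."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0
    codes = np.round((w - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def truncate_to(codes, bits):
    """Run the same stored model at a lower precision by dropping least significant bits."""
    return (codes >> (8 - bits)).astype(np.uint8)

def dequantize(codes, bits, scale, lo):
    step = scale * (1 << (8 - bits))        # coarser grid at lower bit-widths
    return codes.astype(np.float64) * step + lo

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
codes8, scale, lo = quantize_uint8(w)
for bits in (8, 4, 2):                      # one stored model, several runtime precisions
    w_hat = dequantize(truncate_to(codes8, bits), bits, scale, lo)
    print(bits, "bits -> mean abs error", np.mean(np.abs(w - w_hat)))
```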
9. Keller, P. E. Artificial neural networks in medicine. Office of Scientific and Technical Information (OSTI), July 1994. http://dx.doi.org/10.2172/10162484.
10. Johnson, M. A., G. Kendall, P. J. Cote, and L. V. Meisel. Neural Networks in Seizure Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, March 1995. http://dx.doi.org/10.21236/ada295629.