Academic literature on the topic 'Artificial neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial neural networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Artificial neural networks"

1

N, Vikram. "Artificial Neural Networks." International Journal of Research Publication and Reviews 4, no. 4 (April 23, 2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.

2

Yashchenko, V. O. "Artificial brain. Biological and artificial neural networks, advantages, disadvantages, and prospects for development." Mathematical Machines and Systems 2 (2023): 3–17. http://dx.doi.org/10.34121/1028-9763-2023-2-3-17.

Abstract:
The article analyzes the problem of developing artificial neural networks within the framework of creating an artificial brain. The structure and functions of the biological brain are considered. The brain performs many functions, such as controlling the organism, coordinating movements, processing information, memory, thinking, attention, and regulating emotional states, and consists of billions of neurons interconnected by a multitude of connections in a biological neural network. The structure and functions of biological neural networks are discussed, and their advantages and disadvantages relative to artificial neural networks are described in detail. Biological neural networks solve, in real time, various complex tasks that are still inaccessible to artificial networks, such as simultaneously perceiving information from different sources, including vision, hearing, smell, taste, and touch, and recognizing and analyzing signals from the environment while making decisions in both known and uncertain situations. Nevertheless, despite all the advantages of biological neural networks, artificial intelligence continues to progress rapidly and is gradually gaining ground on the biological brain. It is assumed that in the future artificial neural networks will be able to approach the capabilities of the human brain and even surpass them. The neural networks of the human brain are compared with artificial neural networks. Deep neural networks, their training, and their use in various applications are described, and their advantages and disadvantages are discussed in detail. Possible ways to develop this direction further are analyzed. The Human Brain Project, aimed at creating a computer model that imitates the functions of the human brain, and the advanced artificial intelligence project ChatGPT are briefly considered. To develop an artificial brain, a new type of neural network is proposed: neural-like growing networks, whose structure and functions are similar to those of natural biological networks. A simplified scheme of the structure of an artificial brain based on a neural-like growing network is presented in the paper.
3

Sarwar, Abid. "Diagnosis of hyperglycemia using Artificial Neural Networks." International Journal of Trend in Scientific Research and Development 2, no. 1 (December 31, 2017): 606–10. http://dx.doi.org/10.31142/ijtsrd7045.

4

CVS, Rajesh, and M. Padmanabham. "Basics and Features of Artificial Neural Networks." International Journal of Trend in Scientific Research and Development 2, no. 2 (February 28, 2018): 1065–69. http://dx.doi.org/10.31142/ijtsrd9578.

5

Partridge, Derek, Sarah Rae, and Wen Jia Wang. "Artificial Neural Networks." Journal of the Royal Society of Medicine 92, no. 7 (July 1999): 385. http://dx.doi.org/10.1177/014107689909200723.

6

Moore, K. L. "Artificial neural networks." IEEE Potentials 11, no. 1 (February 1992): 23–28. http://dx.doi.org/10.1109/45.127697.

7

Dalton, J., and A. Deshmane. "Artificial neural networks." IEEE Potentials 10, no. 2 (April 1991): 33–36. http://dx.doi.org/10.1109/45.84097.

8

Yoon, Youngohc, and Lynn Peterson. "Artificial neural networks." ACM SIGMIS Database: the DATABASE for Advances in Information Systems 23, no. 1 (March 1992): 55–57. http://dx.doi.org/10.1145/134347.134362.

9

Makhoul, John. "Artificial Neural Networks." Investigative Radiology 25, no. 6 (June 1990): 748–50. http://dx.doi.org/10.1097/00004424-199006000-00027.

10

Watt, R. C., E. S. Maslana, M. J. Navabi, and K. C. Mylrea. "Artificial Neural Networks." Anesthesiology 77, Supplement (September 1992): A506. http://dx.doi.org/10.1097/00000542-199209001-00506.


Dissertations / Theses on the topic "Artificial neural networks"

1

Boychenko, I. V., and G. I. Litvinenko. "Artificial neural networks." Thesis, Вид-во СумДУ, 2009. http://essuir.sumdu.edu.ua/handle/123456789/17044.

2

Menneer, Tamaryn. "Quantum artificial neural networks." Thesis, University of Exeter, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286530.

3

Chambers, Mark Andrew. "Queuing network construction using artificial neural networks /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193665234291.

4

Orr, Ewan. "Evolving Turing's Artificial Neural Networks." Thesis, University of Canterbury. Department of Physics and Astronomy, 2010. http://hdl.handle.net/10092/4620.

Abstract:
Our project uses ideas first presented by Alan Turing. Turing's immense contribution to mathematics and computer science is widely known, but his pioneering work in artificial intelligence is relatively unknown. In the late 1940s Turing introduced discrete Boolean artificial neural networks, and it has been argued that he suggested these networks be trained via evolutionary algorithms. Both artificial neural networks and evolutionary algorithms are active fields of research. Turing's networks are very basic yet capable of complex tasks such as processing sequential input; consequently, they are an excellent model for investigating the application of evolutionary algorithms to artificial neural networks. We define an example of these networks using sequential input and output, and we devise evolutionary algorithms that train these networks. Our networks are discrete Boolean networks where every 'neuron' either performs NAND or identity, and they can represent any function that maps one sequence of bit strings to another. Our algorithms use supervised learning to discover networks that represent such functions. That is, when searching for a network that represents a particular function, our algorithms use input-output pairs of that function as examples to aid the discovery of solution networks. To test our ideas we encode our networks and implement the algorithms in a computer program. Using this program we investigate the performance of our networks and algorithms on simple problems such as searching for networks that realize the parity function and the multiplexer function. This investigation includes the construction and testing of an intricate crossover operator. Because our networks are composed of simple 'neurons' they are a suitable test-bed for novel training schemes. To improve our evolutionary algorithms for some problems we employ the symmetry of the problem to reduce its search space. We devise and test a means of using subgroups of the group of permutations of the inputs of a function to aid evolutionary searches for networks that represent that function. In particular, we employ the action of the permutation group S₂ to 'cut down' the search space when we search for networks that represent functions such as parity.
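The NAND-plus-identity networks the abstract describes are easy to sketch. As a rough, hypothetical illustration (none of this code is from the thesis), the parity function the author targets can be realized by folding a classic four-NAND XOR gadget over the input bit string:

```python
# Hypothetical sketch: a discrete Boolean network whose only "neurons"
# compute NAND (identity wires are implicit), evaluating parity.

def nand(a, b):
    return not (a and b)

def xor(a, b):
    # The classic 4-NAND construction of XOR.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def parity(bits):
    # Sequential processing: fold the bit string through the XOR gadget,
    # mirroring networks that map sequences of bits to outputs.
    acc = False
    for b in bits:
        acc = xor(acc, b)
    return acc

print(parity([1, 0, 1, 1]))  # odd number of 1s -> True
```

An evolutionary algorithm in the thesis's spirit would search over wirings of such NAND units, scoring candidates against input-output examples of the target function.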
5

Varoonchotikul, Pichaid. "Flood forecasting using artificial neural networks /." Lisse : Balkema, 2003. http://www.e-streams.com/es0704/es0704_3168.html.

6

Fraticelli, Chiara. "Λc reconstruction with artificial neural networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19985/.

Abstract:
The ALICE detector studies collisions of ultrarelativistic heavy ions in order to create, and consequently study, the state of matter called quark-gluon plasma. This goal is made difficult by its short lifetime, so we rely on indirect measurements as evidence of its existence. In this thesis we used machine learning techniques to study the decay of the charmed baryon Λc and to infer some of its properties. In particular, we used neural networks to extract as much information as possible via a multivariate analysis.
7

Millevik, Daniel, and Michael Wang. "Stock Forecasting Using Artificial Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166455.

Abstract:
This paper studies the potential of artificial neural networks (ANNs) in stock forecasting. It also investigates how the number of neurons in the network, as well as the distribution of the data into training, validation, and testing sets, affects the accuracy of the network. Using MATLAB and its Neural Network Toolbox, tests were carried out with a two-layer feedforward neural network (FFNN). Five years of historical data from the Dow Jones Industrial Average (DJIA) stock index were collected and used to train the network; the network was then retrained with different configurations, varying the number of neurons and the training data distribution, and tested on a separate year of the DJIA index. The best accuracy obtained for predicting the closing stock price one day ahead is around 99%. Some configurations give worse accuracy, mainly those using many neurons and those with a low training data percentage. The conclusion is that there is potential for stock forecasting using ANNs, but predicting only one day ahead might not be practically useful. It is important to adapt the network to the given problem and its complexity, choosing the number of neurons accordingly. It will also be necessary to retrain the network several times in order to find one with good performance. Beyond the training data distribution, it is more important to gather enough data for the network's training set to allow it to adapt and generalize to the problem at hand.
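As a loose illustration of the setup the abstract describes (a two-layer feedforward network trained on windows of past values), the sketch below trains such a network with plain full-batch backpropagation. It is an assumption-laden stand-in, not the thesis's MATLAB code: a synthetic sine wave substitutes for DJIA closing prices, and the hidden-layer size is one of the configurations the thesis varies.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 400))  # stand-in for price data

# Sliding windows of 5 past values predict the next value.
window = 5
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:].reshape(-1, 1)

hidden = 10  # the neuron count varied between configurations
W1 = rng.normal(0.0, 0.5, (window, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, (hidden, 1));      b2 = np.zeros(1)

losses = []
lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)   # hidden layer
    pred = h @ W2 + b2         # linear output layer
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Full-batch backpropagation of the mean-squared error.
    g_out = 2.0 * err / len(X)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

print(f"MSE fell from {losses[0]:.4f} to {losses[-1]:.4f}")
```

In MATLAB's Toolbox the equivalent is a `fitnet`-style network; the point here is only the shape of the experiment: window the series, train, then evaluate on held-out data.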
8

Prasad, Jayan Ganesh. "Financial forecasting using artificial neural networks." Thesis, University of New South Wales - Australian Defence Force Academy, School of Information Technology and Electrical Engineering, 2008. http://handle.unsw.edu.au/1959.4/38700.

Abstract:
Despite the extent of the theoretical framework in financial market studies, the vast majority of traders, investors, and computer scientists have relied only on technical and time-series data for predicting future prices. So far, forecasting models have rarely incorporated macro-economic and market fundamentals successfully, especially for short-term predictions ranging under a month. In this investigation of the predictability of certain financial markets, an attempt has been made to incorporate an unprecedented and comprehensive set of parameters into an artificial neural network prediction system. Experiments were carried out on three market instruments: currency exchange rates, share prices, and oil prices. The choice of parameters for inclusion or exclusion, and the time frame adopted for the experimental sets, were derived from the market literature. Good directional prediction accuracies were achieved for currency exchange rates and share prices with certain parameters as inputs, which consisted of predicting short-term movements based on past movements. These predictions were better than the results produced by a traditional least-squares prediction method. The trading strategy developed based on the predictions also achieved a higher percentage of winning trades. No significant predictions were observed for oil prices. These results open up questions about the microstructure of the markets and provide insight into the inputs required for market forecasting in the corresponding time frame, for future investigation. The study concludes by advocating the use of trend-based input parameters and suggests ways to improve neural network forecasting models.
9

Ng, Roger K. W. "Rapid prototyping of artificial neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq23440.pdf.

10

Hook, Jaroslav. "Are artificial neural networks learning machines?" Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ38651.pdf.


Books on the topic "Artificial neural networks"

1

Koprinkova-Hristova, Petia, Valeri Mladenov, and Nikola K. Kasabov, eds. Artificial Neural Networks. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09903-3.

2

Livingstone, David J., ed. Artificial Neural Networks. Totowa, NJ: Humana Press, 2009. http://dx.doi.org/10.1007/978-1-60327-101-1.

3

Braspenning, P. J., F. Thuijsman, and A. J. M. M. Weijters, eds. Artificial Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0027019.

4

Prieto, Alberto, ed. Artificial Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/bfb0035870.

5

Cartwright, Hugh, ed. Artificial Neural Networks. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4939-2239-0.

6

Karayiannis, N. B., and A. N. Venetsanopoulos. Artificial Neural Networks. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4.

7

da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. Artificial Neural Networks. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-43162-8.

8

Cartwright, Hugh, ed. Artificial Neural Networks. New York, NY: Springer US, 2021. http://dx.doi.org/10.1007/978-1-0716-0826-5.

9

Artificial neural networks. New York: McGraw-Hill, 1997.

10

Kwon, Seoyun J. Artificial neural networks. Hauppauge, N.Y: Nova Science Publishers, 2010.


Book chapters on the topic "Artificial neural networks"

1

Pratt, Ian. "Neural Networks." In Artificial Intelligence, 216–45. London: Macmillan Education UK, 1994. http://dx.doi.org/10.1007/978-1-349-13277-5_10.

2

Aggarwal, Charu C. "Neural Networks." In Artificial Intelligence, 211–51. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72357-6_7.

3

Bell, Tony. "Artificial dendritic learning." In Neural Networks, 161–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52255-7_37.

4

Pérez Castaño, Arnaldo. "Neural Networks." In Practical Artificial Intelligence, 411–60. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3357-3_11.

5

Wehenkel, Louis A. "Artificial Neural Networks." In Automatic Learning Techniques in Power Systems, 71–98. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5451-6_4.

6

Czischek, Stefanie. "Artificial Neural Networks." In Springer Theses, 53–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52715-0_3.

7

Fortuna, Luigi, Gianguido Rizzotto, Mario Lavorgna, Giuseppe Nunnari, M. Gabriella Xibilia, and Riccardo Caponetto. "Artificial Neural Networks." In Advanced Textbooks in Control and Signal Processing, 53–79. London: Springer London, 2001. http://dx.doi.org/10.1007/978-1-4471-0357-8_4.

8

Broese, Einar, and Hans Ulrich Löffler. "Artificial Neural Networks." In Continuum Scale Simulation of Engineering Materials, 185–99. Weinheim, FRG: Wiley-VCH Verlag GmbH & Co. KGaA, 2005. http://dx.doi.org/10.1002/3527603786.ch7.

9

Dracopoulos, Dimitris C. "Artificial Neural Networks." In Perspectives in Neural Computing, 47–70. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-4471-0903-7_4.

10

Buscema, Paolo Massimo, Giulia Massini, Marco Breda, Weldon A. Lodwick, Francis Newman, and Masoud Asadi-Zeydabadi. "Artificial Neural Networks." In Artificial Adaptive Systems Using Auto Contractive Maps, 11–35. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75049-1_2.


Conference papers on the topic "Artificial neural networks"

1

Yang, Zhun, Adam Ishay, and Joohyung Lee. "NeurASP: Embracing Neural Networks into Answer Set Programming." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/243.

Abstract:
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.
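The core idea, treating a network's softmax output as a probability distribution over atomic facts and reasoning on top of it, can be sketched in a few lines. This is a hypothetical toy, not the NeurASP implementation: the distributions `p_img1` and `p_img2` stand in for real classifier outputs, and the "rule" is plain Python rather than an answer set program.

```python
# Toy illustration of the neuro-symbolic idea: two "digit classifiers"
# output distributions over {0, 1, 2}; the symbolic rule is addition.
from itertools import product

# Stand-ins for softmax outputs of a neural network on two images.
p_img1 = {0: 0.7, 1: 0.2, 2: 0.1}
p_img2 = {0: 0.1, 1: 0.6, 2: 0.3}

def prob_sum(target):
    # Probability that the symbolic conclusion sum(img1, img2, target)
    # holds, marginalizing over the neural atoms digit(img1, X) and
    # digit(img2, Y).
    return sum(p_img1[x] * p_img2[y]
               for x, y in product(p_img1, p_img2) if x + y == target)

print({s: round(prob_sum(s), 3) for s in range(5)})
```

In NeurASP proper, this marginalization is carried out over the stable models of an answer set program, and the same bridge is used in reverse to backpropagate rule satisfaction into the network's weights.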
2

Lacrama, Dan L., Loredana Ileana Viscu, and Cornelia Victoria Anghel Drugarin. "Artificial vs. natural neural networks." In 2016 13th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2016. http://dx.doi.org/10.1109/neurel.2016.7800093.

3

Zheng, Shengjie, Lang Qian, Pingsheng Li, Chenggang He, Xiaoqi Qin, and Xiaojian Li. "An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence." In 8th International Conference on Artificial Intelligence (ARIN 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121010.

Abstract:
Stemming from the rapid development of artificial intelligence, which has achieved expansive success in pattern recognition, robotics, and bioinformatics, neuroscience is also making tremendous progress. A kind of spiking neural network with biological interpretability is gradually receiving wide attention, and this kind of neural network is also regarded as one of the directions toward general artificial intelligence. This review summarizes the basic properties of artificial neural networks as well as spiking neural networks. Our focus is on the biological background and theoretical basis of spiking neurons, different neuronal models, and the connectivity of neural circuits. We also review the mainstream neural network learning mechanisms and network architectures. This review hopes to attract different researchers and advance the development of brain intelligence and artificial intelligence.
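For readers unfamiliar with the contrast the review draws, a leaky integrate-and-fire unit, one of the simplest spiking models, can be sketched as follows. This toy is an illustration of the model class only, not code from the paper, and the parameters (`tau`, `v_thresh`) are arbitrary:

```python
# Leaky integrate-and-fire neuron: unlike an ANN unit's instantaneous
# activation, state (membrane potential) persists across time steps.
def lif(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    v, spikes = 0.0, []
    for i in inputs:
        # The potential leaks toward rest and integrates the input.
        v += dt * (-v / tau + i)
        if v >= v_thresh:      # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

print(lif([0.3] * 10))  # constant drive -> a regular spike train
```

A constant sub-threshold input thus produces a periodic spike train whose rate encodes the input's magnitude, the rate-coding view the review discusses.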
4

Atashgahi, Zahra. "Cost-effective Artificial Neural Networks." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/810.

Abstract:
Deep neural networks (DNNs) have gained huge attention over the last several years due to their promising results in various tasks. However, due to their large model size and over-parameterization, they are recognized as being computationally demanding. Therefore, deep learning models are not well-suited to applications with limited computational resources and battery life. Current solutions to reduce computation costs mainly focus on inference efficiency while being resource-intensive during training. This Ph.D. research aims to address these challenges by developing cost-effective neural networks that can achieve decent performance on various complex tasks using minimum computational resources during training and inference of the network.
5

Pryor, Connor, Charles Dickens, Eriq Augustine, Alon Albalak, William Yang Wang, and Lise Getoor. "NeuPSL: Neural Probabilistic Soft Logic." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/461.

Abstract:
In this paper, we introduce Neural Probabilistic Soft Logic (NeuPSL), a novel neuro-symbolic (NeSy) framework that unites state-of-the-art symbolic reasoning with the low-level perception of deep neural networks. To model the boundary between neural and symbolic representations, we propose a family of energy-based models, NeSy Energy-Based Models, and show that they are general enough to include NeuPSL and many other NeSy approaches. Using this framework, we show how to seamlessly integrate neural and symbolic parameter learning and inference in NeuPSL. Through an extensive empirical evaluation, we demonstrate the benefits of using NeSy methods, achieving upwards of 30% improvement over independent neural network models. On a well-established NeSy task, MNIST-Addition, NeuPSL demonstrates its joint reasoning capabilities by outperforming existing NeSy approaches by up to 10% in low-data settings. Furthermore, NeuPSL achieves a 5% boost in performance over state-of-the-art NeSy methods in a canonical citation network task with up to a 40 times speed up.
6

Xie, Xuan, Kristian Kersting, and Daniel Neider. "Neuro-Symbolic Verification of Deep Neural Networks." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/503.

Abstract:
Formal verification has emerged as a powerful approach to ensure the safety and reliability of deep neural networks. However, current verification tools are limited to only a handful of properties that can be expressed as first-order constraints over the inputs and output of a network. While adversarial robustness and fairness fall under this category, many real-world properties (e.g., "an autonomous vehicle has to stop in front of a stop sign") remain outside the scope of existing verification technology. To mitigate this severe practical restriction, we introduce a novel framework for verifying neural networks, named neuro-symbolic verification. The key idea is to use neural networks as part of the otherwise logical specification, enabling the verification of a wide variety of complex, real-world properties, including the one above. A defining feature of our framework is that it can be implemented on top of existing verification infrastructure for neural networks, making it easily accessible to researchers and practitioners.
7

Barua, Susamma. "Optical and systolic implementation of an artificial neural network." In Photonic Neural Networks. SPIE, 1993. http://dx.doi.org/10.1117/12.983197.

8

Caulfield, H. John. "Artificial neural/chemical networks." In International Symposium on Optical Science and Technology, edited by Bruno Bosacchi, David B. Fogel, and James C. Bezdek. SPIE, 2001. http://dx.doi.org/10.1117/12.448345.

9

Badgero, M. L. "Digitizing artificial neural networks." In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374850.

10

Benmaghnia, Hanane, Matthieu Martel, and Yassamine Seladji. "Fixed-Point Code Synthesis for Neural Networks." In 6th International Conference on Artificial Intelligence, Soft Computing and Applications (AISCA 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120202.

Abstract:
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous cars, etc. A problem is that these critical systems often have limited computing resources. They often use fixed-point arithmetic for its many advantages (speed, compatibility with small memory devices). In this article, a new technique is introduced to tune the formats (precision) of already trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only. The new optimized neural network computes the output with fixed-point numbers while keeping the accuracy within a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we do a preliminary analysis of our floating-point neural network to determine the worst cases, then we generate a system of linear constraints among integer variables that we can solve by linear programming. The solution of this system is the new fixed-point format of each neuron. The experimental results obtained show the efficiency of our method, which can ensure that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
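The kind of transformation the abstract describes, evaluating a trained network with integer-only fixed-point arithmetic and checking the error against a user threshold, can be miniaturized to a single neuron. This sketch is illustrative only: the paper derives a per-neuron format by linear programming, whereas the single global `FRAC_BITS` here is an assumption for brevity.

```python
# Evaluate one neuron with integer-only fixed-point arithmetic and
# compare against the floating-point reference.
FRAC_BITS = 12           # one fixed-point format for every value here;
SCALE = 1 << FRAC_BITS   # the paper instead synthesizes per-neuron formats

def to_fx(x):
    return round(x * SCALE)  # float -> Q-format integer

def neuron_fx(w_fx, x_fx, b_fx):
    # Integer dot product; each product carries 2*FRAC_BITS fractional
    # bits, so shift back down before adding the bias.
    acc = sum(wi * xi for wi, xi in zip(w_fx, x_fx)) >> FRAC_BITS
    return acc + b_fx

w, x, b = [0.5, -1.25, 0.75], [0.1, 0.2, 0.3], 0.05
ref = sum(wi * xi for wi, xi in zip(w, x)) + b
got = neuron_fx([to_fx(v) for v in w], [to_fx(v) for v in x], to_fx(b)) / SCALE
assert abs(ref - got) < 1e-3  # the user-fixed error threshold
print(ref, got)
```

The synthesis problem in the paper is then: choose the fractional bit width of each neuron so that the worst-case version of this error bound holds over the whole input range, which reduces to linear constraints over the integer bit widths.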

Reports on the topic "Artificial neural networks"

1

Keller, P. E. Artificial neural networks in medicine. Office of Scientific and Technical Information (OSTI), July 1994. http://dx.doi.org/10.2172/10162484.

2

Dawes, Robert L. BIOMASSCOMP: Artificial Neural Networks and Neurocomputers. Fort Belvoir, VA: Defense Technical Information Center, September 1988. http://dx.doi.org/10.21236/ada200902.

3

Sgurev, Vassil. Artificial Neural Networks as a Network Flow with Capacities. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, September 2018. http://dx.doi.org/10.7546/crabs.2018.09.12.

4

Teeter, Corinne. Short Term Plasticity for Artificial Neural Networks. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/2004879.

5

Blough, D. K., and K. K. Anderson. A comparison of artificial neural networks and statistical analyses. Office of Scientific and Technical Information (OSTI), January 1994. http://dx.doi.org/10.2172/10146489.

6

Amidi, Erfan. A Search for top quark using artificial neural networks. Office of Scientific and Technical Information (OSTI), February 1996. http://dx.doi.org/10.2172/1156358.

7

Baud-Bovy, Gabriel. A gaze-addressing communication system using artificial neural networks. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6142.

8

Powell, Jr., James Estes. Learning Memento archive routing with Character-based Artificial Neural Networks. Office of Scientific and Technical Information (OSTI), October 2018. http://dx.doi.org/10.2172/1477616.

9

Gonzalez Pibernat, Gabriel, and Miguel Mascaró Portells. Dynamic structure of single-layer neural networks. Fundación Avanza, May 2023. http://dx.doi.org/10.60096/fundacionavanza/2392022.

Abstract:
This article examines the practical applications of single-hidden-layer neural networks in machine learning and artificial intelligence. They have been used in diverse fields, such as finance, medicine, and autonomous vehicles, due to their simplicity.
10

Waqas, Muhammad Talha. Synaptic Symmetry: Exploring Similarities in Neural Connections between Human Brain and Artificial Neural Networks. ResearchHub Technologies, Inc., February 2024. http://dx.doi.org/10.55277/researchhub.c4dckln9.
