Ready bibliography on the topic "Artificial neural networks"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a type of source:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Artificial neural networks".

Next to every scholarly work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever the corresponding parameters are available in the metadata.

Journal articles on the topic "Artificial neural networks"

1

N, Vikram. "Artificial Neural Networks". International Journal of Research Publication and Reviews 4, no. 4 (23.04.2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.

2

Yashchenko, V. O. "Artificial brain. Biological and artificial neural networks, advantages, disadvantages, and prospects for development". Mathematical machines and systems 2 (2023): 3–17. http://dx.doi.org/10.34121/1028-9763-2023-2-3-17.

Abstract:
The article analyzes the problem of developing artificial neural networks within the framework of creating an artificial brain. The structure and functions of the biological brain are considered. The brain performs many functions, such as controlling the organism, coordinating movements, processing information, memory, thinking, attention, and regulating emotional states, and consists of billions of neurons interconnected by a multitude of connections in a biological neural network. The structure and functions of biological neural networks are discussed, and their advantages and disadvantages are described in detail compared to artificial neural networks. Biological neural networks solve various complex tasks in real time that are still inaccessible to artificial networks, such as simultaneous perception of information from different sources (vision, hearing, smell, taste, and touch) and recognition and analysis of signals from the environment with simultaneous decision-making in known and uncertain situations. Overall, despite all the advantages of biological neural networks, artificial intelligence continues to progress rapidly, gradually gaining ground on the biological brain. It is assumed that in the future, artificial neural networks will be able to approach the capabilities of the human brain and even surpass them. A comparison of human brain neural networks with artificial neural networks is carried out. Deep neural networks, their training, and their use in various applications are described, and their advantages and disadvantages are discussed in detail. Possible ways to develop this direction further are analyzed. The Human Brain Project, aimed at creating a computer model that imitates the functions of the human brain, and the advanced artificial intelligence project ChatGPT are briefly considered.
To develop an artificial brain, a new type of neural network is proposed – neural-like growing networks, the structure and functions of which are similar to natural biological networks. A simplified scheme of the structure of an artificial brain based on a neural-like growing network is presented in the paper.
3

Sarwar, Abid. "Diagnosis of hyperglycemia using Artificial Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-1 (31.12.2017): 606–10. http://dx.doi.org/10.31142/ijtsrd7045.

4

CVS, Rajesh, and M. Padmanabham. "Basics and Features of Artificial Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-2 (28.02.2018): 1065–69. http://dx.doi.org/10.31142/ijtsrd9578.

5

Partridge, Derek, Sarah Rae, and Wen Jia Wang. "Artificial Neural Networks". Journal of the Royal Society of Medicine 92, no. 7 (July 1999): 385. http://dx.doi.org/10.1177/014107689909200723.

6

Moore, K. L. "Artificial neural networks". IEEE Potentials 11, no. 1 (February 1992): 23–28. http://dx.doi.org/10.1109/45.127697.

7

Dalton, J., and A. Deshmane. "Artificial neural networks". IEEE Potentials 10, no. 2 (April 1991): 33–36. http://dx.doi.org/10.1109/45.84097.

8

Yoon, Youngohc, and Lynn Peterson. "Artificial neural networks". ACM SIGMIS Database: the DATABASE for Advances in Information Systems 23, no. 1 (March 1992): 55–57. http://dx.doi.org/10.1145/134347.134362.

9

Makhoul, John. "Artificial Neural Networks". Investigative Radiology 25, no. 6 (June 1990): 748–50. http://dx.doi.org/10.1097/00004424-199006000-00027.

10

Watt, R. C., E. S. Maslana, M. J. Navabi, and K. C. Mylrea. "Artificial Neural Networks". Anesthesiology 77, Supplement (September 1992): A506. http://dx.doi.org/10.1097/00000542-199209001-00506.


Doctoral dissertations on the topic "Artificial neural networks"

1

Boychenko, I. V., and G. I. Litvinenko. "Artificial neural networks". Thesis, Вид-во СумДУ, 2009. http://essuir.sumdu.edu.ua/handle/123456789/17044.

2

Menneer, Tamaryn Stable Ia. "Quantum artificial neural networks". Thesis, University of Exeter, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286530.

3

Chambers, Mark Andrew. "Queuing network construction using artificial neural networks". The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193665234291.

4

Orr, Ewan. "Evolving Turing's Artificial Neural Networks". Thesis, University of Canterbury. Department of Physics and Astronomy, 2010. http://hdl.handle.net/10092/4620.

Abstract:
Our project uses ideas first presented by Alan Turing. Turing's immense contribution to mathematics and computer science is widely known, but his pioneering work in artificial intelligence is relatively unknown. In the late 1940s Turing introduced discrete Boolean artificial neural networks and, it has been argued, suggested that these networks be trained via evolutionary algorithms. Both artificial neural networks and evolutionary algorithms are active fields of research. Turing's networks are very basic yet capable of complex tasks such as processing sequential input; consequently, they are an excellent model for investigating the application of evolutionary algorithms to artificial neural networks. We define an example of these networks using sequential input and output, and we devise evolutionary algorithms that train these networks. Our networks are discrete Boolean networks where every 'neuron' either performs NAND or identity, and they can represent any function that maps one sequence of bit strings to another. Our algorithms use supervised learning to discover networks that represent such functions. That is, when searching for a network that represents a particular function, our algorithms use input-output pairs of that function as examples to aid the discovery of solution networks. To test our ideas we encode our networks and implement the algorithms in a computer program. Using this program we investigate the performance of our networks and algorithms on simple problems, such as searching for networks that realize the parity function and the multiplexer function. This investigation includes the construction and testing of an intricate crossover operator. Because our networks are composed of simple 'neurons', they are a suitable test-bed for novel training schemes. To improve our evolutionary algorithms for some problems we employ the symmetry of the problem to reduce its search space.
We devise and test a means of using subgroups of the group of permutations of the inputs of a function to aid evolutionary searches for networks that represent that function. In particular, we employ the action of the permutation group S₂ to 'cut down' the search space when we search for networks that represent functions such as parity.
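The model the abstract describes (discrete Boolean 'neurons' computing NAND, driven by a sequence of input bits) can be sketched in a few lines. The wiring scheme and function names below are assumptions for illustration, not the thesis code:

```python
# Minimal sketch of a Turing-style discrete Boolean network.
# Each 'neuron' computes NAND of two sources; the network is evaluated
# synchronously, one step per input bit.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def run_network(wiring, input_bits):
    """wiring[i] = (src_a, src_b) for neuron i; a source of -1 denotes
    the current external input bit. Returns the last neuron's output
    at every step."""
    state = [False] * len(wiring)
    outputs = []
    for bit in input_bits:
        read = lambda src: bit if src == -1 else state[src]
        state = [nand(read(a), read(b)) for (a, b) in wiring]  # old state read
        outputs.append(state[-1])
    return outputs

# Neuron 0 NANDs the input with itself (i.e. NOT); neuron 1 NANDs
# neuron 0's previous state with the current input.
wiring = [(-1, -1), (0, -1)]
print(run_network(wiring, [True, False, True]))  # → [True, True, False]
```

An evolutionary search over such networks would mutate and recombine the `wiring` lists, scoring candidates against input-output examples as the abstract describes.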
5

Varoonchotikul, Pichaid. "Flood forecasting using artificial neural networks". Lisse: Balkema, 2003. http://www.e-streams.com/es0704/es0704_3168.html.

6

Fraticelli, Chiara. "Λc reconstruction with artificial neural networks". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19985/.

Abstract:
The ALICE detector studies collisions of ultrarelativistic heavy ions in order to create, and consequently study, the state of matter called the quark-gluon plasma. This goal is made difficult by the plasma's short lifetime, so we rely on indirect measurements as evidence of its existence. In this thesis we used machine learning techniques to study the decay of the charmed baryon Λc in order to infer some of its properties. In particular, we used neural networks to extract all the information possible with a multivariate analysis technique.
7

Millevik, Daniel, and Michael Wang. "Stock Forecasting Using Artificial Neural Networks". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166455.

Abstract:
This paper studies the potential of artificial neural networks (ANNs) in stock forecasting. It also investigates how the number of neurons in the network, as well as the distribution of the training data into training, validation and testing sets, affect the accuracy of the network. Using MATLAB and its Neural Network Toolbox, tests were carried out with a two-layer feedforward neural network (FFNN). These were carried out by collecting five years of historical data from the Dow Jones Industrial Average (DJIA) stock index, which was then used for training the network. Finally, the network was retrained with different configurations, with respect to the number of neurons and the training data distribution, in order to perform tests on a separate year of the DJIA stock index. The best acquired accuracy for predicting the closing stock price one day ahead is around 99%. There are configurations that give worse accuracy, mainly those using many neurons as well as those with a low training data percentage. The conclusion is that there is potential for stock forecasting using ANNs, but only predicting one day forward might not be practically useful. It is important to adapt the network to the given problem and its complexity, and thus to choose the number of neurons accordingly. It will also be necessary to retrain the network several times in order to find one with good performance. Beyond the training data distribution, it is more important to gather enough data for the network's training set to allow it to adapt and generalize to the problem at hand.
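The data handling the abstract describes (lagged inputs predicting the next closing price, with the series split into training, validation and testing sets) can be sketched as follows. This is a hypothetical Python illustration; the thesis itself used MATLAB's Neural Network Toolbox, and all names and numbers here are made up:

```python
# Sketch of the forecasting set-up: turn a closing-price series into
# (lagged inputs, next price) samples and split them chronologically
# into training, validation and test sets.

def make_windows(series, n_lags):
    """Each sample pairs n_lags consecutive prices with the next price."""
    return [(series[i:i + n_lags], series[i + n_lags])
            for i in range(len(series) - n_lags)]

def split(samples, train=0.7, val=0.15):
    """Chronological split; shuffling would leak future prices into training."""
    n = len(samples)
    a, b = int(n * train), int(n * (train + val))
    return samples[:a], samples[a:b], samples[b:]

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 111]
samples = make_windows(prices, n_lags=3)
train_set, val_set, test_set = split(samples)
print(len(train_set), len(val_set), len(test_set))  # → 4 1 2
```

Varying the `train`/`val` ratios and the network's neuron count is exactly the configuration sweep the thesis reports on.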
8

Prasad, Jayan Ganesh. "Financial forecasting using artificial neural networks". Awarded by: University of New South Wales - Australian Defence Force Academy, School of Information Technology and Electrical Engineering, 2008. http://handle.unsw.edu.au/1959.4/38700.

Abstract:
Despite the extent of the theoretical framework in financial market studies, the vast majority of traders, investors and computer scientists have relied only on technical and time-series data for predicting future prices. So far, forecasting models have rarely incorporated macro-economic and market fundamentals successfully, especially for short-term predictions ranging less than a month. In this investigation of the predictability of certain financial markets, an attempt has been made to incorporate an unexampled and encompassing set of parameters into an Artificial Neural Network prediction system. Experiments were carried out on three market instruments, namely currency exchange rates, share prices and oil prices. The choice of parameters for inclusion or exclusion, and the time frame adopted for the experimental sets, were derived from the market literature. Good directional prediction accuracies were achieved for currency exchange rates and share prices with certain parameters as inputs, which consisted of predicting short-term movements based on past movements. These predictions were better than the results produced by a traditional least-squares prediction method. The trading strategy developed based on the predictions also achieved a higher percentage of winning trades. No significant predictions were observed for oil prices. These results open up questions about the microstructure of the markets and provide an insight into the inputs required for market forecasting in the corresponding time frame, for future investigation. The study concludes by advocating the use of trend-based input parameters and suggests ways to improve neural network forecasting models.
9

Ng, Roger K. W. "Rapid prototyping of artificial neural networks". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq23440.pdf.

10

Hook, Jaroslav. "Are artificial neural networks learning machines?" Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ38651.pdf.


Books on the topic "Artificial neural networks"

1

Koprinkova-Hristova, Petia, Valeri Mladenov, and Nikola K. Kasabov, eds. Artificial Neural Networks. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09903-3.

2

Livingstone, David J., ed. Artificial Neural Networks. Totowa, NJ: Humana Press, 2009. http://dx.doi.org/10.1007/978-1-60327-101-1.

3

Braspenning, P. J., F. Thuijsman, and A. J. M. M. Weijters, eds. Artificial Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0027019.

4

Prieto, Alberto, ed. Artificial Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/bfb0035870.

5

Cartwright, Hugh, ed. Artificial Neural Networks. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4939-2239-0.

6

Karayiannis, N. B., and A. N. Venetsanopoulos. Artificial Neural Networks. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4.

7

da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. Artificial Neural Networks. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-43162-8.

8

Cartwright, Hugh, ed. Artificial Neural Networks. New York, NY: Springer US, 2021. http://dx.doi.org/10.1007/978-1-0716-0826-5.

9

Artificial neural networks. New York: McGraw-Hill, 1997.

10

Kwon, Seoyun J. Artificial neural networks. Hauppauge, N.Y: Nova Science Publishers, 2010.


Book chapters on the topic "Artificial neural networks"

1

Pratt, Ian. "Neural Networks". In Artificial Intelligence, 216–45. London: Macmillan Education UK, 1994. http://dx.doi.org/10.1007/978-1-349-13277-5_10.

2

Aggarwal, Charu C. "Neural Networks". In Artificial Intelligence, 211–51. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72357-6_7.

3

Bell, Tony. "Artificial dendritic learning". In Neural Networks, 161–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52255-7_37.

4

Pérez Castaño, Arnaldo. "Neural Networks". In Practical Artificial Intelligence, 411–60. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3357-3_11.

5

Wehenkel, Louis A. "Artificial Neural Networks". In Automatic Learning Techniques in Power Systems, 71–98. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5451-6_4.

6

Czischek, Stefanie. "Artificial Neural Networks". In Springer Theses, 53–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52715-0_3.

7

Fortuna, Luigi, Gianguido Rizzotto, Mario Lavorgna, Giuseppe Nunnari, M. Gabriella Xibilia, and Riccardo Caponetto. "Artificial Neural Networks". In Advanced Textbooks in Control and Signal Processing, 53–79. London: Springer London, 2001. http://dx.doi.org/10.1007/978-1-4471-0357-8_4.

8

Broese, Einar, and Hans Ulrich Löffler. "Artificial Neural Networks". In Continuum Scale Simulation of Engineering Materials, 185–99. Weinheim, FRG: Wiley-VCH Verlag GmbH & Co. KGaA, 2005. http://dx.doi.org/10.1002/3527603786.ch7.

9

Dracopoulos, Dimitris C. "Artificial Neural Networks". In Perspectives in Neural Computing, 47–70. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-4471-0903-7_4.

10

Buscema, Paolo Massimo, Giulia Massini, Marco Breda, Weldon A. Lodwick, Francis Newman, and Masoud Asadi-Zeydabadi. "Artificial Neural Networks". In Artificial Adaptive Systems Using Auto Contractive Maps, 11–35. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75049-1_2.


Conference papers on the topic "Artificial neural networks"

1

Yang, Zhun, Adam Ishay, and Joohyung Lee. "NeurASP: Embracing Neural Networks into Answer Set Programming". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/243.

Abstract:
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.
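The core idea, reading a network's softmax output as a probability distribution over atomic facts and weighing the symbolic models accordingly, can be illustrated with a toy sketch. The brute-force enumeration below is an assumption for illustration only; NeurASP itself relies on answer set solving rather than explicit enumeration, and the distributions are made up:

```python
# Toy version of the NeurASP reading: each neural atom comes with a
# probability distribution over its possible values (the softmax output),
# and the probability of a symbolic query is the total probability mass
# of the value assignments that satisfy it.

from itertools import product

def query_prob(dists, holds):
    """dists: one {value: probability} dict per neural atom.
    holds: predicate over one chosen value per atom."""
    total = 0.0
    for combo in product(*(d.items() for d in dists)):
        values = [v for v, _ in combo]
        weight = 1.0
        for _, p in combo:
            weight *= p  # atoms assumed independent
        if holds(values):
            total += weight
    return total

# Two 'images' classified over digits 0-2 (made-up softmax outputs).
d1 = {0: 0.1, 1: 0.8, 2: 0.1}
d2 = {0: 0.2, 1: 0.1, 2: 0.7}
# Symbolic rule: the two digits sum to 3.
print(query_prob([d1, d2], lambda vs: sum(vs) == 3))  # ≈ 0.57
```

In NeurASP proper, the `holds` predicate is replaced by the stable models of an answer set program, which also lets rules constrain the network during training.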
2

Lacrama, Dan L., Loredana Ileana Viscu, and Cornelia Victoria Anghel Drugarin. "Artificial vs. natural neural networks". In 2016 13th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2016. http://dx.doi.org/10.1109/neurel.2016.7800093.

3

Zheng, Shengjie, Lang Qian, Pingsheng Li, Chenggang He, Xiaoqi Qin, and Xiaojian Li. "An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence". In 8th International Conference on Artificial Intelligence (ARIN 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121010.

Abstract:
Stemming from the rapid development of artificial intelligence, which has achieved widespread success in pattern recognition, robotics, and bioinformatics, neuroscience is also making tremendous progress. A kind of spiking neural network with biological interpretability is gradually receiving wide attention, and this kind of neural network is also regarded as one of the directions toward general artificial intelligence. This review summarizes the basic properties of artificial neural networks as well as spiking neural networks. Our focus is on the biological background and theoretical basis of spiking neurons, different neuronal models, and the connectivity of neural circuits. We also review the mainstream neural network learning mechanisms and network architectures. This review hopes to attract different researchers and advance the development of brain intelligence and artificial intelligence.
4

Atashgahi, Zahra. "Cost-effective Artificial Neural Networks". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/810.

Abstract:
Deep neural networks (DNNs) have gained huge attention over the last several years due to their promising results in various tasks. However, due to their large model size and over-parameterization, they are recognized as being computationally demanding. Therefore, deep learning models are not well-suited to applications with limited computational resources and battery life. Current solutions to reduce computation costs mainly focus on inference efficiency while being resource-intensive during training. This Ph.D. research aims to address these challenges by developing cost-effective neural networks that can achieve decent performance on various complex tasks using minimum computational resources during training and inference of the network.
5

Pryor, Connor, Charles Dickens, Eriq Augustine, Alon Albalak, William Yang Wang, and Lise Getoor. "NeuPSL: Neural Probabilistic Soft Logic". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/461.

Abstract:
In this paper, we introduce Neural Probabilistic Soft Logic (NeuPSL), a novel neuro-symbolic (NeSy) framework that unites state-of-the-art symbolic reasoning with the low-level perception of deep neural networks. To model the boundary between neural and symbolic representations, we propose a family of energy-based models, NeSy Energy-Based Models, and show that they are general enough to include NeuPSL and many other NeSy approaches. Using this framework, we show how to seamlessly integrate neural and symbolic parameter learning and inference in NeuPSL. Through an extensive empirical evaluation, we demonstrate the benefits of using NeSy methods, achieving upwards of 30% improvement over independent neural network models. On a well-established NeSy task, MNIST-Addition, NeuPSL demonstrates its joint reasoning capabilities by outperforming existing NeSy approaches by up to 10% in low-data settings. Furthermore, NeuPSL achieves a 5% boost in performance over state-of-the-art NeSy methods in a canonical citation network task with up to a 40 times speed up.
6

Xie, Xuan, Kristian Kersting, and Daniel Neider. "Neuro-Symbolic Verification of Deep Neural Networks". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/503.

Abstract:
Formal verification has emerged as a powerful approach to ensure the safety and reliability of deep neural networks. However, current verification tools are limited to only a handful of properties that can be expressed as first-order constraints over the inputs and output of a network. While adversarial robustness and fairness fall under this category, many real-world properties (e.g., "an autonomous vehicle has to stop in front of a stop sign") remain outside the scope of existing verification technology. To mitigate this severe practical restriction, we introduce a novel framework for verifying neural networks, named neuro-symbolic verification. The key idea is to use neural networks as part of the otherwise logical specification, enabling the verification of a wide variety of complex, real-world properties, including the one above. A defining feature of our framework is that it can be implemented on top of existing verification infrastructure for neural networks, making it easily accessible to researchers and practitioners.
7

Barua, Susamma. "Optical and systolic implementation of an artificial neural network". In Photonic Neural Networks. SPIE, 1993. http://dx.doi.org/10.1117/12.983197.

8

Caulfield, H. John. "Artificial neural/chemical networks". In International Symposium on Optical Science and Technology, edited by Bruno Bosacchi, David B. Fogel, and James C. Bezdek. SPIE, 2001. http://dx.doi.org/10.1117/12.448345.

9

Badgero, M. L. "Digitizing artificial neural networks". In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374850.

10

Benmaghnia, Hanane, Matthieu Martel, and Yassamine Seladji. "Fixed-Point Code Synthesis for Neural Networks". In 6th International Conference on Artificial Intelligence, Soft Computing and Applications (AISCA 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120202.

Abstract:
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous driving cars, etc. A problem is that these critical systems often have limited computing resources. Often, they use fixed-point arithmetic for its many advantages (rapidity, compatibility with small memory devices, etc.). In this article, a new technique is introduced to tune the formats (precision) of already trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only. The new optimized neural network computes the output with fixed-point numbers without modifying the accuracy up to a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring the respect of the threshold for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we do a preliminary analysis of our floating-point neural network to determine the worst cases, then we generate a system of linear constraints among integer variables that we can solve by linear programming. The solution of this system is the new fixed-point format of each neuron. The experimental results obtained show the efficiency of our method, which can ensure that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
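The underlying fixed-point idea can be illustrated with a much simpler sketch than the paper's linear-programming formulation: represent each value as an integer scaled by 2^-f and search for the smallest fractional width f that keeps the error of a dot product under a user threshold. All function names and numbers below are assumptions, not the authors' synthesis procedure:

```python
# Illustrative core of fixed-point tuning: weights and inputs become
# integers scaled by 2**f, a dot product is computed with integer
# arithmetic, and we search for the smallest fractional width f whose
# quantization error stays under the user's threshold.

def to_fixed(x, f):
    return round(x * (1 << f))  # integer representation with f fractional bits

def fixed_dot(int_ws, int_xs, f):
    acc = sum(w * x for w, x in zip(int_ws, int_xs))  # carries 2f fractional bits
    return acc / (1 << (2 * f))  # back to a real number, for error checking only

def smallest_format(weights, xs, threshold, max_f=31):
    exact = sum(w * x for w, x in zip(weights, xs))
    for f in range(1, max_f + 1):
        iw = [to_fixed(w, f) for w in weights]
        ix = [to_fixed(x, f) for x in xs]
        if abs(fixed_dot(iw, ix, f) - exact) <= threshold:
            return f
    raise ValueError("no format meets the threshold")

weights = [0.30, -1.25, 0.75]
inputs = [0.5, 0.25, -0.125]
print(smallest_format(weights, inputs, threshold=1e-3))  # → 8
```

The paper's contribution is to choose such formats per neuron via linear programming, with the error bound guaranteed over the whole input range rather than for a single input vector as in this sketch.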

Reports on the topic "Artificial neural networks"

1

Keller, P. E. Artificial neural networks in medicine. Office of Scientific and Technical Information (OSTI), July 1994. http://dx.doi.org/10.2172/10162484.

2

Dawes, Robert L. BIOMASSCOMP: Artificial Neural Networks and Neurocomputers. Fort Belvoir, VA: Defense Technical Information Center, September 1988. http://dx.doi.org/10.21236/ada200902.

3

Sgurev, Vassil. Artificial Neural Networks as a Network Flow with Capacities. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, September 2018. http://dx.doi.org/10.7546/crabs.2018.09.12.

4

Teeter, Corinne. Short Term Plasticity for Artificial Neural Networks. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/2004879.

5

Blough, D. K., and K. K. Anderson. A comparison of artificial neural networks and statistical analyses. Office of Scientific and Technical Information (OSTI), January 1994. http://dx.doi.org/10.2172/10146489.

6

Amidi, Erfan. A Search for top quark using artificial neural networks. Office of Scientific and Technical Information (OSTI), February 1996. http://dx.doi.org/10.2172/1156358.

7

Baud-Bovy, Gabriel. A gaze-addressing communication system using artificial neural networks. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6142.

8

Powell, Jr., James Estes. Learning Memento archive routing with Character-based Artificial Neural Networks. Office of Scientific and Technical Information (OSTI), October 2018. http://dx.doi.org/10.2172/1477616.

9

Gonzalez Pibernat, Gabriel, and Miguel Mascaró Portells. Dynamic structure of single-layer neural networks. Fundación Avanza, May 2023. http://dx.doi.org/10.60096/fundacionavanza/2392022.

Abstract:
This article examines the practical applications of single hidden layer neural networks in machine learning and artificial intelligence. They have been used in diverse fields, such as finance, medicine, and autonomous vehicles, due to their simplicity.
10

Waqas, Muhammad Talha. Synaptic Symmetry: Exploring Similarities in Neural Connections between Human Brain and Artificial Neural Networks. ResearchHub Technologies, Inc., February 2024. http://dx.doi.org/10.55277/researchhub.c4dckln9.
