
Journal articles on the topic "Artificial neural networks"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the top 50 journal articles for your research on the topic "Artificial neural networks".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is available in the metadata.

Browse journal articles from many scientific disciplines and compile an accurate bibliography.

1

N, Vikram. "Artificial Neural Networks". International Journal of Research Publication and Reviews 4, no. 4 (April 23, 2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.

2

Yashchenko, V. O. "Artificial brain. Biological and artificial neural networks, advantages, disadvantages, and prospects for development". Mathematical machines and systems 2 (2023): 3–17. http://dx.doi.org/10.34121/1028-9763-2023-2-3-17.

Abstract:
The article analyzes the problem of developing artificial neural networks within the framework of creating an artificial brain. The structure and functions of the biological brain are considered. The brain performs many functions, such as controlling the organism, coordinating movements, processing information, memory, thinking, attention, and regulating emotional states, and consists of billions of neurons interconnected by a multitude of connections in a biological neural network. The structure and functions of biological neural networks are discussed, and their advantages and disadvantages compared to artificial neural networks are described in detail. Biological neural networks solve various complex tasks in real time that are still inaccessible to artificial networks, such as the simultaneous perception of information from different sources, including vision, hearing, smell, taste, and touch, and the recognition and analysis of signals from the environment with simultaneous decision-making in known and uncertain situations. Overall, despite all the advantages of biological neural networks, artificial intelligence continues to progress rapidly and gradually gain ground on the biological brain. It is assumed that in the future, artificial neural networks will be able to approach the capabilities of the human brain and even surpass it. The neural networks of the human brain are compared with artificial neural networks. Deep neural networks, their training, and their use in various applications are described, and their advantages and disadvantages are discussed in detail. Possible ways of further developing this direction are analyzed. The Human Brain Project, aimed at creating a computer model that imitates the functions of the human brain, and the advanced artificial intelligence project ChatGPT are briefly considered.
To develop an artificial brain, a new type of neural network is proposed – neural-like growing networks, the structure and functions of which are similar to natural biological networks. A simplified scheme of the structure of an artificial brain based on a neural-like growing network is presented in the paper.
3

Sarwar, Abid. "Diagnosis of hyperglycemia using Artificial Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-1 (December 31, 2017): 606–10. http://dx.doi.org/10.31142/ijtsrd7045.

4

CVS, Rajesh, and M. Padmanabham. "Basics and Features of Artificial Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-2 (February 28, 2018): 1065–69. http://dx.doi.org/10.31142/ijtsrd9578.

5

Partridge, Derek, Sarah Rae, and Wen Jia Wang. "Artificial Neural Networks". Journal of the Royal Society of Medicine 92, no. 7 (July 1999): 385. http://dx.doi.org/10.1177/014107689909200723.

6

Moore, K. L. "Artificial neural networks". IEEE Potentials 11, no. 1 (February 1992): 23–28. http://dx.doi.org/10.1109/45.127697.

7

Dalton, J., and A. Deshmane. "Artificial neural networks". IEEE Potentials 10, no. 2 (April 1991): 33–36. http://dx.doi.org/10.1109/45.84097.

8

Yoon, Youngohc, and Lynn Peterson. "Artificial neural networks". ACM SIGMIS Database: the DATABASE for Advances in Information Systems 23, no. 1 (March 1992): 55–57. http://dx.doi.org/10.1145/134347.134362.

9

MAKHOUL, JOHN. "Artificial Neural Networks". INVESTIGATIVE RADIOLOGY 25, no. 6 (June 1990): 748–50. http://dx.doi.org/10.1097/00004424-199006000-00027.

10

Watt, R. C., E. S. Maslana, M. J. Navabi, and K. C. Mylrea. "ARTIFICIAL NEURAL NETWORKS". Anesthesiology 77, Supplement (September 1992): A506. http://dx.doi.org/10.1097/00000542-199209001-00506.

11

Allinson, N. M. "Artificial Neural Networks". Electronics & Communications Engineering Journal 2, no. 6 (1990): 249. http://dx.doi.org/10.1049/ecej:19900051.

12

Drew, Philip J., and John R. T. Monson. "Artificial neural networks". Surgery 127, no. 1 (January 2000): 3–11. http://dx.doi.org/10.1067/msy.2000.102173.

13

McGuire, Tim. "Artificial neural networks". Computer Audit Update 1997, no. 7 (July 1997): 25–29. http://dx.doi.org/10.1016/s0960-2593(97)84495-3.

14

Fulcher, John. "Artificial neural networks". Computer Standards & Interfaces 16, no. 3 (July 1994): 183–84. http://dx.doi.org/10.1016/0920-5489(94)90010-8.

15

Buyse, Marc, and Pascal Piedbois. "Artificial neural networks". Lancet 350, no. 9085 (October 1997): 1175. http://dx.doi.org/10.1016/s0140-6736(05)63819-6.

16

Drew, Philip, Leonardo Bottaci, Graeme S. Duthie, and John RT Monson. "Artificial neural networks". Lancet 350, no. 9085 (October 1997): 1175–76. http://dx.doi.org/10.1016/s0140-6736(05)63820-2.

17

Piuri, Vincenzo, and Cesare Alippi. "Artificial neural networks". Journal of Systems Architecture 44, no. 8 (April 1998): 565–67. http://dx.doi.org/10.1016/s1383-7621(97)00063-5.

18

Roy, Asim. "Artificial neural networks". ACM SIGKDD Explorations Newsletter 1, no. 2 (January 2000): 33–38. http://dx.doi.org/10.1145/846183.846192.

19

Griffith, John. "Artificial Neural Networks". Medical Decision Making 20, no. 2 (April 2000): 243–44. http://dx.doi.org/10.1177/0272989x0002000210.

20

Hopfield, J. J. "Artificial neural networks". IEEE Circuits and Devices Magazine 4, no. 5 (September 1988): 3–10. http://dx.doi.org/10.1109/101.8118.

21

Dayhoff, Judith E., and James M. DeLeo. "Artificial neural networks". Cancer 91, S8 (2001): 1615–35. http://dx.doi.org/10.1002/1097-0142(20010415)91:8+<1615::aid-cncr1175>3.0.co;2-l.

22

Raol, Jitendra R., and Sunilkumar S. Mankame. "Artificial neural networks". Resonance 1, no. 2 (February 1996): 47–54. http://dx.doi.org/10.1007/bf02835699.

23

Wang, Jun. "Artificial neural networks versus natural neural networks". Decision Support Systems 11, no. 5 (June 1994): 415–29. http://dx.doi.org/10.1016/0167-9236(94)90016-7.

24

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures". European Journal of Engineering Research and Science 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejers.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of their underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network’s connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
25

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures". European Journal of Engineering and Technology Research 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejeng.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of their underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network’s connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
26

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks". Mathematics in Applied Sciences and Engineering 5, no. 2 (June 4, 2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, for medical diagnosis, neural network models are broadly considered since they are ideal for recognizing different kinds of diseases including autism, cancer, tumors, lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN) have been explored. These networks are applied to several benchmark datasets collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
27

JORGENSEN, THOMAS D., BARRY P. HAYNES, and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES". International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of dimensionality of the network.
28

Demiralay, Raziye. "Estimating of student success with artificial neural networks". New Trends and Issues Proceedings on Humanities and Social Sciences 03, no. 07 (July 23, 2017): 21–27. http://dx.doi.org/10.18844/prosoc.v2i7.1980.

29

Klyuchko, O. M. "APPLICATION OF ARTIFICIAL NEURAL NETWORKS METHOD IN BIOTECHNOLOGY". Biotechnologia Acta 10, no. 4 (August 2017): 5–13. http://dx.doi.org/10.15407/biotech10.04.005.

30

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance". Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students’ academic performance is essential to produce high-quality students. The main goal is to continuously help students to increase their ability in the learning process and to help educators as well in improving their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students, taken from the UCI Machine Learning Repository, were used to train the neural networks. The neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while one target output is used, the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks known as Network1, Network2, and Network3. Three types of training functions have been selected in this study: Levenberg-Marquardt (TRAINLM), Gradient descent with momentum (TRAINGDM), and Gradient descent with adaptive learning rate (TRAINGDA). Each training function is compared based on Performance value, correlation coefficient, gradient, and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students’ performance because it has a higher correlation coefficient and a lower Performance value.
31

Еськов, В. М., М. А. Филатов, Г. В. Газя, and Н. Ф. Стратан. "Artificial Intellect with Artificial Neural Networks". Успехи кибернетики / Russian Journal of Cybernetics, no. 3 (October 11, 2021): 44–52. http://dx.doi.org/10.51790/2712-9942-2021-2-3-6.

Abstract:
Currently, there is no single definition of artificial intelligence. A categorization of the tasks to be solved by artificial intelligence systems is needed. The paper proposes a task categorization for artificial neural networks (in terms of obtaining subjectively and objectively new information). The advantages of such neural networks (non-algorithmizable problems) are shown, and a class of systems (third-type biosystems) which cannot be studied by statistical methods (and all of science) is presented. To study such biosystems (with unique samples), it is suggested to use artificial neural networks able to perform system synthesis (search for order parameters). Nowadays such problems are solved by humans through heuristics, and this process cannot be modeled by the existing artificial intelligence systems.
32

Basu, Abhirup, Pinaki Bisaws, Sarmi Ghosh, and Debarshi Datta. "Reconfigurable Artificial Neural Networks". International Journal of Computer Applications 179, no. 6 (December 15, 2017): 5–8. http://dx.doi.org/10.5120/ijca2017915961.

33

Chen, Tin-Chih, Cheng-Li Liu, and Hong-Dar Lin. "Advanced Artificial Neural Networks". Algorithms 11, no. 7 (July 10, 2018): 102. http://dx.doi.org/10.3390/a11070102.

34

Yao, Xin. "Evolving artificial neural networks". Proceedings of the IEEE 87, no. 9 (1999): 1423–47. http://dx.doi.org/10.1109/5.784219.

35

YAO, XIN. "EVOLUTIONARY ARTIFICIAL NEURAL NETWORKS". International Journal of Neural Systems 04, no. 03 (September 1993): 203–22. http://dx.doi.org/10.1142/s0129065793000171.

Abstract:
Evolutionary artificial neural networks (EANNs) can be considered as a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GAs). This paper distinguishes among three levels of evolution in EANNs, i.e. the evolution of connection weights, architectures and learning rules. It first reviews each kind of evolution in detail and then analyses major issues related to each kind of evolution. It is shown in the paper that although there is a lot of work on the evolution of connection weights and architectures, research on the evolution of learning rules is still in its early stages. Interactions among different levels of evolution are far from being understood. It is argued in the paper that the evolution of learning rules and its interactions with other levels of evolution play a vital role in EANNs.
36

Boutsinas, B., and M. N. Vrahatis. "Artificial nonmonotonic neural networks". Artificial Intelligence 132, no. 1 (October 2001): 1–38. http://dx.doi.org/10.1016/s0004-3702(01)00126-6.

37

Schaub, Nicholas J., and Nathan Hotaling. "Assessing Efficiency in Artificial Neural Networks". Applied Sciences 13, no. 18 (September 14, 2023): 10286. http://dx.doi.org/10.3390/app131810286.

Abstract:
The purpose of this work was to develop an assessment technique and subsequent metrics that help in developing an understanding of the balance between network size and task performance in simple model networks. Here, exhaustive tests on simple model neural networks and datasets are used to validate both the assessment approach and the metrics derived from it. The concept of neural layer state space is introduced as a simple mechanism for understanding layer utilization, where a state is the on/off activation state of all neurons in a layer for an input. Neural efficiency is computed from state space to measure neural layer utilization, and a second metric called the artificial intelligence quotient (aIQ) was created to balance neural network performance and neural efficiency. To study aIQ and neural efficiency, two simple neural networks were trained on MNIST: a fully connected network (LeNet-300-100) and a convolutional neural network (LeNet-5). The LeNet-5 network with the highest aIQ was 2.32% less accurate but contained 30,912 times fewer parameters than the network with the highest accuracy. Both batch normalization and dropout layers were found to increase neural efficiency. Finally, networks with a high aIQ are shown to be resistant to memorization and overtraining as well as capable of learning proper digit classification with an accuracy of 92.51%, even when 75% of the class labels are randomized. These results demonstrate the utility of aIQ and neural efficiency as metrics for determining the performance and size of a small network using exemplar data.
38

Yashchenko, V. O. "Neural-like growing networks in the development of general intelligence. Neural-like growing networks (P. II)". Mathematical machines and systems 1 (2023): 3–29. http://dx.doi.org/10.34121/1028-9763-2023-1-3-29.

Abstract:
This article is devoted to the development of artificial general intelligence (AGI) based on a new type of neural networks – “neural-like growing networks”. It consists of two parts. The first one was published in N4, 2022, and describes an artificial neural-like element (artificial neuron) in terms of its functionality, which is as close as possible to a biological neuron. An artificial neural-like element is the main element in building neural-like growing networks. The second part deals with the structures and functions of artificial and natural neural networks. The paper proposes a new approach for creating neural-like growing networks as a means of developing AGI that is as close as possible to the natural intelligence of a person. The intelligence of man and living organisms is formed by their nervous system. According to I.P. Pavlov's definition, the main mechanism of higher nervous activity is the reflex activity of the nervous system. In the nerve cell, the main storage of unconditioned reflexes is the deoxyribonucleic acid (DNA) molecule. The article describes ribosomal protein synthesis that contributes to the implementation of unconditioned reflexes and the formation of conditioned reflexes as the basis for learning in biological objects. The first part of the work shows that the structure and functions of ribosomes almost completely coincide with the structure and functions of the Turing machine. Turing invented this machine to prove the fundamental (theoretical) possibility of constructing arbitrarily complex algorithms from extremely simple operations, with the operations themselves performed automatically. Here arises a stunning analogy: nature created DNA and the ribosome to build complex algorithms for creating biological objects and their communication with each other and with the external environment, and ribosomal protein synthesis is carried out by many ribosomes at the same time.
It was concluded that the nerve cells of the brain are analog multi-machine complexes – ultra-fast molecular supercomputers with an unusually simple analog programming device.
39

Næs, Tormod, Knut Kvaal, Tomas Isaksson, and Charles Miller. "Artificial Neural Networks in Multivariate Calibration". Journal of Near Infrared Spectroscopy 1, no. 1 (January 1993): 1–11. http://dx.doi.org/10.1255/jnirs.1.

Abstract:
This paper is about the use of artificial neural networks for multivariate calibration. We discuss network architecture and estimation as well as the relationship between neural networks and related linear and non-linear techniques. A feed-forward network is tested on two applications of near infrared spectroscopy, both of which have been treated previously and which have indicated non-linear features. In both cases, the network gives more precise prediction results than the linear calibration method of PCR.
40

Walczak, Steven. "Artificial Neural Network Research in Online Social Networks". International Journal of Virtual Communities and Social Networking 10, no. 4 (October 2018): 1–15. http://dx.doi.org/10.4018/ijvcsn.2018100101.

Abstract:
Artificial neural networks are a machine learning method ideal for solving classification and prediction problems using Big Data. Online social networks and virtual communities provide a plethora of data. Artificial neural networks have been used to determine the emotional meaning of virtual community posts, determine age and sex of users, classify types of messages, and make recommendations for additional content. This article reviews and examines the utilization of artificial neural networks in online social network and virtual community research. An artificial neural network to predict the maintenance of online social network “friends” is developed to demonstrate the applicability of artificial neural networks for virtual community research.
41

Mahdi, Qasim Abbood, Andrii Shyshatskyi, Oleksandr Symonenko, Nadiia Protas, Oleksandr Trotsko, Volodymyr Kyvliuk, Artem Shulhin, Petro Steshenko, Eduard Ostapchuk, and Tetiana Holenkovska. "Development of a method for training artificial neural networks for intelligent decision support systems". Eastern-European Journal of Enterprise Technologies 1, no. 9(115) (February 28, 2022): 35–44. http://dx.doi.org/10.15587/1729-4061.2022.251637.

Abstract:
We developed a method of training artificial neural networks for intelligent decision support systems. A distinctive feature of the proposed method is that it trains not only the synaptic weights of an artificial neural network but also the type and parameters of the membership function. If a given quality of functioning cannot be ensured by training the parameters of an artificial neural network, the architecture of the network is trained as well. The choice of architecture, type, and parameters of the membership function is based on the computing resources of the device and takes into account the type and amount of information arriving at the input of the artificial neural network. Another distinctive feature of the developed method is that no preliminary calculation data are required to process the input data. The development of the proposed method is driven by the need to train artificial neural networks for intelligent decision support systems so as to process more information while making unambiguous decisions. According to the results of the study, this training method provides on average 10–18 % higher efficiency of training artificial neural networks and does not accumulate training errors. The method allows training artificial neural networks through their parameters and architecture, determining effective measures to improve their efficiency, reducing the use of computing resources of decision support systems, and increasing the efficiency of information processing in artificial neural networks.
42

Hertz, J. A., T. W. Kjær, E. N. Eskandar, and B. J. Richmond. "MEASURING NATURAL NEURAL PROCESSING WITH ARTIFICIAL NEURAL NETWORKS". International Journal of Neural Systems 03, supp01 (January 1992): 91–103. http://dx.doi.org/10.1142/s0129065792000425.

Abstract:
We show how to use artificial neural networks as a quantitative tool in studying real neuronal processing in the monkey visual system. Training a network to classify neuronal signals according to the stimulus that elicited them permits us to calculate the information transmitted by these signals. We illustrate this for neurons in the primary visual cortex with measurements of the information transmitted about visual stimuli and for cells in inferior temporal cortex with measurements of information about behavioral context. For the latter neurons we also illustrate how artificial neural networks can be used to model the computation they do.
43

Wongsathan, Rati, and Pasit Pothong. "Heart Disease Classification Using Artificial Neural Networks". Applied Mechanics and Materials 781 (August 2015): 624–27. http://dx.doi.org/10.4028/www.scientific.net/amm.781.624.

Abstract:
Neural Networks (NNs) have emerged as an important tool for classification in the field of decision making. The main objective of this work is to design the structure and select the optimized parameters of the neural networks to implement the heart disease classifier. Three types of neural networks, i.e., Multi-layered Perceptron Neural Network (MLP-NN), Radial Basis Function Neural Network (RBF-NN), and Generalized Regression Neural Network (GR-NN), have been used to test the performance of heart disease classification. The classification accuracy obtained by RBF-NN was markedly higher than that of MLP-NN and GR-NN, respectively. The accuracy is very promising compared with previously reported results for other types of neural networks.
44

Anil, Nihal. "A Review on NEAT and Other Reinforcement Algorithms in Robotics". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (March 28, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem29667.

Abstract:
Artificial neural networks, or ANNs, find widespread use in real-world scenarios ranging from pattern recognition to robotics control. Choosing an architecture (which includes a model for the neurons) and a learning algorithm are important decisions when building a neural network for a particular purpose. An automated method for resolving these issues is provided by evolutionary search techniques. Genetic algorithms are used to artificially evolve neural networks in a process known as neuroevolution (NE), which has shown great promise in solving challenging reinforcement learning problems. This study offers a thorough review of the state-of-the-art techniques for evolving artificial neural networks (ANNs), with a focus on optimizing their performance-enhancing capabilities. Key Words: Artificial neural networks, Neuroevolution, NeuroEvolution of Augmenting Topologies, Brain computer interface.
APA, Harvard, Vancouver, ISO, and other styles
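The review above concerns neuroevolution, where evolutionary search replaces gradient-based training. A minimal sketch of the idea in plain Python is a (1+1) hill-climber that mutates the weights of a fixed 2-2-1 network on the XOR task; real NEAT also evolves the network topology, which this toy deliberately omits, and all constants here are illustrative choices.

```python
import math
import random

def net_output(x, w):
    # Fixed 2-2-1 feedforward net with tanh units; only the 9 weights evolve.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative squared error over the four XOR cases (higher is better, max 0).
    return -sum((net_output(x, w) - y) ** 2 for x, y in XOR)

random.seed(0)
initial = [random.uniform(-1, 1) for _ in range(9)]
best = list(initial)
for _ in range(2000):
    candidate = [w + random.gauss(0, 0.3) for w in best]
    if fitness(candidate) > fitness(best):  # keep only strict improvements
        best = candidate
```

Because a candidate replaces the incumbent only when strictly fitter, the final fitness is guaranteed to be at least the initial one; population-based neuroevolution (as in NEAT) generalizes this single-parent loop with crossover and speciation.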
45

Abdul Khader Jilani, Saudagar, and Syed Abdul Sattar. "JPEG Image Compression Using FPGA with Artificial Neural Networks". International Journal of Engineering and Technology 2, no. 3 (2010): 252–57. http://dx.doi.org/10.7763/ijet.2010.v2.129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Fernández, Ana Isabel Velasco, Ricardo José Rejas Muslera, Juan Padilla Fernández-Vega, and María Isabel Cepeda González. "Bankruptcy Prediction Models: A Bet on Artificial Neural Networks". Global Journal For Research Analysis 3, no. 2 (15 June 2012): 48–50. http://dx.doi.org/10.15373/22778160/february2014/16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Çelik, Şenol. "MODELING AVOCADO PRODUCTION IN MEXICO WITH ARTIFICIAL NEURAL NETWORKS". Engineering and Technology Journal 07, no. 10 (31 October 2022): 1605–9. http://dx.doi.org/10.47191/etj/v7i10.08.

Full text
Abstract:
An Artificial Neural Network (ANN) model was created in this research to estimate and predict the amount of avocado production in Mexico. In the development of the ANN model, the year (a time variable) was used as the input parameter, and the avocado production amount (tons) as the output parameter. The research data cover avocado production in Mexico for the 1961-2020 period. Mean Squared Error (MSE) and Mean Absolute Error (MAE) statistics were calculated using the hyperbolic tangent activation function to determine the appropriate model. The ANN model is a network architecture with 12 hidden layers, 12 processing elements (12-12-1), and the Levenberg-Marquardt backpropagation algorithm. The amount of avocado production for 2021 to 2030 was estimated with the ANN. As a result of the prediction, the amount of avocado production for the 2021-2030 period is expected to range between 2,410,741 and 2,502,302 tons.
APA, Harvard, Vancouver, ISO, and other styles
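The entry above selects its forecasting model with MSE and MAE. Both metrics are one-liners; the production figures below are illustrative toy numbers, not the paper's data.

```python
def mse(actual, predicted):
    # Mean Squared Error: average of squared residuals.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    # Mean Absolute Error: average of absolute residuals.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy production figures (tons) against hypothetical model output.
actual = [2_300_000, 2_350_000, 2_400_000]
predicted = [2_310_000, 2_340_000, 2_420_000]
print(mse(actual, predicted))  # 200000000.0
print(mae(actual, predicted))  # 13333.33...
```

MSE punishes large residuals quadratically, so the single 20,000-ton miss dominates it, while MAE weights every ton of error equally — which is why papers such as this one typically report both.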
48

Maksutova, K., N. Saparkhojayev, and Dusmat Zhamangarin. "DEVELOPMENT OF AN ONTOLOGICAL MODEL OF DEEP LEARNING NEURAL NETWORKS". Bulletin D. Serikbayev of EKTU, no. 1 (March 2024): 190–201. http://dx.doi.org/10.51885/1561-4212_2024_1_190.

Full text
Abstract:
This research paper examines the challenges and prospects associated with the integration of artificial neural networks and knowledge bases. The focus is on leveraging this integration to address practical problems. The paper explores the development, training, and integration of artificial neural networks, emphasizing their adaptation to knowledge bases. This adaptation involves processes such as integration, communication, representation of ontological structures, and interpretation by the knowledge base of the artificial neural network's input and output representations. The paper also discusses establishing an intellectual environment conducive to the development, training, and integration of adapted artificial neural networks with knowledge bases. The knowledge base embedded in an artificial neural network is constructed using a homogeneous semantic network, and knowledge processing employs a multi-agent approach. The representation of artificial neural networks and their specifications within a unified semantic model of knowledge representation is detailed, encompassing text-based specifications in a knowledge-representation language with theoretical semantics. The models shared with the knowledge base include dynamic and other types that vary in their capabilities for knowledge representation. Furthermore, the paper analyzes approaches to creating artificial neural networks across various libraries of the high-level programming language Python, investigating the key features and functions of these libraries. A comparative analysis of neural networks created in object-oriented programming languages is provided, along with the development of an ontological model for deep learning neural networks.
APA, Harvard, Vancouver, ISO, and other styles
49

Shih, Yi-Fan, Yu-Ren Wang, Shih-Shian Wei, and Chin-Wen Chen. "Improving Non-Destructive Test Results Using Artificial Neural Networks". International Journal of Machine Learning and Computing 5, no. 6 (December 2015): 480–83. http://dx.doi.org/10.18178/ijmlc.2015.5.6.557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Matveeva, Nataliya. "ARTIFICIAL NEURAL NETWORKS IN MEDICAL DIAGNOSIS". System technologies 2, no. 133 (1 March 2021): 33–41. http://dx.doi.org/10.34185/1562-9945-2-133-2021-05.

Full text
Abstract:
Artificial neural networks are finding many uses in medical diagnosis. The article examines cases of renopathy in type 2 diabetes; the data are disease symptoms. A multilayer perceptron (MLP) network is used as a classifier to distinguish between a sick and a healthy person. The results of applying artificial neural networks to diagnose renopathy based on selected symptoms show the network's ability to recognize diseases corresponding to human symptoms. Various parameters, structures, and learning algorithms of neural networks were tested in the modeling process.
APA, Harvard, Vancouver, ISO, and other styles
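The diagnosis setup in the entry above — symptom vector in, sick/healthy out — comes down to a single MLP forward pass. A plain-Python sketch with hand-picked weights (untrained, purely illustrative, not the paper's model):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    # One hidden layer of sigmoid units; the sigmoid output in (0, 1) is
    # read as the probability that symptom vector x indicates disease.
    hidden = [
        sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + b)
        for row, b in zip(w_hidden, b_hidden)
    ]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Three binary symptoms, two hidden units; all weights are toy values.
w_hidden = [[2.0, 2.0, 2.0], [-2.0, -2.0, -2.0]]
b_hidden = [-1.0, 1.0]
w_out, b_out = [4.0, -4.0], -1.0

p_sick = mlp_forward([1, 0, 1], w_hidden, b_hidden, w_out, b_out)
p_healthy = mlp_forward([0, 0, 0], w_hidden, b_hidden, w_out, b_out)
print(p_sick > 0.5, p_healthy > 0.5)  # True False
```

In practice the weights would be fitted by backpropagation on labeled patient records, and the 0.5 decision threshold tuned against the costs of false negatives versus false positives.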