Journal articles on the topic "Artificial neural networks"


Consult the top 50 journal articles for your research on the topic "Artificial neural networks".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, if the relevant details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1

N, Vikram. "Artificial Neural Networks". International Journal of Research Publication and Reviews 4, no. 4 (23.04.2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.
2

Yashchenko, V. O. "Artificial brain. Biological and artificial neural networks, advantages, disadvantages, and prospects for development". Mathematical machines and systems 2 (2023): 3–17. http://dx.doi.org/10.34121/1028-9763-2023-2-3-17.

Abstract:
The article analyzes the problem of developing artificial neural networks within the framework of creating an artificial brain. The structure and functions of the biological brain are considered. The brain performs many functions, such as controlling the organism, coordinating movements, processing information, memory, thinking, attention, and regulating emotional states, and consists of billions of neurons interconnected by a multitude of connections in a biological neural network. The structure and functions of biological neural networks are discussed, and their advantages and disadvantages compared to artificial neural networks are described in detail. Biological neural networks solve various complex tasks in real time that are still inaccessible to artificial networks, such as the simultaneous perception of information from different sources, including vision, hearing, smell, taste, and touch, and the recognition and analysis of signals from the environment with simultaneous decision-making in known and uncertain situations. Overall, despite all the advantages of biological neural networks, artificial intelligence continues to progress rapidly and is gradually gaining ground on the biological brain. It is assumed that in the future, artificial neural networks will be able to approach the capabilities of the human brain and even surpass them. The neural networks of the human brain are compared with artificial neural networks. Deep neural networks, their training, and their use in various applications are described, and their advantages and disadvantages are discussed in detail. Possible ways to develop this direction further are analyzed. The Human Brain Project, aimed at creating a computer model that imitates the functions of the human brain, and the advanced artificial intelligence project ChatGPT are briefly considered. To develop an artificial brain, a new type of neural network is proposed: neural-like growing networks, whose structure and functions are similar to those of natural biological networks. A simplified scheme of the structure of an artificial brain based on a neural-like growing network is presented in the paper.
3

Sarwar, Abid. "Diagnosis of hyperglycemia using Artificial Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-1 (31.12.2017): 606–10. http://dx.doi.org/10.31142/ijtsrd7045.

4

CVS, Rajesh, and M. Padmanabham. "Basics and Features of Artificial Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-2 (28.02.2018): 1065–69. http://dx.doi.org/10.31142/ijtsrd9578.
5

Partridge, Derek, Sarah Rae, and Wen Jia Wang. "Artificial Neural Networks". Journal of the Royal Society of Medicine 92, no. 7 (July 1999): 385. http://dx.doi.org/10.1177/014107689909200723.
6

Moore, K. L. "Artificial neural networks". IEEE Potentials 11, no. 1 (February 1992): 23–28. http://dx.doi.org/10.1109/45.127697.
7

Dalton, J., and A. Deshmane. "Artificial neural networks". IEEE Potentials 10, no. 2 (April 1991): 33–36. http://dx.doi.org/10.1109/45.84097.
8

Yoon, Youngohc, and Lynn Peterson. "Artificial neural networks". ACM SIGMIS Database: the DATABASE for Advances in Information Systems 23, no. 1 (March 1992): 55–57. http://dx.doi.org/10.1145/134347.134362.
9

MAKHOUL, JOHN. "Artificial Neural Networks". INVESTIGATIVE RADIOLOGY 25, no. 6 (June 1990): 748–50. http://dx.doi.org/10.1097/00004424-199006000-00027.
10

Watt, R. C., E. S. Maslana, M. J. Navabi, and K. C. Mylrea. "ARTIFICIAL NEURAL NETWORKS". Anesthesiology 77, Supplement (September 1992): A506. http://dx.doi.org/10.1097/00000542-199209001-00506.
11

Allinson, N. M. "Artificial Neural Networks". Electronics & Communications Engineering Journal 2, no. 6 (1990): 249. http://dx.doi.org/10.1049/ecej:19900051.
12

Drew, Philip J., and John R. T. Monson. "Artificial neural networks". Surgery 127, no. 1 (January 2000): 3–11. http://dx.doi.org/10.1067/msy.2000.102173.
13

McGuire, Tim. "Artificial neural networks". Computer Audit Update 1997, no. 7 (July 1997): 25–29. http://dx.doi.org/10.1016/s0960-2593(97)84495-3.
14

Fulcher, John. "Artificial neural networks". Computer Standards & Interfaces 16, no. 3 (July 1994): 183–84. http://dx.doi.org/10.1016/0920-5489(94)90010-8.
15

Buyse, Marc, and Pascal Piedbois. "Artificial neural networks". Lancet 350, no. 9085 (October 1997): 1175. http://dx.doi.org/10.1016/s0140-6736(05)63819-6.
16

Drew, Philip, Leonardo Bottaci, Graeme S. Duthie, and John RT Monson. "Artificial neural networks". Lancet 350, no. 9085 (October 1997): 1175–76. http://dx.doi.org/10.1016/s0140-6736(05)63820-2.
17

Piuri, Vincenzo, and Cesare Alippi. "Artificial neural networks". Journal of Systems Architecture 44, no. 8 (April 1998): 565–67. http://dx.doi.org/10.1016/s1383-7621(97)00063-5.
18

Roy, Asim. "Artificial neural networks". ACM SIGKDD Explorations Newsletter 1, no. 2 (January 2000): 33–38. http://dx.doi.org/10.1145/846183.846192.
19

Griffith, John. "Artificial Neural Networks:". Medical Decision Making 20, no. 2 (April 2000): 243–44. http://dx.doi.org/10.1177/0272989x0002000210.
20

Hopfield, J. J. "Artificial neural networks". IEEE Circuits and Devices Magazine 4, no. 5 (September 1988): 3–10. http://dx.doi.org/10.1109/101.8118.
21

Dayhoff, Judith E., and James M. DeLeo. "Artificial neural networks". Cancer 91, S8 (2001): 1615–35. http://dx.doi.org/10.1002/1097-0142(20010415)91:8+<1615::aid-cncr1175>3.0.co;2-l.
22

Raol, Jitendra R., and Sunilkumar S. Mankame. "Artificial neural networks". Resonance 1, no. 2 (February 1996): 47–54. http://dx.doi.org/10.1007/bf02835699.
23

Wang, Jun. "Artificial neural networks versus natural neural networks". Decision Support Systems 11, no. 5 (June 1994): 415–29. http://dx.doi.org/10.1016/0167-9236(94)90016-7.
24

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures". European Journal of Engineering Research and Science 5, no. 2 (13.02.2020): 171–77. http://dx.doi.org/10.24018/ejers.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance is not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of their underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network’s connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
25

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures". European Journal of Engineering and Technology Research 5, no. 2 (13.02.2020): 171–77. http://dx.doi.org/10.24018/ejeng.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance is not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of their underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network’s connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
26

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks". Mathematics in Applied Sciences and Engineering 5, no. 2 (4.06.2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, for medical diagnosis, neural network models are broadly considered since they are ideal for recognizing different kinds of diseases including autism, cancer, tumor lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN) have been explored. These networks are applied to several benchmarking data collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
27

JORGENSEN, THOMAS D., BARRY P. HAYNES, and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES". International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks using a measure of the neural complexity of the network. This measure is used to determine which connections should be pruned. It computes the information-theoretic complexity of a neural network, which is similar to, yet different from, measures used in previous research on pruning. The proposed method shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, magnitude-based pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network due to the reduced dimensionality of the network.
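The magnitude-based pruning baseline that the abstract compares against can be sketched in a few lines; the `magnitude_prune` helper below is an illustrative NumPy stand-in, not the paper's implementation:

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero the given fraction of weights with the smallest absolute value."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    # the k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.50, -0.01], [0.20, -0.90]])
print(magnitude_prune(w, 0.25))  # only the -0.01 entry is zeroed
```

The complexity-based method in the paper replaces the `|w|` ranking with an information-theoretic criterion; the surrounding remove-and-retrain loop stays the same.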
28

Demiralay, Raziye. "Estimating of student success with artificial neural networks". New Trends and Issues Proceedings on Humanities and Social Sciences 03, no. 07 (23.07.2017): 21–27. http://dx.doi.org/10.18844/prosoc.v2i7.1980.
29

Klyuchko, O. M. "APPLICATION OF ARTIFICIAL NEURAL NETWORKS METHOD IN BIOTECHNOLOGY". Biotechnologia Acta 10, no. 4 (August 2017): 5–13. http://dx.doi.org/10.15407/biotech10.04.005.
30

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance". Journal of Computing Research and Innovation 7, no. 1 (30.03.2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students’ academic performance is very essential to produce high-quality students. The main goal is to continuously help students to increase their ability in the learning process and to help educators as well in improving their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using Artificial Neural Network (ANN). The secondary data from 382 mathematics students from UCI Machine Learning Repository Data Sets used to train the neural networks. The neural network model built using nntool. Two inputs are used which are the first and the second period grade while one target output is used which is the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks known as Network1, Network2 and Network3. Three types of training functions have been selected in this study, which are Levenberg-Marquardt (TRAINLM), Gradient descent with momentum (TRAINGDM) and Gradient descent with adaptive learning rate (TRAINGDA). Each training function will be compared based on Performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable function in predicting mathematics students’ performance because it has a higher correlation coefficient and a lower Performance value.
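Two of the three training functions compared above (TRAINGDM and TRAINGDA) are gradient-descent variants; Levenberg-Marquardt is more involved and omitted here. A rough sketch of the comparison idea on synthetic stand-in grade data (hypothetical parameters, not the study's nntool setup):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-in for the grade data: two period grades -> final grade
G = rng.uniform(0, 20, size=(100, 2))
y = 0.3 * G[:, 0] + 0.6 * G[:, 1] + rng.normal(scale=0.5, size=100)

def mse(w, b):
    return np.mean((G @ w + b - y) ** 2)

def train(rule, epochs=500, lr=1e-3, beta=0.9):
    w, b = np.zeros(2), 0.0
    vw, vb = np.zeros(2), 0.0
    step = lr
    prev = mse(w, b)
    for _ in range(epochs):
        err = G @ w + b - y
        gw, gb = 2 * G.T @ err / len(y), 2 * err.mean()
        if rule == "gd":       # plain gradient descent
            w, b = w - lr * gw, b - lr * gb
        elif rule == "gdm":    # TRAINGDM-style: momentum
            vw, vb = beta * vw + lr * gw, beta * vb + lr * gb
            w, b = w - vw, b - vb
        elif rule == "gda":    # TRAINGDA-style: adaptive learning rate
            w_new, b_new = w - step * gw, b - step * gb
            if mse(w_new, b_new) < prev:   # accept the step and grow it
                w, b, step = w_new, b_new, step * 1.05
            else:                          # reject the step and shrink it
                step *= 0.7
        prev = mse(w, b)
    return mse(w, b)

for rule in ("gd", "gdm", "gda"):
    print(rule, round(train(rule), 3))
```

As in the study, the point of interest is the final error each update rule reaches under an identical budget of epochs.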
31

Еськов, В. М., М. А. Филатов, Г. В. Газя, and Н. Ф. Стратан. "Artificial Intellect with Artificial Neural Networks". Успехи кибернетики / Russian Journal of Cybernetics, no. 3 (11.10.2021): 44–52. http://dx.doi.org/10.51790/2712-9942-2021-2-3-6.

Abstract:
Currently, there is no single definition of artificial intelligence. A categorization of the tasks to be solved by artificial intelligence systems is needed. The paper proposes a task categorization for artificial neural networks (in terms of obtaining subjectively and objectively new information). The advantages of such neural networks (non-algorithmizable problems) are shown, and a class of systems (third-type biosystems) that fundamentally cannot be studied by statistical methods (and by science as a whole) is presented. To study such biosystems (with unique samples), it is suggested to use artificial neural networks able to perform system synthesis (search for order parameters). Nowadays such problems are solved by humans through heuristics, and this process cannot be modeled by existing artificial intelligence systems.
32

Basu, Abhirup, Pinaki Bisaws, Sarmi Ghosh, and Debarshi Datta. "Reconfigurable Artificial Neural Networks". International Journal of Computer Applications 179, no. 6 (15.12.2017): 5–8. http://dx.doi.org/10.5120/ijca2017915961.
33

Chen, Tin-Chih, Cheng-Li Liu, and Hong-Dar Lin. "Advanced Artificial Neural Networks". Algorithms 11, no. 7 (10.07.2018): 102. http://dx.doi.org/10.3390/a11070102.
34

Xin Yao. "Evolving artificial neural networks". Proceedings of the IEEE 87, no. 9 (1999): 1423–47. http://dx.doi.org/10.1109/5.784219.
35

YAO, XIN. "EVOLUTIONARY ARTIFICIAL NEURAL NETWORKS". International Journal of Neural Systems 04, no. 03 (September 1993): 203–22. http://dx.doi.org/10.1142/s0129065793000171.

Abstract:
Evolutionary artificial neural networks (EANNs) can be considered as a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GAs). This paper distinguishes among three levels of evolution in EANNs, i.e. the evolution of connection weights, architectures and learning rules. It first reviews each kind of evolution in detail and then analyses major issues related to each kind of evolution. It is shown in the paper that although there is a lot of work on the evolution of connection weights and architectures, research on the evolution of learning rules is still in its early stages. Interactions among different levels of evolution are far from being understood. It is argued in the paper that the evolution of learning rules and its interactions with other levels of evolution play a vital role in EANNs.
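The first of the three levels, evolving connection weights under a fixed architecture, can be sketched with a simple elitist evolutionary search on XOR; the network size, population size, and mutation scale below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def predict(p):
    """2-3-1 feed-forward net; p holds all 13 weights and biases."""
    W1, b1 = p[:6].reshape(2, 3), p[6:9]
    W2, b2 = p[9:12], p[12]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def loss(p):
    return np.mean((predict(p) - y) ** 2)

pop = rng.normal(size=(50, 13))           # initial random population
for _ in range(200):
    scores = np.array([loss(p) for p in pop])
    elite = pop[np.argsort(scores)[:10]]  # keep the 10 fittest unchanged
    # offspring are mutated copies of randomly chosen elite parents
    children = elite[rng.integers(0, 10, size=40)] + rng.normal(scale=0.3, size=(40, 13))
    pop = np.vstack([elite, children])

best = min(pop, key=loss)
print(round(loss(best), 4))
```

Evolving architectures or learning rules, the other two levels the paper distinguishes, would additionally mutate the network topology or the update rule rather than only the weight vector.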
36

Boutsinas, B., and M. N. Vrahatis. "Artificial nonmonotonic neural networks". Artificial Intelligence 132, no. 1 (October 2001): 1–38. http://dx.doi.org/10.1016/s0004-3702(01)00126-6.
37

Schaub, Nicholas J., and Nathan Hotaling. "Assessing Efficiency in Artificial Neural Networks". Applied Sciences 13, no. 18 (14.09.2023): 10286. http://dx.doi.org/10.3390/app131810286.

Abstract:
The purpose of this work was to develop an assessment technique and subsequent metrics that help in developing an understanding of the balance between network size and task performance in simple model networks. Here, exhaustive tests on simple model neural networks and datasets are used to validate both the assessment approach and the metrics derived from it. The concept of neural layer state space is introduced as a simple mechanism for understanding layer utilization, where a state is the on/off activation state of all neurons in a layer for an input. Neural efficiency is computed from state space to measure neural layer utilization, and a second metric called the artificial intelligence quotient (aIQ) was created to balance neural network performance and neural efficiency. To study aIQ and neural efficiency, two simple neural networks were trained on MNIST: a fully connected network (LeNet-300-100) and a convolutional neural network (LeNet-5). The LeNet-5 network with the highest aIQ was 2.32% less accurate but contained 30,912 times fewer parameters than the network with the highest accuracy. Both batch normalization and dropout layers were found to increase neural efficiency. Finally, networks with a high aIQ are shown to be resistant to memorization and overtraining as well as capable of learning proper digit classification with an accuracy of 92.51%, even when 75% of the class labels are randomized. These results demonstrate the utility of aIQ and neural efficiency as metrics for determining the performance and size of a small network using exemplar data.
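The layer state-space idea can be illustrated with a small sketch; here neural efficiency is assumed to be the Shannon entropy of the observed on/off states divided by the layer's maximum entropy in bits (the paper's exact normalization may differ):

```python
import numpy as np

def neural_efficiency(activations):
    """Entropy of observed on/off layer states, normalized by layer width.

    activations: (n_samples, n_neurons) array of pre-activation values.
    Each row is binarized to an on/off state; efficiency is the Shannon
    entropy of the state distribution divided by n_neurons (the maximum
    possible entropy in bits), giving a value in [0, 1].
    """
    states = (activations > 0).astype(int)
    _, counts = np.unique(states, axis=0, return_counts=True)
    probs = counts / counts.sum()
    entropy = -np.sum(probs * np.log2(probs))
    return entropy / states.shape[1]

# A layer stuck in one state carries no information; a layer that visits
# all of its states uniformly uses its full representational capacity.
print(neural_efficiency(np.array([[1.0, 1.0]] * 4)))
print(neural_efficiency(np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])))
```

In the paper this per-layer utilization is then traded off against task accuracy to form the aIQ score.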
38

Yashchenko, V. O. "Neural-like growing networks in the development of general intelligence. Neural-like growing networks (P. II)". Mathematical machines and systems 1 (2023): 3–29. http://dx.doi.org/10.34121/1028-9763-2023-1-3-29.

Abstract:
This article is devoted to the development of artificial general intelligence (AGI) based on a new type of neural networks, "neural-like growing networks". It consists of two parts. The first part was published in No. 4, 2022, and describes an artificial neural-like element (artificial neuron) whose functionality is as close as possible to that of a biological neuron. The artificial neural-like element is the main element in building neural-like growing networks. The second part deals with the structures and functions of artificial and natural neural networks. The paper proposes a new approach for creating neural-like growing networks as a means of developing AGI that is as close as possible to the natural intelligence of a person. The intelligence of humans and living organisms is formed by their nervous system. According to I. P. Pavlov's definition, the main mechanism of higher nervous activity is the reflex activity of the nervous system. In the nerve cell, the main storage of unconditioned reflexes is the deoxyribonucleic acid (DNA) molecule. The article describes ribosomal protein synthesis, which contributes to the implementation of unconditioned reflexes and the formation of conditioned reflexes as the basis for learning in biological organisms. The first part of the work shows that the structure and functions of ribosomes almost completely coincide with the structure and functions of the Turing machine. Turing invented this machine to prove the fundamental (theoretical) possibility of constructing arbitrarily complex algorithms from extremely simple operations, with the operations themselves performed automatically. A striking analogy arises here: nature created DNA and the ribosome to build complex algorithms for creating biological objects and their communication with each other and with the external environment, and ribosomal protein synthesis is carried out by many ribosomes at the same time. It was concluded that the nerve cells of the brain are analog multi-machine complexes, ultra-fast molecular supercomputers with an unusually simple analog programming device.
39

Næs, Tormod, Knut Kvaal, Tomas Isaksson, and Charles Miller. "Artificial Neural Networks in Multivariate Calibration". Journal of Near Infrared Spectroscopy 1, no. 1 (January 1993): 1–11. http://dx.doi.org/10.1255/jnirs.1.

Abstract:
This paper is about the use of artificial neural networks for multivariate calibration. We discuss network architecture and estimation as well as the relationship between neural networks and related linear and non-linear techniques. A feed-forward network is tested on two applications of near infrared spectroscopy, both of which have been treated previously and which have indicated non-linear features. In both cases, the network gives more precise prediction results than the linear calibration method of PCR.
40

Walczak, Steven. "Artificial Neural Network Research in Online Social Networks". International Journal of Virtual Communities and Social Networking 10, no. 4 (October 2018): 1–15. http://dx.doi.org/10.4018/ijvcsn.2018100101.

Abstract:
Artificial neural networks are a machine learning method ideal for solving classification and prediction problems using Big Data. Online social networks and virtual communities provide a plethora of data. Artificial neural networks have been used to determine the emotional meaning of virtual community posts, determine age and sex of users, classify types of messages, and make recommendations for additional content. This article reviews and examines the utilization of artificial neural networks in online social network and virtual community research. An artificial neural network to predict the maintenance of online social network “friends” is developed to demonstrate the applicability of artificial neural networks for virtual community research.
41

Mahdi, Qasim Abbood, Andrii Shyshatskyi, Oleksandr Symonenko, Nadiia Protas, Oleksandr Trotsko, Volodymyr Kyvliuk, Artem Shulhin, Petro Steshenko, Eduard Ostapchuk, and Tetiana Holenkovska. "Development of a method for training artificial neural networks for intelligent decision support systems". Eastern-European Journal of Enterprise Technologies 1, no. 9(115) (28.02.2022): 35–44. http://dx.doi.org/10.15587/1729-4061.2022.251637.

Abstract:
We developed a method of training artificial neural networks for intelligent decision support systems. A distinctive feature of the proposed method consists in training not only the synaptic weights of an artificial neural network, but also the type and parameters of the membership function. In case of impossibility to ensure a given quality of functioning of artificial neural networks by training the parameters of an artificial neural network, the architecture of artificial neural networks is trained. The choice of architecture, type and parameters of the membership function is based on the computing resources of the device and taking into account the type and amount of information coming to the input of the artificial neural network. Another distinctive feature of the developed method is that no preliminary calculation data are required to calculate the input data. The development of the proposed method is due to the need for training artificial neural networks for intelligent decision support systems, in order to process more information, while making unambiguous decisions. According to the results of the study, this training method provides on average 10–18 % higher efficiency of training artificial neural networks and does not accumulate training errors. This method will allow training artificial neural networks by training the parameters and architecture, determining effective measures to improve the efficiency of artificial neural networks. This method will allow reducing the use of computing resources of decision support systems, developing measures to improve the efficiency of training artificial neural networks, increasing the efficiency of information processing in artificial neural networks.
42

Hertz, J. A., T. W. Kjær, E. N. Eskandar, and B. J. Richmond. "MEASURING NATURAL NEURAL PROCESSING WITH ARTIFICIAL NEURAL NETWORKS". International Journal of Neural Systems 03, supp01 (January 1992): 91–103. http://dx.doi.org/10.1142/s0129065792000425.

Abstract:
We show how to use artificial neural networks as a quantitative tool in studying real neuronal processing in the monkey visual system. Training a network to classify neuronal signals according to the stimulus that elicited them permits us to calculate the information transmitted by these signals. We illustrate this for neurons in the primary visual cortex with measurements of the information transmitted about visual stimuli and for cells in inferior temporal cortex with measurements of information about behavioral context. For the latter neurons we also illustrate how artificial neural networks can be used to model the computation they do.
43

Wongsathan, Rati, and Pasit Pothong. "Heart Disease Classification Using Artificial Neural Networks". Applied Mechanics and Materials 781 (August 2015): 624–27. http://dx.doi.org/10.4028/www.scientific.net/amm.781.624.

Abstract:
Neural Networks (NNs) have emerged as an important tool for classification in the field of decision making. The main objective of this work is to design the structure and select the optimized parameters of a neural network to implement a heart disease classifier. Three types of neural networks, i.e. the Multi-layered Perceptron Neural Network (MLP-NN), Radial Basis Function Neural Network (RBF-NN), and Generalized Regression Neural Network (GR-NN), have been used to test the performance of heart disease classification. The classification accuracy obtained by the RBF-NN was considerably higher than that of the MLP-NN and GR-NN. The accuracy is very promising compared with previously reported results for other types of neural networks.
44

Anil, Nihal. "A Review on NEAT and Other Reinforcement Algorithms in Robotics". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (28.03.2024): 1–5. http://dx.doi.org/10.55041/ijsrem29667.

Abstract:
Artificial neural networks, or ANNs, find widespread use in real-world scenarios ranging from pattern recognition to robotics control. Choosing an architecture (which includes a model for the neurons) and a learning algorithm are important decisions when building a neural network for a particular purpose. Evolutionary search techniques provide an automated method for resolving these issues. Genetic algorithms are used to artificially evolve neural networks in a process known as neuroevolution (NE), which has shown great promise in solving challenging reinforcement learning problems. This study offers a thorough review of the state-of-the-art techniques for evolving artificial neural networks (ANNs), with a focus on optimizing their performance-enhancing capabilities. Key words: artificial neural networks, neuroevolution, NeuroEvolution of Augmenting Topologies, brain-computer interface.
45

Abdul Khader Jilani, Saudagar, i Syed Abdul Sattar. "JPEG Image Compression Using FPGA with Artificial Neural Networks". International Journal of Engineering and Technology 2, nr 3 (2010): 252–57. http://dx.doi.org/10.7763/ijet.2010.v2.129.

Full text source
46

Fernández, Ana Isabel Velasco, Ricardo José Rejas Muslera, Juan Padilla Fernández-Vega and María Isabel Cepeda González. "Bankruptcy Prediction Models: A Bet on Artificial Neural Networks". Global Journal For Research Analysis 3, no. 2 (15.06.2012): 48–50. http://dx.doi.org/10.15373/22778160/february2014/16.

Full text source
47

Çelik, Şenol. "MODELING AVOCADO PRODUCTION IN MEXICO WITH ARTIFICIAL NEURAL NETWORKS". Engineering and Technology Journal 07, no. 10 (31.10.2022): 1605–9. http://dx.doi.org/10.47191/etj/v7i10.08.

Full text source
Abstract:
An Artificial Neural Network (ANN) model was created in this research to estimate and predict the amount of avocado production in Mexico. In the development of the ANN model, the year (a time variable) was used as the input parameter, and the avocado production amount (tons) was used as the output parameter. The research data cover avocado production in Mexico for the 1961-2020 period. Mean Squared Error (MSE) and Mean Absolute Error (MAE) statistics were calculated using the hyperbolic tangent activation function to determine the appropriate model. The ANN model is a network architecture with 12 processing elements in the hidden layer (12-12-1) and the Levenberg-Marquardt back-propagation algorithm. The amount of avocado production was estimated for 2021 to 2030 with the ANN. As a result of the prediction, the amount of avocado production for the 2021-2030 period is expected to be between 2,410,741 and 2,502,302 tons.
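The paper's yearly production series is not reproduced here, so the sketch below substitutes a synthetic exponential trend, and trains the year-to-production network with plain full-batch gradient descent rather than the Levenberg-Marquardt algorithm the authors used. The 12-unit tanh hidden layer matches the abstract's description; every other number is an illustrative assumption.

```python
import numpy as np

# Synthetic stand-in for the 1961-2020 production series (tons).
rng = np.random.default_rng(2)
years = np.arange(1961, 2021, dtype=float)
prod = 20_000 * np.exp(0.06 * (years - 1961)) + rng.normal(0, 5_000, len(years))

# Normalise input (year) and target (tons) for stable training.
x = (years - years.mean()) / years.std()
t = (prod - prod.mean()) / prod.std()

# One tanh hidden layer of 12 units, linear output, full-batch gradient descent.
H = 12
W1 = rng.normal(scale=0.5, size=H); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H); b2 = 0.0
lr = 0.05
for _ in range(5000):
    h = np.tanh(np.outer(x, W1) + b1)        # (n, H) hidden activations
    yhat = h @ W2 + b2
    err = yhat - t
    gW2 = h.T @ err / len(x); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)    # back-prop through tanh
    gW1 = gh.T @ x / len(x); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(np.outer(x, W1) + b1) @ W2 + b2 - t) ** 2)

# Extrapolate to 2021-2030 and map the output back to tons.
xf = (np.arange(2021, 2031) - years.mean()) / years.std()
forecast = (np.tanh(np.outer(xf, W1) + b1) @ W2 + b2) * prod.std() + prod.mean()
```

Normalising both the year and the production figures keeps the tanh units out of saturation during training; the forecast is mapped back to tons at the end.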
48

Maksutova, K., N. Saparkhojayev and Dusmat Zhamangarin. "DEVELOPMENT OF AN ONTOLOGICAL MODEL OF DEEP LEARNING NEURAL NETWORKS". Bulletin D. Serikbayev of EKTU, no. 1 (March 2024): 190–201. http://dx.doi.org/10.51885/1561-4212_2024_1_190.

Full text source
Abstract:
This research paper examines the challenges and prospects associated with the integration of artificial neural networks and knowledge bases. The focus is on leveraging this integration to address practical problems. The paper explores the development, training, and integration of artificial neural networks, emphasizing their adaptation to knowledge bases. This adaptation involves processes such as integration, communication, representation of ontological structures, and interpretation by the knowledge base of the artificial neural network's representation through input and output. The paper also delves into establishing an intellectual environment conducive to the development, training, and integration of adapted artificial neural networks with knowledge bases. The knowledge base embedded in an artificial neural network is constructed using a homogeneous semantic network, and knowledge processing employs a multi-agent approach. The representation of artificial neural networks and their specifications within a unified semantic model of knowledge representation is detailed, encompassing text-based specifications in the language of knowledge representation with theoretical semantics. The models shared with the knowledge base include dynamic and other types that vary in their capabilities for knowledge representation. Furthermore, the paper analyzes approaches to creating artificial neural networks across various libraries of the high-level programming language Python. It explores techniques for developing artificial neural networks within the Python development environment, investigating the key features and functions of these libraries. A comparative analysis of neural networks created in object-oriented programming languages is provided, along with the development of an ontological model for deep learning neural networks.
49

Shih, Yi-Fan, Yu-Ren Wang, Shih-Shian Wei and Chin-Wen Chen. "Improving Non-Destructive Test Results Using Artificial Neural Networks". International Journal of Machine Learning and Computing 5, no. 6 (December 2015): 480–83. http://dx.doi.org/10.18178/ijmlc.2015.5.6.557.

Full text source
50

Matveeva, Nataliya. "ARTIFICIAL NEURAL NETWORKS IN MEDICAL DIAGNOSIS". System technologies 2, no. 133 (1.03.2021): 33–41. http://dx.doi.org/10.34185/1562-9945-2-133-2021-05.

Full text source
Abstract:
Artificial neural networks find many uses in medical diagnosis applications. The article examines cases of renopathy in type 2 diabetes. The data are symptoms of the disease. A multilayer perceptron (MLP) network is used as a classifier to distinguish between a sick and a healthy person. The results of applying artificial neural networks to diagnose renopathy based on selected symptoms show the network's ability to recognize diseases corresponding to human symptoms. Various parameters, structures, and learning algorithms of neural networks were tested in the modeling process.