Journal articles on the topic "Neural networks (Computer science)"

To see other types of publications on this topic, follow the link: Neural networks (Computer science).

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Neural networks (Computer science)".

Next to every item in the list of references there is an "Add to bibliography" button. Click it, and we will automatically format the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Cottrell, G. W. "COMPUTER SCIENCE: New Life for Neural Networks." Science 313, no. 5786 (July 28, 2006): 454–55. http://dx.doi.org/10.1126/science.1129813.

2

Li, Xiao Guang. "Research on the Development and Applications of Artificial Neural Networks." Applied Mechanics and Materials 556-562 (May 2014): 6011–14. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.6011.

Abstract:
Intelligent control is a class of control techniques that use various AI computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms. In computer science and related fields, artificial neural networks are computational models inspired by animals’ central nervous systems (in particular the brain) that are capable of machine learning and pattern recognition. They are usually presented as systems of interconnected “neurons” that can compute values from inputs by feeding information through the network. Like other machine learning methods, neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition.
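The abstract above describes neural networks as systems of interconnected "neurons" that compute values by feeding inputs through the network. As a minimal illustration of that idea (a generic sketch, not taken from the cited paper), a two-layer feedforward pass can be written in a few lines of NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: 3 inputs, 4 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    """Feed an input vector through the network layer by layer."""
    h = sigmoid(W1 @ x + b1)      # hidden activations
    return sigmoid(W2 @ h + b2)   # output activations

print(forward(np.array([0.5, -1.0, 2.0])))
```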
3

Schöneburg, E. "Neural networks hunt computer viruses." Neurocomputing 2, no. 5-6 (July 1991): 243–48. http://dx.doi.org/10.1016/0925-2312(91)90027-9.

4

Turega, M. A. "Neural Networks." Computer Journal 35, no. 3 (June 1, 1992): 290. http://dx.doi.org/10.1093/comjnl/35.3.290.

5

Widrow, Bernard, David E. Rumelhart, and Michael A. Lehr. "Neural networks." Communications of the ACM 37, no. 3 (March 1994): 93–105. http://dx.doi.org/10.1145/175247.175257.

6

Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial neural networks training acceleration through network science strategies." Soft Computing 24, no. 23 (September 9, 2020): 17787–95. http://dx.doi.org/10.1007/s00500-020-05302-y.

Abstract:
The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, the training of deeper neural networks for stable and accurate models translates into artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study where we explored the acceleration effects obtained by enforcing, in turn, scale freeness, small worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent studies (conducted independently) where a million-node ANN was trained on non-specialized laptops. Encouraged by those results, our study is now focused on some tunable parameters, to pursue a further acceleration effect. We show that, although optimal parameter tuning is unfeasible, due to the high non-linearity of ANN problems, we can actually come up with a set of useful guidelines that lead to speed-ups in practical cases. We find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter (ζ) to relatively low values.
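The "revised fraction parameter (ζ)" mentioned above controls how large a share of connections is rewired while training a sparse network. The paper's exact procedure is not reproduced here; the following sketch only illustrates the general prune-and-regrow idea under that assumption, with ζ as the fraction of weakest active weights replaced per step:

```python
import numpy as np

def rewire(weights, mask, zeta=0.1, rng=np.random.default_rng(1)):
    """Prune the zeta fraction of smallest-magnitude active weights
    and regrow the same number of connections at random positions."""
    active = np.flatnonzero(mask)
    n_rewire = int(zeta * active.size)
    # Prune: zero out the weakest active connections.
    weakest = active[np.argsort(np.abs(weights.flat[active]))[:n_rewire]]
    mask.flat[weakest] = False
    weights.flat[weakest] = 0.0
    # Regrow: activate randomly chosen inactive positions.
    inactive = np.flatnonzero(~mask)
    reborn = rng.choice(inactive, size=n_rewire, replace=False)
    mask.flat[reborn] = True
    weights.flat[reborn] = rng.normal(scale=0.01, size=n_rewire)
    return weights, mask

W = np.random.default_rng(0).normal(size=(8, 8))
M = np.random.default_rng(0).random((8, 8)) < 0.3   # sparse connectivity mask
W *= M
W, M = rewire(W, M, zeta=0.1)
```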
7

Kumar, G. Prem, and P. Venkataram. "Network restoration using recurrent neural networks." International Journal of Network Management 8, no. 5 (September 1998): 264–73. http://dx.doi.org/10.1002/(sici)1099-1190(199809/10)8:5<264::aid-nem298>3.0.co;2-o.

8

Yen, Gary G., and Haiming Lu. "Hierarchical Rank Density Genetic Algorithm for Radial-Basis Function Neural Network Design." International Journal of Computational Intelligence and Applications 03, no. 03 (September 2003): 213–32. http://dx.doi.org/10.1142/s1469026803000975.

Abstract:
In this paper, we propose a genetic algorithm based design procedure for a radial-basis function neural network. A Hierarchical Rank Density Genetic Algorithm (HRDGA) is used to evolve the neural network's topology and parameters simultaneously. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies highlighted in the literature. In addition, the rank-density based fitness assignment technique is used to optimize the performance and topology of the evolved neural network to deal with the conflict between the training performance and network complexity. Instead of producing a single optimal solution, HRDGA provides a set of near-optimal neural networks to the designers so that they can have more flexibility for the final decision-making based on certain preferences. In terms of searching for a near-complete set of candidate networks with high performances, the networks designed by the proposed algorithm prove to be competitive, or even superior, to three other traditional radial-basis function networks for predicting Mackey–Glass chaotic time series.
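For orientation, a radial-basis function network of the kind whose topology and parameters HRDGA evolves computes a weighted sum of Gaussian responses. A generic forward pass (hypothetical sizes, not the evolved design from the paper):

```python
import numpy as np

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Output of a radial-basis function network:
    a weighted sum of Gaussian responses to the input."""
    dists = np.linalg.norm(centers - x, axis=1)              # distance to each center
    activations = np.exp(-(dists ** 2) / (2 * widths ** 2))  # Gaussian basis functions
    return weights @ activations + bias

rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 2))   # 5 hidden units, 2-dimensional input
widths = np.full(5, 0.8)
weights = rng.normal(size=5)
print(rbf_forward(np.array([0.3, -0.7]), centers, widths, weights))
```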
9

SIEGELMANN, HAVA T. "ON NIL: THE SOFTWARE CONSTRUCTOR OF NEURAL NETWORKS." Parallel Processing Letters 06, no. 04 (December 1996): 575–82. http://dx.doi.org/10.1142/s0129626496000510.

Abstract:
Analog recurrent neural networks have attracted much attention lately as powerful tools of automatic learning. However, they are not as popular in industry as should be justified by their usefulness. The lack of any programming tool for networks, and their vague internal representation, leave the networks for the use of experts only. We propose a way to make the neural networks friendly to users by formally defining a high level language, called Neural Information Processing Programming Language, which is rich enough to express any computer algorithm or rule-based system. We show how to compile a NIL program into a network which computes exactly as the original program and requires the same computation/convergence time and physical size. Allowing for a natural neural evolution after the construction, the neural networks are both capable of dynamical continuous learning and represent any given symbolic knowledge. Thus, the language along with its compiler may be thought of as the ultimate bridge from symbolic to analog computation.
10

Cerf, Vinton G. "On neural networks." Communications of the ACM 61, no. 7 (June 25, 2018): 7. http://dx.doi.org/10.1145/3224195.

11

Wong, Eugene. "Stochastic neural networks." Algorithmica 6, no. 1-6 (June 1991): 466–78. http://dx.doi.org/10.1007/bf01759054.

12

Yashchenko, V. A. "Multidimensional neural growing networks and computer intelligence." Cybernetics and Systems Analysis 30, no. 4 (July 1994): 505–17. http://dx.doi.org/10.1007/bf02366560.

13

Xu, Xianfeng, Xinwei Wang, Weilong Luo, Hao Wang, and Yuting Sun. "Efficient Computer-Generated Holography Based on Mixed Linear Convolutional Neural Networks." Applied Sciences 12, no. 9 (April 21, 2022): 4177. http://dx.doi.org/10.3390/app12094177.

Abstract:
Imaging based on computer-generated holography using traditional methods has the problems of poor quality and long calculation cycles. However, recently, the development of deep learning has provided new ideas for this problem. Here, an efficient computer-generated holography (ECGH) method is proposed for computational holographic imaging. This method can be used for computational holographic imaging based on mixed linear convolutional neural networks (MLCNN). By introducing fully connected layers in the network, the suggested design is more powerful and efficient at information mining and information exchange. Using the ECGH, the pure phase image required can be obtained after calculating the custom light field. Compared with traditional computed holography based on deep learning, the method used here can reduce the number of network parameters needed for network training by about two-thirds while obtaining a high-quality image in the reconstruction, and the network structure has the potential to solve various image-reconstruction problems.
14

Chartuni, Andrés, and José Márquez. "Multi-Classifier of DDoS Attacks in Computer Networks Built on Neural Networks." Applied Sciences 11, no. 22 (November 11, 2021): 10609. http://dx.doi.org/10.3390/app112210609.

Abstract:
The great commitment in different areas of computer science for the study of computer networks used to fulfill specific and major business tasks has generated a need for their maintenance and optimal operability. Distributed denial of service (DDoS) is a frequent threat to computer networks because of the disruption it causes to their services. This disruption results in the instability and/or inoperability of the network. There are different classes of DDoS attacks, each with a different mode of operation, so detecting them has become a difficult task for network monitoring and control systems. The objective of this work is based on the exploration and choice of a set of data that represents DDoS attack events, on their treatment in a preprocessing phase, and later, the generation of a model of sequential neural networks of multi-class classification. This is done to identify and classify the various types of DDoS attacks. The result was compared with previous works treating the same dataset used herein. We compared their classification method against ours. During this research, the CIC DDoS2019 dataset was used. Previous works carried out with this dataset proposed a binary classification approach, whereas our approach is based on multi-classification. Our proposed model was capable of achieving around 94% in metrics such as precision, accuracy, recall and F1 score. The added value of multiclass classification during this work is identified and compared with binary classifications using the models presented in the previous works.
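As a rough illustration of the kind of sequential multi-class classifier the abstract describes (the feature count, class count and layer sizes below are assumptions, not the authors' architecture), a Keras sketch might look like this:

```python
import numpy as np
from tensorflow import keras

n_features, n_classes = 80, 12   # assumed sizes, not those of the CIC DDoS2019 preprocessing

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(n_classes, activation="softmax"),  # one probability per attack class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for preprocessed flow records.
X = np.random.rand(256, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```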
15

Moholkar, K. P., et al. "Visual Question Answering using Convolutional Neural Networks." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 1S (April 11, 2021): 170–75. http://dx.doi.org/10.17762/turcomat.v12i1s.1602.

Abstract:
The ability of a computer system to be able to understand surroundings and elements and to think like a human being to process the information has always been the major point of focus in the field of Computer Science. One of the ways to achieve this artificial intelligence is Visual Question Answering. Visual Question Answering (VQA) is a trained system which can answer the questions associated to a given image in Natural Language. VQA is a generalized system which can be used in any image-based scenario with adequate training on the relevant data. This is achieved with the help of Neural Networks, particularly Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN). In this study, we have compared different approaches of VQA, out of which we are exploring CNN based model. With the continued progress in the field of Computer Vision and Question answering system, Visual Question Answering is becoming the essential system which can handle multiple scenarios with their respective data.
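A typical VQA model of the kind discussed above fuses a CNN image encoder with an RNN question encoder before predicting an answer class. The following Keras sketch shows that fusion pattern only; input sizes, vocabulary and answer set are hypothetical, not those of the compared models:

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, n_answers = 10000, 1000   # assumed vocabulary and answer-set sizes

# CNN branch encodes the image into a feature vector.
image_in = keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# RNN branch encodes the tokenized question.
question_in = keras.Input(shape=(20,), dtype="int32")
q = layers.Embedding(vocab_size, 128)(question_in)
q = layers.LSTM(128)(q)

# Fuse both modalities and predict an answer class.
merged = layers.concatenate([x, q])
out = layers.Dense(n_answers, activation="softmax")(merged)

vqa_model = keras.Model([image_in, question_in], out)
vqa_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```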
16

Zaharakis, Ioannis D., and Achilles D. Kameas. "Modeling spiking neural networks." Theoretical Computer Science 395, no. 1 (April 2008): 57–76. http://dx.doi.org/10.1016/j.tcs.2007.11.002.

17

Saiful, Muhammad, Lalu Muhammad Samsu, and Fathurrahman Fathurrahman. "Sistem Deteksi Infeksi COVID-19 Pada Hasil X-Ray Rontgen menggunakan Algoritma Convolutional Neural Network (CNN)." Infotek : Jurnal Informatika dan Teknologi 4, no. 2 (July 31, 2021): 217–27. http://dx.doi.org/10.29408/jit.v4i2.3582.

Abstract:
The world's technology is developing rapidly, especially in the field of health, in the form of detection tools for various objects, including disease objects. The technology in question is part of artificial intelligence that is able to recognize a set of images and classify them automatically with deep learning techniques. One of the widely used deep learning networks is the convolutional neural network with computer vision technology. One of the computer vision problems still under development is object detection, a useful technology for recognizing objects in an image much as a human would. In this case, a computer machine is trained in learning using artificial neural networks. One subtype of artificial neural network able to handle computer vision problems uses deep learning techniques with convolutional neural network algorithms. The purpose of this research is to find out how to design the system and the network architecture used for COVID-19 infection detection. The system cannot perform detection of other objects. The results of COVID-19 infection detection with the convolutional neural network algorithm show accuracy values ranging from 60–99%.
18

HUANG, WEI, KIN KEUNG LAI, YOSHITERU NAKAMORI, SHOUYANG WANG, and LEAN YU. "NEURAL NETWORKS IN FINANCE AND ECONOMICS FORECASTING." International Journal of Information Technology & Decision Making 06, no. 01 (March 2007): 113–40. http://dx.doi.org/10.1142/s021962200700237x.

Abstract:
Artificial neural networks (ANNs) have been widely applied to finance and economic forecasting as a powerful modeling technique. By reviewing the related literature, we discuss the input variables, type of neural network models, performance comparisons for the prediction of foreign exchange rates, stock market index and economic growth. Economic fundamentals are important in driving exchange rates, stock market index price and economic growth. Most neural network inputs for exchange rate prediction are univariate, while those for stock market index prices and economic growth predictions are multivariate in most cases. There are mixed comparison results of forecasting performance between neural networks and other models. The reasons may be the difference of data, forecasting horizons, types of neural network models and so on. Prediction performance of neural networks can be improved by being integrated with other technologies. Nonlinear combining forecasting by neural networks also provides encouraging results.
19

Reddy, M. Venkata Krishna, and Pradeep S. "Envision Foundational of Convolution Neural Network." International Journal of Innovative Technology and Exploring Engineering 10, no. 6 (April 30, 2021): 54–60. http://dx.doi.org/10.35940/ijitee.f8804.0410621.

Abstract:
1. Bilal, A. Jourabloo, M. Ye, X. Liu, and L. Ren. Do Convolutional Neural Networks Learn Class Hierarchy? IEEE Transactions on Visualization and Computer Graphics, 24(1):152–162, Jan. 2018.
2. M. Carney, B. Webster, I. Alvarado, K. Phillips, N. Howell, J. Griffith, J. Jongejan, A. Pitaru, and A. Chen. Teachable Machine: Approachable Web-Based Tool for Exploring Machine Learning Classification. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20. ACM, Honolulu, HI, USA, 2020.
3. A. Karpathy. CS231n Convolutional Neural Networks for Visual Recognition, 2016.
4. M. Kahng, N. Thorat, D. H. Chau, F. B. Viegas, and M. Wattenberg. GANLab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation. IEEE Transactions on Visualization and Computer Graphics, 25(1):310–320, Jan. 2019.
5. J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson. Understanding Neural Networks Through Deep Visualization. In ICML Deep Learning Workshop, 2015.
6. M. Kahng, P. Y. Andrews, A. Kalro, and D. H. Chau. ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models. IEEE Transactions on Visualization and Computer Graphics, 24(1):88–97, Jan. 2018.
7. https://cs231n.github.io/convolutional-networks/
8. https://www.analyticsvidhya.com/blog/2020/02/learn-imageclassification-cnn-convolutional-neural-networks-3-datasets/
9. https://towardsdatascience.com/understanding-cnn-convolutionalneural-network-69fd626ee7d4
10. https://medium.com/@birdortyedi_23820/deep-learning-lab-episode-2-cifar-10-631aea84f11e
11. J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai, and T. Chen. Recent advances in convolutional neural networks. Pattern Recognition, 77:354–377, May 2018.
12. Hamid, Y., Shah, F.A. and Sugumaram, M. (2014), "Wavelet neural network model for network intrusion detection system", International Journal of Information Technology, Vol. 11 No. 2, pp. 251-263.
13. G Sreeram, S Pradeep, K SrinivasRao, B. Deevan Raju, Parveen Nikhat, "Moving ridge neuronal espionage network simulation for reticulum invasion sensing". International Journal of Pervasive Computing and Communications. https://doi.org/10.1108/IJPCC-05-2020-0036
14. E. Stevens, L. Antiga, and T. Viehmann. Deep Learning with PyTorch. O'Reilly Media, 2019.
15. J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson. Understanding Neural Networks Through Deep Visualization. In ICML Deep Learning Workshop, 2015.
16. Aman Dureja, Payal Pahwa, "Analysis of Non-Linear Activation Functions for Classification Tasks Using Convolutional Neural Networks", Recent Advances in Computer Science, Vol 2, Issue 3, 2019, pp. 156-161.
17. https://missinglink.ai/guides/neural-network-concepts/7-types-neuralnetwork-activation-functions-right/
20

Fogel, D. B., L. J. Fogel, and V. W. Porto. "Evolving neural networks." Biological Cybernetics 63, no. 6 (October 1990): 487–93. http://dx.doi.org/10.1007/bf00199581.

21

Kovačič, M. "Markovian neural networks." Biological Cybernetics 64, no. 4 (February 1991): 337–42. http://dx.doi.org/10.1007/bf00199598.

22

Kononenko, I. "Bayesian neural networks." Biological Cybernetics 61, no. 5 (September 1989): 361–70. http://dx.doi.org/10.1007/bf00200801.

23

Armenta, Marco, and Pierre-Marc Jodoin. "The Representation Theory of Neural Networks." Mathematics 9, no. 24 (December 13, 2021): 3216. http://dx.doi.org/10.3390/math9243216.

Abstract:
In this work, we show that neural networks can be represented via the mathematical theory of quiver representations. More specifically, we prove that a neural network is a quiver representation with activation functions, a mathematical object that we represent using a network quiver. Furthermore, we show that network quivers gently adapt to common neural network concepts such as fully connected layers, convolution operations, residual connections, batch normalization, pooling operations and even randomly wired neural networks. We show that this mathematical representation is by no means an approximation of what neural networks are as it exactly matches reality. This interpretation is algebraic and can be studied with algebraic methods. We also provide a quiver representation model to understand how a neural network creates representations from the data. We show that a neural network saves the data as quiver representations, and maps it to a geometrical space called the moduli space, which is given in terms of the underlying oriented graph of the network, i.e., its quiver. This results as a consequence of our defined objects and of understanding how the neural network computes a prediction in a combinatorial and algebraic way. Overall, representing neural networks through the quiver representation theory leads to 9 consequences and 4 inquiries for future research that we believe are of great interest to better understand what neural networks are and how they work.
24

Zhou, Zhi-Hua. "Rule extraction: Using neural networks or for neural networks?" Journal of Computer Science and Technology 19, no. 2 (March 2004): 249–53. http://dx.doi.org/10.1007/bf02944803.

25

Tridgell, Stephen, Martin Kumm, Martin Hardieck, David Boland, Duncan Moss, Peter Zipf, and Philip H. W. Leong. "Unrolling Ternary Neural Networks." ACM Transactions on Reconfigurable Technology and Systems 12, no. 4 (November 27, 2019): 1–23. http://dx.doi.org/10.1145/3359983.

26

Asakawa, Kazuo, and Hideyuki Takagi. "Neural networks in Japan." Communications of the ACM 37, no. 3 (March 1994): 106–12. http://dx.doi.org/10.1145/175247.175258.

27

Guidotti, Dario. "Verification and Repair of Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15714–15. http://dx.doi.org/10.1609/aaai.v35i18.17854.

Abstract:
Neural Networks (NNs) are popular machine learning models which have found successful application in many different domains across computer science. However, it is hard to provide any formal guarantee on the behaviour of neural networks and therefore their reliability is still in doubt, especially concerning their deployment in safety and security-critical applications. Verification emerged as a promising solution to address some of these problems. In the following, I will present some of my recent efforts in verifying NNs.
28

Ma, Gang-Feng, Xu-Hua Yang, Yue Tong, and Yanbo Zhou. "Graph neural networks for preference social recommendation." PeerJ Computer Science 9 (May 19, 2023): e1393. http://dx.doi.org/10.7717/peerj-cs.1393.

Abstract:
Social recommendation aims to improve the performance of recommendation systems with additional social network information. In the state of art, there are two major problems in applying graph neural networks (GNNs) to social recommendation: (i) Social network is connected through social relationships, not item preferences, i.e., there may be connected users with completely different preferences, and (ii) the user representation of current graph neural network layer of social network and user-item interaction network is the output of the mixed user representation of the previous layer, which causes information redundancy. To address the above problems, we propose graph neural networks for preference social recommendation. First, a friend influence indicator is proposed to transform social networks into a new view for describing the similarity of friend preferences. We name the new view the Social Preference Network. Next, we use different GNNs to capture the respective information of the social preference network and the user-item interaction network, which effectively avoids information redundancy. Finally, we use two losses to penalize the unobserved user-item interaction and the unit space vector angle, respectively, to preserve the original connection relationship and widen the distance between positive and negative samples. Experiment results show that the proposed PSR is effective and lightweight for recommendation tasks, especially in dealing with cold-start problems.
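GNN-based recommenders such as the one described build on graph-convolution steps that propagate user features over a (social or interaction) graph. A single generic step, not the PSR model itself, can be sketched as:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step: average neighbour features
    (with self-loops and symmetric normalization), then project."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ features @ weight, 0.0)   # ReLU

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy social graph
feats = np.random.default_rng(0).normal(size=(3, 4))            # toy user features
W = np.random.default_rng(1).normal(size=(4, 2))
print(gcn_layer(adj, feats, W))
```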
29

Hou, Xiaohui, Lei Huang, and Xuefei Li. "An Effective Method to Evaluate the Scientific Research Projects." Foundations of Computing and Decision Sciences 39, no. 3 (July 1, 2014): 175–88. http://dx.doi.org/10.2478/fcds-2014-0010.

Abstract:
The evaluation of the scientific research projects is an important procedure before the scientific research projects are approved. The BP neural network and linear neural network are adopted to evaluate the scientific research projects in this paper. The evaluation index system with 12 indexes is set up. The basic principle of the neural network is analyzed and then the BP neural network and linear neural network models are constructed and the output error function of the neural networks is introduced. The Matlab software is applied to set the parameters and calculate the neural networks. By computing a real-world example, the evaluation results of the scientific research projects are obtained and the results of the BP neural network, linear neural network and linear regression forecasting are compared. The analysis shows that the BP neural network has higher efficiency than the linear neural network and linear regression forecasting in the evaluation of the scientific research projects problem. The method proposed in this paper is an effective method to evaluate the scientific research projects.
30

HAYASHI, YOICHI. "NEURAL NETWORK RULE EXTRACTION BY A NEW ENSEMBLE CONCEPT AND ITS THEORETICAL AND HISTORICAL BACKGROUND: A REVIEW." International Journal of Computational Intelligence and Applications 12, no. 04 (December 2013): 1340006. http://dx.doi.org/10.1142/s1469026813400063.

Abstract:
This paper presents theoretical and historical backgrounds related to neural network rule extraction. It also investigates approaches for neural network rule extraction by ensemble concepts. Bologna pointed out that although many authors had generated comprehensive models from individual networks, much less work had been done to explain ensembles of neural networks. This paper carefully surveyed the previous work on rule extraction from neural network ensembles since 1988. We are aware of three major research groups, i.e., Bologna's group, Zhou's group and Hayashi's group. The reason for this situation is obvious. Since the structures of previous neural network ensembles were quite complicated, little research was done on efficient rule extraction algorithms from neural network ensembles, although their learning capability was extremely high. Thus, these issues make rule extraction from neural network ensembles a difficult task. However, there is a practical need for new ideas for neural network ensembles in order to realize the extremely high-performance needs of various rule extraction problems in real life. This paper successively explains the nature of artificial neural networks, the origin of neural network rule extraction, incorporating fuzziness in neural network rule extraction, the theoretical foundation of neural network rule extraction, the computational complexity of neural network rule extraction, neuro-fuzzy hybridization, previous rule extraction from neural network ensembles, and the difficulties of previous neural network ensembles. Next, this paper addresses three principles of the proposed neural network rule extraction: to increase recognition rates, to extract rules from neural network ensembles, and to minimize the use of computing resources. We also propose an ensemble-recursive-rule extraction (E-Re-RX) using two or three standard backpropagation multi-layer perceptrons (MLPs), which enables extremely high recognition accuracy and the extraction of comprehensible rules. Furthermore, this enables rule extraction that results in fewer rules than those in previously proposed methods. This paper summarizes experimental results of rule extraction using E-Re-RX with multiple standard backpropagation MLPs and provides deep discussions. The results make it possible for the output from a neural network ensemble to be in the form of rules, thus opening the "black box" of trained neural network ensembles. Finally, we provide valuable conclusions and, as future work, three open questions on the E-Re-RX algorithm.
31

Purushothaman, G., and N. B. Karayiannis. "Quantum neural networks (QNNs): inherently fuzzy feedforward neural networks." IEEE Transactions on Neural Networks 8, no. 3 (May 1997): 679–93. http://dx.doi.org/10.1109/72.572106.

32

Kovalnogov, Vladislav N., Ruslan V. Fedorov, Denis A. Demidov, Malyoshina A. Malyoshina, Theodore E. Simos, Spyridon D. Mourtas, and Vasilios N. Katsikis. "Computing quaternion matrix pseudoinverse with zeroing neural networks." AIMS Mathematics 8, no. 10 (2023): 22875–95. http://dx.doi.org/10.3934/math.20231164.

Abstract:
In recent years, it has become essential to compute the time-varying quaternion (TVQ) matrix Moore-Penrose inverse (MP-inverse or pseudoinverse) to solve time-varying issues in a range of disciplines, including engineering, physics and computer science. This study examines the problem of computing the TVQ matrix MP-inverse using the zeroing neural network (ZNN) approach, which is nowadays considered a cutting edge technique. As a consequence, three new ZNN models are introduced for computing the TVQ matrix MP-inverse in the literature for the first time. Particularly, one model directly employs the TVQ input matrix in the quaternion domain, while the other two models, respectively, use its complex and real representations. In four numerical simulations and a real-world application involving robotic motion tracking, the models exhibit excellent performance.
33

Aiken, William, Hyoungshick Kim, Simon Woo, and Jungwoo Ryoo. "Neural network laundering: Removing black-box backdoor watermarks from deep neural networks." Computers & Security 106 (July 2021): 102277. http://dx.doi.org/10.1016/j.cose.2021.102277.

34

Siegelmann, Hava T., and Eduardo D. Sontag. "Analog computation via neural networks." Theoretical Computer Science 131, no. 2 (September 1994): 331–60. http://dx.doi.org/10.1016/0304-3975(94)90178-3.

35

Uteuliyeva, Malika, Abylay Zhumekenov, Rustem Takhanov, Zhenisbek Assylbekov, Alejandro J. Castro, and Olzhas Kabdolov. "Fourier neural networks: A comparative study." Intelligent Data Analysis 24, no. 5 (September 30, 2020): 1107–20. http://dx.doi.org/10.3233/ida-195050.

Abstract:
We review neural network architectures which were motivated by Fourier series and integrals and which are referred to as Fourier neural networks. These networks are empirically evaluated in synthetic and real-world tasks. Neither of them outperforms the standard neural network with sigmoid activation function in the real-world tasks. All neural networks, both Fourier and the standard one, empirically demonstrate lower approximation error than the truncated Fourier series when it comes to approximation of a known function of multiple variables.
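A Fourier neural network in the sense reviewed here replaces the usual sigmoid hidden units with sinusoidal ones, mimicking a truncated Fourier series with learnable frequencies. A minimal one-hidden-layer sketch (generic, not a specific architecture from the study):

```python
import numpy as np

def fourier_layer(x, W, b, a):
    """Hidden units compute cos(w·x + b); the output is their weighted sum,
    resembling a truncated Fourier-series expansion with learnable frequencies."""
    return a @ np.cos(W @ x + b)

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 3))            # 16 sinusoidal units, 3-dimensional input
b = rng.uniform(0, 2 * np.pi, size=16)  # phases
a = rng.normal(size=16)                 # output weights
print(fourier_layer(np.array([0.1, 0.5, -0.3]), W, b, a))
```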
36

Kramer, M. A. "Autoassociative neural networks." Computers & Chemical Engineering 16, no. 4 (April 1992): 313–28. http://dx.doi.org/10.1016/0098-1354(92)80051-a.

37

Jiang, Yiming, Chenguang Yang, Shi-lu Dai, and Beibei Ren. "Deterministic learning enhanced neural network control of unmanned helicopter." International Journal of Advanced Robotic Systems 13, no. 6 (November 28, 2016): 172988141667111. http://dx.doi.org/10.1177/1729881416671118.

Abstract:
In this article, a neural network–based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Due to the coupling and modeling uncertainties of the helicopter systems, neural network approximation techniques are employed to compensate the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, such that the resulting neural controller is always valid without any concern on either initial conditions or range of state variables. In addition, deterministic learning is applied to the neural network learning control, such that the adaptive neural networks are able to store the learned knowledge that could be reused to construct a neural network controller with improved control performance. Simulation studies are carried out on a helicopter model to illustrate the effectiveness of the proposed control design.
38

Neruda, M., and R. Neruda. "To contemplate quantitative and qualitative water features by neural networks method." Plant, Soil and Environment 48, No. 7 (December 21, 2011): 322–26. http://dx.doi.org/10.17221/4375-pse.

Abstract:
The application deals with the calibration of a neural model and a Fourier series model for the Ploučnice catchment. This approach has the advantage that the network choice is independent of the other example's parameters. Each network and its variants (different numbers of units and hidden layers) can be connected as a black box and tested independently. The Stuttgart neural simulator SNNS and the multiagent hybrid system Bang2, developed at the Institute of Computer Science, AS CR, have been used for testing. A perceptron network has been constructed and trained by the backpropagation method improved with a momentum term. The network is capable of an accurate forecast of the next-day runoff based on the runoff and rainfall values from the previous day.
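The abstract mentions a perceptron trained by backpropagation with a momentum term. A generic sketch of that update rule on synthetic data (not the calibrated Ploučnice model) is shown below:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(6, 2)), rng.normal(size=(1, 6))
V1, V2 = np.zeros_like(W1), np.zeros_like(W2)      # momentum buffers
lr, mu = 0.1, 0.9

X = rng.random((50, 2))                            # stand-in for rainfall/runoff inputs
y = (0.6 * X[:, 0] + 0.4 * X[:, 1]).reshape(-1, 1) # stand-in for next-day runoff

for _ in range(200):
    H = sigmoid(X @ W1.T)          # hidden layer
    out = H @ W2.T                 # linear output
    err = out - y
    grad_W2 = err.T @ H / len(X)
    grad_W1 = ((err @ W2) * H * (1 - H)).T @ X / len(X)
    V2 = mu * V2 - lr * grad_W2    # momentum-smoothed updates
    V1 = mu * V1 - lr * grad_W1
    W2 += V2
    W1 += V1
```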
39

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaption to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample-basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn mapping from neural networks to symbolic representations.
40

Gafurov, Artur M., and Oleg P. Yermolayev. "Automatic Gully Detection: Neural Networks and Computer Vision." Remote Sensing 12, no. 11 (May 28, 2020): 1743. http://dx.doi.org/10.3390/rs12111743.

Abstract:
Transition from manual (visual) interpretation to fully automated gully detection is an important task for quantitative assessment of modern gully erosion, especially when it comes to large mapping areas. Existing approaches to semi-automated gully detection are based on either object-oriented selection based on multispectral images or gully selection based on a probabilistic model obtained using digital elevation models (DEMs). These approaches cannot be used for the assessment of gully erosion on the territory of the European part of Russia most affected by gully erosion due to the lack of national large-scale DEM and limited resolution of open source multispectral satellite images. An approach based on the use of convolutional neural networks for automated gully detection on the RGB-synthesis of ultra-high resolution satellite images publicly available for the test region of the east of the Russian Plain with intensive basin erosion has been proposed and developed. The Keras library and U-Net architecture of convolutional neural networks were used for training. Preliminary results of application of the trained gully erosion convolutional neural network (GECNN) allow asserting that the algorithm performs well in detecting active gullies, well differentiates gullies from other linear forms of slope erosion — rills and balkas, but so far has errors in detecting complex gully systems. Also, GECNN does not identify a gully in 10% of cases and in another 10% of cases it identifies not a gully. To solve these problems, it is necessary to additionally train the neural network on the enlarged training data set.
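The authors report training a U-Net convolutional network with the Keras library. Purely for orientation, a drastically reduced encoder-decoder with a single skip connection (a toy sketch, not the trained GECNN) could be assembled as follows:

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 3))             # RGB satellite tile (size assumed)

# Encoder
c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
p1 = layers.MaxPooling2D()(c1)
c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)

# Decoder with a skip connection back to the encoder
u1 = layers.UpSampling2D()(c2)
u1 = layers.concatenate([u1, c1])                     # U-Net skip connection
c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)   # per-pixel gully mask

unet = keras.Model(inputs, outputs)
unet.compile(optimizer="adam", loss="binary_crossentropy")
```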
41

Hansen, James V., and Ray D. Nelson. "Time-series analysis with neural networks and ARIMA-neural network hybrids." Journal of Experimental & Theoretical Artificial Intelligence 15, no. 3 (January 2003): 315–30. http://dx.doi.org/10.1080/0952813031000116488.

42

Kamgar-Parsi, Behzad, J. A. Gualtieri, J. E. Devaney, and Behrooz Kamgar-Parsi. "Clustering with neural networks." Biological Cybernetics 63, no. 3 (July 1990): 201–8. http://dx.doi.org/10.1007/bf00195859.

43

Mahul, Antoine, and Alex Aussem. "Distributed Neural Networks for Quality of Service Estimation in Communication Networks." International Journal of Computational Intelligence and Applications 03, no. 03 (September 2003): 297–308. http://dx.doi.org/10.1142/s1469026803000999.

Abstract:
We study an original scheme based on distributed feedforward neural networks, aimed at modelling several queueing systems in cascade fed with bursty traffic. For each queueing system, a neural network is trained to anticipate the average number of waiting packets, the packet loss rate and the coefficient of variation of the packet inter-departure time, given the mean rate, the peak rate and the coefficient of variation of the packet inter-arrival time. The latter serves for the calculation of the coefficient of variation of the cell inter-arrival time of the aggregated traffic which is fed as input to the next neural network along the path. The potential of this method is successfully illustrated on several single server FIFO (First In, First Out) queues and on small queueing networks made up from a combination of queues in tandem and in parallel fed by a superposition of ideal sources. Our long-term goal is the design of preventive control strategy in a multiservice communication network.
44

MAINZER, KLAUS. "CELLULAR NEURAL NETWORKS AND VISUAL COMPUTING." International Journal of Bifurcation and Chaos 13, no. 01 (January 2003): 1–6. http://dx.doi.org/10.1142/s0218127403006534.

Abstract:
Brain-like information processing has become a challenge to modern computer science and chip technology. The CNN (Cellular Neural Network) Universal Chip is the first fully programmable industrial-sized brain-like stored-program dynamic array computer which dates back to an invention of Leon O. Chua and Lin Yang in Berkeley in 1988. Since then, many papers have been written on the mathematical foundations and technical applications of CNN chips. They are already used to model artificial, physical, chemical, as well as living biological systems. CNN is now a new computing paradigm of interdisciplinary interest. In this state of development a textbook is needed in order to recruit new generations of students and researchers from different fields of research. Thus, Chua's and Roska's textbook is a timely and historic publication. On the background of their teaching experience, they have aimed at undergraduate students from engineering, physics, chemistry, as well as biology departments. But, actually, it offers more. It is a brilliant introduction to the foundations and applications of CNN which is distinguished with conceptual and mathematical precision as well as with detailed explanations of applications in visual computing.
45

Humphries, Mark D. "Dynamical networks: Finding, measuring, and tracking neural population activity using network science." Network Neuroscience 1, no. 4 (December 2017): 324–38. http://dx.doi.org/10.1162/netn_a_00020.

Abstract:
Systems neuroscience is in a headlong rush to record from as many neurons at the same time as possible. As the brain computes and codes using neuron populations, it is hoped these data will uncover the fundamentals of neural computation. But with hundreds, thousands, or more simultaneously recorded neurons come the inescapable problems of visualizing, describing, and quantifying their interactions. Here I argue that network science provides a set of scalable, analytical tools that already solve these problems. By treating neurons as nodes and their interactions as links, a single network can visualize and describe an arbitrarily large recording. I show that with this description we can quantify the effects of manipulating a neural circuit, track changes in population dynamics over time, and quantitatively define theoretical concepts of neural populations such as cell assemblies. Using network science as a core part of analyzing population recordings will thus provide both qualitative and quantitative advances to our understanding of neural computation.
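The core idea, treating neurons as nodes and their pairwise interactions as weighted links, maps directly onto standard network-science tooling. A minimal sketch on synthetic spike data (illustrative only, not from the article):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
spikes = rng.random((30, 1000)) < 0.05        # 30 neurons, 1000 time bins of spiking

# Pairwise correlations serve as link weights between neurons.
corr = np.corrcoef(spikes.astype(float))
np.fill_diagonal(corr, 0.0)
corr[np.abs(corr) < 0.05] = 0.0               # keep only the stronger interactions

G = nx.from_numpy_array(corr)                  # neurons = nodes, correlations = weighted links
print(nx.number_of_nodes(G), nx.number_of_edges(G))
print(sorted(nx.degree_centrality(G).items())[:3])
```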
46

Redwan, Sadi M., Md Rashed-Al-Mahfuz, and Md Ekramul Hamid. "Recognizing Command Words using Deep Recurrent Neural Network for Both Acoustic and Throat Speech." European Journal of Information Technologies and Computer Science 3, no. 2 (May 22, 2023): 7–13. http://dx.doi.org/10.24018/compute.2023.3.2.88.

Abstract:
The importance of speech command recognition in human-machine interaction systems has increased in recent years. In this study, we propose a deep neural network-based system for acoustic and throat command speech recognition. We apply a preprocessing pipeline to create the input of the deep learning model. Firstly, speech commands are decomposed into components using well-known signal decomposition techniques. The Mel-frequency cepstral coefficients (MFCC) feature extraction method is applied to each component of the speech commands to obtain the feature inputs for the recognition system. At this stage, we apply and compare performance using different speech decomposition techniques such as wavelet packet decomposition (WPD), continuous wavelet transform (CWT), and empirical mode decomposition (EMD) in order to find out the best technique for our model. We observe that WPD shows the best performance in terms of classification accuracy. This paper investigates a long short-term memory (LSTM)-based recurrent neural network (RNN), which is trained using the extracted MFCC features. The proposed neural network is trained and tested using acoustic speech commands. Moreover, we also train and test the proposed model using throat microphone speech commands as well. Lastly, the transfer learning technique is employed to increase the test accuracy for throat speech recognition. The weights of the model trained with the acoustic signal are used to initialize the model used for throat speech recognition. Overall, we have found significant classification accuracy for both acoustic and throat command speech. We find that the LSTM is much better than the GMM-HMM model, convolutional neural networks such as CNN-tpool2 and residual networks such as res15 and res26, with an accuracy score of over 97% on Google's Speech Commands dataset, and we achieve 95.35% accuracy on our throat speech data set using the transfer learning technique.
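The processing chain described, MFCC features fed to an LSTM classifier, can be outlined as follows; the feature dimension, layer sizes and command count are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
import librosa
from tensorflow import keras

def mfcc_features(signal, sr=16000, n_mfcc=13):
    """Extract a (time, n_mfcc) MFCC matrix from a 1-D audio signal."""
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T

n_commands = 10
model = keras.Sequential([
    keras.layers.Input(shape=(None, 13)),          # variable-length MFCC sequences
    keras.layers.LSTM(64),
    keras.layers.Dense(n_commands, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# One second of synthetic audio as a stand-in for a recorded command.
feat = mfcc_features(np.random.randn(16000).astype(np.float32))
print(model.predict(feat[np.newaxis, ...]).shape)   # (1, n_commands)
```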
47

Seiffert, Udo. "Artificial neural networks on massively parallel computer hardware." Neurocomputing 57 (March 2004): 135–50. http://dx.doi.org/10.1016/j.neucom.2004.01.011.

48

Zainel, Hanan, and Cemal Koçak. "LAN Intrusion Detection Using Convolutional Neural Networks." Applied Sciences 12, no. 13 (June 30, 2022): 6645. http://dx.doi.org/10.3390/app12136645.

Abstract:
The world's reliance on the internet is growing constantly, and data are considered the most precious parameter nowadays. It is critical to keep information secure from unauthorized people and organizations. When a network is compromised, information is taken. An intrusion detection system detects both known and unexpected assaults that allow a network to be breached. In this research, we model an intrusion detection system trained to identify such attacks in LANs, and any computer network that uses data. We accomplish this by employing neural networks, a machine learning technique. We also investigate how well our model performs in multiclass categorization scenarios. On the NSL-KDD dataset, we investigate the performance of convolutional neural networks such as CNN and CNN with LSTM. Our findings suggest that utilizing convolutional neural networks to identify network intrusions is an effective strategy.
49

Gupta, Rajiv. "Research Paper on Artificial Intelligence." International Journal of Engineering and Computer Science 12, no. 02 (February 18, 2023): 25654–0656. http://dx.doi.org/10.18535/ijecs/v12i02.4720.

Abstract:
This branch of computer science is concerned with making computers behave like humans. Artificial intelligence includes game playing, expert systems, neural networks, natural language, and robotics. Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing. There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog. Artificial intelligence is working a lot in decreasing human effort but with less growth.
50

Dolgobrodov, S. D., R. Marshall, P. Moore, R. Bittern, R. J. C. Steele, and A. Cuschieri. "e-Science and artificial neural networks in cancer management." Concurrency and Computation: Practice and Experience 19, no. 2 (2006): 251–63. http://dx.doi.org/10.1002/cpe.1045.

