Journal articles on the topic "Neural Networks method"

To see the other types of publications on this topic, follow the link: Neural Networks method.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Neural Networks method".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its online abstract whenever these details are available in the source's metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Klyuchko, O. M. "APPLICATION OF ARTIFICIAL NEURAL NETWORKS METHOD IN BIOTECHNOLOGY." Biotechnologia Acta 10, no. 4 (August 2017): 5–13. http://dx.doi.org/10.15407/biotech10.04.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Keping, Shuang Gu, and Dongyang Yan. "A Link Prediction Method Based on Neural Networks." Applied Sciences 11, no. 11 (June 3, 2021): 5186. http://dx.doi.org/10.3390/app11115186.

Abstract:
Link prediction to optimize network performance is of great significance in network evolution. Because of the complexity of network systems and the uncertainty of network evolution, it faces many challenges. This paper proposes a new link prediction method based on neural networks trained on scale-free networks as input data, and optimized networks trained by link prediction models as output data. In order to solve the influence of the generalization of the neural network on the experiments, a greedy link pruning strategy is applied. We consider network efficiency and the proposed global network structure reliability as objectives to comprehensively evaluate link prediction performance and the advantages of the neural network method. The experimental results demonstrate that the neural network method generates the optimized networks with better network efficiency and global network structure reliability than the traditional link prediction models.
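The "network efficiency" objective mentioned in this abstract is commonly taken to be global efficiency in the Latora-Marchiori sense: the average inverse shortest-path length over all node pairs. A minimal sketch of that metric, assuming unweighted graphs stored as adjacency lists (not the authors' code):

```python
from collections import deque

def global_efficiency(adj):
    """Latora-Marchiori global efficiency: the average inverse shortest-path
    length over all ordered node pairs; unreachable pairs contribute 0."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for s in nodes:
        dist = {s: 0}                      # BFS shortest paths from s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for t, d in dist.items() if t != s)
    return total / (n * (n - 1))

# Adding the "predicted" link 0-2 turns a 3-node path into a triangle and
# raises global efficiency from 5/6 to 1.
path = {0: [1], 1: [0, 2], 2: [1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(global_efficiency(path), global_efficiency(triangle))
```

Adding a well-placed predicted link raises this score, which is how an optimized network can be compared against the original.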
3

Golubinskiy, Andrey, and Andrey Tolstykh. "Hybrid method of convolutional neural network training." Informatics and Automation 20, no. 2 (March 30, 2021): 463–90. http://dx.doi.org/10.15622/ia.2021.20.2.8.

Abstract:
The paper proposes a hybrid method for training convolutional neural networks. The method consists of combining second- and first-order methods for different elements of the architecture of a convolutional neural network. The hybrid training method achieves significantly better convergence than Adam while requiring fewer computational operations to implement. Using the proposed method, it is possible to train networks on which learning paralysis occurs when first-order methods are used. Moreover, the proposed method can adjust its computational complexity to the hardware on which the computation is performed; at the same time, the hybrid method allows using the mini-batch learning approach. An analysis of the ratio of computations between convolutional neural networks and fully connected artificial neural networks is presented. The mathematical apparatus of error optimization of artificial neural networks is considered, including the error backpropagation method and the Levenberg-Marquardt algorithm. The main limitations of these methods that arise when training a convolutional neural network are analyzed, as is the stability of the proposed method when the initialization parameters are changed. The results of the applicability of the method in various problems are presented.
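The second-order component referred to above is the Levenberg-Marquardt algorithm. As a hedged illustration of its damped Gauss-Newton step, here is a one-parameter fit of the toy model y = exp(a·x); the model, data, and damping schedule are illustrative, not the paper's networks:

```python
import math

def lm_fit_exponential(xs, ys, a=0.0, lam=1e-3, iters=50):
    """One-parameter Levenberg-Marquardt for the toy model y = exp(a*x).
    Damped Gauss-Newton step: a <- a - J^T r / (J^T J + lam)."""
    def sse(p):
        return sum((math.exp(p * x) - y) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]   # residuals
        J = [x * math.exp(a * x) for x in xs]               # d r_i / d a
        g = sum(j * ri for j, ri in zip(J, r))              # J^T r
        h = sum(j * j for j in J)                           # J^T J
        candidate = a - g / (h + lam)
        # classic LM damping: accept and shrink lam on improvement, else grow it
        if sse(candidate) < sse(a):
            a, lam = candidate, lam * 0.5
        else:
            lam *= 2.0
    return a

# Recover a = 0.7 from noiseless samples of y = exp(0.7*x).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]
print(lm_fit_exponential(xs, ys))   # ≈ 0.7
```

Large `lam` makes the step behave like damped gradient descent (first-order); small `lam` approaches pure Gauss-Newton (second-order), which is the trade-off a hybrid scheme exploits.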
4

JORGENSEN, THOMAS D., BARRY P. HAYNES, and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES." International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network due to the reduction of the dimensionality of the network.
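Magnitude Based Pruning, the baseline the authors compare against, simply deletes the connections with the smallest absolute weights. A minimal sketch; the weight-matrix layout and pruning fraction are illustrative:

```python
def prune_by_magnitude(weights, fraction):
    """Magnitude-based pruning: zero out the given fraction of weights with
    the smallest absolute value (ties at the threshold are also zeroed)."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * fraction)
    threshold = flat[k - 1] if k else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]

W = [[0.8, -0.05, 0.3],
     [0.01, -0.9, 0.12]]
print(prune_by_magnitude(W, 0.5))   # the three smallest-|w| entries become 0.0
```

A complexity-based criterion like the paper's replaces only the "which weights to cut" rule; the surgery itself looks the same.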
5

Peng, Yun, and Zonglin Zhou. "A neural network learning method for belief networks." International Journal of Intelligent Systems 11, no. 11 (December 7, 1998): 893–915. http://dx.doi.org/10.1002/(sici)1098-111x(199611)11:11<893::aid-int3>3.0.co;2-u.

6

Neruda, M., and R. Neruda. "To contemplate quantitative and qualitative water features by neural networks method." Plant, Soil and Environment 48, no. 7 (December 21, 2011): 322–26. http://dx.doi.org/10.17221/4375-pse.

Abstract:
The application deals with the calibration of a neural model and a Fourier series model for the Ploučnice catchment. This approach has the advantage that the network choice is independent of the other parameters of the example. Each network and its variants (different numbers of units and hidden layers) can be connected as a black box and tested independently. The Stuttgart neural network simulator SNNS and the multi-agent hybrid system Bang2, developed at the Institute of Computer Science, AS CR, have been used for testing. A perceptron network has been constructed, which was trained by the backpropagation method improved with a momentum term. The network is capable of an accurate forecast of the next day's runoff based on the runoff and rainfall values from the previous day.
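The momentum-term improvement to backpropagation mentioned above replaces the plain gradient step with v ← μ·v − η·∇E(w), w ← w + v. A toy sketch on a one-parameter quadratic (illustrative, not the SNNS implementation):

```python
def momentum_descent(grad, w, lr=0.1, mu=0.9, iters=200):
    """Gradient descent with a momentum term, the improvement to plain
    backpropagation mentioned in the abstract:
        v <- mu*v - lr*grad(w);  w <- w + v"""
    v = 0.0
    for _ in range(iters):
        v = mu * v - lr * grad(w)
        w = w + v
    return w

# Minimise f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
print(momentum_descent(lambda w: 2.0 * (w - 3.0), w=0.0))   # ≈ 3.0
```

The velocity term `v` accumulates past gradients, which damps oscillation across narrow valleys and speeds up progress along shallow directions.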
7

Feng, Yifan, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. "Hypergraph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3558–65. http://dx.doi.org/10.1609/aaai.v33i01.33013558.

Abstract:
In this paper, we present a hypergraph neural networks (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure. Confronting the challenges of learning representation for complex data in real practice, we propose to incorporate such data structure in a hypergraph, which is more flexible on data modeling, especially when dealing with complex data. In this method, a hyperedge convolution operation is designed to handle the data correlation during representation learning. In this way, the traditional hypergraph learning procedure can be conducted efficiently using hyperedge convolution operations. HGNN is able to learn the hidden layer representation considering the high-order data structure, which is a general framework considering the complex data correlations. We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. We can also reveal from the results that the proposed HGNN is superior when dealing with multi-modal data compared with existing methods.
8

Fan, Yuanliang, Han Wu, Weiming Chen, Zeyu Jiang, Xinghua Huang, and Si-Zhe Chen. "A Data Augmentation Method to Optimize Neural Networks for Predicting SOH of Lithium Batteries." Journal of Physics: Conference Series 2203, no. 1 (February 1, 2022): 012034. http://dx.doi.org/10.1088/1742-6596/2203/1/012034.

Abstract:
Neural networks are an excellent methodology for predicting lithium battery state of health (SOH). However, if the amount of data is insufficient, the neural network will be overfitted, which decreases the prediction accuracy of SOH. To solve this issue, a data augmentation method based on random noise superposition is proposed. The original dataset is expanded in this approach, which enhances the neural network's generalization ability. Moreover, the random noises simulate capacity regeneration, capacity dips and sensor errors during the actual operation of lithium batteries, which also improves the adaptability and robustness of the SOH prediction method. The proposed method is validated on mainstream neural networks, including long short-term memory (LSTM) and gated recurrent unit (GRU) neural networks. In terms of the results, the proposed data augmentation method effectively improves the neural network's generalization ability and the lithium battery SOH prediction accuracy.
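A minimal sketch of the noise-superposition idea: each training series is duplicated several times with small Gaussian perturbations. The `copies` and `noise_std` values here are illustrative, not the paper's settings:

```python
import random

def augment_with_noise(capacity_series, copies=5, noise_std=0.01, seed=42):
    """Expand a SOH/capacity dataset by superimposing small random noise,
    as a stand-in for capacity regeneration, dips and sensor error.
    (Sketch only; noise_std would be tuned to the real sensor spec.)"""
    rng = random.Random(seed)
    augmented = [list(capacity_series)]          # keep the original series
    for _ in range(copies):
        augmented.append([c + rng.gauss(0.0, noise_std) for c in capacity_series])
    return augmented

data = augment_with_noise([1.00, 0.98, 0.97, 0.95], copies=5)
print(len(data))   # 6 series: the original plus 5 noisy copies
```

The augmented set is then fed to the LSTM or GRU model exactly as the original data would be.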
9

Shi, Lin, and Lei Zheng. "An IGWOCNN Deep Method for Medical Education Quality Estimating." Mathematical Problems in Engineering 2022 (August 9, 2022): 1–5. http://dx.doi.org/10.1155/2022/9037726.

Abstract:
The deep learning and mining ability of big data are used to analyze the shortcomings in the teaching scheme, and the teaching scheme is optimized to improve the teaching ability. A convolutional neural network optimized by improved grey wolf optimization is used to train on the data, so as to improve the efficiency of searching for the optimal value and to prevent the algorithm from converging to a local optimum. To address the shortcomings of grey wolf optimization, an improved variant with a variable convergence factor is used to optimize the convolutional neural network; the variable convergence factor balances the global and local search abilities of the algorithm. The testing results show that the quality estimating accuracy of the convolutional neural network optimized by improved grey wolf optimization is 100%, that of the convolutional neural network optimized by standard grey wolf optimization is 93.33%, and that of the classical convolutional neural network is 86.67%. We can conclude that, for medical education quality estimation, the convolutional neural network optimized by improved grey wolf optimization performs best among the three compared models.
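For orientation, a minimal 1-D grey wolf optimizer with the standard linearly decaying convergence factor `a`; the paper's contribution replaces this linear decay with a variable schedule, and everything below is an illustrative sketch, not the IGWOCNN code:

```python
import random

def gwo_minimize(f, lo, hi, wolves=10, iters=200, seed=1):
    """Minimal 1-D grey wolf optimizer. The convergence factor `a` decays
    linearly from 2 to 0; it balances exploration (|A| > 1) against
    exploitation (|A| < 1). The paper's improvement replaces this linear
    decay with a variable schedule."""
    rng = random.Random(seed)
    pack = [rng.uniform(lo, hi) for _ in range(wolves)]
    best = min(pack, key=f)
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)              # linearly decaying factor
        alpha, beta, delta = sorted(pack, key=f)[:3]
        new_pack = []
        for x in pack:
            estimates = []
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random() - 1.0)   # A in [-a, a]
                C = 2.0 * rng.random()
                d = abs(C * leader - x)              # encircling distance
                estimates.append(leader - A * d)
            new_pack.append(sum(estimates) / 3.0)
        pack = new_pack
        best = min(pack + [best], key=f)
    return best

best = gwo_minimize(lambda x: (x - 1.5) ** 2, -10.0, 10.0)
print(best)
```

In the hybrid scheme the objective `f` would be the validation error of the CNN as a function of its hyperparameters, rather than a closed-form function.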
10

Yang, Yunfeng, and Fengxian Tang. "Network Intrusion Detection Based on Stochastic Neural Networks Method." International Journal of Security and Its Applications 10, no. 8 (August 31, 2016): 435–46. http://dx.doi.org/10.14257/ijsia.2016.10.8.38.

11

Shen, Xianhao, Changhong Zhu, Yihao Zang, and Shaohua Niu. "A Method for Detecting Abnormal Data of Network Nodes Based on Convolutional Neural Network." 電腦學刊 33, no. 3 (June 2022): 049–58. http://dx.doi.org/10.53106/199115992022063303004.

Abstract:
Abnormal data detection is an important step to ensure the accuracy and reliability of node data in wireless sensor networks. In this paper, a data classification method based on a convolutional neural network is proposed to solve the problem of data anomaly detection in wireless sensor networks. First, normal data and abnormal data generated after fault injection are normalized and mapped to gray images as input data of the convolutional neural network. Then, based on the classical convolutional neural network, three new convolutional neural network models are designed by configuring the parameters of the convolutional layers and the fully connected layers. Through the self-learned data characteristics of the convolutional layers, this model avoids the traditional detection algorithms' sensitivity to the choice of thresholds. The experimental results show that this method has better detection performance and higher reliability.
12

Ma, Lili, Jiangping Liu, and Jidong Luo. "Method of Wireless Sensor Network Data Fusion." International Journal of Online Engineering (iJOE) 13, no. 09 (September 22, 2017): 114. http://dx.doi.org/10.3991/ijoe.v13i09.7589.

Abstract:
In order to better deal with large data information in computer networks, a large data fusion method based on wireless sensor networks is designed. Based on the analysis of the structure and learning algorithm of RBF neural networks, a heterogeneous RBF neural network information fusion algorithm in wireless sensor networks is presented. The effectiveness of information fusion processing methods is tested by the RBF information fusion algorithm. The proposed algorithm is applied to heterogeneous information fusion of cluster heads or sink nodes in wireless sensor networks. The simulation results show the effectiveness of the proposed algorithm. Based on the above finding, it is concluded that the RBF neural network has good real-time performance and small network delay. In addition, this method can reduce the amount of information transmission and the network conflicts and congestion.
13

Escudero, Cristian A., Andrés F. Calvo, and Arley Bejarano. "Black Sigatoka Classification Using Convolutional Neural Networks." International Journal of Machine Learning and Computing 11, no. 4 (August 2021): 323–26. http://dx.doi.org/10.18178/ijmlc.2021.11.4.1055.

Abstract:
In this paper we present a methodology for the automatic recognition of black Sigatoka in commercial banana crops. This method uses a LeNet convolutional neural network to detect the progress of infection by the disease in different regions of a leaf image; using this information, we trained a decision tree in order to classify the level of infection severity. The methodology was validated with an annotated database, which was built in the process of this work and which can be compared with other state-of-the-art alternatives. The results show that the method is robust against atypical values and photometric variations.
14

Hou, Xiaohui, Lei Huang, and Xuefei Li. "An Effective Method to Evaluate the Scientific Research Projects." Foundations of Computing and Decision Sciences 39, no. 3 (July 1, 2014): 175–88. http://dx.doi.org/10.2478/fcds-2014-0010.

Abstract:
The evaluation of scientific research projects is an important procedure before the projects are approved. The BP neural network and linear neural network are adopted to evaluate scientific research projects in this paper. An evaluation index system with 12 indexes is set up. The basic principle of the neural network is analyzed, and then the BP neural network and linear neural network models are constructed and the output error function of the neural networks is introduced. The Matlab software is applied to set the parameters and compute the neural networks. Through a real-world example, the evaluation results of the scientific research projects are obtained and the results of the BP neural network, linear neural network and linear regression forecasting are compared. The analysis shows that the BP neural network is more efficient than the linear neural network and linear regression forecasting for the evaluation of scientific research projects. The method proposed in this paper is an effective method to evaluate scientific research projects.
15

Zhou, Wuneng, Xueqing Yang, Jun Yang, and Jun Zhou. "Stochastic Synchronization of Neutral-Type Neural Networks with Multidelays Based on M-Matrix." Discrete Dynamics in Nature and Society 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/826810.

Abstract:
The problem of stochastic synchronization of neutral-type neural networks with multidelays based on M-matrix is researched. Firstly, we design a control law for stochastic synchronization of the neutral-type, multiple time-delays neural network. Secondly, by making use of a Lyapunov functional and the M-matrix method, we obtain a criterion under which the drive and response neutral-type multiple time-delays neural networks with stochastic disturbance and Markovian switching achieve stochastic synchronization. The synchronization condition is expressed as a linear matrix inequality which can be easily solved by MATLAB. Finally, we introduce a numerical example to illustrate the effectiveness of the method and of the result obtained in this paper.
16

Mahdi, Qasim Abbood, Andrii Shyshatskyi, Oleksandr Symonenko, Nadiia Protas, Oleksandr Trotsko, Volodymyr Kyvliuk, Artem Shulhin, Petro Steshenko, Eduard Ostapchuk, and Tetiana Holenkovska. "Development of a method for training artificial neural networks for intelligent decision support systems." Eastern-European Journal of Enterprise Technologies 1, no. 9(115) (February 28, 2022): 35–44. http://dx.doi.org/10.15587/1729-4061.2022.251637.

Abstract:
We developed a method of training artificial neural networks for intelligent decision support systems. A distinctive feature of the proposed method consists in training not only the synaptic weights of an artificial neural network, but also the type and parameters of the membership function. In case of impossibility to ensure a given quality of functioning of artificial neural networks by training the parameters of an artificial neural network, the architecture of artificial neural networks is trained. The choice of architecture, type and parameters of the membership function is based on the computing resources of the device and taking into account the type and amount of information coming to the input of the artificial neural network. Another distinctive feature of the developed method is that no preliminary calculation data are required to calculate the input data. The development of the proposed method is due to the need for training artificial neural networks for intelligent decision support systems, in order to process more information, while making unambiguous decisions. According to the results of the study, this training method provides on average 10–18 % higher efficiency of training artificial neural networks and does not accumulate training errors. This method will allow training artificial neural networks by training the parameters and architecture, determining effective measures to improve the efficiency of artificial neural networks. This method will allow reducing the use of computing resources of decision support systems, developing measures to improve the efficiency of training artificial neural networks, increasing the efficiency of information processing in artificial neural networks.
17

Leoshchenko, S. D., A. O. Oliinyk, S. A. Subbotin, Ye O. Gofman, and M. B. Ilyashenko. "EVOLUTIONARY METHOD FOR SYNTHESIS SPIKING NEURAL NETWORKS USING THE NEUROPATTHERN MECHANISM." Radio Electronics, Computer Science, Control, no. 3 (October 20, 2022): 77. http://dx.doi.org/10.15588/1607-3274-2022-3-8.

Abstract:
Context. The problem of synthesizing spiking neural networks based on an evolutionary approach to the synthesis of artificial neural networks, using a neuropattern mechanism, for constructing diagnostic models with a high level of accuracy is considered. The object of research is the process of synthesis of spiking neural networks using an evolutionary approach and a neuropattern mechanism. Objective of the work is to develop a method for synthesizing spiking neural networks based on an evolutionary approach using a neuropattern mechanism to build diagnostic models with a high level of accuracy. Method. A method for synthesizing spiking neural networks based on an evolutionary approach is proposed. At the beginning, a population of spiking neural networks is generated, and a neuropattern mechanism is used for their encoding and further development, which consists in separate encoding of neurons with different activation functions that are determined beforehand. Thus each pattern with multiple entry points can define the relationship between a pair of points, which later simplifies the evolutionary development of the networks. To decode a spiking neural network from a pattern, the coordinates of a pair of neurons are passed to the network that creates the pattern; the network output determines the weight and delay of the connection between the two neurons in the spiking neural network. After that, each neuromodel can be evaluated after evolutionary changes and the criteria for stopping synthesis can be checked. This method reduces the resource intensity of network synthesis by abstracting the evolutionary changes from the network pattern itself. Results. The developed method is implemented and investigated on the example of the synthesis of a spiking neural network for use as a model for technical diagnostics. Using the developed method increased the accuracy of the neuromodel on a test sample by up to 20%, depending on the computing resources used. Conclusions. The conducted experiments confirmed the operability of the proposed mathematical software and allow us to recommend it for use in practice in the synthesis of spiking neural networks as the basis of diagnostic models for further automation of tasks of diagnostics, forecasting, evaluation and pattern recognition using big data. Prospects for further research may lie in the use of a neuropattern mechanism for indirect encoding of spiking neural networks, which will provide even more compact data storage and speed up the synthesis process.
18

Vahed, A., and C. W. Omlin. "A Machine Learning Method for Extracting Symbolic Knowledge from Recurrent Neural Networks." Neural Computation 16, no. 1 (January 1, 2004): 59–71. http://dx.doi.org/10.1162/08997660460733994.

Abstract:
Neural networks do not readily provide an explanation of the knowledge stored in their weights as part of their information processing. Until recently, neural networks were considered to be black boxes, with the knowledge stored in their weights not readily accessible. Since then, research has resulted in a number of algorithms for extracting knowledge in symbolic form from trained neural networks. This article addresses the extraction of knowledge in symbolic form from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract knowledge from such networks have relied on the hypothesis that networks' states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such a cluster analysis has led to heuristics that either limit the number of clusters that may form during training or limit the exploration of the space of hidden recurrent state neurons. These limitations, while necessary, may lead to decreased fidelity, in which the extracted knowledge may not model the true behavior of a trained network, perhaps not even for the training set. The method proposed here uses a polynomial time, symbolic learning algorithm to infer DFAs solely from the observation of a trained network's input-output behavior. Thus, this method has the potential to increase the fidelity of the extracted knowledge.
19

Chang, Jing, and Herbert K. H. Lee. "Graphical Jump Method for Neural Networks." Journal of Data Science 15, no. 4 (March 4, 2021): 669–90. http://dx.doi.org/10.6339/jds.201710_15(4).00006.

20

Lopez-Hernandez, Andres Ali, Ricardo Francisco Martinez-Gonzalez, Jose Antonio Hernandez-Reyes, Leonardo Palacios-Luengas, and Ruben Vazquez-Medina. "A Steganography Method Using Neural Networks." IEEE Latin America Transactions 18, no. 03 (March 2020): 495–506. http://dx.doi.org/10.1109/tla.2020.9082720.

21

Lendl, Markus, Rolf Unbehauen, and Fa-Long Luo. "Homotopy method for training neural networks." Computer Standards & Interfaces 20, no. 6-7 (March 1999): 470. http://dx.doi.org/10.1016/s0920-5489(99)91033-4.

22

Thamer, Khudhair Abed. "Method of Artificial Neural Networks Teaching." Webology 17, no. 1 (May 10, 2020): 43–64. http://dx.doi.org/10.14704/web/v17i1/a207.

23

Tsoulos, Ioannis G., Alexandros Tzallas, and Evangelos Karvounis. "A Two-Phase Evolutionary Method to Train RBF Networks." Applied Sciences 12, no. 5 (February 25, 2022): 2439. http://dx.doi.org/10.3390/app12052439.

Abstract:
This article proposes a two-phase hybrid method to train RBF neural networks for classification and regression problems. During the first phase, a range for the critical parameters of the RBF network is estimated and in the second phase a genetic algorithm is incorporated to locate the best RBF neural network for the underlying problem. The method is compared against other training methods of RBF neural networks on a wide series of classification and regression problems from the relevant literature and the results are reported.
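Because an RBF network's output is linear in its output-layer weights, once the critical parameters (centers and widths) are fixed, the weights follow from a linear solve. A minimal Gaussian-RBF sketch with hand-picked parameters, standing in for the paper's two phases (not its genetic algorithm):

```python
import math

def rbf_design(xs, centers, beta):
    """Gaussian RBF design matrix: phi[i][j] = exp(-beta * (x_i - c_j)^2)."""
    return [[math.exp(-beta * (x - c) ** 2) for c in centers] for x in xs]

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for square systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [mr - factor * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Interpolate three samples with three Gaussian units; beta and the centers
# are fixed by hand here, where the paper's two phases would bound and then
# evolve them with a genetic algorithm.
train_x, train_y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
Phi = rbf_design(train_x, centers=train_x, beta=1.0)
w = solve(Phi, train_y)
pred = [sum(wi * p for wi, p in zip(w, row)) for row in Phi]
print(pred)   # matches train_y at the training points
```

The genetic-algorithm phase would search over `centers` and `beta`, re-solving the linear system to score each candidate.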
24

CHIU, CHINCHUAN, and MICHAEL A. SHANBLATT. "HUMAN-LIKE DYNAMIC PROGRAMMING NEURAL NETWORKS FOR DYNAMIC TIME WARPING SPEECH RECOGNITION." International Journal of Neural Systems 06, no. 01 (March 1995): 79–89. http://dx.doi.org/10.1142/s012906579500007x.

Abstract:
This paper presents a human-like dynamic programming neural network method for speech recognition using dynamic time warping. The networks are configured, much like a human's, such that the minimum states of the network's energy function represent the near-best correlation between test and reference patterns. The dynamics and properties of the neural networks are analytically explained. Simulations for classifying speaker-dependent isolated words, consisting of 0 to 9 and A to Z, show that the method is better than conventional methods. The hardware implementation of this method is also presented.
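The recurrence that the network's energy function encodes is the classic dynamic time warping dynamic program, which can be sketched directly:

```python
def dtw_distance(a, b):
    """Classic dynamic-programming dynamic time warping; the paper maps this
    recurrence onto a neural network's energy minimum instead of computing
    it directly."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # step pattern: match, insertion, deletion
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

# A time-stretched copy of a pattern aligns perfectly; a flat signal does not.
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0]))   # 0.0
print(dtw_distance([0, 1, 2, 1, 0], [2, 2, 2, 2, 2]))      # larger
```

In the speech setting, `a` and `b` would be sequences of acoustic feature frames and `cost` a frame-distance rather than a scalar difference.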
25

Liu, Dong Dong. "A Method about Load Distribution of Rolling Mills Based on RBF Neural Network." Advanced Materials Research 279 (July 2011): 418–22. http://dx.doi.org/10.4028/www.scientific.net/amr.279.418.

Abstract:
The rolling mill process is too complicated to be described by explicit formulas. RBF neural networks can establish finishing thickness and rolling force models. Traditional models are still useful as inputs to the neural network: comparing finishing models that do and do not take traditional models as input makes the importance of traditional models in neural network applications obvious. To improve the predictive precision, BP and RBF neural networks are established, and the result indicates that the load distribution model based on the RBF neural network is more accurate.
26

Sotirov, Sotir, Vassia Atanassova, Evdokia Sotirova, Lyubka Doukovska, Veselina Bureva, Deyan Mavrov, and Jivko Tomov. "Application of the Intuitionistic Fuzzy InterCriteria Analysis Method with Triples to a Neural Network Preprocessing Procedure." Computational Intelligence and Neuroscience 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/2157852.

Abstract:
The approach of InterCriteria Analysis (ICA) was applied for the aim of reducing the set of variables on the input of a neural network, taking into account the fact that their large number increases the number of neurons in the network, thus making them unusable for hardware implementation. Here, for the first time, with the help of the ICA method, correlations between triples of the input parameters for training of the neural networks were obtained. In this case, we use the approach of ICA for data preprocessing, which may yield reduction of the total time for training the neural networks, hence, the time for the network’s processing of data and images.
27

Inohira, Eiichi, and Hirokazu Yokoi. "An Optimal Design Method for Artificial Neural Networks by Using the Design of Experiments." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (July 20, 2007): 593–99. http://dx.doi.org/10.20965/jaciii.2007.p0593.

Abstract:
This paper presents a method to optimally design artificial neural networks with many design parameters using the Design of Experiments (DOE), whose features are efficient experiments using an orthogonal array and quantitative analysis by analysis of variance. Neural networks can approximate arbitrary nonlinear functions. The accuracy of a trained neural network at a certain number of learning cycles depends on its weights and biases as well as its structure and learning rate. Design methods such as trial-and-error, brute-force approaches, network construction, and pruning cannot deal with many design parameters such as the number of elements in a layer and the learning rate. Our design method realizes efficient optimization using DOE and obtains confidence in the optimal design through statistical analysis, even though trained neural networks vary due to randomness in initial weights. We apply our design method to three-layer and five-layer feedforward neural networks in a preliminary study and show that the approximation accuracy of multilayer neural networks is increased by taking many more parameters into account.
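The orthogonal-array idea can be sketched with the smallest standard array, L4(2^3), which screens three two-level factors in four runs instead of eight. The factor names and the toy error function below are illustrative only, not the paper's experiment:

```python
# L4(2^3) orthogonal array: four runs cover every pairwise combination of
# levels for three two-level factors (levels are indexed 0 and 1).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Hypothetical design factors for a network; names and levels are illustrative.
levels = {"hidden_units": (8, 32), "learning_rate": (0.1, 0.01), "layers": (1, 2)}

def toy_error(hidden_units, learning_rate, layers):
    """Stand-in for 'train the network and measure validation error'."""
    return 1.0 / hidden_units + 10.0 * learning_rate + 0.05 * layers

runs = []
for row in L4:
    cfg = {name: levels[name][level] for name, level in zip(levels, row)}
    runs.append((toy_error(**cfg), cfg))
best_error, best_cfg = min(runs, key=lambda run: run[0])
print(best_cfg)   # {'hidden_units': 32, 'learning_rate': 0.01, 'layers': 1}
```

The paper's method additionally runs analysis of variance over such runs to judge which factors matter, rather than just picking the best row.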
28

Boonsatit, Nattakan, Santhakumari Rajendran, Chee Peng Lim, Anuwat Jirawattanapanit, and Praneesh Mohandas. "New Adaptive Finite-Time Cluster Synchronization of Neutral-Type Complex-Valued Coupled Neural Networks with Mixed Time Delays." Fractal and Fractional 6, no. 9 (September 13, 2022): 515. http://dx.doi.org/10.3390/fractalfract6090515.

Abstract:
The issue of adaptive finite-time cluster synchronization corresponding to neutral-type coupled complex-valued neural networks with mixed delays is examined in this research. A neutral-type coupled complex-valued neural network with mixed delays is more general than that of a traditional neural network, since it considers distributed delays, state delays and coupling delays. In this research, a new adaptive control technique is developed to synchronize neutral-type coupled complex-valued neural networks with mixed delays in finite time. To stabilize the resulting closed-loop system, the Lyapunov stability argument is leveraged to infer the necessary requirements on the control factors. The effectiveness of the proposed method is illustrated through simulation studies.
29

Franke, Jürgen, and Michael H. Neumann. "Bootstrapping Neural Networks." Neural Computation 12, no. 8 (August 1, 2000): 1929–49. http://dx.doi.org/10.1162/089976600300015204.

Abstract:
Knowledge about the distribution of a statistical estimator is important for various purposes, such as the construction of confidence intervals for model parameters or the determination of critical values of tests. A widely used method to estimate this distribution is the so-called bootstrap, which is based on an imitation of the probabilistic structure of the data-generating process on the basis of the information provided by a given set of random observations. In this article we investigate this classical method in the context of artificial neural networks used for estimating a mapping from input to output space. We establish consistency results for bootstrap estimates of the distribution of parameter estimates.
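The bootstrap idea the article analyzes can be sketched on a toy estimator: a one-parameter least-squares slope stands in for a network's parameter estimate, and resampling the observation pairs yields an empirical distribution for it. The data and bootstrap settings below are illustrative assumptions.

```python
import random

def fit_slope(xs, ys):
    # Least-squares slope through the origin: a stand-in for a network weight.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def bootstrap_slopes(xs, ys, n_boot=500, seed=0):
    """Pairs bootstrap: resample (x, y) pairs with replacement, refit each time."""
    rng = random.Random(seed)
    n = len(xs)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        estimates.append(fit_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    return estimates

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # roughly y = 2x
slopes = sorted(bootstrap_slopes(xs, ys))
ci = (slopes[12], slopes[487])  # approximate 95% percentile interval
```

The percentile interval is the simplest of the bootstrap confidence constructions; the article's contribution is proving that such estimates are consistent when the estimator is a neural network rather than this toy slope.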
30

Duckro, Donald E., Dennis W. Quinn, and Samuel J. Gardner. "Neural Network Pruning with Tukey-Kramer Multiple Comparison Procedure." Neural Computation 14, no. 5 (May 1, 2002): 1149–68. http://dx.doi.org/10.1162/089976602753633420.

Abstract:
Reducing a neural network's complexity improves the ability of the network to generalize future examples. Like an overfitted regression function, neural networks may miss their target because of the excessive degrees of freedom stored up in unnecessary parameters. Over the past decade, the subject of pruning networks produced nonstatistical algorithms like Skeletonization, Optimal Brain Damage, and Optimal Brain Surgeon as methods to remove connections with the least salience. The method proposed here uses the bootstrap algorithm to estimate the distribution of the model parameter saliences. Statistical multiple comparison procedures are then used to make pruning decisions. We show this method compares well with Optimal Brain Surgeon in terms of ability to prune and the resulting network performance.
31

Rudenko, Oleg, Oleksandr Bezsonov, and Oleksandr Romanyk. "Neural network time series prediction based on multilayer perceptron." Development Management 17, no. 1 (May 7, 2019): 23–34. http://dx.doi.org/10.21511/dm.5(1).2019.03.

Abstract:
Until recently, the statistical approach was the main technique for solving the prediction problem. Within the framework of statistical models, the tasks of forecasting, identification of hidden periodicity in data, analysis of dependencies, risk assessment in decision making, and others are solved. The general disadvantage of statistical models is the complexity of choosing the type of model and selecting its parameters. Computational intelligence methods, among which artificial neural networks should be considered first, can serve as an alternative to statistical methods. The ability of a neural network to comprehensively process information follows from its ability to generalize and isolate hidden dependencies between input and output data. A significant advantage of neural networks is that they are capable of learning and generalizing accumulated knowledge. The article proposes a method of training neural networks for solving the problem of time series prediction. Most predictive time series tasks are characterized by high levels of nonlinearity and non-stationarity, noisiness, irregular trends, jumps, and abnormal outliers. Under these conditions, rigid statistical assumptions about the properties of the time series often limit the capabilities of classical forecasting methods. The simulation results confirmed that the proposed method of training the neural network can significantly improve the prediction accuracy of the time series.
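The windowed setup behind such neural time series prediction can be sketched with a linear autoregressor standing in for the multilayer perceptron: the sliding-window construction is the same, only the model is simplified. The toy geometric series is an illustrative assumption.

```python
def make_windows(series, lag):
    """Slide a window over the series to build (inputs, target) training pairs."""
    return [(series[i:i + lag], series[i + lag]) for i in range(len(series) - lag)]

def train_linear_ar(pairs, lag, lr=0.01, epochs=2000):
    """Stochastic gradient descent on a linear autoregressor.
    A stand-in for the paper's MLP: the same windowed data, a simpler model."""
    w = [0.0] * lag
    for _ in range(epochs):
        for x, y in pairs:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# Toy geometric series y_t = 0.9^t, which an AR(2) model can fit exactly.
series = [0.9 ** t for t in range(30)]
pairs = make_windows(series, lag=2)
w = train_linear_ar(pairs, lag=2)
# One-step-ahead forecast from the last window.
forecast = sum(wi * xi for wi, xi in zip(w, series[-2:]))
```

Replacing `train_linear_ar` with an MLP and a nonlinear activation is what lets the approach handle the nonlinear, non-stationary series the abstract describes.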
32

Sohn, Insoo. "A robust complex network generation method based on neural networks." Physica A: Statistical Mechanics and its Applications 523 (June 2019): 593–601. http://dx.doi.org/10.1016/j.physa.2019.02.046.

33

Bindi, Marco, Maria Cristina Piccirilli, Antonio Luchetta, Francesco Grasso, and Stefano Manetti. "Testability Evaluation in Time-Variant Circuits: A New Graphical Method." Electronics 11, no. 10 (May 16, 2022): 1589. http://dx.doi.org/10.3390/electronics11101589.

Abstract:
DC–DC converter fault diagnosis, executed via neural networks built by exploiting the information deriving from testability analysis, is the subject of this paper. The networks under consideration are complex-valued neural networks (CVNNs), whose fundamental feature is the proper treatment of the phase and the information contained in it. In particular, a multilayer neural network based on multi-valued neurons (MLMVN) is considered. In order to effectively design the network, testability analysis is exploited. Two possible ways of executing this analysis on DC–DC converters are proposed, taking into account the single-fault hypothesis. The theoretical foundations and some applicative examples are presented. Computer programs, based on symbolic analysis techniques, are used for both the testability analysis and the neural network training phase. The obtained results are very satisfactory and demonstrate the optimal performance of the method.
34

Duch, Wlodzislaw, Rafal Adamczak, Krzysztof Grabczewski, and Grzegorz Zal. "Hybrid Neural-global Minimization Method of Logical Rule Extraction." Journal of Advanced Computational Intelligence and Intelligent Informatics 3, no. 5 (October 20, 1999): 348–56. http://dx.doi.org/10.20965/jaciii.1999.p0348.

Abstract:
A methodology for the extraction of optimal sets of logical rules using neural networks and global minimization procedures has been developed. Initial rules are extracted using density estimation neural networks with rectangular functions or multilayer perceptron (MLP) networks trained with a constrained backpropagation algorithm, transforming MLPs into simpler networks performing logical functions. A constructive algorithm called CMLP2LN is proposed, in which rules of increasing specificity are generated consecutively by adding more nodes to the network. Neural rule extraction is followed by optimization of the rules using global minimization techniques. Estimation of the confidence of various sets of rules is discussed. The hybrid approach to rule extraction has been applied to a number of benchmark and real-life problems with very good results.
35

Bartsev, S. I., and G. M. Markova. "Decoding of stimuli time series by neural activity patterns of recurrent neural network." Journal of Physics: Conference Series 2388, no. 1 (December 1, 2022): 012052. http://dx.doi.org/10.1088/1742-6596/2388/1/012052.

Abstract:
Abstract: The study addresses the question of whether it is possible to identify the specific sequence of input stimuli received by an artificial neural network from its neural activity pattern. We used the neural activity of a simple recurrent neural network in the course of an "Even-Odd" game simulation. For identification of the input sequences we applied the method of neural network-based decoding; a multilayer decoding neural network is required for this task. The accuracy of decoding reaches up to 80%. Based on the results that 1) residual excitation levels of the recurrent network's neurons are important for processing stimuli time series, and 2) the trajectories of neural activity of recurrent networks receiving a specific input stimulus sequence form complex cycles, we claim the presence of neural activity attractors even in extremely simple neural networks. This result suggests the fundamental role of attractor dynamics in reflexive processes.
36

Hamdan, Baida Abdulredha. "Neural Network Principles and its Application." Webology 19, no. 1 (January 20, 2022): 3955–70. http://dx.doi.org/10.14704/web/v19i1/web19261.

Abstract:
Neural networks, also known as artificial neural networks, are generally a computing technique formed and designed to simulate the real human brain and serve as a problem-solving method. Artificial neural networks gain their abilities through training or learning; each method has certain inputs and outputs (also called results). Learning works by forming probability-weighted associations between inputs and results, which are stored within the net's data structure. Any training process depends on identifying the difference between the processed output, which is usually a prediction, and the real target output, which constitutes an error; a series of adjustments is then made to achieve a proper learning result. This process is called supervised learning. Artificial neural networks have proved themselves in applications across a variety of fields due to their capacity to recreate and simulate nonlinear phenomena: system identification and control (process control, vehicle control, quantum chemistry, trajectory prediction, natural resource management, etc.), in addition to face recognition, where they proved to be very effective. The neural network has proved to be a very promising technique in many fields due to its accuracy and problem-solving properties.
37

Drieiev, Oleksandr, Oleksandr Dorenskyi, and Hanna Drieieva. "Neural Network Method for Detecting Textural Anomalies in a Digital Image." Central Ukrainian Scientific Bulletin. Technical Sciences 2, no. 5(36) (2022): 335–46. http://dx.doi.org/10.32515/2664-262x.2022.5(36).2.335-346.

Abstract:
Modern computer vision systems often use neural networks to process images. But to use neural networks, one needs databases to train them, and in some cases creating a training database takes the vast majority of a project's financial and human resources. Therefore, this article considers the relevant task of finding methods to improve the quality of neural network training on small datasets. The ability to process data whose nature was not present in the original training database is also relevant. To improve the quality of image segmentation by textural anomalies, this research proposes feeding the neural network not only the image but also its local statistics, which increases the information content of the network's input: the network does not need to learn to select statistical features but can simply use them. The investigation classifies the requirements for image segmentation systems that indicate atypical texture anomalies. A literature analysis revealed various methods and algorithms for solving such problems. As a result, this work summarizes the process of finding features in an image in stages. The division of the feature search into stages allowed requirements to be formed for the methods and algorithms at each stage, separating out the transformation of image fragments into feature vectors by an artificial neural network (an autoencoder trained on a separate image); statistical features then supplement the feature vector of the image fragment. Numerous experiments have shown that the generated feature vectors improve the classification result for an artificial Kohonen neural network, which is able to detect atypical image fragments.
38

Nikiforov, Aleksandr, Aleksei Kuchumov, Sergei Terentev, Inessa Karamulina, Iraida Romanova, and Sergei Glushakov. "Neural network method as means of processing experimental data on grain crop yields." E3S Web of Conferences 161 (2020): 01031. http://dx.doi.org/10.1051/e3sconf/202016101031.

Abstract:
In this work, based on agroecological and technological testing of grain crop varieties of domestic and foreign breeding (winter triticale in particular), conducted on the experimental field of the Smolensk State Agricultural Academy between 2015 and 2019, we present the methodology and results of processing the experimental data used for constructing the neural network model. Neural networks are applicable to tasks that are difficult for computers of traditional design and humans alike: processing large volumes of experimental data, automation of image recognition, approximation of functions, and prognosis. Working with neural networks includes analyzing subject areas and the weight coefficients of neurons, detecting conflicting samples and outliers, normalizing data, determining the number of samples required for training a neural network and increasing the learning quality when their number is insufficient, as well as selecting the neural network type and decomposition based on the number of input neurons. We consider the technology of initial data processing and selection of the optimal neural network structure that allows modeling errors to be significantly reduced in comparison with neural networks created from unprepared source data. Our accumulated experience of working with neural networks has demonstrated encouraging results, which indicates the prospects of this area, especially when describing processes with large numbers of variables. In order to verify the resulting neural network model, we carried out a computational experiment, which showed the possibility of applying the scientific results in practice.
39

Meng, Ya Feng, Sai Zhu, and Rong Li Han. "A Fault Diagnosis Method Based on Combination of Neural Network and Fault Dictionary." Advanced Materials Research 765-767 (September 2013): 2078–81. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.2078.

Abstract:
The neural network and the fault dictionary are two very useful fault diagnosis methods. But for large-scale and complex circuits, the fault dictionary is huge, and the speed of fault searching affects the efficiency of real-time diagnosis. When fault samples are few, it is difficult to train the neural network, and the trained neural network cannot diagnose all the faults. In this paper, a new fault diagnosis method based on a combination of a neural network and a fault dictionary is introduced. The large fault dictionary is divided into several smaller sub-dictionaries, and a search index over the sub-dictionaries is organized with neural networks trained on them. The complexity of training the neural networks is reduced, and the method exploits the neural network's ability to accurately describe the relation between input data and the corresponding target, organizing the index as a multilayer binary tree of neural networks. Through this index, the search scope is greatly reduced, the search speed is raised, and the efficiency of real-time diagnosis is improved. Finally, the validity of the method is proved by experimental results.
40

Behboodi, Bahareh, Sung-Ho Lim, Miguel Luna, Hyeon-Ae Jeon, and Ji-Woong Choi. "Artificial and convolutional neural networks for assessing functional connectivity in resting-state functional near infrared spectroscopy." Journal of Near Infrared Spectroscopy 27, no. 3 (March 21, 2019): 191–205. http://dx.doi.org/10.1177/0967033519836623.

Abstract:
Functional connectivity derived from resting-state functional near infrared spectroscopy has gained the attention of recent scholars because of its capability to provide valuable insight into intrinsic networks and various neurological disorders in the human brain. Several progressive methodologies for detecting resting-state functional connectivity patterns in functional near infrared spectroscopy, such as seed-based correlation analysis and independent component analysis as the most widely used methods, were adopted in previous studies. Although these two methods provide complementary information to each other, the conventional seed-based method shows degraded performance compared to the independent component analysis-based scheme in terms of sensitivity and specificity. In this study, an artificial neural network and a convolutional neural network were utilized to overcome the performance degradation of the conventional seed-based method. First, the results of the artificial neural network- and convolutional neural network-based methods illustrated superior performance in terms of specificity and sensitivity compared to both conventional approaches. Second, the artificial neural network, convolutional neural network, and independent component analysis methods showed more robustness than the seed-based method. Moreover, resting-state functional connectivity patterns derived from the artificial neural network- and convolutional neural network-based methods in sensorimotor and motor areas were consistent with previous findings. The main contribution of the present work is to emphasize that artificial neural networks as well as convolutional neural networks can be exploited as a high-performance seed-based method to estimate the temporal relation among brain networks during the resting state.
41

Schwenk, Holger, and Yoshua Bengio. "Boosting Neural Networks." Neural Computation 12, no. 8 (August 1, 2000): 1869–87. http://dx.doi.org/10.1162/089976600300015178.

Abstract:
Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, AdaBoost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether AdaBoost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the AdaBoost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by AdaBoost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4% error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5% error on the UCI letters and 8.1% error on the UCI satellite data set, which is significantly better than boosted decision trees.
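The reweighting loop at the heart of AdaBoost can be sketched with decision stumps as base learners (the article's base learners are neural networks; a stump keeps the sketch short). The toy 1-D dataset is an illustrative assumption.

```python
import math

def stump_predict(threshold, sign, x):
    # A decision stump: +sign above the threshold, -sign below.
    return sign if x > threshold else -sign

def best_stump(xs, ys, weights):
    """Base learner: the stump minimizing weighted classification error."""
    best = None
    for threshold in sorted(set(xs)):
        for sign in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if stump_predict(threshold, sign, x) != y)
            if best is None or err < best[0]:
                best = (err, threshold, sign)
    return best

def adaboost(xs, ys, rounds=5):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, threshold, sign = best_stump(xs, ys, weights)
        err = max(err, 1e-10)  # avoid division by zero for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, threshold, sign))
        # Reweight: misclassified points get more attention next round.
        weights = [w * math.exp(-alpha * y * stump_predict(threshold, sign, x))
                   for w, x, y in zip(weights, xs, ys)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(alpha * stump_predict(t, s, x) for alpha, t, s in ensemble)
    return 1 if score >= 0 else -1

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys)
preds = [predict(model, x) for x in xs]
```

Swapping `best_stump` for a neural network trained on the weighted (or resampled) data gives the variants the article compares.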
42

Teslyuk, Vasyl, Artem Kazarian, Natalia Kryvinska, and Ivan Tsmots. "Optimal Artificial Neural Network Type Selection Method for Usage in Smart House Systems." Sensors 21, no. 1 (December 24, 2020): 47. http://dx.doi.org/10.3390/s21010047.

Abstract:
In the operation of "smart" house systems, there is a need to process fuzzy input data. Models based on artificial neural networks are used to process fuzzy input data from the sensors. However, each artificial neural network has certain advantages and, with different accuracy, allows one to process different types of data and generate control signals. To solve this problem, a method of choosing the optimal type of artificial neural network has been proposed. It is based on solving an optimization problem, where the optimization criterion is the error of a certain type of artificial neural network determined to control the corresponding subsystem of a "smart" house. In the process of training the different types of artificial neural networks, the same historical input data are used. The research presents the dependencies between the types of neural networks, the number of inner layers of the artificial neural network, the number of neurons on each inner layer, and the error in calculating the settings parameters relative to the expected results.
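The selection criterion described above (minimum error over candidate network types on the same historical data) reduces to a small comparison loop. The candidate models and held-out data below are hypothetical stand-ins for trained ANN types and sensor histories.

```python
def validation_error(model, data):
    """Mean squared error of a candidate model on held-out (input, target) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def select_model(candidates, data):
    """Pick the candidate with minimum validation error, mirroring the paper's
    optimization criterion (candidates stand in for ANN types)."""
    errors = {name: validation_error(m, data) for name, m in candidates.items()}
    best = min(errors, key=errors.get)
    return best, errors

# Hypothetical held-out data from y = 2x, and three candidate models.
data = [(x, 2.0 * x) for x in range(1, 6)]
candidates = {
    "linear": lambda x: 2.0 * x,
    "constant": lambda x: 5.0,
    "quadratic": lambda x: x * x,
}
best, errors = select_model(candidates, data)
```

In the paper's setting each candidate would be a trained network (differing in type, layer count, and neurons per layer) evaluated on the same historical data.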
43

Ruan, Kun, Shun Zhao, Xueqin Jiang, Yixuan Li, Jianbo Fei, Dinghua Ou, Qiang Tang, Zhiwei Lu, Tao Liu, and Jianguo Xia. "A 3D Fluorescence Classification and Component Prediction Method Based on VGG Convolutional Neural Network and PARAFAC Analysis Method." Applied Sciences 12, no. 10 (May 12, 2022): 4886. http://dx.doi.org/10.3390/app12104886.

Abstract:
Three-dimensional fluorescence is currently studied by methods such as parallel factor analysis (PARAFAC), fluorescence regional integration (FRI), and principal component analysis (PCA). Many studies now also combine convolutional neural networks, but no single method is recognized as the most effective among those combining convolutional neural networks with 3D fluorescence analysis. Based on this, we took samples from the actual environment for measuring 3D fluorescence data and obtained a batch of public datasets from the internet. First, we preprocessed the data (including the two steps of PARAFAC analysis and CNN dataset generation), and then we proposed a 3D fluorescence classification method and a component prediction method based on the VGG16 and VGG11 convolutional neural networks. The VGG16 network is used for the classification of 3D fluorescence data with a training accuracy of 99.6% (the same as the PCA + SVM method, 99.6%). Among the component-map fitting networks, we comprehensively compared the improved LeNet network, the improved AlexNet network, and the improved VGG11 network, and finally selected the improved VGG11 network as the component-map fitting network. In training the improved VGG11 network, we used the MSE loss function and cosine similarity to judge the merit of the model; the MSE loss of the network training reached 4.6 × 10−4 (characterizing the deviation of the training results from the actual results), and with cosine similarity as the accuracy criterion, the cosine similarity of the training results reached 0.99 (comparing the training results with the actual results). The network performance is excellent. The experiments demonstrate that convolutional neural networks have great application in 3D fluorescence analysis.
44

Singh, S., and A. K. Ghosh. "Estimation of lateral-directional parameters using neural networks based modified delta method." Aeronautical Journal 111, no. 1124 (October 2007): 659–67. http://dx.doi.org/10.1017/s0001924000004838.

Abstract:
Abstract: The aim of the study described herein was to develop and verify an efficient neural network-based method for extracting aircraft stability and control derivatives from real flight data using feed-forward neural networks. The proposed method (the Modified Delta method) draws its inspiration from the feed-forward neural network-based Delta method for estimating stability and control derivatives. The neural network is trained using differential variations of aircraft motion/control variables and coefficients as the network inputs and outputs, respectively. For the purpose of parameter estimation, the trained neural network is presented with a suitably modified input file, and the corresponding predicted output file of aerodynamic coefficients is obtained. An appropriate interpretation and manipulation of such input-output files yields the estimates of the parameters. The method is validated first on simulated flight data, using various combinations and types of real-flight control inputs, and then on real flight data. A new technique is also proposed for validating the estimated parameters using feed-forward neural networks.
45

Hu, Gang, Chahna Dixit, and Guanqiu Qi. "Discriminative Shape Feature Pooling in Deep Neural Networks." Journal of Imaging 8, no. 5 (April 20, 2022): 118. http://dx.doi.org/10.3390/jimaging8050118.

Abstract:
Although deep learning approaches are able to generate generic image features from massive labeled data, discriminative handcrafted features still have advantages in providing explicit domain knowledge and reflecting intuitive visual understanding. Much of the existing research focuses on integrating both handcrafted features and deep networks to leverage the benefits. However, the issues of parameter quality have not been effectively solved in existing applications of handcrafted features in deep networks. In this research, we propose a method that enriches deep network features by utilizing the injected discriminative shape features (generic edge tokens and curve partitioning points) to adjust the network’s internal parameter update process. Thus, the modified neural networks are trained under the guidance of specific domain knowledge, and they are able to generate image representations that incorporate the benefits from both handcrafted and deep learned features. The comparative experiments were performed on several benchmark datasets. The experimental results confirmed our method works well on both large and small training datasets. Additionally, compared with existing models using either handcrafted features or deep network representations, our method not only improves the corresponding performance, but also reduces the computational costs.
46

Camacho, Jose David, Carlos Villaseñor, Carlos Lopez-Franco, and Nancy Arana-Daniel. "Neuroplasticity-Based Pruning Method for Deep Convolutional Neural Networks." Applied Sciences 12, no. 10 (May 13, 2022): 4945. http://dx.doi.org/10.3390/app12104945.

Abstract:
In this paper, a new pruning strategy based on the neuroplasticity of biological neural networks is presented. The novel pruning algorithm is inspired by the knowledge-remapping ability of the cerebral cortex after injuries. Thus, it is proposed to simulate induced injuries in the network by pruning full convolutional layers or entire blocks, assuming that the knowledge from the removed segments of the network may be remapped and compressed during the recovery (retraining) process. To reconnect the remaining segments of the network, a translator block is introduced. The translator is composed of a pooling layer and a convolutional layer. The pooling layer is optional and is placed to ensure that the spatial dimensions of the feature maps match across the pruned segments. After that, a convolutional layer (simulating the intact cortex) is placed to ensure that the depths of the feature maps match and is used to remap the removed knowledge. As a result, lightweight, efficient and accurate sub-networks are created from the base models. Comparison analysis shows that in our approach it is not necessary to define a threshold or metric as the criterion to prune the network, in contrast to other pruning methods. Instead, only the origin and destination of the pruning and reconnection points must be determined for the translator connection.
47

A.T.C. Goh and C.G. Chua. "Nonlinear modeling with confidence estimation using Bayesian neural networks." Electronic Journal of Structural Engineering 4 (January 1, 2004): 108–18. http://dx.doi.org/10.56748/ejse.445.

Abstract:
There is a growing interest in the use of neural networks in civil engineering to model complicated nonlinearity problems. A recent enhancement to the conventional back-propagation neural network algorithm is the adoption of a Bayesian inference procedure that provides good generalization and a statistical approach to deal with data uncertainty. A review of the Bayesian approach for neural network learning is presented. One distinct advantage of this method over the conventional back-propagation method is that the algorithm is able to provide assessments of the confidence associated with the network’s predictions. Two examples are presented to demonstrate the capabilities of this algorithm. A third example considers the practical application of the Bayesian neural network approach for analyzing the ultimate shear strength of deep beams.
48

Luqman Ibrahim, Alaa, and Mohammed Guhdar Mohammed. "A new three-term conjugate gradient method for training neural networks with global convergence." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 1 (October 1, 2022): 551. http://dx.doi.org/10.11591/ijeecs.v28.i1.pp551-558.

Abstract:
Conjugate gradient (CG) methods constitute excellent neural network training methods owing to their simplicity, flexibility, numerical efficiency, and low memory requirements. In this paper, we introduce a new three-term conjugate gradient method for solving optimization problems and test it on artificial neural networks (ANN) by training a feed-forward neural network. The new method satisfies the descent condition and the sufficient descent condition. The global convergence of the new (NTTCG) method has been established. The results of numerical experiments on some well-known test functions show that our new modified method is very effective in terms of the number of function evaluations and the number of iterations; numerical results for training feed-forward neural networks are also included, in comparison with another well-known method in this field.
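A three-term CG direction of the general form d+ = -g+ + beta*d - theta*y can be sketched on a small quadratic. The beta/theta pair below (a Hestenes-Stiefel-style choice that makes d+ . g+ = -|g+|^2, i.e., guarantees the descent condition automatically) is an illustrative assumption, not necessarily the paper's NTTCG formulas.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def three_term_cg(A, b, w, tol=1e-10, max_iter=50):
    """Minimize f(w) = 0.5 w^T A w - b^T w using a three-term CG direction
    d+ = -g+ + beta*d - theta*y, where y = g+ - g.  With this beta/theta pair,
    d+ . g+ = -|g+|^2 holds by construction, so every direction is a descent
    direction."""
    g = [gi - bi for gi, bi in zip(matvec(A, w), b)]
    d = [-gi for gi in g]
    for _ in range(max_iter):
        if dot(g, g) < tol:
            break
        Ad = matvec(A, d)
        alpha = -dot(g, d) / dot(d, Ad)   # exact line search on the quadratic
        w = [wi + alpha * di for wi, di in zip(w, d)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, w), b)]
        y = [a - c for a, c in zip(g_new, g)]
        dy = dot(d, y)
        beta = dot(g_new, y) / dy         # Hestenes-Stiefel-style beta
        theta = dot(g_new, d) / dy        # weight of the third term
        d = [-gn + beta * di - theta * yi for gn, di, yi in zip(g_new, d, y)]
        g = g_new
    return w

A = [[2.0, 0.0], [0.0, 4.0]]
b = [2.0, -8.0]                 # minimizer is A^{-1} b = (1, -2)
w = three_term_cg(A, b, [0.0, 0.0])
```

Training a network replaces the quadratic with the network's loss and the exact line search with an inexact one (e.g., Wolfe conditions); the built-in descent property is what such three-term schemes are designed to preserve in that setting.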
49

Rozen, Tal, Moshe Kimhi, Brian Chmiel, Avi Mendelson, and Chaim Baskin. "Bimodal-Distributed Binarized Neural Networks." Mathematics 10, no. 21 (November 3, 2022): 4107. http://dx.doi.org/10.3390/math10214107.

Abstract:
Binary neural networks (BNNs) are an extremely promising method for significantly reducing deep neural networks' complexity and power consumption. Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for approximating the sign function during the forward and backward phases to reduce the quantization error of the binarization process. In this work, we propose a bimodal-distributed binarization method (BD-BNN). The newly proposed technique aims to impose a bimodal distribution of the network weights by kurtosis regularization. The proposed method consists of a teacher-trainer training scheme termed weight distribution mimicking (WDM), which efficiently imitates the full-precision network weight distribution in its binary counterpart. Preserving this distribution during binarization-aware training creates robust and informative binary feature maps and thus can significantly reduce the generalization error of the BNN. Extensive evaluations on CIFAR-10 and ImageNet demonstrate that our newly proposed BD-BNN outperforms current state-of-the-art schemes.
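The kurtosis-regularization idea can be sketched directly: compute the weights' kurtosis and penalize its distance from a bimodal-friendly target (a symmetric two-spike distribution has kurtosis 1, versus 3 for a Gaussian). The target value and penalty form below are illustrative assumptions, not BD-BNN's exact loss.

```python
def kurtosis(ws):
    """Sample kurtosis E[(w - mean)^4] / var^2: 3 for a Gaussian, 1 for a
    symmetric two-spike distribution (the binary-friendly bimodal shape)."""
    n = len(ws)
    mean = sum(ws) / n
    var = sum((w - mean) ** 2 for w in ws) / n
    m4 = sum((w - mean) ** 4 for w in ws) / n
    return m4 / (var ** 2)

def kurtosis_penalty(ws, target=1.0):
    # Hypothetical regularization term: push the weights' kurtosis toward the
    # bimodal regime, so they binarize with less quantization error.
    return (kurtosis(ws) - target) ** 2

bimodal = [-1.0] * 50 + [1.0] * 50              # ideal binary-like weights
spread = [i / 50.0 - 1.0 for i in range(101)]   # roughly uniform weights
```

Added to the task loss during binarization-aware training, such a term rewards weight tensors that already cluster around two modes.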
50

Li, Jian Fu, and Ying Sheng Su. "Index Tracking Method Based on the Neural Networks and its Empirical Study." Advanced Materials Research 267 (June 2011): 974–78. http://dx.doi.org/10.4028/www.scientific.net/amr.267.974.

Abstract:
In this paper, a neural network is used to improve the asset allocation structure of an index tracking portfolio. The empirical results show that the performance of index tracking based on neural networks is better than that of other methods mentioned in the literature. The index tracking approach based on neural networks is a good method for the index tracking problem.