Journal articles on the topic 'Backpropagation of error'


Consult the top 50 journal articles for your research on the topic 'Backpropagation of error.'


1

Refenes, A. N., and C. Alippi. "Histological image understanding by error backpropagation." Microprocessing and Microprogramming 32, no. 1-5 (August 1991): 437–46. http://dx.doi.org/10.1016/0165-6074(91)90383-5.

2

Zheng, Pengsheng, Jianxiong Zhang, and Wansheng Tang. "Learning Associative Memories by Error Backpropagation." IEEE Transactions on Neural Networks 22, no. 3 (March 2011): 347–55. http://dx.doi.org/10.1109/tnn.2010.2099239.

3

Ong, H. C., and S. H. Quah. "Error backpropagation using least absolute criterion." International Journal of Computer Mathematics 82, no. 3 (March 2005): 301–12. http://dx.doi.org/10.1080/0020716042000301743.

4

Tesauro, Gerald, Yu He, and Subutai Ahmad. "Asymptotic Convergence of Backpropagation." Neural Computation 1, no. 3 (September 1989): 382–91. http://dx.doi.org/10.1162/neco.1989.1.3.382.

Abstract:
We calculate analytically the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. For networks without hidden units using the standard quadratic error function and a sigmoidal transfer function, we find that the error decreases as 1/t for large t, and the output states approach their target values as 1/√t. It is possible to obtain a different convergence rate for certain error and transfer functions, but the convergence can never be faster than 1/t. These results are unaffected by a momentum term in the learning algorithm, but convergence can be substantially improved by an adaptive learning rate scheme. For networks with hidden units, we generally expect the same rate of convergence to be obtained as in the single-layer case; however, under certain circumstances one can obtain a polynomial speed-up for nonsigmoidal units, or a logarithmic speed-up for sigmoidal units. Our analytic results are confirmed by empirical measurements of the convergence rate in numerical simulations.
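The slow late-time decay described above can be glimpsed in a toy experiment: gradient descent on a single sigmoid unit with quadratic error and a target at the sigmoid's asymptote. This is an illustrative sketch only; the unit, data point, and learning rate are our own choices, not the paper's setup.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(steps=2000, lr=0.5):
    """Gradient descent on E = 0.5*(y - target)^2 for one sigmoid unit."""
    w, b = 0.0, 0.0
    x, target = 1.0, 1.0   # target at the sigmoid asymptote
    errors = []
    for _ in range(steps):
        y = sigmoid(w * x + b)
        errors.append(0.5 * (y - target) ** 2)
        grad = (y - target) * y * (1 - y)   # dE/d(net input)
        w -= lr * grad * x
        b -= lr * grad
    return errors

errors = train()
# The error keeps shrinking but ever more slowly, consistent with the
# power-law (rather than exponential) convergence analysed in the paper.
print(errors[0], errors[-1])
```

Plotting the recorded errors against 1/t would make the qualitative behaviour visible.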
5

Oh, Sang-Hoon. "Improving the error backpropagation algorithm with a modified error function." IEEE Transactions on Neural Networks 8, no. 3 (May 1997): 799–803. http://dx.doi.org/10.1109/72.572117.

6

Gogoi, Munmi, Ashim Jyoti Gogoi, and Shahin Ara Begum. "Optimizing Error Function of Backpropagation Neural Network." International Journal of Computer Sciences and Engineering 7, no. 4 (April 30, 2019): 1011–16. http://dx.doi.org/10.26438/ijcse/v7i4.10111016.

7

Moussa, M., M. Elaraby, and A. Koutb. "Learning Using Error Backpropagation: A New Version." International Conference on Aerospace Sciences and Aviation Technology 9, ASAT Conference, 8-10 May 2001 (May 1, 2001): 1–13. http://dx.doi.org/10.21608/asat.2001.31148.

8

Moussa, M., M. Elaraby, and A. Koutb. "Learning Using Error Backpropagation: A New Version." International Conference on Aerospace Sciences and Aviation Technology 9, no. 9 (May 1, 2001): 959–71. http://dx.doi.org/10.21608/asat.2001.59777.

9

Widder, D. R., and M. A. Fiddy. "High performance learning by modified error backpropagation." Neural Computing & Applications 1, no. 3 (September 1993): 183–87. http://dx.doi.org/10.1007/bf01414945.

10

Suyatno, Suyatno, Sisno Riyoko, and R. Hadapiningradja Kusumodestoni. "PREDIKSI BISNIS FOREX MENGGUNAKAN MODEL NEURAL NETWORK BERBASIS ADA BOOST MENGGUNAKAN 2047 DATA." Simetris : Jurnal Teknik Mesin, Elektro dan Ilmu Komputer 7, no. 2 (November 1, 2016): 483. http://dx.doi.org/10.24176/simet.v7i2.758.

Abstract:
After conducting the research and experiments, the first result, obtained with the Backpropagation Neural Network algorithm on 268 data points, shows a prediction error of 0.758619403 for 5-minute-ahead predictions; with 2047 data points the prediction error is 0.500161212. The second result, obtained by adding the AdaBoost optimization algorithm to the training process on top of Backpropagation Neural Network learning, shows a 5-minute-ahead prediction error of 0.397014925 with 268 data points and 0.099951148 with 2047 data points. The first stage of the study, through to testing, computed the prediction error with the MSE (Mean Square Error) formula, using the AdaBoost optimization algorithm to address the problem that the error of the Backpropagation Neural Network algorithm needed to be lowered for prediction accuracy to increase; the second stage repeated the experiments with more data than the first. Based on these results, it can be concluded that the Neural Network algorithm alone is less accurate than AdaBoost optimization in training combined with a Neural Network, as shown by the lower MSE of the AdaBoost + neural network method, and that using more data lowers the MSE, thereby improving prediction accuracy in the forex trading business. Keywords: forex, trading, neural network, adaboost, central capital futures.
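The error figures quoted in this abstract are mean squared errors; for reference, a minimal sketch of the MSE computation it refers to (the sample values are invented):

```python
def mse(pred, actual):
    """Mean squared error between predictions and actual values."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)

print(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))   # (0 + 0.25 + 1.0) / 3 ≈ 0.4167
```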
11

Pineda, Fernando J. "Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation." Neural Computation 1, no. 2 (June 1989): 161–72. http://dx.doi.org/10.1162/neco.1989.1.2.161.

Abstract:
Error backpropagation in feedforward neural network models is a popular learning algorithm that has its roots in nonlinear estimation and optimization. It is being used routinely to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. However, the classical architecture for backpropagation has severe restrictions. The extension of backpropagation to networks with recurrent connections will be reviewed. It is now possible to efficiently compute the error gradients for networks that have temporal dynamics, which opens applications to a host of problems in systems identification and control.
12

Knoblauch, Andreas. "Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification." Neural Computation 33, no. 8 (July 26, 2021): 2193–225. http://dx.doi.org/10.1162/neco_a_01407.

Abstract:
Supervised learning corresponds to minimizing a loss or cost function expressing the differences between model predictions y_n and the target values t_n given by the training data. In neural networks, this means backpropagating error signals through the transposed weight matrices from the output layer toward the input layer. For this, error signals in the output layer are typically initialized by the difference y_n - t_n, which is optimal for several commonly used loss functions like cross-entropy or sum of squared errors. Here I evaluate a more general error initialization method using power functions |y_n - t_n|^q for q > 0, corresponding to a new family of loss functions that generalize cross-entropy. Surprisingly, experiments on various learning tasks reveal that a proper choice of q can significantly improve the speed and convergence of backpropagation learning, in particular in deep and recurrent neural networks. The results suggest two main reasons for the observed improvements. First, compared to cross-entropy, the new loss functions provide better fits to the distribution of error signals in the output layer and therefore maximize the model's likelihood more efficiently. Second, the new error initialization procedure may often provide a better gradient-to-loss ratio over a broad range of neural output activity, thereby avoiding flat loss landscapes with vanishing gradients.
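The initialization rule evaluated above is easy to state in code. A minimal sketch, assuming the signed power form sign(y_n - t_n) * |y_n - t_n|^q; the function name and demo values are ours, not the paper's code:

```python
import math

def error_init(y, t, q=1.0):
    """Generalized output-layer error signal; q = 1 recovers y - t."""
    d = y - t
    return math.copysign(abs(d) ** q, d) if d != 0 else 0.0

# q = 1 is the classical initialization (optimal for cross-entropy / SSE):
print(error_init(0.8, 1.0, q=1.0))                        # ≈ -0.2
# q < 1 amplifies small residuals, q > 1 damps them:
print(error_init(0.8, 1.0, q=0.5), error_init(0.8, 1.0, q=2.0))
```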
13

Lin, Jyh-Woei, Chun-Tang Chao, and Juing-Shian Chiou. "Backpropagation neural network as earthquake early warning tool using a new modified elementary Levenberg–Marquardt Algorithm to minimise backpropagation errors." Geoscientific Instrumentation, Methods and Data Systems 7, no. 3 (August 16, 2018): 235–43. http://dx.doi.org/10.5194/gi-7-235-2018.

Abstract:
A new modified elementary Levenberg–Marquardt Algorithm (M-LMA) was used to minimise backpropagation errors in training a backpropagation neural network (BPNN) to predict the records related to the Chi-Chi earthquake from four seismic stations: Station-TAP003, Station-TAP005, Station-TCU084, and Station-TCU078, belonging to the Free Field Strong Earthquake Observation Network, with learning rates of 0.3, 0.05, 0.2, and 0.28, respectively. For these four recording stations, the M-LMA produced smaller prediction errors than the Levenberg–Marquardt Algorithm (LMA). A sudden prediction error can serve as an indicator for earthquake early warning (EEW), signalling the onset of strong motion due to large earthquakes. A trade-off decision-making process with BPNN (TDPB), using two alarms, adjusted the threshold on the magnitude of the prediction error to avoid false alarms. With this approach, it is unnecessary to characterise wave phases or perform pre-processing, and no complex hardware is required; an existing seismic monitoring network covering the research area is sufficient for these purposes.
14

Scellier, Benjamin, and Yoshua Bengio. "Equivalence of Equilibrium Propagation and Recurrent Backpropagation." Neural Computation 31, no. 2 (February 2019): 312–29. http://dx.doi.org/10.1162/neco_a_01160.

Abstract:
Recurrent backpropagation and equilibrium propagation are supervised learning algorithms for fixed-point recurrent neural networks, which differ in their second phase. In the first phase, both algorithms converge to a fixed point that corresponds to the configuration where the prediction is made. In the second phase, equilibrium propagation relaxes to another nearby fixed point corresponding to smaller prediction error, whereas recurrent backpropagation uses a side network to compute error derivatives iteratively. In this work, we establish a close connection between these two algorithms. We show that at every moment in the second phase, the temporal derivatives of the neural activities in equilibrium propagation are equal to the error derivatives computed iteratively by recurrent backpropagation in the side network. This work shows that it is not required to have a side network for the computation of error derivatives and supports the hypothesis that in biological neural networks, temporal derivatives of neural activities may code for error signals.
15

Rimer, Michael, and Tony Martinez. "CB3: An Adaptive Error Function for Backpropagation Training." Neural Processing Letters 24, no. 1 (August 2006): 81–92. http://dx.doi.org/10.1007/s11063-006-9014-9.

16

Yu, X. H. "Can backpropagation error surface not have local minima." IEEE Transactions on Neural Networks 3, no. 6 (1992): 1019–21. http://dx.doi.org/10.1109/72.165604.

17

Wang, X. G., Z. Tang, H. Tamura, and M. Ishii. "A modified error function for the backpropagation algorithm." Neurocomputing 57 (March 2004): 477–84. http://dx.doi.org/10.1016/j.neucom.2003.12.006.

18

Nápoles, Gonzalo, Frank Vanhoenshoven, Rafael Falcon, and Koen Vanhoof. "Nonsynaptic Error Backpropagation in Long-Term Cognitive Networks." IEEE Transactions on Neural Networks and Learning Systems 31, no. 3 (March 2020): 865–75. http://dx.doi.org/10.1109/tnnls.2019.2910555.

19

McKennoch, Sam, Thomas Voegtlin, and Linda Bushnell. "Spike-Timing Error Backpropagation in Theta Neuron Networks." Neural Computation 21, no. 1 (January 2009): 9–45. http://dx.doi.org/10.1162/neco.2009.09-07-610.

Abstract:
The main contribution of this letter is the derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents; this assumption departs from other related models such as SpikeProp and Tempotron learning. Our results clearly show that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate-and-fire neurons. Networks trained with our multilayer training rule are shown to have similar generalization abilities for spike latency pattern classification as Tempotron learning. The rule is also able to train networks to perform complex regression tasks that neither SpikeProp nor Tempotron learning appears capable of.
20

Ghozali, Muhammad Imam. "RANTAI PASOK BERAS PADA BULOG BERBASIS NEURAL NETWORK." Simetris : Jurnal Teknik Mesin, Elektro dan Ilmu Komputer 7, no. 2 (November 1, 2016): 743. http://dx.doi.org/10.24176/simet.v7i2.790.

Abstract:
As the most important institution for maintaining food security in Indonesia, the public corporation (Perum) Badan Urusan Logistik (BULOG) has since its founding been tasked with supplying foodstuffs, so BULOG's knowledge and experience in managing the supply chain of food and other agricultural products should be dependable. Yet BULOG still faces highly complex problems, ranging from the supply of unmilled rice at the farmer level, through the milling process at the miller level, to the distribution of rice to consumers. As the main food commodity, rice is therefore not only an economic issue but also a political one. Data mining can help in predicting such a system, and is applied in this study to make the predictions more precise and accurate. The technique used is the backpropagation neural network, and the study proceeds in several stages: collecting historical data, data processing, the proposed model or method, experiments on the model, and evaluation and validation of the results. The analysis shows that the model has a small error rate, referred to in backpropagation as the mean square error (MSE). It is concluded that data mining with a backpropagation neural network can produce a minimal error and is therefore precise and accurate for determining the amount of the rice supply in the following year. Keywords: rice supply, supply chain, data mining, backpropagation neural network, mean square error.
21

Zhao, Guoyan, Meng Wang, and Weizhang Liang. "A Comparative Study of SSA-BPNN, SSA-ENN, and SSA-SVR Models for Predicting the Thickness of an Excavation Damaged Zone around the Roadway in Rock." Mathematics 10, no. 8 (April 18, 2022): 1351. http://dx.doi.org/10.3390/math10081351.

Abstract:
Due to the disturbance effect of excavation, the original stress is redistributed, resulting in an excavation damaged zone around the roadway. It is significant to predict the thickness of an excavation damaged zone because it directly affects the stability of roadways. This study used a sparrow search algorithm to improve a backpropagation neural network, and an Elman neural network and support vector regression models to predict the thickness of an excavation damaged zone. Firstly, 209 cases with four indicators were collected from 34 mines. Then, the sparrow search algorithm was used to optimize the parameters of the backpropagation neural network, Elman neural network, and support vector regression models. According to the optimal parameters, these three predictive models were established based on the training set (80% of the data). Finally, the test set (20% of the data) was used to verify the reliability of each model. The mean absolute error, coefficient of determination, Nash–Sutcliffe efficiency coefficient, mean absolute percentage error, Theil’s U value, root-mean-square error, and the sum of squares error were used to evaluate the predictive performance. The results showed that the sparrow search algorithm improved the predictive performance of the traditional backpropagation neural network, Elman neural network, and support vector regression models, and the sparrow search algorithm–backpropagation neural network model had the best comprehensive prediction performance. The mean absolute error, coefficient of determination, Nash–Sutcliffe efficiency coefficient, mean absolute percentage error, Theil’s U value, root-mean-square error, and sum of squares error of the sparrow search algorithm–backpropagation neural network model were 0.1246, 0.9277, −1.2331, 8.4127%, 0.0084, 0.1636, and 1.1241, respectively. The proposed model could provide a reliable reference for the thickness prediction of an excavation damaged zone, and was helpful in the risk management of roadway stability.
22

Fan, Sijie, and Yaqun Zhao. "Analysis of DES Plaintext Recovery Based on BP Neural Network." Security and Communication Networks 2019 (November 11, 2019): 1–5. http://dx.doi.org/10.1155/2019/9580862.

Abstract:
The backpropagation neural network is one of the most widely used neural network algorithms. It uses the output error to estimate the error of the layer directly preceding the output layer, so that the error of each layer can be obtained through layer-by-layer backpropagation. The purpose of this paper is to simulate the decryption process of DES with the backpropagation algorithm. By inputting a large number of plaintext and ciphertext pairs, a neural network simulator for the decryption of the target cipher is constructed, and the given ciphertext is decrypted. The paper describes in detail how the backpropagation neural network classifier is modified and applied to building the regression analysis model. The experimental results show that the neural network model built in this paper restores plaintext well, with a fitting rate above 90% compared with the true plaintext.
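The layer-by-layer propagation this abstract describes is the textbook backpropagation recursion: form the output-layer delta first, then push it back through the weights. A minimal two-layer example with a finite-difference gradient check; the sizes, data, and names are illustrative, not from the paper:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# one hidden layer (2 units) and one output unit, biases omitted for brevity
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(2)]

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)))
    return h, y

def backward(x, t):
    """Return gradients of E = 0.5*(y - t)^2 via layer-by-layer deltas."""
    h, y = forward(x)
    delta_out = (y - t) * y * (1 - y)                   # output-layer error
    delta_hid = [delta_out * w2[j] * h[j] * (1 - h[j])  # pushed back one layer
                 for j in range(2)]
    gW1 = [[delta_hid[j] * x[i] for i in range(2)] for j in range(2)]
    gw2 = [delta_out * h[j] for j in range(2)]
    return gW1, gw2

# finite-difference check of one first-layer gradient entry
x, t, eps = [1.0, 0.5], 1.0, 1e-6
gW1, gw2 = backward(x, t)
W1[0][0] += eps; _, yp = forward(x)
W1[0][0] -= 2 * eps; _, ym = forward(x)
W1[0][0] += eps  # restore the weight
num = (0.5 * (yp - t) ** 2 - 0.5 * (ym - t) ** 2) / (2 * eps)
print(abs(num - gW1[0][0]))  # tiny: analytic and numeric gradients agree
```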
23

Wiegerinck, Wim, and Tom Heskes. "How Dependencies between Successive Examples Affect On-Line Learning." Neural Computation 8, no. 8 (November 1996): 1743–65. http://dx.doi.org/10.1162/neco.1996.8.8.1743.

Abstract:
We study the dynamics of on-line learning for a large class of neural networks and learning rules, including backpropagation for multilayer perceptrons. In this paper, we focus on the case where successive examples are dependent, and we analyze how these dependencies affect the learning process. We define the representation error and the prediction error. The representation error measures how well the environment is represented by the network after learning. The prediction error is the average error that a continually learning network makes on the next example. In the neighborhood of a local minimum of the error surface, we calculate these errors. We find that the more predictable the example presentation, the higher the representation error, i.e., the less accurate the asymptotic representation of the whole environment. Furthermore we study the learning process in the presence of a plateau. Plateaus are flat spots on the error surface, which can severely slow down the learning process. In particular, they are notorious in applications with multilayer perceptrons. Our results, which are confirmed by simulations of a multilayer perceptron learning a chaotic time series using backpropagation, explain how dependencies between examples can help the learning process to escape from a plateau.
24

Purwoharjono, Purwoharjono. "Penerapan Metode Jaringan Syaraf Tiruan Untuk Prediksi Kebutuhan Beban Listrik." ALINIER: Journal of Artificial Intelligence & Applications 2, no. 1 (May 31, 2021): 36–42. http://dx.doi.org/10.36040/alinier.v2i1.3566.

Abstract:
This study aims to predict electricity load demand using an artificial neural network (ANN) with the backpropagation algorithm. The study was conducted in Pontianak, West Kalimantan, where electricity consumption increases every year but is not matched by a sufficient supply of electrical energy. Based on the simulation results, the backpropagation algorithm performs well in recognizing the input data given to the system, since the error rates measured by Mean Square Error (MSE) and Mean Absolute Percentage Error (MAPE) are relatively small.
25

Handayani, Anik Nur, Heru Wahyu Herwanto, Katya Lindi Chandrika, and Kohei Arai. "Recognition of Handwritten Javanese Script using Backpropagation with Zoning Feature Extraction." Knowledge Engineering and Data Science 4, no. 2 (December 15, 2021): 117. http://dx.doi.org/10.17977/um018v4i22021p117-127.

Abstract:
Backpropagation is part of supervised learning, in which the training process requires a target; the resulting error is transmitted back to the units in the layers below during training. Backpropagation can solve complicated problems while consuming less memory than other algorithms. In addition, it can produce solutions with a low error rate in less execution time. In image pattern recognition, backpropagation can be applied to cultural preservation in many places worldwide, including Indonesia, where it is used to recognize picture patterns in Javanese script writings. This study concluded that feature extraction with zoning, combined with backpropagation, can distinguish handwritten Javanese characters. The best accuracy attained is 77.00%, with a network architecture comprising 64 input neurons, 40 hidden neurons, a learning rate of 0.003, a momentum of 0.03, and 5000 iterations.
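Zoning, as used above, splits the character image into a grid of zones and summarises each zone by one feature (here, ink density); an 8x8 grid would yield the 64 input neurons the abstract mentions. A hedged sketch: the grid-size interpretation and the dummy glyph are our assumptions, not the paper's data.

```python
def zoning_features(img, zones=8):
    """img: square list-of-lists of 0/1 pixels; returns zones*zones ink densities."""
    n = len(img)
    step = n // zones
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            block = [img[r][c]
                     for r in range(zr * step, (zr + 1) * step)
                     for c in range(zc * step, (zc + 1) * step)]
            feats.append(sum(block) / len(block))
    return feats

# 16x16 dummy glyph: left half ink, right half blank
img = [[1] * 8 + [0] * 8 for _ in range(16)]
f = zoning_features(img, zones=8)
print(len(f))   # 64 features, ready to feed 64 input neurons
```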
26

Ye, Yuanyu, Aichao Yang, Yu Wu, Chen Hu, Min Li, Yan Li, and Xiaosong Deng. "Short-Term Prediction of Electronic Transformer Error Based on Intelligent Algorithms." Journal of Control Science and Engineering 2020 (August 28, 2020): 1–9. http://dx.doi.org/10.1155/2020/9867985.

Abstract:
As the key metering equipment in the smart grid, the measurement accuracy and stability of electronic transformer are important for the normal operation of power system. In order to solve the problem that there is no effective way to predict the error developing trend of electronic transformer, this paper proposed two kinds of short-term prediction methods for electronic transformer error based on the backpropagation neural network and the Prophet model, respectively. First, preprocessing and visualization operation are performed on the original error data. Then, the data fitting and short-term prediction of electronic transformer error are made on the basis of the backpropagation neural network and the Prophet model, and the fitting and prediction results of the two methods are compared and analysed in combination with four evaluation indexes. Finally, the Prophet model is adopted to simulate the development trend and periodic fluctuation of error, and the reason for fluctuation is analysed. The simulation results show that the Prophet model is more suitable for the prediction of electronic transformer measurement error than the backpropagation neural network.
27

Nugraha, Harry Ganda, and Azhari SN. "Optimasi Bobot Jaringan Syaraf Tiruan Mengunakan Particle Swarm Optimization." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 8, no. 1 (January 31, 2014): 25. http://dx.doi.org/10.22146/ijccs.3492.

Abstract:
Forecasting problems are commonly encountered in decision-making processes. A popular tool for handling them is the artificial neural network, widely used for its ability to forecast nonlinear time-series data. The learning algorithm usually used to adjust the weights of an artificial neural network is backpropagation, but backpropagation learning sometimes runs into problems such as overfitting, so that the network cannot generalize. To address this, particle swarm optimization is proposed for training the network weights. The performance of each model is measured by mean square error, mean absolute percentage error, normalized mean square error, prediction of change in direction, and average relative variance. Indonesian inflation time-series data are used for the model analysis. The proposed method shows that the hybrid network system can handle time-series forecasting with performance close to that of a backpropagation neural network. Keywords: artificial neural network, particle swarm optimization, prediction of change in direction, average relative variance.
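The idea of training weights with particle swarm optimization, as this abstract proposes, can be sketched on a toy objective standing in for a network's training error. All constants (inertia, acceleration coefficients, swarm size) are conventional defaults, not the paper's settings:

```python
import random

random.seed(1)

def fitness(w):  # stand-in for a network's training error (minimum at w = 0.3)
    return sum((wi - 0.3) ** 2 for wi in w)

dim, n_particles, iters = 4, 20, 200
pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                  # personal bests
gbest = min(pbest, key=fitness)[:]           # global best

for _ in range(iters):
    for p in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + pull toward personal and global bests
            vel[p][d] = (0.7 * vel[p][d]
                         + 1.5 * r1 * (pbest[p][d] - pos[p][d])
                         + 1.5 * r2 * (gbest[d] - pos[p][d]))
            pos[p][d] += vel[p][d]
        if fitness(pos[p]) < fitness(pbest[p]):
            pbest[p] = pos[p][:]
            if fitness(pbest[p]) < fitness(gbest):
                gbest = pbest[p][:]

print(fitness(gbest))   # close to 0: the swarm found the minimum
```

In the hybrid scheme described above, `fitness` would instead evaluate the network's error on the training data for a candidate weight vector.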
28

Falah, Miftahul, Dian Palupi Rini, and Iwan Pahendra. "Kombinasi Algoritma Backpropagation Neural Network dengan Gravitational Search Algorithm Dalam Meningkatkan Akurasi." JURNAL MEDIA INFORMATIKA BUDIDARMA 5, no. 1 (January 22, 2021): 90. http://dx.doi.org/10.30865/mib.v5i1.2597.

Abstract:
Predicting disease is usually done based on the experience and knowledge of the doctor, and such traditional diagnosis is less effective. Medical diagnosis based on machine learning provides more accurate disease prediction than the traditional way. Artificial neural networks can be used for disease prediction; they comprise various algorithms, one of which is backpropagation. This paper proposes a disease prediction system based on the backpropagation algorithm. Backpropagation is often used in disease prediction, but it has a drawback: it tends to take a long time to reach an optimal accuracy value. A combination of algorithms can overcome this by exploiting the Gravitational Search Algorithm (GSA), which addresses the slow convergence and local-minimum problems of backpropagation. The authors therefore propose combining the backpropagation algorithm with GSA in the hope of improving accuracy beyond backpropagation alone. The results show higher accuracy with the same number of iterations: in the first trial on breast cancer data, with parameters of 5 hidden units, a learning rate of 2, and 5000 iterations, the backpropagation algorithm achieved 99.3% accuracy (0.7% error), while the combined BP & GSA achieved 99.68% accuracy (0.32% error).
29

Tang, Zheng, Xu Gang Wang, Hiroki Tamura, and Masahiro Ishii. "An Algorithm of Supervised Learning for Multilayer Neural Networks." Neural Computation 15, no. 5 (May 1, 2003): 1125–42. http://dx.doi.org/10.1162/089976603765202686.

Abstract:
A method of supervised learning for multilayer artificial neural networks to escape local minima is proposed. The learning model has two phases: a backpropagation phase and a gradient ascent phase. The backpropagation phase performs steepest descent on a surface in weight space whose height at any point in weight space is equal to an error measure, and it finds a set of weights minimizing this error measure. When the backpropagation gets stuck in local minima, the gradient ascent phase attempts to fill up the valley by modifying gain parameters in a gradient ascent direction of the error measure. The two phases are repeated until the network gets out of local minima. The algorithm has been tested on benchmark problems, such as exclusive-or (XOR), parity, alphabetic characters learning, Arabic numerals with a noise recognition problem, and a realistic real-world problem: classification of radar returns from the ionosphere. For all of these problems, the systems are shown to be capable of escaping from the backpropagation local minima and converge faster when using the new proposed method than using the simulated annealing techniques.
30

Cucu, Tatang Rohana. "Kajian Adaptive Neuro-Fuzzy Inference System (ANFIS) Dalam Memprediksi Penerimaan Mahasiswa Baru Pada Universitas Buana Perjuangan Karawang." Techno Xplore : Jurnal Ilmu Komputer dan Teknologi Informasi 6, no. 1 (July 23, 2021): 44–54. http://dx.doi.org/10.36805/technoxplore.v6i1.1371.

Full text
Abstract:
The process of admitting new students is an annual routine activity in a university, and the starting point of the search for prospective students who meet the criteria expected by the college. One college that holds new student admissions every year is Buana Perjuangan University, Karawang. Several studies on predicting new students have been conducted by other researchers, but the results have not been very satisfying, especially regarding accuracy and error. This research studies ANFIS for predicting new students as a solution to the accuracy problem, using two ANFIS models: the Backpropagation and Hybrid techniques. The application of the Adaptive Neuro-Fuzzy Inference System (ANFIS) model to predicting new students at Buana Perjuangan University, Karawang was successful. Based on the training results, the Backpropagation technique has an error rate of 0.0394 and the Hybrid technique an error rate of 0.0662. In terms of predictive accuracy, the Backpropagation technique scored 4.8 for Mean Absolute Deviation (MAD) and 0.156364623 for Mean Absolute Percentage Error (MAPE), while the Hybrid technique scored 0.5 for MAD and 0.09516671 for MAPE. It can therefore be concluded that the Hybrid technique has a better level of accuracy than the Backpropagation technique in predicting the number of new students at Buana Perjuangan University, Karawang. Keywords: ANFIS, Backpropagation, Hybrid, Prediction
APA, Harvard, Vancouver, ISO, and other styles
31

Gomolka, Zbigniew. "Backpropagation algorithm with fractional derivatives." ITM Web of Conferences 21 (2018): 00004. http://dx.doi.org/10.1051/itmconf/20182100004.

Full text
Abstract:
The paper presents a model of a neural network with a novel backpropagation rule, which uses a fractional order derivative mechanism. Using the Grunwald Letnikow definition of the discrete approximation of the fractional derivative, the author proposed the smooth modeling of the transition functions of a single neuron. On this basis, a new concept of a modified backpropagation algorithm was proposed that uses the fractional derivative mechanism both for modeling the dynamics of individual neurons and for minimizing the error function. The description of the signal flow through the neural network and the mechanism of smooth shape control of the activation functions of individual neurons are given. The model of minimization of the error function is presented, which takes into account the possibility of changes in the characteristics of individual neurons. For the proposed network model, example courses of the learning processes are presented, which prove the convergence of the learning process for different shapes of the transition function. The proposed algorithm allows the learning process to be conducted with a smooth modification of the shape of the transition function without the need for modifying the IT model of the designed neural network. The proposed network model is a new tool that can be used in signal classification tasks.
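The Grunwald-Letnikov approximation the paper builds on can be sketched as follows. The coefficient recurrence is the standard one for the generalized binomial coefficients; the step size and the check against an ordinary derivative are illustrative.

```python
import numpy as np

def gl_coefficients(alpha, n):
    # c_k = (-1)^k * binom(alpha, k), via c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1)/k)
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (1 - (alpha + 1) / k))
    return np.array(c)

def gl_derivative(f_samples, alpha, h):
    """Grunwald-Letnikov fractional derivative of order alpha at the last sample."""
    c = gl_coefficients(alpha, len(f_samples))
    # c_k multiplies f(t - k*h), so pair coefficients with the reversed samples
    return float(np.sum(c * f_samples[::-1])) / h ** alpha

# For alpha = 1 the formula collapses to a backward difference:
h = 1e-3
xs = np.linspace(0.0, 1.0, 1001)                 # spacing exactly h
d = gl_derivative(xs ** 2, 1.0, h)               # d/dx of x^2 at x = 1, approx. 2
```

Fractional orders between 0 and 2 give the smooth interpolation between function value, first derivative, and second derivative that the paper exploits for shaping transition functions.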
APA, Harvard, Vancouver, ISO, and other styles
32

Indrayati Sijabat, Petti, Yuhandri Yuhandri, Gunadi Widi Nurcahyo, and Anita Sindar. "Algoritma Backpropagation Prediksi Harga Komoditi terhadap Karakteristik Konsumen Produk Kopi Lokal Nasional." Digital Zone: Jurnal Teknologi Informasi dan Komunikasi 11, no. 1 (May 8, 2020): 96–107. http://dx.doi.org/10.31849/digitalzone.v11i1.3880.

Full text
Abstract:
Coffee is an important commodity in both national and international markets. Nationally, local coffee varieties, named after their producing regions, experience price fluctuations, so technology planning is needed to anticipate future coffee prices. In computer science, forecasting or prediction concerns periodic estimates of production, supply, and demand at particular times using accurate and tested measures. The backpropagation method is used here for price prediction. The backpropagation process includes inputting data, normalizing/transforming the data, iterating, training and determining network parameters, calculating the error, and obtaining the prediction results. In designing the ANN architecture, the number of neurons in the input layer, hidden layer, and output layer is determined. This research uses Matlab R2013a. Taking input, tracing the error, and adjusting the weights produce the predicted coffee price. The prediction results were: actual price 74205 against predicted price 73668 with an accuracy of 99.9928, actual price 73892 against predicted price 73175 with an accuracy of 99.9903, and actual price 77981 against predicted price 77481 with an accuracy of 99.9936. Keywords: Neural Networks, Prediction, Coffee Prices, Backpropagation
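The pipeline the abstract outlines (normalize the price series, train with backpropagation, calculate the error, denormalize the prediction) can be sketched as below. The synthetic price series, window size, and network shape are invented for illustration, not the paper's data or Matlab setup.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 74000 + 500 * np.sin(np.linspace(0, 6, 60)) + rng.normal(0, 50, 60)

lo, hi = prices.min(), prices.max()
norm = (prices - lo) / (hi - lo)                  # normalization / transformation step

win = 4                                           # predict next price from 4 lags
X = np.array([norm[i:i + win] for i in range(len(norm) - win)])
y = norm[win:]

w1 = rng.normal(0, 0.5, (win, 8)); b1 = np.zeros(8)
w2 = rng.normal(0, 0.5, (8, 1));   b2 = np.zeros(1)

for epoch in range(3000):                         # iteration / training step
    h = np.tanh(X @ w1 + b1)
    out = (h @ w2 + b2).ravel()
    err = out - y                                 # error calculation step
    g_out = (err / len(y))[:, None]
    g_h = (g_out @ w2.T) * (1 - h ** 2)           # backpropagate the error
    w2 -= 0.3 * h.T @ g_out; b2 -= 0.3 * g_out.sum(0)
    w1 -= 0.3 * X.T @ g_h;   b1 -= 0.3 * g_h.sum(0)

mse = float(np.mean(err ** 2))
# denormalize the next-step output back to a price (prediction step)
pred = float((np.tanh(norm[-win:] @ w1 + b1) @ w2 + b2)[0] * (hi - lo) + lo)
```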
APA, Harvard, Vancouver, ISO, and other styles
33

Masruroh, Masruroh. "PERBANDINGAN METODE REGRESI LINEAR DAN NEURAL NETWORK BACKPROPAGATION DALAM PREDIKSI NILAI UJIAN NASIONAL SISWA SMP MENGGUNAKAN SOFTWARE R." Joutica 5, no. 1 (March 31, 2020): 331. http://dx.doi.org/10.30736/jti.v5i1.347.

Full text
Abstract:
Linear regression and backpropagation neural networks are methods frequently used in prediction models. This study compares the accuracy of the linear regression and backpropagation methods in predicting junior high school students' National Examination scores. The data consist of end-of-semester examination scores and school examination scores as input and National Examination scores as output, obtained from SMPN 1 and SMPN 2 Lamongan. The dataset of 701 records was split into 75% training data and 25% testing data. Prediction simulations were performed using the R software. The accuracy metrics used were the Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE). The results show that the linear regression model produced an RMSE of 9.04 and a MAPE of 3.94%, while the backpropagation model produced an RMSE of 7.28 and a MAPE of 0.55%. Thus, in this study the backpropagation neural network method achieved better accuracy in predicting junior high school students' National Examination scores.
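The two accuracy measures used in the comparison are standard; a minimal sketch with invented numbers:

```python
import numpy as np

def rmse(actual, predicted):
    # Root Mean Squared Error: sqrt of the mean squared deviation
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def mape(actual, predicted):
    # Mean Absolute Percentage Error, in percent
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

actual = [80.0, 75.0, 90.0]       # illustrative exam scores
predicted = [78.0, 76.0, 88.0]
print(rmse(actual, predicted))    # about 1.73
print(mape(actual, predicted))    # about 2.02
```

RMSE penalizes large deviations more heavily; MAPE reports error relative to the magnitude of each actual value, which is why the two metrics can rank models differently.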
APA, Harvard, Vancouver, ISO, and other styles
34

Xie, Xiaohui, and H. Sebastian Seung. "Equivalence of Backpropagation and Contrastive Hebbian Learning in a Layered Network." Neural Computation 15, no. 2 (February 1, 2003): 441–54. http://dx.doi.org/10.1162/089976603762552988.

Full text
Abstract:
Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the relationship between these two forms of learning, we consider a special case in which they are identical: a multilayer perceptron with linear output units, to which weak feedback connections have been added. In this case, the change in network state caused by clamping the output neurons turns out to be the same as the error signal spread by backpropagation, except for a scalar prefactor. This suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks.
APA, Harvard, Vancouver, ISO, and other styles
35

Rizqulloh, Fakhruddin Rafi, Suprihadi Prasetyono, and Widya Cahyadi. "ANALISIS PERBANDINGAN PERAMALAN BEBAN LISTRIK JANGKA PENDEK ANTARA METODE BACKPROPAGATION NEURAL NETWORK DENGAN METODE REGRESI LINEAR." Jurnal Arus Elektro Indonesia 6, no. 3 (December 31, 2020): 69. http://dx.doi.org/10.19184/jaei.v6i3.19210.

Full text
Abstract:
The growth of the world's population and the wide variety of human activities lead to changes in electricity demand, which differs from one time to another. Based on this, this study, titled "Comparative Analysis of Short-Term Electrical Load Forecasting between the Backpropagation Neural Network Method and the Linear Regression Method", applies the backpropagation neural network method and the linear regression method with the aim of improving the accuracy of the electrical load forecasting system. Forecasting with the backpropagation neural network method yielded a smallest error of -0.022554% with an MSE of 0.0249909%, while forecasting with the linear regression method yielded a smallest error of -0.179% with an MSE of 3.118%.
APA, Harvard, Vancouver, ISO, and other styles
36

Kusumoputro, Benyamin, and Lina Lina. "Infrared Face Recognition System Using Cross Entropy Error Function Based Ensemble Backpropagation Neural Networks." International Journal of Computer Theory and Engineering 8, no. 2 (2016): 161–66. http://dx.doi.org/10.7763/ijcte.2016.v8.1037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Weber, M., P. B. Crilly, and W. E. Blass. "Adaptive noise filtering using an error-backpropagation neural network." IEEE Transactions on Instrumentation and Measurement 40, no. 5 (1991): 820–25. http://dx.doi.org/10.1109/19.106304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Bohte, Sander M., Joost N. Kok, and Han La Poutré. "Error-backpropagation in temporally encoded networks of spiking neurons." Neurocomputing 48, no. 1-4 (October 2002): 17–37. http://dx.doi.org/10.1016/s0925-2312(01)00658-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Bishop, Chris. "A FAST PROCEDURE FOR RETRAINING THE MULTILAYER PERCEPTRON." International Journal of Neural Systems 02, no. 03 (January 1991): 229–36. http://dx.doi.org/10.1142/s0129065791000212.

Full text
Abstract:
In this paper we describe a fast procedure for retraining a feedforward network, previously trained by error backpropagation, following a small change in the training data. This technique would permit fine calibration of individual neural network based control systems in a mass-production environment. We also derive a generalised error backpropagation algorithm which allows an exact evaluation of all of the terms in the Hessian matrix. The fast retraining procedure is illustrated using a simple example.
APA, Harvard, Vancouver, ISO, and other styles
40

Bengio, Yoshua, Thomas Mesnard, Asja Fischer, Saizheng Zhang, and Yuhuai Wu. "STDP-Compatible Approximation of Backpropagation in an Energy-Based Model." Neural Computation 29, no. 3 (March 2017): 555–77. http://dx.doi.org/10.1162/neco_a_00934.

Full text
Abstract:
We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with respect to output units that have received an outside driving force pushing them away from the stationary point. Backpropagated error gradients correspond to temporal derivatives with respect to the activation of hidden units. These lead to a weight update proportional to the product of the presynaptic firing rate and the temporal rate of change of the postsynaptic firing rate. Simulations and a theoretical argument suggest that this rate-based update rule is consistent with those associated with spike-timing-dependent plasticity. The ideas presented in this article could be an element of a theory for explaining how brains perform credit assignment in deep hierarchies as efficiently as backpropagation does, with neural computation corresponding to both approximate inference in continuous-valued latent variables and error backpropagation, at the same time.
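A minimal sketch of the rate-based rule the abstract describes, with invented rates: the weight change is proportional to the product of the presynaptic firing rate and the temporal derivative of the postsynaptic firing rate.

```python
import numpy as np

def stdp_like_update(w, r_pre, r_post, dt, eta=0.01):
    """r_pre: (T, n_pre) presynaptic rates; r_post: (T+1, n_post) postsynaptic rates."""
    dr_post = np.diff(r_post, axis=0) / dt            # temporal derivative of post rate
    dw = np.zeros_like(w)
    for t in range(len(dr_post)):
        dw += eta * np.outer(r_pre[t], dr_post[t])    # pre rate x d(post rate)/dt
    return w + dw

w = np.zeros((2, 1))
r_pre = np.array([[1.0, 0.5], [1.0, 0.5]])
r_post = np.array([[0.2], [0.4], [0.7]])              # rising postsynaptic rate
w_new = stdp_like_update(w, r_pre, r_post, dt=1.0)    # weights potentiate
```

A rising postsynaptic rate (as when an output is nudged toward its target) potentiates the weight, while a falling rate depresses it, mirroring the sign behavior associated with spike-timing-dependent plasticity.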
APA, Harvard, Vancouver, ISO, and other styles
41

Citra Perdana, Wahyu Muhammad, and Anita Qoiriah. "Game Edukatif Simulasi Pembuatan SIM Menggunakan Neural Network Backpropagation Sebagai Rekomendasi Penentu Kelulusan." Journal of Informatics and Computer Science (JINACS) 1, no. 04 (May 22, 2020): 217–27. http://dx.doi.org/10.26740/jinacs.v1n04.p217-227.

Full text
Abstract:
Abstract— A driving license (Surat Izin Mengemudi, SIM) is the identification and proof of registration issued by the Indonesian National Police, specifically the Traffic Unit (Satlantas), to citizens who meet various requirements such as physical and mental health, administrative completeness, understanding of traffic regulations, and skill in driving a motor vehicle. Motorists often fail the SIM examination because they do not yet fully know the theory of road signs and markings or are not yet proficient drivers. By using a game as a socialization medium, the Surabaya Police Traffic Unit (Satlantas Polrestabes Surabaya) can provide information about obtaining a SIM in an innovative and well-targeted way, making it more attractive for the public to learn. Neural Network Backpropagation is a learning algorithm that reduces the error rate by adjusting the weights according to the difference between the output and the desired target. In this game, the Neural Network Backpropagation algorithm is used to determine passing status based on the final score and the time taken to complete each test. The final score for the theory test is based on the proportion of correctly answered questions, while for the practical test it is based on reaching the finish line without touching anything. Applying the Neural Network Backpropagation algorithm in this SIM simulation game produced a good Mean Absolute Error (MAE) of 0% in experiments with max epoch = 1500, learning rate = 0.3, and error tolerance = 0.41 for both the theory and practical tests, with 10 trials each. Testing with K-Fold Cross Validation yielded an accuracy of 100%. Keywords— SIM, neural network, backpropagation, game, android
APA, Harvard, Vancouver, ISO, and other styles
42

Reza, Muhammad, and Suprayogi. "Prediksi Jangka Waktu Pengiriman Barang Pada PT. Pos Indonesia menggunakan Backpropagation." CogITo Smart Journal 3, no. 1 (July 18, 2017): 111. http://dx.doi.org/10.31154/cogito.v3i1.50.111-122.

Full text
Abstract:
Delivery-time prediction is performed to obtain a time benchmark for the shipping process that can be used as a reference in shipping management control. Delivery prediction at the post office is still not very effective and tends to rely on conventional estimates. To help predict package delivery, a prediction system with a high level of accuracy is needed. This study uses a computational intelligence approach, namely an artificial neural network with the Backpropagation algorithm, to predict delivery times. Backpropagation works by processing the input data to produce an output value; if the output does not yet match the true label, the error is propagated backwards to adjust the weights, and the calculation is repeated until the output achieves a minimal Root Mean Square Error (RMSE), in other words until the output matches the true label. Using this method, an error of 2.1111% was obtained. Prediction using the Backpropagation algorithm proved accurate in this package-delivery case. Keywords: Data Mining, Prediction, Backpropagation, Record, Error, Sample.
APA, Harvard, Vancouver, ISO, and other styles
43

Rohana, Tatang, and Bayu Priyatna. "Performance Evaluation of Adaptive Neuro-Fuzzy Inference System (ANFIS) In Predicting New Students (Case Study : UBP Karawang)." Buana Information Technology and Computer Sciences (BIT and CS) 2, no. 2 (July 17, 2021): 31–36. http://dx.doi.org/10.36805/bit-cs.v2i2.1417.

Full text
Abstract:
The process of admitting new students is an annual routine activity in a university, and the starting point of the search for prospective students who meet the criteria expected by the college. One college that holds new student admissions every year is Buana Perjuangan University, Karawang. Several studies on predicting new students have been conducted by other researchers, but the results have not been very satisfying, especially regarding accuracy and error. This research studies ANFIS for predicting new students as a solution to the accuracy problem, using two ANFIS models: the Backpropagation and Hybrid techniques. The application of the Adaptive Neuro-Fuzzy Inference System (ANFIS) model to predicting new students at Buana Perjuangan University, Karawang was successful. Based on the training results, the Backpropagation technique has an error rate of 0.0394 and the Hybrid technique an error rate of 0.0662. In terms of predictive accuracy, the Backpropagation technique scored 4.8 for Mean Absolute Deviation (MAD) and 0.156364623 for Mean Absolute Percentage Error (MAPE), while the Hybrid technique scored 0.5 for MAD and 0.09516671 for MAPE. It can therefore be concluded that the Hybrid technique has a better level of accuracy than the Backpropagation technique in predicting the number of new students at Buana Perjuangan University, Karawang.
APA, Harvard, Vancouver, ISO, and other styles
44

Gangle, Rocco. "Backpropagation of Spirit: Hegelian Recollection and Human-A.I. Abductive Communities." Philosophies 7, no. 2 (March 26, 2022): 36. http://dx.doi.org/10.3390/philosophies7020036.

Full text
Abstract:
This article examines types of abductive inference in Hegelian philosophy and machine learning from a formal comparative perspective and argues that Robert Brandom’s recent reconstruction of the logic of recollection in Hegel’s Phenomenology of Spirit may be fruitful for anticipating modes of collaborative abductive inference in human/A.I. interactions. Firstly, the argument consists of showing how Brandom’s reading of Hegelian recollection may be understood as a specific type of abductive inference, one in which the past interpretive failures and errors of a community are explained hypothetically by way of the construction of a narrative that rehabilitates those very errors as means for the ongoing successful development of the community, as in Brandom’s privileged jurisprudential example of Anglo-American case law. Next, this Hegelian abductive dynamic is contrasted with the error-reducing backpropagation algorithms characterizing many current versions of machine learning, which can be understood to perform abductions in a certain sense for various problems but not (yet) in the full self-constituting communitarian mode of creative recollection canvassed by Brandom. Finally, it is shown how the two modes of “error correction” may possibly coordinate successfully on certain types of abductive inference problems that are neither fully recollective in the Hegelian sense nor algorithmically optimizable.
APA, Harvard, Vancouver, ISO, and other styles
45

Mustafidah, Hindayati, and Suwarsito Suwarsito. "Performance of Levenberg-Marquardt Algorithm in Backpropagation Network Based on the Number of Neurons in Hidden Layers and Learning Rate." JUITA: Jurnal Informatika 8, no. 1 (May 4, 2020): 29. http://dx.doi.org/10.30595/juita.v8i1.7150.

Full text
Abstract:
One of the supervised learning paradigms in artificial neural networks (ANN) under intense development is the backpropagation model. Backpropagation is a multilayer perceptron learning algorithm that adjusts the weights connected to neurons in the hidden layers. The performance of the algorithm is influenced by several network parameters, including the number of neurons in the input layer, the maximum epoch used, the learning rate (lr) value, the hidden layer configuration, and the resulting error (MSE). Tests conducted in previous studies showed that the Levenberg-Marquardt training algorithm performs better than other algorithms in the backpropagation network, producing the smallest average error at a test level of α = 5% with 10 neurons in a hidden layer. The number of neurons in hidden layers varies depending on the number of neurons in the input layer. In this study, the performance of the Levenberg-Marquardt training algorithm was analyzed with 5 neurons in the input layer, n neurons in the hidden layer (n = 2, 4, 5, 7, 9), and 1 neuron in the output layer. The performance analysis is based on the network-generated error. This study uses a mixed method, namely development research with quantitative and qualitative testing using ANOVA statistical tests. Based on the analysis, the Levenberg-Marquardt training algorithm produces the smallest error of 0.00014 ± 0.00018 with 9 neurons in the hidden layer and lr = 0.5. Keywords: hidden layer, backpropagation, MSE, learning rate, Levenberg-Marquardt.
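For reference, the Levenberg-Marquardt update being benchmarked takes steps of the form dw = -(J^T J + mu I)^{-1} J^T e, where J is the Jacobian of the residuals e with respect to the weights and mu blends between Gauss-Newton (small mu) and gradient descent (large mu). The toy one-parameter curve fit below is illustrative, not the study's network.

```python
import numpy as np

def lm_step(jacobian, residuals, mu):
    # Levenberg-Marquardt step: solve (J^T J + mu I) dw = -J^T e
    jtj = jacobian.T @ jacobian
    return -np.linalg.solve(jtj + mu * np.eye(jtj.shape[0]), jacobian.T @ residuals)

# toy model y = exp(a * x), true a = 0.5, fitted from a = 0
x = np.linspace(0, 2, 20)
y = np.exp(0.5 * x)
a = np.array([0.0])
mu = 1e-2                                    # fixed damping, for simplicity
for _ in range(50):
    e = np.exp(a[0] * x) - y                 # residuals
    J = (x * np.exp(a[0] * x))[:, None]      # d residual / d a
    a = a + lm_step(J, e, mu)
```

Practical implementations also adapt mu between iterations (raising it when a step increases the error), which is what gives the method its robustness.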
APA, Harvard, Vancouver, ISO, and other styles
46

Joost, Merten, and Wolfram Schiffmann. "Speeding Up Backpropagation Algorithms by Using Cross-Entropy Combined with Pattern Normalization." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 06, no. 02 (April 1998): 117–26. http://dx.doi.org/10.1142/s0218488598000100.

Full text
Abstract:
This paper demonstrates how the backpropagation algorithm (BP) and its variants can be accelerated significantly while the quality of the trained nets increases. Two modifications are proposed: first, the cross entropy is used as the error function instead of the usual quadratic error, and second, the input patterns are normalized. The first modification eliminates the so-called sigmoid prime factor from the update rule for the output units; normalization balances the dynamic range of the inputs. The combination of both modifications is called CEN optimization (Cross Entropy combined with Pattern Normalization). As our simulation results show, CEN optimization can improve not only online BP but also RPROP, the most sophisticated BP variant known today. Even though RPROP usually yields much better results than online BP, the performance gap between CEN-BP and CEN-RPROP is smaller than between the standard versions of those algorithms. By means of CEN-RPROP it is nearly guaranteed to achieve an error of zero with respect to the training set. Simultaneously, the generalization performance of the trained nets can be increased, because less complex networks suffice to fit the training set. Compared with the usual SSE (summed squared error), lower training errors can be achieved with fewer weights.
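Both modifications fit in a few lines. With the quadratic error, the output-unit delta carries the sigmoid-prime factor y*(1-y), which vanishes on saturated units; with cross-entropy it reduces to the raw error y - t. Pattern normalization then balances inputs of very different scales. All numbers are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z, t = 5.0, 0.0                     # saturated output unit, target 0: badly wrong
y = sigmoid(z)
delta_sse = (y - t) * y * (1 - y)   # quadratic error: tiny update signal
delta_cen = y - t                   # cross-entropy: full error signal survives

X = np.array([[100.0, 0.1], [300.0, 0.5], [200.0, 0.3]])
Xn = (X - X.mean(axis=0)) / X.std(axis=0)   # pattern normalization per feature
```

Here `delta_cen` is over a hundred times larger than `delta_sse`, which is exactly the acceleration effect on saturated-but-wrong output units that the paper exploits.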
APA, Harvard, Vancouver, ISO, and other styles
47

Astion, M. L., M. H. Wener, R. G. Thomas, G. G. Hunder, and D. A. Bloch. "Overtraining in neural networks that interpret clinical data." Clinical Chemistry 39, no. 9 (September 1, 1993): 1998–2004. http://dx.doi.org/10.1093/clinchem/39.9.1998.

Full text
Abstract:
Abstract Backpropagation neural networks are a computer-based pattern-recognition method that has been applied to the interpretation of clinical data. Unlike rule-based pattern recognition, backpropagation networks learn by being repetitively trained with examples of the patterns to be differentiated. We describe and analyze the phenomenon of overtraining in backpropagation networks. Overtraining refers to the reduction in generalization ability that can occur as networks are trained. The clinical application we used was the differentiation of giant cell arteritis (GCA) from other forms of vasculitis (OTH) based on results for 807 patients (593 OTH, 214 GCA) and eight clinical predictor variables. The 807 cases were randomly assigned to either a training set with 404 cases or to a cross-validation set with the remaining 403 cases. The cross-validation set was used to monitor generalization during training. Results were obtained for eight networks, each derived from a different random assignment of the 807 cases. Training error monotonically decreased during training. In contrast, the cross-validation error usually reached a minimum early in training while the training error was still decreasing. Training beyond the minimum cross-validation error was associated with an increased cross-validation error. The shape of the cross-validation error curve and the point during training corresponding to the minimum cross-validation error varied with the composition of the data sets and the training conditions. The study indicates that training error is not a reliable indicator of a network's ability to generalize. To find the point during training when a network generalizes best, one must monitor cross-validation error separately.
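The monitoring procedure the authors recommend — track the cross-validation error during training and keep the weights from the epoch where it was lowest — can be sketched as follows, on synthetic noisy data with a simple logistic unit rather than the vasculitis dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true + rng.normal(0, 1.0, 200) > 0).astype(float)   # noisy labels
Xtr, ytr, Xcv, ycv = X[:100], y[:100], X[100:], y[100:]        # train / cross-validation split

w = np.zeros(8)
best_w, best_cv = w.copy(), np.inf
for epoch in range(500):
    p = 1 / (1 + np.exp(-(Xtr @ w)))
    w -= 0.5 * Xtr.T @ (p - ytr) / len(ytr)                    # gradient step on training error
    cv_err = float(np.mean((1 / (1 + np.exp(-(Xcv @ w))) - ycv) ** 2))
    if cv_err < best_cv:                                       # snapshot at minimum CV error
        best_cv, best_w = cv_err, w.copy()
```

Training error keeps decreasing throughout, but `best_w` is taken at the cross-validation minimum, which is the study's point: training error alone is not a reliable indicator of generalization.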
APA, Harvard, Vancouver, ISO, and other styles
48

Suhendra, Christian Dwi, and Retantyo Wardoyo. "Penentuan Arsitektur Jaringan Syaraf Tiruan Backpropagation (Bobot Awal dan Bias Awal) Menggunakan Algoritma Genetika." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 9, no. 1 (January 31, 2015): 77. http://dx.doi.org/10.22146/ijccs.6642.

Full text
Abstract:
The weaknesses of the backpropagation neural network are its very slow convergence and the local-minimum problem, which often traps the artificial neural network (ANN) in a local minimum. A good combination of architecture parameters, initial weights, and initial biases is crucial to overcoming these weaknesses of backpropagation. This study developed a method to determine that combination. So far, such combinations have been found by trial and error, over both the hidden-layer configuration and the initial weights and biases. Here, the initial weights and biases are used as parameters in evaluating a fitness value: the sum of squared errors (SSE) of each individual determines its quality, and the individual with the smallest SSE is the best. The best combination of architecture, initial weights, and initial biases is then used as the parameter set for backpropagation training. The result of this study is an alternative solution to the problems of backpropagation learning, which often struggles with the selection of learning parameters. The results show that the genetic algorithm method can provide a solution for backpropagation learning, give better accuracy, and reduce training time compared with parameters determined manually. Keywords: Artificial neural network, genetic algorithm, backpropagation, SSE, local minima.
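A rough sketch of the scheme, with an invented tiny network and GA settings: candidate initial-weight vectors evolve under SSE-based fitness, and the fittest individual would then seed the actual backpropagation training.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([0.0, 1.0, 1.0, 0.0])                 # XOR targets

def sse(flat):
    # decode a flat 13-gene chromosome into a 2-3-1 network and score it
    w1 = flat[:6].reshape(2, 3)
    w2 = flat[6:9]
    b1 = flat[9:12]
    b2 = flat[12]
    h = np.tanh(X @ w1 + b1)
    return float(np.sum((h @ w2 + b2 - T) ** 2))   # fitness = sum of squared errors

pop = rng.normal(0, 1, (40, 13))                   # population of initial weights/biases
init_best = min(sse(ind) for ind in pop)
for gen in range(60):
    fit = np.array([sse(ind) for ind in pop])
    order = np.argsort(fit)
    parents = pop[order[:10]]                      # elitism: keep the smallest-SSE individuals
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(13) < 0.5                # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, 13))  # mutation
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([sse(ind) for ind in pop])]   # seed for backpropagation training
```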
APA, Harvard, Vancouver, ISO, and other styles
49

Cipta, Rito, and Tezhar Rayendra Trastaronny Pastika Nugraha. "Evaluasi Prediksi Curah Hujan dengan Algoritma Backpropogation di BMKG Cilacap." ZONAsi: Jurnal Sistem Informasi 2, no. 2 (October 28, 2020): 97–109. http://dx.doi.org/10.31849/zn.v2i2.4445.

Full text
Abstract:
Backpropagation, commonly shortened to backprop, is an algorithm that minimizes the error rate by adjusting the weights according to the difference between the output and the desired target. This study discusses monthly rainfall prediction at BMKG Cilacap using the Backpropagation algorithm. Predictions of the future are rarely exact, so forecasting must be able to reduce the error rate as much as possible. The results show that rainfall forecasting with the backpropagation algorithm is accurate, with a Mean Square Error (MSE) of 0.011465 and a Mean Absolute Percent Error (MAPE) of 0.3289 during network training. For the overall testing process, the MSE and MAPE were 0.011807 and 0.050448, respectively.
APA, Harvard, Vancouver, ISO, and other styles
50

Arya, Putu Bagus, Wayan Firdaus Mahmudy, and Achmad Basuki. "Website Visitors Forecasting using Recurrent Neural Network Method." Journal of Information Technology and Computer Science 6, no. 2 (September 3, 2021): 137–45. http://dx.doi.org/10.25126/jitecs.202162296.

Full text
Abstract:
Abstract. The number of visitors and the content accessed by users reflect the performance of a site. Therefore, forecasting is needed to determine how many users will visit a website. This study applies the Long Short Term Memory method, a development of the Recurrent Neural Network method. Long Short Term Memory has the advantage of an architecture that remembers and forgets the output to be fed back into the input. In addition, Long Short Term Memory is able to maintain the errors that occur during backpropagation so that they do not grow. The comparison method used in this study is the Backpropagation Neural Network, a method often used in various fields. Testing used new-visitor and first-time-visitor data from 2018 to 2019 in monthly intervals. The computational experiments show that Long Short Term Memory produces better results in terms of the mean squared error (MSE) than those achieved by the Backpropagation Neural Network method.
APA, Harvard, Vancouver, ISO, and other styles