Journal articles on the topic "Artificial Neural Networks and Recurrent Neutral Networks"

Follow this link to see other types of publications on the topic: Artificial Neural Networks and Recurrent Neutral Networks.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Artificial Neural Networks and Recurrent Neutral Networks".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Prathibha, Dr. G., Y. Kavya, P. Vinay Jacob, and L. Poojita. "Speech Emotion Recognition Using Deep Learning". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 07 (July 4, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem36262.

Abstract
Speech is one of the primary forms of expression and is important for emotion recognition. Emotion recognition helps derive useful insights about a person's thoughts. Automatic speech emotion recognition is an active field of study in artificial intelligence and machine learning, which aims to build machines that communicate with people via speech. In this work, deep learning algorithms such as the Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) are explored to extract features and classify the emotions calm, happy, fearful, disgust, angry, neutral, surprised, and sad using the Toronto Emotional Speech Set (TESS) dataset, which consists of 2800 files. Features such as Mel-frequency cepstral coefficients (MFCC), chroma, and mel spectrogram are extracted from speech using pre-trained networks such as Xception, VGG16, ResNet50, MobileNetV2, DenseNet121, NASNetLarge, EfficientNetB5, EfficientNetV2M, InceptionV3, ConvNeXtTiny, EfficientNetV2B2, EfficientNetB6, and ResNet152V2. Features of two different networks are fused using early, mid, and late fusion techniques to obtain better results. The fused features are then classified with a Long Short-Term Memory (LSTM) network, finally resulting in an accuracy of 99%. The work is also extended to the RAVDESS dataset, which consists of 1440 files covering seven emotions: calm, joyful, sad, surprised, afraid, disgust, and angry. Keywords: Convolutional Neural Network, Recurrent Neural Network, speech emotion recognition, MFCC, Chroma, Mel, LSTM.
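
As a rough illustration of the pipeline this abstract describes (not the authors' exact implementation), the sketch below extracts MFCC, chroma, and mel-spectrogram features with librosa and feeds them to a small LSTM classifier in Keras; every path, shape, and hyperparameter here is an assumption made for the example.

# Minimal sketch of an MFCC + chroma + mel feature extractor and LSTM
# classifier; parameters are illustrative, not taken from the paper.
import numpy as np
import librosa
from tensorflow.keras import layers, models

def extract_features(path, sr=22050, n_mfcc=40):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # (40, T)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # (12, T)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)          # (128, T)
    return np.concatenate([mfcc, chroma, mel], axis=0).T      # (T, 180)

model = models.Sequential([
    layers.Input(shape=(None, 180)),        # variable-length frame sequences
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(8, activation="softmax"),  # 8 emotion classes, as listed above
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# In practice, variable-length batches would need padding and masking.
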
2

Ali, Ayesha, Ateeq Ur Rehman, Ahmad Almogren, Elsayed Tag Eldin, and Muhammad Kaleem. "Application of Deep Learning Gated Recurrent Unit in Hybrid Shunt Active Power Filter for Power Quality Enhancement". Energies 15, no. 20 (October 13, 2022): 7553. http://dx.doi.org/10.3390/en15207553.

Abstract
This research work aims at providing power quality improvement for nonlinear loads to improve system performance indices by eliminating the maximum total harmonic distortion (THD) and reducing the neutral wire current. The idea is to integrate a shunt hybrid active power filter (SHAPF) with the system using machine learning control techniques. The proposed system has been evaluated under an artificial neural network (ANN), a gated recurrent unit, and long short-term memory for the optimization of the SHAPF. The method is based on detecting the presence of harmonics in the power system by testing and comparing traditional pq0 theory and deep learning neural networks. The results obtained through the proposed methodology meet all the suggested international standards of THD. The results also achieve removal of current from the neutral wire and deal efficiently with the minor DC voltage variations occurring in the voltage-regulating current. The proposed algorithms have been evaluated on the performance indices of accuracy and computational complexity, showing effective results with 99% accuracy. Deep learning-based findings are compared on the basis of their root-mean-square error (RMSE) and loss function. The proposed system can be applied to domestic and industrial load conditions in a four-wire three-phase power distribution system for harmonic mitigation.
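
The quantity this paper targets, total harmonic distortion, is the ratio of the RMS of the harmonic components to that of the fundamental, THD = sqrt(sum of I_k^2 for k >= 2) / I_1. A self-contained numpy illustration on a synthetic distorted current follows; the waveform and sampling parameters are made up for the example, not taken from the paper.

# Illustrative THD computation for a distorted 50 Hz load current.
import numpy as np

fs, f0 = 10_000, 50                      # sample rate, fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)            # 0.2 s = exactly 10 fundamental cycles
i_load = (np.sin(2*np.pi*f0*t)           # fundamental
          + 0.2*np.sin(2*np.pi*3*f0*t)   # 3rd harmonic
          + 0.1*np.sin(2*np.pi*5*f0*t))  # 5th harmonic

spectrum = np.abs(np.fft.rfft(i_load)) / len(i_load)
freqs = np.fft.rfftfreq(len(i_load), 1 / fs)
fund = spectrum[np.argmin(np.abs(freqs - f0))]
harmonics = [spectrum[np.argmin(np.abs(freqs - k*f0))] for k in range(2, 40)]
thd = np.sqrt(np.sum(np.square(harmonics))) / fund
print(f"THD = {100*thd:.1f} %")          # about 22.4 % for this synthetic current
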
3

Pranav Kumar Chaudhary, Aakash Kishore Chotrani, Raja Mohan, Mythili Boopathi, Piyush Ranjan, and Madhavi Najana. "AI in Fraud Detection: Evaluating the Efficacy of Artificial Intelligence in Preventing Financial Misconduct". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1332–38. http://dx.doi.org/10.52783/jes.1508.

Abstract
AI is anticipated to enhance competitive advantages for financial organisations by increasing efficiency through cost reduction and productivity improvement, as well as by enhancing the quality of services and goods provided to consumers. AI applications in finance have the potential to create or exacerbate financial and non-financial risks, which could result in consumer and investor protection concerns such as biased, unfair, or discriminatory results, along with challenges related to data management and usage. An AI model's lack of transparency may lead to pro-cyclicality and systemic risk in markets, posing issues for financial supervision and internal governance frameworks that may not be in line with a technology-neutral regulatory approach. The primary objective of this research is to explore the effectiveness of Artificial Intelligence in preventing financial misconduct. This study extensively examines sophisticated methods for combating financial fraud, specifically evaluating the efficacy of Machine Learning and Artificial Intelligence. For assessment, the study utilized various metrics such as accuracy, precision, recall, F1 score, and ROC-AUC. The study found that deep learning techniques such as "Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks/Long Short-Term Memory, and Autoencoders" achieved high precision and ROC-AUC scores in detecting financial fraud. Voting classifiers, stacking, random forests, and gradient boosting machines demonstrated durability and precision in the face of adversarial attacks, showcasing the strength of ensembles.
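
The evaluation metrics named in this abstract are standard; the following sketch computes them with scikit-learn on synthetic fraud labels and scores. The data here is random and purely illustrative, not the study's data.

# Computing accuracy, precision, recall, F1, and ROC-AUC on synthetic labels.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)          # 1 = fraudulent transaction
scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)
y_pred = (scores > 0.5).astype(int)             # hard decisions at threshold 0.5

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("ROC-AUC  :", roc_auc_score(y_true, scores))  # uses raw scores, not decisions
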
4

Nassif, Ali Bou, Ismail Shahin, Mohammed Lataifeh, Ashraf Elnagar, and Nawel Nemmour. "Empirical Comparison between Deep and Classical Classifiers for Speaker Verification in Emotional Talking Environments". Information 13, no. 10 (September 27, 2022): 456. http://dx.doi.org/10.3390/info13100456.

Abstract
Speech signals carry various bits of information relevant to the speaker, such as age, gender, accent, language, health, and emotions. Emotions are conveyed through modulations of facial and vocal expressions. This paper conducts an empirical comparison of performance between the classical classifiers: Gaussian Mixture Model (GMM), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Artificial Neural Networks (ANN); and the deep learning classifiers, i.e., Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Gated Recurrent Unit (GRU), in addition to the ivector approach, for a text-independent speaker verification task in neutral and emotional talking environments. The deep models undergo hyperparameter tuning using the Grid Search optimization algorithm. The models are trained and tested using a private Arabic Emirati Speech Database, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) database, and a public Crowd-Sourced Emotional Multimodal Actors (CREMA) database. Experimental results illustrate that deep architectures do not necessarily outperform classical classifiers. Evaluation was carried out using Equal Error Rate (EER) and Area Under the Curve (AUC) scores. The findings reveal that, among the classical classifiers, the GMM model yields the lowest EER values and the best AUC scores across all datasets. In addition, the ivector model surpasses all the fine-tuned deep models (CNN, LSTM, and GRU) on both evaluation metrics in neutral as well as emotional speech, and the GMM outperforms the ivector using the Emirati and RAVDESS databases.
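
The Equal Error Rate used above is the operating point where the false-accept rate equals the false-reject rate. A minimal numpy/scikit-learn sketch with synthetic verification scores follows; the score distributions are assumptions, not the paper's data.

# EER computation from genuine/impostor scores via the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
genuine = rng.normal(1.0, 0.5, 500)     # scores for same-speaker trials
impostor = rng.normal(0.0, 0.5, 500)    # scores for different-speaker trials
y = np.concatenate([np.ones(500), np.zeros(500)])
s = np.concatenate([genuine, impostor])

fpr, tpr, _ = roc_curve(y, s)
fnr = 1 - tpr                           # false-reject rate
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]   # point where FPR crosses FNR
print(f"EER = {100*eer:.1f} %")
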
5

Lee, Hong Jae, and Tae Seog Kim. "Comparison and Analysis of SNN and RNN Results for Option Pricing and Deep Hedging Using Artificial Neural Networks (ANN)". Academic Society of Global Business Administration 20, no. 5 (October 30, 2023): 146–78. http://dx.doi.org/10.38115/asgba.2023.20.5.146.

Abstract
The purpose of this study is to present alternative methods that overcome limitations of the traditional Black-Scholes option pricing model, such as the assumption of fixed underlying-asset dynamics and the absence of market friction. As the research method, a deep-hedging (DH) methodology based on deep neural networks is applied to a short call option portfolio, using the artificial neural network models SNN (simple neural network) and RNN (recurrent neural network) as analysis models and Adam as the gradient descent optimizer. The results are as follows. For the ATM call option price, the BS model gave 10.245, the risk-neutral model 10.268, the SNN-DH model 11.834, and the RNN-DH model 11.882, so the call option price differs slightly across the analysis models. The DH analysis data consist of 100,000 training samples generated by Monte Carlo simulation, with 20% of that number (20,000) used for testing; the option payoff function of the short call position is -max(S_T - K, 0). The option price and P&L of SNN-DH and RNN-DH appear approximately linear, and the longer the remaining maturity, the closer the SNN-DH and BS delta values are to an S-curve distribution, while near expiration the deltas of both models are densely distributed between 0 and 1. The total loss from delta hedging the short call option position was -0.0027 for SNN-DH and -0.0061 for RNN-DH, indicating that the total hedged profit and loss was close to 0. The implication of this study is to present deep learning techniques based on AI methods as an alternative way to overcome limitations of the traditional Black-Scholes model, such as fixed underlying-asset dynamics and market friction, and to offer an independent analysis model that provides valid, robust tools for portfolio hedging problems by applying deep neural network algorithms. One limitation of the research is that the model was analyzed under the assumption of zero transaction costs, which future studies should take into account.
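
For orientation, the sketch below prices an ATM call both with the Black-Scholes formula and with risk-neutral Monte Carlo, the two baselines mentioned in the abstract. All market parameters are hypothetical and do not reproduce the paper's numbers.

# Black-Scholes vs. risk-neutral Monte Carlo for an ATM European call.
import numpy as np
from scipy.stats import norm

S0 = K = 100.0; r = 0.02; sigma = 0.25; T = 1.0
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T
                 + sigma * np.sqrt(T) * rng.standard_normal(100_000))
mc_price = np.exp(-r * T) * np.maximum(ST - K, 0).mean()  # long call payoff
print(f"Black-Scholes: {bs_price:.3f}, Monte Carlo: {mc_price:.3f}")
# The short position hedged in the paper earns the premium and pays
# -max(S_T - K, 0) at expiry.
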
6

Sutskever, Ilya, and Geoffrey Hinton. "Temporal-Kernel Recurrent Neural Networks". Neural Networks 23, no. 2 (March 2010): 239–43. http://dx.doi.org/10.1016/j.neunet.2009.10.009.

7

Wang, Rui. "Generalisation of Feed-Forward Neural Networks and Recurrent Neural Networks". Applied and Computational Engineering 40, no. 1 (February 21, 2024): 242–46. http://dx.doi.org/10.54254/2755-2721/40/20230659.

Abstract
This paper presents an in-depth analysis of Feed-Forward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs), two powerful models in the field of artificial intelligence. Understanding these models and their applications is crucial for harnessing their potential. The study addresses the need to comprehend the unique characteristics and architectures of FNNs and RNNs. These models excel at processing sequential and temporal data, making them indispensable in such tasks. Furthermore, the paper emphasises the importance of variables in FNNs and proposes a novel method to rank the importance of independent variables in predicting the output variable. By understanding the relationship between inputs and outputs, valuable insights can be gained into the underlying patterns and mechanisms driving the system being modelled. Additionally, the research explores the impact of initial weights on model performance. Contrary to conventional beliefs, the study provides evidence that neural networks with random weights can achieve competitive performance, particularly in situations with limited training datasets. This finding challenges the traditional notion that careful initialization is necessary for neural networks to perform well. In summary, this paper provides a comprehensive analysis of FNNs and RNNs while highlighting the importance of understanding the relationship between variables and the impact of initial weights on model performance. By shedding light on these crucial aspects, this research contributes to the advancement and effective utilisation of neural networks, paving the way for improved predictions and insights in various domains.
8

Poudel, Sushan, and Dr. R. Anuradha. "Speech Command Recognition using Artificial Neural Networks". JOIV : International Journal on Informatics Visualization 4, no. 2 (May 26, 2020): 73. http://dx.doi.org/10.30630/joiv.4.2.358.

Abstract
Speech is one of the most effective ways for humans and machines to interact. This project aims to build a speech command recognition system capable of predicting predefined speech commands. The dataset provided by Google's TensorFlow and AIY teams is used to implement different neural network models, including a Convolutional Neural Network and a Recurrent Neural Network combined with a Convolutional Neural Network. The combination of Convolutional and Recurrent Neural Networks outperforms the Convolutional Neural Network alone by 8% and achieves 96.66% accuracy for 20 labels.
9

Turner, Andrew James, and Julian Francis Miller. "Recurrent Cartesian Genetic Programming of Artificial Neural Networks". Genetic Programming and Evolvable Machines 18, no. 2 (August 8, 2016): 185–212. http://dx.doi.org/10.1007/s10710-016-9276-6.

10

Ziemke, Tom. "Radar image segmentation using recurrent artificial neural networks". Pattern Recognition Letters 17, no. 4 (April 1996): 319–34. http://dx.doi.org/10.1016/0167-8655(95)00128-x.

11

MASKARA, ARUN, and ANDREW NOETZEL. "Sequence Recognition with Recurrent Neural Networks". Connection Science 5, no. 2 (January 1993): 139–52. http://dx.doi.org/10.1080/09540099308915692.

12

Kawano, Makoto, and Kazuhiro Ueda. "Microblog Geolocation Estimation with Recurrent Neural Networks". Transactions of the Japanese Society for Artificial Intelligence 32, no. 1 (2017): WII-E_1–8. http://dx.doi.org/10.1527/tjsai.wii-e.

13

Aliev, R. A., B. G. Guirimov, Bijan Fazlollahi, and R. R. Aliev. "Evolutionary algorithm-based learning of fuzzy neural networks. Part 2: Recurrent fuzzy neural networks". Fuzzy Sets and Systems 160, no. 17 (September 2009): 2553–66. http://dx.doi.org/10.1016/j.fss.2008.12.018.

14

Imam, Nabil. "Wiring up recurrent neural networks". Nature Machine Intelligence 3, no. 9 (September 2021): 740–41. http://dx.doi.org/10.1038/s42256-021-00391-2.

15

Dalhoum, Abdel Latif Abu, and Mohammed Al-Rawi. "High-Order Neural Networks are Equivalent to Ordinary Neural Networks". Modern Applied Science 13, no. 2 (January 27, 2019): 228. http://dx.doi.org/10.5539/mas.v13n2p228.

Abstract
Equivalence of computational systems can assist in obtaining abstract systems, and thus enable better understanding of issues related to their design and performance. For more than four decades, artificial neural networks have been used in many scientific applications to solve classification as well as other problems. Since the time of their introduction, the multilayer feedforward neural network referred to as the Ordinary Neural Network (ONN), which contains only summation-activation (Sigma) neurons, and the multilayer feedforward High-order Neural Network (HONN), which contains both Sigma neurons and product-activation (Pi) neurons, have been treated in the literature as different entities. In this work, we studied whether HONNs are mathematically equivalent to ONNs. We have proved that every HONN can be converted to some equivalent ONN. In most cases, one just needs to modify the neuronal transfer function of the Pi neuron to convert it to a Sigma neuron. The theorems we have derived clearly show that the original HONN and its corresponding equivalent ONN give exactly the same output, which means they can both be used to perform exactly the same functionality. We also derived equivalence theorems for several other non-standard neural networks, for example, recurrent HONNs and HONNs with translated multiplicative neurons. This work rejects the hypothesis that HONNs and ONNs are different entities, a conclusion that might initiate a new research frontier in artificial neural network research.
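
One intuition for this kind of equivalence, offered here as our own illustration rather than the paper's exact construction: for positive inputs, a product (Pi) unit can be emulated by a summation (Sigma) unit acting on log-transformed inputs with an exponential transfer function.

# x1 * x2 * ... * xn = exp(ln x1 + ln x2 + ... + ln xn), valid for x > 0.
# This identity is an illustration, not the paper's exact construction.
import numpy as np

x = np.array([1.5, 0.8, 2.0])          # positive inputs to the neuron
pi_out = np.prod(x)                    # Pi neuron: product activation
sigma_out = np.exp(np.sum(np.log(x)))  # Sigma neuron on logs + exp transfer
assert np.isclose(pi_out, sigma_out)   # both give 2.4
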
16

Goudreau, Mark W., and C. Lee Giles. "Using recurrent neural networks to learn the structure of interconnection networks". Neural Networks 8, no. 5 (January 1995): 793–804. http://dx.doi.org/10.1016/0893-6080(95)00025-u.

17

Lalapura, Varsha S., J. Amudha, and Hariramn Selvamuruga Satheesh. "Recurrent Neural Networks for Edge Intelligence". ACM Computing Surveys 54, no. 4 (May 2021): 1–38. http://dx.doi.org/10.1145/3448974.

Abstract
Recurrent Neural Networks are ubiquitous and pervasive in many artificial intelligence applications such as speech recognition, predictive healthcare, creative art, and so on. Although they provide accurate and superior solutions, they pose a massive challenge: "training havoc." The current expansion of IoT demands intelligent models to be deployed at the edge, precisely to handle increasing model sizes and complex network architectures. Design efforts to meet these demands for greater performance have had inverse effects on portability to edge devices with real-time constraints on memory, latency, and energy. This article provides a detailed insight into various compression techniques widely disseminated in the deep learning regime, which have become key in mapping powerful RNNs onto resource-constrained devices. While compression of RNNs is the main focus of the survey, it also highlights challenges encountered while training, since the training procedure directly influences both model performance and compression. Recent advancements to overcome the training challenges are discussed along with their strengths and drawbacks. In short, the survey covers the three-step process of architecture selection, efficient training, and a compression technique suitable for a resource-constrained environment. It is thus a comprehensive survey guide that a developer can adapt for a time-series problem context and an RNN solution for the edge.
18

Dobnikar, Andrej, and Branko Šter. "Structural Properties of Recurrent Neural Networks". Neural Processing Letters 29, no. 2 (February 12, 2009): 75–88. http://dx.doi.org/10.1007/s11063-009-9096-2.

19

Semyonov, E. D., M. Ya Braginsky, D. V. Tarakanov, and I. L. Nazarova. "NEURAL NETWORK FORECASTING OF INPUT PARAMETERS IN OIL DEVELOPMENT". PROCEEDINGS IN CYBERNETICS 22, no. 4 (2023): 42–51. http://dx.doi.org/10.35266/1999-7604-2023-4-6.

Abstract
The article examines the use of artificial neural networks for forecasting technological parameters of oil development. Artificial neural networks based on the long short-term memory architecture and gated recurrent units are used to solve the problem. The findings of neural network forecasting prove the efficacy of recurrent neural networks, especially the long short-term memory architecture, for time-series forecasting.
20

Dimopoulos, Nikitas J., John T. Dorocicz, Chris Jubien, and Stephen Neville. "Training Asymptotically Stable Recurrent Neural Networks". Intelligent Automation & Soft Computing 2, no. 4 (January 1996): 375–88. http://dx.doi.org/10.1080/10798587.1996.10750681.

21

Phan, Manh Cong, and Martin T. Hagan. "Error Surface of Recurrent Neural Networks". IEEE Transactions on Neural Networks and Learning Systems 24, no. 11 (November 2013): 1709–21. http://dx.doi.org/10.1109/tnnls.2013.2258470.

22

Valle, Marcos Eduardo. "Complex-Valued Recurrent Correlation Neural Networks". IEEE Transactions on Neural Networks and Learning Systems 25, no. 9 (September 2014): 1600–1612. http://dx.doi.org/10.1109/tnnls.2014.2341013.

23

Goundar, Sam, Suneet Prakash, Pranil Sadal, and Akashdeep Bhardwaj. "Health Insurance Claim Prediction Using Artificial Neural Networks". International Journal of System Dynamics Applications 9, no. 3 (July 2020): 40–57. http://dx.doi.org/10.4018/ijsda.2020070103.

Abstract
A number of numerical practices exist that actuaries use to predict the annual medical claim expense in an insurance company. This amount needs to be included in the yearly financial budgets. Inappropriate estimating generally has negative effects on the overall performance of the business. This study presents the development of an artificial neural network model appropriate for predicting the anticipated annual medical claims. Once the implementation of the neural network models was finished, the focus was on decreasing the mean absolute percentage error by adjusting parameters such as the number of epochs, the learning rate, and the neurons in different layers. Both feedforward and recurrent neural networks were implemented to forecast the yearly claims amount. In conclusion, the artificial neural network model that was implemented proved to be an effective tool for forecasting the anticipated annual medical claims for BSP Life. The recurrent neural network outperformed the feedforward neural network in terms of accuracy and the computational power required to carry out the forecasting.
24

WANG, JUN. "ON THE ASYMPTOTIC PROPERTIES OF RECURRENT NEURAL NETWORKS FOR OPTIMIZATION". International Journal of Pattern Recognition and Artificial Intelligence 05, no. 04 (October 1991): 581–601. http://dx.doi.org/10.1142/s0218001491000338.

Abstract
Asymptotic properties of recurrent neural networks for optimization are analyzed. Specifically, asymptotic stability of recurrent neural networks with monotonically time-varying penalty parameters for optimization is proven; sufficient conditions of feasibility and optimality of solutions generated by the recurrent neural networks are characterized. Design methodology of the recurrent neural networks for solving optimization problems is discussed. Operating characteristics of the recurrent neural networks are also presented using illustrative examples.
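
A toy version of such a network, offered as our own illustration (the paper's dynamics may differ): Euler-integrate the gradient flow dx/dt = -(grad f(x) + s(t) grad P(x)) with a monotonically increasing penalty parameter s(t) on a small equality-constrained problem.

# Penalty-based optimization network: minimize f(x) = (x1-2)^2 + (x2-1)^2
# subject to x1 + x2 = 1, with penalty P(x) = (x1 + x2 - 1)^2 and a
# monotonically increasing penalty weight s(t). Toy illustration only.
import numpy as np

x = np.zeros(2)
dt = 1e-3
for step in range(20_000):
    s = 1.0 + 0.01 * step                    # time-varying penalty parameter
    grad_f = 2 * (x - np.array([2.0, 1.0]))
    g = x[0] + x[1] - 1.0                    # equality-constraint residual
    grad_pen = 2 * g * np.ones(2)            # gradient of P(x) w.r.t. x
    x -= dt * (grad_f + s * grad_pen)        # Euler step of the gradient flow

print(x)  # approaches the constrained optimum (1, 0) as s grows
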
25

Martins, T. D., J. M. Annichino-Bizzacchi, A. V. C. Romano, and R. Maciel Filho. "Artificial neural networks for prediction of recurrent venous thromboembolism". International Journal of Medical Informatics 141 (September 2020): 104221. http://dx.doi.org/10.1016/j.ijmedinf.2020.104221.

26

Saleh, Shadi. "Artificial Intelligence & Machine Learning in Computer Vision Applications". Embedded Selforganising Systems 7, no. 1 (February 20, 2020): 2–3. http://dx.doi.org/10.14464/ess71432.

Abstract
Deep learning and machine learning innovations are at the core of the ongoing revolution in Artificial Intelligence for the interpretation and analysis of multimedia data. The convergence of large-scale datasets and more affordable Graphics Processing Unit (GPU) hardware has enabled the development of neural networks for data analysis problems that were previously handled by traditional handcrafted features. Several deep learning architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM)/Gated Recurrent Unit (GRU), Deep Belief Networks (DBNs), and Deep Stacking Networks (DSNs) have been used, together with new open-source software and library options, to shape an entirely new scenario in computer vision processing.
27

de Vos, N. J. "Echo state networks as an alternative to traditional artificial neural networks in rainfall–runoff modelling". Hydrology and Earth System Sciences 17, no. 1 (January 22, 2013): 253–67. http://dx.doi.org/10.5194/hess-17-253-2013.

Abstract
Despite theoretical benefits of recurrent artificial neural networks over their feedforward counterparts, it is still unclear whether the former offer practical advantages as rainfall–runoff models. The main drawback of recurrent networks is the increased complexity of the training procedure due to their architecture. This work uses the recently introduced and conceptually simple echo state networks for streamflow forecasts on twelve river basins in the Eastern United States, and compares them to a variety of traditional feedforward and recurrent approaches. Two modifications on the echo state network models are made that increase the hydrologically relevant information content of their internal state. The results show that the echo state networks outperform feedforward networks and are competitive with state-of-the-art recurrent networks, across a range of performance measures. This, along with their simplicity and ease of training, suggests that they can be considered promising alternatives to traditional artificial neural networks in rainfall–runoff modelling.
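
For readers unfamiliar with echo state networks, the skeleton below shows the defining construction: a fixed random reservoir with spectral radius below one and a linear readout fitted by ridge regression. Sizes, scalings, and the toy target are our assumptions; the paper's hydrological modifications are not reproduced.

# Echo state network skeleton: fixed random reservoir + ridge-regression readout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u):                  # u: (T, n_in) input series, e.g. rainfall
    x, states = np.zeros(n_res), []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W @ x)
        states.append(x.copy())
    return np.array(states)            # (T, n_res) reservoir states

u = rng.random((500, 1))
y = np.roll(u[:, 0], -1)               # toy target: next input value
X, y = run_reservoir(u)[:-1], y[:-1]
ridge = 1e-6                           # only the readout weights are trained
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out                       # one-step-ahead predictions
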
28

Lan, Nur, Michal Geyer, Emmanuel Chemla, and Roni Katzir. "Minimum Description Length Recurrent Neural Networks". Transactions of the Association for Computational Linguistics 10 (2022): 785–99. http://dx.doi.org/10.1162/tacl_a_00489.

Abstract
We train neural networks to optimize a Minimum Description Length score, that is, to balance between the complexity of the network and its accuracy at a task. We show that networks optimizing this objective function master tasks involving memory challenges and go beyond context-free languages. These learners master languages such as aⁿbⁿ, aⁿbⁿcⁿ, aⁿb²ⁿ, and aⁿbᵐcⁿ⁺ᵐ, and they perform addition. Moreover, they often do so with 100% accuracy. The networks are small, and their inner workings are transparent. We thus provide formal proofs that their perfect accuracy holds not only on a given test set, but for any input sequence. To our knowledge, no other connectionist model has been shown to capture the underlying grammars for these languages in full generality.
29

Nasr, Mounir Ben, and Mohamed Chtourou. "Training recurrent neural networks using a hybrid algorithm". Neural Computing and Applications 21, no. 3 (December 31, 2010): 489–96. http://dx.doi.org/10.1007/s00521-010-0506-1.

30

Chen, Z. Y., C. P. Kwong, and Z. B. Xu. "Multiple-valued feedback and recurrent correlation neural networks". Neural Computing & Applications 3, no. 4 (December 1995): 242–50. http://dx.doi.org/10.1007/bf01414649.

31

Goulas, Alexandros, Fabrizio Damicelli, and Claus C. Hilgetag. "Bio-instantiated recurrent neural networks: Integrating neurobiology-based network topology in artificial networks". Neural Networks 142 (October 2021): 608–18. http://dx.doi.org/10.1016/j.neunet.2021.07.011.

32

Freitag, Steffen, Wolfgang Graf, and Michael Kaliske. "Recurrent neural networks for fuzzy data". Integrated Computer-Aided Engineering 18, no. 3 (June 17, 2011): 265–80. http://dx.doi.org/10.3233/ica-2011-0373.

33

Carta, Antonio, Alessandro Sperduti, and Davide Bacciu. "Encoding-based memory for recurrent neural networks". Neurocomputing 456 (October 2021): 407–20. http://dx.doi.org/10.1016/j.neucom.2021.04.051.

34

Graves, Daniel, and Witold Pedrycz. "Fuzzy prediction architecture using recurrent neural networks". Neurocomputing 72, no. 7-9 (March 2009): 1668–78. http://dx.doi.org/10.1016/j.neucom.2008.07.009.

35

Cruse, Holk, Jeffrey Dean, Thomas Kindermann, Josef Schmitz, and Michael Schumm. "Simulation of Complex Movements Using Artificial Neural Networks". Zeitschrift für Naturforschung C 53, no. 7-8 (August 1, 1998): 628–38. http://dx.doi.org/10.1515/znc-1998-7-816.

Abstract
A simulated network for controlling a six-legged, insect-like walking system is proposed. The network contains internal recurrent connections, but important recurrent connections utilize the loop through the environment. This approach leads to a subnet for controlling the three joints of a leg during its swing which is arguably the simplest possible solution. The task for the stance subnet appears more difficult because the movements of a larger and varying number of joints (9–18: three for each leg in stance) have to be controlled such that each leg contributes efficiently to support and propulsion and legs do not work at cross purposes. Already inherently non-linear, this task is further complicated by four factors: 1) the combination of legs in stance varies continuously, 2) during curve walking, legs must move at different speeds, 3) on compliant substrates, the speed of the individual leg may vary unpredictably, and 4) the geometry of the system may vary through growth and injury or due to non-rigid suspension of the joints. This task appears to require some kind of "motor intelligence". We show that an extremely decentralized, simple controller, based on a combination of negative and positive feedback at the joint level, copes with all these problems by exploiting the physical properties of the system.
36

GANCHEV, TODOR. "ENHANCED TRAINING FOR THE LOCALLY RECURRENT PROBABILISTIC NEURAL NETWORKS". International Journal on Artificial Intelligence Tools 18, no. 06 (December 2009): 853–81. http://dx.doi.org/10.1142/s0218213009000433.

Abstract
In the present contribution we propose an integral training procedure for Locally Recurrent Probabilistic Neural Networks (LR PNNs). Specifically, the adjustment of the smoothing factor "sigma" in the pattern layer of the LR PNN and the training of the recurrent layer weights are integrated in an automatic process that iteratively estimates all adjustable parameters of the LR PNN from the available training data. Furthermore, in contrast to the original LR PNN, whose recurrent layer was trained to provide optimum separation among the classes on the training dataset while striving to keep a balance between the learning rates for all classes, here the training strategy is oriented directly towards optimizing the overall classification accuracy. More precisely, the new training strategy targets maximizing the posterior probabilities for the target class and minimizing the posterior probabilities estimated for the non-target classes. The new fitness function requires fewer computations per evaluation, and therefore the overall computational demands for training the recurrent layer weights are reduced. The performance of the integrated training procedure is illustrated on three different speech processing tasks: emotion recognition, speaker identification and speaker verification.
37

Leung, Chi-Sing, and Lai-Wan Chan. "Dual extended Kalman filtering in recurrent neural networks". Neural Networks 16, no. 2 (March 2003): 223–39. http://dx.doi.org/10.1016/s0893-6080(02)00230-7.

38

Parga, N., L. Serrano-Fernández, and J. Falcó-Roget. "Emergent computations in trained artificial neural networks and real brains". Journal of Instrumentation 18, no. 02 (February 1, 2023): C02060. http://dx.doi.org/10.1088/1748-0221/18/02/c02060.

Abstract
Synaptic plasticity allows cortical circuits to learn new tasks and to adapt to changing environments. How do cortical circuits use plasticity to acquire functions such as decision-making or working memory? Neurons are connected in complex ways, forming recurrent neural networks, and learning modifies the strength of their connections. Moreover, neurons communicate by emitting brief discrete electric signals. Here we describe how to train recurrent neural networks in tasks like those used to train animals in neuroscience laboratories and how computations emerge in the trained networks. Surprisingly, artificial networks and real brains can use similar computational strategies.
39

Ziemke, Tom. "Radar Image Segmentation Using Self-Adapting Recurrent Networks". International Journal of Neural Systems 08, no. 01 (February 1997): 47–54. http://dx.doi.org/10.1142/s0129065797000070.

Abstract
This paper presents a novel approach to the segmentation and integration of (radar) images using a second-order recurrent artificial neural network architecture consisting of two sub-networks: a function network that classifies radar measurements into four different categories of objects in sea environments (water, oil spills, land and boats), and a context network that dynamically computes the function network's input weights. It is shown that in experiments (using simulated radar images) this mechanism outperforms conventional artificial neural networks since it allows the network to learn to solve the task through a dynamic adaptation of its classification function based on its internal state closely reflecting the current context.
40

FRANKLIN, JUDY A., and KRYSTAL K. LOCKE. "RECURRENT NEURAL NETWORKS FOR MUSICAL PITCH MEMORY AND CLASSIFICATION". International Journal on Artificial Intelligence Tools 14, no. 01n02 (February 2005): 329–42. http://dx.doi.org/10.1142/s0218213005002120.

Abstract
We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of recurrent networks for this purpose, and have found that Long Short-term Memory networks provide the best results. We show that a new pitch representation called Circles of Thirds works as well as two other published representations for these tasks, yet it is more succinct and enables faster learning. We then discuss limited results using other types of networks on the same tasks.
41

Ma, Tingsong, Ping Kuang, and Wenhong Tian. "An improved recurrent neural networks for 3d object reconstruction". Applied Intelligence 50, no. 3 (October 23, 2019): 905–23. http://dx.doi.org/10.1007/s10489-019-01523-3.

42

Huang, Chuangxia, Yigang He, and Ping Chen. "Dynamic Analysis of Stochastic Recurrent Neural Networks". Neural Processing Letters 27, no. 3 (April 11, 2008): 267–76. http://dx.doi.org/10.1007/s11063-008-9075-z.

43

Stepchenko, Arthur, and Jurij Chizhov. "NDVI Short-Term Forecasting Using Recurrent Neural Networks". Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 3 (June 16, 2015): 180. http://dx.doi.org/10.17770/etr2015vol3.167.

Abstract
In this paper predictions of the Normalized Difference Vegetation Index (NDVI) data recorded by satellites over Ventspils Municipality in Courland, Latvia are discussed. NDVI is an important variable for vegetation forecasting and management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. Artificial Neural Networks (ANN) are computational models and universal approximators, which are widely used for nonlinear, non-stationary and dynamical process modeling and forecasting. In this paper Elman Recurrent Neural Networks (ERNN) are used to make one-step-ahead prediction of univariate NDVI time series.
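
A minimal sketch of the one-step-ahead setup described above, using a Keras SimpleRNN (an Elman-style network) on a synthetic seasonal series standing in for NDVI; the window length, architecture, and series are assumptions made for the example.

# One-step-ahead forecasting with an Elman-style recurrent network.
import numpy as np
from tensorflow.keras import layers, models

t = np.arange(400)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * t / 23) + 0.02 * np.random.randn(400)

window = 12                            # past values used to predict the next one
X = np.stack([ndvi[i:i + window] for i in range(len(ndvi) - window)])[..., None]
y = ndvi[window:]

model = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.SimpleRNN(16, activation="tanh"),   # Elman-style recurrent layer
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)
next_value = model.predict(X[-1:], verbose=0)  # one-step-ahead prediction
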
44

Manfredini, Ricardo Augusto. "Hybrid Artificial Neural Networks for Electricity Consumption Prediction". International Journal of Advanced Engineering Research and Science 9, no. 8 (2022): 292–99. http://dx.doi.org/10.22161/ijaers.98.32.

Abstract
We present a comparative study of electricity consumption predictions using the SARIMAX method (Seasonal Auto-Regressive Moving Average with eXogenous variables), the HyFis2 model (Hybrid Neural Fuzzy Inference System), and the LSTNetA model (Long and Short Time-series Network Adapted), a hybrid neural network containing GRU (Gated Recurrent Unit), CNN (Convolutional Neural Network), and dense layers, specially adapted for this case study. The comparative experimental study showed a superior result for the LSTNetA model, with consumption predictions much closer to the real consumption. In the case study, the LSTNetA model had an RMSE (root mean squared error) of 198.44, the HyFis2 model 602.71, and the SARIMAX method 604.58.
45

Siriporananon, Somsak, and Boonlert Suechoey. "Power Losses Analysis in a Three-Phase Distribution Transformer Using Artificial Neural Networks". ECTI Transactions on Electrical Engineering, Electronics, and Communications 18, no. 2 (August 31, 2020): 130–36. http://dx.doi.org/10.37936/ecti-eec.2020182.223203.

Abstract
This research article presents an analysis of the power losses in a three-phase distribution transformer (100 kVA, 22 kV–400/230 V) using artificial neural networks, which can analyze the power losses in a distribution transformer faster and with fewer variables than the original method. 100,000 sets of measurement data were collected at the transformer manufacturer's factory by setting the current flow from 1% to 100% at temperatures of 30 °C, 35 °C, 40 °C, 45 °C, 50 °C, 55 °C, 60 °C, 65 °C, 70 °C, and 75 °C to calculate the power losses in the distribution transformer. The collected data were divided into 80,000 sets used for training, in order to find the parameters of the artificial neural networks, and 20,000 sets used as artificial neural network input to calculate the power losses in the distribution transformer. When the power losses predicted by the artificial neural networks were compared with the values calculated from the measurements, the percentage error was at a satisfactory level, and the approach can be applied to the design of power-loss testing of distribution transformers in the future.
46

Bacciu, Davide, and Francesco Crecchi. "Augmenting Recurrent Neural Networks Resilience by Dropout". IEEE Transactions on Neural Networks and Learning Systems 31, no. 1 (January 2020): 345–51. http://dx.doi.org/10.1109/tnnls.2019.2899744.

47

Liu, Shiwei, Iftitahu Ni’mah, Vlado Menkovski, Decebal Constantin Mocanu, and Mykola Pechenizkiy. "Efficient and effective training of sparse recurrent neural networks". Neural Computing and Applications 33, no. 15 (January 26, 2021): 9625–36. http://dx.doi.org/10.1007/s00521-021-05727-y.

Abstract
Recurrent neural networks (RNNs) have achieved state-of-the-art performance on various applications. However, RNNs are prone to be memory-bandwidth limited in practical applications and need long periods of both training and inference time. The aforementioned problems are at odds with training and deploying RNNs on resource-limited devices, where the memory and floating-point operations (FLOPs) budget are strictly constrained. To address this problem, conventional model compression techniques usually focus on reducing inference costs, operating on a costly pre-trained model. Recently, dynamic sparse training has been proposed to accelerate the training process by directly training sparse neural networks from scratch. However, previous sparse training techniques are mainly designed for convolutional neural networks and multi-layer perceptrons. In this paper, we introduce a method to train intrinsically sparse RNN models with a fixed number of parameters and floating-point operations (FLOPs) during training. We demonstrate state-of-the-art sparse performance with long short-term memory and recurrent highway networks on widely used tasks: language modeling and text classification. We simply use the results to advocate that, contrary to the general belief that training a sparse neural network from scratch leads to worse performance than dense networks, sparse training with adaptive connectivity can usually achieve better performance than dense models for RNNs.
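
The core of dynamic sparse training is a periodic prune-and-regrow step that keeps the parameter count fixed. A SET-style numpy sketch follows; the paper's exact schedule, cell structure, and regrowth rule are not reproduced here.

# Prune-and-regrow step for one sparse weight matrix (SET-style illustration).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (256, 256))
mask = rng.random(W.shape) < 0.1           # ~10% of connections active from scratch
W *= mask

def prune_and_regrow(W, mask, frac=0.3):
    # Drop the weakest frac of active weights (magnitude pruning)...
    active = np.flatnonzero(mask)
    k = int(frac * active.size)
    weakest = active[np.argsort(np.abs(W.ravel()[active]))[:k]]
    flat_mask = mask.ravel()
    flat_mask[weakest] = False
    W.ravel()[weakest] = 0.0
    # ...then regrow the same number of connections at random positions,
    # keeping the parameter count (and FLOPs) fixed throughout training.
    inactive = np.flatnonzero(~flat_mask)
    grown = rng.choice(inactive, size=k, replace=False)
    flat_mask[grown] = True
    W.ravel()[grown] = rng.normal(0, 0.01, size=k)
    return W, mask

W, mask = prune_and_regrow(W, mask)        # applied periodically between epochs
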
48

Zhai, Jun-Yong, Shu-Min Fei, and Xiao-Hui Mo. "Multiple models switching control based on recurrent neural networks". Neural Computing and Applications 17, no. 4 (August 24, 2007): 365–71. http://dx.doi.org/10.1007/s00521-007-0123-9.

49

Khan, Yaser Daanial, Farooq Ahmed, and Sher Afzal Khan. "Situation recognition using image moments and recurrent neural networks". Neural Computing and Applications 24, no. 7-8 (March 24, 2013): 1519–29. http://dx.doi.org/10.1007/s00521-013-1372-4.

50

PENG, CHUN-CHENG, and GEORGE D. MAGOULAS. "ADVANCED ADAPTIVE NONMONOTONE CONJUGATE GRADIENT TRAINING ALGORITHM FOR RECURRENT NEURAL NETWORKS". International Journal on Artificial Intelligence Tools 17, no. 05 (October 2008): 963–84. http://dx.doi.org/10.1142/s0218213008004242.

Abstract
Recurrent networks constitute an elegant way of increasing the capacity of feedforward networks to deal with complex data in the form of sequences of vectors. They are well known for their power to model temporal dependencies and process sequences for classification, recognition, and transduction. In this paper we propose an advanced nonmonotone Conjugate Gradient training algorithm for recurrent neural networks, which is equipped with an adaptive tuning strategy for both the nonmonotone learning horizon and the stepsize. Simulation results in sequence processing using three different recurrent architectures demonstrate that this modification of the Conjugate Gradient method is more effective than previous attempts.
