Selected scholarly literature on the topic "Artificial Neural Networks and Recurrent Neutral Networks"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Artificial Neural Networks and Recurrent Neutral Networks".

Next to each source in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when it is available in the metadata.

Journal articles on the topic "Artificial Neural Networks and Recurrent Neutral Networks"

1

Prathibha, Dr G., Y. Kavya, P. Vinay Jacob, and L. Poojita. "Speech Emotion Recognition Using Deep Learning". International Journal of Scientific Research in Engineering and Management 08, no. 07 (July 4, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem36262.

Full text of the source
Abstract:
Speech is one of the primary forms of expression and is important for emotion recognition, which helps derive useful insights about a person's thoughts. Automatic speech emotion recognition is an active field of study in artificial intelligence and machine learning that aims to build machines that communicate with people via speech. In this work, deep learning algorithms such as the Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) are explored to extract features and classify the emotions calm, happy, fearful, disgust, angry, neutral, surprised, and sad using the Toronto emotional speech set (TESS) dataset, which consists of 2800 files. Features such as Mel-frequency cepstral coefficients (MFCC), chroma, and mel spectrogram are extracted from speech using pre-trained networks such as Xception, VGG16, ResNet50, MobileNetV2, DenseNet121, NASNetLarge, EfficientNetB5, EfficientNetV2M, InceptionV3, ConvNeXtTiny, EfficientNetV2B2, EfficientNetB6, and ResNet152V2. Features of two different networks are fused using Early, Mid, and Late fusion techniques to obtain better results. The features are then classified with a Long Short-Term Memory (LSTM) network, ultimately reaching an accuracy of 99%. The work is also extended to the RAVDESS dataset, which consists of seven emotions (calm, joyful, sad, surprised, afraid, disgust, and angry) in a total of 1440 files. Keywords: Convolutional Neural Network, Recurrent Neural Network, speech emotion recognition, MFCC, Chroma, Mel, LSTM.
ABNT, Harvard, Vancouver, APA, etc. styles
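The pipeline this abstract describes — frame-level acoustic features fed to an LSTM whose final hidden state is classified into one of eight emotions — can be sketched in a few lines. This is a minimal NumPy illustration with random weights and random stand-in "MFCC" frames, not the paper's trained model; all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b                                 # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))   # input/forget/output gates
    c = f * c + i * np.tanh(g)                            # new cell state
    h = o * np.tanh(c)                                    # new hidden state
    return h, c

n_mfcc, hidden, n_emotions = 13, 8, 8                     # 13 MFCCs/frame, 8 emotion classes
W = rng.standard_normal((4 * hidden, n_mfcc)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
V = rng.standard_normal((n_emotions, hidden)) * 0.1

frames = rng.standard_normal((50, n_mfcc))                # stand-in for MFCC frames of one utterance
h, c = np.zeros(hidden), np.zeros(hidden)
for x in frames:
    h, c = lstm_step(x, h, c, W, U, b)

logits = V @ h                                            # classify the final hidden state
probs = np.exp(logits - logits.max()); probs /= probs.sum()
predicted = int(np.argmax(probs))
```

With trained weights, `predicted` would index one of the eight emotion labels; here it only demonstrates the data flow.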
2

Ali, Ayesha, Ateeq Ur Rehman, Ahmad Almogren, Elsayed Tag Eldin, and Muhammad Kaleem. "Application of Deep Learning Gated Recurrent Unit in Hybrid Shunt Active Power Filter for Power Quality Enhancement". Energies 15, no. 20 (October 13, 2022): 7553. http://dx.doi.org/10.3390/en15207553.

Full text of the source
Abstract:
This research work aims to improve power quality for nonlinear loads by improving the system performance indices: eliminating as much total harmonic distortion (THD) as possible and reducing the neutral-wire current. The idea is to integrate a shunt hybrid active power filter (SHAPF) with the system using machine learning control techniques. The proposed system has been evaluated under an artificial neural network (ANN), a gated recurrent unit, and long short-term memory for the optimization of the SHAPF. The method is based on detecting the presence of harmonics in the power system by testing and comparing traditional pq0 theory and deep learning neural networks. The results obtained through the proposed methodology meet all the suggested international standards for THD. They also achieve removal of current from the neutral wire and deal efficiently with minor DC voltage variations occurring in the voltage-regulating current. The proposed algorithms have been evaluated on the performance indices of accuracy and computational complexity, showing effective results in terms of accuracy (99%) and computational complexity. The deep learning-based findings are compared on the basis of their root-mean-square error (RMSE) and loss function. The proposed system can be applied to domestic and industrial load conditions in a four-wire, three-phase power distribution system for harmonic mitigation.
ABNT, Harvard, Vancouver, APA, etc. styles
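The THD figure that the filter is designed to minimize can be computed directly from a sampled current waveform: the ratio of the RMS of the harmonics to the RMS of the fundamental. A small sketch (the 3rd/5th-harmonic amplitudes below are invented for illustration, not taken from the paper):

```python
import numpy as np

fs, f0 = 10_000, 50                     # sampling rate (Hz), fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)           # 0.2 s window = exactly 10 fundamental cycles
# Distorted load current: fundamental plus 3rd and 5th harmonics (typical of a nonlinear load)
i_load = (np.sin(2 * np.pi * f0 * t)
          + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)
          + 0.1 * np.sin(2 * np.pi * 5 * f0 * t))

# Single-sided amplitude spectrum; window length is an integer number of cycles,
# so each harmonic lands exactly on one FFT bin
spectrum = np.abs(np.fft.rfft(i_load)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)
fund = spectrum[np.argmin(np.abs(freqs - f0))]
harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 11)]

thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fund   # THD as a fraction of the fundamental
```

For these made-up amplitudes the result is sqrt(0.2² + 0.1²) ≈ 0.224, i.e. about 22% THD before filtering.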
3

Chaudhary, Pranav Kumar, Aakash Kishore Chotrani, Raja Mohan, Mythili Boopathi, Piyush Ranjan, and Madhavi Najana. "AI in Fraud Detection: Evaluating the Efficacy of Artificial Intelligence in Preventing Financial Misconduct". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1332–38. http://dx.doi.org/10.52783/jes.1508.

Full text of the source
Abstract:
AI is anticipated to enhance competitive advantages for financial organisations by increasing efficiency through cost reduction and productivity improvement, as well as by enhancing the quality of services and goods provided to consumers. AI applications in finance have the potential to create or exacerbate financial and non-financial risks, which could result in consumer and investor protection concerns such as biased, unfair, or discriminatory results, along with challenges related to data management and usage. An AI model's lack of transparency may lead to pro-cyclicality and systemic risk in markets, posing issues for financial supervision and internal governance frameworks that may not be in line with a technology-neutral regulatory approach. The primary objective of this research is to explore the effectiveness of Artificial Intelligence in preventing financial misconduct. This study extensively examines sophisticated methods for combating financial fraud, specifically evaluating the efficacy of Machine Learning and Artificial Intelligence. For assessment, the study utilized various metrics such as accuracy, precision, recall, F1 score, and ROC-AUC. The study found that deep learning techniques such as neural networks, Convolutional Neural Networks, Recurrent Neural Networks / Long Short-Term Memory, and Autoencoders achieved high precision and AUC-ROC scores in detecting financial fraud. Voting classifiers, stacking, random forests, and gradient boosting machines demonstrated durability and precision in the face of adversarial attacks, showcasing the strength of ensemble methods.
ABNT, Harvard, Vancouver, APA, etc. styles
4

Nassif, Ali Bou, Ismail Shahin, Mohammed Lataifeh, Ashraf Elnagar, and Nawel Nemmour. "Empirical Comparison between Deep and Classical Classifiers for Speaker Verification in Emotional Talking Environments". Information 13, no. 10 (September 27, 2022): 456. http://dx.doi.org/10.3390/info13100456.

Full text of the source
Abstract:
Speech signals carry various bits of information relevant to the speaker such as age, gender, accent, language, health, and emotions. Emotions are conveyed through modulations of facial and vocal expressions. This paper conducts an empirical comparison of performances between the classical classifiers: Gaussian Mixture Model (GMM), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Artificial neural networks (ANN); and the deep learning classifiers, i.e., Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Gated Recurrent Unit (GRU) in addition to the ivector approach for a text-independent speaker verification task in neutral and emotional talking environments. The deep models undergo hyperparameter tuning using the Grid Search optimization algorithm. The models are trained and tested using a private Arabic Emirati Speech Database, Ryerson Audio–Visual Database of Emotional Speech and Song dataset (RAVDESS) database, and a public Crowd-Sourced Emotional Multimodal Actors (CREMA) database. Experimental results illustrate that deep architectures do not necessarily outperform classical classifiers. In fact, evaluation was carried out through Equal Error Rate (EER) along with Area Under the Curve (AUC) scores. The findings reveal that the GMM model yields the lowest EER values and the best AUC scores across all datasets, amongst classical classifiers. In addition, the ivector model surpasses all the fine-tuned deep models (CNN, LSTM, and GRU) based on both evaluation metrics in the neutral, as well as the emotional speech. In addition, the GMM outperforms the ivector using the Emirati and RAVDESS databases.
ABNT, Harvard, Vancouver, APA, etc. styles
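The Equal Error Rate used above to compare classifiers is the operating point where the false-accept rate (impostors accepted) equals the false-reject rate (genuine speakers rejected). A self-contained sketch with synthetic Gaussian verification scores (the score distributions are invented, not taken from the paper):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: sweep thresholds, find where false-accept and false-reject rates cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= th).mean() for th in thresholds])  # false accepts
    frr = np.array([(genuine < th).mean() for th in thresholds])    # false rejects
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2

rng = np.random.default_rng(1)
genuine = rng.normal(2.0, 1.0, 1000)    # scores for true-speaker trials
impostor = rng.normal(0.0, 1.0, 1000)   # scores for impostor trials
eer = equal_error_rate(genuine, impostor)
```

For unit-variance score distributions separated by 2, the theoretical EER is Φ(−1) ≈ 0.159, so `eer` lands near 16%.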
5

Lee, Hong Jae, and Tae Seog Kim. "Comparison and Analysis of SNN and RNN Results for Option Pricing and Deep Hedging Using Artificial Neural Networks (ANN)". Academic Society of Global Business Administration 20, no. 5 (October 30, 2023): 146–78. http://dx.doi.org/10.38115/asgba.2023.20.5.146.

Full text of the source
Abstract:
The purpose of this study is to present alternative methods that overcome market limitations of the traditional Black-Scholes option pricing model, such as the assumption of fixed underlying-asset dynamics and the absence of market friction. As for the research method, the hedging model for a short call option portfolio is built with the artificial neural network models SNN (simple neural network) and RNN (recurrent neural network) as analysis models, with Adam as the gradient descent optimizer, and a deep-hedging (DH) methodology based on deep neural networks is applied. The research results are as follows. Regarding the ATM call option price, the BS model showed 10.245, the risk-neutral model 10.268, the SNN-DH model 11.834, and the RNN-DH model 11.882; there is thus a slight difference in the call option price across the analysis models. The DH analysis data consist of 100,000 training samples generated by Monte Carlo simulation, with the number of testing samples set to 20% of the training samples (20,000); the option payoff function is lambda(), which is -max(, 0). In addition, the option price and P&L of SNN-DH and RNN-DH appear linear, and the longer the remaining maturity, the closer the delta values of SNN-DH and BS are to an S-curve distribution; the closer to the expiration date, the more densely the deltas of the two models are distributed between 0 and 1. As for the total loss from delta hedging the short call option position, SNN-DH showed -0.0027 and RNN-DH showed -0.0061, indicating that the total hedged profit and loss was close to 0. The implication of this study is to present DL techniques based on AI methods as an alternative way to overcome limitations of the traditional Black-Scholes model such as fixed underlying-asset dynamics and market friction.
In addition, it is an independent analysis model that provides valid, robust tools by applying deep neural network algorithms to portfolio hedging problems. One limitation of the research is that the model was analyzed under the assumption of zero transaction costs, so future studies should consider this aspect.
ABNT, Harvard, Vancouver, APA, etc. styles
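For context, the Black-Scholes benchmark against which the SNN-DH and RNN-DH hedgers are compared has a closed form for both the call price and the hedge ratio (delta). A short sketch using only the standard library (the parameter values below are arbitrary, not those of the study):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price and delta (delta of a call is N(d1))."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return price, norm_cdf(d1)

# An at-the-money call: spot = strike = 100, one year to maturity
price, delta = bs_call(S=100, K=100, T=1.0, r=0.02, sigma=0.25)
```

A deep-hedging model replaces `delta` with a network output trained to minimize the hedged P&L, which is exactly the quantity the study reports as close to zero.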
6

Sutskever, Ilya, and Geoffrey Hinton. "Temporal-Kernel Recurrent Neural Networks". Neural Networks 23, no. 2 (March 2010): 239–43. http://dx.doi.org/10.1016/j.neunet.2009.10.009.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
7

Wang, Rui. "Generalisation of Feed-Forward Neural Networks and Recurrent Neural Networks". Applied and Computational Engineering 40, no. 1 (February 21, 2024): 242–46. http://dx.doi.org/10.54254/2755-2721/40/20230659.

Full text of the source
Abstract:
This paper presents an in-depth analysis of Feed-Forward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs), two powerful models in the field of artificial intelligence. Understanding these models and their applications is crucial for harnessing their potential. The study addresses the need to comprehend the unique characteristics and architectures of FNNs and RNNs. These models excel at processing sequential and temporal data, making them indispensable in tasks. Furthermore, the paper emphasises the importance of variables in FNNs and proposes a novel method to rank the importance of independent variables in predicting the output variable. By understanding the relationship between inputs and outputs, valuable insights can be gained into the underlying patterns and mechanisms driving the system being modelled. Additionally, the research explores the impact of initial weights on model performance. Contrary to conventional beliefs, the study provides evidence that neural networks with random weights can achieve competitive performance, particularly in situations with limited training datasets. This finding challenges the traditional notion that careful initialization is necessary for neural networks to perform well. In summary, this paper provides a comprehensive analysis of FNNs and RNNs while highlighting the importance of understanding the relationship between variables and the impact of initial weights on model performance. By shedding light on these crucial aspects, this research contributes to the advancement and effective utilisation of neural networks, paving the way for improved predictions and insights in various domains.
ABNT, Harvard, Vancouver, APA, etc. styles
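The idea of ranking independent variables by their contribution to the output can be illustrated with permutation importance: shuffle one input column and measure how much the prediction error grows. In this sketch a least-squares linear fit stands in for a trained FNN, and the true coefficients (3.0, 0.5, 0.0) are invented for the demonstration; the exact ranking method in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = rng.standard_normal((n, 3))
# Synthetic target: feature 0 matters most, feature 1 a little, feature 2 not at all
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2] + 0.1 * rng.standard_normal(n)

# Fit a linear model (stand-in for a trained FNN) by least squares
w, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline_mse = np.mean((X @ w - y) ** 2)

importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the j-th input's relationship to y
    importance.append(np.mean((Xp @ w - y) ** 2) - baseline_mse)

ranking = np.argsort(importance)[::-1]     # most important feature first
```

For these synthetic coefficients the recovered ranking is feature 0, then 1, then 2, matching how the data were generated.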
8

Poudel, Sushan, and Dr R. Anuradha. "Speech Command Recognition using Artificial Neural Networks". JOIV : International Journal on Informatics Visualization 4, no. 2 (May 26, 2020): 73. http://dx.doi.org/10.30630/joiv.4.2.358.

Full text of the source
Abstract:
Speech is one of the most effective ways for humans and machines to interact. This project aims to build a speech command recognition system capable of predicting predefined speech commands. A dataset provided by Google's TensorFlow and AIY teams is used to implement different neural network models, including a Convolutional Neural Network and a Recurrent Neural Network combined with a Convolutional Neural Network. The combination of Convolutional and Recurrent Neural Networks outperforms the Convolutional Neural Network alone by 8% and achieves 96.66% accuracy on 20 labels.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Turner, Andrew James, and Julian Francis Miller. "Recurrent Cartesian Genetic Programming of Artificial Neural Networks". Genetic Programming and Evolvable Machines 18, no. 2 (August 8, 2016): 185–212. http://dx.doi.org/10.1007/s10710-016-9276-6.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
10

Ziemke, Tom. "Radar image segmentation using recurrent artificial neural networks". Pattern Recognition Letters 17, no. 4 (April 1996): 319–34. http://dx.doi.org/10.1016/0167-8655(95)00128-x.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles

Theses / dissertations on the topic "Artificial Neural Networks and Recurrent Neutral Networks"

1

Kolen, John F. "Exploring the computational capabilities of recurrent neural networks /". The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487853913100192.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
2

Shao, Yuanlong. "Learning Sparse Recurrent Neural Networks in Language Modeling". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398942373.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
3

Gudjonsson, Ludvik. "Comparison of two methods for evolving recurrent artificial neural networks for". Thesis, University of Skövde, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-155.

Full text of the source
Abstract:

In this dissertation a comparison of two evolutionary methods for evolving ANNs for robot control is made. The methods compared are SANE with enforced sub-population and delta-coding, and marker-based encoding. In an attempt to speed up evolution, marker-based encoding is extended with delta-coding. The task selected for comparison is the hunter-prey task. This task requires the robot controller to possess some form of memory, as the prey can move out of sensor range. Incremental evolution is used to evolve the complex behaviour required to successfully handle this task. The comparison is based on the computational power needed for evolution, and on the complexity, robustness, and generalisation of the resulting ANNs. The results show that marker-based encoding is the most efficient method tested and does not need delta-coding to increase the speed of evolution. Additionally, the results indicate that delta-coding does not increase the speed of evolution with marker-based encoding.

ABNT, Harvard, Vancouver, APA, etc. styles
4

Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks : implications for nativism". Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Napoli, Christian. "A-I: Artificial intelligence". Doctoral thesis, Università degli studi di Catania, 2016. http://hdl.handle.net/20.500.11769/490996.

Full text of the source
Abstract:
In this thesis we proposed new neural architectures and information theory approaches. By means of wavelet analysis, neural networks, and the results of our own creations, namely the wavelet recurrent neural networks and the radial basis probabilistic neural networks,we tried to better understand, model and cope with the human behavior itself. The first idea was to model the workers of a crowdsourcing project as nodes on a cloud-computing system, we also hope to have exceeded the limits of such a definition. We hope to have opened a door on new possibilities to model the behavior of socially interconnected groups of people cooperating for the execution of a common task. We showed how it is possible to use the Wavelet Recurrent Neural Networks to model a quite complex thing such as the availability of resources on an online service or a computational cloud, then we showed that, similarly, the availability of crowd workers can be modeled, as well as the execution time of tasks performed by crowd workers. Doing that we created a tool to tamper with the timeline, hence allowing us to obtain predictions regarding the status of the crowd in terms of available workers and executed workflows. Moreover, with our inanimate reasoner based on the developed Radial Basis Probabilistic Neural Networks, firstly applied to social networks, then applied to living companies, we also understood how to model and manage cooperative networks in terms of workgroups creation and optimization. We have done that by automatically interpreting worker profiles, then automatically extrapolating and interpreting the relevant information among hundreds of features for each worker in order to create workgroups based on their skills, professional attitudes, experience, etc. Finally, also thanks to the suggestions of prof. Michael Bernstein of the Stanford University, we simply proposed to connect the developed automata. 
We made use of artificial intelligence to model the availability of human resources, but then we had to use a second level of artificial intelligence to model human workgroups and skills, and finally a third level of artificial intelligence to model workflows executed by those human resources once organized in groups and levels according to their experience. In our best intentions, such a three-level artificial intelligence could address the limits that, until now, have kept crowds from growing up as companies do, with a recognizable pyramidal structure, in order to reward the experience, skill, and professionalism of their workers. We cannot frankly say whether our work will really contribute to the so-called "crowdsourcing revolution", but we hope at least to have shed some light on the agreeable possibilities that are yet to come.
ABNT, Harvard, Vancouver, APA, etc. styles
6

Kramer, Gregory Robert. "An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation". Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
7

Rallabandi, Pavan Kumar. "Processing hidden Markov models using recurrent neural networks for biological applications". Thesis, University of the Western Cape, 2013. http://hdl.handle.net/11394/4525.

Full text of the source
Abstract:
Philosophiae Doctor - PhD
In this thesis, we present a novel hybrid architecture combining the most popular sequence recognition models: Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). Though sequence recognition problems can potentially be modelled by well-trained HMMs, HMMs alone cannot provide a reasonable solution to complicated recognition problems. In contrast, the ability of RNNs to recognize complex sequences is known to be exceptionally good. It should be noted that methods for applying HMMs within RNNs have been developed by other researchers in the past; however, to the best of our knowledge, no algorithm for processing HMMs through learning has been given. Taking advantage of the structural similarities between the architectural dynamics of RNNs and HMMs, in this work we analyze the combination of these two systems into a hybrid architecture. The main objective of this study is to improve sequence recognition/classification performance by applying a hybrid neural/symbolic approach. In particular, trained HMMs are used as the initial symbolic domain theory and directly encoded into an appropriate RNN architecture, meaning that the prior knowledge is processed through the training of RNNs. The proposed algorithm is then implemented on sample test beds and other real-time biological applications.
ABNT, Harvard, Vancouver, APA, etc. styles
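The HMM half of the hybrid can be made concrete with the forward algorithm, which computes the probability an HMM assigns to an observation sequence; this is the quantity an RNN encoding of an HMM must reproduce. The two-state model below is a toy with invented probabilities:

```python
import numpy as np

# Toy 2-state HMM over 3 observation symbols
A = np.array([[0.7, 0.3],      # state-transition probabilities (rows sum to 1)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],  # emission probabilities per state (rows sum to 1)
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])       # initial state distribution

def forward(obs):
    """Forward algorithm: returns P(observation sequence | HMM)."""
    alpha = pi * B[:, obs[0]]              # initialize with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate and weight by emission
    return alpha.sum()

p = forward([0, 1, 2])
```

Note the structural similarity the thesis exploits: the recurrence `alpha = (alpha @ A) * B[:, o]` is a linear state update driven by the input symbol, just like an RNN hidden-state update without the nonlinearity.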
8

Salihoglu, Utku. "Toward a brain-like memory with recurrent neural networks". Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210221.

Full text of the source
Abstract:
For the last twenty years, several assumptions have been expressed in the fields of information processing, neurophysiology and cognitive sciences. First, neural networks and their dynamical behaviors in terms of attractors is the natural way adopted by the brain to encode information. Any information item to be stored in the neural network should be coded in some way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the network to trap its dynamics in the desired item’s basin of attraction. The second view shared by neural network researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The third assumption is the presence of chaos and the benefit gained by its presence. Chaos, although very simply produced, inherently possesses an infinite amount of cyclic regimes that can be exploited for coding information. Moreover, the network randomly wanders around these unstable regimes in a spontaneous way, thus rapidly proposing alternative responses to external stimuli, and being easily able to switch from one of these potential attractors to another in response to any incoming stimulus. Finally, since their introduction sixty years ago, cell assemblies have proved to be a powerful paradigm for brain information processing. After their introduction in artificial intelligence, cell assemblies became commonly used in computational neuroscience as a neural substrate for content addressable memories.

Based on these assumptions, this thesis provides a computer model of neural network simulation of a brain-like memory. It first shows experimentally that the more information is to be stored in robust cyclic attractors, the more chaos appears as a regime in the background, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause, but the consequence of the learning. However, it appears to be a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves the semantics of the attractors to be associated with the feeding data unprescribed, while the other defines it a priori. Both algorithms show good results, even though the first one is more robust and has a greater storing capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this is not new, the mechanisms underlying their formation are poorly understood and, so far, there are no biologically plausible algorithms that can explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining a fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies with a slower feedback signal which stabilizes the cell assemblies by learning the feed-forward input connections. This last mechanism is inspired by the retroaxonal hypothesis.


Doctorat en Sciences

ABNT, Harvard, Vancouver, APA, etc. styles
9

Yang, Jidong. "Road crack condition performance modeling using recurrent Markov chains and artificial neural networks". [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000567.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
10

Willmott, Devin. "Recurrent Neural Networks and Their Applications to RNA Secondary Structure Inference". UKnowledge, 2018. https://uknowledge.uky.edu/math_etds/58.

Full text of the source
Abstract:
Recurrent neural networks (RNNs) are state of the art sequential machine learning tools, but have difficulty learning sequences with long-range dependencies due to the exponential growth or decay of gradients backpropagated through the RNN. Some methods overcome this problem by modifying the standard RNN architecture to force the recurrent weight matrix W to remain orthogonal throughout training. The first half of this thesis presents a novel orthogonal RNN architecture that enforces orthogonality of W by parametrizing it with a skew-symmetric matrix via the Cayley transform. We present rules for backpropagation through the Cayley transform, show how to deal with the Cayley transform's singularity, and compare its performance on benchmark tasks to other orthogonal RNN architectures. The second half explores two deep learning approaches to problems in RNA secondary structure inference and compares them to a standard structure inference tool, the nearest neighbor thermodynamic model (NNTM). The first uses RNNs to detect paired or unpaired nucleotides in the RNA structure, which are then converted into synthetic auxiliary data that direct NNTM structure predictions. The second method uses recurrent and convolutional networks to directly infer RNA base pairs. In many cases, these approaches improve over NNTM structure predictions by 20-30 percentage points.
ABNT, Harvard, Vancouver, APA, etc. styles
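The Cayley parametrization at the heart of the thesis is easy to verify numerically: any skew-symmetric matrix A maps to an orthogonal matrix W = (I + A)⁻¹(I − A), and since a skew-symmetric matrix has purely imaginary eigenvalues, I + A is always invertible. A sketch (not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
M = rng.standard_normal((n, n))
A = M - M.T                                # skew-symmetric by construction: A.T == -A

# Cayley transform: W = (I + A)^{-1} (I - A), computed via a linear solve
I = np.eye(n)
W = np.linalg.solve(I + A, I - A)

# If the parametrization works, W should satisfy W.T @ W == I
orthogonality_error = np.linalg.norm(W.T @ W - I)
```

In the orthogonal-RNN setting, training updates the free entries of A, and W is reconstructed this way each step, so the recurrent matrix stays orthogonal and gradients neither explode nor vanish through it.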

Books on the topic "Artificial Neural Networks and Recurrent Neutral Networks"

1

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
2

Jain, Lakhmi C., and Larry Medsker. Recurrent Neural Networks: Design and Applications. Taylor & Francis Group, 1999.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
3

Jain, Lakhmi C., and Larry Medsker. Recurrent Neural Networks: Design and Applications. Taylor & Francis Group, 1999.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
4

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Integration of Swarm Intelligence and Artificial Neural Network. World Scientific Publishing Company, 2011.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
6

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Full text of the source
Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates the need for a nuanced understanding of AI, thereby providing the impetus for this book, “Introduction to Artificial Intelligence and Neural Networks.” This book aims to equip its readers with a comprehensive understanding of AI and its subsets, machine learning and deep learning, with a particular emphasis on neural networks. It is designed for novices venturing into the field, as well as experienced learners who desire to solidify their knowledge base or delve deeper into advanced topics. In Chapter 1, we provide a thorough introduction to the world of AI, exploring its definition, historical trajectory, and categories. We delve into the applications of AI, and underscore the ethical implications associated with its proliferation. Chapter 2 introduces machine learning, elucidating its types and basic algorithms. We examine the practical applications of machine learning and delve into challenges such as overfitting, underfitting, and model validation. Deep learning and neural networks, an integral part of AI, form the crux of Chapter 3. We provide a lucid introduction to deep learning, describe the structure of neural networks, and explore forward and backward propagation. This chapter also delves into the specifics of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In Chapter 4, we outline the steps to train neural networks, including data preprocessing, cost functions, gradient descent, and various optimizers. We also delve into regularization techniques and methods for evaluating a neural network model. 
Chapter 5 focuses on specialized topics in neural networks such as autoencoders, Generative Adversarial Networks (GANs), Long Short-Term Memory Networks (LSTMs), and Neural Architecture Search (NAS). In Chapter 6, we illustrate the practical applications of neural networks, examining their role in computer vision, natural language processing, predictive analytics, autonomous vehicles, and the healthcare industry. Chapter 7 gazes into the future of AI and neural networks. It discusses the current challenges in these fields, emerging trends, and future ethical considerations. It also examines the potential impacts of AI and neural networks on society. Finally, Chapter 8 concludes the book with a recap of key learnings, implications for readers, and resources for further study. This book aims not only to provide a robust theoretical foundation but also to kindle a sense of curiosity and excitement about the endless possibilities AI and neural networks offer.
ABNT, Harvard, Vancouver, APA, etc. styles
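The forward and backward propagation that Chapter 3 of this book introduces can be demonstrated on a tiny two-layer network, with a finite-difference check confirming that the hand-derived gradient matches. All values below are synthetic; this is a didactic sketch, not material from the book.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(3)                    # one training example
y = 1.0                                       # scalar target
W1 = rng.standard_normal((4, 3)) * 0.5        # hidden-layer weights
w2 = rng.standard_normal(4) * 0.5             # output-layer weights

def loss(W1, w2):
    h = np.tanh(W1 @ x)                       # forward pass: hidden layer
    return 0.5 * (w2 @ h - y) ** 2            # squared error at the output

# Backward propagation: chain rule through the two layers
h = np.tanh(W1 @ x)
err = w2 @ h - y                              # dL/d(output)
grad_w2 = err * h                             # gradient w.r.t. output weights
grad_W1 = np.outer(err * w2 * (1 - h ** 2), x)  # tanh'(z) = 1 - tanh(z)^2

# Numerical check of one entry of grad_W1 by finite differences
eps = 1e-6
Wp = W1.copy(); Wp[0, 0] += eps
numeric = (loss(Wp, w2) - loss(W1, w2)) / eps
```

Agreement between `numeric` and `grad_W1[0, 0]` is the standard sanity check that the backward pass was derived correctly before scaling up to real training.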
7

Zhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part II. Springer London, Limited, 2007.

8

Zhang, Huaguang, Derong Liu, Zeng-Guang Hou, Changyin Sun, and Shumin Fei. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part I. Springer London, Limited, 2007.

9

Liu, Derong, Shumin Fei, Zeng-Guang Hou, Huaguang Zhang, and Changyin Sun, eds. Advances in Neural Networks - ISNN 2007: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007. Proceedings, Part I (Lecture Notes in Computer Science). Springer, 2007.

10

Churchland, Paul M. The Engine of Reason, the Seat of the Soul. The MIT Press, 1995. http://dx.doi.org/10.7551/mitpress/2758.001.0001.

Abstract:
In a fast-paced, entertaining narrative, replete with examples and numerous explanatory illustrations, Churchland brings together an exceptionally broad range of intellectual issues. He summarizes new results from neuroscience and recent work with artificial neural networks that together suggest a unified set of answers to questions about how the brain actually works; how it sustains a thinking, feeling, dreaming self; and how it sustains a self-conscious person. Churchland first explains the science—the powerful role of vector coding in sensory representation and pattern recognition, artificial neural networks that imitate parts of the brain, recurrent networks, neural representation of the social world, and diagnostic technologies and therapies for the brain in trouble. He then explores the far-reaching consequences of the current neurocomputational understanding of mind for our philosophical convictions, and for our social, moral, legal, medical, and personal lives. Churchland's wry wit and skillful teaching style are evident throughout. He introduces the remarkable representational power of a single human brain, for instance, via a captivating brain/World-Trade-Tower TV screen analogy. "Who can be watching this pixilated show?" Churchland queries; the answer is a provocative "no one." And he has included a folded stereoscopic viewer, attached to the inside back cover of the book, that readers can use to participate directly in several revealing experiments concerning stereo vision. Bradford Books imprint

Book chapters on the topic "Artificial Neural Networks and Recurrent Neutral Networks"

1

da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Recurrent Hopfield Networks". In Artificial Neural Networks, 139–55. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_7.

2

Krauss, Patrick. "Recurrent Neural Networks". In Artificial Intelligence and Brain Research, 131–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68980-6_14.

3

Lynch, Stephen. "Recurrent Neural Networks". In Python for Scientific Computing and Artificial Intelligence, 267–84. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003285816-19.

4

Sharma, Arpana, Kanupriya Goswami, Vinita Jindal, and Richa Gupta. "A Road Map to Artificial Neural Network". In Recurrent Neural Networks, 3–21. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-2.

5

Kathirvel, A., Debashreet Das, Stewart Kirubakaran, M. Subramaniam, and S. Naveneethan. "Artificial Intelligence–Based Mobile Bill Payment System Using Biometric Fingerprint". In Recurrent Neural Networks, 233–45. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-16.

6

da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Forecast of Stock Market Trends Using Recurrent Networks". In Artificial Neural Networks, 221–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_13.

7

Lindgren, Kristian, Anders Nilsson, Mats G. Nordahl, and Ingrid Råde. "Evolving Recurrent Neural Networks". In Artificial Neural Nets and Genetic Algorithms, 55–62. Vienna: Springer Vienna, 1993. http://dx.doi.org/10.1007/978-3-7091-7533-0_9.

8

Schäfer, Anton Maximilian, and Hans Georg Zimmermann. "Recurrent Neural Networks Are Universal Approximators". In Artificial Neural Networks – ICANN 2006, 632–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_66.

9

Riaza, Ricardo, and Pedro J. Zufiria. "Time-Scaling in Recurrent Neural Learning". In Artificial Neural Networks — ICANN 2002, 1371–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_221.

10

Hammer, Barbara. "On the Generalization Ability of Recurrent Networks". In Artificial Neural Networks — ICANN 2001, 731–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_102.


Conference papers on the topic "Artificial Neural Networks and Recurrent Neutral Networks"

1

Cao, Zhu, Linlin Wang, and Gerard de Melo. "Multiple-Weight Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/205.

Abstract:
Recurrent neural networks (RNNs) have enjoyed great success in speech recognition, natural language processing, etc. Many variants of RNNs have been proposed, including vanilla RNNs, LSTMs, and GRUs. However, current architectures are not particularly adept at dealing with tasks involving multi-faceted contents. In this work, we solve this problem by proposing Multiple-Weight RNNs and LSTMs, which rely on multiple weight matrices in an attempt to mimic the human ability of switching between contexts. We present a framework for adapting RNN-based models and analyze the properties of this approach. Our detailed experimental results show that our model outperforms previous work across a range of different tasks and datasets.
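The core idea of the abstract — several recurrent weight matrices, with the input deciding which "context" dominates — can be sketched as below. This is an illustrative reconstruction, not the authors' implementation; the function names, the selector matrix `S`, and the softmax-mixture rule are all assumptions.

```python
import math

def matvec(W, x):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def mw_rnn_step(x, h, Ws, U, S):
    # Score the K recurrent weight matrices from the current input, then
    # use their softmax-weighted mixture as the effective recurrent
    # weight -- a soft "context switch" between the K weight sets.
    alpha = softmax(matvec(S, x))            # K mixture coefficients
    candidates = [matvec(W, h) for W in Ws]  # K candidate recurrent terms
    rec = [sum(a * c[i] for a, c in zip(alpha, candidates))
           for i in range(len(h))]
    inp = matvec(U, x)
    return [math.tanh(r + i) for r, i in zip(rec, inp)]

# Toy dimensions: 2-dim input, 2-dim hidden state, K = 2 weight matrices.
Ws = [[[0.5, 0.0], [0.0, 0.5]], [[0.0, 0.5], [0.5, 0.0]]]
U = [[1.0, 0.0], [0.0, 1.0]]
S = [[1.0, -1.0], [-1.0, 1.0]]               # K x input_dim selector
h1 = mw_rnn_step([1.0, 0.0], [0.2, -0.1], Ws, U, S)
```

A hard switch (picking the argmax matrix) would also match the "switching between contexts" description; the soft mixture is chosen here only because it keeps the step differentiable.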
2

Wu, Hao, Ziyang Chen, Weiwei Sun, Baihua Zheng, and Wei Wang. "Modeling Trajectories with Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/430.

Abstract:
Modeling trajectory data is a building block for many smart-mobility initiatives. Existing approaches apply shallow models such as Markov chains and inverse reinforcement learning to model trajectories, which cannot capture long-term dependencies. On the other hand, deep models such as Recurrent Neural Networks (RNNs) have demonstrated their strength in modeling variable-length sequences. However, directly adopting RNNs to model trajectories is not appropriate because of the unique topological constraints faced by trajectories. Motivated by these findings, we design two RNN-based models which take full advantage of the strength of RNNs in capturing variable-length sequences while addressing the constraints that topological structure places on trajectory modeling. Our experimental study based on real taxi trajectory datasets shows that both of our approaches largely outperform existing approaches.
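One way to read the "topological constraints" point is that the next road segment must be adjacent to the current one in the road network, so the model's output distribution is renormalized over reachable segments only. A minimal sketch of such constrained decoding follows; the adjacency map and function names are hypothetical illustrations, not the paper's actual design.

```python
import math

def constrained_next_distribution(scores, current, adjacency):
    # Keep only the segments reachable from `current` in the road graph,
    # then renormalize the model's raw scores over that subset so that
    # topologically impossible transitions receive zero probability.
    allowed = adjacency[current]
    masked = {seg: scores[seg] for seg in allowed}
    m = max(masked.values())
    exp = {seg: math.exp(s - m) for seg, s in masked.items()}
    total = sum(exp.values())
    return {seg: v / total for seg, v in exp.items()}

# Toy road graph: from segment "a" a vehicle can only continue to "b" or "c".
adjacency = {"a": ["b", "c"], "b": ["a"], "c": ["a", "b"]}
scores = {"a": 2.0, "b": 1.0, "c": 0.5, "d": 3.0}  # raw per-segment scores
dist = constrained_next_distribution(scores, "a", adjacency)
```

Note that segment "d" gets zero probability despite having the highest raw score, because it is not reachable from "a" in the toy graph.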
3

Omar, Tarek A., Nabih E. Bedewi, and Azim Eskandarian. "Recurrent Artificial Neural Networks for Crashworthiness Analysis". In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-1190.

Abstract:
The initial velocity and structural characteristics of any vehicle are the main factors affecting the vehicle response in the case of frontal impact. Finite Element (FE) simulations are essential tools for crashworthiness analysis; however, FE models are growing larger, which increases simulation time and cost. An advanced recurrent Artificial Neural Network (ANN) is used to store the nonlinear dynamic characteristics of the vehicle structure. Therefore, hundreds of impact scenarios can be performed quickly and at much lower cost by using the trained networks. The equation of motion of the dynamic system was used to define the inputs and outputs of the ANN. The back-propagation learning rule was used to adjust the connecting weights and biases of the developed network. To include the dynamics of the system, the delayed acceleration was fed back as an input to the ANN together with the velocity and displacement. A Finite Element (FE) model of a simple box beam with a rigid mass attached to it was developed to represent a general crushable object. The simulation results were obtained by impacting this model into a rigid wall at different initial velocities. The displacement, velocity, and acceleration curves obtained from the simulation — for the C.G. of the moving mass — were used to train the ANN. After a successful training phase, the ANN was tested by predicting a new acceleration curve. The points of the acceleration curve were predicted sequentially, since only one point of the curve is predicted in each cycle of the NN operation. The predicted acceleration curve showed good correlation with the actual curve obtained from the simulation. During the recall phase, the predicted acceleration of a new state was integrated twice to obtain the velocity and displacement using a second-order integration scheme. Then, the displacement, velocity, and acceleration of this new state were fed to the ANN to predict the next state's acceleration, and so forth. 
The results indicated that the recurrent ANN can accurately capture the frontal crash characteristics of any impacting structure and predict the crash performance of the same structure for any other crash scenario within the training limits. The current paper considered only frontal impact; offset and oblique impact scenarios will be included in further research.
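The recall loop described in the abstract — predict one acceleration point, integrate twice, feed the new state back — can be sketched as follows. The trained network is stood in for by a simple spring-damper function; the names and the particular second-order (trapezoidal) scheme are illustrative assumptions, not the paper's exact formulation.

```python
def closed_loop_recall(net, d0, v0, a0, dt, steps):
    # One network evaluation yields one new acceleration point; the
    # trapezoidal rule integrates it twice to obtain velocity and
    # displacement, which are fed back as the next input state.
    d, v, a = d0, v0, a0
    history = [(d, v, a)]
    for _ in range(steps):
        a_next = net(d, v, a)                 # one NN call -> one point
        v_next = v + 0.5 * (a + a_next) * dt  # first integration
        d_next = d + 0.5 * (v + v_next) * dt  # second integration
        d, v, a = d_next, v_next, a_next
        history.append((d, v, a))
    return history

# Stand-in for the trained ANN: a damped linear oscillator.
spring_damper = lambda d, v, a: -10.0 * d - 2.0 * v
trajectory = closed_loop_recall(spring_damper, d0=1.0, v0=0.0, a0=-10.0,
                                dt=0.01, steps=200)
```

With the damped stand-in dynamics, the displacement in `trajectory` oscillates and decays, mirroring how the recall phase marches a crash pulse forward one predicted point at a time.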
4

Mak, M. W. "Speaker identification using modular recurrent neural networks". In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950519.

5

Chen, Yuexing, and Jiarun Li. "Recurrent Neural Networks algorithms and applications". In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE). IEEE, 2021. http://dx.doi.org/10.1109/icbase53849.2021.00015.

6

Sharma, Shambhavi. "Emotion Recognition from Speech using Artificial Neural Networks and Recurrent Neural Networks". In 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence). IEEE, 2021. http://dx.doi.org/10.1109/confluence51648.2021.9377192.

7

Lee, Jinhyuk, Hyunjae Kim, Miyoung Ko, Donghee Choi, Jaehoon Choi, and Jaewoo Kang. "Name Nationality Classification with Recurrent Neural Networks". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/289.

Abstract:
Personal names tend to have many variations differing from country to country. Though a large number of personal names exist on the Web, nationality prediction based solely on names has not been fully studied due to the difficulty of extracting subtle character-level features. We propose a recurrent neural network based model which predicts the nationality of each name using automatic feature extraction. Evaluation on Olympic record data shows that our model achieves greater accuracy than previous feature-based approaches in nationality prediction tasks. We also evaluate our proposed model and baseline models on a name ethnicity classification task, again achieving better or comparable performance. We further investigate the effectiveness of the character embeddings used in our proposed model.
8

Argun, Aykut, Tobias Thalheim, Frank Cichos, and Giovanni Volpe. "Calibration of force fields using recurrent neural networks". In Emerging Topics in Artificial Intelligence 2020, edited by Giovanni Volpe, Joana B. Pereira, Daniel Brunner, and Aydogan Ozcan. SPIE, 2020. http://dx.doi.org/10.1117/12.2567931.

9

"INTERACTIVE EVOLVING RECURRENT NEURAL NETWORKS ARE SUPER-TURING". In International Conference on Agents and Artificial Intelligence. SciTePress - Science and and Technology Publications, 2012. http://dx.doi.org/10.5220/0003740603280333.

10

Swanston, D. J. "Relative order defines a topology for recurrent networks". In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950564.


Organization reports on the topic "Artificial Neural Networks and Recurrent Neutral Networks"

1

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy, with half the required processing of solely the numerical classifier or neural network. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. 
Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine external quality of tomatoes based on visual information. 
An improved color-sorting model, which is stable and does not require recalibration for each season, was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner consistent with human graders and inspectors.
