Academic literature on the topic 'Artificial Neural Networks and Recurrent Neural Networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial Neural Networks and Recurrent Neural Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Artificial Neural Networks and Recurrent Neural Networks"

1

Kolen, John F. "Exploring the computational capabilities of recurrent neural networks." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487853913100192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shao, Yuanlong. "Learning Sparse Recurrent Neural Networks in Language Modeling." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398942373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gudjonsson, Ludvik. "Comparison of two methods for evolving recurrent artificial neural networks for." Thesis, University of Skövde, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-155.

Full text
Abstract:
In this dissertation, a comparison of two evolutionary methods for evolving ANNs for robot control is made. The methods compared are SANE with enforced sub-population and delta-coding, and marker-based encoding. In an attempt to speed up evolution, marker-based encoding is extended with delta-coding. The task selected for comparison is the hunter-prey task. This task requires the robot controller to possess some form of memory, as the prey can move out of sensor range. Incremental evolution is used to evolve the complex behaviour required to successfully handle this task. The comparison is based on the computational power needed for evolution, and on the complexity, robustness, and generalisation of the resulting ANNs. The results show that marker-based encoding is the most efficient method tested and does not need delta-coding to increase the speed of the evolution process. Additionally, the results indicate that delta-coding does not increase the speed of evolution with marker-based encoding.
APA, Harvard, Vancouver, ISO, and other styles
4

Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks : implications for nativism." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Napoli, Christian. "A-I: Artificial intelligence." Doctoral thesis, Università degli studi di Catania, 2016. http://hdl.handle.net/20.500.11769/490996.

Full text
Abstract:
In this thesis we proposed new neural architectures and information theory approaches. By means of wavelet analysis, neural networks, and the results of our own creations, namely the wavelet recurrent neural networks and the radial basis probabilistic neural networks, we tried to better understand, model, and cope with human behavior itself. The first idea was to model the workers of a crowdsourcing project as nodes in a cloud-computing system, but we also hope to have exceeded the limits of such a definition. We hope to have opened a door on new possibilities to model the behavior of socially interconnected groups of people cooperating on a common task. We showed how the wavelet recurrent neural networks can be used to model something as complex as the availability of resources on an online service or a computational cloud; we then showed that, similarly, the availability of crowd workers can be modeled, as can the execution time of tasks performed by crowd workers. In doing so, we created a tool to tamper with the timeline, allowing us to obtain predictions regarding the status of the crowd in terms of available workers and executed workflows. Moreover, with our inanimate reasoner based on the developed radial basis probabilistic neural networks, first applied to social networks and then to living companies, we also learned how to model and manage cooperative networks in terms of workgroup creation and optimization. We did this by automatically interpreting worker profiles, then automatically extracting and interpreting the relevant information among hundreds of features for each worker in order to create workgroups based on their skills, professional attitudes, experience, etc. Finally, also thanks to the suggestions of Prof. Michael Bernstein of Stanford University, we proposed simply to connect the developed automata.
We made use of artificial intelligence to model the availability of human resources, then a second level of artificial intelligence to model human workgroups and skills, and finally a third level of artificial intelligence to model the workflows executed by those human resources once organized into groups and levels according to their experience. In our best intentions, such a three-level artificial intelligence could address the limits that have so far kept crowds from growing up as companies do, with a recognizable pyramidal structure that rewards the experience, skill, and professionalism of their workers. We cannot frankly say whether our work will really contribute to the so-called 'crowdsourcing revolution,' but we hope at least to have shed some light on the agreeable possibilities that are yet to come.
APA, Harvard, Vancouver, ISO, and other styles
6

Kramer, Gregory Robert. "An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rallabandi, Pavan Kumar. "Processing hidden Markov models using recurrent neural networks for biological applications." Thesis, University of the Western Cape, 2013. http://hdl.handle.net/11394/4525.

Full text
Abstract:
Philosophiae Doctor - PhD
In this thesis, we present a novel hybrid architecture combining two of the most popular sequence recognition models: Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). Though sequence recognition problems can potentially be modelled with well-trained HMMs, HMMs alone do not provide a reasonable solution to complicated recognition problems. In contrast, the ability of RNNs to handle complex sequence recognition problems is known to be exceptionally good. Methods for mapping HMMs into RNNs have been developed by other researchers in the past; however, to the best of our knowledge, no algorithm for processing HMMs through learning has been given. Taking advantage of the structural similarities between the architectural dynamics of RNNs and HMMs, in this work we analyze the combination of these two systems into a hybrid architecture. The main objective of this study is to improve sequence recognition/classification performance by applying a hybrid neural/symbolic approach. In particular, trained HMMs are used as the initial symbolic domain theory and directly encoded into an appropriate RNN architecture, meaning that the prior knowledge is processed through the training of RNNs. The proposed algorithm is then implemented on sample test beds and other real-time biological applications.
APA, Harvard, Vancouver, ISO, and other styles
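The structural similarity between HMMs and RNNs that the abstract above relies on is easy to see in the HMM forward algorithm, whose update is itself a recurrence over a hidden state vector. The toy sketch below is our own illustration with made-up numbers, not the thesis's architecture:

```python
# Toy illustration: the HMM forward recurrence
#   alpha_t[j] = (sum_i alpha_{t-1}[i] * A[i][j]) * B[j][o_t]
# is itself a simple recurrent update: a linear transform of the previous
# state followed by an elementwise gating by the emission probabilities.
# All numbers here are invented for the example.

def forward(pi, A, B, obs):
    """Return P(obs) under a discrete HMM via the forward algorithm."""
    n = len(pi)
    alpha = [pi[j] * B[j][obs[0]] for j in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
            for j in range(n)
        ]
    return sum(alpha)

pi = [0.6, 0.4]                   # initial state distribution
A  = [[0.7, 0.3], [0.4, 0.6]]     # state transition matrix
B  = [[0.9, 0.1], [0.2, 0.8]]     # emission probabilities (2 symbols)

p = forward(pi, A, B, [0, 1, 0])  # likelihood of the observation sequence
```

Because each step is a linear map plus an elementwise gate, a trained HMM's parameters can be written directly into recurrent weights, which is the kind of encoding the abstract describes.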
8

Salihoglu, Utku. "Toward a brain-like memory with recurrent neural networks." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210221.

Full text
Abstract:
For the last twenty years, several assumptions have been expressed in the fields of information processing, neurophysiology, and cognitive science. First, neural networks and their dynamical behaviors in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural network should be coded in one way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the network to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The third assumption is the presence of chaos and the benefit gained from its presence. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network randomly wanders around these unstable regimes in a spontaneous way, thus rapidly proposing alternative responses to external stimuli, and is easily able to switch from one of these potential attractors to another in response to any incoming stimulus. Finally, since their introduction sixty years ago, cell assemblies have proved to be a powerful paradigm for brain information processing. After their introduction in artificial intelligence, cell assemblies became commonly used in computational neuroscience as a neural substrate for content-addressable memories.

Based on these assumptions, this thesis provides a computer model of neural network simulation of a brain-like memory. It first shows experimentally that the more information is to be stored in robust cyclic attractors, the more chaos appears as a regime in the background, erratically itinerating among brief appearances of these attractors. Chaos appears to be not the cause but the consequence of the learning. However, it is a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves the semantics of the attractors to be associated with the feeding data unprescribed, while the other defines it a priori. Both algorithms show good results, even though the first one is more robust and has a greater storage capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this is not new, the mechanisms underlying their formation are poorly understood and, so far, there are no biologically plausible algorithms that explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining a fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies with a slower feedback signal that stabilizes the cell assemblies by learning the feedforward input connections. This last mechanism is inspired by the retroaxonal hypothesis.

Doctorat en Sciences
APA, Harvard, Vancouver, ISO, and other styles
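The attractor-based storage described in the abstract above is most easily illustrated with the classic Hopfield network, where a local Hebbian rule writes patterns into the weights and retrieval means letting the dynamics fall into the nearest stored attractor. The sketch below is our own generic toy code, not the thesis's chaotic-network model or its two algorithms:

```python
# Minimal Hopfield-style sketch of Hebbian attractor memory: patterns of
# +/-1 units are stored with the local rule dW[i][j] = p[i]*p[j], and a
# noisy probe is cleaned up by iterating sign-threshold updates until the
# state settles into a stored fixed-point attractor.

def train_hebbian(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:                      # local Hebbian rule
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=10):
    n = len(state)
    s = list(state)
    for _ in range(steps):                  # synchronous threshold updates
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = train_hebbian(stored)
noisy = [1, -1, 1, -1, 1, 1]                # stored[0] with the last bit flipped
out = recall(W, noisy)                      # converges back to stored[0]
```

The flipped bit is repaired because the probe lies in the stored pattern's basin of attraction; the thesis's contribution concerns cyclic and chaotic attractors rather than the fixed points of this simple model.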
9

Yang, Jidong. "Road crack condition performance modeling using recurrent Markov chains and artificial neural networks." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Willmott, Devin. "Recurrent Neural Networks and Their Applications to RNA Secondary Structure Inference." UKnowledge, 2018. https://uknowledge.uky.edu/math_etds/58.

Full text
Abstract:
Recurrent neural networks (RNNs) are state-of-the-art sequential machine learning tools, but they have difficulty learning sequences with long-range dependencies due to the exponential growth or decay of gradients backpropagated through the RNN. Some methods overcome this problem by modifying the standard RNN architecture to force the recurrent weight matrix W to remain orthogonal throughout training. The first half of this thesis presents a novel orthogonal RNN architecture that enforces orthogonality of W by parametrizing it with a skew-symmetric matrix via the Cayley transform. We present rules for backpropagation through the Cayley transform, show how to deal with the Cayley transform's singularity, and compare its performance on benchmark tasks to other orthogonal RNN architectures. The second half explores two deep learning approaches to problems in RNA secondary structure inference and compares them to a standard structure inference tool, the nearest neighbor thermodynamic model (NNTM). The first uses RNNs to detect paired or unpaired nucleotides in the RNA structure, which are then converted into synthetic auxiliary data that direct NNTM structure predictions. The second method uses recurrent and convolutional networks to directly infer RNA base pairs. In many cases, these approaches improve over NNTM structure predictions by 20-30 percentage points.
APA, Harvard, Vancouver, ISO, and other styles
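The Cayley transform mentioned in the abstract above maps any skew-symmetric matrix A to an orthogonal matrix W = (I - A)^{-1}(I + A), so optimizing A freely keeps W exactly orthogonal. The 2x2 sketch below is our own illustration; the thesis works with general n x n matrices and derives backpropagation rules for the map:

```python
# Cayley transform sketch: a skew-symmetric A = [[0, a], [-a, 0]] is mapped
# to W = (I - A)^{-1} (I + A), which is always orthogonal (here, a rotation).
# In 2x2 the inverse has a closed form, so the whole map fits in a few lines.

def cayley_2x2(a):
    """Orthogonal W from the skew-symmetric matrix [[0, a], [-a, 0]]."""
    d = 1.0 + a * a                      # det(I - A) for this A
    # (I - A)^{-1} = [[1, a], [-a, 1]] / d  and  (I + A) = [[1, a], [-a, 1]]
    return [[(1 - a * a) / d, 2 * a / d],
            [-2 * a / d, (1 - a * a) / d]]

W = cayley_2x2(0.5)

# Orthogonality check: the rows of W are orthonormal, i.e. W @ W^T = I.
dot = lambda r, c: sum(x * y for x, y in zip(r, c))
WWt = [[dot(W[0], W[0]), dot(W[0], W[1])],
       [dot(W[1], W[0]), dot(W[1], W[1])]]
```

Gradient norms are preserved under multiplication by an orthogonal W, which is why this parametrization addresses the exploding/vanishing-gradient problem the abstract describes.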
More sources
