To see the other types of publications on this topic, follow the link: Neural computer.

Dissertations on the topic "Neural computer"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "Neural computer".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically according to the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, where these are available in the item's metadata.

Browse dissertations from a wide range of disciplines and compile a correctly formatted bibliography.

1

Somers, Harriet. „A neural computer“. Thesis, University of York, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362021.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Churcher, Stephen. „VLSI neural networks for computer vision“. Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/13397.

Annotation:
Recent years have seen the rise to prominence of a powerful new computational paradigm - the so-called artificial neural network. Loosely based on the microstructure of the central nervous system, neural networks are massively parallel arrangements of simple processing elements (neurons) which communicate with each other through variable-strength connections (synapses). The simplicity of such a description belies the complexity of the calculations which neural networks are able to perform. Allied to this, the emergent properties of noise resistance, fault tolerance, and large data bandwidths (all arising from the parallel architecture) mean that neural networks, when appropriately implemented, represent a powerful tool for solving many problems which require the processing of real-world data. A computer vision task (viz. the classification of regions in images of segmented natural scenes) is presented as a problem in which large volumes of data need to be processed quickly and accurately and, in certain circumstances, disambiguated. Of the classifiers tried, the neural network (a multi-layer perceptron) was found to provide the best overall solution to the task of distinguishing between regions which were 'roads' and those which were 'not roads'. In order that best use might be made of the parallel processing abilities of neural networks, a variety of special-purpose hardware implementations are discussed, before two different analogue VLSI designs are presented, complete with characterisation and test results. The latter of these chips (the EPSILON device) is used as the basis for a practical neuro-computing system. The results of experimentation with different applications are presented. Comparisons with computer simulations demonstrate the accuracy of the chips, and their ability to support learning algorithms, thereby proving the viability of pulsed analogue VLSI techniques for the implementation of artificial neural networks.
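The classifier described above is a multi-layer perceptron. As a purely illustrative sketch (not the thesis's implementation: the weights and the idea of four region features are invented here), a forward pass producing a 'road' probability might look like:

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp_forward(X, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer perceptron with sigmoid units,
    producing a probability for the 'road' class per input region."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return sigmoid(h @ W2 + b2)       # output probability in (0, 1)

# Hypothetical 4-feature region descriptors (e.g. mean intensity, texture)
X = rng.normal(size=(5, 4))
W1 = rng.normal(size=(4, 3)); b1 = np.zeros(3)
W2 = rng.normal(size=3);      b2 = 0.0
p_road = mlp_forward(X, W1, b1, W2, b2)
labels = p_road > 0.5                 # 'road' vs 'not road' decision
```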
3

Khan, Altaf Hamid. „Feedforward neural networks with constrained weights“. Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Annotation:
The conventional multilayer feedforward network having continuous weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3,3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3 bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off 'near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables.
Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight network, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms; the 'onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes; and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to those of their conventional counterparts.
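The multiplication scheme described for these constrained-weight networks can be illustrated in a few lines. This is a sketch of the general idea only; the function names are invented, and the bit-level details follow the abstract's description (one shift, one add, one sign change at most):

```python
def multiplierless_product(x, w):
    """Multiply an integer activation x by an integer weight w in [-3, 3]
    using only sign changes, shifts, and adds (no hardware multiplier)."""
    assert w in (-3, -2, -1, 0, 1, 2, 3)
    if w == 0:
        return 0
    mag = abs(w)
    if mag == 1:
        y = x
    elif mag == 2:
        y = x << 1            # one shift
    else:                     # mag == 3: one shift plus one add
        y = (x << 1) + x
    return y if w > 0 else -y # at most one sign change

def ternary_product(x, w):
    """1-bit synapse case {-1, 0, 1}: a single sign change (or zero)."""
    assert w in (-1, 0, 1)
    return x if w == 1 else (-x if w == -1 else 0)
```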
4

Kulakov, Anton. „Multiprocessing neural network simulator“. Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/348420/.

Annotation:
Over the last few years tremendous progress has been made in neuroscience by employing simulation tools for investigating neural network behaviour. Many simulators have been created during the last few decades, and their number and feature sets continually grow due to persistent interest from groups of researchers and engineers. Simulation software that is able to simulate a large-scale neural network has been developed and is presented in this work. Based on a highly abstract integrate-and-fire neuron model, a clock-driven sequential simulator has been developed in C++. The program is able to associate input patterns with output patterns. Its novel, biologically plausible learning mechanism uses Long Term Potentiation and Long Term Depression to change the strength of the connections between neurons based on a global binary feedback. Later, the sequentially executed model was extended to a multi-processor system, which executes the described learning algorithm using an event-driven technique on a parallel distributed framework, simulating a neural network asynchronously. This allows the simulation to manage larger-scale neural networks while remaining immune to processor failure and communication problems. The main benefit of the resulting multi-processor neural network simulator is the possibility of simulating large-scale neural networks using highly parallel distributed computing. For that reason the design of the simulator incorporates an efficient weight-adjusting algorithm and an efficient means of asynchronous local communication between processors.
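As a rough illustration of the kind of learning rule described (LTP/LTD gated by a global binary feedback signal), consider the following toy sketch. The update rule, learning rate, and clipping bounds are invented for illustration and are not the thesis's actual algorithm:

```python
import numpy as np

def update_weights(w, pre, post, reward, lr=0.05, w_min=0.0, w_max=1.0):
    """Toy LTP/LTD rule gated by a global binary feedback signal.

    w      : (n_post, n_pre) weight matrix
    pre    : (n_pre,) binary vector of presynaptic spikes
    post   : (n_post,) binary vector of postsynaptic spikes
    reward : 1 -> potentiate co-active synapses (LTP),
             0 -> depress them (LTD)
    """
    coactive = np.outer(post, pre)          # synapses where both sides fired
    delta = lr * coactive if reward else -lr * coactive
    return np.clip(w + delta, w_min, w_max) # keep weights in a bounded range

rng = np.random.default_rng(0)
w = rng.uniform(0.4, 0.6, size=(3, 4))
pre = np.array([1, 0, 1, 0])
post = np.array([1, 1, 0])
w_ltp = update_weights(w, pre, post, reward=1)   # strengthened
w_ltd = update_weights(w, pre, post, reward=0)   # weakened
```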
5

Durrant, Simon. „Negative correlation in neural systems“. Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2387/.

Annotation:
In our attempt to understand neural systems, it is useful to identify statistical principles that may be beneficial in neural information processing, outline how these principles may work in theory, and demonstrate the benefits through computational modelling and simulation. Negative correlation is one such principle, and is the subject of this work. The main body of the work falls into three parts. The first part demonstrates the space-filling and accelerated central limit convergence benefits of negative correlation, both generally and in the specific neural context of V1 receptive fields. I outline two new algorithms combining traditional ICA with a correlation objective function. Correlated component analysis seeks components with a given correlation matrix, while correlated basis analysis seeks basis functions with a given correlation matrix. The benefits of recovering components and basis functions with negative correlations are shown. The second part looks at the functional role of negative correlation for integrate-and-fire neurons in the context of suprathreshold stochastic resonance, for neurons receiving Poisson inputs modelled by a diffusion approximation. I show how the SSR effect can be seen in networks of spiking neurons, and further show how correlation can be used to control the noise level, and that optimal information transmission occurs for negatively correlated inputs when parameters take biophysically plausible values. The final part examines the question of how negative correlation may be implemented in the context of small networks of spiking neurons. Networks of integrate-and-fire neurons with and without lateral inhibitory connections are tested, and the networks with the inhibitory connections are found to perform better and show negatively correlated firing patterns. This result is extended to more biophysically detailed neuron and synapse models, highlighting the robust nature of the mechanism.
Finally, the mechanism is explained as a threshold-unit approximation to non-threshold maximum likelihood signal/noise decomposition.
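The variance-reduction benefit of negatively correlated inputs mentioned above can be illustrated with a minimal simulation using antithetic (perfectly anti-correlated) pairs. This is an illustrative extreme case, not a model from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Independent inputs: two i.i.d. uniform streams summed together
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
sum_indep = u1 + u2                   # variance = 2 * 1/12

# Negatively correlated inputs: antithetic pair (u, 1 - u), correlation -1
u = rng.uniform(size=n)
sum_neg = u + (1.0 - u)               # variance collapses in this extreme case

var_indep = sum_indep.var()           # about 1/6
var_neg = sum_neg.var()               # essentially zero
```

The same mechanism, in milder form, is what accelerates central-limit convergence for sums of negatively correlated variables: the negative covariances subtract from the variance of the sum.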
6

Baker, Thomas Edward. „Implementation limits for artificial neural networks“. Full text open access at:, 1990. http://content.ohsu.edu/u?/etd,268.

7

Lam, Yiu Man. „Self-organized cortical map formation by guiding connections /“. View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20LAM.

8

Adamu, Abdullahi S. „An empirical study towards efficient learning in artificial neural networks by neuronal diversity“. Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.

Annotation:
Artificial Neural Networks (ANNs) are biologically inspired algorithms, and it is natural that biology continues to inspire research on them. From the recent breakthrough of deep learning to the wake-sleep training routine, all draw on a common source of inspiration: biology. The transfer functions of artificial neural networks play the important role of forming the decision boundaries necessary for learning. However, there has been relatively little research on transfer function optimization compared to other aspects of neural network optimization. In this work, neuronal diversity - a property found in biological neural networks - is explored as a potentially promising method of transfer function optimization. This work shows how neural diversity can improve generalization in the context of the literature on the bias-variance decomposition and meta-learning. It then demonstrates that neural diversity - represented in the form of transfer function diversity - can yield diverse and accurate computational strategies that can be used as ensembles with competitive results, without supplementing it with other diversity maintenance schemes that tend to be computationally expensive. This work also presents neural network meta-features, described as problem signatures, sampled from models with diverse transfer functions for problem characterization. These were shown to meet the basic properties desired of any meta-feature, i.e. consistency for a given problem and discrimination between different problems. Furthermore, these meta-features were also used to study the underlying computational strategies adopted by the neural network models, which led to the discovery of the strong discriminatory property of the evolved transfer functions. The culmination of this study is the co-evolution of neurally diverse neurons with their weights and topology for efficient learning.
It is shown to achieve significant generalization ability, as demonstrated by an average MSE of 0.30 on 22 different benchmarks with minimal resources (i.e. two hidden units). Interestingly, these are the properties associated with neural diversity, showing that efficiency and increased computational capacity can be replicated with transfer function diversity in artificial neural networks.
9

McMichael, Lonny D. (Lonny Dean). „A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model“. Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.

Annotation:
A useful compiler has been designed that takes a high level neural network specification and constructs a low level configuration file explicitly specifying all network parameters and connections. The neural network model for which this compiler was designed is the adaptrode neuronal model, and the configuration file created can be used by the Adnet simulation engine to perform network experiments. The specification language is very flexible and provides a general framework from which almost any network wiring configuration may be created. While the compiler was created for the specialized adaptrode model, the wiring specification algorithms could also be used to specify the connections in other types of networks.
10

Yang, Horng-Chang. „Multiresolution neural networks for image edge detection and restoration“. Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/66740/.

Annotation:
One of the methods for building an automatic visual system is to borrow the properties of the human visual system (HVS). Artificial neural networks are based on this doctrine and they have been applied to image processing and computer vision. This work focused on the plausibility of using a class of Hopfield neural networks for edge detection and image restoration. To this end, a quadratic energy minimization framework is presented. Central to this framework are relaxation operations, which can be implemented using the class of Hopfield neural networks. The role of the uncertainty principle in vision is described, which imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows the trade-off between position and class resolution and ensures both robustness in noise and efficiency of computation. As edge detection and image restoration are ill-posed, some a priori knowledge is needed to regularize these problems. A multiresolution network is proposed to tackle the uncertainty problem and the regularization of these ill-posed image processing problems. For edge detection, orientation information is used to construct a compatibility function for the strength of the links of the proposed Hopfield neural network. Edge detection results are presented for a number of synthetic and natural images which show that the iterative network gives robust results at low signal-to-noise ratios (0 dB) and is at least as good as many previous methods at capturing complex region shapes. For restoration, mean square error is used as the quadratic energy function of the Hopfield neural network. The results of the edge detection are used for adaptive restoration. Also shown are the results of restoration using the proposed iterative network framework.
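The quadratic energy minimization at the heart of this approach can be sketched generically for a small Hopfield network. The weights below are random and purely illustrative, not the orientation-based compatibility functions derived in the thesis; the point is only that asynchronous relaxation never increases the quadratic energy:

```python
import numpy as np

def energy(W, b, s):
    """Quadratic Hopfield energy E(s) = -1/2 s^T W s - b^T s."""
    return -0.5 * s @ W @ s - b @ s

def relax(W, b, s, sweeps=10):
    """Asynchronous relaxation: each unit takes the sign of its local
    field, which never increases the energy (W symmetric, zero diagonal)."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s + b[i] >= 0 else -1.0
    return s

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
W = (A + A.T) / 2                 # symmetric weights
np.fill_diagonal(W, 0.0)          # zero self-connections
b = rng.normal(size=6)
s0 = rng.choice([-1.0, 1.0], size=6)
s_final = relax(W, b, s0)         # a local minimum of the energy
```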
11

Zhang, Fu. „Intelligent feature selection for neural regression : techniques and applications“. Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/49639/.

Annotation:
Feature Selection (FS) and regression are two important technique categories in Data Mining (DM). In general, DM refers to the analysis of observational datasets to extract useful information and to summarise the data so that it can be more understandable and be used more efficiently in terms of storage and processing. FS is the technique of selecting a subset of features that are relevant to the development of learning models. Regression is the process of modelling and identifying the possible relationships between groups of features (variables). Compared with conventional techniques, Intelligent System Techniques (ISTs) are usually favourable due to their flexible capabilities for handling real-life problems and their tolerance to data imprecision, uncertainty, partial truth, etc. This thesis introduces a novel hybrid intelligent technique, namely Sensitive Genetic Neural Optimisation (SGNO), which is capable of reducing the dimensionality of a dataset by identifying the most important group of features. The capability of SGNO is evaluated with four practical applications in three research areas, including plant science, civil engineering and economics. SGNO is constructed from three key techniques, known as the core modules: Genetic Algorithm (GA), Neural Network (NN) and Sensitivity Analysis (SA). The GA module controls the progress of the algorithm and employs the NN module as its fitness function. The SA module quantifies the importance of each available variable using the results generated in the GA module. The global sensitivity scores of the variables are used to determine the importance of the variables: variables with higher sensitivity scores are considered more important than those with lower scores. After determining the variables' importance, the performance of SGNO is evaluated using the NN module, which takes various numbers of variables with the highest global sensitivity scores as inputs.
In addition, the symbolic relationship between a group of variables with the highest global sensitivity scores and the model output is discovered using Multiple-Branch Encoded Genetic Programming (MBE-GP). A total of four datasets have been used to evaluate the performance of SGNO. These datasets involve the prediction of short-term greenhouse tomato yield, the prediction of longitudinal dispersion coefficients in natural rivers, the prediction of wave overtopping at coastal structures, and the modelling of the relationship between the growth of industrial inputs and the growth of gross industrial output. SGNO was applied to all these datasets to explore its effectiveness in reducing their dimensionality. The performance of SGNO is benchmarked against four dimensionality reduction techniques: Backward Feature Selection (BFS), Forward Feature Selection (FFS), Principal Component Analysis (PCA) and the Genetic Neural Mathematical Method (GNMM). The applications of SGNO to these datasets showed that it is capable of effectively identifying the most important feature groups in the datasets, and its general performance is better than those benchmark techniques. Furthermore, the symbolic relationships discovered using MBE-GP achieve regression accuracies competitive with those of NN models.
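A simplified, hypothetical stand-in for the kind of sensitivity scoring that underpins an SA module might look like the following; the stand-in model and the perturbation-based scoring rule are invented for illustration and are not SGNO's actual procedure:

```python
import numpy as np

def model(X):
    """Stand-in trained model: depends strongly on feature 0,
    weakly on feature 1, and not at all on feature 2."""
    return 5.0 * X[:, 0] + 0.5 * X[:, 1]

def sensitivity_scores(model, X, eps=0.1):
    """Perturbation-based sensitivity: mean absolute change in the model
    output when each input variable is nudged, others held fixed."""
    base = model(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        scores.append(np.mean(np.abs(model(Xp) - base)))
    return np.array(scores)

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
scores = sensitivity_scores(model, X)
ranking = np.argsort(scores)[::-1]    # most important variable first
```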
12

Czuchry, Andrew J. Jr. „Toward a formalism for the automation of neural network construction and processing control“. Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/9199.

13

Bragansa, John. „On the performance issues of the bidirectional associative memory“. Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17809.

14

Dugan, Kier. „Non-neural computing on the SpiNNaker neuromorphic computer“. Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/400083/.

Annotation:
Moore’s law scaling has slowed dramatically since the turn of the millennium, causing new generations of computer hardware to include more processor cores to offer more performance. Desktop computers, server machines, and even mobile phones are all multi-core devices now, and this trend has shown no signs of slowing soon. Eventually, computers will contain so many cores that they will be an abundant resource. Using this many processors requires new ways of thinking about software. Biology leads computer architecture here: mammalian brains contain billions of neurons embedded in a dense fabric of synapses; the human brain contains about 10^11 neurons and 10^15 synapses. Each neuron is essentially a small processing element in its own right. Neuromorphic hardware draws inspiration from this and is typically used to support neural network simulations. SpiNNaker is one such platform, designed to support simulations containing up to 10^9 neurons and 10^12 synapses (about 1% of a human brain) in biological real-time. This is achieved by embedding a million ARM processors in a bespoke interconnection fabric which is non-deterministic, modelled after spiking neural networks, and predicated on the inherent fault-tolerance present in biological systems. This thesis uses SpiNNaker as a test-bed for massively-parallel non-neural applications, showing how very fine-grain parallel software can be structured to solve real-world problems. First, we address the inherent non-determinism of the underlying platform, by designing a set of algorithms that discover the topology of an arbitrary SpiNNaker-like machine so that fine-grain parallel software can be mapped onto it. These algorithms are verified against various fault conditions, and remove a shortcoming present in the existing system. Secondly, we demonstrate a fine-grain parallel application, by solving two-dimensional heat-diffusion where each point of the problem grid is essentially a self-contained program.
This software architecture is subject to various fault conditions to demonstrate the resilience of the approach.
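The fine-grain heat-diffusion decomposition described above can be sketched with an explicit finite-difference update in which each interior grid point depends only on its four neighbours. This sequential sketch is a stand-in for the per-point processes on SpiNNaker; the grid size, boundary values, and diffusion coefficient are illustrative:

```python
import numpy as np

def heat_step(T, alpha=0.2):
    """One explicit time step of 2-D heat diffusion. Each interior point
    reads only its four neighbours, mirroring the self-contained
    per-point programs described above (alpha <= 0.25 keeps it stable)."""
    Tn = T.copy()
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * (
        T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
        - 4.0 * T[1:-1, 1:-1]
    )
    return Tn

T = np.zeros((16, 16))
T[:, 0] = 100.0                 # hot left boundary, held fixed
for _ in range(200):
    T = heat_step(T)            # heat diffuses rightwards into the grid
```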
15

Billings, Rachel Mae. „On Efficient Computer Vision Applications for Neural Networks“. Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102957.

Annotation:
Since approximately the dawn of the new millennium, neural networks and other machine learning algorithms have become increasingly capable of adeptly performing difficult, dull, and dangerous work conventionally carried out by humans. As these algorithms become steadily more commonplace in everyday consumer and industry applications, it is increasingly important to consider how they may be implemented on constrained hardware systems such as smartphones and Internet-of-Things (IoT) peripheral devices in a time- and power-efficient manner, and to understand the scenarios in which they fail. This work investigates implementations of convolutional neural networks specifically in the context of image inference tasks. Three areas are analyzed: (1) a time- and power-efficient face recognition framework, (2) the development of a COVID-19-related mask classification system suitable for deployment on low-cost, low-power devices, and (3) an investigation into the implementation of spiking neural networks on mobile hardware and their conversion from traditional neural network architectures.
Master of Science
The subject of machine learning and its associated jargon have become ubiquitous in the past decade as industries seek to develop automated tools and applications and researchers continue to develop new methods for artificial intelligence and improve upon existing ones. Neural networks are a type of machine learning algorithm that can make predictions in complex situations based on input data with human-like (or better) accuracy. Real-time, low-power, and low-cost systems using these algorithms are increasingly used in consumer and industry applications, often improving the efficiency of completing mundane and hazardous tasks traditionally performed by humans. The focus of this work is (1) to explore when and why neural networks may make incorrect decisions in the domain of image-based prediction tasks, (2) the demonstration of a low-power, low-cost machine learning use case using a mask recognition system intended to be suitable for deployment in support of COVID-19-related mask regulations, and (3) the investigation of how neural networks may be implemented on resource-limited technology in an efficient manner using an emerging form of computing.
16

Brande, Julia K. Jr. „Computer Network Routing with a Fuzzy Neural Network“. Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/29685.

Annotation:
The growing usage of computer networks is requiring improvements in network technologies and management techniques so users will receive high quality service. As more individuals transmit data through a computer network, the quality of service received by the users begins to degrade. A major aspect of computer networks that is vital to quality of service is data routing. A more effective method for routing data through a computer network can assist with the new problems being encountered with today's growing networks. Effective routing algorithms use various techniques to determine the most appropriate route for transmitting data. Determining the best route through a wide area network (WAN), requires the routing algorithm to obtain information concerning all of the nodes, links, and devices present on the network. The most relevant routing information involves various measures that are often obtained in an imprecise or inaccurate manner, thus suggesting that fuzzy reasoning is a natural method to employ in an improved routing scheme. The neural network is deemed as a suitable accompaniment because it maintains the ability to learn in dynamic situations. Once the neural network is initially designed, any alterations in the computer routing environment can easily be learned by this adaptive artificial intelligence method. The capability to learn and adapt is essential in today's rapidly growing and changing computer networks. These techniques, fuzzy reasoning and neural networks, when combined together provide a very effective routing algorithm for computer networks. Computer simulation is employed to prove the new fuzzy routing algorithm outperforms the Shortest Path First (SPF) algorithm in most computer network situations. The benefits increase as the computer network migrates from a stable network to a more variable one. The advantages of applying this fuzzy routing algorithm are apparent when considering the dynamic nature of modern computer networks.
Ph. D.
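For context, the Shortest Path First baseline against which the fuzzy routing algorithm is compared is classically implemented with Dijkstra's algorithm over link costs. The toy topology below is invented for illustration:

```python
import heapq

def shortest_path_first(graph, src):
    """Dijkstra's algorithm, the classic realisation of Shortest Path
    First routing. graph: {node: [(neighbour, link_cost), ...]}.
    Returns the minimum cost from src to every reachable node."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, c in graph[u]:
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical four-router network with static link costs
net = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
dist = shortest_path_first(net, "A")
```

The fuzzy-neural approach replaces these static costs with imprecise, learned measures, which is where the thesis argues the benefit lies in dynamic networks.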
17

Åström, Fredrik. „Neural Network on Compute Shader : Running and Training a Neural Network using GPGPU“. Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2036.

Annotation:
In this thesis I look into how one can train and run an artificial neural network using Compute Shader and what kind of performance can be expected. An artificial neural network is a computational model that is inspired by biological neural networks, e.g. a brain. Finding what kind of performance can be expected was done by creating an implementation that uses Compute Shader and then comparing it to the FANN library, a fast artificial neural network library written in C. The conclusion is that you can improve performance by training an artificial neural network on the compute shader as long as you are using non-trivial datasets and neural network configurations.
18

Landassuri, Moreno Victor Manuel. „Evolution of modular neural networks“. Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3243/.

Annotation:
It is well known that the human brain is highly modular, having a structural and functional organization that allows the different regions of the brain to be reused for different cognitive processes. So far, this has not been fully addressed by artificial systems, and a better understanding of when and how modules emerge is required, with a broad framework indicating how modules could be reused within neural networks. This thesis provides a deep investigation of module formation, module communication (interaction) and module reuse during evolution for a variety of classification and prediction tasks. The evolutionary algorithm EPNet is used to deliver the evolution of artificial neural networks. In the first stage of this study, the EPNet algorithm is carefully studied to understand its basis and to ensure confidence in its behaviour. Thereafter, its input feature selection (required for module evolution) is optimized, showing the robustness of the improved algorithm compared with the fixed input case and previous publications. Then module emergence, communication and reuse are investigated with the modular EPNet (M-EPNet) algorithm, which uses the information provided by a modularity measure to implement new mutation operators that favour the evolution of modules, allowing a new perspective for analyzing modularity, module formation and module reuse during evolution. The results obtained extend those of previous work, indicating that pure-modular architectures may emerge at low connectivity values, where similar tasks may share (reuse) common neural elements creating compact representations, and that the more different two tasks are, the bigger the modularity obtained during evolution. Other results indicate that some neural structures may be reused when similar tasks are evolved, leading to module interaction during evolution.
19

Xu, Shuxiang. „Neuron-adaptive neural network models and applications /“. [Campbelltown, N.S.W. : The Author], 1999. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030702.085320/index.html.

20

De, Jongh Albert. „Neural network ensembles“. Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50035.

Annotation:
Thesis (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis explores diversity in ensembles and looks at the underlying theory and mechanisms employed to generate and combine ensemble members. Bagging and boosting are studied in detail and I explain their success in terms of well-known theoretical instruments. An empirical evaluation of their performance is conducted and I compare them to a single classifier and to each other in terms of accuracy and diversity.
AFRIKAANSE OPSOMMING: Dit is moontlik om op die akkuraatheid van 'n enkele neurale netwerk te verbeter deur 'n ensemble van diverse en akkurate netwerke te gebruik. Hierdie tesis ondersoek diversiteit in ensembles, asook die meganismes waardeur lede van 'n ensemble geskep en gekombineer kan word. Die algoritmes "bagging" en "boosting" word in diepte bestudeer en hulle sukses word aan die hand van bekende teoretiese instrumente verduidelik. Die prestasie van hierdie twee algoritmes word eksperimenteel gemeet en hulle akkuraatheid en diversiteit word met 'n enkele netwerk vergelyk.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Weinstein, Randall Kenneth. „Techniques for FPGA neural modeling“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/26685.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D)--Bioengineering, Georgia Institute of Technology, 2007.
Committee Chair: Lee, Robert; Committee Member: Butera, Robert; Committee Member: DeWeerth, Steve; Committee Member: Madisetti, Vijay; Committee Member: Voit, Eberhard. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Bolt, George Ravuama. „Fault tolerance in artificial neural networks : are neural networks inherently fault tolerant?“ Thesis, University of York, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317683.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Newman, Rhys A. „Automatic learning in computer vision“. Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390526.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Cheng, Chih Kang. „Hardware implementation of the complex Hopfield neural network“. CSUSB ScholarWorks, 1995. https://scholarworks.lib.csusb.edu/etd-project/1016.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Sloan, Cooper Stokes. „Neural bus networks“. Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119711.

Der volle Inhalt der Quelle
Annotation:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-68).
Bus schedules are unreliable, leaving passengers waiting and increasing commute times. This problem can be solved by modeling the traffic network and delivering predicted arrival times to passengers. Previous attempts to model traffic networks have used historical, statistical and learning-based models, with learning-based models achieving the best results. This research compares several neural network architectures trained on historical data from Boston buses. Three models are trained: a multilayer perceptron, a convolutional neural network and a recurrent neural network. Recurrent neural networks show the best performance when compared to feed-forward models. This indicates that neural time series models are effective at modeling bus networks. The large amount of data available for training bus network models and the effectiveness of large neural networks at modeling this data show that great progress can be made in improving commutes for passengers.
by Cooper Stokes Sloan.
M. Eng.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Pardoe, Andrew Charles. „Neural network image reconstruction for nondestructive testing“. Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/44616/.

Der volle Inhalt der Quelle
Annotation:
Conventional image reconstruction of advanced composite materials using ultrasound tomography is computationally expensive, slow and unreliable. A neural network system is proposed which would permit the inspection of large composite structures, increasingly important for the aerospace industry. It uses a tomographic arrangement, whereby a number of ultrasonic transducers are positioned along the edges of a square, referred to as the sensor array. Two configurations of the sensor array are utilized. The first contains 16 transducers, 4 of which act as receivers of ultrasound, and the second contains 40 transducers, 8 of which act as receivers. The sensor array has required the development of instrumentation to generate and receive ultrasonic signals, multiplex the transmitting transducers and store the numerous waveforms generated for each tomographic scan. The first implementation of the instrumentation required manual operation; to increase the amount of data available, however, the second implementation was automated.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Filho, Edson Costa de Barros Carvalho. „Investigation of Boolean neural networks on a novel goal-seeking neuron“. Thesis, University of Kent, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277285.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Landry, Kenneth D. „Evolutionary neural networks“. Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/51904.

Der volle Inhalt der Quelle
Annotation:
To create neural networks that work, one needs to specify a structure and the interconnection weights between each pair of connected computing elements. The structure of a network can be selected by the designer depending on the application, although the selection of interconnection weights is a much larger problem. Algorithms have been developed to alter the weights slightly in order to produce the desired results. Learning algorithms such as Hebb's rule, the Delta rule and error propagation have been used, with success, to learn the appropriate weights. The major objection to this class of algorithms is that one cannot specify what is not desired in the network in addition to what is desired. An alternate method of learning the correct interconnection weights is to evolve a network in an environment that rewards "good" behavior and punishes "bad" behavior. This technique allows interesting networks to appear which otherwise may not be discovered by other methods of learning. In order to teach a network the correct weights, this approach simply needs a direction in which an acceptable solution can be obtained rather than a complete answer to the problem.
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Bailey, Scott P. „Neural network design on the SRC-6 reconfigurable computer“. Thesis, Monterey, Calif. : Naval Postgraduate School, 2006. http://bosun.nps.edu/uhtbin/hyperion.exe/06Dec%5FBailey.pdf.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 2006.
Thesis Advisor(s): Douglas J. Fouts. "December 2006." Includes bibliographical references (p. 105-106). Also available in print.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Wan, Chuen L. „Traffic representation by artificial neural system and computer vision“. Thesis, Edinburgh Napier University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261024.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Turega, Michael A. „A parallel computer architecture to support artificial neural networks“. Thesis, University of Manchester, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316469.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

SILVA, Adenilton José da. „Artificial neural network architecture selection in a quantum computer“. UNIVERSIDADE FEDERAL DE PERNAMBUCO, 2015. https://repositorio.ufpe.br/handle/123456789/15011.

Der volle Inhalt der Quelle
Annotation:
Made available in DSpace on 2016-01-27. Previous issue date: 2015-06-26.
CNPq
Miniaturisation of computer components is taking us from the classical to the quantum physics domain. Further reduction in component size will eventually lead to the development of computer systems whose components are on such a small scale that the intrinsic properties of quantum physics must be taken into account. The expression quantum computation and a first formal model of a quantum computer appeared in the eighties. With the publication in 1997 of a quantum algorithm for factoring exponentially faster than any known classical algorithm, quantum computing began to attract industry investment in the development of a quantum computer and the design of novel quantum algorithms, for instance learning algorithms for neural networks. Some artificial neural network models can simulate a universal Turing machine and, together with their learning capabilities, have numerous applications to real-life problems. One limitation of artificial neural networks is the lack of an efficient algorithm to determine the optimal architecture. The main objective of this work is to verify whether we can obtain some advantage from the use of quantum computation techniques in a neural network learning and architecture selection procedure. We propose a quantum neural network, named quantum perceptron over a field (QPF). QPF is a direct generalisation of the classical perceptron which addresses some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimises the neural network weights and architecture. SAL searches for the best architecture in a finite set of neural network architectures and neural network parameters in time linear in the number of examples in the training set. SAL is the first quantum learning algorithm to determine neural network architectures in linear time.
This speedup is obtained by the use of quantum parallelism and a non linear quantum operator.
A miniaturização dos componentes dos computadores está nos levando dos domínios da física clássica aos domínios da física quântica. Futuras reduções nos componentes dos computadores eventualmente levará ao desenvolvimento de computadores cujos componentes estarão em uma escala em que efeitos intrínsecos da física quântica deverão ser considerados. O termo computação quântica e um primeiro modelo formal de computação quântica foram definidos na década de 80. Com a descoberta no ano de 1997 de um algoritmo quântico para fatoração exponencialmente mais rápido do que qualquer algoritmo clássico conhecido a computação quântica passou a atrair investimentos de diversas empresas para a construção de um computador quântico e para o desenvolvimento de algoritmos quânticos. Por exemplo, o desenvolvimento de algoritmos de aprendizado para redes neurais. Alguns modelos de Redes Neurais Artificiais podem ser utilizados para simular uma máquina de Turing universal. Devido a sua capacidade de aprendizado, existem aplicações de redes neurais artificiais nas mais diversas áreas do conhecimento. Uma das limitações das redes neurais artificiais é a inexistência de um algoritmo com custo polinomial para determinar a melhor arquitetura de uma rede neural. Este trabalho tem como objetivo principal verificar se é possível obter alguma vantagem no uso da computação quântica no processo de seleção de arquiteturas de uma rede neural. Um modelo de rede neural quântica denominado perceptron quântico sobre um corpo foi proposto. O perceptron quântico sobre um corpo é uma generalização direta de um perceptron clássico que resolve algumas das limitações em modelos de redes neurais quânticas previamente propostos. Um algoritmo de aprendizado denominado algoritmo de aprendizado de arquitetura baseado no princípio da superposição que otimiza pesos e arquitetura de uma rede neural simultaneamente é apresentado. 
O algoritmo proposto possui custo linear e determina a melhor arquitetura em um conjunto finito de arquiteturas e os parâmetros da rede neural. O algoritmo de aprendizado proposto é o primeiro algoritmo quântico para determinar a arquitetura de uma rede neural com custo linear. O custo linear é obtido pelo uso do paralelismo quântico e de um operador quântico não linear.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Llaquet, Bayo Antai. „Computer aided renal calculi detection using Convolutional Neural Networks“. Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-52254.

Der volle Inhalt der Quelle
Annotation:
In this thesis a novel approach is developed to detect urethral stones based on a computer-aided process. The input data is a CT scan from the patient, which is a high-resolution 3D grayscale image. The algorithm developed extracts the regions that might be stones, based on the intensity values of the pixels in the CT scan. This process includes binarizing the image, finding the connected components of the resulting binary image and calculating the centroid of each of the components selected. The regions suspected to be stones are used as input to a CNN, a specialised type of ANN, so they can be classified as stone or non-stone. The parameters of the CNN have been chosen based on an exhaustive hyperparameter search over different configurations to select the one that gives the best performance. The results have been satisfactory, obtaining an accuracy of 98.3%, a sensitivity of 99.5% and an F1 score of 98.3%.
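The candidate-extraction pipeline described above (binarise, label connected components, compute centroids) can be sketched as follows on a single 2D slice. The threshold value, minimum component size and toy input are illustrative assumptions, and the simple flood-fill labelling stands in for whatever implementation the thesis used; each returned centroid would seed a patch passed to the CNN classifier.

```python
import numpy as np

def candidate_regions(ct, threshold=300, min_size=3):
    """Binarise a CT slice at an intensity threshold, label connected
    components by flood fill, and return the centroid of each component
    large enough to be a plausible stone candidate."""
    binary = ct >= threshold
    labels = np.zeros(ct.shape, dtype=int)
    current = 0
    centroids = []
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue  # pixel already belongs to a labelled component
        current += 1
        labels[seed] = current
        stack, pixels = [seed], []
        while stack:
            r, c = stack.pop()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < ct.shape[0] and 0 <= nc < ct.shape[1]
                        and binary[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
        if len(pixels) >= min_size:
            rs, cs = zip(*pixels)
            centroids.append((float(np.mean(rs)), float(np.mean(cs))))
    return centroids

# toy 8x8 slice containing one bright 3x3 blob
ct = np.zeros((8, 8))
ct[2:5, 2:5] = 500
print(candidate_regions(ct))  # → [(3.0, 3.0)]
```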
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Bergsten, John, und Konrad Öhman. „Player Analysis in Computer Games Using Artificial Neural Networks“. Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14812.

Der volle Inhalt der Quelle
Annotation:
Star Vault AB is a video game development company that has developed the video game Mortal Online. The company has stated that they believe that players new to the game repeatedly find themselves being lost in the game. The objective of this study is to evaluate whether or not an Artificial Neural Network can be used to evaluate when a player is lost in the game Mortal Online. This is done using the free open source library Fast Artificial Neural Network Library. People are invited to a data collection event where they play a tweaked version of the game to facilitate data collection. Players specify whether they are lost or not and the data collected is flagged accordingly. The collected data is then prepared with different parameters to be used when training multiple Artificial Neural Networks. When creating an Artificial Neural Network there exist several parameters which have an impact on its performance. Performance is defined as the balance of high prediction accuracy against low false positive rate. These parameters vary depending on the purpose of the Artificial Neural Network. A quantitative approach is followed where these parameters are varied to investigate which values result in the Artificial Neural Network which best identifies when a player is lost. The parameters are grouped into stages where all combinations of parameter values within each stage are evaluated to reduce the number of Artificial Neural Networks which have to be trained, with the best performing parameters of each stage being used in subsequent stages. The result is a set of values for the parameters that are considered as ideal as possible. These parameter values are then altered one at a time to verify that they are ideal. The results show that a set of parameters exists which can optimize the Artificial Neural Network model to identify when a player is lost, however not with the high performance that was hoped for.
It is theorized that the ambiguity of the word "lost" and the complexity of the game are critical to the low performance.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Wang, Ting. „Statistical feature ordering for neural-based incremental attribute learning“. Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/13633/.

Der volle Inhalt der Quelle
Annotation:
In pattern recognition, better classification or regression results usually depend on highly discriminative features (also known as attributes) of datasets. Machine learning plays a significant role in the performance improvement for classification and regression. Different from the conventional machine learning approaches which train all features in one batch by some predictive algorithms like neural networks and genetic algorithms, Incremental Attribute Learning (IAL) is a novel supervised machine learning approach which gradually trains one or more features step by step. Such a strategy enables features with greater discrimination abilities to be trained in an earlier step, and avoids interference among relevant features. Previous studies have confirmed that IAL is able to generate accurate results with lower error rates. If features with different discrimination abilities are sorted in different training orders, the final results may be strongly influenced. Therefore, the way to sequentially sort features with some orderings and simultaneously reduce the pattern recognition error rates based on IAL inevitably becomes an important issue in this study. Compared with the applicable yet time-consuming contribution-based feature ordering methods which were derived in previous studies, more efficient feature ordering approaches for IAL are presented to tackle classification problems in this study. In the first approach, feature orderings are calculated by statistical correlations between input and output. The second approach is based on mutual information, which employs the minimal-redundancy-maximal-relevance criterion (mRMR), a well-known feature selection method, for feature ordering. The third method is improved by Fisher's Linear Discriminant (FLD). Firstly, Single Discriminability (SD) of features is presented based on FLD, which can cope with both univariate and multivariate output classification problems.
Secondly, a new feature ordering metric called Accumulative Discriminability (AD) is developed based on SD. This metric is designed for IAL classification with dynamic feature dimensions. It computes the multidimensional feature discrimination ability in each step for all imported features, including those imported in previous steps during the IAL training. AD can be treated as a metric for accumulative effect, while SD only measures the one-dimensional feature discrimination ability in each step. Experimental results show that all three approaches can exhibit better performance than the conventional one-batch training method. Furthermore, the results of AD are the best of the three, because AD is better suited to the properties of IAL, where the feature dimension grows during training. Moreover, studies on the combined use of feature ordering and feature selection in IAL are also presented in this thesis. As a pre-process of machine learning for pattern recognition, feature orderings are sometimes inevitably employed together with feature selection. Experimental results show that these integrated approaches can sometimes obtain better performance than non-integrated approaches, but not always. Additionally, feature ordering approaches for solving regression problems are also demonstrated in this study. Experimental results show that a proper feature ordering is also one of the key elements to enhance the accuracy of the results obtained.
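The first approach above, ordering features by the statistical correlation between input and output, can be sketched as follows. The function name and the synthetic data are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def correlation_ordering(X, y):
    """Rank feature indices by the absolute Pearson correlation between
    each input column and the output, strongest first, so that the most
    discriminative features are imported earliest in IAL training."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(range(X.shape[1]), key=lambda j: scores[j], reverse=True)

rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = np.column_stack([
    rng.normal(size=200),            # feature 0: pure noise
    y + 0.1 * rng.normal(size=200),  # feature 1: strongly related to y
])
print(correlation_ordering(X, y))  # → [1, 0]
```

The strongly correlated column is ranked ahead of the noise column, which is exactly the ordering an IAL trainer would consume, one feature per step.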
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Zaghloul, Waleed A. Lee Sang M. „Text mining using neural networks“. Lincoln, Neb. : University of Nebraska-Lincoln, 2005. http://0-www.unl.edu.library.unl.edu/libr/Dissertations/2005/Zaghloul.pdf.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2005.
Title from title screen (sites viewed on Oct. 18, 2005). PDF text: 100 p. : col. ill. Includes bibliographical references (p. 95-100 of dissertation).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Hadjifaradji, Saeed. „Learning algorithms for restricted neural networks“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/NQ48102.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Demiray, Sadettin Tuğlular Tuğkan. „Improving misuse detection with neural networks/“. [s.l.]: [s.n.], 2005. http://library.iyte.edu.tr/tezler/master/bilgisayaryazilimi/T000408.pdf.

Der volle Inhalt der Quelle
Annotation:
Thesis (Master)--İzmir Institute of Technology, İzmir, 2005
Keywords: Neural network, back propagation networks, multilayer perceptrons, computer networks security, Intrusion detection system. Includes bibliographical references (leaves 68-69)
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Goodman, Stephen D. „Temporal pattern identification in a self-organizing neural network with an application to data compression“. Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/15794.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Vidmark, Stefan. „Röstigenkänning med Movidius Neural Compute Stick“. Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-151032.

Der volle Inhalt der Quelle
Annotation:
Företaget Omicron Ceti AB köpte en Intel Movidius Neural Compute Stick (NCS), som är en usb-enhet där neurala nätverk kan laddas in för att processa data. Min uppgift blev att studera hur NCS används och göra en guide med exempel. Med TensorFlow och hjälpbiblioteket TFLearn gjordes först ett testnätverk för att prova hela kedjan från träning till användning med NCS. Sedan tränades ett nätverk att kunna klassificera 14 olika ord. En mängd olika utformningar på nätverket testades, men till slut hittades ett exempel som blev en bra utgångspunkt och som efter lite justering gav en träffsäkerhet på 86% med testdatat. Vid inläsning i mikrofon så blev resultatet lite sämre, med 67% träffsäkerhet. Att processa data med NCS tog längre tid än med TFLearn men använde betydligt mindre CPU-kraft. I mindre system såsom en Raspberry Pi går det däremot inte ens att använda TensorFlow/TFLearn, så huruvida det är värt att använda NCS eller inte beror på det specifika användningsscenariot.
The company Omicron Ceti AB had an Intel Movidius Neural Compute Stick (NCS), which is a USB device that may be loaded with neural networks to process data. My assignment was to study how the NCS is used and to make a guide with examples. Using TensorFlow and the TFLearn helper library, a test network was first made for the purpose of trying the full pipeline, from network training to using the NCS. After that a network was trained to classify 14 different words. Many different configurations of the network were tried, until a good example was found that, after some adjustment, reached an accuracy of 86% on the test data. The accuracy when speaking into a microphone was a bit worse at 67%. Processing data with the NCS took longer than with TFLearn but used a lot less CPU power. However, it is not even possible to use TensorFlow/TFLearn on smaller systems like a Raspberry Pi, so whether it is worth using the NCS depends on the specific usage scenario.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Behnke, Sven. „Hierarchical neural networks for image interpretation /“. Berlin [u.a.] : Springer, 2003. http://www.loc.gov/catdir/enhancements/fy0813/2003059597-d.html.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Wenzel, Brent C. „Using neural nets to generate and improve computer graphic procedures“. Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834648.

Der volle Inhalt der Quelle
Annotation:
Image compression using neural networks in the past has focused on just reducing the number of bytes that had to be stored, even though the bytes had no meaning. This study looks at a new process that reduces the number of bytes stored but also maintains meaning behind the bytes. The bytes of the compressed image will correspond to parameters of an existing graphic algorithm. After a brief review of common neural networks and graphic algorithms, the back propagation neural network was chosen to be tested for this new process. Both three-layer and four-layer networks were tested. The four-layer network was used in further tests because of its improved response compared to the three-layer network. Two different training sets were used, a normal training set which was small and an extended version which included extreme value sets. These two training sets were shown to the neural network in two forms. The first was the raw format with no preprocessing. The second form used a Fast Fourier Transform to preprocess the data in an effort to distribute the image data throughout the image plane. The neural network's response was good on images that it was trained on but poor on new images that were not used in the training sets.
Department of Computer Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Wenström, Sean, und Erik Ihrén. „Stock Trading with Neural Networks“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168095.

Der volle Inhalt der Quelle
Annotation:
Stock trading is increasingly done pseudo-automatically or fully automatically, using algorithms which make day-to-day or even moment-to-moment decisions. This report investigates the possibility of creating a virtual stock trader, using a method from Artificial Intelligence called Neural Networks, to make intelligent decisions on when to buy and sell stocks on the stock market. We found that it might be possible to earn money over a longer period of time, although the profit is less than the average stock index. However, the method also performed well in situations where the stock index is going down.
Aktiehandel genomförs till allt större grad automatiskt eller halvautomatiskt, med algoritmer som fattar beslut på daglig basis eller över ännu kortare tidsintervall. Denna rapport undersöker möjligheten att göra en virtuell aktiehandlare med hjälp av en metod inom artificiell intelligens kallad neurala nätverk, och fatta intelligenta beslut om när aktier på aktiemarknaden ska köpas eller säljas. Vi fann att det är möjligt att tjäna pengar över en längre tidsperiod, men vinsten vår algoritm gör över den behandlade tidsperioden är mindre än börsindex ökning. Däremot visar vår algoritm positiva resultat även under sjunkande börsindex.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Cheung, Ka Kit. „Neural networks for optimization“. HKBU Institutional Repository, 2001. http://repository.hkbu.edu.hk/etd_ra/291.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Wojcik, Jeremy J. „Neural Cartography: Computer Assisted Poincare Return Mappings for Biological Oscillations“. Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/math_diss/10.

Der volle Inhalt der Quelle
Annotation:
This dissertation creates practical methods for Poincaré return mappings of individual and networked neuron models. Elliptic bursting models are found in numerous biological systems, including the external Globus Pallidus (GPe) section of the brain; the focus for studies of epileptic seizures and Parkinson's disease. However, the bifurcation structure for changes in dynamics remains incomplete. This dissertation develops computer-assisted Poincaré maps for mathematical and biologically relevant elliptic bursting neuron models and central pattern generators (CPGs). The first method, used for individual neurons, offers the advantage of an entire family of computationally smooth and complete mappings, which can explain all of the system's dynamical transitions. A complete bifurcation analysis was performed detailing the mechanisms for the transitions from tonic spiking to quiescence in elliptic bursters. A previously unknown, unstable torus bifurcation was found to give rise to small amplitude oscillations. The focus of the dissertation shifts from individual neuron models to small networks of neuron models, particularly 3-cell CPGs. A CPG is a small network which is able to produce specific phasic relationships between the cells. The output rhythms represent a number of biologically observable actions, i.e. walking or running gaits. A 2-dimensional map is derived from the CPG's phase-lags. The cells are endogenously bursting neuron models mutually coupled with reciprocal inhibitory connections using the fast threshold synaptic paradigm. The mappings generate clear explanations for rhythmic outcomes, as well as basins of attraction for specific rhythms and possible mechanisms for switching between rhythms.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Thangthai, Kwanchiva. „Computer lipreading via hybrid deep neural network hidden Markov models“. Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/69215/.

Der volle Inhalt der Quelle
Annotation:
Constructing a viable lipreading system is a challenge because it is claimed that only 30% of the information of speech production is visible on the lips. Nevertheless, in small vocabulary tasks, there have been several reports of high accuracies. However, investigation of larger vocabulary tasks is rare. This work examines constructing a large vocabulary lipreading system using an approach based on Deep Neural Network Hidden Markov Models (DNN-HMMs). We present the historical development of computer lipreading technology and the state-of-the-art results in small and large vocabulary tasks. In preliminary experiments, we evaluate the performance of lipreading and audiovisual speech recognition on small vocabulary data sets. We then concentrate on the improvement of lipreading systems at a more substantial vocabulary size with a multi-speaker data set. We tackle the problem of lipreading an unseen speaker. We investigate the effect of employing several steps to pre-process visual features. Moreover, we examine the contribution of language modelling in a lipreading system, where we use longer n-grams to recognise visual speech. Our lipreading system is constructed on the 6000-word vocabulary TCD-TIMIT audiovisual speech corpus. The results show that visual-only speech recognition can definitely reach about 60% word accuracy on large vocabularies. We actually achieved a mean of 59.42% measured via three-fold cross-validation on the speaker-independent setting of the TCD-TIMIT corpus using deep autoencoder features and DNN-HMM models. This is the best word accuracy of a lipreading system in a large vocabulary task reported on the TCD-TIMIT corpus. In the final part of the thesis, we examine how the DNN-HMM model improves lipreading performance. We also give an insight into lipreading by providing a feature visualisation. Finally, we present an analysis of lipreading results and suggestions for future development.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Xie, Weidi. „Deep neural networks in computer vision and biomedical image analysis“. Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:5fcfb784-7b61-49cd-9561-64b5ffa5807a.

The full text of the source
Annotation:
This thesis proposes different models for a variety of applications, such as semantic segmentation, in-the-wild face recognition, microscopy cell counting and detection, standardised re-orientation of 3D ultrasound fetal brain scans, and Magnetic Resonance (MR) cardiac video segmentation. Our approach is to employ large-scale machine learning models, in particular deep neural networks. Expert knowledge is either mathematically modelled as a differentiable hidden layer in the artificial neural networks, or the complex tasks are broken into several small, easy-to-solve tasks. Multi-scale contextual information plays an important role in pixel-wise prediction, e.g. semantic segmentation. To capture spatial contextual information, we present a new block for learning the receptive field adaptively by within-layer recurrence. When interleaved with the convolutional layers, receptive fields are effectively enlarged, reaching across the entire feature map or image. The new block can be initialised as the identity and inserted into any pre-trained network, therefore benefiting from the "pre-train and fine-tune" paradigm. Current face recognition systems are mostly driven by the success of image classification, where the models are trained by identity classification. We propose multi-column deep comparator networks for face recognition. The architecture takes two sets of images or frames (each containing an arbitrary number of faces) as inputs; facial part-based (e.g. eyes, noses) representations of each set are pooled out, dynamically calibrated based on the quality of the input images, and further compared with local "experts" in a pairwise way. Unlike in computer vision applications, collecting data and annotations is usually more expensive in biomedical image analysis. Therefore, models that can be trained with less data and weaker annotations are of great importance.
We approach microscopy cell counting and detection via density estimation, where only central dot annotations are needed. The proposed fully convolutional regression networks are first trained on a synthetic dataset of cell nuclei, then fine-tuned and shown to generalise to real data. In 3D fetal ultrasound neurosonography, establishing a coordinate system over the fetal brain serves as a precursor for subsequent tasks, e.g. localisation of anatomical landmarks, extraction of standard clinical planes for biometric assessment of fetal growth, etc. To align brain volumes into a common reference coordinate system, we decompose the complex transformation into several simple ones, which can easily be tackled with convolutional neural networks. The model is therefore designed to leverage the closely related tasks by sharing low-level features, and the task-specific predictions are then combined to reproduce the transformation matrix as the desired output. Finally, we address the problem of MR cardiac video analysis, in which we are interested in assisting clinical diagnosis based on fine-grained segmentation. To facilitate segmentation, we present one end-to-end trainable model that achieves multi-view structure detection, alignment (standardised re-orientation), and fine-grained segmentation simultaneously. This is motivated by the fact that CNNs are not in essence rotation equivariant or invariant; therefore, adding the pre-alignment into the end-to-end trainable pipeline can effectively decrease the complexity of segmentation for later stages of the model.
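The density-estimation idea in this abstract can be sketched briefly: the ground-truth density map is a sum of unit-mass Gaussian kernels centred on the dot annotations, so the cell count is recovered by integrating (summing) the map. A minimal synthetic illustration (kernel size, sigma and image size are arbitrary choices, not values from the thesis):

```python
import numpy as np

def gaussian_kernel(size=15, sigma=2.0):
    """2-D Gaussian kernel normalised so it integrates to exactly 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def density_map(shape, dots, size=15, sigma=2.0):
    """Sum one unit-mass kernel per dot annotation (dots assumed to lie
    far enough from the image border for the kernel to fit)."""
    dm = np.zeros(shape)
    k, r = gaussian_kernel(size, sigma), size // 2
    for y, x in dots:
        dm[y - r:y + r + 1, x - r:x + r + 1] += k
    return dm

dots = [(20, 20), (40, 60), (70, 30)]   # three annotated cell centres
dm = density_map((100, 100), dots)
count = dm.sum()                        # integral of the map ~ 3 cells
```

A regression network trained to predict such maps yields counts by the same summation, which is why only dot annotations, not full outlines, are required.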
APA, Harvard, Vancouver, ISO and other citation styles
48

Ahamed, Woakil Uddin. „Quantum recurrent neural networks for filtering“. Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

The full text of the source
Annotation:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation of the SWE into the field of neural networks provides a framework known as the quantum recurrent neural network (QRNN). A filter based on this approach is categorised as an intelligent filter, as the underlying formulation is based on the analogy to real neurons. In a QRNN filter, the interaction between the observed signal and the wave dynamics is governed by the SWE. A key issue, therefore, is achieving a solution of the SWE that ensures the stability of the numerical scheme. Another important aspect in designing this filter is the way the wave function transforms the observed signal through the network. This research has shown that there are two different ways (a normal wave and a calm wave, Chapter 5) this transformation can be achieved, and these wave packets play a critical role in the evolution of the pdf. In this context, this thesis has investigated the following issues: the existing filtering approach in the evolution of the pdf, the architecture of the QRNN, the method of solving the SWE, the numerical stability of the solution, and the propagation of the waves in the well. The methods developed in this thesis have been tested with relevant simulations. The filter has also been tested with some benchmark chaotic series, along with applications to real-world situations. Suggestions are made for the scope of further developments.
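The link between the wave function and the pdf that this abstract relies on is the standard quantum-mechanical one: p(x) = |ψ(x)|² after normalisation, so estimates such as the filtered mean are expectations under |ψ|². A minimal numerical sketch with a Gaussian wave packet on a 1-D grid (the packet parameters are illustrative, not taken from the thesis):

```python
import numpy as np

# Discretised 1-D domain ("the well" in the abstract's terminology).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# A Gaussian wave packet centred at x = 1 with a small phase velocity.
psi = np.exp(-(x - 1.0) ** 2 / 4.0) * np.exp(1j * 0.5 * x)

# Born rule: the pdf is |psi|^2, normalised to integrate to 1.
pdf = np.abs(psi) ** 2
pdf /= pdf.sum() * dx

# Point estimate of the filtered signal: the mean under the pdf.
mean = np.sum(x * pdf) * dx
```

Propagating ψ in time via the SWE, as the QRNN does, then amounts to evolving this pdf as new measurements arrive.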
APA, Harvard, Vancouver, ISO and other citation styles
49

Williams, Bryn V. „Evolutionary neural networks : models and applications“. Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.

The full text of the source
Annotation:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems, the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial-life case study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole-balancing and car-parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically, and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX), is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
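The abstract states that LMX generalises existing crossover operators; LMX itself is specific to the thesis, but the textbook one-point crossover it generalises can be sketched for reference (this is the standard operator, not the thesis's LMX):

```python
import random

def one_point_crossover(a, b, rng=random):
    """Standard one-point crossover on two equal-length genomes:
    swap the tails of the parents at a randomly chosen cut point."""
    assert len(a) == len(b)
    cut = rng.randrange(1, len(a))   # cut strictly inside the genome
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

rng = random.Random(0)               # seeded for reproducibility
child1, child2 = one_point_crossover([0] * 8, [1] * 8, rng)
```

Operators like LMX extend this by choosing which genes travel together according to a linkage map, rather than by raw genome position.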
APA, Harvard, Vancouver, ISO and other citation styles
50

Garret, Aaron Dozier Gerry V. „Neural enhancement for multiobjective optimization“. Auburn, Ala., 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Computer_Science_and_Software_Engineering/Dissertation/Garrett_Aaron_55.pdf.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
