To view other types of publications on this topic, follow the link: Neural networks (Computer science).

Dissertations on the topic "Neural networks (Computer science)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 dissertations for your research on the topic "Neural networks (Computer science)".

Next to each work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract online, when these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Landassuri-Moreno, Victor Manuel. "Evolution of modular neural networks." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3243/.

Full text of source
Abstract:
It is well known that the human brain is highly modular, having a structural and functional organization that allows the different regions of the brain to be reused for different cognitive processes. So far, this has not been fully addressed by artificial systems, and a better understanding of when and how modules emerge is required, with a broad framework indicating how modules could be reused within neural networks. This thesis provides a deep investigation of module formation, module communication (interaction) and module reuse during evolution for a variety of classification and prediction tasks. The evolutionary algorithm EPNet is used to deliver the evolution of artificial neural networks. In the first stage of this study, the EPNet algorithm is carefully studied to understand its basis and to ensure confidence in its behaviour. Thereafter, its input feature selection (required for module evolution) is optimized, showing the robustness of the improved algorithm compared with the fixed input case and previous publications. Then module emergence, communication and reuse are investigated with the modular EPNet (M-EPNet) algorithm, which uses the information provided by a modularity measure to implement new mutation operators that favour the evolution of modules, allowing a new perspective for analyzing modularity, module formation and module reuse during evolution. The results obtained extend those of previous work, indicating that pure-modular architectures may emerge at low connectivity values, where similar tasks may share (reuse) common neural elements creating compact representations, and that the more different two tasks are, the bigger the modularity obtained during evolution. Other results indicate that some neural structures may be reused when similar tasks are evolved, leading to module interaction during evolution.
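The modularity measure mentioned above is central to M-EPNet. As a rough, hypothetical illustration of the idea (not the measure used in the thesis), the following sketch scores how strongly a network's connections stay within two task-specific modules:

```python
import numpy as np

def modularity_score(adj, module_a, module_b):
    """Fraction of connections that stay inside a module rather than
    crossing between the two task modules (1.0 means purely modular)."""
    intra = adj[np.ix_(module_a, module_a)].sum() \
          + adj[np.ix_(module_b, module_b)].sum()
    inter = adj[np.ix_(module_a, module_b)].sum() \
          + adj[np.ix_(module_b, module_a)].sum()
    total = intra + inter
    return intra / total if total else 1.0

# A sparse random network: the abstract reports that pure-modular
# architectures tend to emerge at low connectivity values like this.
rng = np.random.default_rng(0)
adj = rng.random((10, 10)) < 0.1
print(modularity_score(adj, list(range(5)), list(range(5, 10))))
```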
2

Sloan, Cooper Stokes. "Neural bus networks." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119711.

Full text of source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-68).
Bus schedules are unreliable, leaving passengers waiting and increasing commute times. This problem can be solved by modeling the traffic network and delivering predicted arrival times to passengers. Previous attempts to model traffic networks use historical, statistical, and learning-based models, with learning-based models achieving the best results. This research compares several neural network architectures trained on historical data from Boston buses. Three models are trained: a multilayer perceptron, a convolutional neural network, and a recurrent neural network. Recurrent neural networks show the best performance when compared to feedforward models. This indicates that neural time series models are effective at modeling bus networks. The large amount of data available for training bus network models and the effectiveness of large neural networks at modeling this data show that great progress can be made in improving commutes for passengers.
by Cooper Stokes Sloan.
M. Eng.
3

Khan, Altaf Hamid. "Feedforward neural networks with constrained weights." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Full text of source
Abstract:
The conventional multilayer feedforward network having continuous weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3,3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3 bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off 'near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables. Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that, provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight network, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms; the 'onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes; and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to those of their conventional counterparts.
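To make the training procedure concrete, here is a minimal sketch of progressive weight discretisation onto the integer grid [-3, 3], with 'near integers' snapped; the blending schedule and the snapping threshold are assumptions for illustration, not Khan's exact procedure:

```python
import numpy as np

def discretise_weights(w, progress, near=0.25, lo=-3, hi=3):
    """Pull continuous weights toward the integer grid [lo, hi] as
    training progresses (progress in [0, 1]); values already within
    `near` of an integer are snapped ("rounding off near integers")."""
    target = np.clip(np.round(w), lo, hi)
    w = np.where(np.abs(w - target) < near, target, w)  # snap near integers
    return (1 - progress) * w + progress * target       # anneal toward grid

w = np.array([2.9, -0.1, 1.4, -3.7])
for p in (0.0, 0.5, 1.0):
    print(p, discretise_weights(w, p))
```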
4

Zaghloul, Waleed A. Lee Sang M. "Text mining using neural networks." Lincoln, Neb. : University of Nebraska-Lincoln, 2005. http://0-www.unl.edu.library.unl.edu/libr/Dissertations/2005/Zaghloul.pdf.

Full text of source
Abstract:
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2005.
Title from title screen (sites viewed on Oct. 18, 2005). PDF text: 100 p. : col. ill. Includes bibliographical references (p. 95-100 of dissertation).
5

Hadjifaradji, Saeed. "Learning algorithms for restricted neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/NQ48102.pdf.

Full text of source
6

Cheung, Ka Kit. "Neural networks for optimization." HKBU Institutional Repository, 2001. http://repository.hkbu.edu.hk/etd_ra/291.

Full text of source
7

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Full text of source
Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation of the SWE into the field of neural networks provides a framework which is so called the quantum recurrent neural network (QRNN). A filter based on this approach is categorized as an intelligent filter, as the underlying formulation is based on the analogy to real neurons. In a QRNN filter, the interaction between the observed signal and the wave dynamics is governed by the SWE. A key issue, therefore, is achieving a solution of the SWE that ensures the stability of the numerical scheme. Another important aspect in designing this filter is the way the wave function transforms the observed signal through the network. This research has shown that there are two different ways (a normal wave and a calm wave, Chapter 5) this transformation can be achieved, and these wave packets play a critical role in the evolution of the pdf. In this context, this thesis has investigated the following issues: the existing filtering approach in the evolution of the pdf, the architecture of the QRNN, the method of solving the SWE, the numerical stability of the solution, and the propagation of the waves in the well. The methods developed in this thesis have been tested with relevant simulations. The filter has also been tested with some benchmark chaotic series along with applications to real-world situations. Suggestions are made for the scope of further developments.
8

Williams, Bryn V. "Evolutionary neural networks : models and applications." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.

Full text of source
Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX), is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
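For readers unfamiliar with GA-NN hybrids, the sketch below evolves the weights of a tiny fixed-topology network on XOR using truncation selection and Gaussian mutation. It shows the general approach only; the bumptree model and the LMX/ALMX operators are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def forward(genome, x):
    # Decode a flat genome into a 2-2-1 network's weights.
    W1, b1 = genome[:4].reshape(2, 2), genome[4:6]
    W2, b2 = genome[6:8], genome[8]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def fitness(genome):
    return -np.mean((forward(genome, X).ravel() - y) ** 2)

pop = rng.normal(size=(50, 9))
for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.3, (40, 9))
    pop = np.vstack([parents, children])               # elitism + mutation
print("best MSE:", -max(fitness(g) for g in pop))
```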
9

De, Jongh Albert. "Neural network ensembles." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50035.

Full text of source
Abstract:
Thesis (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis explores diversity in ensembles and looks at the underlying theory and mechanisms employed to generate and combine ensemble members. Bagging and boosting are studied in detail and I explain their success in terms of well-known theoretical instruments. An empirical evaluation of their performance is conducted and I compare them to a single classifier and to each other in terms of accuracy and diversity.
AFRIKAANSE OPSOMMING (translated): It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis investigates diversity in ensembles, as well as the mechanisms by which members of an ensemble can be created and combined. The bagging and boosting algorithms are studied in depth and their success is explained in terms of well-known theoretical instruments. The performance of these two algorithms is measured experimentally, and their accuracy and diversity are compared with those of a single network.
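A minimal bagging sketch in the spirit of the ensembles studied in this thesis (scikit-learn, with made-up hyperparameters): each member network is trained on a bootstrap resample, and the diversity among members is what allows the majority vote to beat a single classifier:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_moons

X, y = make_moons(500, noise=0.3, random_state=0)
rng = np.random.default_rng(0)

# Bagging: train each network on a bootstrap resample of the data,
# then combine members by majority vote.
members = []
for seed in range(7):
    idx = rng.integers(0, len(X), len(X))            # sample with replacement
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=seed).fit(X[idx], y[idx])
    members.append(net)

votes = np.stack([m.predict(X) for m in members])
ensemble = (votes.mean(axis=0) > 0.5).astype(int)    # majority vote
print("ensemble accuracy:", (ensemble == y).mean())
```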
10

Lee, Ji Young Ph D. Massachusetts Institute of Technology. "Information extraction with neural networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111905.

Full text of source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 85-97).
Electronic health records (EHRs) have been widely adopted, and are a gold mine for clinical research. However, EHRs, especially their text components, remain largely unexplored due to the fact that they must be de-identified prior to any medical investigation. Existing systems for de-identification rely on manual rules or features, which are time-consuming to develop and fine-tune for new datasets. In this thesis, we propose the first de-identification system based on artificial neural networks (ANNs), which achieves state-of-the-art results without any human-engineered features. The ANN architecture is extended to incorporate features, further improving the de-identification performance. Under practical considerations, we explore transfer learning to take advantage of a large annotated dataset to improve the performance on datasets with a limited number of annotations. The ANN-based system is publicly released as an easy-to-use software package for general-purpose named-entity recognition as well as de-identification. Finally, we present an ANN architecture for relation extraction, which ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).
by Ji Young Lee.
Ph. D.
11

Zeng, Brandon. "Towards understanding residual neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123067.

Full text of source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (page 37).
Residual networks (ResNets) are now a prominent architecture in the field of deep learning. However, an explanation for their success remains elusive. The original view is that residual connections allow for the training of deeper networks, but it is not clear that added layers are always useful, or even how they are used. In this work, we find that residual connections distribute learning behavior across layers, allowing ResNets to indeed effectively use deeper layers and outperform standard networks. We support this explanation with results for network gradients and representation learning that show that residual connections make the training of individual residual blocks easier.
by Brandon Zeng.
M. Eng.
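The identity-skip form that this abstract analyses can be stated in a few lines. A NumPy sketch with hypothetical shapes: with small initial weights each block starts near the identity mapping, one intuition for why residual connections let deeper layers be used effectively:

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = x + F(x): the skip path passes x through unchanged, so the
    block only has to learn a residual correction F."""
    f = np.maximum(0, x @ W1) @ W2     # F(x): a two-layer ReLU transform
    return x + f

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 16))
W1 = rng.normal(scale=0.01, size=(16, 16))
W2 = rng.normal(scale=0.01, size=(16, 16))
# With small weights, F(x) is tiny and the block is close to identity.
print(np.allclose(residual_block(x, W1, W2), x, atol=1e-2))
```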
12

Sarda, Srikant 1977. "Neural networks and neurophysiological signals." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9806.

Full text of source
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 45).
The purpose of this thesis project is to develop, implement, and validate a neural network which will classify compound muscle action potentials (CMAPs). The two classes of signals are "viable" and "non-viable." This classification system will be used as part of a quality assurance mechanism on the NC-stat nerve conduction monitoring system. The results show that standard backpropagation neural networks provide exceptional classification results on novel waveforms. Also, principal components analysis is a powerful preprocessing technique which allows for a significant reduction in processing efficiency, while maintaining performance standards. This system is implementable as a real-time quality control process for the NC-stat.
by Srikant Sarda.
S.B. and M.Eng.
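The preprocessing step described above, PCA before classification, can be sketched in a few lines of NumPy; the waveform dimensions are invented, and this is not the NC-stat pipeline itself:

```python
import numpy as np

def pca_reduce(X, k):
    """Project waveforms onto their top-k principal components,
    shrinking the input a classifier network has to process."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                 # scores in the top-k subspace

rng = np.random.default_rng(0)
waveforms = rng.normal(size=(200, 128)) # stand-ins for sampled CMAPs
features = pca_reduce(waveforms, 8)
print(waveforms.shape, "->", features.shape)   # (200, 128) -> (200, 8)
```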
13

Nareshkumar, Nithyalakshmi. "Simultaneous versus Successive Learning in Neural Networks." Miami University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=miami1134068959.

Full text of source
14

Amin, Muhamad Kamal M. "Multiple self-organised spiking neural networks." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources. Online version available for University members only until Feb. 1, 2014, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=26029.

Full text of source
Abstract:
Thesis (Ph.D.)--Aberdeen University, 2009.
With: Clustering with self-organised spiking neural network / Muhamad K. Amin ... et al. Joint 4th International Conference on Soft Computing and Intelligent Systems (SCIS) and 9th International Symposium on Advanced Intelligent Systems (SIS), Sept. 17-21, 2008, Nagoya, Japan. Includes bibliographical references.
15

McMichael, Lonny D. (Lonny Dean). "A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model." Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.

Full text of source
Abstract:
A useful compiler has been designed that takes a high level neural network specification and constructs a low level configuration file explicitly specifying all network parameters and connections. The neural network model for which this compiler was designed is the adaptrode neuronal model, and the configuration file created can be used by the Adnet simulation engine to perform network experiments. The specification language is very flexible and provides a general framework from which almost any network wiring configuration may be created. While the compiler was created for the specialized adaptrode model, the wiring specification algorithms could also be used to specify the connections in other types of networks.
16

Yang, Horng-Chang. "Multiresolution neural networks for image edge detection and restoration." Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/66740/.

Full text of source
Abstract:
One of the methods for building an automatic visual system is to borrow the properties of the human visual system (HVS). Artificial neural networks are based on this doctrine and they have been applied to image processing and computer vision. This work focused on the plausibility of using a class of Hopfield neural networks for edge detection and image restoration. To this end, a quadratic energy minimization framework is presented. Central to this framework are relaxation operations, which can be implemented using the class of Hopfield neural networks. The role of the uncertainty principle in vision is described, which imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows the trade off between position and class resolution and ensures both robustness in noise and efficiency of computation. As edge detection and image restoration are ill-posed, some a priori knowledge is needed to regularize these problems. A multiresolution network is proposed to tackle the uncertainty problem and the regularization of these ill-posed image processing problems. For edge detection, orientation information is used to construct a compatibility function for the strength of the links of the proposed Hopfield neural network. Edge detection results are presented for a number of synthetic and natural images which show that the iterative network gives robust results at low signal-to-noise ratios (0 dB) and is at least as good as many previous methods at capturing complex region shapes. For restoration, mean square error is used as the quadratic energy function of the Hopfield neural network. The results of the edge detection are used for adaptive restoration. Also shown are the results of restoration using the proposed iterative network framework.
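The relaxation at the core of such a framework is the standard asynchronous Hopfield update, which never increases a quadratic energy E(s) = -(1/2) sᵀWs - bᵀs. A generic sketch (pattern recall rather than the thesis's edge-detection energy):

```python
import numpy as np

def hopfield_relax(W, b, s, steps=200, rng=None):
    """Asynchronous threshold updates that never increase the quadratic
    energy E(s), so the state settles into a local minimum."""
    rng = rng or np.random.default_rng(0)
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s + b[i] >= 0 else -1
    return s

rng = np.random.default_rng(1)
pattern = rng.choice([-1, 1], size=32)
W = np.outer(pattern, pattern); np.fill_diagonal(W, 0)   # store one pattern
noisy = pattern * rng.choice([1, 1, 1, -1], size=32)     # flip ~25% of bits
recovered = hopfield_relax(W, np.zeros(32), noisy.copy())
print("fraction of bits recovered:", (recovered == pattern).mean())
```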
17

Polhill, John Gareth. "Guaranteeing generalisation in neural networks." Thesis, University of St Andrews, 1995. http://hdl.handle.net/10023/12878.

Full text of source
Abstract:
Neural networks need to be able to guarantee their intrinsic generalisation abilities if they are to be used reliably. Mitchell's concept and version spaces technique is able to guarantee generalisation in the symbolic concept-learning environment in which it is implemented. Generalisation, according to Mitchell, is guaranteed when there is no alternative concept that is consistent with all the examples presented so far, except the current concept, given the bias of the user. A form of bidirectional convergence is used by Mitchell to recognise when the no-alternative situation has been reached. Mitchell's technique has problems of search and storage feasibility in its symbolic environment. This thesis aims to show that by evolving the technique further in a neural environment, these problems can be overcome. Firstly, the biasing factors which affect the kind of concept that can be learned are explored in a neural network context. Secondly, approaches for abstracting the underlying features of the symbolic technique that enable recognition of the no-alternative situation are discussed. The discussion generates neural techniques for guaranteeing generalisation and culminates in a neural technique which is able to recognise when the best fit neural weight state has been found for a given set of data and topology.
18

Salama, Rameri. "On evolving modular neural networks." University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0011.

Full text of source
Abstract:
The basis of this thesis is the presumption that while neural networks are useful structures that can be used to model complex, highly non-linear systems, current methods of training the neural networks are inadequate in some problem domains. Genetic algorithms have been used to optimise both the weights and architectures of neural networks, but these approaches do not treat the neural network in a sensible manner. In this thesis, I define the basis of computation within a neural network as a single neuron and its associated input connections. Sets of these neurons, stored in a matrix representation, comprise the building blocks that are transferred during one or more epochs of a genetic algorithm. I develop the concept of a Neural Building Block and two new genetic algorithms are created that utilise this concept. The first genetic algorithm utilises the micro neural building block (micro-NBB): a unit consisting of one or more neurons and their input connections. The micro-NBB is a unit that is transmitted through the process of crossover and hence requires the introduction of a new crossover operator. However, the micro-NBB cannot be stored as a reusable component and must exist only as the product of the crossover operator. The macro neural building block (macro-NBB) is utilised in the second genetic algorithm, and encapsulates the idea that fit neural networks contain fit sub-networks that need to be preserved across multiple epochs. A macro-NBB is a micro-NBB that exists across multiple epochs. Macro-NBBs must exist across multiple epochs, and this necessitates the use of a genetic store, and a new operator to introduce macro-NBBs back into the population at random intervals. Once the theoretical presentation is completed, the newly developed genetic algorithms are used to evolve weights for a variety of architectures of neural networks to demonstrate the feasibility of the approach. Comparison of the new genetic algorithm with other approaches is very favourable on two problems: a multiplexer problem and a robot control problem.
19

Bhattacharya, Dipankar. "Neural networks for signal processing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21924.pdf.

Full text of source
20

Tipping, Michael E. "Topographic mappings and feed-forward neural networks." Thesis, Aston University, 1996. http://publications.aston.ac.uk/672/.

Full text of source
Abstract:
This thesis is a study of the generation of topographic mappings - dimension reducing transformations of data that preserve some element of geometric structure - with feed-forward neural networks. As an alternative to established methods, a transformational variant of Sammon's method is proposed, where the projection is effected by a radial basis function neural network. This approach is related to the statistical field of multidimensional scaling, and from that the concept of a 'subjective metric' is defined, which permits the exploitation of additional prior knowledge concerning the data in the mapping process. This then enables the generation of more appropriate feature spaces for the purposes of enhanced visualisation or subsequent classification. A comparison with established methods for feature extraction is given for data taken from the 1992 Research Assessment Exercise for higher educational institutions in the United Kingdom. This is a difficult high-dimensional dataset, and illustrates well the benefit of the new topographic technique. A generalisation of the proposed model is considered for implementation of the classical multidimensional scaling (CMDS) routine. This is related to Oja's principal subspace neural network, whose learning rule is shown to descend the error surface of the proposed CMDS model. Some of the technical issues concerning the design and training of topographic neural networks are investigated. It is shown that neural network models can be less sensitive to entrapment in the sub-optimal global minima that badly affect the standard Sammon algorithm, and tend to exhibit good generalisation as a result of implicit weight decay in the training process. It is further argued that for ideal structure retention, the network transformation should be perfectly smooth for all interdata directions in input space.
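For reference, Sammon's method minimises the stress below between original and projected pairwise distances; the transformational variant in this thesis trains an RBF network to perform the projection, which this sketch does not reproduce:

```python
import numpy as np
from scipy.spatial.distance import pdist

def sammon_stress(X_high, X_low, eps=1e-12):
    """Sammon's stress: how badly pairwise distances in the low-
    dimensional projection distort those in the original space."""
    d_star = pdist(X_high) + eps         # original pairwise distances
    d = pdist(X_low)                     # distances after projection
    return np.sum((d_star - d) ** 2 / d_star) / np.sum(d_star)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
print(sammon_stress(X, X[:, :2]))        # naive 2-D projection, for comparison
```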
21

Rountree, Nathan. "Initialising neural networks with prior knowledge." University of Otago. Department of Computer Science, 2007. http://adt.otago.ac.nz./public/adt-NZDU20070510.135442.

Full text of source
Abstract:
This thesis explores the relationship between two classification models: decision trees and multilayer perceptrons. Decision trees carve up databases into box-shaped regions, and make predictions based on the majority class in each box. They are quick to build and relatively easy to interpret. Multilayer perceptrons (MLPs) are often more accurate than decision trees, because they are able to use soft, curved, arbitrarily oriented decision boundaries. Unfortunately MLPs typically require a great deal of effort to determine a good number and arrangement of neural units, and then require many passes through the database to determine a good set of connection weights. The cost of creating and training an MLP is thus hundreds of times greater than the cost of creating a decision tree, for perhaps only a small gain in accuracy. The following scheme is proposed for reducing the computational cost of creating and training MLPs. First, build and prune a decision tree to generate prior knowledge of the database. Then, use that knowledge to determine the initial architecture and connection weights of an MLP. Finally, use a training algorithm to refine the knowledge now embedded in the MLP. This scheme has two potential advantages: a suitable neural network architecture is determined very quickly, and training should require far fewer passes through the data. In this thesis, new algorithms for initialising MLPs from decision trees are developed. The algorithms require just one traversal of a decision tree, and produce four-layer MLPs with the same number of hidden units as there are nodes in the tree. The resulting MLPs can be shown to reach a state more accurate than the decision trees that initialised them, in fewer training epochs than a standard MLP. Employing this approach typically results in MLPs that are just as accurate as standard MLPs, and an order of magnitude cheaper to train.
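The core trick in initialising an MLP from a tree is turning each axis-aligned test into one steep sigmoid unit whose weighted sum crosses zero at the split. A toy sketch (the four-layer construction and the thesis's exact mapping are not reproduced; the sharpness constant is an assumption):

```python
import numpy as np

def split_to_neuron(feature, threshold, sharpness=50.0):
    """Turn a decision-tree test (x[feature] > threshold) into the
    weights of one sigmoid unit: w.x + b crosses zero at the split."""
    def unit(x):
        z = sharpness * (x[:, feature] - threshold)
        return 1.0 / (1.0 + np.exp(-z))  # ~step function, but differentiable
    return unit

x = np.array([[0.2], [0.49], [0.51], [0.9]])
unit = split_to_neuron(feature=0, threshold=0.5)
print(unit(x).round(2))                  # ≈ [0, 0.38, 0.62, 1]: a soft split
```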
22

Shah, Jagesh V. (Jagesh Vijaykumar). "Learning dynamics in feedforward neural networks." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36541.

Full text of source
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (leaves 108-115).
by Jagesh V. Shah.
M.S.
23

Mars, Risha R. "Organic LEDs for optoelectronic neural networks." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77537.

Full text of source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 79-81).
In this thesis, I investigate the characteristics of Organic Light Emitting Diodes (OLEDs) and assess their suitability for use in the Compact Optoelectronic Integrated Neural (COIN) coprocessor. The COIN coprocessor, a prototype artificial neural network implemented in hardware, seeks to implement neural network algorithms in native optoelectronic hardware in order to do parallel-type processing in a faster and more efficient manner than all-electronic implementations. The feasibility of scaling the network to tens of millions of neurons is the main reason for optoelectronics: they do not suffer from crosstalk and other problems that affect electrical wires when they are densely packed. I measured the optical and electrical characteristics of different types of OLEDs, and made calculations based on existing optical equipment to determine the specific characteristics required if OLEDs were to be used in the prototype. The OLEDs were compared to Vertical Cavity Surface Emitting Lasers (VCSELs) to determine the tradeoffs in using one over the other in the prototype neural network.
by Risha R. Mars.
M.Eng.
24

Doshi, Anuja. "Aircraft position prediction using neural networks." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33300.

Full text of source
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (leaf 64).
The Federal Aviation Administration (FAA) has been investigating early-warning accident prevention systems in an effort to prevent runway collisions. One system in place is the Airport Movement Area Safety System (AMASS), developed under contract with the FAA. AMASS uses a linear prediction system to predict the position of an aircraft 5 to 30 seconds in the future. The system sounds an alarm to warn air traffic controllers if it foresees a potential accident. However, research done at MIT and the Volpe National Transportation Systems Center has shown that neural networks more accurately predict the future position of aircraft. Neural networks are self-learning, and the time required for the optimization of safety logic will be minimized using neural networks. More accurate predictions of aircraft position will deliver earlier warnings to air traffic controllers while reducing the number of nuisance alerts. There are many factors to consider in designing an aircraft position prediction neural network, including history length, types of inputs and outputs, and applicable training data. This document chronicles the design, training, performance, and analysis of a position prediction neural network, and presents the resulting optimal neural network for the AMASS system. Additionally, the neural network prediction model is compared to other prediction models, including a constant-speed model, linear regression, and an autoregressive model. In this analysis, neural networks present themselves as a superior model for aircraft position prediction.
by Anuja Doshi.
M.Eng. and S.B.
25

Gu, Youyang. "Food adulteration detection using neural networks." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106015.

Full text of source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 99-100).
In food safety and regulation, there is a need for an automated system to be able to make predictions on which adulterants (unauthorized substances in food) are likely to appear in which food products. For example, we would like to know that it is plausible for Sudan I, an illegal red dye, to adulterate "strawberry ice cream", but not "bread". In this work, we show a novel application of deep neural networks in solving this task. We leverage data sources of commercial food products, hierarchical properties of substances, and documented cases of adulterations to characterize ingredients and adulterants. Taking inspiration from natural language processing, we show the use of recurrent neural networks to generate vector representations of ingredients from Wikipedia text and make predictions. Finally, we use these representations to develop a sequential method that has the capability to improve prediction accuracy as new observations are introduced. The results outline a promising direction in the use of machine learning techniques to aid in the detection of adulterants in food.
by Youyang Gu.
M. Eng.
26

Mehta, Haripriya(Haripriya P. ). "Secure inference of quantized neural networks." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127663.

Full text of source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 63-65).
Running image recognition algorithms on medical datasets raises several privacy concerns. Hospitals may not have access to an image recognition model that a third party may have developed, and medical images are HIPAA protected and thus cannot leave hospital servers. However, with secure neural network inference, hospitals can send encrypted medical images as input to a modified neural network that is compatible with leveled fully homomorphic encryption (LHE), a form of encryption that can support evaluation of degree-bounded polynomial functions over encrypted data without decrypting it, and the Brakerski/Fan-Vercauteren (BFV) scheme, an efficient LHE cryptographic scheme which only operates on integers. To make the model compatible with LHE under the BFV scheme, the neural network weights and activations must be converted to integers through quantization, and non-linear activation functions must be approximated with low-degree polynomial functions. This paper presents a pipeline that can train real-world models such as ResNet-18 on large datasets and quantize them without significant loss in accuracy. Additionally, we highlight customized quantized inference functions which we will eventually modify to be compatible with LHE, and measure the impact on model accuracy.
by Haripriya Mehta.
M. Eng.
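A toy sketch of the two modifications described above: quantising weights and activations to bounded integers, and replacing ReLU with a low-degree polynomial (x² is a common stand-in under leveled HE). The scale and bit width are hypothetical, and no encryption is performed here:

```python
import numpy as np

def quantize(x, scale=64, bits=8):
    """Map floats to bounded integers, as required by an
    integer-only scheme such as BFV."""
    q = np.round(x * scale).astype(np.int64)
    return np.clip(q, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)

def poly_act(z):
    # Low-degree polynomial stand-in for a non-linearity: leveled HE
    # can evaluate z*z but not a piecewise function like ReLU.
    return z * z

rng = np.random.default_rng(0)
w, x = rng.normal(size=(4, 3)), rng.normal(size=3)
z = quantize(w) @ quantize(x)            # integer matmul, HE-friendly
print(poly_act(z))
```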
27

Srivastava, Sanjana. "On foveation of deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123134.

Full text of source
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 61-63).
The human ability to recognize objects is impaired when the object is not shown in full. "Minimal images" are the smallest regions of an image that remain recognizable for humans. [26] show that a slight modification of the location and size of the visible region of the minimal image produces a sharp drop in human recognition accuracy. In this paper, we demonstrate that such drops in accuracy due to changes of the visible region are a common phenomenon between humans and existing state-of-the-art convolutional neural networks (CNNs), and are much more prominent in CNNs. We found many cases where CNNs classified one region correctly and the other incorrectly, though they differed by only one row or column of pixels, and were often bigger than the average human minimal image size. We show that this phenomenon is independent of previous works that have reported a lack of invariance to minor modifications in object location in CNNs. Our results thus reveal a new failure mode of CNNs that also affects humans to a lesser degree. They expose how fragile CNN recognition ability is for natural images, even without synthetic adversarial patterns being introduced. This opens the potential for CNN robustness on natural images to be brought to the human level by taking inspiration from human robustness methods. One of these is eccentricity dependence, a model of human focus in which attention to the visual input degrades in proportion to distance from the focal point [7]. We demonstrate that applying the "inverted pyramid" eccentricity method, a multi-scale input transformation, makes CNNs more robust to useless background features than a standard raw-image input. Our results also find that using the inverted pyramid method generally reduces useless background pixels, thereby reducing the training data required.
by Sanjana Srivastava.
M. Eng.
28

Behnke, Sven. "Hierarchical neural networks for image interpretation." Berlin [u.a.]: Springer, 2003. http://www.loc.gov/catdir/enhancements/fy0813/2003059597-d.html.

Full text of source
29

Whyte, William John. "Statistical mechanics of neural networks." Thesis, University of Oxford, 1995. http://ora.ox.ac.uk/objects/uuid:e17f9b27-58ac-41ad-8722-cfab75139d9a.

Full text of source
Abstract:
We investigate five different problems in the field of the statistical mechanics of neural networks. The first three problems involve attractor neural networks that optimise particular cost functions for storage of static memories as attractors of the neural dynamics. We study the effects of replica symmetry breaking (RSB) and attempt to find algorithms that will produce the optimal network if error-free storage is impossible. For the Gardner-Derrida network we show that full RSB is necessary for an exact solution everywhere above saturation. We also show that, no matter what the cost function that is optimised, if the distribution of stabilities has a gap then the Parisi replica ansatz that has been made is unstable. For the noise-optimal network we find a continuous transition to replica symmetry breaking at the AT line, in line with previous studies of RSB for different networks. The change to RSB1 improves the agreement between "experimental" and theoretical calculations of the local stability distribution ρ(λ) significantly. The effect on observables is smaller. We show that if the network is presented with a training set which has been generated from a set of prototypes by some noisy rule, but neither the noise level nor the prototypes are known, then the perceptron algorithm is the best initial choice to produce a network that will generalise well. If additional information is available, more sophisticated algorithms will be faster and give a smaller generalisation error. The remaining problems deal with attractor neural networks with separable interaction matrices which can be used (under parallel dynamics) to store sequences of patterns without the need for time delays. We look at the effects of correlations on a single-sequence network, and numerically investigate the storage capacity of a network storing an extensive number of patterns in such sequences. When correlations are implemented along with a term in the interaction matrix designed to suppress some of the effects of those correlations, the competition between the two produces a rich range of behaviour. Contrary to expectations, increasing the correlations and the operating temperature proves capable of improving the sequence-processing behaviour of the network. Finally, we demonstrate that a network storing a large number of sequences of patterns using a Hebb-like rule can store approximately twice as many patterns as the network trained with the Hebb rule to store individual patterns.
30

Adamu, Abdullahi S. "An empirical study towards efficient learning in artificial neural networks by neuronal diversity." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.

Full text of source
Abstract:
Artificial Neural Networks (ANNs) are biologically inspired algorithms, and it is natural that biology continues to inspire research in artificial neural networks. From the recent breakthrough of deep learning to the wake-sleep training routine, all have a common source of inspiration: biology. The transfer functions of artificial neural networks play the important role of forming the decision boundaries necessary for learning. However, there has been relatively little research on transfer function optimization compared to other aspects of neural network optimization. In this work, neuronal diversity, a property found in biological neural networks, is explored as a potentially promising method of transfer function optimization. This work shows how neural diversity can improve generalization in the context of the literature on the bias-variance decomposition and meta-learning. It then demonstrates that neural diversity, represented in the form of transfer function diversity, can exhibit diverse and accurate computational strategies that can be used as ensembles with competitive results, without supplementing it with other diversity maintenance schemes that tend to be computationally expensive. This work also presents neural network meta-features, described as problem signatures, sampled from models with diverse transfer functions for problem characterization. These were shown to meet the criteria of basic properties desired for any meta-feature, i.e. consistency for a problem and discrimination between different problems. Furthermore, these meta-features were also used to study the underlying computational strategies adopted by the neural network models, which led to the discovery of the strong discriminatory property of the evolved transfer functions. The culmination of this study is the co-evolution of neurally diverse neurons with their weights and topology for efficient learning. This approach is shown to achieve significant generalization ability, as demonstrated by its average MSE of 0.30 on 22 different benchmarks with minimal resources (i.e. two hidden units). Interestingly, these are the properties associated with neural diversity, showing that efficiency and increased computational capacity can be replicated with transfer function diversity in artificial neural networks.
31

Mohr, Sheila Jean. "Temporal EKG signal classification using neural networks." Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-02022010-020115/.

Full text of source
32

Treadgold, Nicholas K. (Computer Science & Engineering, Faculty of Engineering, UNSW). "Constructive neural networks : generalisation, convergence and architectures." Awarded by: University of New South Wales, School of Computer Science and Engineering, 1999. http://handle.unsw.edu.au/1959.4/17615.

Full text of source
Abstract:
Feedforward neural networks trained via supervised learning have proven to be successful in the field of pattern recognition. The most important feature of a pattern recognition technique is its ability to successfully classify future data. This is known as generalisation. A more practical aspect of pattern recognition methods is how quickly they can be trained and how reliably a good solution is found. Feedforward neural networks have been shown to provide good generalisation on a variety of problems. A number of training techniques also exist that provide fast convergence. Two problems often addressed within the field of feedforward neural networks are how to improve the generalisation and convergence of these pattern recognition techniques. These two problems are addressed in this thesis through the framework of constructive neural network algorithms. Constructive neural networks are a type of feedforward neural network in which the network architecture is built during the training process. The type of architecture built can affect both generalisation and convergence speed. Convergence speed and reliability are important properties of feedforward neural networks. These properties are studied by examining different training algorithms and the effect of using a constructive process. A new gradient based training algorithm, SARPROP, is introduced. This algorithm addresses the problems of poor convergence speed and reliability when using a gradient based training method. SARPROP is shown to increase both convergence speed and the chance of convergence to a good solution. This is achieved through the combination of gradient based and Simulated Annealing methods. The convergence properties of various constructive algorithms are examined through a series of empirical studies. The results of these studies demonstrate that the cascade architecture allows for faster, more reliable convergence using a gradient based method than a single layer architecture with a comparable number of weights. It is shown that constructive algorithms that bias the search direction of the gradient based training algorithm for the newly added hidden neurons produce smaller networks and more rapid convergence. A constructive algorithm using search direction biasing is shown to converge to solutions with networks that are unreliable and inefficient to train using a non-constructive gradient based algorithm. The technique of weight freezing is shown to result in larger architectures than those obtained from training the whole network. Improving the generalisation ability of constructive neural networks is an important area of investigation. A series of empirical studies are performed to examine the effect of regularisation on generalisation in constructive cascade algorithms. It is found that the combination of early stopping and regularisation results in better generalisation than the use of early stopping alone. A cubic regularisation term that greatly penalises large weights is shown to be beneficial for generalisation in cascade networks. An adaptive method of setting the regularisation magnitude in constructive networks is introduced and is shown to produce generalisation results similar to those obtained with a fixed, user-optimised regularisation setting. This adaptive method also often results in the construction of smaller networks for more complex problems.
The insights obtained from the SARPROP algorithm and from the convergence and generalisation empirical studies are used to create a new constructive cascade algorithm, acasper. This algorithm is extensively benchmarked and is shown to obtain good generalisation results in comparison to a number of well-respected and successful neural network algorithms. A technique of incorporating the validation data into the training set after network construction is introduced and is shown to generally result in similar or improved generalisation. The difficulties of implementing a cascade architecture in VLSI are described and results are given on the effect of the cascade architecture on such attributes as weight growth, fan-in, network depth, and propagation delay. Two variants of the cascade architecture are proposed. These new architectures are shown to produce similar generalisation results to the cascade architecture, while also addressing the problems of VLSI implementation of cascade networks.
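The combination of sign-based gradient steps with simulated annealing can be sketched as follows. This is only in the spirit of SARPROP; the step bounds, noise schedule, and constants are assumptions, not the published values:

```python
import numpy as np

def sarprop_step(w, grad, prev_grad, step, t, T=0.01,
                 eta_plus=1.2, eta_minus=0.5):
    """One RPROP-style update with an annealed noise term, in the
    spirit of SARPROP (constants and noise schedule are assumptions)."""
    same_sign = grad * prev_grad > 0
    step = np.where(same_sign, step * eta_plus, step * eta_minus)
    step = np.clip(step, 1e-6, 1.0)
    # Simulated-annealing perturbation that decays as training proceeds,
    # helping the search escape shallow local minima early on.
    noise = np.random.default_rng(t).normal(size=w.shape) * T * 2 ** (-t / 50)
    return w - np.sign(grad) * step + noise, step

# Demo with a fixed gradient; a real run would recompute grad each step
# and carry prev_grad forward.
w, step = np.array([1.0, -2.0]), np.full(2, 0.1)
grad = prev = np.array([0.3, -0.2])
for t in range(3):
    w, step = sarprop_step(w, grad, prev, step, t)
    print(t, w.round(3))
```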
33

Tavanaei, Amirhossein. "Spiking Neural Networks and Sparse Deep Learning." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10807940.

Full text of source
Abstract:

This document proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically, spiking convolutional neural networks (CNNs). Training a multi-layer spiking network poses difficulties because the output spikes do not have derivatives and the commonly used backpropagation method for non-spiking networks is not easily applied. Our methods use novel versions of the brain-like, local learning rule named spike-timing-dependent plasticity (STDP) that incorporates supervised and unsupervised components. Our method starts with conventional learning methods and converts them to spatio-temporally local rules suited for SNNs.

The training uses two components for unsupervised feature extraction and supervised classification. The first component refers to new STDP rules for spike-based representation learning that trains convolutional filters and initial representations. The second introduces new STDP-based supervised learning rules for spike pattern classification via an approximation to gradient descent by combining the STDP and anti-STDP rules. Specifically, the STDP-based supervised learning model approximates gradient descent by using temporally local STDP rules. Stacking these components implements a novel sparse, spiking deep learning model. Our spiking deep learning model is categorized as a variation of spiking CNNs of integrate-and-fire (IF) neurons with performance comparable with the state-of-the-art deep SNNs. The experimental results show the success of the proposed model for image classification. Our network architecture is the only spiking CNN which provides bio-inspired STDP rules in a hierarchy of feature extraction and classification in an entirely spike-based framework.
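The pair-based STDP window at the heart of such models is easy to state. Below is the standard unsupervised form; the thesis's supervised STDP/anti-STDP approximation to gradient descent is not reproduced, and the constants are typical textbook values:

```python
import numpy as np

def stdp(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise (exponential windows)."""
    dt = t_post - t_pre
    dw = np.where(dt >= 0, a_plus * np.exp(-dt / tau),
                  -a_minus * np.exp(dt / tau))
    return np.clip(w + dw, 0.0, 1.0)

w = np.full(5, 0.5)
t_pre = np.array([10.0, 12, 30, 31, 50])
t_post = np.full(5, 30.0)                 # one postsynaptic spike at t=30
print(stdp(w, t_pre, t_post).round(3))    # earlier pre-spikes potentiate
```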

34

Czuchry, Andrew J. Jr. "Toward a formalism for the automation of neural network construction and processing control." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/9199.

Full text of source
35

Bragansa, John. "On the performance issues of the bidirectional associative memory." Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17809.

Full text of source
36

Post, David L. "Network Management: Assessing Internet Network-Element Fault Status Using Neural Networks." Ohio : Ohio University, 2008. http://www.ohiolink.edu/etd/view.cgi?ohiou1220632155.

Full text of source
37

Morphet, Steven Brian Işık Can. "Modeling neural networks via linguistically interpretable fuzzy inference systems." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2004. http://wwwlib.umi.com/cr/syr/main.

Full text of source
38

Ngom, Alioune. "Synthesis of multiple-valued logic functions by neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ36787.pdf.

Full text of source
39

Rivest, François. "Knowledge transfer in neural networks : knowledge-based cascade-correlation." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=29470.

Full text of source
Abstract:
Most neural network learning algorithms cannot use knowledge other than what is provided in the training data. Initialized using random weights, they cannot use prior knowledge such as knowledge stored in previously trained networks. This manuscript thesis addresses this problem. It contains a literature review of the relevant static and constructive neural network learning algorithms and of the recent research on transfer of knowledge across neural networks. Manuscript 1 describes a new algorithm, named knowledge-based cascade-correlation (KBCC), which extends the cascade-correlation learning algorithm to allow it to use prior knowledge. This prior knowledge can be provided as, but is not limited to, previously trained neural networks. The manuscript also contains a set of experiments that shows how KBCC is able to reduce its learning time by automatically selecting the appropriate prior knowledge to reuse. Manuscript 2 shows how KBCC speeds up learning on a realistic large problem of vowel recognition.
40

Künzle, Philippe. "Building topological maps for robot navigation using neural networks." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82266.

Full text of source
Abstract:
Robots carrying out tasks in an unknown environment often need to build a map in order to navigate. One approach is to create a detailed map of the environment containing the positions of obstacles, but this can require a large amount of memory, especially if the environment is large. Another approach, closer to how people build a mental map, is the topological map. A topological map contains only places that are easy to recognize (landmarks) and the links between them.
In this thesis, we explore the issue of creating a topological map from range data. A robot in a simulated environment uses the distances to surrounding objects (range data) and a compass as inputs. From this information, the robot finds intersections, classifies them as landmarks using a neural network, and creates a topological map of its environment. The neural network that detects landmarks is trained online on sample intersections. Although the robot operates in a simulated environment, the ideas developed in this thesis could be applied to a real robot in an office space.
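A sketch of the graph-building side of this idea, where `add_landmark` would be fed the neural classifier's label for each detected intersection; the merging rule and `merge_radius` are assumptions, not the thesis's method:

```python
import math

class TopologicalMap:
    """Landmark graph: nodes are recognized intersections, edges link
    landmarks that were visited consecutively."""

    def __init__(self):
        self.nodes = {}      # landmark id -> (x, y, label)
        self.edges = set()   # unordered pairs of landmark ids

    def add_landmark(self, pos, label, prev_id=None, merge_radius=1.0):
        # Reuse an existing node if a same-class landmark is nearby,
        # otherwise create a new one.
        for nid, (x, y, lab) in self.nodes.items():
            if lab == label and math.dist(pos, (x, y)) < merge_radius:
                break
        else:
            nid = len(self.nodes)
            self.nodes[nid] = (pos[0], pos[1], label)
        if prev_id is not None and prev_id != nid:
            self.edges.add(frozenset((prev_id, nid)))
        return nid

# Usage: a T-junction followed by a crossroads becomes two linked nodes.
m = TopologicalMap()
a = m.add_landmark((0.0, 0.0), "T-junction")
b = m.add_landmark((5.0, 0.0), "crossroads", prev_id=a)
```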
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Xiao. "Memristor based neural networks : feasibility, theories and approaches." Thesis, University of Kent, 2014. https://kar.kent.ac.uk/49041/.

Full text of the source
Abstract:
Memristor-based neural networks refer to the use of memristors, newly emerged nanoscale devices, in building neural networks. The memristor was first postulated by Leon Chua in 1971 as the fourth fundamental passive circuit element and was experimentally validated by HP Labs in 2008. Memristors, short for memory resistors, have a peculiar memory effect that distinguishes them from resistors: applying a bias voltage across a memristor changes its resistance, termed memristance. In addition, the memristance is retained when the power supply is removed, demonstrating the device's non-volatility. Memristor-based neural networks are currently being researched with the aim of replacing complementary metal-oxide-semiconductor (CMOS) devices in neuromorphic circuits with memristors and of investigating their potential applications. Current research primarily focuses on using memristors as synaptic connections between neurons; however, it may also be possible to let memristors perform computation in a natural way that avoids additional CMOS devices. Examples of such methods in neural networks are presented in this thesis, such as memristor-based cellular neural network (CNN) structures, the memristive spike-timing-dependent plasticity (STDP) model, and the exploration of their potential applications. This thesis presents manifold studies on memristor-based neural networks, from theories and feasibility to implementation approaches. The studies are divided into two parts: the use of memristors in non-spiking neural networks and in spiking neural networks (SNNs). At the beginning of the thesis, the fundamentals of neural networks and memristors are explored, with an analysis of the physical properties and v-i behaviour of memristors. In the studies of memristor-based non-spiking neural networks, a staircase memristor model is presented, based on memristors that have multi-level resistive states and a delayed-switching effect. This model is adapted to CNNs and echo state networks (ESNs) as applications that benefit from memristive implementations. In the studies of memristor-based SNNs, a trace-based memristive STDP model is proposed and discussed to overcome the incompatibility of the previous model with all-to-all spike interaction. The work also presents applications of the trace-based memristive model in associative learning with retention loss and in supervised learning. The computational results of experiments with different applications show that memristor-based neural networks will be advantageous for building synchronous or asynchronous parallel neuromorphic systems. The work presents several new findings on memristor modelling, memristor-based neural network structures, and memristor-based associative learning. To the best of our knowledge, these studies address unexplored research areas in the context of memristor-based neural networks and therefore form original contributions.
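To show what "trace-based" means here, a minimal sketch of all-to-all trace STDP driving a bounded weight that stands in for the memristance state; the time constants and amplitudes are generic textbook values, not the thesis's device model:

```python
import math

def trace_stdp(pre_spikes, post_spikes, dt=1.0, tau=20.0,
               a_plus=0.010, a_minus=0.012, w0=0.5):
    """All-to-all trace-based STDP: each side keeps an exponentially
    decaying spike trace; the weight moves whenever the *other* side
    spikes, in proportion to that trace. The bounded weight w plays
    the role of the memristance state."""
    x = y = 0.0                        # pre- and postsynaptic traces
    w = w0
    decay = math.exp(-dt / tau)
    for pre, post in zip(pre_spikes, post_spikes):
        x *= decay
        y *= decay
        if pre:                        # pre after post -> depression
            x += 1.0
            w -= a_minus * y
        if post:                       # post after pre -> potentiation
            y += 1.0
            w += a_plus * x
        w = min(max(w, 0.0), 1.0)      # clip to the device's bounds
    return w

# Pre leading post by one step -> net potentiation.
print(trace_stdp([1, 0, 1, 0], [0, 1, 0, 1]))
```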
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Fengzhen. "Neural networks for data fusion." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ30179.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
43

Horvitz, Richard P. "Symbol Grounding Using Neural Networks." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337887977.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
44

Turner, Joe. "Application of artificial neural networks in pharmacokinetics." Connect to full text, 2003. http://setis.library.usyd.edu.au/adt/public_html/adt-NU/public/adt-NU20031007.090937/index.html.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Miller, Paul Ian. "Recurrent neural networks and adaptive motor control." Thesis, University of Stirling, 1997. http://hdl.handle.net/1893/21520.

Full text of the source
Abstract:
This thesis is concerned with the use of neural networks for motor control tasks. The main goal of the thesis is to investigate ways in which the biological notions of motor programs and Central Pattern Generators (CPGs) may be implemented in a neural network framework. Biological CPGs can be seen as components within a larger control scheme, which is basically modular in design. In this thesis, these ideas are investigated through the use of modular recurrent networks, which are applied to a variety of control tasks. The first experimental chapter deals with learning in recurrent networks, and it is shown that CPGs may be easily implemented using the machinery of backpropagation. The use of these CPGs can aid the learning of pattern generation tasks; it can also mean that the other components in the system can be reduced in complexity, say, to a purely feedforward network. It is also shown that incremental learning, or 'shaping', is an effective method for building CPGs. Genetic algorithms are also used to build CPGs; although the computational effort prevents this from being a practical method, it does show that GAs are capable of optimising systems that operate in the context of a larger scheme. One interesting result from the GA is that optimal CPGs tend to have unstable dynamics, which may have implications for building modular neural controllers. The next chapter applies these ideas to some simple control tasks involving a highly redundant simulated robot arm. It is shown that it is relatively straightforward to build CPGs that represent elements of pattern generation, constraint satisfaction, and local feedback. This is indirect control, in which errors are backpropagated through a plant model, as well as the CPG itself, to give errors for the controller. Finally, the third experimental chapter takes an alternative approach and uses direct control methods such as reinforcement learning. In reinforcement learning, controller outputs have unmodelled effects; this allows us to build complex control systems in which outputs modulate the couplings between sets of dynamic systems. This is shown for a simple case involving a system of coupled oscillators. A second set of experiments investigates the use of simplified models of behaviour; this is a reduced form of supervised learning, and the use of such models in control is discussed.
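Since the abstract closes with CPGs built from coupled oscillators, a generic Matsuoka-style oscillator gives the flavour of such a component; the parameters are standard illustrative values and are not taken from the thesis:

```python
import numpy as np

def matsuoka_cpg(steps=3000, dt=0.01, tau=0.1, tau_a=1.2,
                 beta=2.5, w_mutual=2.5, drive=1.0):
    """Two mutually inhibiting leaky neurons with self-adaptation:
    a classic CPG that settles into a stable rhythm given only a
    constant (non-rhythmic) drive as input."""
    u = np.array([0.1, 0.0])           # membrane states (asymmetric start)
    a = np.zeros(2)                    # adaptation states
    out = []
    for _ in range(steps):
        y = np.maximum(u, 0.0)         # rectified firing rates
        du = (-u - beta * a - w_mutual * y[::-1] + drive) / tau
        da = (y - a) / tau_a
        u = u + dt * du
        a = a + dt * da
        out.append(y[0] - y[1])        # alternating, oscillatory output
    return np.array(out)

signal = matsuoka_cpg()                # e.g. drive a joint angle with this
```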
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Francis Xinghang. "Modeling human vision using feedforward neural networks." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/112824.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-86).
In this thesis, we discuss the implementation, characterization, and evaluation of a new computational model for human vision. Our goal is to understand the mechanisms enabling invariant perception under scaling, translation, and clutter. The model is based on I-Theory [50], and uses convolutional neural networks. We investigate the explanatory power of this approach using the task of object recognition. We find that the model has important similarities with neural architectures and that it can reproduce human perceptual phenomena. This work may be an early step towards a more general and unified human vision model.
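The I-Theory computation the model builds on can be caricatured as follows: project the input onto stored transformed copies of templates, then pool, yielding a signature invariant to those transformations. The templates, normalization, and histogram binning here are assumptions for illustration only:

```python
import numpy as np

def invariant_signature(x, template_orbits, n_bins=10):
    """I-Theory-style signature: dot products of a (normalized) input
    with every transformed copy of each template, pooled by a histogram.
    The histogram is unchanged if x undergoes one of the stored
    transformations, which is the source of the invariance."""
    sig = []
    for orbit in template_orbits:              # orbit: (n_transforms, dim)
        dots = orbit @ x
        hist, _ = np.histogram(dots, bins=n_bins, range=(-1.0, 1.0))
        sig.append(hist / len(dots))
    return np.concatenate(sig)
```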
by Francis Xinghang Chen.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
47

Dernoncourt, Franck. "Sequential short-text classification with neural networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111880.

Full text of the source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 69-79).
Medical practice too often fails to incorporate recent medical advances. The two main reasons are that over 25 million scholarly medical articles have been published and that medical practitioners do not have the time to perform literature reviews. Systematic reviews aim at summarizing published medical evidence, but writing them requires tremendous human effort. In this thesis, we propose several natural language processing methods based on artificial neural networks to facilitate the completion of systematic reviews. In particular, we focus on short-text classification, to help authors of systematic reviews locate the desired information. We introduce several algorithms to perform sequential short-text classification, which outperform state-of-the-art algorithms. To facilitate the choice of hyperparameters, we present a method based on Gaussian processes. Lastly, we release PubMed 20k RCT, a new dataset for sequential sentence classification in randomized controlled trial abstracts.
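The "sequential" part of sequential short-text classification can be pictured as follows; `score_fn` stands in for any trained sentence scorer, and both it and the label set are placeholders rather than the thesis's models:

```python
import numpy as np

LABELS = ["BACKGROUND", "METHODS", "RESULTS", "CONCLUSIONS"]

def classify_abstract(sentence_vecs, score_fn):
    """Label each sentence from its own feature vector plus a one-hot of
    the previous predicted label, so that label transitions
    (e.g. METHODS -> RESULTS) inform each decision."""
    preds = []
    prev = np.zeros(len(LABELS))
    for v in sentence_vecs:
        scores = score_fn(np.concatenate([v, prev]))  # (n_labels,) scores
        k = int(np.argmax(scores))
        preds.append(LABELS[k])
        prev = np.eye(len(LABELS))[k]
    return preds
```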
by Franck Dernoncourt.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Jeffrey M. Eng Massachusetts Institute of Technology. "Enhancing adversarial robustness of deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122994.

Full text of the source
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 57-58).
Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance the adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) proposed TRADES, a logit-based regularization objective that has been shown to improve upon the robust optimization framework developed by Madry et al. (2018) [14, 9]. They were able to achieve state-of-the-art adversarial accuracy on CIFAR10. In the realm of pretrain-then-tune models, Hendrycks et al. (2019) demonstrated that adversarially pretraining a model on ImageNet and then adversarially tuning it on CIFAR10 greatly improves the adversarial robustness of machine learning models. In this work, we propose Adversarial Regularization, another logit-based regularization framework, which surpasses TRADES in adversarial generalization. Furthermore, we explore the impact of different types of adversarial training on the pretrain-then-tune paradigm.
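For context, the TRADES objective this work competes with can be sketched in PyTorch roughly as below. This follows the published formulation (natural loss plus beta times a KL term, with the perturbation found by PGD on that KL), not the thesis's Adversarial Regularization, and the hyperparameters are common defaults rather than values from this work:

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, beta=6.0, eps=8/255, step=2/255, iters=10):
    """loss = CE(f(x), y) + beta * KL(f(x) || f(x_adv)), where x_adv
    maximizes the KL term within an L-infinity ball of radius eps."""
    model.eval()                                   # freeze BN stats for the attack
    p_clean = F.softmax(model(x), dim=1).detach()
    x_adv = x + 0.001 * torch.randn_like(x)
    for _ in range(iters):                         # PGD on the KL divergence
        x_adv = x_adv.detach().requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      p_clean, reduction="batchmean")
        grad, = torch.autograd.grad(kl, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    model.train()
    natural = F.cross_entropy(model(x), y)
    robust = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      F.softmax(model(x), dim=1), reduction="batchmean")
    return natural + beta * robust
```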
by Jeffrey Zhang.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
49

Miglani, Vivek N. "Comparing learned representations of deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123048.

Full text of the source
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 63-64).
In recent years, a variety of deep neural network architectures have obtained substantial accuracy improvements in tasks such as image classification, speech recognition, and machine translation, yet little is known about how different neural networks learn. To further understand this, we interpret the function of a deep neural network used for classification as converting inputs to a hidden representation in a high dimensional space and applying a linear classifier in this space. This work focuses on comparing these representations as well as the learned input features for different state-of-the-art convolutional neural network architectures. By focusing on the geometry of this representation, we find that different network architectures trained on the same task have hidden representations which are related by linear transformations. We find that retraining the same network architecture with a different initialization does not necessarily lead to more similar representation geometry for most architectures, but the ResNeXt architecture consistently learns similar features and hidden representation geometry. We also study connections to adversarial examples and observe that networks with more similar hidden representation geometries also exhibit higher rates of adversarial example transferability.
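One generic way to test whether two hidden representations are related by a linear transformation is to fit that map directly; this is a sketch of the idea under that assumption, not necessarily the metric used in the thesis:

```python
import numpy as np

def linear_fit_similarity(H_a, H_b):
    """Fit H_a @ W ~= H_b by least squares and report the fraction of
    H_b's variance explained; values near 1.0 mean the two hidden
    representations differ by little more than a linear map.

    H_a, H_b : (n_examples, dim_a) and (n_examples, dim_b) activations
    of two networks on the same inputs.
    """
    W, *_ = np.linalg.lstsq(H_a, H_b, rcond=None)
    resid = H_b - H_a @ W
    return 1.0 - (resid ** 2).sum() / ((H_b - H_b.mean(axis=0)) ** 2).sum()
```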
by Vivek N. Miglani.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
50

Trinh, Loc Quang. "Greedy layerwise training of convolutional neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123128.

Full text of the source
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 61-63).
Layerwise training presents an alternative approach to end-to-end back-propagation for training deep convolutional neural networks. Although previous work was unsuccessful in demonstrating the viability of layerwise training, especially on large-scale datasets such as ImageNet, recent work has shown that layerwise training on specific architectures can yield highly competitive performances. On ImageNet, the layerwise trained networks can perform comparably to many state-of-the-art end-to-end trained networks. In this thesis, we compare the performance gap between the two training procedures across a wide range of network architectures and further analyze the possible limitations of layerwise training. Our results show that layerwise training quickly saturates after a certain critical layer, due to the overfitting of early layers within the networks. We discuss several approaches we took to address this issue and help layerwise training improve across multiple architectures. From a fundamental standpoint, this study emphasizes the need to open the blackbox that is modern deep neural networks and investigate the layerwise interactions between intermediate hidden layers within deep networks, all through the lens of layerwise training.
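The greedy scheme itself can be summarized in a few lines; `make_head` and `train_block_fn` are placeholders for an auxiliary classifier constructor and a standard training loop, so this is a structural sketch rather than the thesis's code:

```python
import torch.nn as nn

def train_layerwise(blocks, make_head, train_block_fn, loader):
    """Greedy layerwise training: optimize block k under a small
    auxiliary classifier while all earlier blocks stay frozen, then
    discard the head and move on to block k+1."""
    frozen = nn.Sequential()                 # already-trained prefix
    for block in blocks:
        head = make_head(block)              # auxiliary classifier
        # train_block_fn is expected to update only parameters that
        # still have requires_grad=True (i.e. this block and its head).
        train_block_fn(nn.Sequential(frozen, block, head), loader)
        for p in block.parameters():         # freeze before moving on
            p.requires_grad_(False)
        frozen = nn.Sequential(*frozen, block)
    return frozen                            # the layerwise-trained network
```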
by Loc Quang Trinh.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles