Dissertations / Theses on the topic 'Neural network programming'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Neural network programming.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Howse, Samuel. "Dynamic programming problems, neural network solutions and economic applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0009/MQ60678.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Collins, Tamar L. "A methodology for engineering neural network systems." Thesis, University of Exeter, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284620.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sulaiman, Md Nasir. "The design of a neural network compiler." Thesis, Loughborough University, 1994. https://dspace.lboro.ac.uk/2134/25628.

Full text
Abstract:
Computer simulation is a flexible and economical way to carry out rapid prototyping and concept evaluation with Neural Network (NN) models. Growing research on NNs has led to the development of several simulation programs, which vary in scope: some support only a fixed network model, while others are more general. Designing simulation programs for general-purpose NN models has become common because of the flexibility and efficiency this offers. A programming language designed specifically for NN models is preferable, since existing high-level languages such as C suit NN designers with a strong computing background. Translation of NN languages is handled by an interpreter, a compiler, or a combination of the two, and the languages themselves follow various styles, such as procedural, functional, descriptive and object-oriented. The main focus of this thesis is to study the feasibility of using a compiler method to develop a general-purpose simulator, NEUCOMP, which compiles a program written as a list of mathematical specifications of a particular NN model and translates it into a chosen target program. The language supported by NEUCOMP follows a procedural style. The mathematical statements required by the NN model are written in the program as scalar, vector and matrix assignments, which NEUCOMP translates into actual program loops. NEUCOMP can compile a simulation program written in the NEUCOMP language for any NN model, provides graphical facilities such as portraying the NN architecture and plotting results during training, and produces programs that can run on a parallel shared-memory multiprocessor system.
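As a hedged illustration of the kind of translation the abstract describes, the sketch below (not NEUCOMP's actual output, and written in Python rather than a compiler target) shows a matrix-vector assignment such as y = W*x + b expanded into the explicit loops a compiler of this sort might generate.

# Hypothetical illustration only: a vector assignment expanded into loops.
def matvec_statement(W, x, b):
    """Loop expansion of y[i] = sum_j W[i][j] * x[j] + b[i]."""
    n_rows = len(W)
    n_cols = len(W[0])
    y = [0.0] * n_rows
    for i in range(n_rows):          # one loop per output element
        acc = b[i]
        for j in range(n_cols):      # inner loop over the summation index
            acc += W[i][j] * x[j]
        y[i] = acc
    return y

# Example: a 2x3 weight matrix applied to a 3-element input vector.
print(matvec_statement([[1, 0, 2], [0, 1, 1]], [0.5, 1.0, 2.0], [0.1, -0.1]))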
APA, Harvard, Vancouver, ISO, and other styles
4

Lukashev, A. "Basics of artificial neural networks (ANNs)." Thesis, Київський національний університет технологій та дизайну, 2018. https://er.knutd.edu.ua/handle/123456789/11353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sims, Pauline. "Turing's P-type machine and neural network hybrid systems." Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240712.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Haggett, Simon J. "Towards a multipurpose neural network approach to novelty detection." Thesis, University of Kent, 2008. https://kar.kent.ac.uk/24133/.

Full text
Abstract:
Novelty detection, the identification of data that is unusual or different in some way, is relevant in a wide number of real-world scenarios, ranging from identifying unusual weather conditions to detecting evidence of damage in mechanical systems. However, utilising novelty detection approaches in a particular scenario presents significant challenges to the non-expert user. They must first select an appropriate approach from the novelty detection literature for their scenario. Then, suitable values must be determined for any parameters of the chosen approach. These challenges are at best time consuming and at worst prohibitively difficult for the user. Worse still, if no suitable approach can be found from the literature, then the user is left with the impossible task of designing a novelty detector themselves. In order to make novelty detection more accessible, an approach is required which does not pose the above challenges. This thesis presents such an approach, which aims to automatically construct novelty detectors for specific applications. The approach combines a neural network model, recently proposed to explain a phenomenon observed in the neural pathways of the retina, with an evolutionary algorithm that is capable of simultaneously evolving the structure and weights of a neural network in order to optimise its performance in a particular task. The proposed approach was evaluated over a number of very different novelty detection tasks. It was found that, in each task, the approach successfully evolved novelty detectors which outperformed a number of existing techniques from the literature. A number of drawbacks with the approach were also identified, and suggestions were given on ways in which these may potentially be overcome.
APA, Harvard, Vancouver, ISO, and other styles
7

Heaton, Jeff T. "Automated Feature Engineering for Deep Neural Networks with Genetic Programming." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/994.

Full text
Abstract:
Feature engineering is a process that augments the feature vector of a machine learning model with calculated values that are designed to enhance the accuracy of a model’s predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefit from feature engineering. Expressions that combine one or more of the original features usually create these engineered features. The choice of the exact structure of an engineered feature is dependent on the type of machine learning model in use. Previous research demonstrated that various model families benefit from different types of engineered feature. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product based models to achieve on the same data set. This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. This dissertation algorithm faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm’s engineered features.
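A minimal, hypothetical sketch of the idea described above: an "engineered feature" expressed as a small arithmetic combination of the original features, of the kind a genetic program might propose, appended to the feature matrix. The expression and data are invented for illustration and are not taken from the dissertation.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(100, 3))     # original feature vector (3 features)

# One candidate expression a genetic program might evolve: f_new = x0 / (x1 + x2)
def engineered_feature(X):
    return X[:, 0] / (X[:, 1] + X[:, 2])

X_augmented = np.column_stack([X, engineered_feature(X)])
print(X.shape, "->", X_augmented.shape)       # (100, 3) -> (100, 4)

In the dissertation the fitness of such a candidate is judged by how much it improves a deep neural network trained on the augmented data; here only the augmentation step is shown.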
APA, Harvard, Vancouver, ISO, and other styles
8

Gueddar, T. "Neural network and multi-parametric programming based approximation techniques for process optimisation." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1432138/.

Full text
Abstract:
In this thesis two approximation techniques are proposed: Artificial Neural Networks (ANN) and Multi-Parametric Programming. The usefulness of these techniques is demonstrated through process optimisation case studies. The oil refining industry mainly uses Linear Programming (LP) for refinery optimisation and planning purposes on a daily basis. LPs are attractive from a computational-time point of view; however, they have limitations, such as not taking the nonlinearity of refinery processes into account. The main aim of this work is to develop approximate models for refinery optimisation that replace the rigorous ones, providing good accuracy without compromising computational time. The data for deriving the approximate models is generated from rigorous process models in a commercial software package that is extensively used in the refining industry. In this work we present three model reduction techniques. The first approach is based on deriving an optimal configuration of artificial neural networks (ANN) for approximating the refinery models. The basic idea is to formulate the existence or absence of nodes and interconnections in the network using binary variables. This results in a Mixed Integer Nonlinear Programming formulation for Artificial Neural Networks (MIPANN). The second approach deals with the complexity associated with the large amounts of data usually available in refineries; a disaggregation-aggregation based approach is presented to address it. The data is split (disaggregation) into smaller subsets and reduced ANN models are obtained for each subset. These ANN models are then combined (aggregation) to obtain an ANN model that represents the whole of the original data. The disaggregation step can be carried out on a parallel computing platform. The third approach combines the MIPANN and disaggregation-aggregation reduction methods to handle medium- and large-scale training data using a neural network that has already been reduced through optimisation of its nodes and interconnections. Refinery optimisation studies are carried out to demonstrate the applicability and usefulness of the proposed model reduction approaches. Process synthesis and MIPANN problems are usually formulated as Mixed Integer Nonlinear Programming (MINLP) problems requiring efficient algorithms for their solution. An approximate multi-parametric programming Branch and Bound (mpBB) algorithm is proposed. Approximate parametric solutions at the root node and other fractional nodes of the Branch and Bound (BB) tree are obtained and used to estimate the solution at the terminal nodes in different sections of the tree. These estimates are then used to guide the search in the BB tree, resulting in fewer nodes being evaluated and a reduction in computational effort. Problems from the literature are solved using the proposed algorithm and compared with other currently available algorithms for solving MINLP problems.
APA, Harvard, Vancouver, ISO, and other styles
9

Myers, Catherine E. "Learning with delayed reinforcement in an exploratory probabilistic logic neural network." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lategano, Antonio. "Image-based programming language recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/22208/.

Full text
Abstract:
In this thesis the problem of programming language classification through image-based approaches is addressed for the first time. We used several Convolutional Neural Networks pre-trained on image classification tasks, adapting them to the classification of images containing portions of source code written in 149 different programming languages. Our results showed that such models are able to learn, with good performance, the lexical features present in the text. By adding noise, through modification of the characters present in the images, we were able to understand which characters best allowed the models to discriminate between one class and another. The result, confirmed through visualization techniques such as Class Activation Mapping, is that the network learns low-level lexical features, concentrating in particular on the symbols typical of each programming language (such as punctuation and brackets) rather than on alphanumeric characters.
APA, Harvard, Vancouver, ISO, and other styles
11

Visaggi, Salvatore. "Multimodal Side-Tuning for Code Snippets Programming Language Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22993/.

Full text
Abstract:
Automatically identifying the programming language of a portion of source code is a problem that still presents several difficulties. The number of programming languages, the amount of code published as open source, and the number of developers producing and publishing new source code are constantly growing. There are many reasons why tools able to recognise the language of a source-code snippet are needed; for example, such tools are useful in areas such as source-code search, searching for possible vulnerabilities in code, syntax highlighting, or simply understanding the contents of software projects. Hence the need for datasets of code snippets properly aligned with their programming language. StackOverflow, a knowledge-sharing platform for developers, gives access to hundreds of thousands of source-code snippets written in the languages most used by developers, making it the ideal place from which to extract snippets for the proposed task. In this work much attention was devoted to this issue, iterating on the chosen approach in order to obtain a methodology that allowed the extraction of an adequate dataset. To solve the language-identification task for the snippets extracted from StackOverflow, this work uses a multimodal approach (considering both textual and image representations of the snippets), examining the novel side-tuning technique (based on the incremental adaptation of a pre-trained neural network). The results obtained are comparable with the state of the art, and in some cases better, considering the difficulty of the task for source-code snippets consisting of only a few lines of code.
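A minimal numpy sketch of the side-tuning idea mentioned above: a frozen pre-trained "base" network is blended with a small trainable "side" network through a weight alpha. Shapes, weights and the blending value are illustrative assumptions, not taken from the thesis.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))            # a batch of 4 inputs with 16 features

W_base = rng.normal(size=(16, 8))       # frozen: stands in for pre-trained weights
W_side = np.zeros((16, 8))              # trainable: typically initialised small or zero
alpha = 0.5                             # blending factor (can itself be learned)

def forward(x):
    base_out = np.tanh(x @ W_base)      # frozen branch (never updated)
    side_out = np.tanh(x @ W_side)      # side branch (updated during training)
    return alpha * base_out + (1.0 - alpha) * side_out

print(forward(x).shape)                 # (4, 8)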
APA, Harvard, Vancouver, ISO, and other styles
12

Ferroni, Nicola. "Exact Combinatorial Optimization with Graph Convolutional Neural Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17502/.

Full text
Abstract:
Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm. We propose to learn a variable selection policy for branch-and-bound in mixed-integer linear programming, by imitation learning on a diversified variant of the strong branching expert rule. We encode states as bipartite graphs and parameterize the policy as a graph convolutional neural network. Experiments on a series of synthetic problems demonstrate that our approach produces policies that can improve upon expert-designed branching rules on large problems, and generalize to instances significantly larger than seen during training.
APA, Harvard, Vancouver, ISO, and other styles
13

Conti, Matteo. "Machine Learning Based Programming Language Identification." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20875/.

Full text
Abstract:
The advent of the digital era has contributed to the development of new technological sectors which, as a direct consequence, have created demand for new professionals capable of playing a key role in the process of technological innovation. This growing demand has particularly affected the software development sector, following the birth of new programming languages and new fields in which to apply them. The main component of a piece of software is its source code, which can be represented as an archive of one or more text files containing a series of instructions written in one or more programming languages. Although many of these languages are used in different technological sectors, it often happens that two or more of them share a very similar syntactic and semantic structure. Clearly, this can generate confusion when identifying the language of a code fragment, especially if the file extension is not specified either. Indeed, today most of the code available online carries manually specified information about its programming language. In this thesis we focus on demonstrating that the programming language of a 'generic' source-code file can be identified automatically using Machine Learning algorithms, without any a priori assumption about the file extension or any information beyond the content of the file. This project follows the line set by previous research based on the same approach, comparing different feature-extraction techniques and classification algorithms with very different characteristics, and seeking to optimise the feature-extraction phase according to the model under consideration.
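A hypothetical sketch of content-based language identification in the spirit of the abstract above: character-bigram frequencies as features and a nearest-centroid rule as the classifier. The two training snippets are invented; real systems use far larger corpora and stronger models.

from collections import Counter
import math

def bigram_profile(code):
    counts = Counter(code[i:i + 2] for i in range(len(code) - 1))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def similarity(p, q):
    # cosine similarity between two sparse bigram profiles
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

training = {
    "python": "def add(a, b):\n    return a + b\n",
    "c":      "int add(int a, int b) { return a + b; }\n",
}
centroids = {lang: bigram_profile(src) for lang, src in training.items()}

snippet = "def mul(x, y):\n    return x * y\n"
prediction = max(centroids, key=lambda lang: similarity(bigram_profile(snippet), centroids[lang]))
print(prediction)   # expected: python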
APA, Harvard, Vancouver, ISO, and other styles
14

Wilgenbus, Erich Feodor. "The file fragment classification problem : a combined neural network and linear programming discriminant model approach / Erich Feodor Wilgenbus." Thesis, North-West University, 2013. http://hdl.handle.net/10394/10215.

Full text
Abstract:
The increased use of digital media to store legal as well as illegal data has created the need for specialized tools that can monitor, control and even recover this data. An important task in computer forensics and security is to identify the true file type to which a computer file or computer file fragment belongs. File type identification is traditionally done by means of metadata, such as file extensions and file header and footer signatures. As a result, traditional metadata-based file object type identification techniques work well in cases where the required metadata is available and unaltered. However, traditional approaches are not reliable when the integrity of metadata is not guaranteed or metadata is unavailable. As an alternative, any pattern in the content of a file object can be used to determine the associated file type. This is called content-based file object type identification. Supervised learning techniques can be used to infer a file object type classifier by exploiting some unique pattern that underlies a file type's common file structure. This study builds on existing literature regarding the use of supervised learning techniques for content-based file object type identification, and explores the combined use of multilayer perceptron neural network classifiers and linear programming-based discriminant classifiers as a solution to the multiple class file fragment type identification problem. The purpose of this study was to investigate and compare the use of a single multilayer perceptron neural network classifier, a single linear programming-based discriminant classifier and a combined ensemble of these classifiers in the field of file type identification. The ability of each individual classifier and the ensemble of these classifiers to accurately predict the file type to which a file fragment belongs was tested empirically. The study found that both a multilayer perceptron neural network and a linear programming-based discriminant classifier (used in a round robin) seemed to perform well in solving the multiple class file fragment type identification problem. The results of combining multilayer perceptron neural network classifiers and linear programming-based discriminant classifiers in an ensemble were not better than those of the single optimized classifiers.
MSc (Computer Science), North-West University, Potchefstroom Campus, 2013
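A minimal, hypothetical sketch of the content-based features discussed above: a normalised byte-value histogram per fragment. The thesis uses multilayer perceptron and linear-programming discriminant classifiers; here a simple nearest-centroid rule stands in for those, and the "fragments" are invented.

import numpy as np

def byte_histogram(fragment: bytes) -> np.ndarray:
    hist = np.bincount(np.frombuffer(fragment, dtype=np.uint8), minlength=256)
    return hist / max(len(fragment), 1)

# Toy fragments (real experiments use fragments cut from actual files).
text_like = b"The quick brown fox jumps over the lazy dog. " * 20
binary_like = bytes(range(256)) * 4

centroids = {"text": byte_histogram(text_like), "binary": byte_histogram(binary_like)}

query = b"Lorem ipsum dolor sit amet, consectetur adipiscing elit. " * 10
features = byte_histogram(query)
label = min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))
print(label)   # expected: text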
APA, Harvard, Vancouver, ISO, and other styles
15

Hanselmann, Thomas. "Approximate dynamic programming with adaptive critics and the algebraic perceptron as a fast neural network related to support vector machines." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2004.0005.

Full text
Abstract:
[Truncated abstract. Please see the pdf version for the complete text. Also, formulae and special characters can only be approximated here. Please see the pdf version of this abstract for an accurate reproduction.] This thesis treats two aspects of intelligent control: The first part is about long-term optimization by approximating dynamic programming and in the second part a specific class of a fast neural network, related to support vector machines (SVMs), is considered. The first part relates to approximate dynamic programming, especially in the framework of adaptive critic designs (ACDs). Dynamic programming can be used to find an optimal decision or control policy over a long-term period. However, in practice it is difficult, and often impossible, to calculate a dynamic programming solution, due to the 'curse of dimensionality'. The adaptive critic design framework addresses this issue and tries to find a good solution by approximating the dynamic programming process for a stationary environment. In an adaptive critic design there are three modules, the plant or environment to be controlled, a critic to estimate the long-term cost and an action or controller module to produce the decision or control strategy. Even though there have been many publications on the subject over the past two decades, there are some points that have had less attention. While most of the publications address the training of the critic, one of the points that has not received systematic attention is training of the action module.¹ Normally, training starts with an arbitrary, hopefully stable, decision policy and its long-term cost is then estimated by the critic. Often the critic is a neural network that has to be trained, using a temporal difference and Bellman's principle of optimality. Once the critic network has converged, a policy improvement step is carried out by gradient descent to adjust the parameters of the controller network. Then the critic is retrained again to give the new long-term cost estimate. However, it would be preferable to focus more on extremal policies earlier in the training. Therefore, the Calculus of Variations is investigated to discard the idea of using the Euler equations to train the actor. However, an adaptive critic formulation for a continuous plant with a short-term cost as an integral cost density is made and the chain rule is applied to calculate the total derivative of the short-term cost with respect to the actor weights. This is different from the discrete systems, usually used in adaptive critics, which are used in conjunction with total ordered derivatives. This idea is then extended to second order derivatives such that Newton's method can be applied to speed up convergence. Based on this, an almost concurrent actor and critic training was proposed. The equations are developed for any non-linear system and short-term cost density function and these were tested on a linear quadratic regulator (LQR) setup. With this approach the solution to the actor and critic weights can be achieved in only a few actor-critic training cycles. 
Some other, more minor issues in the adaptive critic framework are also investigated: the influence of the discounting factor in the Bellman equation on total ordered derivatives; the interpretation of targets in backpropagation through time as moving or fixed targets; the relation between simultaneous recurrent networks and dynamic programming; and a reinterpretation of the recurrent generalized multilayer perceptron (GMLP) as a recurrent generalized finite impulse MLP (GFIR-MLP). Another subject investigated in this area is that of a hybrid dynamical system, characterized as a continuous plant and a set of basic feedback controllers, which are used to control the plant by finding a switching sequence that selects one basic controller at a time. The special but important case is considered in which the plant is linear, but with some uncertainty in the state space and in the observation vector, and the cost function is quadratic. This is a form of robust control, where a dynamic programming solution has to be calculated. ¹Werbos comments that most treatments of action nets or policies assume either enumerative maximization, which is good only for small problems (except for the games of Backgammon or Go [1]), or gradient-based training. The latter is prone to difficulties with local minima due to the non-convex nature of the cost-to-go function. With incremental methods, such as backpropagation through time, calculus of variations and model-predictive control, the danger of non-convexity of the cost-to-go function with respect to the control is much smaller than with respect to the critic parameters, when the sampling times are small. Therefore, getting the critic right has priority. But with larger sampling times, when the control represents a more complex plan, non-convexity becomes more serious.
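The LQR setup mentioned in the abstract has an exact dynamic-programming benchmark: iterating the discrete-time Riccati equation yields the quadratic cost-to-go x'Px that an adaptive critic approximates. A hedged numpy sketch with an invented two-state plant (not the thesis's example):

import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative linear plant x_{k+1} = A x_k + B u_k
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                            # state cost
R = np.array([[1.0]])                    # control cost

P = np.zeros((2, 2))
for _ in range(500):                     # value iteration on the Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

print("critic target P:\n", P)
print("optimal feedback gain K:", K)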
APA, Harvard, Vancouver, ISO, and other styles
16

Cheng, Chao. "Application of Artificial Neural Networks in the Power Split Controller For a Series Hydraulic Hybrid Vehicle." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1278610645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Hytychová, Tereza. "Evoluční návrh neuronových sítí využívající generativní kódování." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445478.

Full text
Abstract:
The aim of this work is to design and implement a method for the evolutionary design of neural networks with generative encoding. The proposed method is based on J. F. Miller's approach and uses a brain model that is developed gradually and from which traditional neural networks can be extracted. The development of the brain is controlled by programs created using Cartesian genetic programming. The project was implemented in Python using the NumPy library. Experiments have shown that the proposed method is able to construct neural networks that achieve over 90% accuracy on smaller datasets. The method is also able to develop neural networks capable of solving multiple problems at once, at the cost of a slight reduction in accuracy.
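A minimal, hypothetical sketch of Cartesian genetic programming, the program representation used above to control brain development: a genome is a list of nodes (function id, input index, input index) evaluated in a single feed-forward pass. The function set and genome here are invented for illustration.

import operator

FUNCTIONS = [operator.add, operator.sub, operator.mul]

def evaluate(genome, inputs, output_node):
    """values[0..n_inputs-1] hold the program inputs; each node appends one value."""
    values = list(inputs)
    for func_id, a, b in genome:                 # a and b may only index earlier values
        values.append(FUNCTIONS[func_id](values[a], values[b]))
    return values[output_node]

# Genome encoding (x0 + x1) * x0 with two inputs:
genome = [(0, 0, 1),        # node 2: x0 + x1
          (2, 2, 0)]        # node 3: node2 * x0
print(evaluate(genome, [2.0, 3.0], output_node=3))   # 10.0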
APA, Harvard, Vancouver, ISO, and other styles
18

Moles, Joshua Stephen. "Chemical Reaction Network Control Systems for Agent-Based Foraging Tasks." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2203.

Full text
Abstract:
Chemical reaction networks are an unconventional computing medium that could benefit from the ability to form basic control systems. In this work, we demonstrate the functionality of a chemical control system by evaluating classic genetic algorithm problems: Koza's Santa Fe trail, Jefferson's John Muir trail, and three Santa Fe trail segments. Both Jefferson and Koza found that memory, such as a recurrent neural network or memories in a genetic program, is required to solve the task. Our approach presents the first instance of a chemical system acting as a control system. We propose a delay line connected with an artificial neural network in a chemical reaction network to determine the artificial ant's moves. We first search for the minimal required delay line size connected to a feed-forward neural network in a chemical system. Our experiments show a delay line of length four is sufficient. Next, we use these findings to implement a chemical reaction network with a length-four delay line and an artificial neural network, and use genetic algorithms to find an optimal set of weights for the artificial neural network. This chemical system is capable of consuming 100% of the food on a subset and greater than 44% of the food on Koza's Santa Fe trail. We also show the first implementation of a simulated chemical memory, in two different models, that can reliably capture and store information over time. The ability to store data over time gives rise to basic control systems that can perform more complex tasks. The integration of a memory storage unit and a control system in a chemistry has applications in biomedicine, such as smart drug delivery. We show that we can successfully store the information over time and use it to act as a memory for a control system navigating an agent through a maze.
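A hedged numpy sketch of the controller structure described above: a length-four delay line of recent "food ahead" sensor readings feeds a small feed-forward network whose output selects the ant's next move. The weights are random placeholders; in the thesis they are found by a genetic algorithm and the whole pipeline runs as chemical reactions rather than Python.

import numpy as np
from collections import deque

rng = np.random.default_rng(0)
MOVES = ["forward", "left", "right"]

W1 = rng.normal(size=(4, 6))            # delay line (4 inputs) -> 6 hidden units
W2 = rng.normal(size=(6, 3))            # hidden -> 3 move scores

delay_line = deque([0.0] * 4, maxlen=4) # holds the last 4 sensor readings

def step(food_ahead: float) -> str:
    delay_line.append(food_ahead)       # the newest reading pushes out the oldest
    x = np.array(delay_line)
    hidden = np.tanh(x @ W1)
    scores = hidden @ W2
    return MOVES[int(np.argmax(scores))]

for reading in [1.0, 0.0, 0.0, 1.0, 1.0]:
    print(step(reading))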
APA, Harvard, Vancouver, ISO, and other styles
19

Svobodová, Jitka. "Neuronové sítě a evoluční algoritmy." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218221.

Full text
Abstract:
The objective of this master's thesis is to optimise neural network topology using evolutionary algorithms. A back-propagation neural network was optimised using genetic algorithms, evolutionary programming and evolutionary strategies. The text includes an application in the Matlab environment that applies these methods to simple tasks such as pattern recognition and function prediction. Graphs of the fitness and error functions produced by the application are included as results of this thesis.
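A minimal, hypothetical sketch of the topology optimisation described above: a genetic algorithm searching over hidden-layer sizes. The fitness function here is a synthetic stand-in so the sketch runs; in the thesis it would be the trained network's error on the task (and the thesis uses Matlab rather than Python).

import random

random.seed(0)

def fitness(hidden_sizes):
    # Placeholder objective standing in for "train the network and measure error".
    return -(abs(sum(hidden_sizes) - 20) + len(hidden_sizes))

def mutate(individual):
    child = individual[:]
    if random.random() < 0.2 and len(child) < 3:
        child.append(random.randint(1, 10))               # occasionally add a hidden layer
    else:
        i = random.randrange(len(child))
        child[i] = max(1, child[i] + random.choice([-2, -1, 1, 2]))
    return child

population = [[random.randint(1, 30)] for _ in range(10)]  # start with single-hidden-layer nets
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                               # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

print("best topology found:", max(population, key=fitness))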
APA, Harvard, Vancouver, ISO, and other styles
20

Campos, Jose Roberto. "Desenvolvimento de um sistema dinâmico para predição de cargas elétricas por redes neurais através do paradigma de programação orientada a objeto sob a linguagem JAVA /." Ilha Solteira : [s.n.], 2010. http://hdl.handle.net/11449/87104.

Full text
Abstract:
Advisor: Anna Diva Plasencia Lotufo
Committee: Maria do Carmo Gomes da Silveira
Committee: Gelson da Cruz Junior
Resumo: A previsão de carga, considerada essencial no planejamento da operação energética e nos estudos de ampliação e reforços da rede básica, assume importância estratégica na extensão comercial, valorizando os processos de armazenamento desses dados e da extração de conhecimentos através de técnicas computacionais. Nos últimos anos, diversos trabalhos foram publicados sobre sistemas de previsão de cargas (demanda) elétricas. Nos horizontes de curto, médio e longo prazo, os modelos neurais, estão entre os mais explorados. O objetivo deste trabalho é apresentar um sistema previsor de cargas elétricas de forma simples e eficiente através de sistemas baseados em redes neurais artificiais com treinamento realizado pelo algoritmo back-propagation. Para isto, optou-se pelo desenvolvimento de um software utilizando os paradigmas de programação orientada a objetos para criar um modelo neural de fácil manipulação, e que de certa forma, consiga corrigir o problema dos mínimos locais. Em geral, o sistema desenvolvido é capaz de atribuir os parâmetros da rede neural de forma automática através de processos exaustivos. Os resultados apresentados foram comparados utilizando outros trabalhos em que também se usaram-se os dados da mesma companhia elétrica. Este trabalho apresentou um ganho de desempenho bem satisfatório em relação a outros trabalhos encontrados na literatura para a mesma classe de problemas
Abstract: Load Forecasting is essential in planning and operation of power systems, in enlarging and reinforcing the basic network, is also very important commercially, valorizing the filing process of these data and extracting knowledge by computational techniques. Lately, several works have been published about electrical load forecasting. Short term, medium term and long term horizons are equally studied. The objective of this work is to present an electrical load forecasting system, which is simple and efficient and based on artificial neural networks whose training is with the back-propagation algorithm. Therefore, a software is developed using the paradigms of the object oriented programming technique to create a neural model which is ease to manipulate, and able to correct the local minimum problem. This system attributes the neural parameters automatically by exhaustive procedures. Results are compared with other works that have used the same data and this work presents a satisfactory performance when compared with those and others found in the literature
Master's degree
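A hedged numpy sketch of the kind of back-propagation training loop described in the abstract above, fitted to a synthetic daily load curve (all numbers invented); the thesis itself wraps such training in an object-oriented Java system with automatic parameter selection.

import numpy as np

rng = np.random.default_rng(0)
hours = np.linspace(0, 1, 24).reshape(-1, 1)
load = 0.5 + 0.4 * np.sin(2 * np.pi * hours)      # synthetic "load" series to forecast

W1, b1 = rng.normal(scale=0.5, size=(1, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.2

for epoch in range(5000):
    h = np.tanh(hours @ W1 + b1)                  # forward pass
    pred = h @ W2 + b2
    err = pred - load
    # backward pass: gradients of the mean squared error
    grad_W2 = h.T @ err / len(hours)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = hours.T @ dh / len(hours)
    grad_b1 = dh.mean(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final MSE:", float((err ** 2).mean()))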
APA, Harvard, Vancouver, ISO, and other styles
21

Maragno, Donato. "Optimization with machine learning-based modeling: an application to humanitarian food aid." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21621/.

Full text
Abstract:
In this thesis, we propose a machine learning-based optimization methodology to build (part of) optimization models with a data-driven approach. This approach is useful whenever we have to model one or more relations between the decisions and their impact on the system. This kind of relationship can be challenging to model manually, and so machine learning is used to learn it through the use of data. We demonstrate the potential of this method through a case study in which a predictive model is used to approximate the palatability scoring function in a typical diet problem formulation. First, the performance of this approach is analyzed by embedding a Linear Regression model and then by embedding a Fully Connected Neural Network.
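A hedged sketch of the approach described above: a linear model learned from data (weights w and intercept b, here invented numbers) approximates palatability and is embedded as a constraint in a small diet LP. It assumes scipy is available; the thesis also embeds a neural network, which is not shown here.

import numpy as np
from scipy.optimize import linprog

cost = np.array([0.30, 0.80, 0.50])            # cost per unit of 3 foods
nutrients = np.array([[2.0, 6.0, 3.0]])        # protein per unit of each food
min_nutrients = np.array([20.0])               # required protein

w = np.array([0.10, 0.40, 0.25])               # learned palatability weights (illustrative)
b = 0.5                                        # learned intercept
min_palatability = 2.0

# linprog minimises c @ x subject to A_ub @ x <= b_ub, so >= constraints are negated.
A_ub = np.vstack([-nutrients, -w.reshape(1, -1)])
b_ub = np.concatenate([-min_nutrients, [b - min_palatability]])

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(result.x, result.fun)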
APA, Harvard, Vancouver, ISO, and other styles
22

Guazzelli, Cauê Sauter. "Modelos e métodos para estudos de configuração de redes logísticas." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3138/tde-13072018-112347/.

Full text
Abstract:
Este trabalho trata do problema de configuração de redes logísticas, em que são consideradas como principais decisões a quantidade e a localização de instalações logísticas e a definição da alocação de clientes às instalações. Mais especificamente, o trabalho considera um processo típico de configuração de redes logísticas que se vale de modelos discretos de otimização e a tomada de decisão com base nos resultados. O objetivo da tese é propor modelos e métodos capazes de dar suporte às etapas fundamentais deste tipo de estudo. Inicialmente são propostos métodos para a seleção de locais candidatos considerados nos modelos de localização. Os métodos se valem de informações sobre a distribuição dos pontos de demanda ao longo da rede para a obtenção dos candidatos a instalação e são avaliados por meio de sua aplicação a dois conjuntos de instâncias da literatura científica e comparação de tempos de resolução e de valores da função objetivo. Os resultados mostram que o tempo de resolução foi reduzido, na média, em 57% e os gaps das funções objetivo resultantes vale menos que 0,16% em comparação com os modelos que consideram todos os pontos de demanda como candidatos. Adicionalmente, também foram propostos métodos capazes de obter soluções alternativas de qualidade para problemas de localização que podem ser comparadas a fim de fornecer mais subsídio para a tomada de decisão. Os métodos são capazes de obter as K melhores soluções de problemas de localização e são avaliados por meio de sua aplicação a 215 instâncias da literatura científica. Além disso, a abordagem proposta permitiu a análise de resultados nunca antes obtidos para um problema muito estudado: as K melhores soluções do problema de localização de instalações capacitadas com custo fixo. Duas características principais foram identificadas: a quantidade de instalações é estável - em 99% das instâncias testadas o desvio padrão da quantidade de instalações nas 20 melhores soluções de cada instância é menor que um - e grande parte das instalações que fazem parte da solução ótima de cada instância também faz parte da maior parte das 20 melhores soluções. A partir de tais conclusões, o trabalho investiga algumas propriedades gerais de problemas de localização e apresenta uma análise topológica das 215 instâncias utilizadas, com base em indicadores propostos. Por fim, três tipos de modelos de redes neurais capazes de identificar relações entre os valores dos indicadores das instâncias e os valores das variáveis resposta associadas às melhores soluções são aplicados e avaliados. A abordagem consiste em comparar o tempo de resolução e o valor da função objetivo de modelos cujos espaços de soluções viáveis são reduzidos com base nos resultados obtidos pelas redes neurais. Os resultados mostram que é possível utilizar tal abordagem para melhorar o processo de configuração de redes logísticas, seja na etapa de construção dos modelos seja proporcionando mais subsídios para a tomada de decisão.
This thesis deals with the supply chain network design problem (SCND) that aims to find the optimal location of facilities and the allocation of customers to each facility. The work considers a typical process of SCND in which discrete optimization models are run and their results are used in the decision making. The goal of the thesis is to propose models and methods to support the stages of this type of planning process. Initially, methods for the selection of candidates considered in the localization models are proposed. The methods consider the distribution of the demand points throughout the network to obtain the candidates and are evaluated by their application to two sets of scientific literature instances and comparison of computational times and objective function values. The results show that the average computational time has been reduced by 57% and the resulting objective function gaps are less than 0.16% compared to the solutions obtained by the models that consider all the demand points as candidates. In addition, the thesis presents methods capable of obtaining high-quality alternative solutions to location problems that can be compared in order to provide better support for decision making. The methods obtain the K-best solutions of location problems and are evaluated by their application to 215 instances of the scientific literature. In addition, the proposed approach allowed the analysis of results never before obtained for a well-studied problem: the best solutions of the capacitated fixed cost facility location problem. Two main insights were identified: the number of facilities is stable - in 99% of the tested instances the standard deviation of the number of facilities in the 20 best solutions of each instance is less than one - and most of the selected facilities in the optimal solution of each instance are selected in most of the 20 best solutions as well. Based on these conclusions, the work investigates some general properties of localization problems and presents a topological analysis of the 215 instances, based on proposed indicators. Finally, three types of neural network models capable of identifying relations between the instances' indicators and the values of the variables of the best solutions are applied and evaluated. The approach consists of comparing the computational time and the objective function value of models whose feasible solution spaces are reduced based on the results obtained by the neural networks. The results show that it is possible to use such an approach to improve the SCND process, either at the construction stage of the models or by providing more information for the decision making.
APA, Harvard, Vancouver, ISO, and other styles
23

Campos, Jose Roberto [UNESP]. "Desenvolvimento de um sistema dinâmico para predição de cargas elétricas por redes neurais através do paradigma de programação orientada a objeto sob a linguagem JAVA." Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/87104.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
A previsão de carga, considerada essencial no planejamento da operação energética e nos estudos de ampliação e reforços da rede básica, assume importância estratégica na extensão comercial, valorizando os processos de armazenamento desses dados e da extração de conhecimentos através de técnicas computacionais. Nos últimos anos, diversos trabalhos foram publicados sobre sistemas de previsão de cargas (demanda) elétricas. Nos horizontes de curto, médio e longo prazo, os modelos neurais, estão entre os mais explorados. O objetivo deste trabalho é apresentar um sistema previsor de cargas elétricas de forma simples e eficiente através de sistemas baseados em redes neurais artificiais com treinamento realizado pelo algoritmo back-propagation. Para isto, optou-se pelo desenvolvimento de um software utilizando os paradigmas de programação orientada a objetos para criar um modelo neural de fácil manipulação, e que de certa forma, consiga corrigir o problema dos mínimos locais. Em geral, o sistema desenvolvido é capaz de atribuir os parâmetros da rede neural de forma automática através de processos exaustivos. Os resultados apresentados foram comparados utilizando outros trabalhos em que também se usaram-se os dados da mesma companhia elétrica. Este trabalho apresentou um ganho de desempenho bem satisfatório em relação a outros trabalhos encontrados na literatura para a mesma classe de problemas
Load Forecasting is essential in planning and operation of power systems, in enlarging and reinforcing the basic network, is also very important commercially, valorizing the filing process of these data and extracting knowledge by computational techniques. Lately, several works have been published about electrical load forecasting. Short term, medium term and long term horizons are equally studied. The objective of this work is to present an electrical load forecasting system, which is simple and efficient and based on artificial neural networks whose training is with the back-propagation algorithm. Therefore, a software is developed using the paradigms of the object oriented programming technique to create a neural model which is ease to manipulate, and able to correct the local minimum problem. This system attributes the neural parameters automatically by exhaustive procedures. Results are compared with other works that have used the same data and this work presents a satisfactory performance when compared with those and others found in the literature
APA, Harvard, Vancouver, ISO, and other styles
24

Filho, Edson Costa de Barros Carvalho. "Investigation of Boolean neural networks on a novel goal-seeking neuron." Thesis, University of Kent, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Roddier, Nicolas. "Global optimization via neural networks and D.C. programming." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186617.

Full text
Abstract:
The ultimate goal of this work is to provide a general global optimization method. Due to the difficulty of the problem, the complete task is divided into several sections. These sections can be collected into a modeling phase followed by a global minimization phase. Each of the various sections draws from different engineering fields. What this work suggests is an interface and common ground between these fields. The modeling phase of the procedure consists of converting a general problem into a given formulation using a particular type of neural network. The architecture required for the neural network forms a new class: the pseudo multilayer neural network. It is introduced and compared to more classical neural network architectures such as the regular multilayer neural network. However, a key difference between these two classes is that the behavior of the usual multilayer network has to be programmed via iterative training, while an extremely efficient direct procedure is given here to synthesize the pseudo multilayer neural network. Therefore any initial problem can be systematically converted into a pseudo multilayer network without going through the undesired programming steps such as the backpropagation rule. The second phase of the work consists of translating the initial global optimization problem into the global minimization of a target function related to the neural network model. Systematic procedures are again given here. The last phase consists of globally minimizing the target function. This is done via the so-called DC programming technique where DC stands for "Difference of Convex". The pseudo multilayer was created such that it can systematically be converted into a DC formulation, and therefore be compatible with DC programming. A translation procedure to go from the pseudo multilayer neural network model to the DC formulation is given. When a DC program is applied to this last formulation, the resulting solution can be directly mapped to the global minimum of the target function previously defined, thereby producing the global optimal solution of the neural network modeling the initial problem. Therefore, the optimal solution of the original problem is known as well.
APA, Harvard, Vancouver, ISO, and other styles
26

Turner, Andrew. "Evolving artificial neural networks using Cartesian genetic programming." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/12035/.

Full text
Abstract:
NeuroEvolution is the application of Evolutionary Algorithms to the training of Artificial Neural Networks. NeuroEvolution is thought to possess many benefits over traditional training methods including: the ability to train recurrent network structures, the capability to adapt network topology, being able to create heterogeneous networks of arbitrary transfer functions, and allowing application to reinforcement as well as supervised learning tasks. This thesis presents a series of rigorous empirical investigations into many of these perceived advantages of NeuroEvolution. In this work it is demonstrated that the ability to simultaneously adapt network topology along with connection weights represents a significant advantage of many NeuroEvolutionary methods. It is also demonstrated that the ability to create heterogeneous networks comprising a range of transfer functions represents a further significant advantage. This thesis also investigates many potential benefits and drawbacks of NeuroEvolution which have been largely overlooked in the literature. This includes the presence and role of genetic redundancy in NeuroEvolution's search and whether program bloat is a limitation. The investigations presented focus on the use of a recently developed NeuroEvolution method based on Cartesian Genetic Programming. This thesis extends Cartesian Genetic Programming such that it can represent recurrent program structures allowing for the creation of recurrent Artificial Neural Networks. Using this newly developed extension, Recurrent Cartesian Genetic Programming, and its application to Artificial Neural Networks, are demonstrated to be extremely competitive in the domain of series forecasting.
APA, Harvard, Vancouver, ISO, and other styles
27

Day, Charles Robert. "Symbol processing in RAAM neural networks." Thesis, Keele University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296075.

Full text
Abstract:
The ability to construct and manipulate recursive symbol structures is regarded as fundamentally important in the domain of cognitive modelling. The aim of this thesis is to explore how well Pollack's Recursive Auto-Associative Memory (RAAM) networks can represent and facilitate the manipulation of highly-recursive structures. Using mainly skewed and balanced binary trees, the representational power of the RAAM architecture is examined for structures which are lexically simple and syntactically complex. This is in contrast to much published work on RAAM networks, in which the structures encoded are lexically complex but syntactically simple. A new RAAM tree-processing operation, which allows partial information about a set of siblings to be used as a parent pointer, is described and tested. Several empirical investigations are motivated and carried out to determine how effectively RAAM networks can encode highly-recursive structures. The investigations demonstrate the sensitivity of the RAAM architecture with respect to the initial conditions, training parameters and training strategies used. This work also introduces some new techniques which help to address the twin problems of extended training times and obtaining successful RAAM encodings. A completely new method for performing terminal detection is presented, as well as a technique for refining Pollack's (1990) terminal detection method. In both cases, the rate at which successful RAAM encodings are obtained is significantly better than with Pollack's method. In addition, the new implicit terminal detection method might allow improved RAAM generalisation, although this conjecture has not yet been tested. RAAM networks have been used as an important counter-example to influential analyses of the shortcomings of connectionist cognitive models. The limited success of the RAAM networks in this study brings into question connectionist hopes for an effective RAAM-based cognitive model.
APA, Harvard, Vancouver, ISO, and other styles
28

Hodge, Victoria J. "Integrating information retrieval & neural networks." Thesis, University of York, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Chan, Chee Keong. "Langrange programming neural networks for nonlinear Volterra system identification." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266354.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Tjeng, Vincent. "Evaluating robustness of neural networks with mixed integer programming." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119563.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 43-47).
Neural networks have demonstrated considerable success on a wide variety of real-world problems. However, neural networks can be fooled by adversarial examples -- slightly perturbed inputs that are misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. Our verifier finds minimum adversarial distortions two to three orders of magnitude more quickly than the state-of-the-art. We achieve this via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup enables us to verify properties on convolutional networks with an order of magnitude more ReLUs than had been previously verified by any complete verifier, and we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded ℓ∞ norm ε = 0.1. On this network, we find an adversarial example for 4.38% of samples, and a certificate of robustness for the remainder. Across a variety of robust training procedures, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack for every network.
by Vincent Tjeng.
M. Eng.
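For context on the formulation described in the abstract above, one standard mixed-integer encoding of a single ReLU unit y = max(x, 0), assuming known pre-activation bounds l <= x <= u with l < 0 < u and a binary indicator a, is the following generic textbook encoding (not necessarily the tightened formulation used in the thesis):

\[
y \ge x, \qquad y \ge 0, \qquad y \le x - l\,(1 - a), \qquad y \le u\,a, \qquad a \in \{0, 1\}.
\]

When a = 1 the constraints force y = x (active unit); when a = 0 they force y = 0. Applying such an encoding to every ReLU turns the trained piecewise-linear network into a set of linear constraints over which the minimum adversarial distortion can be computed exactly by a MIP solver.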
APA, Harvard, Vancouver, ISO, and other styles
31

Rodriguez, Adelein. "A NEAT Approach to Genetic Programming." Master's thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2379.

Full text
Abstract:
The evolution of explicitly represented topologies such as graphs involves devising methods for mutating, comparing and combining structures in meaningful ways and identifying and maintaining the necessary topological diversity. Research has been conducted in the area of the evolution of trees in genetic programming and of neural networks and some of these problems have been addressed independently by the different research communities. In the domain of neural networks, NEAT (Neuroevolution of Augmenting Topologies) has shown to be a successful method for evolving increasingly complex networks. This system's success is based on three interrelated elements: speciation, marking of historical information in topologies, and initializing search in a small structures search space. This provides the dynamics necessary for the exploration of diverse solution spaces at once and a way to discriminate between different structures. Although different representations have emerged in the area of genetic programming, the study of the tree representation has remained of interest in great part because of its mapping to programming languages and also because of the observed phenomenon of unnecessary code growth or bloat which hinders performance. The structural similarity between trees and neural networks poses an interesting question: Is it possible to apply the techniques from NEAT to the evolution of trees and if so, how does it affect performance and the dynamics of code growth? In this work we address these questions and present analogous techniques to those in NEAT for genetic programming.
M.S.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering MSCpE
APA, Harvard, Vancouver, ISO, and other styles
32

Heaton, Jeff. "Automated Feature Engineering for Deep Neural Networks with Genetic Programming." Thesis, Nova Southeastern University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10259604.

Full text
Abstract:

Feature engineering is a process that augments the feature vector of a machine learning model with calculated values that are designed to enhance the accuracy of a model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefit from feature engineering. Expressions that combine one or more of the original features usually create these engineered features. The choice of the exact structure of an engineered feature is dependent on the type of machine learning model in use. Previous research demonstrated that various model families benefit from different types of engineered feature. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product based models to achieve on the same data set.

This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. This dissertation algorithm faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm's engineered features.

APA, Harvard, Vancouver, ISO, and other styles
33

MBITI, JOHN N. "Deep learning for portfolio optimization." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104567.

Full text
Abstract:
In this thesis, an optimal investment problem is studied for an investor who can only invest in a financial market modelled by an Itô-Lévy process, with one risk-free (bond) and one risky (stock) investment possibility. We present the dynamic programming method and the associated Hamilton-Jacobi-Bellman (HJB) equation to explicitly solve this problem. It is shown that, with purification and simplification to the standard jump diffusion process, closed-form solutions for the optimal investment strategy and for the value function are attainable. It is also shown that, for a specific case, an explicit solution can be obtained via finite training of a neural network using stochastic gradient descent (SGD).
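
The following sketch illustrates only the SGD aspect under strong simplifications that are not the thesis setting: plain geometric Brownian motion instead of an Itô-Lévy process, log utility, and a single constant portfolio weight in place of a neural network, trained by stochastic gradient ascent on simulated paths. In this reduced setting the result can be checked against the known closed-form Merton ratio.

```python
# A minimal sketch under strong simplifications (geometric Brownian motion,
# log utility, one trainable constant weight instead of a network): stochastic
# gradient ascent on simulated terminal wealth illustrates the SGD-based
# approach mentioned in the abstract. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, r, T, n_steps = 0.08, 0.2, 0.02, 1.0, 50   # drift, volatility, rate, horizon
dt = T / n_steps
theta = 0.0                                           # trainable portfolio weight

def expected_log_utility(pi, z):
    """Monte Carlo estimate of E[log(W_T / W_0)] for constant weight pi."""
    growth = (r + pi * (mu - r) - 0.5 * (pi * sigma) ** 2) * dt \
             + pi * sigma * np.sqrt(dt) * z
    return np.mean(np.sum(growth, axis=1))

lr, eps = 5.0, 1e-3
for _ in range(200):                                  # SGD with finite-difference gradients
    z = rng.normal(size=(2000, n_steps))              # common random numbers for both sides
    grad = (expected_log_utility(theta + eps, z)
            - expected_log_utility(theta - eps, z)) / (2 * eps)
    theta += lr * grad

print("learned weight:", round(theta, 3),
      "| closed-form Merton weight:", round((mu - r) / sigma**2, 3))
```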
APA, Harvard, Vancouver, ISO, and other styles
34

Innes, Andrew, and andrew innes@defence gov au. "Genetic Programming for Cephalometric Landmark Detection." RMIT University. Aerospace, Mechanical and Manufacturing Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080221.123310.

Full text
Abstract:
The domain of medical imaging analysis has burgeoned in recent years due to the availability and affordability of digital radiographic imaging equipment and associated algorithms and, as such, there has been significant activity in the automation of the medical diagnostic process. One such process, cephalometric analysis, is manually intensive and it can take an experienced orthodontist thirty minutes to analyse one radiology image. This thesis describes an approach, based on genetic programming, neural networks and machine learning, to automate this process. A cephalometric analysis involves locating a number of points in an X-ray and determining the linear and angular relationships between them. If the points can be located accurately enough, the rest of the analysis is straightforward. The investigative steps undertaken were as follows: Firstly, a previously published method, which was claimed to be domain independent, was implemented and tested on a selection of landmarks, ranging from easy to very difficult. These included the menton, upper lip, incisal upper incisor, nose tip and sella landmarks. The method used pixel values and pixel statistics (mean and standard deviation) of pre-determined regions as inputs to a genetic programming detector. This approach proved unsatisfactory, and the second part of the investigation focused on alternative handcrafted feature sets and fitness measures. This proved to be much more successful, and the third part of the investigation involved using pulse coupled neural networks to replace the handcrafted features with learned ones. The fourth and final stage involved an analysis of the evolved programs to determine whether reasonable algorithms had been evolved and not just random artefacts learnt from the training images. A significant finding from the investigative steps was that the new domain independent approach, using pulse coupled neural networks and genetic programming to evolve programs, was as good as or even better than one using the handcrafted features. The advantage of this finding is that little domain knowledge is required, thus obviating the requirement to manually generate handcrafted features. The investigation revealed that some of the easy landmarks could be found with 100% accuracy while the accuracy of finding the most difficult ones was around 78%. An extensive analysis of evolved programs revealed underlying regularities that were captured during the evolutionary process. Even though the evolutionary process took different routes and a diverse range of programs was evolved, many of the programs with an acceptable detection rate implemented algorithms with similar characteristics. The major outcome of this work is that the method described in this thesis could be used as the basis of an automated system. The orthodontist would be required to manually correct a few errors before completing the analysis.
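
As a rough sketch of the kind of inputs the first investigative step describes (not the thesis code), the snippet below gathers raw pixel values together with the mean and standard deviation of pre-determined regions around a candidate landmark location, the sort of values that could serve as terminals for a genetic programming detector; the region offsets and sizes are invented for illustration.

```python
# A minimal sketch of the inputs described in the abstract: raw pixel values
# plus mean/standard-deviation statistics of pre-determined regions around a
# candidate landmark location, collected as terminals for a GP detector.
# The region offsets and sizes here are arbitrary illustrations.
import numpy as np

def region_stats(image, cx, cy,
                 offsets=((0, 0), (-8, 0), (8, 0), (0, -8), (0, 8)), size=5):
    """Return [pixel, mean, std] for square regions offset from (cx, cy)."""
    half = size // 2
    feats = []
    for dx, dy in offsets:
        x, y = cx + dx, cy + dy
        patch = image[max(y - half, 0):y + half + 1, max(x - half, 0):x + half + 1]
        feats.extend([float(image[y, x]), float(patch.mean()), float(patch.std())])
    return np.array(feats)

xray = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(float)
print(region_stats(xray, cx=120, cy=140).shape)   # 5 regions x 3 statistics
```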
APA, Harvard, Vancouver, ISO, and other styles
35

Howarth, Martin. "An investigation of task level programming for robotic assembly." Thesis, Nottingham Trent University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kerin, Michael A. "Self-organisation and autonomous learning in logical neural networks." Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303172.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Turega, Michael A. "A parallel computer architecture to support artificial neural networks." Thesis, University of Manchester, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Worthy, Paul James. "Investigation of artificial neural networks for forecasting and classification." Thesis, City University London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Stasinakis, Charalampos. "Applications of hybrid neural networks and genetic programming in financial forecasting." Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/4921/.

Full text
Abstract:
This thesis explores the utility of computational intelligence techniques and aims to contribute to the growing literature on hybrid neural network and genetic programming applications in financial forecasting. The theoretical background and the description of the forecasting techniques are given in the first part of the thesis (chapters 1-3), while the contribution is provided through the last five self-contained chapters (chapters 4-8). Chapter 4 investigates the utility of the Psi Sigma neural network when applied to the task of forecasting and trading the Euro/Dollar exchange rate, while Kalman Filter estimation is tested in combining neural network forecasts. A time-varying leverage trading strategy based on volatility forecasts is also introduced. In chapter 5 three neural networks are used to forecast an exchange rate, while Kalman Filter, Genetic Programming and Support Vector Regression are implemented to provide stochastic and genetic forecast combinations. In addition, a hybrid leverage trading strategy tests whether volatility forecasts and market shocks can be combined to boost the trading performance of the models. Chapter 6 presents a hybrid Genetic Algorithm – Support Vector Regression model for optimal parameter selection and feature subset combination. The model is applied to the task of forecasting and trading three euro exchange rates. The results of these chapters suggest that the stochastic and genetic neural network forecast combinations produce superior forecasts and high profitability. In that way, more light is shed on the demanding issue of achieving statistical and trading efficiency in the foreign exchange markets. The focus of the next two chapters shifts from exchange rate forecasting to inflation and unemployment prediction through optimal macroeconomic variable selection. Chapter 7 focuses on forecasting US inflation and unemployment, while chapter 8 presents the Rolling Genetic – Support Vector Regression model. The latter is applied to several forecasting exercises on inflation and unemployment of EMU members. Both chapters provide information on which set of macroeconomic indicators is found relevant to inflation and unemployment targeting on a monthly basis. The proposed models statistically outperform traditional ones. Hence, the voluminous literature suggesting that non-linear time-varying approaches are more efficient and realistic in similar applications is extended. From a technical point of view, these algorithms are superior to non-adaptive ones: they avoid time-consuming optimization approaches and cope efficiently with dimensionality and data-snooping issues.
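
As a hedged illustration of the forecast-combination theme (not the thesis models), the sketch below uses a random-walk-coefficient Kalman filter to estimate time-varying weights for combining two candidate forecasts of a series; the noise variances and synthetic data are assumptions.

```python
# A minimal sketch (not the thesis models) of stochastic forecast combination:
# a random-walk-coefficient Kalman filter estimates time-varying weights for
# combining two candidate forecasts of a series. Noise variances and the
# synthetic series are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = 200
truth = np.cumsum(rng.normal(0, 0.1, T))              # target series
f1 = truth + rng.normal(0, 0.3, T)                    # noisy forecast from model 1
f2 = 0.7 * truth + rng.normal(0, 0.15, T)             # biased but precise model 2

beta = np.zeros(2)                                    # combination weights (state)
P = np.eye(2)                                         # weight covariance
q, r_obs = 1e-4, 0.05                                 # state / observation noise variances
combined = np.empty(T)

for t in range(T):
    x = np.array([f1[t], f2[t]])
    combined[t] = x @ beta                            # combine with current weights
    # Kalman update of the weights after observing the realized value
    P = P + q * np.eye(2)                             # predict (random-walk state)
    s = x @ P @ x + r_obs                             # innovation variance
    k = P @ x / s                                     # Kalman gain
    beta = beta + k * (truth[t] - x @ beta)           # correct weights
    P = P - np.outer(k, x) @ P                        # update covariance

print("final combination weights:", np.round(beta, 2))
```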
APA, Harvard, Vancouver, ISO, and other styles
40

Hussain, Jabbar. "Deep Learning Black Box Problem." Thesis, Uppsala universitet, Institutionen för informatik och media, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393479.

Full text
Abstract:
The application of neural networks in deep learning is growing rapidly due to their ability to outperform other machine learning algorithms on many kinds of problems. One major disadvantage of deep neural networks, however, is that the internal logic by which they reach a desired output is difficult to understand or explain. This behaviour of deep neural networks is known as the "black box" problem. This leads to the first research question: how prevalent is the black box problem in the research literature during a specific period of time? Black box problems are usually addressed by so-called rule extraction, which motivates the second research question: what rule extraction methods have been proposed to solve such problems? To answer these questions, a systematic literature review was conducted to collect data on the topics of the black box and rule extraction. Printed and online articles published in highly ranked journals and conference proceedings were selected to investigate and answer the research questions; the unit of analysis was this set of journal and conference articles. The results show gradually increasing interest in the black box problem over time, mainly because of new technological developments. The thesis also provides an overview of the different methodological approaches used in rule extraction methods.
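
As a minimal illustration of the pedagogical (surrogate) family of rule extraction methods surveyed in work like this, and not a method proposed by the thesis itself, the sketch below fits a shallow decision tree to mimic a trained neural network and prints the resulting rules; the dataset and hyperparameters are illustrative.

```python
# A minimal sketch of pedagogical rule extraction: a shallow decision tree is
# fit to mimic a trained neural network, and its branches are read out as
# human-readable rules. Dataset and hyperparameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
net.fit(X, y)

# Fit the surrogate on the network's outputs, not the original labels, so the
# extracted rules describe the black box rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))
print("fidelity to the network:", surrogate.score(X, net.predict(X)))
print(export_text(surrogate, max_depth=2))
```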
APA, Harvard, Vancouver, ISO, and other styles
41

Feurstein, Markus, and Martin Natter. "Neural networks, stochastic dynamic programming and a heuristic for valuing flexible manufacturing systems." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/1106/1/document.pdf.

Full text
Abstract:
We compare the use of stochastic dynamic programming (SDP), Neural Networks and a simple approximation rule for calculating the real option value of a flexible production system. While SDP yields the best solution to the problem, it is computationally prohibitive for larger settings. We test two approximations of the value function and show that the results are comparable to those obtained via SDP. These methods have the advantage of a high computational performance and of no restrictions on the type of process used. Our approach is not only useful for supporting large investment decisions, but it can also be applied in the case of routine decisions like the determination of the production program when stochastic profit margins occur. (author's abstract)
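
As a rough sketch in the spirit of this comparison (not the paper's model), the code below computes the value of a toy flexible-production option exactly by backward induction on a lattice, the SDP step, and then fits a small neural network to the resulting value function as a cheap approximation; the lattice dynamics and all parameters are assumptions.

```python
# A minimal sketch: an exact value function for a toy flexible-production
# problem is computed by backward induction (the SDP), then a small neural
# network is fitted to it as a fast approximation. Dynamics and parameters
# are illustrative, not the paper's model.
import numpy as np
from sklearn.neural_network import MLPRegressor

T, delta, disc = 30, 0.5, 0.97          # horizon, margin step, discount factor

def sdp_value(m0):
    """Value of producing only when the profit margin is positive (flexible plant)."""
    margins = m0 + delta * np.arange(-T, T + 1)
    v = np.maximum(margins, 0.0)         # terminal stage: produce if margin > 0
    for t in range(T - 1, -1, -1):
        margins = m0 + delta * np.arange(-t, t + 1)
        cont = 0.5 * (v[:-2] + v[2:])    # expectation over down/up moves
        v = np.maximum(margins, 0.0) + disc * cont
    return v[0]

grid = np.linspace(-5, 5, 41)
values = np.array([sdp_value(m) for m in grid])

net = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(grid.reshape(-1, 1), values)
print("SDP value at m0=1:", round(sdp_value(1.0), 3),
      "| network approximation:", round(float(net.predict([[1.0]])[0]), 3))
```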
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other styles
42

Craddock, Rachel Joy. "Multi layered radial basis function networks and the application of state space control theory to feedforward neural networks." Thesis, University of Reading, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Alahakoon, Lakpriya Damminda 1968. "Data mining with structure adapting neural networks." Monash University, School of Computer Science and Software Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/7987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Cooper, Brenton S. "On the performance of optimisation networks /." Adelaide, 1996. http://web4.library.adelaide.edu.au/theses/09PH/09phc7758.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Vasconcelos, Germano Crispim. "An investigation of feedforward neural networks with respect to the detection of spurious patterns." Thesis, University of Kent, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Valiveti, Natana Carleton University Dissertation Computer Science. "Parallel computational geometry on Analog Hopfield Networks." Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
47

Townsend, Joseph Paul. "Artificial development of neural-symbolic networks." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/15162.

Full text
Abstract:
Artificial neural networks (ANNs) and logic programs have both been suggested as means of modelling human cognition. While ANNs are adaptable and relatively noise resistant, the information they represent is distributed across various neurons and is therefore difficult to interpret. In contrast, symbolic systems such as logic programs are interpretable but less adaptable. Human cognition is performed in a network of biological neurons and yet is capable of representing symbols, and therefore an ideal model would combine the strengths of the two approaches. This is the goal of Neural-Symbolic Integration [4, 16, 21, 40], in which ANNs are used to produce interpretable, adaptable representations of logic programs and other symbolic models. One neural-symbolic model of reasoning is SHRUTI [89, 95], argued to exhibit biological plausibility in that it captures some aspects of real biological processes. SHRUTI's original developers also suggest that further biological plausibility can be ascribed to the fact that SHRUTI networks can be represented by a model of genetic development [96, 120]. The aims of this thesis are to support the claims of SHRUTI's developers by producing the first such genetic representation for SHRUTI networks and to explore biological plausibility further by investigating the evolvability of the proposed SHRUTI genome. The SHRUTI genome is developed and evolved using principles from Generative and Developmental Systems and Artificial Development [13, 105], in which genomes use indirect encoding to provide a set of instructions for the gradual development of the phenotype just as DNA does for biological organisms. This thesis presents genomes that develop SHRUTI representations of logical relations and episodic facts so that they are able to correctly answer questions on the knowledge they represent. The evolvability of the SHRUTI genomes is limited in that an evolutionary search was able to discover genomes for simple relational structures that did not include conjunction, but could not discover structures that enabled conjunctive relations or episodic facts to be learned. Experiments were performed to understand the SHRUTI fitness landscape and demonstrated that this landscape is unsuitable for navigation using an evolutionary search. Complex SHRUTI structures require that necessary substructures be discovered in unison and not individually in order to yield a positive change in objective fitness that informs the evolutionary search of their discovery. The requirement for multiple substructures to be in place before fitness can be improved is probably due to the localist representation of concepts and relations in SHRUTI. This thesis therefore concludes by making a case for switching to more distributed representations as a possible means of improving evolvability in the future.
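
As a heavily simplified illustration of indirect encoding in artificial development (not the SHRUTI genome), the toy sketch below treats the genome as a short list of developmental rules that are applied repeatedly to grow a graph from a single seed node, so the phenotype's size is not fixed by the genome's length; the rule semantics are invented purely for illustration.

```python
# A toy sketch of indirect encoding (not the SHRUTI genome): the genome is a
# short list of developmental rules applied repeatedly to grow a graph from a
# single seed node. Rule semantics are invented purely for illustration.
GENOME = [
    ("divide", {"when_degree_at_most": 1}),   # a lightly connected node spawns a child
    ("connect", {"to_offset": 3}),            # each node links to the node three places on
]

def develop(genome, steps=4):
    nodes, edges = [0], set()
    for _ in range(steps):
        for action, params in genome:
            if action == "divide":
                for n in list(nodes):
                    degree = sum(1 for e in edges if n in e)
                    if degree <= params["when_degree_at_most"]:
                        new = max(nodes) + 1
                        nodes.append(new)
                        edges.add((n, new))
            elif action == "connect":
                for i, n in enumerate(nodes):
                    if i + params["to_offset"] < len(nodes):
                        edges.add((n, nodes[i + params["to_offset"]]))
    return nodes, edges

nodes, edges = develop(GENOME)
print(len(nodes), "nodes,", len(edges), "edges grown from one seed")
```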
APA, Harvard, Vancouver, ISO, and other styles
48

Cooper, Brenton S. "On the performance of optimisation networks / by Brenton S. Cooper." Thesis, Adelaide, 1996. http://hdl.handle.net/2440/18979.

Full text
Abstract:
Bibliography: leaves 125-131.
xi, 131 leaves : ill. ; 30 cm.
This thesis examines the performance of optimisation networks. The main objectives are to determine if there exist any factors which limit the solution quality that may be achieved with optimisation networks, to determine the reasons for any such limitations and to suggest remedies for them.
Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1996
APA, Harvard, Vancouver, ISO, and other styles
49

Neri, Giacomo. "Deep neural networks for solving time prediction in mixed-integer linear programming: an experimental study." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
Solving some Mixed-Integer Linear Programming (MILP) problems is still a challenge for current solvers and can require hours of computation. There is little a priori indication of how hard a given problem will be to solve. In practice, an estimate of the number of nodes (computation time) needed by branch and bound would be useful. It could help to understand whether the solver is working well, and possibly to activate algorithms during the solution procedure in order to obtain better performance. By observing the evolution of a partial tree generated during branch and bound for a MILP, our goal is to predict how hard the problem is, that is, how many nodes remain until the end. Predicting the difficulty of a problem from the size and shape of the tree is one of the areas in which machine learning can have a substantial practical impact on combinatorial optimization. Our dataset includes 100000 samples and was built by extracting partially completed trees during the branch-and-bound solution process on 10000 instances using SCIP (Solving Constraint Integer Programs). To develop our model, we studied deep neural networks, in particular convolutional and recurrent neural networks, comparing them and trying to obtain the best possible combination for our purpose. The results of this thesis show that 1D convolutional neural networks can be used successfully for these kinds of tasks and can perform better than recurrent networks. This finding also shows that 1D CNNs are very effective at time series prediction, a field where RNNs are usually unrivalled. This last observation is promising considering that 1D CNNs are less complex and faster than RNNs.
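
As a hedged sketch of the 1D-CNN idea described above (not the thesis architecture), the snippet below feeds a fixed-length sequence of per-step features from a partial branch-and-bound tree to a small Conv1d regressor that predicts the number of remaining nodes; the feature set, shapes, and training loop are assumptions, and the data here is synthetic.

```python
# A minimal sketch (not the thesis models): a fixed-length sequence of per-step
# features from a partial branch-and-bound tree (e.g. open nodes, depth, gap at
# each step) is fed to a Conv1d regressor that predicts how many nodes remain.
# Feature names, sizes, and the synthetic data are assumptions.
import torch
import torch.nn as nn

class TreeGrowthCNN(nn.Module):
    def __init__(self, n_features=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),     # pool over the time dimension
            nn.Flatten(),
            nn.Linear(32, 1),            # predicted (log) number of remaining nodes
        )

    def forward(self, x):                # x: (batch, n_features, sequence_length)
        return self.net(x).squeeze(-1)

# Synthetic stand-in data: 64 partial trees, 100 observation steps, 3 features.
x = torch.randn(64, 3, 100)
y = torch.rand(64) * 10                  # e.g. log of remaining node count

model = TreeGrowthCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                       # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print("training loss:", float(loss))
```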
APA, Harvard, Vancouver, ISO, and other styles
50

Garret, Aaron Dozier Gerry V. "Neural enhancement for multiobjective optimization." Auburn, Ala., 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Computer_Science_and_Software_Engineering/Dissertation/Garrett_Aaron_55.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles