Academic literature on the topic 'Fixed neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fixed neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Fixed neural network"

1

Wang, Guanzheng, Rubin Wang, Wanzeng Kong, and Jianhai Zhang. "The Relationship between Sparseness and Energy Consumption of Neural Networks." Neural Plasticity 2020 (November 25, 2020): 1–13. http://dx.doi.org/10.1155/2020/8848901.

Full text
Abstract:
About 50–80% of total energy is consumed by signaling in neural networks. A neural network consumes much energy if there are many active neurons in the network, and very little energy if there are few. The ratio of active neurons to all neurons of a neural network, that is, the sparseness, therefore affects the energy consumption of the network. Laughlin's studies show that the sparseness of an energy-efficient code depends on the balance between signaling and fixed costs, but they give neither an exact ratio of signaling to fixed costs nor the ratio of active neurons to all neurons in the most energy-efficient neural networks. In this paper, we calculated the ratio of signaling costs to fixed costs from physiological experimental data; it lies between 1.3 and 2.1. We also calculated the ratio of active neurons to all neurons in the most energy-efficient neural networks, which lies between 0.3 and 0.4. Our results are consistent with data from many relevant physiological experiments, indicating that the model used in this paper may reflect neural coding under real conditions. The calculation results of this paper may be helpful to the study of neural coding.
APA, Harvard, Vancouver, ISO, and other styles
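The optimal sparseness described in the abstract above can be illustrated with a small energy-efficiency model. The sketch below is a hypothetical reconstruction, not the authors' actual calculation: it assumes coding efficiency is information per unit energy, H(p) / (1 + r·p), where p is the fraction of active neurons and r is the signaling-to-fixed cost ratio.

```python
import math

def optimal_sparseness(r, grid=10_000):
    """Grid-search the activity ratio p maximizing H(p) / (1 + r*p),
    where r is the signaling-to-fixed cost ratio (assumed efficiency model)."""
    best_p, best_eff = 0.0, -1.0
    for i in range(1, grid):
        p = i / grid
        # binary entropy of an active/inactive neuron, in bits
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        eff = h / (1 + r * p)  # information transmitted per unit energy
        if eff > best_eff:
            best_p, best_eff = p, eff
    return best_p
```

For r between 1.3 and 2.1, the range reported in the abstract, this toy model puts the optimal fraction of active neurons in roughly the 0.3–0.4 range, consistent with the paper's reported figure; costlier signaling pushes the optimum toward sparser codes.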
2

Thatoi, Dhirendranath, and Prabir Kumar Jena. "Inverse Analysis of Crack in Fixed-Fixed Structure by Neural Network with the Aid of Modal Analysis." Advances in Artificial Neural Systems 2013 (March 3, 2013): 1–8. http://dx.doi.org/10.1155/2013/150209.

Full text
Abstract:
In this research, the dynamic response of a shaft with a transverse crack is analyzed using theoretical, neural network, and experimental analyses. Structural damage detection using frequency response functions (FRFs) as input data to a back-propagation neural network (BPNN) is explored. To derive the effect of crack depth and crack location on the FRF, theoretical expressions have been developed using the strain energy release rate at the crack section of the shaft for the calculation of the local stiffnesses. Based on the flexibility, a new stiffness matrix is deduced that is subsequently used to calculate the natural frequencies and mode shapes of the cracked beam using the neural network method. The results of the numerical analysis and the neural network method are validated against the results from the experimental method. The analysis results on a shaft show that the neural network can assess damage conditions with very good accuracy.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Jianhua, Yang Li, and Wenbo Fei. "Neural Network-Based Nonlinear Fixed-Time Adaptive Practical Tracking Control for Quadrotor Unmanned Aerial Vehicles." Complexity 2020 (September 26, 2020): 1–13. http://dx.doi.org/10.1155/2020/8828453.

Full text
Abstract:
This brief addresses fixed-time practical position and attitude tracking control for quadrotor unmanned aerial vehicles (UAVs) subject to nonlinear dynamics. First, by combining radial basis function neural networks (NNs) with virtual parameter estimating algorithms, an NN adaptive control scheme is developed for UAVs. Then, a fixed-time adaptive law is proposed for the neural networks to achieve fixed-time stability, with a convergence time that depends only on the control gain parameters. Based on Lyapunov analyses and fixed-time stability theory, it is proved that the fixed-time adaptive neural network control is finite-time stable, with a convergence time that depends on the control parameters but not on the initial conditions. The effectiveness of the NN fixed-time control is demonstrated through a simulation of the UAV system.
APA, Harvard, Vancouver, ISO, and other styles
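The fixed-time convergence property mentioned in this abstract, a settling time bounded by the control gains regardless of initial conditions, can be seen in a scalar toy system. This is an illustrative sketch of the standard fixed-time form ẋ = −α·sign(x)|x|^p − β·sign(x)|x|^q with 0 < p < 1 < q (not the paper's controller), for which the settling time obeys T ≤ 1/(α(1−p)) + 1/(β(q−1)).

```python
def settle_time(x0, alpha=2.0, beta=2.0, p=0.5, q=2.0, dt=1e-4, tol=1e-3):
    """Euler-simulate xdot = -alpha*sign(x)|x|^p - beta*sign(x)|x|^q
    and return the first time |x| falls below tol."""
    x, t = float(x0), 0.0
    while abs(x) > tol and t < 10.0:
        s = 1.0 if x > 0 else -1.0
        x -= dt * s * (alpha * abs(x) ** p + beta * abs(x) ** q)
        t += dt
    return t

# theoretical fixed-time bound: 1/(alpha*(1-p)) + 1/(beta*(q-1)) = 1.5 s
for x0 in (0.5, 10.0, 1000.0):
    print(x0, round(settle_time(x0), 3))
```

Even when the initial condition grows by several orders of magnitude, the simulated settling time stays below the 1.5 s bound; the |x|^q term dominates far from the origin and the |x|^p term finishes the convergence near it.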
4

Pan, Tetie, Bao Shi, and Jian Yuan. "Global Stability of Almost Periodic Solution of a Class of Neutral-Type BAM Neural Networks." Abstract and Applied Analysis 2012 (2012): 1–18. http://dx.doi.org/10.1155/2012/482584.

Full text
Abstract:
A class of BAM neural networks with variable coefficients and neutral delays is investigated. By employing a fixed-point theorem, the exponential dichotomy, and differential inequality techniques, we obtain sufficient conditions ensuring the existence and global exponential stability of an almost periodic solution. This is the first investigation of the almost periodic solution of neutral-type BAM neural networks; the results of this paper are new and extend previously known results.
APA, Harvard, Vancouver, ISO, and other styles
5

Cotter, Neil E., and Peter R. Conwell. "Universal Approximation by Phase Series and Fixed-Weight Networks." Neural Computation 5, no. 3 (May 1993): 359–62. http://dx.doi.org/10.1162/neco.1993.5.3.359.

Full text
Abstract:
In this note we show that weak (specified energy bound) universal approximation by neural networks is possible if variable synaptic weights are brought in as network inputs rather than being embedded in a network. We illustrate this idea with a Fourier series network that we transform into what we call a phase series network. The transformation only increases the number of neurons by a factor of two.
APA, Harvard, Vancouver, ISO, and other styles
6

Scellier, Benjamin, and Yoshua Bengio. "Equivalence of Equilibrium Propagation and Recurrent Backpropagation." Neural Computation 31, no. 2 (February 2019): 312–29. http://dx.doi.org/10.1162/neco_a_01160.

Full text
Abstract:
Recurrent backpropagation and equilibrium propagation are supervised learning algorithms for fixed-point recurrent neural networks, which differ in their second phase. In the first phase, both algorithms converge to a fixed point that corresponds to the configuration where the prediction is made. In the second phase, equilibrium propagation relaxes to another nearby fixed point corresponding to smaller prediction error, whereas recurrent backpropagation uses a side network to compute error derivatives iteratively. In this work, we establish a close connection between these two algorithms. We show that at every moment in the second phase, the temporal derivatives of the neural activities in equilibrium propagation are equal to the error derivatives computed iteratively by recurrent backpropagation in the side network. This work shows that it is not required to have a side network for the computation of error derivatives and supports the hypothesis that in biological neural networks, temporal derivatives of neural activities may code for error signals.
APA, Harvard, Vancouver, ISO, and other styles
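The "first phase" that both algorithms in this abstract share, relaxation of a recurrent network to a fixed point, can be sketched in a few lines. This is a generic illustration with an assumed small symmetric-weight tanh network, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.1, size=(n, n))
W = (W + W.T) / 2                 # symmetric weights, as in energy-based models
x = rng.normal(size=n)            # fixed external input

s = np.zeros(n)                   # neural activities
for _ in range(500):              # free-phase relaxation
    s_next = np.tanh(W @ s + x)
    if np.max(np.abs(s_next - s)) < 1e-12:
        s = s_next
        break
    s = s_next

# s now satisfies s = tanh(W s + x) up to numerical precision
residual = np.max(np.abs(np.tanh(W @ s + x) - s))
```

From this fixed point, equilibrium propagation's second phase would relax toward a nearby fixed point with lower prediction error, whereas recurrent backpropagation would instead iterate a side network to compute error derivatives.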
7

Guo, Chengjun, Donal O’Regan, Feiqi Deng, and Ravi P. Agarwal. "Fixed points and exponential stability for a stochastic neutral cellular neural network." Applied Mathematics Letters 26, no. 8 (August 2013): 849–53. http://dx.doi.org/10.1016/j.aml.2013.03.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

White, Robert L., and Lawrence H. Snyder. "A Neural Network Model of Flexible Spatial Updating." Journal of Neurophysiology 91, no. 4 (April 2004): 1608–19. http://dx.doi.org/10.1152/jn.00277.2003.

Full text
Abstract:
Neurons in many cortical areas involved in visuospatial processing represent remembered spatial information in retinotopic coordinates. During a gaze shift, the retinotopic representation of a target location that is fixed in the world (world-fixed reference frame) must be updated, whereas the representation of a target fixed relative to the center of gaze (gaze-fixed) must remain constant. To investigate how such computations might be performed, we trained a 3-layer recurrent neural network to store and update a spatial location based on a gaze perturbation signal, and to do so flexibly based on a contextual cue. The network produced an accurate readout of target position when cued to either reference frame, but was less precise when updating was performed. This output mimics the pattern of behavior seen in animals performing a similar task. We tested whether updating would preferentially use gaze position or gaze velocity signals, and found that the network strongly preferred velocity for updating world-fixed targets. Furthermore, we found that gaze position gain fields were not present when velocity signals were available for updating. These results have implications for how updating is performed in the brain.
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Yang, Jianhua Zhang, Xinli Xu, and Cheng Siong Chin. "Adaptive Fixed-Time Neural Network Tracking Control of Nonlinear Interconnected Systems." Entropy 23, no. 9 (September 1, 2021): 1152. http://dx.doi.org/10.3390/e23091152.

Full text
Abstract:
In this article, a novel adaptive fixed-time neural network tracking control scheme for nonlinear interconnected systems is proposed. An adaptive backstepping technique is used to address unknown system uncertainties in the fixed-time setting. Neural networks are used to identify the unknown uncertainties. Via Lyapunov stability analysis, the study shows that, under the proposed control scheme, each state in the system converges into a small region near zero within a fixed convergence time. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed approach. A step-by-step procedure for engineers in industrial process applications is also proposed.
APA, Harvard, Vancouver, ISO, and other styles
10

Sergeev, Fedor, Elena Bratkovskaya, Ivan Kisel, and Iouri Vassiliev. "Deep learning for quark–gluon plasma detection in the CBM experiment." International Journal of Modern Physics A 35, no. 33 (November 30, 2020): 2043002. http://dx.doi.org/10.1142/s0217751x20430022.

Full text
Abstract:
Classification of processes in heavy-ion collisions in the CBM experiment (FAIR/GSI, Darmstadt) using neural networks is investigated. Fully-connected neural networks and a deep convolutional neural network are built to identify quark–gluon plasma simulated within the Parton-Hadron-String Dynamics (PHSD) microscopic off-shell transport approach for central Au+Au collisions at a fixed energy. The convolutional neural network outperforms the fully-connected networks and reaches 93% accuracy on the validation set, with only the remaining 7% of collisions incorrectly classified.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Fixed neural network"

1

Puttige, Vishwas Ramadas. "Neural network based adaptive control for autonomous flight of fixed wing unmanned aerial vehicles." Awarded by: University of New South Wales - Australian Defence Force Academy, Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/43736.

Full text
Abstract:
This thesis presents the development of small, inexpensive unmanned aerial vehicles (UAVs) to achieve autonomous flight. Fixed wing hobby model planes are modified and instrumented to form experimental platforms. Different sensors employed to collect the flight data are discussed along with their calibrations. The time constant and delay for the servo-actuators for the platform are estimated. Two different data collection and processing units based on micro-controller and PC104 architectures are developed and discussed. These units are also used to program the identification and control algorithms. Flight control of fixed wing UAVs is a challenging task due to the coupled, time-varying, nonlinear dynamic behaviour. One of the possible alternatives for the flight control system is to use intelligent adaptive control techniques that provide an online learning capability to cope with varying dynamics and disturbances. A neural network based indirect adaptive control strategy is applied in the current work. The two main components of the adaptive control technique are the identification block and the control block. Identification provides a mathematical model for the controller to adapt to varying dynamics. Neural network based identification provides a black-box identification technique wherein a suitable network provides prediction capability based upon the past inputs and outputs. Auto-regressive neural networks are employed for this to ensure good retention capabilities for the model that uses the past outputs and inputs along with the present inputs. Online and offline identification of UAV platforms is discussed based upon the flight data. Suitable modifications to the Levenberg-Marquardt training algorithm for online training are proposed. The effects of varying the different network parameters on the performance of the network are numerically tested.
A new performance index is proposed that is shown to improve the accuracy of prediction and also reduce the training time for these networks. The identification algorithms are validated both numerically and in flight tests. A hardware-in-loop simulation system has been developed to test the identification and control algorithms before flight testing, to identify the problems in real-time implementation on the UAVs. This is developed to keep the validation process simple, and a graphical user interface is provided to visualise the UAV flight during simulations. A dual neural network controller is proposed as the adaptive controller based upon the identification models. This combines two neural networks, one of which is trained online to adapt to changes in the dynamics. Two feedback loops are provided as part of the overall structure, which is seen to improve the accuracy. Proofs for stability analysis, in the form of convergence of the identifier and controller networks based on Lyapunov's technique, are presented. In this analysis suitable bounds on the rate of learning for the networks are imposed. Numerical results are presented to validate the adaptive controller for single-input single-output as well as multi-input multi-output subsystems of the UAV. Real-time validation results and various flight test results confirm the feasibility of the proposed adaptive technique as a reliable tool to achieve autonomous flight. The comparison of the proposed technique with a baseline gain scheduled controller, both in numerical simulations and in test flights, brings out the salient adaptation of the proposed technique to the time-varying, nonlinear dynamics of the UAV platforms under different flying conditions.
APA, Harvard, Vancouver, ISO, and other styles
2

Hao, Haiyan. "Understanding Fixed Object Crashes with SHRP2 Naturalistic Driving Study Data." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/84942.

Full text
Abstract:
Fixed-object crashes have long been considered major roadway safety concerns. While previous relevant studies tended to address such crashes in the context of roadway departures and heavily relied on police-reported accident data, this study integrated the SHRP2 NDS and RID data for analyses, which fully depicted the prior-to, during, and after-crash scenarios. A total of 1,639 crash and near-crash events, and 1,050 baseline events, were acquired. Three analysis methods: logistic regression, support vector machine (SVM), and artificial neural network (ANN), were employed for two responses: crash occurrence and severity level. Logistic regression analyses identified 16 and 10 significant variables, at a significance level of 0.1, relevant to driver, roadway, environment, etc., for the two responses respectively. The logistic regression analyses led to a series of findings regarding the effects of explanatory variables on fixed-object event occurrence and the associated severity level. SVM classifiers and ANN models were also constructed to predict these two responses. Sensitivity analyses were performed for the SVM classifiers to infer the contributing effects of input variables. All three methods obtained satisfactory prediction performance, around 88% for fixed-object event occurrence and 75% for event severity level, which indicated the effectiveness of NDS event data in depicting crash scenarios and roadway safety analyses.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
3

Rava, Eleonora Maria Elizabeth. "Bioaugmentation of coal gasification stripped gas liquor wastewater in a hybrid fixed-film bioreactor." Thesis, University of Pretoria, 2017. http://hdl.handle.net/2263/62789.

Full text
Abstract:
Coal gasification stripped gas liquor (CGSGL) wastewater contains large quantities of complex organic and inorganic pollutants which include phenols, ammonia, hydantoins, furans, indoles, pyridines, phthalates and other monocyclic and polycyclic nitrogen-containing aromatics, as well as oxygen- and sulphur-containing heterocyclic compounds. The performance of most conventional aerobic systems for CGSGL wastewater is inadequate in reducing pollutants contributing to chemical oxygen demand (COD), phenols and ammonia due to the presence of toxic and inhibitory organic compounds. There is an ever-increasing scarcity of freshwater in South Africa; thus reclamation of wastewater for recycling is growing rapidly, and the demand for higher effluent quality before discharge or reuse is also increasing. The selection of hybrid fixed-film bioreactor (H-FFBR) systems for the detoxification of a complex mixture of compounds such as those found in CGSGL has not been investigated. Thus, the objective of this study was to investigate the detoxification of the CGSGL in an H-FFBR bioaugmented with a mixed-culture inoculum containing Pseudomonas putida, Pseudomonas plecoglossicida, Rhodococcus erythropolis, Rhodococcus qingshengii, Enterobacter cloacae and Enterobacter asburiae strains of bacteria, as well as the seaweed (Silvetia siliquosa) and diatoms. The results indicated a 45% and 79% reduction in COD and phenols, respectively, without bioaugmentation. The reduction in COD increased by 8% with inoculum PA1, 13% with inoculum PA2 and 7% with inoculum PA3. Inoculum PA1 was a blend of Pseudomonas, Enterobacter and Rhodococcus strains, inoculum PA2 was a blend of Pseudomonas putida strains and inoculum PA3 was a blend of Pseudomonas putida and Pseudomonas plecoglossicida strains. The results also indicated that a 70% carrier fill formed a dense biofilm, a 50% carrier fill formed a rippling biofilm and a 30% carrier fill formed a porous biofilm.
The autotrophic nitrifying bacteria were out-competed by the heterotrophic bacteria of the genera Thauera, Pseudaminobacter, Pseudomonas and Diaphorobacter. Metagenomic sequencing data also indicated significant dissimilarities between the biofilm, suspended biomass, effluent and feed microbial populations. A large population (20% to 30%) of unclassified bacteria were also present, indicating the presence of novel bacteria that may play an important role in the treatment of the CGSGL wastewater. The artificial neural network (ANN) model developed in this study is a novel virtual tool for the prediction of COD and phenol removal from CGSGL wastewater treated in a bioaugmented H-FFBR. Knowledge extraction from the trained ANN model showed that significant nonlinearities exist between the H-FFBR operational parameters and the removal of COD and phenol. The predictive model thus increases knowledge of the process inputs and outputs and thus facilitates process control and optimisation to meet more stringent effluent discharge requirements.
Thesis (PhD)--University of Pretoria, 2017.
Chemical Engineering
PhD
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
4

Коломієць, Ольга Вікторівна. "Телеграм-бот для класифікації зображень твердих побутових відходів" [Telegram bot for classifying images of municipal solid waste]. Master's thesis, Khmelnytskyi National University, 2020. http://elar.khnu.km.ua/jspui/handle/123456789/9374.

Full text
Abstract:
This master's thesis is devoted to the development of an information system for the classification of municipal solid waste. In this work, an image classifier for municipal solid waste is designed and developed using convolutional neural networks. The Telegram software interface is used for the first time to implement a classification system for such images.
APA, Harvard, Vancouver, ISO, and other styles
5

Silva, Carlos Alberto de Albuquerque. "Contribuição para o estudo do embarque de uma rede neural artificial em field programmable gate array (FPGA)." Universidade Federal do Rio Grande do Norte, 2010. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15340.

Full text
Abstract:
This study shows the implementation and embedding of an Artificial Neural Network (ANN) in hardware, that is, in a programmable device such as a field programmable gate array (FPGA). This work allowed the exploration of different implementations, described in VHDL, of multilayer perceptron ANNs. Due to the parallelism inherent to ANNs, software implementations suffer from the sequential nature of Von Neumann architectures. As an alternative to this problem, a hardware implementation allows all the parallelism implicit in this model to be exploited. Currently, there is an increase in the use of FPGAs as a platform to implement neural networks in hardware, exploiting their high processing power, low cost, ease of programming and ability to reconfigure the circuit, allowing the network to adapt to different applications. Given this context, the aim was to develop arrays of neural networks in hardware with a flexible architecture, in which it is possible to add or remove neurons and, mainly, to modify the network topology, in order to enable a modular fixed-point-arithmetic network in an FPGA. Five syntheses of VHDL descriptions were produced: two for the neuron, with one or two inputs, and three for different architectures of ANN. The descriptions of the architectures used are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, some complete neural networks were implemented in an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
APA, Harvard, Vancouver, ISO, and other styles
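The fixed-point arithmetic these hardware neurons rely on can be emulated in software. The fragment below is a minimal sketch assuming a Q4.12 format; the format choice and helper names are illustrative, not from the thesis:

```python
FRAC = 12  # fractional bits: Q4.12, an assumed format

def to_fixed(x: float) -> int:
    """Quantize a real number to a Q4.12 integer."""
    return int(round(x * (1 << FRAC)))

def to_float(f: int) -> float:
    """Convert a Q4.12 integer back to a real number."""
    return f / (1 << FRAC)

def fixed_mac(weights, inputs, bias):
    """Neuron multiply-accumulate done entirely in integer arithmetic,
    as a hardware MAC unit would do it. Each product is Q8.24; the final
    shift returns the sum to Q4.12 (Python's >> is an arithmetic shift)."""
    acc = to_fixed(bias) << FRAC
    for w, x in zip(weights, inputs):
        acc += to_fixed(w) * to_fixed(x)
    return acc >> FRAC

y = fixed_mac([0.5, -0.25], [0.8, 0.4], 0.1)
print(to_float(y))  # close to the exact float result 0.4
```

Keeping the accumulator at double width before the final shift mirrors common FPGA practice: it avoids losing precision on intermediate products at the cost of a wider adder.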
6

Rek, Petr. "Knihovna pro návrh konvolučních neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385999.

Full text
Abstract:
In this diploma thesis, the reader is introduced to artificial neural networks and convolutional neural networks. Based on that, the design and implementation of a new library for convolutional neural networks is described. The library is then evaluated on widely used datasets and compared to other publicly available libraries. The added benefit of the library, which makes it unique, is its independence of data types. Each layer may contain up to three independent data types: for weights, for inference and for training. For the purpose of evaluating this feature, a data type with fixed-point representation is also part of the library. The effects of this representation on trained-network accuracy are put to the test.
APA, Harvard, Vancouver, ISO, and other styles
7

Čermák, Justin. "Implementace umělé neuronové sítě do obvodu FPGA." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219363.

Full text
Abstract:
This master's thesis describes the design of an efficiently working artificial neural network on a Virtex-5 series FPGA, making maximum use of the possibilities of parallelization. The theoretical part contains basic information on artificial neural networks, FPGAs and VHDL. The practical part describes the format of the variables used, the creation of the non-linear function, the principle of calculating the individual layers, and the options for setting the parameters of the generated artificial neural networks.
APA, Harvard, Vancouver, ISO, and other styles
8

Keller, Paul Edwin. "Fixed planar holographic interconnects for optically implemented neural networks." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185721.

Full text
Abstract:
In recent years there has been a great interest in neural networks, since neural networks are capable of performing pattern recognition, classification, decision, search, and optimization. A key element of most neural network systems is the massive number of weighted interconnections (synapses) used to tie relatively simple processing elements (neurons) together in a useful architecture. The inherent parallelism and interconnection capability of optics make it a likely candidate for the implementation of the neural network interconnection process. While there are several optical technologies worth exploring, this dissertation examines the capabilities and limitations of using fixed planar holographic interconnects in a neural network system. While optics is well suited to the interconnection task, nonlinear processing operations are difficult to implement in optics and better suited to electronic implementations. Therefore, a hybrid neural network architecture of planar interconnection holograms and opto-electronic neurons is a sensible approach to implementing a neural network. This architecture is analyzed. The interconnection hologram must accurately encode synaptic weights, have a high diffraction efficiency, and maximize the number of interconnections. Various computer generated hologram techniques are tested for their ability to produce the interconnection hologram. A new technique using the Gerchberg-Saxton process followed by a random-search error minimization produces the highest interconnect accuracy and highest diffraction efficiency of the techniques tested. The analysis shows that a reasonable size planar hologram has a capacity to connect 5000 neuron outputs to 5000 neuron inputs and that the bipolar synaptic weights can have an accuracy of approximately 5 bits. 
To demonstrate the concept of an opto-electronic neural network and planar holographic interconnects, a Hopfield style associative memory is constructed and shown to perform almost as well as an ideal system.
APA, Harvard, Vancouver, ISO, and other styles
9

Thai, Shee Meng. "Neural network modelling and control of coal fired boiler plant." Thesis, University of South Wales, 2005. https://pure.southwales.ac.uk/en/studentthesis/neural-network-modelling-and-control-of-coal-fired-boiler-plant(b5562ca0-e45e-44d8-aad2-ed2e3e114808).html.

Full text
Abstract:
This thesis presents the development of a Neural Network Based Controller (NNBC) for chain grate stoker fired boilers. The objective of the controller was to increase combustion efficiency and maintain pollutant emissions below stringent future medium-term legislative limits. Artificial Neural Networks (ANNs) were used to estimate future emissions from, and control, the combustion process. Initial tests at Casella CRE Ltd demonstrated the ability of ANNs to characterise the complex functional relationships which subsisted in the data set, and to utilise previously gained knowledge to deliver predictions up to three minutes into the future. This technique was then built into a carefully designed control strategy that fundamentally mimicked the actions of an expert boiler operator, to control an industrial chain grate stoker at HM Prison Garth, Lancashire. Test results demonstrated that the novel NNBC was able to control the industrial stoker boiler plant to deliver the load demand whilst keeping the excess air level to a minimum. As a result, the NNBC also managed to maintain the pollutant emissions within probable future limits for this size of boiler. This prototype controller would thus offer the industrial coal user a means to improve combustion efficiency on chain grate stokers as well as meeting medium-term legislative limits on pollutant emissions that could be imposed by the European Commission.
APA, Harvard, Vancouver, ISO, and other styles
10

Gaopande, Meghana Laxmidhar. "Exploring Accumulated Gradient-Based Quantization and Compression for Deep Neural Networks." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98617.

Full text
Abstract:
The growing complexity of neural networks makes their deployment on resource-constrained embedded or mobile devices challenging. With millions of weights and biases, modern deep neural networks can be computationally intensive, with large memory, power and computational requirements. In this thesis, we devise and explore three quantization methods (post-training, in-training and combined quantization) that quantize 32-bit floating-point weights and biases to lower bit width fixed-point parameters while also achieving significant pruning, leading to model compression. We use the total accumulated absolute gradient over the training process as the indicator of importance of a parameter to the network. The most important parameters are quantized by the smallest amount. The post-training quantization method sorts and clusters the accumulated gradients of the full parameter set and subsequently assigns a bit width to each cluster. The in-training quantization method sorts and divides the accumulated gradients into two groups after each training epoch. The larger group consisting of the lowest accumulated gradients is quantized. The combined quantization method performs in-training quantization followed by post-training quantization. We assume storage of the quantized parameters using compressed sparse row format for sparse matrix storage. On LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), post-training quantization achieves 7.62x, 10.87x, 6.39x and 12.43x compression, in-training quantization achieves 22.08x, 21.05x, 7.95x and 12.71x compression and combined quantization achieves 57.22x, 50.19x, 13.15x and 13.53x compression, respectively. Our methods quantize at the cost of accuracy, and we present our work in the light of the accuracy-compression trade-off.
Master of Science
Neural networks are being employed in many different real-world applications. By learning the complex relationship between the input data and ground-truth output data during the training process, neural networks can predict outputs on new input data obtained in real time. To do so, a typical deep neural network often needs millions of numerical parameters, stored in memory. In this research, we explore techniques for reducing the storage requirements for neural network parameters. We propose software methods that convert 32-bit neural network parameters to values that can be stored using fewer bits. Our methods also convert a majority of numerical parameters to zero. Using special storage methods that only require storage of non-zero parameters, we gain significant compression benefits. On typical benchmarks like LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), our methods can achieve up to 57.22x, 50.19x, 13.15x and 13.53x compression respectively. Storage benefits are achieved at the cost of classification accuracy, and we present our work in the light of the accuracy-compression trade-off.
APA, Harvard, Vancouver, ISO, and other styles
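The accumulated-gradient idea in the abstract above lends itself to a short sketch. The following is a minimal, hypothetical NumPy illustration of the post-training variant (sort parameters by accumulated absolute gradient, give the least important group the fewest bits); function name, cluster sizes, and the symmetric rounding scheme are illustrative assumptions, not the thesis code:

```python
import numpy as np

def post_training_quantize(params, acc_grads, clusters=(8, 4, 2)):
    """Assign smaller bit widths to parameters with smaller accumulated
    absolute gradients, so the least important weights are quantized most."""
    order = np.argsort(acc_grads)                 # least important first
    groups = np.array_split(order, len(clusters))
    quantized = params.copy()
    # the lowest-gradient group gets the fewest bits
    for idx, bits in zip(groups, sorted(clusters)):
        p = params[idx]
        scale = max(np.max(np.abs(p)), 1e-12)
        levels = 2 ** (bits - 1) - 1              # symmetric signed range
        quantized[idx] = np.round(p / scale * levels) / levels * scale
    return quantized

params = np.array([0.9, -0.05, 0.3, 0.002, -0.6])
acc_grads = np.array([5.0, 0.1, 2.0, 0.01, 4.0])
q = post_training_quantize(params, acc_grads)
```

Parameters rounded to zero at low bit widths then need no storage at all under a sparse format such as compressed sparse row, which is where the compression figures quoted above come from.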

Books on the topic "Fixed neural network"

1

Chong, A. Z. S. The monitoring and control of stoker-fired boiler plant by neural networks. 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nolte, David D. Introduction to Modern Dynamics. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198844624.001.0001.

Full text
Abstract:
Introduction to Modern Dynamics: Chaos, Networks, Space and Time (2nd Edition) combines the topics of modern dynamics—chaos theory, dynamics on complex networks and the geometry of dynamical spaces—into a coherent framework. This text is divided into four parts: Geometric Mechanics, Nonlinear Dynamics, Complex Systems, and Relativity. These topics share a common and simple mathematical language that helps students gain a unified physical intuition. Geometric mechanics lays the foundation and sets the tone for the rest of the book by emphasizing dynamical spaces, like state space and phase space, whose geometric properties define the set of all trajectories through those spaces. The section on nonlinear dynamics has chapters on chaos theory, synchronization, and networks. Chaos theory provides the language and tools to understand nonlinear systems, introducing fixed points that are classified through stability analysis and nullclines that shepherd system trajectories. Synchronization and networks are central paradigms in this book because they demonstrate how collective behavior emerges from the interactions of many individual nonlinear elements. The section on complex systems contains chapters on neural dynamics, evolutionary dynamics, and economic dynamics. The final section contains chapters on metric spaces and the special and general theories of relativity. In the second edition, sections on conventional topics, like applications of Lagrangians, have been strengthened, as well as being updated to provide a modern perspective. Several of the introductory chapters have been rearranged for improved logical flow and there are expanded homework problems at the end of each chapter.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Fixed neural network"

1

Koh, Bo-Yun, Yung-Wan Kwon, Hyuek-Jae Lee, Soo-Young Lee, and Sang-Yung Shin. "Optical Implementation of Neural Networks with Fixed Global Interconnection and Local Adaptive Gain-Control." In International Neural Network Conference, 611–14. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Litinsky, Leonid B. "Energy Functional and Fixed Points of a Neural Network." In Neural Nets WIRN VIETRI-97, 153–61. London: Springer London, 1998. http://dx.doi.org/10.1007/978-1-4471-1520-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sulzbachner, Christoph, Martin Humenberger, Ágoston Srp, and Ferenc Vajda. "Optimization of a Neural Network for Computer Vision Based Fall Detection with Fixed-Point Arithmetic." In Neural Information Processing, 18–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34478-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Giacobbe, Mirco, Thomas A. Henzinger, and Mathias Lechner. "How Many Bits Does it Take to Quantize Your Neural Network?" In Tools and Algorithms for the Construction and Analysis of Systems, 79–97. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45237-7_5.

Full text
Abstract:
Quantization converts neural networks into low-bit fixed-point computations which can be carried out by efficient integer-only hardware, and is standard practice for the deployment of neural networks on real-time embedded devices. However, like their real-numbered counterpart, quantized networks are not immune to malicious misclassification caused by adversarial attacks. We investigate how quantization affects a network’s robustness to adversarial attacks, which is a formal verification question. We show that neither robustness nor non-robustness are monotonic with changing the number of bits for the representation and, also, neither are preserved by quantization from a real-numbered network. For this reason, we introduce a verification method for quantized neural networks which, using SMT solving over bit-vectors, accounts for their exact, bit-precise semantics. We built a tool and analyzed the effect of quantization on a classifier for the MNIST dataset. We demonstrate that, compared to our method, existing methods for the analysis of real-numbered networks often derive false conclusions about their quantizations, both when determining robustness and when detecting attacks, and that existing methods for quantized networks often miss attacks. Furthermore, we applied our method beyond robustness, showing how the number of bits in quantization enlarges the gender bias of a predictor for students’ grades.
APA, Harvard, Vancouver, ISO, and other styles
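The "exact, bit-precise semantics" the abstract above insists on can be made concrete with a small sketch: round-to-nearest followed by saturation is what a verifier must model instead of real arithmetic. This is a hypothetical illustration with assumed integer/fraction bit splits, not the paper's verification tool:

```python
def to_fixed(x, int_bits=2, frac_bits=5):
    """Round-to-nearest fixed-point conversion with saturation: the
    bit-precise behavior that SMT-based verification must capture."""
    scale = 1 << frac_bits
    q = round(x * scale)                                  # round to nearest
    lo = -(1 << (int_bits + frac_bits))                   # saturation bounds
    hi = (1 << (int_bits + frac_bits)) - 1
    return max(lo, min(hi, q)) / scale

# the representable value of 0.3 depends on the chosen format:
v3 = to_fixed(0.3, int_bits=2, frac_bits=3)   # 3 fractional bits
v5 = to_fixed(0.3, int_bits=2, frac_bits=5)   # 5 fractional bits
sat = to_fixed(10.0, int_bits=2, frac_bits=5) # out of range: saturates
```

Because rounding and saturation interact nonlinearly, properties such as robustness need not improve, or even change monotonically, as bits are added, which is the paper's central observation.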
5

Giménez, V., M. Pérez-Castellanos, J. Rios Carrion, and F. de Mingo. "Capacity and parasitic fixed points control in a recursive neural network." In Biological and Artificial Computation: From Neuroscience to Technology, 217–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0032479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cha, ByungRae, KyungWoo Park, and JaeHyun Seo. "Neural Network Techniques for Host Anomaly Intrusion Detection Using Fixed Pattern Transformation." In Computational Science and Its Applications – ICCSA 2005, 254–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11424826_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shan, Lei, Minxuan Zhang, Lin Deng, and Guohui Gong. "A Dynamic Multi-precision Fixed-Point Data Quantization Strategy for Convolutional Neural Network." In Communications in Computer and Information Science, 102–11. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3159-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Danyang, Chenghua Li, Benhua Zhang, and Ling Tong. "Prediction of Drying Indices for Paddy Rice in a Deep Fixed-Bed Based on Neural Network." In Computer and Computing Technologies in Agriculture X, 496–507. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-06155-5_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Deville, Yannick, and Shahram Hosseini. "Blind Operation of a Recurrent Neural Network for Linear-Quadratic Source Separation: Fixed Points, Stabilization and Adaptation Scheme." In Latent Variable Analysis and Signal Separation, 237–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15995-4_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cervantes-Ojeda, Jorge, and María del Carmen Gómez-Fuentes. "On the Connection Weight Space Structure of a Two-Neuron Discrete Neural Network: Bifurcations of the Fixed Point at the Origin." In Nature-Inspired Computation and Machine Learning, 85–94. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13650-9_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Fixed neural network"

1

Benmaghnia, Hanane, Matthieu Martel, and Yassamine Seladji. "Fixed-Point Code Synthesis for Neural Networks." In 6th International Conference on Artificial Intelligence, Soft Computing and Applications (AISCA 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120202.

Full text
Abstract:
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous driving cars, etc. A problem is that these critical systems often have limited computing resources. Often, they use fixed-point arithmetic for its many advantages (rapidity, compatibility with small memory devices). In this article, a new technique is introduced to tune the formats (precision) of already trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only. The new optimized neural network computes the output with fixed-point numbers without degrading the accuracy beyond a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring the respect of the threshold for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we do a preliminary analysis of our floating-point neural network to determine the worst cases, then we generate a system of linear constraints among integer variables that we can solve by linear programming. The solution of this system is the new fixed-point format of each neuron. The experimental results obtained show the efficiency of our method, which can ensure that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
APA, Harvard, Vancouver, ISO, and other styles
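The integer-only evaluation that the abstract above targets can be sketched for a single neuron. This is a hypothetical illustration of the general scheme (encode inputs and weights as integers with an implied binary point, accumulate exactly in integers, rescale once); the per-neuron format selection by linear programming is the paper's contribution and is not reproduced here:

```python
import numpy as np

def neuron_fixed_point(x, w, b, frac_bits=8):
    """Integer-only dot product: values carry an implied binary point of
    `frac_bits` bits, so products live at scale 2^(2*frac_bits)."""
    s = 1 << frac_bits
    xi = np.round(np.asarray(x) * s).astype(np.int64)   # fixed-point inputs
    wi = np.round(np.asarray(w) * s).astype(np.int64)   # fixed-point weights
    bi = int(round(b * s * s))                          # bias at product scale
    acc = int(xi @ wi) + bi                             # exact integer math
    return acc / (s * s)                                # decode for comparison

y_fix = neuron_fixed_point([0.5, -0.25], [0.125, 0.75], 0.1)
y_ref = 0.5 * 0.125 + (-0.25) * 0.75 + 0.1              # floating-point reference
```

The final division is only for checking against the floating-point reference; on the embedded target the accumulator would stay in integer form and feed the next layer at an agreed scale.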
2

Boo, Yoonho, and Wonyong Sung. "Fixed-Point Optimization of Transformer Neural Network." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054724.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dias, Mauricio A., and Fernando S. Osorio. "Fixed-Point Neural Network Ensembles for Visual Navigation." In 2012 Brazilian Robotics Symposium and Latin American Robotics Symposium (SBR-LARS). IEEE, 2012. http://dx.doi.org/10.1109/sbr-lars.2012.57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Mingyu, Xu Yuan, Bing Chen, Chong Lin, and Yun Shang. "Fixed-time quadrotor trajectory tracking neural network backstepping control." In 2021 33rd Chinese Control and Decision Conference (CCDC). IEEE, 2021. http://dx.doi.org/10.1109/ccdc52312.2021.9602534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Xi, Xiaolin Hu, Hucheng Zhou, and Ningyi Xu. "FxpNet: Training a deep convolutional neural network in fixed-point representation." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7966159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lo, Chun Yan, and Chiu-Wing Sham. "Energy Efficient Fixed-point Inference System of Convolutional Neural Network." In 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE, 2020. http://dx.doi.org/10.1109/mwscas48704.2020.9184436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bouguezzi, Safa, Hassene Faiedh, and Chokri Souani. "Hardware Implementation of Fixed-Point Convolutional Neural Network For Classification." In 2021 IEEE International Conference on Design & Test of Integrated Micro & Nano-Systems (DTS). IEEE, 2021. http://dx.doi.org/10.1109/dts52014.2021.9498072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhuang, Yuan, Zhenguang Liu, Peng Qian, Qi Liu, Xiang Wang, and Qinming He. "Smart Contract Vulnerability Detection using Graph Neural Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/454.

Full text
Abstract:
The security problems of smart contracts have drawn extensive attention due to the enormous financial losses caused by vulnerabilities. Existing methods on smart contract vulnerability detection heavily rely on fixed expert rules, leading to low detection accuracy. In this paper, we explore using graph neural networks (GNNs) for smart contract vulnerability detection. Particularly, we construct a contract graph to represent both syntactic and semantic structures of a smart contract function. To highlight the major nodes, we design an elimination phase to normalize the graph. Then, we propose a degree-free graph convolutional neural network (DR-GCN) and a novel temporal message propagation network (TMP) to learn from the normalized graphs for vulnerability detection. Extensive experiments show that our proposed approach significantly outperforms state-of-the-art methods in detecting three different types of vulnerabilities.
APA, Harvard, Vancouver, ISO, and other styles
9

Khatua, Kaushik, Hillol Maity, Santanu Chattopadhyay, Indranil Sengupta, Girish Patankar, and Parthajit Bhattacharya. "A Deep Neural Network Augmented Approach for Fixed Polarity AND-XOR Network Synthesis." In TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). IEEE, 2019. http://dx.doi.org/10.1109/tencon.2019.8929289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wawrzynski, Pawel. "Fixed point method of step-size estimation for on-line neural network training." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596596.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Fixed neural network"

1

Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.

Full text
Abstract:
We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model in runtime can be flexibly and directly set to different bit-widths, by truncating the least significant bits, to support dynamic speed and accuracy trade-off. When all layers are set to low bits, we show that the model achieved accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where in practice trade-offs between model accuracy and runtime efficiency are often sought. Previous literature presents solutions to train models at each individual fixed efficiency/accuracy trade-off point. But how to produce a model flexible in runtime precision is largely unexplored. When the demand of efficiency/accuracy trade-off varies from time to time or even dynamically changes in runtime, it is infeasible to re-train models accordingly, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this achievement is agnostic to model architectures. We experimentally validated our method with different deep network backbones (AlexNet-small, ResNet-20, ResNet-50) on different datasets (SVHN, CIFAR-10, ImageNet) and observed consistent results.
APA, Harvard, Vancouver, ISO, and other styles
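The bit-truncation mechanism described in the abstract above is simple enough to show directly. The snippet below is a hypothetical illustration on a single 8-bit signed quantized weight (not the authors' code); an arithmetic right shift followed by a left shift zeroes the least significant bits while preserving the sign:

```python
def truncate_bits(q8, drop):
    """Emulate a lower-precision weight by truncating the `drop` least
    significant bits of an 8-bit quantized value (sign-preserving)."""
    return (q8 >> drop) << drop

w8 = 0b01101101                    # 8-bit quantized weight, value 109
w4_equiv = truncate_bits(w8, 4)    # keep only the top 4 bits
w4_neg = truncate_bits(-109, 4)    # arithmetic shift keeps negatives negative
```

Because the same stored bits serve every precision, one model file supports the full range of runtime speed/accuracy operating points.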
