Journal articles on the topic "Fixed neural network"

To see the other types of publications on this topic, follow the link: Fixed neural network.


Consult the top 50 journal articles for your research on the topic "Fixed neural network".

Next to every source in the list of references, there is an "Add to bibliography" button. Press it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf and read its online abstract whenever these are available in the source metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Guanzheng, Rubin Wang, Wanzeng Kong, and Jianhai Zhang. "The Relationship between Sparseness and Energy Consumption of Neural Networks." Neural Plasticity 2020 (November 25, 2020): 1–13. http://dx.doi.org/10.1155/2020/8848901.

Abstract:
About 50-80% of the total energy of a neural network is consumed by signaling. A network with many active neurons consumes much energy; a network with few active neurons consumes very little. The ratio of active neurons to all neurons of a neural network, that is, the sparseness, therefore affects the network's energy consumption. Laughlin’s studies show that the sparseness of an energy-efficient code depends on the balance between signaling and fixed costs, but they give neither an exact ratio of signaling to fixed costs nor the ratio of active neurons to all neurons in the most energy-efficient neural networks. In this paper, we calculated the ratio of signaling costs to fixed costs from physiological experimental data: it lies between 1.3 and 2.1. We also calculated the ratio of active neurons to all neurons in the most energy-efficient neural networks: it lies between 0.3 and 0.4. Our results are consistent with data from many relevant physiological experiments, indicating that the model used in this paper may reflect neural coding under real conditions. These calculations may be helpful to the study of neural coding.
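One way to make this trade-off concrete is a small numerical sweep. The sketch below is only a plausible formalization, assuming a Levy-Baxter-style objective (entropy of an active/silent unit per unit of energy), which is not necessarily the model used in the paper; the signaling-to-fixed cost ratios are the 1.3-2.1 range quoted above. With this objective, the optimum falls in the 0.3-0.4 band the authors report.

```python
import numpy as np

# Sweep the fraction p of active neurons and score each value by coded
# information (binary entropy) per unit energy (fixed cost + signaling cost
# proportional to p). The entropy-per-energy objective is an assumption.
def entropy_bits(p):
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p = np.linspace(0.01, 0.99, 981)          # candidate sparseness values
for r in (1.3, 1.7, 2.1):                 # signaling cost / fixed cost ratio
    efficiency = entropy_bits(p) / (1 + r * p)
    print(f"r = {r}: most efficient sparseness = {p[efficiency.argmax()]:.2f}")
```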
2

Thatoi, Dhirendranath, and Prabir Kumar Jena. "Inverse Analysis of Crack in Fixed-Fixed Structure by Neural Network with the Aid of Modal Analysis." Advances in Artificial Neural Systems 2013 (March 3, 2013): 1–8. http://dx.doi.org/10.1155/2013/150209.

Abstract:
In this research, the dynamic response of a cracked shaft having a transverse crack is analyzed using theoretical, neural network, and experimental analyses. Structural damage detection using frequency response functions (FRFs) as input data to a back-propagation neural network (BPNN) has been explored. To derive the effect of crack depths and crack locations on the FRF, theoretical expressions have been developed using the strain energy release rate at the crack section of the shaft for the calculation of the local stiffnesses. Based on the flexibility, a new stiffness matrix is deduced that is subsequently used to calculate the natural frequencies and mode shapes of the cracked beam using the neural network method. The results of the numerical analysis and the neural network method are validated against the results from the experimental method. The analysis results on a shaft show that the neural network can assess damage conditions with very good accuracy.
3

Zhang, Jianhua, Yang Li, and Wenbo Fei. "Neural Network-Based Nonlinear Fixed-Time Adaptive Practical Tracking Control for Quadrotor Unmanned Aerial Vehicles." Complexity 2020 (September 26, 2020): 1–13. http://dx.doi.org/10.1155/2020/8828453.

Abstract:
This brief addresses fixed-time practical position and attitude tracking control for quadrotor unmanned aerial vehicles (UAVs) subject to nonlinear dynamics. First, by combining radial basis function neural networks (NNs) with virtual parameter estimating algorithms, an NN adaptive control scheme is developed for UAVs. Then, a fixed-time adaptive law is proposed for the neural networks to achieve fixed-time stability, with a convergence time that depends only on the control gain parameters. Based on Lyapunov analysis and fixed-time stability theory, it is proved that the fixed-time adaptive neural network control is finite-time stable, with a convergence time that depends on the control parameters but not on the initial conditions. The effectiveness of the NN fixed-time control is shown through a simulation of the UAV system.
4

Pan, Tetie, Bao Shi, and Jian Yuan. "Global Stability of Almost Periodic Solution of a Class of Neutral-Type BAM Neural Networks." Abstract and Applied Analysis 2012 (2012): 1–18. http://dx.doi.org/10.1155/2012/482584.

Abstract:
A class of BAM neural networks with variable coefficients and neutral delays is investigated. By employing a fixed-point theorem, the exponential dichotomy, and differential inequality techniques, we obtain some sufficient conditions ensuring the existence and global exponential stability of an almost periodic solution. This is the first investigation of almost periodic solutions of neutral-type BAM neural networks; the results of this paper are new and extend previously known results.
5

Cotter, Neil E., and Peter R. Conwell. "Universal Approximation by Phase Series and Fixed-Weight Networks." Neural Computation 5, no. 3 (May 1993): 359–62. http://dx.doi.org/10.1162/neco.1993.5.3.359.

Abstract:
In this note we show that weak (specified energy bound) universal approximation by neural networks is possible if variable synaptic weights are brought in as network inputs rather than being embedded in a network. We illustrate this idea with a Fourier series network that we transform into what we call a phase series network. The transformation only increases the number of neurons by a factor of two.
6

Scellier, Benjamin, and Yoshua Bengio. "Equivalence of Equilibrium Propagation and Recurrent Backpropagation." Neural Computation 31, no. 2 (February 2019): 312–29. http://dx.doi.org/10.1162/neco_a_01160.

Abstract:
Recurrent backpropagation and equilibrium propagation are supervised learning algorithms for fixed-point recurrent neural networks, which differ in their second phase. In the first phase, both algorithms converge to a fixed point that corresponds to the configuration where the prediction is made. In the second phase, equilibrium propagation relaxes to another nearby fixed point corresponding to smaller prediction error, whereas recurrent backpropagation uses a side network to compute error derivatives iteratively. In this work, we establish a close connection between these two algorithms. We show that at every moment in the second phase, the temporal derivatives of the neural activities in equilibrium propagation are equal to the error derivatives computed iteratively by recurrent backpropagation in the side network. This work shows that it is not required to have a side network for the computation of error derivatives and supports the hypothesis that in biological neural networks, temporal derivatives of neural activities may code for error signals.
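The first phase that both algorithms share is easy to state in code: relax the network to the fixed point s* = σ(Ws* + Ux) where the prediction is read out. Below is a minimal sketch, assuming a tanh nonlinearity and small random weights so that the iteration contracts; the algorithm-specific second phases are omitted.

```python
import numpy as np

# Phase 1 of both algorithms: iterate the recurrent dynamics to a fixed point.
# Sizes, weight scales, and the convergence tolerance are illustrative.
rng = np.random.default_rng(0)
n_hidden, n_input = 20, 5
W = rng.normal(0, 0.1, (n_hidden, n_hidden))  # recurrent weights
U = rng.normal(0, 0.1, (n_hidden, n_input))   # input weights
x = rng.normal(size=n_input)

s = np.zeros(n_hidden)
for step in range(500):
    s_next = np.tanh(W @ s + U @ x)
    if np.max(np.abs(s_next - s)) < 1e-9:     # settled at the fixed point
        break
    s = s_next
print(f"fixed point reached after {step} iterations")
```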
7

Guo, Chengjun, Donal O’Regan, Feiqi Deng, and Ravi P. Agarwal. "Fixed points and exponential stability for a stochastic neutral cellular neural network." Applied Mathematics Letters 26, no. 8 (August 2013): 849–53. http://dx.doi.org/10.1016/j.aml.2013.03.011.

8

White, Robert L., and Lawrence H. Snyder. "A Neural Network Model of Flexible Spatial Updating." Journal of Neurophysiology 91, no. 4 (April 2004): 1608–19. http://dx.doi.org/10.1152/jn.00277.2003.

Abstract:
Neurons in many cortical areas involved in visuospatial processing represent remembered spatial information in retinotopic coordinates. During a gaze shift, the retinotopic representation of a target location that is fixed in the world (world-fixed reference frame) must be updated, whereas the representation of a target fixed relative to the center of gaze (gaze-fixed) must remain constant. To investigate how such computations might be performed, we trained a 3-layer recurrent neural network to store and update a spatial location based on a gaze perturbation signal, and to do so flexibly based on a contextual cue. The network produced an accurate readout of target position when cued to either reference frame, but was less precise when updating was performed. This output mimics the pattern of behavior seen in animals performing a similar task. We tested whether updating would preferentially use gaze position or gaze velocity signals, and found that the network strongly preferred velocity for updating world-fixed targets. Furthermore, we found that gaze position gain fields were not present when velocity signals were available for updating. These results have implications for how updating is performed in the brain.
9

Li, Yang, Jianhua Zhang, Xinli Xu, and Cheng Siong Chin. "Adaptive Fixed-Time Neural Network Tracking Control of Nonlinear Interconnected Systems." Entropy 23, no. 9 (September 1, 2021): 1152. http://dx.doi.org/10.3390/e23091152.

Abstract:
In this article, a novel adaptive fixed-time neural network tracking control scheme for nonlinear interconnected systems is proposed. An adaptive backstepping technique is used to address unknown system uncertainties in the fixed-time setting, and neural networks are used to identify the unknown uncertainties. Via Lyapunov stability analysis, the study shows that, under the proposed control scheme, each state in the system converges into a small region near zero within a fixed time. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed approach, and a step-by-step procedure for engineers in industrial process applications is given.
10

Sergeev, Fedor, Elena Bratkovskaya, Ivan Kisel, and Iouri Vassiliev. "Deep learning for quark–gluon plasma detection in the CBM experiment." International Journal of Modern Physics A 35, no. 33 (November 30, 2020): 2043002. http://dx.doi.org/10.1142/s0217751x20430022.

Abstract:
Classification of processes in heavy-ion collisions in the CBM experiment (FAIR/GSI, Darmstadt) using neural networks is investigated. Fully-connected neural networks and a deep convolutional neural network are built to identify quark–gluon plasma simulated within the Parton-Hadron-String Dynamics (PHSD) microscopic off-shell transport approach for central Au+Au collisions at a fixed energy. The convolutional neural network outperforms the fully-connected networks and reaches 93% accuracy on the validation set, with the remaining 7% of collisions classified incorrectly.
11

Li, Yang, Yuanyuan Bao, and Wai Chen. "Fixed-Sign Binary Neural Network: An Efficient Design of Neural Network for Internet-of-Things Devices." IEEE Access 8 (2020): 164858–63. http://dx.doi.org/10.1109/access.2020.3022902.

12

Hu, Dengzhou, Xing He, and Xingxing Ju. "A modified projection neural network with fixed-time convergence." Neurocomputing 489 (June 2022): 90–97. http://dx.doi.org/10.1016/j.neucom.2022.03.023.

13

Wu, Chyuan-Tyng, Peter van Beek, Phillip Schmidt, Joao Peralta Moreira, and Thomas R. Gardos. "Evaluation of semi-frozen semi-fixed neural network for efficient computer vision inference." Electronic Imaging 2021, no. 17 (January 18, 2021): 213–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.17.avm-213.

Abstract:
Deep neural networks have been utilized in an increasing number of computer vision tasks, demonstrating superior performance. Much research has been focused on making deep networks more suitable for efficient hardware implementation, for low-power and low-latency real-time applications. In [1], Isikdogan et al. introduced a deep neural network design that provides an effective trade-off between flexibility and hardware efficiency. The proposed solution consists of fixed-topology hardware blocks, with partially frozen/partially trainable weights, that can be configured into a full network. Initial results in a few computer vision tasks were presented in [1]. In this paper, we further evaluate this network design by applying it to several additional computer vision use cases and comparing it to other hardware-friendly networks. The experimental results presented here show that the proposed semi-fixed semi-frozen design achieves competitive performance on a variety of benchmarks, while maintaining very high hardware efficiency.
14

Li, Yongkun, and Xiaofang Meng. "Existence and Global Exponential Stability of Pseudo Almost Periodic Solutions for Neutral Type Quaternion-Valued Neural Networks with Delays in the Leakage Term on Time Scales." Complexity 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/9878369.

Abstract:
We propose a class of neutral type quaternion-valued neural networks with delays in the leakage term on time scales that can unify the discrete-time and the continuous-time neural networks. In order to avoid the difficulty brought by the noncommutativity of quaternion multiplication, we first decompose the quaternion-valued system into four real-valued systems. Then, by applying the exponential dichotomic theory of linear dynamic equations on time scales, Banach’s fixed point theorem, the theory of calculus on time scales, and inequality techniques, we obtain some sufficient conditions on the existence and global exponential stability of pseudo almost periodic solutions for this class of neural networks. Our results are completely new, both for neural networks governed by differential equations and for those governed by difference equations, and show that, under a simple condition, the continuous-time quaternion-valued network and its corresponding discrete-time quaternion-valued network have the same dynamical behavior for pseudo almost periodicity. Finally, a numerical example is given to illustrate the feasibility of our results.
15

Afridi, Muhammad Ishaq. "Cognition in a Cognitive Routing System for Mobile Ad-Hoc Network through Leaning Automata and Neural Network." Applied Mechanics and Materials 421 (September 2013): 694–700. http://dx.doi.org/10.4028/www.scientific.net/amm.421.694.

Abstract:
A cognitive routing system intelligently selects one protocol at a time for specific routing conditions and environments in a MANET. Cognition, or self-learning, can be achieved in a cognitive routing system for a mobile ad-hoc network (MANET) through a learning system such as learning automata or neural networks. This article covers the application of learning automata and neural networks to achieve cognition in a MANET routing system. Mobile ad-hoc networks are dynamic in nature and lack any fixed infrastructure, so the implementation of cognition enhances the performance of the overall routing system in these networks. In learning automata, the process of learning is different from reasoning or decision making, and learning automata require little knowledge to take decisions. A neural network can be improved by increasing the number of neurons and changing parameters; self-training enhances neural network performance and lets it select a suitable protocol for a given network environment. Cognition in a MANET is based either upon learning automata, as in some wireless sensor networks, or upon specialized cognitive neural networks such as the Elman network. Learning automata do not follow predetermined rules and have the ability to learn and evolve. The interaction of learning automata with the MANET environment results in the evolution of the cognition system.
16

Müller, Peter, and David Rios Insua. "Issues in Bayesian Analysis of Neural Network Models." Neural Computation 10, no. 3 (April 1, 1998): 749–70. http://dx.doi.org/10.1162/089976698300017737.

Abstract:
Stemming from work by Buntine and Weigend (1991) and MacKay (1992), there is a growing interest in Bayesian analysis of neural network models. Although conceptually simple, this problem is computationally involved. We suggest a very efficient Markov chain Monte Carlo scheme for inference and prediction with fixed-architecture feedforward neural networks. The scheme is then extended to the variable architecture case, providing a data-driven procedure to identify sensible architectures.
17

Curto, Carina, Jesse Geneson, and Katherine Morrison. "Fixed Points of Competitive Threshold-Linear Networks." Neural Computation 31, no. 1 (January 2019): 94–155. http://dx.doi.org/10.1162/neco_a_01151.

Abstract:
Threshold-linear networks (TLNs) are models of neural networks that consist of simple, perceptron-like neurons and exhibit nonlinear dynamics determined by the network's connectivity. The fixed points of a TLN, including both stable and unstable equilibria, play a critical role in shaping its emergent dynamics. In this work, we provide two novel characterizations for the set of fixed points of a competitive TLN: the first is in terms of a simple sign condition, while the second relies on the concept of domination. We apply these results to a special family of TLNs, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. This leads us to prove a series of graph rules that enable one to determine fixed points of a CTLN by analyzing the underlying graph. In addition, we study larger networks composed of smaller building block subnetworks and prove several theorems relating the fixed points of the full network to those of its components. Our results provide the foundation for a kind of graphical calculus to infer features of the dynamics from a network's connectivity.
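For intuition, the fixed points of such dynamics, dx/dt = -x + [Wx + b]_+, can be found by brute force on a small network: a support is admissible when its on-support solution is strictly positive and every off-support neuron receives non-positive input. The sketch below is a generic check of those sign conditions, not the paper's graph rules; the cyclically symmetric 3-neuron competitive matrix and uniform input are arbitrary choices.

```python
import numpy as np
from itertools import combinations

# Brute-force the fixed points of dx/dt = -x + max(W x + b, 0): a support is
# admissible when its on-support solution is strictly positive and every
# off-support neuron receives non-positive input. W and b are toy choices.
def fixed_points(W, b):
    n = len(b)
    points = []
    for k in range(1, n + 1):
        for supp in combinations(range(n), k):
            s = list(supp)
            A = np.eye(k) - W[np.ix_(s, s)]
            try:
                x_s = np.linalg.solve(A, b[s])
            except np.linalg.LinAlgError:
                continue
            if np.any(x_s <= 0):
                continue                   # active neurons must stay positive
            x = np.zeros(n)
            x[s] = x_s
            off = np.setdiff1d(np.arange(n), s)
            if np.all((W @ x + b)[off] <= 0):
                points.append(x)           # inactive neurons stay silent
    return points

# Cyclically symmetric competitive network on 3 neurons (arbitrary example).
W = np.array([[0.0, -0.5, -1.5],
              [-1.5, 0.0, -0.5],
              [-0.5, -1.5, 0.0]])
b = np.ones(3)
for x in fixed_points(W, b):
    print(np.round(x, 3))
```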
18

DeVore, Ronald, Boris Hanin, and Guergana Petrova. "Neural network approximation." Acta Numerica 30 (May 2021): 327–444. http://dx.doi.org/10.1017/s0962492921000052.

Abstract:
Neural networks (NNs) are the method of choice for building learning algorithms. They are now being investigated for other numerical tasks such as solving high-dimensional partial differential equations. Their popularity stems from their empirical success on several challenging learning problems (computer chess/Go, autonomous navigation, face recognition). However, most scholars agree that a convincing theoretical explanation for this success is still lacking. Since these applications revolve around approximating an unknown function from data observations, part of the answer must involve the ability of NNs to produce accurate approximations. This article surveys the known approximation properties of the outputs of NNs with the aim of uncovering the properties that are not present in the more traditional methods of approximation used in numerical analysis, such as approximations using polynomials, wavelets, rational functions and splines. Comparisons are made with traditional approximation methods from the viewpoint of rate distortion, i.e. error versus the number of parameters used to create the approximant. Another major component in the analysis of numerical approximation is the computational time needed to construct the approximation, and this in turn is intimately connected with the stability of the approximation algorithm. So the stability of numerical approximation using NNs is a large part of the analysis put forward. The survey, for the most part, is concerned with NNs using the popular ReLU activation function. In this case the outputs of the NNs are piecewise linear functions on rather complicated partitions of the domain of f into cells that are convex polytopes. When the architecture of the NN is fixed and the parameters are allowed to vary, the set of output functions of the NN is a parametrized nonlinear manifold. It is shown that this manifold has certain space-filling properties leading to an increased ability to approximate (better rate distortion) but at the expense of numerical stability. The space filling creates the challenge to the numerical method of finding best or good parameter choices when trying to approximate.
19

Li, Bing, Yongkun Li, and Xiaofang Meng. "The Existence and Global Exponential Stability of Almost Periodic Solutions for Neutral-Type CNNs on Time Scales." Mathematics 7, no. 4 (March 29, 2019): 321. http://dx.doi.org/10.3390/math7040321.

Abstract:
In this paper, neutral-type competitive neural networks with mixed time-varying delays and leakage delays on time scales are proposed. Based on the contraction fixed-point theorem, some sufficient conditions that are independent of the backwards graininess function of the time scale are obtained for the existence and global exponential stability of almost periodic solutions of neural networks under consideration. The results obtained are brand new, indicating that the continuous time and discrete-time conditions of the network share the same dynamic behavior. Finally, two examples are given to illustrate the validity of the results obtained.
20

Fukai, Tomoki, and Masatoshi Shiino. "Memory Recall by Quasi-Fixed-Point Attractors in Oscillator Neural Networks." Neural Computation 7, no. 3 (May 1995): 529–48. http://dx.doi.org/10.1162/neco.1995.7.3.529.

Abstract:
It is shown that approximate fixed-point attractors rather than synchronized oscillations can be employed by a wide class of neural networks of oscillators to achieve an associative memory recall. This computational ability of oscillator neural networks is ensured by the fact that reduced dynamic equations for phase variables in general involve two terms that can be respectively responsible for the emergence of synchronization and cessation of oscillations. Thus the cessation occurs in memory retrieval if the corresponding term dominates in the dynamic equations. A bottomless feature of the energy function for such a system makes the retrieval states quasi-fixed points, which admit continual rotating motion to a small portion of oscillators, when an extensive number of memory patterns are embedded. An approximate theory based on the self-consistent signal-to-noise analysis enables one to study the equilibrium properties of the neural network of phase variables with the quasi-fixed-point attractors. As far as the memory retrieval by the quasi-fixed points is concerned, the equilibrium properties including the storage capacity of oscillator neural networks are proved to be similar to those of the Hopfield type neural networks.
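A toy simulation conveys the flavor of associative recall in phase models, although it deliberately omits the second dynamical term responsible for the cessation of oscillations that the paper analyzes: binary patterns are stored in Hebbian phase couplings, and a noisy cue relaxes toward a stored pattern. All sizes and constants below are illustrative.

```python
import numpy as np

# Phase-oscillator associative memory: patterns are stored as relative phases
# (0 or pi) in Hebbian couplings J, and the phase dynamics
# d(theta_i)/dt = sum_j J_ij sin(theta_j - theta_i) descend an energy function.
rng = np.random.default_rng(1)
N, P = 100, 3
xi = rng.choice([-1.0, 1.0], size=(P, N))          # stored binary patterns
J = xi.T @ xi / N                                  # Hebbian phase couplings

theta = np.pi * (xi[0] < 0) + 0.8 * rng.normal(size=N)  # noisy cue of pattern 0
dt = 0.05
for _ in range(2000):                              # simple Euler integration
    dtheta = (J * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * dtheta

overlap = abs(np.exp(1j * theta) @ xi[0]) / N      # 1.0 would be perfect recall
print(f"overlap with the cued pattern: {overlap:.2f}")
```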
21

Ak, Cevher, Ali Yıldız, Ali Akdağlı, and Mustafa Berkan Biçer. "Computing the pull-in voltage of fixed–fixed micro-actuators by artificial neural network." Microsystem Technologies 23, no. 8 (September 14, 2016): 3537–46. http://dx.doi.org/10.1007/s00542-016-3128-4.

22

Hsu, Pai-Hui. "EVALUATING THE INITIALIZATION METHODS OF WAVELET NETWORKS FOR HYPERSPECTRAL IMAGE CLASSIFICATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 17, 2016): 83–89. http://dx.doi.org/10.5194/isprsarchives-xli-b7-83-2016.

Abstract:
The idea of using artificial neural networks has been proven useful for hyperspectral image classification. However, the high dimensionality of hyperspectral images usually leads to the failure of constructing an effective neural network classifier. To improve the performance of a neural network classifier, wavelet-based feature extraction algorithms can be applied to extract useful features for hyperspectral image classification. However, features extracted with fixed position and dilation parameters of the wavelets provide insufficient characteristics of the spectrum. In this study, wavelet networks, which integrate the advantages of wavelet-based feature extraction and neural network classification, are proposed for hyperspectral image classification. Wavelet networks are a kind of feed-forward neural network using wavelets as the activation function. Both the position and the dilation parameters of the wavelets are optimized along with the weights of the network during the training phase. The value of wavelet networks lies in their capability to optimize network weights and extract essential features simultaneously for hyperspectral image classification. In this study, the influence of the learning rate and momentum term during the network training phase is presented, and several initialization modes of wavelet networks were used to test the performance of wavelet networks.
23

Hsu, Pai-Hui. "EVALUATING THE INITIALIZATION METHODS OF WAVELET NETWORKS FOR HYPERSPECTRAL IMAGE CLASSIFICATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 17, 2016): 83–89. http://dx.doi.org/10.5194/isprs-archives-xli-b7-83-2016.

Abstract:
The idea of using artificial neural networks has been proven useful for hyperspectral image classification. However, the high dimensionality of hyperspectral images usually leads to the failure of constructing an effective neural network classifier. To improve the performance of a neural network classifier, wavelet-based feature extraction algorithms can be applied to extract useful features for hyperspectral image classification. However, features extracted with fixed position and dilation parameters of the wavelets provide insufficient characteristics of the spectrum. In this study, wavelet networks, which integrate the advantages of wavelet-based feature extraction and neural network classification, are proposed for hyperspectral image classification. Wavelet networks are a kind of feed-forward neural network using wavelets as the activation function. Both the position and the dilation parameters of the wavelets are optimized along with the weights of the network during the training phase. The value of wavelet networks lies in their capability to optimize network weights and extract essential features simultaneously for hyperspectral image classification. In this study, the influence of the learning rate and momentum term during the network training phase is presented, and several initialization modes of wavelet networks were used to test the performance of wavelet networks.
24

Frady, E. Paxon, and Friedrich T. Sommer. "Robust computation with rhythmic spike patterns." Proceedings of the National Academy of Sciences 116, no. 36 (August 20, 2019): 18050–59. http://dx.doi.org/10.1073/pnas.1902653116.

Abstract:
Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns whose components can assume continuous-valued phase angles and binary magnitudes can be stored and retrieved as stable fixed points in the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct 2 spiking neural networks to approximate the complex algebraic computations in TPAM, a reductionist model with resonate-and-fire neurons and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
25

Aziz, K. A. A., Abdul Kadir, Rostam Affendi Hamzah, and Amat Amir Basari. "Product Identification Using Image Processing and Radial Basis Function Neural Networks." Applied Mechanics and Materials 761 (May 2015): 120–24. http://dx.doi.org/10.4028/www.scientific.net/amm.761.120.

Abstract:
This paper presents product identification using image processing and radial basis function neural networks. The system identifies a specific product based on its shape. Image processing was applied to the acquired image, and the product was recognized using a Radial Basis Function Neural Network (RBFNN). RBF neural networks offer several advantages compared to other neural network architectures: they can be trained using a fast two-stage training algorithm, and the network possesses the property of best approximation. The output of the network can be optimized by setting suitable values of the center and the spread of the RBF. In this paper, a fixed spread value was used for every cluster. The system detected all four products with a 100% success rate using a ±0.2 tolerance.
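In the spirit of the two-stage training mentioned above, a minimal RBF sketch follows: centers are fixed first (here by random sampling), a single fixed spread is used for every unit, and the linear output layer is then solved by least squares. The toy data, number of centers, and spread value are assumptions, not the paper's setup.

```python
import numpy as np

# Two-stage RBF training: (1) fix centers and a single spread for all units,
# (2) solve the linear output weights by least squares.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                     # toy training inputs
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])        # toy target

centers = X[rng.choice(len(X), 20, replace=False)]   # stage 1: pick centers
spread = 0.4                                         # fixed spread for all units

def rbf_features(X, centers, spread):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * spread ** 2))           # Gaussian basis functions

Phi = rbf_features(X, centers, spread)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # stage 2: linear solve

pred = rbf_features(X, centers, spread) @ w
print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```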
26

Seryakov, A., and D. Uzhva. "Convolutional Neural Network for Centrality Determination in Fixed Target Experiments." Physics of Particles and Nuclei 51, no. 3 (May 2020): 331–36. http://dx.doi.org/10.1134/s1063779620030259.

27

Wawrzyński, PaweŁ, and Bartosz Papis. "Fixed point method for autonomous on-line neural network training." Neurocomputing 74, no. 17 (October 2011): 2893–905. http://dx.doi.org/10.1016/j.neucom.2011.03.029.

28

Farkas, I., P. Reményi, and A. Biró. "Modelling of a Fixed Bed Grain Dryer Using Neural Network." IFAC Proceedings Volumes 31, no. 9 (June 1998): 7–12. http://dx.doi.org/10.1016/s1474-6670(17)44020-1.

29

Kwak, Yuyeong, Junho Song, and Hongchul Lee. "Neural network with fixed noise for index-tracking portfolio optimization." Expert Systems with Applications 183 (November 2021): 115298. http://dx.doi.org/10.1016/j.eswa.2021.115298.

30

Ge, S. S., and T. H. Lee. "Parallel Adaptive Neural Network Control of Robots." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 208, no. 4 (November 1994): 231–37. http://dx.doi.org/10.1243/pime_proc_1994_208_336_02.

Abstract:
In this paper, a parallel adaptive neural network (NN) control design for robots motivated by the work of Lee and Tan is presented. The controller is based on direct adaptive techniques and an approach of using an additional parallel NN to provide adaptive enhancements to a basic fixed controller, which can be either an NN-based non-linear controller or a model-based non-linear controller. It is shown that, if Gaussian radial basis function networks are used for the additional parallel NN, uniformly stable adaptation is assured and asymptotic tracking of the position reference signal is achieved.
31

Bolland, Peter J., and Jerome T. Connor. "A Constrained Neural Network Kalman Filter for Price Estimation in High Frequency Financial Data." International Journal of Neural Systems 08, no. 04 (August 1997): 399–415. http://dx.doi.org/10.1142/s0129065797000409.

Abstract:
In this paper we present a neural network extended Kalman filter for modeling noisy financial time series. The neural network is employed to estimate the nonlinear dynamics of the extended Kalman filter. Conditions for the neural network weight matrix are provided to guarantee the stability of the filter. The extended Kalman filter presented is designed to filter three types of noise commonly observed in financial data: process noise, measurement noise, and arrival noise. The erratic arrival of data (arrival noise) results in the neural network predictions being iterated into the future. Constraining the neural network to have a fixed point at the origin produces better iterated predictions and more stable results. The performance of constrained and unconstrained neural networks within the extended Kalman filter is demonstrated on "Quote" tick data from the $/DM exchange rate (1993–1995).
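One simple way to give a network an exact fixed point at the origin, so that iterated predictions cannot drift when no new ticks arrive, is to subtract the network's output at zero. This is only an illustrative construction; the paper's constraint may be imposed differently.

```python
import numpy as np

# Force a one-step predictor g to satisfy g(0) = 0 by construction, so that
# iterating it from the origin stays put. The tiny MLP is a toy stand-in.
rng = np.random.default_rng(6)
W1, b1 = rng.normal(0, 0.5, (16, 2)), rng.normal(0, 0.5, 16)
W2, b2 = rng.normal(0, 0.5, (2, 16)), rng.normal(0, 0.5, 2)

def f(x):                       # unconstrained one-step predictor
    return W2 @ np.tanh(W1 @ x + b1) + b2

def f_constrained(x):           # g(x) = f(x) - f(0), so g(0) = 0 exactly
    return f(x) - f(np.zeros(2))

x = np.zeros(2)
for _ in range(50):             # iterate predictions with no new data
    x = f_constrained(x)
print(np.round(x, 6))           # the origin is an exact fixed point: [0. 0.]
```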
32

Abbass, Ahmed Abdulrudah, Hussein Lafta Hussein, Wisam Abed Shukur, Jasim Kaabi, and Robert Tornai. "Efficient Eye Recognition for Secure Systems Using Convolutional Neural Network." Webology 19, no. 1 (January 20, 2022): 4967–78. http://dx.doi.org/10.14704/web/v19i1/web19333.

Abstract:
Eye recognition is an important issue in applications such as security systems, credit card control, and identification of offenders. Using video images removes the limitation of fixed images and makes it possible to capture users’ images under any conditions while performing eye recognition. There are several challenges in these systems: changes in individual gestures, changes in lighting, face coverage, low quality of video images, and changes in personal characteristics in each frame. Eye recognition from images requires two phases, detection and recognition, which are used in security systems to identify persons. The main aim of this paper is innovation in the eye recognition phase. A new and fast method is proposed for human eye recognition that can quickly locate the human eye in an input image: eyes are detected in the input image with a CNN. The proposed method was tested on different images and provided the highest accuracy for the image recognition used in security systems.
33

Liu, Jiandu, Bokui Chen, Dengcheng Yan, and Lei Wang. "Average number of fixed points and attractors in Hopfield neural networks." International Journal of Modern Physics C 29, no. 08 (August 2018): 1850076. http://dx.doi.org/10.1142/s0129183118500766.

Abstract:
Calculating the exact number of fixed points and attractors of an arbitrary Hopfield neural network is a non-deterministic polynomial (NP)-hard problem. In this paper, we first calculate the average number of fixed points in such networks versus their size and threshold of neurons, in terms of a statistical method, which has been applied to the calculation of the average number of metastable states in spin glass systems. Then the same method is expanded to study the average number of attractors in such networks. The results of the calculation qualitatively agree well with the numerical calculation. The discrepancies between them are also well explained.
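For small N, the quantity whose average the paper computes analytically can be obtained exactly by enumerating all 2^N states: a state is a fixed point of the sign-update dynamics when every neuron keeps its sign. The random symmetric couplings and zero threshold below are illustrative.

```python
import numpy as np
from itertools import product

# Count fixed points of a small Hopfield network exactly by enumeration;
# this is infeasible for large N, which is why the paper works with averages.
rng = np.random.default_rng(2)
N, theta = 10, 0.0
W = rng.normal(size=(N, N))
W = (W + W.T) / 2                        # symmetric couplings
np.fill_diagonal(W, 0)

count = 0
for s in product([-1, 1], repeat=N):     # all 2^N states
    s = np.array(s)
    if np.all(s * (W @ s - theta) > 0):  # every neuron keeps its sign
        count += 1
print(f"{count} fixed points out of {2**N} states")
```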
34

Amalia, Sitti. "Identification Of Number Using Artificial Neural Network Backpropagation." MATEC Web of Conferences 215 (2018): 01011. http://dx.doi.org/10.1051/matecconf/201821501011.

Abstract:
This research proposes the design and implementation of a system for voice pattern recognition of numbers with offline pronunciation. An artificial neural network with the backpropagation algorithm was used in the simulation test. The test was carried out on 100 voice files obtained from the voices of 10 people for 10 different numbers, the words consisting of the numbers 0 to 9. The trial was done with artificial neural network parameters such as the tolerance value and the number of neurons. The best result was obtained with the tolerance value varied and the number of neurons fixed: the recognition rates of the network with the optimal architecture and network parameters were 82.2% for training data and 53.3% for new data. With the tolerance value fixed and the number of neurons varied, the rates were 82.2% for training data and 54.4% for new data.
35

Torkamani, MohamadAli, Shiv Shankar, Amirmohammad Rooshenas, and Phillip Wallis. "Differential Equation Units: Learning Functional Forms of Activation Functions from Data." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6030–37. http://dx.doi.org/10.1609/aaai.v34i04.6065.

Abstract:
Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure. We introduce differential equation units (DEUs), an improvement to modern neural networks, which enables each neuron to learn a particular nonlinear activation function from a family of solutions to an ordinary differential equation. Specifically, each neuron may change its functional form during training based on the behavior of the other parts of the network. We show that using neurons with DEU activation functions results in a more compact network capable of achieving comparable, if not superior, performance when compared to much larger networks.
36

Boukabou, A., and N. Mansouri. "Neural Predictive Control of Unknown Chaotic Systems." Nonlinear Analysis: Modelling and Control 10, no. 2 (April 25, 2005): 95–106. http://dx.doi.org/10.15388/na.2005.10.2.15125.

Abstract:
In this work, a neural network is developed for modelling and controlling a chaotic system based on measured input-output data pairs. In the chaos modelling phase, a neural network is trained on the unknown system. Then, a predictive control mechanism is implemented with the neural network to reach the close neighborhood of a chosen unstable fixed point embedded in the chaotic system. The effectiveness of the proposed method for both modelling and prediction-based control is demonstrated on the chaotic logistic equation and the Hénon map.
37

Men, Hong, Hai Yan Liu, Lei Wang, and Yun Peng Pan. "An Optimizing Method of Competitive Neural Network." Key Engineering Materials 467-469 (February 2011): 894–99. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.894.

Abstract:
This paper presents an optimizing method for a competitive neural network (CNN): during clustering analysis, the optimal number of output neurons is fixed according to the change of the DB value, and the connection weights are then adjusted, including increasing, dividing, and deleting neurons. Each neuron has a different learning-rate trend according to the change of its probability. The optimizing method makes classification more accurate. Simulation results showed that the optimized network structure has a strong ability to adjust the number of clusters dynamically and gives good classification results.
38

Hasan, Ali T. "Under-Actuated Robot Manipulator Positioning Control Using Artificial Neural Network Inversion Technique." Advances in Artificial Intelligence 2012 (October 3, 2012): 1–6. http://dx.doi.org/10.1155/2012/927905.

Abstract:
This paper is devoted to solving the positioning control problem of an underactuated robot manipulator. An artificial neural network inversion technique was used, in which a network representing the forward dynamics of the system was trained to learn the position of the passive joint over the working space of a 2R underactuated robot. The weights obtained from the learning process were fixed, and the network was inverted to represent the inverse dynamics of the system and then used in the estimation phase to estimate the position of the passive joint for a new set of data the network was not previously trained on. The data used in this research were recorded experimentally from sensors fixed on the robot joints in order to overcome whatever uncertainties are present in the real world, such as ill-defined linkage parameters, link flexibility, and backlash in gear trains. The results were verified experimentally to show the success of the proposed control strategy.
39

Stamova, Ivanka, Sotir Sotirov, Evdokia Sotirova, and Gani Stamov. "Impulsive Fractional Cohen-Grossberg Neural Networks: Almost Periodicity Analysis." Fractal and Fractional 5, no. 3 (July 27, 2021): 78. http://dx.doi.org/10.3390/fractalfract5030078.

Abstract:
In this paper, a fractional-order Cohen–Grossberg-type neural network with Caputo fractional derivatives is investigated. The notion of almost periodicity is adapted to the impulsive generalization of the model. General types of impulsive perturbations not necessarily at fixed moments are considered. Criteria for the existence and uniqueness of almost periodic waves are proposed. Furthermore, the global perfect Mittag–Leffler stability notion for the almost periodic solution is defined and studied. In addition, a robust global perfect Mittag–Leffler stability analysis is proposed. Lyapunov-type functions and fractional inequalities are applied in the proof. Since the type of Cohen–Grossberg neural networks generalizes several basic neural network models, this research contributes to the development of the investigations on numerous fractional neural network models.
40

Chan, R. H. T., P. K. S. Tam, and D. N. K. Leung. "Solving the motion planning problem by using neural networks." Robotica 12, no. 4 (July 1994): 323–33. http://dx.doi.org/10.1017/s0263574700017343.

Abstract:
SUMMARY: This paper presents a new neural networks-based method to solve the motion planning problem, i.e. to construct a collision-free path for a moving object among fixed obstacles. Our ‘navigator’ basically consists of two neural networks: the first is a modified feed-forward neural network used to determine the configuration space; the moving object is modelled as a configuration point in the configuration space. The second neural network is a modified bidirectional associative memory used to find a path for the configuration point through the configuration space while avoiding the configuration obstacles. The basic processing units of the neural networks may be constructed using logic gates, including AND gates, OR gates, NOT gates, and flip-flops. Examples of efficient solutions to difficult motion planning problems using the proposed techniques are presented.
41

V.M., Sineglazov, and Chumachenko O.I. "Structural-parametric synthesis of deep learning neural networks." Artificial Intelligence 25, no. 4 (December 25, 2020): 42–51. http://dx.doi.org/10.15407/jai2020.04.042.

Abstract:
The structural-parametric synthesis of deep learning neural networks, in particular the convolutional neural networks used in image processing, is considered. A classification of modern convolutional neural network architectures is given. It is shown that almost every convolutional neural network, depending on its topology, has unique blocks that determine its essential features (for example, the Squeeze-and-Excitation block, the Convolutional Block Attention Module (channel attention module, spatial attention module), the Residual block, the Inception module, and the ResNeXt block). The problem of structural-parametric synthesis of convolutional neural networks is stated, and a genetic algorithm is proposed for its solution. The genetic algorithm is used to effectively overcome the large search space: on the one hand, to generate possible topologies of the convolutional neural network, namely the choice of specific blocks and their locations in the structure of the convolutional neural network, and on the other hand, to solve the problem of structural-parametric synthesis of the convolutional neural network of the selected topology. The most significant parameters of the convolutional neural network are determined. An encoding method is proposed that allows each network structure to be represented as a string of fixed length in binary format. After that, several standard genetic operations are defined, i.e. selection, mutation, and crossover, which eliminate weak individuals of the previous generation and use them to generate competitive ones. An example of solving this problem is given; a database of ultrasound results from patients with thyroid disease was used as a training sample.
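The encoding idea sketches naturally in a few lines: a fixed-length bit string selects one block per slot, and selection, mutation, and crossover act directly on the strings. The block catalogue, genome layout, and operator details below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# Fixed-length binary encoding of a CNN topology: 4 slots x 3 bits, each
# 3-bit chunk selecting one of 8 candidate block types. All names are toys.
rng = np.random.default_rng(3)
BLOCKS = ["SE", "CBAM", "Residual", "Inception", "ResNeXt",
          "Conv3x3", "Conv5x5", "Identity"]
SLOTS, BITS = 4, 3                                  # genome = 12 bits

def decode(genome):
    chunks = genome.reshape(SLOTS, BITS)
    return [BLOCKS[int("".join(map(str, c)), 2)] for c in chunks]

def crossover(a, b):
    cut = rng.integers(1, len(a))                   # one-point crossover
    return np.concatenate([a[:cut], b[cut:]])

def mutate(genome, rate=0.1):
    flips = rng.random(len(genome)) < rate
    return genome ^ flips                           # flip selected bits

pop = rng.integers(0, 2, (6, SLOTS * BITS))         # random initial population
child = mutate(crossover(pop[0], pop[1]))
print(decode(child))                                # e.g. ['Residual', 'SE', ...]
```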
42

Williams, Ronald J., and David Zipser. "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks." Neural Computation 1, no. 2 (June 1989): 270–80. http://dx.doi.org/10.1162/neco.1989.1.2.270.

Abstract:
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that they require nonlocal communication in the network being trained and are computationally expensive. These algorithms allow networks having recurrent connections to learn complex tasks that require the retention of information over time periods having either fixed or indefinite length.
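The core bookkeeping of this algorithm, now usually called real-time recurrent learning (RTRL), is a sensitivity tensor p[k,i,j] = ∂y_k/∂W_ij carried forward while the network runs; storing it costs O(n^3) memory and updating it O(n^4) time, which is exactly the nonlocal expense noted above. Below is a one-step-at-a-time sketch on a toy delayed-echo task; the sizes, task, and learning rate are arbitrary choices.

```python
import numpy as np

# One-readout RTRL sketch: the sensitivity tensor p[k,i,j] = dy_k/dW_ij is
# propagated forward alongside the network state, enabling online updates.
rng = np.random.default_rng(4)
n, m = 4, 1                                   # recurrent units, inputs
W = rng.normal(0, 0.3, (n, n + m + 1))        # weights over [y, x, bias]
p = np.zeros((n, *W.shape))                   # sensitivity tensor
y = np.zeros(n)
lr = 0.05

for t in range(200):
    x = np.array([np.sin(0.2 * t)])
    z = np.concatenate([y, x, [1.0]])
    s = W @ z
    y_new = np.tanh(s)
    # RTRL recursion: p'[k] = f'(s_k) * (sum_l W[k,l] p[l] + delta_ki * z_j)
    fp = 1 - y_new ** 2
    p_new = np.einsum("kl,lij->kij", W[:, :n], p)
    p_new[np.arange(n), np.arange(n), :] += z
    p = fp[:, None, None] * p_new
    y = y_new
    # online gradient step: unit 0 should echo the input delayed by 5 steps
    err = np.sin(0.2 * (t - 5)) - y[0]
    W += lr * err * p[0]                      # descend the squared error
print(f"final |error| = {abs(err):.3f}")
```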
43

Kurowski, Mariusz, Andrzej Sroczyński, Georgis Bogdanis, and Andrzej Czyżewski. "An Automated Method for Biometric Handwritten Signature Authentication Employing Neural Networks." Electronics 10, no. 4 (February 12, 2021): 456. http://dx.doi.org/10.3390/electronics10040456.

Abstract:
Handwriting biometrics applications in e-Security and e-Health are addressed in the course of the conducted research. An automated analysis method for the dynamic electronic representation of handwritten signature authentication was researched. The developed algorithms are based on the dynamic analysis of electronically handwritten signatures employing neural networks. The signatures were acquired with the use of the designed electronic pen described in the paper. The triplet loss method was used to train a neural network suitable for writer-invariant signature verification. For each signature, the same neural network calculates a fixed-length latent space representation. The hand-corrected dataset containing 10,622 signatures was used in order to train and evaluate the proposed neural network. After learning, the network was tested and evaluated based on a comparison with the results found in the literature. The use of the triplet loss algorithm to teach the neural network to generate embeddings has proven to give good results in aggregating similar signatures and separating them from signatures representing different people.
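The triplet objective itself is compact: pull an anchor embedding toward a positive (a signature by the same writer) and push it away from a negative (a different writer) by at least a margin. The sketch below uses random vectors as stand-ins for the network's fixed-length embeddings; the squared-distance form and margin value are assumptions.

```python
import numpy as np

# Triplet loss: max(0, d(anchor, positive) - d(anchor, negative) + margin),
# computed over a batch of fixed-length embedding vectors.
def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(5)
a, p, n = rng.normal(size=(3, 8, 64))   # batch of 8 stand-in embeddings
print(f"mean triplet loss: {triplet_loss(a, p, n).mean():.3f}")
```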
44

Chervyakov, N. I., P. A. Lyakhov, N. N. Nagornov, M. V. Valueva, and G. V. Valuev. "Hardware implementation of a convolutional neural network using calculations in the residue number system." Computer Optics 43, no. 5 (October 2019): 857–68. http://dx.doi.org/10.18287/2412-6179-2019-43-5-857-868.

Abstract:
Modern convolutional neural network architectures are very resource-intensive, which limits the possibilities for their wide practical application. We propose a convolutional neural network architecture in which the neural network is divided into hardware and software parts to increase performance and reduce the cost of implementation resources. We also propose to use the residue number system in the hardware part to implement the convolutional layer of the neural network in order to reduce resource costs. A numerical method for quantizing the filter coefficients of a convolutional network layer is proposed to minimize the influence of quantization noise on the calculation result in the residue number system and to determine the bit-width of the filter coefficients. This method is based on scaling the coefficients by a fixed number of bits and rounding up and down. The operations used make it possible to reduce resources in the hardware implementation by simplifying their execution. All calculations in the convolutional layer are performed on numbers in a fixed-point format. Software simulations using Matlab 2017b showed that a convolutional neural network with a minimum number of layers can be quickly and successfully trained. Hardware implementation using the field-programmable gate array Kintex7 xc7k70tfbg484-2 showed that the use of the residue number system in the convolutional layer of the neural network reduces the hardware costs by 32.6% compared with the traditional approach based on two’s complement representation. The research results can be applied to create effective video surveillance systems and to recognize handwriting, individuals, objects, and terrain.
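The arithmetic idea behind the hardware part can be sketched independently of the FPGA details: a fixed-point integer becomes a tuple of residues modulo pairwise-coprime moduli, multiply-accumulate proceeds channel-wise on small numbers, and the Chinese remainder theorem recovers the result. The moduli set below is an arbitrary example, not the one used in the paper.

```python
from math import prod

# Residue number system (RNS) arithmetic: represent integers by residues
# modulo pairwise-coprime moduli; operations act independently per channel.
MODULI = (63, 64, 65)                 # pairwise coprime, dynamic range 262080

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mac(a, b, acc):               # multiply-accumulate, per channel
    return tuple((ai * bi + ci) % m for ai, bi, ci, m in zip(a, b, acc, MODULI))

def from_rns(r):                      # Chinese remainder theorem reconstruction
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)
    return x % M

acc = to_rns(0)
acc = rns_mac(to_rns(123), to_rns(45), acc)   # 123 * 45 = 5535
acc = rns_mac(to_rns(67), to_rns(8), acc)     # + 67 * 8 = 536 -> 6071
print(from_rns(acc))                          # 6071
```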
45

Li, Yongkun, and Li Yang. "Almost Periodic Solutions for Neutral-Type BAM Neural Networks with Delays on Time Scales." Journal of Applied Mathematics 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/942309.

Abstract:
Using the existence of the exponential dichotomy of linear dynamic equations on time scales, a fixed point theorem, and the theory of calculus on time scales, we obtain some sufficient conditions for the existence and exponential stability of almost periodic solutions for a class of neutral-type BAM neural networks with delays on time scales. Finally, a numerical example illustrates the feasibility of our results and also shows that the continuous-time neural network and its discrete-time analogue have the same dynamical behaviors. The results of this paper are completely new even if the time scale 𝕋 = ℝ or ℤ, and they complement previously known results.
46

Song, Q., and M. J. Grimble. "Design of a Multivariable Neural Controller and Its Application to Gas Turbines." Journal of Dynamic Systems, Measurement, and Control 119, no. 3 (September 1, 1997): 565–67. http://dx.doi.org/10.1115/1.2801295.

Abstract:
The algorithm for a multivariable controller using a neural network is based on a discrete-time fixed controller, with the neural network providing a compensation signal to suppress the nonlinearity. The multivariable neural controller is easy to train and is applied to an aircraft gas turbine plant.
47

Aribowo, Widi. "ELMAN-RECURRENT NEURAL NETWORK FOR LOAD SHEDDING OPTIMIZATION." SINERGI 24, no. 1 (January 14, 2020): 29. http://dx.doi.org/10.22441/sinergi.2020.1.005.

Abstract:
Load shedding plays a key part in the avoidance of power system outages. Frequency and voltage fluctuations lead to the splitting of a power system into sub-systems and to outages as well as severe breakdowns of the system utility. In recent years, neural networks have been very successful in several signal processing and control applications, and recurrent neural networks are capable of handling complex and non-linear problems. This paper provides an algorithm for load shedding using Elman recurrent neural networks (RNNs). Elman proposed a partially recurrent network in which the feedforward connections are modifiable and the recurrent connections are fixed. The research is implemented in MATLAB and the performance is tested on a 6-bus system. The results are compared with a genetic algorithm (GA), a hybrid combining a genetic algorithm with a feed-forward neural network, and an RNN. The proposed method is capable of assigning the needed load shedding and is more efficient than the other methods.
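An Elman step is short to write down: context units carry the previous hidden state back through fixed recurrent connections, while the feedforward weights remain the modifiable part. The sketch below shows only the forward pass, with illustrative sizes and weight scales.

```python
import numpy as np

# One step of an Elman (partially recurrent) network: the context is a copy
# of the previous hidden state, fed back through *fixed* recurrent weights.
rng = np.random.default_rng(7)
n_in, n_hid, n_out = 6, 12, 1
W_in = rng.normal(0, 0.3, (n_hid, n_in))     # modifiable feedforward weights
W_ctx = np.eye(n_hid) * 0.5                  # fixed recurrent (context) weights
W_out = rng.normal(0, 0.3, (n_out, n_hid))   # modifiable output weights

def elman_step(x, context):
    h = np.tanh(W_in @ x + W_ctx @ context)  # hidden state
    return W_out @ h, h                      # output, new context

context = np.zeros(n_hid)
for t in range(5):
    y, context = elman_step(rng.normal(size=n_in), context)
    print(f"t={t}: y = {y[0]:+.3f}")
```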
48

Huang, Xiaohui, Shuxia Zheng, Shilong Li, Jinxiang Wu, and Graham Spence. "Scheduling Problem of Biochemical Analyzer and Exploration of Neural Network-Greedy Algorithm." Journal of Medical Imaging and Health Informatics 10, no. 8 (August 1, 2020): 1912–18. http://dx.doi.org/10.1166/jmihi.2020.3120.

Abstract:
A mathematical model of a biochemical analysis system was established based on a neural network-greedy algorithm. The optimal task scheduling sequence was solved by the neural network algorithm, while local optimization was obtained by combining it with the greedy algorithm. In this way, the task scheduling problem in a biochemical analyzer was transformed into a mathematical problem, and the mathematical model of the scheduling algorithm was established. On the MATLAB platform, eight groups of simulation tests were carried out on the same task scheduling problem using the neural network-greedy scheduling algorithm and the traditional fixed-period scheduling algorithm, and the task-time Gantt charts of the two algorithms were compared under different scheduling orders. The results showed that the average speed of the neural network-greedy algorithm was improved by 31% compared with that of the fixed-period scheduling algorithm. The mathematical model of the biochemical analysis system scheduling problem established by the neural network-greedy algorithm is thus highly efficient compared with the traditional fixed-period scheduling algorithm.
49

Miller, M., and E. N. Miranda. "STABILITY OF MULTILAYERED NEURAL NETWORKS." International Journal of Neural Systems 02, no. 01n02 (January 1991): 143–46. http://dx.doi.org/10.1142/s0129065791000133.

Abstract:
The stability of a multilayered neural network architecture against synaptic changes has been studied numerically. We have found that the average change goes to zero when the number N of input neurons satisfies N≫1. If a fixed fraction of output mistakes is allowed, then the synapses may be changed within some limits even for large N.
50

Qi, Haiyu, Xing-Gui Zhou, Liang-Hong Liu, and Wei-Kang Yuan. "A hybrid neural network-first principles model for fixed-bed reactor." Chemical Engineering Science 54, no. 13-14 (July 1999): 2521–26. http://dx.doi.org/10.1016/s0009-2509(98)00523-5.
