Journal articles on the topic 'Stochastic neural networks'

Consult the top 50 journal articles for your research on the topic 'Stochastic neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wong, Eugene. "Stochastic neural networks." Algorithmica 6, no. 1-6 (June 1991): 466–78. http://dx.doi.org/10.1007/bf01759054.

2

Zhou, Wuneng, Xueqing Yang, Jun Yang, and Jun Zhou. "Stochastic Synchronization of Neutral-Type Neural Networks with Multidelays Based on M-Matrix." Discrete Dynamics in Nature and Society 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/826810.

Abstract:
The problem of stochastic synchronization of neutral-type neural networks with multiple delays, based on the M-matrix, is studied. First, we design a control law for stochastic synchronization of neutral-type neural networks with multiple time delays. Second, by making use of the Lyapunov functional and M-matrix methods, we obtain a criterion under which the drive and response neutral-type multiple-time-delay neural networks with stochastic disturbance and Markovian switching achieve stochastic synchronization. The synchronization condition is expressed as a linear matrix inequality, which can be easily solved by MATLAB. Finally, we give a numerical example to illustrate the effectiveness of the method and of the result obtained in this paper.
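Criteria of this kind reduce to LMI feasibility problems. As a minimal illustrative sketch (a generic Lyapunov LMI, not the paper's specific synchronization criterion), the following Python code checks feasibility of A^T P + P A < 0 with P > 0 for a hypothetical system matrix A, using cvxpy in place of the MATLAB LMI toolbox:

    import numpy as np
    import cvxpy as cp

    A = np.array([[-2.0, 0.5], [0.3, -1.5]])   # hypothetical system matrix
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),                 # P positive definite
                   A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov LMI
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    print(problem.status)  # 'optimal' means the LMI is feasible

If the problem is feasible, V(x) = x'Px is a Lyapunov function certifying exponential stability of dx/dt = Ax; the paper's criterion has the same computational shape with more structured matrix blocks.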
3

Reddy, BhanuTeja, and Usha J.C. "Prediction of Stock Market using Stochastic Neural Networks." International Journal of Innovative Research in Computer Science & Technology 7, no. 5 (September 2019): 128–38. http://dx.doi.org/10.21276/ijircst.2019.7.5.1.

4

Sakthivel, Rathinasamy, R. Samidurai, and S. Marshal Anthoni. "Exponential Stability for Stochastic Neural Networks of Neutral Type with Impulsive Effects." Modern Physics Letters B 24, no. 11 (May 10, 2010): 1099–110. http://dx.doi.org/10.1142/s0217984910023141.

Abstract:
This paper is concerned with the exponential stability of stochastic neural networks of neutral type with impulsive effects. By employing the Lyapunov functional and stochastic analysis, a new stability criterion for the stochastic neural network is derived in terms of linear matrix inequality. A numerical example is provided to show the effectiveness and applicability of the obtained result.
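For readers unfamiliar with the model class, a representative (assumed, generic) neutral-type stochastic neural network with delay can be written in LaTeX as

    \mathrm{d}\bigl[x(t) - D\,x(t-\tau)\bigr]
      = \bigl[-A\,x(t) + W_{0}\,f(x(t)) + W_{1}\,f(x(t-\tau))\bigr]\,\mathrm{d}t
      + \sigma\bigl(t, x(t), x(t-\tau)\bigr)\,\mathrm{d}w(t),

where D is the neutral-term matrix, A the self-feedback matrix, W_0 and W_1 the connection weight matrices, f the activation function, and w(t) a Brownian motion; impulsive effects add jump conditions at the impulse times. This is a sketch of the model class, not the paper's exact system.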
5

Wu, Chunmei, Junhao Hu, and Yan Li. "Robustness Analysis of Hybrid Stochastic Neural Networks with Neutral Terms and Time-Varying Delays." Discrete Dynamics in Nature and Society 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/278571.

Abstract:
We analyze the robustness of global exponential stability of hybrid stochastic neural networks subject to neutral terms and time-varying delays simultaneously. Given globally exponentially stable hybrid stochastic neural networks, we characterize the upper bounds of the contraction coefficients of the neutral terms and of the time-varying delays by using a transcendental equation. Moreover, we prove theoretically that, for any globally exponentially stable hybrid stochastic neural network, if the additive neutral terms and time-varying delays are smaller than the derived upper bounds, then the perturbed neural network is guaranteed to remain globally exponentially stable. Finally, a numerical simulation example is given to illustrate the presented criteria.
6

Gao, Zhan, Elvin Isufi, and Alejandro Ribeiro. "Stochastic Graph Neural Networks." IEEE Transactions on Signal Processing 69 (2021): 4428–43. http://dx.doi.org/10.1109/tsp.2021.3092336.

7

Hu, Shigeng, Xiaoxin Liao, and Xuerong Mao. "Stochastic Hopfield neural networks." Journal of Physics A: Mathematical and General 36, no. 9 (February 19, 2003): 2235–49. http://dx.doi.org/10.1088/0305-4470/36/9/303.

8

Yu, Tianyuan, Yongxin Yang, Da Li, Timothy Hospedales, and Tao Xiang. "Simple and Effective Stochastic Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3252–60. http://dx.doi.org/10.1609/aaai.v35i4.16436.

Abstract:
Stochastic neural networks (SNNs) are currently topical, with several paradigms being actively investigated, including dropout, Bayesian neural networks, variational information bottleneck (VIB), and noise-regularized learning. These neural network variants impact several major considerations, including generalization, network compression, robustness against adversarial attack and label noise, and model calibration. However, many existing networks are complicated and expensive to train, and/or only address one or two of these practical considerations. In this paper we propose a simple and effective stochastic neural network (SE-SNN) architecture for discriminative learning by directly modeling activation uncertainty and encouraging high activation variability. Compared to existing SNNs, our SE-SNN is simpler to implement and faster to train, and produces state-of-the-art results on network compression by pruning, adversarial defense, learning with label noise, and model calibration.
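The core idea of directly modeling activation uncertainty can be sketched as a layer that parameterizes a Gaussian over its activations and samples via the reparameterization trick. A minimal Python/PyTorch sketch under that Gaussian assumption (generic, not the authors' exact SE-SNN, which additionally encourages high activation variability):

    import torch
    import torch.nn as nn

    class StochasticLinear(nn.Module):
        """Linear layer with Gaussian activations: separate mean and
        log-variance heads, sampled with the reparameterization trick."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.mu = nn.Linear(d_in, d_out)
            self.log_var = nn.Linear(d_in, d_out)

        def forward(self, x):
            mu, log_var = self.mu(x), self.log_var(x)
            eps = torch.randn_like(mu)   # fresh noise on every forward pass
            return mu + torch.exp(0.5 * log_var) * eps

    layer = StochasticLinear(16, 8)
    y = layer(torch.randn(4, 16))        # stochastic activations

Because sampling happens inside the forward pass, repeated passes over the same input yield different activations, which is what calibration and robustness benefits of SNNs rely on.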
9

Peretto, Pierre, and Jean-Jacques Niez. "Stochastic Dynamics of Neural Networks." IEEE Transactions on Systems, Man, and Cybernetics 16, no. 1 (January 1986): 73–83. http://dx.doi.org/10.1109/tsmc.1986.289283.

10

Kanarachos, Andreas E., and Kleanthis T. Geramanis. "Semi-Stochastic Complex Neural Networks." IFAC Proceedings Volumes 31, no. 12 (June 1998): 47–52. http://dx.doi.org/10.1016/s1474-6670(17)36040-8.

11

Hurtado, Jorge E. "Neural networks in stochastic mechanics." Archives of Computational Methods in Engineering 8, no. 3 (September 2001): 303–42. http://dx.doi.org/10.1007/bf02736646.

12

Park, Ju H., and O. M. Kwon. "Synchronization of Neural Networks of Neutral Type with Stochastic Perturbation." Modern Physics Letters B 23, no. 14 (June 10, 2009): 1743–51. http://dx.doi.org/10.1142/s0217984909019909.

Abstract:
In this letter, the problem of designing a feedback controller to achieve synchronization for neural networks of neutral type with stochastic perturbation is considered. Based on the Lyapunov method and the LMI (linear matrix inequality) framework, the goal of this letter is to derive an existence criterion for the controller for synchronization between the master and response networks.
13

Shi, Xiang Dong. "Mean Square Asymptotic Stability of Neutral Stochastic Neural Networks with Multiple Time-Varying Delays." Advanced Materials Research 684 (April 2013): 579–82. http://dx.doi.org/10.4028/www.scientific.net/amr.684.579.

Abstract:
The paper considers the problem of almost sure asymptotic stability for neutral stochastic neural networks with multiple time-varying delays. By applying the Lyapunov functional method and differential inequality techniques, new sufficient conditions ensuring the existence and almost sure asymptotic stability of neutral stochastic neural networks with multiple time-varying delays are established. The results are shown to be generalizations of some previously published results and are less conservative than existing ones.
14

Wang, Xiaobo, Xuefei Wu, Zhe Nie, and Zengxian Yan. "The pth Moment Exponential Synchronization of Drive-Response Memristor Neural Networks Subject to Stochastic Perturbations." Complexity 2023 (July 18, 2023): 1–10. http://dx.doi.org/10.1155/2023/1335184.

Abstract:
In this paper, the pth moment exponential synchronization problem for drive-response stochastic memristor neural networks is studied via a state feedback controller. The memristor neural networks have nonidentical dynamics, with both asymmetric nondelayed and delayed coupling; they are state-dependent and subject to exogenous stochastic perturbations. The pth moment exponential synchronization of these drive-response stochastic memristor neural networks is guaranteed under some testable and computable sufficient conditions utilizing differential inclusion theory and Filippov regularization. Finally, the correctness and effectiveness of our theoretical results are demonstrated through a numerical example.
15

Park, Ju H., and O. M. Kwon. "Analysis on Global Stability of Stochastic Neural Networks of Neutral Type." Modern Physics Letters B 22, no. 32 (December 30, 2008): 3159–70. http://dx.doi.org/10.1142/s0217984908017680.

Abstract:
In this paper, the problem of global asymptotic stability of stochastic neural networks of neutral type is considered. Based on the Lyapunov stability theory, a new delay-dependent stability criterion for the network is derived in terms of LMI (linear matrix inequality). A numerical example is given to show the effectiveness of the proposed method.
16

Hua, Min Gang, Jun Tao Fei, and Wei Li Dai. "Exponential Stability Analysis for Neutral Stochastic Delayed Neural Networks." Advanced Materials Research 354-355 (October 2011): 877–80. http://dx.doi.org/10.4028/www.scientific.net/amr.354-355.877.

Abstract:
In this paper, the generalized Finsler lemma and augmented Lyapunov functional are introduced to establish some improved delay-dependent stability criteria of neutral stochastic delayed neural networks. The stability criteria in the new results improve and generalize existing ones. Two examples are included to show the effectiveness of the results.
17

Sriraman, R., R. Samidurai, V. C. Amritha, G. Rachakit, and Prasanalakshmi Balaji. "System decomposition-based stability criteria for Takagi-Sugeno fuzzy uncertain stochastic delayed neural networks in quaternion field." AIMS Mathematics 8, no. 5 (2023): 11589–616. http://dx.doi.org/10.3934/math.2023587.

Abstract:
Stochastic disturbances often occur in real-world systems and can lead to undesirable system dynamics. Therefore, it is necessary to investigate stochastic disturbances in neural network modeling. As such, this paper examines the stability problem for Takagi-Sugeno fuzzy uncertain quaternion-valued stochastic neural networks. By applying Takagi-Sugeno fuzzy models and stochastic analysis, we first consider a general form of Takagi-Sugeno fuzzy uncertain quaternion-valued stochastic neural networks with time-varying delays. Then, by constructing a suitable Lyapunov-Krasovskii functional, we present new delay-dependent robust and global asymptotic stability criteria for the considered networks. Furthermore, we present our results in terms of real-valued linear matrix inequalities that can be solved in the MATLAB LMI toolbox. Finally, two numerical examples are presented with their simulations to demonstrate the validity of the theoretical analysis.
18

Zhou, Wuneng, Xueqing Yang, Jun Yang, Anding Dai, and Huashan Liu. "Almost Sure Asymptotical Adaptive Synchronization for Neutral-Type Neural Networks with Stochastic Perturbation and Markovian Switching." Mathematical Problems in Engineering 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/479084.

Abstract:
The problem of almost sure (a.s.) asymptotic adaptive synchronization for neutral-type neural networks with stochastic perturbation and Markovian switching is studied. First, we propose a new criterion of a.s. asymptotic stability for a general neutral-type stochastic differential equation which extends the existing results. Second, based upon this stability criterion, by making use of the Lyapunov functional method and designing an adaptive controller, we obtain a condition for a.s. asymptotic adaptive synchronization of neutral-type neural networks with stochastic perturbation and Markovian switching. The synchronization condition is expressed as a linear matrix inequality which can be easily solved by MATLAB. Finally, we give a numerical example to illustrate the effectiveness of the method and of the result obtained in this paper.
19

Hong, Junping, and Ercan Engin Kuruoglu. "Minimax Bayesian Neural Networks." Entropy 27, no. 4 (March 25, 2025): 340. https://doi.org/10.3390/e27040340.

Abstract:
Robustness is an important issue in deep learning, and Bayesian neural networks (BNNs) provide a means of robustness analysis, while the minimax method is a conservative choice in the classical Bayesian field. Recently, researchers have applied the closed-loop idea to neural networks via the minimax method and proposed closed-loop neural networks. In this paper, we study more conservative BNNs with the minimax method, which formulates a two-player game between a deterministic neural network and a sampling stochastic neural network. From this perspective, we reveal the connection between closed-loop neural networks and BNNs. We test the models on some simple data sets and study their robustness under noise perturbation, etc.
20

Fan, Tongke. "Research on Optimal CDMA Multiuser Detection Based on Stochastic Hopfield Neural Network." Recent Patents on Computer Science 12, no. 3 (May 8, 2019): 233–40. http://dx.doi.org/10.2174/2213275912666181210103742.

Abstract:
Background: Most common multi-user detection techniques suffer from heavy computation and slow operation. Hopfield neural networks offer high-speed search and parallel processing, but suffer from local convergence. Objective: The stochastic Hopfield neural network avoids local convergence by introducing noise into the state variables and thereby achieves optimal detection. Methods: Based on a study of the CDMA communication model, this paper formulates and models the multi-user detection problem. A new stochastic Hopfield neural network is then obtained by introducing a stochastic disturbance into the traditional Hopfield neural network. Finally, CDMA multi-user detection is simulated. Conclusion: The results show that introducing a stochastic disturbance into the Hopfield neural network helps the network jump out of local minima, thus reaching the global minimum and improving the performance of the neural network.
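The escape mechanism is easy to sketch: a standard asynchronous Hopfield update with additive noise on the local field, annealed toward zero so the dynamics eventually settle. A minimal Python sketch with assumed random symmetric weights (illustrative only, not the paper's CDMA detector):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 16
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2                    # symmetric weights
    np.fill_diagonal(W, 0)
    s = rng.choice([-1.0, 1.0], size=n)  # initial state

    sigma = 1.0                          # initial noise intensity
    for t in range(500):
        i = rng.integers(n)
        h = W[i] @ s + sigma * rng.standard_normal()  # noisy local field
        s[i] = 1.0 if h >= 0 else -1.0
        sigma *= 0.99                    # anneal the noise toward zero
    print(-0.5 * s @ W @ s)              # final energy

With sigma fixed at zero this is the ordinary Hopfield descent that gets trapped in local minima; the decaying noise lets early updates climb out of shallow minima, as in simulated annealing.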
21

Blythe, Steve, Xuerong Mao, and Xiaoxin Liao. "Stability of stochastic delay neural networks." Journal of the Franklin Institute 338, no. 4 (July 2001): 481–95. http://dx.doi.org/10.1016/s0016-0032(01)00016-3.

22

Verikas, A., and A. Gelzinis. "Training neural networks by stochastic optimisation." Neurocomputing 30, no. 1-4 (January 2000): 153–72. http://dx.doi.org/10.1016/s0925-2312(99)00123-x.

23

Yang, Yunfeng, and Fengxian Tang. "Network Intrusion Detection Based on Stochastic Neural Networks Method." International Journal of Security and Its Applications 10, no. 8 (August 31, 2016): 435–46. http://dx.doi.org/10.14257/ijsia.2016.10.8.38.

24

Borovkov, K., G. Decrouez, and M. Gilson. "On Stationary Distributions of Stochastic Neural Networks." Journal of Applied Probability 51, no. 3 (September 2014): 837–57. http://dx.doi.org/10.1239/jap/1409932677.

Abstract:
The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, and the current strengths of synaptic connections, the state spaces of our models being hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved in a few simplest cases only. Approaches to approximate computation of the stationary density are discussed. One approach is to reduce the dimensionality of the problem by modifying the network so that each neuron cannot fire if the number of spikes it emitted within the post-synaptic transfer kernel memory span reaches a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
25

Borovkov, K., G. Decrouez, and M. Gilson. "On Stationary Distributions of Stochastic Neural Networks." Journal of Applied Probability 51, no. 03 (September 2014): 837–57. http://dx.doi.org/10.1017/s0021900200011700.

Abstract:
The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, and the current strengths of synaptic connections, the state spaces of our models being hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved in a few simplest cases only. Approaches to approximate computation of the stationary density are discussed. One approach is to reduce the dimensionality of the problem by modifying the network so that each neuron cannot fire if the number of spikes it emitted within the post-synaptic transfer kernel memory span reaches a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
26

Liu, Qingliang, and Jinmei Lai. "Stochastic Loss Function." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4884–91. http://dx.doi.org/10.1609/aaai.v34i04.5925.

Abstract:
Training deep neural networks is inherently subject to predefined and fixed loss functions during optimization. To improve learning efficiency, we develop the Stochastic Loss Function (SLF) to dynamically and automatically generate appropriate gradients to train deep networks in the same round of back-propagation, while maintaining the completeness and differentiability of the training pipeline. In SLF, a generic loss function is formulated as a joint optimization problem of network weights and loss parameters. In order to guarantee the requisite efficiency, gradients with respect to the generic differentiable loss are leveraged for selecting the loss function and optimizing network weights. Extensive experiments on a variety of popular datasets strongly demonstrate that SLF is capable of obtaining appropriate gradients at different stages during training, and can significantly improve the performance of various deep models on real-world tasks including classification, clustering, regression, neural machine translation, and object detection.
27

Lu, Yuanming, Di Wang, Die Liu, and Xianyi Yang. "A Lightweight and Efficient Method of Structural Damage Detection Using Stochastic Configuration Network." Sensors 23, no. 22 (November 13, 2023): 9146. http://dx.doi.org/10.3390/s23229146.

Abstract:
With the advancement of neural networks, more and more neural networks are being applied to structural health monitoring systems (SHMSs). When an SHMS requires the integration of numerous neural networks, high-performance and low-latency networks are favored. This paper focuses on damage detection based on vibration signals. In contrast to traditional neural network approaches, this study utilizes a stochastic configuration network (SCN). An SCN is an incrementally learning network that randomly configures appropriate neurons based on data and errors. It is an emerging neural network that does not require predefined network structures and is not based on gradient descent. While SCNs dynamically define the network structure, they essentially function as fully connected neural networks that fail to capture the temporal properties of monitoring data effectively. Moreover, they suffer from inference time and computational cost issues. To enable faster and more accurate operation within the monitoring system, this paper introduces a stochastic convolutional feature extraction approach that does not rely on backpropagation. Additionally, a random node deletion algorithm is proposed to automatically prune redundant neurons in SCNs, addressing the issue of network node redundancy. Experimental results demonstrate that the feature extraction method improves accuracy by 30% compared to the original SCN, and the random node deletion algorithm removes approximately 10% of neurons.
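The incremental, gradient-free construction can be sketched as follows: repeatedly draw a pool of random candidate neurons, keep the candidate best aligned with the current residual error, and re-fit the output weights by least squares. A simplified Python sketch of this idea (the real SCN accepts candidates through a specific supervisory inequality, omitted here):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 3))
    y = np.sin(X).sum(axis=1)                      # toy regression target
    H = np.empty((200, 0))                         # hidden-layer outputs
    e = y.copy()                                   # current residual

    for k in range(50):
        Wc = rng.uniform(-1, 1, (3, 25))           # 25 random candidate nodes
        bc = rng.uniform(-1, 1, 25)
        Phi = np.tanh(X @ Wc + bc)
        scores = (Phi.T @ e) ** 2 / (Phi * Phi).sum(axis=0)
        j = scores.argmax()                        # best-aligned candidate
        H = np.hstack([H, Phi[:, [j]]])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        e = y - H @ beta                           # updated residual
        if np.linalg.norm(e) < 1e-2:
            break

No backpropagation is involved, which is why the paper can bolt on stochastic convolutional feature extraction and node pruning without a gradient pipeline.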
28

Xu, Yuhua, Jinmeng Wang, Wuneng Zhou, and Xin Wang. "Synchronization in pth Moment for Stochastic Chaotic Neural Networks with Finite-Time Control." Complexity 2019 (January 3, 2019): 1–8. http://dx.doi.org/10.1155/2019/2916364.

Abstract:
Finite-time synchronization in the pth moment is considered for time-varying stochastic chaotic neural networks. Compared with some existing results on finite-time mean-square stability of stochastic neural networks, we obtain some useful criteria of finite-time synchronization in the pth moment for chaotic neural networks based on finite-time nonlinear feedback control and finite-time adaptive feedback control, which are efficient and easy to implement in practical applications. Finally, a numerical example is given to illustrate the validity of the derived synchronization conditions.
29

Emmert-Streib, Frank. "A Heterosynaptic Learning Rule for Neural Networks." International Journal of Modern Physics C 17, no. 10 (October 2006): 1501–20. http://dx.doi.org/10.1142/s0129183106009916.

Abstract:
In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further remote synapses of the pre- and postsynaptic neuron. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
30

Wang, Jie, Jun Wang, Wen Fang, and Hongli Niu. "Financial Time Series Prediction Using Elman Recurrent Random Neural Networks." Computational Intelligence and Neuroscience 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/4742515.

Abstract:
In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. Analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing the model with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices.
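For reference, the Elman recurrence at the heart of such models feeds the previous hidden state back into the hidden layer as a context signal. A bare-bones Python sketch of one time step, in generic notation (the paper additionally weights training samples with a stochastic time effective function, omitted here):

    import numpy as np

    def elman_step(x_t, h_prev, Wx, Wh, Wo, bh, bo):
        # the context layer is the hidden state from the previous step
        h_t = np.tanh(Wx @ x_t + Wh @ h_prev + bh)
        y_t = Wo @ h_t + bo              # linear readout
        return y_t, h_t

    rng = np.random.default_rng(0)
    d_in, d_h = 4, 8
    Wx = rng.standard_normal((d_h, d_in))
    Wh = rng.standard_normal((d_h, d_h))
    Wo = rng.standard_normal((1, d_h))
    bh, bo = np.zeros(d_h), np.zeros(1)

    h = np.zeros(d_h)
    for x_t in rng.standard_normal((10, d_in)):   # toy input sequence
        y_t, h = elman_step(x_t, h, Wx, Wh, Wo, bh, bo)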
31

Tan, Z., and M. K. Ali. "Pattern Recognition with Stochastic Resonance in a Generic Neural Network." International Journal of Modern Physics C 11, no. 08 (December 2000): 1585–93. http://dx.doi.org/10.1142/s0129183100001413.

Abstract:
We discuss stochastic resonance in associative memory with a canonical neural network model that describes the generic behavior of a large family of dynamical systems near bifurcation. Our results show that stochastic resonance helps memory association. The relationship between stochastic resonance, associative memory, storage load, memory history, and initial states is studied. In intelligent systems like neural networks, it is likely that stochastic resonance combined with synaptic information enhances memory recall.
32

Ma, Yunlong, Tao Xie, and Yijia Zhang. "Robustness analysis of neutral fuzzy cellular neural networks with stochastic disturbances and time delays." AIMS Mathematics 9, no. 10 (2024): 29556–72. http://dx.doi.org/10.3934/math.20241431.

Abstract:
This paper discusses the robustness of neutral fuzzy cellular neural networks with stochastic disturbances and time delays. This work asks whether fuzzy cellular neural networks, which are initially stable, can be stabilised again when the system is subjected to three simultaneous perturbations, i.e., neutral terms, random disturbances, and time delays. First, by using inequality techniques such as Gronwall's lemma, the Itô formula, and the properties of integrals, transcendental equations that contain the contraction coefficient of the neutral terms, the intensity of the random disturbances, and the time delays are derived. Then, the upper bounds of the neutral terms, random disturbances, and time delays are estimated by solving the transcendental equations for multifactor perturbations, which ensures that the disturbed fuzzy cellular neural network can be stabilised again. Finally, the validity of the results is verified by numerical examples.
33

Nikolic, Kostantin. "Training Neural Network Elements Created From Long Shot Term Memory." Oriental Journal of Computer Science and Technology 10, no. 1 (March 1, 2017): 01–10. http://dx.doi.org/10.13005/ojcst/10.01.01.

Abstract:
This paper presents the application of stochastic search algorithms to the training of artificial neural networks. The methodology was created primarily to support the training of complex recurrent neural networks; training recurrent networks is known to be more difficult than training feedforward neural networks. Simulation of the recurrent network propagates the signal from input to output, and the training process performs a stochastic search in the space of parameters. The performance of this type of algorithm is superior to most training algorithms that are based on the concept of a gradient. The efficiency of these algorithms is demonstrated by training networks created from units characterized by long-term and long short-term memory. The presented methodology is effective and relatively simple.
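Gradient-free stochastic search over weights is straightforward to sketch: perturb the parameter vector at random and keep the candidate only if the loss improves. A minimal Python hill-climbing sketch on a toy task (the paper's sampling scheme may differ):

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((64, 5))
    y = (X[:, 0] > 0).astype(float)               # toy classification target

    def loss(w):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))        # single sigmoid unit
        return np.mean((p - y) ** 2)

    w = 0.1 * rng.standard_normal(5)
    best = loss(w)
    for step in range(2000):
        cand = w + 0.1 * rng.standard_normal(5)   # random perturbation
        lc = loss(cand)
        if lc < best:                             # accept only improvements
            w, best = cand, lc
    print(best)

Because only forward evaluations are needed, the same loop can train recurrent units by simulating the network from input to output, exactly the setting where gradient-based methods become awkward.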
34

Wang, Bingxian, Honghui Yin, and Bo Du. "On Asymptotic Properties of Stochastic Neutral-Type Inertial Neural Networks with Mixed Delays." Symmetry 15, no. 9 (September 12, 2023): 1746. http://dx.doi.org/10.3390/sym15091746.

Abstract:
This article studies the stability problem for a class of stochastic neutral-type inertial delay neural networks. By introducing appropriate variable transformations, the second-order differential system is transformed into a first-order differential system. Using homeomorphism mapping, standard stochastic analysis techniques, the Lyapunov functional method, and the properties of a neutral operator, we establish new sufficient criteria for the existence, uniqueness, and stochastic global asymptotic stability of equilibrium points. An example is also provided, to show the validity of the established results. From our results, we find that, under appropriate conditions, random disturbances have no significant impact on the existence, stability, and symmetry of network systems.
35

Debiec, Piotr, Lukasz Kornatowski, Krzysztof Slot, and Hyongsuk Kim. "Texture Generation Using Cellular Neural Networks." International Journal of Bifurcation and Chaos 16, no. 12 (December 2006): 3655–68. http://dx.doi.org/10.1142/s021812740601704x.

Abstract:
The following paper introduces an application of Cellular Neural Networks to the generation of predetermined stochastic textures. The key element of the task is an appropriate selection of template elements, which should transform the initial, random CNN state into a stable equilibrium featuring the desired perceptual properties. The template derivation procedure comprises two steps: linear CNN design, followed by a template-refinement procedure that involves nonlinear optimization. In addition, a procedure that extends CNN texture rendition capabilities into the realm of non-pure stochastic textures is proposed.
36

Han, Sufang, Tianwei Zhang, and Guoxin Liu. "Stochastic Dynamics of Discrete-Time Fuzzy Random BAM Neural Networks with Time Delays." Mathematical Problems in Engineering 2019 (August 25, 2019): 1–20. http://dx.doi.org/10.1155/2019/9416234.

Abstract:
By using the semidiscrete method of differential equations, a new version of discrete analogue of stochastic fuzzy BAM neural networks was formulated, which gives a more accurate characterization for continuous-time stochastic neural networks than that by the Euler scheme. Firstly, the existence of the 2p-th mean almost periodic sequence solution of the discrete-time stochastic fuzzy BAM neural networks is investigated with the help of Minkowski inequality, Hölder inequality, and Krasnoselskii’s fixed point theorem. Secondly, the 2p-th moment global exponential stability of the discrete-time stochastic fuzzy BAM neural networks is also studied by using some analytical skills in stochastic theory. Finally, two examples with computer simulations are given to demonstrate that our results are feasible. The main results obtained in this paper are completely new, and the methods used in this paper provide a possible technique to study 2p-th mean almost periodic sequence solution and 2p-th moment global exponential stability of semidiscrete stochastic fuzzy models.
37

Han, Jin Fang. "Mean-Square Exponential Stability of Stochastic Interval Cellular Neural Networks with Time-Varying Delay." Advanced Materials Research 760-762 (September 2013): 1742–47. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1742.

Abstract:
This paper is concerned with the mean-square exponential stability analysis problem for a class of stochastic interval cellular neural networks with time-varying delay. By using the stochastic analysis approach, employing a Lyapunov function and norm inequalities, several mean-square exponential stability criteria are established in terms of the Itô formula and the Razumikhin theorem to guarantee that the stochastic interval delayed cellular neural networks are mean-square exponentially stable. Some recent results reported in the literature are generalized. An equivalent description of these stochastic interval cellular neural networks with time-varying delay is also given.
38

Li, Yajun, Xisheng Dai, Wenping Xiao, and Like Jiao. "Stability Analysis for Stochastic Markovian Jumping Neural Networks with Leakage Delay." Open Electrical & Electronic Engineering Journal 11, no. 1 (January 25, 2017): 1–13. http://dx.doi.org/10.2174/1874129001711010001.

Abstract:
The stability problem for a class of stochastic neural networks with Markovian jump parameters and leakage delay is addressed in this study. A sufficient condition ensuring that the stochastic neural network system is exponentially stable is presented and proven with Lyapunov functional theory, stochastic stability techniques, and the linear matrix inequality method. The effect of leakage delay on the stability of the neural network system is discussed, and numerical examples are provided to show the correctness and effectiveness of the research results.
39

Li, Yan, and Yi Shen. "Preserving Global Exponential Stability of Hybrid BAM Neural Networks with Reaction Diffusion Terms in the Presence of Stochastic Noise and Connection Weight Matrices Uncertainty." Mathematical Problems in Engineering 2014 (2014): 1–17. http://dx.doi.org/10.1155/2014/486052.

Abstract:
We study the impact of stochastic noise and connection weight matrices uncertainty on global exponential stability of hybrid BAM neural networks with reaction diffusion terms. Given globally exponentially stable hybrid BAM neural networks with reaction diffusion terms, the question to be addressed here is how much stochastic noise and connection weights matrices uncertainty the neural networks can tolerate while maintaining global exponential stability. The upper threshold of stochastic noise and connection weights matrices uncertainty is defined by using the transcendental equations. We find that the perturbed hybrid BAM neural networks with reaction diffusion terms preserve global exponential stability if the intensity of both stochastic noise and connection weights matrices uncertainty is smaller than the defined upper threshold. A numerical example is also provided to illustrate the theoretical conclusion.
40

Hou, Yuanyuan, and Lihua Dai. "Square-Mean Pseudo Almost Periodic Solutions for Quaternion-Valued Stochastic Neural Networks with Time-Varying Delays." Mathematical Problems in Engineering 2021 (January 21, 2021): 1–17. http://dx.doi.org/10.1155/2021/6679326.

Abstract:
In this paper, we are concerned with a class of quaternion-valued stochastic neural networks with time-varying delays. First, since the quaternion-valued stochastic systems cannot be explicitly decomposed into equivalent real-valued stochastic systems, we use the Banach fixed point theorem and stochastic analysis techniques to obtain some sufficient conditions for the existence of square-mean pseudo almost periodic solutions for this class of neural networks. Then, by constructing an appropriate Lyapunov functional and using stochastic analysis techniques, we also obtain sufficient conditions for square-mean exponential stability of the considered neural networks. All of these results are new. Finally, two examples are given to illustrate the effectiveness and feasibility of our main results.
41

Akbar, Mutaqin. "Traffic sign recognition using convolutional neural networks." Jurnal Teknologi dan Sistem Komputer 9, no. 2 (March 5, 2021): 120–25. http://dx.doi.org/10.14710/jtsiskom.2021.13959.

Abstract:
Traffic sign recognition (TSR) can be used to recognize traffic signs by utilizing image processing. This paper presents traffic sign recognition in Indonesia using convolutional neural networks (CNN). The overall image dataset used comprises 2050 images of traffic signs, consisting of 10 kinds of signs. The CNN used in this study consists of one convolution layer, one pooling layer using the maxpool operation, and one fully connected layer. The training algorithm used is stochastic gradient descent (SGD). At the training stage, using 1750 training images, 48 filters, and a learning rate of 0.005, recognition achieves a loss of 0.005 and an accuracy of 100%. At the testing stage, using 300 test images, the system recognizes the signs with a loss of 0.107 and an accuracy of 97.33%.
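The described architecture maps directly onto a few lines of PyTorch. The filter count, layer types, number of classes, optimizer, and learning rate below come from the abstract; the 32x32 RGB input size, 3x3 kernels, and 2x2 pooling are assumptions for the sketch:

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 48, kernel_size=3, padding=1),  # one conv layer, 48 filters
        nn.ReLU(),
        nn.MaxPool2d(2),                             # one maxpool layer
        nn.Flatten(),
        nn.Linear(48 * 16 * 16, 10),                 # one FC layer, 10 sign classes
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005)  # SGD, lr 0.005
    loss_fn = nn.CrossEntropyLoss()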
42

Sugishita, Naoki, and Jun Ohkubo. "Embedding stochastic differential equations into neural networks via dual processes." Journal of Statistical Mechanics: Theory and Experiment 2023, no. 9 (September 1, 2023): 093201. http://dx.doi.org/10.1088/1742-5468/acf126.

Abstract:
We propose a new approach to constructing a neural network for predicting expectations of stochastic differential equations. The proposed method does not need data sets of inputs and outputs; instead, the information obtained from the time-evolution equations, i.e. the corresponding dual process, is directly compared with the weights in the neural network. As a demonstration, we construct neural networks for the Ornstein–Uhlenbeck process and the noisy van der Pol system. The remarkable feature of networks learned with the proposed method is the accuracy for inputs near the origin. Hence, it would be possible to avoid the overfitting problem, because the learned network does not depend on training data sets.
43

Wang, Yanzhi, Zheng Zhan, Liang Zhao, Jian Tang, Siyue Wang, Jiayu Li, Bo Yuan, Wujie Wen, and Xue Lin. "Universal Approximation Property and Equivalence of Stochastic Computing-Based Neural Networks and Binary Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5369–76. http://dx.doi.org/10.1609/aaai.v33i01.33015369.

Abstract:
Large-scale deep neural networks are both memory and computation-intensive, thereby posing stringent requirements on the computing platforms. Hardware accelerations of deep neural networks have been extensively investigated. Specific forms of binary neural networks (BNNs) and stochastic computing-based neural networks (SCNNs) are particularly appealing to hardware implementations since they can be implemented almost entirely with binary operations.
Despite the obvious advantages in hardware implementation, these approximate computing techniques are questioned by researchers in terms of accuracy and universal applicability. It is also important to understand the relative pros and cons of SCNNs and BNNs in theory and in actual hardware implementations. In order to address these concerns, in this paper we prove that the "ideal" SCNNs and BNNs satisfy the universal approximation property with probability 1 (due to the stochastic behavior), which is a new angle on the original approximation property. The proof is conducted by first proving the property for SCNNs from the strong law of large numbers, and then using SCNNs as a "bridge" to prove it for BNNs. Besides the universal approximation property, we also derive an appropriate bound for the bit length M in order to provide insights for actual neural network implementations. Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity. In other words, they have the same asymptotic energy consumption with the growth of network size. We also provide a detailed analysis of the pros and cons of SCNNs and BNNs for hardware implementations and conclude that SCNNs are more suitable.
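The binary-operations claim is easiest to see for stochastic computing: a value in [0, 1] is encoded as the probability of a 1 in a bit stream of length M, and multiplication becomes a bitwise AND, with estimation error shrinking as M grows (which is why a bound on the bit length M matters). A minimal Python sketch:

    import numpy as np

    rng = np.random.default_rng(42)
    M = 4096                     # bit-stream length; larger M, lower variance
    a, b = 0.7, 0.4              # values encoded as bit probabilities

    sa = rng.random(M) < a       # unipolar stochastic bit streams
    sb = rng.random(M) < b
    prod = (sa & sb).mean()      # multiplication via bitwise AND
    print(prod)                  # close to 0.28 = 0.7 * 0.4

The strong law of large numbers drives (sa & sb).mean() to a*b as M grows, the same mechanism behind the probability-1 universal approximation argument in the paper.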
44

Jia, Zhifu, and Cunlin Li. "Almost Sure Exponential Stability of Uncertain Stochastic Hopfield Neural Networks Based on Subadditive Measures." Mathematics 11, no. 14 (July 14, 2023): 3110. http://dx.doi.org/10.3390/math11143110.

Abstract:
In this paper, we consider the almost sure exponential stability of uncertain stochastic Hopfield neural networks based on subadditive measures. First, we deduce two corollaries using the Itô–Liu formula. Then, we introduce the concept of almost sure exponential stability for uncertain stochastic Hopfield neural networks. Next, we investigate their almost sure exponential stability using the Lyapunov method, the Liu inequality, the Liu lemma, and the exponential martingale inequality, and we prove two sufficient conditions for almost sure exponential stability. Furthermore, we consider stabilization with linear uncertain stochastic perturbation and present some examples. Finally, we state our conclusions.
45

Rosselló, Josep L., Vincent Canals, Antoni Morro, and Antoni Oliver. "Hardware Implementation of Stochastic Spiking Neural Networks." International Journal of Neural Systems 22, no. 04 (July 25, 2012): 1250014. http://dx.doi.org/10.1142/s0129065712500141.

Abstract:
Spiking neural networks, the latest generation of artificial neural networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in field-programmable gate arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
46

Soula, Hédi, and Carson C. Chow. "Stochastic Dynamics of a Finite-Size Spiking Neural Network." Neural Computation 19, no. 12 (December 2007): 3262–92. http://dx.doi.org/10.1162/neco.2007.19.12.3262.

Abstract:
We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network depends on only the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
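The model class is simple to simulate: each neuron fires independently with a probability that depends only on how many neurons fired in the previous epoch, so the population activity is a one-dimensional Markov chain. A toy Python sketch with an assumed sigmoidal firing probability (gain and threshold are illustrative, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(7)
    N, T = 100, 1000
    gain, theta = 6.0, 3.0                 # assumed response parameters
    k = int(rng.integers(0, N + 1))        # spikes in the previous epoch

    history = []
    for t in range(T):
        p = 1.0 / (1.0 + np.exp(-(gain * k / N - theta)))
        k = rng.binomial(N, p)             # all N neurons draw independently
        history.append(k)

    print(np.mean(history), np.var(history))  # equilibrium mean and variance

Sample statistics of this chain are what the paper's closed-form expressions for the equilibrium distribution, mean rate, variance, and autocorrelation characterize analytically.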
47

Tosh, Colin R., and Graeme D. Ruxton. "The need for stochastic replication of ecological neural networks." Philosophical Transactions of the Royal Society B: Biological Sciences 362, no. 1479 (January 18, 2007): 455–60. http://dx.doi.org/10.1098/rstb.2006.1973.

Abstract:
Artificial neural networks are becoming increasingly popular as predictive statistical tools in ecosystem ecology and as models of signal processing in behavioural and evolutionary ecology. We demonstrate here that a commonly used network in ecology, the three-layer feed-forward network, trained with the backpropagation algorithm, can be extremely sensitive to the stochastic variation in training data that results from random sampling of the same underlying statistical distribution, with networks converging to several distinct predictive states. Using a random walk procedure to sample error–weight space, and Sammon dimensional reduction of weight arrays, we demonstrate that these different predictive states are not artefactual, due to local minima, but lie at the base of major error troughs in the error–weight surface. We further demonstrate that various gross weight compositions can produce the same predictive state, suggesting the analogy of weight space as a ‘patchwork’ of multiple predictive states. Our results argue for increased inclusion of stochastic training replication and analysis into ecological and behavioural applications of artificial neural networks.
48

Wan, Li, and Qinghua Zhou. "Ultimate Boundedness of Stochastic Hopfield Neural Networks." Journal of Convergence Information Technology 6, no. 8 (August 31, 2011): 244–48. http://dx.doi.org/10.4156/jcit.vol6.issue8.28.

49

Rosaci, D. "Stochastic neural networks for transportation systems modeling." Applied Artificial Intelligence 12, no. 6 (September 1998): 491–500. http://dx.doi.org/10.1080/088395198117631.

50

Verriest, E. I. "Stability of Stochastic Neural Networks with Delays." IFAC Proceedings Volumes 31, no. 20 (July 1998): 441–46. http://dx.doi.org/10.1016/s1474-6670(17)41834-9.
