Journal articles on the topic "Stochastic systems; neural networks; computer science"

To see other types of publications on this topic, follow the link: Stochastic systems; neural networks; computer science.

Consult the top 50 journal articles for your research on the topic "Stochastic systems; neural networks; computer science."

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Najim, K., and M. Chtourou. "Neural networks synthesis based on stochastic approximation algorithm." International Journal of Systems Science 25, no. 7 (July 1994): 1219–22. http://dx.doi.org/10.1080/00207729408949273.

2

Borkar, V. S., and P. Gupta. "Randomized neural networks for learning stochastic dependences." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 29, no. 4 (1999): 469–80. http://dx.doi.org/10.1109/3477.775263.

3

Battilotti, S., and A. De Santis. "Robust output feedback control of nonlinear stochastic systems using neural networks." IEEE Transactions on Neural Networks 14, no. 1 (January 2003): 103–16. http://dx.doi.org/10.1109/tnn.2002.806609.

4

Zhou, Liqun, and Guangda Hu. "Almost sure exponential stability of neutral stochastic delayed cellular neural networks." Journal of Control Theory and Applications 6, no. 2 (May 2008): 195–200. http://dx.doi.org/10.1007/s11768-008-7036-8.

5

Gutiérrez, Irene López, and Christian B. Mendl. "Real time evolution with neural-network quantum states." Quantum 6 (January 20, 2022): 627. http://dx.doi.org/10.22331/q-2022-01-20-627.

Abstract:
A promising application of neural-network quantum states is to describe the time dynamics of many-body quantum systems. To realize this idea, we employ neural-network quantum states to approximate the implicit midpoint rule method, which preserves the symplectic form of Hamiltonian dynamics. We ensure that our complex-valued neural networks are holomorphic functions, and exploit this property to efficiently compute gradients. Application to the transverse-field Ising model on a one- and two-dimensional lattice exhibits an accuracy comparable to the stochastic configuration method proposed in [Carleo and Troyer, Science 355, 602-606 (2017)], but does not require computing the (pseudo-)inverse of a matrix.
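For orientation, the implicit midpoint rule named in this abstract is the standard symplectic one-step scheme; written for Schrödinger dynamics in generic textbook notation (ours, not the paper's), a step of size Δt solves the implicit relation below, which the paper approximates with a neural-network quantum state instead of solving the linear system exactly.

```latex
% Implicit midpoint step for i\hbar\,\partial_t\psi = H\psi (generic form, not the paper's notation)
\psi_{n+1} = \psi_n - \frac{i\,\Delta t}{\hbar}\, H\,\frac{\psi_n + \psi_{n+1}}{2}
```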
6

Fernandes, Fernando, Rodrigo de Losso da Silveira Bueno, Pedro Delano Cavalcanti, and Alemayehu Solomon Admasu. "Generating Stochastic Processes Through Convolutional Neural Networks." Journal of Control, Automation and Electrical Systems 31, no. 2 (January 31, 2020): 294–303. http://dx.doi.org/10.1007/s40313-020-00567-y.

7

You, Jingjing, Abdujelil Abdurahman, and Hayrengul Sadik. "Fixed/Predefined-Time Synchronization of Complex-Valued Stochastic BAM Neural Networks with Stabilizing and Destabilizing Impulse." Mathematics 10, no. 22 (November 21, 2022): 4384. http://dx.doi.org/10.3390/math10224384.

Abstract:
This article is mainly concerned with the fixed-time and predefined-time synchronization problem for a type of complex-valued BAM neural networks with stochastic perturbations and impulse effect. First, some previous fixed-time stability results on nonlinear impulsive systems in which stabilizing and destabilizing impulses were separately analyzed are extended to a general case in which the stabilizing and destabilizing impulses can be handled simultaneously. Additionally, using the same logic, a new predefined-time stability lemma for stochastic nonlinear systems with a general impulsive effect is obtained by using the inequality technique. Then, based on these novel results, two novel controllers are implemented to derive some simple fixed/predefined-time synchronization criteria for the considered complex-valued impulsive BAM neural networks with stochastic perturbations using the non-separation method. Finally, two numerical examples are given to demonstrate the feasibility of the obtained results.
8

Xu, Hao, and Sarangapani Jagannathan. "Neural Network-Based Finite Horizon Stochastic Optimal Control Design for Nonlinear Networked Control Systems." IEEE Transactions on Neural Networks and Learning Systems 26, no. 3 (March 2015): 472–85. http://dx.doi.org/10.1109/tnnls.2014.2315622.

9

Chanthorn, Pharunyou, Grienggrai Rajchakit, Usa Humphries, Pramet Kaewmesri, Ramalingam Sriraman, and Chee Peng Lim. "A Delay-Dividing Approach to Robust Stability of Uncertain Stochastic Complex-Valued Hopfield Delayed Neural Networks." Symmetry 12, no. 5 (April 25, 2020): 683. http://dx.doi.org/10.3390/sym12050683.

Abstract:
In scientific disciplines and other engineering applications, most of the systems refer to uncertainties, because when modeling physical systems the uncertain parameters are unavoidable. In view of this, it is important to investigate dynamical systems with uncertain parameters. In the present study, a delay-dividing approach is devised to study the robust stability issue of uncertain neural networks. Specifically, the uncertain stochastic complex-valued Hopfield neural network (USCVHNN) with time delay is investigated. Here, the uncertainties of the system parameters are norm-bounded. Based on the Lyapunov mathematical approach and homeomorphism principle, the sufficient conditions for the global asymptotic stability of USCVHNN are derived. To perform this derivation, we divide a complex-valued neural network (CVNN) into two parts, namely real and imaginary, using the delay-dividing approach. All the criteria are expressed by exploiting the linear matrix inequalities (LMIs). Based on two examples, we obtain good theoretical results that ascertain the usefulness of the proposed delay-dividing approach for the USCVHNN model.
10

Wu, Yongbao, Jilin Zhu, and Wenxue Li. "Intermittent Discrete Observation Control for Synchronization of Stochastic Neural Networks." IEEE Transactions on Cybernetics 50, no. 6 (June 2020): 2414–24. http://dx.doi.org/10.1109/tcyb.2019.2930579.

11

Liu, Zhiyou, Lichao Feng, Xinbin Li, Zhigang Lu, and Xianhui Meng. "Stabilization of Periodical Discrete Feedback Control for Markov Jumping Stochastic Systems." Symmetry 13, no. 12 (December 19, 2021): 2447. http://dx.doi.org/10.3390/sym13122447.

Abstract:
Motivated by the two strategies of intermittent control and discrete feedback control, this paper aims to introduce a periodically intermittent discrete feedback control in the drift part to stabilize an unstable Markov jumping stochastic differential system. It is illustrated that, by the approach of comparison principle, this can be achieved in the sense of almost sure exponential stability. Further, the stabilization theory is applied to Markov jumping stochastic recurrent neural networks.
12

Zhou, Wuneng, Qingyu Zhu, Peng Shi, Hongye Su, Jian'an Fang, and Liuwei Zhou. "Adaptive Synchronization for Neutral-Type Neural Networks with Stochastic Perturbation and Markovian Switching Parameters." IEEE Transactions on Cybernetics 44, no. 12 (December 2014): 2848–60. http://dx.doi.org/10.1109/tcyb.2014.2317236.

13

Feng, Lichao, Jinde Cao, and Lei Liu. "Robust analysis of discrete time noises for stochastic systems and application in neural networks." International Journal of Control 93, no. 12 (January 28, 2019): 2908–21. http://dx.doi.org/10.1080/00207179.2019.1568580.

14

Chen, Yun, and Wei Xing Zheng. "Stability Analysis of Time-Delay Neural Networks Subject to Stochastic Perturbations." IEEE Transactions on Cybernetics 43, no. 6 (December 2013): 2122–34. http://dx.doi.org/10.1109/tcyb.2013.2240451.

15

Yin, Liping, Jianguo Liu, Hongquan Qu, and Tao Li. "Two-Step Neural-Network-Based Fault Isolation for Stochastic Systems." Mathematics 10, no. 22 (November 14, 2022): 4261. http://dx.doi.org/10.3390/math10224261.

Abstract:
This paper studies a fault isolation method for an optical fiber vibration source detection and early warning system. We regard the vibration sources in the system as faults and then detect and isolate the faults of the system based on a two-step neural network. Firstly, the square root B-spline expansion method is used to approximate the output probability density functions. Secondly, the nonlinear weight dynamic model is established through a dynamic neural network. Thirdly, the nonlinear filter and residual generator are constructed to estimate the weight, analyze the residual, and estimate the threshold, so as to detect, diagnose, and isolate the faults. The feasibility criterion of fault detection and isolation is given by using some linear matrix inequalities, and the stability of the estimation error system is proven according to the Lyapunov theorem. Finally, simulation experiments based on an optical fiber vibration source system are given to verify the effectiveness of this method.
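For readers unfamiliar with the square-root B-spline expansion used in the first step above, the sketch below illustrates the generic idea under our own toy assumptions (a synthetic output density, a cubic B-spline basis, and a plain least-squares fit); it only shows the approximation step and is not the paper's fault-isolation pipeline.

```python
# Toy illustration (our own choices, not the paper's code): approximate the
# square root of an output probability density with a B-spline expansion,
# sqrt(pdf(y)) ≈ sum_i w_i B_i(y), and recover the density as the square.
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(y, knots, degree=3):
    """Matrix whose columns are the B-spline basis functions evaluated on y."""
    n_basis = len(knots) - degree - 1
    cols = [BSpline.basis_element(knots[i:i + degree + 2], extrapolate=False)(y)
            for i in range(n_basis)]
    return np.nan_to_num(np.column_stack(cols))     # NaN outside support -> 0

# synthetic "measured" output density on [0, 1]
y = np.linspace(0.0, 1.0, 200)
pdf = np.exp(-0.5 * ((y - 0.4) / 0.1) ** 2)
pdf /= pdf.sum() * (y[1] - y[0])                    # normalise to unit area

knots = np.concatenate(([0.0] * 3, np.linspace(0.0, 1.0, 8), [1.0] * 3))
B = bspline_basis(y, knots)                         # shape (len(y), 10)

w, *_ = np.linalg.lstsq(B, np.sqrt(pdf), rcond=None)   # expansion weights
pdf_hat = (B @ w) ** 2
print("max absolute reconstruction error:", float(np.max(np.abs(pdf_hat - pdf))))
```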
16

Kafka, Dominic, and Daniel N. Wilke. "An empirical study into finding optima in stochastic optimization of neural networks." Information Sciences 560 (June 2021): 235–55. http://dx.doi.org/10.1016/j.ins.2021.01.005.

17

Koda, Masato. "Stochastic sensitivity analysis method for neural network learning." International Journal of Systems Science 26, no. 3 (March 1995): 703–11. http://dx.doi.org/10.1080/00207729508929062.

18

Peng, Guoqiang, and Lihong Huang. "Exponential stability of hybrid stochastic recurrent neural networks with time-varying delays." Nonlinear Analysis: Hybrid Systems 2, no. 4 (November 2008): 1198–204. http://dx.doi.org/10.1016/j.nahs.2008.09.012.

19

Zhu, Q. M., and S. A. Billings. "Fast orthogonal identification of nonlinear stochastic models and radial basis function neural networks." International Journal of Control 64, no. 5 (July 1996): 871–86. http://dx.doi.org/10.1080/00207179608921662.

20

Fu, Qianhua, Jingye Cai, Shouming Zhong, and Yongbin Yu. "Pinning Impulsive Synchronization of Stochastic Memristor-based Neural Networks with Time-varying Delays." International Journal of Control, Automation and Systems 17, no. 1 (January 2019): 243–52. http://dx.doi.org/10.1007/s12555-018-0295-3.

21

Wang, Tong, Yongsheng Ding, Lei Zhang, and Kuangrong Hao. "Delay-dependent exponential state estimators for stochastic neural networks of neutral type with both discrete and distributed delays." International Journal of Systems Science 46, no. 4 (May 7, 2013): 670–80. http://dx.doi.org/10.1080/00207721.2013.794908.

22

Ituabhor, Odesanya, Joseph Isabona, Jangfa T. zhimwang, and Ikechi Risi. "Cascade Forward Neural Networks-based Adaptive Model for Real-time Adaptive Learning of Stochastic Signal Power Datasets." International Journal of Computer Network and Information Security 14, no. 3 (June 8, 2022): 63–74. http://dx.doi.org/10.5815/ijcnis.2022.03.05.

Abstract:
In this work, adaptive learning of a monitored real-time stochastic phenomenon over an operational LTE broadband radio network interface is proposed using a cascade forward neural network (CFNN) model. The optimal architecture of the model has been implemented computationally in the input and hidden units by means of an incremental search process. Particularly, we have applied the proposed adaptive-based cascaded forward neural network model for realistic learning of practical signal data taken from an operational LTE cellular network. The performance of the adaptive learning model is compared with a benchmark feedforward neural network model (FFNN) using a number of measured stochastic SINR datasets obtained over a period of three months at two indoor and outdoor locations of the LTE network. The results showed that the proposed CFNN model provided the best adaptive learning performance (0.9310 RMSE; 0.8669 MSE; 0.5210 MAE; 0.9311 R), compared to the benchmark FFNN model (1.0566 RMSE; 1.1164 MSE; 0.5568 MAE; 0.9131 R) in the first studied outdoor location. Similar robust performances were attained for the proposed CFNN model in other locations, thus indicating that it is superior to the FFNN model for adaptive learning of real-time stochastic phenomena.
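Since the distinguishing feature of a cascade-forward network is a direct input-to-output connection on top of the ordinary feed-forward path, here is a deliberately small numpy sketch of that topology on a synthetic one-step-ahead prediction task; the network size, learning rate, and toy series are our own choices and have nothing to do with the datasets reported in the paper.

```python
# Minimal cascade-forward network (CFNN) sketch: the input feeds the hidden
# layer and, through a direct skip connection, also the output layer.
import numpy as np

rng = np.random.default_rng(0)

def cfnn_init(n_in, n_hidden):
    return {
        "W1": rng.normal(scale=0.5, size=(n_hidden, n_in)),  # input -> hidden
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.5, size=(1, n_hidden)),     # hidden -> output
        "W3": rng.normal(scale=0.5, size=(1, n_in)),         # input -> output (cascade link)
        "b2": np.zeros(1),
    }

def cfnn_forward(p, x):
    h = np.tanh(p["W1"] @ x + p["b1"])
    y = p["W2"] @ h + p["W3"] @ x + p["b2"]              # direct input term is the CFNN feature
    return y, h

def train_step(p, x, target, lr=0.01):
    y, h = cfnn_forward(p, x)
    e = y - target                                       # squared-error gradient
    p["W2"] -= lr * np.outer(e, h)
    p["W3"] -= lr * np.outer(e, x)
    p["b2"] -= lr * e
    dh = (p["W2"].T @ e) * (1.0 - h ** 2)                # back-propagate through tanh
    p["W1"] -= lr * np.outer(dh, x)
    p["b1"] -= lr * dh
    return float(e[0] ** 2)

# toy SINR-like series: predict the next sample from the previous three
series = np.sin(0.1 * np.arange(500)) + 0.1 * rng.normal(size=500)
params = cfnn_init(n_in=3, n_hidden=8)
for epoch in range(20):
    mse = np.mean([train_step(params, series[t - 3:t], series[t:t + 1])
                   for t in range(3, len(series))])
print("training MSE after 20 epochs:", round(float(mse), 4))
```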
23

Zhang, Wei, and Junjian Huang. "Stability Analysis of Stochastic Delayed Differential Systems with State-Dependent-Delay Impulses: Application of Neural Networks." Cognitive Computation 14, no. 2 (January 17, 2022): 805–13. http://dx.doi.org/10.1007/s12559-021-09967-x.

24

Antonacci, Yuri, Ludovico Minati, Luca Faes, Riccardo Pernice, Giandomenico Nollo, Jlenia Toppi, Antonio Pietrabissa, and Laura Astolfi. "Estimation of Granger causality through Artificial Neural Networks: applications to physiological systems and chaotic electronic oscillators." PeerJ Computer Science 7 (May 18, 2021): e429. http://dx.doi.org/10.7717/peerj-cs.429.

Abstract:
One of the most challenging problems in the study of complex dynamical systems is to find the statistical interdependencies among the system components. Granger causality (GC) represents one of the most employed approaches, based on modeling the system dynamics with a linear vector autoregressive (VAR) model and on evaluating the information flow between two processes in terms of prediction error variances. In its most advanced setting, GC analysis is performed through a state-space (SS) representation of the VAR model that allows to compute both conditional and unconditional forms of GC by solving only one regression problem. While this problem is typically solved through Ordinary Least Square (OLS) estimation, a viable alternative is to use Artificial Neural Networks (ANNs) implemented in a simple structure with one input and one output layer and trained in a way such that the weights matrix corresponds to the matrix of VAR parameters. In this work, we introduce an ANN combined with SS models for the computation of GC. The ANN is trained through the Stochastic Gradient Descent L1 (SGD-L1) algorithm, and a cumulative penalty inspired from penalized regression is applied to the network weights to encourage sparsity. Simulating networks of coupled Gaussian systems, we show how the combination of ANNs and SGD-L1 allows to mitigate the strong reduction in accuracy of OLS identification in settings of low ratio between number of time series points and of VAR parameters. We also report how the performances in GC estimation are influenced by the number of iterations of gradient descent and by the learning rate used for training the ANN. We recommend using some specific combinations for these parameters to optimize the performance of GC estimation. Then, the performances of ANN and OLS are compared in terms of GC magnitude and statistical significance to highlight the potential of the new approach to reconstruct causal coupling strength and network topology even in challenging conditions of data paucity. The results highlight the importance of a proper selection of the regularization parameter, which determines the degree of sparsity in the estimated network. Furthermore, we apply the two approaches to real data scenarios, to study the physiological network of brain and peripheral interactions in humans under different conditions of rest and mental stress, and the effects of the newly emerged concept of remote synchronization on the information exchanged in a ring of electronic oscillators. The results highlight how ANNs provide a mesoscopic description of the information exchanged in networks of multiple interacting physiological systems, preserving the most active causal interactions between cardiovascular, respiratory and brain systems. Moreover, ANNs can reconstruct the flow of directed information in a ring of oscillators whose statistical properties can be related to those of physiological networks.
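To make the linear-layer-as-VAR idea concrete, the following self-contained sketch (our own toy implementation, not the authors' released code) fits VAR coefficients by stochastic gradient descent with a plain L1 subgradient penalty and scores Granger causality as the log ratio of restricted to full prediction-error variances; the cumulative-penalty variant of SGD-L1 and the state-space machinery described in the abstract are omitted for brevity.

```python
# Toy Granger-causality estimate from a single linear layer whose weights act
# as VAR(1) coefficients, trained by SGD with an L1 subgradient penalty.
import numpy as np

rng = np.random.default_rng(1)

def simulate_var(T=1000):
    """3-variable VAR(1) in which x0 drives x1 and x1 drives x2."""
    A = np.array([[0.5, 0.0, 0.0],
                  [0.4, 0.5, 0.0],
                  [0.0, 0.4, 0.5]])
    x = np.zeros((T, 3))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + 0.1 * rng.normal(size=3)
    return x

def fit_sgd_l1(X, Y, lam=1e-3, lr=1e-2, epochs=30):
    """Linear predictor Y ≈ X @ W fitted by SGD with an L1 penalty on W."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            err = X[i] @ W - Y[i]
            W -= lr * (np.outer(X[i], err) + lam * np.sign(W))
    return W

def gc_matrix(x):
    """gc[i, j]: log variance ratio measuring the influence of x_j on x_i."""
    X, Y = x[:-1], x[1:]
    full_var = np.var(Y - X @ fit_sgd_l1(X, Y), axis=0)
    n = x.shape[1]
    gc = np.zeros((n, n))
    for j in range(n):
        keep = [k for k in range(n) if k != j]
        restr_var = np.var(Y - X[:, keep] @ fit_sgd_l1(X[:, keep], Y), axis=0)
        gc[:, j] = np.log(restr_var / full_var)
    return gc

# expected: clearly positive entries at (1, 0) and (2, 1), near zero elsewhere
print(np.round(gc_matrix(simulate_var()), 3))
```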
25

Huang, C., Y. He, L. Huang, and W. Zhu. "pth moment stability analysis of stochastic recurrent neural networks with time-varying delays." Information Sciences 178, no. 9 (May 1, 2008): 2194–203. http://dx.doi.org/10.1016/j.ins.2008.01.008.

26

Ren, Jie, Qimin Zhang, Feilong Cao, Chunmei Ding, and Li Wang. "Modeling a stochastic age-structured capital system with Poisson jumps using neural networks." Information Sciences 516 (April 2020): 254–65. http://dx.doi.org/10.1016/j.ins.2019.12.048.

27

Wang, Fang, LiLi Zhang, Shaowei Zhou, and Yuanyuan Huang. "Neural network-based finite-time control of quantized stochastic nonlinear systems." Neurocomputing 362 (October 2019): 195–202. http://dx.doi.org/10.1016/j.neucom.2019.06.060.

28

Wang, Jianhui, Zhi Liu, Yun Zhang, and C. L. Philip Chen. "Neural Adaptive Event-Triggered Control for Nonlinear Uncertain Stochastic Systems With Unknown Hysteresis." IEEE Transactions on Neural Networks and Learning Systems 30, no. 11 (November 2019): 3300–3312. http://dx.doi.org/10.1109/tnnls.2018.2890699.

29

Cui, Ying, Yurong Liu, Wenbing Zhang, and Fuad E. Alsaadi. "Stochastic Stability for a Class of Discrete-time Switched Neural Networks with Stochastic Noise and Time-varying Mixed Delays." International Journal of Control, Automation and Systems 16, no. 1 (February 2018): 158–67. http://dx.doi.org/10.1007/s12555-016-0778-z.

30

Yang, Rongni, Huijun Gao, and Peng Shi. "Novel Robust Stability Criteria for Stochastic Hopfield Neural Networks With Time Delays." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 39, no. 2 (April 2009): 467–74. http://dx.doi.org/10.1109/tsmcb.2008.2006860.

31

Li, Xiaofan, Jian-an Fang, and Huiyuan Li. "Exponential stabilisation of stochastic memristive neural networks under intermittent adaptive control." IET Control Theory & Applications 11, no. 15 (October 13, 2017): 2432–39. http://dx.doi.org/10.1049/iet-cta.2017.0021.

32

Zhang, Ganlei, Jiayong Zhang, Wei Li, Chao Ge, and Yajuan Liu. "Exponential Synchronization of Delayed Neural Networks with Actuator Failure Using Stochastic Sampled-data Control." International Journal of Control, Automation and Systems 20, no. 2 (February 2022): 691–701. http://dx.doi.org/10.1007/s12555-020-0631-2.

33

Xu, Changjin, Lilin Chen, and Peiluan Li. "On p-th Moment Exponential Stability for Stochastic Cellular Neural Networks with Distributed Delays." International Journal of Control, Automation and Systems 16, no. 3 (May 15, 2018): 1217–25. http://dx.doi.org/10.1007/s12555-017-0570-8.

34

Li, Liangchen, Rui Xu, and Jiazhe Lin. "Mean-square Stability in Lagrange Sense for Stochastic Memristive Neural Networks with Leakage Delay." International Journal of Control, Automation and Systems 17, no. 8 (May 6, 2019): 2145–58. http://dx.doi.org/10.1007/s12555-018-0662-0.

35

Nian, Fuzhong, and Jia Li. "The Module-Phase Synchronization of Complex-Valued Neural Networks with Time-Varying Delay and Stochastic Perturbations." Journal of Systems Science and Complexity 34, no. 6 (October 23, 2021): 2139–54. http://dx.doi.org/10.1007/s11424-021-9024-8.

36

Feng, Shan-Shan, Zong-Yao Sun, Cheng-Qian Zhou, Chih-Chiang Chen, and Qinghua Meng. "Output Tracking Control via Neural Networks for High-Order Stochastic Nonlinear Systems with Dynamic Uncertainties." International Journal of Fuzzy Systems 23, no. 3 (February 5, 2021): 716–26. http://dx.doi.org/10.1007/s40815-020-01000-x.

37

Elanayar V.T., S., and Y. C. Shin. "Radial basis function neural network for approximation and estimation of nonlinear stochastic dynamic systems." IEEE Transactions on Neural Networks 5, no. 4 (July 1994): 594–603. http://dx.doi.org/10.1109/72.298229.

38

Syed Ali, M., R. Saravanakumar, Choon Ki Ahn, and Hamid Reza Karimi. "Stochastic H∞ filtering for neural networks with leakage delay and mixed time-varying delays." Information Sciences 388-389 (May 2017): 118–34. http://dx.doi.org/10.1016/j.ins.2017.01.010.

39

Khan, Siraj, Muhammad Asim, Salabat Khan, Ahmad Musyafa, and Qingyao Wu. "Unsupervised domain adaptation using fuzzy rules and stochastic hierarchical convolutional neural networks." Computers and Electrical Engineering 105 (January 2023): 108547. http://dx.doi.org/10.1016/j.compeleceng.2022.108547.

40

Wang, Huanqing, Peter Xiaoping Liu, Jialei Bao, Xue-Jun Xie, and Shuai Li. "Adaptive Neural Output-Feedback Decentralized Control for Large-Scale Nonlinear Systems With Stochastic Disturbances." IEEE Transactions on Neural Networks and Learning Systems 31, no. 3 (March 2020): 972–83. http://dx.doi.org/10.1109/tnnls.2019.2912082.

41

Gao, Wenyun, Xinjun Wang, Qinghui Wu, and Xinghui Yin. "Neural Networks-Based Adaptive Tracking Control for a Class of High-Order Stochastic Nonlinear Time-Delay Systems." IEEE Access 7 (2019): 63992–4004. http://dx.doi.org/10.1109/access.2019.2916319.

42

Kumaresan, N., and P. Balasubramaniam. "Optimal control for stochastic linear quadratic singular system using neural networks." Journal of Process Control 19, no. 3 (March 2009): 482–88. http://dx.doi.org/10.1016/j.jprocont.2008.05.006.

43

Jirsa, Viktor, and Hiba Sheheitli. "Entropy, free energy, symmetry and dynamics in the brain." Journal of Physics: Complexity 3, no. 1 (February 3, 2022): 015007. http://dx.doi.org/10.1088/2632-072x/ac4bec.

Abstract:
Neuroscience is home to concepts and theories with roots in a variety of domains including information theory, dynamical systems theory, and cognitive psychology. Not all of those can be coherently linked, some concepts are incommensurable, and domain-specific language poses an obstacle to integration. Still, conceptual integration is a form of understanding that provides intuition and consolidation, without which progress remains unguided. This paper is concerned with the integration of deterministic and stochastic processes within an information theoretic framework, linking information entropy and free energy to mechanisms of emergent dynamics and self-organization in brain networks. We identify basic properties of neuronal populations leading to an equivariant matrix in a network, in which complex behaviors can naturally be represented through structured flows on manifolds establishing the internal model relevant to theories of brain function. We propose a neural mechanism for the generation of internal models from symmetry breaking in the connectivity of brain networks. The emergent perspective illustrates how free energy can be linked to internal models and how they arise from the neural substrate.
44

Fu, Yingying, Jing Li, Shuiyan Wu, and Xiaobo Li. "Dynamic Event-Triggered Adaptive Tracking Control for a Class of Unknown Stochastic Nonlinear Strict-Feedback Systems." Symmetry 13, no. 9 (September 7, 2021): 1648. http://dx.doi.org/10.3390/sym13091648.

Abstract:
In this paper, the dynamic event-triggered tracking control issue is studied for a class of unknown stochastic nonlinear systems with strict-feedback form. At first, neural networks (NNs) are used to approximate the unknown nonlinear functions. Then, a dynamic event-triggered controller (DETC) is designed through the adaptive backstepping method. In particular, the triggering threshold is dynamically adjusted. Compared with its corresponding static event-triggered mechanism (SETM), the dynamic event-triggered mechanism (DETM) can generate a larger execution interval and further save resources. Moreover, two simulation examples verify that the closed-loop stochastic system signals are ultimately fourth-moment semi-globally uniformly bounded (SGUUB).
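To illustrate what a "dynamically adjusted threshold" means in practice, the toy simulation below (our own scalar example with invented gains, unrelated to the strict-feedback systems treated in the paper) compares a static event-triggered rule with a dynamic one that adds an internal variable to the threshold, which typically lengthens the inter-event intervals.

```python
# Static vs. dynamic event-triggered sampling on a toy scalar plant
# dx/dt = x + u with u = -2 * x_sampled (all gains are assumed values).
import numpy as np

dt, T = 1e-3, 10.0
sigma, lam = 0.2, 1.0           # threshold gain and decay rate of eta (assumed)

def simulate(dynamic):
    x, xs, eta = 1.0, 1.0, 0.1  # state, last sampled state, dynamic variable
    events = 0
    for _ in range(int(T / dt)):
        u = -2.0 * xs                       # controller only sees the sampled state
        x += dt * (x + u)                   # explicit Euler step of the plant
        e = xs - x                          # sampling error
        gap = sigma * x * x - e * e
        if dynamic:
            eta += dt * (-lam * eta + gap)  # dynamic threshold variable
            trigger = eta + gap < 0.0       # dynamic rule waits until eta is used up
        else:
            trigger = gap < 0.0             # static rule: |e|^2 >= sigma * |x|^2
        if trigger:
            xs, events = x, events + 1      # transmit a new sample
    return events

print("SETM events:", simulate(False), " DETM events:", simulate(True))
```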
45

Lou, Xuyang, and Baotong Cui. "Stochastic Exponential Stability for Markovian Jumping BAM Neural Networks With Time-Varying Delays." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 37, no. 3 (June 2007): 713–19. http://dx.doi.org/10.1109/tsmcb.2006.887426.

46

Humphries, Usa, Grienggrai Rajchakit, Ramalingam Sriraman, Pramet Kaewmesri, Pharunyou Chanthorn, Chee Peng Lim, and Rajendran Samidurai. "An Extended Analysis on Robust Dissipativity of Uncertain Stochastic Generalized Neural Networks with Markovian Jumping Parameters." Symmetry 12, no. 6 (June 20, 2020): 1035. http://dx.doi.org/10.3390/sym12061035.

Abstract:
The main focus of this research is on a comprehensive analysis of robust dissipativity issues pertaining to a class of uncertain stochastic generalized neural network (USGNN) models in the presence of time-varying delays and Markovian jumping parameters (MJPs). In real-world environments, most practical systems are subject to uncertainties. As a result, we take the norm-bounded parameter uncertainties, as well as stochastic disturbances into consideration in our study. To address the task, we formulate the appropriate Lyapunov–Krasovskii functional (LKF), and through the use of effective integral inequalities, simplified linear matrix inequality (LMI) based sufficient conditions are derived. We validate the feasible solutions through numerical examples using MATLAB software. The simulation results are analyzed and discussed, which positively indicate the feasibility and effectiveness of the obtained theoretical findings.
47

Luo, Jinnan, Wenhong Tian, Shouming Zhong, Kaibo Shi, Xian-Ming Gu, and Wenqin Wang. "Improved delay-probability-dependent results for stochastic neural networks with randomly occurring uncertainties and multiple delays." International Journal of Systems Science 49, no. 9 (June 10, 2018): 2039–59. http://dx.doi.org/10.1080/00207721.2018.1483044.

48

Park, Han G., and Michail Zak. "Gray-Box Approach for Fault Detection of Dynamical Systems." Journal of Dynamic Systems, Measurement, and Control 125, no. 3 (September 1, 2003): 451–54. http://dx.doi.org/10.1115/1.1589032.

Abstract:
We present a fault detection method called the gray-box. The term “gray-box” refers to the approach wherein a deterministic model of the system, i.e., the “white box,” is used to filter the data and generate a residual, while a stochastic model, i.e., the “black box,” is used to describe the residual. The residual is described by a three-tier stochastic model. An auto-regressive process and a time-delay feed-forward neural network describe the linear and nonlinear components of the residual, respectively. The last component, the noise, is characterized by its moments. Faults are detected by monitoring the parameters of the auto-regressive model, the weights of the neural network, and the moments of the noise. This method is demonstrated on a simulated system of a gas turbine with a time-delay feedback actuator.
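As a rough picture of the three-tier residual model described here, the toy code below (our own construction: synthetic AR residuals, an invented drift fault, and an arbitrary alarm threshold) fits an AR model to a residual, a small time-delay feed-forward network to what the AR part leaves behind, summarises the leftover noise by its moments, and flags a fault when the AR coefficients drift.

```python
# Sketch of a gray-box style residual monitor: AR model + small time-delay
# feed-forward network + noise moments, with a toy drift fault.
import numpy as np

rng = np.random.default_rng(0)
p = 4                                               # AR order / number of delay taps

def lagged(r, p):
    """Design matrix of p delayed copies of r, aligned with targets r[p:]."""
    return np.column_stack([r[p - k - 1:len(r) - k - 1] for k in range(p)])

def ar_fit(r, p):
    X = lagged(r, p)
    a, *_ = np.linalg.lstsq(X, r[p:], rcond=None)
    return a, r[p:] - X @ a                         # coefficients, linear remainder

def nn_fit(r, p, n_hidden=6, lr=0.01, epochs=200):
    """Tiny time-delay feed-forward net fitted to the remainder by gradient descent."""
    W1 = rng.normal(scale=0.3, size=(n_hidden, p))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.3, size=n_hidden)
    X, y = lagged(r, p), r[p:]
    for _ in range(epochs):
        H = np.tanh(X @ W1.T + b1)                  # hidden activations, (N, n_hidden)
        e = H @ w2 - y
        w2 -= lr * H.T @ e / len(y)
        G = np.outer(e, w2) * (1.0 - H ** 2)        # back-prop through tanh
        W1 -= lr * G.T @ X / len(y)
        b1 -= lr * G.mean(axis=0)
    return (W1, b1, w2), y - np.tanh(X @ W1.T + b1) @ w2

# nominal residual vs. a "faulty" residual whose AR pole has drifted
nominal, faulty = np.zeros(2000), np.zeros(2000)
for t in range(1, 2000):
    nominal[t] = 0.6 * nominal[t - 1] + 0.05 * rng.normal()
    faulty[t] = 0.9 * faulty[t - 1] + 0.05 * rng.normal()

a_nom, remainder = ar_fit(nominal, p)
_, noise = nn_fit(remainder, p)
a_flt, _ = ar_fit(faulty, p)

print("nominal AR coefficients:", np.round(a_nom, 2))
print("faulty  AR coefficients:", np.round(a_flt, 2))
print("noise moments (mean, var):", round(float(noise.mean()), 4), round(float(noise.var()), 4))
print("fault flagged:", bool(np.abs(a_flt - a_nom).max() > 0.2))  # toy threshold
```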
49

Koda, M. "Neural network learning based on stochastic sensitivity analysis." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 27, no. 1 (February 1997): 132–35. http://dx.doi.org/10.1109/3477.552193.

50

Wu, Li-Bing, and Guang-Hong Yang. "Adaptive Output Neural Network Control for a Class of Stochastic Nonlinear Systems With Dead-Zone Nonlinearities." IEEE Transactions on Neural Networks and Learning Systems 28, no. 3 (March 2017): 726–39. http://dx.doi.org/10.1109/tnnls.2015.2503004.
