Journal articles on the topic 'Stochastic control theory'

Consult the top 50 journal articles for your research on the topic 'Stochastic control theory.'


1

Rubal’skii, G. B. "Stochastic theory of inventory control." Automation and Remote Control 70, no. 12 (December 2009): 2098–108. http://dx.doi.org/10.1134/s0005117909120169.

2

Fleishman, Benzion Semionovich. "Stochastic theory of community control." Ecological Modelling 39, no. 1-2 (November 1987): 121–59. http://dx.doi.org/10.1016/0304-3800(87)90017-2.

3

OHSUMI, Akira. "Some Topics in Stochastic Control Theory." Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications 1998 (May 5, 1998): 163–70. http://dx.doi.org/10.5687/sss.1998.163.

4

De la Salle, S. "Stochastic optimal control theory and application." Automatica 24, no. 3 (May 1988): 425–26. http://dx.doi.org/10.1016/0005-1098(88)90086-6.

5

Chou, Sidney. "Controller Tuning Based on Stochastic Control Theory." Journal of Dynamic Systems, Measurement, and Control 110, no. 1 (March 1, 1988): 100–104. http://dx.doi.org/10.1115/1.3152638.

Abstract:
A practical controller tuning method is proposed for selecting controller gains in the face of design difficulties such as poor repeatability, long delay, nonlinearity, conflicting control objectives, model inaccuracy, and system complexity. Unlike many adaptive schemes striving to acquire knowledge about the system being controlled, the proposed approach is aimed at designing nonadaptive, or at best, gain scheduling controllers in a quantitative, systematic way while meeting design specifications.
6

Zadeh, L. A. "Stochastic finite-state systems in control theory." Information Sciences 251 (December 2013): 1–9. http://dx.doi.org/10.1016/j.ins.2013.06.039.

7

Solo, V. "Stochastic adaptive control and Martingale limit theory." IEEE Transactions on Automatic Control 35, no. 1 (1990): 66–71. http://dx.doi.org/10.1109/9.45146.

8

Repperger, D. W., and K. A. Farris. "Stochastic resonance–a nonlinear control theory interpretation." International Journal of Systems Science 41, no. 7 (July 2010): 897–907. http://dx.doi.org/10.1080/00207720903494692.

9

Zhang, Weihai, Honglei Xu, Huanqing Wang, and Zhongwei Lin. "Stochastic Systems and Control: Theory and Applications." Mathematical Problems in Engineering 2017 (2017): 1–4. http://dx.doi.org/10.1155/2017/4063015.

10

Tapiero, Charles S. "Applicable stochastic control: From theory to practice." European Journal of Operational Research 73, no. 2 (March 1994): 209–25. http://dx.doi.org/10.1016/0377-2217(94)90260-7.

11

Lanchares, Manuel, and Wassim M. Haddad. "Nonlinear Optimal Control for Stochastic Dynamical Systems." Mathematics 12, no. 5 (February 22, 2024): 647. http://dx.doi.org/10.3390/math12050647.

Abstract:
This paper presents a comprehensive framework addressing optimal nonlinear analysis and feedback control synthesis for nonlinear stochastic dynamical systems. The focus lies on establishing connections between stochastic Lyapunov theory and stochastic Hamilton–Jacobi–Bellman theory within a unified perspective. We demonstrate that the closed-loop nonlinear system’s asymptotic stability in probability is ensured through a Lyapunov function, identified as the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation. This dual assurance guarantees both stochastic stability and optimality. Additionally, optimal feedback controllers for affine nonlinear systems are developed using an inverse optimality framework tailored to the stochastic stabilization problem. Furthermore, the paper derives stability margins for optimal and inverse optimal stochastic feedback regulators. Gain, sector, and disk margin guarantees are established for nonlinear stochastic dynamical systems controlled by nonlinear optimal and inverse optimal Hamilton–Jacobi–Bellman controllers.
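For orientation, the steady-state stochastic Hamilton–Jacobi–Bellman equation invoked in this abstract has, in a generic formulation (the notation here is illustrative, not necessarily the paper's), the form

\[ 0 = \min_{u}\Big[ L(x,u) + V'(x)\big(f(x) + G(x)u\big) + \tfrac{1}{2}\,\mathrm{tr}\big(D(x)^{\top} V''(x)\, D(x)\big) \Big] \]

for a controlled diffusion \(dx = (f(x) + G(x)u)\,dt + D(x)\,dw\) with running cost \(L(x,u)\). The minimizing feedback \(u = \phi(x)\) is the optimal stationary control, and \(V\) serves simultaneously as the Lyapunov function certifying asymptotic stability in probability.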
12

Parise, Francesca, and Asuman Ozdaglar. "Analysis and Interventions in Large Network Games." Annual Review of Control, Robotics, and Autonomous Systems 4, no. 1 (May 3, 2021): 455–86. http://dx.doi.org/10.1146/annurev-control-072020-084434.

Abstract:
We review classic results and recent progress on equilibrium analysis, dynamics, and optimal interventions in network games with both continuous and discrete strategy sets. We study strategic interactions in deterministic networks as well as networks generated from a stochastic network formation model. For the former case, we review a unifying framework for analysis based on the theory of variational inequalities. For the latter case, we highlight how knowledge of the stochastic network formation model can be used by a central planner to design interventions for large networks in a computationally efficient manner when exact network data are not available.
13

Meerkov, S. M., and T. Runolfsson. "Theory of Aiming Control for Linear Stochastic Systems." IFAC Proceedings Volumes 23, no. 8 (August 1990): 43–47. http://dx.doi.org/10.1016/s1474-6670(17)51981-3.

14

Cutland, Nigel J. "Infinitesimal methods in control theory: Deterministic and stochastic." Acta Applicandae Mathematicae 5, no. 2 (February 1986): 105–35. http://dx.doi.org/10.1007/bf00046584.

15

Shaikhet, Leonid. "Some Unsolved Problems in Stability and Optimal Control Theory of Stochastic Systems." Mathematics 10, no. 3 (February 1, 2022): 474. http://dx.doi.org/10.3390/math10030474.

Abstract:
Although the theory of stability and optimal control for different types of stochastic systems is well developed and very popular in research, there are some simply and clearly formulated problems whose solutions have not been found so far. Six open stability problems for stochastic differential equations with delay and for stochastic difference equations with discrete and continuous time, together with one open optimal control problem for a stochastic hyperbolic equation with two-parameter white noise, are offered to the readers' attention.
16

Zhu, W. Q. "Nonlinear Stochastic Dynamics and Control in Hamiltonian Formulation." Applied Mechanics Reviews 59, no. 4 (July 1, 2006): 230–48. http://dx.doi.org/10.1115/1.2193137.

Abstract:
The significant advances in nonlinear stochastic dynamics and control in Hamiltonian formulation during the past decade are reviewed. The exact stationary solutions and the equivalent nonlinear system method for Gaussian-white-noise excited and dissipated Hamiltonian systems, the stochastic averaging method for quasi-Hamiltonian systems, and the stochastic stability, stochastic bifurcation, first-passage time, and nonlinear stochastic optimal control of quasi-Hamiltonian systems are summarized. Possible extensions and applications of the theory are pointed out. This review article cites 158 references.
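For readers unfamiliar with the stochastic averaging method mentioned above: for a quasi-nonintegrable Hamiltonian system excited by Gaussian white noise, the method reduces the dynamics to a one-dimensional averaged Itô equation for the Hamiltonian (generic form, not the review's exact notation),

\[ dH = m(H)\,dt + \sigma(H)\,dB(t), \]

where the drift \(m(H)\) and diffusion \(\sigma(H)\) are obtained by averaging the original drift and diffusion coefficients over the isoenergetic surface \(H(q,p)=H\). Stability, first-passage, and optimal control problems are then posed for this reduced scalar equation.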
17

Amirdjanova, Anna, and Gopinath Kallianpur. "Stochastic Vorticity and Associated Filtering Theory." Applied Mathematics and Optimization 46, no. 2 (December 19, 2002): 89–96. http://dx.doi.org/10.1007/s00245-002-0755-1.

18

Zhang, Jinyi. "Control Analysis of Stochastic Lagging Discrete Ecosystems." Symmetry 14, no. 5 (May 19, 2022): 1039. http://dx.doi.org/10.3390/sym14051039.

Abstract:
In this paper, the control analysis of a stochastic lagging discrete ecosystem is investigated. The equilibrium points with symmetry of a two-dimensional stochastic lagging discrete ecosystem are discussed, together with their dynamical behavior and control analysis. Using orthogonal polynomial approximation theory, the stochastic lagging discrete ecosystem is approximately transformed into its equivalent deterministic ecosystem. Based on the stability and bifurcation theory of deterministic discrete systems, mathematical analysis shows that asymptotic stability and Hopf bifurcation exist in the ecosystem, and control functions are constructed to control the system dynamics. Finally, the effects of different random strengths on bifurcation control and asymptotic stability control are verified by numerical simulations, which validate the correctness and effectiveness of the main results of this paper.
19

LOEWEN, PHILIP D. "Existence Theory for a Stochastic Bolza Problem." IMA Journal of Mathematical Control and Information 4, no. 4 (1987): 301–20. http://dx.doi.org/10.1093/imamci/4.4.301.

20

Fleming, Wendell H., and Tao Pang. "An Application of Stochastic Control Theory to Financial Economics." SIAM Journal on Control and Optimization 43, no. 2 (January 2004): 502–31. http://dx.doi.org/10.1137/s0363012902419060.

21

Kraft, D. "Optimal estimation with an introduction to stochastic control theory." Automatica 23, no. 6 (November 1987): 807–8. http://dx.doi.org/10.1016/0005-1098(87)90047-1.

22

Mortensen, R. E. "Stochastic Optimal Control: Theory and Application (Robert F. Stengel)." SIAM Review 31, no. 1 (March 1989): 153–54. http://dx.doi.org/10.1137/1031032.

23

Min, Kyongyob. "A Stochastic Optimal Control Theory to Model Spontaneous Breathing." Applied Mathematics 04, no. 11 (2013): 1537–46. http://dx.doi.org/10.4236/am.2013.411208.

24

Wang, Qinan, and Mahmut Parlar. "A three-person game theory model arising in stochastic inventory control theory." European Journal of Operational Research 76, no. 1 (July 1994): 83–97. http://dx.doi.org/10.1016/0377-2217(94)90008-6.

25

Anukiruthika, K., N. Durga, and P. Muthukumar. "Approximate controllability of semilinear retarded stochastic differential system with non-instantaneous impulses: Fredholm theory approach." IMA Journal of Mathematical Control and Information 38, no. 2 (March 18, 2021): 684–713. http://dx.doi.org/10.1093/imamci/dnab006.

Abstract:
This article deals with the approximate controllability of semilinear retarded integrodifferential equations with non-instantaneous impulses governed by Poisson jumps in Hilbert space. The existence of a mild solution is established by using stochastic calculus and a suitable fixed point technique. The approximate controllability of the proposed non-linear stochastic differential system is obtained by employing the theory of interpolation spaces and Fredholm theory. Finally, applications to the stochastic heat equation and retarded type stochastic Benjamin–Bona–Mahony equation are provided to illustrate the developed theoretical results.
26

Bensoussan, Alain, and Sheung Chi Phillip Yam. "Mean field approach to stochastic control with partial information." ESAIM: Control, Optimisation and Calculus of Variations 27 (2021): 89. http://dx.doi.org/10.1051/cocv/2021085.

Abstract:
In our present article, we follow our way of developing mean field type control theory in our earlier works [Bensoussan et al., Mean Field Games and Mean Field Type Control Theory. Springer, New York (2013)], by first introducing the Bellman and then master equations, the system of Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck (FP) equations, and then tackling them by looking for the semi-explicit solution for the linear quadratic case, especially with an arbitrary initial distribution; such a problem, left open for a long time, has not been specifically dealt with in the earlier literature, such as Bensoussan [Stochastic Control of Partially Observable Systems. Cambridge University Press, (1992)] and Nisio [Stochastic control theory: Dynamic programming principle. Springer (2014)], which only tackled the linear quadratic setting with Gaussian initial distributions. Thanks to the effective mean-field theory, we propose a solution to this long-standing problem in the general non-Gaussian case. Besides, the problem considered here can be reduced to the model in Bandini et al. [Stochastic Process. Appl. 129 (2019) 674–711], which is fundamentally different from our present proposed framework.
27

Zheng, Zhonghao, Xiuchun Bi, and Shuguang Zhang. "Stochastic Optimization Theory of Backward Stochastic Differential Equations Driven by G-Brownian Motion." Abstract and Applied Analysis 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/564524.

Abstract:
We consider stochastic optimal control problems under G-expectation. Based on the theory of backward stochastic differential equations driven by G-Brownian motion, which was introduced in Hu et al. (2012), we can investigate more general stochastic optimal control problems under G-expectation than those constructed in Zhang (2011). We then obtain a generalized dynamic programming principle, and the value function is proved to be a viscosity solution of a fully nonlinear second-order partial differential equation.
28

Borkar, Vivek S. "White-Noise Representations in Stochastic Realization Theory." SIAM Journal on Control and Optimization 31, no. 5 (September 1993): 1093–102. http://dx.doi.org/10.1137/0331050.

29

Accardi, Luigi, and Andreas Boukas. "Control of Quantum Langevin Equations." Open Systems & Information Dynamics 10, no. 01 (March 2003): 89–104. http://dx.doi.org/10.1023/a:1022927426053.

Abstract:
The problem of controlling quantum stochastic evolutions arises naturally in several different fields such as quantum chemistry, quantum information theory, quantum engineering, etc. In this paper, we apply the recently discovered closed form of the unitarity conditions for stochastic evolutions driven by the square of white noise [9] to solve this problem in the case of quadratic cost functionals (cf. (5.5) below). The optimal control is explicitly given in terms of the solution of an operator Riccati equation. Under general conditions on the system Hamiltonian part of the stochastic evolution and on the system observable to be controlled, this equation admits solutions with the required properties and they can be explicitly described.
30

Kerimkulov, B., D. Šiška, and L. Szpruch. "A Modified MSA for Stochastic Control Problems." Applied Mathematics & Optimization 84, no. 3 (February 25, 2021): 3417–36. http://dx.doi.org/10.1007/s00245-021-09750-2.

Abstract:
The classical Method of Successive Approximations (MSA) is an iterative method for solving stochastic control problems and is derived from Pontryagin's optimality principle. It is known that the MSA may fail to converge. Using careful estimates for the backward stochastic differential equation (BSDE), this paper suggests a modification to the MSA algorithm. This modified MSA is shown to converge for general stochastic control problems with control in both the drift and diffusion coefficients. Under some additional assumptions, the rate of convergence is shown. The results are valid without restrictions on the time horizon of the control problem, in contrast to iterative methods based on the theory of forward-backward stochastic differential equations.
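To make the structure of the (unmodified) MSA concrete, below is a minimal sketch for a scalar linear-quadratic test problem. The Euler-Maruyama discretization, the crude pathwise treatment of the adjoint BSDE, and all coefficient values are illustrative assumptions; this is not the authors' modified algorithm.

```python
# Hedged sketch of the classical MSA loop for dX = (a X + b u) dt + sigma dW with
# cost E[ 0.5*int (X^2 + u^2) dt + 0.5*X_T^2 ], discretized by Euler-Maruyama.
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma = -0.5, 1.0, 0.3      # model coefficients (illustrative assumptions)
T, N, M = 1.0, 100, 2000          # horizon, time steps, Monte Carlo paths
dt = T / N

u = np.zeros((N, M))              # current control values on the time grid, per path

for it in range(20):              # successive approximations
    # 1) forward sweep: simulate the state under the current control
    X = np.zeros((N + 1, M))
    X[0] = 1.0
    dW = rng.normal(0.0, np.sqrt(dt), size=(N, M))
    for k in range(N):
        X[k + 1] = X[k] + (a * X[k] + b * u[k]) * dt + sigma * dW[k]
    # 2) backward sweep: co-state from Pontryagin's principle, dp = -(a p + X) dt, p_T = X_T
    #    (crude pathwise approximation of the adjoint BSDE; a real solver would take
    #    conditional expectations, e.g. by least-squares regression)
    p = np.zeros((N + 1, M))
    p[N] = X[N]
    for k in range(N - 1, -1, -1):
        p[k] = p[k + 1] + (a * p[k + 1] + X[k + 1]) * dt
    # 3) control update: pointwise minimization of the Hamiltonian
    #    H = p (a x + b u) + 0.5 (x^2 + u^2)  =>  argmin_u H = -b p
    u = -b * p[:N]

cost = np.mean(0.5 * np.sum((X[:N] ** 2 + u ** 2) * dt, axis=0) + 0.5 * X[N] ** 2)
print(f"approximate cost after plain MSA iterations: {cost:.4f}")
```

Each pass performs a forward sweep under the current control, a backward sweep for the co-state, and a pointwise Hamiltonian minimization; the paper's modification changes how the control update is performed so that convergence can be guaranteed.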
31

Bychkov, Evgeniy, Georgy Sviridyuk, and Alexey Bogomolov. "Optimal control for solutions to Sobolev stochastic equations." Electronic Journal of Differential Equations 2021, no. 01-104 (June 9, 2021): 51. http://dx.doi.org/10.58997/ejde.2021.51.

Abstract:
This article concerns the optimal control problem for internal gravitational waves in a model with additive "white noise". The mathematical model is based on the stochastic Sobolev equation, Dirichlet boundary conditions, and a Cauchy initial condition. The inhomogeneity describes random heterogeneities of the medium and fluctuations. By white noise we mean the Nelson-Gliklikh derivative of the Wiener process. The study was carried out within the framework of the theory of relatively bounded operators, the theory of Sobolev-type stochastic equations of higher order, and the theory of (semi)groups of operators. We show the existence and uniqueness of a strong solution, and obtain sufficient conditions for the existence of an optimal control of such solutions. The theorem on the existence and uniqueness of the optimal control is based on the works of J.-L. Lions. For more information see https://ejde.math.txstate.edu/Volumes/2021/51/abstr.html
32

Lin, Xue, Lixia Sun, Ping Ju, and Hongyu Li. "Stochastic Control for Intra-Region Probability Maximization of Multi-Machine Power Systems Based on the Quasi-Generalized Hamiltonian Theory." Energies 13, no. 1 (December 30, 2019): 167. http://dx.doi.org/10.3390/en13010167.

Abstract:
With the penetration of renewable generation, electric vehicles and other random factors in power systems, stochastic disturbances are increasing significantly and must be handled to guarantee the security of systems. A novel stochastic optimal control strategy is proposed in this paper to reduce the impact of such stochastic continuous disturbances on power systems. The proposed method is effective in solving the problems caused by stochastic continuous disturbances and has two significant advantages. First, a simplified and effective solution is proposed to analyze the system influenced by the stochastic disturbances. Second, a novel optimal control strategy is proposed to effectively reduce the impact of stochastic continuous disturbances. To be specific, a novel excitation-controlled power system model with stochastic disturbances is built in the quasi-generalized Hamiltonian form, which is further simplified into a lower-dimensional model through the stochastic averaging method. Based on this Itô equation, a novel optimal control strategy to achieve intra-region probability maximization is established for power systems by using the dynamic programming method. Finally, the intra-region probability increases in the controlled systems, which confirms the effectiveness of the proposed control strategy. The proposed control method has advantages in controlling the fluctuation of system state variables within a desired region under the influence of stochastic disturbances, which means improving the security of stochastic systems. With more stochasticity in the future, the proposed control method based on stochastic theory will provide a novel way to relieve the impact of stochastic disturbances.
33

Ahmed, Hamdy M., Mahmoud M. El-Borai, Wagdy G. El-Sayed, and Alaa Y. Elbadrawi. "Fractional Stochastic Evolution Inclusions with Control on the Boundary." Symmetry 15, no. 4 (April 17, 2023): 928. http://dx.doi.org/10.3390/sym15040928.

Abstract:
Symmetry in systems arises as a result of natural design and provides a pivotal mechanism for crucial system properties. In the field of control theory, scattered research has been carried out concerning the control of group-theoretic symmetric systems. In this manuscript, the principles of stochastic analysis, the fixed-point theorem, fractional calculus, and multivalued map theory are implemented to investigate the null boundary controllability (NBC) of stochastic evolution inclusion (SEI) with the Hilfer fractional derivative (HFD) and the Clarke subdifferential. Moreover, an example is depicted to show the effect of the obtained results.
34

Xiao, Jian Wu, Ming Jun Jiang, and Hong Zhai. "Comparatively Study Renting and Purchasing Carbon Subsidies Schemes on Portfolio and Stochastic Control Theory." Advanced Materials Research 869-870 (December 2013): 1029–33. http://dx.doi.org/10.4028/www.scientific.net/amr.869-870.1029.

Abstract:
Applying portfolio and stochastic control theory, the paper comparatively considers two carbon subsidy schemes, renting and purchasing, and focuses on the rental scheme to present a stochastic control model and obtain an analytical optimal dynamic strategy for the harvesting quantity under stochastic commodity prices and timber growth. Through contrasts, the conclusion shows that the government will pay less under the rental scheme than under the purchasing scheme for the same negative effect on harvesting quantity.
35

Ilie, Silvana, and Monjur Morshed. "Adaptive Time-Stepping Using Control Theory for the Chemical Langevin Equation." Journal of Applied Mathematics 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/567275.

Abstract:
Stochastic modeling of biochemical systems has been the subject of intense research in recent years due to the large number of important applications of these systems. A critical stochastic model of well-stirred biochemical systems in the regime of relatively large molecular numbers, far from the thermodynamic limit, is the chemical Langevin equation. This model is represented as a system of stochastic differential equations, with multiplicative and noncommutative noise. Often biochemical systems in applications evolve on multiple time-scales; examples include slow transcription and fast dimerization reactions. The existence of multiple time-scales leads to mathematical stiffness, which is a major challenge for the numerical simulation. Consequently, there is a demand for efficient and accurate numerical methods to approximate the solution of these models. In this paper, we design an adaptive time-stepping method, based on control theory, for the numerical solution of the chemical Langevin equation. The underlying approximation method is the Milstein scheme. The adaptive strategy is tested on several models of interest and is shown to have improved efficiency and accuracy compared with the existing variable and constant-step methods.
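As a rough illustration of the idea (not the authors' method, which targets the multidimensional chemical Langevin equation with noncommutative noise), the following sketch applies a Milstein step to a scalar SDE with multiplicative noise and adapts the step size from a step-doubling error estimate; all tolerances and coefficients are assumed for the example.

```python
# Hedged sketch: adaptive time-stepping for the scalar SDE dX = mu*X dt + sigma*X dW,
# using the Milstein scheme and a step-doubling local error estimate to adjust the step
# size. Illustrative toy (always-accept controller), not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.5, 0.4          # assumed drift/diffusion coefficients
T, tol = 1.0, 1e-3            # final time and local error tolerance

def milstein_step(x, h, dw):
    # Milstein scheme: Euler step plus the 0.5*g*g'*(dW^2 - h) correction, here g = sigma*x
    return x + mu * x * h + sigma * x * dw + 0.5 * sigma**2 * x * (dw**2 - h)

t, x, h = 0.0, 1.0, 1e-2
while t < T:
    h_step = min(h, T - t)
    dw1 = rng.normal(0.0, np.sqrt(h_step / 2))
    dw2 = rng.normal(0.0, np.sqrt(h_step / 2))
    # one coarse step with dW = dw1 + dw2 versus two fine half-steps on the same Brownian path
    x_coarse = milstein_step(x, h_step, dw1 + dw2)
    x_fine = milstein_step(milstein_step(x, h_step / 2, dw1), h_step / 2, dw2)
    err = abs(x_fine - x_coarse)
    x, t = x_fine, t + h_step          # always accept the fine solution (toy choice)
    # control-theoretic flavour: next step size driven by the error signal, with safety limits
    h = h_step * min(2.0, max(0.5, 0.9 * (tol / max(err, 1e-14)) ** 0.5))

print(f"X(T) ~ {x:.4f} on one sample path after adaptive Milstein integration")
```

A production scheme of the kind discussed in the paper would also reject oversized steps (splitting the Brownian increment consistently) and tune the controller exponents and safety factors; the sketch only shows the feedback structure.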
36

Li, Dongping, and Yankai Li. "Antidisturbance Control for Helicopter Stochastic Systems." Mathematical Problems in Engineering 2021 (November 16, 2021): 1–28. http://dx.doi.org/10.1155/2021/6980715.

Abstract:
In this paper, an antidisturbance controller is presented for helicopter stochastic systems under disturbances. To enhance the antidisturbance abilities, the nonlinear disturbance observer method is applied to reject the time-varying disturbances. The antidisturbance nonlinear controller is then designed by combining the backstepping control scheme, and stochastic theory is used to guarantee that the closed-loop system is asymptotically bounded in mean square. The proposed control method is contrasted with some traditional nonlinear control techniques, which still exhibit common issues such as "dimension explosion". The result of this paper can be regarded as a representative case of the nonlinear control method that can help promote the development of more advanced methods.
37

Altman, Eitan, and Arie Hordijk. "Applications of Borovkov's Renovation Theory to Non-Stationary Stochastic Recursive Sequences and Their Control." Advances in Applied Probability 29, no. 2 (June 1997): 388–413. http://dx.doi.org/10.2307/1428009.

Abstract:
We investigate in this paper the stability of non-stationary stochastic processes, arising typically in applications of control. The setting is known as stochastic recursive sequences, which allows us to construct on one probability space stochastic processes that correspond to different initial states and even different control policies. It does not require any Markovian assumptions. A natural criterion for stability for such processes is that the influence of the initial state disappears after some finite time; in other words, starting from different initial states, the process will couple after some finite time to the same limiting (not necessarily stationary nor ergodic) stochastic process. We investigate this as well as other types of coupling, and present conditions for them to occur uniformly in some class of control policies. We then use the coupling results to establish new theoretical aspects in the theory of non-Markovian control.
38

Altman, Eitan, and Arie Hordijk. "Applications of Borovkov's Renovation Theory to Non-Stationary Stochastic Recursive Sequences and Their Control." Advances in Applied Probability 29, no. 02 (June 1997): 388–413. http://dx.doi.org/10.1017/s0001867800028056.

39

Chen, Guici, and Yi Shen. "Robust Reliable H∞ Control for Nonlinear Stochastic Markovian Jump Systems." Mathematical Problems in Engineering 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/431576.

Abstract:
The robust reliable H∞ control problem for a class of nonlinear stochastic Markovian jump systems (NSMJSs) is investigated. The system under consideration includes Itô-type stochastic disturbance, Markovian jumps, as well as sector-bounded nonlinearities and norm-bounded stochastic nonlinearities. Our aim is to design a controller such that, for possible actuator failures, the closed-loop stochastic Markovian jump system is exponentially mean-square stable with convergence rate α and disturbance attenuation γ. Based on Lyapunov stability theory and the Itô differential rule, together with LMI techniques, a sufficient condition for stochastic systems is first established in Lemma 3. Then, using the lemma, sufficient conditions for the solvability of the robust reliable H∞ controller for linear SMJSs and NSMJSs are given. Finally, a numerical example is exploited to show the usefulness of the derived results.
40

Lü, Qi, and Xu Zhang. "Control theory for stochastic distributed parameter systems, an engineering perspective." Annual Reviews in Control 51 (2021): 268–330. http://dx.doi.org/10.1016/j.arcontrol.2021.04.002.

41

Chen, Han-Fu. "Theory of System Identification and Adaptive Control for Stochastic Systems." IFAC Proceedings Volumes 21, no. 9 (August 1988): 51–61. http://dx.doi.org/10.1016/s1474-6670(17)54703-5.

42

Burrage, P. M., R. Herdiana, and K. Burrage. "Adaptive stepsize based on control theory for stochastic differential equations." Journal of Computational and Applied Mathematics 170, no. 2 (September 2004): 317–36. http://dx.doi.org/10.1016/j.cam.2004.01.027.

43

Scruggs, J. T. "An optimal stochastic control theory for distributed energy harvesting networks." Journal of Sound and Vibration 320, no. 4-5 (March 2009): 707–25. http://dx.doi.org/10.1016/j.jsv.2008.09.001.

44

Neck, Reinhard. "EJOR special issue on stochastic control theory and operational research." European Journal of Operational Research 73, no. 2 (March 1994): 205–8. http://dx.doi.org/10.1016/0377-2217(94)90259-3.

45

Charalambous, Charalambos D., Christos K. Kourtellaris, and Ioannis Tzortzis. "Information Transfer of Control Strategies: Dualities of Stochastic Optimal Control Theory and Feedback Capacity of Information Theory." IEEE Transactions on Automatic Control 62, no. 10 (October 2017): 5010–25. http://dx.doi.org/10.1109/tac.2017.2690147.

46

Okabayashi, Takatoshi, Toshiaki Kaga, Toru Yoshimura, and Shinya Oguchi. "Highway Bridge Vibration Control Under a Moving Vehicle by the Stochastic Control Theory." Doboku Gakkai Ronbunshu, no. 591 (1998): 339–49. http://dx.doi.org/10.2208/jscej.1998.591_339.

47

Wang, Xiaoli. "Research on Pension Risk Control and Sustainable Development Based on Stochastic Control Theory." Journal of Computational and Theoretical Nanoscience 13, no. 12 (December 1, 2016): 9937–41. http://dx.doi.org/10.1166/jctn.2016.6090.

48

Goodwin, G. C. "Stochastic Optimal Control Theory with Application in Self-Tuning Control (K. J. Hunt)." SIAM Review 33, no. 1 (March 1991): 142–44. http://dx.doi.org/10.1137/1033032.

49

Ma, Lifeng, Zidong Wang, Hongli Dong, and Guoliang Wei. "Variance-Constrained Multiobjective Control and Filtering for Nonlinear Stochastic Systems: A Survey." Abstract and Applied Analysis 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/724018.

Abstract:
The multiobjective control and filtering problems for nonlinear stochastic systems with variance constraints are surveyed. First, the concepts of nonlinear stochastic systems are recalled along with the introduction of some recent advances. Then, the covariance control theory, which serves as a practical method for multi-objective control design as well as a foundation for linear system theory, is reviewed comprehensively. The multiple design requirements frequently applied in engineering practice for the use of evaluating system performances are introduced, including robustness, reliability, and dissipativity. Several design techniques suitable for the multi-objective variance-constrained control and filtering problems for nonlinear stochastic systems are discussed. In particular, as a special case for the multi-objective design problems, the mixed H2/H∞ control and filtering problems are reviewed in great detail. Subsequently, some latest results on the variance-constrained multi-objective control and filtering problems for the nonlinear stochastic systems are summarized. Finally, conclusions are drawn, and several possible future research directions are pointed out.
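Since covariance control theory is the backbone of the survey, a minimal sketch of the underlying computation may help: for a linear Itô system with a candidate state-feedback gain, the steady-state state covariance solves a Lyapunov equation, and a variance constraint is simply a bound on its diagonal. The matrices, gain, and bounds below are illustrative assumptions, not taken from the survey.

```python
# Hedged illustration (generic covariance-control check, not the survey's own algorithm):
# for dx = (A x + B u) dt + D dw with state feedback u = K x, the steady-state state
# covariance P of the stable closed loop solves (A+BK) P + P (A+BK)^T + D D^T = 0.
# Variance-constrained design then asks that diag(P) stay below prescribed bounds.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # illustrative plant
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [0.4]])               # noise input matrix
K = np.array([[-1.0, -1.5]])               # some stabilizing gain (assumed)

Acl = A + B @ K
P = solve_continuous_lyapunov(Acl, -D @ D.T)   # solves Acl P + P Acl^T = -D D^T
print("steady-state state variances:", np.diag(P))
print("variance constraint diag(P) <= [0.05, 0.10] satisfied:",
      np.all(np.diag(P) <= np.array([0.05, 0.10])))
```

Covariance control proper works in the opposite direction, searching for a gain K that assigns a prescribed admissible P; the sketch only shows the analysis step that such a design must satisfy.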
50

OVCHINNIKOV, SERGEI, and ALEXANDER DUKHOVNY. "ADVANCES IN MEDIA THEORY." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 08, no. 01 (February 2000): 45–71. http://dx.doi.org/10.1142/s0218488500000058.

Abstract:
Media theory is a new branch of mathematical social and behavioral sciences with applications ranging from combinatorics to political sciences. Because of the generality of the concept of a medium and the natural character of its defining axioms, we believe the theory has a potential to become a new methodology for the management of imprecise, uncertain or incomplete information. In this paper, we review the basic concepts of media theory and present some advances in its combinatorial and stochastic parts. We prove that any medium is isomorphic to a submedium of a complete oriented medium and characterize these submedia. We also present some results concerning the stochastic evolution of a complete oriented medium and some of its submedia.