
Journal articles on the topic "Structured continuous time Markov decision processes"

Consult the top 50 journal articles for your research on the topic "Structured continuous time Markov decision processes".


You can also download the full text of the scholarly publication in PDF format and consult its online abstract when this information is included in the metadata.

Browse journal articles on a wide range of disciplines and organize your bibliography correctly.

1

Shelton, C. R., and G. Ciardo. "Tutorial on Structured Continuous-Time Markov Processes." Journal of Artificial Intelligence Research 51 (December 23, 2014): 725–78. http://dx.doi.org/10.1613/jair.4415.

Full text
Abstract:
A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing "flat" or unstructured Markov processes and …
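As a concrete illustration of the "flat" (unstructured) CTMP described in this abstract, here is a minimal simulation sketch. The 3-state generator matrix Q, its rates, and the helper name simulate_ctmp are all invented for the example:

```python
import random

# Hypothetical 3-state generator (rate) matrix: off-diagonal entries are
# transition rates, and each row sums to zero.
Q = [[-3.0, 2.0, 1.0],
     [1.0, -1.5, 0.5],
     [0.5, 0.5, -1.0]]

def simulate_ctmp(Q, state, horizon, rng=random.Random(0)):
    """Sample one trajectory of a flat CTMP on [0, horizon)."""
    t, path = 0.0, [(0.0, state)]
    while True:
        exit_rate = -Q[state][state]         # total rate of leaving `state`
        t += rng.expovariate(exit_rate)      # exponential (memoryless) holding time
        if t >= horizon:
            return path
        # Pick the next state with probability proportional to its rate.
        r, acc = rng.random() * exit_rate, 0.0
        for j, rate in enumerate(Q[state]):
            if j != state:
                acc += rate
                if r <= acc:
                    state = j
                    break
        path.append((t, state))

trajectory = simulate_ctmp(Q, state=0, horizon=10.0)
```

The Markov property holds because the exponential holding time is memoryless; the structured representations surveyed in the tutorial factor Q instead of enumerating it state by state.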
2

D'Amico, Guglielmo, Jacques Janssen, and Raimondo Manca. "Monounireducible Nonhomogeneous Continuous Time Semi-Markov Processes Applied to Rating Migration Models." Advances in Decision Sciences 2012 (October 16, 2012): 1–12. http://dx.doi.org/10.1155/2012/123635.

Full text
Abstract:
Monounireducible nonhomogeneous semi-Markov processes are defined and investigated. The monounireducible topological structure is a sufficient condition that guarantees the absorption of the semi-Markov process in a state of the process. This situation is of fundamental importance in the modelling of credit rating migrations because it permits the derivation of the distribution function of the time of default. An application in credit rating modelling is given in order to illustrate the results.
3

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 3 (1987): 644–56. http://dx.doi.org/10.2307/3214096.

Full text
Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies …
4

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 3 (1987): 644–56. http://dx.doi.org/10.1017/s0021900200031375.

Full text
Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies …
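The uniformization construction discussed in entries 3 and 4 can be sketched for the simple (non-randomized) case: choose a constant C at least as large as every exit rate and set P_a = I + Q_a / C, turning each action's generator into a stochastic matrix whose self-loops are the "virtual jumps" mentioned in the abstract. The two-state, two-action CTMDP below is entirely hypothetical:

```python
# Hypothetical two-state, two-action CTMDP: Q[a][i][j] are transition rates.
Q = {
    "fast": [[-4.0, 4.0], [2.0, -2.0]],
    "slow": [[-1.0, 1.0], [0.5, -0.5]],
}

# Uniformization constant: any C >= the largest exit rate over states/actions.
C = max(-Q[a][i][i] for a in Q for i in range(2))

def uniformize(Qa, C):
    """P = I + Q/C: a stochastic matrix; self-loops model 'virtual jumps'."""
    n = len(Qa)
    return [[(1.0 if i == j else 0.0) + Qa[i][j] / C for j in range(n)]
            for i in range(n)]

P = {a: uniformize(Qa, C) for a, Qa in Q.items()}
# Each row of every P[a] sums to 1, so each P[a] is a valid DTMC kernel.
```

For simple policies the embedded chain reproduces the original average rewards; the point of these two papers is precisely that this equivalence fails for stationary randomized policies without further care.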
5

Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs." Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.

Full text
Abstract:
Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in decentralized settings, but are difficult to solve optimally (NEXP-Complete). As a new way of solving these problems, we introduce the idea of transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This approach makes use of the fact that planning can be accomplished in a centralized offline manner, while execution can still be decentralized. This new Dec-POMDP formulation, which we call an occu…
6

Chornei, Ruslan. "Local Control in Gordon-Newell Networks." NaUKMA Research Papers. Computer Science 7 (May 12, 2025): 120–29. https://doi.org/10.18523/2617-3808.2024.7.120-129.

Full text
Abstract:
We examine continuous-time stochastic processes with a general compact state space, which is organized by a fundamental graph defining a neighborhood structure of states. These neighborhoods establish local interactions among the coordinates of the spatial process. At any given moment, the random state of the system, as described by the stochastic process, forms a random field concerning the neighborhood graph. The process is assumed to have a semi-Markov temporal property, and its transition kernels exhibit a spatial Markov property relative to the basic graph. Additionally, a local control st…
7

Pazis, Jason, and Ronald Parr. "Sample Complexity and Performance Bounds for Non-Parametric Approximate Linear Programming." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 782–88. http://dx.doi.org/10.1609/aaai.v27i1.8696.

Full text
Abstract:
One of the most difficult tasks in value function approximation for Markov Decision Processes is finding an approximation architecture that is expressive enough to capture the important structure in the value function, while at the same time not overfitting the training samples. Recent results in non-parametric approximate linear programming (NP-ALP) have demonstrated that this can be done effectively using nothing more than a smoothness assumption on the value function. In this paper we extend these results to the case where samples come from real world transitions instead of the full Bellman …
8

Abid, Amira, Fathi Abid, and Bilel Kaffel. "CDS-based implied probability of default estimation." Journal of Risk Finance 21, no. 4 (2020): 399–422. http://dx.doi.org/10.1108/jrf-05-2019-0079.

Full text
Abstract:
Purpose: This study aims to shed more light on the relationship between probability of default, investment horizons and rating classes to make decision-making processes more efficient.
Design/methodology/approach: Based on credit default swaps (CDS) spreads, a methodology is implemented to determine the implied default probability and the implied rating, and then to estimate the term structure of the market-implied default probability and the transition matrix of implied rating. The term structure estimation in discrete time is conducted with the Nelson and Siegel model and in continuous time wi…
9

Mironov, Aleksey, Anna Mironova, and Vyacheslav Burlov. "Mathematical modeling of preemptive management by the stages complex of administrative production." Applied Mathematics and Control Sciences, no. 4 (December 12, 2022): 174–97. http://dx.doi.org/10.15593/2499-9873/2022.4.10.

Full text
Abstract:
In line with the fundamental reform of Russian administrative legislation, this article discusses the synthesis of a geoinformation system for preventive management of the full cycle of proceedings in administrative offense cases. Following the Anokhin-Sudakov theory of functional systems, it addresses the tasks of forming a structural image and synthesizing a mathematical model to manage the stages of the administrative process, as well as the tasks of substantiating the mathematical criterion and its structural and functional implementation to prevent violations of a reasonable time in product…
10

Puterman, Martin L., and F. A. Van der Duyn Schouten. "Markov Decision Processes With Continuous Time Parameter." Journal of the American Statistical Association 80, no. 390 (1985): 491. http://dx.doi.org/10.2307/2287942.

Full text
11

Mahmoud, Marwan, and Sami Ben Slama. "Peer-to-Peer Energy Trading Case Study Using an AI-Powered Community Energy Management System." Applied Sciences 13, no. 13 (2023): 7838. http://dx.doi.org/10.3390/app13137838.

Full text
Abstract:
The Internet of Energy (IoE) is a topic that industry and academics find intriguing and promising, since it can aid in developing technology for smart cities. This study suggests an innovative energy system with peer-to-peer trading and more sophisticated residential energy storage system management. It proposes a smart residential community strategy that includes household customers and nearby energy storage installations. Without constructing new energy-producing facilities, users can consume affordable renewable energy by exchanging energy with the community energy pool. The community energy …
12

Fu, Yaqing. "Variance Optimization for Continuous-Time Markov Decision Processes." Open Journal of Statistics 9, no. 2 (2019): 181–95. http://dx.doi.org/10.4236/ojs.2019.92014.

Full text
13

Guo, Xianping, and Yi Zhang. "Constrained total undiscounted continuous-time Markov decision processes." Bernoulli 23, no. 3 (2017): 1694–736. http://dx.doi.org/10.3150/15-bej793.

Full text
14

Zhang, Yi. "Continuous-Time Markov Decision Processes with Exponential Utility." SIAM Journal on Control and Optimization 55, no. 4 (2017): 2636–60. http://dx.doi.org/10.1137/16m1086261.

Full text
15

Dufour, François, and Alexei B. Piunovskiy. "Impulsive Control for Continuous-Time Markov Decision Processes." Advances in Applied Probability 47, no. 1 (2015): 106–27. http://dx.doi.org/10.1239/aap/1427814583.

Full text
Abstract:
In this paper our objective is to study continuous-time Markov decision processes on a general Borel state space with both impulsive and continuous controls for the infinite time horizon discounted cost. The continuous-time controlled process is shown to be nonexplosive under appropriate hypotheses. The so-called Bellman equation associated to this control problem is studied. Sufficient conditions ensuring the existence and the uniqueness of a bounded measurable solution to this optimality equation are provided. Moreover, it is shown that the value function of the optimization problem under co…
16

Dufour, François, and Alexei B. Piunovskiy. "Impulsive Control for Continuous-Time Markov Decision Processes." Advances in Applied Probability 47, no. 1 (2015): 106–27. http://dx.doi.org/10.1017/s0001867800007722.

Full text
Abstract:
In this paper our objective is to study continuous-time Markov decision processes on a general Borel state space with both impulsive and continuous controls for the infinite time horizon discounted cost. The continuous-time controlled process is shown to be nonexplosive under appropriate hypotheses. The so-called Bellman equation associated to this control problem is studied. Sufficient conditions ensuring the existence and the uniqueness of a bounded measurable solution to this optimality equation are provided. Moreover, it is shown that the value function of the optimization problem under co…
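Several of the discounted-criterion entries in this list (e.g. 15 and 16) study the Bellman equation for continuous-time discounted costs. A common computational route, shown here only as a toy sketch, is to uniformize first and then run value iteration with the effective discount factor gamma = C/(C + alpha); all matrices, cost rates, and the discount rate alpha below are invented:

```python
# Toy discounted CTMDP, assumed already uniformized with constant C:
# P[a] are stochastic matrices, c[a][i] are cost rates, alpha > 0 is the
# continuous-time discount rate. All numbers are illustrative.
C, alpha = 4.0, 0.1
gamma = C / (C + alpha)                       # effective one-step discount
P = {"fast": [[0.0, 1.0], [0.5, 0.5]],
     "slow": [[0.75, 0.25], [0.125, 0.875]]}
c = {"fast": [2.0, 1.5], "slow": [0.5, 1.0]}

V = [0.0, 0.0]
for _ in range(2000):                         # value iteration on the Bellman eq.
    V = [min(c[a][i] / (C + alpha)
             + gamma * sum(P[a][i][j] * V[j] for j in range(2))
             for a in P)
         for i in range(2)]
# V now approximates the optimal expected discounted cost from each state.
```

Since gamma < 1, the iteration is a contraction and converges to the unique bounded fixed point, mirroring (in this finite toy setting) the existence and uniqueness results the abstracts above establish for Borel state spaces.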
17

Piunovskiy, Alexey. "Realizable Strategies in Continuous-Time Markov Decision Processes." SIAM Journal on Control and Optimization 56, no. 1 (2018): 473–95. http://dx.doi.org/10.1137/17m1138959.

Full text
18

Hu, Qiying. "Continuous time shock Markov decision processes with discounted criterion." Optimization 25, no. 2-3 (1992): 271–83. http://dx.doi.org/10.1080/02331939208843824.

Full text
19

Wei, Qingda. "Mean–semivariance optimality for continuous-time Markov decision processes." Systems & Control Letters 125 (March 2019): 67–74. http://dx.doi.org/10.1016/j.sysconle.2019.02.001.

Full text
20

Guo, Xianping, XinYuan Song, and Junyu Zhang. "Bias optimality for multichain continuous-time Markov decision processes." Operations Research Letters 37, no. 5 (2009): 317–21. http://dx.doi.org/10.1016/j.orl.2009.04.005.

Full text
21

Zhang, Lanlan, and Xianping Guo. "Constrained continuous-time Markov decision processes with average criteria." Mathematical Methods of Operations Research 67, no. 2 (2007): 323–40. http://dx.doi.org/10.1007/s00186-007-0154-0.

Full text
22

Hu, Q. Y. "Nonstationary Continuous Time Markov Decision Processes with Discounted Criterion." Journal of Mathematical Analysis and Applications 180, no. 1 (1993): 60–70. http://dx.doi.org/10.1006/jmaa.1993.1382.

Full text
23

Hu, Qiying. "Continuous Time Markov Decision Processes with Discounted Moment Criterion." Journal of Mathematical Analysis and Applications 203, no. 1 (1996): 1–12. http://dx.doi.org/10.1006/jmaa.1996.9999.

Full text
24

Piunovskiy, Alexey, and Yi Zhang. "The Transformation Method for Continuous-Time Markov Decision Processes." Journal of Optimization Theory and Applications 154, no. 2 (2012): 691–712. http://dx.doi.org/10.1007/s10957-012-0015-8.

Full text
25

Bartocci, Ezio, Luca Bortolussi, Tomáš Brázdil, Dimitrios Milios, and Guido Sanguinetti. "Policy learning in continuous-time Markov decision processes using Gaussian Processes." Performance Evaluation 116 (November 2017): 84–100. http://dx.doi.org/10.1016/j.peva.2017.08.007.

Full text
26

Guo, Xianping. "Constrained Optimization for Average Cost Continuous-Time Markov Decision Processes." IEEE Transactions on Automatic Control 52, no. 6 (2007): 1139–43. http://dx.doi.org/10.1109/tac.2007.899040.

Full text
27

Guo, Xianping, and Xinyuan Song. "Mean-Variance Criteria for Finite Continuous-Time Markov Decision Processes." IEEE Transactions on Automatic Control 54, no. 9 (2009): 2151–57. http://dx.doi.org/10.1109/tac.2009.2023833.

Full text
28

Piunovskiy, Alexey. "Randomized and Relaxed Strategies in Continuous-Time Markov Decision Processes." SIAM Journal on Control and Optimization 53, no. 6 (2015): 3503–33. http://dx.doi.org/10.1137/15m1014012.

Full text
29

Piunovskiy, A. B. "Discounted Continuous Time Markov Decision Processes: The Convex Analytic Approach." IFAC Proceedings Volumes 38, no. 1 (2005): 31–36. http://dx.doi.org/10.3182/20050703-6-cz-1902.00357.

Full text
30

Guo, Xianping, Mantas Vykertas, and Yi Zhang. "Absorbing Continuous-Time Markov Decision Processes with Total Cost Criteria." Advances in Applied Probability 45, no. 2 (2013): 490–519. http://dx.doi.org/10.1239/aap/1370870127.

Full text
Abstract:
In this paper we study absorbing continuous-time Markov decision processes in Polish state spaces with unbounded transition and cost rates, and history-dependent policies. The performance measure is the expected total undiscounted costs. For the unconstrained problem, we show the existence of a deterministic stationary optimal policy, whereas, for the constrained problems with N constraints, we show the existence of a mixed stationary optimal policy, where the mixture is over no more than N+1 deterministic stationary policies. Furthermore, the strong duality result is obtained for the associated …
31

Guo, Xianping, Mantas Vykertas, and Yi Zhang. "Absorbing Continuous-Time Markov Decision Processes with Total Cost Criteria." Advances in Applied Probability 45, no. 2 (2013): 490–519. http://dx.doi.org/10.1017/s0001867800006418.

Full text
Abstract:
In this paper we study absorbing continuous-time Markov decision processes in Polish state spaces with unbounded transition and cost rates, and history-dependent policies. The performance measure is the expected total undiscounted costs. For the unconstrained problem, we show the existence of a deterministic stationary optimal policy, whereas, for the constrained problems with N constraints, we show the existence of a mixed stationary optimal policy, where the mixture is over no more than N+1 deterministic stationary policies. Furthermore, the strong duality result is obtained for the associated …
32

Anselmi, Jonatha, François Dufour, and Tomás Prieto-Rumeau. "Computable approximations for average Markov decision processes in continuous time." Journal of Applied Probability 55, no. 2 (2018): 571–92. http://dx.doi.org/10.1017/jpr.2018.36.

Full text
Abstract:
In this paper we study the numerical approximation of the optimal long-run average cost of a continuous-time Markov decision process, with Borel state and action spaces, and with bounded transition and reward rates. Our approach uses a suitable discretization of the state and action spaces to approximate the original control model. The approximation error for the optimal average reward is then bounded by a linear combination of coefficients related to the discretization of the state and action spaces, namely, the Wasserstein distance between an underlying probability measure μ and a m…
33

Guo, Xianping, Yonghui Huang, and Yi Zhang. "Constrained Continuous-Time Markov Decision Processes on the Finite Horizon." Applied Mathematics & Optimization 75, no. 2 (2016): 317–41. http://dx.doi.org/10.1007/s00245-016-9352-6.

Full text
34

Ye, Liuer, and Xianping Guo. "Continuous-Time Markov Decision Processes with State-Dependent Discount Factors." Acta Applicandae Mathematicae 121, no. 1 (2012): 5–27. http://dx.doi.org/10.1007/s10440-012-9669-3.

Full text
35

Guo, Xianping, and Xinyuan Song. "Discounted continuous-time constrained Markov decision processes in Polish spaces." Annals of Applied Probability 21, no. 5 (2011): 2016–49. http://dx.doi.org/10.1214/10-aap749.

Full text
36

Zhu, Quan-xin. "Variance minimization for continuous-time Markov decision processes: two approaches." Applied Mathematics-A Journal of Chinese Universities 25, no. 4 (2010): 400–410. http://dx.doi.org/10.1007/s11766-010-2428-1.

Full text
37

Zhang, Junyu, and Xi-Ren Cao. "Continuous-time Markov decision processes with nth-bias optimality criteria." Automatica 45, no. 7 (2009): 1628–38. http://dx.doi.org/10.1016/j.automatica.2009.03.009.

Full text
38

Wei, Qingda. "Finite approximation for finite-horizon continuous-time Markov decision processes." 4OR 15, no. 1 (2016): 67–84. http://dx.doi.org/10.1007/s10288-016-0321-3.

Full text
39

Guo, Xianping, and Liuer Ye. "New discount and average optimality conditions for continuous-time Markov decision processes." Advances in Applied Probability 42, no. 4 (2010): 953–85. http://dx.doi.org/10.1239/aap/1293113146.

Full text
Abstract:
This paper deals with continuous-time Markov decision processes in Polish spaces, under the discounted and average cost criteria. All underlying Markov processes are determined by given transition rates which are allowed to be unbounded, and the costs are assumed to be bounded below. By introducing an occupation measure of a randomized Markov policy and analyzing properties of occupation measures, we first show that the family of all randomized stationary policies is 'sufficient' within the class of all randomized Markov policies. Then, under the semicontinuity and compactness conditions, we p…
40

Guo, Xianping, and Liuer Ye. "New discount and average optimality conditions for continuous-time Markov decision processes." Advances in Applied Probability 42, no. 4 (2010): 953–85. http://dx.doi.org/10.1017/s000186780000447x.

Full text
Abstract:
This paper deals with continuous-time Markov decision processes in Polish spaces, under the discounted and average cost criteria. All underlying Markov processes are determined by given transition rates which are allowed to be unbounded, and the costs are assumed to be bounded below. By introducing an occupation measure of a randomized Markov policy and analyzing properties of occupation measures, we first show that the family of all randomized stationary policies is 'sufficient' within the class of all randomized Markov policies. Then, under the semicontinuity and compactness conditions, we p…
41

Hu, Q. Y. "Nonstationary Continuous Time Markov Decision Processes in a Semi-Markov Environment with Discounted Criterion." Journal of Mathematical Analysis and Applications 194, no. 3 (1995): 640–59. http://dx.doi.org/10.1006/jmaa.1995.1322.

Full text
42

HUANG, XiangXiang, and LiuEr YE. "A mean-variance optimization problem for continuous-time Markov decision processes." SCIENTIA SINICA Mathematica 44, no. 8 (2014): 883–98. http://dx.doi.org/10.1360/n012013-00117.

Full text
43

Wei, Qingda, and Xian Chen. "Risk-sensitive average continuous-time Markov decision processes with unbounded rates." Optimization 68, no. 4 (2018): 773–800. http://dx.doi.org/10.1080/02331934.2018.1547382.

Full text
44

Gao, Xuefeng, and Xun Yu Zhou. "Logarithmic Regret Bounds for Continuous-Time Average-Reward Markov Decision Processes." SIAM Journal on Control and Optimization 62, no. 5 (2024): 2529–56. http://dx.doi.org/10.1137/23m1584101.

Full text
45

Guo, Xianping, and Lanlan Zhang. "Total reward criteria for unconstrained/constrained continuous-time Markov decision processes." Journal of Systems Science and Complexity 24, no. 3 (2011): 491–505. http://dx.doi.org/10.1007/s11424-011-8004-9.

Full text
46

Zou, Xiaolong, and Yonghui Huang. "Verifiable conditions for average optimality of continuous-time Markov decision processes." Operations Research Letters 44, no. 6 (2016): 742–46. http://dx.doi.org/10.1016/j.orl.2016.09.007.

Full text
47

Huo, Haifeng, Xiaolong Zou, and Xianping Guo. "The risk probability criterion for discounted continuous-time Markov decision processes." Discrete Event Dynamic Systems 27, no. 4 (2017): 675–99. http://dx.doi.org/10.1007/s10626-017-0257-6.

Full text
48

Feinberg, Eugene A. "Continuous Time Discounted Jump Markov Decision Processes: A Discrete-Event Approach." Mathematics of Operations Research 29, no. 3 (2004): 492–524. http://dx.doi.org/10.1287/moor.1040.0089.

Full text
49

Guo, Xianping, and Zhong-Wei Liao. "Risk-Sensitive Discounted Continuous-Time Markov Decision Processes with Unbounded Rates." SIAM Journal on Control and Optimization 57, no. 6 (2019): 3857–83. http://dx.doi.org/10.1137/18m1222016.

Full text
50

Liu, Qiuli, Hangsheng Tan, and Xianping Guo. "Denumerable continuous-time Markov decision processes with multiconstraints on average costs." International Journal of Systems Science 43, no. 3 (2012): 576–85. http://dx.doi.org/10.1080/00207721.2010.517868.

Full text