Scientific literature on the topic "Structured continuous time Markov decision processes"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Structured continuous time Markov decision processes".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Structured continuous time Markov decision processes"

1

Shelton, C. R., and G. Ciardo. "Tutorial on Structured Continuous-Time Markov Processes". Journal of Artificial Intelligence Research 51 (December 23, 2014): 725–78. http://dx.doi.org/10.1613/jair.4415.

Full text
Abstract:
A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing "flat" or unstructured Markov processes and then move to structured Markov processes (those arising from state spaces consisting of assignments to variables) including Kronecker, decision-diagram, and continuous-time Bayesian network representations. We provide the first connection between decision-diagrams and continuous-time Bayesian networks.
APA, Harvard, Vancouver, ISO, and other styles
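The abstract above moves from unstructured ("flat") processes to structured ones; at the flat level, a CTMP is fully specified by a generator matrix of jump rates, and a sample path can be drawn with exponential holding times. A minimal Python sketch (my own illustration with a made-up two-state generator, not code from the paper):

```python
import random

def simulate_ctmc(Q, start, t_end, seed=0):
    """Simulate one sample path of a continuous-time Markov chain with
    generator Q (Q[i][j] is the jump rate i -> j for j != i) up to t_end.
    Returns a list of (jump_time, state) pairs, starting at (0.0, start)."""
    rng = random.Random(seed)
    path, state, t = [(0.0, start)], start, 0.0
    while True:
        exit_rate = -Q[state][state]      # total rate of leaving `state`
        if exit_rate == 0.0:              # absorbing state: path ends here
            return path
        t += rng.expovariate(exit_rate)   # exponential holding time
        if t >= t_end:
            return path
        # pick the next state with probability proportional to its rate
        u, acc = rng.random() * exit_rate, 0.0
        for j, rate in enumerate(Q[state]):
            if j != state:
                acc += rate
                if u < acc:
                    state = j
                    break
        path.append((t, state))

# made-up two-state on/off process: 0 -> 1 at rate 1.0, 1 -> 0 at rate 2.0
Q = [[-1.0, 1.0], [2.0, -2.0]]
path = simulate_ctmc(Q, start=0, t_end=10.0)
```

Each entry records a jump time and the state entered, so the holding time in each state is the gap to the next entry.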
2

D'Amico, Guglielmo, Jacques Janssen, and Raimondo Manca. "Monounireducible Nonhomogeneous Continuous Time Semi-Markov Processes Applied to Rating Migration Models". Advances in Decision Sciences 2012 (October 16, 2012): 1–12. http://dx.doi.org/10.1155/2012/123635.

Full text
Abstract:
Monounireducible nonhomogeneous semi-Markov processes are defined and investigated. The monounireducible topological structure is a sufficient condition that guarantees the absorption of the semi-Markov process in a state of the process. This situation is of fundamental importance in the modelling of credit rating migrations because it permits the derivation of the distribution function of the time of default. An application in credit rating modelling is given in order to illustrate the results.
3

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies". Journal of Applied Probability 24, no. 3 (September 1987): 644–56. http://dx.doi.org/10.2307/3214096.

Full text
Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies also. These results are applied to constrained optimization of SMDP, in which stationary (randomized) policies appear naturally. The structure of optimal constrained SMDP policies can then be elucidated by studying the corresponding controlled Markov chains. Moreover, constrained SMDP optimal policy computations can be more easily implemented in discrete time, the generalized uniformization being employed to relate discrete- and continuous-time optimal constrained policies.
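The uniformization that Beutler and Ross generalize starts from the basic construction: given a generator Q and a rate Λ no smaller than every total exit rate, P = I + Q/Λ is the transition matrix of a discrete-time chain whose self-loops are exactly the "virtual jumps" mentioned in the abstract. A small sketch (my own illustrative example, not the paper's generalized construction):

```python
def uniformize(Q, Lam=None):
    """Uniformize a CTMC generator Q: return (P, Lam), where
    P = I + Q / Lam is a stochastic matrix and Lam is at least the
    largest total exit rate max_i(-Q[i][i])."""
    n = len(Q)
    if Lam is None:
        Lam = max(-Q[i][i] for i in range(n))
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)]
         for i in range(n)]
    return P, Lam

# two-state example: exit rates 1.0 and 2.0, so Lam defaults to 2.0
P, Lam = uniformize([[-1.0, 1.0], [2.0, -2.0]])
```

For this generator, P = [[0.5, 0.5], [1.0, 0.0]]: state 0 now self-loops with probability 0.5, a virtual jump absent from the original process.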
4

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies". Journal of Applied Probability 24, no. 3 (September 1987): 644–56. http://dx.doi.org/10.1017/s0021900200031375.

Full text
Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies also. These results are applied to constrained optimization of SMDP, in which stationary (randomized) policies appear naturally. The structure of optimal constrained SMDP policies can then be elucidated by studying the corresponding controlled Markov chains. Moreover, constrained SMDP optimal policy computations can be more easily implemented in discrete time, the generalized uniformization being employed to relate discrete- and continuous-time optimal constrained policies.
5

Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs". Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.

Full text
Abstract:
Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in decentralized settings, but are difficult to solve optimally (NEXP-Complete). As a new way of solving these problems, we introduce the idea of transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This approach makes use of the fact that planning can be accomplished in a centralized offline manner, while execution can still be decentralized. This new Dec-POMDP formulation, which we call an occupancy MDP, allows powerful POMDP and continuous-state MDP methods to be used for the first time. To provide scalability, we refine this approach by combining heuristic search and compact representations that exploit the structure present in multi-agent domains, without losing the ability to converge to an optimal solution. In particular, we introduce a feature-based heuristic search value iteration (FB-HSVI) algorithm that relies on feature-based compact representations, point-based updates and efficient action selection. A theoretical analysis demonstrates that FB-HSVI terminates in finite time with an optimal solution. We include an extensive empirical analysis using well-known benchmarks, thereby demonstrating that our approach provides significant scalability improvements compared to the state of the art.
6

Pazis, Jason, and Ronald Parr. "Sample Complexity and Performance Bounds for Non-Parametric Approximate Linear Programming". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 782–88. http://dx.doi.org/10.1609/aaai.v27i1.8696.

Full text
Abstract:
One of the most difficult tasks in value function approximation for Markov Decision Processes is finding an approximation architecture that is expressive enough to capture the important structure in the value function, while at the same time not overfitting the training samples. Recent results in non-parametric approximate linear programming (NP-ALP), have demonstrated that this can be done effectively using nothing more than a smoothness assumption on the value function. In this paper we extend these results to the case where samples come from real world transitions instead of the full Bellman equation, adding robustness to noise. In addition, we provide the first max-norm, finite sample performance guarantees for any form of ALP. NP-ALP is amenable to problems with large (multidimensional) or even infinite (continuous) action spaces, and does not require a model to select actions using the resulting approximate solution.
7

Abid, Amira, Fathi Abid, and Bilel Kaffel. "CDS-based implied probability of default estimation". Journal of Risk Finance 21, no. 4 (July 21, 2020): 399–422. http://dx.doi.org/10.1108/jrf-05-2019-0079.

Full text
Abstract:
Purpose: This study aims to shed more light on the relationship between probability of default, investment horizons and rating classes to make decision-making processes more efficient.
Design/methodology/approach: Based on credit default swap (CDS) spreads, a methodology is implemented to determine the implied default probability and the implied rating, and then to estimate the term structure of the market-implied default probability and the transition matrix of implied ratings. The term-structure estimation is conducted in discrete time with the Nelson and Siegel model and in continuous time with the Vasicek model. The transition matrix is assessed using the homogeneous Markov model.
Findings: The results show that the CDS-based implied ratings are lower than those based on the Thomson Reuters approach, which can partially be explained by the fact that real-world probabilities are smaller than those founded on a risk-neutral framework. Moreover, investment- and sub-investment-grade companies exhibit different risk profiles with respect to the investment horizons.
Originality/value: The originality of this study consists in determining the implied rating based on CDS spreads and in detecting the difference between the implied market rating and the Thomson Reuters StarMine rating. The results can be used to analyze credit risk assessments and examine issues related to the Thomson Reuters StarMine credit risk model.
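For orientation only: a common back-of-the-envelope link between a CDS spread and an implied default probability assumes a constant hazard rate equal to spread/(1 − recovery). This simplification and its numbers are my own illustration; the article itself estimates full term structures with the Nelson-Siegel and Vasicek models:

```python
import math

def implied_default_probability(spread, recovery, horizon):
    """Constant-hazard ("credit triangle") approximation:
    hazard = spread / (1 - recovery), and the probability of default
    within `horizon` years is 1 - exp(-hazard * horizon)."""
    hazard = spread / (1.0 - recovery)
    return 1.0 - math.exp(-hazard * horizon)

# made-up inputs: 150 bp CDS spread, 40% recovery, 5-year horizon
p5 = implied_default_probability(0.015, 0.40, 5.0)
```

With these inputs the implied hazard rate is 2.5% per year, giving roughly an 11.8% five-year default probability.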
8

Puterman, Martin L., and F. A. Van der Duyn Schouten. "Markov Decision Processes With Continuous Time Parameter". Journal of the American Statistical Association 80, no. 390 (June 1985): 491. http://dx.doi.org/10.2307/2287942.

Full text
9

Fu, Yaqing. "Variance Optimization for Continuous-Time Markov Decision Processes". Open Journal of Statistics 9, no. 2 (2019): 181–95. http://dx.doi.org/10.4236/ojs.2019.92014.

Full text
10

Guo, Xianping, and Yi Zhang. "Constrained total undiscounted continuous-time Markov decision processes". Bernoulli 23, no. 3 (August 2017): 1694–736. http://dx.doi.org/10.3150/15-bej793.

Full text

Theses on the topic "Structured continuous time Markov decision processes"

1

VILLA, SIMONE. "Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.

Full text
Abstract:
The analysis of the huge amount of financial data made available by electronic markets calls for new models and techniques to effectively extract knowledge to be exploited in an informed decision-making process. The aim of this thesis is to introduce probabilistic graphical models that can be used to reason and to perform actions in such a context. In the first part of this thesis, we present a framework which exploits Bayesian networks to perform portfolio analysis and optimization in a holistic way. It leverages the compact and efficient representation of high-dimensional probability distributions offered by Bayesian networks and their ability to perform evidential reasoning in order to optimize the portfolio according to different economic scenarios. In many cases, we would like to reason about market change over time, i.e. we would like to express queries as probability distributions over time. Continuous time Bayesian networks can be used to address this issue. In the second part of the thesis, we show how it is possible to use this model to tackle real financial problems, and we describe two notable extensions. The first one concerns classification: we introduce an algorithm for learning these classifiers from Big Data and describe their straightforward application to the foreign exchange prediction problem in the high-frequency domain. The second one is related to non-stationary domains, where we explicitly model the presence of statistical dependencies in multivariate time series while allowing them to change over time. In the third part of the thesis, we describe the use of continuous time Bayesian networks within the Markov decision process framework, which provides a model for sequential decision-making under uncertainty. We introduce a method to control continuous time dynamic systems, based on this framework, that relies on additive and context-specific features to scale up to large state spaces. Finally, we show the performance of our method in a simplified but meaningful trading domain.
2

Saha, Subhamay. "Single and Multi-player Stochastic Dynamic Optimization". Thesis, 2013. http://etd.iisc.ernet.in/2005/3357.

Full text
Abstract:
In this thesis we investigate single and multi-player stochastic dynamic optimization problems. We consider both discrete and continuous time processes. In the multi-player setup we investigate zero-sum games with both complete and partial information. We study partially observable stochastic games with average cost criterion and the state process being a discrete time controlled Markov chain. The idea involved in studying this problem is to replace the original unobservable state variable with a suitable completely observable state variable. We establish the existence of the value of the game and also obtain optimal strategies for both players. We also study a continuous time zero-sum stochastic game with complete observation. In this case the state is a pure jump Markov process. We investigate the finite horizon total cost criterion. We characterise the value function via appropriate Isaacs equations. This also yields optimal Markov strategies for both players. In the single player setup we investigate risk-sensitive control of continuous time Markov chains. We consider both finite and infinite horizon problems. For the finite horizon total cost problem and the infinite horizon discounted cost problem we characterise the value function as the unique solution of appropriate Hamilton-Jacobi-Bellman equations. We also derive optimal Markov controls in both cases. For the infinite horizon average cost case we show the existence of an optimal stationary control. We also give a value iteration scheme for computing the optimal control in the case of finite state and action spaces. Further, we introduce a new class of stochastic processes which we call stochastic processes with "age-dependent transition rates". We give a rigorous construction of the process. We prove that under certain assumptions the process is Feller. We also compute the limiting probabilities for our process. We then study the controlled version of the above process. In this case we take the risk-neutral cost criterion. We solve the infinite horizon discounted cost problem and the average cost problem for this process. The crucial step in analysing these problems is to prove that the original control problem is equivalent to an appropriate semi-Markov decision problem. Then the value functions and optimal controls are characterised using this equivalence and the theory of semi-Markov decision processes (SMDP). The analysis of finite horizon problems differs from that of infinite horizon problems because in this case the idea of converting into an equivalent SMDP does not seem to work. So we deal with the finite horizon total cost problem by showing that our problem is equivalent to another appropriately defined discrete time Markov decision problem. This allows us to characterise the value function and to find an optimal Markov control.
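The thesis mentions a value iteration scheme for finite state and action spaces. The sketch below shows generic discounted-reward value iteration for a finite CTMDP via the standard uniformization reduction; the formulation and the toy two-state data are my own illustrative assumptions, not the thesis's risk-sensitive algorithm:

```python
def value_iteration(rates, reward_rates, alpha, Lam, tol=1e-9):
    """Optimal discounted values of a finite CTMDP, obtained by
    uniformizing at rate Lam (>= every total exit rate) and iterating
    the Bellman operator with one-step discount Lam / (Lam + alpha).

    rates[s][a][t]       jump rate from state s to state t under action a
    reward_rates[s][a]   reward earned per unit time in s under action a
    alpha                continuous-time discount rate
    """
    n = len(rates)
    beta = Lam / (Lam + alpha)
    V = [0.0] * n
    while True:
        newV = []
        for s in range(n):
            vals = []
            for a in range(len(rates[s])):
                exit_rate = sum(rates[s][a][t] for t in range(n) if t != s)
                # uniformized transition probabilities out of (s, a)
                q = [rates[s][a][t] / Lam if t != s else 1.0 - exit_rate / Lam
                     for t in range(n)]
                vals.append(reward_rates[s][a] / (Lam + alpha)
                            + beta * sum(q[t] * V[t] for t in range(n)))
            newV.append(max(vals))
        if max(abs(x - y) for x, y in zip(newV, V)) < tol:
            return newV
        V = newV

# toy chain: state 0 pays reward rate 1 and jumps to 1 at rate 1;
# state 1 pays nothing and jumps back at rate 1 (one action per state)
rates = [[[0.0, 1.0]], [[1.0, 0.0]]]
V = value_iteration(rates, [[1.0], [0.0]], alpha=0.1, Lam=1.0)
```

Because the Bellman operator is a beta-contraction, the loop converges geometrically to the unique fixed point.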

Books on the topic "Structured continuous time Markov decision processes"

1

Guo, Xianping, and Onésimo Hernández-Lerma. Continuous-Time Markov Decision Processes. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02547-1.

Full text
2

Piunovskiy, Alexey, and Yi Zhang. Continuous-Time Markov Decision Processes. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9.

Full text
3

Hernandez-Lerma, Onesimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications. Springer, 2010.

Find full text
4

Hernández-Lerma, Onésimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications. Springer, 2012.

Find full text
5

Zhang, Yi, Alexey Piunovskiy, and Albert Nikolaevich Shiryaev. Continuous-Time Markov Decision Processes: Borel Space Models and General Control Strategies. Springer International Publishing AG, 2021.

Find full text
6

Zhang, Yi, Alexey Piunovskiy, and Albert Nikolaevich Shiryaev. Continuous-Time Markov Decision Processes: Borel Space Models and General Control Strategies. Springer International Publishing AG, 2020.

Find full text
7

Hernandez-Lerma, Onesimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications (Stochastic Modelling and Applied Probability Book 62). Springer, 2009.

Find full text

Book chapters on the topic "Structured continuous time Markov decision processes"

1

Neuhäußer, Martin R., Mariëlle Stoelinga, and Joost-Pieter Katoen. "Delayed Nondeterminism in Continuous-Time Markov Decision Processes". In Foundations of Software Science and Computational Structures, 364–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00596-1_26.

Full text
2

Melchiors, Philipp. "Continuous-Time Markov Decision Processes". In Lecture Notes in Economics and Mathematical Systems, 29–41. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04540-5_4.

Full text
3

Guo, Xianping, and Onésimo Hernández-Lerma. "Continuous-Time Markov Decision Processes". In Stochastic Modelling and Applied Probability, 9–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02547-1_2.

Full text
4

Piunovskiy, Alexey, and Yi Zhang. "Selected Properties of Controlled Processes". In Continuous-Time Markov Decision Processes, 63–144. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_2.

Full text
5

Piunovskiy, Alexey, and Yi Zhang. "Description of CTMDPs and Preliminaries". In Continuous-Time Markov Decision Processes, 1–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_1.

Full text
6

Piunovskiy, Alexey, and Yi Zhang. "The Discounted Cost Model". In Continuous-Time Markov Decision Processes, 145–200. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_3.

Full text
7

Piunovskiy, Alexey, and Yi Zhang. "Reduction to DTMDP: The Total Cost Model". In Continuous-Time Markov Decision Processes, 201–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_4.

Full text
8

Piunovskiy, Alexey, and Yi Zhang. "The Average Cost Model". In Continuous-Time Markov Decision Processes, 263–336. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_5.

Full text
9

Piunovskiy, Alexey, and Yi Zhang. "The Total Cost Model: General Case". In Continuous-Time Markov Decision Processes, 337–402. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_6.

Full text
10

Piunovskiy, Alexey, and Yi Zhang. "Gradual-Impulsive Control Models". In Continuous-Time Markov Decision Processes, 403–72. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_7.

Full text

Conference papers on the topic "Structured continuous time Markov decision processes"

1

Huang, Yunhan, Veeraruna Kavitha, and Quanyan Zhu. "Continuous-Time Markov Decision Processes with Controlled Observations". In 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2019. http://dx.doi.org/10.1109/allerton.2019.8919744.

Full text
2

Neuhausser, Martin R., and Lijun Zhang. "Time-Bounded Reachability Probabilities in Continuous-Time Markov Decision Processes". In 2010 Seventh International Conference on the Quantitative Evaluation of Systems (QEST). IEEE, 2010. http://dx.doi.org/10.1109/qest.2010.47.

Full text
3

Rincon, Luis F., Yina F. Muñoz Moscoso, Jose Campos Matos, and Stefan Leonardo Leiva Maldonado. "Stochastic degradation model analysis for prestressed concrete bridges". In IABSE Symposium, Prague 2022: Challenges for Existing and Oncoming Structures. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2022. http://dx.doi.org/10.2749/prague.2022.1092.

Full text
Abstract:
Bridges in the road infrastructure represent a critical and strategic asset, because their functionality is vital for the economic and social development of countries. Currently, approximately 50% of construction industry expenditures in most developed countries are associated with repairs, maintenance, and rehabilitation of existing structures, and this share is expected to increase in the future. In this sense, it is necessary to monitor the behaviour of bridges and obtain indicators that represent the evolution of their state of service over time.

Degradation models therefore play a crucial role in determining asset performance, which will define cost-effective and efficient planned maintenance solutions to ensure continuous and correct operation. Among these models, Markov chains stand out as stochastic models that account for the uncertainty of complex phenomena; they are the most widely used for structures in general owing to their practicality, easy implementation, and compatibility. In this context, this research develops degradation models from a database of 414 prestressed concrete bridges continuously monitored from 2000 to 2016 in the state of Indiana, USA. Degradation models were developed from a rating system for the state of the deck, the superstructure, and the substructure. Finally, the database is divided, using cluster analysis, into classes that share similar deterioration trends, in order to obtain a more accurate prediction that can facilitate the decision processes of bridge management systems.
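As a minimal illustration of the kind of Markov-chain degradation model described above, a bridge condition-rating distribution can be propagated through an annual transition matrix; the three-state matrix below is invented for illustration, not the fitted model from the paper:

```python
def propagate(P, dist, years):
    """Propagate a condition-rating distribution `dist` through `years`
    annual transitions of the row-stochastic degradation matrix P."""
    n = len(P)
    for _ in range(years):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

# made-up three condition states (good, fair, poor); poor is absorbing,
# and deterioration only moves probability mass toward worse states
P = [[0.90, 0.10, 0.00],
     [0.00, 0.85, 0.15],
     [0.00, 0.00, 1.00]]
dist10 = propagate(P, [1.0, 0.0, 0.0], 10)
```

Starting from a bridge rated "good", `dist10` gives the predicted rating distribution after ten years, the kind of forecast a bridge management system can plan maintenance against.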
4

Qiu, Qinru, and Massoud Pedram. "Dynamic power management based on continuous-time Markov decision processes". In Proceedings of the 36th ACM/IEEE Design Automation Conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/309847.309997.

Full text
5

Feinberg, Eugene A., Manasa Mandava, and Albert N. Shiryaev. "Sufficiency of Markov policies for continuous-time Markov decision processes and solutions to Kolmogorov's forward equation for jump Markov processes". In 2013 IEEE 52nd Annual Conference on Decision and Control (CDC). IEEE, 2013. http://dx.doi.org/10.1109/cdc.2013.6760792.

Full text
6

Guo, Xianping. "Discounted Optimality for Continuous-Time Markov Decision Processes in Polish Spaces". In 2006 Chinese Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/chicc.2006.280655.

Full text
7

Alasmari, Naif, and Radu Calinescu. "Synthesis of Pareto-optimal Policies for Continuous-Time Markov Decision Processes". In 2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE, 2022. http://dx.doi.org/10.1109/seaa56994.2022.00071.

Full text
8

Cao, Xi-Ren. "A new model of continuous-time Markov processes and impulse stochastic control". In 2009 Joint 48th IEEE Conference on Decision and Control (CDC) and 28th Chinese Control Conference (CCC). IEEE, 2009. http://dx.doi.org/10.1109/cdc.2009.5399775.

Full text
9

Tanaka, Takashi, Mikael Skoglund, and Valeri Ugrinovskii. "Optimal sensor design and zero-delay source coding for continuous-time vector Gauss-Markov processes". In 2017 IEEE 56th Annual Conference on Decision and Control (CDC). IEEE, 2017. http://dx.doi.org/10.1109/cdc.2017.8264246.

Full text
10

Maginnis, Peter A., Matthew West, and Geir E. Dullerud. "Exact simulation of continuous time Markov jump processes with anticorrelated variance reduced Monte Carlo estimation". In 2014 IEEE 53rd Annual Conference on Decision and Control (CDC). IEEE, 2014. http://dx.doi.org/10.1109/cdc.2014.7039916.

Full text
