Academic literature on the topic 'Structured continuous time Markov decision processes'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Structured continuous time Markov decision processes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Structured continuous time Markov decision processes"

1

Shelton, C. R., and G. Ciardo. "Tutorial on Structured Continuous-Time Markov Processes." Journal of Artificial Intelligence Research 51 (December 23, 2014): 725–78. http://dx.doi.org/10.1613/jair.4415.

Abstract:
A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing "flat" or unstructured Markov processes and then move to structured Markov processes (those arising from state spaces consisting of assignments to variables) including Kronecker, decision-diagram, and continuous-time Bayesian network representations. We provide the first connection between decision-diagrams and continuous-time Bayesian networks.
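For readers new to these models, the "flat" case described in the abstract above can be condensed into a few lines of code: a generator matrix and a matrix exponential give the transient state distribution. This is a minimal sketch of a generic two-state CTMP with invented rates, not code from the tutorial itself:

```python
# Minimal continuous-time Markov process (CTMP) sketch: a two-state chain
# with generator Q, propagated to time t via the matrix exponential.
# Illustrative only; the states and rates are invented for this example.
import numpy as np
from scipy.linalg import expm

# Generator (rate) matrix: rows sum to zero; off-diagonals are jump rates.
Q = np.array([[-0.5, 0.5],
              [ 0.2, -0.2]])

p0 = np.array([1.0, 0.0])   # start in state 0 with probability 1
t = 3.0
pt = p0 @ expm(Q * t)       # transient distribution at time t
print(pt)                   # Markov property: depends only on p0 and t
```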
2

D'Amico, Guglielmo, Jacques Janssen, and Raimondo Manca. "Monounireducible Nonhomogeneous Continuous Time Semi-Markov Processes Applied to Rating Migration Models." Advances in Decision Sciences 2012 (October 16, 2012): 1–12. http://dx.doi.org/10.1155/2012/123635.

Abstract:
Monounireducible nonhomogeneous semi-Markov processes are defined and investigated. The monounireducible topological structure is a sufficient condition that guarantees the absorption of the semi-Markov process in a state of the process. This situation is of fundamental importance in the modelling of credit rating migrations because it permits the derivation of the distribution function of the time of default. An application in credit rating modelling is given in order to illustrate the results.
3

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 3 (September 1987): 644–56. http://dx.doi.org/10.2307/3214096.

Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies also. These results are applied to constrained optimization of SMDP, in which stationary (randomized) policies appear naturally. The structure of optimal constrained SMDP policies can then be elucidated by studying the corresponding controlled Markov chains. Moreover, constrained SMDP optimal policy computations can be more easily implemented in discrete time, the generalized uniformization being employed to relate discrete- and continuous-time optimal constrained policies.
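The classical construction that this paper generalizes can be illustrated in a few lines: choose a uniformization rate Λ at least as large as every exit rate and build the discrete-time kernel P = I + Q/Λ, whose self-loops correspond to the "virtual jumps" mentioned in the abstract. A minimal sketch with invented rates, not the authors' generalized construction:

```python
# Uniformization sketch: embed a CTMC with generator Q into a discrete-time
# chain P observed at the jumps of a Poisson clock with rate Lam >= max |q_ii|.
# Simplified illustration of the standard construction; rates are invented.
import numpy as np

Q = np.array([[-0.5, 0.5],
              [ 0.2, -0.2]])
Lam = np.max(np.abs(np.diag(Q)))   # uniformization rate
P = np.eye(2) + Q / Lam            # self-loops model the "virtual jumps"

# Both chains share the same stationary distribution:
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi, pi @ Q)                  # pi @ Q ~ 0, so pi is also stationary for Q
```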
4

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 3 (September 1987): 644–56. http://dx.doi.org/10.1017/s0021900200031375.

Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies also. These results are applied to constrained optimization of SMDP, in which stationary (randomized) policies appear naturally. The structure of optimal constrained SMDP policies can then be elucidated by studying the corresponding controlled Markov chains. Moreover, constrained SMDP optimal policy computations can be more easily implemented in discrete time, the generalized uniformization being employed to relate discrete- and continuous-time optimal constrained policies.
5

Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs." Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.

Abstract:
Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in decentralized settings, but are difficult to solve optimally (NEXP-Complete). As a new way of solving these problems, we introduce the idea of transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This approach makes use of the fact that planning can be accomplished in a centralized offline manner, while execution can still be decentralized. This new Dec-POMDP formulation, which we call an occupancy MDP, allows powerful POMDP and continuous-state MDP methods to be used for the first time. To provide scalability, we refine this approach by combining heuristic search and compact representations that exploit the structure present in multi-agent domains, without losing the ability to converge to an optimal solution. In particular, we introduce a feature-based heuristic search value iteration (FB-HSVI) algorithm that relies on feature-based compact representations, point-based updates and efficient action selection. A theoretical analysis demonstrates that FB-HSVI terminates in finite time with an optimal solution. We include an extensive empirical analysis using well-known benchmarks, thereby demonstrating that our approach provides significant scalability improvements compared to the state of the art.
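The occupancy-state transformation at the heart of this paper can be given a toy flavor: under centralized offline planning, the distribution over hidden states evolves deterministically given the actions, so the stochastic problem becomes a deterministic MDP over the simplex. The sketch below drastically simplifies (it drops the observation histories that true occupancy states condition on) and uses invented transition matrices:

```python
# Toy flavor of the occupancy-state view: the centralized offline planner
# tracks a distribution over hidden states and updates it deterministically
# given the action, turning a stochastic problem into a continuous-state
# deterministic MDP. Highly simplified; T[a][s, s'] is invented.
import numpy as np

T = {0: np.array([[0.9, 0.1], [0.3, 0.7]]),   # T[a][s, s'] = P(s' | s, a)
     1: np.array([[0.5, 0.5], [0.0, 1.0]])}

def occupancy_step(eta, a):
    """Deterministic update of the occupancy state under action a."""
    return eta @ T[a]

eta = np.array([1.0, 0.0])
for a in (0, 1, 1):
    eta = occupancy_step(eta, a)
print(eta)   # still a point in the simplex: continuous state, no randomness
```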
6

Pazis, Jason, and Ronald Parr. "Sample Complexity and Performance Bounds for Non-Parametric Approximate Linear Programming." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 782–88. http://dx.doi.org/10.1609/aaai.v27i1.8696.

Abstract:
One of the most difficult tasks in value function approximation for Markov Decision Processes is finding an approximation architecture that is expressive enough to capture the important structure in the value function, while at the same time not overfitting the training samples. Recent results in non-parametric approximate linear programming (NP-ALP), have demonstrated that this can be done effectively using nothing more than a smoothness assumption on the value function. In this paper we extend these results to the case where samples come from real world transitions instead of the full Bellman equation, adding robustness to noise. In addition, we provide the first max-norm, finite sample performance guarantees for any form of ALP. NP-ALP is amenable to problems with large (multidimensional) or even infinite (continuous) action spaces, and does not require a model to select actions using the resulting approximate solution.
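The smoothness assumption that NP-ALP builds on has a simple concrete form: if the value function is L-Lipschitz, every sample (x_i, V(x_i)) induces the upper bound V(x) <= V(x_i) + L·d(x, x_i), and the pointwise minimum of these bounds is a non-parametric value estimate. A toy one-dimensional illustration with invented samples and Lipschitz constant, not the authors' full linear program:

```python
# Sketch of the smoothness idea behind non-parametric ALP: sampled values
# plus a Lipschitz assumption induce an upper envelope
#   V(x) <= min_i ( V(x_i) + L * |x - x_i| ),
# used as the value estimate at unsampled states. All numbers are invented.
import numpy as np

L = 2.0                          # assumed Lipschitz constant
xs = np.array([0.0, 0.4, 1.0])   # sampled states
vs = np.array([1.0, 0.3, 0.8])   # their (approximate) values

def v_hat(x):
    return np.min(vs + L * np.abs(x - xs))

print([round(v_hat(x), 3) for x in (0.2, 0.5, 0.9)])
```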
7

Abid, Amira, Fathi Abid, and Bilel Kaffel. "CDS-based implied probability of default estimation." Journal of Risk Finance 21, no. 4 (July 21, 2020): 399–422. http://dx.doi.org/10.1108/jrf-05-2019-0079.

Abstract:
Purpose: This study aims to shed more light on the relationship between probability of default, investment horizons and rating classes to make decision-making processes more efficient.
Design/methodology/approach: Based on credit default swap (CDS) spreads, a methodology is implemented to determine the implied default probability and the implied rating, and then to estimate the term structure of the market-implied default probability and the transition matrix of implied ratings. The term-structure estimation is conducted in discrete time with the Nelson and Siegel model and in continuous time with the Vasicek model. The assessment of the transition matrix is performed using the homogeneous Markov model.
Findings: The results show that the CDS-based implied ratings are lower than those based on the Thomson Reuters approach, which can partially be explained by the fact that real-world probabilities are smaller than those founded on a risk-neutral framework. Moreover, investment- and sub-investment-grade companies exhibit different risk profiles with respect to the investment horizons.
Originality/value: The originality of this study consists in determining the implied rating based on CDS spreads and detecting the difference between the implied market rating and the Thomson Reuters StarMine rating. The results can be used to analyze credit risk assessments and examine issues related to the Thomson Reuters StarMine credit risk model.
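A much cruder cousin of the paper's methodology helps fix intuition: under a constant hazard rate and a flat CDS curve, the so-called credit triangle links the spread s and recovery rate R to an implied hazard λ ≈ s/(1−R), from which default probabilities over any horizon follow. The sketch below uses invented numbers and this simplification rather than the Nelson-Siegel, Vasicek, or Markov transition machinery of the paper:

```python
# Back-of-the-envelope link between a CDS spread and an implied default
# probability under a constant hazard rate ("credit triangle"):
#   lam = s / (1 - R),  P(default by t) = 1 - exp(-lam * t).
# The spread and recovery rate below are invented for illustration.
import math

s = 0.0150          # 150 bp CDS spread (annualized)
R = 0.40            # assumed recovery rate
lam = s / (1 - R)   # implied constant hazard rate
for t in (1, 5, 10):
    print(t, 1 - math.exp(-lam * t))   # implied default probability by year t
```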
8

Puterman, Martin L., and F. A. Van der Duyn Schouten. "Markov Decision Processes With Continuous Time Parameter." Journal of the American Statistical Association 80, no. 390 (June 1985): 491. http://dx.doi.org/10.2307/2287942.

9

Fu, Yaqing. "Variance Optimization for Continuous-Time Markov Decision Processes." Open Journal of Statistics 9, no. 2 (2019): 181–95. http://dx.doi.org/10.4236/ojs.2019.92014.

10

Guo, Xianping, and Yi Zhang. "Constrained total undiscounted continuous-time Markov decision processes." Bernoulli 23, no. 3 (August 2017): 1694–736. http://dx.doi.org/10.3150/15-bej793.


Dissertations / Theses on the topic "Structured continuous time Markov decision processes"

1

Villa, Simone. "Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.

Abstract:
The analysis of the huge amount of financial data made available by electronic markets calls for new models and techniques to effectively extract knowledge to be exploited in an informed decision-making process. The aim of this thesis is to introduce probabilistic graphical models that can be used to reason and to perform actions in such a context. In the first part of the thesis, we present a framework which exploits Bayesian networks to perform portfolio analysis and optimization in a holistic way. It leverages the compact and efficient representation of high-dimensional probability distributions offered by Bayesian networks and their ability to perform evidential reasoning in order to optimize the portfolio according to different economic scenarios. In many cases, we would like to reason about market changes over time, i.e. to express queries as probability distributions over time. Continuous time Bayesian networks can be used to address this issue. In the second part of the thesis, we show how this model can be used to tackle real financial problems, and we describe two notable extensions. The first concerns classification: we introduce an algorithm for learning these classifiers from Big Data and describe its straightforward application to the foreign exchange prediction problem in the high-frequency domain. The second is related to non-stationary domains, where we explicitly model the presence of statistical dependencies in multivariate time series while allowing them to change over time. In the third part of the thesis, we describe the use of continuous time Bayesian networks within the Markov decision process framework, which provides a model for sequential decision-making under uncertainty. We introduce a method, based on this framework, to control continuous-time dynamic systems that relies on additive and context-specific features to scale up to large state spaces. Finally, we show the performance of our method in a simplified but meaningful trading domain.
2

Saha, Subhamay. "Single and Multi-player Stochastic Dynamic Optimization." Thesis, 2013. http://etd.iisc.ernet.in/2005/3357.

Abstract:
In this thesis we investigate single- and multi-player stochastic dynamic optimization problems. We consider both discrete- and continuous-time processes. In the multi-player setup we investigate zero-sum games with both complete and partial information. We study partially observable stochastic games with the average cost criterion, the state process being a discrete-time controlled Markov chain. The idea involved in studying this problem is to replace the original unobservable state variable with a suitable completely observable state variable. We establish the existence of the value of the game and also obtain optimal strategies for both players. We also study a continuous-time zero-sum stochastic game with complete observation. In this case the state is a pure jump Markov process. We investigate the finite horizon total cost criterion. We characterise the value function via appropriate Isaacs equations. This also yields optimal Markov strategies for both players. In the single-player setup we investigate risk-sensitive control of continuous-time Markov chains. We consider both finite and infinite horizon problems. For the finite horizon total cost problem and the infinite horizon discounted cost problem we characterise the value function as the unique solution of appropriate Hamilton-Jacobi-Bellman equations. We also derive optimal Markov controls in both cases. For the infinite horizon average cost case we show the existence of an optimal stationary control. We also give a value iteration scheme for computing the optimal control in the case of finite state and action spaces. Further, we introduce a new class of stochastic processes which we call stochastic processes with "age-dependent transition rates". We give a rigorous construction of the process. We prove that under certain assumptions the process is Feller. We also compute the limiting probabilities for our process. We then study the controlled version of the above process. In this case we take the risk-neutral cost criterion. We solve the infinite horizon discounted cost problem and the average cost problem for this process. The crucial step in analysing these problems is to prove that the original control problem is equivalent to an appropriate semi-Markov decision problem. Then the value functions and optimal controls are characterised using this equivalence and the theory of semi-Markov decision processes (SMDPs). The analysis of finite horizon problems differs from that of infinite horizon problems because the idea of converting to an equivalent SMDP does not seem to work in this case. So we deal with the finite horizon total cost problem by showing that our problem is equivalent to another appropriately defined discrete-time Markov decision problem. This allows us to characterise the value function and to find an optimal Markov control.
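The reduction strategy described in this abstract (recasting a continuous-time control problem as an equivalent discrete-time one) can be sketched generically for the simplest case: a discounted CTMDP with a state-independent uniformization rate. All rates, costs, and the discount below are invented, and this is the textbook reduction rather than the thesis's age-dependent construction:

```python
# Toy value iteration for a discounted CTMDP after reduction to an
# equivalent discrete-time problem via uniformization:
#   V(s) = min_a [ c(s,a)/(Lam+alpha) + (Lam/(Lam+alpha)) * sum_s' P_a(s,s') V(s') ]
# Generic sketch of the reduction idea; rates and costs are invented.
import numpy as np

Lam, alpha = 2.0, 0.1                 # uniformization rate, discount rate
beta = Lam / (Lam + alpha)            # effective discrete discount factor
Q = {0: np.array([[-1.0, 1.0], [0.5, -0.5]]),    # generators per action
     1: np.array([[-2.0, 2.0], [2.0, -2.0]])}
c = {0: np.array([1.0, 3.0]),                    # cost rates per action
     1: np.array([2.0, 1.0])}

P = {a: np.eye(2) + Q[a] / Lam for a in Q}       # uniformized kernels
V = np.zeros(2)
for _ in range(500):                              # value iteration
    V = np.min([c[a] / (Lam + alpha) + beta * P[a] @ V for a in Q], axis=0)
print(V.round(3))
```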

Books on the topic "Structured continuous time Markov decision processes"

1

Guo, Xianping, and Onésimo Hernández-Lerma. Continuous-Time Markov Decision Processes. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02547-1.

2

Piunovskiy, Alexey, and Yi Zhang. Continuous-Time Markov Decision Processes. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9.

3

Hernandez-Lerma, Onesimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications. Springer, 2010.

4

Hernández-Lerma, Onésimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications. Springer, 2012.

5

Zhang, Yi, Alexey Piunovskiy, and Albert Nikolaevich Shiryaev. Continuous-Time Markov Decision Processes: Borel Space Models and General Control Strategies. Springer International Publishing AG, 2021.

6

Zhang, Yi, Alexey Piunovskiy, and Albert Nikolaevich Shiryaev. Continuous-Time Markov Decision Processes: Borel Space Models and General Control Strategies. Springer International Publishing AG, 2020.

7

Hernandez-Lerma, Onesimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications (Stochastic Modelling and Applied Probability Book 62). Springer, 2009.


Book chapters on the topic "Structured continuous time Markov decision processes"

1

Neuhäußer, Martin R., Mariëlle Stoelinga, and Joost-Pieter Katoen. "Delayed Nondeterminism in Continuous-Time Markov Decision Processes." In Foundations of Software Science and Computational Structures, 364–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00596-1_26.

2

Melchiors, Philipp. "Continuous-Time Markov Decision Processes." In Lecture Notes in Economics and Mathematical Systems, 29–41. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04540-5_4.

3

Guo, Xianping, and Onésimo Hernández-Lerma. "Continuous-Time Markov Decision Processes." In Stochastic Modelling and Applied Probability, 9–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02547-1_2.

4

Piunovskiy, Alexey, and Yi Zhang. "Selected Properties of Controlled Processes." In Continuous-Time Markov Decision Processes, 63–144. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_2.

5

Piunovskiy, Alexey, and Yi Zhang. "Description of CTMDPs and Preliminaries." In Continuous-Time Markov Decision Processes, 1–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_1.

6

Piunovskiy, Alexey, and Yi Zhang. "The Discounted Cost Model." In Continuous-Time Markov Decision Processes, 145–200. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_3.

7

Piunovskiy, Alexey, and Yi Zhang. "Reduction to DTMDP: The Total Cost Model." In Continuous-Time Markov Decision Processes, 201–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_4.

8

Piunovskiy, Alexey, and Yi Zhang. "The Average Cost Model." In Continuous-Time Markov Decision Processes, 263–336. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_5.

9

Piunovskiy, Alexey, and Yi Zhang. "The Total Cost Model: General Case." In Continuous-Time Markov Decision Processes, 337–402. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_6.

10

Piunovskiy, Alexey, and Yi Zhang. "Gradual-Impulsive Control Models." In Continuous-Time Markov Decision Processes, 403–72. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_7.


Conference papers on the topic "Structured continuous time Markov decision processes"

1

Huang, Yunhan, Veeraruna Kavitha, and Quanyan Zhu. "Continuous-Time Markov Decision Processes with Controlled Observations." In 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2019. http://dx.doi.org/10.1109/allerton.2019.8919744.

2

Neuhäußer, Martin R., and Lijun Zhang. "Time-Bounded Reachability Probabilities in Continuous-Time Markov Decision Processes." In 2010 Seventh International Conference on the Quantitative Evaluation of Systems (QEST). IEEE, 2010. http://dx.doi.org/10.1109/qest.2010.47.

3

Rincon, Luis F., Yina F. Muñoz Moscoso, Jose Campos Matos, and Stefan Leonardo Leiva Maldonado. "Stochastic degradation model analysis for prestressed concrete bridges." In IABSE Symposium, Prague 2022: Challenges for Existing and Oncoming Structures. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2022. http://dx.doi.org/10.2749/prague.2022.1092.

Abstract:
Bridges in the road infrastructure represent a critical and strategic asset; their functionality is vital for the economic and social development of countries. Currently, approximately 50% of construction industry expenditures in most developed countries are associated with repairs, maintenance, and rehabilitation of existing structures, and this share is expected to increase in the future. In this sense, it is necessary to monitor the behaviour of bridges and obtain indicators that represent the evolution of their state of service over time.
Therefore, degradation models play a crucial role in determining asset performance, which will define cost-effective and efficient planned-maintenance solutions to ensure continuous and correct operation. Among these models, Markov chains stand out as stochastic models that account for the uncertainty of complex phenomena; they are the most widely used for structures in general due to their practicality, easy implementation, and compatibility. In this context, this research develops degradation models from a database of 414 prestressed concrete bridges continuously monitored from 2000 to 2016 in the state of Indiana, USA. Degradation models were developed from a rating system for the state of the deck, the superstructure, and the substructure. Finally, the database is divided via cluster analysis into classes that share similar deterioration trends, to obtain a more accurate prediction that can facilitate the decision processes of bridge management systems.
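The kind of Markov-chain degradation model described in this abstract is easy to sketch: condition ratings become states, a one-year transition matrix (upper triangular when no repair is modelled) is estimated from inspection records, and matrix powers propagate the condition distribution forward. The matrix and ratings below are invented for illustration, not fitted to the Indiana data:

```python
# Sketch of a Markov-chain degradation model: condition ratings as states,
# a one-year transition matrix P (upper triangular: deterioration only),
# and the state distribution propagated over the years.
import numpy as np

# States: condition ratings 9 (new) down to 6; P[i, j] = P(next = j | now = i)
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.92, 0.08, 0.00],
              [0.00, 0.00, 0.94, 0.06],
              [0.00, 0.00, 0.00, 1.00]])

ratings = np.array([9, 8, 7, 6])
d = np.array([1.0, 0.0, 0.0, 0.0])       # a new deck
for year in (5, 10, 20):
    dist = d @ np.linalg.matrix_power(P, year)
    print(year, dist.round(3), (dist * ratings).sum().round(2))  # expected rating
```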
4

Qiu, Qinru, and Massoud Pedram. "Dynamic power management based on continuous-time Markov decision processes." In Proceedings of the 36th ACM/IEEE Design Automation Conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/309847.309997.

5

Feinberg, Eugene A., Manasa Mandava, and Albert N. Shiryaev. "Sufficiency of Markov policies for continuous-time Markov decision processes and solutions to Kolmogorov's forward equation for jump Markov processes." In 2013 IEEE 52nd Annual Conference on Decision and Control (CDC). IEEE, 2013. http://dx.doi.org/10.1109/cdc.2013.6760792.

6

Guo, Xianping. "Discounted Optimality for Continuous-Time Markov Decision Processes in Polish Spaces." In 2006 Chinese Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/chicc.2006.280655.

7

Alasmari, Naif, and Radu Calinescu. "Synthesis of Pareto-optimal Policies for Continuous-Time Markov Decision Processes." In 2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE, 2022. http://dx.doi.org/10.1109/seaa56994.2022.00071.

8

Cao, Xi-Ren. "A new model of continuous-time Markov processes and impulse stochastic control." In 2009 Joint 48th IEEE Conference on Decision and Control (CDC) and 28th Chinese Control Conference (CCC). IEEE, 2009. http://dx.doi.org/10.1109/cdc.2009.5399775.

9

Tanaka, Takashi, Mikael Skoglund, and Valeri Ugrinovskii. "Optimal sensor design and zero-delay source coding for continuous-time vector Gauss-Markov processes." In 2017 IEEE 56th Annual Conference on Decision and Control (CDC). IEEE, 2017. http://dx.doi.org/10.1109/cdc.2017.8264246.

10

Maginnis, Peter A., Matthew West, and Geir E. Dullerud. "Exact simulation of continuous time Markov jump processes with anticorrelated variance reduced Monte Carlo estimation." In 2014 IEEE 53rd Annual Conference on Decision and Control (CDC). IEEE, 2014. http://dx.doi.org/10.1109/cdc.2014.7039916.
