Academic literature on the topic 'Sequential decision processes'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sequential decision processes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sequential decision processes"

1

Alagoz, Oguzhan, Heather Hsu, Andrew J. Schaefer, and Mark S. Roberts. "Markov Decision Processes: A Tool for Sequential Decision Making under Uncertainty." Medical Decision Making 30, no. 4 (December 31, 2009): 474–83. http://dx.doi.org/10.1177/0272989x09353194.

Full text
Abstract:
We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
APA, Harvard, Vancouver, ISO, and other styles
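A minimal illustration of the kind of model described in the abstract above: the sketch below runs standard value iteration on a tiny, invented two-state MDP. The states, actions, transition probabilities, and rewards are assumptions for illustration only and are not taken from the cited paper.

```python
# Value iteration on a tiny, made-up MDP (illustrative only).
# States: 0 = "sick", 1 = "healthy"; actions: 0 = "wait", 1 = "treat".
import numpy as np

gamma = 0.95                      # discount factor
n_states, n_actions = 2, 2

# P[a, s, s'] = transition probability; R[a, s] = expected immediate reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],     # action "wait"
    [[0.5, 0.5], [0.1, 0.9]],     # action "treat"
])
R = np.array([
    [0.0, 1.0],                   # "wait": reward 1 only when healthy
    [-0.5, 0.5],                  # "treat": treatment carries a cost of 0.5
])

V = np.zeros(n_states)
for _ in range(1000):
    # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * P @ V
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:   # converged
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)
print("optimal values:", V, "optimal policy:", policy)
```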
2

Sobel, Matthew J., and Wei Wei. "Myopic Solutions of Homogeneous Sequential Decision Processes." Operations Research 58, no. 4-part-2 (August 2010): 1235–46. http://dx.doi.org/10.1287/opre.1090.0767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

El Chamie, Mahmoud, Dylan Janak, and Behçet Açıkmeşe. "Markov decision processes with sequential sensor measurements." Automatica 103 (May 2019): 450–60. http://dx.doi.org/10.1016/j.automatica.2019.02.026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Feinberg, Eugene A. "On essential information in sequential decision processes." Mathematical Methods of Operations Research 62, no. 3 (November 15, 2005): 399–410. http://dx.doi.org/10.1007/s00186-005-0035-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Maruyama, Yukihiro. "Strong representation theorems for bitone sequential decision processes." Optimization Methods and Software 18, no. 4 (August 2003): 475–89. http://dx.doi.org/10.1080/1055678031000154707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Milani Fard, M., and J. Pineau. "Non-Deterministic Policies in Markovian Decision Processes." Journal of Artificial Intelligence Research 40 (January 5, 2011): 1–24. http://dx.doi.org/10.1613/jair.3175.

Full text
Abstract:
Markovian processes have long been used to model stochastic environments. Reinforcement learning has emerged as a framework to solve sequential planning and decision-making problems in such environments. In recent years, attempts were made to apply methods from reinforcement learning to construct decision support systems for action selection in Markovian environments. Although conventional methods in reinforcement learning have proved to be useful in problems concerning sequential decision-making, they cannot be applied in their current form to decision support systems, such as those in medical domains, as they suggest policies that are often highly prescriptive and leave little room for the user's input. Without the ability to provide flexible guidelines, it is unlikely that these methods can gain ground with users of such systems. This paper introduces the new concept of non-deterministic policies to allow more flexibility in the user's decision-making process, while constraining decisions to remain near optimal solutions. We provide two algorithms to compute non-deterministic policies in discrete domains. We study the output and running time of these methods on a set of synthetic and real-world problems. In an experiment with human subjects, we show that humans assisted by hints based on non-deterministic policies outperform both human-only and computer-only agents in a web navigation task.
APA, Harvard, Vancouver, ISO, and other styles
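One way to read non-deterministic policies is as set-valued recommendations: for each state, keep every action whose value is within a tolerance of the best one. The sketch below illustrates that reading with an invented epsilon-threshold rule on a toy MDP; it is not the authors' algorithm, whose construction and guarantees are given in the paper.

```python
# Illustrative sketch: turn optimal Q-values into a set-valued ("non-deterministic")
# policy by keeping every action within epsilon of the best action in each state.
# The toy MDP and the epsilon rule are assumptions, not the paper's construction.
import numpy as np

gamma, epsilon = 0.9, 0.05
P = np.array([                       # P[a, s, s']
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.6, 0.4], [0.1, 0.9]],
])
R = np.array([[0.2, 1.0],            # R[a, s]
              [0.3, 0.9]])

V = np.zeros(2)
for _ in range(500):                 # standard value iteration
    Q = R + gamma * P @ V
    V = Q.max(axis=0)

allowed = {s: [a for a in range(Q.shape[0]) if Q[a, s] >= V[s] - epsilon]
           for s in range(Q.shape[1])}
print("near-optimal action sets per state:", allowed)
```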
7

Maruyama, Yukihiro. "SUPER-STRONG REPRESENTATION THEOREMS FOR NONDETERMINISTIC SEQUENTIAL DECISION PROCESSES." Journal of the Operations Research Society of Japan 60, no. 2 (2017): 136–55. http://dx.doi.org/10.15807/jorsj.60.136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hantula, Donald A., and Charles R. Crowell. "Intermittent Reinforcement and Escalation Processes in Sequential Decision Making." Journal of Organizational Behavior Management 14, no. 2 (June 30, 1994): 7–36. http://dx.doi.org/10.1300/j075v14n02_03.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Canbolat, Pelin G., and Uriel G. Rothblum. "(Approximate) iterated successive approximations algorithm for sequential decision processes." Annals of Operations Research 208, no. 1 (February 8, 2012): 309–20. http://dx.doi.org/10.1007/s10479-012-1073-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ying, Ming-Sheng, Yuan Feng, and Sheng-Gang Ying. "Optimal Policies for Quantum Markov Decision Processes." International Journal of Automation and Computing 18, no. 3 (March 20, 2021): 410–21. http://dx.doi.org/10.1007/s11633-021-1278-z.

Full text
Abstract:
The Markov decision process (MDP) offers a general framework for modelling sequential decision making where outcomes are random. In particular, it serves as a mathematical framework for reinforcement learning. This paper introduces an extension of MDP, namely the quantum MDP (qMDP), that can serve as a mathematical model of decision making about quantum systems. We develop dynamic programming algorithms for policy evaluation and for finding optimal policies for qMDPs in the finite-horizon case. The results obtained in this paper provide some useful mathematical tools for reinforcement learning techniques applied to the quantum world.
APA, Harvard, Vancouver, ISO, and other styles
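For orientation, the sketch below shows classical finite-horizon backward induction, the non-quantum baseline that qMDPs extend. The horizon, transitions, and rewards are invented for illustration and carry no quantum structure.

```python
# Classical finite-horizon backward induction (the non-quantum baseline that
# qMDPs extend).  Horizon, transitions, and rewards below are invented.
import numpy as np

T = 5                                # horizon
P = np.array([                       # P[a, s, s']
    [[0.7, 0.3], [0.4, 0.6]],
    [[0.2, 0.8], [0.5, 0.5]],
])
R = np.array([[1.0, 0.0],            # R[a, s]
              [0.0, 1.5]])

n_actions, n_states = R.shape
V = np.zeros(n_states)               # value at the final stage
policy = np.zeros((T, n_states), dtype=int)

for t in reversed(range(T)):         # backward induction, last stage first
    Q = R + P @ V                    # undiscounted finite-horizon backup
    policy[t] = Q.argmax(axis=0)
    V = Q.max(axis=0)

print("stage-0 values:", V)
print("non-stationary policy (rows = stages):")
print(policy)
```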

Dissertations / Theses on the topic "Sequential decision processes"

1

Saebi, Nasrollah. "Sequential decision procedures for point processes." Thesis, Birkbeck (University of London), 1987. http://eprints.kingston.ac.uk/8409/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ramsey, David Mark. "Models of evolution, interaction and learning in sequential decision processes." Thesis, University of Bristol, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, You-Gan. "Contributions to the theory of Gittins indices : with applications in pharmaceutical research and clinical trials." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

El Khalfi, Zeineb. "Lexicographic refinements in possibilistic sequential decision-making models." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30269/document.

Full text
Abstract:
This work contributes to possibilistic decision theory and more specifically to sequential decision-making under possibilistic uncertainty, at both the theoretical and practical levels. Even though it is appealing for its ability to handle qualitative decision problems, possibilistic decision theory suffers from an important drawback: qualitative possibilistic utility criteria compare acts through min and max operators, which leads to a drowning effect. To overcome this lack of decision power, several refinements have been proposed in the literature. Lexicographic refinements are particularly appealing since they allow one to benefit from the expected-utility background while remaining "qualitative". However, these refinements are defined for non-sequential decision problems only. In this thesis, we present results on the extension of lexicographic preference relations to sequential decision problems, in particular to possibilistic decision trees and Markov decision processes. This leads to new planning algorithms that are more "decisive" than their original possibilistic counterparts. We first present optimistic and pessimistic lexicographic preference relations between policies, with and without intermediate utilities, that refine the optimistic and pessimistic qualitative utilities respectively. We prove that the proposed criteria satisfy the principle of Pareto efficiency as well as the property of strict monotonicity; the latter guarantees that a dynamic programming algorithm can be used to compute optimal policies. Considering the problem of policy optimization in possibilistic decision trees and finite-horizon Markov decision processes, we provide adaptations of the dynamic programming algorithm that compute a lexicographically optimal policy in polynomial time. These algorithms are based on the lexicographic comparison of the matrices of trajectories associated with the sub-policies. This algorithmic work is completed with an experimental study that shows the feasibility and the interest of the proposed approach. We then prove that the lexicographic criteria still benefit from an expected-utility grounding and can be captured by infinitesimal expected utilities. The last part of our work is devoted to policy optimization in (possibly infinite) stationary Markov decision processes. We propose a value iteration algorithm for the computation of lexicographically optimal policies and extend these results to the infinite-horizon case. Since the size of the matrices grows exponentially (which is especially problematic in the infinite-horizon case), we propose an approximation algorithm that keeps only the most informative part of each matrix of trajectories, namely its first rows and columns. Finally, we report experimental results that show the effectiveness of the algorithms based on truncating the matrices.
APA, Harvard, Vancouver, ISO, and other styles
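As a small illustration of the kind of lexicographic refinement discussed in this thesis, the sketch below compares two invented utility profiles with a leximax rule, which breaks ties that the plain max-based (optimistic) criterion cannot. The thesis's own criteria compare matrices of trajectories and are more general; this only shows the vector case.

```python
# Minimal sketch of a leximax comparison, a lexicographic refinement of the
# optimistic (max-based) possibilistic criterion.  The utility profiles below
# are made up for illustration.

def leximax_key(utilities):
    """Sort utilities in decreasing order; tuples then compare lexicographically."""
    return tuple(sorted(utilities, reverse=True))

def better(u, v):
    """Return True if profile u is strictly preferred to v under leximax."""
    return leximax_key(u) > leximax_key(v)

# Two policies share the same maximal utility (0.9): the plain optimistic
# criterion cannot separate them (the "drowning effect"), but leximax can.
policy_a = [0.9, 0.7, 0.1]
policy_b = [0.9, 0.2, 0.2]
print(better(policy_a, policy_b))   # True: the tie is broken in favour of 0.7 > 0.2
```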
5

Raffensperger, Peter Abraham. "Measuring and Influencing Sequential Joint Agent Behaviours." Thesis, University of Canterbury. Electrical and Computer Engineering, 2013. http://hdl.handle.net/10092/7472.

Full text
Abstract:
Algorithmically designed reward functions can influence groups of learning agents toward measurable desired sequential joint behaviours. Influencing learning agents toward desirable behaviours is non-trivial due to the difficulties of assigning credit for global success to the deserving agents and of inducing coordination. Quantifying joint behaviours lets us identify global success by ranking some behaviours as more desirable than others. We propose a real-valued metric for turn-taking, demonstrating how to measure one sequential joint behaviour. We describe how to identify the presence of turn-taking in simulation results and we calculate the quantity of turn-taking that could be observed between independent random agents. We demonstrate our turn-taking metric by reinterpreting previous work on turn-taking in emergent communication and by analysing a recorded human conversation. Given a metric, we can explore the space of reward functions and identify those reward functions that result in global success in groups of learning agents. We describe 'medium access games' as a model for human and machine communication and we present simulation results for an extensive range of reward functions for pairs of Q-learning agents. We use the Nash equilibria of medium access games to develop predictors for determining which reward functions result in turn-taking. Having demonstrated the predictive power of Nash equilibria for turn-taking in medium access games, we focus on synthesis of reward functions for stochastic games that result in arbitrary desirable Nash equilibria. Our method constructs a reward function such that a particular joint behaviour is the unique Nash equilibrium of a stochastic game, provided that such a reward function exists. This method builds on techniques for designing rewards for Markov decision processes and for normal form games. We explain our reward design methods in detail and formally prove that they are correct.
APA, Harvard, Vancouver, ISO, and other styles
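A rough illustration of the setting described above: two independent learners in a toy stateless "medium access" game with an invented payoff scheme. It is not the thesis's reward-design method or its turn-taking metric; it only shows how a chosen reward structure shapes the joint behaviour that two learning agents settle into.

```python
# Two independent learners in a toy "medium access" game: each round both agents
# choose transmit (1) or wait (0); a lone transmitter scores, a collision costs both.
# Payoffs, learning rate, and exploration rate are all invented for illustration.
import random

random.seed(0)
alpha, eps = 0.1, 0.1
Q = [[0.0, 0.0], [0.0, 0.0]]          # Q[agent][action] for a stateless game

def reward(me, other):
    if me == 1 and other == 0:
        return 1.0                    # successful transmission
    if me == 1 and other == 1:
        return -1.0                   # collision
    return 0.0                        # waiting

for _ in range(20000):
    acts = [random.randrange(2) if random.random() < eps
            else max(range(2), key=lambda a: Q[i][a]) for i in range(2)]
    for i in range(2):
        r = reward(acts[i], acts[1 - i])
        Q[i][acts[i]] += alpha * (r - Q[i][acts[i]])   # bandit-style update

print("learned Q-values:", Q)
```

Whether anything like turn-taking emerges here depends entirely on the reward function, which is exactly the design question the thesis addresses.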
6

Dulac-Arnold, Gabriel. "A General Sequential Model for Constrained Classification." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066572.

Full text
Abstract:
We propose a new approach to sparse representation learning in which the goal is to limit the number of features selected per datum, resulting in a model we call Datum-Wise Sparse Classification (DWSC). Our approach allows the features used for classification to differ from one datum to another: a datum that is easy to classify will be classified using only a few features, while more features are used for more complex data. Unlike traditional regularization approaches, which seek a balance between performance and sparsity over the whole dataset, our motivation is to find this balance at the level of individual data points, allowing a higher average sparsity for equivalent performance. This kind of sparsity is interesting for several reasons: first, we take the view that the simplest explanations are always preferable; second, for understanding the data, a per-datum sparse representation provides information about their underlying structure: typically, if a dataset comes from two disjoint distributions, DWSC allows the model to automatically consider only the features of the distribution that generated each datum.
This thesis introduces a body of work on sequential models for classification. These models allow a more flexible and general approach to classification tasks. Many tasks ultimately require the classification of some object but cannot be handled in a single atomic classification step. This is the case for tasks where information is not immediately available upfront, or where accessing different aspects of the object being classified carries various costs (in time, computational power, monetary cost, etc.). The goal of this thesis is to introduce a new method, which we call datum-wise classification, that is able to handle these more complex classification tasks by modelling them as sequential processes.
APA, Harvard, Vancouver, ISO, and other styles
7

Warren, Adam L. "Sequential decision-making under uncertainty." *McMaster only, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zawaideh, Zaid. "Eliciting preferences sequentially using partially observable Markov decision processes." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18794.

Full text
Abstract:
Decision support systems have been gaining in importance recently. Yet one of the bottlenecks in designing such systems lies in understanding how the user values different decision outcomes, or more simply, what the user's preferences are. Preference elicitation promises to remove the guesswork from designing decision-making agents by providing more formal methods for measuring the 'goodness' of outcomes. This thesis aims to address some of the challenges of preference elicitation, such as the high dimensionality of the underlying problem. The problem is formulated as a partially observable Markov decision process (POMDP) using a factored representation to take advantage of the structure inherent to preference elicitation problems. Moreover, simple preference knowledge about problem attributes is used to acquire more accurate preferences without increasing the burden on the user. Sparse terminal actions are defined to allow a flexible trade-off between the speed and the accuracy of the elicited preference function. Empirical simulations are used to validate the proposed methodology. The result is a framework that is flexible enough to be applied to a wide range of domains and that addresses some of the challenges facing preference elicitation methods.
APA, Harvard, Vancouver, ISO, and other styles
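As a loose illustration of sequential preference elicitation (not the factored POMDP formulation used in this thesis), the sketch below maintains samples of a hidden utility weight vector and discards those inconsistent with the answers to pairwise comparison queries. The attribute dimension, query scheme, and simulated noiseless user are all assumptions made for the sake of the example.

```python
# Illustrative sketch of sequential preference elicitation: keep candidate utility
# weight vectors that are consistent with the user's answers to pairwise queries.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.7, 0.2, 0.1])               # hidden user preferences (assumed)
particles = rng.dirichlet(np.ones(3), size=5000)  # candidate weight vectors

def ask(item_a, item_b):
    """Simulated (noiseless) user: prefers the item with higher true utility."""
    return (item_a @ true_w) >= (item_b @ true_w)

for _ in range(10):                               # ten elicitation questions
    a, b = rng.random(3), rng.random(3)           # two random multi-attribute items
    prefers_a = ask(a, b)
    scores = particles @ (a - b)
    keep = scores >= 0 if prefers_a else scores < 0
    if keep.sum() > 0:                            # avoid emptying the particle set
        particles = particles[keep]

print("estimated weights:", particles.mean(axis=0), "true weights:", true_w)
```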
9

Hoock, Jean-Baptiste. "Contributions to Simulation-based High-dimensional Sequential Decision Making." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00912338.

Full text
Abstract:
My thesis is entitled "Contributions to Simulation-based High-dimensional Sequential Decision Making". The context of the thesis is games, planning and Markov decision processes. An agent interacts with its environment by successively making decisions. The agent starts from an initial state and proceeds until a final state in which it can no longer make decisions. At each timestep, the agent receives an observation of the state of the environment. From this observation and its knowledge, the agent makes a decision which modifies the state of the environment. The agent then receives a reward and a new observation. The goal is to maximize the sum of rewards obtained during a simulation from an initial state to a final state. The policy of the agent is the function which, from the history of observations, returns a decision. We work in a context where (i) the number of states is huge, (ii) the reward carries little information, (iii) the probability of quickly reaching a good final state is low, and (iv) prior knowledge is either nonexistent or hard to exploit. Both applications described in this thesis present these constraints: the game of Go and a 3D simulator of the European project MASH (Massive Sets of Heuristics). In order to make satisfying decisions in this context, several solutions are proposed: 1. simulating with an exploration/exploitation compromise (MCTS); 2. reducing the complexity by local solving (GoldenEye); 3. building a policy which improves itself (RBGP); 4. learning prior knowledge (CluVo+GMCTS). Monte-Carlo Tree Search (MCTS) is the state of the art for the game of Go. From a model of the environment, MCTS incrementally and asymmetrically builds a tree of possible futures by performing Monte-Carlo simulations. The tree starts from the current observation of the agent. The agent alternates between exploring the model and exploiting decisions which statistically give a good cumulative reward. We discuss two ways of improving MCTS: parallelization and the addition of prior knowledge. Parallelization does not solve some weaknesses of MCTS; in particular, some local problems remain challenging. We propose an algorithm (GoldenEye) composed of two parts: detection of a local problem and then its resolution. The resolution algorithm reuses some concepts of MCTS and solves difficult problems from a classical database. Adding prior knowledge by hand is laborious and tedious. We propose a method called Racing-based Genetic Programming (RBGP) to add prior knowledge automatically. Its strong point is that RBGP rigorously validates the addition of a piece of prior knowledge, and RBGP can be used to build a policy (instead of only optimizing an algorithm). In some applications such as MASH, simulations are too time-consuming and there is no prior knowledge and no model of the environment, so Monte-Carlo Tree Search cannot be used. To make MCTS usable in this context, we propose a method for learning prior knowledge (CluVo). We then use pieces of prior knowledge to speed up the agent's learning and to build a model. From this model we use an adapted version of Monte-Carlo Tree Search (GMCTS). This method solves difficult problems of MASH and gives good results in an application to a word game.
APA, Harvard, Vancouver, ISO, and other styles
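A compact sketch of the UCT flavour of Monte-Carlo Tree Search referred to above, applied to an invented single-agent planning toy (choose +1 or +2 for six steps and try to land exactly on 10). The task, constants, and expansion scheme are illustrative assumptions, not the thesis's implementation.

```python
# Compact UCT (Monte-Carlo Tree Search) on a toy single-agent planning task.
import math, random

random.seed(0)
ACTIONS, HORIZON, TARGET = (1, 2), 6, 10

def step(state, action):
    total, t = state
    return (total + action, t + 1)

def terminal(state):
    return state[1] == HORIZON

def reward(state):
    return 1.0 if state[0] == TARGET else 0.0

class Node:
    def __init__(self, state):
        self.state, self.children = state, {}      # action -> Node
        self.visits, self.value = 0, 0.0

def uct_child(node, c=1.4):
    # Pick the child maximising mean value plus an exploration bonus.
    return max(node.children.items(),
               key=lambda kv: kv[1].value / (kv[1].visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1e-9)))

def rollout(state):
    # Default policy: act uniformly at random until the horizon.
    while not terminal(state):
        state = step(state, random.choice(ACTIONS))
    return reward(state)

def simulate(node):
    if terminal(node.state):
        value = reward(node.state)
    elif len(node.children) < len(ACTIONS):        # expansion of an untried action
        a = [a for a in ACTIONS if a not in node.children][0]
        child = Node(step(node.state, a))
        node.children[a] = child
        value = rollout(child.state)
        child.visits += 1
        child.value += value
    else:                                          # selection, then recurse
        a, child = uct_child(node)
        value = simulate(child)
    node.visits += 1                               # backpropagation
    node.value += value
    return value

root = Node((0, 0))
for _ in range(5000):
    simulate(root)
best = max(root.children.items(), key=lambda kv: kv[1].visits)[0]
print("first action chosen by UCT:", best)
```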
10

Filho, Ricardo Shirota. "Processos de decisão Markovianos com probabilidades imprecisas e representações relacionais: algoritmos e fundamentos." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-13062013-160912/.

Full text
Abstract:
This work is devoted to the theoretical and algorithmic development of Markov decision processes with imprecise probabilities and relational representations. In the literature, this configuration has been important within artificial intelligence planning, where the use of relational representations allows compact descriptions and the use of imprecise probabilities yields a more general form of uncertainty. There are three main contributions. First, we present a brief discussion of the foundations of sequential decision making with imprecise probabilities, pointing out key questions that remain unanswered. These results have a direct influence upon (though are not restricted to) the model of interest in this work, Markov decision processes with imprecise probabilities. Second, we propose three algorithms for Markov decision processes with imprecise probabilities based on mathematical programming (optimization). Third, we develop ideas proposed by Trevizan, Cozman, and de Barros (2008) on the use of variants of Real-Time Dynamic Programming to solve probabilistic planning problems described in an extension of the Probabilistic Planning Domain Definition Language (PPDDL).
APA, Harvard, Vancouver, ISO, and other styles
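As a rough illustration of planning with imprecise probabilities, the sketch below runs a Gamma-maximin value iteration that backs each action up against the worst transition model in a small, invented credal set. The thesis's algorithms rest on mathematical programming and relational representations, so this is only a conceptual stand-in.

```python
# Illustrative Gamma-maximin value iteration for an MDP with imprecise probabilities:
# each action has a finite set of plausible transition models, and the backup is
# taken against the worst one.  The two-state model and its numbers are invented.
import numpy as np

gamma = 0.9
# credal[a] = list of candidate transition matrices P[s, s'] for action a
credal = {
    0: [np.array([[0.8, 0.2], [0.3, 0.7]]),
        np.array([[0.6, 0.4], [0.5, 0.5]])],
    1: [np.array([[0.4, 0.6], [0.2, 0.8]]),
        np.array([[0.3, 0.7], [0.1, 0.9]])],
}
R = np.array([[0.0, 1.0],     # R[a, s]
              [0.5, 0.8]])

V = np.zeros(2)
for _ in range(500):
    Q = np.array([
        [R[a, s] + gamma * min(P[s] @ V for P in credal[a]) for s in range(2)]
        for a in range(2)
    ])
    V = Q.max(axis=0)

print("robust (Gamma-maximin) values:", V, "policy:", Q.argmax(axis=0))
```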

Books on the topic "Sequential decision processes"

1

Villemeur, Etienne Billette de. Sequential decision processes make behavioural types endogenous. Florence: European University Institute, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Villemeur, Étienne Billette de. Sequential decision processes make behavioural types endogenous. Badia Fiesolana, San Domenico: European University Institute, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Alent'eva, Tat'yana. Public Opinion in the United States on the Eve of the Civil War (1850–1861). INFRA-M Academic Publishing LLC, 2020. http://dx.doi.org/10.12737/1068789.

Full text
Abstract:
The monograph is the first to examine American public opinion as a major factor in social and political life during the period in which the Civil War (1861–1865) was brewing. Particular value comes from its study of the struggle in both the South and the North and its consideration of the formation of two socio-cultural models. Against the broad canvas of socio-economic and political history, the monograph analyses the state and development of public opinion in the United States, proceeding sequentially from the Compromise of 1850, the small civil war in Kansas, and the uprising of John Brown, through the maturing of the "inevitable conflict" and the secession of the southern states, to the formation of the Southern Confederacy and the Civil War. It reveals the fierce struggle that accompanied the adoption of the Kansas-Nebraska compromise and the Supreme Court's 1857 decision in the Dred Scott case, which annulled the famous Missouri Compromise. Special attention is paid to the formation of the Republican Party and the presidential elections of 1856 and 1860. The book shows how propaganda and manipulative techniques were used to incite hatred between citizens of the same country. The body of facts gleaned from primary sources, especially the materials about these manipulations, offers a look behind the scenes of the politics that led to the outbreak of the Civil War in the United States and a deeper understanding of its causes. For students of historical faculties and departments of sociology and political science, and anyone interested in American history.
APA, Harvard, Vancouver, ISO, and other styles
4

Howes, Andrew, Xiuli Chen, Aditya Acharya, and Richard L. Lewis. Interaction as an Emergent Property of a Partially Observable Markov Decision Process. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198799603.003.0011.

Full text
Abstract:
In this chapter we explore the potential advantages of modeling the interaction between a human and a computer as a consequence of a Partially Observable Markov Decision Process (POMDP) that models human cognition. POMDPs can be used to model human perceptual mechanisms, such as human vision, as partial (uncertain) observers of a hidden state. In general, POMDPs permit a rigorous definition of interaction as the outcome of a reward-maximizing stochastic sequential decision process. They have been shown to explain interaction between a human and an environment in a range of scenarios, including visual search, interactive search and sense-making. The chapter uses these scenarios to illustrate the explanatory power of POMDPs in HCI. It also shows that POMDPs embrace the embodied, ecological and adaptive nature of human interaction.
APA, Harvard, Vancouver, ISO, and other styles
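The basic ingredient of the POMDP view sketched above is a Bayesian belief update over the hidden state. The snippet below shows that update for an invented two-state "where is the target" model with a noisy observation channel; it is illustrative only and is not the chapter's model.

```python
# A discrete Bayesian belief update, the core step of any POMDP agent.
# The two-state model and its observation noise are invented for illustration.
import numpy as np

# b[s]: belief over hidden states; T[a][s, s']: transition model for action a;
# O[a][s', o]: probability of observation o after action a lands in state s'.
T = {"fixate": np.array([[1.0, 0.0], [0.0, 1.0]])}   # static hidden state
O = {"fixate": np.array([[0.8, 0.2],                 # noisy glimpse: 80% correct
                         [0.2, 0.8]])}

def belief_update(b, action, obs):
    predicted = b @ T[action]                  # prediction step
    posterior = predicted * O[action][:, obs]  # correction step
    return posterior / posterior.sum()         # normalise

b = np.array([0.5, 0.5])
for obs in (0, 0, 1, 0):                       # a sequence of noisy glimpses
    b = belief_update(b, "fixate", obs)
    print("belief:", b)
```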
5

Lepora, Nathan F. Decision making. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0028.

Full text
Abstract:
Decision making is the process by which alternatives are deliberated and chosen based on the values and goals of the decision maker. In this chapter, we describe recent progress in understanding how living organisms make decisions and the implications for engineering artificial systems with decision-making capabilities. Nature appears to re-use design principles for decision making across a hierarchy of organizational levels, from cells to organisms to entire populations. One common principle is that decision formation is realized by accumulating sensory evidence up to a threshold, approximating the optimal statistical technique of sequential analysis. Sequential analysis has applications spanning from cryptography to clinical drug testing. Artificial perception based on sequential analysis has advanced robot capabilities, enabling robust sensing under uncertainty. Future applications could lead to individual robots, or artificial swarms, that perceive and interact with complex environments with an ease and robustness now achievable only by living organisms.
APA, Harvard, Vancouver, ISO, and other styles
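The evidence-accumulation-to-threshold idea mentioned above is captured by Wald's sequential probability ratio test. The sketch below runs an SPRT on simulated Bernoulli data; the hypotheses, error rates, and data stream are invented for illustration.

```python
# A minimal sequential probability ratio test (SPRT): accumulate the log-likelihood
# ratio one observation at a time and stop when it crosses a decision threshold.
import math, random

random.seed(0)
p0, p1 = 0.4, 0.6                       # H0: p = 0.4  vs  H1: p = 0.6
alpha, beta = 0.05, 0.05                # target error probabilities
upper = math.log((1 - beta) / alpha)    # accept H1 above this log-likelihood ratio
lower = math.log(beta / (1 - alpha))    # accept H0 below this one

true_p = 0.6                            # the simulated data actually follow H1
llr, n = 0.0, 0
while lower < llr < upper:
    x = 1 if random.random() < true_p else 0
    llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
    n += 1

print("decision:", "H1" if llr >= upper else "H0", "after", n, "observations")
```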
6

Ratcliff, Roger, and Philip Smith. Modeling Simple Decisions and Applications Using a Diffusion Model. Edited by Jerome R. Busemeyer, Zheng Wang, James T. Townsend, and Ami Eidels. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199957996.013.3.

Full text
Abstract:
The diffusion model is one of the major sequential-sampling models for two-choice decision-making and choice response time in psychology. The model conceives of decision-making as a process in which noisy evidence is accumulated until one of two response criteria is reached and the associated response is made. The criteria represent the amount of evidence needed to make each decision and reflect the decision maker’s response biases and speed-accuracy trade-off settings. In this chapter we examine the application of the diffusion model in a variety of different settings. We discuss the optimality of the model and review its applications to a number of cognitive tasks, including perception, memory, and language tasks. We also consider its applications to normal and special populations, to the cognitive foundations of individual differences, to value-based decisions, and its role in understanding the neural basis of decision-making.
APA, Harvard, Vancouver, ISO, and other styles
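A minimal simulation of the two-boundary diffusion process described above, using Euler-Maruyama steps. The drift, boundary separation, and noise values are illustrative defaults rather than estimates fitted to any dataset.

```python
# Euler-Maruyama simulation of a two-boundary drift-diffusion decision model:
# noisy evidence accumulates until it hits the upper or lower response boundary.
import random

random.seed(0)
drift, noise, dt = 0.3, 1.0, 0.001      # evidence gain, diffusion coefficient, step
a, z = 1.0, 0.5                         # upper boundary and starting point

def one_trial():
    x, t = z, 0.0
    while 0.0 < x < a:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return ("upper" if x >= a else "lower"), t

trials = [one_trial() for _ in range(2000)]
p_upper = sum(1 for choice, _ in trials if choice == "upper") / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"P(upper) = {p_upper:.3f}, mean decision time = {mean_rt:.3f} s")
```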

Book chapters on the topic "Sequential decision processes"

1

de Moor, Oege. "A generic program for sequential decision processes." In Programming Languages: Implementations, Logics and Programs, 1–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0026809.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Choi, Samuel P. M., Dit-Yan Yeung, and Nevin L. Zhang. "Hidden-Mode Markov Decision Processes for Nonstationary Sequential Decision Making." In Sequence Learning, 264–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44565-x_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dvoretzky, A., J. Kiefer, and J. Wolfowitz. "Sequential Decision Problems for Processes with Continuous Time Parameter. Testing Hypotheses." In Collected Papers, 90–100. New York, NY: Springer US, 1985. http://dx.doi.org/10.1007/978-1-4613-8505-9_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Troffaes, Matthias C. M., Nathan Huntley, and Ricardo Shirota Filho. "Sequential Decision Processes under Act-State Independence with Arbitrary Choice Functions." In Communications in Computer and Information Science, 98–107. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14055-6_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dvoretzky, A., J. Kiefer, and J. Wolfowitz. "Sequential Decision Problems for Processes with Continuous Time Parameter. Problems of Estimation." In Collected Papers, 101–13. New York, NY: Springer US, 1985. http://dx.doi.org/10.1007/978-1-4613-8505-9_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dvoretzky, A., J. Kiefer, and J. Wolfowitz. "Corrections to “Sequential Decision Problems for Processes with Continuous Time Parameter. Testing Hypotheses”." In Collected Papers, 100. New York, NY: Springer US, 1985. http://dx.doi.org/10.1007/978-1-4613-8505-9_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schmidt, Klaus D. "A Sequential Lebesgue-Radon-Nikodym Theorem and the Lebesgue Decomposition of Martingales." In Transactions of the Tenth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, 285–92. Dordrecht: Springer Netherlands, 1988. http://dx.doi.org/10.1007/978-94-010-9913-4_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Junges, Sebastian, Nils Jansen, and Sanjit A. Seshia. "Enforcing Almost-Sure Reachability in POMDPs." In Computer Aided Verification, 602–25. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_28.

Full text
Abstract:
Partially-Observable Markov Decision Processes (POMDPs) are a well-known stochastic model for sequential decision making under limited information. We consider the EXPTIME-hard problem of synthesising policies that almost-surely reach some goal state without ever visiting a bad state. In particular, we are interested in computing the winning region, that is, the set of system configurations from which a policy exists that satisfies the reachability specification. A direct application of such a winning region is the safe exploration of POMDPs by, for instance, restricting the behavior of a reinforcement learning agent to the region. We present two algorithms: a novel SAT-based iterative approach and a decision-diagram based alternative. The empirical evaluation demonstrates the feasibility and efficacy of the approaches.
APA, Harvard, Vancouver, ISO, and other styles
9

Khoza, Sizwile, Dewald van Niekerk, and Livhuwani Nemakonde. "Rethinking Climate-Smart Agriculture Adoption for Resilience-Building Among Smallholder Farmers: Gender-Sensitive Adoption Framework." In African Handbook of Climate Change Adaptation, 677–98. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-45106-6_130.

Full text
Abstract:
This study identifies the need for holistic understanding of gender-differentiated climate-smart agriculture (CSA) adoption by smallholder farmers who are at the frontline of climate-related hazards and disasters in Africa. CSA adoption is predominantly informed by a parochial linear approach to farmers' decision-making processes. Resilience-building and adaptation, which forms the second pillar of CSA and can enhance understanding of the CSA adoption nuances at farmer level, often receives less attention in adoption investigations. To appreciate CSA adoption from a resilience perspective, this study focused on resilience-building based on the interlinkage between CSA and disaster risk reduction and applied a resilience perspective in a gendered approach to CSA adoption by smallholder farmers. Through primary data collected in an exploratory sequential mixed method design, the study presents a proposed normative gender-sensitive CSA adoption framework to guide CSA implementation strategies and policies. The framework is anchored in resilience thinking, and some of its key components include gender-sensitive CSA technology development, risk-informed decision-making by heterogeneous smallholder farmers, gender-sensitive enabling factors, resilience strategies, gender equitable and equal ownership, and control of and access to resilience capitals. The proposed framework can be used to improve CSA adoption by smallholder farmers by addressing gendered vulnerability and inequality that influence low adoption.
APA, Harvard, Vancouver, ISO, and other styles
10

Egli, Dennis B. "Growth of crop communities and the production of yield." In Applied crop physiology: understanding the fundamentals of grain crop management, 50–88. Wallingford: CABI, 2021. http://dx.doi.org/10.1079/9781789245950.0003.

Full text
Abstract:
This chapter focuses on developing a general model of community growth and the production of yield by grain crops. Murata's (1969) three-stage system provides such a model. It is useful because it is simple (only three stages), it applies equally well to all grain crop species (although there is some variation among species in minor details), it clearly identifies the sequential nature of the yield production process, and its three stages relate to the primary drivers of yield production at the community level. First, the crop must accumulate the leaf area that drives community photosynthesis (Stage I), then seed number is determined (Stage II), and finally seed filling occurs (Stage III) and the production of yield is finished. High yield for any variety/location combination requires, at a minimum: (i) the production of enough leaf area index (LAI) during Stage I to maximize solar radiation interception and community photosynthesis; and (ii) an absence of stress during Stage II to maximize seed number and during Stage III to allow the seeds to fill to their maximum potential size. The scheme provides a powerful framework for thinking about how management decisions and environmental conditions affect yield.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Sequential decision processes"

1

Chiu, Po-Hsiang, and Manfred Huber. "Clustering Similar Actions in Sequential Decision Processes." In 2009 International Conference on Machine Learning and Applications (ICMLA). IEEE, 2009. http://dx.doi.org/10.1109/icmla.2009.98.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mazo, Manuel, and Ming Cao. "Design of reward structures for sequential decision-making processes using symbolic analysis." In 2013 American Control Conference (ACC). IEEE, 2013. http://dx.doi.org/10.1109/acc.2013.6580516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kent, David, Siddhartha Banerjee, and Sonia Chernova. "Learning Sequential Decision Tasks for Robot Manipulation with Abstract Markov Decision Processes and Demonstration-Guided Exploration." In 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids). IEEE, 2018. http://dx.doi.org/10.1109/humanoids.2018.8624949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shi, Hanyu, and Fengqi You. "Adaptive surrogate-based algorithm for integrated scheduling and dynamic optimization of sequential batch processes." In 2015 54th IEEE Conference on Decision and Control (CDC). IEEE, 2015. http://dx.doi.org/10.1109/cdc.2015.7403372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhao, Chunhui, and Youxian Sun. "Step-wise sequential phase partition algorithm and on-line monitoring strategy for multiphase batch processes." In 2013 25th Chinese Control and Decision Conference (CCDC). IEEE, 2013. http://dx.doi.org/10.1109/ccdc.2013.6561559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Baek, Stanley S., Hyukseong Kwon, Josiah A. Yoder, and Daniel Pack. "Optimal path planning of a target-following fixed-wing UAV using sequential decision processes." In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013). IEEE, 2013. http://dx.doi.org/10.1109/iros.2013.6696775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nellippallil, Anand Balu, Kevin N. Song, Chung-Hyun Goh, Pramod Zagade, B. P. Gautham, Janet K. Allen, and Farrokh Mistree. "A Goal Oriented, Sequential Process Design of a Multi-Stage Hot Rod Rolling System." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59402.

Full text
Abstract:
The steel manufacturing process is characterized by the need to develop high-quality products rapidly and at low cost through effective and judicious use of available resources. Identifying solutions that meet the conflicting, commercially imperative goals of such process chains is hard using traditional search techniques. The complexity of such a problem increases due to the presence of a large number of design variables, constraints and bounds, conflicting goals, and the complex sequential relationships between the different stages of manufacturing. A classic example of such a manufacturing problem is the design of a rolling system for manufacturing a steel rod. This is a sequential process in which information flows from the first rolling pass to the last, and the decisions made at the first pass influence the decisions made at later passes. In this paper, we present a method based on well-established empirical models and response surface models developed through (finite element based) simulation experiments, along with the compromise Decision Support Problem (cDSP) construct, to support integrated information flow across the different stages of a multi-stage hot rod rolling system. The method is goal-oriented because design decisions are first made based on the end requirements identified for the last rolling pass, and these decisions are then propagated to the preceding rolling passes in reverse sequential order to design the entire rolling process chain. We illustrate the efficacy of the method by carrying out the design of a multi-stage rolling system. We formulate the cDSP for the second and fourth passes of a four-pass rolling chain. The stages are designed by sequentially passing on the design information obtained after exercising the cDSP for the last pass under different scenarios and identifying the best combination of design variables that satisfies the conflicting goals. The cDSP for the second pass supports integrated information flow from the fourth pass to the first and helps meet the goals imposed by the fourth and third passes already designed. The end goals identified for the fourth pass are minimization of the ovality of the rod (quality), maximization of throughput (productivity), and minimization of rolling load (performance and cost). The method can be instantiated for other multi-stage manufacturing processes, such as the steelmaking process chain with its several unit operations. In the future, we plan to use the method to support decision workflows in steelmaking by formulating cDSPs for the multiple unit operations involved and linking them as a decision network using coupled cDSPs.
APA, Harvard, Vancouver, ISO, and other styles
8

Gani, Abdullah, Omar Zakaria, and Nor Badrul Anuar Jumaat. "A Markov Decision Process Model for Traffic Prioritisation Provisioning." In InSITE 2004: Informing Science + IT Education Conference. Informing Science Institute, 2004. http://dx.doi.org/10.28945/2750.

Full text
Abstract:
This paper presents an application of the Markov Decision Process (MDP) to the provision of traffic prioritisation in best-effort networks. MDP was used because it is a standard, general formalism for modelling stochastic, sequential decision problems. The implementation of traffic prioritisation involves a series of decision-making processes by which packets are marked and classified before being despatched to their destinations. The application of MDP was driven by the objective of ensuring that higher-priority packets are not delayed by lower-priority ones. The MDP is believed to be applicable to improving traffic prioritisation arbitration.
APA, Harvard, Vancouver, ISO, and other styles
9

Shergadwala, Murtuza, Ilias Bilionis, and Jitesh H. Panchal. "Students As Sequential Decision-Makers: Quantifying the Impact of Problem Knowledge and Process Deviation on the Achievement of Their Design Problem Objective." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85537.

Full text
Abstract:
Factors such as a student’s knowledge of the design problem and their deviation from a design process impact the achievement of their design problem objective. Typically, an instructor provides students with qualitative assessments of such factors. To provide accurate assessments, there is a need to quantify the impact of such factors in a design process. Moreover, design processes are iterative in nature. Therefore, the research question addressed in this study is, How can we quantify the impact of a student’s problem knowledge and their deviation from a design process, on the achievement of their design problem objective, in successive design iterations? We illustrate an approach in the context of a decision-making scenario. In the scenario, a student makes sequential decisions to optimize a mathematically unknown design objective with given constraints. Consequently, we utilize a decision-making model to abstract their design process. Their problem knowledge is quantified as their belief about the feasibility of the design space via a probability distribution. Their deviation from the decision-making model is quantified by introducing uncertainty in the model. We simulate cases where they have a combination of high (or low) knowledge of the design problem and high (or low) deviation in their design process. The results of our simulation study indicate that if students have a high (low) deviation from the modeled design process then we cannot (can) infer their knowledge of the design problem based on their problem objective achievement.
APA, Harvard, Vancouver, ISO, and other styles
10

Karandikar, H. M., J. Rao, and F. Mistree. "Sequential vs. Concurrent Formulations for the Synthesis of Engineering Designs." In ASME 1991 Design Technical Conferences. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/detc1991-0139.

Full text
Abstract:
Modeling and gaining an understanding of the interaction between information from design and from manufacturing is an important step in developing techniques and methods for concurrent engineering. In this paper, the role of optimization techniques in the product development process in a concurrent engineering framework is examined. Through arguments based on optimization theory, it is demonstrated that a concurrent approach to designing-for-manufacture problems is superior to a sequential one. By extension, this applies to designing for other life-cycle processes. Results which illustrate the point are presented from a comprehensive, non-textbook case study in design using composite materials, dealing with the integration of analysis, dimensional synthesis, and manufacturing. The case study is tackled by using Decision Support Problems. The focus in the paper is on understanding the ramifications of considering life-cycle processes concurrently.
APA, Harvard, Vancouver, ISO, and other styles