Journal articles on the topic 'Sequential decision processes'

Consult the top 50 journal articles for your research on the topic 'Sequential decision processes.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Alagoz, Oguzhan, Heather Hsu, Andrew J. Schaefer, and Mark S. Roberts. "Markov Decision Processes: A Tool for Sequential Decision Making under Uncertainty." Medical Decision Making 30, no. 4 (2009): 474–83. http://dx.doi.org/10.1177/0272989x09353194.

Full text
Abstract:
We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
APA, Harvard, Vancouver, ISO, and other styles
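
For readers new to the formalism the tutorial above introduces, here is a minimal, illustrative sketch of finite-horizon backward induction (value iteration) on a toy two-state treatment model. The states, actions, and numbers are invented for illustration and are not taken from the cited paper.

```python
import numpy as np

# Toy finite-horizon MDP solved by backward induction (value iteration).
# All states, actions, probabilities, and rewards are hypothetical.
# States: 0 = "healthy", 1 = "sick"; actions: 0 = "wait", 1 = "treat".
n_states, n_actions, horizon = 2, 2, 10

# P[a, s, s2] = probability of moving from state s to s2 under action a.
P = np.array([
    [[0.90, 0.10],   # wait, from healthy
     [0.20, 0.80]],  # wait, from sick
    [[0.95, 0.05],   # treat, from healthy
     [0.60, 0.40]],  # treat, from sick
])
# R[s, a] = immediate reward (e.g., quality-adjusted time per stage).
R = np.array([
    [1.0, 0.8],  # healthy: treatment carries a small utility cost
    [0.3, 0.1],  # sick: lower utility, treatment has side effects
])

# V[t, s] = optimal expected total reward from stage t onward in state s.
V = np.zeros((horizon + 1, n_states))
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    Q = R + np.einsum("asn,n->sa", P, V[t + 1])  # Q[s, a] = R[s, a] + E[V at t+1]
    V[t] = Q.max(axis=1)
    policy[t] = Q.argmax(axis=1)

print("Optimal first-stage action per state (0=wait, 1=treat):", policy[0])
print("Expected total reward from each starting state:", V[0])
```
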
2

Sobel, Matthew J., and Wei Wei. "Myopic Solutions of Homogeneous Sequential Decision Processes." Operations Research 58, no. 4-part-2 (2010): 1235–46. http://dx.doi.org/10.1287/opre.1090.0767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

El Chamie, Mahmoud, Dylan Janak, and Behçet Açıkmeşe. "Markov decision processes with sequential sensor measurements." Automatica 103 (May 2019): 450–60. http://dx.doi.org/10.1016/j.automatica.2019.02.026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Feinberg, Eugene A. "On essential information in sequential decision processes." Mathematical Methods of Operations Research 62, no. 3 (2005): 399–410. http://dx.doi.org/10.1007/s00186-005-0035-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Maruyama, Yukihiro. "Strong representation theorems for bitone sequential decision processes." Optimization Methods and Software 18, no. 4 (2003): 475–89. http://dx.doi.org/10.1080/1055678031000154707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Milani Fard, M., and J. Pineau. "Non-Deterministic Policies in Markovian Decision Processes." Journal of Artificial Intelligence Research 40 (January 5, 2011): 1–24. http://dx.doi.org/10.1613/jair.3175.

Full text
Abstract:
Markovian processes have long been used to model stochastic environments. Reinforcement learning has emerged as a framework to solve sequential planning and decision-making problems in such environments. In recent years, attempts were made to apply methods from reinforcement learning to construct decision support systems for action selection in Markovian environments. Although conventional methods in reinforcement learning have proved to be useful in problems concerning sequential decision-making, they cannot be applied in their current form to decision support systems, such as those in medical domains, as they suggest policies that are often highly prescriptive and leave little room for the user's input. Without the ability to provide flexible guidelines, it is unlikely that these methods can gain ground with users of such systems. This paper introduces the new concept of non-deterministic policies to allow more flexibility in the user's decision-making process, while constraining decisions to remain near optimal solutions. We provide two algorithms to compute non-deterministic policies in discrete domains. We study the output and running time of these methods on a set of synthetic and real-world problems. In an experiment with human subjects, we show that humans assisted by hints based on non-deterministic policies outperform both human-only and computer-only agents in a web navigation task.
APA, Harvard, Vancouver, ISO, and other styles
7

Maruyama, Yukihiro. "Super-Strong Representation Theorems for Nondeterministic Sequential Decision Processes." Journal of the Operations Research Society of Japan 60, no. 2 (2017): 136–55. http://dx.doi.org/10.15807/jorsj.60.136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hantula, Donald A., and Charles R. Crowell. "Intermittent Reinforcement and Escalation Processes in Sequential Decision Making." Journal of Organizational Behavior Management 14, no. 2 (1994): 7–36. http://dx.doi.org/10.1300/j075v14n02_03.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Canbolat, Pelin G., and Uriel G. Rothblum. "(Approximate) iterated successive approximations algorithm for sequential decision processes." Annals of Operations Research 208, no. 1 (2012): 309–20. http://dx.doi.org/10.1007/s10479-012-1073-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mahadevan, Sridhar. "Representation Discovery in Sequential Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1718–21. http://dx.doi.org/10.1609/aaai.v24i1.7766.

Full text
Abstract:
Automatically constructing novel representations of tasks from analysis of state spaces is a longstanding fundamental challenge in AI. I review recent progress on this problem for sequential decision making tasks modeled as Markov decision processes. Specifically, I discuss three classes of representation discovery problems: finding functional, state, and temporal abstractions. I describe solution techniques varying along several dimensions: diagonalization or dilation methods using approximate or exact transition models; reward-specific vs. reward-invariant methods; global vs. local representation construction methods; multiscale vs. flat discovery methods; and finally, orthogonal vs. redundant representation discovery methods. I conclude by describing a number of open problems for future work.
APA, Harvard, Vancouver, ISO, and other styles
11

Ying, Ming-Sheng, Yuan Feng, and Sheng-Gang Ying. "Optimal Policies for Quantum Markov Decision Processes." International Journal of Automation and Computing 18, no. 3 (2021): 410–21. http://dx.doi.org/10.1007/s11633-021-1278-z.

Full text
Abstract:
Markov decision process (MDP) offers a general framework for modelling sequential decision making where outcomes are random. In particular, it serves as a mathematical framework for reinforcement learning. This paper introduces an extension of MDP, namely quantum MDP (qMDP), that can serve as a mathematical model of decision making about quantum systems. We develop dynamic programming algorithms for policy evaluation and finding optimal policies for qMDPs in the case of finite-horizon. The results obtained in this paper provide some useful mathematical tools for reinforcement learning techniques applied to the quantum world.
APA, Harvard, Vancouver, ISO, and other styles
12

Peng, Yanyan, and Xinwang Liu. "Bidding Decision in Land Auction Using Prospect Theory." International Journal of Strategic Property Management 19, no. 2 (2015): 186–205. http://dx.doi.org/10.3846/1648715x.2015.1047914.

Full text
Abstract:
Land auction is widely practiced in company and government decisions, especially in China. Bidders are always faced with two or more auctions in the period of a decision cycle. The outcome of the auction is under high risk. The bidder's risk attitude and preference will have a great influence on his/her bidding price. Prospect theory is currently the main descriptive theory of decision under risk. In this paper, we will consider the preferences of the decision-makers in land bidding decisions with the multi-attribute additive utility and reference point method in cumulative prospect theory. Three land auction models are proposed based on the appearance time of the land auctions. The simultaneous model uses cumulative prospect theory without considering the relationships between the auctions. The time sequential model involves the exchange auction decisions at different times with the third-generation prospect theory. The event sequential model further considers the reference point prediction in sequential land auction decisions. The three models can help the decision-makers make better bidding price decisions when they are faced with several land auctions in the period of a decision cycle. A case study illustrates the processes and results of our approaches.
APA, Harvard, Vancouver, ISO, and other styles
13

Sin, Yeonju, HeeYoung Seon, Yun Kyoung Shin, Oh-Sang Kwon, and Dongil Chung. "Subjective optimality in finite sequential decision-making." PLOS Computational Biology 17, no. 12 (2021): e1009633. http://dx.doi.org/10.1371/journal.pcbi.1009633.

Full text
Abstract:
Many decisions in life are sequential and constrained by a time window. Although mathematically derived optimal solutions exist, it has been reported that humans often deviate from making optimal choices. Here, we used a secretary problem, a classic example of finite sequential decision-making, and investigated the mechanisms underlying individuals’ suboptimal choices. Across three independent experiments, we found that a dynamic programming model comprising subjective value function explains individuals’ deviations from optimality and predicts the choice behaviors under fewer and more opportunities. We further identified that pupil dilation reflected the levels of decision difficulty and subsequent choices to accept or reject the stimulus at each opportunity. The value sensitivity, a model-based estimate that characterizes each individual’s subjective valuation, correlated with the extent to which individuals’ physiological responses tracked stimuli information. Our results provide model-based and physiological evidence for subjective valuation in finite sequential decision-making, rediscovering human suboptimality in subjectively optimal decision-making processes.
APA, Harvard, Vancouver, ISO, and other styles
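
The study above is built around the classical secretary problem. As general background (not the authors' subjective-value model), the sketch below computes the standard optimal observe-then-accept cutoff and its success probability; the number of opportunities is an arbitrary choice for the example.

```python
from fractions import Fraction

def secretary_success_prob(n, cutoff):
    """Probability of picking the single best of n applicants when the first
    (cutoff - 1) are observed and rejected, and the next applicant who beats
    all predecessors is accepted (classical secretary problem)."""
    if cutoff == 1:
        return Fraction(1, n)  # accept the very first applicant
    return Fraction(cutoff - 1, n) * sum(Fraction(1, i - 1) for i in range(cutoff, n + 1))

n = 20  # hypothetical number of sequential opportunities
probs = {r: secretary_success_prob(n, r) for r in range(1, n + 1)}
best_cutoff = max(probs, key=probs.get)
print(f"n={n}: observe the first {best_cutoff - 1} applicants, then accept the first record-setter;")
print(f"success probability = {float(probs[best_cutoff]):.4f} (tends to 1/e ~ 0.368 for large n)")
```
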
14

Messias, João, Matthijs Spaan, and Pedro Lima. "GSMDPs for Multi-Robot Sequential Decision-Making." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 1408–14. http://dx.doi.org/10.1609/aaai.v27i1.8550.

Full text
Abstract:
Markov Decision Processes (MDPs) provide an extensive theoretical background for problems of decision-making under uncertainty. In order to maintain computational tractability, however, real-world problems are typically discretized in states and actions as well as in time. Assuming synchronous state transitions and actions at fixed rates may result in models which are not strictly Markovian, or where agents are forced to idle between actions, losing their ability to react to sudden changes in the environment. In this work, we explore the application of Generalized Semi-Markov Decision Processes (GSMDPs) to a realistic multi-robot scenario. A case study will be presented in the domain of cooperative robotics, where real-time reactivity must be preserved, and synchronous discrete-time approaches are therefore sub-optimal. This case study is tested on a team of real robots, and also in realistic simulation. By allowing asynchronous events to be modeled over continuous time, the GSMDP approach is shown to provide greater solution quality than its discrete-time counterparts, while still being approximately solvable by existing methods.
APA, Harvard, Vancouver, ISO, and other styles
15

Mármol, Amparo M., Justo Puerto, and Francisco R. Fernández. "Sequential incorporation of imprecise information in multiple criteria decision processes." European Journal of Operational Research 137, no. 1 (2002): 123–33. http://dx.doi.org/10.1016/s0377-2217(01)00082-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Khader, Patrick H., Thorsten Pachur, Lilian A. E. Weber, and Kerstin Jost. "Neural Signatures of Controlled and Automatic Retrieval Processes in Memory-based Decision-making." Journal of Cognitive Neuroscience 28, no. 1 (2016): 69–83. http://dx.doi.org/10.1162/jocn_a_00882.

Full text
Abstract:
Decision-making often requires retrieval from memory. Drawing on the neural ACT-R theory [Anderson, J. R., Fincham, J. M., Qin, Y., & Stocco, A. A central circuit of the mind. Trends in Cognitive Sciences, 12, 136–143, 2008] and other neural models of memory, we delineated the neural signatures of two fundamental retrieval aspects during decision-making: automatic and controlled activation of memory representations. To disentangle these processes, we combined a paradigm developed to examine neural correlates of selective and sequential memory retrieval in decision-making with a manipulation of associative fan (i.e., the decision options were associated with one, two, or three attributes). The results show that both the automatic activation of all attributes associated with a decision option and the controlled sequential retrieval of specific attributes can be traced in material-specific brain areas. Moreover, the two facets of memory retrieval were associated with distinct activation patterns within the frontoparietal network: The dorsolateral prefrontal cortex was found to reflect increasing retrieval effort during both automatic and controlled activation of attributes. In contrast, the superior parietal cortex only responded to controlled retrieval, arguably reflecting the sequential updating of attribute information in working memory. This dissociation in activation pattern is consistent with ACT-R and constitutes an important step toward a neural model of the retrieval dynamics involved in memory-based decision-making.
APA, Harvard, Vancouver, ISO, and other styles
17

Chen, Haiyang, Hyung Jin Chang, and Andrew Howes. "Apparently Irrational Choice as Optimal Sequential Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (2021): 792–800. http://dx.doi.org/10.1609/aaai.v35i1.16161.

Full text
Abstract:
In this paper, we propose a normative approach to modeling apparently irrational human decision making (cognitive biases) that makes use of inherently rational computational mechanisms. We view preferential choice tasks as sequential decision making problems and formulate them as Partially Observable Markov Decision Processes (POMDPs). The resulting sequential decision model learns what information to gather about which options, whether to calculate option values or make comparisons between options and when to make a choice. We apply the model to choice problems where context is known to influence human choice, an effect that has been taken as evidence that human cognition is irrational. Our results show that the new model approximates a bounded optimal cognitive policy and makes quantitative predictions that correspond well to evidence about human choice. Furthermore, the model uses context to help infer which option has a maximum expected value while taking into account computational cost and cognitive limits. In addition, it predicts when, and explains why, people stop evidence accumulation and make a decision. We argue that the model provides evidence that apparent human irrationalities are emergent consequences of processes that prefer higher value (rational) policies.
APA, Harvard, Vancouver, ISO, and other styles
18

Khan, Omar, Pascal Poupart, and James Black. "Minimal Sufficient Explanations for Factored Markov Decision Processes." Proceedings of the International Conference on Automated Planning and Scheduling 19 (October 16, 2009): 194–200. http://dx.doi.org/10.1609/icaps.v19i1.13365.

Full text
Abstract:
Explaining policies of Markov Decision Processes (MDPs) is complicated due to their probabilistic and sequential nature. We present a technique to explain policies for factored MDPs by populating a set of domain-independent templates. We also present a mechanism to determine a minimal set of templates that, viewed together, completely justify the policy. Our explanations can be generated automatically at run-time with no additional effort required from the MDP designer. We demonstrate our technique using the problems of advising undergraduate students in their course selection and assisting people with dementia in completing the task of handwashing. We also evaluate our explanations for course-advising through a user study involving students.
APA, Harvard, Vancouver, ISO, and other styles
19

Farina, Gabriele, Christian Kroer, and Tuomas Sandholm. "Online Convex Optimization for Sequential Decision Processes and Extensive-Form Games." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1917–25. http://dx.doi.org/10.1609/aaai.v33i01.33011917.

Full text
Abstract:
Regret minimization is a powerful tool for solving large-scale extensive-form games. State-of-the-art methods rely on minimizing regret locally at each decision point. In this work we derive a new framework for regret minimization on sequential decision problems and extensive-form games with general compact convex sets at each decision point and general convex losses, as opposed to prior work which has been for simplex decision points and linear losses. We call our framework laminar regret decomposition. It generalizes the CFR algorithm to this more general setting. Furthermore, our framework enables a new proof of CFR even in the known setting, which is derived from a perspective of decomposing polytope regret, thereby leading to an arguably simpler interpretation of the algorithm. Our generalization to convex compact sets and convex losses allows us to develop new algorithms for several problems: regularized sequential decision making, regularized Nash equilibria in zero-sum extensive-form games, and computing approximate extensive-form perfect equilibria. Our generalization also leads to the first regret-minimization algorithm for computing reduced-normal-form quantal response equilibria based on minimizing local regrets. Experiments show that our framework leads to algorithms that scale at a rate comparable to the fastest variants of counterfactual regret minimization for computing Nash equilibrium, and therefore our approach leads to the first algorithm for computing quantal response equilibria in extremely large games. Our algorithms for (quadratically) regularized equilibrium finding are orders of magnitude faster than the fastest algorithms for Nash equilibrium finding; this suggests regret-minimization algorithms based on decreasing regularization for Nash equilibrium finding as future work. Finally we show that our framework enables a new kind of scalable opponent exploitation approach.
APA, Harvard, Vancouver, ISO, and other styles
20

Jung, Oh-Hyun. "Are Sequential Decision-Making Processes of Tourists and Consumers the Same?" Culinary Science & Hospitality Research 23, no. 6 (2017): 161–72. http://dx.doi.org/10.20878/cshr.2017.23.6.018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chang, H. S. "A Model for Multi-timescaled Sequential Decision-making Processes with Adversary." Mathematical and Computer Modelling of Dynamical Systems 10, no. 3-4 (2004): 287–302. http://dx.doi.org/10.1080/13873950412331335261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kneller, Wendy, Amina Memon, and Sarah Stevenage. "Simultaneous and sequential lineups: decision processes of accurate and inaccurate eyewitnesses." Applied Cognitive Psychology 15, no. 6 (2001): 659–71. http://dx.doi.org/10.1002/acp.739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Jean, Mbihi, Owoundi Etouké Paul, and Biyobo Obono Arnaud. "Matlab-Based Modelling and Dynamic Optimization of a Class of Sequential Decision Processes." European Journal of Advances in Engineering and Technology 9, no. 12 (2022): 90–100. https://doi.org/10.5281/zenodo.10647148.

Full text
Abstract:
In this paper, the discrete models of two types of sequential decision processes (i.e. open and closed graph topologies) are developed. Under the adopted counting policy of nodes, it is shown that a sequential open graph topology with n levels along the x-axis involves a total of n(n+1)/2 states (nodes). However, for a closed graph topology with 2n-1 levels along the x-axis, it is shown also that the total number of states (nodes) is n². In addition, for both types of open and closed graph processes, their dynamic state models are outlined, and the overall cost optimization problems are transformed into HJB (Hamilton-Jacobi-Bellman) matrix equations, associated with state model constraints. Furthermore, a set of custom flowcharts are established in order to develop corresponding Matlab custom solvers. Finally, the main results obtained from the analysis of relevant case studies show the simplicity and great convenience of the proposed solvers for optimal sequential decision processes.
APA, Harvard, Vancouver, ISO, and other styles
24

Becker, R., S. Zilberstein, V. Lesser, and C. V. Goldman. "Solving Transition Independent Decentralized Markov Decision Processes." Journal of Artificial Intelligence Research 22 (December 1, 2004): 423–55. http://dx.doi.org/10.1613/jair.1497.

Full text
Abstract:
Formal treatment of collaborative multi-agent systems has been lagging behind the rapid progress in sequential decision making by individual agents. Recent work in the area of decentralized Markov Decision Processes (MDPs) has contributed to closing this gap, but the computational complexity of these models remains a serious obstacle. To overcome this complexity barrier, we identify a specific class of decentralized MDPs in which the agents' transitions are independent. The class consists of independent collaborating agents that are tied together through a structured global reward function that depends on all of their histories of states and actions. We present a novel algorithm for solving this class of problems and examine its properties, both as an optimal algorithm and as an anytime algorithm. To our best knowledge, this is the first algorithm to optimally solve a non-trivial subclass of decentralized MDPs. It lays the foundation for further work in this area on both exact and approximate algorithms.
APA, Harvard, Vancouver, ISO, and other styles
25

Tump, Alan N., Timothy J. Pleskac, and Ralf H. J. M. Kurvers. "Wise or mad crowds? The cognitive mechanisms underlying information cascades." Science Advances 6, no. 29 (2020): eabb0266. http://dx.doi.org/10.1126/sciadv.abb0266.

Full text
Abstract:
Whether getting vaccinated, buying stocks, or crossing streets, people rarely make decisions alone. Rather, multiple people decide sequentially, setting the stage for information cascades whereby early-deciding individuals can influence others’ choices. To understand how information cascades through social systems, it is essential to capture the dynamics of the decision-making process. We introduce the social drift–diffusion model to capture these dynamics. We tested our model using a sequential choice task. The model was able to recover the dynamics of the social decision-making process, accurately capturing how individuals integrate personal and social information dynamically over time and when their decisions were timed. Our results show the importance of the interrelationships between accuracy, confidence, and response time in shaping the quality of information cascades. The model reveals the importance of capturing the dynamics of decision processes to understand how information cascades in social systems, paving the way for applications in other social systems.
APA, Harvard, Vancouver, ISO, and other styles
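
The cited work extends the drift-diffusion framework with social information. As plain background only (not the authors' social model), the sketch below simulates a basic drift-diffusion process in which noisy evidence accumulates toward one of two decision thresholds; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """One trial of a basic drift-diffusion process: evidence starts at zero and
    accumulates drift plus Gaussian noise until it reaches +threshold (choose A)
    or -threshold (choose B). Trials that time out are counted as B here."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("A" if x >= threshold else "B"), t

trials = [simulate_ddm(drift=0.8) for _ in range(1000)]
p_a = np.mean([choice == "A" for choice, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
print(f"P(choose A) = {p_a:.2f}, mean decision time = {mean_rt:.2f} s")
```
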
26

Ortega-Gutiérrez, R. Israel, and H. Cruz-Suárez. "A Moreau-Yosida regularization for Markov decision processes." Proyecciones (Antofagasta) 40, no. 1 (2020): 117–37. http://dx.doi.org/10.22199/issn.0717-6279-2021-01-0008.

Full text
Abstract:
This paper addresses a class of sequential optimization problems known as Markov decision processes. These kinds of processes are considered on Euclidean state and action spaces with the total expected discounted cost as the objective function. The main goal of the paper is to provide conditions to guarantee an adequate Moreau-Yosida regularization for Markov decision processes (named the original process). In this way, a new Markov decision process that conforms to the Markov control model of the original process except for the cost function induced via the Moreau-Yosida regularization is established. Compared to the original process, this new discounted Markov decision process has richer properties, such as the differentiability of its optimal value function, strictly convexity of the value function, uniqueness of optimal policy, and the optimal value function and the optimal policy of both processes, are the same. To complement the theory presented, an example is provided.
APA, Harvard, Vancouver, ISO, and other styles
28

Scherbaum, Stefan, Simon Frisch, Susanne Leiberg, Steven J. Lade, Thomas Goschke, and Maja Dshemuchadse. "Process dynamics in delay discounting decisions: An attractor dynamics approach." Judgment and Decision Making 11, no. 5 (2016): 472–95. http://dx.doi.org/10.1017/s1930297500004575.

Full text
Abstract:
How do people make decisions between an immediate but small reward and a delayed but large one? The outcome of such decisions indicates that people discount rewards by their delay and hence these outcomes are well described by discounting functions. However, to understand irregular decisions and dysfunctional behavior one needs models which describe how the process of making the decision unfolds dynamically over time: how do we reach a decision and how do sequential decisions influence one another? Here, we present an attractor model that integrates into and extends discounting functions through a description of the dynamics leading to a final choice outcome within a trial and across trials. To validate this model, we derive qualitative predictions for the intra-trial dynamics of single decisions and for the inter-trial dynamics of sequences of decisions that are unique to this type of model. We test these predictions in four experiments based on a dynamic delay discounting computer game where we study the intra-trial dynamics of single decisions via mouse tracking and the inter-trial dynamics of sequences of decisions via sequentially manipulated options. We discuss how integrating decision process dynamics within and across trials can increase our understanding of the processes underlying delay discounting decisions and, hence, complement our knowledge about decision outcomes.
APA, Harvard, Vancouver, ISO, and other styles
29

Oscarsson, Henrik, and Maria Oskarson. "Sequential vote choice: Applying a consideration set model of heterogeneous decision processes." Electoral Studies 57 (February 2019): 275–83. http://dx.doi.org/10.1016/j.electstud.2018.08.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Varona, Pablo, and Mikhail I. Rabinovich. "Hierarchical dynamics of informational patterns and decision-making." Proceedings of the Royal Society B: Biological Sciences 283, no. 1832 (2016): 20160475. http://dx.doi.org/10.1098/rspb.2016.0475.

Full text
Abstract:
Traditional studies on the interaction of cognitive functions in healthy and disordered brains have used the analyses of the connectivity of several specialized brain networks—the functional connectome. However, emerging evidence suggests that both brain networks and functional spontaneous brain-wide network communication are intrinsically dynamic. In the light of studies investigating the cooperation between different cognitive functions, we consider here the dynamics of hierarchical networks in cognitive space. We show, using an example of behavioural decision-making based on sequential episodic memory, how the description of metastable pattern dynamics underlying basic cognitive processes helps to understand and predict complex processes like sequential episodic memory recall and competition among decision strategies. The mathematical images of the discussed phenomena in the phase space of the corresponding cognitive model are hierarchical heteroclinic networks. One of the most important features of such networks is the robustness of their dynamics. Different kinds of instabilities of these dynamics can be related to ‘dynamical signatures’ of creativity and different psychiatric disorders. The suggested approach can also be useful for the understanding of the dynamical processes that are the basis of consciousness.
APA, Harvard, Vancouver, ISO, and other styles
31

Tan, Chin Hon, and Joseph C. Hartman. "Sensitivity Analysis in Markov Decision Processes with Uncertain Reward Parameters." Journal of Applied Probability 48, no. 4 (2011): 954–67. http://dx.doi.org/10.1239/jap/1324046012.

Full text
Abstract:
Sequential decision problems can often be modeled as Markov decision processes. Classical solution approaches assume that the parameters of the model are known. However, model parameters are usually estimated and uncertain in practice. As a result, managers are often interested in how estimation errors affect the optimal solution. In this paper we illustrate how sensitivity analysis can be performed directly for a Markov decision process with uncertain reward parameters using the Bellman equations. In particular, we consider problems involving (i) a single stationary parameter, (ii) multiple stationary parameters, and (iii) multiple nonstationary parameters. We illustrate the applicability of this work through a capacitated stochastic lot-sizing problem.
APA, Harvard, Vancouver, ISO, and other styles
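
To make the question posed in the abstract above concrete, the sketch below re-solves a toy discounted MDP after perturbing a single reward parameter and checks whether the optimal policy changes. This is a brute-force illustration under invented numbers, not the analytical sensitivity method developed in the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-9):
    """Solve a discounted infinite-horizon MDP by value iteration.
    P[a, s, s2] are transition probabilities, R[s, a] are rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical two-state, two-action MDP (all numbers invented for illustration).
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.9, 0.1]]])
R = np.array([[1.0, 0.5],
              [0.2, 0.8]])

V0, pi0 = value_iteration(P, R)
# Crude numerical probe: perturb one reward parameter and check whether the
# optimal policy changes. The cited paper instead works analytically from the
# Bellman equations, but the question asked is the same.
for delta in (0.1, 0.3, 0.6):
    R_perturbed = R.copy()
    R_perturbed[0, 1] += delta
    _, pi = value_iteration(P, R_perturbed)
    changed = "changed" if not np.array_equal(pi, pi0) else "unchanged"
    print(f"delta={delta}: optimal policy {changed} -> {pi}")
```
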
32

Tan, Chin Hon, and Joseph C. Hartman. "Sensitivity Analysis in Markov Decision Processes with Uncertain Reward Parameters." Journal of Applied Probability 48, no. 04 (2011): 954–67. http://dx.doi.org/10.1017/s002190020000855x.

Full text
Abstract:
Sequential decision problems can often be modeled as Markov decision processes. Classical solution approaches assume that the parameters of the model are known. However, model parameters are usually estimated and uncertain in practice. As a result, managers are often interested in how estimation errors affect the optimal solution. In this paper we illustrate how sensitivity analysis can be performed directly for a Markov decision process with uncertain reward parameters using the Bellman equations. In particular, we consider problems involving (i) a single stationary parameter, (ii) multiple stationary parameters, and (iii) multiple nonstationary parameters. We illustrate the applicability of this work through a capacitated stochastic lot-sizing problem.
APA, Harvard, Vancouver, ISO, and other styles
33

Chatterjee, Krishnendu, Martin Chmelík, Deep Karkhanis, Petr Novotný, and Amélie Royer. "Multiple-Environment Markov Decision Processes: Efficient Analysis and Applications." Proceedings of the International Conference on Automated Planning and Scheduling 30 (June 1, 2020): 48–56. http://dx.doi.org/10.1609/icaps.v30i1.6644.

Full text
Abstract:
Multiple-environment Markov decision processes (MEMDPs) are MDPs equipped with not one, but multiple probabilistic transition functions, which represent the various possible unknown environments. While the previous research on MEMDPs focused on theoretical properties for long-run average payoff, we study them with discounted-sum payoff and focus on their practical advantages and applications. MEMDPs can be viewed as a special case of Partially observable and Mixed observability MDPs: the state of the system is perfectly observable, but not the environment. We show that the specific structure of MEMDPs allows for more efficient algorithmic analysis, in particular for faster belief updates. We demonstrate the applicability of MEMDPs in several domains. In particular, we formalize the sequential decision-making approach to contextual recommendation systems as MEMDPs and substantially improve over the previous MDP approach.
APA, Harvard, Vancouver, ISO, and other styles
34

Meggendorfer, Tobias, Maximilian Weininger, and Patrick Wienhöft. "Solving Robust Markov Decision Processes: Generic, Reliable, Efficient." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 25 (2025): 26631–41. https://doi.org/10.1609/aaai.v39i25.34865.

Full text
Abstract:
Markov decision processes (MDP) are a well-established model for sequential decision-making in the presence of probabilities. In robust MDP (RMDP), every action is associated with an uncertainty set of probability distributions, modelling that transition probabilities are not known precisely. Based on the known theoretical connection to stochastic games, we provide a framework for solving RMDPs that is generic, reliable, and efficient. It is generic both with respect to the model, allowing for a wide range of uncertainty sets, including but not limited to intervals, L1- or L2-balls, and polytopes; and with respect to the objective, including long-run average reward, undiscounted total reward, and stochastic shortest path. It is reliable, as our approach not only converges in the limit, but provides precision guarantees at any time during the computation. It is efficient because, in contrast to state-of-the-art approaches, it avoids explicitly constructing the underlying stochastic game. Consequently, our prototype implementation outperforms existing tools by several orders of magnitude and can solve RMDPs with a million states in under a minute.
APA, Harvard, Vancouver, ISO, and other styles
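
To illustrate the robust Bellman update the abstract above refers to, here is a small sketch for interval uncertainty sets only (one of several set types the paper supports). It is not the cited tool; the worst-case transition distribution is found by a simple greedy fill, and all numbers are hypothetical.

```python
import numpy as np

def worst_case_expectation(V, lo, hi):
    """Minimize sum_s p[s] * V[s] over lo <= p <= hi with sum(p) = 1
    (assumes sum(lo) <= 1 <= sum(hi)). Greedy: start from the lower bounds
    and pour the remaining probability mass onto the lowest-value states."""
    p = lo.copy()
    remaining = 1.0 - p.sum()
    for s in np.argsort(V):
        add = min(hi[s] - p[s], remaining)
        p[s] += add
        remaining -= add
        if remaining <= 1e-12:
            break
    return float(p @ V)

def robust_value_iteration(lo, hi, R, gamma=0.9, tol=1e-8):
    """Robust value iteration for an interval-uncertainty RMDP.
    lo[a, s, s2] and hi[a, s, s2] bound the transition probabilities."""
    n_actions, n_states, _ = lo.shape
    V = np.zeros(n_states)
    while True:
        Q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                Q[s, a] = R[s, a] + gamma * worst_case_expectation(V, lo[a, s], hi[a, s])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical two-state, two-action interval RMDP (bounds invented for illustration).
lo = np.array([[[0.6, 0.1], [0.2, 0.5]],
               [[0.3, 0.3], [0.7, 0.0]]])
hi = np.array([[[0.9, 0.4], [0.5, 0.8]],
               [[0.7, 0.7], [1.0, 0.3]]])
R = np.array([[1.0, 0.4],
              [0.1, 0.9]])
V, pi = robust_value_iteration(lo, hi, R)
print("Robust (worst-case) values:", np.round(V, 3), "robust policy:", pi)
```

The greedy fill is the standard way to solve the inner minimization for box (interval) constraints; other uncertainty sets would need their own inner solver.
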
35

Cao, Longbing, and Chengzhang Zhu. "Personalized next-best action recommendation with multi-party interaction learning for automated decision-making." PLOS ONE 17, no. 1 (2022): e0263010. http://dx.doi.org/10.1371/journal.pone.0263010.

Full text
Abstract:
Automated next-best action recommendation for each customer in a sequential, dynamic and interactive context has been widely needed in natural, social and business decision-making. Personalized next-best action recommendation must involve past, current and future customer demographics and circumstances (states) and behaviors, long-range sequential interactions between customers and decision-makers, multi-sequence interactions between states, behaviors and actions, and their reactions to their counterpart’s actions. No existing modeling theories and tools, including Markovian decision processes, user and behavior modeling, deep sequential modeling, and personalized sequential recommendation, can quantify such complex decision-making on a personal level. We take a data-driven approach to learn the next-best actions for personalized decision-making by a reinforced coupled recurrent neural network (CRN). CRN represents multiple coupled dynamic sequences of a customer’s historical and current states, responses to decision-makers’ actions, decision rewards to actions, and learns long-term multi-sequence interactions between parties (customer and decision-maker). Next-best actions are then recommended on each customer at a time point to change their state for an optimal decision-making objective. Our study demonstrates the potential of personalized deep learning of multi-sequence interactions and automated dynamic intervention for personalized decision-making in complex systems.
APA, Harvard, Vancouver, ISO, and other styles
36

Perrault, Andrew. "Monitoring and Intervening on Large Populations of Weakly Coupled Processes with Social Impact Applications." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (2023): 15450. http://dx.doi.org/10.1609/aaai.v37i13.26817.

Full text
Abstract:
Many real-world sequential decision problems can be decomposed into processes with independent dynamics that are coupled via the action structure. We discuss recent work on such problems and future directions.
APA, Harvard, Vancouver, ISO, and other styles
37

Takayama, Shota, and Katsuhide Fujita. "Sequential Order Adjustment of Action Decisions for Multi-Agent Transformer (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29509–11. https://doi.org/10.1609/aaai.v39i28.35306.

Full text
Abstract:
Multi-agent reinforcement learning (MARL) trains multiple agents in shared environments. Recently, MARL models have significantly improved performance by leveraging sequential decision-making processes. Although these models can enhance performance, they do not explicitly consider the importance of the order in which agents make decisions. We propose AOAD-MAT, a novel model incorporating action decision sequence into learning. AOAD-MAT uses a Transformer-based actor-critic architecture to dynamically adjust agent action order. It introduces a subtask predicting the next agent to act, integrated into a PPO-based loss function. Experiments on StarCraft Multi-Agent Challenge and Multi-Agent MuJoCo benchmarks show AOAD-MAT outperforms existing models, demonstrating the effectiveness of adjusting agent order in MARL.
APA, Harvard, Vancouver, ISO, and other styles
38

Carr, Steven, Nils Jansen, and Ufuk Topcu. "Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 72 (November 18, 2021): 819–47. http://dx.doi.org/10.1613/jair.1.12963.

Full text
Abstract:
Partially observable Markov decision processes (POMDPs) are models for sequential decision-making under uncertainty and incomplete information. Machine learning methods typically train recurrent neural networks (RNN) as effective representations of POMDP policies that can efficiently process sequential data. However, it is hard to verify whether the POMDP driven by such RNN-based policies satisfies safety constraints, for instance, given by temporal logic specifications. We propose a novel method that combines techniques from machine learning with the field of formal methods: training an RNN-based policy and then automatically extracting a so-called finite-state controller (FSC) from the RNN. Such FSCs offer a convenient way to verify temporal logic constraints. Implemented on a POMDP, they induce a Markov chain, and probabilistic verification methods can efficiently check whether this induced Markov chain satisfies a temporal logic specification. Using such methods, if the Markov chain does not satisfy the specification, a byproduct of verification is diagnostic information about the states in the POMDP that are critical for the specification. The method exploits this diagnostic information to either adjust the complexity of the extracted FSC or improve the policy by performing focused retraining of the RNN. The method synthesizes policies that satisfy temporal logic specifications for POMDPs with up to millions of states, which are three orders of magnitude larger than comparable approaches.
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Richard C., Kevin Wagner, and Gilmer L. Blankenship. "Constrained Partially Observed Markov Decision Processes With Probabilistic Criteria for Adaptive Sequential Detection." IEEE Transactions on Automatic Control 58, no. 2 (2013): 487–93. http://dx.doi.org/10.1109/tac.2012.2208312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Tkatek, Said, Saadia Bahti, Otman Abdoun, and Jaafar Abouchabaka. "Intelligent system for recruitment decision making using an alternative parallel-sequential genetic algorithm." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 1 (2021): 385–95. https://doi.org/10.11591/ijeecs.v22.i1.pp385-395.

Full text
Abstract:
The human resources (HR) manager needs effective tools to be able to move away from traditional recruitment processes to make the good decision to select the good candidates for the good posts. To do this, we deliver an intelligent recruitment decision-making method for HR, incorporating a recruitment model based on the multipack model known as the NP-hard model. The system, which is a decision support tool, often integrates a genetic approach that operates alternately in parallel and sequentially. This approach will provide the best recruiting solution to allow HR managers to make the right decision to ensure the best possible compatibility with the desired objectives. Operationally, this system can also predict the altered choice of parallel genetic algorithm (PGA) or sequential genetic algorithm (SeqGA) depending on the size of the instance and constraints of the recruiting posts to produce the quality solution in a reduced CPU time for recruiting decision-making. The results obtained in various tests confirm the performance of this intelligent system which can be used as a decision support tool for intelligently optimized recruitment.
APA, Harvard, Vancouver, ISO, and other styles
41

Chowdhury, Sayak Ray, and Xingyu Zhou. "Differentially Private Regret Minimization in Episodic Markov Decision Processes." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6375–83. http://dx.doi.org/10.1609/aaai.v36i6.20588.

Full text
Abstract:
We study regret minimization in finite horizon tabular Markov decision processes (MDPs) under the constraints of differential privacy (DP). This is motivated by the widespread applications of reinforcement learning (RL) in real-world sequential decision making problems, where protecting users' sensitive and private information is becoming paramount. We consider two variants of DP -- joint DP (JDP), where a centralized agent is responsible for protecting users' sensitive data and local DP (LDP), where information needs to be protected directly on the user side. We first propose two general frameworks -- one for policy optimization and another for value iteration -- for designing private, optimistic RL algorithms. We then instantiate these frameworks with suitable privacy mechanisms to satisfy JDP and LDP requirements, and simultaneously obtain sublinear regret guarantees. The regret bounds show that under JDP, the cost of privacy is only a lower order additive term, while for a stronger privacy protection under LDP, the cost suffered is multiplicative. Finally, the regret bounds are obtained by a unified analysis, which, we believe, can be extended beyond tabular MDPs.
APA, Harvard, Vancouver, ISO, and other styles
42

Fejgin, Naomi, and Ronit Hanegby. "Physical Educators’ Participation in Decision-Making Processes in Dynamic Schools." Journal of Teaching in Physical Education 18, no. 2 (1999): 141–58. http://dx.doi.org/10.1123/jtpe.18.2.141.

Full text
Abstract:
Teacher participation in school decision-making processes is considered one of the major components of school dynamics. It is not known, however, whether all teachers participate in the process to the same extent. This study examines whether teacher participation is related to school dynamics and to subject matter taught. In a 3-step sequential model, the relative contribution of background variables, school measures, school dynamics, and subject matter taught to teacher participation was estimated. Findings showed that school dynamics had the strongest effect on teacher participation, but the effect was not the same for all teachers. Physical educators participated in school decision-making processes less than did other teachers. Physical educators in dynamic schools reported a higher degree of participation than physical educators in non-dynamic schools but a lower degree of participation compared to other teachers in dynamic schools.
APA, Harvard, Vancouver, ISO, and other styles
43

Schrift, Rom Y., Jeffrey R. Parker, Gal Zauberman, and Shalena Srna. "Multistage Decision Processes: The Impact of Attribute Order on How Consumers Mentally Represent Their Choice." Journal of Consumer Research 44, no. 6 (2017): 1307–24. http://dx.doi.org/10.1093/jcr/ucx099.

Full text
Abstract:
With the ever-increasing number of options from which consumers can choose, many decisions are made in stages. Whether using decision tools to sort, screen, and eliminate options, or intuitively trying to reduce the complexity of a choice, consumers often reach a decision by making sequential, attribute-level choices. The current article explores how the order in which attribute-level choices are made in such multistage decisions affects how consumers mentally represent and categorize their chosen option. The authors find that attribute choices made in the initial stage play a dominant role in how the ultimately chosen option is mentally represented, while later attribute choices serve only to update and refine the representation of that option. Across 13 studies (six of which are reported in the supplemental online materials), the authors find that merely changing the order of attribute choices in multistage decision processes alters how consumers (1) describe the chosen option, (2) perceive its similarity to other available options, (3) categorize it, (4) intend to use it, and (5) replace it. Thus, while the extant decision-making literature has mainly explored how mental representations and categorization impact choice, the current article demonstrates the reverse: that the choice process itself can impact mental representations.
APA, Harvard, Vancouver, ISO, and other styles
44

Farina, Gabriele, Robin Schmucker, and Tuomas Sandholm. "Bandit Linear Optimization for Sequential Decision Making and Extensive-Form Games." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (2021): 5372–80. http://dx.doi.org/10.1609/aaai.v35i6.16677.

Full text
Abstract:
Tree-form sequential decision making (TFSDM) extends classical one-shot decision making by modeling tree-form interactions between an agent and a potentially adversarial environment. It captures the online decision-making problems that each player faces in an extensive-form game, as well as Markov decision processes and partially-observable Markov decision processes where the agent conditions on observed history. Over the past decade, there has been considerable effort into designing online optimization methods for TFSDM. Virtually all of that work has been in the full-feedback setting, where the agent has access to counterfactuals, that is, information on what would have happened had the agent chosen a different action at any decision node. Little is known about the bandit setting, where that assumption is reversed (no counterfactual information is available), despite this latter setting being well understood for almost 20 years in one-shot decision making. In this paper, we give the first algorithm for the bandit linear optimization problem for TFSDM that offers both (i) linear-time iterations (in the size of the decision tree) and (ii) O(sqrt(T)) cumulative regret in expectation compared to any fixed strategy, at all times T. This is made possible by new results that we derive, which may have independent uses as well: 1) geometry of the dilated entropy regularizer, 2) autocorrelation matrix of the natural sampling scheme for sequence-form strategies, 3) construction of an unbiased estimator for linear losses for sequence-form strategies, and 4) a refined regret analysis for mirror descent when using the dilated entropy regularizer.
APA, Harvard, Vancouver, ISO, and other styles
45

Söllner, Anke, Arndt Bröder, and Benjamin E. Hilbig. "Deliberation versus automaticity in decision making: Which presentation format features facilitate automatic decision making?" Judgment and Decision Making 8, no. 3 (2013): 278–98. http://dx.doi.org/10.1017/s1930297500005982.

Full text
Abstract:
The idea of automatic decision making approximating normatively optimal decisions without necessitating much cognitive effort is intriguing. Whereas recent findings support the notion that such fast, automatic processes explain empirical data well, little is known about the conditions under which such processes are selected rather than more deliberate stepwise strategies. We investigate the role of the format of information presentation, focusing explicitly on the ease of information acquisition and its influence on information integration processes. In a probabilistic inference task, the standard matrix employed in prior research was contrasted with a newly created map presentation format and additional variations of both presentation formats. Across three experiments, a robust presentation format effect emerged: Automatic decision making was more prevalent in the matrix (with high information accessibility), whereas sequential decision strategies prevailed when the presentation format demanded more information acquisition effort. Further scrutiny of the effect showed that it is not driven by the presentation format as such, but rather by the extent of information search induced by a format. Thus, if information is accessible with minimal need for information search, information integration is likely to proceed in a perception-like, holistic manner. In turn, a moderate demand for information search decreases the likelihood of behavior consistent with the assumptions of automatic decision making.
APA, Harvard, Vancouver, ISO, and other styles
46

Kasianova, Ksenia, and Mark Kelbert. "Context-Dependent Criteria for Dirichlet Process in Sequential Decision-Making Problems." Mathematics 12, no. 21 (2024): 3321. http://dx.doi.org/10.3390/math12213321.

Full text
Abstract:
In models with insufficient initial information, parameter estimation can be subject to statistical uncertainty, potentially resulting in suboptimal decision-making; however, delaying implementation to gather more information can also incur costs. This paper examines an extension of information-theoretic approaches designed to address this classical dilemma, focusing on balancing the expected profits and the information needed to be obtained about all of the possible outcomes. Initially utilized in binary outcome scenarios, these methods leverage information measures to harmonize competing objectives efficiently. Building upon the foundations laid by existing research, this methodology is expanded to encompass experiments with multiple outcome categories using Dirichlet processes. The core of our approach is centered around weighted entropy measures, particularly in scenarios dictated by Dirichlet distributions, which have not been extensively explored previously. We innovatively adapt the technique initially applied to the binary case to Dirichlet distributions/processes. The primary contribution of our work is the formulation of a sequential minimization strategy for the main term of an asymptotic expansion of differential entropy, which scales with sample size, for non-binary outcomes. This paper provides a theoretical grounding, extended empirical applications, and comprehensive proofs, setting a robust framework for further interdisciplinary applications of information-theoretic paradigms in sequential decision-making.
APA, Harvard, Vancouver, ISO, and other styles
47

Tkatek, Said, Saadia Bahti, Otman Abdoun, and Jaafar Abouchabaka. "Intelligent system for recruitment decision making using an alternative parallel-sequential genetic algorithm." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 1 (2021): 385. http://dx.doi.org/10.11591/ijeecs.v22.i1.pp385-395.

Full text
Abstract:
The human resources (HR) manager needs effective tools to be able to move away from traditional recruitment processes to make the good decision to select the good candidates for the good posts. To do this, we deliver an intelligent recruitment decision-making method for HR, incorporating a recruitment model based on the multipack model known as the NP-hard model. The system, which is a decision support tool, often integrates a genetic approach that operates alternately in parallel and sequentially. This approach will provide the best recruiting solution to allow HR managers to make the right decision to ensure the best possible compatibility with the desired objectives. Operationally, this system can also predict the altered choice of parallel genetic algorithm (PGA) or sequential genetic algorithm (SeqGA) depending on the size of the instance and constraints of the recruiting posts to produce the quality solution in a reduced CPU time for recruiting decision-making. The results obtained in various tests confirm the performance of this intelligent system which can be used as a decision support tool for intelligently optimized recruitment.
APA, Harvard, Vancouver, ISO, and other styles
48

Fontanesi, Laura, Amitai Shenhav, and Sebastian Gluth. "Disentangling choice value and choice conflict in sequential decisions under risk." PLOS Computational Biology 18, no. 10 (2022): e1010478. http://dx.doi.org/10.1371/journal.pcbi.1010478.

Full text
Abstract:
Recent years have witnessed a surge of interest in understanding the neural and cognitive dynamics that drive sequential decision making in general and foraging behavior in particular. Due to the intrinsic properties of most sequential decision-making paradigms, however, previous research in this area has suffered from the difficulty to disentangle properties of the decision related to (a) the value of switching to a new patch, which increases monotonically, and (b) the conflict experienced between choosing to stay or leave, which first increases but then decreases after reaching the point of indifference between staying and switching. Here, we show how the same problems arise in studies of sequential decision-making under risk, and how they can be overcome, taking as a specific example recent research on the ‘pig’ dice game. In each round of the ‘pig’ dice game, people roll a die and accumulate rewards until they either decide to proceed to the next round or lose all rewards. By combining simulation-based dissections of the task structure with two experiments, we show how an extension of the standard paradigm, together with cognitive modeling of decision-making processes, allows us to disentangle properties related to either switch value or choice conflict. Our study elucidates the cognitive mechanisms of sequential decision making and underscores the importance of avoiding potential pitfalls of paradigms that are commonly used in this research area.
APA, Harvard, Vancouver, ISO, and other styles
49

She, Chung, and Han. "Economic and Environmental Optimization of the Forest Supply Chain for Timber and Bioenergy Production from Beetle-Killed Forests in Northern Colorado." Forests 10, no. 8 (2019): 689. http://dx.doi.org/10.3390/f10080689.

Full text
Abstract:
Harvesting mountain pine beetle-infested forest stands in the northern Colorado Rocky Mountains provides an opportunity to utilize otherwise wasted resources, generate net revenues, and minimize greenhouse gas (GHG) emissions. Timber and bioenergy production are commonly managed separately, and their integration is seldom considered. Yet, degraded wood and logging residues can provide a feedstock for bioenergy, while the sound wood from beetle-killed stands can still be used for traditional timber products. In addition, beneficial GHG emission savings are often realized only by compromising net revenues during salvage harvest where beetle-killed wood has a relatively low market value and high harvesting cost. In this study we compared Sequential and Integrated decision-making scenarios for managing the supply chain from beetle-killed forest salvage operations. In the Sequential scenario, timber and bioenergy production was managed sequentially in two separate processes, where salvage harvest was conducted without considering influences on or from bioenergy production. Biomass availability was assessed next as an outcome from timber production managed to produce bioenergy products. In the Integrated scenario, timber and bioenergy production were managed jointly, where collective decisions were made regarding tree salvage harvest, residue treatment, and bioenergy product selection and production. We applied a multi-objective optimization approach to integrate the economic and environmental objectives of producing timber and bioenergy, and measured results by total net revenues and total net GHG emission savings, respectively. The optimization model results show that distinctively different decisions are made in selecting the harvesting system and residue treatment under the two scenarios. When the optimization is fully economic-oriented, 49.6% more forest areas are harvested under the Integrated scenario than the Sequential scenario, generating 12.3% more net revenues and 50.5% more net GHG emission savings. Comparison of modelled Pareto fronts also indicates the Integrated decision scenario provides more efficient trade-offs between the two objectives and performs better than the Sequential scenario in both objectives.
APA, Harvard, Vancouver, ISO, and other styles
50

Ghaffari, Minou, and Susann Fiedler. "The Power of Attention: Using Eye Gaze to Predict Other-Regarding and Moral Choices." Psychological Science 29, no. 11 (2018): 1878–89. http://dx.doi.org/10.1177/0956797618799301.

Full text
Abstract:
According to research studying the processes underlying decisions, a two-channel mechanism connects attention and choices: top-down and bottom-up processes. To identify the magnitude of each channel, we exogenously varied information intake by systematically interrupting participants’ decision processes in Study 1 (N = 116). Results showed that participants were more likely to choose a predetermined target option. Because selection effects limited the interpretation of the results, we used a sequential-presentation paradigm in Study 2 (preregistered, N = 100). To partial out bottom-up effects of attention on choices, in particular, we presented alternatives by mirroring the gaze patterns of autonomous decision makers. Results revealed that final fixations successfully predicted choices when experimentally manipulated (bottom up). Specifically, up to 11.32% of the link between attention and choices is driven by exogenously guided attention (1.19% change in choices overall), while the remaining variance is explained by top-down preference formation.
APA, Harvard, Vancouver, ISO, and other styles