Journal articles on the topic "Action Model Learning"

To see the other types of publications on this topic, follow the link: Action Model Learning.

Get to know the top 50 journal articles for your research on the topic "Action Model Learning".

Next to every work in the list of references, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online abstract of the work, if the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Rao, Dongning, and Zhihua Jiang. "Cost-Sensitive Action Model Learning". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 24, no. 02 (April 2016): 167–93. http://dx.doi.org/10.1142/s0218488516500094.

Annotation:
Action model learning can relieve people from writing planning domain descriptions from scratch. Real-world learners need to be sensitive to all the costs they will incur during learning. However, most previous studies in this research line considered only the running time as the learning cost. In real-world applications, extra cost is incurred when carrying out actions or obtaining observations, particularly in online learning. The learning algorithm should therefore apply techniques that reduce the total cost while keeping a high rate of accuracy. The cost of carrying out actions and obtaining observations is the dominant expense in online learning. Therefore, we design a cost-sensitive algorithm to learn action models under partial observability. It combines three techniques to lessen the total cost: constraints, filtering and active learning. These techniques are used for observation reduction in action model learning. First, the algorithm uses constraints to confine the observation space. Second, it removes unnecessary observations by belief state filtering. Third, it actively selects observations based on the results of the previous two techniques. The paper also designs strategies to reduce the number of plan steps used in the learning. We performed experiments on several benchmark domains, which show two results: the learning accuracy is high in most cases, and the algorithm dramatically reduces the total cost according to the definition of cost in this paper. Therefore, it is significant for real-world learners, especially when long plans are unavailable or observations are expensive.
2

Wang, Zhenyi, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, and Changyou Chen. "Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (03.04.2020): 12281–88. http://dx.doi.org/10.1609/aaai.v34i07.6911.

Annotation:
Human-motion generation is a long-standing challenging task due to the requirement of accurately modeling complex and diverse dynamic patterns. Most existing methods adopt sequence models such as RNN to directly model transitions in the original action space. Due to high dimensionality and potential noise, such modeling of action transitions is particularly challenging. In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality. Conditioned on a latent sequence, actions are generated by a frame-wise decoder shared by all latent action-poses. Specifically, an implicit RNN is defined to model smooth latent sequences, whose randomness (diversity) is controlled by noise from the input. Different from standard action-prediction methods, our model can generate action sequences from pure noise without any conditional action poses. Remarkably, it can also generate unseen actions from mixed classes during training. Our model is learned with a bi-directional generative-adversarial-net framework, which can not only generate diverse action sequences of a particular class or mix classes, but also learns to classify action sequences within the same model. Experimental results show the superiority of our method in both diverse action-sequence generation and classification, relative to existing methods.
3

Chang, Kyungwon. "A Model of Action Learning Program Design in Higher Education". Journal of Educational Technology 27, no. 3 (30.09.2011): 475–505. http://dx.doi.org/10.17232/kset.27.3.475.

4

Wang, Ziyi, Xinran Li, Luoyang Sun, Haifeng Zhang, Hualin Liu, and Jun Wang. "Learning State-Specific Action Masks for Reinforcement Learning". Algorithms 17, no. 2 (30.01.2024): 60. http://dx.doi.org/10.3390/a17020060.

Annotation:
Efficient yet sufficient exploration remains a critical challenge in reinforcement learning (RL), especially for Markov Decision Processes (MDPs) with vast action spaces. Previous approaches have commonly involved projecting the original action space into a latent space or employing environmental action masks to reduce the action possibilities. Nevertheless, these methods often lack interpretability or rely on expert knowledge. In this study, we introduce a novel method for automatically reducing the action space in environments with discrete action spaces while preserving interpretability. The proposed approach learns state-specific masks with a dual purpose: (1) eliminating actions with minimal influence on the MDP and (2) aggregating actions with identical behavioral consequences within the MDP. Specifically, we introduce a novel concept called Bisimulation Metrics on Actions by States (BMAS) to quantify the behavioral consequences of actions within the MDP and design a dedicated mask model to ensure their binary nature. Crucially, we present a practical learning procedure for training the mask model, leveraging transition data collected by any RL policy. Our method is designed to be plug-and-play and adaptable to all RL policies, and to validate its effectiveness, an integration into two prominent RL algorithms, DQN and PPO, is performed. Experimental results obtained from Maze, Atari, and μRTS2 reveal a substantial acceleration in the RL learning process and noteworthy performance improvements facilitated by the introduced approach.
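
The core mechanism described above, applying a learned state-specific binary mask before action selection, can be illustrated with a small sketch. The following Python snippet is only a hypothetical illustration of masked greedy action selection; the Q-values, mask data, and array shapes are invented stand-ins, not the paper's mask model or its BMAS metric.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 10, 6

# Invented stand-ins: per-state Q-values and a learned state-specific binary mask.
q_values = rng.normal(size=(N_STATES, N_ACTIONS))
action_mask = rng.integers(0, 2, size=(N_STATES, N_ACTIONS)).astype(bool)
action_mask[:, 0] = True  # keep at least one action available in every state

def masked_greedy_action(state: int) -> int:
    """Greedy action selection restricted to the actions the mask keeps for this state."""
    q = np.where(action_mask[state], q_values[state], -np.inf)
    return int(np.argmax(q))

for s in range(3):
    print(s, masked_greedy_action(s))
```

In the paper, the mask would come from the trained mask model and be plugged into an RL algorithm such as DQN or PPO; the random arrays here merely show where such a mask enters the action-selection step.
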
5

Funai, Naoki. "An Adaptive Learning Model with Foregone Payoff Information". B.E. Journal of Theoretical Economics 14, no. 1 (01.01.2014): 149–76. http://dx.doi.org/10.1515/bejte-2013-0043.

Annotation:
In this paper, we provide theoretical predictions on the long-run behavior of an adaptive decision maker with foregone payoff information. In the model, the decision maker assigns a subjective payoff assessment to each action based on his past experience and chooses the action that has the highest assessment. After receiving a payoff, the decision maker updates his assessments of actions in an adaptive manner, using not only the objective payoff information but also the foregone payoff information, which may be distorted. The distortion may arise from "the grass is always greener on the other side" effect, pessimism/optimism or envy/gloating; it depends on how the decision maker views the source of the information. We first provide conditions in which the assessment of each action converges, in that the limit assessment is expressed as an average of the expected objective payoff and the expected distorted payoff of the action. Then, we show that the decision maker chooses the optimal action most frequently in the long run if the expected distorted payoff of the action is greater than the ones of the other actions. We also provide conditions under which this model coincides with the experience-weighted attraction learning, stochastic fictitious play and quantal response equilibrium models, and thus this model provides theoretical predictions for the models in decision problems.
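
The update scheme sketched in this abstract, adjusting each action's assessment with the realized payoff for the chosen action and a possibly distorted foregone payoff for the others, can be illustrated with a minimal toy sketch. The Bernoulli payoffs, fixed step size, and single distortion factor below are assumptions for illustration, not the paper's exact specification.

```python
import random

random.seed(1)

actions = ["safe", "risky"]
true_mean = {"safe": 0.5, "risky": 0.7}   # unknown to the learner
assessment = {a: 0.0 for a in actions}    # subjective payoff assessments
alpha = 0.1                               # adaptive step size
distortion = 0.9                          # foregone payoffs are perceived pessimistically

for t in range(5000):
    chosen = max(actions, key=lambda a: assessment[a])            # highest assessment wins
    for a in actions:
        payoff = 1.0 if random.random() < true_mean[a] else 0.0   # Bernoulli payoff
        signal = payoff if a == chosen else distortion * payoff   # distorted foregone info
        assessment[a] += alpha * (signal - assessment[a])         # adaptive update

print(assessment)  # each assessment settles near a mix of objective and distorted payoffs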
6

Mordoch, Argaman, Brendan Juba, and Roni Stern. "Learning Safe Numeric Action Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (26.06.2023): 12079–86. http://dx.doi.org/10.1609/aaai.v37i10.26424.

Annotation:
Powerful domain-independent planners have been developed to solve various types of planning problems. These planners often require a model of the acting agent's actions, given in some planning domain description language. Yet obtaining such an action model is a notoriously hard task. This task is even more challenging in mission-critical domains, where a trial-and-error approach to learning how to act is not an option. In such domains, the action model used to generate plans must be safe, in the sense that plans generated with it must be applicable and achieve their goals. Learning safe action models for planning has been recently explored for domains in which states are sufficiently described with Boolean variables. In this work, we go beyond this limitation and propose the NSAM algorithm. NSAM runs in time that is polynomial in the number of observations and, under certain conditions, is guaranteed to return safe action models. We analyze its worst-case sample complexity, which may be intractable for some domains. Empirically, however, NSAM can quickly learn a safe action model that can solve most problems in the domain.
7

Bong, Hyeon-Cheol, Yonjoo Cho, and Hyung-Sook Kim. "Developing an action learning design model". Action Learning: Research and Practice 11, no. 3 (11.08.2014): 278–95. http://dx.doi.org/10.1080/14767333.2014.944087.

8

Chalard. "Developing Learner Centered Action Learning Model". Journal of Social Sciences 7, no. 4 (01.04.2011): 635–42. http://dx.doi.org/10.3844/jssp.2011.635.642.

9

Pandey, Ritik, Yadnesh Chikhale, Ritik Verma, and Deepali Patil. "Deep Learning based Human Action Recognition". ITM Web of Conferences 40 (2021): 03014. http://dx.doi.org/10.1051/itmconf/20214003014.

Annotation:
Human action recognition has become an important research area in the fields of computer vision, image processing, and human-machine or human-object interaction due to its large number of real-time applications. Action recognition is the identification of different actions from video clips (an arrangement of 2D frames) in which the action may be performed. This generalizes image classification from single images to multiple frames, followed by collecting the predictions from each frame. Different approaches have been proposed in the literature to improve recognition accuracy. In this paper, we propose a deep learning based model for recognition, with the main focus on a CNN model for image classification. The action videos are converted into frames and pre-processed before being sent to our model for recognizing different actions accurately.
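
The frame-wise pipeline described above (classify each frame, then aggregate the per-frame predictions into a video-level label) can be sketched as follows. This is a hypothetical illustration with random features and a linear softmax stand-in for the CNN classifier; the paper's actual model and data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FRAMES, N_CLASSES, FEAT_DIM = 30, 5, 64  # assumed toy sizes

W = rng.normal(size=(FEAT_DIM, N_CLASSES))

def classify_frame(frame_features: np.ndarray) -> np.ndarray:
    """Stand-in for a per-frame CNN classifier: returns class probabilities."""
    logits = frame_features @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

video = rng.normal(size=(N_FRAMES, FEAT_DIM))          # pretend per-frame feature vectors
frame_probs = np.stack([classify_frame(f) for f in video])
video_prediction = int(np.argmax(frame_probs.mean(axis=0)))  # average probabilities, then argmax
print(video_prediction)
```
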
10

Amir, E., and A. Chang. "Learning Partially Observable Deterministic Action Models". Journal of Artificial Intelligence Research 33 (20.11.2008): 349–402. http://dx.doi.org/10.1613/jair.2575.

Annotation:
We present exact algorithms for identifying deterministic actions' effects and preconditions in dynamic partially observable domains. They apply when one does not know the action model (the way actions affect the world) of a domain and must learn it from partial observations over time. Such scenarios are common in real world applications. They are challenging for AI tasks because traditional domain structures that underlie tractability (e.g., conditional independence) fail there (e.g., world features become correlated). Our work departs from traditional assumptions about partial observations and action models. In particular, it focuses on problems in which actions are deterministic and of simple logical structure, and observation models have all features observed with some frequency. We yield tractable algorithms for the modified problem for such domains. Our algorithms take sequences of partial observations over time as input, and output deterministic action models that could have led to those observations. The algorithms output all or one of those models (depending on our choice), and are exact in that no model is misclassified given the observations. Our algorithms take polynomial time in the number of time steps and state features for some traditional action classes examined in the AI-planning literature, e.g., STRIPS actions. In contrast, traditional approaches for HMMs and Reinforcement Learning are inexact and exponentially intractable for such domains. Our experiments verify the theoretical tractability guarantees, and show that we identify action models exactly. Several applications in planning, autonomous exploration, and adventure-game playing already use these results. They are also promising for probabilistic settings, partially observable reinforcement learning, and diagnosis.
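
A simplified flavour of learning action models from traces can be shown with a fully observable STRIPS-style sketch: preconditions are the facts true in every observed pre-state of an action, and effects are the facts that change. The paper's algorithms handle partial observability and are considerably more involved; the snippet below is only an assumption-laden toy version of the fully observed case.

```python
from typing import Dict, List, Set, Tuple

# Each trace step: (state_before, action_name, state_after), states as sets of true facts.
Trace = List[Tuple[Set[str], str, Set[str]]]

def learn_strips(traces: Trace) -> Dict[str, dict]:
    model: Dict[str, dict] = {}
    for pre, act, post in traces:
        m = model.setdefault(act, {"pre": set(pre), "add": set(), "delete": set()})
        m["pre"] &= pre             # preconditions: facts true in every observed pre-state
        m["add"] |= post - pre      # add effects: facts that became true
        m["delete"] |= pre - post   # delete effects: facts that became false
    return model

traces = [
    ({"at_a", "door_closed"}, "open", {"at_a", "door_open"}),
    ({"at_b", "door_closed"}, "open", {"at_b", "door_open"}),
]
print(learn_strips(traces))
# "open" ends up with precondition {door_closed}, add effect {door_open}, delete effect {door_closed}
```
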
11

Lewis, Alan, and Tim Miller. "Deceptive Reinforcement Learning in Model-Free Domains". Proceedings of the International Conference on Automated Planning and Scheduling 33, no. 1 (01.07.2023): 587–95. http://dx.doi.org/10.1609/icaps.v33i1.27240.

Annotation:
This paper investigates deceptive reinforcement learning for privacy preservation in model-free and continuous action space domains. In reinforcement learning, the reward function defines the agent's objective. In adversarial scenarios, an agent may need to both maximise rewards and keep its reward function private from observers. Recent research presented the ambiguity model (AM), which selects actions that are ambiguous over a set of possible reward functions, via pre-trained Q-functions. Despite promising results in model-based domains, our investigation shows that AM is ineffective in model-free domains due to misdirected state space exploration. It is also inefficient to train and inapplicable in continuous action spaces. We propose the deceptive exploration ambiguity model (DEAM), which learns using the deceptive policy during training, leading to targeted exploration of the state space. DEAM is also applicable in continuous action spaces. We evaluate DEAM in discrete and continuous action space path planning environments. DEAM achieves similar performance to an optimal model-based version of AM and outperforms a model-free version of AM in terms of path cost, deceptiveness and training efficiency. These results extend to the continuous domain.
12

Rahmawati, Sitti, Detris Poba, Magfirah Magfirah, and Kusrini Burase. "Application of Cooperative Learning Jigsaw Model to Improve Student's Learning Achievement in Chemistry Learning". Jurnal Akademika Kimia 11, no. 1 (28.02.2022): 39–45. http://dx.doi.org/10.22487/j24775185.2022.v11.i1.pp39-45.

Annotation:
This study aims to improve student achievement in learning chemistry in class X MIA4 at SMA Negeri 1 Palu by applying the Jigsaw Cooperative Learning Model. The Classroom Action Research (CAR) problem can be formulated as follows: can the application of the Jigsaw Cooperative Learning Model improve student achievement in learning chemistry in class X MIA4 SMAN 1 Palu? To answer this question, the CAR was carried out in the following stages: 1. planning, 2. implementation, 3. observation, and 4. evaluation and reflection. The results show that several fundamental aspects of learning were successfully improved by applying the Jigsaw Cooperative Learning Model, such as student activity in collaborating and in completing worksheets independently, actively asking and answering questions, and making students feel happy and enthusiastic. Likewise, the average evaluation of each cycle showed that the percentage of completeness increased: in cycle one it was 73.8% after the first action, 85.5% after the second action, and 92.9% after the third action, and in cycle two the average over the three actions rose to 98.0%. It can be concluded that the application of the Jigsaw Cooperative Learning Model can improve student achievement in class X MIA4 SMA Negeri 1 Palu.
13

Zuber-Skerritt, Ortrun. "A model for designing action learning and action research programs". Learning Organization 9, no. 4 (October 2002): 143–49. http://dx.doi.org/10.1108/09696470210428868.

14

Krishnan, Abhijeet, Aaron Williams, and Chris Martens. "Towards Action Model Learning for Player Modeling". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 16, no. 1 (12.04.2021): 238–44. http://dx.doi.org/10.1609/aiide.v16i1.7436.

Annotation:
Player modeling attempts to create a computational model which accurately approximates a player’s behavior in a game. Most player modeling techniques rely on domain knowledge and are not transferable across games. Additionally, player models do not currently yield any explanatory insight about a player’s cognitive processes, such as the creation and refinement of mental models. In this paper, we present our findings with using action model learning (AML), in which an action model is learned given data in the form of a play trace, to learn a player model in a domain-agnostic manner. We demonstrate the utility of this model by introducing a technique to quantitatively estimate how well a player understands the mechanics of a game. We evaluate an existing AML algorithm (FAMA) for player modeling and develop a novel algorithm called Blackout that is inspired by player cognition. We compare Blackout with FAMA using the puzzle game Sokoban and show that Blackout generates better player models.
15

Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (28.06.2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.

Annotation:
Good representations can help RL agents perform concise modeling of their surroundings, and thus support effective decision-making in complex environments. Previous methods learn good representations by imposing extra constraints on dynamics. However, in the causal perspective, the causation between the action and its effect is not fully considered in those methods, which leads to the ignorance of the underlying relations among the action effects on the transitions. Based on the intuition that the same action always causes similar effects among different states, we induce such causation by taking the invariance of action effects among states as the relation. By explicitly utilizing such invariance, in this paper, we show that a better representation can be learned and potentially improves the sample efficiency and the generalization ability of the learned policy. We propose Invariant Action Effect Model (IAEM) to capture the invariance in action effects, where the effect of an action is represented as the residual of representations from neighboring states. IAEM is composed of two parts: (1) a new contrastive-based loss to capture the underlying invariance of action effects; (2) an individual action effect and provides a self-adapted weighting strategy to tackle the corner cases where the invariance does not hold. The extensive experiments on two benchmarks, i.e. Grid-World and Atari, show that the representations learned by IAEM preserve the invariance of action effects. Moreover, with the invariant action effect, IAEM can accelerate the learning process by 1.6x, rapidly generalize to new environments by fine-tuning on a few components, and outperform other dynamics-based representation methods by 1.4x in limited steps.
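
The central idea, representing an action's effect as the residual between neighbouring state representations and encouraging that effect to be invariant across states, can be sketched minimally as follows. The encoder, the random data, and the plain squared-distance objective are illustrative assumptions; IAEM itself uses a learned encoder and a contrastive loss with negatives and a weighting strategy, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_state, dim_repr = 8, 4
W = rng.normal(size=(dim_state, dim_repr))

def encode(state: np.ndarray) -> np.ndarray:
    """Stand-in state encoder; a real model would be a learned network."""
    return np.tanh(state @ W)

# Transitions (s, s') produced by the SAME action taken in two different states.
s1, s1_next = rng.normal(size=dim_state), rng.normal(size=dim_state)
s2, s2_next = rng.normal(size=dim_state), rng.normal(size=dim_state)

# Action effect as the residual between neighbouring state representations.
effect_1 = encode(s1_next) - encode(s1)
effect_2 = encode(s2_next) - encode(s2)

# Toy invariance objective: same action => similar effect vectors across states.
invariance_loss = float(np.sum((effect_1 - effect_2) ** 2))
print(invariance_loss)
```
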
16

Comfort, Louise K. "Action Research: A Model for Organizational Learning". Journal of Policy Analysis and Management 5, no. 1 (1985): 100. http://dx.doi.org/10.2307/3323415.

17

Jie Yang, Yangsheng Xu, and C. S. Chen. "Human action learning via hidden Markov model". IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 27, no. 1 (1997): 34–44. http://dx.doi.org/10.1109/3468.553220.

18

Comfort, Louise K. "Action research: A model for organizational learning". Journal of Policy Analysis and Management 5, no. 1 (01.02.2007): 100–118. http://dx.doi.org/10.1002/pam.4050050106.

19

Gaikwad, Suhani, Rutuja Ghodekar, Nikhil Gatkal, and Atharv Prayag. "Human Action Recognition using Deep Learning". International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (31.05.2023): 1888–92. http://dx.doi.org/10.22214/ijraset.2023.51960.

Annotation:
Abstract: The aim of this project is to recognize human actions for monitoring and security purposes. The project is mainly focused on building a system that helps doctors monitor patients. Human action recognition is required to recognize a set of human activities by training a supervised learning model and displaying the activity/action result according to the input action received. It has a wide range of applications, such as patient monitoring systems and ATM/bank security systems. A human action recognition model can be used mainly for security and monitoring purposes. Various machine learning and deep learning algorithms can be used for this project; one of the best approaches is using a CNN.
20

Edmonstone, John. "Learning and development in action learning: the energy investment model". Industrial and Commercial Training 35, no. 1 (01.02.2003): 26–28. http://dx.doi.org/10.1108/00197850310458216.

Annotation:
In action learning sets participants bring their personal energy and attitudes. These produce identifiable behaviour styles (not types of people). In the energy investment model four behaviour styles of set members are identified, showing participants’ typical feelings and reactions, the support needed and what helpful questions may be.
21

Gregory, Michael. "Accrediting Work-based Learning: Action Learning – A Model for Empowerment". Journal of Management Development 13, no. 4 (June 1994): 41–52. http://dx.doi.org/10.1108/02621719410057069.

22

Fardell, Jill. "Bringing Learning to Life: A User-led, Action-Learning Model". Journal of Integrated Care 11, no. 2 (April 2003): 36–42. http://dx.doi.org/10.1108/14769018200300026.

23

Xi, Kai, Stephen Gould, and Sylvie Thiébaux. "Neuro-Symbolic Learning of Lifted Action Models from Visual Traces". Proceedings of the International Conference on Automated Planning and Scheduling 34 (30.05.2024): 653–62. http://dx.doi.org/10.1609/icaps.v34i1.31528.

Annotation:
Model-based planners rely on action models to describe available actions in terms of their preconditions and effects. Nonetheless, manually encoding such models is challenging, especially in complex domains. Numerous methods have been proposed to learn action models from examples of plan execution traces. However, high-level information, such as state labels within traces, is often unavailable and needs to be inferred indirectly from raw observations. In this paper, we aim to learn lifted action models from visual traces --- sequences of image-action pairs depicting discrete successive trace steps. We present ROSAME, a differentiable neuRO-Symbolic Action Model lEarner that infers action models from traces consisting of probabilistic state predictions and actions. By combining ROSAME with a deep learning computer vision model, we create an end-to-end framework that jointly learns state predictions from images and infers symbolic action models. Experimental results demonstrate that our method succeeds in both tasks, using different visual state representations, with the learned action models often matching or even surpassing those created by humans.
24

Joshila Grace, L. K., K. Rahul, and P. S. Sidharth. "An Efficient Action Detection Model Using Deep Belief Networks". Journal of Computational and Theoretical Nanoscience 16, no. 8 (01.08.2019): 3232–36. http://dx.doi.org/10.1166/jctn.2019.8168.

Annotation:
Computer vision and image processing have gained enormous advances through machine learning techniques. Some of the major research areas within machine learning are action detection and pattern recognition. Action recognition is a recent extension of pattern recognition approaches in which the actions performed by a person or other living being are tracked and monitored. Action recognition still encounters some challenges that need to be addressed, and the actions must be recognized in minimal time. Networks such as SVMs and neural networks are used to train the system so that it can detect the pattern of an action when a new frame is given. In this paper, we propose a model which detects patterns of actions from a video or an image. Bounding boxes are used to detect the actions and localize them. A Deep Belief Network is used to train the model, with numerous images containing actions given as the training set. A performance evaluation was carried out on the model, and it is observed that it detects actions very accurately when a new image is given to the network.
25

Mtk, Agnes Lugfi Wulandari, Paraniah Paraniah, Subhanudin Subhanudin, and A. Rasul. "Application Model Learning Team Games Tournament on Students' Mathematics Learning Results". Indo-MathEdu Intellectuals Journal 4, no. 1 (30.04.2023): 29–37. http://dx.doi.org/10.54373/imeij.v4i1.47.

Annotation:
This study aims to determine the success of applying the team games tournament (TGT) learning model to improve mathematics learning outcomes for grade VIII students of SMP YPK Ebenhaezer Mimika Papua. This research is a type of Classroom Action Research, with the 29 students of grade VIII A at SMP YPK Ebenhaezer Mimika Papua as research subjects. The research design used the Kemmis and McTaggart model. The results showed that mathematics learning outcomes of grade VIII A students at SMP YPK Ebenhaezer could be improved using the team games tournament (TGT) learning model. After the first cycle of actions, 18 students (62 percent) met the KKM and 11 students (38 percent) did not. After the second cycle of actions, 22 students (76 percent) met the KKM and 7 students (24 percent) did not. Teacher activity was 66 percent in cycle I and increased to 77 percent in cycle II, while student activity was 70 percent in cycle I and increased to 76 percent in cycle II.
26

Vince, Russ, and Linda Martin. "Inside Action Learning: an Exploration of the Psychology and Politics of the Action Learning Model". Management Education and Development 24, no. 3 (October 1993): 185. http://dx.doi.org/10.1177/135050769302400302.

27

Vince, Russ, and Linda Martin. "Inside Action Learning: an Exploration of the Psychology and Politics of the Action Learning Model". Management Education and Development 24, no. 3 (October 1993): 205–15. http://dx.doi.org/10.1177/135050769302400308.

28

Larombo, Sapiudin fiun, and Basuki Wibawa. "INCREASED ACTIVITY AND LEARNING OUTCOMES THROUGH BIOLOGY WITH GUIDED DISCOVERY LEARNING MODEL". Asia Proceedings of Social Sciences 4, no. 3 (26.04.2019): 8–10. http://dx.doi.org/10.31580/apss.v4i3.808.

Annotation:
The purpose of this study is to improve the activities and student learning outcomes on biology subjects through the application of guided discovery learning models. This research is a type of Classroom Action Research. Classroom Action Research is carried out in 2 cycles. The results showed that the application of guided discovery learning models can improve the activity and learning outcomes of students of class XI IPA 1 SMAN 1 Asera. Learning outcomes in the cognitive realm increased by 22.77% after the action of the first cycle and amounted to 11.50% after the second cycle of action and psychomotor domain learning outcomes experienced an increase of 15.50% from the first cycle of action to the second cycle. The application of a guided learning model can improve student learning activities. Student learning activities increased by 40.25% after action I and by 9.88% after cycle II.
29

Long, Jiahuai, and Shuguang Rong. "Application of Machine Learning to Badminton Action Decomposition Teaching". Wireless Communications and Mobile Computing 2022 (20.04.2022): 1–10. http://dx.doi.org/10.1155/2022/3707407.

Annotation:
The study was aimed at realizing the identification of athletes’ actions in badminton teaching. The teaching process is segmented into many independent actions to help learners standardize their movements in badminton play, improving the national physical quality. First, the principle and advantages of machine vision sensing are introduced. Second, the images and videos about the action decomposition of badminton teaching are collected, and the image data are extracted by Haar-like. Subsequently, badminton players’ actions are recognized and preprocessed, and a dataset is constructed. Furthermore, a new algorithm model is implemented and trained by using Haar-like and Adaptive Boosting (AdaBoost). Finally, the badminton players’ action recognition algorithm is tested and compared with the traditional hidden Markov model (HMM) and support vector machine (SVM). The results show that action images improved by machine vision can process the captured actions effectively, making the computer better identify different badminton teaching actions. The proposed method has a recognition rate of more than 90% for each action, the average recognition accuracy of actions reaches 95%, the average recognition rate of the same person’s actions is 96.5%, and the average recognition rate of different people’s actions is 94.8%. The badminton teaching action recognition model based on Haar-like and AdaBoost can recognize and classify badminton actions and improve the quality of badminton teaching. This study shows that the image processing technology can effectively process the players’ static images, which gives the direction for physical education (PE) under artificial intelligence (AI).
30

Huang, Xinting, Jianzhong Qi, Yu Sun, and Rui Zhang. "MALA: Cross-Domain Dialogue Generation with Action Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (03.04.2020): 7977–84. http://dx.doi.org/10.1609/aaai.v34i05.6306.

Annotation:
Response generation for task-oriented dialogues involves two basic components: dialogue planning and surface realization. These two components, however, have a discrepancy in their objectives, i.e., task completion and language quality. To deal with such discrepancy, conditioned response generation has been introduced where the generation process is factorized into action decision and language generation via explicit action representations. To obtain action representations, recent studies learn latent actions in an unsupervised manner based on the utterance lexical similarity. Such an action learning approach is prone to diversities of language surfaces, which may impinge task completion and language quality. To address this issue, we propose multi-stage adaptive latent action learning (MALA) that learns semantic latent actions by distinguishing the effects of utterances on dialogue progress. We model the utterance effect using the transition of dialogue states caused by the utterance and develop a semantic similarity measurement that estimates whether utterances have similar effects. For learning semantic actions on domains without dialogue states, MALA extends the semantic similarity measurement across domains progressively, i.e., from aligning shared actions to learning domain-specific actions. Experiments using multi-domain datasets, SMD and MultiWOZ, show that our proposed model achieves consistent improvements over the baselines models in terms of both task completion and language quality.
31

Mordoch, Argaman, Enrico Scala, Roni Stern, and Brendan Juba. "Safe Learning of PDDL Domains with Conditional Effects". Proceedings of the International Conference on Automated Planning and Scheduling 34 (30.05.2024): 387–95. http://dx.doi.org/10.1609/icaps.v34i1.31498.

Annotation:
Powerful domain-independent planners have been developed to solve various types of planning problems. These planners often require a model of the acting agent's actions, given in some planning domain description language. Manually designing such an action model is a notoriously challenging task. An alternative is to automatically learn action models from observation. Such an action model is called safe if every plan created with it is consistent with the real, unknown action model. Algorithms for learning such safe action models exist, yet they cannot handle domains with conditional or universal effects, which are common constructs in many planning problems. We prove that learning non-trivial safe action models with conditional effects may require an exponential number of samples. Then, we identify reasonable assumptions under which such learning is tractable and propose Conditional-SAM, the first algorithm capable of doing so. We analyze Conditional-SAM theoretically and evaluate it experimentally. Our results show that the action models learned by Conditional-SAM can be used to solve perfectly most of the test set problems in most of the experimented domains.
32

Rozhana, Kardiana Metha, Adi Tri Atmaja, Nathasa Pramudita Irianti, Nila Kartika Sari, and Kardiana Zendha Avalentina. "Implementation of the STEAM model in mathematics subjects to improve learning outcomes". Jurnal Bidang Pendidikan Dasar 7, no. 2 (17.07.2023): 142–48. http://dx.doi.org/10.21067/jbpd.v7i2.8540.

Annotation:
The purpose of this research is to implement the STEAM learning model to improve the mathematics learning outcomes of fourth grade elementary school students. The method used is classroom action research, with procedures that include preparing action plans, implementing actions, observing, and reflecting. In cycle 1 the students' average score was only 58.5, which was below the completeness threshold and indicated the need for improvement and reflection. After the action in cycle 1, cycle 2 showed a significant increase in learning completeness, reaching 85.75. It can therefore be concluded that the STEAM learning model can be implemented in class IV mathematics.
33

HISAN, NAILUL. "UPAYA MENINGKATKAN KOMPETENSI NEGOSIASI MELALUI MODEL PEMBELAJARAN ACTION LEARNING". TEACHING : Jurnal Inovasi Keguruan dan Ilmu Pendidikan 2, no. 3 (19.11.2022): 348–58. http://dx.doi.org/10.51878/teaching.v2i3.1662.

Annotation:
Action learning is one form of implementation of the Ministry of Finance's Corporate University. Action learning activities are implemented after the completion of structured learning. The purpose of this study is to learn participants' opinions about action learning activities. The types of action learning activities in PJJ Effective Negotiation Skills are summarizing negotiation books, sharing negotiation knowledge with colleagues, or practising negotiation in the workplace. The research method used is qualitative research whose data are obtained from action learning activity reports. The data were collected from the PJJ Effective Negotiation Skills action learning reports organized by the Financial Education and Training Center Denpasar in 2022. The results showed that participants felt the benefits of action learning activities. From the participants' reports, we obtained information that action learning was able to increase participants' knowledge about negotiations, improve their ability to prepare for negotiations, increase their readiness to face the different characters of negotiating partners, and increase their confidence in negotiating. In addition, during the practice of negotiating with partners, participants were also able to convince their negotiating counterparts.
34

Cronin, Gerard, and Steven Andrews. "After action reviews: a new model for learning". Emergency Nurse 17, no. 3 (02.06.2009): 32–35. http://dx.doi.org/10.7748/en2009.06.17.3.32.c7090.

35

Pawar, Reshma V. "Learning a Deep Model for Human Action Recognition". International Journal for Research in Applied Science and Engineering Technology 7, no. 6 (30.06.2019): 2170–73. http://dx.doi.org/10.22214/ijraset.2019.6364.

36

Pocock, Helen. "SQIFED: A new reflective model for action learning". Journal of Paramedic Practice 5, no. 3 (04.03.2013): 146–51. http://dx.doi.org/10.12968/jpar.2013.5.3.146.

37

Zhuo, Hankz Hankui, and Qiang Yang. "Action-model acquisition for planning via transfer learning". Artificial Intelligence 212 (July 2014): 80–103. http://dx.doi.org/10.1016/j.artint.2014.03.004.

38

Zimmerelli, Lisa, and Victoria Bridges. "Service-Learning Tutor Education: A Model of Action". WLN: A Journal of Writing Center Scholarship 40, no. 7 (2016): 2–10. http://dx.doi.org/10.37514/wln-j.2016.40.7.02.

39

Grand, Maxence, Damien Pellier, and Humbert Fiorino. "TempAMLSI: Temporal Action Model Learning Based on STRIPS Translation". Proceedings of the International Conference on Automated Planning and Scheduling 32 (13.06.2022): 597–605. http://dx.doi.org/10.1609/icaps.v32i1.19847.

Annotation:
Hand-encoding PDDL domains is generally considered difficult, tedious and error-prone. The difficulty is even greater when temporal domains have to be encoded. Indeed, actions have a duration and their effects are not instantaneous. In this paper, we present TempAMLSI, an algorithm based on the AMLSI approach to learn temporal domains. TempAMLSI is the first approach able to learn temporal domains with single hard envelopes, and TempAMLSI is the first approach able to deal with both partial and noisy observations. We show experimentally that TempAMLSI learns accurate temporal domains, i.e., temporal domains that can be used without human proofreading to solve new planning problems with different forms of action concurrency.
40

Flood, Adele. "Student Action - Centred Learning: a new model for learning and teaching". International Journal for Cross-Disciplinary Subjects in Education 3, no. 3 (01.09.2012): 805–15. http://dx.doi.org/10.20533/ijcdse.2042.6364.2012.0115.

41

Dandoti, Sarosh. "Learning to Survive using Reinforcement Learning with MLAgents". International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (31.07.2022): 3009–14. http://dx.doi.org/10.22214/ijraset.2022.45526.

Annotation:
Abstract: Simulations have existed for a long time, in different versions and levels of complexity. Training a reinforcement learning model in a 3D environment lets us draw many new insights from the inference. There have been examples where the AI learns to feed itself, learns to start walking, to jump, and so on. The reason one trains an entire model, from the agent knowing nothing to being a perfect task achiever, is that new behavioral patterns can be recorded during the process. Reinforcement learning is a feedback-based machine learning technique in which an agent learns how to behave in a given environment by performing actions and observing the outcomes of those actions. For each positive action, the agent receives positive feedback; for each negative action, the agent receives negative feedback or a penalty. A simple agent would learn to perform a task and get some reward on accomplishing it. The agent is also punished if it does something that it is not supposed to do. These simple simulations can evolve, try to use their surroundings, and try to compete with other agents to accomplish their goal.
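
The reward/penalty loop described above is the standard reinforcement learning setup. As a generic illustration (not the Unity ML-Agents API or the author's environment), the following tabular Q-learning sketch shows an agent learning from positive and negative feedback in an assumed toy corridor.

```python
import random

random.seed(0)

# Toy 1-D corridor: start at state 0, goal at state 4. Reaching the goal is rewarded,
# bumping into the left wall is penalised. Purely illustrative values throughout.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: q[(s, x)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else (-1.0 if s_next == 0 and a == -1 else 0.0)
        q[(s, a)] += alpha * (reward + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

print([max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_STATES)])  # learned greedy policy
```
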
42

Dede Indra Setiabudi and Dewi Utami. "IDENTIFICATION OF THE IMPACT OF THE SIMULATION LEARNING MODEL IN IMPROVING LEARNING ACTIVITIES AND ACHIEVEMENTS IN STRATEGY AND LEARNING PLANNING COURSES". International Journal of Education and Literature 1, no. 2 (05.08.2022): 22–28. http://dx.doi.org/10.55606/ijel.v1i2.22.

Annotation:
This study aims to improve the activeness and learning achievement of students by applying simulation methods in the Strategy and Planning of Studying courses. The research is classroom action research. The research setting was the 50 regular semester VI students who took the Strategy and Planning of Studying courses. The research design involves the lecturer as the main researcher and, at the same time, the implementer of the action, with an observer lecturer and the students as subjects. The research was carried out through: 1. planning, 2. implementation of class actions, 3. monitoring and evaluation, 4. analysis and reflection, 5. summarizing the results. Data collection used observation, interviews, documentation and questionnaires, and the validity of the data was tested using triangulation of methods and sources. The action research applied simulation methods in the Strategy and Planning of Studying courses. The results of this action research are: 1) the application of simulation learning methods can increase student activity, with an increase in learning activities from cycle I to cycle II, and from cycle II to cycle III; 2) the application of simulation learning methods can optimize student learning achievement, which increased from cycle I to cycle II, and from cycle II to cycle III.
43

Datta, Ashoke Kumar. "A fuzzy model for learning in automata". Robotica 3, no. 1 (January 1985): 39–44. http://dx.doi.org/10.1017/s0263574700001478.

Annotation:
A simple general model for learning, using a fuzzy set theoretic approach and fuzzy decisions in an automaton which has nonfuzzy input/output, is proposed. The process has been modelled somewhat in the fashion of general biological systems, which may be viewed as a fuzzy decision process where learning consists in taking a tentative action and reinforcing the membership values on the basis of the results of that action. The model is tested on an automaton whose sole purpose is to follow the boundary of an object with which it makes contact during its movements. The automaton is simulated by a computer. It has a standard 8-neighbourhood configuration with binary sense capability and three action capabilities. The automaton has been found to learn to take the correct action in a large number of possible input situations within only a few thousand moves.
44

Zhao, Quanbin, and Hanqi Wang. "Application of Unsupervised Transfer Technique Based on Deep Learning Model in Physical Training". Computational Intelligence and Neuroscience 2022 (14.04.2022): 1–12. http://dx.doi.org/10.1155/2022/8679221.

Annotation:
The research purpose is to study the standardization and scientizing of physical training actions. Stacking denoising auto encoder (SDAE), a BiLSTM deep network model (SDAL-DNM) (a kind of training action model), and an unsupervised transfer model are used to deeply study the action problem of physical training. Initially, the physical training action discrimination model adopted here is a combination of stacked noise reduction self-encoder and bidirectional depth network model. Then, this model can collect data for five actions in physical training and further analyze the importance of action standardization for physical training. Afterward, the SDAL-DNM implemented here fully integrates the advantages of SDAE and BiLSTM. Finally, the unsupervised transfer model adopted here is based on SDAL-DNM deep learning (DL). The movement data of the physical training crowd are collected, and then the unsupervised transfer model is trained. According to the movement characteristics of physical training, the data difference between trainers is calculated so that the actions of each trainer can be continuously adapted according to the model, and finally, the benefits of effectively distinguishing the training actions can be achieved. The research shows that before and after unsupervised learning, the average decline of the model used is 1.69%, while the average decline of extreme learning machine (ELM) is 5.5%. The conclusion is that the unsupervised transfer model can improve the discrimination accuracy of physical training actions and provide theoretical support to effectively correct mistakes in physical training actions.
45

Kim, Minbeom, Kyeongha Rho, Yong-duk Kim, and Kyomin Jung. "Action-driven contrastive representation for reinforcement learning". PLOS ONE 17, no. 3 (18.03.2022): e0265456. http://dx.doi.org/10.1371/journal.pone.0265456.

Annotation:
In reinforcement learning, reward-driven feature learning directly from high-dimensional images faces two challenges: sample-efficiency for solving control tasks and generalization to unseen observations. In prior works, these issues have been addressed through learning representation from pixel inputs. However, their representation faced the limitations of being vulnerable to the high diversity inherent in environments or not taking the characteristics for solving control tasks. To attenuate these phenomena, we propose the novel contrastive representation method, Action-Driven Auxiliary Task (ADAT), which forces a representation to concentrate on essential features for deciding actions and ignore control-irrelevant details. In the augmented state-action dictionary of ADAT, the agent learns representation to maximize agreement between observations sharing the same actions. The proposed method significantly outperforms model-free and model-based algorithms in the Atari and OpenAI ProcGen, widely used benchmarks for sample-efficiency and generalization.
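
The contrastive objective described above, maximizing agreement between observations that share the same action, can be sketched with a toy InfoNCE-style loss. The embeddings below are random stand-ins and the similarity/temperature choices are assumptions; ADAT's actual augmented state-action dictionary and encoder are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(anchor: np.ndarray, positive: np.ndarray, negatives: np.ndarray, tau: float = 0.1) -> float:
    """Toy InfoNCE: pull the anchor toward its positive, push it away from negatives."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                                  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

dim = 16
anchor = rng.normal(size=dim)                    # embedding of one observation
positive = anchor + 0.1 * rng.normal(size=dim)   # observation sharing the same action
negatives = rng.normal(size=(8, dim))            # observations associated with other actions

print(info_nce(anchor, positive, negatives))
```
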
46

Warneri, Warneri. "Peningkatan Hasil Belajar Akuntansi Keuangan Melalui Model Cooperative Learning Tipe Jigsaw". JTP - Jurnal Teknologi Pendidikan 18, no. 1 (08.05.2017): 53. http://dx.doi.org/10.21009/jtp1801.6.

Annotation:
This research aimed at improving the students' learning achievement and activities by improving the accounting learning process through the implementation of the jigsaw-type cooperative learning model. Using the action research method, the research was carried out through planning, implementation, observation and reflection. The action research was conducted in two cycles, and every cycle was carried out in eight meetings. The data on learning achievement were collected through materials comprehension tests before the action, after the first cycle action, after the second cycle action, and after all actions had been completed. The data on the students' activities were collected through observations while the teaching and learning processes were administered. The research resulted in the improvement of learning achievement and activities in the process of learning accounting through the implementation of the jigsaw-type cooperative learning model. Keywords: Learning Achievement, Financial Accounting, Cooperative Learning, Jigsaw Type.
47

Zhang, TianYu. "The Motor Action Analysis Based on Deep Learning". Scientific Programming 2022 (10.03.2022): 1–11. http://dx.doi.org/10.1155/2022/9436736.

Annotation:
To address the slow speed and low accuracy of motor action recognition methods, this study proposes a motor action analysis method based on a CNN network and the softmax classification model. First, to obtain motor action feature information, static spatial features of the actions are extracted with BN-Inception based on the CNN network, together with high-dimensional features from a 3D ConvNet; then, based on the softmax classifier structure, taxonomic recognition of the motor actions is realized. Finally, through decision-layer fusion and a time semantic continuity optimization strategy, the motion action recognition accuracy is further improved and more efficient motion action classification and recognition is realized. The results show that the proposed method can complete the motor action analysis and achieve a classification recognition accuracy of 83.11%, which has certain practical value.
48

Aineto, Diego, Sergio Jiménez, and Eva Onaindia. "A Comprehensive Framework for Learning Declarative Action Models". Journal of Artificial Intelligence Research 74 (07.07.2022): 1091–123. http://dx.doi.org/10.1613/jair.1.13073.

Annotation:
A declarative action model is a compact representation of the state transitions of dynamic systems that generalizes over world objects. The specification of declarative action models is often a complex hand-crafted task. In this paper we formulate declarative action models via state constraints, and present the learning of such models as a combinatorial search. The comprehensive framework presented here allows us to connect the learning of declarative action models to well-known problem solving tasks. In addition, our framework allows us to characterize the existing work in the literature according to four dimensions: (1) the target action models, in terms of the state transitions they define; (2) the available learning examples; (3) the functions used to guide the learning process, and to evaluate the quality of the learned action models; (4) the learning algorithm. Last, the paper lists relevant successful applications of the learning of declarative actions models and discusses some open challenges with the aim of encouraging future research work.
49

Stenberg, Gunilla. "Infants' imitative learning from third-party observations". Interaction Studies 24, no. 3 (31.12.2023): 464–83. http://dx.doi.org/10.1075/is.20024.ste.

Annotation:
Abstract In two separate experiments, we examined 17-month-olds’ imitation in a third-party context. The aim was to explore how seeing another person responding to a model’s novel action influenced infant imitation. The infants watched while a reliable model demonstrated a novel action with a familiar (Experiment 1) or an unfamiliar (Experiment 2) object to a second actor. The second actor either imitated or did not imitate the novel action of the model. Fewer infants imitated the model’s novel behavior in the non-imitation condition than in the imitation condition in Experiment 1. In Experiment 2, infants’ likelihood of imitating was not influenced by whether they had watched the second actor imitating the model’s novel action with the unfamiliar object. The findings indicate that infants take into account a second adult’s actions in a third party context when infants receive information that contradicts their existing knowledge and when it corresponds with their own experiences. If infants do not have prior knowledge about how to handle a certain object, then the second adult’s actions do not seem to matter.
50

Wolitzky, Alexander. "Learning from Others' Outcomes". American Economic Review 108, no. 10 (01.10.2018): 2763–801. http://dx.doi.org/10.1257/aer.20170914.

Annotation:
I develop a simple model of social learning in which players observe others’ outcomes but not their actions. A continuum of players arrives continuously over time, and each player chooses once-and-for-all between a safe action (which succeeds with known probability) and a risky action (which succeeds with fixed but unknown probability, depending on the state of the world). The actions also differ in their costs. Before choosing, a player observes the outcomes of K earlier players. There is always an equilibrium in which success is more likely in the good state, and this alignment property holds whenever the initial generation of players is not well informed about the state. In the case of an outcome-improving innovation (where the risky action may yield a higher probability of success), players take the correct action as K → ∞. In the case of a cost-saving innovation (where the risky action involves saving a cost but accepting a lower probability of success), inefficiency persists as K → ∞ in any aligned equilibrium. Whether inefficiency takes the form of under-adoption or over-adoption also depends on the nature of the innovation. Convergence of the population to equilibrium may be nonmonotone. (JEL D81, D83, O32, Q12, Q16)
