Journal articles on the topic 'Prediction (Psychology)'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Prediction (Psychology).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Hall, Andrew N., and Sandra C. Matz. "Targeting Item-level Nuances Leads to Small but Robust Improvements in Personality Prediction from Digital Footprints." European Journal of Personality 34, no. 5 (September 2020): 873–84. http://dx.doi.org/10.1002/per.2253.

Abstract:
In the past decade, researchers have demonstrated that personality can be accurately predicted from digital footprint data, including Facebook likes, tweets, blog posts, pictures, and transaction records. Such computer-based predictions from digital footprints can complement, and in some circumstances even replace, traditional self-report measures, which suffer from well-known response biases and are difficult to scale. However, these previous studies have focused on the prediction of aggregate trait scores (i.e., a person's extroversion score), which may obscure prediction-relevant information at theoretical levels of the personality hierarchy beneath the Big 5 traits. Specifically, new research has demonstrated that personality may be better represented by so-called personality nuances, item-level representations of personality, and that utilizing these nuances can improve predictive performance. The present work examines the hypothesis that personality predictions from digital footprint data can be improved by first predicting personality nuances and subsequently aggregating to scores, rather than predicting trait scores outright. To examine this hypothesis, we employed least absolute shrinkage and selection operator regression and random forest models to predict both items and traits using out-of-sample cross-validation. In nine out of 10 cases across the two modelling approaches, nuance-based models improved the prediction of personality over the trait-based approaches to a small but meaningful degree (4.25% or 1.69% on average, depending on method). Implications for personality prediction and personality nuances are discussed. © 2020 European Association of Personality Psychology
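The item-then-aggregate idea in this abstract can be illustrated with a toy sketch. Everything below is invented for illustration: the simulated "footprint" features, the items, and the simple pick-one-feature regression stand in for the authors' LASSO and random-forest pipeline.

```python
import math
import random
import statistics as stats

random.seed(0)

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = stats.mean(x), stats.mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / math.sqrt(sxx * syy)

def ols_fit(x, y):
    """Univariate least squares: returns (intercept, slope)."""
    mx, my = stats.mean(x), stats.mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - slope * mx, slope

# Toy "digital footprint" data: each item loads on a different feature.
n, n_feat, n_items = 400, 3, 4
X = [[random.gauss(0, 1) for _ in range(n_feat)] for _ in range(n)]
items = [[X[i][j % n_feat] + random.gauss(0, 1) for i in range(n)]
         for j in range(n_items)]
trait = [stats.mean(col) for col in zip(*items)]  # trait score = mean of items

train, test = range(300), range(300, 400)

def best_feature_prediction(target):
    """Pick the feature most correlated with the target on the training
    split, fit univariate least squares, and predict on the test split."""
    t_train = [target[i] for i in train]
    f = max(range(n_feat),
            key=lambda g: abs(corr([X[i][g] for i in train], t_train)))
    a, b = ols_fit([X[i][f] for i in train], t_train)
    return [a + b * X[i][f] for i in test]

# Trait-based: model the aggregate trait score directly.
pred_trait = best_feature_prediction(trait)
# Nuance-based: model each item separately, then aggregate the predictions.
item_preds = [best_feature_prediction(item) for item in items]
pred_nuance = [stats.mean(p) for p in zip(*item_preds)]

truth = [trait[i] for i in test]
print("trait-based  r =", round(corr(pred_trait, truth), 3))
print("nuance-based r =", round(corr(pred_nuance, truth), 3))
```

Because each item is allowed its own predictor, the aggregated item predictions can pick up signal that a single model of the trait score misses, which is the intuition behind the improvement the abstract reports.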
2

Putka, Dan J., Adam S. Beatty, and Matthew C. Reeder. "Modern Prediction Methods." Organizational Research Methods 21, no. 3 (April 3, 2017): 689–732. http://dx.doi.org/10.1177/1094428117697041.

Abstract:
Predicting outcomes is critical in many domains of organizational research and practice. Over the past few decades, there have been substantial advances in predictive modeling methods and concepts from the computer science, machine learning, and statistics literatures that may have potential value for organizational science and practice. Nevertheless, treatment of these modern methods in major management and industrial-organizational psychology journals remains minimal. The purpose of this article is to (a) raise awareness among organizational researchers and practitioners with regard to several modern prediction methods and concepts, (b) discuss in nonmathematical terms how they compare to traditional regression-based prediction methods, and (c) provide an empirical example of their application and performance relative to traditional methods. Beyond illustrating their potential for improving prediction, we will also illustrate how these methods can offer deeper insights into how predictor content functions beyond simple construct-based explanations.
3

Ganzach, Yoav, and David H. Krantz. "The psychology of moderate prediction." Organizational Behavior and Human Decision Processes 47, no. 2 (December 1990): 177–204. http://dx.doi.org/10.1016/0749-5978(90)90036-9.

4

Ganzach, Yoav, and David H. Krantz. "The psychology of moderate prediction." Organizational Behavior and Human Decision Processes 48, no. 2 (April 1991): 169–92. http://dx.doi.org/10.1016/0749-5978(91)90011-h.

5

Dall’Aglio, John. "Sex and Prediction Error, Part 3: Provoking Prediction Error." Journal of the American Psychoanalytic Association 69, no. 4 (August 2021): 743–65. http://dx.doi.org/10.1177/00030651211042059.

Abstract:
In parts 1 and 2 of this Lacanian neuropsychoanalytic series, surplus prediction error was presented as a neural correlate of the Lacanian concept of jouissance. Affective consciousness (a key source of prediction error in the brain) impels the work of cognition, the predictive work of explaining what is foreign and surprising. Yet this arousal is the necessary bedrock of all consciousness. Although the brain’s predictive model strives for homeostatic explanation of prediction error, jouissance “drives a hole” in the work of homeostasis. Some residual prediction error always remains. Lacanian clinical technique attends to this surplus and the failed predictions to which this jouissance “sticks.” Rather than striving to eliminate prediction error, clinical practice seeks its metabolization. Analysis targets one’s mode of jouissance to create a space for the subject to enjoy in some other way. This entails working with prediction error, not removing or tolerating it. Analysis aims to shake the very core of the subject by provoking prediction error—this drives clinical change. Brief clinical examples illustrate this view.
6

Watson-Daniels, Jamelle, David C. Parkes, and Berk Ustun. "Predictive Multiplicity in Probabilistic Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10306–14. http://dx.doi.org/10.1609/aaai.v37i9.26227.

Abstract:
Machine learning models are often used to inform real-world risk assessment tasks: predicting consumer default risk, predicting whether a person suffers from a serious illness, or predicting a person's risk to appear in court. Given multiple models that perform almost equally well for a prediction task, to what extent do predictions vary across these models? If predictions are relatively consistent for similar models, then the standard approach of choosing the model that optimizes a penalized loss suffices. But what if predictions vary significantly for similar models? In machine learning, this is referred to as predictive multiplicity, i.e., the prevalence of conflicting predictions assigned by near-optimal competing models. In this paper, we present a framework for measuring predictive multiplicity in probabilistic classification (predicting the probability of a positive outcome). We introduce measures that capture the variation in risk estimates over the set of competing models, and develop optimization-based methods to compute these measures efficiently and reliably for convex empirical risk minimization problems. We demonstrate the incidence and prevalence of predictive multiplicity in real-world tasks. Further, we provide insight into how predictive multiplicity arises by analyzing the relationship between predictive multiplicity and data set characteristics (outliers, separability, and majority-minority structure). Our results emphasize the need to report predictive multiplicity more widely.
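A minimal sketch of the core notion, not the authors' optimization-based machinery: enumerate a grid of candidate models, keep those within a tolerance epsilon of the best loss, and measure how much their risk estimates disagree per example. The data, the grid, and both thresholds below are hypothetical.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_loss(probs, labels):
    e = 1e-12
    return -sum(y * math.log(p + e) + (1 - y) * math.log(1 - p + e)
                for p, y in zip(probs, labels)) / len(labels)

# Toy binary risk data: only the first feature is informative.
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
y = [1 if x1 + random.gauss(0, 1) > 0 else 0 for x1, _ in X]

# Candidate models: a coarse grid of weight vectors for a logistic score.
grid = [(w1 / 2.0, w2 / 2.0) for w1 in range(-4, 5) for w2 in range(-4, 5)]
loss = {w: log_loss([sigmoid(w[0] * x1 + w[1] * x2) for x1, x2 in X], y)
        for w in grid}
best = min(loss.values())

epsilon = 0.01  # tolerance defining the "near-optimal" level set
level_set = [w for w in grid if loss[w] <= best + epsilon]

# Per-example spread of risk estimates across the near-optimal models.
ranges = []
for x1, x2 in X:
    ps = [sigmoid(w[0] * x1 + w[1] * x2) for w in level_set]
    ranges.append(max(ps) - min(ps))

nu = 0.1  # how far apart two risk estimates must be to count as conflicting
ambiguity = sum(r > nu for r in ranges) / len(ranges)
print(f"{len(level_set)} near-optimal models, ambiguity = {ambiguity:.2f}")
```

A nonzero ambiguity means near-optimal models assign meaningfully different risk estimates to the same individuals, which is exactly the phenomenon the paper argues should be reported.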
7

Fokkema, Marjolein, Dragos Iliescu, Samuel Greiff, and Matthias Ziegler. "Machine Learning and Prediction in Psychological Assessment." European Journal of Psychological Assessment 38, no. 3 (May 2022): 165–75. http://dx.doi.org/10.1027/1015-5759/a000714.

Abstract:
Modern prediction methods from machine learning (ML) and artificial intelligence (AI) are becoming increasingly popular, also in the field of psychological assessment. These methods provide unprecedented flexibility for modeling large numbers of predictor variables and non-linear associations between predictors and responses. In this paper, we aim to look at what these methods may contribute to the assessment of criterion validity and what their possible drawbacks are. We apply a range of modern statistical prediction methods to a dataset for predicting the university major completed, based on the subscales and items of a scale for vocational preferences. The results indicate that logistic regression combined with regularization already performs strikingly well in terms of predictive accuracy. More sophisticated techniques for incorporating non-linearities can further contribute to predictive accuracy and validity, but often only marginally.
8

Stenhaug, Benjamin A., and Benjamin W. Domingue. "Predictive Fit Metrics for Item Response Models." Applied Psychological Measurement 46, no. 2 (February 13, 2022): 136–55. http://dx.doi.org/10.1177/01466216211066603.

Abstract:
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. This study advocates an alternative view of fit, “predictive fit,” based on the model’s ability to predict new data. The authors define two prediction tasks: “missing responses prediction,” where the goal is to predict an in-sample person’s response to an in-sample item, and “missing persons prediction,” where the goal is to predict an out-of-sample person’s string of responses. Based on these prediction tasks, two predictive fit metrics are derived for item response models that assess how well an estimated item response model fits the data-generating model. These metrics are based on long-run out-of-sample predictive performance (i.e., if the data-generating model produced infinite amounts of data, what would be the quality of the model’s predictions on average?). Simulation studies are conducted to identify the prediction-maximizing model across a variety of conditions. For example, when prediction is defined in terms of missing responses, greater average person ability and greater item discrimination are both associated with the 3PL model producing relatively worse predictions, and thus lead to greater minimum sample sizes for the 3PL model. In each simulation, the prediction-maximizing model is compared to the models selected by Akaike’s information criterion, the Bayesian information criterion (BIC), and likelihood ratio tests. It is found that the performance of these methods depends on the prediction task of interest. In general, likelihood ratio tests often select overly flexible models, while BIC selects overly parsimonious models. The authors use Programme for International Student Assessment data to demonstrate how to use cross-validation to directly estimate the predictive fit metrics in practice. The implications for item response model selection in operational settings are discussed.
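The “missing responses” prediction task can be sketched as follows. This is a deliberately crude stand-in: responses are simulated from a Rasch (1PL) model, and instead of a fitted IRT model, held-out cells are predicted from smoothed person and item margins combined on the log-odds scale; the authors' actual metrics and estimation procedures are not reproduced here.

```python
import math
import random

random.seed(2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))

# Simulate dichotomous responses from a Rasch (1PL) model.
n_persons, n_items = 300, 20
theta = [random.gauss(0, 1) for _ in range(n_persons)]   # person abilities
beta = [random.gauss(0, 1) for _ in range(n_items)]      # item difficulties
resp = [[1 if random.random() < sigmoid(theta[p] - beta[i]) else 0
         for i in range(n_items)] for p in range(n_persons)]

# "Missing responses" task: hold out roughly 10% of cells.
held = {(p, i) for p in range(n_persons) for i in range(n_items)
        if random.random() < 0.10}

def shrunk_mean(vals):
    return (sum(vals) + 1.0) / (len(vals) + 2.0)  # add-one smoothing

# Margin-based stand-in for a fitted model: combine smoothed person and
# item proportions on the log-odds scale.
p_person = [shrunk_mean([resp[p][i] for i in range(n_items) if (p, i) not in held])
            for p in range(n_persons)]
p_item = [shrunk_mean([resp[p][i] for p in range(n_persons) if (p, i) not in held])
          for i in range(n_items)]
p_all = shrunk_mean([resp[p][i] for p in range(n_persons)
                     for i in range(n_items) if (p, i) not in held])

def held_out_log_loss(pred):
    e = 1e-12
    total = 0.0
    for p, i in held:
        q, y = pred(p, i), resp[p][i]
        total -= y * math.log(q + e) + (1 - y) * math.log(1 - q + e)
    return total / len(held)

loss_model = held_out_log_loss(
    lambda p, i: sigmoid(logit(p_person[p]) + logit(p_item[i]) - logit(p_all)))
loss_base = held_out_log_loss(lambda p, i: p_all)
print(f"held-out log loss: margins {loss_model:.3f} vs baseline {loss_base:.3f}")
```

The same hold-out-and-score loop is what a cross-validated estimate of predictive fit looks like in practice: any candidate item response model can be slotted in as the predictor and compared on the held-out cells.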
9

Yaniv, Ilan, and Robin M. Hogarth. "Judgmental Versus Statistical Prediction: Information Asymmetry and Combination Rules." Psychological Science 4, no. 1 (January 1993): 58–62. http://dx.doi.org/10.1111/j.1467-9280.1993.tb00558.x.

Abstract:
The relative predictive accuracy of humans and statistical models has long been the subject of controversy even though models have demonstrated superior performance in many studies. We propose that relative performance depends on the amount of contextual information available and whether it is distributed symmetrically to humans and models. Given their different strengths, human and statistical predictions can be profitably combined to improve prediction.
10

Poon, Connie S. K., Derek J. Koehler, and Roger Buehler. "On the psychology of self-prediction: Consideration of situational barriers to intended actions." Judgment and Decision Making 9, no. 3 (May 2014): 207–25. http://dx.doi.org/10.1017/s1930297500005763.

Abstract:
When people predict their future behavior, they tend to place too much weight on their current intentions, which produces an optimistic bias for behaviors associated with currently strong intentions. More realistic self-predictions require greater sensitivity to situational barriers, such as obstacles or competing demands, that may interfere with the translation of current intentions into future behavior. We consider three reasons why people may not adjust sufficiently for such barriers. First, self-predictions may focus exclusively on current intentions, ignoring potential barriers altogether. We test this possibility, in three studies, with manipulations that draw greater attention to barriers. Second, barriers may be discounted in the self-prediction process. We test this possibility by comparing prospective and retrospective ratings of the impact of barriers on the target behavior. Neither possibility was supported in these tests, or in a further test examining whether an optimally weighted statistical model could improve on the accuracy of self-predictions by placing greater weight on anticipated situational barriers. Instead, the evidence supports a third possibility: Even when they acknowledge that situational factors can affect the likelihood of carrying out an intended behavior, people do not adequately moderate the weight placed on their current intentions when predicting their future behavior.
11

Artishcheva, Lira V., and Evgeniya A. Kuznetcova. "PERSONAL FEATURES OF ORPHANS IN PREDICTING." Volga Region Pedagogical Search 35, no. 1 (2021): 48–59. http://dx.doi.org/10.33065/2307-1052-2021-1-35-48-59.

Abstract:
The relevance of the study is due to the existing problem of prediction. Orphans are in special life and social conditions, which determine their personal development and the formation of personal qualities. The research is aimed at revealing the relationship between the personality traits of orphans and probabilistic prediction. The aim of the study is to substantiate significant relationships between the signs of predictive ability and such personal characteristics as resilience and self-esteem, based on Pearson correlation analysis. The research addresses the following issues: analysis of scientific works devoted to the problem of orphanhood; definition of the essence of the concepts of prediction, resilience, and self-esteem; and identification of the relationship between the signs of predictive ability and personality traits. According to the theory of probabilistic prediction, predicting the outcome of situations, the correctness of decision-making, and tactics of behavior all depend on individual personality characteristics. As a result of the study, positive and negative significant interrelationships among indicators of predictive ability, resilience, and self-esteem were revealed. The results can be used in the field of psychology to improve the predictive ability of orphans.
12

Balch, William R. "Effect of Class Standing on Students' Predictions of Their Final Exam Scores." Teaching of Psychology 19, no. 3 (October 1992): 136–41. http://dx.doi.org/10.1207/s15328023top1903_1.

Abstract:
Ninety undergraduate introductory psychology students predicted their numerical scores on a multiple-choice final exam directly before the exam was passed out (pretest prediction) and just after completing the exam (posttest prediction). Based on their all-but-final-exam point totals, students were ranked with respect to class standing and categorized as above average (top third), average (middle third), or below average (bottom third). Below average students significantly overestimated their final exam scores on both pretest (9.47%) and posttest (7.73%) predictions. Average students significantly overestimated their scores on pretest (5.33%) but not posttest (2.13%) predictions. Above average students, however, were fairly accurate for both types of prediction, slightly but not significantly underestimating (about 2%) their exam scores.
13

Csillag, Daniel, Lucas Monteiro Paes, Thiago Ramos, João Vitor Romano, Rodrigo Schuller, Roberto B. Seixas, Roberto I. Oliveira, and Paulo Orenstein. "AmnioML: Amniotic Fluid Segmentation and Volume Prediction with Uncertainty Quantification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15494–502. http://dx.doi.org/10.1609/aaai.v37i13.26837.

Abstract:
Accurately predicting the volume of amniotic fluid is fundamental to assessing pregnancy risks, though the task usually requires many hours of laborious work by medical experts. In this paper, we present AmnioML, a machine learning solution that leverages deep learning and conformal prediction to output fast and accurate volume estimates and segmentation masks from fetal MRIs with Dice coefficient over 0.9. Also, we make available a novel, curated dataset for fetal MRIs with 853 exams and benchmark the performance of many recent deep learning architectures. In addition, we introduce a conformal prediction tool that yields narrow predictive intervals with theoretically guaranteed coverage, thus aiding doctors in detecting pregnancy risks and saving lives. A successful case study of AmnioML deployed in a medical setting is also reported. Real-world clinical benefits include up to 20x segmentation time reduction, with most segmentations deemed by doctors as not needing any further manual refinement. Furthermore, AmnioML's volume predictions were found to be highly accurate in practice, with mean absolute error below 56 mL and tight predictive intervals, showcasing its impact in reducing pregnancy complications.
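The conformal prediction tool mentioned here follows a general recipe that can be sketched with split conformal regression on toy one-dimensional data (the fetal-MRI model itself is far more involved): fit any point predictor, take a finite-sample-corrected quantile of absolute residuals on a held-out calibration split, and widen every prediction by that amount. All data below are simulated.

```python
import math
import random

random.seed(3)

# Toy regression data standing in for volume estimation: y = 2x + noise.
data = []
for _ in range(500):
    x = random.uniform(0, 10)
    data.append((x, 2 * x + random.gauss(0, 1)))
train, calib, test = data[:300], data[300:400], data[400:]

# Any point predictor works; here, a least-squares slope through the origin.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
predict = lambda x: slope * x

# Split conformal: residual scores on the calibration set, then the
# finite-sample-corrected (1 - alpha) quantile.
alpha = 0.1  # target 90% coverage
scores = sorted(abs(y - predict(x)) for x, y in calib)
k = math.ceil((len(scores) + 1) * (1 - alpha))
q = scores[min(k, len(scores)) - 1]

# Intervals [prediction - q, prediction + q] have distribution-free
# marginal coverage of at least 1 - alpha on exchangeable data.
covered = sum(predict(x) - q <= y <= predict(x) + q for x, y in test)
print(f"half-width {q:.2f}, empirical coverage {covered / len(test):.2f}")
```

The coverage guarantee holds regardless of how good the underlying point predictor is; a better predictor simply yields narrower intervals, which is why the paper can report both guaranteed coverage and tight intervals.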
14

Ameen, Ahmed O., Moshood Alabi Alarape, and Kayode S. Adewole. "STUDENTS’ ACADEMIC PERFORMANCE AND DROPOUT PREDICTION." MALAYSIAN JOURNAL OF COMPUTING 4, no. 2 (November 22, 2019): 278. http://dx.doi.org/10.24191/mjoc.v4i2.6701.

Abstract:
Students’ Academic Performance (SAP) is an important metric in determining the status of students in any academic institution. It allows instructors and other education managers to get an accurate evaluation of students in different courses in a particular semester and also serves as an indicator for students to review their strategies for better performance in subsequent semesters. Predicting SAP is therefore important to help learners obtain the best from their studies. A number of studies in Educational Psychology (EP), Learning Analytics (LA), and Educational Data Mining (EDM) have been carried out to study and predict SAP, especially in determining failures or dropouts, with the goal of preventing the occurrence of the negative final outcome. This paper presents a comprehensive review of related studies that deal with SAP and dropout predictions. To group the studies, this review proposes a taxonomy of the methods and features used in the literature for SAP and dropout prediction. The paper identifies some key issues and challenges for SAP and dropout prediction that require substantial research efforts. Limitations of the existing approaches for SAP and dropout prediction are identified. Finally, the paper outlines current research directions in the area.
15

Saucier, Gerard, Kathryn Iurino, and Amber Gayle Thalmayer. "Comparing predictive validity in a community sample: High-dimensionality and traditional domain-and-facet structures of personality variation." European Journal of Personality 34, no. 6 (December 2020): 1120–37. http://dx.doi.org/10.1002/per.2235.

Abstract:
Prediction of outcomes is an important way of distinguishing, among personality models, the best from the rest. Prominent previous models have tended to emphasize multiple internally consistent “facet” scales subordinate to a few broad domains. But such an organization of measurement may not be optimal for prediction. Here, we compare the predictive capacity and efficiency of assessments across two types of personality-structure model: conventional structures of facets as found in multiple platforms, and new high-dimensionality structures emphasizing those based on natural-language adjectives, in particular lexicon-based structures of 20, 23, and 28 dimensions. Predictions targeted 12 criterion variables related to health and psychopathology, in a sizeable American community sample. Results tended to favor personality-assessment platforms with (at least) a dozen or two well-selected variables having minimal intercorrelations, without sculpting of these to make them function as indicators of a few broad domains. Unsurprisingly, shorter scales, especially when derived from factor analyses of the personality lexicon, were shown to take a more efficient route to given levels of predictive capacity. Popular 20th-century personality-assessment models set out influential but suboptimal templates, including one that first identifies domains and then facets, which compromise the efficiency of measurement models, at least from a comparative-prediction standpoint. © 2020 European Association of Personality Psychology
16

Muckli, Lars, Lucy S. Petro, and Fraser W. Smith. "Backwards is the way forward: Feedback in the cortical hierarchy predicts the expected future." Behavioral and Brain Sciences 36, no. 3 (May 10, 2013): 221. http://dx.doi.org/10.1017/s0140525x12002361.

Abstract:
Clark offers a powerful description of the brain as a prediction machine, which offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
17

Shepperd, James A. "Developing a Prediction Model to Reduce a Growing Number of Psychology Majors." Teaching of Psychology 20, no. 2 (April 1993): 97–101. http://dx.doi.org/10.1207/s15328023top2002_7.

Abstract:
This article explores problems associated with increased numbers of undergraduate psychology majors and considers strategies available to psychology departments wishing to reduce these numbers. Special attention is given to an approach using multiple regression procedures to develop a prediction model for reducing the number of psychology majors. With a prediction model, a criterion such as psychology GPA at graduation is selected, and predictors of this criterion (e.g., Introductory Psychology grade and first semester GPA) are examined. The prediction model approach is illustrated with data from Holy Cross College.
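The prediction-model approach described in the abstract amounts to fitting a multiple regression on past students and applying it to new ones. The sketch below solves the normal equations directly on simulated records; the coefficients, grade ranges, and noise level are all invented, not Holy Cross data.

```python
import random

random.seed(4)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical records: (Introductory Psychology grade, first-semester GPA,
# psychology GPA at graduation), with invented coefficients and noise.
students = []
for _ in range(120):
    intro = random.uniform(2.0, 4.0)
    sem1 = random.uniform(2.0, 4.0)
    gpa = min(0.5 * intro + 0.4 * sem1 + random.gauss(0.2, 0.25), 4.0)
    students.append((intro, sem1, gpa))

# Normal equations (X'X) beta = X'y, with an intercept column.
rows = [(1.0, intro, sem1) for intro, sem1, _ in students]
ys = [gpa for _, _, gpa in students]
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
b0, b1, b2 = solve(XtX, Xty)

predict = lambda intro, sem1: b0 + b1 * intro + b2 * sem1
print(f"predicted psychology GPA for intro=3.5, sem1=3.2: {predict(3.5, 3.2):.2f}")
```

A department would then set an admission cut-off on the predicted criterion; the fitted coefficients make explicit how much each early indicator contributes to that decision.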
18

Rabin, Matthew. "An Approach to Incorporating Psychology into Economics." American Economic Review 103, no. 3 (May 1, 2013): 617–22. http://dx.doi.org/10.1257/aer.103.3.617.

Abstract:
This article proposes an approach to improving the psychological realism of economics while maintaining its conventional techniques and goals--formal theoretical and empirical analysis using tractable models, with a focus on prediction and estimation. Besides tolerating the imperfections that come with precision, models should aim for two crucial criteria: power and scope. The approach advocated is to develop portable extensions of existing models that embed preexisting theories as parameter values, while introducing the new psychological assumptions as alternative parameter values, and make the model portable by defining it in all cases where existing models make predictions.
19

Nie, Xuesong, Yunfeng Yan, Siyuan Li, Cheng Tan, Xi Chen, Haoyuan Jin, Zhihang Zhu, Stan Z. Li, and Donglian Qi. "Wavelet-Driven Spatiotemporal Predictive Learning: Bridging Frequency and Time Variations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4334–42. http://dx.doi.org/10.1609/aaai.v38i5.28230.

Abstract:
Spatiotemporal predictive learning is a paradigm that empowers models to learn spatial and temporal patterns by predicting future frames from past frames in an unsupervised manner. This method typically uses recurrent units to capture long-term dependencies, but these units often come with high computational costs and limited performance in real-world scenes. This paper presents an innovative Wavelet-based SpatioTemporal (WaST) framework, which extracts and adaptively controls both low and high-frequency components at image and feature levels via 3D discrete wavelet transform for faster processing while maintaining high-quality predictions. We propose a Time-Frequency Aware Translator uniquely crafted to efficiently learn short- and long-range spatiotemporal information by individually modeling spatial frequency and temporal variations. Meanwhile, we design a wavelet-domain High-Frequency Focal Loss that effectively supervises high-frequency variations. Extensive experiments across various real-world scenarios, such as driving scene prediction, traffic flow prediction, human motion capture, and weather forecasting, demonstrate that our proposed WaST achieves state-of-the-art performance over various spatiotemporal prediction methods.
20

Duckitt, John H. "The Prediction of Violence." South African Journal of Psychology 18, no. 1 (March 1988): 10–16. http://dx.doi.org/10.1177/008124638801800102.

Abstract:
Behaviour prediction is an important applied goal of psychology and the prediction of violent behaviour, in particular, has attracted considerable attention. Although the ability of mental health professionals to predict violence adequately was widely accepted till the late 1960s, a number of important studies then seemed to establish irrefutably the conclusion that clinical assessments of dangerousness, or violence proneness, were hopelessly inaccurate. Renewed attempts to predict violent behaviour, particularly in criminal populations, however, have recently culminated in the development of empirically based actuarial systems, which have shown a dramatically improved capacity to predict violent behaviour. These systems have already begun to have important impacts on parole and institutional classification policies. It is argued that these new systems involve not merely a methodological, but also an important conceptual shift in the enterprise of violence prediction, and that actuarial strategies may have been unjustifiably neglected by psychologists. Some suggestions for the integration of such actuarial approaches with contemporary theoretical developments in personality and social psychology are discussed.
21

Zupancic, Maja, and Tina Kavcic. "Predicting early academic achievement: The role of higher-versus lower-order personality traits." Psihologija 44, no. 4 (2011): 295–306. http://dx.doi.org/10.2298/psi1104295z.

Abstract:
The study explored the role of children's (N = 193) individual differences and parental characteristics at the beginning of the first year of schooling in predicting students' attainment of academic standards at the end of the year. Special attention was paid to children's personality as perceived by the teachers' assistants. Along with parents' education, parenting practices, and first-graders' cognitive ability, the incremental predictive power of children's higher-order (robust) personality traits was compared to the contribution of lower-order (specific) traits in explaining academic achievement. The specific traits provided a somewhat more accurate prediction than the robust traits. Unique contributions of maternal authoritative parenting, children's cognitive ability, and personality to academic achievement were established. The ratings of first-graders' conscientiousness (a higher-order trait) improved the prediction of academic achievement based on parenting and cognitive ability by 12%, whereas the teachers' assistants' perceptions of children's intelligence and low antagonism (lower-order traits) improved the prediction by 17%.
22

Smith, David D. "Indeterminacy in Psychology." Psychological Reports 69, no. 3 (December 1991): 771–77. http://dx.doi.org/10.2466/pr0.1991.69.3.771.

Abstract:
There is an irreducible uncertainty in the prediction of human behavior because the dynamics of the brain, as a self-organizing system consisting of many millions of elements, are inherently indeterminate. Thus the Laplacian ideal of universal laws relating knowable causes to predictable effects cannot be realized in psychology.
23

Saylam, Berrenur, and Özlem Durmaz İncel. "Multitask Learning for Mental Health: Depression, Anxiety, Stress (DAS) Using Wearables." Diagnostics 14, no. 5 (February 26, 2024): 501. http://dx.doi.org/10.3390/diagnostics14050501.

Abstract:
This study investigates the prediction of mental well-being factors—depression, stress, and anxiety—using the NetHealth dataset from college students. The research addresses four key questions, exploring the impact of digital biomarkers on these factors, their alignment with conventional psychology literature, the time-based performance of applied methods, and potential enhancements through multitask learning. The findings reveal modality rankings aligned with psychology literature, validated against paper-based studies. Improved predictions are noted with temporal considerations, and further enhanced by multitasking. Mental health multitask prediction results show aligned baseline and multitask performances, with notable enhancements using temporal aspects, particularly with the random forest (RF) classifier. Multitask learning improves outcomes for depression and stress but not anxiety using RF and XGBoost.
24

Yarkoni, Tal, and Jacob Westfall. "Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning." Perspectives on Psychological Science 12, no. 6 (August 25, 2017): 1100–1122. http://dx.doi.org/10.1177/1745691617693393.

Abstract:
Psychology has historically been concerned, first and foremost, with explaining the causal mechanisms that give rise to behavior. Randomized, tightly controlled experiments are enshrined as the gold standard of psychological research, and there are endless investigations of the various mediating and moderating variables that govern various behaviors. We argue that psychology’s near-total focus on explaining the causes of behavior has led much of the field to be populated by research programs that provide intricate theories of psychological mechanism but that have little (or unknown) ability to predict future behaviors with any appreciable accuracy. We propose that principles and techniques from the field of machine learning can help psychology become a more predictive science. We review some of the fundamental concepts and tools of machine learning and point out examples where these concepts have been used to conduct interesting and important psychological research that focuses on predictive research questions. We suggest that an increased focus on prediction, rather than explanation, can ultimately lead us to greater understanding of behavior.
APA, Harvard, Vancouver, ISO, and other styles
25

Heil, Lieke, Johan Kwisthout, Stan van Pelt, Iris van Rooij, and Harold Bekkering. "One wouldn’t expect an expert bowler to hit only two pins: Hierarchical predictive processing of agent-caused events." Quarterly Journal of Experimental Psychology 71, no. 12 (January 23, 2018): 2643–54. http://dx.doi.org/10.1177/1747021817752102.

Full text
Abstract:
Evidence is accumulating that our brains process incoming information using top-down predictions. If lower level representations are correctly predicted by higher level representations, this enhances processing. However, if they are incorrectly predicted, additional processing is required at higher levels to “explain away” prediction errors. Here, we explored the potential nature of the models generating such predictions. More specifically, we investigated whether a predictive processing model with a hierarchical structure and causal relations between its levels is able to account for the processing of agent-caused events. In Experiment 1, participants watched animated movies of “experienced” and “novice” bowlers. The results are in line with the idea that prediction errors at a lower level of the hierarchy (i.e., the outcome of how many pins fell down) slow down reporting of information at a higher level (i.e., which agent was throwing the ball). Experiments 2 and 3 suggest that this effect is specific to situations in which the predictor is causally related to the outcome. Overall, the study supports the idea that a hierarchical predictive processing model can account for the processing of observed action outcomes and that the predictions involved are specific to cases where action outcomes can be predicted based on causal knowledge.
APA, Harvard, Vancouver, ISO, and other styles
26

Yan, Jin, Songhui Hou, and Alexander Unger. "High Construal Level Reduces Overoptimistic Performance Prediction." Social Behavior and Personality: an international journal 42, no. 8 (September 24, 2014): 1303–13. http://dx.doi.org/10.2224/sbp.2014.42.8.1303.

Full text
Abstract:
Overoptimistic performance prediction is a very common feature of people's goal-directed behavior. In this study we examined overoptimistic prediction as a function of construal level. In construal level theory an explanation is set out with regard to how people make predictions through the abstract connections between past and future events, with high-level construal bridging near and distant events. We conducted 2 experiments to confirm our hypothesis that, compared with people with local, concrete construals, people with global, abstract construals would make predictions that were less overoptimistic. In Study 1 we manipulated construal level by priming mindset, and participants (n = 81) predicted the level of their productivity in an anagram task. The results supported our hypothesis. In Study 2, in order to improve the generalizability of the conclusion, we varied the manipulation of the construal level by priming a scenario, and measured performance prediction by having the participants (n = 119) estimate task duration. The results showed that high-level construal consistently decreased overoptimistic prediction, supporting our hypothesis. The theoretical implications of our findings are discussed.
APA, Harvard, Vancouver, ISO, and other styles
27

Dall’Aglio, John. "Sex and Prediction Error, Part 2: Jouissance and The Free Energy Principle in Neuropsychoanalysis." Journal of the American Psychoanalytic Association 69, no. 4 (August 2021): 715–41. http://dx.doi.org/10.1177/00030651211042377.

Full text
Abstract:
Jouissance refers to an excess enjoyment beyond (yet tied to) speech and representation. From the perspective of some Lacanian analysts, jouissance is precisely what testifies against any relationship to the brain—jouissance “slips” out of cognition. On the contrary, it is argued here that jouissance has a central place in contemporary neuropsychoanalysis. In part 1 of this series the metapsychology of jouissance was presented in relation to the real and symbolic registers. Here, in part 2, Mark Solms’s neuropsychoanalytic model of Karl Friston’s free energy principle is summarized. In this model, “predictions” aim to resolve prediction errors—most notably, those signaled by affective consciousness. “Surplus prediction error”—prediction error that arises at the point where the predictive model fails—is proposed to be a neural correlate of jouissance. This limit within prediction is analogous to the real as a structural negativity within the symbolic.
APA, Harvard, Vancouver, ISO, and other styles
28

Letlaka-Rennert, Kedibone, Peggy Luswazi, Janet E. Helms, and Maria Cecilia Zea. "Does the Womanist Identity Model Predict Aspects of Psychological Functioning in Black South African Women?" South African Journal of Psychology 27, no. 4 (December 1997): 236–43. http://dx.doi.org/10.1177/008124639702700406.

Full text
Abstract:
This article's area of inquiry is the reactions of black South African women to gender oppression. It also examines whether Helms's Womanist Identity Model is useful in predicting self-related personality characteristics, specifically Locus of control and Self-efficacy. The Womanist Identity Model was predictive of self-efficacy, with Immersion-Emersion and Internalisation subscales making unique contributions to its prediction, but in opposite directions. The Womanist Model was also predictive of Locus of control among black South African women. The findings therefore demonstrated that internalised gender oppression can differentially contribute to this South African sample's perceptions of personal empowerment.
APA, Harvard, Vancouver, ISO, and other styles
29

Vlasceanu, Madalina, Michael J. Morais, and Alin Coman. "The Effect of Prediction Error on Belief Update Across the Political Spectrum." Psychological Science 32, no. 6 (June 2021): 916–33. http://dx.doi.org/10.1177/0956797621995208.

Full text
Abstract:
Making predictions is an adaptive feature of the cognitive system, as prediction errors are used to adjust the knowledge they stemmed from. Here, we investigated the effect of prediction errors on belief update in an ideological context. In Study 1, 704 Cloud Research participants first evaluated a set of beliefs and then either made predictions about evidence associated with the beliefs and received feedback or were just presented with the evidence. Finally, they reevaluated the initial beliefs. Study 2, which involved a U.S. Census–matched sample of 1,073 Cloud Research participants, was a replication of Study 1. We found that the size of prediction errors linearly predicts belief update and that making large errors leads to more belief update than does not engaging in prediction. Importantly, the effects held for both Democrats and Republicans across all belief types (Democratic, Republican, neutral). We discuss these findings in the context of the misinformation epidemic.
APA, Harvard, Vancouver, ISO, and other styles
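The core finding above — that the size of a prediction error linearly predicts the amount of belief update — can be illustrated with a closed-form least-squares fit. This is a generic sketch with made-up numbers, not the study's data or analysis code.

```python
def ols_slope_intercept(x, y):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    b = cov / var          # slope: belief update per unit of prediction error
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data: larger prediction errors -> larger belief updates
pe_size = [0, 1, 2, 3, 4]
belief_update = [0.1, 0.9, 2.1, 2.9, 4.0]
a, b = ols_slope_intercept(pe_size, belief_update)
# A clearly positive slope b would be consistent with the reported linear relation.
```

A positive, roughly unit slope on data like these is what "prediction-error size linearly predicts belief update" amounts to in regression terms.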
30

Hickok, Gregory. "Predictive coding? Yes, but from what source?" Behavioral and Brain Sciences 36, no. 4 (June 24, 2013): 358. http://dx.doi.org/10.1017/s0140525x12002750.

Full text
Abstract:
There is little doubt that predictive coding is an important mechanism in language processing – indeed, in information processing generally. However, it is less clear whether the action system is the source of such predictions during perception. Here I summarize the computational problem with motor prediction for perceptual processes and argue instead for a dual-stream model of predictive coding.
APA, Harvard, Vancouver, ISO, and other styles
31

Liu, Lydia T., Solon Barocas, Jon Kleinberg, and Karen Levy. "On the Actionability of Outcome Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22240–49. http://dx.doi.org/10.1609/aaai.v38i20.30229.

Full text
Abstract:
Predicting future outcomes is a prevalent application of machine learning in social impact domains. Examples range from predicting student success in education to predicting disease risk in healthcare. Practitioners recognize that the ultimate goal is not just to predict but to act effectively. Increasing evidence suggests that relying on outcome predictions for downstream interventions may not have desired results. In most domains there exists a multitude of possible interventions for each individual, making the challenge of taking effective action more acute. Even when causal mechanisms connecting the individual's latent states to outcomes are well understood, in any given instance (a specific student or patient), practitioners still need to infer---from budgeted measurements of latent states---which of many possible interventions will be most effective for this individual. With this in mind, we ask: when are accurate predictors of outcomes helpful for identifying the most suitable intervention? Through a simple model encompassing actions, latent states, and measurements, we demonstrate that pure outcome prediction rarely results in the most effective policy for taking actions, even when combined with other measurements. We find that except in cases where there is a single decisive action for improving the outcome, outcome prediction never maximizes "action value", the utility of taking actions. Making measurements of actionable latent states, where specific actions lead to desired outcomes, may considerably enhance the action value compared to outcome prediction, and the degree of improvement depends on action costs and the outcome model. This analysis emphasizes the need to go beyond generic outcome prediction in interventional settings by incorporating knowledge of plausible actions and latent states.
APA, Harvard, Vancouver, ISO, and other styles
32

Shenavi, Sakshi. "PERSONALITY PREDICTION SYSTEM." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 12, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33864.

Full text
Abstract:
Personality prediction is a challenging yet crucial task in various fields such as psychology, human resources, and marketing. In this study, we propose a questionnaire-based approach using the random forest algorithm to predict personality traits. The questionnaire is designed to gather information related to the personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The dataset used for this study consists of responses from individuals who completed the questionnaire. Responses to specific questions are used as input variables for the random forest algorithm. The algorithm is trained on a portion of the dataset and then tested on the remaining portion to evaluate its performance in predicting personality traits. Our results show that the random forest algorithm achieves high accuracy in predicting personality traits, outperforming other machine learning algorithms such as logistic regression and support vector machines. This approach has the potential to be used in various applications, such as personalized marketing, recommendation systems, and mental health assessment. Key Words: Personality prediction, Random forest algorithm, personality traits, Questionnaire-based approach.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Chenglong, and Hyunchul Ahn. "E-Learning at-Risk Group Prediction Considering the Semester and Realistic Factors." Education Sciences 13, no. 11 (November 13, 2023): 1130. http://dx.doi.org/10.3390/educsci13111130.

Full text
Abstract:
This study focused on predicting at-risk groups of students at the Open University (OU), a UK university that offers distance-learning courses and adult education. The research was conducted by drawing on publicly available data provided by the Open University for the year 2013–2014. The semester’s time series was considered, and data from previous semesters were used to predict the current semester’s results. Each course was predicted separately so that the research reflected reality as closely as possible. Three different methods for selecting training data were listed. Since the at-risk prediction results needed to be provided to the instructor every week, four representative time points during the semester were chosen to assess the predictions. Furthermore, we used eight single and three integrated machine-learning algorithms to compare the prediction results. The results show that using the same semester code course data for training saved prediction calculation time and improved the prediction accuracy at all time points. In week 16, predictions using the algorithms with the voting classifier method showed higher prediction accuracy and were more stable than predictions using a single algorithm. The prediction accuracy of this model reached 81.2% for the midterm predictions and 84% for the end-of-semester predictions. Finally, the study used the Shapley additive explanation values to explore the main predictor variables of the prediction model.
APA, Harvard, Vancouver, ISO, and other styles
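The integrated "voting classifier" approach described above combines the labels predicted by several base algorithms into a single prediction. A minimal hard-voting sketch, with hypothetical base-model outputs rather than the paper's actual OU pipeline:

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-model label predictions by hard majority vote.

    predictions_per_model: one inner list per base model, each holding
    a label ('at-risk' or 'ok') for every student.
    """
    n_students = len(predictions_per_model[0])
    combined = []
    for i in range(n_students):
        votes = [model[i] for model in predictions_per_model]
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical base models voting on four students
preds = [
    ["at-risk", "ok", "ok", "at-risk"],
    ["at-risk", "ok", "at-risk", "ok"],
    ["ok", "ok", "at-risk", "at-risk"],
]
print(majority_vote(preds))  # -> ['at-risk', 'ok', 'at-risk', 'at-risk']
```

The stability gain the authors report is the usual motivation for voting ensembles: a student is flagged only when most base models agree, which damps the idiosyncratic errors of any single algorithm.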
34

Sajjadian, Mehri, Raymond W. Lam, Roumen Milev, Susan Rotzinger, Benicio N. Frey, Claudio N. Soares, Sagar V. Parikh, et al. "Machine learning in the prediction of depression treatment outcomes: a systematic review and meta-analysis." Psychological Medicine 51, no. 16 (October 12, 2021): 2742–51. http://dx.doi.org/10.1017/s0033291721003871.

Full text
Abstract:
Background: Multiple treatments are effective for major depressive disorder (MDD), but the outcomes of each treatment vary broadly among individuals. Accurate prediction of outcomes is needed to help select a treatment that is likely to work for a given person. We aim to examine the performance of machine learning methods in delivering replicable predictions of treatment outcomes. Methods: Of 7732 non-duplicate records identified through literature search, we retained 59 eligible reports and extracted data on sample, treatment, predictors, machine learning method, and treatment outcome prediction. A minimum sample size of 100 and an adequate validation method were used to identify adequate-quality studies. The effects of study features on prediction accuracy were tested with mixed-effects models. Fifty-four of the studies provided accuracy estimates or other estimates that allowed calculation of balanced accuracy of predicting outcomes of treatment. Results: Eight adequate-quality studies reported a mean accuracy of 0.63 [95% confidence interval (CI) 0.56–0.71], which was significantly lower than a mean accuracy of 0.75 (95% CI 0.72–0.78) in the other 46 studies. Among the adequate-quality studies, accuracies were higher when predicting treatment resistance (0.69) and lower when predicting remission (0.60) or response (0.56). The choice of machine learning method, feature selection, and the ratio of features to individuals were not associated with reported accuracy. Conclusions: The negative relationship between study quality and prediction accuracy, combined with a lack of independent replication, invites caution when evaluating the potential of machine learning applications for personalizing the treatment of depression.
APA, Harvard, Vancouver, ISO, and other styles
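The meta-analysis above pools results as balanced accuracy, i.e. the mean of sensitivity and specificity, which is robust to the class imbalance typical of remission/response outcomes. A minimal sketch with made-up labels (1 = remitted):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Hypothetical outcome labels and model predictions
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
```

With these toy labels, sensitivity is 3/4 and specificity 4/6, so balanced accuracy sits near the 0.63–0.75 range discussed in the abstract; unlike raw accuracy, it cannot be inflated by always predicting the majority class.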
35

Hayes, Brett. "The Category Use Effect in Clinical Diagnosis." Clinical Psychological Science 6, no. 2 (July 27, 2017): 216–27. http://dx.doi.org/10.1177/2167702617712525.

Full text
Abstract:
Conventionally, the process of clinical diagnosis is seen as distinct from subsequent clinical decisions. In the current study we challenge this distinction by examining a category use effect whereby diagnostic features that can be used to classify instances and make additional predictions are more likely to be used in diagnosis. During training, senior medical students (n = 36) and undergraduates (n = 44) learned to classify cases into one of two artificial disease categories. They were then asked to make an additional prediction about each instance. Some features were informative for both diagnosis and the additional prediction (relevant-use features) whereas others were useful only for diagnosis (irrelevant-use features). At test, all groups classified instances with relevant-use features more accurately and confidently than instances with irrelevant-use features. This effect was stronger in those with clinical training and when there was a plausible connection between diagnosis and prediction. The implications for clinical psychology diagnosis and judgment are discussed.
APA, Harvard, Vancouver, ISO, and other styles
36

Quinsey, Vernon L. "Improving decision accuracy where base rates matter: The prediction of violent recidivism." Behavioral and Brain Sciences 19, no. 1 (March 1996): 37–38. http://dx.doi.org/10.1017/s0140525x0004139x.

Full text
Abstract:
Base rates are vital in predicting violent criminal recidivism. However, both lay people given simulated prediction tasks and professionals making real-life predictions appear insensitive to variations in the base rate of violent recidivism. Although there are techniques to help decision makers attend to base rates, increased decision accuracy is better sought in improved actuarial models as opposed to improved clinicians.
APA, Harvard, Vancouver, ISO, and other styles
37

Hilbig, Benjamin E., Morten Moshagen, and Ingo Zettler. "Prediction Consistency: A Test of the Equivalence Assumption across Different Indicators of the Same Construct." European Journal of Personality 30, no. 6 (November 2016): 637–47. http://dx.doi.org/10.1002/per.2085.

Full text
Abstract:
Prominent theoretical constructs such as the Big Five personality factors often inspire the development and use of different inventories. This practice rests on the vital assumption that different indicators equivalently assess the same construct—otherwise, it would often be inappropriate to draw conclusions on the construct level. In comparison to the evidence typically relied on to support this equivalence assumption, we argue that a direct test of prediction consistency will provide further insights: prediction consistency is a necessary condition for the equivalence assumption that indicators from different inventories predict an external criterion to the same extent. Here, we outline guidelines on how to design studies to establish prediction consistency and illustrate this approach in an experiment testing the prediction consistency of the Agreeableness indicators from three prominent Big Five inventories. Specifically, we considered prediction consistency with respect to honesty (vs. cheating) as the behavioral criterion for which a specific a priori hypothesis can be derived on theoretical grounds. Results contradicted prediction consistency and thus the equivalence assumption by showing qualitatively different relations to behavioral honesty, thereby also emphasizing that the interchangeability of inventories should generally be subjected to a strict test, rather than assumed. Copyright © 2016 European Association of Personality Psychology
APA, Harvard, Vancouver, ISO, and other styles
38

Woehr, David J., and Timothy A. Cavell. "Self-Report Measures of Ability, Effort, and Nonacademic Activity as Predictors of Introductory Psychology Test Scores." Teaching of Psychology 20, no. 3 (October 1993): 156–60. http://dx.doi.org/10.1207/s15328023top2003_5.

Full text
Abstract:
Self-report measures of academic ability, academic effort, and nonacademic activity were used to predict students' performance on their first introductory psychology test. Collectively, these predictor variables explained a significant proportion of the variance in test performance. In addition, academic ability, academic effort, and nonacademic activity each contributed significantly to the prediction of test scores. The relative predictive value of different aspects of academic effort was also examined. Results are discussed in terms of how introductory psychology instructors might advise students who wish to improve their test performance.
APA, Harvard, Vancouver, ISO, and other styles
39

Friese, Malte, Matthias Bluemke, and Michaela Wänke. "Predicting Voting Behavior with Implicit Attitude Measures." Experimental Psychology 54, no. 4 (January 2007): 247–55. http://dx.doi.org/10.1027/1618-3169.54.4.247.

Full text
Abstract:
Abstract. Implicit measures of attitudes are commonly seen to be primarily capable of predicting spontaneous behavior. However, evidence exists that these measures can also improve the prediction of more deliberate behavior. In a prospective study we tested the hypothesis that Implicit Association Test (IAT) measures of the five major political parties in Germany would improve the prediction of voting behavior over and above explicit self-report measures in the 2002 parliamentary elections. Additionally we tested whether general interest in politics moderates the relationship between explicit and implicit attitude measures. The results support our hypotheses. Implications for predictive models of explicitly and implicitly measured attitudes are discussed.
APA, Harvard, Vancouver, ISO, and other styles
40

Heyman, Tom, and Geert Heyman. "Can prediction-based distributional semantic models predict typicality?" Quarterly Journal of Experimental Psychology 72, no. 8 (February 21, 2019): 2084–109. http://dx.doi.org/10.1177/1747021819830949.

Full text
Abstract:
Recent advances in the field of computational linguistics have led to the development of various prediction-based models of semantics. These models seek to infer word representations from large text collections by predicting target words from neighbouring words (or vice versa). The resulting representations are vectors in a continuous space, collectively called word embeddings. Although psychological plausibility was not a primary concern for the developers of predictive models, it has been the topic of several recent studies in the field of psycholinguistics. That is, word embeddings have been linked to similarity ratings, word associations, semantic priming, word recognition latencies, and so on. Here, we build on this work by investigating category structure. Throughout seven experiments, we sought to predict human typicality judgements from two languages, Dutch and English, using different semantic spaces. More specifically, we extracted a number of predictor variables, and evaluated how well they could capture the typicality gradient of common categories (e.g., birds, fruit, vehicles, etc.). Overall, the performance of predictive models was rather modest and did not compare favourably with that of an older count-based model. These results are somewhat disappointing given the enthusiasm surrounding predictive models. Possible explanations and future directions are discussed.
APA, Harvard, Vancouver, ISO, and other styles
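In studies like the one above, the standard way to turn word embeddings into a typicality predictor is cosine similarity between an exemplar's vector and the category-name vector (or a category centroid). A toy sketch with invented 3-d vectors (real embeddings have hundreds of dimensions, and these numbers are illustrative only):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings": a typical exemplar lies closer to the category vector
bird = [1.0, 1.0, 0.0]
robin = [0.9, 1.1, 0.1]     # hypothetical typical exemplar
penguin = [0.2, 0.7, 0.9]   # hypothetical atypical exemplar

ranking_ok = cosine(robin, bird) > cosine(penguin, bird)
```

The empirical question the paper asks is whether similarities computed this way (from prediction-based embeddings) correlate with human typicality ratings — and its answer is that older count-based spaces did better.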
41

Blum, Gabriela S., John F. Rauthmann, Richard Göllner, Tanja Lischetzke, and Manfred Schmitt. "The Nonlinear Interaction of Person and Situation (Nips) Model: Theory and Empirical Evidence." European Journal of Personality 32, no. 3 (May 2018): 286–305. http://dx.doi.org/10.1002/per.2138.

Full text
Abstract:
Despite the broad consensus in psychology that human behaviour is influenced by the interaction between characteristics of the person and characteristics of the situation, not much is known about the precise shape of this person–situation (P × S) interaction. To address this issue, we introduce and test the nonlinear interaction of person and situation (NIPS) model. The NIPS model can be applied to explain contradictory research results, offers a more accurate prediction of behaviour, and can be applied to any trait. In three studies and with three different analytical approaches, we test the NIPS model and its implications. In the pre–study, we test whether variability in participants’ behaviour is smaller in extreme aggression–provoking and jealousy–inducing situations than in moderate situations, suggesting the effect of ‘strong’ situations at the extremes of the situation continuum. In Studies 1 and 2, we test the nonlinear relation between person and situation variables in predicting behaviour in within–subject designs and provide support for the predictions of the NIPS model. Future lines of research with the NIPS model are discussed. Copyright © 2018 European Association of Personality Psychology
APA, Harvard, Vancouver, ISO, and other styles
42

Hollweg, Lewis. "Synthetic Oil Is Better for Whom?" Industrial and Organizational Psychology 3, no. 3 (September 2010): 363–65. http://dx.doi.org/10.1017/s1754942600002558.

Full text
Abstract:
After reviewing the article by Johnson et al. (2010), I began to ask myself and other industrial and organizational (I-O) psychologists what interest groups might be impacted by a Society for Industrial and Organizational Psychology (SIOP)-sponsored synthetic validity database and its resulting mechanical behavior and performance predictions. Who are the various stakeholders and what might be the positive or negative outcomes caused by this “disruptive technology” that could cause “creative destruction” in the I-O psychology profession? Among the consumers and producers of performance prediction, who might gain and who might be creatively destroyed? From these questions and the subsequent conversations, I identified the following categories and possibilities, but I am sure there are others that I have not anticipated.
APA, Harvard, Vancouver, ISO, and other styles
43

Fox, James Alan. "Untimely Prediction." Contemporary Psychology: A Journal of Reviews 32, no. 2 (February 1987): 139–40. http://dx.doi.org/10.1037/026764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Chavanne, Alice V., Charlotte Meinke, Till Langhammer, Kati Roesmann, Joscha Boehnlein, Bettina Gathmann, Martin J. Herrmann, et al. "Individual-Level Prediction of Exposure Therapy Outcome Using Structural and Functional MRI Data in Spider Phobia: A Machine-Learning Study." Depression and Anxiety 2023 (August 22, 2023): 1–11. http://dx.doi.org/10.1155/2023/8594273.

Full text
Abstract:
Machine-learning prediction studies have shown potential to inform treatment stratification, but recent efforts to predict psychotherapy outcomes with clinical routine data have only resulted in moderate prediction accuracies. Neuroimaging data showed promise to predict treatment outcome, but previous prediction attempts have been exploratory and reported small clinical sample sizes. Herein, we aimed to examine the incremental predictive value of neuroimaging data in contrast to clinical and demographic data alone (for which results were previously published), using a two-level multimodal ensemble machine-learning strategy. We used pretreatment structural and task-based fMRI data to predict virtual reality exposure therapy outcome in a bicentric sample of N = 190 patients with spider phobia. First, eight 1st-level random forest classifications were conducted using separate data modalities (clinical questionnaire scores and sociodemographic data, cortical thickness and gray matter volumes, functional activation, connectivity, connectivity-derived graph metrics, and BOLD signal variance). Then, the resulting predictions were used to train a 2nd-level classifier that produced a final prediction. No 1st-level or 2nd-level classifier performed above chance level except BOLD signal variance, which showed potential as a contributor to higher-level prediction from multiple regions across the brain (1st-level balanced accuracy = 0.63). Overall, neuroimaging data did not provide any incremental accuracy for treatment outcome prediction in patients with spider phobia with respect to clinical and sociodemographic data alone. Thus, we advise caution in the interpretation of prediction performances from small-scale, single-site patient samples. Larger multimodal datasets are needed to further investigate individual-level neuroimaging predictors of therapy response in anxiety disorders.
APA, Harvard, Vancouver, ISO, and other styles
45

Bouteska, Ahmed, and Boutheina Regaieg. "Psychology and behavioral finance." EuroMed Journal of Business 15, no. 1 (November 25, 2019): 39–64. http://dx.doi.org/10.1108/emjb-08-2018-0052.

Full text
Abstract:
Purpose The purpose of this paper is to detect quantitatively the existence of anchoring bias among financial analysts on the Tunisian stock market. Both non-parametric and parametric methods are used. Design/methodology/approach Two studies have been conducted over the period 2010–2014. A first analysis is non-parametric, based on observations of the sign taking by the surprise of result announcement according to the evolution of earning per share (EPS). A second analysis uses simple and multiple linear regression methods to quantify the anchor bias. Findings Non-parametric results show that in the majority of cases, the earning per share variations are followed by unexpected earnings surprises of the same direction, which verify the hypothesis of an anchoring bias of financial analysts to the past benefits. Parametric results confirm these first findings by testing different psychological anchors’ variables. Financial analysts are found to remain anchored to the previous benefits and carry out insufficient adjustments following the announcement of the results by the companies. There is also a tendency for an over/under-reaction in changes in forecasts. Analysts’ behavior is asymmetrical depending on the sign of the forecast changes: an over-reaction for positive prediction changes and a negative reaction for negative prediction changes. Originality/value The evidence provided in this paper largely validates the assumptions derived from the behavioral theory particularly the lessons learned by Kaestner (2005) and Amir and Ganzach (1998). The authors conclude that financial analysts on the Tunisian stock market suffer from anchoring, optimism, over and under-reaction biases when announcing the earnings.
APA, Harvard, Vancouver, ISO, and other styles
46

Meeker, Frank, Daniel Fox, and Bernard E. Whitley. "Predictors of Academic Success in the Undergraduate Psychology Major." Teaching of Psychology 21, no. 4 (December 1994): 238–41. http://dx.doi.org/10.1207/s15328023top2104_9.

Full text
Abstract:
Transcript data were compiled on 288 recent college graduates majoring in psychology to determine the variables that correlated best with grade point average in psychology (PSYGPA). The graduates were a highly diverse group in terms of high school academic backgrounds, grades in high school, and Scholastic Aptitude Test scores. Factor analysis of 26 predictor variables revealed three clusters of variables: high school grades/verbal, general studies, and mathematics. Multiple regression analyses revealed PSYGPA to be predicted by the grade in Introductory Psychology, general studies coursework, and mathematics factors, which together accounted for 67% of the variance. The prediction equation differed somewhat from that obtained for students at another university; consequently, prediction equations used to screen majors should be based only on students at a particular institution.
APA, Harvard, Vancouver, ISO, and other styles
47

Limongi, Roberto, and Angélica M. Silva. "Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time." Experimental Psychology 63, no. 6 (November 2016): 333–42. http://dx.doi.org/10.1027/1618-3169/a000339.

Full text
Abstract:
Abstract. The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production – where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
APA, Harvard, Vancouver, ISO, and other styles
48

Liu, Xuan, Kokil Jaidka, and Niyati Chayya. "The Psychology of Semantic Spaces: Experiments with Positive Emotion (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13007–8. http://dx.doi.org/10.1609/aaai.v36i11.21640.

Full text
Abstract:
Psychological concepts can help computational linguists to better model the latent semantic spaces of emotions, and understand the underlying states motivating the sharing or suppressing of emotions. This abstract applies the understanding of agency and social interaction in the happiness semantic space to its role in positive emotion. First, BERT-based fine-tuning yields an expanded seed set to understand the vocabulary of the latent space. Next, results benchmarked against many emotion datasets suggest that the approach is valid, robust, offers an improvement over direct prediction, and is useful for downstream predictive tasks related to psychological states.
APA, Harvard, Vancouver, ISO, and other styles
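The seed-set expansion step mentioned in the abstract above can be illustrated without a real BERT model: start from a few seed words and admit any vocabulary word whose embedding lies close to a seed. The vectors and the 0.95 cosine threshold below are toy values chosen for the example, not the paper's learned representations.

```python
import math

# Toy, hand-made "embeddings" standing in for contextual vectors from a
# fine-tuned BERT model (the paper's actual vectors are learned, not fixed).
embeddings = {
    "joy":     [0.90, 0.10, 0.00],
    "delight": [0.85, 0.15, 0.05],
    "agency":  [0.10, 0.90, 0.10],
    "control": [0.15, 0.85, 0.05],
    "table":   [0.00, 0.10, 0.95],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand_seeds(seeds, vocab, threshold=0.95):
    """Add every vocabulary word whose vector is close to some seed word."""
    expanded = set(seeds)
    for word, vec in vocab.items():
        if word in expanded:
            continue
        if any(cosine(vec, vocab[s]) >= threshold for s in seeds):
            expanded.add(word)
    return expanded

print(sorted(expand_seeds({"joy"}, embeddings)))
```

With these toy vectors, "delight" joins the happiness seed set while "agency", "control", and "table" stay outside it, mirroring how an expanded lexicon maps out one region of the latent emotion space.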
49

Shaughnessy, Michael F., and Robert Evans. "Word/World Knowledge: Prediction of College GPA." Psychological Reports 59, no. 3 (December 1986): 1147–50. http://dx.doi.org/10.2466/pr0.1986.59.3.1147.

Full text
Abstract:
The prediction of college grade point average has been extensively investigated. The present study examined two salient domains (word knowledge and world knowledge) with two groups of 137 student teachers (elementary and secondary) and 36 freshman students in introductory psychology classes. The results support past research indicating high school GPA to be a good predictor of college GPA. The addition of vocabulary ability improved prediction.
APA, Harvard, Vancouver, ISO, and other styles
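The incremental-validity claim in the abstract above (vocabulary improves prediction beyond high-school GPA) corresponds to a hierarchical regression: fit the baseline model, add the new predictor, and compare R². The data below are simulated; only the group size of 137 comes from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 137  # size of one student-teacher group in the study

# Hypothetical data: HS GPA predicts college GPA; vocabulary adds a little signal.
hs_gpa = rng.normal(3.0, 0.4, n)
vocab = 0.3 * hs_gpa + rng.normal(0, 1.0, n)
college_gpa = 0.7 * hs_gpa + 0.15 * vocab + rng.normal(0, 0.3, n)

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept added automatically)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(hs_gpa[:, None], college_gpa)                  # step 1: HS GPA alone
r2_full = r_squared(np.column_stack([hs_gpa, vocab]), college_gpa)  # step 2: add vocabulary
print(f"delta R^2 from adding vocabulary: {r2_full - r2_base:.3f}")
```

A positive delta R² in step 2 is what "the addition of vocabulary ability improved prediction" means operationally; for nested OLS models the delta can never be negative, so the substantive question is whether it is large enough to matter.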
50

Rich, Marshall S., and Mary P. Aiken. "An Interdisciplinary Approach to Enhancing Cyber Threat Prediction Utilizing Forensic Cyberpsychology and Digital Forensics." Forensic Sciences 4, no. 1 (March 4, 2024): 110–51. http://dx.doi.org/10.3390/forensicsci4010008.

Full text
Abstract:
The Cyber Forensics Behavioral Analysis (CFBA) model merges Cyber Behavioral Sciences and Digital Forensics to improve the prediction of cyber threats originating from Autonomous System Numbers (ASNs). Traditional cybersecurity strategies, which focus mainly on technical aspects, must be revised to address the complex cyber threat landscape. This research proposes an approach that combines technical expertise with insights into cybercriminal behavior. The study utilizes a mixed-methods approach and integrates various disciplines, including digital forensics, cybersecurity, computer science, and forensic psychology. Central to the model are four key concepts: forensic cyberpsychology, digital forensics, predictive modeling, and the Cyber Behavioral Analysis Metric (CBAM) and Score (CBS) for evaluating ASNs. The CFBA model addresses challenges in traditional cyber defense methods and emphasizes the need for an interdisciplinary, comprehensive approach. This research offers practical tools and frameworks for accurately predicting cyber threats and advocates for ongoing collaboration in the ever-evolving field of cybersecurity.
APA, Harvard, Vancouver, ISO, and other styles