Academic literature on the topic 'Answer diagnosticity'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Answer diagnosticity.'

Journal articles on the topic "Answer diagnosticity"

1. Rusconi, Patrice, and Craig R. M. McKenzie. "Insensitivity and Oversensitivity to Answer Diagnosticity in Hypothesis Testing." Quarterly Journal of Experimental Psychology 66, no. 12 (December 2013): 2443–64. http://dx.doi.org/10.1080/17470218.2013.793732.
2. Weidemann, Christoph T., David E. Huber, and Richard M. Shiffrin. "Prime Diagnosticity in Short-Term Repetition Priming: Is Primed Evidence Discounted, Even When It Reliably Indicates the Correct Answer?" Journal of Experimental Psychology: Learning, Memory, and Cognition 34, no. 2 (2008): 257–81. http://dx.doi.org/10.1037/0278-7393.34.2.257.
3. Sacchi, Simona, Patrice Rusconi, Mattia Bonomi, and Paolo Cherubini. "Effects of Asymmetric Questions on Impression Formation." Social Psychology 45, no. 1 (June 1, 2014): 41–53. http://dx.doi.org/10.1027/1864-9335/a000158.

Abstract: When examining social targets, people may ask asymmetric questions, that is, questions for which “yes” and “no” answers are neither equally diagnostic nor equally frequent. The consequences of this information-gathering strategy on impression formation deserve empirical investigation. The present work explored the role played by the trade-off between the diagnosticity and frequency of answers that follow asymmetric questions. In Study 1, participants received answers to symmetric/asymmetric questions on an anonymous social target. In Study 2, participants read answers to a specific symmetric/asymmetric question provided by different group members. Overall, the results of both studies indicate that asymmetric questions had less impact on impressions than did symmetric questions, suggesting that individuals are more sensitive to data frequency than diagnosticity when forming impressions.
4. Bickart, Barbara A. "Carryover and Backfire Effects in Marketing Research." Journal of Marketing Research 30, no. 1 (February 1993): 52–62. http://dx.doi.org/10.1177/002224379303000105.

Abstract: The author examines how rating a brand on specific attributes early in a survey affects responses to a later overall brand evaluation. Specifically, she investigates the conditions under which respondents’ answers to a brand evaluation are likely to be consistent with a previous attribute rating (carryover effect) versus inconsistent (backfire effect). In a laboratory experiment, the occurrence of carryover and backfire was related to the respondent’s level of subjective product knowledge and the diagnosticity of the attribute rating for the brand evaluation. Implications of these findings for questionnaire design and measurement of consumers’ attitudes are discussed.
5. Orthey, Robin, Ewout Meijer, Emmeke Kooistra, and Nick Broers. "How to Detect Concealed Crime Knowledge in Situations With Little Information Using the Forced Choice Test." Collabra: Psychology 8, no. 1 (2022). http://dx.doi.org/10.1525/collabra.37483.

Abstract: The Forced Choice Test (FCT) can be used to detect concealed crime knowledge, but constructing it requires more evidence than is typically available from a crime. We propose a method of repeating individual pieces of evidence to achieve the necessary test length, thereby widening the test’s practical applicability. Under our method, FCT trials are constructed so that on each trial examinees face a novel and unique decision between two answer alternatives, even when a specific piece of information is presented again. We argue that if the decision in each trial is unique, the properties and diagnosticity of a traditional FCT can be maintained. In Experiment 1, we provide a proof of concept by comparing our novel method with a traditional FCT and demonstrate that an FCT with repeated presentation of the same evidence has diagnostic value (AUC = .69), albeit less than a traditional FCT (AUC = .86). In Experiment 2, we put our novel FCT to the test in a situation with insufficient information for a traditional FCT, alongside the Concealed Information Test (CIT), which also detects concealed information but relies on psychophysiological indices. Both the FCT (AUC = .81) and the CIT (AUC = .83) were diagnostic, and combining them increased detection accuracy even further (AUC = .91). If replicated, our novel FCT increases the practical applicability of the FCT, both in general and in conjunction with the CIT.

Dissertations / Theses on the topic "Answer diagnosticity"

1. Rusconi, Patrice Piercarlo. "Search and Evaluation Strategies in Belief Revision: Psychological Mechanisms and Normative Deviations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/18977.

Abstract: The procedures people adopt to seek out and use information have long been a focus of empirical research in psychology, especially since the late 1950s. This dissertation addresses some key questions left unanswered by a series of seminal studies on information-gathering and information-use strategies dating back to the 1980s and 1990s. We first examined the question-asking preferences that people exhibit in an abstract hypothesis-testing task. Specifically, we pitted against one another the tendency to ask positive questions, wherein the confirming answer is expected given the truth of the working hypothesis, and the tendency to pose asymmetric queries, wherein the anticipated outcomes of a dichotomous question (i.e., “yes” and “no” answers) convey different amounts of information. We then investigated whether people prefer asymmetrically confirming queries (i.e., questions for which the confirming answer carries more weight than the disconfirming answer) or asymmetrically disconfirming queries (i.e., questions for which the disconfirming answer conveys more information than the confirming answer). We found a robust tendency toward positive testing, in keeping with the literature, but neither a preference for asymmetric questions nor a predominant use of symmetric testing. Furthermore, we showed, correlationally, that people are sensitive to the diagnosticity of questions, as some previous studies have pointed out. Finally, an interaction emerged between the positivity of questions and the confirming valence of asymmetric queries. A close analysis of the latter finding allowed us to rule out the possibility that people try to maximize the probability of occurrence of the tested feature, suggesting instead a less sophisticated strategy based on the consideration of an easily accessible quantity, namely the probability of a feature under the working hypothesis.

After deepening the study of strategies adopted in the testing phase of hypothesis development, we turned to the evaluation stage. Specifically, we addressed a finding that emerged in previous studies: people’s relative insensitivity to the different diagnosticity conveyed by different answers (i.e., “yes” and “no”) to the same question in an abstract task. We showed that people may exhibit not only insensitivity but also oversensitivity to differentially informative answers, indicating a more general failure in information use than previously thought. We also addressed the question of why people are either insensitive or oversensitive to answer diagnosticity. We provided evidence that an explanation based on the feature-difference heuristic, proposed previously in the literature, wherein people’s estimates are influenced by the difference between the likelihoods, seems unable to explain people’s behavior. By contrast, we found that people prefer to rely on an averaging strategy, in particular the average of the prior probability and the likelihood. Finally, we investigated an aspect that emerged in, but was not directly examined by, previous studies on hypothesis evaluation: the feature-positive effect, wherein people tend to overestimate the presence of a feature as opposed to its absence. The results of three experiments with abstract tasks strongly confirmed that the hypothesized effect influences both the frequency and the accuracy of participants’ responses. We also found that participants exhibited some sensitivity to the formal amount of information, although only with respect to present clues.

Overall, the series of experiments presented in this dissertation helps clarify how people search for information, by showing that they may rely on both formally relevant and formally irrelevant properties of the information at hand, and by calling into question the alleged tendency toward hypothesis confirmation, defined as a maximization of the probability of a confirming datum. Furthermore, these experiments help explain how people treat information, by specifying how people misweigh differentially diagnostic answers and by showing that a psychologically compelling tendency, namely the feature-positive effect, may at least in part account for people’s information use.