Journal articles on the topic 'Structured validity'

Consult the top 50 journal articles for your research on the topic 'Structured validity.'

1

Holden, Ronald R., and G. Cynthia Fekken. "Structured psychopathological test item characteristics and validity." Psychological Assessment 2, no. 1 (1990): 35–40. http://dx.doi.org/10.1037/1040-3590.2.1.35.

2

Haas, Ann Pollinger, Herbert Hendin, and Paul Singer. "Psychodynamic and structured interviewing: Issues of validity." Comprehensive Psychiatry 28, no. 1 (January 1987): 40–53. http://dx.doi.org/10.1016/0010-440x(87)90043-5.

3

Jensen, D. R. "Structured dispersion and validity in linear inference." Linear Algebra and its Applications 249, no. 1-3 (December 1996): 189–96. http://dx.doi.org/10.1016/0024-3795(95)00354-1.

4

Holden, Ronald R., G. Cynthia Fekken, and Douglas N. Jackson. "Structured personality test item characteristics and validity." Journal of Research in Personality 19, no. 4 (December 1985): 386–94. http://dx.doi.org/10.1016/0092-6566(85)90007-8.

5

Köse Çinar, Rugül, and Søren Dinesen Østergaard. "Validation of the semi-structured Psychotic Depression Assessment Scale (PDAS) interview." Acta Neuropsychiatrica 30, no. 3 (May 24, 2017): 175–80. http://dx.doi.org/10.1017/neu.2017.15.

Abstract:
Objective: Recently, a semi-structured interview dedicated to aid rating on the Psychotic Depression Assessment Scale (PDAS) was developed. Here, we aimed to validate PDAS ratings collected via this semi-structured interview. Methods: A total of 50 patients with psychotic depression – 34 with unipolar psychotic depression and 16 with bipolar psychotic depression – were recruited for the study. The following aspects of validity were investigated: clinical validity, psychometric validity (scalability), and responsiveness. Results: The PDAS ratings were clinically valid (Spearman's correlation between PDAS total scores and Clinical Global Impressions scale – Severity of Illness ratings = 0.66, p < 0.001), scalable (Loevinger's coefficient of homogeneity at endpoint = 0.45), and responsive (no participants met the criterion for remission on the PDAS (total score < 8) at baseline, whereas at endpoint 74% (95% CI: 60–85) of the participants met this criterion). Conclusions: The semi-structured PDAS interview provides valid ratings of the severity of psychotic depression.
6

Hecker, Simon. "Generating structured LPV-models with maximized validity region." IFAC Proceedings Volumes 47, no. 3 (2014): 6901–6. http://dx.doi.org/10.3182/20140824-6-za-1003.02461.

7

Reiter, Ehud. "A Structured Review of the Validity of BLEU." Computational Linguistics 44, no. 3 (September 2018): 393–401. http://dx.doi.org/10.1162/coli_a_00322.

Abstract:
The BLEU metric has been widely used in NLP for over 15 years to evaluate NLP systems, especially in machine translation and natural language generation. I present a structured review of the evidence on whether BLEU is a valid evaluation technique—in other words, whether BLEU scores correlate with real-world utility and user-satisfaction of NLP systems; this review covers 284 correlations reported in 34 papers. Overall, the evidence supports using BLEU for diagnostic evaluation of MT systems (which is what it was originally proposed for), but does not support using BLEU outside of MT, for evaluation of individual texts, or for scientific hypothesis testing.
8

Walters, Laurie C., Mark R. Miller, and Malcolm James Ree. "Structured Interviews for Pilot Selection: No Incremental Validity." International Journal of Aviation Psychology 3, no. 1 (January 1993): 25–38. http://dx.doi.org/10.1207/s15327108ijap0301_2.

9

Joffe, Michael. "Validity of Exposure Data Derived from a Structured Questionnaire." American Journal of Epidemiology 135, no. 5 (March 1, 1992): 564–70. http://dx.doi.org/10.1093/oxfordjournals.aje.a116323.

10

Runnacles, Jane, Libby Thomas, James Korndorffer, Sonal Arora, and Nick Sevdalis. "Validation evidence of the paediatric Objective Structured Assessment of Debriefing (OSAD) tool." BMJ Simulation and Technology Enhanced Learning 2, no. 3 (May 24, 2016): 61–67. http://dx.doi.org/10.1136/bmjstel-2015-000017.

Abstract:
Introduction: Debriefing is essential to maximise the simulation-based learning experience, but until recently there was little guidance on effective paediatric debriefing. A debriefing assessment tool, the Objective Structured Assessment of Debriefing (OSAD), has been developed to measure the quality of feedback in paediatric simulation debriefings. This study gathers and evaluates the validity evidence of OSAD with reference to the contemporary hypothesis-driven approach to validity. Methods: Expert input on the paediatric OSAD tool from 10 paediatric simulation facilitators provided validity evidence based on content and feasibility (phase 1). Evidence for internal structure validity was sought by examining reliability of scores from video ratings of 35 post-simulation debriefings, and evidence for validity based on relationships to other variables was sought by comparing results with trainee ratings of the same debriefings (phase 2). Results: Simulation experts' scores were significantly positive regarding the content of OSAD and its instructions. OSAD's feasibility was demonstrated by positive comments regarding clarity and application. Inter-rater reliability was demonstrated with intraclass correlations above 0.45 for 6 of the 7 dimensions of OSAD. The internal consistency of OSAD (Cronbach's α) was 0.78. The Pearson correlation of trainee total score with OSAD total score was 0.82 (p < 0.001), demonstrating validity evidence based on relationships to other variables. Conclusion: The paediatric OSAD tool provides a structured approach to debriefing which is evidence-based, has multiple sources of validity evidence, and is relevant to end users. OSAD may be used to improve the quality of debriefing after paediatric simulations.
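As an aside on the reliability statistics quoted above, the short Python sketch below shows how an internal-consistency coefficient such as Cronbach's α is computed from an item-by-observation rating matrix; the ratings are invented for illustration and are not the OSAD data.

import numpy as np

# Hypothetical ratings: rows = rated debriefings, columns = the 7 OSAD dimensions.
ratings = np.array([
    [4, 5, 3, 4, 4, 5, 4],
    [3, 3, 2, 3, 4, 3, 3],
    [5, 4, 4, 5, 5, 4, 5],
    [2, 3, 2, 2, 3, 2, 3],
])

k = ratings.shape[1]                          # number of items (dimensions)
item_vars = ratings.var(axis=0, ddof=1)       # per-item sample variances
total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")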
11

Vijayalakshmi, K., S. Revathi, and Latha Venkatesan. "Validity of objective structured clinical examination (OSCE) in psychiatric nursing." Journal of Nursing Trendz 7, no. 1 (2016): 16. http://dx.doi.org/10.5958/2249-3190.2016.00004.3.

12

Blake, KD, N. Vincent, S. Wakefield, K. Mann, and J. Murphy. "A Structured Communication Adolescent Guide: Assessment of Reliability and Validity." Paediatrics & Child Health 8, suppl_B (May 1, 2003): 19B. http://dx.doi.org/10.1093/pch/8.suppl_b.19bb.

13

Taylor, Paul J., Karl Pajo, Gordon W. Cheung, and Paul Stringfield. "Dimensionality and Validity of a Structured Telephone Reference Check Procedure." Personnel Psychology 57, no. 3 (September 2004): 745–72. http://dx.doi.org/10.1111/j.1744-6570.2004.00006.x.

14

van der Gulden, J. W. J., P. F. J. Vogelzang, and J. J. Kolk. "RE: “VALIDITY OF EXPOSURE DATA DERIVED FROM A STRUCTURED QUESTIONNAIRE”." American Journal of Epidemiology 138, no. 5 (September 1, 1993): 350–51. http://dx.doi.org/10.1093/oxfordjournals.aje.a116865.

15

Dumont, Jeanne. "Validity of multidimensional scaling in the context of structured conceptualization." Evaluation and Program Planning 12, no. 1 (January 1989): 81–86. http://dx.doi.org/10.1016/0149-7189(89)90026-8.

16

Navas-Ferrer, Carlos, Fernando Urcola-Pardo, Ana Belén Subirón-Valera, and Concepción Germán-Bes. "Validity and Reliability of Objective Structured Clinical Evaluation in Nursing." Clinical Simulation in Nursing 13, no. 11 (November 2017): 531–43. http://dx.doi.org/10.1016/j.ecns.2017.07.003.

17

Motowidlo, Stephan J., and Jennifer R. Burnett. "Aural and Visual Sources of Validity in Structured Employment Interviews." Organizational Behavior and Human Decision Processes 61, no. 3 (March 1995): 239–49. http://dx.doi.org/10.1006/obhd.1995.1019.

18

Golden, Sara E., Elizabeth R. Hooker, Sarah Shull, Matthew Howard, Kristina Crothers, Reid F. Thompson, and Christopher G. Slatore. "Validity of Veterans Health Administration structured data to determine accurate smoking status." Health Informatics Journal 26, no. 3 (November 7, 2019): 1507–15. http://dx.doi.org/10.1177/1460458219882259.

Abstract:
We compared smoking status from Veterans Health Administration (VHA) structured data with text in electronic health record (EHR) to assess validity. We manually abstracted the smoking status of 5,610 VHA patients. Only those with a smoking status found in both EHR text data and VHA structured data were included (n=5,289). We calculated agreement and kappa statistics to compare structured data vs. manually abstracted EHR text smoking status. We found a kappa statistic of 0.70 and total agreement of 81.1% between EHR text data and structured data for Current, Former, and Never smoking categories. Comparing EHR text data and structured data between Never and Ever smokers revealed a kappa statistic of 0.62 and total agreement of 89.1%. For comparison between Current and Never/Former smokers, the kappa statistic was 0.80 and total agreement was 90.2%. We found substantial and significant agreement between smoking status in EHR text data and structured data that may aid in future research.
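The agreement statistics reported in this abstract (percent agreement and Cohen's kappa between structured-data and text-derived smoking status) follow the standard definitions; the minimal Python sketch below reproduces those definitions on made-up labels, not the VHA data.

from collections import Counter

# Hypothetical smoking-status labels for the same patients from two sources.
structured = ["Current", "Former", "Never", "Former", "Current", "Never"]
ehr_text   = ["Current", "Former", "Never", "Current", "Current", "Never"]

n = len(structured)
# Observed agreement: share of patients on whom the two sources agree.
p_o = sum(a == b for a, b in zip(structured, ehr_text)) / n
# Chance agreement from the marginal frequencies of each source.
freq_s, freq_t = Counter(structured), Counter(ehr_text)
p_e = sum((freq_s[c] / n) * (freq_t[c] / n) for c in set(structured) | set(ehr_text))
kappa = (p_o - p_e) / (1 - p_e)
print(f"Percent agreement = {p_o:.1%}; Cohen's kappa = {kappa:.2f}")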
19

Wang, Ji Yong, Tong Hong Jin, and Lu Bo Cheng. "A New-Structured Tower Crane." Key Engineering Materials 522 (August 2012): 288–92. http://dx.doi.org/10.4028/www.scientific.net/kem.522.288.

Abstract:
The advantages and disadvantages of the upper structures of present tower cranes are analyzed in this paper. Based on the theory of the prestressed jib and a combination design, a structure is presented that can serve multiple uses and greatly improve hoisting capacity. Mechanical analyses and comparisons of the old and new designs are carried out with the finite element analysis software ALGOR. Finally, an example is given to verify the validity and feasibility of the design.
20

Rifani, Rohmah, Suryanto Suryanto, and Dewi Retno Suminar. "Analisis Faktor Eksploratori dan Konfirmatori untuk Validasi Skala Lingkungan Makan Terstruktur pada Ibu Bekerja." Psympathic : Jurnal Ilmiah Psikologi 7, no. 2 (January 3, 2021): 327–40. http://dx.doi.org/10.15575/psy.v7i2.9890.

Abstract:
This study aims to develop and validate the construct of a structured meals environment scale. The method used was a cross-sectional survey. The participants were working mothers with early-aged children. The structured meals environment scale used a Likert format and consists of three dimensions: structured meals setting, structured meals timing, and family meals setting. The results of EFA (N = 302) showed that the scale explained 63.98% of the variance in the structured meals environment construct. The results of CFA (N = 202) showed that the goodness of fit of the model was acceptable. Factor loadings above .5 (.503–.819) indicate that the items are valid. Discriminant validity (AVE) ranges from .482 to .577, with construct reliability above .7 (.760–.802). The structured meals environment scale has met the psychometric property tests, so the scale is appropriate for use.
21

Novara, Caterina, Paolo Cavedini, Stella Dorz, Susanna Pardini, and Claudio Sica. "Structured Interview for Hoarding Disorder (SIHD)." European Journal of Psychological Assessment 35, no. 4 (July 2019): 512–20. http://dx.doi.org/10.1027/1015-5759/a000433.

Abstract:
The Structured Interview for Hoarding Disorder (SIHD) is a semi-structured interview designed to assist clinicians in diagnosing hoarding disorder (HD). This study aimed to validate the Italian version of the SIHD. For this purpose, its inter-rater reliability was analyzed, as well as its ability to differentiate HD from other, often comorbid, disorders. The sample was composed of 74 inpatients who had been diagnosed within their clinical environment: 9 with HD, 11 with obsessive-compulsive disorder (OCD) and HD, 22 with OCD, 19 with major depressive disorder (MDD), and 13 with schizophrenia spectrum disorders (SSD). The results indicated "substantial" or "perfect" inter-rater reliability for all the core HD criteria, HD diagnosis, and specifiers. The SIHD differentiated between subjects suffering and not suffering from HD. Finally, the results indicated "good" convergent validity and high sensitivity and specificity for HD diagnosis. Altogether, the SIHD represents a useful instrument for evaluating the presence of HD and is a helpful tool for the clinician during the diagnostic process.
22

Lowry, Phillip E. "The Structured Interview: An Alternative to the Assessment Center?" Public Personnel Management 23, no. 2 (June 1994): 201–15. http://dx.doi.org/10.1177/009102609402300203.

Abstract:
This article discusses how to improve the validity and reliability of structured interviews. A framework for the structured interview is suggested. The framework is based on the foundations laid by various researchers, as well as the guidelines for assessment centers. The proposed framework was used to structure an interview used in a selection test. The results suggest that this kind of structured interview may be a valid and less costly alternative to the assessment center. Additional research to refine and build on the framework is suggested.
23

Vincent, Gina M., John Chapman, and Nathan E. Cook. "Risk-Needs Assessment in Juvenile Justice." Criminal Justice and Behavior 38, no. 1 (November 30, 2010): 42–62. http://dx.doi.org/10.1177/0093854810386000.

Abstract:
The authors conducted a prospective study of the predictive validity of the Structured Assessment of Violence Risk in Youth (SAVRY) using a 5-year follow-up period and a sample of 480 male adolescents assessed by juvenile detention personnel. Analyses were conducted to examine differential validity by race-ethnicity, the relative contribution of structured professional judgments of risk level, and the incremental validity of dynamic to static risk factors. Overall, the SAVRY total scores were significantly predictive of any type of reoffending with some variability across racial-ethnic groups. Youths rated as moderate to high risk by evaluators using structured professional judgment had greater odds of rearrest, but these risk ratings did not have incremental validity over numeric scores. Static factors were most strongly predictive of nonviolent rearrest, but dynamic factors (social-contextual) were the most predictive of violent rearrest. Implications for use of risk-needs assessment tools in juvenile justice programs and areas in need of further investigation are discussed.
24

Sari, Mira Permata, and Minda Azhar. "Pengembangan Modul Perhitungan Rumus Kimia dan Persamaan Reaksi Berbasis Inkuiri Terstruktur dengan Tiga Level Representasi untuk Kelas X SMA/MA." Edukimia 1, no. 2 (August 10, 2019): 46–52. http://dx.doi.org/10.24036/ekj.v1.i2.a20.

Abstract:
The validity and practicality of a module on chemical formulas and reaction equations with three levels of representation, based on structured inquiry, were determined. The research type was research and development (R&D). The development followed the 4-D model, which consists of four stages: define, design, develop, and disseminate. This research was limited to the development stage, namely the validity and practicality tests. The research instruments used were observation, validity, and practicality questionnaires. The module was validated by 5 validators. The practicality test was carried out with 2 chemistry teachers and 24 grade XI IPA 2 students of SMAN Basa Ampek Balai. Data from the validity and practicality tests were analyzed using Cohen's kappa formula. The average kappa moment for the validity test was 0.93, in the very high validity category. The average kappa moments for teacher and student practicality were 0.88 and 0.91 respectively, both in the very high practicality category. Thus, the module on chemical formulas and reaction equations with three levels of representation, based on structured inquiry, was valid and practical.
25

Barman, Nabanita, and Mridula Saikia Khanikor. "Content validity of a structured tool: knowledge questionnaire on behavioural problems." Open Journal of Psychiatry & Allied Sciences 10, no. 2 (2019): 146. http://dx.doi.org/10.5958/2394-2061.2019.00031.4.

26

Klehe, Ute-Christine, Cornelius J. König, Gerald M. Richter, Martin Kleinmann, and Klaus G. Melchers. "Transparency in Structured Interviews: Consequences for Construct and Criterion-Related Validity." Human Performance 21, no. 2 (March 27, 2008): 107–37. http://dx.doi.org/10.1080/08959280801917636.

27

Campion, Michael A., James E. Campion, and J. Peter Hudson. "Structured interviewing: A note on incremental validity and alternative question types." Journal of Applied Psychology 79, no. 6 (December 1994): 998–1002. http://dx.doi.org/10.1037/0021-9010.79.6.998.

28

Matsell, D. G., N. M. Wolfish, and E. Hsu. "Reliability and validity of the objective structured clinical examination in paediatrics." Medical Education 25, no. 4 (July 1991): 293–99. http://dx.doi.org/10.1111/j.1365-2923.1991.tb00069.x.

29

Nam, Yoon Ho, Yang Bai, Joey A. Lee, Youngwon Kim, Jung-Min Lee, Nathan F. Meier, and Gregory J. Welk. "Validity Of Consumer-based Physical Activity Monitors In Semi-structured Activities." Medicine & Science in Sports & Exercise 47 (May 2015): 260–61. http://dx.doi.org/10.1249/01.mss.0000477134.71573.a5.

30

Maurer, Todd J., Jerry M. Solamon, and Michael Lippstreu. "How does coaching interviewees affect the validity of a structured interview?" Journal of Organizational Behavior 29, no. 3 (December 4, 2007): 355–71. http://dx.doi.org/10.1002/job.512.

31

Fekken, G. Cynthia, and Ronald R. Holden. "The construct validity of differential response latencies in structured personality tests." Canadian Journal of Behavioural Science / Revue canadienne des sciences du comportement 26, no. 1 (January 1994): 104–20. http://dx.doi.org/10.1037/0008-400x.26.1.104.

32

Zheng, Jin, Bo Li, Ming Xin, and Gang Luo. "Structured fragment-based object tracking using discrimination, uniqueness, and validity selection." Multimedia Systems 25, no. 5 (June 29, 2017): 487–511. http://dx.doi.org/10.1007/s00530-017-0556-7.

33

Blake, Kim, Nicolle Vincent, Susan Wakefield, Joseph Murphy, Karen Mann, and Matthew Kutcher. "A structured communication adolescent guide (SCAG): assessment of reliability and validity." Medical Education 39, no. 5 (April 18, 2005): 482–91. http://dx.doi.org/10.1111/j.1365-2929.2005.02123.x.

34

Winckel, Christopher P., Richard K. Reznick, Robert Cohen, and Bryce Taylor. "Reliability and construct validity of a Structured Technical Skills Assessment Form." American Journal of Surgery 167, no. 4 (April 1994): 423–27. http://dx.doi.org/10.1016/0002-9610(94)90128-7.

35

dos Santos, Diamantino José Figueiredo, Isabel Maria Marques Alberto, and Catarina Maria Valente Antunes Marques. "The Structured Interview of Family Assessment Risk: Convergent Validity, Inter-rater Reliability and Structural Relations." Child and Adolescent Social Work Journal 33, no. 6 (April 1, 2016): 487–97. http://dx.doi.org/10.1007/s10560-016-0444-6.

36

Smith-Merry, Jennifer. "Evidence-based policy, knowledge from experience and validity." Evidence & Policy: A Journal of Research, Debate and Practice 16, no. 2 (May 1, 2020): 305–16. http://dx.doi.org/10.1332/174426419x15700265131524.

Abstract:
Evidence-based policy has at its foundation a set of ideas about what makes evidence valid so that it can be trusted in the creation of policy. This validity is frequently conceptualised in terms of rigour deriving from scientific studies which adhere to highly structured processes of data collection, analysis and inscription. In comparison, the knowledge gained from lived experience, while viewed as important for ensuring that policy meets the needs of the people it is trying to serve, is characterised by its tacit nature, lack of structure, and the difficulty of transferring it from one actor to another. The validity of experiential knowledge in policy arises from the connection of policy knowledge to the lived experience of individuals. This paper considers validity in this context by exploring four modes in which experiential knowledge is currently utilised within policy. The tensions surrounding validity in the policy context find resolution through the development of a situated notion of validity decoupled from structural rigour and recoupled to context.
37

Collis, Kevin F., Thomas A. Romberg, and Murad E. Jurdak. "A Technique for Assessing Mathematical Problem-Solving Ability." Journal for Research in Mathematics Education 17, no. 3 (May 1986): 206–21. http://dx.doi.org/10.5951/jresematheduc.17.3.0206.

Abstract:
This report sets out the procedures followed in developing, administering, and scoring a set of mathematical problem-solving superitems and examining their construct validity through a recently developed technique of evaluation associated with a taxonomy of the structure of learned outcomes. Each superitem includes a mathematical situation and a structured set of questions about that situation. To judge whether the response patterns of students to the superitems were interpretable, two questions were raised about the response patterns. For each question, the data strongly support the validity of the underlying theoretical constructs.
38

Hatala, Rose, David A. Cook, Ryan Brydges, and Richard Hawkins. "Constructing a validity argument for the Objective Structured Assessment of Technical Skills (OSATS): a systematic review of validity evidence." Advances in Health Sciences Education 20, no. 5 (February 22, 2015): 1149–75. http://dx.doi.org/10.1007/s10459-015-9593-1.

39

Guimarães, Mark Drew Crosland, Helian Nunes de Oliveira, Lorenza Nogueira Campos, Carolina Ali Santos, Carlos Eduardo Resende Gomes, Suely Broxado de Oliveira, Maria Imaculada de Fátima Freitas, Francisco de Assis Acúrcio, and Carla Jorge Machado. "Reliability and validity of a questionnaire on vulnerability to sexually transmitted infections among adults with chronic mental illness: PESSOAS Project." Revista Brasileira de Psiquiatria 30, no. 1 (January 31, 2008): 55–59. http://dx.doi.org/10.1590/s1516-44462008005000005.

Abstract:
OBJECTIVE: To describe the reliability and validity of a semi-structured questionnaire designed to assess risk behavior for sexually transmitted diseases among adults with chronic mental illness. METHOD: A cross-sectional pilot study was conducted in one psychiatric hospital and one mental health outpatient clinic. Clinical, behavioral and demographic data were collected from semi-structured interviews and medical charts. One hundred and twenty patients were randomly selected from pre-defined lists in both centers, and 89 (74%) were interviewed, indicating a 26% nonparticipation rate. The protocol, participation rates, consent form and feasibility issues were assessed. The semi-structured interview was evaluated with regard to reliability (intra- and inter-rater) and construct validity by randomly repeating the interviews in a 1:1 ratio within up to a one-week interval. Reliability was estimated by percent agreement and the kappa statistic (95% confidence interval). Construct validity was assessed with the Grade of Membership model. RESULTS: Kappa statistics ranged from 0.40 to 1.00 for most variables. The Grade of Membership analysis generated three profiles. Profile one was represented mostly by women with no condom use in stable relationships; profile two revealed mostly men in stable relationships but with multiple risk behaviors; profile three indicated a higher proportion of licit or illicit substance use. CONCLUSIONS: Reliability and construct validity assessment using Grade of Membership analysis indicated that the semi-structured interview was suitable for capturing risk behavior among patients with chronic mental illness.
40

Armstrong, J. Scott, Rui Du, Kesten C. Green, and Andreas Graefe. "Predictive validity of evidence-based persuasion principles." European Journal of Marketing 50, no. 1/2 (February 8, 2016): 276–93. http://dx.doi.org/10.1108/ejm-10-2015-0728.

Abstract:
Purpose – This paper aims to test whether a structured application of persuasion principles might help improve advertising decisions. Evidence-based principles are currently used to improve decisions in other complex situations, such as those faced in engineering and medicine. Design/methodology/approach – Scores were calculated from the ratings of 17 self-trained novices who rated 96 matched pairs of print advertisements for adherence to evidence-based persuasion principles. Predictions from traditional methods – 10,809 unaided judgments from novices, 2,764 judgments from people with some expertise in advertising, and 288 copy-testing predictions – provided benchmarks. Findings – A higher adherence-to-principles score correctly predicted the more effective advertisement for 75 per cent of the pairs. Copy testing was correct for 59 per cent, and expert judgment was correct for 55 per cent. Guessing would provide 50 per cent accurate predictions. Combining judgmental predictions led to substantial improvements in accuracy. Research limitations/implications – Advertisements for high-involvement utilitarian products were tested on the assumption that persuasion principles would be more effective for such products. The measure of effectiveness that was available – day-after-recall – is a proxy for persuasion or behavioral measures. Practical implications – Pretesting advertisements by assessing adherence to evidence-based persuasion principles in a structured way helps in deciding which advertisements would be best to run. That procedure also identifies how to make an advertisement more effective. Originality/value – This is the first study in marketing, and in advertising specifically, to test the predictive validity of evidence-based principles. In addition, the study provides the first test of the predictive validity of the index method for a marketing problem.
41

Ryoo, Ji Hoon, Seohee Park, Seongeun Kim, and Hyun Suk Ryoo. "Efficiency of Cluster Validity Indexes in Fuzzy Clusterwise Generalized Structured Component Analysis." Symmetry 12, no. 9 (September 14, 2020): 1514. http://dx.doi.org/10.3390/sym12091514.

Abstract:
Fuzzy clustering has been broadly applied to classify data into K clusters by assigning each data point membership probabilities with respect to K centroids. Such a function has been applied to characterizing the clusters associated with a statistical model such as structural equation modeling. The characteristics identified by the statistical model further define the clusters as heterogeneous groups selected from a population. Recently, such a statistical model has been formulated as fuzzy clusterwise generalized structured component analysis (fuzzy clusterwise GSCA). As in fuzzy clustering, the clusters are enumerated to infer the population and its parameters within fuzzy clusterwise GSCA. However, identifying the number of clusters in fuzzy clustering is a difficult task because classification indexes are data-dependent, which is known as the cluster validity problem. We examined the cluster validity problem within the fuzzy clusterwise GSCA framework and proposed a new criterion for selecting the optimal number of clusters using both the fit indexes of GSCA and the fuzzy validity indexes of fuzzy clustering. The criterion, named the FIT-FHV method and combining a fit index, FIT, from GSCA with a cluster validation measure, FHV, from fuzzy clustering, performed better than any other indices used in fuzzy clusterwise GSCA.
42

Sidi, Avner, Nikolaus Gravenstein, and Samsun Lampotang. "Construct Validity and Generalizability of Simulation-Based Objective Structured Clinical Examination Scenarios." Journal of Graduate Medical Education 6, no. 3 (September 1, 2014): 489–94. http://dx.doi.org/10.4300/jgme-d-13-00356.1.

Abstract:
Background: It is not known if construct-related validity (progression of scores with different levels of training) and generalizability of Objective Structured Clinical Examination (OSCE) scenarios previously used with non-US graduating anesthesiology residents translate to a US training program. Objective: We assessed for progression of scores with training for a validated high-stakes simulation-based anesthesiology examination. Methods: Fifty US anesthesiology residents in postgraduate years (PGYs) 2 to 4 were evaluated in operating room, trauma, and resuscitation scenarios developed for and used in a high-stakes Israeli Anesthesiology Board examination, requiring a score of 70% on the checklist for passing (including all critical items). Results: The OSCE error rate was lower for PGY-4 than PGY-2 residents in each field, and for most scenarios within each field. The critical item error rate was significantly lower for PGY-4 than PGY-3 residents in operating room scenarios, and for PGY-4 than PGY-2 residents in resuscitation scenarios. The final pass rate was significantly higher for PGY-3 and PGY-4 than PGY-2 residents in operating room scenarios, and also was significantly higher for PGY-4 than PGY-2 residents overall. PGY-4 residents had a better error rate, total scenarios score, general evaluation score, critical items error rate, and final pass rate than PGY-2 residents. Conclusions: The comparable error rates, performance grades, and pass rates for US PGY-4 and non-US (Israeli) graduating (PGY-4 equivalent) residents, and the progression of scores among US residents with training level, demonstrate the construct-related validity and generalizability of these high-stakes OSCE scenarios.
43

Kim Kyo Heon, Sun Jung Kwon, 김세진, and 임숙희. "Validity and Reliability Testing of Korean Structured Clinical Interview for Gambling Addiction." Korean Journal of Health Psychology 20, no. 2 (June 2015): 485–94. http://dx.doi.org/10.17315/kjhp.2015.20.2.007.

44

Xu, Changjiang, Chongli Liang, and Zhengguang Liu. "The Contributing Components Analysis on the Predictive Validity of the Structured Interview." Advances in Psychological Science 21, no. 5 (December 10, 2013): 940–50. http://dx.doi.org/10.3724/sp.j.1042.2013.00940.

45

Siddiqui, Nazema Y., Michael L. Galloway, Elizabeth J. Geller, Isabel C. Green, Hye-Chun Hur, Kyle Langston, Michael C. Pitter, Megan E. Tarr, and Martin A. Martino. "Validity and Reliability of the Robotic Objective Structured Assessment of Technical Skills." Obstetrics & Gynecology 123, no. 6 (June 2014): 1193–99. http://dx.doi.org/10.1097/aog.0000000000000288.

46

Sabour, Siamak. "Validity and Reliability of the Robotic Objective Structured Assessment of Technical Skills." Obstetrics & Gynecology 124, no. 4 (October 2014): 839. http://dx.doi.org/10.1097/aog.0000000000000499.

47

Harijan, P. D., E. M. Boyle, J. J. Kurinczuk, T. Jaspan, D. O. C. Anumba, and E. S. Draper. "Structured review of validity of clinical definitions of hypoxic-ischaemic encephalopathy (HIE)." Archives of Disease in Childhood - Fetal and Neonatal Edition 96, Supplement 1 (June 1, 2011): Fa26. http://dx.doi.org/10.1136/archdischild.2011.300164.33.

48

Cohen, Robert, Arthur I. Rothman, Peeter Poldre, and John Ross. "Validity and Generalizability of Global Ratings in an Objective Structured Clinical Examination." Journal of the Association of American Medical Colleges 66, no. 9 (September 1991): 545–48. http://dx.doi.org/10.1097/00001888-199109000-00023.

49

Crowley, Thomas J., Susan K. Mikulich, Kristen M. Ehlers, Elizabeth A. Whitmore, and Marilyn J. MacDonald. "Validity of Structured Clinical Evaluations in Adolescents With Conduct and Substance Problems." Journal of the American Academy of Child & Adolescent Psychiatry 40, no. 3 (March 2001): 265–73. http://dx.doi.org/10.1097/00004583-200103000-00005.

50

Guilbault, Ryan, Elizabeth Hathaway, and Michael Schmidt. "Validity of Individual versus Group Actigraph Prediction Equations During Structured Walking Sessions." Medicine & Science in Sports & Exercise 46 (May 2014): 649. http://dx.doi.org/10.1249/01.mss.0000495417.66965.81.
