Journal articles on the topic 'Assessment and psychometrics'


Consult the top 50 journal articles for your research on the topic 'Assessment and psychometrics.'


1

Ruud, T., R. E. Drake, and G. R. Bond. "Measuring Fidelity to Evidence-Based Practices: Psychometrics." Administration and Policy in Mental Health and Mental Health Services Research 47, no. 6 (July 31, 2020): 871–73. http://dx.doi.org/10.1007/s10488-020-01074-7.

Abstract:
This special section presents the psychometric properties of fidelity scales used in a national mental health services project in Norway to improve the quality of care of people with psychoses. Across Norway, 39 clinical units in six health trusts participated. The project provided education, implementation support and fidelity assessments. The papers in the section address the psychometrics of fidelity measurement for the specific evidence-based practices: illness management and recovery, family psychoeducation, physical healthcare and antipsychotic medication management. Another paper analyzes the psychometrics of a scale measuring individualization and quality improvement that may be used in conjunction with fidelity scales for specific evidence-based practices. The first paper in the section presents the development and field of fidelity scales, and the two final papers with comments add some additional perspectives and discuss fidelity scales in a wider context. The psychometrics of the five scales were good to excellent. Fidelity assessment is a necessary and effective strategy for quality improvement.
2

Coombes, Lee, Martin Roberts, Daniel Zahra, and Steven Burr. "Twelve tips for assessment psychometrics." Medical Teacher 38, no. 3 (October 16, 2015): 250–54. http://dx.doi.org/10.3109/0142159x.2015.1060306.

3

Gresham, Frank M. "Alternative psychometrics for authentic assessment?" School Psychology Quarterly 6, no. 4 (1991): 305–9. http://dx.doi.org/10.1037/h0088824.

4

Hunt, Earl, and Anne Thissen-Roe. "Interleaving Instruction and Assessment." Journal of Cognitive Education and Psychology 4, no. 1 (January 2004): 47–64. http://dx.doi.org/10.1891/194589504787382901.

Abstract:
Assessment is conducted for three reasons: personnel selection in academic, industrial, and government situations; certification of individuals and accountability of training institutions; and provision of formative instruction. The first goal has dominated much of psychometrics over the past century, the second goal is relatively new, and the third is not yet in widespread use. We review the mathematics of personnel selection and argue that there is not a great deal to be gained by improvement of present testing methods for personnel selection unless the institution doing the selection is either extremely large or highly selective. We concentrate on the third use, providing formative instruction. Formative instruction ought to diagnose an individual’s capabilities and knowledge, rather than simply grading them on a satisfactory-unsatisfactory scale. However, the idea of grading is, in some fashion, built into traditional psychometric forms of testing. We sketch the beginnings of a psychometric approach intended to provide analyses for formative testing. Examples are given from the field of science instruction.
5

Davies, Mark G., Michael J. Rowan, and John Feely. "Psychometrics in assessing hepatic encephalopathy – a brief review." Irish Journal of Psychological Medicine 8, no. 2 (September 1991): 144–46. http://dx.doi.org/10.1017/s0790966700015135.

Abstract:
Hepatic encephalopathy is a neuropsychiatric disorder usually associated with severe hepatic insufficiency. It may, however, be divided into clinical and subclinical groupings. Psychometric testing, serial EEGs, EEG spectral analysis and event-related potentials are all presently being used to quantify and differentiate between the various stages of hepatic encephalopathy. We review the use of psychometrics in hepatic encephalopathy and discuss evidence that these findings are comparable with the more objective data of electrophysiological studies. An adequate, simple and inexpensive assessment may be carried out using a battery of psychometric tests which include number connection tests and five-pointed star construction.
6

Loe, Bao Sheng (Aiden), Mark Abrahams, and Philippa Riley. "Opening the Black Box: A practitioner’s guide to Artificial Intelligence and Machine Learning in assessment (Part 1)." Assessment and Development Matters 10, no. 2 (2018): 29–33. http://dx.doi.org/10.53841/bpsadm.2018.10.2.29.

Abstract:
Key digested message: The use of machine learning (ML) and Artificial Intelligence (AI) is beginning to impact the assessment of human characteristics across a number of domains. Assessment practitioners are increasingly presented with assessments which purport to be 'ML or AI scored', as well as new approaches to assessment design which previously used psychometrics. Computer-driven scoring can appear opaque and confusing to the assessment user, who is unlikely to understand why or how a score is derived. In the first of two articles, we explain what is and is not machine learning, and how it differs from psychometric approaches and Artificial Intelligence. The second part will build on this to guide assessment practitioners on the questions to ask and issues to consider when deciding whether to use ML-based assessment, as well as signposting some jargon used by data scientists.
7

Dai, Shenghai. "Handling Missing Responses in Psychometrics: Methods and Software." Psych 3, no. 4 (November 19, 2021): 673–93. http://dx.doi.org/10.3390/psych3040043.

Abstract:
The presence of missing responses in assessment settings is inevitable and may yield biased parameter estimates in psychometric modeling if ignored or handled improperly. Many methods have been proposed to handle missing responses in assessment data that are often dichotomous or polytomous. Their application remains limited, however, partly because (1) the literature offers insufficient support for an optimal method; (2) many practitioners and researchers are not familiar with these methods; and (3) these methods are usually not employed by psychometric software, so missing responses need to be handled separately. This article introduces and reviews the commonly used missing response handling methods in psychometrics, along with the literature that examines and compares the performance of these methods. Further, the use of the TestDataImputation package in R is introduced and illustrated with an example data set and a simulation study. Corresponding R code is provided.
8

Shirinkina, E. V. "Psychometric Assessment Standards Based on the «Performance Based Assessment» Methodology." Quality and life 29, no. 1 (March 15, 2021): 15–19. http://dx.doi.org/10.34214/2312-5209-2021-29-1-15-19.

Abstract:
The relevance of the study stems from the demand for a comprehensive assessment of knowledge and skills based on the methodology of competency-based assignments, namely performance-based assessment. In the article, the author presents the psychometric standards and recommendations existing in foreign practice that are necessary for a fair, reliable and objective assessment of the learning outcomes of company employees. The purpose of the work is to study existing standards and recommendations for assessing learning outcomes in the KPI system. The empirical basis of the study was data from the International Test Commission, the British Psychological Society (BPS), and other bodies. The author analyzes the empirical base of the research and systematizes the existing standards and recommendations for assessing testing. The practical significance of the results is that, given the challenges and demands of the modern world, they will allow most organizations to develop their measurement processes systematically and respond quickly to the new challenges facing psychometrics.
9

Cordier, Reinie, Renée Speyer, Matthew Martinez, and Lauren Parsons. "Reliability and Validity of Non-Instrumental Clinical Assessments for Adults with Oropharyngeal Dysphagia: A Systematic Review." Journal of Clinical Medicine 12, no. 2 (January 16, 2023): 721. http://dx.doi.org/10.3390/jcm12020721.

Abstract:
This systematic review on non-instrumental clinical assessment in adult oropharyngeal dysphagia (OD) provides an overview of published measures with reported reliability and validity. In alignment with PRISMA, four databases (CINAHL, Embase, PsycINFO, and PubMed) were searched, resulting in a total of 16 measures and 32 psychometric studies included. The included measures assessed any aspect of swallowing, consisted of at least one specific subscale relating to swallowing, were developed by clinical observation, targeted adults, and were developed in English. The included psychometric studies focused on adults, reported on measures for OD-related conditions, described non-instrumental clinical assessments, reported on validity or reliability, and were published in English. Methodological quality was assessed using the standard quality assessment QualSyst. Most measures targeted only restricted subdomains within the conceptual framework of non-instrumental clinical assessments. Across the 16 measures, hypothesis testing and reliability were the most reported psychometrics, whilst structural validity and content validity were the least reported. Overall, data on the reliability and validity of the included measures proved incomplete and frequently did not meet current psychometric standards. Future research should focus on the development of comprehensive non-instrumental clinical assessments for adults with OD using contemporary psychometric research methods.
10

Schroeder, Amber N., Kaleena R. Odd, and Julia H. Whitaker. "Agree to disagree: Examining the psychometrics of cybervetting." Journal of Managerial Psychology 35, no. 5 (June 25, 2020): 435–50. http://dx.doi.org/10.1108/jmp-09-2018-0420.

Abstract:
Purpose: Due to the paucity of research on web-based job applicant screening (i.e. cybervetting), the purpose of the current study was to examine the psychometric properties of cybervetting, including an examination of the impact of adding structure to the rating process. Design/methodology/approach: Using a mixed-factorial design, 122 supervisors conducted cybervetting evaluations of applicant personality, cognitive ability, written communication skills, professionalism, and overall suitability. Cross-method agreement (i.e. the degree of similarity between cybervetting ratings and other assessment methods), as well as interrater reliability and agreement were examined, and unstructured versus structured cybervetting rating formats were compared. Findings: Cybervetting assessments demonstrated high interrater reliability and interrater agreement, but only limited evidence of cross-method agreement was provided. In addition, adding structure to the cybervetting process did not enhance the psychometric properties of this assessment technique. Practical implications: This study highlighted that whereas cybervetting raters demonstrated a high degree of consensus in cybervetting-based attributions, there may be concerns regarding assessment accuracy, as cybervetting-based ratings generally differed from applicant test scores and self-assessment ratings. Thus, employers should use caution when utilizing this pre-employment screening technique. Originality/value: Whereas previous research has suggested that cybervetting ratings demonstrate convergence with other traditional assessments (albeit with relatively small effects), these correlational links do not provide information regarding cross-method agreement or method interchangeability. Thus, this study bridges a crucial gap in the literature by examining cross-method agreement for a variety of job-relevant constructs, as well as empirically testing the impact of adding structure to the cybervetting rating process.
11

Castellanos, Irina, William G. Kronenberger, and David B. Pisoni. "Questionnaire-based assessment of executive functioning: Psychometrics." Applied Neuropsychology: Child 7, no. 2 (November 14, 2016): 93–109. http://dx.doi.org/10.1080/21622965.2016.1248557.

12

Chen, Yunxiao, and Hua-Hua Chang. "Psychometrics Help Learning: From Assessment to Learning." Applied Psychological Measurement 42, no. 1 (September 27, 2017): 3–4. http://dx.doi.org/10.1177/0146621617730393.

13

Ferguson, Brian. "Modern psychometrics. The science of psychological assessment." Journal of Psychosomatic Research 34, no. 5 (January 1990): 598. http://dx.doi.org/10.1016/0022-3999(90)90043-4.

14

Remawi, Bader Nael, Amy Gadoud, Iain Malcolm James Murphy, and Nancy Preston. "Palliative care needs-assessment and measurement tools used in patients with heart failure: a systematic mixed-studies review with narrative synthesis." Heart Failure Reviews 26, no. 1 (August 3, 2020): 137–55. http://dx.doi.org/10.1007/s10741-020-10011-7.

Abstract:
Patients with heart failure have comparable illness burden and palliative care needs to those with cancer. However, few of them are offered timely palliative care. One main barrier is the difficulty in identifying those who require palliative care. Several palliative care needs-assessment/measurement tools were used to help identify these patients and assess/measure their needs, but it is not known which one is the most appropriate for this population. This review aimed to identify the most appropriate palliative care needs-assessment/measurement tools for patients with heart failure. Cochrane Library, MEDLINE Complete, AMED, PsycINFO, CINAHL Complete, EMBASE, EThOS, websites of the identified tools, and references and citations of the included studies were searched from inception to 25 June 2020. Studies were included if they evaluated palliative care needs-assessment/measurement tools for heart failure populations in terms of development, psychometrics, or palliative care patient/needs identification. Twenty-seven papers were included regarding nineteen studies, most of which were quantitative and observational. Six tools were identified and compared according to their content and context of use, development, psychometrics, and clinical applications in identifying patients with palliative care needs. Despite limited evidence, the Needs Assessment Tool: Progressive Disease – Heart Failure (NAT:PD-HF) is the most appropriate palliative care needs-assessment tool for use in heart failure populations. It covers most of the patient needs and has the best psychometric properties and evidence of identification ability and appropriateness. Psychometric testing of the tools in patients with heart failure and evaluating the tools to identify those with palliative care needs require more investigation.
15

Terwilliger, James. "Semantics, Psychometrics, and Assessment Reform: A Close Look at "Authentic" Assessments." Educational Researcher 26, no. 8 (November 1997): 24. http://dx.doi.org/10.2307/1176303.

16

Trafimow, David. "Holding teachers accountable: An old-fashioned, dry, and boring perspective." Advances in Educational Research and Evaluation 2, no. 1 (2021): 138–45. http://dx.doi.org/10.25082/aere.2021.01.005.

Abstract:
Few would disagree with the desirability to hold teachers accountable, but student evaluations of teaching and department head evaluations of teaching fail to do the job validly. Although this may be due, in part, to difficulties conceptualizing teaching effectiveness and student learning, it also is due to insufficient attention to measurement reliability. Measurement reliability sets an upper bound on measurement validity, thereby guaranteeing that unreliable measures of teaching effectiveness are invalid too. In turn, for measures of teaching effectiveness to be reliable, the items in the measure must correlate well with each other, there must be many items, or both. Unfortunately, at most universities, those who are tasked with teaching assessment do not understand the basics of psychometrics, thereby rendering their assessments of teachers invalid. To ameliorate unsatisfactory assessment procedures, the present article addresses the relationship between reliability and validity, some requirements of reliable and valid measures, and the psychometric implications for current teaching assessment practices.
17

Thomson, Douglas. "Then and now: Statutory assessment and educational psychology." Assessment and Development Matters 11, no. 3 (2019): 38–40. http://dx.doi.org/10.53841/bpsadm.2019.11.3.38.

18

Choate, Peter W., and Amber McKenzie. "Psychometrics in Parenting Capacity Assessments: A problem for Aboriginal parents." First Peoples Child & Family Review 10, no. 2 (May 17, 2021): 31–43. http://dx.doi.org/10.7202/1077260ar.

Abstract:
Parenting Capacity Assessments (PCAs) are used by child protection workers to assist in determining the ability of a parent to care for their children. They may be used at various stages of the case management process, but these assessments serve as powerful tools for decision making by these workers. They can also be introduced in court as part of expert testimony. Most PCAs utilize psychometric assessment measures to elicit data in respect to personality, parenting knowledge, as well as mental health and addiction issues. The authors argue that the norming of these measures has insufficient inclusion of Aboriginal peoples to be used for assessments with this population. They further argue that different approaches need to be developed as current approaches, including assessment measures, are based upon the constructs of the dominant culture, which is individualistic as opposed to the Aboriginal collectivistic approaches to parenting.
19

Lindstrom, Debra, and Carolyn Sithong. "Psychometrics for the Home for Life Home Assessment." American Journal of Occupational Therapy 71, no. 4_Supplement_1 (July 1, 2017): 7111500056p1. http://dx.doi.org/10.5014/ajot.2017.71s1-po6128.

20

Zarem, Cori, Hiroyuki Kidokoro, Jeffrey Neil, Michael Wallendorf, Terrie Inder, and Roberta Pineda. "Psychometrics of the Neonatal Oral Motor Assessment Scale." Developmental Medicine & Child Neurology 55, no. 12 (July 20, 2013): 1115–20. http://dx.doi.org/10.1111/dmcn.12202.

21

Hollar, David W. "Psychometrics and Assessment of an Empathy Distance Gradient." Journal of Psychoeducational Assessment 35, no. 4 (January 12, 2016): 377–90. http://dx.doi.org/10.1177/0734282915623882.

Abstract:
Research has indicated declining empathy within specific professions and social structures. Few psychometric instruments have addressed empathy within the context of psychological distance/relatedness to other individuals and even to other species, relationships that can be important contributors to psychological well-being and health. We developed and tested the Empathy Gradient Questionnaire (EGQ), which contains five subscales (i.e., Family, Friend, Peer, Distant Other, and Species Empathy) representing increasing psycho-spatial distances. We used LISREL to factor validate the five-factor structure of the EGQ, and we evaluated levels of empathy among a sample of n = 161 individuals, aged 18 to 60+. The EGQ was shown to have high subscale (0.80-0.89) and overall internal consistencies (0.94). The factor pattern and structural equation models showed five latent factors explaining 69.8% of variance for all variables (goodness-of-fit index [GFI] = 0.98, adjusted goodness-of-fit index [AGFI] = 0.98, standardized root mean square residual [SRMR] = 0.06, comparative fit index [CFI] = 0.92). There were no significant effects for age, gender, or race on overall empathy or for each of the five subscales. A decreasing gradient was noted for Friend to Species Empathies.
22

Woods, Stephen A. "Assessment in the digital age: Some challenges for Test Developers and Users." Assessment and Development Matters 10, no. 2 (2018): 22–25. http://dx.doi.org/10.53841/bpsadm.2018.10.2.22.

Abstract:
Key digested message: How should psychometrics specialists (test developers and users) respond to the challenges of the digital age? A broad challenge that faces the discipline of psychometrics is to avoid being rooted in old ways of thinking about testing, whilst simultaneously ensuring that key principles of best practice in the science of assessment are maintained and applied to the new methodologies of testing. This article explores key issues in this focal challenge.
23

Norman, Julia. "Leadership potential: Measurement beyond psychometrics." Assessment and Development Matters 12, no. 2 (2020): 43–48. http://dx.doi.org/10.53841/bpsadm.2020.12.2.43.

Abstract:
Key digested message: Identifying and developing individuals with the potential to progress is crucial to an organisation’s growth, success and survival. This article includes a review of current thinking on leadership potential and then goes on to suggest robust ways to measure this, alongside assessment of performance, using a combination of bespoke behavioural simulations, psychometrics and a complex processing test.
24

Jewell, Vanessa, and Noralyn Pickens. "Psychometric Evaluation of the Occupation-Centered Intervention Assessment." OTJR: Occupation, Participation and Health 37, no. 2 (January 19, 2017): 82–88. http://dx.doi.org/10.1177/1539449216688619.

Abstract:
A challenge of intervention research is the lack of a means to identify and measure clinical practice from an occupation-centered approach. The objective of this study is to establish basic psychometric properties of the Occupation-Centered Intervention Assessment (OCIA). Content validity and utility were established through an expert panel and two focus groups. Interrater reliability (IRR) was determined through standardized video analysis and Krippendorff’s alpha. Results from the expert panel and focus groups indicated overall agreement that the OCIA was able to capture the full range of elements of rehabilitation-focused interventions for older adults (occupational, contextual, and personal relevance) and fit well with the occupational therapy intervention process model. IRR showed an adequate level of agreement (α = .76). The OCIA has demonstrated initial basic psychometrics for observation of rehabilitation-focused interventions with older adults.
25

Rosenkoetter, Ulrike, and Robyn L. Tate. "Assessing Features of Psychometric Assessment Instruments: A Comparison of the COSMIN Checklist with Other Critical Appraisal Tools." Brain Impairment 19, no. 1 (December 7, 2017): 103–18. http://dx.doi.org/10.1017/brimp.2017.29.

Abstract:
The past 20 years have seen the development of instruments designed to specify standards and evaluate the adequacy of published studies with respect to the quality of study design, the quality of findings, as well as the quality of their reporting. In the field of psychometrics, the first minimum set of standards for the review of psychometric instruments was published in 1996 by the Scientific Advisory Committee of the Medical Outcomes Trust. Since then, a number of tools have been developed with similar aims. The present paper reviews basic psychometric properties (reliability, validity and responsiveness), compares six tools developed for the critical appraisal of psychometric studies and provides a worked example of using the COSMIN checklist, Terwee-m statistical quality criteria, and the levels of evidence synthesis using the method of Schellingerhout and colleagues (2012). This paper will aid users and reviewers of questionnaires in the quality appraisal and selection of appropriate instruments by presenting available assessment tools, their characteristics and utility.
26

Struckmeyer, Linda R., Noralyn Pickens, Diane Brown, and Katy Mitchell. "Home Environmental Assessment Protocol–Revised Initial Psychometrics: A Pilot Study." OTJR: Occupation, Participation and Health 40, no. 3 (June 22, 2020): 175–82. http://dx.doi.org/10.1177/1539449220912186.

Abstract:
Efficient home assessments are needed for persons with dementia and their caregivers. Pilot studies were conducted to establish a content validity index (CVI), measure concurrent criterion validity, and examine test–retest reliability of the Home Environment Assessment Protocol–Revised (HEAP-R). Six experts reviewed the tool and scored content validity items. Twenty-one caregiver/person-with-dementia dyads engaged with the HEAP and HEAP-R to examine concurrent criterion validity. Seventeen occupational therapists viewed 10 videos of home environments to examine reliability. The CVI score was .980. Concurrent criterion validity for domains: hazards (r = .792), adaptations (r = .742), clutter (r = .843), and comfort (r = .958). Test–retest reliability: hazards (r = .820), adaptations (r = .887), visual cues (r = .487), and clutter (r = .696). Pilot data suggest the HEAP-R has preliminary content and concurrent criterion validity and test–retest reliability. Robust psychometric analysis is needed prior to use in clinical practice.
27

Duckitt, John. "Book Review: Modern Psychometrics: The Science of Psychological Assessment." South African Journal of Psychology 23, no. 4 (December 1993): 212–13. http://dx.doi.org/10.1177/008124639302300409.

28

Uijtdehaage, Sebastian, and Lambert W. T. Schuwirth. "Assuring the quality of programmatic assessment: Moving beyond psychometrics." Perspectives on Medical Education 7, no. 6 (November 8, 2018): 350–51. http://dx.doi.org/10.1007/s40037-018-0485-y.

29

Jenkinson, Josephine C. "Diagnosis of Developmental Disability: Psychometrics, Behaviour, and Etiology." Behaviour Change 14, no. 2 (June 1997): 60–72. http://dx.doi.org/10.1017/s0813483900003545.

Abstract:
Diagnosis of developmental disability lacks precision, partly because of differences in definitions of the concept, but largely because of problems specific to the use of psychometric measures with children who have a developmental disability. These problems arise from inadequate evidence of reliability for psychometric measures at extremes of the normal distribution, from lack of comparability between different tests and between different editions of tests, and from practical considerations in the assessment of people with various disabilities. Adaptive behaviour assessment has been introduced to supplement intelligence testing, but lack of a clear conceptualisation of this concept and doubts about the appropriateness of United States norms for Australian children add to the difficulties of interpreting results of standardised scales. Systematic assessment of behavioural problems needs to be incorporated into diagnostic procedures. This paper argues that improvements in the accuracy of diagnosis are unlikely to come from further technical advances in psychometric assessment, and suggests that diagnosis in the future should take into account new technologies which link etiology to specific behavioural patterns to supplement existing procedures.
30

Nichols, Paul D. "A Framework for Developing Cognitively Diagnostic Assessments." Review of Educational Research 64, no. 4 (December 1994): 575–603. http://dx.doi.org/10.3102/00346543064004575.

Abstract:
Over the past decade or so, a growing number of writers have argued that cognitive science and psychometrics could be combined in the service of instruction. Researchers have progressed beyond statements of intent to the hands-on business of researching and developing diagnostic assessments combining cognitive science and psychometrics, what I call cognitively diagnostic assessment (CDA). In this article, I attempt to organize the many loosely connected efforts to develop cognitively diagnostic assessments. I consider the development of assessments to guide specific instructional decisions, sometimes referred to as diagnostic assessments. Many of my arguments apply to program evaluation as well—assessments that reveal the mechanisms test takers use in responding to items or tasks provide important information on whether instruction is achieving its goals. My goal in this article is to characterize CDA in terms of the intended use of assessment and the methods of developing and evaluating assessments. Towards this goal, I (a) outline the societal trends that motivate the development of CDA, (b) introduce a framework within which the psychological and statistical aspects of CDA can be coordinated, and (c) summarize efforts to develop CDA in a five-step methodology that can guide future development efforts. Finally, I address some of the issues developers of CDA must resolve if CDA is to succeed.
31

Jones, Chelsea, Jessica Harasym, Antonio Miguel-Cruz, Shannon Chisholm, Lorraine Smith-MacDonald, and Suzette Brémault-Phillips. "Neurocognitive Assessment Tools for Military Personnel With Mild Traumatic Brain Injury: Scoping Literature Review." JMIR Mental Health 8, no. 2 (February 22, 2021): e26360. http://dx.doi.org/10.2196/26360.

Abstract:
Background: Mild traumatic brain injury (mTBI) occurs at a higher frequency among military personnel than among civilians. A common symptom of mTBIs is cognitive dysfunction. Health care professionals use neuropsychological assessments as part of a multidisciplinary and best practice approach for mTBI management. Such assessments support clinical diagnosis, symptom management, rehabilitation, and return-to-duty planning. Military health care organizations currently use computerized neurocognitive assessment tools (NCATs). NCATs and more traditional neuropsychological assessments present unique challenges in both clinical and military settings. Many research gaps remain regarding psychometric properties, usability, acceptance, feasibility, effectiveness, sensitivity, and utility of both types of assessments in military environments. Objective: The aims of this study were to explore evidence regarding the use of NCATs among military personnel who have sustained mTBIs; evaluate the psychometric properties of the most commonly tested NCATs for this population; and synthesize the data to explore the range and extent of NCATs among this population, clinical recommendations for use, and knowledge gaps requiring future research. Methods: Studies were identified using MEDLINE, Embase, American Psychological Association PsycINFO, CINAHL Plus with Full Text, Psych Article, Scopus, and Military & Government Collection. Data were analyzed using descriptive analysis, thematic analysis, and the Randolph Criteria. Narrative synthesis and the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for Scoping Reviews) guided the reporting of findings. The psychometric properties of NCATs were evaluated with specific criteria and summarized. Results: Of the 104 papers, 33 met the inclusion criteria for this scoping review. Thematic analysis and NCAT psychometrics were reported and summarized. Conclusions: When considering the psychometric properties of the most commonly used NCATs in military populations, these assessments have yet to demonstrate adequate validity, reliability, sensitivity, and clinical utility among military personnel with mTBIs. Additional research is needed to further validate NCATs within military populations, especially for those living outside of the United States and individuals experiencing other conditions known to adversely affect cognitive processing. Knowledge gaps remain, warranting further study of psychometric properties and the utility of baseline and normative testing for NCATs.
APA, Harvard, Vancouver, ISO, and other styles
32

Piasta, Shayne B., Kristin S. Farley, Beth M. Phillips, Jason L. Anthony, and Ryan P. Bowles. "Assessment of Young Children’s Letter-Sound Knowledge: Initial Validity Evidence for Letter-Sound Short Forms." Assessment for Effective Intervention 43, no. 4 (October 31, 2017): 249–55. http://dx.doi.org/10.1177/1534508417737514.

Full text
Abstract:
The Letter-Sound Short Forms (LSSFs) were designed to meet criteria for effective progress monitoring tools by exhibiting strong psychometrics, offering multiple equivalent forms, and being brief and easy to administer and score. The present study expands available psychometric information for the LSSFs by providing an initial examination of their validity in assessing young children’s emerging letter-sound knowledge. In a sample of 998 preschool-aged children, the LSSFs were sensitive to change over time, showed strong concurrent validity with established letter-sound knowledge and related emergent literacy measures, and demonstrated predictive validity with emergent literacy measures. The LSSFs also predicted kindergarten readiness scores available for a subsample of children. These findings have implications for using the LSSFs to monitor children’s alphabet knowledge acquisition and to support differentiated early alphabet instruction.
APA, Harvard, Vancouver, ISO, and other styles
33

Jelovsek, J. Eric. "Value in workplace-based assessment rater training: psychometrics or edumetrics?" Medical Education 49, no. 7 (June 15, 2015): 650–52. http://dx.doi.org/10.1111/medu.12763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Richards, James M., Denise C. Gottfredson, and Gary D. Gottfredson. "Units of Analysis and the Psychometrics of Environmental Assessment Scales." Environment and Behavior 23, no. 4 (July 1991): 423–37. http://dx.doi.org/10.1177/0013916591234002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lan, Yu-Ling, and Yuhsuan Chang. "Development and initial psychometrics of the Psychological Assessment Competency Scale." Training and Education in Professional Psychology 10, no. 2 (May 2016): 93–101. http://dx.doi.org/10.1037/tep0000108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hopkins, Kathleen Garrubba, Theresa A. Koleck, Dianxu Ren, and Alice M. Blazeck. "The Development and Psychometrics of SEAT (Self-Efficacy Assessment Tool)." International Journal of Nursing Education 8, no. 2 (2016): 180. http://dx.doi.org/10.5958/0974-9357.2016.00072.6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

McCredie, Hugh. "Heroes, landmarks and blind alleys in personality assessment." Assessment and Development Matters 6, no. 2 (2014): 16–18. http://dx.doi.org/10.53841/bpsadm.2014.6.2.16.

Full text
Abstract:
This is a new series of four articles we have asked Hugh McCredie, a regular contributor to ADM and Vice-Chair of The Psychometrics Forum, to write to mark the centenary of the First World War. The articles will span the ages of psychological testing, and we hope you will enjoy reading about the testing milestones of the last 100 years.
APA, Harvard, Vancouver, ISO, and other styles
38

Hammond, Jared B., and Adrienne Garro. "A-238 Test Selection Among Psychologists in the USA." Archives of Clinical Neuropsychology 37, no. 6 (August 17, 2022): 1394. http://dx.doi.org/10.1093/arclin/acac060.238.

Full text
Abstract:
Objective: Explore popular assessments among psychologists from various specialties and variables impacting test selection.

Method: A survey designed by Rabin and colleagues (2016) was adapted with permission and administered via Qualtrics, taking 10-15 minutes to complete. Sampled individuals were licensed doctoral-level psychologists living in the USA across multiple disciplines. School psychologists with a master's degree were also included. Non-parametric correlations were conducted to understand variables impacting test selection practices, particularly psychometrics (e.g., ecological validity, normative groups) and frequency of assessment.

Results: Demographics of the 77 participants meeting study criteria are reviewed. The top three most popular assessments for all participants were: WAIS-IV (11.1%), WISC-V (8.7%), and PAI (6.8%). Popular assessments for the fields of neuropsychology, forensic, clinical and school psychology are reported. No significant relationships were found between psychology specialty and major psychometric variables tested. Psychology specialty was significantly related to time spent on assessment (𝝆 = -0.46, p < .001) and the number of tests in each battery (𝝆 = -0.36, p = .006). However, there was no relationship between psychology specialty and number of assessments conducted each week.

Conclusions: Wechsler intelligence tests continued as the most popular assessments among all sampled participants. The PAI increased in popularity whereas projectives decreased in popularity. Results for individual psychology specialties remained relatively consistent with previous literature, except for forensic participants. All specialties reported similar likelihood of using assessments reflecting real-world outcomes and demographically representative normative data. Specialized fields of psychology may devote additional time to assessment and use longer batteries despite similar caseloads. Possible explanations are provided.
APA, Harvard, Vancouver, ISO, and other styles
39

Terwilliger, James. "Research news and Comment: Semantics, Psychometrics, and Assessment Reform: A Close Look at “Authentic” Assessments." Educational Researcher 26, no. 8 (November 1997): 24–27. http://dx.doi.org/10.3102/0013189x026008024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Mittinty, Manasi Murthy, Pedro H. R. Santiago, and Lisa Jamieson. "Assessment of Pain-Related Fear in Indigenous Australian Populations Using the Fear of Pain-9 Questionnaire (FPQ-9)." International Journal of Environmental Research and Public Health 19, no. 10 (May 20, 2022): 6256. http://dx.doi.org/10.3390/ijerph19106256.

Full text
Abstract:
In this study, we examined the psychometric properties of the Fear of Pain Questionnaire (FPQ-9) in Indigenous Australian people. The FPQ-9, a shorter version of the original Fear of Pain Questionnaire-III, was developed to support the demand for more concise scales with faster administration time in the clinical and research setting. The psychometric properties of the FPQ-9 in Indigenous Australian participants (n = 735) were evaluated with network psychometrics, such as dimensionality, model fit, internal consistency and reliability, measurement invariance, and criterion validity. Our findings indicated that the original FPQ-9 three-factor structure had a poor fit and did not adequately capture pain-related fear in Indigenous Australian people. On removal of two cross-loading items, an adapted version, the Indigenous Australian Fear of Pain Questionnaire-7 (IA-FPQ-7), displayed good fit, construct validity, and reliability for assessing fear of pain in a sample of Indigenous Australian people. The IA-FPQ-7 scale could be used to better understand the role and impact of fear of pain in Indigenous Australian people living with chronic pain. This could allow for more tailored and timely interventions for managing pain in Indigenous Australian communities.
APA, Harvard, Vancouver, ISO, and other styles
41

Franić, Sanja, Conor V. Dolan, Denny Borsboom, James J. Hudziak, Catherina E. M. van Beijsterveldt, and Dorret I. Boomsma. "Can genetics help psychometrics? Improving dimensionality assessment through genetic factor modeling." Psychological Methods 18, no. 3 (September 2013): 406–33. http://dx.doi.org/10.1037/a0032755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Sliwinski, Martin J., Jacqueline A. Mogle, Jinshil Hyun, Elizabeth Munoz, Joshua M. Smyth, and Richard B. Lipton. "Reliability and Validity of Ambulatory Cognitive Assessments." Assessment 25, no. 1 (April 15, 2016): 14–30. http://dx.doi.org/10.1177/1073191116643164.

Full text
Abstract:
Mobile technologies are increasingly used to measure cognitive function outside of traditional clinic and laboratory settings. Although ambulatory assessments of cognitive function conducted in people’s natural environments offer potential advantages over traditional assessment approaches, the psychometrics of cognitive assessment procedures have been understudied. We evaluated the reliability and construct validity of ambulatory assessments of working memory and perceptual speed administered via smartphones as part of an ecological momentary assessment protocol in a diverse adult sample (N = 219). Results indicated excellent between-person reliability (≥0.97) for average scores, and evidence of reliable within-person variability across measurement occasions (0.41-0.53). The ambulatory tasks also exhibited construct validity, as evidenced by their loadings on working memory and perceptual speed factors defined by the in-lab assessments. Our findings demonstrate that averaging across brief cognitive assessments made in uncontrolled naturalistic settings provides measurements that are comparable in reliability to assessments made in controlled laboratory environments.
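The between-person reliabilities reported in this abstract come from decomposing score variance across persons and measurement occasions. As a hedged sketch (not the authors' code; the `icc_average` helper and the simulated data are hypothetical illustrations), the reliability of person-level averages across k occasions can be estimated from a one-way ANOVA decomposition, often written ICC(1,k):

```python
import numpy as np

def icc_average(scores):
    """Reliability of person-level averages across k occasions, ICC(1,k).

    scores: 2-D array, rows = persons, columns = measurement occasions.
    """
    n, k = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    # Between-person and within-person mean squares (one-way ANOVA).
    bms = k * ((person_means - grand) ** 2).sum() / (n - 1)
    wms = ((scores - person_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (bms - wms) / bms

# Illustrative simulation: a stable trait measured on 14 noisy occasions,
# loosely echoing the idea that averaging many brief assessments is reliable.
rng = np.random.default_rng(0)
true_scores = rng.normal(size=(200, 1))
data = true_scores + 0.5 * rng.normal(size=(200, 14))
print(round(icc_average(data), 2))  # close to 1: averages are highly reliable
```

With many occasions, even fairly noisy individual assessments yield highly reliable person-level averages, which is the logic behind the ≥0.97 figure quoted above.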
APA, Harvard, Vancouver, ISO, and other styles
43

Ünlü, Ali, and Waqas Ahmed Malik. "Interactive Glyph Graphics of Multivariate Data in Psychometrics." Methodology 7, no. 4 (August 1, 2011): 134–44. http://dx.doi.org/10.1027/1614-2241/a000031.

Full text
Abstract:
Gauguin (Grouping And Using Glyphs Uncovering Individual Nuances) is statistical data visualization software for the interactive graphical exploration of multivariate data using glyph representations. Glyphs are defined as geometric shapes scaled by the values of multivariate data. Each glyph represents one high-dimensional data point or the prototype (average) of a group or cluster of data points. This paper reviews the capabilities, functionality, and interactive properties of this software package. Key features of Gauguin are illustrated with data from the Programme for International Student Assessment.
APA, Harvard, Vancouver, ISO, and other styles
44

Mallidou, Anastasia A., Elizabeth Borycki, Noreen Frisch, and Lynne Young. "Research Competencies Assessment Instrument for Nurses: Preliminary Psychometric Properties." Journal of Nursing Measurement 26, no. 3 (December 2018): E159–E182. http://dx.doi.org/10.1891/1061-3749.26.3.e159.

Full text
Abstract:
Background and Purpose: Clinician research competencies influence research use for evidence-based practice (EBP). We aimed to develop, refine, and psychometrically assess the Research Competencies Assessment Instrument for Nurses (RCAIN) to measure registered nurse research competencies (i.e., knowledge, skills, attitudes) focused on EBP-related domains: research process, knowledge synthesis, and knowledge translation activities.

Methods: The preliminary psychometrics (face, content, construct/criterion validity) were evaluated based on 63 completed surveys.

Results: The Cronbach’s α coefficients were .871, .813, and .946 for each domain, respectively; interitem correlations ranged from .472 to .833 (explained variance: 68.5%). Three components/factors were revealed: comprehension of, and skills required in, the research process, and application of knowledge and skills. The revised RCAIN consists of 19 five-point Likert-type questions.

Conclusions: The RCAIN assesses modifiable characteristics and explains variance in practice, health system, and patient outcomes. Further assessments are underway.
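The α coefficients quoted in this abstract are Cronbach's alpha, the standard internal-consistency index α = k/(k−1)·(1 − Σ item variances / total-score variance). A minimal, generic sketch of the computation (not taken from the paper; the `demo` data are invented for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Three perfectly parallel items: alpha equals 1.0 by construction.
demo = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]], dtype=float)
print(round(cronbach_alpha(demo), 6))
```

Values such as .871 or .946 indicate that the items within a domain covary strongly relative to their individual noise, which is why they are reported per domain rather than for the whole instrument.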
APA, Harvard, Vancouver, ISO, and other styles
45

Al Maqbali, Mohammed, Jackie Gracey, Jane Rankin, Lynn Dunwoody, Eileen Hacker, and Ciara Hughes. "Cross-Cultural Adaptation and Psychometric Properties of Quality of Life Scales for Arabic-Speaking Adults: A systematic review." Sultan Qaboos University Medical Journal [SQUMJ] 20, no. 2 (June 28, 2020): 125. http://dx.doi.org/10.18295/squmj.2020.20.02.002.

Full text
Abstract:
This review aimed to explore the psychometric properties of quality of life (QOL) scales to identify appropriate tools for research and clinical practice in Arabic-speaking adults. A systematic search of the Cumulative Index to Nursing and Allied Health Literature® (EBSCO Information Services, Ipswich, Massachusetts, USA), MEDLINE® (National Library of Medicine, Bethesda, Maryland, USA), EMBASE (Elsevier, Amsterdam, Netherlands) and PsycINFO (American Psychological Association, Washington, District of Columbia, USA) databases was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Quality assessment criteria were then utilised to evaluate the psychometric properties of identified QOL scales. A total of 27 studies relating to seven QOL scales were found. While these studies provided sufficient information regarding the scales’ validity and reliability, not all reported translation and cross-cultural adaptation processes. Researchers and clinicians should consider whether the psychometric properties, subscales and characteristics of their chosen QOL scale are suitable for use in their population of interest.

Keywords: Quality of Life; Cross-Cultural Comparison; Translations; Psychometrics; Validity and Reliability; Surveys and Questionnaires; Systematic Review.
APA, Harvard, Vancouver, ISO, and other styles
46

Peltz, Jack, Ronald Rogge, Joseph Buckhalt, and Lori Elmore-Staton. "614 The Development and Psychometrics of an Assessment of Children’s Sleep Environments." Sleep 44, Supplement_2 (May 1, 2021): A241—A242. http://dx.doi.org/10.1093/sleep/zsab072.612.

Full text
Abstract:
Introduction Approximately half of school-aged children (ages 5–18) get either insufficient sleep during school nights or barely meet the required amount of sleep expected for healthy functioning (National Sleep Foundation, 2014). This percentage increases as children develop into adolescents (National Sleep Foundation, 2006). Accordingly, sleep problems and insufficient sleep are so pervasive that they could be considered an epidemic due to their adverse impact on children’s mental and physical health (Owens, 2015; Shochat et al., 2014). Fundamental to children’s sleep health is their sleep environment (Billings et al., 2019; Spilsbury et al., 2005). Despite its importance, however, there remains a noticeable absence of valid and reliable assessments of this construct. The current study sought to develop a measure of children’s sleep environments to support research and clinical work on youth’s sleep health.

Methods A total of 813 parents (mean age = 40.6, SD = 8.6; 72% female) completed an online survey regarding their child’s (mean age = 10.5, SD = 3.8; 45% female) sleep environment and sleep-related behavior. The majority of families identified as Caucasian (approximately 80%). Parents reported fairly high annual incomes (median = $75,000), but 28.2% of families reported incomes less than $50,000. A total of 18 items (total scale score; alpha = .74) were selected from a pool of 38 items developed from previous research that examined aspects of the sleep environment and were entered into an exploratory factor analysis from which 4 factors emerged: general sleep environment (10 items, alpha = .91), sleeping alone vs. with siblings (2 items, alpha = .78), presence of electronic screens (4 items, alpha = .75), and emotional environment (2 items, alpha = .80).

Results The subscales demonstrated distinct patterns of correlations with related constructs, and unique predictive variance in explaining children’s daytime sleepiness even after controlling for children’s sleep hygiene, behavior problems, and sleep problems.

Conclusion The current study is one of the first to demonstrate a valid and reliable assessment of children’s sleep environments. Not only will this measure provide researchers with an assessment of a fundamental influence on children’s sleep, but it will also enable clinicians to better measure this construct and support effective sleep health recommendations.
APA, Harvard, Vancouver, ISO, and other styles
47

Sinharay, Sandip. "Experiences With Markov Chain Monte Carlo Convergence Assessment in Two Psychometric Examples." Journal of Educational and Behavioral Statistics 29, no. 4 (December 2004): 461–88. http://dx.doi.org/10.3102/10769986029004461.

Full text
Abstract:
There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine the convergence of the algorithm. Using the output of an MCMC algorithm that has not converged may lead to incorrect inferences on the problem at hand. The convergence is not one to a point, but that of the distribution of a sequence of generated values to another distribution, and hence is not easy to assess; there is no guaranteed diagnostic tool to determine convergence of an MCMC algorithm in general. This article examines the convergence of MCMC algorithms using a number of convergence diagnostics for two real data examples from psychometrics. Findings from this research have the potential to be useful to researchers using the algorithms. For both the examples, the number of iterations required (suggested by the diagnostics) to be reasonably confident that the MCMC algorithm has converged may be larger than what many practitioners consider to be safe.
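One widely used diagnostic of the kind the article examines is the Gelman-Rubin potential scale reduction factor (R-hat), which compares between-chain and within-chain variance across parallel MCMC runs. A minimal, generic sketch (a textbook illustration, not the article's implementation; the simulated chains are invented):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m parallel chains of length n.

    chains: 2-D array, rows = chains, columns = post-burn-in draws of one parameter.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)            # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_plus = (n - 1) / n * w + b / n         # pooled estimate of posterior variance
    return np.sqrt(var_plus / w)

# Four well-mixed chains drawn from the same distribution: R-hat is close to 1.
rng = np.random.default_rng(1)
chains = rng.normal(size=(4, 5000))

# One chain stuck in a different region of the parameter space: R-hat inflates,
# signaling the non-convergence the abstract warns about.
stuck = chains.copy()
stuck[0] += 5.0

print(round(gelman_rubin(chains), 2), round(gelman_rubin(stuck), 2))
```

Values of R-hat near 1 are consistent with convergence; values well above 1 indicate the chains have not yet mixed, echoing the article's point that "safe" iteration counts are often larger than practitioners assume.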
APA, Harvard, Vancouver, ISO, and other styles
48

Cook, Paul F., Ed Farrell, and Jennifer Perlman. "The CCH Consumer Outcome Scales: A Brief Instrument to Assess the Multiple Problems of Homelessness." Journal of Nursing Measurement 15, no. 2 (September 2007): 83–104. http://dx.doi.org/10.1891/106137407782156345.

Full text
Abstract:
Homeless persons are underresearched; existing instruments do not adequately address this population. Clinical experts developed a brief instrument to assess housing, employment, benefits, physical health, mental health, and substance use that was tested for its psychometric properties. The instrument demonstrated content validity based on expert consensus, adequate interrater reliability (average r = .58), convergent and divergent validity with established measures, freedom from social desirability bias (average r = .00 with the Marlowe-Crowne scale), criterion-related validity for housing (85% accurate) and employment (83% accurate) items, and no floor effects. The benefits item had poorer psychometrics. The Colorado Coalition for the Homeless (CCH) Consumer Outcome Scales are recommended for assessment and service planning with homeless individuals. Further research is needed on the instrument’s sensitivity to change over time and applicability to diverse cultural groups.
APA, Harvard, Vancouver, ISO, and other styles
49

Holt, Robert W., Peder J. Johnson, and Timothy E. Goldsmith. "Application of Psychometrics to the Calibration of Air Carrier Evaluators." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 2 (October 1997): 916–20. http://dx.doi.org/10.1177/107118139704100244.

Full text
Abstract:
The FAA's Advanced Qualification Program (AQP) encourages airlines to implement proficiency-based training programs and requires collection of reliable and valid performance assessment data. We present applications of traditional and innovative psychometric methods to this domain.
APA, Harvard, Vancouver, ISO, and other styles
50

Cardone, Daniela, Paola Pinti, and Arcangelo Merla. "Thermal Infrared Imaging-Based Computational Psychophysiology for Psychometrics." Computational and Mathematical Methods in Medicine 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/984353.

Full text
Abstract:
Thermal infrared imaging has been proposed as a potential system for the computational assessment of human autonomic nervous activity and psychophysiological states in a contactless and noninvasive way. Through bioheat modeling of facial thermal imagery, several vital signs can be extracted, including localized blood perfusion, cardiac pulse, breath rate, and sudomotor response, since all these parameters impact the cutaneous temperature. The obtained physiological information could then be used to draw inferences about a variety of psychophysiological or affective states, as proved by the increasing number of psychophysiological studies using thermal infrared imaging. This paper therefore presents a review of the principal achievements of thermal infrared imaging in computational physiology with regard to its capability of monitoring psychophysiological activity.
APA, Harvard, Vancouver, ISO, and other styles