Journal articles on the topic 'Testing, assessment and psychometrics'

Consult the top 50 journal articles for your research on the topic 'Testing, assessment and psychometrics.'

1

Hunt, Earl, and Anne Thissen-Roe. "Interleaving Instruction and Assessment." Journal of Cognitive Education and Psychology 4, no. 1 (January 2004): 47–64. http://dx.doi.org/10.1891/194589504787382901.

Abstract:
Assessment is conducted for three reasons: personnel selection in academic, industrial, and government settings; certification of individuals and accountability of training institutions; and the provision of formative instruction. The first goal has dominated much of psychometrics over the past century, the second is relatively new, and the third is not yet in widespread use. We review the mathematics of personnel selection and argue that there is not a great deal to be gained by improving present testing methods for personnel selection unless the institution doing the selection is either extremely large or highly selective. We concentrate on the third use, providing formative instruction. Formative instruction ought to diagnose an individual's capabilities and knowledge rather than simply grading them on a satisfactory-unsatisfactory scale. However, the idea of grading is, in some fashion, built into traditional psychometric forms of testing. We sketch the beginnings of a psychometric approach intended to provide analyses for formative testing. Examples are given from the field of science instruction.
2

Woods, Stephen A. "Assessment in the digital age: Some challenges for Test Developers and Users." Assessment and Development Matters 10, no. 2 (2018): 22–25. http://dx.doi.org/10.53841/bpsadm.2018.10.2.22.

Abstract:
Key digested message: How should psychometrics specialists (test developers and users) respond to the challenges of the digital age? A broad challenge that faces the discipline of psychometrics is to avoid being rooted in old ways of thinking about testing, whilst simultaneously ensuring that key principles of best practice in the science of assessment are maintained and applied to the new methodologies of testing. This article explores key issues in this focal challenge.
3

Davies, Mark G., Michael J. Rowan, and John Feely. "Psychometrics in assessing hepatic encephalopathy – a brief review." Irish Journal of Psychological Medicine 8, no. 2 (September 1991): 144–46. http://dx.doi.org/10.1017/s0790966700015135.

Abstract:
Hepatic encephalopathy is a neuropsychiatric disorder usually associated with severe hepatic insufficiency. It may, however, be divided into clinical and subclinical groupings. Psychometric testing, serial EEGs, EEG spectral analysis, and event-related potentials are all presently being used to quantify and differentiate between the various stages of hepatic encephalopathy. We review the use of psychometrics in hepatic encephalopathy and discuss evidence that these findings are comparable with the more objective data of electrophysiological studies. An adequate, simple, and inexpensive assessment may be carried out using a battery of psychometric tests, including number connection tests and five-pointed star construction.
4

Shirinkina, E. V. "Psychometric Assessment Standards Based on the «Performance Based Assessment» Methodology." Quality and life 29, no. 1 (March 15, 2021): 15–19. http://dx.doi.org/10.34214/2312-5209-2021-29-1-15-19.

Abstract:
The relevance of the study stems from the demand for a comprehensive assessment of knowledge and skills based on the methodology of competency-based assignments, which consists in performance-based assessment. In the article, the author presents the psychometric standards and recommendations existing in foreign practice that are necessary for a fair, reliable, and objective assessment of the learning outcomes of company employees. The purpose of the work is to study existing standards and recommendations for assessing learning outcomes in the KPI system. The empirical basis of the study was data from the International Test Commission, the British Psychological Society (BPS), and others. The author analyzes the empirical base of the research and systematizes the existing standards and recommendations for assessing testing. The practical significance of the results is that, given the challenges and demands of the modern world, these standards will allow most organizations to develop their measurement processes systematically and respond quickly to new challenges arising before psychometrics.
5

Cordier, Reinie, Renée Speyer, Matthew Martinez, and Lauren Parsons. "Reliability and Validity of Non-Instrumental Clinical Assessments for Adults with Oropharyngeal Dysphagia: A Systematic Review." Journal of Clinical Medicine 12, no. 2 (January 16, 2023): 721. http://dx.doi.org/10.3390/jcm12020721.

Abstract:
This systematic review on non-instrumental clinical assessment in adult oropharyngeal dysphagia (OD) provides an overview of published measures with reported reliability and validity. In alignment with PRISMA, four databases (CINAHL, Embase, PsycINFO, and PubMed) were searched, resulting in a total of 16 measures and 32 psychometric studies included. The included measures assessed any aspect of swallowing, consisted of at least one specific subscale relating to swallowing, were developed by clinical observation, targeted adults, and were developed in English. The included psychometric studies focused on adults, reported on measures for OD-related conditions, described non-instrumental clinical assessments, reported on validity or reliability, and were published in English. Methodological quality was assessed using the standard quality assessment QualSyst. Most measures targeted only restricted subdomains within the conceptual framework of non-instrumental clinical assessments. Across the 16 measures, hypothesis testing and reliability were the most reported psychometrics, whilst structural validity and content validity were the least reported. Overall, data on the reliability and validity of the included measures proved incomplete and frequently did not meet current psychometric standards. Future research should focus on the development of comprehensive non-instrumental clinical assessments for adults with OD using contemporary psychometric research methods.
6

McCredie, Hugh. "Heroes, landmarks and blind alleys in personality assessment." Assessment and Development Matters 6, no. 2 (2014): 16–18. http://dx.doi.org/10.53841/bpsadm.2014.6.2.16.

Abstract:
This is a new series of four articles we have asked Hugh McCredie, a regular contributor to ADM and Vice-Chair of The Psychometrics Forum, to write to mark the centenary of the First World War. The articles will span the ages of psychological testing, and we hope you will enjoy reading about the testing milestones of the last 100 years.
7

Schroeder, Amber N., Kaleena R. Odd, and Julia H. Whitaker. "Agree to disagree: Examining the psychometrics of cybervetting." Journal of Managerial Psychology 35, no. 5 (June 25, 2020): 435–50. http://dx.doi.org/10.1108/jmp-09-2018-0420.

Abstract:
Purpose: Due to the paucity of research on web-based job applicant screening (i.e. cybervetting), the purpose of the current study was to examine the psychometric properties of cybervetting, including an examination of the impact of adding structure to the rating process.
Design/methodology/approach: Using a mixed-factorial design, 122 supervisors conducted cybervetting evaluations of applicant personality, cognitive ability, written communication skills, professionalism, and overall suitability. Cross-method agreement (i.e. the degree of similarity between cybervetting ratings and other assessment methods), as well as interrater reliability and agreement were examined, and unstructured versus structured cybervetting rating formats were compared.
Findings: Cybervetting assessments demonstrated high interrater reliability and interrater agreement, but only limited evidence of cross-method agreement was provided. In addition, adding structure to the cybervetting process did not enhance the psychometric properties of this assessment technique.
Practical implications: This study highlighted that whereas cybervetting raters demonstrated a high degree of consensus in cybervetting-based attributions, there may be concerns regarding assessment accuracy, as cybervetting-based ratings generally differed from applicant test scores and self-assessment ratings. Thus, employers should use caution when utilizing this pre-employment screening technique.
Originality/value: Whereas previous research has suggested that cybervetting ratings demonstrate convergence with other traditional assessments (albeit with relatively small effects), these correlational links do not provide information regarding cross-method agreement or method interchangeability. Thus, this study bridges a crucial gap in the literature by examining cross-method agreement for a variety of job-relevant constructs, as well as empirically testing the impact of adding structure to the cybervetting rating process.
8

Remawi, Bader Nael, Amy Gadoud, Iain Malcolm James Murphy, and Nancy Preston. "Palliative care needs-assessment and measurement tools used in patients with heart failure: a systematic mixed-studies review with narrative synthesis." Heart Failure Reviews 26, no. 1 (August 3, 2020): 137–55. http://dx.doi.org/10.1007/s10741-020-10011-7.

Abstract:
Patients with heart failure have comparable illness burden and palliative care needs to those with cancer. However, few of them are offered timely palliative care. One main barrier is the difficulty in identifying those who require palliative care. Several palliative care needs-assessment/measurement tools were used to help identify these patients and assess/measure their needs, but it is not known which one is the most appropriate for this population. This review aimed to identify the most appropriate palliative care needs-assessment/measurement tools for patients with heart failure. Cochrane Library, MEDLINE Complete, AMED, PsycINFO, CINAHL Complete, EMBASE, EThOS, websites of the identified tools, and references and citations of the included studies were searched from inception to 25 June 2020. Studies were included if they evaluated palliative care needs-assessment/measurement tools for heart failure populations in terms of development, psychometrics, or palliative care patient/needs identification. Twenty-seven papers were included regarding nineteen studies, most of which were quantitative and observational. Six tools were identified and compared according to their content and context of use, development, psychometrics, and clinical applications in identifying patients with palliative care needs. Despite limited evidence, the Needs Assessment Tool: Progressive Disease – Heart Failure (NAT:PD-HF) is the most appropriate palliative care needs-assessment tool for use in heart failure populations. It covers most of the patient needs and has the best psychometric properties and evidence of identification ability and appropriateness. Psychometric testing of the tools in patients with heart failure and evaluating the tools to identify those with palliative care needs require more investigation.
9

Jenkinson, Josephine C. "Diagnosis of Developmental Disability: Psychometrics, Behaviour, and Etiology." Behaviour Change 14, no. 2 (June 1997): 60–72. http://dx.doi.org/10.1017/s0813483900003545.

Abstract:
Diagnosis of developmental disability lacks precision, partly because of differences in definitions of the concept, but largely because of problems specific to the use of psychometric measures with children who have a developmental disability. These problems arise from inadequate evidence of reliability for psychometric measures at extremes of the normal distribution, from lack of comparability between different tests and between different editions of tests, and from practical considerations in the assessment of people with various disabilities. Adaptive behaviour assessment has been introduced to supplement intelligence testing, but lack of a clear conceptualisation of this concept and doubts about the appropriateness of United States norms for Australian children add to the difficulties of interpreting results of standardised scales. Systematic assessment of behavioural problems needs to be incorporated into diagnostic procedures. This paper argues that improvements in the accuracy of diagnosis are unlikely to come from further technical advances in psychometric assessment, and suggests that diagnosis in the future should take into account new technologies which link etiology to specific behavioural patterns to supplement existing procedures.
10

Bachman, Lyle F. "Assessment and Evaluation." Annual Review of Applied Linguistics 10 (March 1989): 210–26. http://dx.doi.org/10.1017/s0267190500001318.

Abstract:
Research and development in the assessment of language abilities in the past decade have been concerned both with achieving a better understanding of the nature of language abilities and other factors that affect performance on language tests and with developing methods of assessment that are consistent with the way applied linguists view language use. The way language testers conceptualize language abilities has been strongly influenced by the broadened view of language proficiency as communicative competence that has emerged in applied linguistics. And while this view of language proficiency provides a much richer conceptual basis for characterizing the language abilities to be measured, it has presented language testers with a major challenge in defining these abilities and the interactions among them with sufficient precision to permit their measurement. Language testing researchers have also been influenced by developments in second language acquisition, investigating the effects on test performance of other factors such as background knowledge, cognitive style, native language, ethnicity, and sex. Finally, language testing research and practice have been influenced by advances in psychometrics, in that more sophisticated analytic tools are being used both to unravel the tangled web of language abilities and to assure that the measures of these abilities are reliable, valid, efficient, and appropriate for the uses for which they are intended.
11

Slepkov, A. D., M. L. Van Bussel, K. M. Fitze, and W. S. Burr. "A Baseline for Multiple-Choice Testing in the University Classroom." SAGE Open 11, no. 2 (April 2021): 215824402110168. http://dx.doi.org/10.1177/21582440211016838.

Abstract:
There is a broad literature in multiple-choice test development, both in terms of item-writing guidelines and psychometric functionality as a measurement tool. However, most of the published literature concerns multiple-choice testing in the context of expert-designed high-stakes standardized assessments, with little attention being paid to the use of the technique within non-expert instructor-created classroom examinations. In this work, we present a quantitative analysis of a large corpus of multiple-choice tests deployed in the classrooms of a primarily undergraduate university in Canada. Our report aims to establish three related things. First, reporting on the functional and psychometric operation of 182 multiple-choice tests deployed in a variety of courses at all undergraduate levels of education establishes a much-needed baseline for actual as-deployed classroom tests. Second, we motivate and present modified statistical measures, such as item-excluded correlation measures of discrimination and length-normalized measures of reliability, that should serve as useful parameters for future comparisons of classroom test psychometrics. Finally, we use the broad empirical data from our survey of tests to update widely used item-quality guidelines.
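The "item-excluded correlation measure of discrimination" this abstract refers to is essentially the corrected item-total correlation from classical test theory: each item is correlated with the total score minus that item, so an item cannot inflate its own discrimination. A minimal sketch with hypothetical 0/1 response data (not the authors' corpus):

```python
# Corrected (item-excluded) item-total correlation, a standard discrimination
# index in classical test theory. Data below are hypothetical 0/1 item scores.

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def item_excluded_discrimination(responses, item):
    """Correlate an item's scores with total scores that exclude the item."""
    item_scores = [r[item] for r in responses]
    rest_scores = [sum(r) - r[item] for r in responses]
    return pearson(item_scores, rest_scores)

responses = [  # rows: examinees; columns: items (1 = correct)
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
for i in range(4):
    print(f"item {i}: D = {item_excluded_discrimination(responses, i):+.2f}")
```

Excluding the item from the total matters most on short tests, where the item's self-correlation would otherwise make up a noticeable share of the uncorrected index.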
12

Wyche, LaMonte G., and Melvin R. Novick. "Standards for Educational and Psychological Testing: The Issue of Testing Bias from the Perspective of School Psychology and Psychometrics." Journal of Black Psychology 11, no. 2 (February 1985): 43–48. http://dx.doi.org/10.1177/009579848501100202.

Abstract:
This article examines the various ways in which the problem of testing bias in the context of some contemporary educational, legal, and societal developments can and cannot be addressed by developments in test standards. It reiterates the contention that school psychologists should utilize differential assessment strategies in formulating decisions regarding school age Black children.
13

Lundgren, Henriette, Brigitte Kroon, and Rob F. Poell. "Personality testing and workplace training." European Journal of Training and Development 41, no. 3 (April 3, 2017): 198–221. http://dx.doi.org/10.1108/ejtd-03-2016-0015.

Abstract:
Purpose: The purpose of this paper is to explore how and why personality tests are used in workplace training. This research paper is guided by three research questions that inquire about the role of external and internal stakeholders, the value of psychometric and practical considerations in test selection, and the purpose of personality test use in workplace training.
Design/methodology/approach: This research paper uses multiple-case study analysis. Interviews, test reports, product flyers and email correspondence were collected and analyzed from publishers, associations, psychologists and human resource development (HRD) practitioners in Germany, the UK and The Netherlands between 2012 and 2016.
Findings: Themes emerge around industry tensions among practitioners and professional associations, psychologists and non-psychologists. Ease of use is a more important factor than psychometrics in the decision-making process. Also, practitioners welcome publishers that offer free coaching support. In the process of using tests for development rather than assessment, re-labeling takes place when practitioners and publishers use positive terms for personality tests as tools for personal stocktaking and development.
Research limitations/implications: Despite extensive data collection and analysis efforts, this study is limited by its focus on a relatively small number of country cases and stakeholders per case.
Practical implications: By combining scientific evidence with practical application, stakeholders can take first steps toward more evidence-based HRD practice around personality testing in workplace training.
Originality/value: Little academic literature exists on the use of personality testing in workplace training. Without a clear understanding of the use of personality testing outside personnel selection, the current practice of personality tests for developmental purposes could raise ethical concerns about the rights and responsibilities of test takers.
14

Jones, Chelsea, Jessica Harasym, Antonio Miguel-Cruz, Shannon Chisholm, Lorraine Smith-MacDonald, and Suzette Brémault-Phillips. "Neurocognitive Assessment Tools for Military Personnel With Mild Traumatic Brain Injury: Scoping Literature Review." JMIR Mental Health 8, no. 2 (February 22, 2021): e26360. http://dx.doi.org/10.2196/26360.

Abstract:
Background: Mild traumatic brain injury (mTBI) occurs at a higher frequency among military personnel than among civilians. A common symptom of mTBIs is cognitive dysfunction. Health care professionals use neuropsychological assessments as part of a multidisciplinary and best practice approach for mTBI management. Such assessments support clinical diagnosis, symptom management, rehabilitation, and return-to-duty planning. Military health care organizations currently use computerized neurocognitive assessment tools (NCATs). NCATs and more traditional neuropsychological assessments present unique challenges in both clinical and military settings. Many research gaps remain regarding psychometric properties, usability, acceptance, feasibility, effectiveness, sensitivity, and utility of both types of assessments in military environments.
Objective: The aims of this study were to explore evidence regarding the use of NCATs among military personnel who have sustained mTBIs; evaluate the psychometric properties of the most commonly tested NCATs for this population; and synthesize the data to explore the range and extent of NCATs among this population, clinical recommendations for use, and knowledge gaps requiring future research.
Methods: Studies were identified using MEDLINE, Embase, American Psychological Association PsycINFO, CINAHL Plus with Full Text, Psych Article, Scopus, and Military & Government Collection. Data were analyzed using descriptive analysis, thematic analysis, and the Randolph Criteria. Narrative synthesis and the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for Scoping Reviews) guided the reporting of findings. The psychometric properties of NCATs were evaluated with specific criteria and summarized.
Results: Of the 104 papers, 33 met the inclusion criteria for this scoping review. Thematic analysis and NCAT psychometrics were reported and summarized.
Conclusions: When considering the psychometric properties of the most commonly used NCATs in military populations, these assessments have yet to demonstrate adequate validity, reliability, sensitivity, and clinical utility among military personnel with mTBIs. Additional research is needed to further validate NCATs within military populations, especially for those living outside of the United States and individuals experiencing other conditions known to adversely affect cognitive processing. Knowledge gaps remain, warranting further study of psychometric properties and the utility of baseline and normative testing for NCATs.
15

Baker, Eva L., and Edmund W. Gordon. "From the Assessment OF Education to the Assessment for Education: Policy and Futures." Teachers College Record: The Voice of Scholarship in Education 116, no. 11 (November 2014): 1–24. http://dx.doi.org/10.1177/016146811411601107.

Abstract:
Context: Educational reform in the United States has had a growing dependence on accountability achieved through large-scale assessment. Despite discussion and advocacy for assessment purposes that would assist learning, provide help to teachers' instructional plans and execution, and give a broader perspective of the depth and breadth of learning, the general focus still remains on accountability, now elaborated with sanctions for schools and personnel.
Focus of Study: To generate scholarly discussion, options for practice, and grounded predictions about testing in the next decades, the Gordon Commission on the Future of Assessment in Education was convened.
Participants: Convened over a two-year period and with 30 people on the steering committee, the commissioners included well-known scholars grounded in psychometrics, assessment design, technology, learning, instruction, language, subject matter, and teaching. Commissioners, additional authors, and reviewers were largely drawn from universities and private for-profit and nonprofit institutions. Professor Edmund W. Gordon was the Chair of the Commission.
Design: A knowledge acquisition and synthesis study; the product design relied on papers authored by expert scholars describing their understanding of productive student testing in their own domains. The commission funded papers on a wide variety of topics. This paper focuses on two of the major topics of the reports: the emphasis on shifting assessment to help rather than simply to mark progress, and how future contexts, including technological change, may impinge on testing options.
Conclusions: The paper calls for a transformation of assessment purpose and use, from annual, time-controlled accountability assessments to more continuous assessments used in the course of a learner's acquisition of understanding, motivation for learning, collaboration, and deep application of knowledge in problem solving, communication, and authentic settings. Assessments should emphasize helping students of varying backgrounds and goals, as well as their teachers. The role of technology as an assessment design, administration, and reporting toolset is described in the context of changing knowledge expectations and a global competitive environment.
16

Sword, Wendy, Maureen Heaman, Wendy E. Peterson, Ann Salvador, Noori Akhtar-Danesh, and Amanda Bradford-Janke. "Psychometric Testing of the French Language Quality of Prenatal Care Questionnaire." Journal of Nursing Measurement 23, no. 3 (2015): 436–51. http://dx.doi.org/10.1891/1061-3749.23.3.436.

Abstract:
Background and Purpose: To assess the psychometrics of the French language Quality of Prenatal Care Questionnaire (QPCQ). Methods: Data from 302 women were used in a confirmatory factor analysis and in assessment of construct validity through hypothesis testing and internal consistency reliability using Cronbach’s alpha. Results: The 6 factors (subscales) were verified and confirmed. Hypothesis testing further supported construct validity. The overall QPCQ had acceptable internal consistency reliability (Cronbach’s alpha = .97) as did 5 subscales (Cronbach’s alpha = .70–.92); the Sufficient Time subscale had poorer reliability (Cronbach’s alpha = .61). Conclusions: The French language QPCQ is a valid and reliable self-report measure of prenatal care quality. It can be used in research and in quality improvement work to strengthen prenatal care services.
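The internal-consistency figures this abstract reports are Cronbach's alpha values, computed as α = k/(k − 1) × (1 − Σ item variances / variance of total scores) for k items. A minimal sketch with hypothetical Likert-type ratings (not QPCQ data):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance).
# Population (n-denominator) variances are used; either convention works if
# applied consistently to both numerator and denominator.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(responses):
    """responses: list of respondents, each a list of k item scores."""
    k = len(responses[0])
    items = list(zip(*responses))                 # transpose to per-item columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point ratings from six respondents on a 4-item subscale
data = [
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 3],
]
print(f"alpha = {cronbach_alpha(data):.2f}")  # → alpha = 0.94
```

Alpha rises when items covary strongly relative to their individual variances, which is why the near-uniform ratings above score so high; the .61 reported for the Sufficient Time subscale would reflect weaker inter-item covariance.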
17

Magimairaj, Beula M., Philip Capin, Sandra L. Gillam, Sharon Vaughn, Greg Roberts, Anna-Maria Fall, and Ronald B. Gillam. "Online Administration of the Test of Narrative Language–Second Edition: Psychometrics and Considerations for Remote Assessment." Language, Speech, and Hearing Services in Schools 53, no. 2 (April 11, 2022): 404–16. http://dx.doi.org/10.1044/2021_lshss-21-00129.

Abstract:
Purpose: Our aim was to evaluate the psychometric properties of the online administered format of the Test of Narrative Language–Second Edition (TNL-2; Gillam & Pearson, 2017), given the importance of assessing children's narrative ability and considerable absence of psychometric studies of spoken language assessments administered online. Method: The TNL-2 was administered to 357 school-age children at risk for language and literacy difficulties as part of a randomized controlled trial, across three annual cohorts, at three time points (pretest, posttest, and 5-month follow-up). Cohort 3 students were tested using an online format at posttest and at follow-up. We compared the Cronbach's alpha internal consistency reliability of the TNL-2 online testing scores with in-person scores from TNL-2 normative data and Cohort 3 in-person testing at pretest, and interrater reliability for Cohort 3 across test points. In addition, we examined measurement invariance across test occasions and the criterion validity of the TNL-2, the latter based on its correlations with narrative sample measures (Mean Length of Utterance in words and the Monitoring Indicators of Scholarly Language rubric). Results: Internal consistency reliability, interrater reliability, and measurement invariance analyses of the online and in-person administration of the TNL-2 yielded similar outcomes. The criterion validity of the TNL-2 was found to be good. Conclusions: TNL-2 psychometric properties from online administration were generally in the good range and were not significantly different from in-person testing. When administered online using standardized procedures, the TNL-2 is valid and reliable for use in assessing narrative language proficiency in school-age children at risk for language and learning difficulties.
18

Chuang, Hsiao-Ling, Ching-Pyng Kuo, Chia-Ying Li, and Wen-Chun Liao. "Psychometric Testing of Behavior Assessment for Children." Asian Nursing Research 10, no. 1 (March 2016): 39–44. http://dx.doi.org/10.1016/j.anr.2015.10.010.

19

Ham, Yeajin, Suyeong Bae, Heerim Lee, Yaena Ha, Heesu Choi, Ji-Hyuk Park, Hae Yean Park, and Ickpyo Hong. "Item-level psychometrics of the Ascertain Dementia Eight-Item Informant Questionnaire." PLOS ONE 17, no. 7 (July 5, 2022): e0270204. http://dx.doi.org/10.1371/journal.pone.0270204.

Abstract:
The aim of this study is to evaluate the item-level psychometrics of the Ascertain Dementia Eight-Item Informant Questionnaire (AD-8) by examining its dimensionality, rating scale integrity, item fit statistics, item difficulty hierarchy, item-person match, and precision. We used confirmatory factor analysis and the Rasch rating scale model for analyzing the data extracted from the proxy versions of the 2019 and 2020 National Health and Aging Trends Study, USA. A total of 403 participants were included in the analysis. The confirmatory factor analysis with a 1-factor model using the robust weighted least squares (WLSMV) estimator indicated a unidimensional measurement structure (χ² = 41.015, df = 20, p = 0.004; root mean square error of approximation = 0.051; comparative fit index = 0.995; Tucker–Lewis Index = 0.993). The findings indicated that the AD-8 has no misfitting items and no differential item functioning across sex and gender. The items were evenly distributed in the item difficulty rating (range: −2.30 to 0.98 logits). While there were floor effects, the AD-8 revealed good reliability (Rasch person reliability = 0.67, Cronbach's alpha = 0.89). The Rasch analysis reveals that the AD-8 has excellent psychometric properties and can be used as a screening assessment tool in clinical settings, allowing clinicians to measure dementia both quickly and efficiently. To summarize, the AD-8 could be a useful primary screening tool to be used with additional diagnostic testing if the patient is accompanied by a reliable informant.
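The RMSEA in this abstract can be recovered from the χ² statistic using the common convention RMSEA = √(max(χ² − df, 0) / (df · (N − 1))); note that some software uses N rather than N − 1 in the denominator. A quick check against the figures above (χ² = 41.015, df = 20, N = 403):

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a chi-square fit statistic.

    Uses the (N - 1) sample-size convention; truncated at zero when the
    model chi-square is below its degrees of freedom.
    """
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Figures taken from the abstract above
print(f"RMSEA = {rmsea(41.015, 20, 403):.3f}")  # → RMSEA = 0.051
```

Matching the reported 0.051 confirms which sample-size convention the authors' software applied.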
20

Lambert, H. C., E. G. Gisel, M. E. Groher, M. Abrahamowicz, and S. Wood-Dauphinee. "Psychometric Testing of the McGill Ingestive Skills Assessment." American Journal of Occupational Therapy 60, no. 4 (July 1, 2006): 409–19. http://dx.doi.org/10.5014/ajot.60.4.409.

21

Orenduff, Melissa C., Erika T. Rezeli, Stephen D. Hursting, and Carl F. Pieper. "Psychometrics of the Balance Beam Functional Test in C57BL/6 Mice." Comparative Medicine 71, no. 4 (August 1, 2021): 302–8. http://dx.doi.org/10.30802/aalas-cm-21-000033.

Full text
Abstract:
Aging is associated with a progressive decline in physical function characterized by decreased mobility, which is an important risk factor for loss of independence and reduced quality of life. Functional testing conducted in animals has advanced our understanding of age-related changes in physical ability and contributed to the development of physiologic measurements that can be used to assess functional changes during aging. The balance beam test is one assessment tool used to measure age-related changes in balance and coordination. The goal of this study is to provide analytical examples and psychometric support for a protocol, showing how the number of successive test runs, foot slips, pauses, and hesitations affects the reliability of the primary outcome measure, the time to cross the beam. Our results suggest that conducting more than 1 training session, consisting of greater than or equal to 3 successful training runs, followed by at least one test session with no less than 2 successful runs (that is, runs without pauses or hesitations) provides a psychometrically sound outcome. The data presented here indicate that a psychometric approach can improve protocol design and reliability of balance beam measures in mice.
APA, Harvard, Vancouver, ISO, and other styles
22

Steele-Moses, Susan, Mary Koloroutis, and Dana M. Ydarraga. "Testing a “Caring Assessment for Care Givers” Instrument." Creative Nursing 17, no. 1 (2011): 43–50. http://dx.doi.org/10.1891/1078-4535.17.1.43.

Full text
Abstract:
Based on Kristen Swanson’s theory of caring, Caring Assessment for the Care Giver has been traditionally used in preparation for Relationship-Based Care (RBC) implementation; however, its reliability and validity were not known. This article discusses the psychometric testing of the instrument.
APA, Harvard, Vancouver, ISO, and other styles
23

Shanmugam, S. Kanageswari Suppiah, and Leong Chee Kin. "Introducing Computer Adaptive Testing to a Cohort of Mathematics Teachers: The Case of Concerto." Southeast Asian Mathematics Education Journal 2, no. 1 (November 30, 2012): 61–73. http://dx.doi.org/10.46517/seamej.v2i1.18.

Full text
Abstract:
This article describes a study that explores on-line assessment, with the objectives to identify features that support or impede the usability of Concerto, an on-line adaptive testing software that was developed by the Psychometrics Centre of the University of Cambridge. We report on the analysis of data collected during a one-month in-service programme organised for secondary teachers and teacher educators from the Southeast Asian Minister of Education Organisation (SEAMEO) region. The study identifies the challenges the participants encountered during a one-day workshop and evaluates the difficulties of adopting Concerto to create a simple and an adaptive on-line mathematics test. While the small scale of the study limits applicability to other samples, the findings illustrate the complexity of using Concerto’s features and the commonly occurring difficulties, providing the basis for the development of new workshop materials that will contribute to the improvement of introductory Concerto workshops conducted in the future.
APA, Harvard, Vancouver, ISO, and other styles
24

Sleutel, Martha R., Celestina Barbosa-Leiker, and Marian Wilson. "Psychometric Testing of the Health Care Evidence-Based Practice Assessment Tool." Journal of Nursing Measurement 23, no. 3 (2015): 485–98. http://dx.doi.org/10.1891/1061-3749.23.3.485.

Full text
Abstract:
Background and Purpose: Evidence-based practice (EBP) is essential to optimal health care outcomes. Interventions to improve use of evidence depend on accurate assessments from reliable, valid, and user-friendly tools. This study reports psychometric analyses from a modified version of a widely used EBP questionnaire, the Information Literacy for Nursing Practice (ILNP). Methods: After content validity assessments by nurse researchers, a convenience sample of 2,439 nurses completed the revised 23-item questionnaire. We examined internal consistency and used factor analyses to assess the factor structure. Results: A modified 4-factor model demonstrated adequate fit to the data. Cronbach’s alpha was .80–.92 for the subscales. Conclusions: The shortened ILNP (renamed Healthcare EBP Assessment Tool or HEAT) demonstrated adequate content validity, construct validity, and reliability.
APA, Harvard, Vancouver, ISO, and other styles
25

Smith, Leann Schow, and Julie M. Barkmeier-Kraemer. "Conceptual Framework Behind the Development of a Level of Confidence Tool: The Pediatric Videofluoroscopic Swallow Study Value Scale." American Journal of Speech-Language Pathology 31, no. 2 (March 10, 2022): 689–704. http://dx.doi.org/10.1044/2021_ajslp-20-00295.

Full text
Abstract:
Purpose: The videofluoroscopic swallow study (VFSS) is the most commonly used instrumental procedure for evaluating swallowing in pediatric populations suspected of having dysphagia. Assessment and interpretation of a VFSS in pediatric populations are frequently challenged by testing-specific factors that can raise concerns regarding the representativeness of swallow events observed during testing compared to daily feeding/swallowing physiology. When VFSS findings do not represent typical swallowing patterns, treatment recommendations can result in suboptimal outcomes. To address this current challenge to pediatric VFSS interpretation and associated treatment recommendations, the pediatric VFSS Value Scale (pVFSS Value Scale) was developed within a tertiary regional pediatric medical center. This clinical focus article summarizes the initial scale development phases and resulting conceptual framework for rating clinical testing factors that influence a clinician's level of confidence regarding pediatric VFSS findings. Future goals for scientific evaluation and clinical utilization of this new rating scale are also reported. Conclusions: The pVFSS Value Scale was developed to assist clinicians with interpretation of pediatric VFSS assessment outcomes and to efficiently communicate factors influencing impressions and treatment recommendations with team members and caregivers. This clinical focus article summarizes potential uses of this tool to inform treatment planning as well as future clinical research to evaluate its psychometrics and clinical utility.
APA, Harvard, Vancouver, ISO, and other styles
26

Fabiano-Smith, Leah. "Standardized Tests and the Diagnosis of Speech Sound Disorders." Perspectives of the ASHA Special Interest Groups 4, no. 1 (February 26, 2019): 58–66. http://dx.doi.org/10.1044/2018_pers-sig1-2018-0018.

Full text
Abstract:
Purpose: The purpose of this tutorial is to provide speech-language pathologists with the knowledge and tools to (a) evaluate standardized tests of articulation and phonology and (b) utilize criterion-referenced approaches to assessment in the absence of psychometrically strong standardized tests. Method: Relevant literature on psychometrics of standardized tests used to diagnose speech sound disorders in children is discussed. Norm-referenced and criterion-referenced approaches to assessment are reviewed, and a step-by-step guide to a criterion-referenced assessment is provided. Published criterion references are provided as a quick and easy resource guide for professionals. Results: Few psychometrically strong standardized tests exist for the evaluation of speech sound disorders for monolingual and bilingual populations. The use of criterion-referenced testing is encouraged to avoid diagnostic pitfalls. Discussion: Speech-language pathologists who increase their use of criterion-referenced measures and decrease their use of standardized tests will arrive at more accurate diagnoses of speech sound disorders.
APA, Harvard, Vancouver, ISO, and other styles
27

Stanciu, Alina, Cătălin Gabriel Ioniță, Adrian Toșcă, and Dan Florin Stănescu. "Development of an Integrated Game Based Assessment Approach – The Next Generation of Psychometric Testing." European Journal of Sustainable Development 8, no. 5 (October 1, 2019): 270. http://dx.doi.org/10.14207/ejsd.2019.v8n5p270.

Full text
Abstract:
Game-based assessments have received a lot of attention in the last decade. In a recent study of human resources practitioners, 75% of participants indicated that they would consider using gamification as part of their own recruitment and selection strategy in the near future. Following the methodological approach previously used in the educational environment, two approaches to building and using GBA in the organizational environment can be distinguished: gamified assessment – gamifying an (already existing) psychometric test; and psychometric play – using a game to gather evaluation data. Previous studies highlighted that those applying for a job are eager to use game-based assessment for self-evaluation, especially when these games are available for free. Game-based assessments can also help maintain a high commitment during the evaluation, which reduces the likelihood of some candidates dropping out in the process and also increases the amount of time that data can be collected. The current paper aims to present the preliminary efforts made to gamify two psychometric tests, namely spatial and verbal reasoning.
Keywords: Game based assessment; recruitment; spatial reasoning; verbal reasoning
APA, Harvard, Vancouver, ISO, and other styles
28

Ecoff, Laurie, and Jaynelle F. Stichler. "Development and Psychometric Testing of a Leadership Competency Assessment." JONA: The Journal of Nursing Administration 52, no. 12 (December 2022): 666–71. http://dx.doi.org/10.1097/nna.0000000000001229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Watkins, S. E., P. Williams, R. E. J. Ryder, and W. Bowshier. "Psychometric Assessment of Diabetic Impotence." British Journal of Psychiatry 162, no. 6 (June 1993): 840–42. http://dx.doi.org/10.1192/bjp.162.6.840.

Full text
Abstract:
In a study of 23 diabetic men complaining of impotence, completion of physical tests, self-report psychometric testing, a rating of marital intimacy, and a semistructured interview revealed that, of ten patients found to be at risk of psychogenic impotence secondary to marital or psychiatric morbidity, five were thought to have adequate erectile response and to have a psychogenic component to their problem. This seems to show high sensitivity, if not specificity, of the self-report questionnaires.
APA, Harvard, Vancouver, ISO, and other styles
30

Bloniasz, Patrick Francis. "On Educational Assessment Theory: A High-Level Discussion of Adolphe Quetelet, Platonism, and Ergodicity." Philosophies 6, no. 2 (June 4, 2021): 46. http://dx.doi.org/10.3390/philosophies6020046.

Full text
Abstract:
Educational assessments, specifically standardized and normalized exams, owe most of their foundations to psychological test theory in psychometrics. While the theoretical assumptions of these practices are widespread and relatively uncontroversial in the testing community, there are at least two that are philosophically and mathematically suspect and have troubling implications in education. Assumption 1 is that repeated assessment measures that are calculated into an arithmetic mean are thought to represent some real stable, quantitative psychological trait or ability plus some error. Assumption 2 is that aggregated, group-level educational data collected from assessments can then be interpreted to make inferences about a given individual person over time without explicit justification. It is argued that the former assumption cannot be taken for granted; it is also argued that, while it is typically attributed to 20th century thought, the assumption in a rigorous form can be traced back at least to the 1830s via an unattractive Platonistic statistical thesis offered by one of the founders of the social sciences—Belgian mathematician Adolphe Quetelet (1796–1874). While contemporary research has moved away from using his work directly, it is demonstrated that cognitive psychology is still facing the preservation of assumption 1, which is becoming increasingly challenged by current paradigms that pitch human cognition as a dynamical, complex system. However, how to deal with assumption 1 and whether it is broadly justified is left as an open question. It is then argued that assumption 2 is only justified by assessments having ergodic properties, which is a criterion rarely met in education; specifically, some forms of normalized standardized exams are intrinsically non-ergodic and should be thought of as invalid assessments for saying much about individual students and their capability. 
The article closes with a call for the introduction of dynamical mathematics into educational assessment at a conceptual level (e.g., through Bayesian networks), the critical analysis of several key psychological testing assumptions, and the introduction of dynamical language into philosophical discourse. Each of these prima facie distinct areas ought to inform each other more closely in educational studies.
APA, Harvard, Vancouver, ISO, and other styles
31

Mabry, Linda, Jayne Poole, Linda Redmond, and Angelia Schultz. "Local Impact of State Testing in Southwest Washington." education policy analysis archives 11 (July 17, 2003): 22. http://dx.doi.org/10.14507/epaa.v11n22.2003.

Full text
Abstract:
A decade after implementation of a state testing and accountability mandate, teachers' practices and perspectives regarding their classroom assessments and their state's assessments of student achievement were documented in a study of 31 teachers in southwest Washington state. Against a background of national trends and standards of psychometric quality, the data were analyzed for teachers' beliefs and practices regarding classroom assessment and also regarding state assessment, commonalities and differences among teachers who taught at grade levels tested by the state and those who did not, teachers' views about the impact of state assessment on their students and their classrooms, and their views about whether state testing promoted educational improvement or reform as intended. Data registered (1) teachers' preferences for multiple measures and their objections to single-shot high-stakes testing as insufficiently informative, unlikely to promote valid inferences of student achievement, and often distortive of curriculum and pedagogy; (2) teachers' objections to the state test as inappropriate for nonproficient speakers of English, for students eligible for special services, and for impoverished students; and (3) teachers' preferences for personalized assessments respectful of student circumstances and readiness, rather than standardized assessments. Teachers' practical wisdom thus appeared more congruent than the state testing program with measurement principles regarding (1) multiple methods and (2) validation for specific test usage, including usage with disadvantaged subgroups of test-takers. Findings contrasted a distinction of emphasis: state focus on "testing students" as distinct from teachers' focus on "testing students."
APA, Harvard, Vancouver, ISO, and other styles
32

Noijons, José. "Testing Computer Assisted Language Testing." CALICO Journal 12, no. 1 (January 14, 2013): 37–58. http://dx.doi.org/10.1558/cj.v12i1.37-58.

Full text
Abstract:
Much computer assisted language learning (CALL) material that includes tests and exercises looks attractive enough but is clearly lacking in terms of validation: the possibilities of the computer and the inventiveness of the programmers mainly determine the format of tests and exercises, causing possible harm to a fair assessment of pupils' language abilities. This article begins with a definition of computer assisted language testing (CALT), followed by a discussion of the various processes involved. Both advantages and disadvantages of CALT are outlined. Psychometric aspects of computer adaptive testing are then discussed. Issues of validity and reliability in CALT are acknowledged. A table of factors in CALT distinguishes between test content and the mechanics of taking a test, before, during and after a test. The various factors are examined and compiled into a checklist for developing CALT. The article ends with a call for professional testers and developers of educational software to work together in developing CALT.
APA, Harvard, Vancouver, ISO, and other styles
33

DeFreese, J. D., Michael J. Baum, Julianne D. Schmidt, Benjamin M. Goerger, Nikki Barczak, Kevin M. Guskiewicz, and Jason P. Mihalik. "Effects of College Athlete Life Stressors on Baseline Concussion Measures." Journal of Sport Rehabilitation 29, no. 7 (September 1, 2020): 976–83. http://dx.doi.org/10.1123/jsr.2018-0378.

Full text
Abstract:
Context: Concussion baseline testing helps injury evaluation by allowing postinjury comparisons to preinjury measures. To facilitate best practice, common neurocognitive, balance, and symptom report metrics used in concussion baseline testing merit examination relative to participant life stressors. Objective: The purpose of this study was to determine if life stressors are associated with college athlete neurocognitive function, postural control, and symptom scores at preseason baseline assessment. Design: All study variables were collected in a single laboratory session where athletes completed valid and reliable psychometric instruments as well as computerized neurocognitive and balance assessments. Setting: Sports medicine research center on an American university campus. Participants: A convenience sample of 123 college student-athletes: 47 females (age = 18.9 [4.3] y) and 76 males (age = 19.4 [1.6] y). Main Outcome Measures: Participants were categorized into low, moderate, or high life stressors groups using scores from the Social Readjustment Rating Scale-Revised. Dependent variables included outcomes from the CNS Vital Signs test, the Sensory Organization Test, and the graded symptom checklist indexing neurocognition, balance, and symptom severity, respectively. Results: One-way analysis of variance revealed that the moderate life stressors group performed significantly worse than the low life stressors group on the baseline verbal memory domain of the CNS Vital Signs (F(2, 119) = 3.28; P = .04) only. Conclusion: In the current college athlete sample, few baseline concussion assessment variables were found to be significantly associated with life stressors. Considering the clinical significance of these variables, psychological life stressors may not be a confounding factor in concussion evaluation.
APA, Harvard, Vancouver, ISO, and other styles
34

Stolt, Minna, Riitta Suhonen, Pauli Puukka, Matti Viitanen, Päivi Voutilainen, and Helena Leino-Kilpi. "Development process and psychometric testing of foot health assessment instrument." Journal of Clinical Nursing 22, no. 9-10 (March 29, 2013): 1310–21. http://dx.doi.org/10.1111/jocn.12078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kennerly, Susan M., Tracey L. Yap, Annette Hemmings, Gulbahar Beckett, John C. Schafer, and Andrea Borchers. "Development and Psychometric Testing of the Nursing Culture Assessment Tool." Clinical Nursing Research 21, no. 4 (April 19, 2012): 467–85. http://dx.doi.org/10.1177/1054773812440810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Jansen, Lynn, and Dorothy Forbes. "The Psychometric Testing of a Urinary Incontinence Nursing Assessment Instrument." Journal of Wound, Ostomy and Continence Nursing 33, no. 1 (January 2006): 69–76. http://dx.doi.org/10.1097/00152192-200601000-00011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Gordon, Shirley C., Cynthia Ann Blum, and Dax Andrew Parcells. "Psychometric Testing of the Gordon Facial Muscle Weakness Assessment Tool." Journal of School Nursing 26, no. 6 (October 7, 2010): 461–72. http://dx.doi.org/10.1177/1059840510384266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Babaei, Masoud, Ashraf Karbalaee-Nouri, Hassan Rafiey, Mehdi Rassafiani, Hojjatollah Haghgoo, Akbar Biglarian, and Douglas N. Morris. "Occupational Therapy Assessment of Spirituality questionnaire: translation into Persian and psychometric testing." International Journal of Therapy and Rehabilitation 28, no. 5 (May 2, 2021): 1–10. http://dx.doi.org/10.12968/ijtr.2020.0004.

Full text
Abstract:
Background/aims: Occupational therapy is a profession that uses holistic and person-centered approaches that deal with all aspects of daily life. Clients' needs fall into four areas, one of which is spirituality. Occupational therapists should therefore pay attention to this area, but there is little information on occupational therapists' knowledge of spirituality and its use in clinical practice. The aim of this study was to translate the Occupational Therapy Assessment of Spirituality questionnaire into Persian and determine its validity, factor structure and reliability. Methods: This is a psychometric study that was conducted between June and September 2018. The Occupational Therapy Assessment of Spirituality is a self-report, 25-item questionnaire, with self-exploratory scoring, that investigates occupational therapists' views on four factors: spirituality in the scope of practice following its addition in the theoretical framework; formal education and training on spirituality; need for future educational opportunities and training to address spirituality; and awareness of assessments and evaluations in occupational therapy that incorporate clients' spirituality. The International Quality of Life Assessment approach was used for translation. Content validity was assessed with 10 occupational therapists using qualitative content validity, the content validity index and the content validity ratio. Exploratory factor analysis and internal consistency with a sample size of 125 people and a test–retest coefficient with a sample size of 25 people were computed for reliability. Results: Qualitative content validity was confirmed, with content validity index greater than 0.79 and content validity ratio greater than 0.62. During the exploratory factor analysis process, the number of factors was reduced to three and the number of questions was reduced from 21 to 15. Internal consistency was good (0.88).
Test–retest coefficient was 0.96, with a high level of significance (P<0.001). Conclusions The Persian version of the Occupational Therapy Assessment of Spirituality is a reliable and valid questionnaire and can be used among Iranian occupational therapists in different clinical settings.
APA, Harvard, Vancouver, ISO, and other styles
39

Copp, Derek T. "The impact of teacher attitudes and beliefs about large-scale assessment on the use of provincial data for instructional change." education policy analysis archives 24 (October 24, 2016): 109. http://dx.doi.org/10.14507/epaa.24.2522.

Full text
Abstract:
In the quest to improve measured educational outcomes, national governments across the OECD and beyond have instituted large-scale assessment (LSA) policies in their public schools. Controversy almost universally follows the implementation of such testing, related to such topics as: a) the uncertain quality of the tests themselves as psychometric measures; b) the uses to which the data can and should be put; c) the unintended consequences of test-preparation activities and resulting score inflation; and d) the effects of high-stakes tests on students. Debates of this nature naturally involve and impact the attitudes and opinions of teachers related to their collection and use of these data. This paper examines the impact of these attitudes using both the qualitative and quantitative data from a large-scale research study on Canadian provincial assessment. Data were collected from nation-wide teacher surveys as well as interviews with teachers, administrators and district-level staff. Results show that teacher attitudes about these assessments are strongly correlated to classroom-level instructional change. Three attitudinal factors have significant effects on teaching (to) the provincial curricula, yet none significantly affects the use of less constructive instructional strategies also known as ‘teaching to the test.’ Specifically, the belief that large-scale assessment data have more appropriate uses and the belief that these data could lead to school improvement were significant factors in facilitating change. The implications of these findings are profound in that large-scale assessment policy cannot succeed even by its own standards without more buy-in from teaching professionals.
APA, Harvard, Vancouver, ISO, and other styles
40

Wolverton, Cheryl Lynn, Sue Lasiter, Joanne R. Duffy, Michael T. Weaver, and Anna M. McDaniel. "Psychometric testing of the caring assessment tool: Administration (CAT-Adm©)." SAGE Open Medicine 6 (January 1, 2018): 205031211876073. http://dx.doi.org/10.1177/2050312118760739.

Full text
Abstract:
Objectives: The overall purpose of this study was to evaluate the validity and reliability of the Caring Assessment Tool-Administration survey. Three specific aims were to (1) evaluate construct validity of the Caring Assessment Tool-Administration survey by testing the hypothesized eight-factor structure of staff nurses’ perceptions of nurse manager caring behaviors, (2) estimate the internal consistency, and (3) conduct item reduction analysis. Methods: A 94-item Caring Assessment Tool-Administration designed to assess nurse manager caring behaviors appeared in the literature but lacked robust psychometric testing. Using a foundational theory and a cross-sectional descriptive design, the Caring Assessment Tool-Administration was evaluated for reliability and construct validity. Using convenience sampling, 1143 registered nurses were recruited from acute care hospitals in three states located in the Midwestern, Mid-Atlantic, and Southern Regions of the United States. Results: Psychometric testing of the Caring Assessment Tool-Administration was conducted using confirmatory analysis to determine the dimensionality of the construct, nurse manager caring behavior. The null hypothesis was an eight-factor solution fitting the theoretical model being tested. The null hypothesis was rejected because none of the measures examined for goodness of fit indicated the model fit the data. Confirmatory factor analysis did not support the hypothesized structure; however, exploratory factor analysis supported a one-factor solution that was conceptually labeled caring behaviors. To decrease subject burden, the 94-item survey was reduced to 25 items using item reduction analysis including assessing minimum factor loadings of ≥0.60 and evaluating survey item-total correlation and alpha. The Cronbach’s alpha of the new 25-item survey was 0.98. 
Conclusion: The new 25-item Caring Assessment Tool-Administration survey provides hospital administrators, nurse managers, and researchers with a sound, less burdensome instrument to collect valuable information about nurse manager caring behaviors.
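Internal-consistency figures like the Cronbach's alpha of 0.98 reported above come from a standard formula over the respondent-by-item score matrix. A minimal sketch of that computation, using invented toy data rather than the study's data:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item scores.

    scores: list of rows, one per respondent, each a list of k item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    here using population (N-denominator) variances.
    """
    k = len(scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents x 3 items with perfectly consistent responses.
data = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(data), 3))  # prints 1.0
```

Because the items in the toy matrix move in lockstep, alpha reaches its ceiling of 1.0; real surveys such as the 25-item instrument above fall below that, and values near 0.98 can even suggest item redundancy.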
APA, Harvard, Vancouver, ISO, and other styles
41

Fernández-Ballesteros, R., E. E. J. De Bruyn, A. Godoy, L. F. Hornke, J. Ter Laak, C. Vizcarro, K. Westhoff, H. Westmeyer, and J. L. Zaccagnini. "Guidelines for the Assessment Process (GAP): A Proposal for Discussion." European Journal of Psychological Assessment 17, no. 3 (September 2001): 187–200. http://dx.doi.org/10.1027//1015-5759.17.3.187.

Full text
Abstract:
Summary: Current existing or proposed standards and guidelines in the field of psychological assessment are confined to psychological tests and psychological testing. But tests constitute only one category of psychological assessment procedures, and testing is only one of many available strategies or classes of actions in the course of the assessment process. Tests and testing are closely linked to a certain approach to psychological assessment, i.e., the psychometric one. This is one reason why it is relatively easy to formulate and establish standards or guidelines in the case of psychological tests and testing. The much more comprehensive assessment process is an indispensable part of any approach to psychological assessment, even of those that do not use psychometric tests. This makes the formulation of guidelines for the assessment process an ambitious and very difficult enterprise. But it can be done, at least at the level of recommendations that could help the assessor to cope with the complexities and demands of assessment processes in various contexts of psychological assessment. The European Association of Psychological Assessment (EAPA) decided to sponsor the development of Guidelines for the Assessment Process (GAP), setting up a Task Force for this specific purpose. The GAP introduced in this paper are intended as a first proposal to initiate a broad discussion about how to improve the practice of psychological assessment and the education and training of psychological assessors.
APA, Harvard, Vancouver, ISO, and other styles
42

Tracy, Derek K., and Keith J. B. Rix. "Malingering mental disorders: Clinical assessment." BJPsych Advances 23, no. 1 (January 2017): 27–35. http://dx.doi.org/10.1192/apt.bp.116.015958.

Full text
Abstract:
Summary: Malingering is the dishonest and intentional production of symptoms. It can cause considerable difficulty as assessment runs counter to normal practice, and it may expose clinicians to testing medicolegal situations. In this first part of a two-article review, we explore types of psychiatric malingering and their occurrence across a range of common and challenging scenarios, discussing presentations that may help delineate true from feigned illness. A framework is provided for undertaking an assessment where malingering is suspected, including recommendations on clinician approach, the use of collateral information, and self-evaluation of biases. The uses, and limitations, of psychometric tests are discussed, including ‘general’, malingering-specific and ‘symptom validity’ scales.
Learning Objectives:
• Understand the challenges of determining ‘real’ from ‘malingered’ symptomatology across a range of psychiatric conditions
• Have a rational strategy for approaching a clinical assessment where malingering is suspected
• Appreciate the role and limitations of various psychometric tests that can be used in such assessments
APA, Harvard, Vancouver, ISO, and other styles
43

Woodard, John L., and Annalise A. M. Rahman. "The Human-Computer Interface in Computer-Based Concussion Assessment." Journal of Clinical Sport Psychology 6, no. 4 (December 2012): 385–408. http://dx.doi.org/10.1123/jcsp.6.4.385.

Full text
Abstract:
Recent progress in technology has allowed for the development and validation of computer-based adaptations of existing pencil-and-paper neuropsychological measures and comprehensive cognitive test batteries. These computer-based assessments are frequently implemented in the field of clinical sports psychology to evaluate athletes’ functioning postconcussion. These tests provide practical and psychometric advantages over their pencil-and-paper counterparts in this setting; however, these tests also provide clinicians with unique challenges absent in paper-and-pencil testing. The purpose of this article is to present advantages and disadvantages of computer-based testing, generally, as well as considerations for the use of computer-based assessments for the evaluation of concussion among athletes. Furthermore, the paper provides suggestions for further development of computerized assessment of sports concussion given the limitations of the current technology.
APA, Harvard, Vancouver, ISO, and other styles
44

Fergadiotis, Gerasimos, Marianne Casilio, William D. Hula, and Alexander Swiderski. "Computer Adaptive Testing for the Assessment of Anomia Severity." Seminars in Speech and Language 42, no. 03 (June 2021): 180–91. http://dx.doi.org/10.1055/s-0041-1727252.

Full text
Abstract:
Anomia assessment is a fundamental component of clinical practice and research inquiries involving individuals with aphasia, and confrontation naming tasks are among the most commonly used tools for quantifying anomia severity. While currently available confrontation naming tests possess many ideal properties, they are ultimately limited by the overarching psychometric framework they were developed within. Here, we discuss the challenges inherent to confrontation naming tests and present a modern alternative to test development called item response theory (IRT). Key concepts of IRT approaches are reviewed in relation to their relevance to aphasiology, highlighting the ability of IRT to create flexible and efficient tests that yield precise measurements of anomia severity. Empirical evidence from our research group on the application of IRT methods to a commonly used confrontation naming test is discussed, along with future avenues for test development.
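As background for the IRT framework this abstract refers to, the following is a minimal sketch of the two-parameter logistic (2PL) model, the standard IRT building block; the abstract does not specify which IRT model the authors used, and the parameter values below are purely illustrative.

```python
import math

def item_response_prob(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability of a correct
    response given examinee ability theta, item discrimination a,
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A difficult, discriminating naming item separates examinees by ability:
p_low = item_response_prob(theta=-1.0, a=1.5, b=1.0)   # low-ability examinee
p_high = item_response_prob(theta=2.0, a=1.5, b=1.0)   # high-ability examinee
```

The key property for test efficiency is that response probability depends jointly on the person parameter (theta) and the item parameters (a, b), so items can be calibrated once and then scored on a common ability scale.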
APA, Harvard, Vancouver, ISO, and other styles
45

Gul, Sherin, and Dr Saima Ghazal. "Need of Psychometrics for Recruitment and Selection in Organizations: A Qualitative Perspective from a Developing Country." Journal of Professional & Applied Psychology 3, no. 1 (March 31, 2022): 98–107. http://dx.doi.org/10.52053/jpap.v3i1.96.

Full text
Abstract:
The ultimate goal of any organization is to improve its efficacy and performance, which is achievable through people. Within organizations, Human Resource Management (HRM) is a key factor in the recruitment and selection of employees; hence organizations are in competition for acquiring qualified and highly skilled personnel. Applying a qualitative approach, this study explored current practices of personnel recruitment and selection with reference to psychometric testing in Pakistan, in a sample of senior Human Resource managers (N = 6), using a semi-structured interview protocol. Two central themes emerged from the content analysis of the transcribed data: recruitment and selection practices, and the challenges behind effective decision making. Findings indicated that 34% of organizations reported using both psychological tests and interview methods for selection purposes, whereas 67% of organizations have been using personality tests as part of their selection processes. Almost 50% of organizations reported shortcomings in their personnel selection decisions, such as difficulty in assessing employees' attitudes and poor decisiveness among team members. The current research expands the existing literature in this realm; in addition, the themes could be applied to indigenous tool development.
APA, Harvard, Vancouver, ISO, and other styles
46

Di Nuovo, Santo. "Metodi di valutazione dell'abuso sessuale sui minori: è ammissibile, ed è utile, il testing psicometrico?" MALTRATTAMENTO E ABUSO ALL'INFANZIA, no. 2 (June 2009): 33–46. http://dx.doi.org/10.3280/mal2009-002004.

Full text
Abstract:
- The article focuses on the strongly criticized use of psychometric tests in the evaluation of child sexual abuse. The use of inventories and projective techniques such as the Rorschach, thematic and drawing-based tests is useful for assessing the psychological consequences after abuse and for planning a therapeutic intervention. However, the reliability and discriminant validity of the indicators derived from the tests are reduced when the aim of the assessment is to search for signs of a hypothesized abuse. Results of empirical studies on these issues are reported, suggesting that proper use of psychometric tests in juridical settings requires interpretive caution.
Key words: sexual abuse, assessment, psychometric tests, projective techniques.
Parole chiave: abuso sessuale, assessment, test psicometrici, tecniche proiettive.
APA, Harvard, Vancouver, ISO, and other styles
47

Lewis, Keeta DeStefano, and Sandra J. Weiss. "Psychometric testing of an infant risk assessment for prenatal drug exposure." Journal of Pediatric Nursing 18, no. 6 (December 2003): 371–78. http://dx.doi.org/10.1016/s0882-5963(03)00163-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Abdullah, Rebaz Jalil, and Tang Jian. "Psychometric Testing of Shopping Mall Universal Design Assessment Tool (SM-UD)." International Journal of Built Environment and Sustainability 7, no. 2 (April 29, 2020): 67–77. http://dx.doi.org/10.11113/ijbes.v7.n2.497.

Full text
Abstract:
While people with disabilities live in the Kurdish parts of Iraq, only a very limited number of buildings are properly designed to serve them. Considering the challenges that people with disabilities face in public buildings, the United Nations has recommended implementing Universal Design (UD) principles in public buildings in Iraq to ensure that all people can access them regardless of their abilities and backgrounds. Hence, there is a need to gather pertinent data by assessing the adherence of shopping malls in this part of Iraq to Universal Design (UD) principles, given the role of these facilities for the locals. The present study aims to develop a tool for assessing whether the shopping malls in Sulaymaniyah city adhere to Universal Design principles. An analytical tool, abbreviated SM-UD, was developed using a wide range of shopping mall design elements. The tool was tested for reliability and validity through several statistical tests. In addition, the tool was tested for practicality and communicability in six different shopping malls of Sulaymaniyah. The reliability and validity tests indicate that the majority of items showed good to excellent reliability and fair to excellent validity. The results of using the tool show that it is capable of identifying the drawbacks of shopping malls in terms of their universality of design. The proposed tool appears ready to be used by shopping mall managers and researchers.
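The abstract above reports reliability testing of the SM-UD items without naming the statistic used. As illustration, here is a minimal sketch of Cronbach's alpha, one common internal-consistency reliability coefficient for multi-item assessment tools; this is not necessarily the statistic the authors applied, and the sample data are invented.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix:
    rows = respondents (or rated sites), columns = items."""
    n_items = len(scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # variance of each item across respondents
    item_vars = [variance([row[j] for row in scores]) for j in range(n_items)]
    # variance of respondents' total scores
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1.0 - sum(item_vars) / total_var)

# Perfectly consistent items yield alpha = 1.0 (illustrative data):
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
alpha = cronbach_alpha(scores)
```

Values above roughly 0.7 are conventionally read as acceptable reliability, which matches the "good to excellent" characterization the abstract reports.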
APA, Harvard, Vancouver, ISO, and other styles
49

Peng, Ling, and Adam Finn. "How Cloudy a Crystal Ball: A Psychometric Assessment of Concept Testing." Journal of Product Innovation Management 27, no. 2 (March 2010): 238–52. http://dx.doi.org/10.1111/j.1540-5885.2010.00712.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Choi, Younyoung, and Cayce McClenen. "Development of Adaptive Formative Assessment System Using Computerized Adaptive Testing and Dynamic Bayesian Networks." Applied Sciences 10, no. 22 (November 19, 2020): 8196. http://dx.doi.org/10.3390/app10228196.

Full text
Abstract:
Online formative assessments in e-learning systems are increasingly of interest in the field of education. While substantial research into the model and item design aspects of formative assessment has been conducted, few software systems embodied with a psychometric model have been proposed to allow us to adaptively implement formative assessments. This study aimed to develop an adaptive formative assessment system, called computerized formative adaptive testing (CAFT), by using artificial intelligence methods based on computerized adaptive testing (CAT) and Bayesian networks as learning analytics. CAFT can adaptively administer personalized formative assessment to a learner by dynamically selecting appropriate items and tests aligned with the learner's ability. Forty items in an item bank were evaluated by 410 learners; moreover, 1000 learners were recruited for a simulation study and 120 learners were enrolled to evaluate the efficiency, validity, and reliability of CAFT in an application study. The results showed that, through CAFT, learners can adaptively take items and tests in order to receive personalized diagnostic feedback about their learning progression. Consequently, this study highlights that a learning management system which integrates CAT as an artificially intelligent component is an efficient educational evaluation tool for a remote personalized learning service.
APA, Harvard, Vancouver, ISO, and other styles