Academic literature on the topic 'Test validity'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Test validity.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
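As a purely illustrative sketch (not the site's actual implementation; the record fields and helper functions below are hypothetical), generating a reference in different citation styles from one metadata record amounts to filling style-specific templates:

```python
# Hypothetical example: one journal-article metadata record rendered in two styles.
# Field names ("authors", "journal", ...) are assumptions, not a real schema.

def format_apa(rec: dict) -> str:
    """APA-like pattern: Author(s) (Year). Title. Journal, Volume(Issue), Pages."""
    authors = ", ".join(rec["authors"])
    return (f'{authors} ({rec["year"]}). {rec["title"]}. '
            f'{rec["journal"]}, {rec["volume"]}({rec["issue"]}), {rec["pages"]}.')

def format_mla(rec: dict) -> str:
    """MLA-like pattern: Author(s). "Title." Journal, vol. V, no. N, Year, pp. Pages."""
    authors = ", ".join(rec["authors"])
    return (f'{authors}. "{rec["title"]}." {rec["journal"]}, '
            f'vol. {rec["volume"]}, no. {rec["issue"]}, {rec["year"]}, pp. {rec["pages"]}.')

# A deliberately fictional record, used only to show the two templates side by side.
record = {
    "authors": ["Doe, Jane"],
    "year": 2020,
    "title": "A Hypothetical Study",
    "journal": "Journal of Examples",
    "volume": 12,
    "issue": 3,
    "pages": "45-67",
}

print(format_apa(record))
print(format_mla(record))
```

Real style guides carry many more rules (multiple authors, missing fields, DOIs, italics), which is why such generators work from full metadata rather than from the formatted string of another style.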

Journal articles on the topic "Test validity"

1

Stout, William, Howard Wainer, and Henry I. Braun. "Test Validity." Journal of the American Statistical Association 85, no. 411 (September 1990): 901. http://dx.doi.org/10.2307/2290036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kane, Michael T., Howard Wainer, and Henry I. Braun. "Test Validity." Journal of Educational Statistics 14, no. 3 (1989): 291. http://dx.doi.org/10.2307/1165021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shepard, Lorrie A. "Evaluating Test Validity." Review of Research in Education 19 (1993): 405. http://dx.doi.org/10.2307/1167347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Knoll, Ross W., David P. Valentiner, and Jacob B. Holzman. "Development and Initial Test of the Safety Behaviors in Test Anxiety Questionnaire: Superstitious Behavior, Reassurance Seeking, Test Anxiety, and Test Performance." Assessment 26, no. 2 (December 29, 2016): 271–80. http://dx.doi.org/10.1177/1073191116686685.

Full text
Abstract:
The purpose of the current studies is to identify safety behavior dimensions relevant to test anxiety, to develop a questionnaire to assess those dimensions, and to examine the validity of that questionnaire. Items were generated from interviews with college students (N = 24). Another sample (N = 301) completed an initial 33-item measure. Another sample (N = 151) completed the final 19-item version of the Safety Behaviors in Test Anxiety Questionnaire and provided access to their academic records. Interviews and expert evaluations were used to select items for the initial pool. An examination of item distributions and exploratory factor analysis were used to identify dimensions and reduce the item pool. Confirmatory factor analyses were used to validate the factorial structure. Correlational analyses were used to examine criterion validity of the final measure. The Safety Behaviors in Test Anxiety Questionnaire consists of a 9-item “Superstitious Behaviors” scale and a 10-item “Reassurance Seeking” scale. The measure shows good content validity, factorial validity, internal consistency, and convergent and discriminant validity. Only the Reassurance Seeking scale showed good incremental criterion validity. Overall, these findings suggest that reassurance seeking may be a neglected target for interventions that might increase performance on high-stakes tests.
APA, Harvard, Vancouver, ISO, and other styles
5

Doss, Robert C., Gordon J. Chelune, and Richard I. Naugle. "Victoria Symptom Validity Test." Journal of Forensic Neuropsychology 1, no. 4 (December 1999): 5–20. http://dx.doi.org/10.1300/j151v01n04_02.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sweet, Jerry J., and John H. King. "Category Test Validity Indicators." Journal of Forensic Neuropsychology 3, no. 1-2 (February 18, 2003): 241–74. http://dx.doi.org/10.1300/j151v03n01_04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Waldman, David A., and Bruce J. Avolio. "Homogeneity of test validity." Journal of Applied Psychology 74, no. 2 (April 1989): 371–74. http://dx.doi.org/10.1037/0021-9010.74.2.371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sudaryono, Untung Rahardja, Qurotul Aini, Yuliana Isma Graha, and Ninda Lutfiani. "Validity of Test Instruments." Journal of Physics: Conference Series 1364 (December 2019): 012050. http://dx.doi.org/10.1088/1742-6596/1364/1/012050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kane, Michael T. "Book Reviews: Test Validity." Journal of Educational Statistics 14, no. 3 (September 1989): 291–96. http://dx.doi.org/10.3102/10769986014003291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Colliver, Jerry A., Melinda J. Conlee, and Steven J. Verhulst. "From test validity to construct validity … and back?" Medical Education 46, no. 4 (March 16, 2012): 366–71. http://dx.doi.org/10.1111/j.1365-2923.2011.04194.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Test validity"

1

Woolard, Christopher. "Moderation of Personality Test Validity." TopSCHOLAR®, 1998. http://digitalcommons.wku.edu/theses/326.

Full text
Abstract:
Personality testing can be an adequate instrument for prediction of future job performance. However, the predictive ability of these tests has been only moderate at best. This researcher attempted to determine if feedback would help improve the predictive ability of personality tests. The results indicated that feedback did not moderate the relationship between the personality dimensions and job performance for any of the personality constructs except Openness to Experience. This researcher also attempted to replicate the findings of the Barrick and Mount (1993) study, which found that autonomy moderated the relationship between Conscientiousness, Extraversion, Agreeableness, and job performance. This researcher found support for Barrick and Mount's findings for Extraversion and Conscientiousness, but not for Agreeableness.
APA, Harvard, Vancouver, ISO, and other styles
2

Sargsyan, Alex. "Test Validity and Statistical Analysis." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/8472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Roivainen, E. (Eka). "Validity in psychological measurement: an investigation of test norms." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526209432.

Full text
Abstract:
A psychological test may be defined as an objective and standardized measure of a sample of behaviour. The interpretation of test results is usually based on comparing an individual’s performance to norms based on a representative sample of the population. The present study examined the norms of popular adult tests. The validity of the Wartegg drawing test (WZT) was studied using two rating scales, the Toronto Alexithymia Scale and the Beck Depression Inventory, as criterion tests. Weak to moderate correlations were found. It is concluded that the WZT has some validity in the assessment of alexithymia. Efforts to develop a psychometrically valid and reliable method of interpreting the WZT should be continued. Cross-national and historical analyses of the norms of Wechsler’s adult intelligence scale (WAIS) were performed. The results show that the Finnish WAIS III test norms are distorted in the younger age groups. Significant cross-national and cross-generational differences in relative subtest scores (test profiles) were also observed. Differences in general intelligence cannot explain such variations; educational and cultural factors probably underlie the observed differences. It is suggested that the concept of a national IQ profile is useful for cross-national test validation studies. The validity of a validity scale, the Chapman Infrequency Scale, was studied in the context of a survey study. Results showed that careless responding is significantly more frequent among psychiatric patients relative to healthy respondents. The common procedure of excluding careless responders from final samples may affect the results of survey studies targeting individuals with psychiatric symptoms. Cut-off scores for exclusion should be flexible and chosen according to the demographic and health characteristics of the sample. In conclusion, the results of this study underscore the need for up-to-date and representative test norms for valid test interpretation.
Summary: Psychological tests can be understood as samples of the examinee's behaviour. Test results are usually interpreted by comparing them to a typical or average result, that is, to test norms. This doctoral study examines the validity of the norms of popular adult tests. The validity of the Wartegg drawing test as a measure of alexithymia and depression was studied using two questionnaires, the Toronto Alexithymia Scale and the Beck Depression Inventory, as comparison criteria. The measured correlations were rather low. The study concluded that the Wartegg test may be a useful method for detecting alexithymia and that the development of empirically grounded interpretation methods should continue. The study also examined differences between the national subtest norms of different versions of the Wechsler Adult Intelligence Scale (WAIS) and differences between age cohorts. The results showed that the Finnish WAIS III test norms are skewed for the younger age groups. Significant differences in the ratios of subtest means, that is, in test profiles, were found between countries and age cohorts. These differences cannot be explained by a general factor of intelligence; they are probably due to educational and cultural factors. Some of the differences in national test profiles appear to be stable in nature, and this information can be used when assessing the validity of test norms. The validity of the Chapman Infrequency Scale (CIS) was studied using survey data from the Northern Finland Birth Cohort 1966. Persons with psychiatric symptoms scored higher than healthy respondents. It was concluded that response-style scales may screen psychiatric patients out of study samples too readily, which can distort study results. The cut-off score should be flexible, and the characteristics of the sample should be taken into account when setting it.
The studies show that reliable interpretation of test results requires up-to-date test norms based on representative samples.
APA, Harvard, Vancouver, ISO, and other styles
4

Katalayi, Godefroid Bantumbandi. "The DR Congo English state examination: some fundamental validity issues." Thesis, University of the Western Cape, 2011. http://hdl.handle.net/11394/1682.

Full text
Abstract:
Magister Educationis - MEd
The test context is of paramount importance in language testing as it provides an understanding of the kind of tasks to be included in the test, how these tasks are executed by the test takers, and how they can be efficiently administered. The objective of this study was to investigate the extent to which the context of the DR Congo English state examination (ESE) is valid and to offer useful suggestions that are likely to improve its validity. Two basic theories, modern validity theory and schema theory, informed this study. Weir's (2005) socio-cognitive framework was used to build the validity argument for the evaluation of the English state examination. A mixed method was used, where the research design combined qualitative and quantitative data during the collection and analysis stages. The content document analysis method was used to examine the content of the different state examination papers so as to identify the main features of the test, and descriptive statistics were used to quantify observations identified in the state examination papers and to evaluate the context validity of the ESE. Three techniques were used to collect the research data: the questionnaire, the test, and the interview. Three main findings of this study were reported: (1) the conditions under which the ESE tasks are performed, and the relevance of these tasks to the test domain and characteristics, still fall far short of contributing to the quality of the evaluation of high school finalist students; (2) the extent to which the ESE includes tasks that take into consideration the nature of information in the text, as well as the knowledge required for completing the task, is globally good; (3) the conditions under which the test takes place are poor, and these conditions affect the validity of test scores.
The study recommends that test developers approximate test tasks to those students have been exposed to in classroom situations and those they are likely to encounter in real life. It also recommends that all the people involved in the administration of the test adhere to high ethical standards.
APA, Harvard, Vancouver, ISO, and other styles
5

Kyei-Blankson, Lydia S. "Predictive Validity, Differential Validity, and Differential Prediction of the Subtests of the Medical College Admission Test." Ohio University / OhioLINK, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1125524238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

DeKort, Cynthia Dianne. "Validity measures of the Communication Attitude Test." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq22590.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Storey, Peter. "Investigating construct validity through test-taker introspection." Thesis, University of Reading, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Buddin, William Howard Jr. "The Validity of the Medical Symptom Validity Test in a Mixed Clinical Population." NSUWorks, 2010. http://nsuworks.nova.edu/cps_stuetd/15.

Full text
Abstract:
Clinicians have a small number of measurement instruments available to them to assist in the identification of suboptimal effort during an evaluation, which is largely agreed upon as a necessary component in the identification of malingering. Green's Medical Symptom Validity Test is a forced-choice test that was created to assist in the identification of suboptimal effort. The goal of this study was to provide clinical evidence for the validity of the Medical Symptom Validity Test using a large, archival clinical sample. The Test of Memory Malingering and the Medical Symptom Validity Test were compared to assess their level of agreement, and were found to agree in their identification of good or poor effort in approximately 75% of cases, which was lower than expected. Scores from the Medical Symptom Validity Test's effort subtests were tested for differences between adult litigants and clinically referred adults. Scores between these groups were different, and it was found that adult litigants obtained scores that were statistically significantly lower than those in the clinical group. Additionally, children were able to obtain results on the Medical Symptom Validity Test subtests that were equivalent to those of adults. Finally, the Wechsler Memory Scales - Third Edition core memory subtests were assessed for their ability to predict outcomes on the Medical Symptom Validity Test Delayed Recognition subtest. This analysis of the adult litigants and adult clinical groups revealed that, collectively, the predictors explained approximately one-third of the variance in scores on the Delayed Recognition subtest. Outcomes from these hypotheses indicated that the Medical Symptom Validity Test was measuring a construct similar to that of the Test of Memory Malingering. Due to the lower than expected level of agreement between the tests, it is recommended that clinicians use more than one measure of effort, which should increase the reliability of poor effort identification.
Due to their lower scores on the effort subtests, adults similar to those in the adult litigant group can be expected to perform more poorly than those who are clinically referred. Because effort subtest scores were not affected by cognitive or developmental domains, clinically referred children or adult examinees can be expected to obtain scores above cutoffs, regardless of mean age, IQ, or education. Additionally, an examinee's memory will not impact outcome scores on the effort subtests of the Medical Symptom Validity Test. Further research is needed to understand the Medical Symptom Validity Test's ability to accurately identify poor effort with minimal false positives, examine the impact of reading ability on effort subtests, and compare simulators' outcomes to those of a clinical population.
APA, Harvard, Vancouver, ISO, and other styles
9

Žujović, Alisa Murphy. "Predictive Validity of Florida’s Postsecondary Education Readiness Test." Scholar Commons, 2018. http://scholarcommons.usf.edu/etd/7253.

Full text
Abstract:
The role of the community college is constantly evolving. At its inception in the early 1900s, the community college’s broad focus was to provide quality, affordable education to the members of the community the college serves. Today, that focus remains the same, but it has also morphed into one that meets the specific needs of its students. One of these needs, a critical issue for community colleges, relates to developmental education. The assessment of developmental education has been a contentious subject among higher education institutions. Defining college readiness, methods describing how to measure it, and instruments with which to measure it have all been issues that higher education researchers have debated. Using multilevel modeling, this study evaluated a customized developmental education assessment measure in a single community college in Florida and its ability to correctly place students in appropriate courses. The Postsecondary Education Readiness Test (PERT) was implemented in Florida in 2010 as the primary gauge of student readiness based on competencies identified by Florida’s high school, college, and university faculty. PERT assesses these competencies in the areas of mathematics, reading, and writing. The courses of interest in this study were four math courses offered in community colleges across Florida: Developmental Math I (MAT 0018), Developmental Math II (MAT 0028), Intermediate Algebra (MAT 1033), and College Algebra (MAC 1105). The sample for Developmental Math I consisted of 727 students in 64 sections; for Developmental Math II, 900 students in 197 sections; for Intermediate Algebra, 713 students in 328 sections; and for College Algebra, 270 students in 204 sections. Five models were formulated to investigate the predictive validity of the PERT with final grades in the aforementioned math courses. These models also analyzed the relationships with student and course level predictors.
Student level predictors included whether the student had first-time-in-college status, student race/ethnicity, gender, student enrollment status (part-time or full-time), age, PERT score, and final grade in the math course. Course level variables consisted of the employment status of the instructor (part-time or full-time), the number of years the instructor had been employed, the time of day of the course (day or evening), and the course delivery method (on campus or online). Results of this study indicated that the PERT score was a significant predictor for Developmental Math I, Developmental Math II, and College Algebra, showing a positive relationship with final grade in each of these courses. Four of the research questions inquired as to whether interaction effects between the PERT score and race, and between the PERT score and gender, existed. No interactions were significant, which indicated that no differential predictive validity was evident. The remaining two research questions examined the level of variance associated with the student and course level variables. For Developmental Math I, Black students had lower final grades than White students, and older students performed better than younger students. In Developmental Math II, female students had higher final grades than males, and older students had higher grades. For the credit-level courses, in Intermediate Algebra, full-time students had higher final grades than part-time students, and once again, older students exhibited higher grades. In College Algebra, for the final model, only the PERT score was significant. No other student or course level variables were found to be significant predictors of final grade. These results are only a preliminary view of how PERT test scores relate to final math grades in only one institution in Florida.
Statewide standard setting procedures are necessary in order to properly assess whether the cut scores for the PERT are appropriate, and to determine if this test properly measures the construct it intends, in order to verify the reliability of the test items and the validity of the test itself.
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Kaung-Hsung. "Validity studies of the Heinrich Spatial Visualization Test /." Connect to resource, 1995. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1244142270.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Test validity"

1

Wainer, Howard, Henry I. Braun, and Educational Testing Service, eds. Test validity. Hillsdale, NJ: L. Erlbaum Associates, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Boldt, Robert F. Generalization of SAT validity across colleges. New York: College Entrance Examination Board, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Roussos, Louis A. LSAT item-type validity study. Newtown, PA: Law School Admission Council, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Robbins, Douglas E., and Robert F. Sawicki, eds. Reliability and validity in neuropsychological assessment. New York: Plenum Press, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Franzen, Michael D. Reliability and validity in neuropsychological assessment. 2nd ed. New York: Kluwer Academic/Plenum Publishers, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Boldt, Robert F. The validity of various methods of treating multiple SAT scores. New York: College Entrance Examination Board, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shepard, Lorrie A., ed. Methods for identifying biased test items. Thousand Oaks, Calif.: Sage Publications, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Statistical significance: Rationale, validity, and utility. London: Sage Publications, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Slick, Daniel, ed. VSVT, Victoria Symptom Validity Test: Version 1.0, professional manual. Odessa, FL: Psychological Assessment Resources, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Test validity"

1

Franzen, Michael. "Test Validity." In Encyclopedia of Clinical Neuropsychology, 3436–37. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-57111-9_2242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Franzen, Michael. "Test Validity." In Encyclopedia of Clinical Neuropsychology, 1–2. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56782-2_2242-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sireci, Stephen G., and Tia Sukin. "Test validity." In APA handbook of testing and assessment in psychology, Vol. 1: Test theory and testing and assessment in industrial and organizational psychology., 61–84. Washington: American Psychological Association, 2013. http://dx.doi.org/10.1037/14047-004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Franzen, Michael D. "Test Validity." In Encyclopedia of Clinical Neuropsychology, 2497–98. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-0-387-79948-3_2242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hart, Eric S. "Victoria Symptom Validity Test." In Encyclopedia of Clinical Neuropsychology, 3588–91. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-57111-9_2227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hart, Eric S. "Victoria Symptom Validity Test." In Encyclopedia of Clinical Neuropsychology, 1–3. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56782-2_2227-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Prasad, Kameshwar. "Diagnostic Test: Validity Appraisal." In Fundamentals of Evidence Based Medicine, 91–97. New Delhi: Springer India, 2013. http://dx.doi.org/10.1007/978-81-322-0831-0_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hart, Eric S. "Victoria Symptom Validity Test." In Encyclopedia of Clinical Neuropsychology, 2613–15. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-0-387-79948-3_2227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Osterlind, Steven J. "Determining the Content for Items: Validity." In Constructing Test Items, 63–113. Dordrecht: Springer Netherlands, 1989. http://dx.doi.org/10.1007/978-94-009-1071-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Weir, Cyril J. "The Nature of Test Validity." In Language Testing and Validation, 11–16. London: Palgrave Macmillan UK, 2005. http://dx.doi.org/10.1057/9780230514577_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Test validity"

1

"Proceedings International Test Conference 1996. Test and Design Validity." In Proceedings International Test Conference 1996. Test and Design Validity. IEEE, 1996. http://dx.doi.org/10.1109/test.1996.556937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

"Author index." In Proceedings International Test Conference 1996. Test and Design Validity. IEEE, 1996. http://dx.doi.org/10.1109/test.1996.557216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lunardini, Francesca, Matteo Luperto, Katia Daniele, Nicola Basilico, Sarah Damanti, Carlo Abbate, Daniela Mari, Matteo Cesari, Simona Ferrante, and Nunzio Alberto Borghese. "Validity of digital Trail Making Test and Bells Test in elderlies." In 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI). IEEE, 2019. http://dx.doi.org/10.1109/bhi.2019.8834513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

F, Mona Fiametta, Firas J, Hadi Sartono, and Dudung Hasanudin Cholil. "Validity and Reliability Test of Construction of Power Legs Test Measurement." In Proceedings of the 3rd International Conference on Sport Science, Health, and Physical Education (ICSSHPE 2018). Paris, France: Atlantis Press, 2019. http://dx.doi.org/10.2991/icsshpe-18.2019.102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Priegnitz, Christina, Marcel Treml, Kerstin Richter, and Winfried J. Randerath. "Validity of 50m walking test in obese patients." In ERS International Congress 2016 abstracts. European Respiratory Society, 2016. http://dx.doi.org/10.1183/13993003.congress-2016.pa2286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Prasetyo, Heru, Siswantoyo Siswantoyo, and Yudik Prasetyo. "Validity and Reliability of Holding Bow Digitec Test." In Conference on Interdisciplinary Approach in Sports in conjunction with the 4th Yogyakarta International Seminar on Health, Physical Education, and Sport Science (COIS-YISHPESS 2021). Paris, France: Atlantis Press, 2022. http://dx.doi.org/10.2991/ahsr.k.220106.036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Putra, Muhamar Kodafi, and Anton Komaini. "Validity and Reliability of Pencak Silat Straight Kick Test Instrument (ANQO Test)." In 1st International Conference on Sport Sciences, Health and Tourism (ICSSHT 2019). Paris, France: Atlantis Press, 2021. http://dx.doi.org/10.2991/ahsr.k.210130.053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Papadakis, Mike, Christopher Henard, Mark Harman, Yue Jia, and Yves Le Traon. "Threats to the validity of mutation-based test assessment." In ISSTA '16: International Symposium on Software Testing and Analysis. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2931037.2931040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sekiguchi, T., S. Amakawa, N. Ishihara, and K. Masu. "On the validity of bisection-based thru-only de-embedding." In 2010 International Conference on Microelectronic Test Structures (ICMTS). IEEE, 2010. http://dx.doi.org/10.1109/icmts.2010.5466857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fichte, Lars Ole, Sven Knoth, Stefan Potthast, Frank Sabath, and Marcus Stiemer. "On the validity and statistical significance of HEMP test standards." In 2015 IEEE International Symposium on Electromagnetic Compatibility - EMC 2015. IEEE, 2015. http://dx.doi.org/10.1109/isemc.2015.7256278.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Test validity"

1

Kitagawa, Toru. A Test for Instrument Validity. IFS, August 2014. http://dx.doi.org/10.1920/wp.cem.2014.3414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vickers, Ross R., Jr. Modeling Run Test Validity: A Meta-Analytic Approach. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada421244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Honts, Charles R., Susan Amato, and Anne Gordon. Validity of Outside-Issue Questions in the Control Question Test. Fort Belvoir, VA: Defense Technical Information Center, April 2000. http://dx.doi.org/10.21236/ada376666.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kitagawa, Toru. A bootstrap test for instrument validity in heterogeneous treatment effect models. Institute for Fiscal Studies, October 2013. http://dx.doi.org/10.1920/wp.cem.2013.5313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Honts, Charles R., and Racheal Reavy. Effects of Comparison Question Type and Between Test Stimulation on the Validity of Comparison Question Test. Fort Belvoir, VA: Defense Technical Information Center, September 2009. http://dx.doi.org/10.21236/ada505303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

English, Christina. The Test of Written English: a statistical analysis of validity and reliability. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Roldán-González, Elizabeth, Carolina Robledo-Castro, Piedad Rocío Lerma-Castaño, and María Luisa Hurtado-Otero. Validity and reliability of the Wolf Motor Function Test -WMFT in patients with Cerebrovascular disease: Scoping review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, October 2022. http://dx.doi.org/10.37766/inplasy2022.10.0044.

Full text
Abstract:
Review question / Objective: This scoping review aimed to compile the studies that have examined the validity and reliability of the different versions of the Wolf Motor Function Test - WMFT in patients with Cerebrovascular disease. Background: Numerous investigations in rehabilitation have used the WMFT as an instrument for the primary measurement of the results; however, to date, there are no known reviews that have compiled the reliability and validity of the wolf test in its different versions, which is considered of vital importance and constitutes critical information for decision making in the process of evaluation and follow-up of patients with stroke in clinical, academic and research environments.
APA, Harvard, Vancouver, ISO, and other styles
8

Hartke, Darrell D., and Lawrence O. Short. Validity of the Academic Aptitude Composite of the Air Force Officer Qualifying Test (AFOQT). Fort Belvoir, VA: Defense Technical Information Center, April 1988. http://dx.doi.org/10.21236/ada194753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

McCrea, Michael. An Independent, Prospective, Head to Head Study of the Reliability and Validity of Neurocognitive Test Batteries for the Assessment of Mild Traumatic Brain Injury. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada573016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Serban, Christa, Diana Lungeanu, Sergiu-David Bota, Claudia C. Cotca, Meda Lavinia Negrutiu, Virgil-Florin Duma, Cosmin Sinescu, and Emanuela Lidia Craciunescu. Emerging Technologies for Dentin Caries Detection. A Systematic Review and Meta-Analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, January 2022. http://dx.doi.org/10.37766/inplasy2022.1.0097.

Full text
Abstract:
Review question / Objective: What is the diagnostic test accuracy of emerging technologies for non-cavitated dentin caries detection, considering in vivo and in vitro studies that reported results regarding the occlusal and proximal surfaces, over the last 10 years? Information sources: Electronic databases of Medline, Embase, and PubMed were searched for articles published within the last decade (January 2011 to August 2021). Medline and Embase databases were searched concomitantly using the Ovid interface. To find articles potentially missed by the search, Google Scholar was queried for diagnostic validity studies pertaining to technologies for dentin caries diagnosis.
APA, Harvard, Vancouver, ISO, and other styles