Dissertations / Theses on the topic 'Validity'

1

Morris, David Charles. "Comparing job component validity to observed validity across jobs." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2046.

Abstract:
Five hundred and eighteen observed validity coefficients, based on correlations between commercially available test data and supervisory ratings of overall job performance, were collected for 89 different job titles. Using Dictionary of Occupational Titles codes, Job Component Validity (JCV) estimates based on similar job titles residing in the PAQ Service database were collected and averaged across the General Aptitude Test.
2

James, Megan. "The validity endeavour." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4162.

Abstract:
Thesis (MPhil (Sociology and Social Anthropology))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: Qualitative and quantitative research implies different meta-theoretical approaches to knowledge production. The former maintains a constructivist and interpretative perspective, as opposed to the latter, which exists within a realist and even positivist paradigm. Within the field of research methodology, the dominant conceptualisation of validity is based on a positivist discourse, which suggests that (social) scientific research should strive to attain an ultimate truth. This understanding of validity is difficult to achieve within a research paradigm that values the idiosyncratic world views of the participants under investigation. The introduction of CAQDAS (Computer Assisted Qualitative Data Analysis Software), however, brought with it the hope that its application would confer upon qualitative research the rigour associated with validity in a mainly positivist interpretation of the research process. The ultimate goal of this thesis is to determine whether CAQDAS can make a significant contribution to efforts aimed at validating qualitative research. The research design employed in the present study is that of a descriptive content analysis, focussing on scientific articles that not only report qualitative studies, but also make explicit reference to the use of CAQDAS, and describe validation techniques applied during the research process. Purposive sampling was applied to select 108 articles, published from 1996 to 2009, that meet the sampling criteria and that were identified through online searches of various bibliographic databases and search engines. The study investigates three predominant research questions concerned with the following: (1) the most commonly used software programmes; (2) trends in CAQDAS use over time; as well as (3) the validation techniques reported in examined scientific articles, distinguishing between techniques that are performed with and without the use of CAQDAS. 
With regard to the first two research questions, it was found that the three most commonly used software programmes are QSR N programmes (including NUD.IST, NVivo, N4, N5 and N6), followed by Atlas.ti and MAXqda (including the earlier version winMAX), and that there has been a general increase over the past 13 years (1996-2009) in the number of qualitative research articles reporting CAQDAS use. The exploration of validation techniques utilised in qualitative research, as reported in the examined scientific articles, demonstrated that the techniques are in most cases performed manually. Although CAQDAS offers many benefits, the predominant validation techniques reported can be, and still are, performed without CAQDAS. Techniques that would have been impossible without CAQDAS are based on the data display features of CAQDAS, as well as on the accuracy and consistency offered by CAQDAS in the execution of certain actions. The findings generated by this study seem to support the hypothesis that CAQDAS per se does not enhance validity, since it is predominantly utilised as merely a research tool.
3

Westrick, Paul Andrew. "Validity decay versus validity stability in stem and non-stem fields." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/3402.

Abstract:
The main purpose of this study was to determine if validity coefficients for ACT scores, both composite scores and subject area test scores, and high school grade point average (HSGPA) decayed or held stable over eight semesters of undergraduate study in science, technology, engineering, and mathematics (STEM) fields at civilian four-year institutions, and whether the decay patterns differed from those found in non-STEM fields at the same institutions. Data from 62,212 students at 26 four-year institutions were analyzed in a hierarchical meta-analysis in which student major category (SMC), gender, and admission selectivity levels were considered potential moderators. Four sets of analyses were run. The first was by the three SMCs: STEM-Quantitative majors, STEM-Biological majors, and non-STEM majors. The second was SMC by gender. The third was SMC by admission selectivity level. The fourth was SMC by gender by admission selectivity level. The results across all four analyses indicated that ACT score validity coefficients for STEM-Quantitative and STEM-Biological majors decayed less over eight semesters than the validity coefficients for non-STEM majors did. This was true for the uncorrected and corrected validity coefficients. For the HSGPA validity coefficients, this was true for the corrected validity coefficients. Non-STEM majors had very similar validity decay patterns regardless of the level of analysis. However, four of the eight STEM subgroups in the final set of analyses had minimal amounts of decay, and in some instances small amounts of validity growth.
4

Thomas, Adrain L. "Accounting for correlated artifacts and true validity in validity generalization procedures : an extension of model 1 for assessing validity generalization." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/28967.

5

Tng, Thiam-huat. "Validity of cephalometric landmarks." HKU Theses Online (HKUTO), 1991. http://sunzi.lib.hku.hk/HKUTO/record/B38628399.

6

Williams, S. G. "Meaning, validity and necessity." Thesis, University of Oxford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.354816.

7

湯添發 and Thiam-huat Tng. "Validity of cephalometric landmarks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B38628399.

8

Kampakoglou, Kyriaki. "Neuromarketing : Validity and Morality." Thesis, Högskolan i Borås, Institutionen Textilhögskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-16794.

Abstract:
Neuromarketing, the so-called new way of doing marketing that combines neuroscience findings collected for and used in the marketing domain, has attracted a great deal of support but a great deal of criticism as well. The research question focuses on whether neuromarketing has been an approach to explaining and defining human behavior, or whether it has been transformed into an unethical manipulation of consumers in order to discover the much-wanted “buy button” in consumers’ brains. Additionally, issues of the validity of neuromarketing research are examined, as well as the purpose of conducting such research and the use of its findings.
Program: Master in Fashion Management with specialisation in Fashion Marketing and Retailing
9

Lu, Yi, and John Potter. "Object validity and effects." Thesis, University of New South Wales, 2008. http://unsworks.unsw.edu.au/fapi/datastream/unsworks:1679/SOURCE02?view=true.

Abstract:

The object-oriented community is paying increasing attention to techniques for object instance encapsulation and alias protection. Formal techniques for modular verification of programs at the level of objects are being developed hand in hand with type systems and static analysis techniques for restricting the structure of runtime object graphs. Ownership type systems have provided a sound basis for such structural restrictions by being able to statically represent an extensible object ownership hierarchy. However, such structural restrictions may potentially have limitations on cases when more flexible reference structures are desired. In this thesis, we present a different encapsulation technique, called Effect Encapsulation, which confines side effects rather than object references. With relaxed restriction on reference structure, it is able to express certain common object-oriented patterns which cannot be expressed in Ownership Types. From this basis, we also describe a model of Object Validity --- a framework for reasoning about object invariants. Such a framework can track the effect and dependency of method calls on object invariants within an ownership-based type system, even in the presence of re-entrant calls. Moreover, we present an access control technique for protecting object instances. Combined with context variance, the resulting type system allows for a more flexible and useful access control policy, hence is capable of expressing more object-oriented patterns.
10

Bester, Kyle John. "The external validity of South African substance use contextual risk instrument: predictive validity." University of the Western Cape, 2017. http://hdl.handle.net/11394/5689.

Abstract:
Magister Psychologiae - MPsych
The purpose of the present study was to gather further external validity evidence towards the validity argument for an instrument designed to measure individual and contextual factors associated with adolescent substance use in low socio-economic status communities in the Western Cape, South Africa. The South African Substance Use Contextual Risk Instrument (SASUCRI) measures adolescents' subjective experiences of their own psycho-social functioning and that of their communities. The present study uses secondary data analysis in order to further evaluate its external validity. Both content and structural evidence for the instrument have been gathered in the larger study in which the present study is located. Validity theory was used as the theoretical framework for gathering the different types of evidence in support of the validity argument for this instrument. The study employed non-probability purposive sampling to select twenty-six schools from three education districts, for a total sample of N = 1959. English and Afrikaans versions of the instrument were administered to English- and Afrikaans home language, school-going adolescents aged 12 to 21 years. All ethical standards were maintained throughout the research process. External evidence procedures were conducted using Discriminant Function Analysis (DFA) to evaluate the extent to which the instrument could discriminate between substance-using and non-using adolescents. The DFA revealed that nine SASUCRI sub-scale totals act as significant predictors of adolescent substance use, supporting the predictive validity of those sub-scales.
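The discriminant function analysis described in this abstract can be illustrated with a minimal two-group Fisher discriminant in Python. This is a generic sketch with simulated data, not the SASUCRI analysis; the function name, group sizes, and scores are all hypothetical:

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Two-group Fisher discriminant (the simplest DFA case).

    Weights solve Sw w = (mean1 - mean0), where Sw is the pooled
    within-group scatter; cases are classified by projecting onto w
    and thresholding at the midpoint of the projected group means.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-group scatter matrix
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    c = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())  # decision threshold
    return w, c

# Hypothetical sub-scale scores: group 0 = non-users, group 1 = users,
# with group means separated by one standard deviation per sub-scale.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 3))
X1 = rng.normal(1.0, 1.0, size=(50, 3))
w, c = fisher_discriminant(X0, X1)

# In-sample classification accuracy of the discriminant function
labels = np.r_[np.zeros(50), np.ones(50)]
acc = ((np.vstack([X0, X1]) @ w > c) == labels).mean()
```

With well-separated group means the discriminant recovers most group memberships; a real analysis would of course evaluate this on held-out cases.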
11

Buddin, William Howard Jr. "The Validity of the Medical Symptom Validity Test in a Mixed Clinical Population." NSUWorks, 2010. http://nsuworks.nova.edu/cps_stuetd/15.

Abstract:
Clinicians have a small number of measurement instruments available to them to assist in the identification of suboptimal effort during an evaluation, which is largely agreed upon as a necessary component in the identification of malingering. Green's Medical Symptom Validity Test is a forced-choice test that was created to assist in the identification of suboptimal effort. The goal of this study was to provide clinical evidence for the validity of the Medical Symptom Validity Test using a large, archival clinical sample. The Test of Memory Malingering and the Medical Symptom Validity Test were compared to assess their level of agreement, and were found to agree in their identification of good or poor effort in approximately 75% of cases, which was lower than expected. Scores from the Medical Symptom Validity Test's effort subtests were tested for differences between adult litigants and clinically referred adults. Scores between these groups were different: adult litigants obtained scores that were statistically significantly lower than those in the clinical group. Additionally, children were able to obtain results on the Medical Symptom Validity Test subtests that were equivalent to those of adults. Finally, the Wechsler Memory Scale - Third Edition core memory subtests were assessed for their ability to predict outcomes on the Medical Symptom Validity Test Delayed Recognition subtest. This analysis of the adult litigant and adult clinical groups revealed that, collectively, the predictors explained approximately one-third of the variance in scores on the Delayed Recognition subtest. Outcomes from these hypotheses indicated that the Medical Symptom Validity Test was measuring a construct similar to that of the Test of Memory Malingering. Due to the lower than expected level of agreement between the tests, it is recommended that clinicians use more than one measure of effort, which should increase the reliability of poor effort identification.
Due to their lower scores on the effort subtests, adults similar to those in the adult litigant group can be expected to perform more poorly than those who are clinically referred. Because effort subtest scores were not affected by cognitive or developmental domains, clinically referred children or adult examinees can be expected to obtain scores above cutoffs, regardless of mean age, IQ, or education. Additionally, an examinee's memory will not impact outcome scores on the effort subtests of the Medical Symptom Validity Test. Further research is needed to understand the Medical Symptom Validity Test's ability to accurately identify poor effort with minimal false positives, examine the impact of reading ability on effort subtests, and compare simulators' outcomes to those of a clinical population.
12

Burchett, Danielle L. "MMPI-2-RF Validity Scale Scores as Moderators of Substantive Scale Criterion Validity." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1351280854.

13

Rosenberg, Sharyn L.; Cizek, Gregory J. "Multilevel validity: assessing the validity of school-level inferences from student achievement test data." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2304.

Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009.
Title from electronic title page (viewed Jun. 26, 2009). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the School of Education." Discipline: Education; Department/School: Education.
14

Jain, Pragati. "Validity and its epistemic role." [Bloomington, Ind.] : Indiana University, 2004. http://wwwlib.umi.com/dissertations/fullcit/3162241.

Abstract:
Thesis (Ph.D.)--Indiana University, 2004.
Source: Dissertation Abstracts International, Volume: 66-01, Section: A, page: 0201. Chair: Michael Dunn. Title from dissertation home page (viewed Oct. 12, 2006).
15

Woolard, Christopher. "Moderation of Personality Test Validity." TopSCHOLAR®, 1998. http://digitalcommons.wku.edu/theses/326.

Abstract:
Personality testing can be an adequate instrument for predicting future job performance. However, the predictive ability of these tests has been moderate at best. This researcher attempted to determine whether feedback would improve the predictive ability of personality tests. The results indicated that feedback did not moderate the relationship between the personality dimensions and job performance for any of the personality constructs except Openness to Experience. This researcher also attempted to replicate the findings of the Barrick and Mount (1993) study, which found that autonomy moderated the relationship between Conscientiousness, Extraversion, Agreeableness, and job performance. This researcher found support for Barrick and Mount's findings for Extraversion and Conscientiousness, but not for Agreeableness.
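Moderation claims like those in this abstract are conventionally tested by adding a product (interaction) term to a regression and checking the gain in explained variance. A minimal sketch in Python with hypothetical data, assuming ordinary least squares throughout (not the thesis's actual analysis):

```python
import numpy as np

def moderation_gain(x, m, y):
    """R^2 gain from adding the x*m interaction term to an OLS model.

    A non-trivial gain suggests m moderates the x -> y relationship.
    """
    n = len(y)
    base = np.column_stack([np.ones(n), x, m])      # main effects only
    full = np.column_stack([base, x * m])           # plus interaction

    def r2(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    return r2(full) - r2(base)

# Hypothetical predictor x, moderator m, and two outcomes:
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
m = np.array([2.0, 1.0, 3.0, 5.0, 4.0, 6.0])
gain_interaction = moderation_gain(x, m, x * m)      # moderation present
gain_linear = moderation_gain(x, m, 2.0 * x + 1.0)   # no moderation
```

When the outcome truly depends on the product x*m, the interaction term captures variance the main effects cannot; when the outcome is purely linear in x, the gain is essentially zero.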
16

Nguyen, Quan Hoang (Computer Science & Engineering, Faculty of Engineering, UNSW). "Validity contracts for software transactions." Awarded by: University of New South Wales, Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/44533.

Abstract:
Software Transactional Memory is a promising approach to concurrent programming, freeing programmers from error-prone concurrency control decisions that are complicated and not composable. But few such systems address the consistency of transactional objects. In this thesis, I propose a contract-based transactional programming model toward more secure transactional software. In this general model, a validity contract specifies both requirements and effects for transactions. Validity contracts bring numerous benefits, including reasoning about and verifying transactional programs, detecting and resolving transactional conflicts, automating object revalidation, and easing program debugging. I introduce an ownership-based framework, namely AVID, derived from the general model, using object ownership as a mechanism for specifying and reasoning about validity contracts. I have specified a formal type system and implemented a prototype type checker to support static checking. I have also built a transactional library framework, AVID, based on the existing Java DSTM2 framework, for expressing transactions and validity contracts. Experimental results on a multi-core system show that contracts add little overhead to the original STM. I find that contract-aware contention management yields significant speedups in some cases. The results suggest compiler-directed optimisation for tuning contract-based transactional programs. Further work will investigate the applications of transaction contracts to various aspects of TM research, such as hardware support and open-nesting.
17

Gregory, William Scott. "Construct validity of personal motives." 1992. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/9222149.

18

Wilcox, Aidan. "The validity of reconviction studies." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.423335.

19

Hooker, J. "An exploration into response validity." Thesis, Canterbury Christ Church University, 2018. http://create.canterbury.ac.uk/17684/.

Abstract:
Objectives: Performance validity tests (PVTs) and symptom validity tests (SVTs) have been recommended by the British Psychological Society to assist clinicians in validating assessment data. The current study aimed to explore the base rate of PVT failure in an NHS neuropsychology service, a setting relatively unexplored. A secondary aim was to investigate the relationship between PVT and SVT performance. Lastly, group differences between those passing and failing PVTs were explored in terms of demographics and psychological functioning. Method: Archival test data (n=127) were drawn from an NHS outpatient neuropsychology service. Participants completed one stand-alone PVT (the Test of Memory Malingering [TOMM]), one embedded PVT (Digit Span age-corrected scaled score [DS-SS]), and one SVT (the Personality Assessment Inventory [PAI]). Results: The base rate of failure on any one PVT was 26%. The rate of TOMM failure was 12%, and a further 6% failed an embedded PVT. A significant relationship was found between PVT and SVT performance. Significantly elevated Paranoia, Anxiety-Related Disorders, and Schizophrenia PAI scales, as well as lower Full Scale IQ scores, were found in those who failed PVTs compared to those who passed. No other group differences in demographics were found, including reported financial incentive. Conclusions: Findings suggest that PVT failure occurs in a sizable minority of NHS ABI outpatients, which is unlikely to be explained simply by malingering for material gain. Elevations in reported psychopathological symptoms may be related to the emotional and cognitive sequelae of the ABI itself. Careful interpretation of neuropsychological test data is endorsed.
20

Sargsyan, Alex. "Test Validity and Statistical Analysis." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/8472.

21

Shepherd, Hunter L., and L. Lee Glenn. "Measurement Validity of Childbirth Perceptions." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/7494.

22

Cox, Joy Wiechmann. "Predictive Validity of the LOOK." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5485.

Abstract:
The LOOK, an iOS app, is a viewing-time measure used to assess sexual interest. The measure is based on the assumption that sexual interest can be assessed by the amount of time a participant spends looking at an image. The purpose of this study was to examine the ability of the LOOK, a newly developed viewing-time instrument, to accurately screen and diagnose individuals with deviant sexual interest. The profiles of known sexual offenders were compared to norm-referenced profiles of an exclusively heterosexual, non-pedophilic, male, college student population. Researchers were not able to find a fair constant multiplier that would allow for a positive screen of the offender sample without over-identifying the non-offender sample. Instead, a graph was generated which showed that the trends of offenders were closely related to those of non-offenders under Fischer's Chi-square model. Additionally, when examining the predictive validity of identifying victim demographics of known perpetrators based on Fischer's Chi-square residuals, only 15.9% were found to have offense histories consistent with their profiles on the LOOK. The LOOK, using Fischer's Chi-square model, does not seem to be able to differentiate offenders from non-offenders. Future studies may include examining the predictive nature of ipsative data.
23

Kyei-Blankson, Lydia S. "Predictive Validity, Differential Validity, and Differential Prediction of the Subtests of the Medical College Admission Test." Ohio University / OhioLINK, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1125524238.

24

McCormick, Carroll Owen. "Rater familiarity in simulation validity studies." Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26560.

Abstract:
Previous research on photographic simulation validity is reviewed. Evidence supporting simulation validity is found, but methodological flaws seriously compromise the results of many of the studies. In the present study 408 University of British Columbia undergraduates rated the affective quality of two building interiors, using actual site, color slide, and written description media. Groups rating each site at each medium were equally represented by familiar and unfamiliar raters. The hypothesis that compensating effects of rater familiarity with stimulus sites produce results that are misinterpreted as supportive of simulation validity was tested. Results show that valid ratings of color slides were obtained, but results that appear supportive of simulation validity can be confounded with rater familiarity effects and building prototypicality.
Faculty of Arts, Department of Psychology (Graduate).
25

Robinson, Katherine MacLeod. "Verbal report validity and children's subtraction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0016/NQ46911.pdf.

26

Guo, Tong. "Statistical analysis of reliability-validity studies." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0028/MQ50780.pdf.

27

Mandeville, V. Ann. "The Scriptural validity of working women." Theological Research Exchange Network (TREN), 1986. http://www.tren.com.

28

Krogstad, Finn. "Evaluating the validity of research implications." Thesis, University of Washington (UW restricted), 2007. http://hdl.handle.net/1773/5551.

29

Guo, Tong 1968. "Statistical analysis of reliability-validity studies." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21560.

Abstract:
Reliability and validity studies are common in epidemiology, especially in the course of the development of measurement instruments. Reliability is typically assessed quantitatively, through an appropriate coefficient, such as the Intra-class Correlation Coefficient (ICC). Validity, on the other hand, is assessed more informally through a series of tests and checks: for instance, construct validity may be established by testing the significance of factors that are supposed to influence the measure at hand. In general an ICC is calculated as a ratio of variance components to the total variance. Therefore the first step in the calculation of an ICC is the estimation of variance components from an appropriate statistical model. This thesis presents two approaches to the estimation of variance components in the context of reliability and validity studies: one is the ANOVA approach, based on the method of moments and valid especially for the case of balanced data; the other is the mixed linear model approach, for the more general case of unbalanced data. Furthermore, a general framework is developed which permits treatment of reliability and validity within the same statistical model. By means of this model, a special case of the mixed linear model, appropriate ICCs for both reliability and validity can be computed, while construct validity can be established by testing the significance of appropriate fixed effects. The delta method and the bootstrap are proposed for the calculation of the standard errors and confidence intervals of the ICCs. Finally, an example of a case-vignette study is presented. All calculations were carried out using the SAS system.
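The ICC-as-variance-ratio idea described in this abstract can be sketched with the one-way ANOVA (method-of-moments) estimator in Python. This is a generic illustration, not code from the thesis; the function name and ratings are hypothetical:

```python
import numpy as np

def icc1(X):
    """One-way random-effects ICC(1) via the ANOVA method of moments.

    X: (n subjects) x (k repeated measurements) array.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), i.e. the between-subject
    variance component as a share of the total variance.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    row_means = X.mean(axis=1)
    msb = k * np.sum((row_means - X.mean()) ** 2) / (n - 1)       # between-subject mean square
    msw = np.sum((X - row_means[:, None]) ** 2) / (n * (k - 1))   # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical balanced data: 4 subjects measured twice, in close
# agreement, so the ICC should be near 1.
ratings = np.array([[8.0, 8.1], [4.0, 4.2], [6.0, 5.9], [9.0, 9.1]])
icc = icc1(ratings)
```

When repeated measurements of the same subject agree closely relative to the spread between subjects, MSB dominates MSW and the ratio approaches 1; when all the variation is within subjects, it drops toward 0 or below.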
30

Tinture, Maris Kopcke. "Some main questions concerning legal validity." Thesis, University of Oxford, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Sandmann, Gleb. "Stochastic volatility : estimation and empirical validity." Thesis, London School of Economics and Political Science (University of London), 1997. http://etheses.lse.ac.uk/1456/.

Full text
Abstract:
Estimation of stochastic volatility (SV) models is a formidable task because the presence of the latent variable makes the likelihood function difficult to construct. The model can be transformed to a linear state space with non-Gaussian disturbances. Durbin and Koopman (1997) have shown that the likelihood function of the general non-Gaussian state space model can be approximated arbitrarily accurately by decomposing it into a Gaussian part (constructed by the Kalman filter) and a remainder function (whose expectation is evaluated by simulation). This general methodology is specialised to the estimation of SV models. A finite sample simulation experiment illustrates that the resulting Monte Carlo likelihood estimator achieves full efficiency with minimal computational effort. Accurate values of the likelihood function allow inference within the model to be performed by means of likelihood ratio tests. This enables tests for the presence of a unit root in the volatility process to be constructed which are shown to be more powerful than the conventional unit root tests. The second part of the thesis consists of two empirical applications of the SV model. First, the informational content of implied volatility is examined. It is shown that the in-sample evolution of DEM/USD exchange rate volatility can be accurately captured by implied volatility of options. However, better forecasts of ex post volatility can be constructed from the basic SV model. This suggests that options-implied volatility may not be the market's best forecast of future asset volatility, as is often assumed. Second, the regulatory claim of a destabilising effect of futures market trading on stock market volatility is critically assessed. It is shown how volume-volatility relationships can be accurately modelled in the SV framework. The variables which approximate the activity in the FT100 index futures market are found to have no influence on the volatility of the underlying stock market index.
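The basic SV model referred to here is easy to simulate, which is how finite-sample experiments of the kind described are set up. The AR(1) log-volatility parameterisation and the parameter values below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate a basic stochastic volatility model: log-volatility h_t
    follows an AR(1) around mu, and returns are y_t = exp(h_t / 2) * eps_t."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = mu  # start at the stationary mean for simplicity
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    y = np.exp(h / 2) * rng.standard_normal(n)
    return y, h
```

Taking logs of the squared returns turns this into the linear, non-Gaussian state space form on which the Kalman-filter-plus-simulation likelihood decomposition operates.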
APA, Harvard, Vancouver, ISO, and other styles
32

Blok, Marius Jacobus Johannes. "The educational validity of visual geometry." Thesis, University of Hull, 1997. http://hydra.hull.ac.uk/resources/hull:3487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lee, Judy S. "Advances in simulation: validity and efficiency." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53457.

Full text
Abstract:
In this thesis, we present and analyze three algorithms that are designed to make computer simulation more efficient, valid, and/or applicable. The first algorithm uses simulation cloning to enhance efficiency in transient simulation. Traditional simulation cloning is a technique that shares some parts of the simulation results when simulating different scenarios. We apply this idea to transient simulation, where multiple replications are required to achieve statistical validity. Computational savings are achieved by sharing some parts of the simulation results among several replications. We improve the algorithm by inducing negative correlation to compensate for the (undesirable) positive correlation introduced by sharing some parts of the simulation. Then we identify how many replications should share the same data, and provide numerical results to analyze the performance of our approach. The second algorithm chooses a set of best systems when there are multiple candidate systems and multiple objectives. We provide three different formulations of correct selection of the Pareto optimal set, where a system is Pareto optimal if it is not inferior in all objectives compared to other competing systems. Then we present our Pareto selection algorithm and prove its validity for all three formulations. Finally, we provide numerical results aimed at understanding how well our algorithm performs in various settings. Finally, we discuss the estimation of input distributions when theoretical distributions do not provide a good fit to existing data. Our approach is to use a quasi-empirical distribution, which is a mixture of an empirical distribution and a distribution for the right tail. We describe an existing approach that involves an exponential tail distribution, and adapt the approach to incorporate a Pareto tail distribution and to use a different cutoff point between the empirical and tail distributions. 
Then, to measure the impact, we simulate a stable M/G/1 queue with known inter-arrival and unknown service time distributions, and estimate the mean and tail probabilities of the waiting time in queue using the different approaches. The results suggest that if we know that the system is stable, and suspect that the tail of the service time distribution is not exponential, then a quasi-empirical distribution with a Pareto tail works well, but with a lower bound imposed on the tail index.
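The quasi-empirical construction described above has a simple quantile-function form: empirical below a cutoff, a Pareto tail anchored at the cutoff above it. The cutoff quantile and tail index below are illustrative choices, not the thesis's fitted values:

```python
import numpy as np

def quasi_empirical_quantile(data, u, q_cut=0.9, alpha=2.5):
    """Quantile function of a quasi-empirical distribution: the empirical
    quantile below the cutoff quantile q_cut, and a Pareto tail with index
    alpha anchored at the cutoff above it (q_cut, alpha are illustrative)."""
    data = np.sort(np.asarray(data, dtype=float))
    x_cut = np.quantile(data, q_cut)
    if u <= q_cut:
        return float(np.quantile(data, u))  # empirical part
    # Pareto tail: survival (1 - q_cut) * (x / x_cut)^(-alpha), inverted for x
    return float(x_cut * ((1 - q_cut) / (1 - u)) ** (1 / alpha))
```

Because the tail is Pareto rather than empirical, extreme quantiles extend beyond the sample maximum, which is the point of the construction when the service-time tail is suspected to be heavier than exponential.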
APA, Harvard, Vancouver, ISO, and other styles
34

Orell, T. (Tuure). "Validity of the Lindblad master equations." Master's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201712193337.

Full text
Abstract:
Theory of open quantum systems, which studies quantum systems interacting with their environments, has a wide variety of applications in physics, ranging from quantum optics and condensed matter physics to quantum informatics and quantum computation. One important feature of open quantum systems is quantum mechanical treatment for damping, which is usually described by Lindblad master equations derived using the Born–Markov approximation. However, because of their perturbative nature, such master equations can only model systems with weak system-environment coupling. For example, in superconducting quantum circuits the environmental coupling can be increased to values for which the master equations are no longer accurate. A more accurate equation of motion, the formally exact stochastic Liouville–von Neumann equation, can be obtained by using the path integral formalism of quantum mechanics. We study a simple but important model where a two state system, qubit, is coupled to a quantum harmonic oscillator, which is further coupled to a harmonic oscillator bath. The qubit-oscillator system can be described by the Rabi Hamiltonian. We solve the dynamics of this system numerically with the stochastic Liouville–von Neumann equation and two different Lindblad master equations, the quantum optical master equation and the eigenstate master equation. The former treats the qubit and the oscillator separately and the transitions occur between the eigenstates of the oscillator, whereas the latter treats them as a single system with transitions between the eigenstates of the whole system Hamiltonian. Numerical solutions of the stochastic Liouville–von Neumann equation are unstable with long simulation times. Because of this we are only able to solve it in the case where the qubit and the oscillator are in resonance. In this case we find parameters with which all three equations produce nearly the same results. 
We also see that there exist cases where only one of the master equations agrees with the stochastic Liouville–von Neumann equation. Furthermore, we find parameters for which both master equations produce results that deviate notably from those given by the stochastic Liouville–von Neumann equation. Off resonance, we find that the solutions of the two master equations do not agree with each other for any parameters we study. These results suggest that the current state of numerical simulations of open quantum systems can be improved by using formally exact methods instead of approximate ones in situations where the environmental coupling is of the order of the qubit-cavity coupling or stronger.
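The Lindblad form discussed above has a standard matrix expression that can be coded directly. The damped-qubit example below (basis |g>, |e>, with illustrative rates) is a minimal sketch, not the thesis's qubit-oscillator system:

```python
import numpy as np

def lindblad_rhs(rho, H, jumps):
    """Right-hand side of a Lindblad master equation:
    drho/dt = -i [H, rho] + sum_k gamma_k (L rho L+ - {L+ L, rho} / 2)."""
    drho = -1j * (H @ rho - rho @ H)
    for gamma, L in jumps:
        Ld = L.conj().T
        drho += gamma * (L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L))
    return drho

# Damped qubit: H = (omega/2) sigma_z, decay via the lowering operator
omega, gamma, dt = 1.0, 0.1, 0.01
H = 0.5 * omega * np.diag([-1.0, 1.0]).astype(complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # |e> -> |g>
rho = np.diag([0.0, 1.0]).astype(complex)       # start in the excited state

for _ in range(1000):  # crude Euler integration to t = 10
    rho = rho + dt * lindblad_rhs(rho, H, [(gamma, sm)])
# trace stays 1; the excited population decays roughly as exp(-gamma * t)
```

The dissipator guarantees trace preservation and complete positivity, which is what makes this the generic weak-coupling description that the formally exact stochastic Liouville–von Neumann equation is benchmarked against.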
APA, Harvard, Vancouver, ISO, and other styles
35

Kajdasz, James Edward. "Face Validity and Decision Aid Neglect." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1287176743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

French, Elizabeth. "The Validity of the CampusReady Survey." Thesis, University of Oregon, 2014. http://hdl.handle.net/1794/18369.

Full text
Abstract:
The purpose of this study is to examine the evidence underlying the claim that scores from CampusReady, a diagnostic measure of student college and career readiness, are valid indicators of student college and career readiness. Participants included 4,649 ninth through twelfth grade students from 19 schools who completed CampusReady in the 2012-13 school year. The first research question tested my hypothesis that grade level would have an effect on CampusReady scores. There were statistically significant effects of grade level on scores in two subscales, and I controlled for grade level in subsequent analyses on those subscales. The second, third and fourth research questions examined the differences in scores for subgroups of students to explore the evidence supporting the assumption that scores are free of sources of systematic error that would bias interpretation of student scores as indicators of college and career readiness. My hypothesis that students' background characteristics would have little to no effect on scores was confirmed for race/ethnicity and first language but not for mothers' education, which had medium effects on scores. The fifth and sixth research questions explored the assumption that students with higher CampusReady scores are more prepared for college and careers. My hypothesis that there would be small to moderate effects of students' aspirations for after high school on CampusReady scores was confirmed, with higher scores for students who aspired to attend college than for students with other plans. My hypothesis that there would be small to moderate relationships between CampusReady scores and grade point average was also confirmed. I conclude with a discussion of the implications and limitations of these results for the argument supporting the validity of CampusReady score interpretation as well as the implications of these results for future CampusReady validation research.
This study concludes with the suggestion that measures of metacognitive learning skills, such as the CampusReady survey, show promise for measuring student preparation for college and careers when triangulated with other measures of college and career preparation.
APA, Harvard, Vancouver, ISO, and other styles
37

Gilbert, Elizabeth. "The Validity of Summary Comorbidity Measures." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/382997.

Full text
Abstract:
Statistics
Ph.D.
Prognostic scores, and more specifically comorbidity scores, are important and widely used measures in the health care field and in health services research. A comorbidity is an existing disease an individual has in addition to a primary condition of interest, such as cancer. A comorbidity score is a summary score that can be created from these individual comorbidities for prognostic purposes, as well as for confounding adjustment. Despite their widespread use, the properties of, and conditions under which, comorbidity scores are valid dimension reduction tools in statistical models are largely unknown. This dissertation explores the use of summary comorbidity measures in statistical models. Three particular aspects are examined. First, it is shown that, under standard conditions, the predictive ability of these summary comorbidity measures remains as accurate as the individual comorbidities in regression models, which can include factors such as treatment variables and additional covariates. However, these results are only true when no interaction exists between the individual comorbidities and any additional covariate. The use of summary comorbidity measures in the presence of such interactions leads to biased results. Second, it is shown that these measures are also valid in the causal inference framework through confounding adjustment in estimating treatment effects. Lastly, we introduce a time dependent extension of summary comorbidity scores. This time dependent score can account for changes in patients' health over time and is shown to be a more accurate predictor of patient outcomes. A data example using breast cancer data from the SEER Medicare Database is used throughout this dissertation to illustrate the application of these results to the health care field.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
38

Carter, Devin Matthew. "Using Working Memory to Address the Validity-Diversity Dilemma: Incremental Validity and Subgroup Differences Compared to GMA." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/100807.

Full text
Abstract:
General mental ability (GMA) has been found to be the best predictor of job knowledge and job performance, and it is widely used for personnel selection decisions. However, the use of GMA in selection is a concern for practitioners because of the large Black-White race differences associated with GMA tests. The use of GMA tests, therefore, results in adverse impact when basing decisions on predicted performance. In order to address this validity-diversity tradeoff, a more specific cognitive ability is examined: working memory (WM). Two hundred participants (50% Black, 50% White) were given measures of GMA and WM before being presented with learning opportunities meant to teach them novel information. The participants were then instructed to complete tasks which apply this newly learned knowledge. WM was examined in terms of how much additional variance was accounted for in task knowledge and task performance after controlling for GMA. In addition, race group differences in WM were compared to those of GMA. Results indicated that WM was able to account for significant additional variance in knowledge and performance, and that this relationship was moderated by task complexity. WM exhibited slightly smaller absolute race differences as well, but these reductions were nonsignificant. Results are discussed in terms of the possible use of WM in a selection context.
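"Additional variance after controlling for GMA" is conventionally estimated as a hierarchical regression ΔR² comparison. The sketch below uses synthetic data; the variable names, effect sizes, and seed are illustrative assumptions, not the study's data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 200
gma = rng.standard_normal(n)
wm = 0.6 * gma + 0.8 * rng.standard_normal(n)      # WM correlates with GMA
perf = 0.5 * gma + 0.3 * wm + rng.standard_normal(n)

r2_base = r_squared(gma[:, None], perf)                # step 1: GMA only
r2_full = r_squared(np.column_stack([gma, wm]), perf)  # step 2: GMA + WM
delta_r2 = r2_full - r2_base                           # WM's incremental validity
```

Because the models are nested, ΔR² is never negative; the substantive question is whether it is large and significant enough to justify the more specific predictor.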
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
39

Parker, Kimberly. "Utility of the General Validity Scale Model: Development of Validity Scales for the Co-parenting Behavior Questionnaire." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2301.

Full text
Abstract:
Validity scales for child-report measures are necessary tools in clinical and forensic settings in which major decisions affecting the child and family are in question. Currently there is no standard model for the development and testing of such validity scales. The present study focused on 1) creating the General Validity Scale (GVS) Model to serve as a guide in validity scale development and 2) applying this model in the development of validity scales for the Co-parenting Behavior Questionnaire (CBQ), a child-report measure of parenting and co-parenting behaviors for children whose parents are divorced. Study 1 used the newly developed GVS Model to identify threats to CBQ validity and to develop procedures for detecting such threats. Four different validity scales were created to detect inaccurate responding due to 1) presenting mothering, fathering, and/or co-parenting in an overly negative light, 2) rating mothering and fathering in a highly discrepant manner, 3) inconsistent item responses, and 4) low reading level. Study 2 followed the GVS Model to test the newly developed scales by comparing CBQ responses produced under a standard instruction set to responses from contrived or randomly generated data. Support for the ability of each validity scale to accurately detect threats to validity was found.
APA, Harvard, Vancouver, ISO, and other styles
40

Alberts, Philippus Petrus Hermanus. "The predictive validity of a selection battery for university bridging students in a public sector organisation / Philippus Petrus Hermanus Alberts." Thesis, North-West University, 2007. http://hdl.handle.net/10394/203.

Full text
Abstract:
South Africa has faced tremendous changes over the past decade, which has had a huge impact on the working environment. Organisations are compelled to address the societal disparities between various cultural groups. However, previously disadvantaged groups have had to face inequalities of the education system in the past, such as a lack of qualified teachers (especially in the natural sciences), and poor educational books and facilities. This has often resulted in poor grade 12 results. Social responsibility and social investment programmes are an attempt to rectify these inequalities. The objective of this research was to investigate the validity of the current selection battery of the Youth Foundation Training Programme (YFTP) in terms of academic performance of the students on the bridging programme. A correlational design was used in this research in order to investigate predictive validity whereby data on the assessment procedure was collected at about the time applicants were hired. The scores obtained from the Advanced Progressive Matrices (APM), which forms part of the Raven's Progressive Matrices, as well as the indices of the Potential Index Battery (PIB) tests, acted as the independent variables, while the Matric results of the participants served as the criterion measure of the dependent variable. The data was analysed using the Statistical Package for Social Sciences (SPSS) software programme by means of correlations and regression analyses. The results showed that although the current selection battery used for the bridging students does indeed have some value, it only appears to be a poor predictor of the Matric results. Individually, the SpEEx tests used in the battery evidently were not good predictors of the Matric results, while the respective beta weights of the individual instruments did confirm that the APM was the strongest predictor. Limitations were identified and recommendations for further research were discussed.
Thesis (M.A. (Industrial Psychology))--North-West University, Potchefstroom Campus, 2007.
APA, Harvard, Vancouver, ISO, and other styles
41

ARUA, CEASER. "Assessing the validity of microcredit impact studies in Uganda : Assessing the validity of microcredit impact studies in Uganda." Thesis, Linnéuniversitetet, Institutionen för samhällsstudier (SS), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-36218.

Full text
Abstract:
A number of developing countries, including Uganda, have recently experienced tremendous growth of the microfinance industry in financial and credit service provision. Microfinance development in developing countries and its impacts on the livelihoods of the poor have been a central focus of the academic community and development stakeholders. A number of actors, such as donors and government agencies, have credited microcredit as a program that helps the poor improve their living conditions, fight extreme poverty and reduce the number of people living in absolute deprivation. The growth of microcredit schemes in Uganda has prompted donors, government agencies, microfinance institutions, individuals and academia to measure the achievements of the program against its different objectives. Despite the growing efforts and attention devoted to measuring microcredit impacts on livelihood transformation, less focus has been given to the scientific process of measuring program impacts. Ensuring credibility and validity is an important aspect that guarantees realistic representation and quality in scientific research when researchers seek to understand what has been achieved. It is against this background that this study developed a strong interest in understanding and exploring how different scientific research processes of impact evaluation relate to the quality of the impact reports or outcomes measured. The study examines the main debate about microcredit impacts, with the aim of providing the information required (an epistemological benefit) to understand microcredit impacts from different perspectives of development. The researchers' backgrounds, more specifically their academic qualifications, expertise, gender, institutional affiliations and the roles they played during the different impact studies, are also assessed.
The study looks at the methods of data collection and analysis employed by the different microcredit impact studies and how these affected the studies being assessed. The study uses a systematic, text-based method of data and information analysis; articles retrieved through the Linnaeus University library website and organisational reports obtained from various organisations' databases form the data set used in this study. A total of sixteen impact studies conducted in Uganda have been systematically reviewed. A conceptual framework in which validity serves as the main analytical tool has been employed in the discussion of the study.
APA, Harvard, Vancouver, ISO, and other styles
42

Cruz, de Echeverria Loebell Nicole. "Sur le rôle de la déduction dans le raisonnement à partir de prémisses incertaines." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEP023/document.

Full text
Abstract:
The probabilistic approach to reasoning hypothesizes that most reasoning, both in everyday life and in science, takes place in contexts of uncertainty. The central deductive concepts of classical logic, consistency and validity, can be generalised to cover uncertain degrees of belief. Binary consistency can be generalised to coherence, where the probability judgments for two statements are coherent if and only if they respect the axioms of probability theory. Binary validity can be generalised to probabilistic validity (p-validity), where an inference is p-valid if and only if the uncertainty of its conclusion cannot be coherently greater than the sum of the uncertainties of its premises. But the fact that this generalisation is possible in formal logic does not imply that people will use deduction in a probabilistic way. The role of deduction in reasoning from uncertain premises was investigated across ten experiments and 23 inferences of differing complexity. The results provide evidence that coherence and p-validity are not just abstract formalisms, but that people follow the normative constraints set by them in their reasoning. It made no qualitative difference whether the premises were certain or uncertain, but certainty could be interpreted as the endpoint of a common scale for degrees of belief. The findings are evidence for the descriptive adequacy of coherence and p-validity as computational level principles for reasoning. They have implications for the interpretation of past findings on the roles of deduction and degrees of belief. And they offer a perspective for generating new research hypotheses at the interface between deductive and inductive reasoning.
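The p-validity bound defined in the abstract (the conclusion's uncertainty, 1 minus its probability, may not coherently exceed the sum of the premises' uncertainties) is directly checkable for any probability assignment. The function name below is an illustrative sketch:

```python
def respects_p_validity(premise_probs, conclusion_prob):
    """Check the coherence bound behind probabilistic validity: the
    conclusion's uncertainty (1 - probability) must not exceed the sum
    of the premises' uncertainties, for a p-valid inference."""
    return (1 - conclusion_prob) <= sum(1 - p for p in premise_probs)

# Modus ponens with P(p) = P(if p then q) = 0.9: coherence requires P(q) >= 0.8
print(respects_p_validity([0.9, 0.9], 0.8))  # -> True
print(respects_p_validity([0.9, 0.9], 0.7))  # -> False
```

Note this checks whether one given assignment respects the bound; p-validity itself is the universal claim that no coherent assignment can violate it.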
APA, Harvard, Vancouver, ISO, and other styles
43

Long, Byron L. "Validity in a variant of separation logic." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3378369.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2009.
Title from PDF t.p. (viewed on Jul 9, 2010). Source: Dissertation Abstracts International, Volume: 70-10, Section: B, page: 6348. Adviser: Daniel Leivant.
APA, Harvard, Vancouver, ISO, and other styles
44

DeKort, Cynthia Dianne. "Validity measures of the Communication Attitude Test." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq22590.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Storey, Peter. "Investigating construct validity through test-taker introspection." Thesis, University of Reading, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hodge, Raquel. "Issues in validity generalization the criterion problem." Master's thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4604.

Full text
Abstract:
Schmidt and Hunter's validity generalization model posits seven sources of error variance affecting validation studies. Of the seven sources of error variance, only four have been tested. This study looks at an additional source of error variance, the difference between studies in the amount and kind of criterion contamination and deficiency, as proposed by Schmidt and Hunter. The current study proposes a method of evaluating criterion contamination and deficiency in criterion measures in order to minimize their effects on the relationship between criterion and predictor measures. Two unique criteria are used, including a traditional subjective measure of current performance and a non-traditional subjective measure of expandability (future performance). Data from 378 employees of a large international financial institution were used to test the proposed method. Results do not support the hypotheses. The single criteria predicted as well as or better than the combined criteria, suggesting that the criterion problem was not addressed. Possible reasons for these findings are discussed. An unexpected finding supports the utility of personality measures compared to cognitive ability measures. The study concludes with a discussion of the implications and limitations of the study as well as directions for future research.
ID: 028916979; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (M.S.)--University of Central Florida, 2010.; Includes bibliographical references (p. 46-49).
M.S.
Masters
Department of Psychology
Sciences
APA, Harvard, Vancouver, ISO, and other styles
47

Stephenson, Heather. "Screening and Diagnostic Validity of Affinity 2.5." Thesis, Brigham Young University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3680970.

Full text
Abstract:

Affinity 2.5 is a computer-based instrument designed to assess sexual interest using viewing-time measures. Viewing-time measures of sexual interest have been developed to identify individuals with deviant sexual interest. The purpose of this study is to examine the validity of Affinity 2.5 in screening and diagnosing individuals with sexually deviant interests. This study compared viewing-time profiles of known sexual offenders to norm-referenced profiles of an exclusively heterosexual, non-pedophilic college population. Participants were 155 males and 3 females who had sexually offended against children, and 63 male and 84 female non-offender college students. Results show that 43.7% of offenders were correctly identified as having significantly deviant sexual interest compared to the reference group. Further, 12.0% of offenders showed statistically significant interest in at least one category of individuals from a protected population and had offended against that same category. The results of this study do not provide support for the utility of Affinity 2.5 as a screening or diagnostic tool.

APA, Harvard, Vancouver, ISO, and other styles
48

Dadds, Marion. "Validity and award-bearing teacher action-research." Thesis, University of East Anglia, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Crispim, Ana Carla. "Exploring the validity evidence of core affect." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/66140/.

Full text
Abstract:
Core affect is an elementary affective state expressed through subjective feelings. Nonetheless, despite extensive empirical evidence in the field, researchers still disagree about its dimensionality. Thus, the present thesis aims to verify the validity evidence of existing models of core affect, overcoming the methodological issues of previous studies, and establishing the dimensionality of core affect. First, theoretical contributions are presented, and both conceptual (e.g. what is core affect?) and methodological issues (e.g. how is core affect measured?) are discussed. Following that, two empirical studies are presented. The first study explores the dimensionality of core affect and provides validity evidence for a new core affect measure. In the second study, a robust-to-biases core affect measure is developed and tested. In addition, the relationships between core affect, contextual variables (e.g. mood) and personality traits are studied in a longitudinal design. Item formats (e.g. rating scales, forced-choice items) and their consequences for the measurement of core affect are debated. Finally, theoretical and methodological advances are discussed, as well as limitations and future directions.
APA, Harvard, Vancouver, ISO, and other styles
50

Micciancio, Daniele. "The validity problem for extended regular expressions." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11046.

Full text
APA, Harvard, Vancouver, ISO, and other styles