
Dissertations / Theses on the topic 'Stanford-Binet Test'



Consult the top 17 dissertations / theses for your research on the topic 'Stanford-Binet Test.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Troyka, Rebecca J. "An investigation of item difficulty in the Stanford-Binet intelligence scale, fourth edition." Virtual Press, 1989. http://liblink.bsu.edu/uhtbin/catkey/560300.

Full text
Abstract:
Introduced in 1986, the Stanford-Binet Intelligence Scale: Fourth Edition differs radically from its predecessors. Because of the adaptive testing format and the limited number of items given to each subject, it is especially important that consecutive levels in each of the tests increase in difficulty. The purpose of this study was to investigate the progression of difficulty among items in the Fourth Edition. Three hundred sixty-four subjects from Indiana who ranged in age from 3 years, 0 months to 23 years, 4 months were administered the Fourth Edition. The study was limited to those subjects earning a Composite SAS Score at or above 68. Data were presented to indicate trends in the difficulty of each item as well as in the difficulty of each level in the Fourth Edition. Three research questions were answered: 1) Are the items at each level equally difficult? 2) Are the levels in each test arranged so that the level with the least difficult items is first, followed by levels with progressively more difficult items? 3) In each test, is an item easier for subjects who have entered at a higher level than it is for subjects who have entered at a lower level? The results supported the hypotheses, confirming that the Fourth Edition is a solidly constructed test in terms of item difficulty levels. Most item pairs within a level were found to be approximately equal in difficulty. Nearly all of the levels in each test were followed by increasingly more difficult levels. With very few exceptions, each item was found to be more difficult for subjects entering at a lower entry level than for those entering at a higher entry level. For the few discrepancies found, there was no reason to believe that they were caused by anything other than chance.
Department of Educational Psychology
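The difficulty check described in the abstract above boils down to computing, for each item, the proportion of examinees who pass it, and then verifying that this proportion falls from one level to the next within each test. A minimal sketch of that idea follows, assuming a hypothetical responses.csv with columns test, level, item, and passed (0/1); it illustrates the general approach only and ignores the entry-level adjustment the adaptive format required the study to make.

```python
import pandas as pd

# Hypothetical data: one row per examinee x item, with a 0/1 "passed" flag.
df = pd.read_csv("responses.csv")  # columns: test, level, item, passed

# Item difficulty here = proportion of examinees passing the item
# (a higher passing rate means an easier item).
item_p = df.groupby(["test", "level", "item"])["passed"].mean()

# Mean passing rate per level; if consecutive levels really do get harder,
# this value should fall as the level number rises within each test.
level_p = item_p.groupby(level=["test", "level"]).mean()

for test in level_p.index.get_level_values("test").unique():
    p_by_level = level_p.loc[test].sort_index()
    print(test, "levels get progressively harder:",
          bool(p_by_level.is_monotonic_decreasing))
```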
2

Chase, Danielle. "Underlying factor structures of the Stanford-Binet Intelligence Scales - Fifth Edition." Philadelphia, Pa.: Drexel University, 2006. http://dspace.library.drexel.edu/handle/1860/738.

Full text
3

Ng, Agnes Oi Kee. "Relationships among the Stanford-Binet Intelligence Scale : Fourth Edition, the Peabody Picture Vocabulary test-Revised and teacher rating for Canadian Chinese elementary age students." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/31247.

Full text
Abstract:
The use of standardized tests in the assessment of ethnic students who speak English as a second language has become an important issue in Canada due to the increasing number of immigrant students in the school system. The subjects of this study were a group of 34 Canadian-born, bilingual Chinese third graders with at least three years of schooling in English. They were tested on two standardized tests and the results were compared with the standardization population. The study also investigated the correlations among these two measures and an informal teacher rating scale. The subjects were found to perform more than one standard deviation below the norm on the Peabody Picture Vocabulary Test-Revised, which is a test of receptive language. Chinese-speaking home environments and the culturally biased items in the test might have resulted in the significantly low score obtained by the subjects. On the Stanford-Binet Intelligence Scale: Fourth Edition, the subjects did not differ significantly from the norm on the Test Composite, Verbal Reasoning, Abstract/Visual Reasoning, Short-Term Memory, and seven subtests. They did score significantly higher than the norm on Pattern Analysis, Matrices, Number Series, and Quantitative Reasoning, and significantly lower on Copying and Memory for Sentences. When compared with a group of Asian subjects (ages 7-11) from the Stanford-Binet standardization sample, the subjects performed significantly higher on Quantitative Reasoning and lower on Short-Term Memory. Consistent with the results of previous research, the subjects in the present study excelled in visual/perceptual and mathematical tests. It is possible that their English-language proficiency may have contributed to the significantly low score in Memory for Sentences. The four reasoning area scores on the Stanford-Binet were found to be significantly different from each other, with the subjects' highest score in Quantitative Reasoning and the lowest in Short-Term Memory. Correlations among the three measures reached statistical significance, ranging from the .30s to the .60s. Teacher rating correlated equally well with the standardized tests, as there was no significant difference among the correlations. However, the correlations indicated that although these tests share something in common, in practice they cannot be used interchangeably. The study concluded that the Peabody Picture Vocabulary Test-Revised may not be an appropriate instrument for measuring the receptive language of Chinese students who have English as their second language. The Stanford-Binet Intelligence Scale: Fourth Edition could be considered a valid measure of the cognitive ability of this group of students. The positive and significant correlations between teacher rating and the standardized tests indicate that teachers' perception of student ability parallels what formal testing reveals.
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
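The norm comparisons reported in the abstract above amount to testing whether the sample's mean standard score differs from the published normative mean of 100. A hedged sketch of one such comparison with a one-sample t-test; the file and column names are hypothetical, and the original study's exact procedure may have differed.

```python
import pandas as pd
from scipy.stats import ttest_1samp

# Hypothetical standard scores for the bilingual Chinese sample.
scores = pd.read_csv("chinese_sample_scores.csv")  # e.g. columns: ppvt_r, sb_composite

NORM_MEAN = 100  # normative mean for both standard-score metrics

for col in ["ppvt_r", "sb_composite"]:
    t, p = ttest_1samp(scores[col].dropna(), popmean=NORM_MEAN)
    print(f"{col}: sample mean = {scores[col].mean():.1f}, t = {t:.2f}, p = {p:.4f}")
```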
4

Bass, Catherine. "Comparability of the WPPSI-R and the Stanford-Binet: Fourth Edition." Thesis, University of North Texas, 1990. https://digital.library.unt.edu/ark:/67531/metadc500383/.

Full text
Abstract:
The purpose of this study was to compare the performance of children on the Wechsler Preschool and Primary Scale of Intelligence-Revised (WPPSI-R) with their performance on the Stanford-Binet Intelligence Scale: Fourth Edition (SB:FE). One hundred and four children between 3 and 7 years of age were administered both tests. A moderate correlation was found between the WPPSI-R Full Scale IQ and the SB:FE Composite Score with a Pearson product-moment correlation of .46. This correlation suggests that the two tests are not interchangeable measures of children's intelligence. They may measure different, equally important aspects of intelligence. As both tests used are relatively new, the current findings should be considered one step in the accumulation of knowledge about the usefulness of the WPPSI-R.
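The headline statistic in this abstract is a single Pearson product-moment correlation between the two composite scores. A minimal sketch of that computation, assuming a hypothetical file of paired scores; it mirrors the kind of analysis described, not the study's data.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical paired scores: one row per child tested on both batteries.
pairs = pd.read_csv("wppsi_sb_pairs.csv")  # columns: wppsi_r_fsiq, sbfe_composite

r, p = pearsonr(pairs["wppsi_r_fsiq"], pairs["sbfe_composite"])
print(f"Pearson r = {r:.2f} (p = {p:.4f})")
# A moderate r (the study reports .46) means the tests rank children similarly
# only in part, so their scores cannot be treated as interchangeable.
```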
5

Church, Rex W. "An investigation of the value of the Peabody picture vocabulary test-revised and the Slosson intelligence test as screening instruments for the fourth edition of the Stanford-Binet intelligence scale." Virtual Press, 1986. http://liblink.bsu.edu/uhtbin/catkey/467365.

Full text
Abstract:
The Peabody Picture Vocabulary Test-Revised (PPVT-R) and Slosson Intelligence Test (SIT) were designed, at least in part, to provide a quick estimate of scores which might be obtained on the Stanford-Binet Intelligence Scale, Form L-M, without requiring extensive technical training by the examiner. Both the PPVT-R and SIT are frequently used as screening instruments to identify children for possible placement in special education programs, remedial reading groups, speech and language therapy, gifted programs, or "tracks." This study investigated the value of the PPVT-R and SIT as screening instruments for the Fourth Edition Stanford-Binet. Fifty students, grades kindergarten through fifth, were randomly selected to participate in the study. All subjects were involved in regular education at least part-time. Subjects were administered the PPVT-R, SIT, and Fourth Edition Binet by a single licensed school psychologist. The administration order of the instruments was randomized. Participants were tested on consecutive school days (10 in all) until all subjects had been administered the three instruments. Correlation coefficients were determined for the Standard Score of the PPVT-R and each Standard Age Score of the Binet (four area scores and one total test score), as well as for the SIT IQ score and each Standard Age Score of the Binet. All correlations were positive and significant beyond the p < .01 level except between the PPVT-R and Binet Quantitative Reasoning. Analyses of variance were used to determine mean differences of scores obtained on the three instruments. Significant differences (p < .05) were found between scores on the PPVT-R and Abstract/Visual Reasoning, the SIT and Verbal Reasoning, the SIT and Short-Term Memory, the SIT and Abstract/Visual Reasoning, and the SIT and the Total Test Composite. Results indicated that, in general, the SIT is a better predictor of Fourth Edition Binet scores than the PPVT-R; however, it frequently yielded significantly different scores. It was concluded that neither the PPVT-R nor the SIT should be used as a substitute for more comprehensive measures of intellectual functioning, and that caution should be used when interpreting their results. Much more research is needed to clarify the diagnostic value of the Fourth Edition Stanford-Binet as a psychometric instrument.
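Two kinds of statistics are reported in the abstract above: correlations between each screener and the Binet Standard Age Scores, and analyses of variance on mean differences among the instruments. A small sketch of both steps with hypothetical column names; the abstract does not specify the exact ANOVA design, so a simple one-way ANOVA stands in for it here.

```python
import pandas as pd
from scipy.stats import pearsonr, f_oneway

# Hypothetical scores for the 50 students on all three instruments.
df = pd.read_csv("screening_scores.csv")  # columns: ppvt_r, sit, binet_composite

# Step 1: correlate each screener with the Binet composite SAS.
for screener in ["ppvt_r", "sit"]:
    r, p = pearsonr(df[screener], df["binet_composite"])
    print(f"{screener} vs Binet composite: r = {r:.2f}, p = {p:.4f}")

# Step 2: test whether mean scores differ across the three instruments.
F, p = f_oneway(df["ppvt_r"], df["sit"], df["binet_composite"])
print(f"One-way ANOVA across instruments: F = {F:.2f}, p = {p:.4f}")
```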
6

Blood, Beverly A. "The relationship between achievement on the test of cognitive skills and the Stanford-Binet intelligence scale : fourth edition for elementary school students." Virtual Press, 1989. http://liblink.bsu.edu/uhtbin/catkey/720137.

Full text
Abstract:
For many school psychologists the constraints of time create a need to identify an instrument that can be used to screen students referred for comprehensive psychoeducational evaluations. This study examined the relationship between scores students obtained on the group-administered Test of Cognitive Skills (TCS) and those they obtained on the individually administered Stanford-Binet Intelligence Scale: Fourth Edition (SB:FE). Comparisons were made between the Cognitive Skills Index (CSI) and Sattler's Factor scores from the SB:FE, and between the CSI and the SB:FE Composite score. The subjects were 75 elementary public school students who were enrolled in regular education classes at least 50% of their school day. The students were referred for comprehensive evaluations because of concern about their academic progress. Archival data from tests administered during the 1987-1988 school year were gathered from the students' cumulative school files. Pearson product-moment correlations indicate that (in the sample studied) there was a significant positive correlation between the CSI scores and each of the Factor scores and the Composite scores. Analysis of variance (ANOVA) procedures were used to test mean differences. The data indicate that there was no statistically significant difference between the mean score of the CSI and the Verbal Comprehension Factor score, nor between the CSI and the Memory Factor. However, the Nonverbal Reasoning/Visualization and Composite means differed significantly from the CSI mean. The results of this study suggest that the CSI can make a worthwhile contribution to referral information. Correlational and mean difference data derived from this study demonstrate the need for caution when interpreting and applying statistical findings. Additional research is needed to further clarify the relationship among group-administered and individually administered intelligence tests, and between the SB:FE and other individually administered intelligence tests.
Department of Educational Psychology
7

Nelson, Stephanie Anne. "Associations Between Intelligence Test Scores and Test Session Behavior in Children with ADHD, LD, and EBD." ScholarWorks @ UVM, 2008. http://scholarworks.uvm.edu/graddis/159.

Full text
Abstract:
Individually administered intelligence tests are a routine component of psychological assessments of children who may meet criteria for Attention-Deficit/Hyperactivity Disorder (ADHD), learning disorders (LD), or emotional and behavioral disorders (EBD). In addition to providing potentially useful test scores, the individual administration of an intelligence test provides an ideal opportunity for observing a child's behavior in a standardized setting, which may contribute clinically meaningful information to the assessment process. However, little is known about the associations between test scores and test session behavior of children with these disorders. This study examined patterns of test scores and test session observations in groups of children with ADHD, LD, and EBD who were administered the Stanford-Binet Intelligence Scales, Fifth Edition (SB5), as well as in control children from the SB5 standardization sample. Three hundred twelve children receiving special education services for ADHD (n = 50), LD (n = 234), or EBD (n = 28), along with 100 control children from the SB5 standardization sample, were selected from a data set of children who were administered both the SB5 and the Test Observation Form (TOF; a standardized rating form for assessing behavior during cognitive or achievement testing of children). The groups were then compared on SB5 scores and TOF scores. Associations between test scores and TOF scores in children with ADHD, LD, and EBD and in normal controls were also examined. The results of this investigation indicated that children with ADHD, LD, and EBD and normal control children differed on several SB5 and TOF scales. Control children scored higher on all of the SB5 scales than children with LD, and scored higher on many of the SB5 scales than children with ADHD and EBD. Children with EBD demonstrated the most problem behavior during testing, followed by children with ADHD. Children with LD were similar to control children with respect to test session behavior. In addition, several combinations of test scores and test session behavior were able to predict diagnostic group status. Overall, the results of this investigation suggest that test scores and behavioral observations during testing can and should be important components of multi-informant, multi-method assessment of children with ADHD, LD, and EBD.
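The abstract above notes that combinations of SB5 scores and TOF ratings predicted diagnostic group membership but does not name the classification method. As one hypothetical way to frame such an analysis, the sketch below uses linear discriminant analysis with cross-validation; the file and column names are assumptions, not the study's variables.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical data: SB5 scale scores, TOF behavior scores, and a group label
# (ADHD, LD, EBD, or control) for each child.
df = pd.read_csv("sb5_tof_groups.csv")
X = df[["sb5_fsiq", "sb5_working_memory", "tof_attention", "tof_oppositional"]]
y = df["group"]

# How well do test scores plus test-session behavior separate the groups?
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"Cross-validated classification accuracy: {accuracy:.2f}")
```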
8

Powers, Abigail Dormire. "The fourth edition of the Stanford-Binet intelligence scale and the Woodcock-Johnson tests of achievement : a criterion validity study." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/558350.

Full text
Abstract:
The purpose of the study was to investigate the validity of the Stanford-Binet Intelligence Scale: Fourth Edition (SB:FE) area and composite scores and Sattler's SB:FE factor scores as predictors of school performance on the Woodcock-Johnson Tests of Achievement (WJTA). The subjects were 80 Caucasian third grade students enrolled in regular education in a rural and small town school district in northeastern Indiana. The SB:FE and WJTA were administered to all students. Two canonical analyses were conducted to test the overall relationships between sets of SB:FE predictor variables and the set of WJTA criterion variables. Results indicated that the SB:FE area scores and Sattler's SB:FE factor scores were valid predictors of academic achievement at a general level. To clarify the results of the canonical analyses, a series of multiple regression analyses was conducted. Results of multiple regression with SB:FE area and composite scores indicated that the best single predictor of all WJTA scores was the SB:FE Test Composite Score. No other SB:FE variable provided a significant contribution to the regression equation for reading, math, and written language achievement over that offered by the Test Composite Score. Multiple regression analyses were also employed with Sattler's SB:FE factor scores and the WJTA scores. The optimal predictor composite for reading included the Verbal Comprehension and Memory factor scores. To predict math, the best predictor composite consisted of the Nonverbal Reasoning/Visualization and Verbal Comprehension factor scores. The optimal predictor composite for written language included the Nonverbal Reasoning/Visualization and Memory factor scores. Results of the regression analyses indicated that, without exception, the predictor composites composed of the SB:FE area and composite scores were superior in their prediction of school performance to the predictor composites developed from Sattler's SB:FE factor scores. The regression equation containing the SB:FE Test Composite Score alone was determined to be the preferred approach for predicting WJTA scores. Use of the Test Composite Score sacrifices only a minimal degree of accuracy in the prediction of achievement and requires no additional effort to compute.
Department of Educational Psychology
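The prediction question in this entry reduces to regressing each WJTA achievement score on SB:FE predictors and asking whether anything improves on the Test Composite alone. A minimal statsmodels sketch under hypothetical column names; it illustrates only the regression step, not the canonical analyses.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data set of SB:FE scores and WJTA achievement scores.
df = pd.read_csv("sbfe_wjta.csv")

def r_squared(predictors, criterion):
    """Fit an OLS regression and return its R-squared."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df[criterion], X).fit().rsquared

# Test Composite alone versus Composite plus area scores, for reading.
print("Composite only:    ", round(r_squared(["composite"], "wjta_reading"), 3))
print("Composite + areas: ", round(r_squared(
    ["composite", "verbal_reasoning", "quantitative_reasoning",
     "abstract_visual_reasoning", "short_term_memory"], "wjta_reading"), 3))
```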
9

Mullins, James E. "A comparison of performance of students referred for gifted evaluation on the WISC-III and Binet IV." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=1172.

Full text
Abstract:
Thesis (Ed. D.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains xi, 182 p. : ill. Vita. Includes abstract. Includes bibliographical references (p. 139-143).
10

Tucker, Sandra K. "A validation study of the general purpose abbreviated battery of the Stanford-Binet : fourth edition used in the reevaluation of learning disabled students." Virtual Press, 1990. http://liblink.bsu.edu/uhtbin/catkey/720164.

Full text
Abstract:
At the same time that research has raised questions about the efficiency, cost effectiveness, and overall value of triennial reevaluation in special education programs, school psychologists have expressed a desire to spend less time in psychometric testing. This study examined the effects of using the General Purpose Abbreviated Battery of the Stanford-Binet: Fourth Edition (Binet GP) in the triennial reevaluation of learning disabled students. The Binet GP, Wechsler Intelligence Scale for Children-Revised (WISC-R), and the Kaufman Test of Educational Achievement-Brief Form (Kaufman BF) were given concurrently to 50 learning disabled students during triennial reevaluation. Intelligence/achievement discrepancy scores were calculated by subtracting Kaufman BF achievement subtest scores from achievement levels predicted by performance on the Binet GP and WISC-R intelligence scales. These discrepancy scores were compared to determine how use of the Binet GP might affect eligibility for placement in a learning disabilities program. Cognitive scores derived from the Binet GP and the WISC-R were also compared. Descriptive statistics and univariate correlations were computed. The correlational relationship between intelligence scores on the Binet GP and the WISC-R was significant, positive, and substantial. The relationship between discrepancy scores was significant, positive, and high. A repeated measures analysis of mean differences between Binet GP and WISC-R scores was nonsignificant, as was a comparison of the variances and mean discrepancy scores. A chi-square and a coefficient of level of classification (Kappa) were used to test agreement in classification as projected by Binet GP and WISC-R discrepancy scores. Agreement in classification and level of classification was significant, with 86% of the subjects classified the same by both cognitive measures. It appears that, used judiciously and in like context, the Binet GP might be a time-efficient and valid addition to reevaluation.
Department of Educational Psychology
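Two computations carry this study: an intelligence/achievement discrepancy under each IQ measure, and agreement between the two resulting eligibility classifications, which Cohen's kappa summarizes. A hedged sketch of both, with hypothetical column names and an illustrative 15-point cutoff; the study's actual prediction of achievement from IQ and its eligibility criterion are not given in the abstract.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores for students re-evaluated with both IQ measures.
df = pd.read_csv("reeval_scores.csv")  # columns: binet_gp, wisc_r_fsiq, kaufman_bf

# Simple-difference discrepancy: IQ-based expectation minus obtained achievement
# (the study predicted achievement from IQ; a plain difference stands in here).
df["disc_binet"] = df["binet_gp"] - df["kaufman_bf"]
df["disc_wisc"] = df["wisc_r_fsiq"] - df["kaufman_bf"]

# Illustrative eligibility rule: a discrepancy of 15 points or more.
CUTOFF = 15
eligible_binet = df["disc_binet"] >= CUTOFF
eligible_wisc = df["disc_wisc"] >= CUTOFF

same = (eligible_binet == eligible_wisc).mean()
kappa = cohen_kappa_score(eligible_binet, eligible_wisc)
print(f"Same classification: {same:.0%}, Cohen's kappa = {kappa:.2f}")
```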
11

Perley-McField, Jo-Anne. "The appropriateness of selected subtests of the Stanford-Binet Intelligence Scale, Fourth Edition for hearing impaired children." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29000.

Full text
Abstract:
This study proposed to evaluate the appropriateness of selected subtests of the Stanford-Binet Intelligence Scale: Fourth Edition (SB:FE) for use with severely to profoundly hearing impaired children. The subjects used in this study were enrolled in a residential/day school for the deaf whose educational methodology was Total Communication. The subjects were tested on both the SB:FE nonverbal selected subtests and the Performance Scale of the Wechsler Intelligence Scale for Children-Revised (WISC-R PIQ). To assess appropriateness, several procedures were employed comparing data gathered from the hearing impaired sample with data reported for the standardized population of the SB:FE. Correlations were computed between the WISC-R and the SB:FE, and comparisons of the total composite scores for each measure were made to detect any systematic differences. The results indicated that the correlations reported for the hearing impaired sample are generally similar to the correlations reported for the standardized sample of the SB:FE. The analysis performed between the Area Scores of the SB:FE and the WISC-R PIQ to detect systematic differences revealed a difference of one standard deviation between these two instruments, with the SB:FE results being lower than the WISC-R PIQ results. It was concluded that the selected subtests of the SB:FE and the WISC-R PIQ could not be used interchangeably. Further research into this area was advised before using this measure to estimate general cognitive ability for hearing impaired children whose levels of language development may be delayed. Further research was also encouraged to confirm the suggestion of greater predictive validity of the SB:FE with academic measures. It was suggested that these findings indicated that the use of language as a cognitive tool may be important in acquiring certain problem solving skills.
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
12

Williams, Tasha H. "A joint-confirmatory factor analysis using the Woodcock-Johnson III tests of cognitive ability and the Stanford-Binet Intelligence Scales, fifth edition, with high achieving children." Virtual Press, 2005. http://liblink.bsu.edu/uhtbin/catkey/1318454.

Full text
Abstract:
A considerable amount of research has concentrated on studying the performance of high achieving children on measures of intellectual functioning. Findings have indicated that high achieving children display differences in performance patterns as well as in the cognitive constructs measured when compared to their average peers. The conceptualization of intelligence has evolved over time, and contemporary theories of intelligence have described cognitive ability as consisting of multiple constructs which are often interrelated. Currently, one of the most comprehensive and empirically supported theories of intelligence is the Cattell-Horn-Carroll (CHC) theory (Cattell, 1941; Horn, 1968; Carroll, 1993). The multidimensional and hierarchical CHC theory has served as the foundation for the development and recent revisions of cognitive ability measures such as the Woodcock-Johnson Tests of Cognitive Ability - Third Edition (WJ-III COG; McGrew & Woodcock, 2001) and the Stanford-Binet Intelligence Scales - Fifth Edition (SB5; Roid, 2003b). The purpose of this study was to explore the construct validity of the WJ-III COG and SB5 with a sample of high achieving children. Individual confirmatory factor analyses were conducted using the WJ-III COG and SB5. Additionally, a joint confirmatory factor analysis was conducted using both the WJ-III COG and SB5. The results indicated that an alternative six-factor WJ-III COG model and a four-factor SB5 model provided the best fit to the data of a high achieving sample, supporting previous research suggesting high achieving children display differences in cognitive constructs when compared with their average counterparts. The joint confirmatory factor analysis identified the best measures of the CHC factors assessed by both the WJ-III COG and SB5, to help guide cross-battery assessments with high functioning children. Clinical applications of the findings are discussed.
Department of Educational Psychology
13

Morgan, Kimberly E. "The validity of intelligence tests using the Cattell-Horn-Carroll model of intelligence with a preschool population." Virtual Press, 2008. http://liblink.bsu.edu/uhtbin/catkey/1389688.

Full text
Abstract:
Individual differences in human intellectual abilities and the measurement of those differences have been of great interest to the field of school psychology. As such, different theoretical perspectives and corresponding test batteries have evolved over the years as a way to explain and measure these abilities. A growing interest in the field of school psychology has been to use more than one intelligence test in a "cross-battery" assessment in hopes of measuring a wider range (or a more in-depth but selective range) of cognitive abilities. Additionally, interest in assessing intelligence began to focus on preschool-aged children because of initiatives to intervene early with at-risk children. The purpose of this study was to examine the Stanford-Binet Intelligence Scales, Fifth Edition (SB-V) and the Kaufman Assessment Battery for Children, Second Edition (KABC-II) in relation to the Cattell-Horn-Carroll (CHC) theory of intelligence using a population of 200 preschool children. Confirmatory factor analyses (CFAs) were conducted with these two tests individually as well as in conjunction with one another. Different variations of the CHC model were examined to determine which provided the best representation of the underlying CHC constructs measured by these tests. Results of the CFAs with the SB-V revealed that it was best interpreted from a two-stratum model, although results with the KABC-II indicated that the three-stratum CHC model was the best overall design. Finally, results from the joint CFA did not provide support for a cross-battery assessment with these two particular tests.
Department of Educational Psychology
14

Steffey, Dixie Rae. "The relationship between the Wechsler Adult Intelligence Scale - Revised and the Stanford-Binet Intelligence Scale: Fourth Edition in brain-damaged adults." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184412.

Full text
Abstract:
This study investigated the relationship between the Wechsler Adult Intelligence Scale-Revised (WAIS-R) and the Stanford-Binet Intelligence Scale: Fourth Edition (SBIV) in a brain-damaged adult sample. The sample in this study was composed of 30 adult patients at two residential treatment programs who completed comprehensive psychological evaluations between August 1986 and November 1987. Each patient was administered both the WAIS-R and the SBIV as part of these evaluations. Data gathered in this study were submitted to Pearson product-moment correlational procedures. Significant correlations were found in the following pairs of summary scores: the SBIV Test Composite Standard Age Score (SAS) and the WAIS-R Full Scale IQ; the SBIV Abstract/Visual Reasoning Area SAS and the WAIS-R Performance IQ; the SBIV Quantitative Reasoning Area SAS and the WAIS-R Verbal Scale IQ; the SBIV Verbal Reasoning Area SAS and the WAIS-R Verbal Scale IQ; the SBIV Short-Term Memory Area SAS and the WAIS-R Verbal Scale IQ; and the SBIV Short-Term Memory Area SAS and the WAIS-R Full Scale IQ. Significant correlations were also found in the following pairs of individual subtest results: the SBIV and WAIS-R Vocabulary subtests; the SBIV Memory for Digits subtest and the WAIS-R Digit Span subtest; the SBIV Pattern Analysis subtest and the WAIS-R Block Design subtest; and the SBIV Paper Folding and Cutting subtest and the WAIS-R Picture Arrangement subtest. Directions for future research were also suggested upon review of the subtest correlation matrix and the descriptive statistics of the data generated.
15

Hansen, Daryl P. "Comparison of the performance of intellectually disabled children on the WISC-III and SB-IV." 1999. http://arrow.unisa.edu.au:8081/1959.8/25032.

Full text
Abstract:
This study investigated the results of administering two intelligence tests, the Wechsler Intelligence Scale for Children - Third Edition (WISC-III) and the Stanford-Binet Intelligence Scale - Fourth Edition (SB-IV), to each of 33 Australian children with an intellectual disability. The experiment used a counterbalanced design in which the tests, order of presentation of the tests, the gender of the subjects, and the gender of the test administrators were factors. The 33 volunteer subjects, 14 males and 19 females, aged between 6 and 16 years, and known to have an intellectual disability, were allocated randomly for the assessments. The test administrators were students in the Clinical and Organisational Masters Program at the University of South Australia. It was hypothesised that there would be a difference between the IQs on the two tests; that on average the WISC-III FSIQ would be lower than the SB-IV TC; and that there would be a positive relationship between the WISC-III FSIQ and the SB-IV TC. Statistical analysis of the data found the two tests' overall scores to be significantly different, while the counterbalanced factors and their interactions did not reach significance. There was a significant 4-point difference between the mean WISC-III FSIQs and SB-IV TCs. A Pearson product-moment correlation revealed a strong positive relationship (r = .83) between the WISC-III FSIQ and SB-IV TC. This finding supported the concurrent validity of the tests in this special population sample. It was suggested that while the two tests measured similar theoretical constructs of intelligence, they were not identical and therefore their results were not interchangeable. Variable patterns of results were found among subtest scores from the two tests, and the implications for field work were discussed. The differences between raw WISC-III FSIQ and SB-IV TC scores were calculated, and a z transformation was applied to the difference scores. The resulting difference distribution and cumulative percentages were then suggested as a reference table for practitioners. Studies that examined clerical errors in scoring intelligence test protocols were reviewed. The manually scored test protocols in this study were rescored using a computer scoring programme, and 27 errors were found and corrected. From the results of the experiment several suggestions were made: that agencies using large numbers of intelligence tests, or which test the same child over time, should decide to use the same test, wherever possible, for comparison; that all intelligence test protocols be computer scored as a checking mechanism; and that all professional staff should be aware of the possible differences which can occur between intelligence scores, resulting from norming and other differences.
Thesis (MSocSc)--University of South Australia, 1999.
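The practitioner reference table proposed at the end of this abstract is built from the distribution of WISC-III FSIQ minus SB-IV TC differences: standardize the differences and tabulate cumulative percentages. A minimal sketch, assuming a hypothetical file of paired scores.

```python
import numpy as np
import pandas as pd

# Hypothetical paired scores for children tested on both instruments.
pairs = pd.read_csv("wisc_sb_pairs.csv")  # columns: wisc_fsiq, sb_tc

diff = pairs["wisc_fsiq"] - pairs["sb_tc"]

# z-transform the difference scores against the sample distribution.
z = (diff - diff.mean()) / diff.std(ddof=1)

# Reference table: difference score, z score, cumulative percentage.
table = (
    pd.DataFrame({"difference": diff, "z": z})
    .sort_values("difference")
    .assign(cum_pct=lambda d: 100 * np.arange(1, len(d) + 1) / len(d))
)
print(table.head())
```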
16

Chang, Mei. "Joint confirmatory factor analysis of the Woodcock-Johnson Tests of Cognitive Abilities, third edition, and the Stanford-Binet Intelligence Scales, fifth edition, with a preschool population." 2011. http://liblink.bsu.edu/uhtbin/catkey/1652227.

Full text
Abstract:
Significant evidence from legislative, medical/clinical, and professional practice perspectives all points to the advantages and necessity of conducting comprehensive assessment of cognitive abilities, especially in young children, to identify cognitive deficits, arrive at an accurate diagnosis, and establish bases for developing interventions and recommending services. The cross-battery assessment approach provides school psychologists a useful tool to strengthen their preferred cognitive battery by adopting and comparing subtests from other batteries to build a comprehensive and theoretically sound evaluation of an individual's cognitive profile and increase the validity of test interpretation. Using joint confirmatory factor analysis, this study explored the combined underlying construct validity of the Woodcock-Johnson Tests of Cognitive Abilities, Third Edition (WJ-III COG) and the Stanford-Binet Intelligence Scales, Fifth Edition (SB5) with an independent sample of preschool children. Seven models were examined, and the results showed that, relatively, the underlying construct of the two tests was best represented by a three-stratum alternative CHC model in which the Gf factor and subtests had been removed. This indicates that not all the CHC constructs shared by both tests can be reliably identified among young children. Constructs of the CHC theory may be represented differently on preschool cognitive batteries due to developmental influences. Although the WJ-III COG and SB5 tests as a whole did not demonstrate good results for purposes of cross-battery assessment, certain subtests (e.g., subtests representing crystallized intelligence) from each battery offer interpretative value for individual broad ability factors, providing school psychologists an in-depth understanding of a preschooler's crystallized knowledge. Exploratory factor analyses were conducted with subtests from the WJ-III COG and SB5 representing the four shared broad factors (Gc, Gf, Gv, and Gsm). Results revealed that a 4-factor solution provided a better fit to the data. Future research includes recruiting young children with disabilities or special needs to explore the best representative underlying construct of the combined WJ-III COG and SB5, allowing for cross-battery assessment.
Access to thesis permanently restricted to Ball State community only
Department of Educational Psychology
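The exploratory step reported at the end of this abstract, a four-factor solution over the subtests representing the shared Gc, Gf, Gv, and Gsm factors, can be sketched with the factor_analyzer package. The input file, subtest columns, and rotation below are assumptions, and this stands in for the exploratory analyses only, not the confirmatory models the study actually compared.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical subtest scores from both batteries for the preschool sample.
df = pd.read_csv("wjiii_sb5_subtests.csv")  # one numeric column per subtest

# Four factors to mirror the shared broad abilities (Gc, Gf, Gv, Gsm), with an
# oblique rotation since CHC broad factors are expected to correlate.
fa = FactorAnalyzer(n_factors=4, rotation="promax")
fa.fit(df)

loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=["F1", "F2", "F3", "F4"])
print(loadings.round(2))
print("Proportion of variance:", fa.get_factor_variance()[1].round(2))
```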
17

Sweeny, Ryan Michael. "Making sense of the Mozart effect: correcting the problems created by null hypothesis significance testing." 2006. http://etd.nd.edu/ETD-db/theses/available/etd-12082006-122440/.

Full text
