Dissertations / Theses on the topic 'Australian Scholastic Aptitude Test'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 25 dissertations / theses for your research on the topic 'Australian Scholastic Aptitude Test.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Geering, Margo. "Gender differences in multiple choice assessment." University of Canberra. Education, 1993. http://erl.canberra.edu.au./public/adt-AUC20050218.141005.
Steele, Charles Noah. "Scholastic aptitude test scores and the economic returns to college education." Thesis, Montana State University, 1990. http://etd.lib.montana.edu/etd/1990/steele/SteeleC1990.pdf.
Drakulich, Elaine. "An Analysis of the Involvement of Ten High Schools in Scholastic Aptitude Testing Student Preparation." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/1154.
Adams, Edward R. "The effects of cost, income, and socio-economic variables on student scholastic aptitude scores." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/917821.
Department of Educational Leadership
Fenner, Sherrie. "A study of the correlation between Pennsylvania system of school assessment and scholastic aptitude test scores in mathematics." Access available to Kutztown University faculty, staff, and students only, 2001. http://www.kutztown.edu/library/services/remote_access.asp.
Source: Masters Abstracts International, Volume: 45-06, page: 2797. Typescript. Abstract precedes thesis as preliminary leaves. Includes bibliographical references (leaves 41-43).
Belec, Marguerite E. "The Scholastic Aptitude Test as a performance predictor of Broadened Opportunity for Officer Selection and Training (BOOST)." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/26235.
Brown, Georgia M. "An investigation of the general aptitude test battery as a predictor of academic success for college students." Online version, 1999. http://www.uwstout.edu/lib/thesis/1999/1999browng.pdf.
Bolinger, Rex W. "The effect of socioeconomic levels and similar instruction on scholastic aptitude test scores of Asian, Black, Hispanic, and White students." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/845922.
Curabay, Muhammet. "Meta-analysis of the predictive validity of Scholastic Aptitude Test (SAT) and American College Testing (ACT) scores for college GPA." Thesis, University of Denver, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10242126.
The college admission systems of the United States require the Scholastic Aptitude Test (SAT) and American College Testing (ACT) examinations. Although some sources suggest that SAT and ACT scores give meaningful information about academic success, others disagree. The objective of this study was to determine whether the SAT and ACT exams have significant predictive validity for college success. This study examined the effectiveness of SAT and ACT scores for predicting college students' first-year GPA with a meta-analytic approach. Most of the studies were retrieved from the Academic Search Complete and ERIC databases and were published between 1990 and 2016. In total, 60 effect sizes were obtained from 48 studies. The average correlation between test score and college GPA was 0.36 (95% confidence interval: .32, .39) using a random-effects model. There was a significant positive relationship between exam score and college success. The moderators examined were publication status and exam type; no effect was found for publication status. A significant effect of exam type was found, with a slightly higher average correlation between SAT score and college GPA than between ACT score and college GPA. No publication bias was found in the study.
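The random-effects pooling this abstract describes can be sketched in a few lines. The sketch below is purely illustrative, not the author's code: it pools study correlations via the Fisher z transform with DerSimonian-Laird weights, and the function names and example inputs are invented.

```python
import math

def fisher_z(r):
    # Fisher z-transform stabilizes the sampling variance of a correlation
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    # Back-transform a pooled z to the correlation metric
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def random_effects_mean(rs, ns):
    """DerSimonian-Laird random-effects pooled correlation.

    rs: per-study correlations; ns: per-study sample sizes.
    """
    zs = [fisher_z(r) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]           # sampling variance of z
    ws = [1.0 / v for v in vs]                 # fixed-effect weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)   # between-study variance
    ws_re = [1.0 / (v + tau2) for v in vs]     # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return inv_fisher_z(z_re)
```

With per-study correlations and sample sizes extracted from the primary studies, the pooled value plays the role of the 0.36 average reported in the abstract.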
Clingman, Elizabeth Ann. "The Relationship of Student Mathematics Scores on the Scholastic Aptitude Test to Teacher Effectiveness as Measured by the Texas Teacher Appraisal System." Thesis, University of North Texas, 1988. https://digital.library.unt.edu/ark:/67531/metadc332301/.
Mullins, Lavorious M. "An investigative study of the predictive validity of high school grade-point averages and scholastic aptitude test scores as they relate to baccalaureate grade-point averages." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2000. http://digitalcommons.auctr.edu/dissertations/3703.
Full textWarry, Jaye Ellen. "An analysis of variables affecting standardized test results at the high school level." Thesis, Boston University, 2003. https://hdl.handle.net/2144/33587.
PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request.
The purpose of this study was to determine the relative contribution to the Massachusetts Comprehensive Assessment System (MCAS) English Language Arts (ELA) score of five variables: Type of Community (Urban or Suburban), Gender, Race, Preliminary Scholastic Aptitude Test (PSAT) Verbal, and PSAT Writing. MCAS is a criterion-referenced examination administered to students at various grade levels to determine their knowledge of the approved curriculum. As of 2003, students must pass the mathematics and English language arts sections in order to receive a diploma. Data for the study were gathered from three urban and four suburban school districts in Massachusetts. Data about 914 students were collected from Summer 2001 to Winter 2002. Multiple regression analysis was used to examine the collective and separate contributions of the five independent variables (gender, race, type of community, score on the verbal subtest of the PSAT, and score on the writing subtest) to the dependent variable: tenth-grade language arts achievement on the MCAS. Results of the statistical analyses showed a strong relationship between MCAS-ELA and the five independent variables, with most of the relationship attributable to the PSAT Verbal test results. Three other variables combined (PSAT Writing, Type of Community, and Gender) accounted for just 4% of the additional variance. Stepwise multiple regression analysis indicated that exclusion of Race did not diminish predictiveness, and Gender added very little to predictiveness. PSAT Verbal, PSAT Writing, and Type of Community were the principal contributors to variation in MCAS-ELA in the study. The four null hypotheses and results follow: There is no significant relationship between the dependent variable (MCAS-ELA) and the independent variables (Type of Community, Gender, Race, PSAT Verbal, and PSAT Writing): rejected. There is no significant relationship between each independent variable and each of the other independent variables: rejected. There is no significant relationship between the dependent variable and the other variables taken together: rejected. There is no significant additional variance in MCAS-ELA accounted for by an independent variable after other variable(s) responsible for greater contributions to variance have accounted for as much of the variance as possible: accepted.
Andersson, Per. "Att studera och bli bedömd : Empiriska och teoretiska perspektiv på gymnasie- och vuxenstuderandes sätt att erfara studier och bedömningar" [To study and be assessed: empirical and theoretical perspectives on upper secondary and adult students' ways of experiencing studies and assessments]. Doctoral thesis, Linköpings universitet, Avdelningen för studier av vuxenutbildning, folkbildning och högre utbildning (VUFo), 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12624.
Fan, Yuan-shun, and 范元順. "A Comparison of Scholastic Aptitude English Test and Department Required English Test." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/64523960522400598087.
National Chung Cheng University
Graduate Institute of Foreign Literature
96
This study aimed to compare the test items of the SAET and DRET administered from 2002 to 2007 in Taiwan and to survey senior high school English teachers’ perspectives on the two tests and their approaches to helping students prepare for each test. The comparison was conducted in terms of item difficulty level, reference word level, text readability (Flesch-Kincaid readability), and item types. With regard to the questionnaire, 55 senior high school English teachers in Taiwan filled it in; its validity was ensured by several testing specialists and a pilot test on a small group of high school teachers. The questionnaire data were tabulated with descriptive statistics and analyzed by the chi-square test to detect whether there were differences in teachers’ responses toward the tests and their teaching strategies. The results indicated that (1) in 2002 and 2003 the SAET contained more difficult items than the DRET; however, from 2004 to 2007 more difficult items were found in the DRET, (2) the DRET examined more words beyond level 5 in the vocabulary test while the SAET had more words below level 4, and the DRET had more words at levels 5-6 in reading texts than the SAET, (3) the reading texts of the DRET had higher readability indices and were more difficult to read, (4) the DRET contained more items requiring higher-level language competence and processing skills than the SAET in cloze test formats, (5) in DRET reading comprehension more inferential items were found, while in the SAET more factual and local items were found, and (6) most English teachers stated that the difficulty of the SAET differs from that of the DRET, with the DRET appearing more difficult in several aspects, while most approaches used by teachers showed no obvious difference between the SAET and the DRET. Based on these results, this study demonstrated that the DRET is generally more difficult than the SAET in various aspects.
Additionally, the test constructions of the SAET and DRET reflected the assumptions and suggestions provided by CEEC studies as well as the results of previous research on college entrance examinations. This indicates that development in test construction was not only built on a solid research foundation but was also moving toward a more systematic and scientific approach. Finally, pedagogical implications and suggestions for future research are provided.
Chou, Ssu-yu, and 周思余. "A Study of Cloze Test Items in Scholastic Aptitude English Test and Department Required English Test." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/07595302968765142752.
National Chung Cheng University
Graduate Institute of Foreign Literature
97
The study analyzed items in the Zong-He-Ce-Yian (cloze) section of the Scholastic Aptitude English Test (SAET) and the Department Required English Test (DRET) from 2004 to 2008. It aimed to investigate what kind of language processing skills, discrete-point or integrative, the cloze items in both tests measure. Passing rates and discrimination indices are used to indicate the difficulty level and discrimination power of each item type. Aside from categorizing items into two general types, the study further classified items into subtypes to find out learners’ strengths and weaknesses. The results of this study are summarized as follows: First, cloze items in the SAET and DRET tend to measure examinees’ discrete-point skills rather than integrative skills. In the DRET from 2006 to 2008, there is an increasing ratio of global items; whether this increase is a coincidence or a continuing trend is worth a follow-up analysis of cloze items in the coming years. Secondly, the passing rates of the two item types were not consistent: in the SAET, the average passing rate of local items is higher than that of global items, whereas in the DRET the opposite result is found. Thirdly, in terms of item discrimination indices, the results showed that the discrimination indices of both global and local items are above .30, revealing that both item types can effectively distinguish high scorers from low scorers. Finally, test takers performed well on items that can be inferred from the context, such as relative words in headed clauses, conjunctions/subordinators, syntactic elements, and textual discourse markers. As for examinees’ weaknesses, learners seem to have problems with items involving fixed usage, such as prepositions, phrases, and reiteration. The study suggests that English teachers and test constructors should be aware that deletions of words/phrases in a cloze test must be well justified.
Test constructors can take the amount of context required to restore each item into consideration to generate items that require test takers to refer to a broader context rather than randomly select a word for deletion. Furthermore, as for learners’ weaknesses, the study proposes that, in addition to teaching what has been provided in a textbook, teachers can provide learners with supplemental material to expose learners to different language contexts.
SHEN, LI-CHING, and 沈例憬. "The Item Analysis of Scholastic Aptitude Math Test In 2015 Academic Year." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/9h39um.
National Kaohsiung Normal University
Department of Mathematics
105
This study aims to apply test analysis and item analysis to the 104 school year General Scholastic Ability Test in mathematics. The data come from all examinees taking the 104 school year Scholastic Ability Test, with 11,000 samples drawn at random by the College Entrance Examination Center (CEEC). On the basis of classical test theory (CTT) and item response theory (IRT), the researcher applies qualitative and quantitative analysis to the test data using the statistical software packages SPSS 17.0, TestGraf98, and BILOG-MG 3.0. Synthesizing the findings, analysis of results, and discussion, the researcher draws the following conclusions: 1. The two-way specification table demonstrates that this test achieves positive content validity. 2. Cronbach's alpha coefficient is .856, which shows that the reliability of the test is good. 3. The average item difficulty index of .447 suggests that the questions are of moderate difficulty. 4. The average item discrimination index of .566 suggests that the test discriminates very well. 5. Distractor analysis shows that all of the multiple-choice items meet the distractibility index. 6. The option characteristic curves suggest that the correct options have good discrimination and the incorrect options have good distractibility in the single-answer multiple-choice items. 7. The item information functions suggest that most fill-in-the-blank items are suitable for moderate-ability examinees.
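The CTT indices this abstract reports (difficulty as the proportion correct, discrimination as the gap between high and low scorers) can be computed with a short sketch. This is a generic illustration under the common 27% high/low-group convention, not the thesis's SPSS or BILOG code, and the data in the usage note are invented.

```python
def item_difficulty(item_scores):
    # CTT difficulty index: proportion of examinees answering the item correctly
    return sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, total_scores, frac=0.27):
    # Difference in the item's passing rate between the examinees in the
    # top and bottom `frac` fraction of total test scores
    ranked = sorted(zip(total_scores, item_scores), reverse=True)
    k = max(1, round(len(ranked) * frac))
    high = sum(item for _, item in ranked[:k]) / k
    low = sum(item for _, item in ranked[-k:]) / k
    return high - low
```

For an invented item answered correctly only by the five highest scorers out of ten, `item_difficulty` returns 0.5 and `item_discrimination` returns 1.0, the pattern of a moderately difficult, highly discriminating item as in conclusions 3 and 4.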
Liang, Jen-hsin, and 梁仁馨. "The relations between the Scholastic Aptitude Test and the achievements of Calculus." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/7c4eqp.
National Central University
Graduate Institute of Mathematics
97
The subjects of this study are the freshmen of National Central University in 2007. The samples include the scores of the Scholastic Aptitude Test (SAT) administered by the College Entrance Examination Center, the achievements in the United Classes of Calculus (UCC), and the classes of the Freshman English Course. The main software used is Matlab and Microsoft Excel. The data are analyzed by k-means clustering, coefficients of correlation, t-tests, and regression. The researcher tries to apply k-means and the graph of distribution, which are commonly used in signal analysis and machine learning, to education and discusses the benefit of the result. After classifying and graphing, the correlation between “the scores of SAT” and “the achievements of UCC” is discussed. There are two conclusions in this research. Technologically, k-means is useful in education and helps researchers observe the data. Educationally, the groups with higher variance or lower English scores on the SAT have a higher risk of failure in UCC. We should pay more attention to these groups of students.
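The k-means grouping the author applies (in Matlab) can be illustrated with a minimal one-dimensional sketch in Python. This is not the thesis's code: the function, the toy scores, and the choice of k are invented for illustration, and the sketch assumes the input scores are distinct numbers.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal 1-D k-means: returns final centers and their clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[idx].append(p)
        # Update step: each center moves to its cluster's mean
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:           # converged
            break
        centers = new_centers
    return centers, clusters
```

Running it on two well-separated groups of invented scores, such as `[1, 2, 3, 10, 11, 12]` with `k=2`, converges to centers 2 and 11, mirroring how the thesis partitions SAT score profiles into risk groups.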
Fan, Kuang-Hui, and 范光輝. "The investigation of applying computerized adaptive testing to the college scholastic aptitude test." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/70809279292673783042.
Chung Yuan Christian University
Department of Psychology
82
The purpose of the study is to investigate the feasibility of applying computerized adaptive testing (CAT) to the college scholastic aptitude test. The investigation focuses on: (1) the correlation between the total score of classical test theory (CTT), the ability estimate of item response theory (IRT), and the ability estimate of CAT; (2) how much the test length is reduced by CAT in comparison with the conventional paper-and-pencil test; and (3) the students' attitude toward the Chinese CAT. The subjects are 172 students from the Department of Psychology, Chung Yuan Christian University. The test data are (1) the conventional test: the college scholastic aptitude test, comprising both verbal reasoning and mathematical comparison subtests, and (2) the Chinese CAT: based on the Chinese Computerized Adaptive Testing (CCAT) system from the Educational Measurement Center of Tainan Teachers College, using items from the college scholastic aptitude test whose item parameters come from Huang's (1994) study. The major findings are: 1. In both the verbal and mathematical subtests, the correlation coefficients of the ability values between CTT and IRT are 0.8515 and 0.8696 respectively, between CTT and CAT 0.6062 and 0.6350 respectively, and between IRT and CAT 0.6884 and 0.6570 respectively. 2. In comparison with the conventional paper-and-pencil test, the item savings resulting from CAT are 72.55% for the verbal test and 86.96% for the mathematical test. 3. Regarding the CAT attitude questionnaire, most subjects prefer the computer test to the paper-and-pencil test. Because the correlation coefficients of the ability values between CAT and CTT, and between CAT and IRT, are not high, the study holds a conservative attitude toward the feasibility of applying CAT to the college scholastic aptitude test. However, if the disadvantages of the CCAT system can be improved and the item parameters of the college scholastic aptitude test can be re-estimated, then the feasibility will be more promising.
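The adaptive logic at the heart of a CAT system of this kind is usually maximum-information item selection under an IRT model. The sketch below illustrates that idea with a two-parameter logistic (2PL) model; it is a generic illustration, not the CCAT system's code, and the item parameters are invented.

```python
import math

def p_2pl(theta, a, b):
    # 2PL probability of a correct response at ability theta
    # a: discrimination, b: difficulty
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    # Fisher information the item contributes at ability theta
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    # CAT selection rule: among items not yet administered, pick the one
    # with maximum information at the current ability estimate
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_info(theta, *items[i]))
```

Because information peaks where difficulty matches ability, the test homes in on each examinee quickly, which is the mechanism behind the large item savings (72.55% and 86.96%) the study reports.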
Jenkins, David James. "The predictive validity of the general scholastic aptitude test (GSAT) for first year students in information technology." Thesis, 2004. http://hdl.handle.net/10530/57.
This study investigates the validity of the General Scholastic Aptitude Test (GSAT) as a tool for predicting academic success for first-year Information Technology (IT) students. Secondly, it seeks to establish whether it is an equally good predictor for the various racial groups in South Africa. Thirdly, it investigates its usefulness as a predictor for the different gender groups. The final aim is to establish whether the GSAT correlates with the Swedish Rating (SR) and English language ability in terms of predicting academic success for first-year IT students. The student group that served as the sample was the first-year IT student group over the three-year period from 2000 to 2002 at the Port Elizabeth (PE) Technikon. The study found that there was a weak link between GSAT and academic success across the entire sample. It proved, however, not to be an equally good predictor across all the racial groups; it was a far more useful tool for white students than for students from the other racial groups. Insofar as the gender groups were concerned, it appeared to have some predictive power across the whole sample but not necessarily equally for the different gender and racial groups. There appeared to be a positive correlation between GSAT and Swedish Rating but not between GSAT and English language ability. From this study it appears that the GSAT has some merit in predicting academic success, although with differing rates of usefulness across different demographic groupings in South Africa. In addition, there are many other factors that may militate against academic success in a student's life and may hinder the usefulness of the GSAT as a predictive tool. If such assessments are to be used, it would seem that they should be used very carefully, that factors reducing the chances of academic success need to be identified, and that institutions ensure that programmes are in place to empower students to maximize their potential.
Marais, Amanda Claudia. "Using the differential aptitude test to estimate intelligence and scholastic achievement at grade nine level." Diss., 2007. http://hdl.handle.net/10500/531.
Educational Studies
M.Ed. (Specialisation in Guidance and Counselling)
Saka, Thomas T. "Differential item functioning among mainland U.S. and Hawaii examinees on the verbal subtest of the scholastic aptitude test." Thesis, 1992. http://hdl.handle.net/10125/9728.
Full textDe, Beer Marie. "The construction and evaluation of a dynamic computerised adaptive test for the measurement of learning potential." Thesis, 2000. http://hdl.handle.net/10500/5337.
Psychology
D. Litt. et Phil. (Psychology)
Cooper, Patricia Anne. "Lexis and the undergraduate : analysing vocabulary needs, proficiencies and problems." Diss., 1999. http://hdl.handle.net/10500/17926.
Linguistics and Modern Languages
M.A. (Linguistics)
Scheepers, Ruth. "Assessing grade 7 students' English vocabulary in different immersion contexts." Thesis, 2003. http://hdl.handle.net/10500/1464.
Linguistics and Modern Languages
(M.A. (Linguistics))