
Dissertations / Theses on the topic 'Multiple choice assessment'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 31 dissertations / theses for your research on the topic 'Multiple choice assessment.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Geering, Margo. "Gender differences in multiple choice assessment." University of Canberra. Education, 1993. http://erl.canberra.edu.au./public/adt-AUC20050218.141005.

Full text
Abstract:
Multiple choice testing has been introduced as an assessment instrument in almost all educational systems during the past twenty years. A growing body of research seems to indicate that tests structured in a multiple choice format favour males. In the ACT, Queensland and Western Australia, a multiple choice examination known as ASAT was used to moderate student scores. Using data from the 1989 ASAT Paper 1, as well as data from the ACT Year 12 cohort of that year, an investigation was made of the items in the ASAT paper. This investigation attempted to identify specific types of questions that enabled males, on average, to perform better than females. Questions that showed a statistically significant difference between the results of males and females were examined further. An ASAT unit was given to students to complete, and their answers to a questionnaire concerning the unit were taped and analysed. The study found that males performed better, on average, than females on the 1989 ASAT Paper 1. The mean difference in the quantitative questions was much greater than in the verbal questions. A number of factors appear to contribute to the difference in performance between males and females. A significantly larger number of females study Mathematics at a lower level, which appears to contribute to females' lower quantitative scores. Females seem to be considerably more anxious about taking tests, and this anxiety persists throughout a multiple choice test. Females lack confidence in their ability to achieve in tests and are tentative about "risk-taking", which is an element of multiple choice tests. The language of the test and male-oriented content may also contribute to females' weaker performance in multiple choice testing.
APA, Harvard, Vancouver, ISO, and other styles
2

Alsubait, Tahani. "Ontology-based multiple-choice question generation." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/ontologybased-multiplechoice-question-generation(07bf2890-6f41-4a11-8189-02d5bb08e686).html.

Full text
Abstract:
Assessment is a well understood educational topic with a long history and a wealth of literature. Given this level of understanding, educational practitioners are able to differentiate, for example, between valid and invalid assessments. Yet although we can test for the validity of an assessment, knowing how to systematically generate a valid assessment remains challenging and needs to be understood. In this thesis we introduce a similarity-based method to generate a specific type of question, namely multiple choice questions (MCQs), and to control their difficulty. This form of question is widely used, especially in contexts where automatic grading is a necessity. Generating MCQs is more challenging than generating open-ended questions because their construction includes the generation of a set of answers. These answers all need to be plausible; otherwise the validity of the question is questionable. Our proposed generation method is applicable to both manual and automatic generation. We show how to implement it by utilising ontologies, for which we also develop similarity measures. Those measures are simply functions which compute the similarity, i.e., degree of resemblance, between two concepts based on how they are described in a given ontology. We show that it is possible to control the difficulty of an MCQ by varying the degree of similarity between its answers. The thesis and its contributions can be summarised in a few points. Firstly, we provide literature reviews for the two main pillars of the thesis, namely question generation and similarity measures. Secondly, we propose a method to automatically generate MCQs from ontologies and control their difficulty. Thirdly, we introduce a new family of similarity measures. Fourthly, we provide a protocol to evaluate a set of automatically generated assessment questions; the evaluation takes into account experts' reviews and students' performance. Finally, we introduce an automatic approach which makes it possible to evaluate a large number of assessment questions by simulating a student trying to answer them.
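The core idea of this abstract, tuning MCQ difficulty by choosing distractors more or less similar to the key, can be sketched in a few lines. The sketch below is illustrative, not the thesis's implementation: the `similarity` argument stands in for one of the ontology-based measures the thesis develops, and the toy Jaccard measure over invented ancestor sets is only a stand-in.

```python
# Illustrative sketch: choosing MCQ distractors by similarity to the key.
# `similarity(a, b)` stands in for an ontology-based measure in [0, 1].

def pick_distractors(key, candidates, similarity, n=3, difficulty="hard"):
    """Rank candidate concepts by similarity to the correct answer.

    High-similarity distractors are more plausible, yielding a harder
    item; low-similarity ones yield an easier item.
    """
    ranked = sorted(candidates, key=lambda c: similarity(key, c),
                    reverse=(difficulty == "hard"))
    return ranked[:n]

# Toy similarity based on shared ancestor sets (invented data; a crude
# stand-in for the measures developed in the thesis).
ancestors = {
    "cat":    {"mammal", "animal"},
    "dog":    {"mammal", "animal"},
    "salmon": {"fish", "animal"},
    "rock":   {"object"},
}

def jaccard(a, b):
    sa, sb = ancestors[a], ancestors[b]
    return len(sa & sb) / len(sa | sb)

print(pick_distractors("cat", ["dog", "salmon", "rock"], jaccard, n=2))
# Hard item -> ['dog', 'salmon']: the most cat-like candidates first.
```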
APA, Harvard, Vancouver, ISO, and other styles
3

Silveira, Igor Cataneo. "Solving University entrance assessment using information retrieval." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04112018-225438/.

Full text
Abstract:
Answering questions posed in natural language is a key task in Artificial Intelligence. However, producing a successful Question Answering (QA) system is challenging, since it requires text understanding, information retrieval, information extraction and text production. This task is made even harder by the difficulties in collecting reliable datasets and in evaluating techniques, two pivotal points for machine learning approaches. This has led many researchers to focus on Multiple-Choice Question Answering (MCQA), a special case of QA where systems must select the correct answer from a small set of alternatives. One particularly interesting type of MCQA is solving standardized tests, such as foreign language proficiency exams, elementary school science exams and university entrance exams. These exams provide easy-to-evaluate, challenging multiple-choice questions of varying difficulty about large, but limited, domains. The Exame Nacional do Ensino Médio (ENEM) is a high-school-level exam taken every year by students all over Brazil. It is widely used by Brazilian universities as an entrance exam and is the world's second biggest university entrance examination in number of registered candidates. The exam consists of writing an essay and solving a multiple-choice test comprising questions on four major topics: Humanities, Language, Science and Mathematics. Questions inside each major topic are not segmented by standard school disciplines (e.g. Geography, Biology, etc.) and often require interdisciplinary reasoning. Moreover, previous editions of the exam and their solutions are freely available online, making it a suitable benchmark for MCQA. In this work we automate solving the ENEM, focusing, for simplicity, on purely textual questions that do not require mathematical thinking. We formulate the problem of answering multiple-choice questions as finding the candidate answer most similar to the statement, and we investigate two approaches for measuring the textual similarity of candidate answer and statement. The first approach addresses this as a Text Information Retrieval (IR) problem, that is, as a problem of finding in a database the most relevant document for a query. Our queries are made of the statement plus a candidate answer, and we use three different corpora as databases: the first comprises plain-text articles extracted from a dump of the Portuguese-language Wikipedia; the second contains only the text given in the question's header; and the third is composed of pairs of question and correct answer extracted from ENEM assessments. The second approach is based on Word Embedding (WE), a method to learn vectorial representations of words such that semantically similar words have close vectors. WE is used in two ways: to augment IR queries by adding words related to those in the query according to the WE model, and to create vectorial representations of the statement and candidate answers. Using these vectorial representations, we answer questions either directly, by selecting the candidate answer that maximizes the cosine similarity to the statement, or indirectly, by extracting features from the representations and feeding them into a classifier that decides which alternative is the answer. Along with the two approaches, we investigate how to enhance them using WordNet, a structured lexical database where words are connected according to relations such as synonymy and hypernymy.
Finally, we combine different configurations of the two approaches and their WordNet variations by creating an ensemble of algorithms found by a greedy search. This ensemble chooses an answer by majority voting of its components. The first approach achieved an average of 24% accuracy using the headers, 25% using the pairs database and 26.9% using Wikipedia. The second approach achieved 26.6% using WE indirectly and 28% directly. The ensemble achieved 29.3% accuracy. These results, slightly above random guessing (20%), suggest that these techniques can capture some of the skills necessary to solve standardized tests. However, more sophisticated techniques that perform text understanding and common sense reasoning may be required to achieve human-level performance.
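The direct word-embedding approach described here reduces to a small amount of linear algebra: average the word vectors of each text and pick the alternative whose vector is most cosine-similar to the statement. The sketch below is a minimal illustration under that reading, not the thesis code; `word_vectors` is an assumed word-to-vector dict, and the majority-vote helper mirrors the ensemble idea.

```python
import numpy as np
from collections import Counter

# Minimal sketch of the "direct" WE approach: represent the statement
# and each alternative as averaged word vectors, then choose the
# alternative with the highest cosine similarity to the statement.
# `word_vectors` is an assumed dict mapping word -> np.ndarray.

def sentence_vector(text, word_vectors, dim):
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def answer(statement, alternatives, word_vectors, dim=300):
    s = sentence_vector(statement, word_vectors, dim)
    scores = [cosine(s, sentence_vector(a, word_vectors, dim))
              for a in alternatives]
    return int(np.argmax(scores))      # index of the chosen alternative

def ensemble_answer(votes):
    # Majority vote over the indices chosen by several solvers.
    return Counter(votes).most_common(1)[0][0]
```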
APA, Harvard, Vancouver, ISO, and other styles
4

Heuer, Sabine. "An Evaluation of Test Images for Multiple-Choice Comprehension Assessment in Aphasia." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1090264500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kjosnes, Berit. "The art of assessment : How to utilise multiple-choice within the field of law." Thesis, Umeå universitet, Institutionen för tillämpad utbildningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-144912.

Full text
Abstract:
The purpose of this essay is to gain insight into the utilisation of MCQs within the field of law at Swedish upper secondary schools, and in particular into what kind of knowledge requirements can be tested with MCQs. The difference in test results between non-MCQs and MCQs was also analysed, both overall and by gender. An MCQ test was evaluated against the knowledge requirements, while quantitative data gathered from the use of MCQs within the field of employment law were analysed. It was found that it should be possible to utilise MCQs within the field of law. With regard to the difference in results between non-MCQs and the MCQ under scrutiny, it was found that high-performing students scored more than one grade lower on the MCQ than their average on three non-MCQs in other subjects. Low-performing students saw a small improvement in their results. There was a slight difference in scores between genders: although the average female score across all tests was higher than that of their male counterparts, females scored a little lower on their first MCQ test. The scope of this research was felt to be too small to allow any firm conclusions to be drawn.
APA, Harvard, Vancouver, ISO, and other styles
6

Davies, Phil. "Subjectivity in the innovative usage of both computerized multiple-choice questioning and peer-assessment." Thesis, University of South Wales, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chaoui, Nayla Aad. "Finding Relationships Between Multiple-Choice Math Tests and Their Stem-Equivalent Constructed Responses." Scholarship @ Claremont, 2011. http://scholarship.claremont.edu/cgu_etd/21.

Full text
Abstract:
The study takes a close look at relationships between scores on a Mathematics standardized test in two different testing formats - Multiple-Choice (MC) and Constructed Response (CR). Many studies have been dedicated to finding correlations between item format characteristics with regard to race and gender. Few studies, however, have attempted to explore differences in the performance of English Learners in a low-performing, predominantly Latino high school. The study also determined relationships between math scores and gender, between math scores and language proficiency, and between CAHSEE and CST scores. Statistical analyses were performed using correlations, descriptive statistics, and t-tests. Empirical data were also disaggregated and analyzed by gender and language proficiency. Results revealed significant positive correlations between MC and CR formats. T-tests displayed statistically significant differences between the means of the formats, with boys and English Only students having better scores than their counterparts. Frequency tables examining proficiency levels of students by gender and language proficiency revealed differences between MC and CR tests, with boys and English Only students earning better levels of proficiency. Significant positive correlations were shown between CST scores and multiple-choice items, but none were found for CST scores and constructed response items.
APA, Harvard, Vancouver, ISO, and other styles
8

Oellermann, Susan Wilma, and Alexander Dawid van der Merwe. "Can Using Online Formative Assessment Boost the Academic Performance of Business Students? An Empirical Study." Kamla-Raj, 2015. http://hdl.handle.net/10321/1571.

Full text
Abstract:
The declining quality of first year student intake at the Durban University of Technology (DUT) prompted the addition of online learning to traditional instruction. The time spent by students in an online classroom and their scores in subsequent multiple-choice question (MCQ) tests were measured. Tests on standardised regression coefficients showed self-test time as a significant predictor of summative MCQ performance while controlling for ability. Exam MCQ performance was found to be associated, positively and significantly, with annual self-test time at the 5 percent level and a significant relationship was found between MCQ marks and year marks. It was concluded that students’ use of the self-test tool in formative assessments has a significant bearing on students’ year marks and final grades. The negative nature of the standardised beta coefficient for gender indicates that, when year marks and annual self-test time are considered, males appear to have performed slightly better than females.
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Jinghua. "The effect of performance-based assessment on eighth grade students' mathematics achievement." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Heidenreich, Sebastian. "Do I care or do I not? : an empirical assessment of decision heuristics in discrete choice experiments." Thesis, University of Aberdeen, 2016. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=229468.

Full text
Abstract:
Discrete choice experiments (DCEs) are widely used across economic disciplines to value multi-attribute commodities. DCEs ask survey respondents to choose between mutually exclusive hypothetical alternatives that are described by a set of common attributes. The analysis of DCE data assumes that respondents consider and trade off all attributes before making these choices. However, several studies show that many respondents ignore attributes. Respondents might choose not to consider all attributes in order to simplify choices, or as a preference, because some attributes are not important to them. However, empirical approaches that account for attribute non-consideration assume only simplifying choice behaviour. This thesis shows that this assumption may lead to misleading welfare conclusions and therefore suboptimal policy advice. The analysis explores why attributes are ignored, using statistical analysis or by asking respondents; both approaches are commonly used to identify attribute non-consideration in DCEs. The results of this thesis suggest that respondents struggle to recall ignored attributes and their reasons for non-consideration unless attributes are ignored due to non-valuation. This questions the validity of approaches in the literature that rely on respondents' ability to reflect on their decision rule. Further analysis explores how the complexity of choices affects the probability that respondents do not consider all attributes. The results show that attribute consideration first increases and then decreases with complexity. This raises questions about the optimal design complexity of DCEs. The overall findings of the thesis challenge the applicability of current approaches that account for attribute non-consideration in DCEs to policy analysis and emphasise the need for further research in this area.
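For readers unfamiliar with DCE analysis, the standard model assumed here (and the non-attendance behaviour the thesis critiques) can be sketched with a conditional logit: each alternative's utility is a weighted sum of its attributes, and ignoring an attribute amounts to zeroing its weight. The sketch below is purely illustrative; the attributes, coefficients and numbers are invented, and this is not the thesis's estimation code.

```python
import numpy as np

# Schematic conditional-logit choice probabilities for a DCE.
# Rows of X: alternatives; columns: attributes (e.g. cost, waiting time).
# beta: taste weights. Standard analysis assumes every attribute is
# considered; "non-attendance" is modelled by zeroing ignored columns.

def choice_probs(X, beta, attended=None):
    if attended is not None:
        beta = beta * attended          # ignored attributes get weight 0
    v = X @ beta                        # deterministic utilities
    expv = np.exp(v - v.max())          # numerically stable softmax
    return expv / expv.sum()

X = np.array([[10.0, 2.0],              # alt A: cost 10, wait 2
              [ 6.0, 5.0]])             # alt B: cost 6, wait 5
beta = np.array([-0.3, -0.4])           # invented coefficients

print(choice_probs(X, beta))                          # full attendance
print(choice_probs(X, beta, np.array([1.0, 0.0])))    # wait ignored
```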
APA, Harvard, Vancouver, ISO, and other styles
11

Escalante, Talavera Juan M. "ESL Students' Reading Behaviors on Multiple-Choice Items at Differing Proficiency Levels: An Eye-Tracking Study." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7424.

Full text
Abstract:
Theorists have been concerned with the overlap of reading and problem solving for at least a century (Thorndike 1917, 1973-1974; Sternberg & Frensch, 2014). Various reading models have been proposed including bottom-up and top-down reading processing (Goodman, 1972; Gough, 1972). In second language literature, theorists have further noted that reading consists of strategic, purposeful, and interactive processes (Grabe, 2009). In test taking situations, problem solving is important because it can compensate for students' language proficiencies. In spite of research showing the use of problem solving in reading, less is known about how learners actually read and problem solve in test-taking situations. This study centers around Khalifa, Weir and colleagues' model for cognitive processing in reading (Weir, Hawkey, Green, Unaldy, & Devi, 2009) in combination with eye-tracking technology in order to examine how ESL readers employ careful and expeditious reading. Data were gathered from 50 students attending a university sponsored Intensive English Program. Participants read eight validated reading comprehension items at varying difficulty levels while their eye movements were recorded. Results indicate that student level may not be a factor in how carefully and expeditiously a student reads. However, statistical analyses suggest that text difficulty may be a factor in how carefully students read.
APA, Harvard, Vancouver, ISO, and other styles
12

Liao, Jui-Teng. "Multiple-choice and short-answer questions in language assessment: the interplay between item format and second language reading." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6178.

Full text
Abstract:
Multiple-choice questions (MCQs) and short-answer questions (SAQs) are the most common test formats for assessing English reading proficiency. While the former provides test-takers with prescribed options, the latter requires short written responses. Test developers favor MCQs over SAQs for the following reasons: less time required for rating, high rater agreement, and wide content coverage. This mixed methods dissertation investigated the impacts of test format on reading performance, metacognitive awareness, test-completion processes, and task perceptions. Participants were eighty English as a second language (ESL) learners from a Midwestern community college. They were first divided into two groups of approximately equivalent reading proficiency and then completed MCQ and SAQ English reading tests in different orders. After completing each format, participants filled out a survey about demographic information, strategy use, and perceptions of test formats. They also completed a 5-point Likert-scale survey to assess their degree of metacognitive awareness. At the end, sixteen participants were randomly chosen to engage in retrospective interviews focusing on their strategy use and task perceptions. This study employed a mixed methods approach in which quantitative and qualitative strands converged to draw an overall meta-inference. For the quantitative strand, descriptive statistics, paired sample t-tests, item analyses, two-way ANOVAs, and correlation analyses were conducted to investigate 1) the differences between MCQ and SAQ test performance and 2) the relationship between test performance and metacognitive awareness. For the qualitative strand, test-takers' MCQ and SAQ test-completion processes and task perceptions were explored using coded interview and survey responses related to strategy use and perceptions of test formats. Results showed that participants performed differently on MCQ and SAQ reading tests, even though the two tests were highly correlated. The paired sample t-tests revealed that participants' English reading and writing proficiencies might account for the MCQ and SAQ performance disparity. Moreover, there was no positive relationship between reading test performance and the degree of metacognitive awareness generated by the frequency of strategy use. Correlation analyses suggested that participants' English reading proficiency mattered more than strategy use. Although the frequency of strategy use did not benefit test performance, the strategies implemented on MCQ and SAQ tests were found to generate interactive processes allowing participants to gain a deeper understanding of the source texts. Furthermore, participants' perceptions of MCQs, SAQs, and a combination of both revealed positive and negative influences among test format, reading comprehension, and language learning. Therefore, participants' preferences regarding test format should be considered when measuring their English reading proficiency. This study has pedagogical implications for the use of various test formats in L2 reading classrooms.
APA, Harvard, Vancouver, ISO, and other styles
13

Odo, Dennis Murphy. "An investigation of the cross-mode comparability of a paper and computer-based multiple-choice cloze reading assessment for ESL learners." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42086.

Full text
Abstract:
This study was designed to determine whether a computer-based version of a standardized cloze reading test for second language learners is comparable to its traditional paper-based counterpart, and to identify how test takers' computer familiarity and perceptions of paper and computer-based tests related to their performance across testing modes. Previous comparability research for second language speakers revealed mixed results: some studies found that the two forms are comparable while others found they are not. Findings on the connection between computer attitudes and computer test performance were also mixed. One hundred and twenty high school ELL students were recruited for the study. The research instruments included both paper and computer-based versions of a locally developed reading assessment. The two tests are the same in terms of content, questions, pagination and layout. The design was a Latin square, so that two groups of learners took the tests in opposite orders and their scores were compared. Participants were also asked to complete questionnaires about their familiarity with computers and their perceptions of each of the two testing modes. Results indicate that the paper and computer-based versions of the test are comparable. A regression analysis showed that there is a relationship between computer familiarity and computer-based LOMERA performance. Mode preference survey data pointed to differences in preferences depending on each unique test feature. These results help validate the cross-mode comparability of assessments beyond the traditional discrete-point multiple choice tests that tend to predominate in current research.
APA, Harvard, Vancouver, ISO, and other styles
14

Willing, Sonja [Verfasser], Jochen [Akademischer Betreuer] Musch, and Ute [Akademischer Betreuer] Bayen. "Discrete-Option Multiple-Choice: Evaluating the Psychometric Properties of a New Method of Knowledge Assessment / Sonja Willing. Gutachter: Jochen Musch; Ute Bayen." Düsseldorf: Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2013. http://d-nb.info/1044146109/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Greenberg, Ariela Caren. "Fighting Bias with Statistics: Detecting Gender Differences in Responses on Items on a Preschool Science Assessment." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/665.

Full text
Abstract:
Differential item functioning (DIF) and differential distractor functioning (DDF) are methods used to screen for item bias (Camilli & Shepard, 1994; Penfield, 2008). Using an applied empirical example, this mixed-methods study examined the congruency and relationship of DIF and DDF methods in screening multiple-choice items. Data for Study I were drawn from item responses of 271 female and 236 male low-income children on a preschool science assessment. Item analyses employed a common statistical approach, the Mantel-Haenszel log-odds ratio (MH-LOR), to detect DIF in dichotomously scored items (Holland & Thayer, 1988), and extended the approach to identify DDF (Penfield, 2008). Findings demonstrated that using MH-LOR to detect DIF and DDF supported the theoretical relationship that the magnitude and form of DIF are dependent on the DDF effects, and demonstrated the advantages of studying DIF and DDF together in multiple-choice items. A total of 4 items with DIF and DDF and 5 items with only DDF were detected. Study II incorporated an item content review, an important but often overlooked and under-published step of DIF and DDF studies (Camilli & Shepard, 1994). Interviews with 25 female and 22 male low-income preschool children and an expert review helped to interpret the DIF and DDF results and their comparison, and determined that a content review of studied items can reveal reasons for potential item bias that are often congruent with the statistical results. Patterns emerged and are discussed in detail. The quantitative and qualitative analyses were conducted in an applied framework of examining the validity of the preschool science assessment scores for evaluating science programs serving low-income children; however, the techniques can be generalized for use with measures across various disciplines of research.
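For context, the Mantel-Haenszel log-odds ratio named in this abstract pools 2x2 tables (group by item correctness) across matched ability strata. A minimal sketch of the computation, on invented counts rather than the study's data:

```python
import math

# Sketch of the Mantel-Haenszel common odds ratio and its log (MH-LOR),
# pooled over ability strata. Each stratum is a 2x2 table:
# (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
# All counts below are invented for illustration.

strata = [
    (30, 10, 22, 18),   # low-ability stratum
    (45,  5, 38, 12),   # high-ability stratum
]

def mh_log_odds_ratio(strata):
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n   # ref correct * focal incorrect
        den += b * c / n   # ref incorrect * focal correct
    return math.log(num / den)

print(mh_log_odds_ratio(strata))  # > 0 suggests DIF favouring the
                                  # reference group under this coding
```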
APA, Harvard, Vancouver, ISO, and other styles
16

Huntley, Belinda. "Comparing different assessment formats in undergraduate mathematics." Thesis, Pretoria [S.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-01202009-163129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Siegel, Tracey Jane. "Assessment Practices at an Associate Degree Nursing Program." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/603.

Full text
Abstract:
Nursing programs have traditionally used teacher-developed multiple-choice (MCQ) examinations to prepare students for licensure. Researchers have determined that poorly constructed MCQ tests used as formative and summative evaluations may penalize nursing students and impact progression and retention in nursing programs. The purpose of this exploratory case study was to examine issues related to the use of teacher-developed MCQ examinations as the only method of student assessment in the theory component of nursing courses. The National League for Nursing Core Competencies for Nurse Educators and the revised Bloom's Taxonomy were used as the conceptual frameworks for this study. The Director of the Nursing Program and 9 faculty members participated. Data were collected from a review of documents, 2 focus groups, faculty-maintained diaries, and an interview. During data analysis, categories were identified and themes emerged, revealing the key findings. Using a single method alone to assess student learning limited the opportunity for formative assessment, the ability to assess higher order thinking, and the development of metacognition on the part of students. To assist faculty in creating assessments of student learning that would address these themes, a 3-day faculty professional development project followed by 4 monthly lunch and learn sessions was designed. Providing additional faculty development in assessment methods may promote positive social change as it may ultimately increase the retention of qualified students to meet the demand for registered nurses within the community.
APA, Harvard, Vancouver, ISO, and other styles
18

Hudnett, Richard. "Understanding the Admissions Experience of Admitted Students Who Fail to Enroll: A Multiple Case Study." Thesis, NSUWorks, 2015. https://nsuworks.nova.edu/fse_etd/20.

Full text
Abstract:
The main purpose of this applied dissertation was to explore why a new student who is fully admitted to an academic program never proceeds to registration during their first semester. A research study addressing these instances might help college administrators improve conversion rates of admitted students. The fact that four of the six participants applied to only one university, the researcher believes, supports several prior research studies that identified a strong connection between a student's positive perception of a college and the likelihood that they enroll in it. All of these participants did in fact perceive the university positively; accordingly, many of them applied only to it for admission. Several participants also mentioned the university's course offerings, format, and academic fit among their reasons for applying. However, the study results revealed that it was not their positive perception of the university, or whether it was a good academic fit, but rather the lack of communication with the university during the enrollment process, difficulty in navigating the financial aid process, and a common need for a more personalized experience with their financial aid needs that led them not to enroll. The researcher identified the six participant experiences and topics mentioned most often: financial aid, cost, personalized experience, level of ease or difficulty of enrollment, an expressed need for more information, and communication. From these six, three major emerging themes became apparent: Personalized Experience, Communication, and Financial Aid. The results of this study, such as the identification of multiple consistent themes explaining why an admitted student chooses not to enroll, can add value for any university, especially one seeking to improve its enrollment management processes, the overall experience of its admitted prospective students within its admission system, and its admitted-to-enrolled conversion rate.
APA, Harvard, Vancouver, ISO, and other styles
19

Brits, Gideon Petrus. "University student performance in multiple choice questions : an item analysis of Mathematics assessments." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/65477.

Full text
Abstract:
The University of Pretoria has experienced a significant increase in student numbers in recent years. This increase has necessarily impacted the Department of Mathematics and Applied Mathematics. The department is understaffed in terms of lecturing staff, which impacts negatively on postgraduate study and research outputs. The disproportion between teaching staff and the lecturing load and research demands has led to an excessive grading and administrative load on staff. The department therefore decided to use multiple choice questions in assessments that could be graded by means of computer software. The responses to the multiple choice questions are captured on optical reader forms that are processed centrally. Multiple choice questions are combined with constructed response questions (written questions) in semester tests and end-of-term examinations. The quality of the multiple choice questions had never before been determined. This research project asks the research question: How do the multiple choice questions in mathematics, as posed to first-year engineering students at the University of Pretoria, comply with the principles of good assessment for determining quality? A quantitative secondary analysis is performed on data sourced from the first-year engineering calculus module WTW 158 for the years 2015, 2016 and 2017. The study shows that, in most cases, the questions are commendable, with well-balanced indices of discrimination and difficulty, including well-chosen functional distractors. The item analysis included determining the cognitive level of each multiple choice question. The problematic questions are highlighted and recommendations are made to improve or revise such questions for future use.
Dissertation (MEd)--University of Pretoria, 2017.
Science, Mathematics and Technology Education
MEd
Unrestricted
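For context, the two classical indices reported in this abstract (difficulty and discrimination) can be computed from a 0/1 score matrix as sketched below. This is an illustration, not the dissertation's analysis code; the upper/lower 27% grouping is one common convention, and the demo data are randomly generated.

```python
import numpy as np

# Classical item analysis sketch: difficulty (proportion correct) and
# a discrimination index (upper-minus-lower proportion correct, using
# 27% groups by total score).

def item_indices(scores, tail=0.27):
    scores = np.asarray(scores)          # shape: (students, items), 0/1
    totals = scores.sum(axis=1)
    order = np.argsort(totals)
    k = max(1, int(round(tail * len(totals))))
    lower, upper = scores[order[:k]], scores[order[-k:]]
    difficulty = scores.mean(axis=0)     # p-value of each item
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

rng = np.random.default_rng(0)
demo = (rng.random((50, 4)) < [0.9, 0.7, 0.5, 0.3]).astype(int)
p, d = item_indices(demo)
print(np.round(p, 2), np.round(d, 2))
```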
APA, Harvard, Vancouver, ISO, and other styles
20

Miranda, Paulo Henrique de Freitas. "AVALIAÇÃO DA APRENDIZAGEM: MÚLTIPLA ESCOLHA VERSUS QUESTÕES ABERTAS EM COMPUTADOR VERSUS PAPEL." Pontifícia Universidade Católica de Goiás, 2015. http://tede2.pucgoias.edu.br:8080/handle/tede/3481.

Full text
Abstract:
Assessment evolves constantly in search of better educational practices through the development of education. In this perspective, this study aims to contribute by analyzing multiple-choice and open-question assessments administered on paper and on computer. Four tests were carried out in four different models (multiple choice on computer, multiple choice on paper, open questions on computer and open questions on paper). The focus of the research was an analysis of the scores in each model, the duration of the test, and student satisfaction at the end of the tests. The results are presented and discussed within a context of controlled assessment, interpreting the conditions under which the models can be equivalent, along with their advantages and disadvantages. The paper-based multiple-choice test showed the best results in terms of test scores.
APA, Harvard, Vancouver, ISO, and other styles
21

Kushlaf, Najah. "Aide à la décision pour l'apprentissage." Thesis, Valenciennes, 2014. http://www.theses.fr/2014VALE0010/document.

Full text
Abstract:
The research carried out in this thesis proposes decision support to improve the quality of learning. Learning involves two dimensions: a human dimension and a pedagogical one. The human dimension includes the learner and the teacher. The pedagogical dimension, represented by the curriculum set by the educational establishment, corresponds to the knowledge to be taught (savoir). The learner transforms this taught knowledge into personal knowledge (connaissance). The two notions are therefore quite different: the distance between them is the distance between what the teacher presents and what the learner acquires. The quality of learning concerns the learners who go to school to acquire knowledge. In fact, learning consists of interiorizing taught knowledge. This internalization requires effort towards persistent intellectual change and demands continuity based on past experiences. The acquisition of taught knowledge and its transformation into personal knowledge by the learner are influenced by several factors that act positively or negatively on the quantity and quality of this knowledge. Confusion between the two notions can lead the learner to value or to ignore his knowledge. The process of constructing knowledge from the knowledge being taught requires constant evaluation procedures. The evaluation process then assesses the structure of the learner's knowledge in order to make decisions intended to make it evolve. However, during an evaluation, confusion between the two notions can lead the learner to value the score, neglecting the attention he should give to the knowledge-transformation process in favour of the most faithful possible restitution of what was taught. This confusion can be detected provided that the evaluation includes a processual dimension. The evaluation can then be better associated with actions for improving and transforming knowledge, and can be approached from a decision-support perspective. In this work we therefore show that a learning situation is akin to a decision-aiding situation.
APA, Harvard, Vancouver, ISO, and other styles
22

Breakall, Jared B. "Characterizing Multiple-Choice Assessment Practices in Undergraduate General Chemistry." Thesis, 2019.

Find full text
Abstract:

Assessment of student learning is ubiquitous in higher education chemistry courses because it is the mechanism by which instructors can assign grades, alter teaching practice, and help their students to succeed. One type of assessment that is popular in general chemistry courses, yet difficult to create effectively, is the multiple-choice assessment. Despite its popularity, little is known about the extent to which multiple-choice general chemistry exams adhere to accepted design practices, or about the processes that general chemistry instructors engage in while creating these assessments. Further understanding of multiple-choice assessment quality and the design practices of general chemistry instructors could inform efforts to improve the quality of multiple-choice assessment practice in the future. This work attempted to characterize multiple-choice assessment practices in undergraduate general chemistry classrooms by (1) conducting a phenomenographic study of general chemistry instructors' assessment practices and (2) designing an instrument that can detect violations of item writing guidelines in multiple-choice chemistry exams.

The phenomenographic study of general chemistry instructors’ assessment practices included 13 instructors from the United States who participated in a three-phase interview. They were asked to describe how they create multiple-choice assessments, to evaluate six multiple-choice exam items, and to create two multiple-choice exam items using a think-aloud protocol. It was found that the participating instructors considered many appropriate assessment design practices yet did not utilize, or were not familiar with, all the appropriate assessment design practices available to them.

Additionally, an instrument was developed that can be used to detect violations of item writing guidelines in multiple-choice exams. The instrument, known as the Item Writing Flaws Evaluation Instrument (IWFEI), was shown to be reliable between users. Once developed, the IWFEI was used to analyze 1,019 general chemistry exam items. The instrument provides a tool for researchers to study adherence to item writing guidelines, as well as a tool for instructors to evaluate their own multiple-choice exams. It is hoped that use of the IWFEI will improve multiple-choice item writing practice and quality.
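The abstract does not reproduce the IWFEI's criteria, but rule-based screening for well-known item-writing flaws can look roughly like the sketch below. The three rules shown come from general MCQ-writing guidelines, not from the instrument itself, and all names are hypothetical.

```python
# Hypothetical sketch of automated screening for common item-writing
# flaws; the rules come from general MCQ guidelines, not the IWFEI.

def flag_flaws(stem, options, key_index):
    flaws = []
    texts = [o.lower() for o in options]
    if any("all of the above" in t or "none of the above" in t for t in texts):
        flaws.append("uses 'all/none of the above'")
    if " not " in f" {stem.lower()} ":
        flaws.append("negatively worded stem")
    lengths = [len(o) for o in options]
    if lengths[key_index] == max(lengths) and lengths.count(max(lengths)) == 1:
        flaws.append("key is the single longest option")
    return flaws

print(flag_flaws(
    "Which of the following is NOT a noble gas?",
    ["Helium", "Nitrogen, a diatomic gas abundant in air", "Neon", "Argon"],
    key_index=1))
# -> ['negatively worded stem', 'key is the single longest option']
```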

The results of this work provide insight into the multiple-choice assessment design practices of general chemistry instructors and an instrument that can be used to evaluate multiple-choice exams for item writing guideline adherence. Conclusions, recommendations for professional development, and recommendations for future research are discussed.

APA, Harvard, Vancouver, ISO, and other styles
23

Gu, Zhimei. "Maximizing the Potential of Multiple-choice Items for Cognitive Diagnostic Assessment." Thesis, 2011. http://hdl.handle.net/1807/31770.

Full text
Abstract:
When applying cognitive diagnostic models, the goal is to accurately estimate students' diagnostic profiles. The accuracy of these estimates may be enhanced by looking at the types of incorrect options a student selects. This thesis research examines the additional diagnostic information available from the distractors in multiple-choice items used in large-scale achievement assessments and identifies optimal conditions for extracting diagnostic information. The study is based on analyses of both real student responses and simulated data. The real student responses are from a large-scale provincial math assessment for grade 6 students in Ontario. Data were then simulated under different skill dimensionality and item discrimination conditions. Comparisons were made between student profile estimates from the DINA and MC-DINA models. The MC-DINA model is a newly developed cognitive diagnostic model in which the probability of a student choosing a particular item option depends on how closely the student's cognitive skill profile matches the skills tapped by that option. The results from the simulation analysis suggested that when the simulated data included additional diagnostic information in the distractors, the MC-DINA model was able to use that information to improve the estimation of the student profiles, which shows the utility of the additional information obtained from item distractors. The value of adding information from distractors was greater when there was lower item discrimination and more skill multidimensionality. However, in the real data, the keyed options provided more diagnostic information than the distractors, and there was little information in the distractors that could be utilized by the MC-DINA model. This implies that current math test items could be further developed to include diagnostically rich distractors. The study offers some suggestions for the design of multiple-choice test items and their formative use.
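As background, the DINA model that this work builds on gives each item a set of required skills: an examinee who masters all of them answers correctly unless they slip, and one who does not can still guess. A minimal sketch follows (not the thesis code; the MC-DINA extension, which models the probability of each response option rather than just correctness, is omitted).

```python
# Background sketch of the DINA item-response model. An examinee either
# masters all of an item's required skills (eta = True) or not; slip
# and guess parameters then give the probability of a correct response.

def dina_prob_correct(alpha, q, slip, guess):
    """alpha: examinee skill-mastery vector (0/1);
    q: item's required-skill vector (0/1);
    slip, guess: item parameters in (0, 1)."""
    eta = all(a == 1 for a, needed in zip(alpha, q) if needed)
    return (1 - slip) if eta else guess

# A master of both required skills mostly answers correctly...
print(dina_prob_correct([1, 1], [1, 1], slip=0.1, guess=0.2))  # 0.9
# ...while a non-master can still guess the key.
print(dina_prob_correct([1, 0], [1, 1], slip=0.1, guess=0.2))  # 0.2
```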
APA, Harvard, Vancouver, ISO, and other styles
24

Tittenberger, Peter, and Dario Schor. "Taking a WebCT Quiz." 2006. http://hdl.handle.net/1993/196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Chuang, Feng-Kuei (莊峰魁). "Online Assessment with multiple choice and constructed response items for the “Light” unit." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/80602264255211194283.

Full text
Abstract:
Master's thesis
Asia University
Department of Computer Science and Information Engineering, in-service master's program
Academic year 98 (2009-10)
Traditional written tests and computerized tests usually fail to provide teachers with enough information to help students learn: multiple-choice tests, which are graded on students' final answers, cannot confirm whether students have learned what they are supposed to, while open-ended questions take much time to grade and are not graded objectively. This research therefore designs computerized constructed-response tests for science which record students' detailed problem-solving processes and grade them automatically by diagnosing and classifying students' bugs, so that students can get real-time feedback. After designing appropriate constructed-response items, the research successfully builds automated analysis models and constructs diagnostic systems. Students' bugs are diagnosed by the computer system, so students get learning feedback immediately and teachers get complete information about students' misconceptions without spending much time grading. The conclusions of the research are as follows: 1. The information about bugs acquired from the constructed-response items is more detailed and diversified than before; the average correctness rate of the error types judged by the computer system is 100%, which shows that the diagnostic tests constructed in this research are valid. 2. The average optimal bug identification rate of the multiple-choice tests is 90.22%, while their sub-skill average optimal error-type identification rate is 93.91%; the average optimal error-type identification rate of the constructed-response items is 92.09%, while their sub-skill average optimal bug identification rate is 95.96%. 3. The question-economy rate of the constructed-response items is 19.86%, a little higher than that of the multiple-choice tests. Adaptive tests can thus be developed using constructed-response items.
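The diagnostic step described here, inferring a student's bug from item responses with a Bayesian network, can be illustrated with a stripped-down naive-Bayes version. All probabilities below are invented, and the thesis builds full Bayesian networks over sub-skills rather than this flat hypothesis set.

```python
# Minimal illustration of Bayesian bug diagnosis (invented numbers).
# Each hypothesis is "student holds misconception M"; each observed
# item response updates the posterior via Bayes' rule.

priors = {"no_bug": 0.6, "reflection_bug": 0.25, "refraction_bug": 0.15}

# P(item answered correctly | hypothesis), per item. Invented.
p_correct = {
    "item1": {"no_bug": 0.9, "reflection_bug": 0.2, "refraction_bug": 0.8},
    "item2": {"no_bug": 0.85, "reflection_bug": 0.8, "refraction_bug": 0.1},
}

def posterior(responses):
    post = dict(priors)
    for item, correct in responses.items():
        for h in post:
            p = p_correct[item][h]
            post[h] *= p if correct else (1 - p)
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

# Student misses item1 but gets item2: the reflection bug becomes likely.
print(posterior({"item1": False, "item2": True}))
```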
APA, Harvard, Vancouver, ISO, and other styles
26

Yueh, Queenie Wie (岳瑋). "Comparing Effectiveness of Global Competence Assessment of Open-End Question and Multiple-Choice Question." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/95wk8t.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Graduate Institute of Educational Information and Measurement, in-service master's program
Academic year 106 (2017-18)
The Organisation for Economic Co-operation and Development (OECD) considers "global competence" one of the most important qualities that future citizens should possess. It was added as an assessment domain in 2018 by the Programme for International Student Assessment (PISA), alongside the existing assessments of reading, mathematics and science. This assessment focuses on students' "capability" rather than "knowledge", and its contextualized questions involve knowledge, abilities and attitudes relating to global issues. Furthermore, "cultural learning and international comprehension" is one of the ten basic abilities listed in Taiwan's Grade 1-9 Curriculum Guidelines, and international education, such as international perspectives and multicultural education, is mentioned in the new 12-year Compulsory Education Guidelines. Global competence is thus one of the essential abilities in the future educational field. Following the structure of the PISA test, global issues were collected and turned into multiple-choice and open-end questions. After the questions were examined and confirmed by experts, they were scored through analysis of the answer contents. Two computerized assessment units were designed in this research: "Tertiary Industry and International Tourism" and "Social Network". This research aims to measure the performance of senior high school students in Taiwan on both assessment units, and the effectiveness of the multiple-choice and open-end question types. Purposive sampling was used, and the participants were randomly chosen from a senior high school in Tainan, Taiwan, yielding 119 valid samples. The results indicate that the correlation coefficients between computerized and human scoring are 0.666 and 0.717, which demonstrates good consistency between the two. Furthermore, the average accuracy of computerized scoring is 0.675, which means that the open-end question type is acceptable. Finally, overall performance on the multiple-choice global competence questions was better than on the open-end questions, and the students lacked ability in examining local, global and transcultural issues.
APA, Harvard, Vancouver, ISO, and other styles
27

Tseng, Chih-wei (曾志偉). "A Comparison of Multiple-Choice and Constructed-Response Tests for the Assessment of Translation Ability." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/01105585208437183973.

Full text
Abstract:
Master's thesis
National Pingtung Institute of Commerce
Department of Applied Foreign Languages
Academic year 96 (2007-08)
Recently, translation has been receiving great attention in institutions of higher education. Such institutions are reorganizing to offer required and elective courses designed both to improve students' language proficiency and to enhance professional training in translation. As a result, translation is now a test item in the entrance examinations of both universities and graduate schools. However, translation ability is currently evaluated by multiple-choice questions rather than constructed-response questions in some college or graduate school entrance exams. Much research has shown that multiple-choice test scores are comparatively higher than constructed-response test scores, and it is doubtful whether examinees' translation ability can be accurately evaluated simply by a few multiple-choice questions with limited, fixed answers. The purpose of this study is not only to explore whether the average score on the multiple-choice test differs significantly from the average score on the equivalent constructed-response test, but also to investigate the difference between the strategies used in the two forms of tests. A total of 411 participants from a Department of Foreign Languages were recruited. The results show that there is a significant difference between the two forms of tests, and that the strategies used in answering translation questions also differ significantly. Consequently, the form of the test has a significant influence on a subject's translation performance. In other words, a learner's ability to translate from English to Chinese cannot be measured equally by multiple-choice and constructed-response tests. It can be inferred that a learner's translation capability cannot be effectively assessed and verified simply by using multiple-choice tests.
APA, Harvard, Vancouver, ISO, and other styles
28

Chi-Te, Wu, and 吳啟得. "Computerized Multiple Choice Items and Performance Assessment for the Fifth Grade Math 'Different Denominator Fraction Addition and Subtraction' Unit." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/86165924084336449907.

Full text
Abstract:
Master's thesis
Asia University
In-service Master's Program, Department of Computer Science and Information Engineering
99
This study takes the fifth-grade unit "Addition and Subtraction of Fractions with Different Denominators" as its scope. Based on the unit's teaching objectives, sub-skills and error types, a Bayesian network was used as an inference tool to build a computerized diagnostic test for this elementary school mathematics unit, so as to diagnose students' error types and provide a reference for teaching. The teaching materials of the unit were first analyzed to establish the concept propositions, a computerized diagnostic test was constructed from these propositions, and the Bayesian network was then used as a post-test analysis tool to infer students' sub-skills and error types. The results are as follows: 1. The self-developed test for this fifth-grade unit has a Cronbach's α of 0.737, showing that the test items have good reliability. 2. Comparing the computer-based diagnosis with expert identification, the correct recognition rates for the error types were 95.3333%, 95.3333%, 93% and 89%, and the average correct diagnosis rate for the constructed items was 93.1667%. 3. Across the four models of sub-skills and error types, the overall average recognition rate was 90%, showing that the models help to enhance the recognition rate.
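The reliability figure quoted above (Cronbach's α = 0.737) is a standard internal-consistency statistic computed from item and total-score variances. The following is a minimal sketch, assuming a small hypothetical matrix of right/wrong item responses rather than the thesis's actual test data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1/0 (correct/incorrect) responses of six students to
# five items (not the actual diagnostic-test data).
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is why the reported 0.737 is described as good reliability.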
APA, Harvard, Vancouver, ISO, and other styles
29

Yang, Ming-Shyang, and 楊明祥. "The Creation and Development of a Game-Based Assessment System – A Case Study on Multiple Choice Questions for Elementary School Math." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/xzpks7.

Full text
Abstract:
Master's thesis
National Taipei University of Education
Department of Mathematics and Information Education (including the Master's Program in Mathematics Education)
99
The incorporation of information technology in teaching not only provides a more dynamic experience and broader possibilities than traditional teaching methods, but also raises students' interest in learning. After examining the strengths and weaknesses of existing "virtual manipulatives" and "item bank-based assessment systems", the researcher aimed to develop new assessment tools featuring on-line and off-line operability, easy accessibility, and gaming elements. The purposes of this research were to: (1) develop a game-based assessment system that elementary school teachers can incorporate into their teaching; and (2) evaluate the effectiveness of this game-based assessment system.

The researcher drew on Hsu Hsin-Yi's CAI multimedia courseware development model (which involves four stages: analysis, design, production, and evaluation and revision) to develop a multiple-choice "game-based assessment system" using Flash. System development was completed after several revisions based on feedback received during the evaluation and revision stage. Research subjects for the formative evaluation included the supervising professor and postgraduate mathematics students of the National Taipei University of Education who are currently employed as teachers. Research subjects for the summative evaluation included 15 elementary teachers from school A and 5 elementary teachers from school B, both situated in Taipei, and third-, fifth-, and sixth-grade students from school A.

The results showed that the research subjects, both teachers and students, acknowledged the potential helpfulness of this software in mathematics studies, and that students enjoyed learning mathematics through this game-based approach. The software developed in this research features: (1) options for teachers to adjust game settings using worksheets, with three game types available that teachers may select to conduct assessments through customized games; (2) support for tournaments and group teaching activities using Wii electronic whiteboards; (3) gaming modes that provide immediate feedback and error tracking; (4) a common platform for sharing exam questions and profile settings, which also allows on-line trials; and (5) usability for personal web pages, off-line assessment and teaching, and after-school practice.

Apart from mathematics, this game-based assessment tool can also be used in other fields of study, offering teachers an alternative teaching tool.
APA, Harvard, Vancouver, ISO, and other styles
30

Singh, Upasana Gitanjali. "The development of a framework for evaluating e-assessment systems." Thesis, 2014. http://hdl.handle.net/10500/14619.

Full text
Abstract:
Academics encounter problems with the selection, evaluation, testing and implementation of e-assessment software tools. The researcher experienced these problems while adopting e-assessment at the university where she is employed. Hence she undertook this study, which is situated in schools and departments in Computing-related disciplines, namely Computer Science, Information Systems and Information Technology, at South African Higher Education Institutions. The literature suggests that further research is required in this domain. Furthermore, preliminary empirical studies indicated similar disabling factors at other South African tertiary institutions, which were barriers to the long-term implementation of e-assessment. Despite this, academics who have adopted e-assessment indicate satisfaction, particularly when conducting assessments with large classes. Questions of the multiple-choice genre can be assessed automatically, leading to increased productivity and more frequent assessments. The purpose of this research is to develop an evaluation framework that assists academics in determining which e-assessment tool to adopt, enabling them to make more informed decisions. Such a framework also supports the evaluation of existing e-assessment systems. The underlying research design is action research, which supported an iterative series of studies for developing, evaluating, applying, refining and validating the SEAT (Selecting and Evaluating an e-Assessment Tool) Evaluation Framework and, subsequently, an interactive electronic version, e-SEAT. Phase 1 of the action research comprised Studies 1 to 3, which established the nature, context and extent of the adoption of e-assessment; this laid the foundation for the development of SEAT in Phase 2. During Studies 4 to 6 in Phase 2, a rigorous sequence of evaluation and application facilitated the transition from the manual SEAT Framework to the electronic evaluation instrument, e-SEAT, and its further evolution. This research resulted in both a theoretical contribution (SEAT) and a practical contribution (e-SEAT). The findings of the action research, along with the literature, contributed to the categories and criteria in the framework, which in turn contributed to the bodies of knowledge on MCQs and e-assessment. The final e-SEAT version, the ultimate product of this action research, is presented in Appendix J1. For easier reference, the Appendices are included on a CD attached to the back cover of this thesis.
Computing
PhD. (Information Systems)
APA, Harvard, Vancouver, ISO, and other styles
31

KOSOBUD, Ondřej. "Testování žáků v německém jazyce na základní škole" [Testing of Pupils in the German Language at Basic School]. Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-155669.

Full text
Abstract:
The main aim of this diploma thesis is to find out whether the level of German-language knowledge among pupils at basic schools in the Czech Republic is increasing, stagnating or decreasing, and which factors influence their results. In the theoretical part I deal with the testing of pupils in Europe and in the Czech Republic, introduce a list of all standard assessment tests of German at levels A1 and A2, and then compare the standard assessment tests "Fit in Deutsch" and "Start Deutsch" with the tests of the Czech School Inspectorate from 2012/2013. In the research part I focus on the development of pupils' knowledge of German at basic schools. The research is based on tests assigned in 2007, 2010 and 2013. On the basis of these tests and completed questionnaires, I try to answer the research questions and to check the correctness of the hypotheses, or alternatively to find other factors that influence pupils' knowledge of German.
APA, Harvard, Vancouver, ISO, and other styles