Dissertations / Theses on the topic 'Multiple choice assessment'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 31 dissertations / theses for your research on the topic 'Multiple choice assessment.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Geering, Margo. "Gender differences in multiple choice assessment." University of Canberra. Education, 1993. http://erl.canberra.edu.au./public/adt-AUC20050218.141005.
Alsubait, Tahani. "Ontology-based multiple-choice question generation." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/ontologybased-multiplechoice-question-generation(07bf2890-6f41-4a11-8189-02d5bb08e686).html.
Silveira, Igor Cataneo. "Solving University entrance assessment using information retrieval." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04112018-225438/.
Answering questions posed in natural language is a long-desired capability of Artificial Intelligence. However, building a Question Answering (QA) system is a challenging task, since it requires text understanding, information retrieval, information extraction, and text production. The task becomes even harder given the difficulty of collecting reliable datasets and of evaluating the techniques used, both of which are crucial for machine-learning approaches. This has led many researchers to focus on Multiple-Choice Question Answering (MCQA), a special case of QA in which systems must choose the correct answer from a set of candidate answers. A particularly interesting case of MCQA is solving standardized tests, such as language proficiency tests, elementary-school science tests, and university entrance exams. These exams provide easily evaluated multiple-choice questions across different domains and difficulty levels. The Exame Nacional do Ensino Médio (ENEM) is an exam taken every year by students throughout Brazil. It is widely used by Brazilian universities as an entrance exam and is the world's second largest entrance examination by number of registered candidates. It consists of writing an essay and answering a multiple-choice section covering Human Sciences, Languages, Mathematics, and Natural Sciences. Questions in these areas are not divided by school subject (Geography, Biology, etc.) and usually require interdisciplinary reasoning. Moreover, past editions of the exam and their solutions are available online, making it a suitable benchmark for MCQA. In this work we automate the solution of the ENEM, focusing, for simplicity, on purely textual questions that do not require mathematical reasoning. We formulate multiple-choice question answering as the problem of identifying the alternative most similar to the question, and we investigate two approaches for measuring the textual similarity between question and alternative. The first treats the task as a Textual Information Retrieval (IR) problem, that is, identifying the most relevant document in a database given a query. Our queries are built from the question plus an alternative, and we use three different text collections as databases: the first is a set of plain-text articles extracted from the Portuguese Wikipedia; the second contains only the text given in the question's header; and the third consists of question/correct-answer pairs extracted from past ENEM exams. The second approach is based on Word Embeddings (WE), a method for learning vector representations of words such that semantically close words have close vectors. WE is used in two ways: to enrich the IR queries and to build vector representations of the question and the alternatives. Using these vector representations we answer questions either directly, selecting the alternative that maximizes cosine similarity with the question, or indirectly, extracting features from the representations and feeding them to a classifier that decides which alternative is correct.
Alongside the two approaches, we investigate how to improve them using WordNet, a structured lexical database in which words are connected according to relations such as synonymy and hypernymy. Finally, we combine different configurations of the two approaches and their WordNet variants by building a committee of solvers found through a greedy search; the committee chooses an alternative by majority vote among its members. The first approach achieved 24% accuracy using the question header, 25% using the pairs database, and 26.9% using Wikipedia. The second approach achieved 26.6% accuracy using WE indirectly and 28% directly. The committee achieved 29.3%. These results, slightly above chance (20%), suggest that these techniques capture some of the abilities required to solve standardized tests. However, more sophisticated techniques, capable of understanding text and performing common-sense reasoning, may be needed to reach human performance.
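The direct embedding-based solver described in this abstract reduces to a few lines: average the word vectors of the question and of each alternative, then pick the alternative with the highest cosine similarity, optionally combining several solvers by majority vote. The sketch below is a minimal illustration of that idea, not the thesis's code; the `embeddings` lookup table, the whitespace tokenizer, and the 300-dimensional default are assumptions made for the example.

```python
import numpy as np
from collections import Counter

def avg_vector(text, embeddings, dim=300):
    """Average the embedding vectors of the words in `text` found in the lookup."""
    vectors = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def cosine(u, v):
    """Cosine similarity, defined as 0 when either vector is zero."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def answer(question, alternatives, embeddings):
    """Return the index of the alternative most similar to the question."""
    q = avg_vector(question, embeddings)
    sims = [cosine(q, avg_vector(a, embeddings)) for a in alternatives]
    return int(np.argmax(sims))

def committee_answer(solvers, question, alternatives):
    """Majority vote over several solvers, echoing the greedy-search committee."""
    votes = Counter(s(question, alternatives) for s in solvers)
    return votes.most_common(1)[0][0]
```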
Heuer, Sabine. "An Evaluation of Test Images for Multiple-Choice Comprehension Assessment in Aphasia." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1090264500.
Kjosnes, Berit. "The art of assessment: How to utilise multiple-choice within the field of law." Thesis, Umeå universitet, Institutionen för tillämpad utbildningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-144912.
Davies, Phil. "Subjectivity in the innovative usage of both computerized multiple-choice questioning and peer-assessment." Thesis, University of South Wales, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436367.
Chaoui, Nayla Aad. "Finding Relationships Between Multiple-Choice Math Tests and Their Stem-Equivalent Constructed Responses." Scholarship @ Claremont, 2011. http://scholarship.claremont.edu/cgu_etd/21.
Oellermann, Susan Wilma, and Alexander Dawid van der Merwe. "Can Using Online Formative Assessment Boost the Academic Performance of Business Students? An Empirical Study." Kamla-Raj, 2015. http://hdl.handle.net/10321/1571.
Liu, Jinghua. "The effect of performance-based assessment on eighth grade students' mathematics achievement." 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974655.
Heidenreich, Sebastian. "Do I care or do I not? An empirical assessment of decision heuristics in discrete choice experiments." Thesis, University of Aberdeen, 2016. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=229468.
Escalante Talavera, Juan M. "ESL Students' Reading Behaviors on Multiple-Choice Items at Differing Proficiency Levels: An Eye-Tracking Study." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7424.
Liao, Jui-Teng. "Multiple-choice and short-answer questions in language assessment: the interplay between item format and second language reading." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6178.
Odo, Dennis Murphy. "An investigation of the cross-mode comparability of a paper and computer-based multiple-choice cloze reading assessment for ESL learners." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42086.
Willing, Sonja. "Discrete-Option Multiple-Choice: Evaluating the Psychometric Properties of a New Method of Knowledge Assessment." Supervisors: Jochen Musch and Ute Bayen. Düsseldorf: Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2013. http://d-nb.info/1044146109/34.
Greenberg, Ariela Caren. "Fighting Bias with Statistics: Detecting Gender Differences in Responses on Items on a Preschool Science Assessment." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/665.
Huntley, Belinda. "Comparing different assessment formats in undergraduate mathematics." Thesis, University of Pretoria, 2008. http://upetd.up.ac.za/thesis/available/etd-01202009-163129.
Siegel, Tracey Jane. "Assessment Practices at an Associate Degree Nursing Program." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/603.
Hudnett, Richard. "Understanding the Admissions Experience of Admitted Students Who Fail to Enroll: A Multiple Case Study." Thesis, NSUWorks, 2015. https://nsuworks.nova.edu/fse_etd/20.
Brits, Gideon Petrus. "University student performance in multiple choice questions: an item analysis of Mathematics assessments." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/65477.
Full textDissertation (MEd)--University of Pretoria, 2017.
Science, Mathematics and Technology Education
MEd
Unrestricted
Miranda, Paulo Henrique de Freitas. "Avaliação da Aprendizagem: Múltipla Escolha versus Questões Abertas em Computador versus Papel [Learning assessment: multiple choice versus open questions, on computer versus on paper]." Pontifícia Universidade Católica de Goiás, 2015. http://tede2.pucgoias.edu.br:8080/handle/tede/3481.
Assessment evolves constantly in the search for better educational practices as education develops. In this perspective, this study aims to contribute by analyzing multiple-choice and open-question assessments administered on paper and on computer. Four tests were given in four different formats (multiple choice on computer, multiple choice on paper, open questions on computer, and open questions on paper). The research focused on analyzing the scores in each format, the duration of the test, and student satisfaction at the end of the tests. The results are presented and discussed in the context of a controlled assessment, interpreting the conditions under which the formats can be equivalent and their advantages and disadvantages. The multiple-choice test on paper produced better results in terms of test scores.
Kushlaf, Najah. "Aide à la décision pour l'apprentissage [Decision support for learning]." Thesis, Valenciennes, 2014. http://www.theses.fr/2014VALE0010/document.
The research carried out in this thesis proposes decision support for improving the quality of learning. Learning has two dimensions: a human one and a pedagogic one. The human dimension includes the learner and the teacher. The pedagogic dimension is the curriculum set by the educational establishment: the know. The learner transforms the know into knowledge, so the know and knowledge are two quite different notions. The distance between them is the distance between what the teacher presents (the know) and what the learner acquires (knowledge). The quality of learning concerns the learners, who go to school to acquire the know. Learning, in fact, consists in interiorizing the know. This internalization requires effort toward persistent intellectual change and demands continuity grounded in past experience. The learner's acquisition of the know and its transformation into knowledge are influenced by several factors that affect, positively or negatively, the quantity and quality of that knowledge. Confusing the know with knowledge can lead the learner to overvalue or to ignore his knowledge. The process of constructing knowledge from the diffused know requires a constant evaluation process, which appraises the structure of knowledge in order to make decisions intended to make it evolve. During an evaluation, however, confusing the know with knowledge can lead the learner to value the score, neglecting the importance of the process of transforming the know into knowledge in favor of reproducing the know as faithfully as possible. This confusion can be detected provided that the evaluation includes a processual dimension. Evaluation can then be better associated with actions that improve and transform knowledge, and can be addressed within a decision-support framework. In this research we therefore show that the learning situation is a decision-aiding situation.
Breakall, Jared B. "Characterizing Multiple-Choice Assessment Practices in Undergraduate General Chemistry." Thesis, 2019.
Assessment of student learning is ubiquitous in higher education chemistry courses because it is the mechanism by which instructors assign grades, alter teaching practice, and help their students succeed. One type of assessment that is popular in general chemistry courses, yet difficult to create effectively, is the multiple-choice assessment. Despite its popularity, little is known about the extent to which multiple-choice general chemistry exams adhere to accepted design practices, or about the processes general chemistry instructors engage in while creating these assessments. Further understanding of multiple-choice assessment quality and the design practices of general chemistry instructors could inform efforts to improve the quality of multiple-choice assessment practice in the future. This work attempted to characterize multiple-choice assessment practices in undergraduate general chemistry classrooms by (1) conducting a phenomenographic study of general chemistry instructors' assessment practices and (2) designing an instrument that can detect violations of item writing guidelines in multiple-choice chemistry exams.
The phenomenographic study of general chemistry instructors’ assessment practices included 13 instructors from the United States who participated in a three-phase interview. They were asked to describe how they create multiple-choice assessments, to evaluate six multiple-choice exam items, and to create two multiple-choice exam items using a think-aloud protocol. It was found that the participating instructors considered many appropriate assessment design practices yet did not utilize, or were not familiar with, all the appropriate assessment design practices available to them.
Additionally, an instrument was developed that can detect violations of item writing guidelines in multiple-choice exams. The instrument, known as the Item Writing Flaws Evaluation Instrument (IWFEI), was shown to be reliable between users. Once developed, the IWFEI was used to analyze 1,019 general chemistry exam items. The instrument gives researchers a tool for studying adherence to item writing guidelines, as well as a tool instructors can use to evaluate their own multiple-choice exams. It is hoped that use of the IWFEI will improve the practice and quality of multiple-choice item writing.
The results of this work provide insight into the multiple-choice assessment design practices of general chemistry instructors and an instrument that can be used to evaluate multiple-choice exams for item writing guideline adherence. Conclusions, recommendations for professional development, and recommendations for future research are discussed.
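As a flavor of what an automated item-writing-flaw check in the spirit of the IWFEI might encode, the sketch below flags two violations that are widely cited in the item-writing literature. The rules and the function name are hypothetical illustrations; the actual IWFEI criteria are not reproduced here.

```python
def flag_item_writing_flaws(stem, options):
    """Return a list of guideline violations found in one multiple-choice item."""
    flaws = []
    # Common guideline: avoid "all of the above" / "none of the above" options.
    if any(o.lower().strip(". ") in ("all of the above", "none of the above")
           for o in options):
        flaws.append("uses 'all/none of the above' as an option")
    # Common guideline: keep options similar in length; a much longer
    # option can cue test-wise students to the keyed answer.
    lengths = [len(o) for o in options]
    if lengths and max(lengths) > 2 * min(lengths):
        flaws.append("option lengths differ markedly")
    return flaws

print(flag_item_writing_flaws(
    "Which element is a noble gas?",
    ["Neon", "Iron", "Gold", "All of the above"]))
```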
Gu, Zhimei. "Maximizing the Potential of Multiple-choice Items for Cognitive Diagnostic Assessment." Thesis, 2011. http://hdl.handle.net/1807/31770.
Tittenberger, Peter, and Dario Schor. "Taking a WebCT Quiz." 2006. http://hdl.handle.net/1993/196.
Full textKUEI, CHUANG FENG, and 莊峰魁. "Online Assessment with multiple choice and constructed response items for the “Light” unit." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/80602264255211194283.
Asia University, Department of Computer Science and Information Engineering (in-service master's program), academic year 98 (ROC calendar, i.e., 2009-10).
Traditional written tests and computerized tests usually fail to provide teachers with enough information to help students learn: multiple-choice tests, which are graded only on students' final answers, cannot confirm whether students have actually learned what they are supposed to, while open-ended questions take much time to grade and are not graded objectively. This research therefore designs computerized constructed-response tests for science that record students' detailed problem-solving processes and grade automatically by diagnosing and classifying students' bugs, so that students receive real-time feedback. After designing appropriate constructed-response items, the research successfully builds automated analysis models and constructs diagnostic systems. Because students' bugs are diagnosed by the computer system, students get learning feedback immediately, and teachers obtain complete information about students' misconceptions without spending much time grading. The conclusions of the research are as follows: 1. The information about bugs obtained from the constructed-response items is more detailed and diversified than before; the computer system judged error types with an average correctness rate of 100%, showing that the diagnostic tests constructed in this research are valid. 2. The average optimal bug identification rate of the multiple-choice tests was 90.22%, and their average optimal sub-skill error-type identification rate was 93.91%; for the constructed-response items, the corresponding rates were 92.09% and 95.96%. 3. The question-economy rate of the constructed-response items was 19.86%, slightly higher than that of the multiple-choice tests. Constructed-response items can thus be developed into adaptive tests.
Yueh, Queenie Wie (岳瑋). "Comparing Effectiveness of Global Competence Assessment of Open-End Question and Multiple-Choice Question." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/95wk8t.
National Taichung University of Education, Graduate Institute of Educational Information and Measurement Statistics (in-service master's program), academic year 106 (ROC calendar, i.e., 2017-18).
The Organisation for Economic Co-operation and Development (OECD) considers "global competence" one of the most important qualities future citizens should be equipped with. In 2018 it was added as an assessed capability in the Programme for International Student Assessment (PISA), alongside the existing assessments of reading, mathematics, and science. This assessment focuses on students' "capability" rather than "knowledge", and its contextualized questions involve knowledge, abilities, and attitudes relating to global issues. Furthermore, "cultural learning and international understanding" is one of the ten basic abilities listed in Taiwan's Grade 1-9 Curriculum Guidelines, and international education, including international perspective and multicultural education, is mentioned in the new 12-year Compulsory Education Guidelines. Global competence is thus clearly one of the essential abilities in the future educational field. Following the structure of the PISA test, global issues were collected and turned into multiple-choice or open-ended questions. After the questions were examined and confirmed by experts, they were scored by analyzing the content of the answers. Two computerized assessment units were designed for this research, "Tertiary Industry and International Tourism" and "Social Network". The research measures the performance of senior high school students in Taiwan on both units and compares the effectiveness of the multiple-choice and open-ended question types. Purposive sampling was used, with subjects randomly chosen from a senior high school in Tainan, Taiwan, yielding 119 valid samples. The results indicate that the correlation coefficients between computerized and human scoring are 0.666 and 0.717, demonstrating good consistency between the two. Furthermore, the average accuracy of computerized scoring is 0.675, indicating that the open-ended question type is acceptable. Finally, overall performance on the multiple-choice global competence questions was better than on the open-ended questions, and the students lack ability in examining local, global, and transcultural issues.
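For reference, the consistency figures quoted above (0.666 and 0.717) are correlation coefficients between the computerized and human scores. A minimal sketch of that computation as a Pearson correlation follows; the score arrays are made-up placeholders, not data from the thesis.

```python
import numpy as np

# Hypothetical scores given to the same six answers by the machine and a rater.
computer_scores = np.array([2, 3, 1, 4, 2, 3])
human_scores    = np.array([2, 3, 2, 4, 1, 3])

# Pearson correlation between the two scorings; values near 1 mean high consistency.
r = np.corrcoef(computer_scores, human_scores)[0, 1]
print(f"Pearson r = {r:.3f}")
```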
Tseng, Chih-wei (曾志偉). "A Comparison of Multiple-Choice and Constructed-Response Tests for the Assessment of Translation Ability." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/01105585208437183973.
National Pingtung Institute of Commerce, Department of Applied Foreign Languages, academic year 96 (ROC calendar, i.e., 2007-08).
Recently, translation has been receiving great attention in institutions of higher education. Such institutions are reorganizing to offer required and elective courses designed both to improve students' language proficiency and to enhance professional training in translation. As a result, translation is now a test item in the entrance examinations of both universities and graduate schools. However, in some of these examinations translation ability is evaluated by multiple-choice questions rather than constructed-response questions. Much research has shown that multiple-choice test scores are comparatively higher than constructed-response test scores, and it is doubtful whether examinees' translation ability can be accurately evaluated by a few multiple-choice questions with limited, fixed answers. This study explores whether the average score of a multiple-choice test differs significantly from the average score of an equivalent constructed-response test, and investigates the difference between the strategies used in the two forms of test. A total of 411 participants from a department of foreign languages were recruited. The results show a significant difference between the two forms of test, and the strategies used in answering translation questions also differ significantly. Consequently, the form of the test has a significant influence on translation performance: a learner's ability to translate from English to Chinese is not measured equally by multiple-choice and constructed-response tests. It can be inferred that a learner's translation capability cannot be effectively assessed and verified using multiple-choice tests alone.
Wu, Chi-Te (吳啟得). "Computerized Multiple Choice Items and Performance Assessment for the Fifth Grade Math "Different Denominator Fraction Addition and Subtraction" Unit." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/86165924084336449907.
Asia University, Department of Computer Science and Information Engineering (in-service master's program), academic year 99 (ROC calendar, i.e., 2010-11).
The study takes the fifth-grade unit "addition and subtraction of fractions with different denominators" as its scope. Based on the teaching objectives, sub-skills, and error types, Bayesian networks are used as the inference tool to build a computer-based diagnostic test for this elementary mathematics unit, diagnosing students' error types and providing a reference for teaching. The teaching materials of the unit were first analyzed to establish concept propositions; a computerized diagnostic test was then built from these propositions, and sampled responses were analyzed with Bayesian networks to infer students' error types and sub-skills. The results are as follows: 1. The self-developed test items for the unit had a Cronbach's α of 0.737, showing good reliability. 2. Agreement rates between the computer-based diagnosis and expert identification of students' error types were 95.3333%, 95.3333%, 93%, and 89%, with an average correct diagnosis rate of 93.1667% for the constructed problems. 3. Across the four models, the overall average recognition rate for identifying error types from sub-skills was 90%, showing that the model design helps to enhance the recognition rate.
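The reliability statistic reported above is Cronbach's α. As a reference for how such a value is computed, here is a minimal sketch over a hypothetical examinee-by-item score matrix; it is illustrative only and not the analysis code of the thesis.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is an (examinees x items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 0/1 responses from five examinees on four items.
responses = np.array([[1, 1, 1, 0],
                      [1, 0, 1, 1],
                      [0, 0, 1, 0],
                      [1, 1, 1, 1],
                      [0, 1, 0, 0]])
print(f"alpha = {cronbach_alpha(responses):.3f}")
```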
Yang, Ming-Shyang (楊明祥). "The Creation and Development of a Game-Based Assessment System – A Case Study on Multiple Choice Questions for Elementary School Math." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/xzpks7.
National Taipei University of Education, Department of Mathematics and Information Education (including the master's program in mathematics education), academic year 99 (ROC calendar, i.e., 2010-11).
The incorporation of information technology in teaching not only provides a more dynamic experience and broader possibilities than traditional teaching methods, it also raises students' interest in learning. After observing the strengths and weaknesses of existing "virtual manipulatives" and "item-bank-based assessment systems", the researcher set out to develop new assessment tools featuring online and offline operability, easy accessibility, and gaming elements. The purposes of this research are to: (1) develop a game-based assessment system that elementary school teachers can incorporate into their teaching; and (2) evaluate the effectiveness of this game-based assessment system. The researcher followed Hsu Hsin-Yi's CAI multimedia courseware development model (four stages: analysis, design, production, and evaluation and revision) to develop a multiple-choice "game-based assessment system" in Flash. System development was completed after several revisions based on feedback received during the evaluation and revision stage. Research subjects for the formative evaluation included the supervising professor and postgraduate mathematics students of the National Taipei University of Education who are currently employed as teachers. Research subjects for the summative evaluation included 15 elementary teachers from school A and 5 elementary teachers from school B, both situated in Taipei, and third-, fifth-, and sixth-grade students from school A. The results showed that the research subjects, both teachers and students, acknowledged the potential helpfulness of this software for mathematics study, and students enjoyed learning mathematics through this game-based approach. The software developed under this research featured the following: (1) options for teachers to adjust game settings using worksheets, with three game types available for conducting assessments and tests through customized games; (2) the ability to organize tournaments or group teaching activities with the use of Wii electronic whiteboards; (3) gaming modes that provide immediate feedback and error tracking; (4) a common platform for sharing exam questions and profile settings that allows online trials; and (5) usability for personal web pages, offline assessment and teaching, and after-school practice. Apart from mathematics, this game-based assessment tool can also be used in other fields of study, offering teachers an alternative teaching tool.
Singh, Upasana Gitanjali. "The development of a framework for evaluating e-assessment systems." Thesis, 2014. http://hdl.handle.net/10500/14619.
PhD (Information Systems), Computing.
Kosobud, Ondřej. "Testování žáků v německém jazyce na základní škole [Testing pupils in the German language at primary school]." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-155669.