Academic literature on the topic 'Multiple choice assessment'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multiple choice assessment.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multiple choice assessment"

1

Panchal, Padamjeet, Bheem Prasad, and Sarita Kumari. "Multiple Choice Questions: Role in Assessment of Competency of Knowledge in Anatomy." International Journal of Anatomy and Research 6, no. 2.1 (April 5, 2018): 5156–62. http://dx.doi.org/10.16965/ijar.2018.143.

2

Brady, Anne-Marie. "Assessment of learning with multiple-choice questions." Nurse Education in Practice 5, no. 4 (July 2005): 238–42. http://dx.doi.org/10.1016/j.nepr.2004.12.005.

3

Briggs, Derek, Alicia Alonzo, Cheryl Schwab, and Mark Wilson. "Diagnostic Assessment With Ordered Multiple-Choice Items." Educational Assessment 11, no. 1 (February 1, 2006): 33–63. http://dx.doi.org/10.1207/s15326977ea1101_2.

4

Hassmen, Peter, and Darwin P. Hunt. "Human Self-Assessment in Multiple-Choice Testing." Journal of Educational Measurement 31, no. 2 (June 1994): 149–60. http://dx.doi.org/10.1111/j.1745-3984.1994.tb00440.x.

5

Brennan, Timothy J. "A Methodological Assessment of Multiple Utility Frameworks." Economics and Philosophy 5, no. 2 (October 1989): 189–208. http://dx.doi.org/10.1017/s0266267100002388.

Abstract:
One of the fundamental components of the concept of economic rationality is that preference orderings are “complete,” i.e., that all alternative actions an economic agent can take are comparable (Arrow, 1951; Debreu, 1959). The idea that all actions can be ranked may be called the single utility assumption. The attractiveness of this assumption is considerable. It would be hard to fathom what choice among alternatives means if the available alternatives cannot be ranked by the chooser in some way. In addition, the efficiency criterion makes sense only if one can infer that an individual's choice reflects the best, in expected welfare terms, among all choices that individual could have made (Sen, 1982a). The possibility that a rearrangement of resources could make someone “better off” without making others “worse off” can be understood only if the post-rearrangement world is comparable with the pre-rearrangement world.
6

Bodon’i, Marina A., and Vladimir A. Plaksin. "MULTIPLE CHOICE TESTS AS A FORMATIVE ASSESSMENT TOOL." Vestnik Kostroma State University. Series: Pedagogy. Psychology. Sociokinetics, no. 2 (2020): 42–46. http://dx.doi.org/10.34216/2073-1426-2020-26-2-42-46.

Abstract:
The article discusses the problem of using multiple choice tests as a means of formative assessment. As a rule, this type of assessment material is associated with summative assessment; however, as the analysis of theoretical sources has shown, the use of multiple choice tests for training allows the goals of formative assessment to be realised. During the study, the main aspects of the use of multiple choice tests in the course of assessment of learning and in the course of assessment for learning were compared, after which the requirements for multiple choice tests for formative assessment were formulated. The identification of the specific characteristics of the use of multiple choice tests for formative assessment allowed us to specify the methods of using tests at different stages of the educational process. Based on the results of the study, recommendations for teachers on the development and use of multiple choice tests for assessment for learning are proposed.
7

To, Christina, and Jason Napolitano. "A multiple choice answer?" Journal of Hospital Medicine 6, no. 3 (July 15, 2010): 171–72. http://dx.doi.org/10.1002/jhm.778.

8

Shaban, Sami, Margaret Elzubeir, and Mohammed Al Houqani. "Online assessment standard setting for multiple choice questions." International Journal of Medical Education 7 (May 5, 2016): 142–43. http://dx.doi.org/10.5116/ijme.5715.3481.

9

Nickerson, Raymond S., Susan F. Butler, and Michael T. Carlin. "Knowledge assessment: Squeezing information from multiple-choice testing." Journal of Experimental Psychology: Applied 21, no. 2 (June 2015): 167–77. http://dx.doi.org/10.1037/xap0000041.

10

Cust, Michael P. "APPENDIX: Urinary Incontinence: Self-assessment Multiple-Choice Questions." Best Practice & Research Clinical Obstetrics & Gynaecology 14, no. 2 (April 2000): A1–A12. http://dx.doi.org/10.1053/beog.2000.0114.


Dissertations / Theses on the topic "Multiple choice assessment"

1

Geering, Margo. "Gender differences in multiple choice assessment." University of Canberra. Education, 1993. http://erl.canberra.edu.au./public/adt-AUC20050218.141005.

Abstract:
Multiple choice testing has been introduced as an assessment instrument in almost all educational systems during the past twenty years. A growing body of research seems to indicate that tests structured to a multiple choice format favour males. In the ACT, Queensland and Western Australia, a multiple choice examination known as ASAT was used to moderate student scores. Using data from the 1989 ASAT Paper 1, as well as data from the ACT Year 12 cohort of that year, an investigation was made of the items in the ASAT paper. This investigation attempted to identify specific types of questions that enabled males, on average, to perform better than females. Questions which had a statistically significant difference between the results of males and females were examined further. An ASAT unit was given to students to complete, and their answers to a questionnaire concerning the unit were taped and analysed. The study found that males performed better, on average, than females on the 1989 ASAT Paper 1. The mean difference in the quantitative questions was much greater than in the verbal questions. A number of factors appear to contribute to the difference in performance between males and females. A statistically significant number of females study Mathematics at a lower level, which appears to contribute to females' lower quantitative scores. Females seem to be considerably more anxious about taking tests, and this anxiety remains throughout a multiple choice test. Females lack confidence in their ability to achieve in tests and are tentative about "risk-taking", which is an element of multiple choice tests. The language of the test and male-oriented content may contribute to females' negative performance in multiple choice testing.
2

Alsubait, Tahani. "Ontology-based multiple-choice question generation." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/ontologybased-multiplechoice-question-generation(07bf2890-6f41-4a11-8189-02d5bb08e686).html.

Abstract:
Assessment is a well understood educational topic with a really long history and a wealth of literature. Given this level of understanding of the topic, educational practitioners are able to differentiate, for example, between valid and invalid assessments. Despite the fact that we can test for the validity of an assessment, knowing how to systematically generate a valid assessment is still challenging and needs to be understood. In this thesis we introduce a similarity-based method to generate a specific type of questions, namely multiple choice questions, and control their difficulty. This form of questions is widely used especially in contexts where automatic grading is a necessity. The generation of MCQs is more challenging than generating open-ended questions due to the fact that their construction includes the generation of a set of answers. These answers need to be all plausible, otherwise the validity of the question can be questionable. Our proposed generation method is applicable to both manual and automatic generation. We show how to implement it by utilising ontologies for which we also develop similarity measures. Those measures are simply functions which compute the similarity, i.e., degree of resemblance, between two concepts based on how they are described in a given ontology. We show that it is possible to control the difficulty of an MCQ by varying the degree of similarity between its answers. The thesis and its contributions can be summarised in a few points. Firstly, we provide literature reviews for the two main pillars of the thesis, namely question generation and similarity measures. Secondly, we propose a method to automatically generate MCQs from ontologies and control their difficulty. Thirdly, we introduce a new family of similarity measures. Fourthly, we provide a protocol to evaluate a set of automatically generated assessment questions. The evaluation takes into account experts' reviews and students' performance. Finally, we introduce an automatic approach which makes it possible to evaluate a large number of assessment questions by simulating a student trying to answer the questions.
3

Silveira, Igor Cataneo. "Solving University entrance assessment using information retrieval." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04112018-225438/.

Abstract:
Answering questions posed in natural language is a key task in Artificial Intelligence. However, producing a successful Question Answering (QA) system is challenging, since it requires text understanding, information retrieval, information extraction and text production. This task is made even harder by the difficulties in collecting reliable datasets and in evaluating techniques, two pivotal points for machine learning approaches. This has led many researchers to focus on Multiple-Choice Question Answering (MCQA), a special case of QA where systems must select the correct answers from a small set of alternatives. One particularly interesting type of MCQA is solving Standardized Tests, such as Foreign Language Proficiency exams, Elementary School Science exams and University Entrance exams. These exams provide easy-to-evaluate challenging multiple-choice questions of varying difficulties about large, but limited, domains. The Exame Nacional do Ensino Médio (ENEM) is a High School level exam taken every year by students all over Brazil. It is widely used by Brazilian universities as an entrance exam and is the world's second biggest university entrance examination in number of registered candidates. This exam consists in writing an essay and solving a multiple-choice test comprising questions on four major topics: Humanities, Language, Science and Mathematics. Questions inside each major topic are not segmented by standard scholar disciplines (e.g. Geography, Biology, etc.) and often require interdisciplinary reasoning. Moreover, the previous editions of the exam and their solutions are freely available online, making it a suitable benchmark for MCQA. In this work we automate solving the ENEM focusing, for simplicity, on purely textual questions that do not require mathematical thinking. We formulate the problem of answering multiple-choice questions as finding the candidate-answer most similar to the statement.
We investigate two approaches for measuring textual similarity of candidate-answer and statement. The first approach addresses this as a Text Information Retrieval (IR) problem, that is, as a problem of finding in a database the most relevant document to a query. Our queries are made of statement plus candidate-answer and we use three different corpora as database: the first comprises plain-text articles extracted from a dump of the Wikipedia in Portuguese language; the second contains only the text given in the question's header and the third is composed by pairs of question and correct answer extracted from ENEM assessments. The second approach is based on Word Embedding (WE), a method to learn vectorial representation of words in a way such that semantically similar words have close vectors. WE is used in two manners: to augment IR's queries by adding related words to those on the query according to the WE model, and to create vectorial representations for statement and candidate-answers. Using these vectorial representations we answer questions either directly, by selecting the candidate-answer that maximizes the cosine similarity to the statement, or indirectly, by extracting features from the representations and then feeding them into a classifier that decides which alternative is the answer. Along with the two mentioned approaches we investigate how to enhance them using WordNet, a structured lexical database where words are connected according to some relations like synonymy and hypernymy. Finally, we combine different configurations of the two approaches and their WordNet variations by creating an ensemble of algorithms found by a greedy search. This ensemble chooses an answer by the majority voting of its components. The first approach achieved an average of 24% accuracy using the headers, 25% using the pairs database and 26.9% using Wikipedia. The second approach achieved 26.6% using WE indirectly and 28% directly. The ensemble achieved 29.3% accuracy.
These results, slightly above random guessing (20%), suggest that these techniques can capture some of the necessary skills to solve standardized tests. However, more sophisticated techniques that perform text understanding and common sense reasoning might be required to achieve human-level performance.
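The "direct" word-embedding strategy this abstract describes reduces to an argmax over cosine similarities between the statement vector and each candidate-answer vector. A minimal sketch of that idea (the toy vectors stand in for learned embeddings, and the helper names `cosine` and `answer_mcq` are illustrative, not taken from the thesis):

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def answer_mcq(statement_vec, candidate_vecs):
    # Pick the index of the candidate-answer vector most similar
    # to the statement vector.
    sims = [cosine(statement_vec, c) for c in candidate_vecs]
    return max(range(len(sims)), key=sims.__getitem__)

# Toy example: the second candidate points in nearly the same
# direction as the statement, so it is selected.
best = answer_mcq([1.0, 0.0, 1.0], [[0.0, 1.0, 0.0], [1.0, 0.0, 0.9]])
```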
4

Heuer, Sabine. "AN EVALUATION OF TEST IMAGES FOR MULTIPLE-CHOICE COMPREHENSION ASSESSMENT IN APHASIA." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1090264500.

5

Kjosnes, Berit. "The art of assessment : How to utilise multiple-choice within the field of law." Thesis, Umeå universitet, Institutionen för tillämpad utbildningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-144912.

Abstract:
The purpose of this essay is to gain insight into the utilisation of MCQs within the field of law at Swedish upper secondary schools, and to examine what kinds of knowledge requirements can be tested with MCQs. The difference in test results between non-MCQs and MCQs was also analysed overall and with respect to gender. An MCQ test was evaluated against the knowledge requirements, while quantitative data gathered from a use of MCQs within the field of employment law were analysed. It was found that it should be possible to utilise MCQs within the field of law. With regard to the difference in results between non-MCQs and the MCQ under scrutiny, it was found that high-performing students scored more than one grade lower on the MCQ than their average on three non-MCQs in other subjects. Low-performing students experienced a little improvement in their results. There was a slight difference in scores between genders: although the average female score on all tests was higher than that of their male counterparts, females scored a little lower on their first MCQ test. It was felt that the scope of this research is too small to allow any conclusions to be drawn.
6

Davies, Phil. "Subjectivity in the innovative usage of both computerized multiple-choice questioning and peer-assessment." Thesis, University of South Wales, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436367.

7

Chaoui, Nayla Aad. "Finding Relationships Between Multiple-Choice Math Tests and Their Stem-Equivalent Constructed Responses." Scholarship @ Claremont, 2011. http://scholarship.claremont.edu/cgu_etd/21.

Abstract:
The study takes a close look at relationships between scores on a Mathematics standardized test in two different testing formats - Multiple-Choice (MC) and Constructed Response (CR). Many studies have been dedicated to finding correlations between item format characteristics with regards to race and gender. Few studies, however, have attempted to explore differences in the performance of English Learners in a low performing, predominantly Latino high school. The study also determined relationships between math scores and gender and math scores and language proficiency, as well as relationships between CAHSEE and CST scores. Statistical analyses were performed using correlations, descriptive statistics, and t-tests. Empirical data were also disaggregated and analyzed by gender, and language proficiency. Results revealed significant positive correlations between MC and CR formats. T-tests displayed statistically significant differences between the means of the formats, with boys and English Only students having better scores than their counterparts. Frequency tables examining proficiency levels of students by gender and language proficiency revealed differences between MC and CR tests, with boys and English Only students earning better levels of proficiency. Significant positive correlations were shown between CST scores and multiple-choice items, but none were found for CST scores and constructed response items.
8

Oellermann, Susan Wilma, and Alexander Dawid van der Merwe. "Can Using Online Formative Assessment Boost the Academic Performance of Business Students? An Empirical Study." Kamla-Raj, 2015. http://hdl.handle.net/10321/1571.

Abstract:
The declining quality of first year student intake at the Durban University of Technology (DUT) prompted the addition of online learning to traditional instruction. The time spent by students in an online classroom and their scores in subsequent multiple-choice question (MCQ) tests were measured. Tests on standardised regression coefficients showed self-test time as a significant predictor of summative MCQ performance while controlling for ability. Exam MCQ performance was found to be associated, positively and significantly, with annual self-test time at the 5 percent level and a significant relationship was found between MCQ marks and year marks. It was concluded that students’ use of the self-test tool in formative assessments has a significant bearing on students’ year marks and final grades. The negative nature of the standardised beta coefficient for gender indicates that, when year marks and annual self-test time are considered, males appear to have performed slightly better than females.
9

Liu, Jinghua. "The effect of performance-based assessment on eighth grade students' mathematics achievement." Free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974655.

10

Heidenreich, Sebastian. "Do I care or do I not? : an empirical assessment of decision heuristics in discrete choice experiments." Thesis, University of Aberdeen, 2016. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=229468.

Abstract:
Discrete choice experiments (DCEs) are widely used across economic disciplines to value multi-attribute commodities. DCEs ask survey-respondents to choose between mutually exclusive hypothetical alternatives that are described by a set of common attributes. The analysis of DCE data assumes that respondents consider and trade all attributes before making these choices. However, several studies show that many respondents ignore attributes. Respondents might choose not to consider all attributes to simplify choices or as a preference, because some attributes are not important to them. However, empirical approaches that account for attribute non-consideration only assume simplifying choice behaviour. This thesis shows that this assumption may lead to misleading welfare conclusions and therefore suboptimal policy advice. The analysis explores why attributes are ignored using statistical analysis or by asking respondents. Both approaches are commonly used to identify attribute non-consideration in DCEs. However, the results of this thesis suggest that respondents struggle to recall ignored attributes and their reasons for non-consideration unless attributes are ignored due to non-valuation. This questions the validity of approaches in the literature that rely on respondents' ability to reflect on their decision rule. Further analysis explores how the complexity of choices affects the probability that respondents do not consider all attributes. The results show that attribute consideration first increases and then decreases with complexity. This raises questions about the optimal design complexity of DCEs. The overall findings of the thesis challenge the applicability of current approaches that account for attribute non-consideration in DCEs to policy analysis and emphasise the need for further research in this area.

Books on the topic "Multiple choice assessment"

1

Koehler, Susan. Purposeful writing assessment: Using multiple-choice practice to inform writing instruction. Gainesville, FL: Maupin House Pub., 2008.

2

Stahl's self-assessment examination in psychiatry: Multiple choice questions for clinicians. Cambridge: Cambridge University Press, 2012.

3

Brewster, Marge A. Clinical chemistry self-assessment: 700 multiple-choice questions with answers explained. 2nd ed. Washington, DC: American Association for Clinical Chemistry, 1989.

4

Cole, Wade. Tenth-grade WASL in spring 2006: Open-ended and multiple choice questions. Olympia, WA: Washington State Institute for Public Policy, 2006.

6

Self-assessment of current knowledge in cardiovascular disease: 500 multiple choice questions and referenced explanatory answers. 3rd ed. New Hyde Park, N.Y: Medical Examination Pub. Co., 1985.

7

Griffith, Trevor P. A study of the validity of using multiple-choice items as test instruments in criterion-referenced assessment with particular reference to partial knowledge. [S.l: The author], 1994.

8

Canada. First Nations and Inuit Health Branch. Competency Assessment Program for Community Health Nurses working with First Nations and Inuit Health Branch: Part II, multiple-choice examinations : part III, clinical skills assessment for expanded scope of practice. [Ottawa]: Health Canada, 2002.

9

Chase, Clinton I., ed. Developing and using tests effectively: A guide for faculty. San Francisco: Jossey-Bass Publishers, 1992.

10

Ontario Educational Research Council. Conference. [Papers presented at the 36th Annual Conference of the Ontario Educational Research Council, Toronto, Ontario, December 2-3, 1994]. [Toronto, ON: s.n.], 1994.


Book chapters on the topic "Multiple choice assessment"

1

Gupta, Rajesh. "Evaluation and Assessment." In Multiple Choice Questions in Pain Management, 31–38. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56917-8_2.

2

Dhillon, Ramindar S., and James W. Fairley. "Assessment of hearing thresholds in young children." In Multiple-choice Questions in Otolaryngology, 30. London: Palgrave Macmillan UK, 1989. http://dx.doi.org/10.1007/978-1-349-10805-3_43.

3

Gupta, Rajesh, and Dilip Patel. "Assessment and Monitoring of Pain." In Multiple Choice Questions in Regional Anaesthesia, 11–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23608-3_2.

4

French, Simon. "The Analysis of Multiple Choice Tests in Educational Assessment." In Probability and Bayesian Statistics, 175–82. Boston, MA: Springer US, 1987. http://dx.doi.org/10.1007/978-1-4613-1885-9_18.

5

Khairani, Ahmad Zamri, and Hasni Shamsuddin. "Assessing Item Difficulty and Discrimination Indices of Teacher-Developed Multiple-Choice Tests." In Assessment for Learning Within and Beyond the Classroom, 417–26. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0908-2_35.

6

Li, Yong, Dechang Yang, Fang Liu, Yijia Cao, and Christian Rehtanz. "Assessment and Choice of Input Signals for Multiple Wide-Area Damping Controllers." In Interconnected Power Systems, 137–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48627-6_9.

7

Saiapin, Aleksandr. "A Method for Generation of Multiple-Choice Questions and Their Quality Assessment." In Educating Engineers for Future Industrial Revolutions, 534–43. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68201-9_52.

8

Gwinnett, Claire. "The Design and Implementation of Multiple Choice Questions (MCQs) in Forensic Science Assessment." In Forensic Science Education and Training, 269–300. Chichester, UK: John Wiley & Sons, Ltd, 2017. http://dx.doi.org/10.1002/9781118689196.ch17.

9

Leclercq, D., E. Boxus, P. de Brogniez, H. Wuidar, and F. Lambert. "The TASTE Approach: General Implicit Solutions in Multiple Choice Questions (MCQs), Open Books Exams and Interactive Testing." In Item Banking: Interactive Testing and Self-Assessment, 210–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-58033-8_17.

10

Dickinson, John R. "A Taxonomy Assessment And Item Analysis Of A Retailing Management Multiple-Choice Question Bank." In Marketing Dynamism & Sustainability: Things Change, Things Stay the Same…, 329–30. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-10912-1_111.


Conference papers on the topic "Multiple choice assessment"

1

Chai, Douglas. "Automated marking of printed multiple choice answer sheets." In 2016 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). IEEE, 2016. http://dx.doi.org/10.1109/tale.2016.7851785.

2

Aziz, Azrilah Abd, Nuraini Khatimin, Azami Zaharim, and Tuan Salwani Awang Salleh. "Evaluating multiple choice items in determining quality of test." In 2013 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE). IEEE, 2013. http://dx.doi.org/10.1109/tale.2013.6654501.

3

Wilcox, Bethany R., and Steven J. Pollock. "Multiple-choice Assessment for Upper-division Electricity and Magnetism." In 2013 Physics Education Research Conference. American Association of Physics Teachers, 2014. http://dx.doi.org/10.1119/perc.2013.pr.079.

4

Zampirolli, Francisco, Valério Batista, Carla Rodriguez, Rafaela Vilela da Rocha, and Denise Goya. "Automated Assessment with Multiple-choice Questions using Weighted Answers." In 13th International Conference on Computer Supported Education. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010338002540261.

5

Ghosh, Amitabha. "Formative Assessment Using Multiple Choice Questions in Statics and Dynamics." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-66304.

Abstract:
A two-loop learning outcomes assessment process was followed to evaluate the core curriculum in Mechanical Engineering at Rochester Institute of Technology. This initiative, originally called the Engineering Sciences Core Curriculum, provided systematic course learning outcomes and assessment data on examination performance in Statics, Mechanics, Dynamics, Thermodynamics, Fluid Mechanics, and Heat Transfer. This paper reports longitudinal data and important observations on the Statics-Dynamics sequence to determine efficacy and obstacles in student performance. An earlier paper showed that students' mastery of Dynamics is affected largely by weak retention of the fundamentals of Statics and mathematics. New observations recorded in this report suggest the need for better instructional strategies for teaching certain focal areas in Statics. Subsequently offered Dynamics and Fluid Mechanics classes further need reinforcement of some of these fundamental topics in Statics. This report completes a nine-year broader feedback loop designed to achieve the educational goals of the Statics-Dynamics sequence.
6

Azevedo, José Manuel. "e-Assessment in Mathematics Courses with Multiple-choice Questions Tests." In 7th International Conference on Computer Supported Education. SCITEPRESS - Science and Technology Publications, 2015. http://dx.doi.org/10.5220/0005452702600266.

7

Ng, Annie W. Y., Alan H. S. Chan, Sio-Iong Ao, Alan Hoi-Shou Chan, Hideki Katagiri, and Li Xu. "The Testing Methods and Gender Differences in Multiple-Choice Assessment." In IAENG TRANSACTIONS ON ENGINEERING TECHNOLOGIES VOLUME 3: Special Edition of the International MultiConference of Engineers and Computer Scientists 2009. AIP, 2009. http://dx.doi.org/10.1063/1.3256252.

8

Cunningham-Nelson, Samuel, Andrea Goncher, and Wageeh Boles. "Categorising Student Responses for Feedback Based on Multiple Choice and Text Responses." In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). IEEE, 2018. http://dx.doi.org/10.1109/tale.2018.8615325.

9

Ghosh, Amitabha. "Use of Multiple Choice Questions as an Assessment Tool in Dynamics." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-63987.

Abstract:
Dynamics is a pivotal class in a student's life-long learning profile since it builds upon the logical extensions of the Statics and Strength of Materials classes, and provides a framework on which Fluid Mechanics concepts may be developed for deformable media. This paper establishes the contextual reference of Dynamics in this framework. An earlier paper by the author discussed in detail how the design of proper multiple choice questions is critical for assessment in Statics and Fluid Mechanics. This paper provides a progress report of such evaluations in Dynamics. In addition, this paper explores the pedagogical issues related to building a student's learning profile. While comparing test results obtained in trailer sections of Dynamics with those obtained in sections taught by faculty teams, some structural differences were discovered. This report completes the feedback loop used by faculty in our Engineering Sciences Core Curriculum for improving student performance over time. The process may be developed further using some similarities and differences in the performance data.
10

Elmessiry, Adel, and Magdi Elmessiry. "Artificial Intelligence Application Techniques for Deep Assessment of Multiple Choice Educational Systems." In 12th International Conference on Education and New Learning Technologies. IATED, 2020. http://dx.doi.org/10.21125/edulearn.2020.0030.
