Journal articles on the topic 'Multiple choice assessment'


Consult the top 50 journal articles for your research on the topic 'Multiple choice assessment.'


1

Panchal, Padamjeet, Bheem Prasad, and Sarita Kumari. "MULTIPLE CHOICE QUESTIONS - ROLE IN ASSESSMENT OF COMPETENCY OF KNOWLEDGE IN ANATOMY." International Journal of Anatomy and Research 6, no. 2.1 (April 5, 2018): 5156–62. http://dx.doi.org/10.16965/ijar.2018.143.

2

Brady, Anne-Marie. "Assessment of learning with multiple-choice questions." Nurse Education in Practice 5, no. 4 (July 2005): 238–42. http://dx.doi.org/10.1016/j.nepr.2004.12.005.

3

Briggs, Derek, Alicia Alonzo, Cheryl Schwab, and Mark Wilson. "Diagnostic Assessment With Ordered Multiple-Choice Items." Educational Assessment 11, no. 1 (February 1, 2006): 33–63. http://dx.doi.org/10.1207/s15326977ea1101_2.

4

Hassmen, Peter, and Darwin P. Hunt. "Human Self-Assessment in Multiple-Choice Testing." Journal of Educational Measurement 31, no. 2 (June 1994): 149–60. http://dx.doi.org/10.1111/j.1745-3984.1994.tb00440.x.

5

Brennan, Timothy J. "A Methodological Assessment of Multiple Utility Frameworks." Economics and Philosophy 5, no. 2 (October 1989): 189–208. http://dx.doi.org/10.1017/s0266267100002388.

Abstract:
One of the fundamental components of the concept of economic rationality is that preference orderings are “complete,” i.e., that all alternative actions an economic agent can take are comparable (Arrow, 1951; Debreu, 1959). The idea that all actions can be ranked may be called the single utility assumption. The attractiveness of this assumption is considerable. It would be hard to fathom what choice among alternatives means if the available alternatives cannot be ranked by the chooser in some way. In addition, the efficiency criterion makes sense only if one can infer that an individual's choice reflects the best, in expected welfare terms, among all choices that individual could have made (Sen, 1982a). The possibility that a rearrangement of resources could make someone “better off” without making others “worse off” can be understood only if the post-rearrangement world is comparable with the pre-rearrangement world.
6

Bodon’i, Marina A., and Vladimir A. Plaksin. "MULTIPLE CHOICE TESTS AS A FORMATIVE ASSESSMENT TOOL." Vestnik Kostroma State University. Series: Pedagogy. Psychology. Sociokinetics, no. 2 (2020): 42–46. http://dx.doi.org/10.34216/2073-1426-2020-26-2-42-46.

Abstract:
The article discusses the use of multiple choice tests as a means of formative assessment. As a rule, this type of assessment material is associated with summative assessment; however, as the analysis of theoretical sources shows, multiple choice tests can also be used in training to realise the goals of formative assessment. During the study, the main aspects of using multiple choice tests in assessment of learning and in assessment for learning were compared, after which requirements for multiple choice tests intended for formative assessment were formulated. Identifying the specific characteristics of multiple choice tests used for formative assessment allowed us to specify methods of using tests at different stages of the educational process. Based on the results of the study, recommendations are proposed for teachers on the development and use of multiple choice tests for assessment for learning.
7

To, Christina, and Jason Napolitano. "A multiple choice answer?" Journal of Hospital Medicine 6, no. 3 (July 15, 2010): 171–72. http://dx.doi.org/10.1002/jhm.778.

8

Shaban, Sami, Margaret Elzubeir, and Mohammed Al Houqani. "Online assessment standard setting for multiple choice questions." International Journal of Medical Education 7 (May 5, 2016): 142–43. http://dx.doi.org/10.5116/ijme.5715.3481.

9

Nickerson, Raymond S., Susan F. Butler, and Michael T. Carlin. "Knowledge assessment: Squeezing information from multiple-choice testing." Journal of Experimental Psychology: Applied 21, no. 2 (June 2015): 167–77. http://dx.doi.org/10.1037/xap0000041.

10

Cust, Michael P. "APPENDIX: Urinary Incontinence: Self-assessment Multiple-Choice Questions." Best Practice & Research Clinical Obstetrics & Gynaecology 14, no. 2 (April 2000): A1–A12. http://dx.doi.org/10.1053/beog.2000.0114.

11

Iqbal, Muhammad Zafar, Shumaila Irum, and Muhammad Sohaib Yousaf. "MULTIPLE CHOICE QUESTIONS;." Professional Medical Journal 24, no. 09 (September 8, 2017): 1409–14. http://dx.doi.org/10.29309/tpmj/2017.24.09.824.

Abstract:
Objectives: The main objective of this study was to judge the quality of MCQs, in terms of their cognition level and item-writing flaws, developed by the faculty of a public sector medical college. Setting: This study was conducted in Sheikh Zayed Medical College, Rahim Yar Khan. Duration with dates: Data were collected between June 2014 and March 2015, and the study was completed in July 2016. Sample size: A sample of 500 MCQs collected from 25 faculty members was included in the study. Study design: Quantitative method. Study type: Cross-sectional descriptive analysis. Material and methods: This quantitative study was conducted in Sheikh Zayed Medical College Rahim Yar Khan over a six-month period after approval of the study proposal. Every faculty member is required to write 25 MCQs in order to become a supervisor. I collected 500 multiple choice questions, ready for submission to CPSP, from 25 faculty members. The quality of all MCQs was checked in terms of item-writing flaws and cognition level by a panel of experts. Results: Absolute terms were observed in 10 (2%), vague terms in 15 (3%), implausible distracters in 75 (15%), extra detail in the correct option in 15 (3%), unfocused stem in 63 (12.6%), grammatical clues in 39 (7.8%), logical clues in 18 (3.6%), word repeats in 19 (3.8%), more than one correct answer in 21 (4.2%), unnecessary information in the stem in 37 (7.4%), lost sequence in data in 15 (3%), "all of the above" in 16 (3.2%), "none of the above" in 12 (2.4%), and negative stem in 23 (4.6%). Cognition level I (recall) was observed in 363 (72.6%), level II (interpretation) in 115 (23%), and level III (problem solving) in 22 (4.4%) items. In total, 378 (75.6%) flaws were identified; the four commonest were implausible distracters 75 (15%), unfocused stem 63 (12.6%), grammatical clues 39 (7.8%), and unnecessary information in the stem 37 (7.4%). Conclusion: Assessment of medical students is demanding, and a well-constructed, peer-reviewed single-best-answer MCQ is well suited to the task because of its cost-effectiveness, better reliability, and computerized marking. It is very important to start a faculty development program in order to decrease the number of item-writing flaws and move cognition levels towards problem solving and application of knowledge.
12

Winarti, Atiek, and Al Mubarak. "Rasch Modeling: A Multiple Choice Chemistry Test." Indonesian Journal on Learning and Advanced Education (IJOLAE) 2, no. 1 (October 26, 2019): 1–9. http://dx.doi.org/10.23917/ijolae.v2i1.8985.

Abstract:
The study aimed to reveal the difficulty level of the items in a chemistry test and their fit to the Rasch model. Beyond detecting item quality, the Rasch model also shows students' answer patterns, so the analysis can speak to the quality of the instrument as an assessment of chemistry learning. Twenty multiple-choice questions on chemical bonding were analyzed using WINSTEPS 3.73, with a sample of 200 senior high school students in Banjarmasin, Indonesia. The results revealed an average item measure of 0.00, with the hardest item measuring 4.64 logits. Item Q10 conformed to the model, with fit statistics MNSQ = +0.97, ZSTD = -0.2, and Pt Mean Corr = +0.58. In other words, assessment of learning with test techniques such as multiple choice, analyzed with the Rasch model, is an effective way for teachers to review students' progress in the learning process, a guide for designing chemistry learning strategies, and a means of identifying students' understanding of chemical material.
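
For context on the Rasch analysis above: the dichotomous Rasch model gives the probability of a correct response as a logistic function of the gap between person ability and item difficulty, both measured in logits. A minimal Python sketch (the function name and example values are ours for illustration; the study itself used WINSTEPS for estimation):

```python
import math

def rasch_probability(theta, b):
    """Success probability for a person of ability `theta` (logits) on an
    item of difficulty `b` (logits) under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An average examinee (theta = 0.0) facing the hardest item reported
# above (measure 4.64 logits) succeeds less than 1% of the time:
print(rasch_probability(0.0, 4.64))  # ~0.0096
```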
13

Tetali, Dharma Reddy. "AUTOMATED COURSE OUTCOMES ASSESSMENT FOR MULTIPLE CHOICE QUESTIONS (AUTO_ASSESS)." International Journal of Advanced Research in Computer Science 8, no. 9 (September 30, 2017): 189–92. http://dx.doi.org/10.26483/ijarcs.v8i9.4942.

14

Rød, Jan Ketil, Sveinung Eiksund, and Olav Fjær. "Assessment based on exercise work and multiple-choice tests." Journal of Geography in Higher Education 34, no. 1 (February 2010): 141–53. http://dx.doi.org/10.1080/03098260903062039.

15

Moss, Edward. "Multiple choice questions: their value as an assessment tool." Current Opinion in Anaesthesiology 14, no. 6 (December 2001): 661–66. http://dx.doi.org/10.1097/00001503-200112000-00011.

16

Santhanavijayan, A., S. R. Balasundaram, S. Hari Narayanan, S. Vinod Kumar, and V. Vignesh Prasad. "Automatic generation of multiple choice questions for e-assessment." International Journal of Signal and Imaging Systems Engineering 10, no. 1/2 (2017): 54. http://dx.doi.org/10.1504/ijsise.2017.084571.

17

Vinod Kumar, S., V. Vignesh Prasad, A. Santhanavijayan, S. R. Balasundaram, and S. Hari Narayanan. "Automatic generation of multiple choice questions for e-assessment." International Journal of Signal and Imaging Systems Engineering 10, no. 1/2 (2017): 54. http://dx.doi.org/10.1504/ijsise.2017.10005435.

18

Crane, Kathleen R. "SYSTEMATIC ASSESSMENT OF LEARNING OUTCOMES: DEVELOPING MULTIPLE-CHOICE EXAMS." CIN: Computers, Informatics, Nursing 20, no. 4 (July 2002): 127–28. http://dx.doi.org/10.1097/00024665-200207000-00002.

19

Nedeau-Cayo, Rosemarie, Deborah Laughlin, Linda Rus, and John Hall. "Assessment of Item-Writing Flaws in Multiple-Choice Questions." Journal for Nurses in Professional Development 29, no. 2 (2013): 52–57. http://dx.doi.org/10.1097/nnd.0b013e318286c2f1.

20

Anonymous. "Assessment of Item-Writing Flaws in Multiple-Choice Questions." Journal for Nurses in Professional Development 29, no. 2 (2013): E1–E2. http://dx.doi.org/10.1097/nnd.0b013e31828d1108.

21

Nargund, G. "APPENDIX: Implantation and Miscarriage: Self-assessment Multiple Choice Questions." Best Practice & Research Clinical Obstetrics & Gynaecology 14, no. 5 (October 2000): A1–A10. http://dx.doi.org/10.1053/beog.2000.0126.

22

Foong, L. C. "APPENDIX: Chronic Pelvic Pain: Self-assessment Multiple Choice Questions." Best Practice & Research Clinical Obstetrics & Gynaecology 14, no. 3 (June 2000): A1–A12. http://dx.doi.org/10.1053/beog.2000.0130.

23

Fankhauser, Karen. "Systematic Assessment of Learning Outcomes: Developing Multiple-Choice Exams." Journal of Continuing Education in Nursing 34, no. 5 (September 1, 2003): 236–37. http://dx.doi.org/10.3928/0022-0124-20030901-14.

24

Samuel, Thomas, Razia Azen, and Naira Campbell-Kyureghyan. "Evaluation of Learning Outcomes Through Multiple Choice Pre- and Post-Training Assessments." Journal of Education and Learning 8, no. 3 (May 20, 2019): 122. http://dx.doi.org/10.5539/jel.v8n3p122.

Abstract:
Training programs in industry are a common way to increase awareness and change the behavior of individuals. The most popular way to determine the effectiveness of training on learning outcomes is to administer assessments with Multiple Choice Questions (MCQ) to the participants, despite the fact that in this type of assessment it is difficult to separate true learning from guessing. This study specifically aims to quantify the effect of the inclusion of the ‘I don’t know’ (IDK) option on learning outcomes in a pre-/post-test assessment construct by introducing a ‘Control Question’ (CQ). The analysis was performed on training conducted for 1,474 participants. Results show a statistically significant reduction in the usage of the IDK option in the post-test assessment as compared to the pre-test assessment for all questions, including the Control Question. This illustrates that participants are learning concepts taught in the training sessions but are also prone to guess more in the post-test assessment than in the pre-test assessment.
25

Moreno, Rafael, Rafael J. Martínez, and José Muñiz. "New Guidelines for Developing Multiple-Choice Items." Methodology 2, no. 2 (January 2006): 65–72. http://dx.doi.org/10.1027/1614-2241.2.2.65.

Abstract:
The rigorous construction of items constitutes a field of great current interest for psychometric researchers and practitioners. In previous studies we have reviewed and analyzed the existing guidelines for the construction of multiple-choice items. From this review emerged a new proposal for guidelines that is now, in the present work, subjected to empirical assessment. This assessment was carried out by users of the guidelines and by experts in item construction. The results endorse the proposal for the new guidelines presented, confirming the advantages in relation to their simplicity and efficiency, as well as permitting identification of the difficulties involved in drawing up and organizing some of the guidelines. Taking into account these results, we propose a new, refined set of guidelines that constitutes a useful, simple, and structured instrument for the construction of multiple-choice items.
26

Reardon, Sean Fitzpatrick, Kate Scott, and John Verre. "Symposium: Equity in Educational Assessment." Harvard Educational Review 64, no. 1 (April 1, 1994): 1–5. http://dx.doi.org/10.17763/haer.64.1.30t3h163v1327230.

Abstract:
Testing and assessment are increasingly the levers of choice for educational reform in the United States today. The last two decades have seen an enormous rise in the attention and money devoted to testing and assessing students, and current proposals for a system of national standards and assessments, such as the Clinton administration's Goals 2000 legislation, promise more of the same. In this symposium, "Equity in Educational Assessment," the Harvard Educational Review examines the relationship between new assessment policies and issues of educational equity. Two assumptions about assessment dominate current policy debates. First is the assumption that changes in assessment policies can be used as a powerful lever for reforming schools. Second is the assumption that new, "authentic" forms of assessment, such as performance assessment and portfolio assessment, are inherently superior to traditional standardized, multiple-choice tests. Both assumptions, however, have gone largely unchallenged in the public discourse about assessment and school reform, despite the fact that there is yet little empirical evidence to indicate whether or not they are valid. In fact, as each contributor to this symposium points out, it is doubtful that merely changing the form of assessments from standardized, multiple-choice tests to open-ended performance and portfolio assessments will improve schools and reduce educational inequities in the United States.
27

Adiga, Manoor Narasimha Sachidananda, Swathi Acharya, and Rajendra Holla. "Item Analysis of Multiple-Choice Questions in Pharmacology in an Indian Medical School." Journal of Health and Allied Sciences NU 11, no. 03 (February 10, 2021): 130–35. http://dx.doi.org/10.1055/s-0041-1722822.

Abstract:
Introduction: Student assessment by multiple-choice questions (MCQs) is an integral part of student evaluation in medicine. The medical teacher should be trained to construct an item with a proper stem and valid options, and periodic item analyses make the process of assessment more meaningful. Hence, we conducted this study to analyze the MCQs (item analysis) tested on a batch of MBBS students in pharmacology in their three internal assessment examinations. Methods: The study was conducted in the Department of Pharmacology of a medical college in Mangaluru on 150 students. The MCQs of the three internal assessment examinations (20 each) were analyzed. We analyzed each question for difficulty index (DI), discrimination index (DsI), and distracter efficacy or functionality, and expressed the results as percentages. Results: The DI was in an acceptable range for 60, 75, and 90% of questions, respectively, in the three internal assessments. The percentage of "too difficult" questions was 10, 20, and 10%, and the average DsI was 0.32 ± 0.04, 0.28 ± 0.02, and 0.26 ± 0.02, respectively. In the second and third internal assessments, 95% of questions had functional distracters, while in the first internal assessment only 60% of questions had functional distracters. Conclusion: We conclude that even though the items (MCQs) framed for the internal assessments were in the acceptable range of quality in terms of the parameters assessed, MCQ construction must improve in the selection of distracters for some topics.
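
As background to the indices in this abstract, here is a hedged sketch of the classical formulas usually behind a difficulty index and a discrimination index; the paper does not print its exact computations, and the numbers below are hypothetical, chosen only to resemble the reported values:

```python
def difficulty_index(n_correct, n_total):
    """Classical difficulty index (p-value): proportion answering correctly."""
    return n_correct / n_total

def discrimination_index(upper_correct, lower_correct, group_size):
    """Classical discrimination index: difference between the proportions of
    correct answers in the top- and bottom-scoring groups (often the upper
    and lower 27% of examinees)."""
    return (upper_correct - lower_correct) / group_size

# Hypothetical item from a 150-student cohort: 90 answered correctly overall;
# 40 of the 41 top scorers vs 27 of the 41 bottom scorers got it right.
print(difficulty_index(90, 150))          # 0.60
print(discrimination_index(40, 27, 41))   # ~0.317, near the DsI values reported
```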
28

Vlazneva, Svetlana, and Olga Androsova. "Multiple-choice questions and essays in assessing economics." SHS Web of Conferences 99 (2021): 01032. http://dx.doi.org/10.1051/shsconf/20219901032.

Abstract:
The article is devoted to assessment tools in teaching economics. The authors distinguish and define four levels of understanding economics: elementary, intermediate, systemic and creative. They describe multiple-choice questions and essay questions as two possible assessment tools in teaching economics. Multiple-choice questions are presented as the most popular testing format; their advantages include low grading costs, perceived objectivity and availability of comparative analysis. The authors have developed multiple-choice tests which measure students' knowledge at the first three levels of understanding economics, enabling instructors to see where exactly a student's understanding has stopped and to provide guidance. The authors conclude that multiple-choice questions can be used to measure the basic levels of students' understanding of economics, while for measuring higher levels the essay has great potential as an assessment tool. The authors highlight the advantages and pitfalls of essay testing in economics.
29

Leleu, Xavier, Maria-Victoria Mateos, Michel Delforge, Philip Lewis, Thomas Schindler, Craig Gibson, Min Yang, and Katja C. Weisel. "Assessment of Multiple Myeloma Patient Preferences on Treatment Choices: An International Discrete Choice Study." Blood 126, no. 23 (December 3, 2015): 2086. http://dx.doi.org/10.1182/blood.v126.23.2086.2086.

Abstract:
Abstract Introduction: Patients' individual preferences for specific treatment attributes are an important factor to consider in treatment decisions. This area of research is relatively underexplored for patients with multiple myeloma (MM). Aims: To understand MM patients' strength of preference for method of administration and for avoiding specific adverse events (AEs). Methods: AEs were selected from trials of MM treatments used globally across the disease course: lenalidomide (FIRST, MM-009/010), bortezomib (VISTA, APEX, MMY-3021), thalidomide (IFM 99-06), pomalidomide (MM-003), and carfilzomib (PX-171-003, -004, -005). AEs selected for evaluation were narrowed down to 12, based on discussions with MM patients, from a list of hematologic and non-hematologic AEs with a grade 3/4 incidence > 5% and the greatest difference in rate of occurrence across trials: bone pain, febrile neutropenia, hypokalemia, hyponatremia, infection, lymphopenia, neuralgia, neutropenia, peripheral neuropathy, renal adverse reaction, and thrombocytopenia and thromboembolic events. MM patients were recruited to complete an online survey. Following an introductory tutorial, patients completed 14 discrete choice cards on which they selected their preferred option between 2 hypothetical treatments with varying combinations of AEs (absent/present), route of administration (oral, subcutaneous [SC], intravenous [IV]), and progression-free survival (PFS; 22, 24, or 26 months, based on evidence of first-line MM treatment). Results were expressed as odds ratios (ORs) and coefficients. Strength of preference was converted into a willingness to trade (WTT) PFS months to receive preferred choice of treatment. Results: Four hundred patients from 8 countries participated in the survey: Canada (13; 3.3%), Denmark (9; 2.3%), France (68; 17.0%), Germany (65; 16.3%), Italy (89; 22.3%), Spain (81; 20.3%), Sweden (11; 2.8%), and the United Kingdom (64; 16.0%). Of the respondents, 28.8% were on their first treatment, 70.0% of patients reported having switched treatment. The majority (58.7%) were male, with a mean age of 40 years. Patients showed a preference for oral vs IV administration (OR, 0.875 [95% CI, 0.78-0.98]; P = .020), and there was a trend toward preferring oral over SC administration (OR, 0.897 [95% CI, 0.80-1.01]; P = .067). Strength of preference declined in patients with prior treatments. Patients expressed a statistically significant preference (P < .01) to avoid (OR < 1) all presented grade 3/4 AEs, except for hematologic AEs: thrombocytopenia (OR [P value]: 0.904 [.23]), neutropenia (0.911 [.30]), and lymphopenia (0.916 [.39]) for first treatment patients, and neutropenia (0.907 [.08]) for patients with prior therapy. The relative importance of bone pain, infection, and thromboembolic events was lower in patients with prior therapies, while the relative importance of grade 3/4 neuralgia, febrile neutropenia, and renal adverse reaction increased. The table shows patient preferences as coefficients, and by months of PFS WTT. Example: Patients on their first treatment would be WTT 4.33 mos of PFS to receive oral vs IV administration. Conclusions: Study results display important findings concerning preferences of younger, working-age MM patients on individual AEs and methods of administration. 
Patients expressed a weaker preference for avoiding hematologic AEs, such as neutropenia, lymphopenia, and thrombocytopenia, and attached increasing relative importance to avoiding some symptomatic AEs (eg, neuropathy, neuralgia, renal adverse reaction, and febrile neutropenia) over the course of their disease. Patient preference should be considered when making treatment decisions. Future analyses could explore subgroups based on demographics and disease history, including prior AEs. Disclosures: Leleu: Amgen: Patents & Royalties; Novartis: Honoraria; Celgene Corporation: Honoraria; Janssen: Honoraria; BMS: Honoraria. Mateos: Janssen-Cilag: Consultancy, Honoraria; Onyx: Consultancy; Celgene: Consultancy, Honoraria; Takeda: Consultancy. Delforge: Novartis: Honoraria; Celgene Corporation: Honoraria; Janssen: Honoraria; Amgen: Honoraria. Lewis: Celgene Corporation: Employment, Equity Ownership. Schindler: Celgene Corporation: Employment, Equity Ownership. Gibson: Celgene Corporation: Employment, Equity Ownership. Yang: Analysis Group: Employment. Weisel: Amgen: Consultancy, Honoraria, Other: Travel Support; Celgene: Consultancy, Honoraria, Other: Travel Support, Research Funding; Novartis: Other: Travel Support; Onyx: Consultancy, Honoraria; BMS: Consultancy, Honoraria, Other: Travel Support; Janssen Pharmaceuticals: Consultancy, Honoraria, Other: Travel Support, Research Funding; Noxxon: Consultancy.
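
The willingness-to-trade (WTT) figures above are, in discrete-choice studies of this kind, typically obtained as the ratio of an attribute's utility coefficient to the per-month PFS coefficient. A small illustration with invented coefficients (the study's actual estimates are not given in the abstract):

```python
def willingness_to_trade(beta_attribute, beta_pfs_per_month):
    """Months of progression-free survival a respondent would give up to get
    the preferred attribute level: a ratio of discrete-choice coefficients."""
    return beta_attribute / beta_pfs_per_month

# Invented coefficients: oral-vs-IV administration worth 0.13 on the utility
# scale, each additional PFS month worth 0.03.
print(willingness_to_trade(0.13, 0.03))  # ~4.33 months, as in the abstract's example
```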
30

Tetteh, Godson Ayertei, and Frederick Asafo-Adjei Sarpong. "Influence of type of assessment and stress on the learning outcome." Journal of International Education in Business 8, no. 2 (November 2, 2015): 125–44. http://dx.doi.org/10.1108/jieb-05-2015-0015.

Abstract:
Purpose – The purpose of this paper is to explore the influence of constructivism on assessment approach, where the type of question (true or false, multiple-choice, calculation or essay) is used productively. Although the student’s approach to learning and the teacher’s approach to teaching are concepts that have been widely researched, few studies have explored how the type of assessment (true or false, multiple-choice, calculation or essay questions) and stress manifest themselves or influence students’ learning outcomes in fulfilling Bloom’s taxonomy. Multiple-choice questions have been used for efficient assessment; however, this method has been criticized for encouraging surface learning, and some students report excelling in essay questions while failing in multiple-choice questions. A concern has arisen that changes may be necessary in the type of assessment that is perceived to fulfill Bloom’s taxonomy. Design/methodology/approach – Students’ learning outcomes were measured using true or false, multiple-choice, calculation or essay questions to fulfill Bloom’s taxonomy, together with the students’ reactions to the test questionnaire. To assess the influence of the type of assessment and the stress-level factors of interest, MANOVA was used to identify whether any differences exist and to assess the extent to which these differences are significant, both individually and collectively. Second, to assess whether the feedback given to respondents after the mid-semester assessment was effective, the one-way ANOVA procedure was used to test the equality of means and the differences in means of the mid-semester and final assessment scores. Findings – Results revealed that the type of question (true or false, multiple-choice, calculation or essay) did not significantly affect the learning outcome for each subgroup. The ANOVA results comparing the mid-semester and final assessments indicated sufficient evidence that the means are not equal; thus, the feedback given to respondents after the mid-semester assessment had a positive impact on the final assessment and actively improved student learning. Research limitations/implications – This study is restricted to students in a particular university in Ghana, and may not necessarily be applicable universally. Practical implications – Assessment for learning, and the impact of assessment generally, matter not only to students but also to teachers and the literature. Originality/value – This study contributes to the literature by examining how the combination of the type of assessment (true or false, multiple-choice, calculation or essay) and stress contributes to the learning outcome.
31

Simpson, N. C. "Partial credit in multiple-choice exams: improving large-scale assessment of multiple constituencies." International Journal of Intercultural Information Management 1, no. 3 (2009): 233. http://dx.doi.org/10.1504/ijiim.2009.025367.

32

Crisp, G. T., and E. J. Palmer. "Engaging academics with a simplified analysis of their multiple-choice question (MCQ) assessment results." Journal of University Teaching and Learning Practice 4, no. 2 (April 1, 2007): 31–50. http://dx.doi.org/10.53761/1.4.2.4.

Abstract:
The appropriate analysis of students’ responses to an assessment is an essential step in improving the quality of the assessment itself as well as staff teaching and student learning. Many academics are unfamiliar with the formal processes used to analyze assessment results; the standard statistical methods associated with analyzing the validity and reliability of an assessment are perceived as being too difficult for academics with a limited understanding of statistics. This inability of academics to apply conventional statistical tools with authority often makes it difficult for them to make informed judgements about improving the quality of the questions used in assessments. We analyzed students’ answers to a number of selected response assessments and examined different formats for presenting the resulting data to academics from a range of disciplines. We propose the need for a set of simple but effective visual formats that will allow academics to identify questions that should be reviewed before being used again and present the results of a staff survey which evaluated the response of academics to these presentation formats. The survey examined ways in which academics might use the data to assist their teaching and students’ learning. We propose that by engaging academics with a formal reflection of students’ responses, academic developers are in a position to influence academics’ use of specific items for diagnostic and formative assessments.
33

Anonymous. "How to take MULTIPLE-CHOICE TESTS." Nursing 22, no. 10 (October 1992): 117–32. http://dx.doi.org/10.1097/00152193-199210000-00037.

34

Kniveton, Bromley H. "A correlational analysis of multiple-choice and essay assessment measures." Research in Education 56, no. 1 (November 1996): 73–84. http://dx.doi.org/10.1177/003452379605600106.

35

Liu, Ou Lydia, Hee-Sun Lee, and Marcia C. Linn. "An Investigation of Explanation Multiple-Choice Items in Science Assessment." Educational Assessment 16, no. 3 (September 7, 2011): 164–84. http://dx.doi.org/10.1080/10627197.2011.611702.

36

Davis, Cheryl J., Michele D. Brock, Kristin McNulty, Mary L. Rosswurm, Benjamin Bruneau, and Thomas Zane. "Efficiency of forced choice preference assessment: Comparing multiple presentation techniques." Behavior Analyst Today 10, no. 3-4 (2010): 440–55. http://dx.doi.org/10.1037/h0100682.

37

Heck, Ronald H., and Marian Crislip. "Direct and Indirect Writing Assessments: Examining Issues of Equity and Utility." Educational Evaluation and Policy Analysis 23, no. 1 (March 2001): 19–36. http://dx.doi.org/10.3102/01623737023001019.

Abstract:
Performance tests are increasingly used as alternatives to, or in connection with, standardized multiple-choice tests as a means of assessing student learning and school accountability. Besides their proposed equity advantages over multiple-choice tests in measuring student learning across groups of students, performance assessments have also been viewed as having greater utility for monitoring school progress because of their proposed closer correspondence to the curriculum that is actually taught. We examined these assumptions by comparing third-grade student performance on a performance-based writing test and a multiple-choice test of language skills. We observed smaller differences in achievement on the writing performance assessment for some groups of students (e.g., low socioeconomic status, various ethnic backgrounds) than are commonly observed on multiple-choice tests. Girls, however, had higher mean scores than boys on both types of assessments. Moreover, the school’s identification and commitment over time to improving its students’ writing skills positively related to its students’ outcomes on the writing performance test. Overall, our examination of performance-based writing assessment is encouraging with respect to providing a relatively fair assessment and measuring learning tasks that are related to the school’s curricular practices.
38

Heck, Ronald H., and Marian Crislip. "Direct and Indirect Writing Assessments: Examining Issues of Equity and Utility." Educational Evaluation and Policy Analysis 23, no. 3 (September 2001): 275–92. http://dx.doi.org/10.3102/01623737023003275.

Abstract:
Performance tests are increasingly used as alternatives to, or in connection with, standardized multiple-choice tests as a means of assessing student learning and school accountability. Besides their proposed equity advantages over multiple-choice tests in measuring student learning across groups of students, performance assessments have also been viewed as having greater utility for monitoring school progress because of their proposed closer correspondence to the curriculum that is actually taught. We examined these assumptions by comparing third-grade student performance on a performance-based writing test and a multiple-choice test of language skills. We observed smaller differences in achievement on the writing performance assessment for some groups of students (e.g., low socioeconomic status, various ethnic backgrounds) than are commonly observed on multiple-choice tests. Girls, however, had higher mean scores than boys on both types of assessments. Moreover, the school's identification and commitment over time to improving its students' writing skills positively related to its students' outcomes on the writing performance test. Overall, our examination of performance-based writing assessment is encouraging with respect to providing a relatively fair assessment and measuring learning tasks that are related to the school's curricular practices.
39

Wilkie, Richard M., Clare Harley, and Catriona Morrison. "High Level Multiple Choice Questions in Advanced Psychology Modules." Psychology Learning & Teaching 8, no. 2 (January 1, 2009): 30–36. http://dx.doi.org/10.2304/plat.2009.8.2.30.

Abstract:
Traditional approaches to assessing students assume that multiple choice questions (MCQs) are adequate for assessing only basic, low-level knowledge at the early stages of the higher education (HE) curriculum. Increasingly, however, teachers of HE across a variety of subject areas are keen to explore the opportunities for developing higher-level MCQ formats for assessing more advanced stages of the curriculum. This has many benefits: students are unable to question-spot and are required to demonstrate a breadth as well as a depth of knowledge; tests can be administered electronically; and, because marking can be instantaneous, feedback can be quicker than in a traditional paper-and-pencil assessment, without an onerous marking load for staff. Here we report the use of high-level MCQs (hMCQ) in Level 2 of our BSc Psychology programme. We demonstrate the success of this format in differentiating between students, and highlight important factors in designing questions. We argue that this type of examination format offers an assessment that discriminates between students and which can be simply evaluated to ensure there is a suitable fit between the questions that make up the assessment tool and the student population under evaluation.
40

Loftis, J. Robert. "Beyond Information Recall." American Association of Philosophy Teachers Studies in Pedagogy 5 (2019): 89–122. http://dx.doi.org/10.5840/aaptstudies2019121144.

Abstract:
Multiple-choice questions have an undeserved reputation for only being able to test student recall of basic facts. In fact, well-crafted mechanically gradable questions can measure very sophisticated cognitive skills, including those engaged at the highest level of Benjamin Bloom’s taxonomy of outcomes. In this article, I argue that multiple-choice questions should be a part of the diversified assessment portfolio for most philosophy courses. I present three arguments broadly related to fairness. First, multiple-choice questions allow one to consolidate subjective decision making in a way that makes it easier to manage. Second, multiple-choice questions contribute to the diversity of an evaluation portfolio by balancing out problems with writing-based assessments. Third, by increasing the diversity of evaluations, multiple-choice questions increase the inclusiveness of the course. In the course of this argument, I provide examples of multiple-choice questions that measure sophisticated learning and advice for how to write good multiple-choice questions.
41

Mustikasari, Vita Ria, Munzil Munzil, and Lia Puji Lestari. "Pengembangan Instrumen Penilaian Kemampuan Berpikir Tingkat Tinggi Materi Sistem Pendengaran dan Sonar SMP." JURNAL EKSAKTA PENDIDIKAN (JEP) 2, no. 2 (November 26, 2018): 116. http://dx.doi.org/10.24036/jep/vol2-iss2/212.

Abstract:
The aim of this study is to produce a valid and reliable assessment instrument for higher-order thinking skills on the hearing and sonar system material for junior high school students. Indicators of higher-order thinking skills (IKBTT) are developed based on the cognitive dimension of Bloom's taxonomy, covering the skills to analyze (C4), evaluate (C5) and create (C6). The assessment instrument developed was a multiple-choice test with open reasons: multiple-choice questions with four answer options, in which students must also state the reason for the answer they chose. The instrument is equipped with a holistic scoring rubric to assess the relationship between the multiple-choice answer and the stated reason. Validation of the material, construction, and language aspects showed that the product is highly feasible, with an average score of 85.7%. The instrument was empirically tested on 59 eighth-grade junior high school students. The empirical trial yielded 25 valid questions, with a multiple-choice reliability of 0.745 and a reason reliability of 0.929. Of the questions produced, 12 (48%) address the C4 cognitive dimension, 9 (36%) the C5 dimension, and 4 (16%) the C6 dimension. The study thus produced a valid and reliable instrument for assessing higher-order thinking skills on the hearing and sonar system material for junior high school students.
42

McKenna, Peter. "Multiple choice questions: answering correctly and knowing the answer." Interactive Technology and Smart Education 16, no. 1 (March 11, 2019): 59–73. http://dx.doi.org/10.1108/itse-09-2018-0071.

Abstract:
Purpose – This paper aims to examine whether multiple choice questions (MCQs) can be answered correctly without knowing the answer, and whether constructed response questions (CRQs) offer more reliable assessment. Design/methodology/approach – The paper presents a critical review of existing research on MCQs, then reports on an experimental study where two objective tests (using MCQs and CRQs) were set for an introductory undergraduate course. To maximise completion, tests were kept short; consequently, differences between individuals’ scores across both tests are examined rather than overall averages and pass rates. Findings – Most students who excelled in the MCQ test did not do so in the CRQ test. Students could do well without necessarily understanding the principles being tested. Research limitations/implications – Conclusions are limited by the small number of questions in each test and by delivery of the tests at different times. This meant that statistical average data would be too coarse to use, and that some students took one test but not the other. Conclusions concerning CRQs are limited to disciplines where numerical answers or short and constrained text answers are appropriate. Practical implications – MCQs, while useful in formative assessment, are best avoided for summative assessments. Where appropriate, CRQs should be used instead. Social implications – MCQs are commonplace as summative assessments in education and training. Increasing the use of CRQs in place of MCQs should increase the reliability of tests, including those administered in safety-critical areas. Originality/value – While others have recommended that MCQs should not be used (Hinchliffe, 2014; Srivastava et al., 2004) because they are vulnerable to guessing, this paper presents an experimental study designed to test whether this hypothesis is correct.
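
The vulnerability to guessing that McKenna examines can be made concrete with a simple binomial calculation. Assuming independent questions with one correct option each, the chance of reaching a given mark by blind guessing is computed below; the test parameters are illustrative, not taken from the paper:

```python
from math import comb

def p_pass_by_guessing(n_questions, n_options, pass_mark):
    """Probability of scoring at least `pass_mark` on an MCQ test by blind
    guessing, assuming independent questions with one correct option each."""
    p = 1.0 / n_options
    return sum(
        comb(n_questions, k) * p**k * (1 - p) ** (n_questions - k)
        for k in range(pass_mark, n_questions + 1)
    )

# Ten four-option questions with a pass mark of 4 correct:
print(p_pass_by_guessing(10, 4, 4))  # ~0.22: guessing alone passes about 1 in 5
```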
43

Smith, Linda S. "How to write better multiple-choice questions." Nursing 48, no. 11 (November 2018): 14–17. http://dx.doi.org/10.1097/01.nurse.0000546471.79886.85.

44

Retnawati, Heri. "Proving content validity of self-regulated learning scale (The comparison of Aiken index and expanded Gregory index)." Research and Evaluation in Education 2, no. 2 (December 28, 2016): 155. http://dx.doi.org/10.21831/reid.v2i2.11029.

Abstract:
This study aims to demonstrate the content validity of a self-regulated learning (SRL) scale in Likert and multiple-choice formats, using content validity coefficients computed from expert assessments with the Aiken formula and the expanded Gregory formula. The SRL scales in the Likert and multiple-choice formats were developed from the same outline. Three experts assessed the relevance of the items to the indicators for both scale formats, and their assessments were then used to calculate validity coefficients with the Aiken formula and the expanded Gregory formula. The results show a content validity coefficient of 0.9 for each format with the Aiken formula, while with the expanded Gregory formula the coefficient is 0.6 for the Likert format and 0.8 for the multiple-choice format.
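
Aiken's formula cited in this abstract is a published content-validity coefficient, V = Σ(r_i − lo) / (n(c − 1)), aggregating expert ratings. A short sketch under stated assumptions (the study's exact rating scale is not given in the abstract, so the 1–5 scale and the ratings here are invented):

```python
def aiken_v(ratings, n_categories, lowest=1):
    """Aiken's V for one item: V = sum(r_i - lo) / (n * (c - 1)), where the
    r_i are expert relevance ratings on a scale of `n_categories` ordered
    levels whose lowest value is `lowest`."""
    n = len(ratings)
    return sum(r - lowest for r in ratings) / (n * (n_categories - 1))

# Three experts rate an item 5, 4, and 5 on an assumed 1-5 relevance scale:
print(aiken_v([5, 4, 5], 5))  # ~0.92, consistent with the ~0.9 reported
```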
45

Benarroch, Alicia, and Nicolás Marín. "Questionnaire of multiple-choice to assessment beliefs about learning of science." Enseñanza de las Ciencias. Revista de investigación y experiencias didácticas 28, no. 2 (April 5, 2011): 245. http://dx.doi.org/10.5565/rev/ec/v28n2.81.

46

Nicol, David. "E‐assessment by design: using multiple‐choice tests to good effect." Journal of Further and Higher Education 31, no. 1 (February 2007): 53–64. http://dx.doi.org/10.1080/03098770601167922.

47

Haladyna, Thomas M., Steven M. Downing, and Michael C. Rodriguez. "A Review of Multiple-Choice Item-Writing Guidelines for Classroom Assessment." Applied Measurement in Education 15, no. 3 (July 2002): 309–33. http://dx.doi.org/10.1207/s15324818ame1503_5.

48

Mingo, Maya A., Hsin-Hui Chang, and Robert L. Williams. "Undergraduate Students’ Preferences for Constructed Versus Multiple-Choice Assessment of Learning." Innovative Higher Education 43, no. 2 (September 14, 2017): 143–52. http://dx.doi.org/10.1007/s10755-017-9414-y.

49

Symonds, I. M., and W. Thompson. "APPENDIX Emergencies in Obstetrics and Gynaecology: Self-assessment Multiple Choice Questions." Best Practice & Research Clinical Obstetrics & Gynaecology 14, no. 1 (February 2000): A1–A14. http://dx.doi.org/10.1053/beog.2000.0095.

50

Xu, Xiaomeng, Sierra Kauer, and Samantha Tupy. "Multiple-choice questions: Tips for optimizing assessment in-seat and online." Scholarship of Teaching and Learning in Psychology 2, no. 2 (2016): 147–58. http://dx.doi.org/10.1037/stl0000062.
