Academic literature on the topic 'Student teaching – Evaluation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Student teaching – Evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Student teaching – Evaluation"

1

Halston, Abby, Taylor Lum, and Hans Chun. "Student Evaluation of Teaching: Exploring Instructor and Student Perspectives with Course Redesign." Education, Language and Sociology Research 1, no. 1 (June 19, 2020): p144. http://dx.doi.org/10.22158/elsr.v1n1p144.

Full text
Abstract:
Student Evaluation of Teaching (SET), or instructor evaluations, is used as a significant instrument across the world to measure instructors' teaching methods and courses. Given the lack of standardized SET across universities and institutions, this study gains insight into how instructors use and improve student evaluations, and into students' views of how their feedback is utilized, by posing questions to university students and faculty through focus groups and interviews. Data were gathered and recorded to interpret students' perceptions of how instructors utilize the students' evaluations, as well as instructors' perceptions of student evaluations and how they use the students' feedback. Results indicate that students and instructors place different values on student feedback and curriculum improvement. Implications of these differing values include instructors not attempting to improve their teaching and courses, students rating their instructors poorly, and students who may not be challenged because of the possibility of a negative evaluation.
APA, Harvard, Vancouver, ISO, and other styles
2

Wolfer, Terry A., and Miriam McNown Johnson. "Re-Evaluating Student Evaluation of Teaching." Journal of Social Work Education 39, no. 1 (January 2003): 111–21. http://dx.doi.org/10.1080/10437797.2003.10779122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Reisenwitz, Timothy H. "Student Evaluation of Teaching." Journal of Marketing Education 38, no. 1 (September 17, 2015): 7–17. http://dx.doi.org/10.1177/0273475315596778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rice, Lee C. "Student Evaluation of Teaching." Teaching Philosophy 11, no. 4 (1988): 329–44. http://dx.doi.org/10.5840/teachphil198811484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dwinell, Patricia L., and Jeanne L. Higbee. "Students' Perceptions of the Value of Teaching Evaluations." Perceptual and Motor Skills 76, no. 3 (June 1993): 995–1000. http://dx.doi.org/10.2466/pms.1993.76.3.995.

Full text
Abstract:
This research examined how 187 students assessed a course evaluation form, the anonymity of the evaluation process, the fairness and accuracy students attribute to the task of completing evaluations of instruction, and students' perceptions of the extent to which teachers and administrators make use of the information provided by evaluations. 92% of the student-participants believed that the rating forms provided an effective means of evaluating instruction. The majority thought instructors pay attention to evaluation results and change their behavior accordingly. Only 2% believed that their anonymity was not protected. Students appeared to have more faith in their own evaluations than in those of other students. They also lacked confidence in the use of evaluations for determining salary increases or tenure and promotion.
APA, Harvard, Vancouver, ISO, and other styles
6

Pullen, Darren, Steven Collette, Loan Dao, and J. F. "Student Evaluations of Teaching: Is There a Relationship between Student Feedback on Teaching and the Student Final Grade?" Frontiers in Education Technology 2, no. 3 (July 4, 2019): p124. http://dx.doi.org/10.22158/fet.v2n3p124.

Full text
Abstract:
The use of Student Evaluations of Teaching (SET) has become widespread practice in higher education despite inconclusive evidence reported in the literature around its validity. Not surprisingly, the question of the validity of SET continues to be a current debate in higher education, pointing to more research to be conducted in this area. The current study contributes to broadening knowledge and understanding of the validity of SET by drawing on an online unit evaluation completed by students (n=2430 out of a total student enrolment of N=7757) in one university across three postgraduate education programs over a two-year period, to determine whether there is a relationship between student feedback on teaching and student final unit grade. Findings revealed that students who achieved very high or very low final unit grades did not participate in the SET, while students who achieved Pass or Credit grades participated and thus provided feedback. This indicates that teaching and evaluating staff need to be aware that a large subset of their students is not providing the feedback used to improve the quality of their courses.
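A minimal sketch of the kind of participation-by-grade comparison the abstract describes follows; the file name and the columns `final_grade` and `completed_set` are assumptions for illustration, not the authors' materials.

```python
# Illustrative sketch (not the authors' code): cross-tabulate SET participation
# against final unit grade and test for independence. The file name and the
# columns `final_grade` (grade band) and `completed_set` (0/1) are assumptions.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("unit_evaluations.csv")  # one row per enrolled student

table = pd.crosstab(df["final_grade"], df["completed_set"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Participation rate by grade band, to show whether students with very high or
# very low grades are under-represented among SET respondents.
print(df.groupby("final_grade")["completed_set"].mean().round(2))
```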
APA, Harvard, Vancouver, ISO, and other styles
7

Raman, Raghu, and Prema Nedungadi. "Adoption of Web-Enabled Student Evaluation of Teaching (WESET)." International Journal of Emerging Technologies in Learning (iJET) 15, no. 24 (December 22, 2020): 191. http://dx.doi.org/10.3991/ijet.v15i24.17159.

Full text
Abstract:
The “student voice” movement, which advocates for the critical importance of seeking and applying student input into educational decisions such as curriculum development and teaching methods, has been gaining momentum. We examine “student voice” through the vehicle of “Student Evaluation of Teaching (SET)” in the context of higher education. We treat Web-Enabled Student Evaluation of Teaching (WESET) in higher educational institutions as an innovation and apply Diffusion of Innovation theory to study its adoption. We study WESET rates of adoption by analyzing data from 45,934 anonymous student feedback submissions on 427 teachers by 1102 students over a period of five years, covering both undergraduate and graduate programs at an Indian university. Data from 589 courses in three distinct academic disciplines were collected and analyzed. The adoption rate of the students is primarily attributed to three factors: (a) the guarantee that the system will maintain anonymity, (b) the expectation that student feedback will result in positive changes, and (c) ease of use, as WESET was integrated into an existing system already used by students. Student evaluations for the same courses significantly improved over each subsequent semester, suggesting that faculty had incorporated student feedback into their curriculum and teaching methods.
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Guo-Hai, and David Watkins. "Can Student Ratings of Teaching be Predicted by Teaching Styles?" Psychological Reports 106, no. 2 (April 2010): 501–12. http://dx.doi.org/10.2466/pr0.106.2.501-512.

Full text
Abstract:
The relationship between teaching styles and student ratings of teaching was examined at a Chinese university. 388 teachers (170 men, 218 women) were invited to fill out the 49-item Teaching Styles Inventory (Grigorenko & Sternberg, 1993). The inventory measures seven teaching styles: legislative, judicial, liberal, global, executive, conservative, and local. Scores from students' evaluations of teaching of courses for one semester were collected. Students' evaluation scores were significantly and negatively related to executive and conservative teaching styles of their teachers, while no significant correlation was found between student ratings and any of the other five teaching styles. Only conservative teaching style contributed significantly to the prediction of student ratings. Sex and age were found to have moderating effects on the relationship between teaching style and student ratings. The role of teaching styles in student ratings was discussed.
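For readers who want to reproduce this kind of style-versus-ratings analysis on their own data, a minimal sketch follows; the file and column names (one row per teacher, seven style subscale scores plus a mean SET score) are assumptions, not the authors' materials.

```python
# Illustrative sketch (assumed columns, not the authors' analysis): correlate
# Teaching Styles Inventory subscale scores with mean SET scores, then regress
# the SET score on all seven styles to see which styles predict ratings.
import pandas as pd
import statsmodels.api as sm

styles = ["legislative", "judicial", "liberal", "global",
          "executive", "conservative", "local"]
df = pd.read_csv("teacher_styles_and_set.csv")  # one row per teacher

# Zero-order correlations between each style and the SET score.
print(df[styles + ["set_score"]].corr()["set_score"].round(2))

# Multiple regression of the SET score on the seven teaching styles.
X = sm.add_constant(df[styles])
print(sm.OLS(df["set_score"], X).fit().summary())
```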
APA, Harvard, Vancouver, ISO, and other styles
9

Kustra, Erika, Florida Doci, Kaitlyn Gillard, Catharine Dishke Hondzel, Lori Goff, Danielle Gabay, Ken Meadows, et al. "Teaching Culture Perception: Documenting and Transforming Institutional Teaching Cultures." Collected Essays on Learning and Teaching 8 (June 12, 2015): 231. http://dx.doi.org/10.22329/celt.v8i0.4267.

Full text
Abstract:
An institutional culture that values teaching is likely to lead to improved student learning. The main focus of this study was to determine faculty, graduate and undergraduate students’ perception of the teaching culture at their institution and identify indicators of that teaching culture. Themes included support for teaching development; support for best practices, innovative practices and specific effective behaviours; recognition of teaching; infrastructure; evaluation of teaching and implementing the student feedback received from teaching evaluations. The study contributes to a larger project examining the quality of institutional teaching culture.
APA, Harvard, Vancouver, ISO, and other styles
10

Gao, Zhan, Zhihai Suo, Jun Liu, Mo Xu, Dandan Hong, Hua Wen, and Xiangting Ji. "Construction practice of student evaluation system based on JFinal + webix integrated framework and Baidu AI platform." MATEC Web of Conferences 336 (2021): 05016. http://dx.doi.org/10.1051/matecconf/202133605016.

Full text
Abstract:
Student evaluation of teaching is a key link in monitoring teaching quality and raising teachers' teaching level. Based on the practice of student evaluation at our university, this paper constructs a multi-level student evaluation system and develops an online student evaluation system using the JFinal+webix integration framework. A new "Internet plus" evaluation model is established to improve the efficiency of student evaluation and students' enthusiasm for evaluating teaching. Meanwhile, the analysis of students' comments on teaching by the Baidu AI platform provides data support for improving learning and optimizing teaching evaluation.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Student teaching – Evaluation"

1

Todoroff, Ryan. "Student perceptions of formative teacher evaluation: putting the student back in student evaluations." Online full text .pdf document, available to Fuller patrons only, 2003. http://www.tren.com.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Onyegam, Emmanuel I. (Emmanuel Ikechi). "Graduate Student Opinion of Most Important Attributes in Effective Teaching." Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc279009/.

Full text
Abstract:
Graduate students in the College of Education at the University of North Texas, Denton rated 57 teacher attributes on their relative importance in effective teaching. The data were analyzed across six demographic variables (department, sex, degree, nationality, teaching experience, and previous graduate school) using mean scores, one-way ANOVA, and t-tests for two independent samples.
APA, Harvard, Vancouver, ISO, and other styles
3

Lam, Lai-wah Melanie. "Student evaluation of teaching in Hong Kong secondary school." Click to view the E-thesis via HKUTO, 2003. http://sunzi.lib.hku.hk/hkuto/record/B31963407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lam, Lai-wah Melanie, and 林麗華. "Student evaluation of teaching in Hong Kong secondary school." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31963407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ganivet, Fernando J. "Development of a New Student Evaluation Instrument of Instructor Effectiveness in Online Courses." FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/382.

Full text
Abstract:
The purpose of this study was to (a) develop an evaluation instrument capable of rating students' perceptions of the instructional quality of an online course and the instructor's performance, and (b) validate the proposed instrument with a study conducted at a major public university. The instrument was based upon the Seven Principles of Good Practice for Undergraduate Education (Chickering & Gamson, 1987). The study examined four specific questions. 1. Is the underlying factor structure of the new instrument consistent with Chickering and Gamson's Seven Principles? 2. Is the factor structure of the new instrument invariant for male and female students? 3. Are the scores on the new instrument related to students' expected grades? 4. Are the scores on the new instrument related to the students' perceived course workload? The instrument was designed to measure students' levels of satisfaction with their instruction, and also gathered information concerning the students' sex, the expected grade in the course, and the students' perceptions of the amount of work required by the course. A cluster sample consisting of an array of online courses across the disciplines yielded a total of 297 students who responded to the online survey. The students in each selected course were asked to rate their instructors with the newly developed instrument. Question 1 was answered using exploratory factor analysis, and yielded a factor structure similar to the Seven Principles. Question 2 was answered by separately factor-analyzing the responses of male and female students and comparing the factor structures. The resulting factor structures for men and women were different. However, 14 items could be realigned under five factors that paralleled some of the Seven Principles. When the scores of only those 14 items were entered in two principal components factor analyses using only men and only women, respectively, and restricting the factor structure to five factors, the factor structures were the same for men and women. A weak positive relationship between students' expected grades and their scores on the instrument was found (Question 3). There was no relationship between students' perceived workloads for the course and their scores on the instrument (Question 4).
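As an illustration of the factor-analytic steps described above, the sketch below runs an exploratory factor analysis on the instrument's items for the full sample and separately by sex; the file name, item prefixes, and `sex` coding are hypothetical, and this is not the dissertation's code.

```python
# Illustrative sketch (hypothetical data layout): exploratory factor analysis of
# the survey items, run on the full sample and separately by sex to compare the
# resulting factor structures.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("online_set_responses.csv")
items = [c for c in df.columns if c.startswith("item_")]

def loadings(subset: pd.DataFrame, n_factors: int = 7) -> pd.DataFrame:
    """Varimax-rotated factor loadings for the given group of respondents."""
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
    fa.fit(subset[items])
    return pd.DataFrame(fa.components_.T, index=items,
                        columns=[f"F{i + 1}" for i in range(n_factors)])

overall = loadings(df)                  # full sample
men = loadings(df[df["sex"] == "M"])    # male respondents only
women = loadings(df[df["sex"] == "F"])  # female respondents only
print(overall.round(2))
```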
APA, Harvard, Vancouver, ISO, and other styles
6

Joubert, L., G. Ludick, and Z. Hattingh. "STUDENT EVALUATION OF DIFFERENT TEACHING METHODS AND THE EFFECTIVENESS THEREOF." Interim : Interdisciplinary Journal, Vol 13, Issue 2: Central University of Technology Free State Bloemfontein, 2014. http://hdl.handle.net/11462/287.

Full text
Abstract:
Published Article
A significant amount of time and effort has to go into teaching students. It is no art when lecturers simply read from a textbook. The objective of this study was to determine the teaching methods that students at the Hotel School, Central University of Technology, Free State, consider most effective in supporting learning. All first-year students (N=73) enrolled for the National Diploma: Hospitality Management were targeted to participate in the survey. A mixed-method study design was followed, and a questionnaire consisting of closed- and open-ended questions was developed for data collection. Closed-ended questions were rated on a five-point Likert scale, while answers to open-ended questions were analysed to determine trends. Results showed that lecturers used a variety of teaching methods. The lecture teaching method was rated best by 49% of students, followed by the group discussion method, which was rated second best (19%). Case studies and brainstorming were the least-preferred methods (4% and 0% respectively). Lecturers should ensure that maximum information is transferred through the teaching methods that most appeal to students. The focus should be on enabling students to practically apply the lessons taught in everyday life.
APA, Harvard, Vancouver, ISO, and other styles
7

Baker, Scott Hamilton. "Faculty Perceptions as a Foundation for Evaluating Use of Student Evaluations of Teaching." ScholarWorks @ UVM, 2014. http://scholarworks.uvm.edu/graddis/288.

Full text
Abstract:
Amidst ever-growing demands for accountability and increased graduation rates to help justify the rising costs of higher education, few topics in undergraduate education elicit a broader range of responses than student evaluations of teaching (SETs). Despite debates over their efficacy, SETs are increasingly used as formative (pedagogical practices) and summative (employee reviews) assessments of faculty teaching. Proponents contend SETs are a necessary component in measuring the quality of education a student receives, arguing that they further enable educators to reflect upon their own pedagogy and thus inform best practices, and that they are a valid component in summative evaluations of faculty. Skeptics argue that SETs are ineffective because the measurements themselves are invalid and unreliable, students are not qualified evaluators of teaching, and faculty may lower educational standards due to pressure for higher ratings in summative evaluations. This study dives more deeply into this debate by exploring faculty perceptions of SETs. Through surveys of 27 full- and part-time faculty within one division at a private, four-year teaching-focused college, this study explored faculty perceptions of SETs primarily as an initial step in a larger process seeking to evaluate the perceived and potential efficacy of SETs. Both quantitative and qualitative data were collected and analyzed using Patton's (2008) Utilization-Focused Evaluation (UFE) framework for engaging evidence, based upon a four-stage process in which evaluation findings are analyzed, interpreted, and judged, and recommendations for action are generated, with all steps involving intended users. Overall, the study data suggest that faculty were generally very supportive of SETs for formative assessments, and strongly reported their importance and use for evaluating their own pedagogy. Findings also indicated faculty relied primarily upon students' written qualitative comments over the quantitative reports generated by externally determined scaled questions on the SETs. Faculty also reported the importance of SETs as part of their own summative evaluations, yet expressed concern about overreliance upon them and again indicated a desire for a more meaningful process. The utility of the UFE framework for SETs has implications beyond the institution studied: nearly every higher education institution is faced with increasing demands for accountability of student learning from multiple stakeholders. Additionally, many institutions are grappling with policies on SETs in summative and formative evaluation, and to what extent faculty and administrators do, and perhaps should, utilize SETs in measuring teaching effectiveness is a pertinent question for any institution of higher education to examine. Thus, the study suggests that the extent to which faculty reflect upon SETs, and the extent to which they utilize feedback, is a salient issue at any institution; and Patton's model has the potential to maximize the utility of SETs for many relevant stakeholders, especially faculty.
APA, Harvard, Vancouver, ISO, and other styles
8

Glover, Jacob I. "Finding the right mix: teaching methods as predictors for student progress on learning objectives." Diss., Kansas State University, 2012. http://hdl.handle.net/2097/13623.

Full text
Abstract:
Doctor of Philosophy
Department of Special Education, Counseling and Student Affairs
Aaron H. Carlstrom
This study extends existing student ratings research by exploring how teaching methods, individually and collectively, influence a minimum standard of student achievement on learning objectives and how class size impacts this influence. Twenty teaching methods were used to predict substantial or exceptional progress on each of 12 learning objectives. Analyses were conducted in four class-size groups: Small (10-14 students), Medium (15-34 students), Large (35-49 students), and Very Large (50 or more students). Archival data comprised over 580,000 classes of instructors and students who responded to two instruments within the IDEA Student Rating of Instruction system: instructors completed the Faculty Information Form, and students responded to the Student Ratings Diagnostic Form. Significant progress, for the purpose of this study, means students indicated they made either substantial or exceptional progress on learning objectives the instructor identified as relevant to the course. Therefore, student ratings of progress were dichotomized and binary logistic regression was conducted on the dummy variables. Descriptive statistics and point-biserial correlations were also computed to test the hypotheses. Teaching methods that stimulated student interest were found to be among the strongest predictors of significant progress on the majority of learning objectives across all class sizes. For all class sizes, significant progress was correctly classified from a low of 76% of the time to a high of 90% of the time. The higher students rated the instructor in stimulating them to intellectual effort, the more progress they reported on a majority of learning objectives across all class sizes. Higher instructor ratings on inspiring students to set and achieve challenging goals were also associated with significant student progress on learning objectives across all class sizes. Class size was not a major factor affecting the predictive strength of groups of teaching methods on student progress on learning objectives. However, it was a factor concerning the predictive strength of individual teaching methods: the larger the enrollment, the greater the predictive strength of key teaching methods. Implications of the study for faculty professional development and for future research are discussed.
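The core analysis described above (dichotomized progress ratings predicted from teaching-method ratings within a class-size group) can be sketched as follows; the column names, the 4-point cut-off, and the enrollment band are placeholders rather than the study's actual variables.

```python
# Illustrative sketch (assumed column names, not the dissertation's code):
# dichotomize progress on a learning objective and fit a binary logistic
# regression on teaching-method ratings within the 'Medium' class-size group.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("idea_classes.csv")  # one row per class

# 1 = substantial or exceptional progress reported, 0 = otherwise.
df["sig_progress"] = (df["progress_rating"] >= 4).astype(int)

methods = [c for c in df.columns if c.startswith("method_")]  # 20 predictors
medium = df[df["enrollment"].between(15, 34)]

X = sm.add_constant(medium[methods])
logit = sm.Logit(medium["sig_progress"], X).fit(disp=False)
print(logit.summary())

# Share of classes correctly classified at a 0.5 threshold, analogous to the
# "% correctly classified" figures reported in the study.
pred = (logit.predict(X) >= 0.5).astype(int)
print("correctly classified:", round((pred == medium["sig_progress"]).mean(), 2))
```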
APA, Harvard, Vancouver, ISO, and other styles
9

Gall, Annette Rashid. "Faculty perceptions of the effects of student evaluations of teaching on higher education instructional practices and instructor morale." Huntington, WV : [Marshall University Libraries], 2004. http://www.marshall.edu/etd/descript.asp?ref=397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zong, Shiping. "The meaning of expected grade and the meaning of overall rating of instruction : a validation study of student evaluation of teaching with hierarchical linear models /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/7608.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Student teaching – Evaluation"

1

Gronlund, Norman E. Student exercise manual for measurement & evaluation in teaching. [Place of publication not identified]: Penguin Putnam Inc., 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kelly, Mavis. Going on-line with student evaluation of teaching. Hong Kong: City University of Hong Kong, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Baksh, Ishmael J. Teaching strategies: The student perspective. St. John's: Publications Committee, Faculty of Education, Memorial University of Newfoundland, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rando, William C. Learning from students: Early term student feedback in higher education. [University Park, PA?]: National Center on Postsecondary Teaching, Learning and Assessment, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stephens, Kristen R., ed. The ultimate guide for student product development & evaluation. 2nd ed. Waco, Tex: Prufrock Press, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Karnes, Frances A. The ultimate guide for student product development & evaluation. 2nd ed. Waco, Tex: Prufrock Press, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Karnes, Frances A. The ultimate guide for student product development & evaluation. 2nd ed. Waco, Tex: Prufrock Press, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Student portfolios: A practical guide to evaluation. Bothell, WA: Wright Group, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Eberle, Francis, and Lynn Farrin, eds. Uncovering student ideas in science. Arlington, Va: NSTA Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Alverno College Faculty. Student assessment-as-learning at Alverno College. Milwaukee, Wis: Alverno College Institute, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Student teaching – Evaluation"

1

Masters, Geofferey, George Morgan, and Mark Wilson. "Charting Student Progress." In Creative Ideas For Teaching Evaluation, 287–89. Dordrecht: Springer Netherlands, 1989. http://dx.doi.org/10.1007/978-94-015-7829-5_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ferguson, Ronald F., and Eric Hirsch. "How Working Conditions Predict Teaching Quality and Student Outcomes." In Designing Teacher Evaluation Systems, 332–80. San Francisco: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781119210856.ch11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Jie, Yongsheng Zhang, Xiaolong Wu, and Guoyun Li. "Teaching Video Recommendation Based on Student Evaluation." In Cloud Computing and Security, 182–90. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00009-7_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Uttl, Bob. "Lessons Learned from Research on Student Evaluation of Teaching in Higher Education." In Student Feedback on Teaching in Schools, 237–56. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75150-0_15.

Full text
Abstract:
In higher education, anonymous student evaluation of teaching (SET) ratings are used to measure faculty's teaching effectiveness and to make high-stakes decisions about hiring, firing, promotion, merit pay, and teaching awards. SET have many desirable properties: SET are quick and cheap to collect, SET means and standard deviations give an aura of precision and scientific validity, and SET provide tangible, seemingly objective numbers for both high-stakes decisions and public accountability purposes. Unfortunately, SET as a measure of teaching effectiveness are fatally flawed. First, experts cannot agree what effective teaching is. They only agree that effective teaching ought to result in learning. Second, SET do not measure faculty's teaching effectiveness, as students do not learn more from more highly rated professors. Third, SET depend on many teaching-effectiveness-irrelevant factors (TEIFs) not attributable to the professor (e.g., students' intelligence, students' prior knowledge, class size, subject). Fourth, SET are influenced by student preference factors (SPFs) whose consideration violates human rights legislation (e.g., ethnicity, accent). Fifth, SET are easily manipulated by chocolates, course easiness, and other incentives. However, student ratings of professors can be used for very limited purposes such as formative feedback and raising alarm about ineffective teaching practices.
APA, Harvard, Vancouver, ISO, and other styles
5

Buchanan, John. "Student Evaluation as a Driver of Education Delivery." In Challenging the Deprofessionalisation of Teaching and Teachers, 169–87. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-8538-8_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bassi, Francesca, Leonardo Grilli, Omar Paccagnella, Carla Rampichini, and Roberta Varriale. "New Insights on Student Evaluation of Teaching in Italy." In New Statistical Developments in Data Science, 263–74. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21158-5_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bijlsma, Hannah. "The Quality of Student Perception Questionnaires: A Systematic Review." In Student Feedback on Teaching in Schools, 47–71. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75150-0_4.

Full text
Abstract:
Student perceptions of teaching are promising for measuring the quality of teaching in primary and secondary education. However, generating valid and reliable measurements when using a student perception questionnaire (SPQ) is not self-evident. Many authors have pointed to issues that need to be taken into account when developing, selecting, and using an SPQ in order to generate valid and reliable scores. In this study, 22 SPQs that met the inclusion criteria used in the literature search were systematically evaluated by two reviewers. The reviewers were most positive about the theoretical basis of the SPQs and about the quality of the SPQ materials. According to their evaluation, most SPQs also had acceptable reliability and construct validity. However, norm information about the quality rating measures was often lacking and few sampling specifications were provided. Information about the features of the SPQs, if available, was also often not presented in an accessible way by the instrument developers (e.g., in a user manual), making it difficult for potential SPQ users to obtain an overview of the qualities of available SPQs in order to decide which SPQs best fit their own context and intended use. It is suggested to create an international database of SPQs and to develop a standardized evaluation framework to evaluate the SPQ qualities in order to provide potential users with the information they need to make a well-informed choice of an SPQ.
APA, Harvard, Vancouver, ISO, and other styles
8

Cook, D. A., L. M. Brown, and E. N. Skakun. "Factors which Influence the Outcome of Student Evaluation of Teaching." In Advances in Medical Education, 545–47. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-011-4886-3_165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wisniewski, Benedikt, and Klaus Zierer. "Functions and Success Conditions of Student Feedback in the Development of Teaching and Teachers." In Student Feedback on Teaching in Schools, 125–38. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75150-0_8.

Full text
Abstract:
The term "student feedback" is often used synonymously with evaluation, assessment, or ratings of teaching, but can be conceptually delimited from these concepts by distinguishing formative and summative aspects. Obtaining feedback is a core component of teachers' professional development. It is the basis for critical self-reflection, a prerequisite of reducing discrepancies between one's performance and set goals, a tool to identify blind spots, and a means of correcting false self-assessments. Student feedback opens up opportunities for teachers to improve their teaching by comparing students' perspectives on instructional quality to their own perspectives. Feedback can also help teachers to implement democratic principles and experience self-efficacy. Conditions are discussed that need to be fulfilled for student feedback to be successful.
APA, Harvard, Vancouver, ISO, and other styles
10

van der Lans, Rikkert. "A Probabilistic Model for Feedback on Teachers’ Instructional Effectiveness: Its Potential and the Challenge of Combining Multiple Perspectives." In Student Feedback on Teaching in Schools, 73–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75150-0_5.

Full text
Abstract:
This chapter describes research into the validity of a teacher evaluation framework that was applied between 2012 and 2016 to provide feedback to Dutch secondary school teachers concerning their instructional effectiveness. In this research project, the acquisition of instructional effectiveness was conceptualized as unfolding along a continuum ranging from ineffective novice to effective expert instructor. Using advanced statistical models, teachers' current position on the continuum was estimated. This information was used to tailor feedback for professional development. Two instruments were applied to find teachers' current position on the continuum, namely the International Comparative Assessment of Learning and Teaching (ICALT) observation instrument and the My Teacher–student questionnaire (MTQ). This chapter highlights background theory and central concepts behind the project and it introduces the logic behind the statistical methods that were used to operationalize the continuum of instructional effectiveness. Specific attention is given to differences between students and observers in how they experience teachers' instructional effectiveness and the resulting disagreement in how they position teachers on the continuum. It is explained how this disagreement made feedback reports less actionable. The chapter then discusses evidence of two empirical studies that examined the disagreement from two methodological perspectives. Finally, it makes some tentative conclusions concerning the practical implications of the evidence.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Student teaching – Evaluation"

1

Coelho, Lia Alencar, and Marcelo Machado De Luca de Oliveira Ribeiro. "Student ratings to evaluate the teaching effectiveness: Factors should be considered." In Fifth International Conference on Higher Education Advances. Valencia: Universitat Politècnica València, 2019. http://dx.doi.org/10.4995/head19.2019.9392.

Full text
Abstract:
The study discusses the student ratings of a professor teaching sociology disciplines in different undergraduate courses. The data were obtained from questionnaires consisting of a series of inquiries about the discipline, focusing on how it fits in the curricular structure (discipline evaluation), and about the teacher's performance (professor evaluation). A total of 480 students answered the questionnaire and, for each question, they had five possible answers: very poor (1 point), poor (2 points), fair (3 points), good (4 points) and excellent (5 points). Considering discipline and professor evaluations together, students from the Animal Science, Food Engineering and Veterinary Medicine courses considered the performance of the sociology professor "fair". Regarding the professor evaluation alone, the students of the three undergraduate courses considered the performance of the teacher "good". For the discipline evaluation, the Animal Science and Veterinary Medicine students considered the discipline "fair" and the Food Engineering students considered it "poor". The results obtained can serve as a basis for the design of an institutional evaluation system of teaching based on student ratings; however, the evaluation of the discipline and the performance of the teacher must be considered separately.
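The scoring scheme in this abstract (mapping the five answer categories to 1-5 points and keeping discipline and professor items separate) can be illustrated with a short sketch; the file layout and column prefixes are assumptions, not the authors' data.

```python
# Illustrative sketch (hypothetical column layout): convert answer categories to
# points and average discipline items and professor items separately per course,
# since the abstract argues the two evaluations should be considered apart.
import pandas as pd

points = {"very poor": 1, "poor": 2, "fair": 3, "good": 4, "excellent": 5}

df = pd.read_csv("sociology_ratings.csv")  # one row per student response
discipline_items = [c for c in df.columns if c.startswith("disc_")]
professor_items = [c for c in df.columns if c.startswith("prof_")]

scored = df.copy()
scored[discipline_items + professor_items] = (
    scored[discipline_items + professor_items].replace(points)
)
scored["discipline_mean"] = scored[discipline_items].mean(axis=1)
scored["professor_mean"] = scored[professor_items].mean(axis=1)

print(scored.groupby("course")[["discipline_mean", "professor_mean"]]
      .mean().round(2))
```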
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Heping, Yang Zhan, and Hu Ding. "Teaching Quality Evaluation Based on Student Satisfaction." In 2011 International Conference on Management and Service Science (MASS 2011). IEEE, 2011. http://dx.doi.org/10.1109/icmss.2011.5998798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zaninotto, Enrico. "Is the Italian student survey on teaching reliable? For what purposes?" In Fifth International Conference on Higher Education Advances. Valencia: Universitat Politècnica València, 2019. http://dx.doi.org/10.4995/head19.2019.9108.

Full text
Abstract:
Italian universities submit a compulsory survey to students for the evaluation of teaching activities. The questionnaire, designed by the Italian National Agency for University and Research System Evaluation (ANVUR), aims to evaluate four dimensions of teaching quality (course, instructor, personal interest and overall satisfaction) through twelve questions on a four-level scale. This paper first addresses the issue of the questionnaire's reliability in representing the four evaluation dimensions. The main result is that the questionnaire does not properly represent the four dimensions of evaluation for which it is intended. Secondly, through a preliminary statistical analysis, the paper discusses the use of the survey for comparative purposes. A comparative analysis can be adversely affected by several contextual and subjective factors, such as the size of the class and the gender of the instructor. The paper concludes by discussing the difficulty of finding proper conditioning, and raises doubts regarding an uncritical comparative use of student evaluations of teaching activities.
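One simple way to probe the reliability question raised in this paper is to compute Cronbach's alpha for each intended dimension of the questionnaire. The sketch below is a generic illustration; the CSV layout and the item-to-dimension mapping are invented, not the official ANVUR scheme.

```python
# Illustrative sketch: Cronbach's alpha per intended evaluation dimension.
# The file layout and the item groupings are assumptions for demonstration only.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of item columns (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("anvur_survey.csv")  # twelve items on a four-level scale
dimensions = {
    "course": ["q1", "q2", "q3", "q4"],
    "instructor": ["q5", "q6", "q7", "q8", "q9", "q10"],
    "interest_and_satisfaction": ["q11", "q12"],
}
for name, cols in dimensions.items():
    print(name, round(cronbach_alpha(df[cols]), 2))
```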
APA, Harvard, Vancouver, ISO, and other styles
4

Kiran, Eranki L. N., and Kannan M. Moudgalya. "Evaluation of Programming Competency Using Student Error Patterns." In 2015 International Conference on Learning and Teaching in Computing and Engineering (LaTiCE). IEEE, 2015. http://dx.doi.org/10.1109/latice.2015.16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Alegre, Ines, and Jasmina Berbegal-Mirabent. "Evaluation Systems in Online Environments." In Seventh International Conference on Higher Education Advances. Valencia: Universitat Politècnica de València, 2021. http://dx.doi.org/10.4995/head21.2021.13026.

Full text
Abstract:
One of the biggest challenges of online teaching is student evaluation. With the students not being physically present, assessing their level of knowledge of a subject presents different challenges than those traditionally encountered in face-to-face teaching. In this paper we present an overview of different evaluation systems and reflect on their advantages and disadvantages when applying them in online environments. The most common evaluation systems (multiple-choice quizzes, open question exams, essays, projects and oral exams) are ranked according to several criteria. The criteria include items that any professor should take into consideration, such as ease of design and preparation or difficulty of student cheating. The advantages and downsides of each evaluation system are presented, and several mechanisms to mitigate the disadvantages of each method are proposed. This paper is helpful to professors and teachers, especially in the current situation where the Covid-19 pandemic has moved most higher-education teaching online.
APA, Harvard, Vancouver, ISO, and other styles
6

Jiménez-Parra, Beatriz, Daniel Alonso-Martínez, Laura Cabeza-García, and Nuria González-Álvarez. "Online teaching in COVID-19 times. Student satisfaction and analysis of their academic performance." In Seventh International Conference on Higher Education Advances. Valencia: Universitat Politècnica de València, 2021. http://dx.doi.org/10.4995/head21.2021.12855.

Full text
Abstract:
Online teaching has grown exponentially as a result of COVID-19. Universities and teaching institutions the world over have had to adapt their curricula to this new teaching and learning model. The main goal of this study is to analyse various teaching methodologies used with a sample of university students and to assess their effectiveness in terms of satisfaction, competencies and academic performance. The results suggest that methodologies that include greater student-teacher interaction or the use of videoconferencing for classes and problem-solving help to raise student satisfaction. Students also positively assess online teaching as it allows them to acquire new competencies and even to identify business opportunities. The online evaluation method used also seems to have been appropriate, as it led students to obtain better grades than in face-to-face teaching contexts. The study offers several implications for university teachers of Social Sciences who wish to adopt this type of teaching method.
APA, Harvard, Vancouver, ISO, and other styles
7

Schlag, Ruben, and Maximilian Sailer. "Linking teachers’ facial microexpressions with student-based evaluation of teaching effectiveness: A pilot study using FaceReader™." In Seventh International Conference on Higher Education Advances. Valencia: Universitat Politècnica de València, 2021. http://dx.doi.org/10.4995/head21.2021.13093.

Full text
Abstract:
This study seeks to investigate the potential influence of facial microexpressions on student-based evaluations and to explore the future possibilities of using automated technologies in higher education. We applied a non-experimental correlational design to investigate whether the number of videotaped university lecturers' facial microexpressions recognized by FaceReader™ serves as a predictor of positive results on student evaluation of teaching effectiveness. We therefore analyzed five videotaped lectures with the automatic facial recognition software. Additionally, each video was rated by between 8 and 16 students, using a rating instrument based on the results of Murray's (1983) factor analysis. The FaceReader™ software detected more than 5,000 facial microexpressions. Although positive emotions bear a positive influence on the "overall performance rating", "emotions" did not predict "overall performance rating", b = .05, t(37) = .35, p > .05. The study demonstrates that student ratings are affected by more variables than just facial microexpressions. The study also showed that sympathy, as well as the estimated age of the lecturer, predicted higher student ratings.
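The regression statistics quoted in the abstract (b, t and p for "emotions" predicting the overall rating) are of the kind produced by an ordinary least squares fit; a minimal sketch with hypothetical variable names, not the authors' code, follows.

```python
# Illustrative sketch (hypothetical variable names): regress the overall
# performance rating on the count of positive microexpressions detected per
# lecture, yielding b, t and p statistics like those reported in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lecture_ratings.csv")  # one row per student rating

model = smf.ols("overall_rating ~ positive_expressions", data=df).fit()
print(model.params)    # b coefficients
print(model.tvalues)   # t statistics
print(model.pvalues)   # p values
```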
APA, Harvard, Vancouver, ISO, and other styles
8

Cheng, Wen, Rani Vyas, Ranjithsudarshan Gopalakrishnan, Edwars R. Clay, and Mankirat Singh. "Exploring Correlation among Different Elements of Student Evaluation of Teaching." In 2020 IEEE Frontiers in Education Conference (FIE). IEEE, 2020. http://dx.doi.org/10.1109/fie44824.2020.9273999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Sumei. "The Application of SPSS in Student Evaluation of Teaching Quality." In 2010 2nd International Symposium on Information Engineering and Electronic Commerce (IEEC). IEEE, 2010. http://dx.doi.org/10.1109/ieec.2010.5533284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Huguet, Carme, Jill Pearse, and Jorge Esteve. "New tools for online teaching and their impact on student learning." In Seventh International Conference on Higher Education Advances. Valencia: Universitat Politècnica de València, 2021. http://dx.doi.org/10.4995/head21.2021.12811.

Full text
Abstract:
In the context of the global Covid-19 crisis, a practical introductory Geosciences course was redesigned to aid student learning in a 100% virtual format. New materials were created to i) improve disciplinary language range and concept acquisition; ii) make classes more dynamic; iii) provide tools for self-regulated learning and assessment; and iv) maintain student motivation. Usefulness of the new materials was evaluated using a voluntary online survey that was answered by 40% of the students. Additional information was obtained from the university's student evaluation survey. All tools were well rated, but self-assessment quizzes and class presentations had the highest overall scores. Students commented on their usefulness in terms of knowledge acquisition and self-assessment. Perhaps not surprisingly, self-assessment quizzes were the one tool students felt kept them most motivated. These were closely followed by class presentations and short in-class quizzes. Students found the online access to all lesson materials very useful for self-paced learning. According to a majority of students, the in-class quizzes and student participation using the digital whiteboard made classes more dynamic. Overall, the new strategies succeeded in improving students' learning and independence, but more work is needed to make classes more dynamic, and especially to improve student motivation. Intrinsic motivation is perhaps the most difficult to improve because, in a 100% virtual course, it is difficult to promote student-student interactions and receive visual feedback from the class. In view of the survey results, we introduce bonus activities in order to improve extrinsic motivation.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Student teaching – Evaluation"

1

Mansfield, Janet. Report on faculty and student evaluation of instructors in direct service teaching at Portland State University Graduate School of Social Work. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Thomson, Sue, Nicole Wernert, Sima Rodrigues, and Elizabeth O'Grady. TIMSS 2019 Australia. Volume I: Student performance. Australian Council for Educational Research, December 2020. http://dx.doi.org/10.37517/978-1-74286-614-7.

Full text
Abstract:
The Trends in International Mathematics and Science Study (TIMSS) is an international comparative study of student achievement directed by the International Association for the Evaluation of Educational Achievement (IEA). TIMSS was first conducted in 1995 and the assessment conducted in 2019 formed the seventh cycle, providing 24 years of trends in mathematics and science achievement at Year 4 and Year 8. In Australia, TIMSS is managed by the Australian Council for Educational Research (ACER) and is jointly funded by the Australian Government and the state and territory governments. The goal of TIMSS is to provide comparative information about educational achievement across countries in order to improve teaching and learning in mathematics and science. TIMSS is based on a research model that uses the curriculum, within context, as its foundation. TIMSS is designed, broadly, to align with the mathematics and science curricula used in the participating education systems and countries, and focuses on assessment at Year 4 and Year 8. TIMSS also provides important data about students’ contexts for learning mathematics and science based on questionnaires completed by students and their parents, teachers and school principals. This report presents the results for Australia as a whole, for the Australian states and territories and for the other participants in TIMSS 2019, so that Australia’s results can be viewed in an international context, and student performance can be monitored over time. The results from TIMSS, as one of the assessments in the National Assessment Program, allow for nationally comparable reports of student outcomes against the Melbourne Declaration on Educational Goals for Young Australians. (Ministerial Council on Education, Employment, Training and Youth Affairs, 2008).
APA, Harvard, Vancouver, ISO, and other styles
3

DeBarger, Angela, and Geneva Haertel. Evaluation of Journey to El Yunque: Final Report. The Learning Partnership, December 2006. http://dx.doi.org/10.51420/report.2006.1.

Full text
Abstract:
This report describes the design, implementation and outcomes of the initial version of the NSF-funded Journey to El Yunque curriculum, released in 2005. As formative evaluators, the role of SRI International was to document the development of the curriculum and to collect empirical evidence on the impact of the intervention on student achievement. The evaluation answers four research questions: How well does the Journey to El Yunque curriculum and accompanying assessments align with the National Science Education Standards for content and inquiry? How do teachers rate the effectiveness of the professional development workshop in teaching them to use the Journey to El Yunque curriculum and assessment materials? How do teachers implement the Journey to El Yunque curriculum? To what extent does the Journey to El Yunque curriculum increase students' understanding of ecology and scientific inquiry abilities? The evaluators concluded that Journey to El Yunque is a well-designed curriculum and assessment replacement unit that addresses important science content and inquiry skills. The curriculum and assessments are aligned to life science content standards and key ecological concepts, and materials cover a broad range of these standards and concepts. Journey to El Yunque students scored significantly higher on the posttest than students learning ecology by traditional means, with an effect size of 0.20.
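The reported effect size of 0.20 is a standardized mean difference. As a worked illustration (with made-up scores, not the evaluation's data), Cohen's d can be computed as follows.

```python
# Illustrative sketch: Cohen's d with a pooled standard deviation, the kind of
# standardized mean difference (0.20) reported above. All scores are made up.
import numpy as np

def cohens_d(treatment: np.ndarray, control: np.ndarray) -> float:
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = treatment.var(ddof=1), control.var(ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (treatment.mean() - control.mean()) / pooled_sd

rng = np.random.default_rng(0)
el_yunque = rng.normal(72, 10, 120)    # hypothetical curriculum group
traditional = rng.normal(70, 10, 120)  # hypothetical comparison group
print(round(cohens_d(el_yunque, traditional), 2))
```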
APA, Harvard, Vancouver, ISO, and other styles
4

Tucker-Blackmon, Angelicque. Engagement in Engineering Pathways “E-PATH” An Initiative to Retain Non-Traditional Students in Engineering Year Three Summative External Evaluation Report. Innovative Learning Center, LLC, July 2020. http://dx.doi.org/10.52012/tyob9090.

Full text
Abstract:
The summative external evaluation report described the program's impact on faculty and students participating in recitation sessions and active teaching professional development sessions over two years. Student persistence and retention in engineering courses continue to be a challenge in undergraduate education, especially for students underrepresented in engineering disciplines. The program's goal was to use peer-facilitated instruction in core engineering courses known to have high attrition rates in order to retain underrepresented students, especially women, in engineering and to diversify and broaden engineering participation. Knowledge generated around using peer-facilitated instruction at two-year colleges can improve underrepresented students' success and participation in engineering across a broad range of institutions. Students in the program participated in peer-facilitated recitation sessions linked to fundamental engineering courses, such as engineering analysis, statics, and dynamics. These courses have the highest failure rates among women and underrepresented minority students. In this mixed-methods evaluation study, student engagement was measured as students' comfort with asking questions, collaboration with peers, and applying mathematics concepts. SPSS was used to analyze pre- and post-surveys for statistical significance. Qualitative data were collected through classroom observations and focus group sessions with recitation leaders. Semi-structured interviews were conducted with faculty members and students to understand their experiences in the program. Findings revealed that women students perceived marginalization and intimidation primarily in courses with significantly more men than women. However, they shared numerous strategies that could support them towards success through the engineering pathway. Women and underrepresented students perceived that they did not have a network of peers and faculty to identify with as role models within engineering disciplines. The recitation sessions had a positive social impact on Hispanic women. As opportunities to collaborate increased, Hispanic women's social engagement was expected to increase. This level of social engagement has already been predicted to increase women students' persistence and retention in engineering and to keep them from leaving the engineering pathway. An analysis of quantitative survey data from students in the three engineering courses revealed a significant effect of race and ethnicity on comfort in asking questions in class, collaborating with peers outside the classroom, and applying mathematical concepts. Further examination of this effect for comfort with asking questions in class revealed that it was driven by one or two extreme post-test scores of Asian students. A follow-up ANOVA for this item revealed that Asian women reported feeling excluded in the classroom. However, it was difficult to determine whether these differences are stable given the small sample size of students identifying as Asian. Furthermore, gender differences were significant for comfort in communicating with professors and peers. Overall, women reported less comfort communicating with their professors than men. Results from student metrics will inform faculty professional development efforts to increase faculty support and maximize student engagement, persistence, and retention in engineering courses at community colleges. Summative results from this project could inform the national STEM community about recitation support to further improve undergraduate engineering learning and educational research.
APA, Harvard, Vancouver, ISO, and other styles
5

Kibler, Amanda, René Pyatt, Jason Greenberg Motamedi, and Ozen Guven. Key Competencies in Linguistically and Culturally Sustaining Mentoring and Instruction for Clinically-based Grow-Your-Own Teacher Education Programs. Oregon State University, May 2021. http://dx.doi.org/10.5399/osu/1147.

Full text
Abstract:
Grow-Your-Own (GYO) Teacher Education programs that aim to diversify and strengthen the teacher workforce must provide high-quality learning experiences that support the success and retention of Black, Indigenous, and people of color (BIPOC) teacher candidates and bilingual teacher candidates. Such work requires a holistic and systematic approach to conceptualizing instruction and mentoring that is both linguistically and culturally sustaining. To guide this work in the Master of Arts in Teaching in Clinically Based Elementary program at Oregon State University’s College of Education, we conducted a review of relevant literature and frameworks related to linguistically responsive and/or sustaining teaching or mentoring practices. We developed a set of ten mentoring competencies for school-based cooperating/clinical teachers and university supervisors. They are grouped into the domains of: Facilitating Linguistically and Culturally Sustaining Instruction, Engaging with Mentees, Recognizing and Interrupting Inequitable Practices and Policies, and Advocating for Equity. We also developed a set of twelve instructional competencies for teacher candidates as well as the university instructors who teach them. The instructional competencies are grouped into the domains of: Engaging in Self-reflection and Taking Action, Learning About Students and Re-visioning Instruction, Creating Community, and Facilitating Language and Literacy Development in Context. We are currently operationalizing these competencies to develop and conduct surveys and focus groups with various GYO stakeholders for the purposes of ongoing program evaluation and improvement, as well as further refinement of these competencies.
APA, Harvard, Vancouver, ISO, and other styles