Journal articles on the topic "Evaluation"

To view other types of publications on this topic, follow the link: Evaluation.

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles.

Check out the top 50 journal articles for research on the topic "Evaluation".

Next to every entry in the list there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these details are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Georghiou, L. "Meta-evaluation: Evaluation of evaluations." Scientometrics 45, no. 3 (July 1999): 523–30. http://dx.doi.org/10.1007/bf02457622.

2

Praestgaard, E. "Meta-evaluation: Evaluation of evaluations." Scientometrics 45, no. 3 (July 1999): 531–32. http://dx.doi.org/10.1007/bf02457623.

3

Ingle, B. "Evaluation '85 Canadian Evaluation Society/ Evaluation Network/ Evaluation Research Society: Exploring the Contributions of Evaluations." American Journal of Evaluation 6, no. 3 (January 1, 1985): 16. http://dx.doi.org/10.1177/109821408500600303.

4

Horvat, M. "Meta-evaluation: Evaluation of evaluations some points for discussion." Scientometrics 45, no. 3 (July 1999): 533–42. http://dx.doi.org/10.1007/bf02457624.

5

Patenaude, Johane, Georges-Auguste Legault, Monelle Parent, Jean-Pierre Béland, Suzanne Kocsis Bédard, Christian Bellemare, Louise Bernier, Charles-Etienne Daniel, Pierre Dagenais, and Hubert Gagnon. "OP104 Health Technology Assessment's Ethical Evaluation: Understanding The Diversity Of Approaches." International Journal of Technology Assessment in Health Care 33, S1 (2017): 47–48. http://dx.doi.org/10.1017/s0266462317001738.

Abstract:
INTRODUCTION: The main difficulties encountered in the integration of ethics in Health Technology Assessment (HTA) were identified in our systematic review. In the process of analyzing these difficulties we then addressed the question of the diversity of ethical approaches (1) and the difficulties in their operationalization (2,3).
METHODS: Nine ethical approaches were identified: principlism, casuistry, coherence analysis, wide reflexive equilibrium, axiology, the Socratic approach, the triangular method, constructive technology assessment, and social shaping of technology. Three criteria were used to clarify the nature of each of these approaches: (1) the characteristics of the ethical evaluation; (2) the disciplinary foundation of the ethical evaluation; (3) the operational process of the ethical evaluation in HTA analysis.
RESULTS: In HTA, both norm-based ethics and value-based ethics are mobilized. This duality is fundamental since it proposes two different ethical evaluations: the first is based on conformity to a norm, whereas the second rests on the actualization of values. The disciplinary foundation generates diversity, as philosophy, sociology and theology propose different justifications for ethical evaluation. At the operational level, the ethical evaluation's characteristics are applied to the case at stake by specific practical reasoning. In norm-based practical reasoning, one must substantiate the facts that will be correlated to a moral norm in order to clearly identify conformity or non-conformity. In value-based practical reasoning, one must identify the impacts of the object of assessment that will be subject to ethical evaluation. Two difficulties arise: how to apply values to facts, and how to prioritize amongst conflicting ethical evaluations of the impacts?
CONCLUSIONS: Applying these three criteria to ethical approaches in HTA helps in understanding their complexity and the difficulty of operationalizing them in HTA tools. The choice of any ethical evaluation is never neutral; it must be justified by a moral point of view. Developing tools for ethics in HTA means operationalizing a specific practical reasoning in ethics.
6

Al-Husseini, Khansaa Azeez Obayes, Ali Hamzah Obaid, and Ola Najah Kadhim. "Evaluating the Effectiveness of E-learning: Based on Academic Staff Evaluation." Webology 19, no. 1 (January 20, 2022): 367–79. http://dx.doi.org/10.14704/web/v19i1/web19027.

Abstract:
E-learning has become a popular learning method used in many local and international universities and in many educational institutions. The initial achievements of e-learning platforms and the online learning environment demonstrated outstanding advantages in distance education. However, it is necessary to evaluate the educational process, and in particular to assess the effectiveness of the learning environment delivered via online e-learning platforms. This study aimed to evaluate the effectiveness of e-learning from the point of view of the teaching staff at the Technical Institute of Babel and the Technical Institute of Al-Mussaib. To achieve the objectives of the study, the researchers prepared a questionnaire containing 32 questions, after verifying the instrument's reliability and validity. The results of the study revealed that the evaluated effectiveness of e-learning was average or above average on some items of the questionnaire. 84.070% of faculty members use computers and smartphones to publish academic content over their home internet connection, and 95.575% create educational content in several forms, including video, audio and text at the same time.
7

Guyadeen, Dave, and Mark Seasons. "Evaluation Theory and Practice: Comparing Program Evaluation and Evaluation in Planning." Journal of Planning Education and Research 38, no. 1 (November 3, 2016): 98–110. http://dx.doi.org/10.1177/0739456x16675930.

Abstract:
This article reviews the major approaches to program evaluation and evaluation in planning. The challenges of evaluating plans and planning are discussed, including the reliance on ex ante evaluations, a lack of outcome evaluation methodologies, the attribution gap, and institutional hurdles. Areas requiring further research are also highlighted, including the need to develop appropriate evaluation methodologies; creating stronger linkages between program evaluation and evaluation in planning; examining the institutional and political contexts guiding the use (and misuse) of evaluation in practice; and the importance of training and educating planners on evaluation.
8

Sparks, Elizabeth, Michelle Molina, Natalie Shepp, and Fiona Davey. "The Evaluation Skill-a-Thon: Evaluation Model for Meaningful Youth Engagement." Journal of Youth Development 16, no. 1 (March 30, 2021): 100–125. http://dx.doi.org/10.5195/jyd.2021.968.

Abstract:
Active engagement of youth participants in the evaluation process is an increasingly sought-after method, but the field can still benefit from new methods that ease the implementation of youth participatory evaluation. Meaningful youth engagement in the evaluation process is particularly advantageous under the 4-H thriving model because of its potential to contribute to positive youth development, foster relationship building, enhance evaluation capacity, and improve program quality through improved evaluations. This program sought to facilitate actively engaging youth in the evaluation process by breaking it up into clear and manageable steps, including evaluation design, data collection, data interpretation and analysis, reporting results, and outlining programmatic change. To achieve this aim, program staff designed the Evaluation Skill-a-Thon, a set of self-paced, experiential evaluation activities at various stations through which youth participants rotate. Actively involving youth participants in the evaluation process using the Skill-a-Thon model resulted in youth being able to identify and design programmatic changes, increased participation and response rates in other evaluations, and several youth participants gaining an interest in evaluation and working to design evaluations in later years. The Evaluation Skill-a-Thon holds promise for actively engaging youth participants in the entire evaluation process, for easy implementation, and for increasing evaluation capacity.
9

Kunieda, Yoshiaki. "Effectiveness of Self-Evaluation, Peer Evaluation and 2nd-Step Self-Evaluation- Covering Anchoring Training in Maritime Education and Training." Advances in Social Sciences Research Journal 9, no. 12 (January 5, 2023): 567–79. http://dx.doi.org/10.14738/assrj.912.13718.

Abstract:
Evaluation is an essential part of education and training, and it can be used in maritime education and training to help learners organize their knowledge and improve their skills, as well as to improve education and training methods. In this study, self-evaluation and mutual evaluation were conducted during anchoring training on the training ship Shioji Maru belonging to the Tokyo University of Marine Science and Technology. In a survey of students trained in 2020 and 2021 regarding the acquisition of knowledge and skills, 94.5% of students rated themselves as “very effective” or “effective” in their self-evaluation and 92.0% of students rated themselves as “very effective” or “effective” in their mutual evaluation. Comparing the self-evaluation scores with the mutual evaluation scores, it was found that the mutual evaluation scores tended to rank higher than the self-evaluation scores. This is thought to be due to a lack of confidence in one’s own ship handling skills, which leads to harsh evaluations of oneself and more lenient evaluations of others. It was also found that the higher the instructor’s evaluation score, the smaller the difference between the self-evaluation score and the instructor’s evaluation score. Students with higher scores in the instructor’s evaluation were more confident in their ship handling skills, which is thought to indicate that they can evaluate themselves more accurately. On the other hand, self-evaluation was conducted at an early stage immediately after the training, and the bridge operation team and the entire team also conducted the self-evaluation again after the debriefing. In other words, a 2nd-step self-evaluation was conducted through two evaluations conducted at different times. We show the results of a qualitative analysis of the students’ impressions and opinions of these self-evaluations and peer evaluation using the steps for coding and theorization (SCAT) method.
10

Guenther, John, and Ian H. Falk. "Generalising from qualitative evaluation." Evaluation Journal of Australasia 21, no. 1 (March 2021): 7–23. http://dx.doi.org/10.1177/1035719x21993938.

Abstract:
Evaluations are often focused on assessing merit, value, outcome or some other feature of a programme, project, policy or some other object. Evaluation research is then more concerned with the particular rather than the general – even more so, when qualitative methods are used. But does this mean that evaluations should not be used to generalise? If it is possible to generalise from evaluations, under what circumstances can this be legitimately achieved? The authors of this article have previously argued for generalising from qualitative research (GQR), and in this article, they extrapolate the discussion to the field of evaluation. First, the article begins with a discussion of the definitions of generalisability in research, recapping briefly on our arguments for GQR. Second, the differentiation between research and evaluation is explored with consideration of what literature there is to justify generalisation from qualitative evaluation (GQE). Third, a typology derived from the literature is developed, to sort 54 evaluation projects. Fourth, material from a suite of evaluation projects is drawn from to demonstrate how the typology of generalisation applies in the context of evaluations conducted in several fields of study. Finally, we suggest a model for GQE.
11

Gladkikh, N. "“If You Use Evaluation You Make Better Decisions and Help People More.” Interview with Michael Patton." Positive changes 3, no. 1 (March 27, 2023): 4–15. http://dx.doi.org/10.55140/2782-5817-2023-3-1-4-15.

Abstract:
Michael Quinn Patton is one of the world's most renowned experts in project and program evaluation. He has been working in this field since the 1970s, when evaluation in the nonprofit sector was a relatively new phenomenon. Dr. Patton is the creator of well-known evaluation concepts that are used by specialists around the world. He received several international awards for outstanding contributions to the field, and he wrote 18 books on various issues related to the practical use of evaluation. In an interview with our Editor-in-Chief, Michael Patton shared his views on the profession of an evaluator, the impact of the profession, trends in evaluation, the "gold standard" of evaluation methodology, and what the future holds for this field.
12

Simuyemba, Moses C., Obrian Ndlovu, Felicitas Moyo, Eddie Kashinka, Abson Chompola, Aaron Sinyangwe, and Felix Masiye. "Real-time evaluation pros and cons: Lessons from the Gavi Full Country Evaluation in Zambia." Evaluation 26, no. 3 (February 3, 2020): 367–79. http://dx.doi.org/10.1177/1356389019901314.

Abstract:
The Full Country Evaluations were Gavi-funded real-time evaluations of immunisation programmes in Bangladesh, Mozambique, Uganda and Zambia, from 2013 to 2016. The evaluations focused on providing evidence for improvement of immunisation delivery in these countries and spanned all phases of Gavi support. The process evaluation approach of the evaluations utilised mixed methods to track progress against defined theories-of-change and related milestones during the various stages of implementation of the Gavi support streams. This article highlights complexities of this type of real-time evaluation and shares lessons learnt on conducting such evaluation from the Zambian experience. Real-time process evaluation is a complex evaluation methodology that requires sensitivity to the context of the evaluation, catering for various information needs of stakeholders, and establishment of mutually beneficial relationships between programme implementers and evaluators. When used appropriately, it can be an effective means of informing programme decisions and aiding programme improvement for both donors and local implementers.
13

Devenport, Jennifer L., Steven D. Penrod, and Brian L. Cutler. "Eyewitness identification evidence: Evaluating commonsense evaluations." Psychology, Public Policy, and Law 3, no. 2-3 (June 1997): 338–61. http://dx.doi.org/10.1037/1076-8971.3.2-3.338.

14

Rodríguez-Bilella, Pablo, and Rafael Monterde-Díaz. "Evaluation, Valuation, Negotiation: Some Reflections Towards a Culture of Evaluation." Canadian Journal of Program Evaluation 25, no. 3 (January 2011): 1–10. http://dx.doi.org/10.3138/cjpe.0025.003.

Abstract: Although the evaluation of public policies is a subject of growing interest in Latin America, there are problems with the design and implementation of evaluations, as well as with the limited use of their results. In many cases, the evaluations have more to do with generating descriptions and less with assessing these activities and using those assessments to improve planning and decision making. These points are explored in a case study of the evaluation of a rural development program in Argentina, emphasizing the process of negotiation and consensus building between the evaluators and the official in charge of approving the evaluation report. The lessons learned from the experience point to the generation and consolidation of a culture of evaluation in the region.
15

Nishimura, Satoshi, Hiroyasu Miwa, Ken Fukuda, Kentaro Watanabe, and Takuichi Nishimura. "Future Prospects towards Evaluation of Robotic Devices for Nursing Care: Subjective Evaluation and Objective Evaluation." Abstracts of the international conference on advanced mechatronics: toward evolutionary fusion of IT and mechatronics: ICAM 2015.6 (2015): 17–18. http://dx.doi.org/10.1299/jsmeicam.2015.6.17.

16

Patton, Michael Quinn. "Meta-evaluation: Evaluating the Evaluation of the Paris Declaration." Canadian Journal of Program Evaluation 27, no. 3 (January 2013): 147–71. http://dx.doi.org/10.3138/cjpe.0027.008.

Abstract: It has become a standard in major high-stakes evaluations to commission an independent review to determine whether the evaluation meets generally accepted standards of quality. This is called a meta-evaluation. Given the historic importance of the Evaluation of the Paris Declaration, the Management Group commissioned a meta-evaluation of the evaluation. The meta-evaluation concluded that the findings, conclusions, and recommendations presented in the Paris Declaration Evaluation adhered closely and rigorously to the evaluation evidence collected and synthesized. The meta-evaluation included an assessment of the evaluation’s strengths, weaknesses, and lessons. This article describes how the meta-evaluation was designed and implemented, the data collected, and the conclusions reached.
17

Byrne, Jani Gabriel. "Competitive Evaluation in Industry: Some Comments." Proceedings of the Human Factors Society Annual Meeting 33, no. 5 (October 1989): 423–25. http://dx.doi.org/10.1177/154193128903300541.

Abstract:
This paper examines competitive evaluations in industry. Competitive evaluations involve systematic comparisons between two or more products on similar (or equated) attributes. These attributes can be either usability criteria or usability characteristics. Three topics are discussed: applications of competitive evaluation information that could be useful to product development; industry pitfalls associated with conducting an evaluation; and timing of the evaluation in the product development process as it relates to the goals of the organization. The paper concludes with general suggestions concerning conducting competitive evaluations in industry.
18

Beere, Diana. "Evaluation Capacity-building: A Tale of Value-adding." Evaluation Journal of Australasia 5, no. 2 (September 2005): 41–47. http://dx.doi.org/10.1177/1035719x0500500207.

Abstract:
Evaluation capacity-building entails not only developing the expertise needed to undertake robust and useful evaluations; it also involves creating and sustaining a market for that expertise by promoting an organisational culture in which evaluation is a routine part of ‘the way we do things around here’. A challenge for evaluators is to contribute to evaluation capacity-building while also fulfilling their key responsibilities to undertake evaluations. A key strategy is to focus on both discerning value and adding value for clients/commissioners of evaluations. This paper takes as examples two related internal evaluation projects conducted for the Queensland Police Service that have added value for the client and, in doing so, have helped to promote and sustain an evaluation culture within the organisation. It describes key elements of these evaluations that contributed to evaluation capacity-building. The paper highlights the key role that evaluators themselves, especially internal evaluators, can take in evaluation capacity-building, and proposes that internal evaluators can, and should, integrate evaluation capacity-building into their routine program evaluation work.
19

Carugi, Carlo, and Heather Bryant. "A Joint Evaluation With Lessons for the Sustainable Development Goals Era: The Joint GEF-UNDP Evaluation of the Small Grants Programme." American Journal of Evaluation 41, no. 2 (September 17, 2019): 182–200. http://dx.doi.org/10.1177/1098214019865936.

Abstract:
The integrated nature of the Sustainable Development Goals (SDGs) calls for greater synergy, harmonization, and complementarity in development work. This is to be reflected in evaluation. Despite a long and diversified history spanning almost three decades, joint evaluations have fallen out of fashion. Evaluators tend to shy away from joint evaluations because of timeliness, institutional and organizational differences, and personal preferences. As the SDGs call for more joint evaluations, we need to get them right. This article supports the appeal for more joint evaluations in the SDGs era by learning from the existing long and diversified experience. This article shares lessons from a joint evaluation that is relevant in the context of the SDGs for the United Nations Evaluation Group, the Evaluation Cooperation Group, and the wider international evaluation community.
20

Harnar, Michael A., Jeffrey A. Hillman, Cheryl L. Endres, and Juna Z. Snow. "Internal Formative Meta-Evaluation: Assuring Quality in Evaluation Practice." American Journal of Evaluation 41, no. 4 (September 2, 2020): 603–13. http://dx.doi.org/10.1177/1098214020924471.

Abstract:
The term meta-evaluation—referring to the “evaluation of evaluations”—has been in the evaluation lexicon for a half-century. Despite this longevity, research on meta-evaluation is sparse and even more so for internal formative types of meta-evaluation. This exploratory study builds on our understanding of meta-evaluative methods by exploring evaluators’ approaches to ensuring quality practice. A sample of practitioners was drawn from the American Evaluation Association membership and invited to share their quality assurance practices through an online survey. Respondents reported using a variety of tools to ensure quality in their practice, including published and unpublished standards, principles and guidelines, and processes involving stakeholder engagement at various stages of evaluation. A distinction was identified between an intrinsic, merit-focused perspective on quality that is more or less controlled by the evaluator and an extrinsic, worth-focused perspective on quality primarily informed by key stakeholders of the evaluation.
21

Groen, Jovan F., and Yves Herry. "The Online Evaluation of Courses: Impact on Participation Rates and Evaluation Scores." Canadian Journal of Higher Education 47, no. 2 (August 27, 2017): 106–20. http://dx.doi.org/10.47678/cjhe.v47i2.186704.

Abstract:
At one of Ontario’s largest universities, the University of Ottawa, course evaluations involve about 6,000 course sections and over 43,000 students every year. This paper-based format requires over 1,000,000 sheets of paper, 20,000 envelopes, and the support of dozens of administrative staff members. To examine the impact of a shift to an online system for the evaluation of courses, the following study sought to compare participation rates and evaluation scores of an online and paper-based course evaluation system. Results from a pilot group of 10,417 students registered in 318 courses suggest an average decrease in participation rate of 12–15% when using an online system. No significant differences in evaluation scores were observed. Instructors and students alike shared positive reviews about the online system; however, they suggested that an in-class period be maintained for the electronic completion of course evaluations.
22

Hunsaker, Scott L., and Carolyn M. Callahan. "Evaluation of Gifted Programs: Current Practices." Journal for the Education of the Gifted 16, no. 2 (January 1993): 190–200. http://dx.doi.org/10.1177/016235329301600207.

Abstract:
In an effort to describe current gifted program evaluation practices, a review of articles, ERIC documents, and dissertations was supplemented by evaluation reports solicited by The National Research Center on the Gifted and Talented at The University of Virginia from public school, private school, and professional sources. Seventy evaluation reports were received. These were coded according to ten variables dealing with evaluation design, methodology, and usefulness. Frequencies and chi squares were computed for each variable. A major concern brought out by this study is the paucity of evaluation reports/results made available to the NRC G/T. This may be due to a lack of gifted program evaluations, or to dissatisfaction with evaluation designs and results. Other concerns included a lack of methodological sophistication and shortcomings in reporting and utility. Some promising practices were apparent in the studies reviewed. A large sub-set of the evaluations were done for program improvement and employed multiple methodologies, sources, analysis techniques, and reporting formats, with utility practices that produce needed changes. In addition, most evaluations focused on a number of key areas in the gifted program rather than settling for generalized impressions about the program.
23

Sa, Yongjin. "Flexible Work Arrangements Program Implementation Evaluation." Advances in Social Sciences Research Journal 8, no. 5 (May 11, 2021): 63–70. http://dx.doi.org/10.14738/assrj.85.10154.

Abstract:
This research primarily focuses on the construction of a program evaluation proposal for flexible work arrangements. In setting out the program evaluation design, this paper specifically discusses evaluation questions and data collection and analysis for several kinds of evaluations, including needs assessment, implementation evaluation, formative evaluation, and summative evaluation. Furthermore, the expected positive effects and main functions of the flexible work arrangements program evaluation are also suggested.
24

Porter, Jamila M., Laura K. Brennan, Mighty Fine, and Ina I. Robinson. "Elements to Enhance the Successful Start and Completion of Program and Policy Evaluations: The Injury & Violence Prevention (IVP) Program & Policy Evaluation Institute." Journal of MultiDisciplinary Evaluation 16, no. 37 (November 19, 2020): 58–73. http://dx.doi.org/10.56645/jmde.v16i37.659.

Abstract:
Background: Public health practitioners, including injury and violence prevention (IVP) professionals, are responsible for implementing evaluations, but often lack formal evaluation training. Impacts of many practitioner-focused evaluation trainings—particularly their ability to help participants successfully start and complete evaluations—are unknown. Objectives: We assessed the impact of the Injury and Violence Prevention (IVP) Program & Policy Evaluation Institute (“Evaluation Institute”), a team-based, multidisciplinary, and practitioner-focused evaluation training designed to teach state IVP practitioners and their cross-sector partners how to evaluate program and policy interventions. Design: Semi-structured interviews were conducted with members of 13 evaluation teams across eight states at least one year after training participation (24 participants in total). Document reviews were conducted to triangulate, supplement, and contextualize reported improvements to policies, programs, and practices. Intervention: Teams of practitioners applied for and participated in the Evaluation Institute, a five-month evaluation training initiative that included a set of online training modules, an in-person workshop, and technical support from evaluation consultants. Main Outcome Measure(s): The successful start and/or completion of a program or policy evaluation focused on an IVP intervention. Results: Of the 13 teams studied, a total of 12 teams (92%) reported starting or completing an evaluation. Four teams (31%) reported fully completing their evaluations; eight teams (61%) reported partially completing their evaluations. Teams identified common facilitators and barriers that impacted their ability to start and complete their evaluations. Nearly half of the 13 teams (46%) – whether or not they completed their evaluation – reported at least one common improvement made to a program or policy as a result of engaging in an evaluative process. Conclusion: Practitioner-focused evaluation trainings are essential to build critical evaluation skills among public health professionals and their multidisciplinary partners. The process of evaluating an intervention—even if the evaluation is not completed—has substantial value and can drive improvements to public health interventions. The Evaluation Institute can serve as a model for training public health practitioners and their partners to successfully plan, start, complete, and utilize evaluations to improve programs and policies. Keywords: Evaluation; injury; multidisciplinary partnerships; practitioner-focused evaluation training; professional development; program and policy evaluation; public health; technical assistance; violence
25

Bourgeois, Isabelle, and Clémence Naré. "The "Usability" of Evaluation Reports: A Precursor to Evaluation Use in Government Organizations." Journal of MultiDisciplinary Evaluation 11, no. 25 (September 25, 2015): 60–67. http://dx.doi.org/10.56645/jmde.v11i25.433.

Abstract:
Background: According to the Treasury Board of Canada’s Policy on Evaluation (2009), evaluations produced by federal government departments must contribute to decision-making at an organizational level (mainly summative) as well as a program level (mainly formative). Previous research shows that although the formative objectives of evaluation are generally reached, the use of evaluation for broader, budgetary management is limited. However, little research has been conducted thus far on this issue. Purpose: This study investigates the extent to which program evaluation is used in the Canadian federal government for budgetary management purposes. Setting: This paper outlines the results obtained following the first component of a two-pronged research strategy focusing on evaluation use in Canadian federal government organizations. Intervention: N/A Research Design: Two federal agencies were recruited to participate in organizational case studies aiming to identify the factors that facilitate the use of evaluation for budgetary reallocation exercises. Data Collection and Analysis: This report presents the findings from a detailed analysis of evaluation reports published by both agencies between 2010-2013. The data were collected from public evaluation reports and analyzed using NVivo. Findings: The preliminary findings of the study show that instrumental use has occurred or can be expected to occur, based on the types of recommendations outlined in the reports reviewed and on the responses to the evaluations produced by program managers. Keywords: evaluation use; organizational evaluation capacity; instrumental use; evaluation reports; Canadian federal government; document review; organizational decision-making.
26

Nørholm, Morten. "Outlining a theory of the social and symbolic function of evaluations of education." Praxeologi – Et kritisk refleksivt blikk på sosiale praktikker 1 (May 21, 2019): e1467. http://dx.doi.org/10.15845/praxeologi.v1i0.1467.

Abstract: The article presents the results of a research project focusing on evaluations of education as a part of New Public Management in the area of education. The empirical material consists of:
- 8 state-sanctioned evaluations of the formal training programs for the positions in a medical field
- various texts on evaluations
- various examples of Danish evaluation research
A field of producers of Danish evaluation research is constructed as part of a field of power: analogous to the analysed evaluations, Danish evaluation research forms a discourse legitimizing socially necessary administrative interventions. The evaluations and the evaluation research are constructed as parts of a mechanism performing and legitimizing a sorting to an existing social order. The theoretical starting point is taken from theories, primarily by Émile Durkheim, Pierre Bourdieu and Ulf P. Lundgren.
Keywords: evaluation, evaluation of education, social reproduction, New Public Management, societies after the Modern, meritocracy
27

Corley, E. A., G. G. Keller, J. C. Lattimer, and M. R. Ellersieck. "Reliability of early radiographic evaluations for canine hip dysplasia obtained from the standard ventrodorsal radiographic projection." Journal of the American Veterinary Medical Association 211, no. 9 (November 1, 1997): 1142–46. http://dx.doi.org/10.2460/javma.1997.211.09.1142.

Abstract:
Objective: To determine reliability of preliminary evaluations for canine hip dysplasia (CHD) performed by the Orthopedic Foundation for Animals on dogs between 3 and 18 months of age. Design: Retrospective analysis of data from the Orthopedic Foundation for Animals database. Animals: 2,332 Golden Retrievers, Labrador Retrievers, German Shepherd Dogs, and Rottweilers for which preliminary evaluation had been performed between 3 and 18 months of age and for which results of a definitive evaluation performed after 24 months of age were available. Procedure: Each radiograph was evaluated, and hip joint status was graded as excellent, good, fair, or borderline phenotype or mild, moderate, or severe dysplasia. Preliminary evaluations were performed by 1 radiologist; definitive evaluations were the consensus of 3 radiologists. Reliability of preliminary evaluations was calculated as the percentage of definitive evaluations (normal vs dysplastic) that were unchanged from preliminary evaluations. Results: Reliability of a preliminary evaluation of normal hip joint phenotype decreased significantly as the preliminary evaluation changed from excellent (100%) to good (97.9%) to fair (76.9%) phenotype. Reliability of a preliminary evaluation of CHD increased significantly as the preliminary evaluation changed from mild (84.4%) to moderate (97.4%) CHD. Reliability of preliminary evaluations increased significantly as age at the time of preliminary evaluation increased, regardless of whether dogs received a preliminary evaluation of normal phenotype or CHD. Clinical Implications: Results suggest that preliminary evaluations of hip joint status in dogs are generally reliable. However, dogs that receive a preliminary evaluation of fair phenotype or mild CHD should be reevaluated after 24 months of age. (J Am Vet Med Assoc 1997;211:1142–1146)
28

Alindogan, Mark Anthony. "Evaluation competencies and functions in advertised evaluation roles in Australia." Evaluation Journal of Australasia 19, no. 2 (June 2019): 88–100. http://dx.doi.org/10.1177/1035719x19857197.

Abstract:
This study explores the functions of professional evaluators outlined in online job advertisements. A total of 97 job advertisements were reviewed in the study. A content analysis using a Coding Analysis Toolkit developed by Shulman was conducted to identify six main evaluation functions based on the collected data. These functions are (1) evaluation and reporting, (2) providing evaluation advice, (3) evaluation capacity building, (4) communication and engagement, (5) forming partnerships and (6) leading, managing and influencing. These functions were then compared to the Australian Evaluation Society’s (AES) Core Competency Domains. Overall, there is a broad alignment between these functions and the AES Core Competency Domains. However, the analysis shows that the delivery of culturally competent evaluations and evaluation utilisation received no mention in advertised evaluation roles. The delivery of culturally competent evaluation is essential from the perspective of ethics, validity and theory, while the utilisation of evaluation findings is important for the benefit of society.
29

Chen, Guanyu, Jacky Bowring, and Shannon Davis. "How Is “Success” Defined and Evaluated in Landscape Architecture—A Collective Case Study of Landscape Architecture Performance Evaluation Approaches in New Zealand." Sustainability 15, no. 20 (October 23, 2023): 15162. http://dx.doi.org/10.3390/su152015162.

Abstract:
This study examines landscape performance evaluation practices in New Zealand by analysing a representative set of evaluation cases using a “sequential” case study approach. The aim is to map the methodological terrain and understand how “success” is defined and assessed in these evaluations. This study identifies different evaluation models, including goal, satisfaction, and norm models, and explores the evaluation methods employed. This study also reveals a correlation between funding sources and evaluation outcomes, with stakeholder-funded evaluations more likely to yield positive results. These findings highlight the need for comprehensive evaluations that adopt appropriate and sufficient models and the importance of interdisciplinary collaboration for robust evaluation practices.
30

Singh, Barath Kumar, and Ravi Kumar Chittoriya. "Hair Evaluation Methods." Indian Journal of Medical and Health Sciences 10, no. 1 (June 15, 2023): 31–38. http://dx.doi.org/10.21088/ijmhs.2347.9981.10123.5.

Abstract:
The three main hair assessment methods in alopecia are non-invasive methods (questionnaire, daily hair counts, standardized wash test, 60-s hair count, global pictures, dermoscopy, hair weight, contrasting felt examination, phototrichogram, TrichoScan), semi-invasive methods (trichogram and unit area trichogram), and invasive procedures (e.g., scalp biopsy). No method is ideal or realistic. These are useful for patient diagnosis and monitoring when interpreted carefully. Daily hair counts, wash tests, etc. are good ways to evaluate a patient's shedding. Hair clinics use procedures like global photography. The phototrichogram is used exclusively in clinical trials. Some of these procedures (like scalp biopsy) require processing and interpretation expertise. In this review article, we discuss the various hair evaluation methods.
31

Alkin, Marvin C., Christina A. Christie, and Naomi A. Stephen. "Choosing an Evaluation Theory: A Supplement to Evaluation Roots (3rd Edition)." Journal of MultiDisciplinary Evaluation 17, no. 41 (August 6, 2021): 51–60. http://dx.doi.org/10.56645/jmde.v17i41.709.

Abstract:
Background: Unlike scientific theories, evaluation theories are prescriptive: a set of actions and approaches that should be followed when conducting an evaluation. While evaluation theorists have offered a variety of writings describing their theories and approaches, few have offered a specific outline of what the theory looks like in practice. Thus, Alkin and Christie formulated a book to aid evaluators in how to apply theories in evaluations (Alkin & Christie, forthcoming). This book culminates in a series of prototypes that outline each theory's goals, appropriate contexts, prescriptions, and observable actions in application. Purpose: In order to aid evaluators in applying theories, this article seeks to provide a basis for comparison that can be used to help evaluators select which theory would be most appropriate in their practice. Setting: This comparison can be applied in any setting where evaluations fit the context prescribed by each of the theories. Intervention: Not applicable. Research Design: Not applicable. Data Collection and Analysis: Not applicable. Findings: In order for theories to influence practice effectively, theories must be displayed in a way that allows for easy comparison. This comparison of three theory prototypes demonstrates that prototypes can be an effective way of selecting a prescriptive theory when conducting an evaluation. Keywords: prescriptive theories; practice; empowerment evaluation; learning centered model; developmental evaluation
32

Corbeil, Ronald C. "Improving Federal Evaluation Planning." Canadian Journal of Program Evaluation 4, no. 2 (September 1989): 23–38. http://dx.doi.org/10.3138/cjpe.4.003.

Abstract: In 1977 the federal government formally introduced a requirement that its departments and agencies evaluate their programs on a comprehensive and cyclical basis. Since then, over 600 evaluations of varying quality and significance have been completed. This article focuses on the evaluation assessment, the principal planning instrument that is to be prepared immediately before every federal evaluation, in order to show its relative success and to identify ways of improving its performance. The article supplements the technological strength of the Office of the Comptroller General of Canada by providing practical, valid, common-sense guidance on how to more successfully plan useful evaluations.
33

Conzelmann, Julie D. "Exploring Updates to Performance Evaluation Terminology." Business Ethics and Leadership 5, no. 4 (2021): 6–16. http://dx.doi.org/10.21272/bel.5(4).6-16.2021.

Abstract:
The goal of this research was to obtain feedback and perspectives from human resource experts regarding the applicability of a newly created performance evaluation document. The reviewed literature includes sources indicating that the documentation for employee performance evaluations has not been revised in decades. No recent literature was found regarding updating performance evaluations. Through an exploratory case study, human resource experts helped discern the need to update performance evaluation documents, including the 11 most recognized organizational citizenship behaviors. Purposive and snowball participant selection yielded five qualifying human resources subject matter experts representing healthcare, business, retail, manufacturing, and education from various cities in the United States. Findings revealed the need for organizations to update performance evaluations from the current antiquated and generic documents that only measure basic job-task performance. The outcome was agreement that human resource leaders should update their performance evaluation document to Exhibit B. Results empirically confirmed human resource leaders would support an updated performance evaluation document, substantiating my argument that the newly created performance evaluation document would benefit everyone by fully recognizing and measuring the value of all employee contributions in the workplace.
34

Ball, Liezl H., and Theo J. D. Bothma. "Heuristic evaluation of e-dictionaries." Library Hi Tech 36, no. 2 (June 18, 2018): 319–38. http://dx.doi.org/10.1108/lht-07-2017-0144.

Abstract:
Purpose: The purpose of this paper is to discuss the heuristic evaluations of five e-dictionaries according to the criteria developed by Ball and Bothma (2018). E-dictionaries are increasingly making use of modern information technology to create advanced information tools. It is necessary to ensure that these new products are still usable. Heuristic evaluation is a usability evaluation method used to evaluate the usability of a product. Design/methodology/approach: Five e-dictionaries were evaluated through heuristic evaluation. This method requires an evaluator to evaluate a product by using a set of criteria or guidelines. Findings: Various usability issues, as well as good features of e-dictionaries, could be identified through these evaluations, and are discussed under the categories of content, information architecture, navigation, access (searching and browsing), help, customisation and use of other innovative technologies. Originality/value: Through the evaluations in this study, the criteria could be validated and an example of how the criteria can be used to evaluate e-dictionaries could be presented.
35

Sinabell, Irina, and Elske Ammenwerth. "Agile, Easily Applicable, and Useful eHealth Usability Evaluations: Systematic Review and Expert-Validation." Applied Clinical Informatics 13, no. 01 (January 2022): 67–79. http://dx.doi.org/10.1055/s-0041-1740919.

Abstract:
Background: Electronic health (eHealth) usability evaluations of rapidly developed eHealth systems are difficult to accomplish because traditional usability evaluation methods require substantial time in preparation and implementation. This illustrates the growing need for fast, flexible, and cost-effective methods to evaluate the usability of eHealth systems. To address this demand, the present study systematically identified and expert-validated rapidly deployable eHealth usability evaluation methods.
Objective: Identification and prioritization of eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations.
Methods: The study design comprised a systematic iterative approach in which expert knowledge was contrasted with findings from the literature. Forty-three eHealth usability evaluation methods were systematically identified and assessed regarding their ease of applicability and usefulness through semi-structured expert interviews with 10 European usability experts and systematic literature research. The most appropriate eHealth usability evaluation methods were selected stepwise based on the experts' judgements of their ease of applicability and usefulness.
Results: Of the 43 eHealth usability evaluation methods identified as suitable for agile, easily applicable, and useful eHealth usability evaluations, 10 were recommended by the experts based on their usefulness for rapid eHealth usability evaluations. The three most frequently recommended eHealth usability evaluation methods were Remote User Testing, Expert Review, and the Rapid Iterative Test and Evaluation Method. Eleven usability evaluation methods, such as Retrospective Testing, were not recommended for use in rapid eHealth usability evaluations.
Conclusion: We conducted a systematic review and expert validation to identify rapidly deployable eHealth usability evaluation methods. The comprehensive and evidence-based prioritization of eHealth usability evaluation methods supports faster usability evaluations, and so contributes to the ease of use of emerging eHealth systems.
36

Derrington, Mary Lynne, and James Anthony Martinez. "Exploring Teachers’ Evaluation Perceptions: A Snapshot." NASSP Bulletin 103, no. 1 (March 2019): 32–50. http://dx.doi.org/10.1177/0192636519830770.

Abstract:
Teacher perceptions after 5 years of implementing evaluation protocols that were initiated under Race to the Top revealed attitudes about the evaluation instrument used and the nature of teachers' relationships with the evaluator. This study surveyed middle and high school teachers in nine Eastern Tennessee school districts. Data indicated unintended consequences as a result of the evaluations, including impacts on relationships with principals as well as concerns about the principal time needed for evaluations. Findings imply that the reformed evaluation system is not effectively providing learning opportunities for secondary teachers who had previously been judged to be competent.
37

Higa, Terry Ann F., and Paul R. Brandon. "Participatory Evaluation as Seen in a Vygotskian Framework." Canadian Journal of Program Evaluation 23, no. 3 (January 2009): 103–25. http://dx.doi.org/10.3138/cjpe.0023.006.

Abstract: In participatory evaluations of K–12 programs, evaluators develop school faculty’s and administrators’ evaluation capacity by training them to conduct evaluation tasks and providing consultation while the tasks are conducted. A strong case can be made that the capacity building in these evaluations can be examined using a Vygotskian approach. We conducted participatory evaluations at 9 Hawaii public schools and collected data on the extent to which various factors affected participating school personnel’s learning about program evaluation. After the evaluations were completed, a trained interviewer conducted standardized interviews eliciting the participating school personnel’s opinions about the methods and effects of the capacity building. Two reviewers used codes representing Vygotskian concepts to categorize the interview results. We present the results of the coding and provide conclusions about the value of using a Vygotskian framework to examine capacity building in participatory evaluations.
38

Askim, Jostein, Erik Døving, and Åge Johnsen. "Evaluation in Norway: A 25-Year Assessment." Scandinavian Journal of Public Administration 25, no. 3-4 (December 1, 2021): 109–31. http://dx.doi.org/10.58235/sjpa.v25i3-4.7087.

Abstract:
This article analyses the Norwegian government’s evaluation practice over the 25-year period from 1994 to 2018. Evaluations are mandatory for government ministries and agencies in Norway, with the government conducting some 100 evaluations annually. This article utilises data from a unique database to describe the development of the evaluation industry, focusing on the volume of evaluations, the most active commissioners and providers of evaluations, and the types of evaluations conducted. First, the analysis indicates that the volume of evaluations peaked in around 2010 and has subsequently decreased. As a possible consequence, information relevant to policy may be less publicly available than before. Second, ministries have commissioned relatively fewer evaluations in the last decade than in the years before, and executive agencies have commissioned relatively more. Third, the proportion of evaluations performed by consultants has risen, with that of research institutes falling.
39

Zidane, Youcef J.-T., Bjørn Otto Elvenes, Knut F. Samset, and Bassam A. Hussein. "System Thinking, Evaluations and Learning – Experience from Road Construction Megaproject in Algeria." Mediterranean Journal of Social Sciences 9, no. 3 (May 1, 2018): 121–34. http://dx.doi.org/10.2478/mjss-2018-0054.

Abstract: Ex-post evaluation is starting to be recognized in Algeria's various government institutions (e.g., ministries), and evaluation is becoming part of any program or project for two main reasons: to justify the legitimacy of the programs and projects, and to collect lessons learned for the next similar programs and projects. On the other hand, academicians believe that programs and projects can be improved by conducting proper evaluations and extracting lessons learned. Program/project evaluation is comprehensive evaluation, which mainly applies to ex-post evaluation. This paper looks closely at an ex-post evaluation of an Algerian highway megaproject based on the PESTOL model; this evaluation was conducted in the period 2014–2016. Ex-post evaluation of projects has many purposes, among them learning and knowledge sharing and transfer. In this regard, the paper briefly describes the approach used for the post-project evaluation. In addition, it links the evaluation to learning and to other types of evaluations (i.e., ex-ante, monitoring, midterm, and terminal evaluations) using a system-thinking approach, and proposes a framework for learning in projects by evaluations. This paper is based on a qualitative case study approach.
40

Beer, Jennifer S., Michael V. Lombardo, and Jamil Palacios Bhanji. "Roles of Medial Prefrontal Cortex and Orbitofrontal Cortex in Self-evaluation." Journal of Cognitive Neuroscience 22, no. 9 (September 2010): 2108–19. http://dx.doi.org/10.1162/jocn.2009.21359.

Abstract:
Empirical investigations of the relation of frontal lobe function to self-evaluation have mostly examined the evaluation of abstract qualities in relation to self versus other people. The present research furthers our understanding of frontal lobe involvement in self-evaluation by examining two processes that have not been widely studied by neuroscientists: on-line self-evaluations and correction of systematic judgment errors that influence self-evaluation. Although people evaluate their abstract qualities, it is equally important that they perform on-line evaluations to assess the success of their behavior in a particular situation. In addition, self-evaluations of task performance are sometimes overconfident because of systematic judgment errors. What role do the neural regions associated with abstract self-evaluations and decision bias play in on-line evaluation and self-evaluation bias? In this fMRI study, self-evaluation in two reasoning tasks was examined; one elicited overconfident self-evaluations of performance because of salient but misleading aspects of the task, and the other was free from misleading aspects. Medial PFC (mPFC), a region associated with self-referential processing, was generally involved in on-line self-evaluations but was not specific to accurate or overconfident evaluation. Activity in orbitofrontal cortex (OFC), a region associated with accurate nonsocial judgment, negatively predicted individual differences in overconfidence and was negatively associated with confidence level for incorrect trials.
41

Scott, Richard E. "'Pragmatic Evaluation'." International Journal of E-Health and Medical Communications 1, no. 2 (April 2010): 1–11. http://dx.doi.org/10.4018/ijehmc.2010040101.

Abstract:
E-Health continues to be implemented despite continued demonstration that it lacks value. Specific guidance regarding research approaches and methodologies would be beneficial due to the value in identifying and adopting a single model or framework for any one ‘entity’ (healthcare organisation, sub-national region, country, etc.) so that the evidence-base accumulates more rapidly and interventions can be more meaningfully compared. This paper describes a simple and systematic approach to e-health evaluation in a real-world setting, which can be applied by an evaluation team and raises the quality of e-health evaluations. The framework guides and advises users on evaluation approaches at different stages of e-health development and implementation. Termed ‘Pragmatic Evaluation,’ the approach has five principles that unfold in a staged approach that respects the collective need for timely, policy relevant, yet meticulous research.
42

McGill, Megann, Jordan Siegel, and Natasha Noureal. "A Preliminary Comparison of In-Person and Telepractice Evaluations of Stuttering." American Journal of Speech-Language Pathology 30, no. 4 (July 14, 2021): 1737–49. http://dx.doi.org/10.1044/2021_ajslp-19-00215.

Abstract:
Purpose: The purpose of this study was to compare in-person and telepractice evaluations of stuttering with adult participants. The research questions were as follows: Is an evaluation for stuttering via telepractice equivalent to an in-person evaluation in terms of (a) duration of individual evaluation tasks and overall length of the evaluation, (b) clinical outcomes across evaluating clinicians, and (c) participant experience? Method: Participants were 14 adults who stutter (males = 11; age range: 20–68) who were simultaneously assessed via telepractice and in person. Comprehensive evaluations included analysis of the speaker's stuttering, evaluation of the speaker's perceptions and attitudes about stuttering, and language testing. Evaluations were administered by either an in-person clinician or a telepractice clinician but were simultaneously scored by both clinicians. Participants were randomly assigned to the in-person-led assessment condition or the telepractice-led assessment condition. Results: No statistically significant differences were found between the in-person-led and telepractice-led evaluations in terms of overall evaluation task duration, evaluation clinical outcomes, or participants' reported experiences. That is, telepractice evaluations for stuttering in adults may be an equivalent option to in-person evaluations. Conclusions: Results of this preliminary study indicate that telepractice evaluations of stuttering may be comparable to in-person evaluations in terms of duration, clinical outcomes, and participant experiences. The current study supports the notion that telepractice evaluations may be a viable option for adult clients who stutter. Clinical considerations and future directions for research are discussed.
43

Beg, Mirza Aneesa Afzal, and Mukhtar Ahmed. "Evaluation of Septoplasty Outcome Using the NOSE (Nasal Obstruction Symptom Evaluation) Scale." International Journal of Scientific Research 3, no. 2 (June 1, 2012): 359–60. http://dx.doi.org/10.15373/22778179/feb2014/116.

44

Lawrenz, Frances, and Douglas Huffman. "Using Multi-Site Core Evaluation to Provide “Scientific” Evidence." Canadian Journal of Program Evaluation 19, no. 2 (September 2004): 17–36. http://dx.doi.org/10.3138/cjpe.19.002.

Abstract:
Funders of educational and other social service programs are requiring more experimental and performance-oriented designs in evaluations of program effectiveness. Concomitantly, funders are exhibiting less interest in evaluations that serve other purposes, such as assessing implementation fidelity. However, in order to fully understand the effectiveness of most complex social and educational programs, an evaluation must provide diverse information. This article uses the Core Evaluation of the Collaboratives for Excellence in Teacher Preparation Program as a case example of how evaluations might meet the requirements of objective scientific evaluation while at the same time valuing and incorporating other evaluation purposes. The successes and limitations of the case example in achieving this blending are discussed.
45

Buehrer, Susanne, Evanthia Kalpazidou Schmidt, Dorottya Rigler, and Rachel Palmen. "How to Implement Context-Sensitive Evaluation Approaches in Countries with Still Emerging Evaluation Cultures." Public Policy and Administration 20, no. 3 (September 28, 2021): 368–81. http://dx.doi.org/10.5755/j01.ppaa.20.3.28371.

Abstract:
Evaluation cultures and evaluation capacity building vary greatly across the European Union. Western European countries such as Austria, Germany, Denmark, and Sweden have been termed leading countries in evaluation, as they have built up well-established evaluation cultures and carry out systematic evaluations of programmes and institutions. In contrast, in Central and Eastern European (CEE) countries, efforts continue to establish evaluation practices and further develop the current evaluation culture. In Hungary, for example, an established research and innovation evaluation practice does not exist, let alone one specifically considering gender equality in research and innovation evaluations, with the exception of research and innovation programmes financed by the EU Structural Funds. Based on the results of a Horizon 2020 project, we apply a context-sensitive evaluation concept in Hungary that enables programme owners and evaluators to develop a tailor-made design and impact model for their planned or ongoing gender equality interventions. The development of this evaluation approach was based on a thorough analysis of the literature and 19 case studies, building on documentary analysis and semi-structured interviews. The article shows that this evaluation approach is applicable also in countries whose overall evaluation culture still has some catching up to do. The special feature of the presented approach is, on the one hand, that the evaluation is context-sensitive; on the other hand, it makes it possible not only to depict effects on gender equality itself but also to anticipate effects on research and innovation. Such effects can, for example, be a stronger orientation of research towards societal needs, which makes the approach particularly interesting for private companies.
46

Rowe, Andy. "Rapid impact evaluation." Evaluation 25, no. 4 (September 19, 2019): 496–513. http://dx.doi.org/10.1177/1356389019870213.

Abstract:
Rapid Impact Evaluation offers the potential to evaluate impacts in both ex ante and ex post settings, providing utility for developmental and formative evaluation as well as the usual summative settings. Rapid Impact Evaluation triangulates the judgments of three separate groups of experts to assess the incremental change in effects attributable to the program. Three methodological innovations are central to the method: the scenario-based counterfactual, a simplified approach to measuring change in effects, and an interest-based approach to stakeholder engagement. In evaluations to date, Rapid Impact Evaluation has proved to be a cost-effective and nimble approach to assessing impacts that does not intrude on the design or implementation of the program. By applying recent thinking on use-seeking research, which emphasizes joint knowledge processes over knowledge products, Rapid Impact Evaluation promotes salience, legitimacy, and credibility with decision makers and key stakeholders. Applications show Rapid Impact Evaluation to be fit for purpose.
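The triangulation at the heart of the method can be illustrated with a few lines of arithmetic. In the Python sketch below, each of three hypothetical expert groups rates the outcome both with the program and under the counterfactual scenario, and the incremental effect is the difference; the rating scale, panel names, and aggregation rules (within-panel means, across-panel median) are illustrative assumptions, not the method's specification.

```python
# A minimal sketch of the triangulation arithmetic implied by Rapid
# Impact Evaluation. All ratings and aggregation choices are assumptions.
from statistics import mean, median

def panel_increment(with_program, counterfactual):
    """Incremental effect judged by one panel, on a common 0-10 scale."""
    return mean(with_program) - mean(counterfactual)

# (with-program ratings, counterfactual-scenario ratings) per panel.
panels = {
    "program_experts":    ([7, 8, 6], [4, 5, 4]),
    "subject_experts":    ([6, 7, 7], [5, 4, 5]),
    "local_stakeholders": ([8, 7, 8], [5, 6, 5]),
}

increments = {name: panel_increment(w, c) for name, (w, c) in panels.items()}
triangulated = median(increments.values())
print(increments)
print("triangulated incremental impact:", round(triangulated, 2))
```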
47

Breidahl, Karen N., Gunnar Gjelstrup, Hanne Foss Hansen, and Morten Balle Hansen. "Evaluation of Large-Scale Public-Sector Reforms." American Journal of Evaluation 38, no. 2 (August 8, 2016): 226–45. http://dx.doi.org/10.1177/1098214016660612.

Abstract:
Research on the evaluation of large-scale public-sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that doing so is of vital importance, since the impact of such reforms is considerable and they change the context in which evaluations of other, more delimited policy areas take place. In our analysis, we apply four governance perspectives (a rational-instrumental perspective, a rational interest-based perspective, an institutional-cultural perspective, and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public-sector reforms in Denmark and Norway. We compare the evaluation process (focus and purpose), the evaluators, and the organization of the evaluation, as well as the utilization of the evaluation results. The analysis uncovers several significant findings, including how the initial organization of the evaluation strongly shapes the utilization of its results and how evaluators can approach the challenges of evaluating large-scale reforms.
48

Dhakal, Teertha Raj. "Institutionalization and Use of Evaluations in the Public Sector in Nepal." Journal of MultiDisciplinary Evaluation 10, no. 23 (July 16, 2014): 51–58. http://dx.doi.org/10.56645/jmde.v10i23.403.

Abstract:
This paper reviews the institutionalization process and the use of evaluation evidence in planning processes in Nepal. It reviews evaluation reports of 29 projects and programmes across various sectors, conducted by independent evaluators during 1995-2012. It concludes that reforms to effectively institutionalize and promote the use of evaluations need to be implemented as part of overall performance management and accountability reform, and that the use of evaluations increases if the evaluation system is designed as a consistent, integrated feature of the development planning process aimed at correcting the entire planning cycle. Promoting an evaluation culture at various levels, securing higher-level policy commitment, and addressing capacity gaps in managing evaluations are key to promoting the utilization of evaluations. Keywords: evaluation, use, performance management, Nepal
49

Singh, Pushpinder, and Rajeev Ruparathna. "Construction project proposal evaluation under uncertainty: A fuzzy-based approach." Canadian Journal of Civil Engineering 47, no. 3 (March 2020): 291–300. http://dx.doi.org/10.1139/cjce-2018-0795.

Abstract:
Construction project proposal evaluation has received much attention in recent years due to increased awareness of sustainability, value for money, and transparency. The bid evaluation matrix is the method commonly used in the construction industry for project proposal evaluation. The subjective judgments used for evaluation criteria carry significant uncertainty, and the transformation of qualitative evaluations into scores raises significant data uncertainty. Previous researchers have attempted to incorporate the uncertainties associated with project bid evaluations, but the ability of those methods to account for "true uncertainty" is questionable. The objective of this paper is to develop a fuzzy logic-based framework for project proposal evaluation; its use of type-2 fuzzy numbers sets the proposed approach apart from the current body of knowledge. An algorithm using type-2 fuzzy numbers was developed to define input uncertainty and parameter weights, and the method was demonstrated using a building construction project case study. The proposed approach quantifies qualitative evaluations more comprehensively and on a scientific basis, forming a generic proposal evaluation method applicable to various industry sectors.
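The core idea lends itself to a small sketch. The Python fragment below, a minimal illustration rather than the paper's actual algorithm, represents each linguistic rating as a simplified interval type-2 triangular fuzzy number (an upper and a lower triangular membership function), aggregates criterion ratings with weighted fuzzy arithmetic, and ranks bids by a centroid-style defuzzified score; the linguistic scale, weights, and defuzzification rule are all illustrative assumptions.

```python
# Minimal sketch of interval type-2 fuzzy bid scoring. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class IT2Triangular:
    """Interval type-2 fuzzy number: lower and upper triangular MFs (a, b, c)."""
    lower: tuple
    upper: tuple

    def scaled(self, w):
        # Scalar multiplication of triangular fuzzy numbers (w >= 0).
        return IT2Triangular(tuple(w * x for x in self.lower),
                             tuple(w * x for x in self.upper))

    def plus(self, other):
        # Fuzzy addition is componentwise for triangular numbers.
        return IT2Triangular(tuple(x + y for x, y in zip(self.lower, other.lower)),
                             tuple(x + y for x, y in zip(self.upper, other.upper)))

    def defuzzify(self):
        # Average the centroids (a + b + c) / 3 of the two triangles.
        return (sum(self.lower) + sum(self.upper)) / 6.0

# Hypothetical linguistic scale on a 0-10 axis.
SCALE = {
    "poor":      IT2Triangular((0, 1, 3), (0, 1, 4)),
    "fair":      IT2Triangular((2, 4, 6), (1, 4, 7)),
    "good":      IT2Triangular((5, 7, 9), (4, 7, 10)),
    "excellent": IT2Triangular((7, 9, 10), (6, 9, 10)),
}

def score_proposal(ratings, weights):
    """Weighted fuzzy aggregation of linguistic ratings, then defuzzification."""
    total = IT2Triangular((0, 0, 0), (0, 0, 0))
    for criterion, label in ratings.items():
        total = total.plus(SCALE[label].scaled(weights[criterion]))
    return total.defuzzify()

weights = {"cost": 0.4, "schedule": 0.3, "sustainability": 0.3}
bids = {
    "bid_A": {"cost": "good", "schedule": "fair", "sustainability": "excellent"},
    "bid_B": {"cost": "excellent", "schedule": "poor", "sustainability": "good"},
}
for name, ratings in sorted(bids.items(),
                            key=lambda kv: -score_proposal(kv[1], weights)):
    print(name, round(score_proposal(ratings, weights), 2))
```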
50

Scholtz, Jean, Catherine Plaisant, Mark Whiting, and Georges Grinstein. "Evaluation of visual analytics environments: The road to the Visual Analytics Science and Technology challenge evaluation methodology." Information Visualization 13, no. 4 (June 11, 2013): 326–35. http://dx.doi.org/10.1177/1473871613490290.

Abstract:
Evaluation of software can take many forms, ranging from algorithm correctness and performance to evaluations that focus on value to the end user. This article presents a discussion of the development of an evaluation methodology for visual analytics environments. The Visual Analytics Science and Technology Challenge was created as a community evaluation resource, available to researchers and developers of visual analytics environments, that allows them to test their designs and visualizations and compare the results with the solution and with entries prepared by others. Sharing results allows the community to learn from each other and, it is hoped, to advance more quickly. In this article, we discuss the original challenge and its evolution during the 7 years since its inception. While the Visual Analytics Science and Technology Challenge is the focus of this article, there are lessons for anyone setting up a community evaluation program, including the need to understand the purpose of the evaluation, to decide on the right metrics, and to implement those metrics appropriately, including the datasets and evaluators. For ongoing evaluations, it is also necessary to track this evolution and to ensure that the evaluation methodologies keep pace with the science being evaluated. The discussions of these topics around the Visual Analytics Science and Technology Challenge should be pertinent to anyone interested in community evaluations.
