Follow this link to see other types of publications on the topic: Evaluation.

Journal articles on the topic "Evaluation"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Below are the top 50 journal articles for research on the topic "Evaluation".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1. Georghiou, L. "Meta-evaluation: Evaluation of evaluations". Scientometrics 45, no. 3 (July 1999): 523–30. http://dx.doi.org/10.1007/bf02457622.
2. Praestgaard, E. "Meta-evaluation: Evaluation of evaluations". Scientometrics 45, no. 3 (July 1999): 531–32. http://dx.doi.org/10.1007/bf02457623.
3. Ingle, B. "Evaluation '85 Canadian Evaluation Society / Evaluation Network / Evaluation Research Society: Exploring the Contributions of Evaluations". American Journal of Evaluation 6, no. 3 (January 1, 1985): 16. http://dx.doi.org/10.1177/109821408500600303.
4. Horvat, M. "Meta-evaluation: Evaluation of evaluations: some points for discussion". Scientometrics 45, no. 3 (July 1999): 533–42. http://dx.doi.org/10.1007/bf02457624.
5. Patenaude, Johane, Georges-Auguste Legault, Monelle Parent, Jean-Pierre Béland, Suzanne Kocsis Bédard, Christian Bellemare, Louise Bernier, Charles-Etienne Daniel, Pierre Dagenais, and Hubert Gagnon. "OP104 Health Technology Assessment's Ethical Evaluation: Understanding The Diversity Of Approaches". International Journal of Technology Assessment in Health Care 33, S1 (2017): 47–48. http://dx.doi.org/10.1017/s0266462317001738.

Abstract:
INTRODUCTION: The main difficulties encountered in the integration of ethics in Health Technology Assessment (HTA) were identified in our systematic review. In the process of analyzing these difficulties, we then addressed the question of the diversity of ethical approaches (1) and the difficulties in their operationalization (2,3).
METHODS: Nine ethical approaches were identified: principlism, casuistry, coherence analysis, wide reflexive equilibrium, axiology, the Socratic approach, the triangular method, constructive technology assessment, and social shaping of technology. Three criteria were used to clarify the nature of each of these approaches: (1) the characteristics of the ethical evaluation; (2) the disciplinary foundation of the ethical evaluation; (3) the operational process of the ethical evaluation in HTA analysis.
RESULTS: In HTA, both norm-based ethics and value-based ethics are mobilized. This duality is fundamental, since it proposes two different ethical evaluations: the first is based on conformity to a norm, whereas the second rests on the actualization of values. The disciplinary foundation generates diversity, as philosophy, sociology, and theology propose different justifications for ethical evaluation. At the operational level, the ethical evaluation's characteristics are applied to the case at stake by specific practical reasoning. In norm-based practical reasoning, one must substantiate the facts that will be correlated to a moral norm in order to clearly identify conformity or non-conformity. In value-based practical reasoning, one must identify the impacts of the object of assessment that will be subject to ethical evaluation. Two difficulties arise: how to apply values to facts, and how to prioritize amongst conflicting ethical evaluations of the impacts?
CONCLUSIONS: Applying these three criteria to ethical approaches in HTA helps in understanding their complexity and the difficulty of operationalizing them in HTA tools. The choice of any ethical evaluation is never neutral; it must be justified by a moral point of view. Developing tools for ethics in HTA means operationalizing a specific practical reasoning in ethics.
6. Al-Husseini, Khansaa Azeez Obayes, Ali Hamzah Obaid, and Ola Najah Kadhim. "Evaluating the Effectiveness of E-learning: Based on Academic Staff Evaluation". Webology 19, no. 1 (January 20, 2022): 367–79. http://dx.doi.org/10.14704/web/v19i1/web19027.

Abstract:
E-learning has become a popular learning method used in many local and international universities and in many educational institutions. The initial achievements of e-learning platforms and the online learning environment demonstrated outstanding advantages in distance education. However, it is necessary to evaluate the educational process, and in particular to effectively assess the learning environment provided by online e-learning platforms. This study aimed to evaluate the effectiveness of e-learning from the point of view of the teaching staff at the Technical Institute of Babel and the Technical Institute Al-Mussaib. To achieve the objectives of the study, the researchers prepared a questionnaire containing 32 questions, after verifying the reliability and validity of the instruments. The results of the study revealed that the rated effectiveness of e-learning was average or above average on some items of the questionnaire. Of the faculty members, 84.070% use computers and smartphones to publish academic content over home internet connections, and 95.575% create educational content in several forms, including video, audio, and text at the same time.
7. Guyadeen, Dave, and Mark Seasons. "Evaluation Theory and Practice: Comparing Program Evaluation and Evaluation in Planning". Journal of Planning Education and Research 38, no. 1 (November 3, 2016): 98–110. http://dx.doi.org/10.1177/0739456x16675930.

Abstract:
This article reviews the major approaches to program evaluation and evaluation in planning. The challenges of evaluating plans and planning are discussed, including the reliance on ex ante evaluations, the lack of outcome evaluation methodologies, the attribution gap, and institutional hurdles. Areas requiring further research are also highlighted, including the need to develop appropriate evaluation methodologies; creating stronger linkages between program evaluation and evaluation in planning; examining the institutional and political contexts guiding the use (and misuse) of evaluation in practice; and the importance of training and educating planners on evaluation.
8. Sparks, Elizabeth, Michelle Molina, Natalie Shepp, and Fiona Davey. "The Evaluation Skill-a-Thon: Evaluation Model for Meaningful Youth Engagement". Journal of Youth Development 16, no. 1 (March 30, 2021): 100–125. http://dx.doi.org/10.5195/jyd.2021.968.

Abstract:
Active engagement of youth participants in the evaluation process is an increasingly sought-after method, but the field can still benefit from new methods that ease the implementation of youth participatory evaluation. Meaningful youth engagement in the evaluation process is particularly advantageous under the 4-H thriving model because of its potential to contribute to positive youth development, foster relationship building, enhance evaluation capacity, and improve program quality through improved evaluations. This program sought to facilitate active youth engagement in the evaluation process by breaking it into clear and manageable steps: evaluation design, data collection, data interpretation and analysis, reporting results, and outlining programmatic change. To achieve this aim, program staff designed the Evaluation Skill-a-Thon, a set of self-paced, experiential evaluation activities at various stations through which youth participants rotate. Actively involving youth participants in the evaluation process using the Skill-a-Thon model resulted in youth being able to identify and design programmatic changes, increased participation and response rates in other evaluations, and several youth participants gaining an interest in evaluation and working to design evaluations in later years. The Evaluation Skill-a-Thon holds promise for actively engaging youth participants in the entire evaluation process, easy implementation, and increased evaluation capacity.
9. Kunieda, Yoshiaki. "Effectiveness of Self-Evaluation, Peer Evaluation and 2nd-Step Self-Evaluation: Covering Anchoring Training in Maritime Education and Training". Advances in Social Sciences Research Journal 9, no. 12 (January 5, 2023): 567–79. http://dx.doi.org/10.14738/assrj.912.13718.

Abstract:
Evaluation is an essential part of education and training, and it can be used in maritime education and training to help learners organize their knowledge and improve their skills, as well as to improve education and training methods. In this study, self-evaluation and mutual evaluation were conducted during anchoring training on the training ship Shioji Maru, belonging to the Tokyo University of Marine Science and Technology. In a survey of students trained in 2020 and 2021 regarding the acquisition of knowledge and skills, 94.5% of students rated self-evaluation as "very effective" or "effective", and 92.0% of students rated mutual evaluation as "very effective" or "effective". Comparing the self-evaluation scores with the mutual evaluation scores, it was found that the mutual evaluation scores tended to rank higher than the self-evaluation scores. This is thought to be due to a lack of confidence in one's own ship handling skills, which leads to harsh evaluations of oneself and more lenient evaluations of others. It was also found that the higher the instructor's evaluation score, the smaller the difference between the self-evaluation score and the instructor's evaluation score. Students with higher scores in the instructor's evaluation were more confident in their ship handling skills, which suggests that they can evaluate themselves more accurately. In addition, self-evaluation was conducted at an early stage immediately after the training, and the bridge operation team and the entire team conducted the self-evaluation again after the debriefing; in other words, a 2nd-step self-evaluation was carried out through two evaluations at different times. We present the results of a qualitative analysis of the students' impressions and opinions of these self-evaluations and peer evaluations using the steps for coding and theorization (SCAT) method.
10. Gladkikh, N. "“If You Use Evaluation You Make Better Decisions and Help People More.” Interview with Michael Patton". Positive Changes 3, no. 1 (March 27, 2023): 4–15. http://dx.doi.org/10.55140/2782-5817-2023-3-1-4-15.

Abstract:
Michael Quinn Patton is one of the world's most renowned experts in project and program evaluation. He has been working in this field since the 1970s, when evaluation in the nonprofit sector was a relatively new phenomenon. Dr. Patton is the creator of well-known evaluation concepts that are used by specialists around the world. He has received several international awards for outstanding contributions to the field, and he has written 18 books on various issues related to the practical use of evaluation. In an interview with our Editor-in-Chief, Michael Patton shared his views on the profession of an evaluator, the impact of the profession, trends in evaluation, the "gold standard" of evaluation methodology, and what the future holds for this field.
11. Guenther, John, and Ian H. Falk. "Generalising from qualitative evaluation". Evaluation Journal of Australasia 21, no. 1 (March 2021): 7–23. http://dx.doi.org/10.1177/1035719x21993938.

Abstract:
Evaluations are often focused on assessing merit, value, outcome, or some other feature of a programme, project, policy, or other object. Evaluation research is then more concerned with the particular rather than the general, even more so when qualitative methods are used. But does this mean that evaluations should not be used to generalise? If it is possible to generalise from evaluations, under what circumstances can this be legitimately achieved? The authors of this article have previously argued for generalising from qualitative research (GQR), and in this article they extrapolate the discussion to the field of evaluation. First, the article begins with a discussion of the definitions of generalisability in research, recapping briefly on our arguments for GQR. Second, the differentiation between research and evaluation is explored, with consideration of the literature available to justify generalisation from qualitative evaluation (GQE). Third, a typology derived from the literature is developed to sort 54 evaluation projects. Fourth, material from a suite of evaluation projects is drawn on to demonstrate how the typology of generalisation applies in the context of evaluations conducted in several fields of study. Finally, we suggest a model for GQE.
12. Simuyemba, Moses C., Obrian Ndlovu, Felicitas Moyo, Eddie Kashinka, Abson Chompola, Aaron Sinyangwe, and Felix Masiye. "Real-time evaluation pros and cons: Lessons from the Gavi Full Country Evaluation in Zambia". Evaluation 26, no. 3 (February 3, 2020): 367–79. http://dx.doi.org/10.1177/1356389019901314.

Abstract:
The Full Country Evaluations were Gavi-funded real-time evaluations of immunisation programmes in Bangladesh, Mozambique, Uganda, and Zambia, from 2013 to 2016. The evaluations focused on providing evidence for improvement of immunisation delivery in these countries and spanned all phases of Gavi support. The process evaluation approach utilised mixed methods to track progress against defined theories-of-change and related milestones during the various stages of implementation of the Gavi support streams. This article highlights the complexities of this type of real-time evaluation and shares lessons learnt from the Zambian experience. Real-time process evaluation is a complex evaluation methodology that requires sensitivity to the context of the evaluation, catering for the various information needs of stakeholders, and establishing mutually beneficial relationships between programme implementers and evaluators. When used appropriately, it can be an effective means of informing programme decisions and aiding programme improvement for both donors and local implementers.
13. Devenport, Jennifer L., Steven D. Penrod, and Brian L. Cutler. "Eyewitness identification evidence: Evaluating commonsense evaluations". Psychology, Public Policy, and Law 3, no. 2-3 (June 1997): 338–61. http://dx.doi.org/10.1037/1076-8971.3.2-3.338.
14. Rodríguez-Bilella, Pablo, and Rafael Monterde-Díaz. "Evaluation, Valuation, Negotiation: Some Reflections Towards a Culture of Evaluation". Canadian Journal of Program Evaluation 25, no. 3 (January 2011): 1–10. http://dx.doi.org/10.3138/cjpe.0025.003.

Abstract:
Although the evaluation of public policies is a subject of growing interest in Latin America, there are problems with the design and implementation of evaluations, as well as with the limited use of their results. In many cases, the evaluations have more to do with generating descriptions and less with assessing these activities and using those assessments to improve planning and decision making. These points are explored in a case study of the evaluation of a rural development program in Argentina, emphasizing the process of negotiation and consensus building between the evaluators and the official in charge of approving the evaluation report. The lessons learned from the experience point to the generation and consolidation of a culture of evaluation in the region.
15. Nishimura, Satoshi, Hiroyasu Miwa, Ken Fukuda, Kentaro Watanabe, and Takuichi Nishimura. "Future Prospects towards Evaluation of Robotic Devices for Nursing Care: Subjective Evaluation and Objective Evaluation". Abstracts of the International Conference on Advanced Mechatronics: Toward Evolutionary Fusion of IT and Mechatronics (ICAM) 2015.6 (2015): 17–18. http://dx.doi.org/10.1299/jsmeicam.2015.6.17.
16. Patton, Michael Quinn. "Meta-evaluation: Evaluating the Evaluation of the Paris Declaration". Canadian Journal of Program Evaluation 27, no. 3 (January 2013): 147–71. http://dx.doi.org/10.3138/cjpe.0027.008.

Abstract:
It has become standard in major high-stakes evaluations to commission an independent review to determine whether the evaluation meets generally accepted standards of quality. This is called a meta-evaluation. Given the historic importance of the Evaluation of the Paris Declaration, the Management Group commissioned a meta-evaluation of the evaluation. The meta-evaluation concluded that the findings, conclusions, and recommendations presented in the Paris Declaration Evaluation adhered closely and rigorously to the evaluation evidence collected and synthesized. The meta-evaluation included an assessment of the evaluation's strengths, weaknesses, and lessons. This article describes how the meta-evaluation was designed and implemented, the data collected, and the conclusions reached.
17. Byrne, Jani Gabriel. "Competitive Evaluation in Industry: Some Comments". Proceedings of the Human Factors Society Annual Meeting 33, no. 5 (October 1989): 423–25. http://dx.doi.org/10.1177/154193128903300541.

Abstract:
This paper examines competitive evaluations in industry. Competitive evaluations involve systematic comparisons between two or more products on similar (or equated) attributes. These attributes can be either usability criteria or usability characteristics. Three topics are discussed: applications of competitive evaluation information that could be useful to product development; industry pitfalls associated with conducting an evaluation; and timing of the evaluation in the product development process as it relates to the goals of the organization. The paper concludes with general suggestions on conducting competitive evaluations in industry.
18. Beere, Diana. "Evaluation Capacity-building: A Tale of Value-adding". Evaluation Journal of Australasia 5, no. 2 (September 2005): 41–47. http://dx.doi.org/10.1177/1035719x0500500207.

Abstract:
Evaluation capacity-building entails not only developing the expertise needed to undertake robust and useful evaluations; it also involves creating and sustaining a market for that expertise by promoting an organisational culture in which evaluation is a routine part of 'the way we do things around here'. A challenge for evaluators is to contribute to evaluation capacity-building while also fulfilling their key responsibility to undertake evaluations. A key strategy is to focus on both discerning value and adding value for clients and commissioners of evaluations. This paper takes as examples two related internal evaluation projects conducted for the Queensland Police Service that added value for the client and, in doing so, helped to promote and sustain an evaluation culture within the organisation. It describes the key elements of these evaluations that contributed to evaluation capacity-building. The paper highlights the key role that evaluators themselves, especially internal evaluators, can take in evaluation capacity-building, and proposes that internal evaluators can, and should, integrate evaluation capacity-building into their routine program evaluation work.
19. Carugi, Carlo, and Heather Bryant. "A Joint Evaluation With Lessons for the Sustainable Development Goals Era: The Joint GEF-UNDP Evaluation of the Small Grants Programme". American Journal of Evaluation 41, no. 2 (September 17, 2019): 182–200. http://dx.doi.org/10.1177/1098214019865936.

Abstract:
The integrated nature of the Sustainable Development Goals (SDGs) calls for greater synergy, harmonization, and complementarity in development work. This should be reflected in evaluation. Despite a long and diversified history spanning almost three decades, joint evaluations have fallen out of fashion. Evaluators tend to shy away from joint evaluations because of timeliness, institutional and organizational differences, and personal preferences. As the SDGs call for more joint evaluations, we need to get them right. This article supports the appeal for more joint evaluations in the SDGs era by learning from this long and diversified experience. It shares lessons from a joint evaluation that are relevant in the context of the SDGs for the United Nations Evaluation Group, the Evaluation Cooperation Group, and the wider international evaluation community.
20. Harnar, Michael A., Jeffrey A. Hillman, Cheryl L. Endres, and Juna Z. Snow. "Internal Formative Meta-Evaluation: Assuring Quality in Evaluation Practice". American Journal of Evaluation 41, no. 4 (September 2, 2020): 603–13. http://dx.doi.org/10.1177/1098214020924471.

Abstract:
The term meta-evaluation, referring to the "evaluation of evaluations", has been in the evaluation lexicon for half a century. Despite this longevity, research on meta-evaluation is sparse, and even more so for internal formative types of meta-evaluation. This exploratory study builds on our understanding of meta-evaluative methods by exploring evaluators' approaches to ensuring quality practice. A sample of practitioners was drawn from the American Evaluation Association membership and invited to share their quality assurance practices through an online survey. Respondents reported using a variety of tools to ensure quality in their practice, including published and unpublished standards, principles and guidelines, and processes involving stakeholder engagement at various stages of evaluation. A distinction was identified between an intrinsic, merit-focused perspective on quality that is more or less controlled by the evaluator and an extrinsic, worth-focused perspective on quality primarily informed by key stakeholders of the evaluation.
21. Porter, Jamila M., Laura K. Brennan, Mighty Fine, and Ina I. Robinson. "Elements to Enhance the Successful Start and Completion of Program and Policy Evaluations: The Injury & Violence Prevention (IVP) Program & Policy Evaluation Institute". Journal of MultiDisciplinary Evaluation 16, no. 37 (November 19, 2020): 58–73. http://dx.doi.org/10.56645/jmde.v16i37.659.

Abstract:
Background: Public health practitioners, including injury and violence prevention (IVP) professionals, are responsible for implementing evaluations, but often lack formal evaluation training. The impacts of many practitioner-focused evaluation trainings, particularly their ability to help participants successfully start and complete evaluations, are unknown.
Objectives: We assessed the impact of the Injury and Violence Prevention (IVP) Program & Policy Evaluation Institute ("Evaluation Institute"), a team-based, multidisciplinary, practitioner-focused evaluation training designed to teach state IVP practitioners and their cross-sector partners how to evaluate program and policy interventions.
Design: Semi-structured interviews were conducted with members of 13 evaluation teams across eight states at least one year after training participation (24 participants in total). Document reviews were conducted to triangulate, supplement, and contextualize reported improvements to policies, programs, and practices.
Intervention: Teams of practitioners applied for and participated in the Evaluation Institute, a five-month evaluation training initiative that included a set of online training modules, an in-person workshop, and technical support from evaluation consultants.
Main Outcome Measure(s): The successful start and/or completion of a program or policy evaluation focused on an IVP intervention.
Results: Of the 13 teams studied, 12 (92%) reported starting or completing an evaluation. Four teams (31%) reported fully completing their evaluations; eight teams (61%) reported partially completing them. Teams identified common facilitators and barriers that affected their ability to start and complete their evaluations. Nearly half of the 13 teams (46%), whether or not they completed their evaluation, reported at least one common improvement made to a program or policy as a result of engaging in an evaluative process.
Conclusion: Practitioner-focused evaluation trainings are essential to building critical evaluation skills among public health professionals and their multidisciplinary partners. The process of evaluating an intervention, even if the evaluation is not completed, has substantial value and can drive improvements to public health interventions. The Evaluation Institute can serve as a model for training public health practitioners and their partners to successfully plan, start, complete, and utilize evaluations to improve programs and policies.
Keywords: evaluation; injury; multidisciplinary partnerships; practitioner-focused evaluation training; professional development; program and policy evaluation; public health; technical assistance; violence
22. Groen, Jovan F., and Yves Herry. "The Online Evaluation of Courses: Impact on Participation Rates and Evaluation Scores". Canadian Journal of Higher Education 47, no. 2 (August 27, 2017): 106–20. http://dx.doi.org/10.47678/cjhe.v47i2.186704.

Abstract:
At one of Ontario's largest universities, the University of Ottawa, course evaluations involve about 6,000 course sections and over 43,000 students every year. This paper-based format requires over 1,000,000 sheets of paper, 20,000 envelopes, and the support of dozens of administrative staff members. To examine the impact of a shift to an online system for the evaluation of courses, this study compared the participation rates and evaluation scores of an online and a paper-based course evaluation system. Results from a pilot group of 10,417 students registered in 318 courses suggest an average decrease in participation rate of 12–15% when using an online system. No significant differences in evaluation scores were observed. Instructors and students alike shared positive reviews of the online system; however, they suggested that an in-class period be maintained for the electronic completion of course evaluations.
23. Hunsaker, Scott L., and Carolyn M. Callahan. "Evaluation of Gifted Programs: Current Practices". Journal for the Education of the Gifted 16, no. 2 (January 1993): 190–200. http://dx.doi.org/10.1177/016235329301600207.

Abstract:
In an effort to describe current gifted program evaluation practices, a review of articles, ERIC documents, and dissertations was supplemented by evaluation reports solicited by The National Research Center on the Gifted and Talented at The University of Virginia from public school, private school, and professional sources. Seventy evaluation reports were received. These were coded according to ten variables dealing with evaluation design, methodology, and usefulness. Frequencies and chi squares were computed for each variable. A major concern raised by this study is the paucity of evaluation reports and results made available to the NRC G/T. This may be due to a lack of gifted program evaluations, or to dissatisfaction with evaluation designs and results. Other concerns included a lack of methodological sophistication and shortcomings in reporting and utility. Some promising practices were apparent in the studies reviewed. A large subset of the evaluations were done for program improvement and employed multiple methodologies, sources, analysis techniques, and reporting formats, with utility practices that produce needed changes. In addition, most evaluations focused on a number of key areas in the gifted program rather than settling for generalized impressions about the program.
24. Nørholm, Morten. "Outlining a theory of the social and symbolic function of evaluations of education". Praxeologi – Et kritisk refleksivt blikk på sosiale praktikker 1 (May 21, 2019): e1467. http://dx.doi.org/10.15845/praxeologi.v1i0.1467.

Abstract:
The article presents the results of a research project focusing on evaluations of education as part of New Public Management in the area of education. The empirical material consists of:
- 8 state-sanctioned evaluations of the formal training programs for positions in a medical field
- various texts on evaluations
- various examples of Danish evaluation research
A field of producers of Danish evaluation research is constructed as part of a field of power: analogous to the analysed evaluations, Danish evaluation research forms a discourse legitimizing socially necessary administrative interventions. The evaluations and the evaluation research are constructed as parts of a mechanism performing and legitimizing a sorting into an existing social order. The theoretical starting point is theories primarily by Émile Durkheim, Pierre Bourdieu, and Ulf P. Lundgren.
Keywords: evaluation, evaluation of education, social reproduction, New Public Management, societies after the Modern, meritocracy
25. Sa, Yongjin. "Flexible Work Arrangements Program Implementation Evaluation". Advances in Social Sciences Research Journal 8, no. 5 (May 11, 2021): 63–70. http://dx.doi.org/10.14738/assrj.85.10154.

Abstract:
This research primarily focuses on the construction of a program evaluation proposal for flexible work arrangements. To build the program evaluation design, this paper discusses evaluation questions and data collection and analysis for several kinds of evaluation: needs assessment, implementation evaluation, formative evaluation, and summative evaluation. Furthermore, the expected positive effects and main functions of the flexible work arrangements program evaluation are also suggested.
26

Corley, E. A., G. G. Keller, J. C. Lattimer e M. R. Ellersieck. "Reliability of early radiographic evaluations for canine hip dysplasia obtained from the standard ventrodorsal radiographic projection". Journal of the American Veterinary Medical Association 211, n.º 9 (1 de novembro de 1997): 1142–46. http://dx.doi.org/10.2460/javma.1997.211.09.1142.

Abstract:
Objective: To determine reliability of preliminary evaluations for canine hip dysplasia (CHD) performed by the Orthopedic Foundation for Animals on dogs between 3 and 18 months of age. Design: Retrospective analysis of data from the Orthopedic Foundation for Animals database. Animals: 2,332 Golden Retrievers, Labrador Retrievers, German Shepherd Dogs, and Rottweilers for which preliminary evaluation had been performed between 3 and 18 months of age and for which results of a definitive evaluation performed after 24 months of age were available. Procedure: Each radiograph was evaluated, and hip joint status was graded as excellent, good, fair, or borderline phenotype or as mild, moderate, or severe dysplasia. Preliminary evaluations were performed by 1 radiologist; definitive evaluations were the consensus of 3 radiologists. Reliability of preliminary evaluations was calculated as the percentage of definitive evaluations (normal vs dysplastic) that were unchanged from preliminary evaluations. Results: Reliability of a preliminary evaluation of normal hip joint phenotype decreased significantly as the preliminary evaluation changed from excellent (100%) to good (97.9%) to fair (76.9%) phenotype. Reliability of a preliminary evaluation of CHD increased significantly as the preliminary evaluation changed from mild (84.4%) to moderate (97.4%) CHD. Reliability of preliminary evaluations increased significantly as age at the time of preliminary evaluation increased, regardless of whether dogs received a preliminary evaluation of normal phenotype or CHD. Clinical Implications: Results suggest that preliminary evaluations of hip joint status in dogs are generally reliable. However, dogs that receive a preliminary evaluation of fair phenotype or mild CHD should be reevaluated after 24 months of age. (J Am Vet Med Assoc 1997;211:1142–1146)
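The reliability figure this abstract reports is a percentage-agreement calculation: the share of (preliminary, definitive) rating pairs whose dichotomized normal-vs-dysplastic outcome is unchanged. A minimal sketch, with invented example data; grouping the "borderline" grade with the normal phenotypes is an assumption, not stated in the abstract:

```python
# Hedged sketch of the percentage-agreement reliability described in the
# abstract; the grade grouping and the example data are assumptions.
NORMAL = {"excellent", "good", "fair", "borderline"}  # "borderline" grouping assumed

def dichotomize(grade: str) -> str:
    """Collapse the seven hip grades into a normal-vs-dysplastic outcome."""
    return "normal" if grade in NORMAL else "dysplastic"

def reliability(pairs) -> float:
    """Percentage of (preliminary, definitive) pairs whose dichotomized
    outcome is unchanged from preliminary to definitive evaluation."""
    agree = sum(dichotomize(p) == dichotomize(d) for p, d in pairs)
    return 100.0 * agree / len(pairs)

# Invented example: three dogs rated "good" early; two stayed in the normal
# range, one was definitively graded with mild dysplasia.
pairs = [("good", "good"), ("good", "fair"), ("good", "mild")]
print(round(reliability(pairs), 1))  # 2 of 3 unchanged -> 66.7
```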
27

Bourgeois, Isabelle, e Clémence Naré. "The "Usability" of Evaluation Reports: A Precursor to Evaluation Use in Government Organizations". Journal of MultiDisciplinary Evaluation 11, n.º 25 (25 de setembro de 2015): 60–67. http://dx.doi.org/10.56645/jmde.v11i25.433.

Abstract:
Background: According to the Treasury Board of Canada’s Policy on Evaluation (2009), evaluations produced by federal government departments must contribute to decision-making at an organizational level (mainly summative) as well as a program level (mainly formative). Previous research shows that although the formative objectives of evaluation are generally reached, the use of evaluation for broader, budgetary management is limited. However, little research has been conducted thus far on this issue. Purpose: This study investigates the extent to which program evaluation is used in the Canadian federal government for budgetary management purposes. Setting: This paper outlines the results obtained following the first component of a two-pronged research strategy focusing on evaluation use in Canadian federal government organizations. Intervention: N/A Research Design: Two federal agencies were recruited to participate in organizational case studies aiming to identify the factors that facilitate the use of evaluation for budgetary reallocation exercises. Data Collection and Analysis: This report presents the findings from a detailed analysis of evaluation reports published by both agencies between 2010-2013. The data were collected from public evaluation reports and analyzed using NVivo. Findings: The preliminary findings of the study show that instrumental use has occurred or can be expected to occur, based on the types of recommendations outlined in the reports reviewed and on the responses to the evaluations produced by program managers. Keywords: evaluation use; organizational evaluation capacity; instrumental use; evaluation reports; Canadian federal government; document review; organizational decision-making.
28

Chen, Guanyu, Jacky Bowring e Shannon Davis. "How Is “Success” Defined and Evaluated in Landscape Architecture—A Collective Case Study of Landscape Architecture Performance Evaluation Approaches in New Zealand". Sustainability 15, n.º 20 (23 de outubro de 2023): 15162. http://dx.doi.org/10.3390/su152015162.

Abstract:
This study examines landscape performance evaluation practices in New Zealand by analysing a representative set of evaluation cases using a “sequential” case study approach. The aim is to map the methodological terrain and understand how “success” is defined and assessed in these evaluations. This study identifies different evaluation models, including goal, satisfaction, and norm models, and explores the evaluation methods employed. This study also reveals a correlation between funding sources and evaluation outcomes, with stakeholder-funded evaluations more likely to yield positive results. These findings highlight the need for comprehensive evaluations that adopt appropriate and sufficient models and the importance of interdisciplinary collaboration for robust evaluation practices.
29

Alindogan, Mark Anthony. "Evaluation competencies and functions in advertised evaluation roles in Australia". Evaluation Journal of Australasia 19, n.º 2 (junho de 2019): 88–100. http://dx.doi.org/10.1177/1035719x19857197.

Abstract:
This study explores the functions of professional evaluators outlined in online job advertisements. A total of 97 job advertisements were reviewed in the study. A content analysis using a Coding Analysis Toolkit developed by Shulman was conducted to identify six main evaluation functions based on the collected data. These functions are (1) evaluation and reporting, (2) providing evaluation advice, (3) evaluation capacity building, (4) communication and engagement, (5) forming partnerships and (6) leading, managing and influencing. These functions were then compared to the Australian Evaluation Society’s (AES) Core Competency Domains. Overall, there is a broad alignment between these functions and the AES Core Competency Domains. However, the analysis shows that the delivery of culturally competent evaluations and evaluation utilisation received no mention in advertised evaluation roles. The delivery of culturally competent evaluation is essential from the perspective of ethics, validity and theory, while the utilisation of evaluation findings is important for the benefit of society.
30

Nainunis, Mas Akhmad, e Imam Yuadi. "Training Evaluation Analysis Using Text Mining". Indonesian Journal of Artificial Intelligence and Data Mining 7, n.º 1 (18 de janeiro de 2024): 71. http://dx.doi.org/10.24014/ijaidm.v7i1.27607.

Abstract:
Training evaluation assesses the results of training that has been carried out. This evaluation covers technical and non-technical factors that are very important for the company to consider when implementing future training. Many companies treat training evaluation as a formality and rely only on closed questionnaires that limit responses to fixed choices; training evaluations using open questionnaires give respondents the freedom to provide positive or negative input that the company can act on. This research aims to find the words or topics that appear most frequently in open comments on training evaluation results, using the FP-Growth algorithm and association rules to discover the relationships between topics or words in the training evaluation results. These are applied to 516 open-ended comments submitted via the post-training questionnaire. The results showed that 15 association rules were created in RapidMiner using the FP-Growth algorithm with a minimum support of 0.02 and a minimum confidence of 0.5. All rules have a lift value > 1, which indicates that all rules are valid and have a strong association relationship. This research can identify the pattern of comments or suggestions given by workers regarding training evaluation.
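The rule-mining step this abstract describes can be illustrated with a minimal, self-contained sketch. It computes the support, confidence, and lift metrics directly with the thresholds quoted in the abstract (support >= 0.02, confidence >= 0.5, lift > 1); the FP-tree construction itself is omitted, and the comment keywords below are invented rather than taken from the paper's data:

```python
from itertools import combinations

# Each open-ended comment reduced to a set of keywords (invented example data).
comments = [
    {"trainer", "material", "good"},
    {"trainer", "time", "short"},
    {"material", "good"},
    {"trainer", "material", "time"},
]
n = len(comments)

def support(itemset):
    """Fraction of comments containing every item in the itemset."""
    return sum(itemset <= c for c in comments) / n

# Enumerate single-item rules A -> B, applying the abstract's thresholds:
# minimum support 0.02, minimum confidence 0.5, and lift > 1.
rules = []
for a, b in combinations(sorted(set().union(*comments)), 2):
    for ante, cons in ((a, b), (b, a)):
        s = support({ante, cons})
        if s < 0.02:
            continue
        confidence = s / support({ante})
        if confidence < 0.5:
            continue
        lift = confidence / support({cons})
        if lift <= 1:  # lift > 1 marks a "strong" association, as in the abstract
            continue
        rules.append((ante, cons, round(s, 2), round(confidence, 2), round(lift, 2)))

for rule in rules:
    print(rule)
```

In the paper itself RapidMiner's FP-Growth operator performs the frequent-itemset step efficiently; the metrics and thresholds are the same as in this brute-force sketch.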
31

Sinabell, Irina, e Elske Ammenwerth. "Agile, Easily Applicable, and Useful eHealth Usability Evaluations: Systematic Review and Expert-Validation". Applied Clinical Informatics 13, n.º 01 (janeiro de 2022): 67–79. http://dx.doi.org/10.1055/s-0041-1740919.

Abstract:
Background: Electronic health (eHealth) usability evaluations of rapidly developed eHealth systems are difficult to accomplish because traditional usability evaluation methods require substantial time in preparation and implementation. This illustrates the growing need for fast, flexible, and cost-effective methods to evaluate the usability of eHealth systems. To address this demand, the present study systematically identified and expert-validated rapidly deployable eHealth usability evaluation methods. Objective: Identification and prioritization of eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations. Methods: The study design comprised a systematic iterative approach in which expert knowledge was contrasted with findings from the literature. Forty-three eHealth usability evaluation methods were systematically identified and assessed regarding their ease of applicability and usefulness through semi-structured interviews with 10 European usability experts and systematic literature research. The most appropriate eHealth usability evaluation methods were selected stepwise based on the experts' judgements of their ease of applicability and usefulness. Results: Of the 43 eHealth usability evaluation methods identified as suitable for agile, easily applicable, and useful eHealth usability evaluations, 10 were recommended by the experts based on their usefulness for rapid eHealth usability evaluations. The three most frequently recommended methods were Remote User Testing, Expert Review, and the Rapid Iterative Test and Evaluation Method. Eleven usability evaluation methods, such as Retrospective Testing, were not recommended for use in rapid eHealth usability evaluations. Conclusion: We conducted a systematic review and expert validation to identify rapidly deployable eHealth usability evaluation methods. The comprehensive and evidence-based prioritization of these methods supports faster usability evaluations and so contributes to the ease of use of emerging eHealth systems.
32

Alkin, Marvin C., Christina A. Christie e Naomi A. Stephen. "Choosing an Evaluation Theory: A Supplement to Evaluation Roots (3rd Edition)". Journal of MultiDisciplinary Evaluation 17, n.º 41 (6 de agosto de 2021): 51–60. http://dx.doi.org/10.56645/jmde.v17i41.709.

Abstract:
Background: Unlike scientific theories, evaluation theories are prescriptive: a set of actions and approaches that should be followed when conducting an evaluation. While evaluation theorists have offered a variety of writings describing their theories and approaches, few have offered a specific outline of what the theory looks like in practice. Thus, Alkin and Christie formulated a book to aid evaluators in applying theories in evaluations (Alkin & Christie, forthcoming). This book culminates in a series of prototypes that outline each theory's goals, appropriate contexts, prescriptions, and observable actions in application. Purpose: In order to aid evaluators in applying theories, this article seeks to provide a basis for comparison that can be used to help evaluators select which theory would be most appropriate in their practice. Setting: This comparison can be applied in any setting where evaluations fit the context prescribed by each of the theories. Intervention: Not applicable. Research Design: Not applicable. Data Collection and Analysis: Not applicable. Findings: In order for theories to influence practice effectively, theories must be displayed in a way that allows for easy comparison. This comparison of three theory prototypes demonstrates that prototypes can be an effective way of selecting a prescriptive theory when conducting an evaluation. Keywords: prescriptive theories; practice; empowerment evaluation; learning centered model; developmental evaluation
33

Zidane, Youcef J.-T., Bjørn Otto Elvenes, Knut F. Samset e Bassam A. Hussein. "System Thinking, Evaluations and Learning – Experience from Road Construction Megaproject in Algeria". Mediterranean Journal of Social Sciences 9, n.º 3 (1 de maio de 2018): 121–34. http://dx.doi.org/10.2478/mjss-2018-0054.

Abstract:
Ex-post evaluation is starting to be recognized in Algeria's various government institutions (e.g., ministries), and evaluation is becoming part of any program or project for two main reasons: to justify the legitimacy of programs and projects, and to collect lessons learned for the next similar programs and projects. Academicians, for their part, believe that programs and projects can be improved by conducting proper evaluations and extracting lessons learned. Program/project evaluation is comprehensive evaluation, which mainly applies to ex-post evaluation. This paper looks closely at an ex-post evaluation of an Algerian highway megaproject based on the PESTOL model; this evaluation was conducted in the period 2014–2016. Ex-post evaluation of projects serves many purposes, among them learning and the sharing and transfer of knowledge. In this regard, the paper briefly describes the approach used for the post-project evaluation, links it to learning and to other types of evaluations (i.e., ex-ante, monitoring, midterm, and terminal evaluations) using a system-thinking approach, and proposes a framework for learning in projects through evaluations. The paper is based on a qualitative case study approach.
34

singh, Barath Kumar, e Ravi Kumar Chittoriya. "Hair Evaluation Methods". Indian Journal of Medical and Health Sciences 10, n.º 1 (15 de junho de 2023): 31–38. http://dx.doi.org/10.21088/ijmhs.2347.9981.10123.5.

Abstract:
The three main hair assessment methods in alopecia are non-invasive (questionnaire, daily hair counts, standardized wash test, 60-s hair count, global pictures, dermoscopy, hair weight, contrasting felt examination, phototrichogram, TrichoScan), semi-invasive (trichogram and unit area trichogram), and invasive procedures (e.g., scalp biopsy). No single method is ideal or practical in all settings. These methods are useful for patient diagnosis and monitoring when interpreted carefully. Daily hair counts, wash tests, and similar techniques are good ways to evaluate a patient's shedding. Hair clinics use procedures such as global photography. The phototrichogram is used exclusively in clinical trials. Some procedures (such as scalp biopsy) require expertise in processing and interpretation. In this review article, we discuss the various hair evaluation methods.
35

Corbeil, Ronald C. "Improving Federal Evaluation Planning". Canadian Journal of Program Evaluation 4, n.º 2 (setembro de 1989): 23–38. http://dx.doi.org/10.3138/cjpe.4.003.

Abstract:
In 1977 the federal government formally introduced a requirement that its departments and agencies evaluate their programs on a comprehensive and cyclical basis. Since then, over 600 evaluations of varying quality and significance have been completed. This article focuses on the evaluation assessment, the principal planning instrument that is to be prepared immediately before every federal evaluation, in order to show its relative success and to identify ways of improving its performance. The article supplements the technological strength of the Office of the Comptroller General of Canada by providing practical, valid, common-sense guidance on how to more successfully plan useful evaluations.
36

Conzelmann, Julie D. "Exploring Updates to Performance Evaluation Terminology". Business Ethics and Leadership 5, n.º 4 (2021): 6–16. http://dx.doi.org/10.21272/bel.5(4).6-16.2021.

Abstract:
The goal of this research was to obtain feedback and perspectives from human resource experts regarding the applicability of a newly created performance evaluation document. The reviewed literature includes sources indicating that the documentation for employee performance evaluations has not been revised in decades. No recent literature was found regarding updating performance evaluations. Through an exploratory case study, human resource experts helped discern the need to update performance evaluation documents, including the 11 most recognized organizational citizenship behaviors. Purposive and snowball participant selection yielded five qualifying human resources subject matter experts representing healthcare, business, retail, manufacturing, and education from various cities in the United States. Findings revealed the need for organizations to update performance evaluations from the current antiquated and generic documents that only measure basic job-task performance. The outcome was agreement that human resource leaders should update their performance evaluation document to Exhibit B. Results empirically confirmed that human resource leaders would support an updated performance evaluation document, substantiating my argument that the newly created performance evaluation document would benefit everyone by fully recognizing and measuring the value of all employee contributions in the workplace.
37

Ball, Liezl H., e Theo J. D. Bothma. "Heuristic evaluation of e-dictionaries". Library Hi Tech 36, n.º 2 (18 de junho de 2018): 319–38. http://dx.doi.org/10.1108/lht-07-2017-0144.

Abstract:
Purpose: The purpose of this paper is to discuss the heuristic evaluations of five e-dictionaries according to the criteria developed by Ball and Bothma (2018). E-dictionaries are increasingly making use of modern information technology to create advanced information tools. It is necessary to ensure that these new products are still usable. Heuristic evaluation is a usability evaluation method used to evaluate the usability of a product. Design/methodology/approach: Five e-dictionaries were evaluated through heuristic evaluation. This method requires an evaluator to evaluate a product by using a set of criteria or guidelines. Findings: Various usability issues, as well as good features of e-dictionaries, could be identified through these evaluations, and are discussed under the categories of content, information architecture, navigation, access (searching and browsing), help, customisation and use of other innovative technologies. Originality/value: Through the evaluations in this study, the criteria could be validated and an example of how the criteria can be used to evaluate e-dictionaries could be presented.
38

Beer, Jennifer S., Michael V. Lombardo e Jamil Palacios Bhanji. "Roles of Medial Prefrontal Cortex and Orbitofrontal Cortex in Self-evaluation". Journal of Cognitive Neuroscience 22, n.º 9 (setembro de 2010): 2108–19. http://dx.doi.org/10.1162/jocn.2009.21359.

Abstract:
Empirical investigations of the relation of frontal lobe function to self-evaluation have mostly examined the evaluation of abstract qualities in relation to self versus other people. The present research furthers our understanding of frontal lobe involvement in self-evaluation by examining two processes that have not been widely studied by neuroscientists: on-line self-evaluations and correction of systematic judgment errors that influence self-evaluation. Although people evaluate their abstract qualities, it is equally important that they perform on-line evaluations to assess the success of their behavior in a particular situation. In addition, self-evaluations of task performance are sometimes overconfident because of systematic judgment errors. What role do the neural regions associated with abstract self-evaluations and decision bias play in on-line evaluation and self-evaluation bias? In this fMRI study, self-evaluation in two reasoning tasks was examined; one elicited overconfident self-evaluations of performance because of salient but misleading aspects of the task, and the other was free from misleading aspects. Medial PFC (mPFC), a region associated with self-referential processing, was generally involved in on-line self-evaluations but not specific to accurate or overconfident evaluation. Activity in orbitofrontal cortex (OFC), a region associated with accurate nonsocial judgment, negatively predicted individual differences in overconfidence and was negatively associated with confidence level for incorrect trials.
39

Derrington, Mary Lynne, e James Anthony Martinez. "Exploring Teachers’ Evaluation Perceptions: A Snapshot". NASSP Bulletin 103, n.º 1 (março de 2019): 32–50. http://dx.doi.org/10.1177/0192636519830770.

Abstract:
Teacher perceptions after 5 years of implementing evaluation protocols initiated under Race to the Top revealed attitudes about the evaluation instrument used and the nature of teachers' relationships with their evaluators. This study surveyed middle and high school teachers in nine Eastern Tennessee school districts. Data indicated unintended consequences of the evaluations, including impacts on relationships with principals as well as concerns about the principal time needed for evaluations. Findings imply that the reformed evaluation system is not effectively providing learning opportunities for secondary teachers who had previously been judged competent.
40

Higa, Terry Ann F., e Paul R. Brandon. "Participatory Evaluation as Seen in a Vygotskian Framework". Canadian Journal of Program Evaluation 23, n.º 3 (janeiro de 2009): 103–25. http://dx.doi.org/10.3138/cjpe.0023.006.

Abstract:
Abstract: In participatory evaluations of K–12 programs, evaluators develop school faculty’s and administrators’ evaluation capacity by training them to conduct evaluation tasks and providing consultation while the tasks are conducted. A strong case can be made that the capacity building in these evaluations can be examined using a Vygotskian approach. We conducted participatory evaluations at 9 Hawaii public schools and collected data on the extent to which various factors affected participating school personnel’s learning about program evaluation. After the evaluations were completed, a trained interviewer conducted standardized interviews eliciting the participating school personnel’s opinions about the methods and effects of the capacity building. Two reviewers used codes representing Vygotskian concepts to categorize the interview results. We present the results of the coding and provide conclusions about the value of using a Vygotskian framework to examine capacity building in participatory evaluations.
41

McGill, Megann, Jordan Siegel e Natasha Noureal. "A Preliminary Comparison of In-Person and Telepractice Evaluations of Stuttering". American Journal of Speech-Language Pathology 30, n.º 4 (14 de julho de 2021): 1737–49. http://dx.doi.org/10.1044/2021_ajslp-19-00215.

Abstract:
Purpose: The purpose of this study was to compare in-person and telepractice evaluations of stuttering with adult participants. The research questions were as follows: Is an evaluation for stuttering via telepractice equivalent to an in-person evaluation in terms of (a) duration of individual evaluation tasks and overall length of the evaluation, (b) clinical outcomes across evaluating clinicians, and (c) participant experience? Method: Participants were 14 adults who stutter (males = 11; age range: 20–68) who were simultaneously assessed via telepractice and in-person. Comprehensive evaluations included analysis of the speaker's stuttering, evaluation of the speaker's perceptions and attitudes about stuttering, and language testing. Evaluations were administered by either an in-person clinician or a telepractice clinician but were simultaneously scored by both clinicians. Participants were randomly assigned to the in-person-led assessment condition or the telepractice-led assessment condition. Results: No statistically significant differences were found between the in-person-led and telepractice-led evaluations in terms of overall evaluation task duration, evaluation clinical outcomes, or participants' reported experiences. That is, telepractice evaluations for stuttering in adults may be an equivalent option to in-person evaluations. Conclusions: Results of this preliminary study indicate that telepractice evaluations of stuttering may be comparable to in-person evaluations in terms of duration, clinical outcomes, and participant experiences. The current study supports the notion that telepractice evaluations may be a viable option for adult clients who stutter. Clinical considerations and future directions for research are discussed.
42

Askim, Jostein, Erik Døving e Åge Johnsen. "Evaluation in Norway: A 25-Year Assessment". Scandinavian Journal of Public Administration 25, n.º 3-4 (1 de dezembro de 2021): 109–31. http://dx.doi.org/10.58235/sjpa.v25i3-4.7087.

Abstract:
This article analyses the Norwegian government's evaluation practice over the 25-year period from 1994 to 2018. Evaluations are mandatory for government ministries and agencies in Norway, with the government conducting some 100 evaluations annually. This article utilises data from a unique database to describe the development of the evaluation industry, focusing on the volume of evaluations, the most active commissioners and providers of evaluations, and the types of evaluations conducted. First, the analysis indicates that the volume of evaluations peaked around 2010 and has subsequently decreased. As a possible consequence, information relevant to policy may be less publicly available than before. Second, ministries have commissioned relatively fewer evaluations in the last decade than in the years before, and executive agencies have commissioned relatively more. Third, the proportion of evaluations performed by consultants has risen, while that of research institutes has fallen.
43

Scott, Richard E. "'Pragmatic Evaluation'". International Journal of E-Health and Medical Communications 1, n.º 2 (abril de 2010): 1–11. http://dx.doi.org/10.4018/ijehmc.2010040101.

Abstract:
E-Health continues to be implemented despite continued demonstration that it lacks value. Specific guidance regarding research approaches and methodologies would be beneficial, given the value of identifying and adopting a single model or framework for any one 'entity' (healthcare organisation, sub-national region, country, etc.) so that the evidence base accumulates more rapidly and interventions can be compared more meaningfully. This paper describes a simple and systematic approach to e-health evaluation in a real-world setting, which can be applied by an evaluation team and raises the quality of e-health evaluations. The framework guides and advises users on evaluation approaches at different stages of e-health development and implementation. Termed 'Pragmatic Evaluation', the approach has five principles that unfold in a staged approach respecting the collective need for timely, policy-relevant, yet meticulous research.
44

Lee, Eunsuk, e Yu Ri Kim. "Evaluation Use in International Development Cooperation and Stakeholder Roles: A Theory of Change Approach". Korea Association of International Development and Cooperation 16, n.º 2 (30 de junho de 2024): 1–23. http://dx.doi.org/10.32580/idcr.2024.16.2.1.

Abstract:
Purpose: This study aims to provide a theoretical foundation and practical examples to enhance evaluation use among various stakeholders by exploring the mechanisms of evaluation use in international development cooperation through the Theory of Change (ToC). Originality: By developing a ToC specifically for development cooperation evaluation, this research advances the discourse on evaluation use. It establishes a foundational basis for theoretical discussions on diverse evaluation applications and identifies the roles of various stakeholders. Methodology: The study begins with a comprehensive literature review to define the concept and various types of evaluation use. It then conducts a comparative analysis of evaluation logic models from previous studies to extract common elements and create a tailored ToC for evaluations in international development cooperation. Finally, stakeholder analysis is used to categorize evaluation stakeholders. We apply the developed ToC to identify the roles of each stakeholder in different evaluation contexts. Result: By applying an extended definition of evaluation use, this study developed a ToC for international development cooperation evaluation, identifying essential elements for change and critical assumptions throughout the evaluation process. Through the analysis of evaluation stakeholders categorized into key, primary, and secondary groups based on their influence within the ToC framework, the study reveals that stakeholders at all levels can utilize evaluations for diverse purposes across the entire evaluation process. Conclusion and Implication: This study systematically identified the diverse roles of stakeholders and the mechanisms of evaluation use in the field of development cooperation, integrating ToC and stakeholder analysis. It provides a comprehensive understanding of who uses evaluations, for what purposes, when, and how. Ultimately, this research is expected to enhance evaluation practices and use, thereby contributing to the achievement of the intended objectives of evaluations.
45

Dhakal, Teertha Raj. "Institutionalization and Use of Evaluations in the Public Sector in Nepal". Journal of MultiDisciplinary Evaluation 10, n.º 23 (16 de julho de 2014): 51–58. http://dx.doi.org/10.56645/jmde.v10i23.403.

Abstract:
This paper reviews the institutionalization process and the use of evaluation evidence in planning processes in Nepal. It reviews evaluation reports of 29 projects or programmes in various sectors, conducted by independent evaluators during 1995–2012. It concludes that reforms to effectively institutionalize and promote the use of evaluations need to be implemented as part of overall performance management and accountability reform, and that the use of evaluations increases if the evaluation system is designed as a consistent and integrated feature of the development planning process, aiming to correct the entire planning cycle. Promoting an evaluation culture at various levels, securing higher-level policy commitment, and addressing capacity gaps in managing evaluations are key to promoting the utilization of evaluations. Key words: evaluation, use, performance management, Nepal
46

Buehrer, Susanne, Evanthia Kalpazidou Schmidt, Dorottya Rigler e Rachel Palmen. "How to Implement Context-Sensitive Evaluation Approaches in Countries with still Emerging Evaluation Cultures". Public Policy and Administration 20, n.º 3 (28 de setembro de 2021): 368–81. http://dx.doi.org/10.5755/j01.ppaa.20.3.28371.

Abstract:
Evaluation cultures and evaluation capacity building vary greatly across the European Union. Western European countries such as Austria, Germany, Denmark, and Sweden are regarded as leaders in evaluation, having built up well-established evaluation cultures and carrying out systematic evaluations of programmes and institutions. In contrast, in Central and Eastern European (CEE) countries, efforts continue to establish evaluation practices and further develop the current evaluation culture. In Hungary, for example, there is no established research and innovation evaluation practice, let alone one that specifically considers gender equality in research and innovation evaluations, except for research and innovation programmes financed by the EU Structural Funds. Based on the results of a Horizon 2020 project, we apply a context-sensitive evaluation concept in Hungary that enables programme owners and evaluators to develop a tailor-made design and impact model for their planned or ongoing gender equality interventions. The evaluation approach was developed from a thorough analysis of the literature and 19 case studies, building on documentary analysis and semi-structured interviews. The article shows that this evaluation approach is also applicable in countries whose overall evaluation culture still has some catching up to do. The special feature of the approach is, on the one hand, that the evaluation is context-sensitive; on the other hand, it makes it possible not only to capture effects on gender equality itself but also to anticipate effects on research and innovation. Such effects can include, for example, a stronger orientation of research towards societal needs, which makes the approach particularly interesting for private companies.
47

Lawrenz, Frances, and Douglas Huffman. "Using Multi-Site Core Evaluation to Provide “Scientific” Evidence". Canadian Journal of Program Evaluation 19, no. 2 (September 2004): 17–36. http://dx.doi.org/10.3138/cjpe.19.002.

Abstract:
Funders of educational and other social service programs are requiring more experimental and performance-oriented designs in evaluations of program effectiveness. Concomitantly, funders are exhibiting less interest in evaluations that serve other purposes, such as implementation fidelity. However, in order to fully understand the effectiveness of most complex social and educational programs, an evaluation must provide diverse information. This article uses the Core Evaluation of the Collaboratives for Excellence in Teacher Preparation Program as a case example of how evaluations might meet the requirements of objective scientific evaluation while at the same time valuing and incorporating other evaluation purposes. The successes and limitations of the case example in achieving this blending are discussed.
48

Rowe, Andy. "Rapid impact evaluation". Evaluation 25, no. 4 (September 19, 2019): 496–513. http://dx.doi.org/10.1177/1356389019870213.

Abstract:
Rapid Impact Evaluation offers the potential to evaluate impacts in both ex ante and ex post settings, providing utility for developmental and formative evaluation as well as the usual summative settings. Rapid Impact Evaluation triangulates the judgments of three separate groups of experts to assess the incremental change in effects attributable to the program. Three methodological innovations are central to the method: the scenario-based counterfactual, a simplified approach to measuring change in effects, and an interest-based approach to stakeholder engagement. In evaluations to date, Rapid Impact Evaluation has proved to be a cost-effective and nimble approach to assessing impacts that does not intrude on the design or implementation of the program. By applying recent thinking on use-seeking research, which emphasizes joint knowledge processes over knowledge products, Rapid Impact Evaluation promotes salience, legitimacy, and credibility with decision makers and key stakeholders. Applications show Rapid Impact Evaluation to be fit for purpose.
49

Cullen, Anne E., and Chris L. S. Coryn. "Forms and Functions of Participatory Evaluation in International Development: A Review of the Empirical and Theoretical Literature". Journal of MultiDisciplinary Evaluation 7, no. 16 (May 29, 2011): 32–47. http://dx.doi.org/10.56645/jmde.v7i16.288.

Abstract:
Background: Since the late 1970s, participatory approaches have been widely promoted for evaluating international development programs. However, there is no universal agreement on what is meant by participatory evaluation. For some evaluators, participatory evaluations involve the extensive participation of all stakeholder groups (from donors to non-recipients) in every phase of the evaluation (from design to dissemination); for others, the participation of donors in the design alone constitutes a participatory evaluation approach. Participatory evaluation approaches are best considered on a continuum: there are many gradations of participation, and evaluations should be classified accordingly. Purpose: The lack of a shared meaning of participatory evaluation also impedes serious discussion of its use, including its merits and demerits, suggestions for its improvement, and its overall effectiveness. The purpose of this article is to examine the literature on participatory evaluation approaches in order to highlight commonalities and differences. Setting: Not applicable. Intervention: Not applicable. Research Design: Not applicable. Data Collection and Analysis: Desk review. Findings: This article demonstrates how broadly participatory evaluation is conceptualized and practiced, and underscores the clear need for specification and precision when discussing what is meant by participatory evaluation. Recommendations for how evaluators should describe participatory evaluations are provided. Keywords: participatory evaluation; collaborative evaluation; empowerment evaluation; stakeholder-based evaluation
50

Mizerek, Henryk. "Ewaluacje konstruktywistyczne. Implikacje dla wczesnej edukacji" [Constructivist evaluations: Implications for early education]. Problemy Wczesnej Edukacji 51, no. 4 (December 31, 2020): 75–86. http://dx.doi.org/10.26881/pwe.2020.51.06.

Abstract:
The aim of this paper is to analyze the possibilities of using evaluation models developed within the constructivist paradigm in early education. The author's field of interest includes responsive evaluation, dialogic evaluation, deliberative democratic evaluation, participatory evaluation, empowerment evaluation, and stakeholder-based evaluation. Responsive and dialogic evaluation are found to be particularly useful in early education. The in-depth analyses aim to present the specificity of responsive and dialogic evaluations compared with other constructivist models, and their distinctiveness from neoliberal evaluations and those subordinated to the gold-standard ideology. The second part of the paper is devoted to analyzing the problems of designing, planning, and conducting evaluations of programs implemented in early education.