A selection of scholarly literature on the topic "Peer review of research grant proposals"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Peer review of research grant proposals".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are included in the metadata.

Journal articles on the topic "Peer review of research grant proposals"

1

Lindquist, R. D., M. F. Tracy, and D. Treat-Jacobson. "Peer review of nursing research proposals." American Journal of Critical Care 4, no. 1 (January 1, 1995): 59–65. http://dx.doi.org/10.4037/ajcc1995.4.1.59.

Annotation:
The grant review process that operationalizes peer review for the critique, scoring, approval, and selection of research grants for funding may intimidate a novice reviewer. This article describes the peer review panel and process of grant review, specifies the role and responsibilities of the reviewer in the review session, and presents considerations for the evaluation of proposals and the preparation of a written critique. A sample critique is provided.
2

Marchant, Mary A. "The Keys to Preparing Successful Research Grant Proposals." Journal of Agricultural and Applied Economics 33, no. 3 (December 2001): 605–12. http://dx.doi.org/10.1017/s1074070800021040.

Annotation:
This article seeks to demystify the competitive grant recommendation process of scientific peer review panels. The National Research Initiative Competitive Grants Program (NRICGP) administered by the U.S. Department of Agriculture-Cooperative State Research, Extension, and Education Service (USDA-CSREES) serves as the focus of this article. This article provides a brief background on the NRICGP and discusses the application process, the scientific peer review process, guidelines for grant writing, and ways to interpret reviewer comments if a proposal is not funded. The essentials of good grant writing discussed in this article are transferable to other USDA competitive grant programs.
3

Conix, Stijn, Andreas De Block, and Krist Vaesen. "Grant writing and grant peer review as questionable research practices." F1000Research 10 (November 8, 2021): 1126. http://dx.doi.org/10.12688/f1000research.73893.1.

Annotation:
A large part of governmental research funding is currently distributed through the peer review of project proposals. In this paper, we argue that such funding systems incentivize and even force researchers to violate five moral values, each of which is central to commonly used scientific codes of conduct. Our argument complements existing epistemic arguments against peer-review project funding systems and, accordingly, strengthens the mounting calls for reform of these systems.
4

Conix, Stijn, Andreas De Block, and Krist Vaesen. "Grant writing and grant peer review as questionable research practices." F1000Research 10 (December 24, 2021): 1126. http://dx.doi.org/10.12688/f1000research.73893.2.

Annotation:
A large part of governmental research funding is currently distributed through the peer review of project proposals. In this paper, we argue that such funding systems incentivize and even force researchers to violate five moral values, each of which is central to commonly used scientific codes of conduct. Our argument complements existing epistemic arguments against peer-review project funding systems and, accordingly, strengthens the mounting calls for reform of these systems.
5

Botham, Crystal M., Shay Brawn, Latishya Steele, Cisco B. Barrón, Sofie R. Kleppner, and Daniel Herschlag. "Biosciences Proposal Bootcamp: Structured peer and faculty feedback improves trainees' proposals and grantsmanship self-efficacy." PLOS ONE 15, no. 12 (December 28, 2020): e0243973. http://dx.doi.org/10.1371/journal.pone.0243973.

Annotation:
Grant writing is an essential skill to develop for academic and other career success, but providing individual feedback to large numbers of trainees is challenging. In 2014, we launched the Stanford Biosciences Grant Writing Academy to support graduate students and postdocs in writing research proposals. Its core program is a multi-week Proposal Bootcamp designed to increase the feedback writers receive as they develop and refine their proposals. The Proposal Bootcamp consisted of two-hour weekly meetings that included mini lectures and peer review. Bootcamp participants also attended faculty review workshops to obtain faculty feedback. Postdoctoral trainees were trained and hired as course teaching assistants and facilitated weekly meetings and review workshops. Over the last six years, the annual Bootcamp has provided 525 doctoral students and postdocs with multi-level feedback (peer and faculty). Proposals from Bootcamp participants were almost twice as likely to be funded as proposals from non-Bootcamp trainees. Overall, this structured program provided opportunities for feedback from multiple peer and faculty reviewers and increased the participants' confidence in developing and submitting research proposals, while accommodating a large number of participants.
6

Gallo, Stephen A., and Karen B. Schmaling. "Peer review: Risk and risk tolerance." PLOS ONE 17, no. 8 (August 26, 2022): e0273813. http://dx.doi.org/10.1371/journal.pone.0273813.

Annotation:
Peer review, commonly used in grant funding decisions, relies on scientists' ability to evaluate research proposals' quality. Such judgments are sometimes beyond reviewers' discriminatory power and could lead to a reliance on subjective biases, including preferences for lower-risk, incremental projects. However, peer reviewers' risk tolerance has not been well studied. We conducted a cross-sectional experiment of peer reviewers' evaluations of mock primary reviewers' comments in which the level and sources of risks and weaknesses were manipulated. Here we show that proposal risks more strongly predicted reviewers' scores than proposal strengths based on mock proposal evaluations. Risk tolerance was not predictive of scores, but reviewer scoring leniency was predictive of overall and criteria scores. The evaluation of risks dominates reviewers' evaluation of research proposals and is a source of inter-reviewer variability. These results suggest that reviewer scoring variability may be attributed to the interpretation of proposal risks and could benefit from intervention to improve the reliability of reviews. Additionally, the valuation of risk drives proposal evaluations and may reduce the chances that risky but highly impactful science is supported.
7

Mutz, Rüdiger, Lutz Bornmann, and Hans-Dieter Daniel. "Does Gender Matter in Grant Peer Review?" Zeitschrift für Psychologie 220, no. 2 (January 2012): 121–29. http://dx.doi.org/10.1027/2151-2604/a000103.

Annotation:
One of the most frequently voiced criticisms of the peer review process is gender bias. In this study we evaluated the grant peer review process (external reviewers' ratings, and the board of trustees' final decision: approval or no approval for funding) at the Austrian Science Fund with respect to gender. The data consisted of 8,496 research proposals (census) across all disciplines from 1999 to 2009, which were rated on a scale from 1 to 100 (poor to excellent) by 18,357 external reviewers in 23,977 reviews. In line with the current state of research, we found that the final decision was not associated with the applicant's gender or with any correspondence between the gender of applicants and reviewers. However, the decisions on the grant applications showed a robust female reviewer salience effect: the approval probability decreases (by up to 10%) when there is parity or a majority of women in the group of reviewers. Our results confirm an overall gender null hypothesis for the peer review process of men's and women's grant applications, in contrast to claims that women's grants are systematically downrated.
8

Frampton, Geoff, Jonathan Shepherd, Karen Pickett, and Jeremy Wyatt. "PP021 Peer Review Innovations For Grant Applications: Efficient And Effective?" International Journal of Technology Assessment in Health Care 33, S1 (2017): 78–79. http://dx.doi.org/10.1017/s0266462317002124.

Annotation:
INTRODUCTION: Peer review of grant applications is employed routinely by health research funding bodies to determine which research proposals should be funded. Peer review faces a number of criticisms, however, especially that it is time consuming, financially expensive, and may not select the best proposals. Various modifications to peer review have been examined in research studies but these have not been systematically reviewed to guide Health Technology Assessment (HTA) funding agencies.
METHODS: We developed a systematic map based on a logic model to summarize the characteristics of empirical studies that have investigated peer review of health research grant applications. Consultation with stakeholders from a major health research funder (the United Kingdom National Institute for Health Research, NIHR) helped to identify topic areas within the map of particular interest. Innovations that could improve the efficiency and/or effectiveness of peer review were agreed to be a priority for more detailed analysis. Studies of these innovations were identified using pre-specified eligibility criteria and were subjected to a full systematic review.
RESULTS: The systematic map includes eighty-one studies, most published since 2005, indicating an increasing area of investigation. Studies were mostly observational and retrospective in design, and a large proportion have been conducted in the United States, with many conducted by the National Institutes of Health. An example of an innovation is video training to improve reviewer reliability. Although research councils in the United Kingdom have conducted several relevant studies, these have mainly examined existing practices rather than testing peer review innovations. Full results of the systematic review will be provided in the presentation, and we will assess which innovations could improve the efficiency and/or effectiveness of peer review for selecting health research proposals.
CONCLUSIONS: Despite considerable interest in, and criticism of, peer review for helping to select health research proposals, there have been few detailed systematic examinations of the primary research evidence in this area. Our evidence synthesis provides the most up-to-date overview of evidence in this important developing area, with recommendations for health research funders in their decision making.
9

Holland, Christy K. "How to write a peer-polished proposal in 15 weeks." Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A104–A105. http://dx.doi.org/10.1121/10.0026964.

Annotation:
Creating a meticulously crafted proposal requires a strategic approach and systematic planning. An overview of a semester-long graduate course on how to write successful NIH grant applications will be provided. Particular emphasis is given to developing proposals for the Ruth L. Kirschstein Predoctoral Individual National Research Service Award (https://researchtraining.nih.gov/programs/fellowships/F31) or to disease-based foundations. The writing process involves drafting components in several key phases. The initial four weeks focus on understanding the proposal requirements, identifying the target audience, organizing a brilliant biosketch and remarkable resources and environment pages, and establishing a clear hypothesis and specific aims. An extensive literature review is conducted in the subsequent two weeks to contextualize the proposal, identify a gap in knowledge, and stress the significance and innovation of the proposed work. Weeks 7 and 8 are devoted to the development of a robust research approach and methodology, including data collection, analysis techniques, expected outcomes, potential challenges, and alternative approaches. The review process, refinement and enhancement take center stage for the remaining weeks. Peer review and feedback mechanisms are incorporated to iteratively improve each proposal's coherence, logic, and persuasiveness. This systematic 15-week timeline emphasizes iterative refinement through peer input, ensuring a polished proposal ready for submission.
10

Guthrie, Susan, Daniela Rodriguez Rincon, Gordon McInroy, Becky Ioppolo, and Salil Gunashekar. "Measuring bias, burden and conservatism in research funding processes." F1000Research 8 (June 12, 2019): 851. http://dx.doi.org/10.12688/f1000research.19156.1.

Annotation:
Background: Grant funding allocation is a complex process that in most cases relies on peer review. A recent study identified a number of challenges associated with the use of peer review in the evaluation of grant proposals. Three important issues identified were bias, burden, and conservatism, and the work concluded that further experimentation and measurement is needed to assess the performance of funding processes. Methods: We have conducted a review of international practice in the evaluation and improvement of grant funding processes in relation to bias, burden and conservatism, based on a rapid evidence assessment and interviews with research funding agencies. Results: The evidence gathered suggests that efforts so far to measure these characteristics systematically by funders have been limited. However, there are some examples of measures and approaches which could be developed and more widely applied. Conclusions: The majority of the literature focuses primarily on the application and assessment process, whereas burden, bias and conservatism can emerge as challenges at many wider stages in the development and implementation of a grant funding scheme. In response to this we set out a wider conceptualisation of the ways in which this could emerge across the funding process.

Dissertations on the topic "Peer review of research grant proposals"

1

Jayasinghe, Upali W., University of Western Sydney, College of Arts, Education and Social Sciences, and Self-Concept Enhancement and Learning Facilitation Research Centre. "Peer review in the assessment and funding of research by the Australian Research Council." THESIS_CAESS_SELF_Jayasinghe_U.xml, 2003. http://handle.uws.edu.au:8081/1959.7/572.

Annotation:
In higher education settings the peer review process is highly valued and used for evaluating the academic merits of grant proposals, journal submissions, academic promotions, monographs, textbooks, PhD theses, and a variety of other academic products. The purpose of this thesis was to evaluate the peer review process for awarding research grants used by the Australian Research Council (ARC) Large Grants Program and to propose strategies to address potential shortcomings of the system. This study also evaluated psychometric properties such as the reliabilities of various ratings that are part of the assessment process of the ARC Large Grants Program. Data for all grant applications submitted for the 1996 round of the Large Grants Program were provided by the ARC. In a variation to the typical peer review process, applicants were given an opportunity to nominate assessors to review their proposals. The results indicated that global ratings given by the researcher-nominated assessors were systematically higher and less reliable than those by panel-nominated external reviewers chosen by the ARC. The reliability of peer reviews is not adequate by most standards. A critical direction for future research is considering what strategies need to be put in place to improve the quality of the reviews. To improve the reliability, it is recommended that researcher-nominated reviewers should not be used, that there should be more reviews per proposal, and that a smaller, more highly selected core of reviewers should perform most of the reviews within each sub-discipline, providing greater control over error associated with individual reviewers.
Doctor of Philosophy (PhD)
2

Jayasinghe, Upali W. "Peer review in the assessment and funding of research by the Australian Research Council." Thesis, 2003. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20051102.114303/index.html.

Annotation:
Thesis (Ph.D.), University of Western Sydney, 2003. "A thesis submitted to the University of Western Sydney in fulfilment of the requirements for the degree of Doctor of Philosophy." Bibliography: leaves 350–371.
3

Jayasinghe, Upali W. "Peer review in the assessment and funding of research by the Australian Research Council." Thesis, 2003. http://handle.uws.edu.au:8081/1959.7/572.

Annotation:
In higher education settings the peer review process is highly valued and used for evaluating the academic merits of grant proposals, journal submissions, academic promotions, monographs, textbooks, PhD theses, and a variety of other academic products. The purpose of this thesis was to evaluate the peer review process for awarding research grants used by the Australian Research Council (ARC) Large Grants Program and to propose strategies to address potential shortcomings of the system. This study also evaluated psychometric properties such as the reliabilities of various ratings that are part of the assessment process of the ARC Large Grants Program. Data for all grant applications submitted for the 1996 round of the Large Grants Program were provided by the ARC. In a variation to the typical peer review process, applicants were given an opportunity to nominate assessors to review their proposals. The results indicated that global ratings given by the researcher-nominated assessors were systematically higher and less reliable than those by panel-nominated external reviewers chosen by the ARC. The reliability of peer reviews is not adequate by most standards. A critical direction for future research is considering what strategies need to be put in place to improve the quality of the reviews. To improve the reliability, it is recommended that researcher-nominated reviewers should not be used, that there should be more reviews per proposal, and that a smaller, more highly selected core of reviewers should perform most of the reviews within each sub-discipline, providing greater control over error associated with individual reviewers.
4

Eigelaar, Ilse. "The use of peer review as an evaluative tool in science." Thesis, Stellenbosch: Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52587.

Annotation:
Thesis (MPhil), Stellenbosch University, 2001.
ENGLISH ABSTRACT: Peer review as an institutional mechanism for certifying knowledge and allocating resources dates back as far as 1665. Today it can with confidence be stated that it is one of the most prominent evaluative tools used in science to determine the quality of research across all scientific fields. Given the transformation within the processes of knowledge production, peer review as an institutionalised method of the evaluation of scientific research has not been unaffected. Peer reviewers have to act within a system of relevant science and find themselves responsible to the scientific community as well as to public decision-makers, who in turn are responsible to the public. This dual responsibility of reviewers led to the development of criteria to be used in the evaluation process to enable them to measure scientific excellence as well as the societal relevance of science. In this thesis peer review in science is examined within the context of these transformations. In looking at the conceptual and methodological issues raised by peer review, definitions of peer review, its history and contexts of application are examined followed by a critique on peer review. Peer review in practice is also explored and the evaluation processes of four respective funding agencies are analysed with regards to three aspects intrinsic to the peer review process: the method by which reviewers are selected, the review criteria by which proposals are rated, and the number of review stages within each review process. The thesis concludes with recommendations for possible improvements to the peer review process and recommended alternatives to peer review as an evaluative tool.
5

Mow, Karen Estelle. "Research Grant Funding and Peer Review in Australian Research Councils." University of Canberra. Administrative Studies, 2009. http://erl.canberra.edu.au./public/adt-AUC20091214.152554.

Annotation:
This thesis considers the effects of research funding process design in the Australian Research Council (ARC) and the National Health and Medical Research Council (NHMRC). The program delivery mechanisms that the ARC and NHMRC use differ in detail and each council claims to be using the best selection model possible. Neither council provides evidence that peer review is the best possible way of delivering government funding for research and neither can produce empirical evidence that they use the best possible peer review model to determine excellence. Data used in this thesis were gathered over several years, forming a comparative case study of the Australian Research Council and the National Health and Medical Research Council, with illustrative data from comparable international organizations in the UK and USA. The data collection included: a survey of applicants, semi-structured interviews with experienced panel members and former staff, observation of selection meetings, and examination of publications by and about the research councils. Researchers firmly believe in peer review and their confidence enables the system to function. However, the mechanisms of grant selection are not well understood and not well supported by applicants, who criticize the processes used to assess their work, while supporting the concept of peer selection. The notion of excellence is problematic; judgements of excellence are made within frameworks set by the research councils and vary across disciplines. Allocation of research funding depends on peer review assessment to determine quality, but there is no single peer review mechanism, rather, there exist a variety of processes. Process constraints are examined from the perspectives of panel members, peer reviewers, council staff and applicants. Views from outside and inside the black box of selection reveal the impacts of process design on judgements of excellence and decision-making capacity. 
Peer reviewers in selection panels are found to use a range of differentiating strategies to separate applications, with variance evident across disciplines and research councils. One dominant criterion emerges in both the ARC and NHMRC processes, track record of the applicants. Program delivery mechanisms enable and constrain selection but every peer panel member has to make selection decisions by defining discipline standards and negotiating understandings within the panel. The extent to which peers can do this depends on the number of applications assigned to them, the size of the applicant field, and the processes they have to follow. Fine details of process design, panel rules and interactions are the tools that shape funding outcomes. Research councils believe they are selecting the best, most meritorious proposed research. However, I show in this thesis that the dominant discriminator between applicants in Australian selection processes is track record of the applicant. This effect is the result of several factors operating singly or in concert. Researcher track record, largely determined by quality and number of journal publications, is considered to be the responsibility of universities but support for this capacity building has not been systematically provided in Australian universities. Reliance on track record to determine the outcomes of all but the very best applications is very like awarding prizes for past work and is significantly different from the models of grant selection that operate in comparable international research councils.
6

Avin, Shahar. "Breaking the grant cycle: on the rational allocation of public resources to scientific research projects." Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/247434.

Annotation:
The thesis presents a reformative criticism of science funding by peer review. The criticism is based on epistemological scepticism regarding the ability of scientific peers, or any other agent, to have access to sufficient information regarding the potential of proposed projects at the time of funding. The scepticism is based on the complexity of factors contributing to the merit of scientific projects, and the rate at which the parameters of this complex system change their values. By constructing models of different science funding mechanisms, a construction supported by historical evidence, computational simulations show that in a significant subset of cases it would be better to select research projects by a lottery mechanism than by selection based on peer review. This last result is used to create a template for an alternative funding mechanism that combines the merits of peer review with the benefits of random allocation, while noting that this alternative is not so far removed from current practice as may first appear.
7

Rankins, Falcon. "An Investigation of How Black STEM Faculty at Historically Black Colleges and Universities Approach the National Science Foundation Merit Review Process." VCU Scholars Compass, 2017. https://scholarscompass.vcu.edu/etd/5149.

Annotation:
This qualitative inquiry explored the ways in which US-born, Black faculty member participants in science, technology, engineering, and mathematics (STEM) disciplines at Historically Black Colleges and Universities (HBCUs) interact with the National Science Foundation (NSF). Eight Black HBCU STEM faculty members with a range of involvement in NSF-related activities were individually interviewed. Topics of discussion with participants included their prior experiences with NSF, their understanding of the merit review process, and their understanding of their personal and institutional relationships with NSF and the STEM community. Two broad findings emerged from the conversations. The first was that issues of communities and social identity were important to the participants’ work as research scientists. Participants prioritized advancing people and communities over advancing the knowledge of ambiguous, disembodied scientific disciplines, and some participants were motivated by interests in social justice. However, participants maintained strong identities as scientists and the discussions provided no evidence that other social factors influenced their application of the scientific method. The second major finding dealt with the role participants perceived their institutions playing in their involvement with NSF. All participants described challenges associated with pursuing research in HBCU environments and, in some cases, the institutional challenges served as the motivation for participants’ projects, with varying consequences. Finally, this study developed and refined a theoretical framework for explaining the underrepresentation of HBCUs in NSF funding streams. In developing this framework, a brief history of the origination of HBCUs, NSF, and the NSF merit review process is presented.

Books on the topic "Peer review of research grant proposals"

1

National Research Council (U.S.). Committee on Peer Review Procedures. Improving research through peer review. Washington, D.C.: National Academy Press, 1987.

2

National Cancer Institute (U.S.), ed. Share your expertise with us. [Bethesda, Md.]: National Cancer Institute, 2001.

3

Center for Scientific Review (National Institutes of Health). What happens to your grant application: A primer for new applicants. 8th ed. [Bethesda, Md.]: Center for Scientific Review, National Institutes of Health, 2011.

4

United States. Congress. Senate. Committee on Governmental Affairs, ed. Peer review: Reforms needed to ensure fairness in federal agency grant selection : report to the Chairman, Committee on Governmental Affairs, U.S. Senate. Washington, D.C.: The Office, 1994.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Langfeldt, Liv. Fagfellevurdering som forskningspolitisk virkemiddel: En studie av fordelingen av frie midler i Norges forskningsråd. Oslo: NIFU, Norsk institutt for studier av forskning og utdanning, 1998.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Mandel, Richard. A half century of peer review, 1946-1996. Bethesda, MD (2760 Eisenhower Ave., Alexandria 22314): Division of Research Grants, National Institutes of Health, 1996.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Hill, Anne. Addressing common problems: Guidance for submitting European Commission fifth framework proposals. Birmingham: Outreach Press, 2001.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Horace Mann Learning Center, ed. Reviewing applications for discretionary grants and cooperative agreements: A workbook for application reviewers. [Washington, D.C.?]: Horace Mann Learning Center, 1988.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Horace Mann Learning Center. Reviewing applications for discretionary grants and cooperative agreements: A workbook for application reviewers. [Washington, D.C.?]: Horace Mann Learning Center, U.S. Department of Education, 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Frankel, Mark S., and Jane Cave, eds. Evaluating science and scientists: An east-west dialogue on research evaluation in post-communist Europe. Budapest: Central European University Press, 1997.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "Peer review of research grant proposals"

1

Marušić, Ana. „Reviewing, Evaluating and Editing“. In Collaborative Bioethics, 107–19. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-22412-6_8.

The full text of the source
Annotation:
Abstract: As an early career researcher, you will probably not be extensively involved in reviewing journal articles or research proposals, or editing scientific journals. However, reviewing, evaluating and editing are important aspects of research. As an early career researcher, especially after getting a doctoral degree, you may be invited by a journal to serve as a peer reviewer, or may edit or work in a scientific peer review journal. It is important that you understand what to expect from a responsible review of your work – when you submit a manuscript to a journal or a grant proposal. In this chapter, we will look at different types of journal peer review. We will address the responsibilities of peer reviewers toward the authors and editor, including confidentiality, objectivity, and competing interests. We will focus on journal peer review, because this is something that you will certainly experience from the author’s side, and possibly as a reviewer. The principles of professional and responsible peer review also apply to other types of peer review, such as for grants or academic/research advancement.
APA, Harvard, Vancouver, ISO, and other citation styles
2

Langfeldt, Liv. „The Decision-Making Constraints and Processes of Grant Peer Review, and Their Effects on the Review Outcome“. In Peer Review in an Era of Evaluation, 297–326. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-75263-7_13.

The full text of the source
Annotation:
Abstract: When distributing grants, research councils use peer expertise as a guarantee for supporting the best projects. However, there are no clear norms for assessments, and there may be a large variation in what criteria reviewers emphasize – and how they are emphasized. The determinants of peer review may therefore be accidental, in the sense that who reviews what research and how reviews are organized may determine outcomes. This chapter deals with how the review process affects the outcome of grant review. It is a reprint of a study of the multitude of review procedures practiced in The Research Council of Norway (RCN) in the 1990s. While it is outdated as an empirical study of the RCN, it provides some general insights into the dynamics of grant review panels and the effects of different ways of organising the decision-making in the panels. Notably, it is still one of the few in-depth studies of grant review processes based on direct observation of panel meetings and full access to applications and review documents. A central finding is that rating scales and budget restrictions are more important than review guidelines for the kind of criteria applied by the reviewers. The decision-making methods applied by the review panels when ranking proposals are found to have substantial effects on the outcome. Some ranking methods tend to support uncontroversial and safe projects, whereas other methods give better chances for scholarly pluralism and controversial research.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Srinivas, Shamala, and Ranga V. Srinivas. „Grant Process and Peer Review: US National Institutes of Health System“. In The Quintessence of Basic and Clinical Research and Scientific Publishing, 799–810. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1284-1_51.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Schwartz, Samuel M., and Mischa E. Friedman. „The Peer Review System“. In A Guide to NIH Grant Programs, 86–96. New York, NY: Oxford University Press, 1992. http://dx.doi.org/10.1093/oso/9780195069341.003.0007.

The full text of the source
Annotation:
Abstract: The peer review system is an administrative creation first utilized by the NIH over forty years ago. It has enabled the NIH to recruit large numbers of prominent, primarily nonfederal scientists from institutions of higher learning, hospitals, research foundations, and industry within the United States and neighboring countries. This impressive array of expertise is used by the agency to evaluate the scientific merit of grant and contract proposals. It represents a unique partnership between the NIH and the biomedical research community. The scientific expertise contributed by the nation’s biomedical researchers would be impossible for the NIH to duplicate with its own staff. On the other hand, the NIH represents an ideal rallying point for the research community for advocating strong, continuing support for biomedical research.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Schwartz, Samuel M., and Mischa E. Friedman. „Institute Review“. In A Guide to NIH Grant Programs, 114–29. New York, NY: Oxford University Press, 1992. http://dx.doi.org/10.1093/oso/9780195069341.003.0009.

The full text of the source
Annotation:
Abstract: The study sections in DRG are mistakenly thought by some to comprise the totality of NIH peer review activities. What is often overlooked is the very significant amount of peer review being carried out in the institutes. A few of the mechanisms (program projects, centers, contracts) reviewed in the institutes have received less than favorable acceptance by many in the scientific community because they are perceived to be a less sound way to support research than the R01 award program. Many of the institute reviews involve the most complex grant mechanisms, the most challenging review assignments for staff and reviewers, and result in the commitment of substantial amounts of monies. Approximately 40 percent of NIH funds awarded for competing grants and cooperative agreements in FY 1990 were the result of reviews managed by the institutes. Table 9.1 should serve to highlight the size of the workload.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Tinkle, Mindy B. „Submitting a Research Grant Application to the National Institutes of Health: Navigating the Application and Peer Review System“. In Intervention Research. New York, NY: Springer Publishing Company, 2012. http://dx.doi.org/10.1891/9780826109583.0021.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Schwartz, Samuel M., and Mischa E. Friedman. „Areas of Special Interest“. In A Guide to NIH Grant Programs, 160–67. New York, NY: Oxford University Press, 1992. http://dx.doi.org/10.1093/oso/9780195069341.003.0013.

The full text of the source
Annotation:
Abstract: Shared responsibility between members of the biomedical research community and NIH staff is an important aspect of the extramural programs and the peer review system. One aspect of this is service by members of the research community on advisory councils and review panels. In addition, many of these individuals are PIs themselves. They and their institutions as applicants share important roles in assuring the responsible and ethical conduct of research and the training for this, the proper care of human subjects and experimental animals in research protocols, the equitable involvement of minorities and women as subjects in clinical research, and the protection of research personnel and the environment.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Schwartz, Samuel M., and Mischa E. Friedman. „National Advisory Councils“. In A Guide to NIH Grant Programs, 130–38. New York, NY: Oxford University Press, 1992. http://dx.doi.org/10.1093/oso/9780195069341.003.0010.

The full text of the source
Annotation:
Abstract: Historically, national advisory councils were the first on the peer review scene. In 1937, Congress created the National Cancer Institute (NCI), along with an advisory council, and authorized the support of research and research training. With the passage of the Public Health Service Act in 1944, the NCI became part of NIH and a National Advisory Health Council was directed to provide support for other areas of biomedical research. After World War II, the NIH inherited a variety of research activities supported by other agencies. The National Advisory Health Council felt the need for scientific assistance in the review of grant applications. In 1946, NIH created the Office of Research Grants, which eventually became the DRG of today, and twenty-one study sections to help the National Advisory Health Council carry out its mandate to make grant awards. This briefly is how it all began and may help to place in perspective the origin of national advisory councils, study sections, and other similar scientific review groups.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Gross, Alan G., und Joseph E. Harmon. „Evaluation Before Publication“. In The Internet Revolution in the Sciences and Humanities. Oxford University Press, 2016. http://dx.doi.org/10.1093/oso/9780190465926.003.0010.

The full text of the source
Annotation:
In the midst of the controversy over the Nemesis affair—over whether a hidden star was the cause of periodic extinctions on Earth—David Raup and Jack Sepkoski were faced with a dilemma peer review had deliberately created: . . . The Tremaine analysis was technically hearsay, because it did not exist in the conventional sense of a scientific publication. To be sure, he sent us a copy of the manuscript shortly before submitting it for publication in a special volume based on the Tucson meeting. We were working on a response but could not say anything substantive about it publicly, for fear of having our own paper on the subject disqualified by prior publication in the press. Besides, we had nothing to rebut until Tremaine’s paper was reviewed, revised, and finally published. . . . Precisely: Only peer review followed by publication gave them something to rebut. A survivor after a half-century of criticism concerning its efficacy, peer review remains the best guarantee that published manuscripts and funded grant proposals conform closely to community standards. Moreover, in both the sciences and the humanities, the review criteria are the same: originality, significance to the discipline, argumentative competence, and clarity of expression. When we examine the ways the Internet is transforming peer review, we will see that the transparency and interactivity of the new medium make possible sounder judgments according to these criteria. Interactivity gives practitioners a firmer sense of the disciplinary-specific meanings of the standards on which their judgments are based; transparency broadcasts this firmer sense to the discipline as a whole. Under any form of peer review, knowledge is what it has always been, an agonistic system in flux, the site of a constant struggle for survival in the realm of ideas. But it is a system that cannot function properly unless each component—each bundle of claims, evidence, and argument—exhibits provisional stability.
To confer this stability is the task of peer review.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Tinkle, Mindy B., and Ann Marie McCarthy. „Submitting a Research Grant Application to the National Institutes of Health: Navigating the Application and Peer Review System“. In Intervention Research and Evidence-Based Quality Improvement. New York, NY: Springer Publishing Company, 2018. http://dx.doi.org/10.1891/9780826155719.0024.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO, and other citation styles

Conference papers on the topic "Peer review of research grant proposals"

1

Schiffbänker, Helene. „Implementing ‘Gender in Research’ as Inclusive Excellence Indicator – Practices in peer review panels“. In 27th International Conference on Science, Technology and Innovation Indicators (STI 2023). 27th International Conference on Science, Technology and Innovation Indicators (STI 2023), 2024. http://dx.doi.org/10.55835/64425f1ea45f9765a1e48751.

The full text of the source
Annotation:
Research funding organisations (RFOs) are key actors for guiding and reforming the assessment of grant applications. To mitigate gender bias, many RFOs have various policies in place. But how are formal gender equality policies implemented in practice by peer review panels? We analyse how one policy, incorporating the sex and gender dimension in research content and innovation (GiRI), is assessed in practice. Case studies were conducted in selected national RFOs which have implemented GiRI as an element of excellence. Data was collected through panel observations and interviews with staff and reviewers. By bringing in the reviewers’ perspective, we gain insights into how they perceive and discuss this excellence indicator, can identify various assessment practices and ultimately contribute to ongoing discussions on reconstructing excellence and fostering inclusiveness in science. The practical experiences might help RFOs to establish appropriate indicators to measure and monitor progress and fine-tune the policy.
APA, Harvard, Vancouver, ISO, and other citation styles

Reports by organizations on the topic "Peer review of research grant proposals"

1

Spaulding, Jesse, and Gleb Pitsevich. Thinklab: A platform for open review of research grant proposals [project]. ThinkLab, February 2016. http://dx.doi.org/10.15363/thinklab.18.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Spaulding, Jesse, and Gleb Pitsevich. Thinklab: A platform for open review of research grant proposals [proposal]. ThinkLab, February 2016. http://dx.doi.org/10.15363/thinklab.a12.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Heidler, Richard. Funding Research Data Infrastructures: Funding Criteria in Grant Peer Review. Fteval - Austrian Platform for Research and Technology Policy Evaluation, March 2020. http://dx.doi.org/10.22163/fteval.2020.467.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles