
Journal articles on the topic 'Survey evaluation'


Consult the top 50 journal articles for your research on the topic 'Survey evaluation.'


1

Doble, Susan E., Jeanne E. Bonnell, and Joyce Magill-Evans. "Evaluation of Social Skills: A Survey of Current Practice." Canadian Journal of Occupational Therapy 58, no. 5 (December 1991): 241–49. http://dx.doi.org/10.1177/000841749105800506.

Abstract:
Occupational therapists in the Atlantic Region were surveyed to identify their perceptions of the importance of evaluating social skills in their practice, and to determine how they evaluate clients' social skills. Although 79% indicated that social skills evaluations were relevant, the majority reported that clinical observations were used almost exclusively for assessment. Only 29% of the respondents reported using any formal evaluation of social skills. According to the respondents, the evaluation process was hampered by their limited knowledge of available evaluation tools, limited access to social skills models, and insufficient time. These results indicate that therapists' knowledge of existing evaluation tools must be increased. Development of a theoretical model that will enable therapists to define social skills and relate clients' social skills to their occupational performance will also facilitate the evaluation process. As a consequence, therapists' evaluation of clients' social skills will be more efficient.
2

Zonneveld, Isaak S. "Landscape survey and evaluation." Journal of Arid Environments 17, no. 2 (September 1989): 255–64. http://dx.doi.org/10.1016/s0140-1963(18)30913-3.

3

Koutrouli, Eleni, and Aphrodite Tsalgatidou. "Reputation Systems Evaluation Survey." ACM Computing Surveys 48, no. 3 (February 8, 2016): 1–28. http://dx.doi.org/10.1145/2835373.

4

Weeks, Laura, Julie Polisena, Anna Scott, Anke-Peggy Holtorf, Sophie Staniszewska, and Karen Facey. "OP110 Survey Of Health Technology Assessment Evaluation Strategies For Patient And Public Involvement." International Journal of Technology Assessment in Health Care 33, S1 (2017): 51–52. http://dx.doi.org/10.1017/s0266462317001787.

Abstract:
INTRODUCTION: Although there is increased awareness of patient and public involvement (PPI) among Health Technology Assessment (HTA) organizations, evaluations of PPI initiatives are relatively scarce. Our objective as members of HTAi's Patient and Citizen Involvement Group (PCIG) was to advance understanding of the range of evaluation strategies adopted by HTA organizations and their potential usefulness. METHODS: In March 2016, a survey was sent to HTA organizations through the International Network of Agencies for Health Technology Assessment (INAHTA) and contacts of members of HTAi's PCIG. Respondents were asked about their organizational structure; how patients and members of the public are involved; whether and how PPI initiatives have been evaluated; and, if so, which facilitators of and challenges to evaluation were found and how results were used and disseminated. RESULTS: Fifteen programs from twelve countries responded; they involved patients (14/15) and members of the public (10/15) in HTA activities. Seven programs evaluated their PPI activities, including participant satisfaction (5/7), process evaluations (5/7), and impact evaluations (4/7). Evaluation results were used to improve PPI activities, identify education and training needs, and direct strategic priorities. Facilitators and challenges revolved around the need for stakeholder buy-in, sufficient resources, senior leadership, and including patients in evaluations. Participants also provided suggestions, based on their experiences, for others embarking on this work, such as including patients and members of the public in the process. CONCLUSIONS: We identified a small but diverse set of HTA organizations internationally that are evaluating their PPI activities. Our results add to the limited literature by documenting a range of evaluation strategies that reflect the range of rationales and approaches to PPI in HTA. It will be important for HTA organizations to draw on formal evaluation theories and methods when planning future evaluations, and to share their approaches and experiences with evaluation.
5

Preskill, Hallie, and Valerie Caracelli. "Current and Developing Conceptions of Use: Evaluation Use TIG Survey Results." Evaluation Practice 18, no. 3 (September 1997): 209–25. http://dx.doi.org/10.1177/109821409701800303.

Abstract:
This article presents the results of a survey sent to Evaluation Use Topical Interest Group (TIG) members for the purpose of ascertaining their perceptions about and experiences with evaluation use. Fifty-four percent ( n = 282) of the 530 members surveyed responded. These respondents agree that the major purposes of evaluation are to facilitate organizational learning, provide information for decision making, improve programs, and determine the merit or worth of the evaluand. Performance-results oriented evaluations, formative evaluations, as well as evaluations with a participatory emphasis, organizational learning emphasis, and practitioner-centered action research or empowerment approaches were all viewed as more important today than they were 10 years ago. Survey findings revealed that the most important strategies for facilitating use are planning for use at the beginning of an evaluation, identifying and prioritizing intended users and intended uses of the evaluation, designing the evaluation within resource limitations, involving stakeholders in the evaluation process, communicating findings to stakeholders as the evaluation progresses, and developing a communication and reporting plan. This survey represents a comprehensive effort to understand TIG respondents' views on evaluation use and should help further discussion on developing and advancing our theoretical and practical knowledge.
6

Wassink, Heather L., Gwen E. Chapman, Ryna Levy-Milne, and Lisa Forster-Coull. "Implementing the British Columbia Nutrition Survey: Perspectives of Interviewers and Facilitators." Canadian Journal of Dietetic Practice and Research 65, no. 2 (July 2004): 59–64. http://dx.doi.org/10.3148/65.2.2004.59.

Abstract:
The British Columbia Nutrition Survey was the last of ten provincial nutrition surveys completed between 1988 and 1999. A qualitative process evaluation was conducted to identify strengths and weaknesses of British Columbia Nutrition Survey procedures, as perceived by 27 public health nurses and dietitians directly involved in data collection. Data for the process evaluation were collected through in-depth telephone interviews, during which interviewers and facilitators described their experiences working for the survey. Qualitative analysis of interview transcripts identified codes that were then organized into eight categories, including issues arising from interviewer and facilitator training, challenges in recruiting survey participants, reflections on safety for survey personnel and participants, facilitators’ key role, the flexibility required to implement the protocol, and communication within the survey research team. Two final categories related to rewarding aspects of the job: insights affecting professional practice, and meeting survey participants and personnel. Evaluation findings show the importance of establishing open communication between research planners and those conducting surveys. This communication is needed to ensure that workers’ needs are met, the quality of the study is maximized, and evaluations of study protocols include the perspectives of those directly involved in data collection.
7

Cong, Xin, and Lingling Zi. "Blockchain applications, challenges and evaluation: A survey." Discrete Mathematics, Algorithms and Applications 12, no. 04 (July 7, 2020): 2030001. http://dx.doi.org/10.1142/s1793830920300015.

Abstract:
Blockchain is a promising technology which may change the way transactions are made and affect our lives in the future, and it has recently attracted the attention of more and more scholars. This paper provides an overview of the important issues of blockchain, aiming to lead researchers to a comprehensive understanding of its applications, challenges, and evaluation from a technical perspective. The basic technology, including authorization, incentive, and consensus, is presented, focusing on the latest methods. Then, a wide range of blockchain applications are described; in particular, some of the latest areas of intersection and integration with blockchain are introduced. Moreover, research challenges of blockchain are summarized and analyzed. Finally, evaluation metrics for blockchain are presented; as far as we know, these are the first metrics designed for evaluating blockchain performance.
8

Moore, Marsha L. "Developing the Preceptorship Evaluation Survey." Journal for Nurses in Staff Development (JNSD) 25, no. 5 (September 2009): 249–53. http://dx.doi.org/10.1097/nnd.0b013e3181ae2eba.

9

Karras, Bryant T., and James T. Tufano. "Multidisciplinary eHealth survey evaluation methods." Evaluation and Program Planning 29, no. 4 (November 2006): 413–18. http://dx.doi.org/10.1016/j.evalprogplan.2006.08.002.

10

ZHANG, Wei-Nan, Yangzi ZHANG, and Ting LIU. "Survey of evaluation methods for dialogue systems." SCIENTIA SINICA Informationis 47, no. 8 (July 24, 2017): 953. http://dx.doi.org/10.1360/n112017-00125.

11

Sayed, Ahmed, and Hussain Al-Asaad. "Low-Power Flip-Flops: Survey, Comparative Evaluation, and a New Design." International Journal of Engineering and Technology 3, no. 3 (2011): 279–86. http://dx.doi.org/10.7763/ijet.2011.v3.238.

12

Ikart, Emmanuel M. "Survey Questionnaire Survey Pretesting Method: An Evaluation of Survey Questionnaire via Expert Reviews Technique." Asian Journal of Social Science Studies 4, no. 2 (April 23, 2019): 1. http://dx.doi.org/10.20849/ajsss.v4i2.565.

Abstract:
The literature on questionnaire pretesting reveals a paradox. Pretesting is a simple technique for determining in advance whether a questionnaire causes problems for respondents or interviewers, and experienced researchers and survey methodologists have declared it indispensable. All the same, published survey reports provide no information about whether a questionnaire was pretested and, if so, how and with what results; until recently, there has also been limited methodological research on questionnaire pretesting. The universally acknowledged importance of pretesting has thus been honoured more in theory than in practice, and we know very little about the extent to which a pretest serves its intended purpose and adds value to questionnaires. An expert review is a traditional method of questionnaire pretesting, and expert reviews can be conducted with varying levels of organisation and rigor. At the lower end of the spectrum, an experienced subject-matter expert or survey methodologist reviews a draft questionnaire to identify issues with question wording or administration that may lead to measurement error. At the more rigorous end of the spectrum, as employed in this study, is the Questionnaire Appraisal Scheme method: a standardized instrument review containing 28 problem types that allows experienced researchers and/or coders to code, analyse, and compare the questionnaire problems reported by independent expert reviewers for consistency and agreement. However, in spite of the wide use of expert review as a pretest method, few empirical evaluations of this method exist. Specifically, there is little evidence as to whether different expert reviewers consistently identify similar questionnaire problems, or whether there is a reasonable level of agreement across reviewers in their evaluation of questionnaire problems. This paper addresses these shortcomings. The protocols employed here would contribute to reducing the shortfall in pretesting guidelines and encourage roundtable discussions in academia and management practice.
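The inter-reviewer agreement discussed in this abstract is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal Python sketch; the problem codes and ratings below are hypothetical illustrations, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labelled independently at their observed rates
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical problem codes assigned by two expert reviewers to ten questions
a = ["wording", "none", "recall", "none", "wording",
     "none", "recall", "wording", "none", "none"]
b = ["wording", "none", "recall", "wording", "wording",
     "none", "none", "wording", "none", "none"]
print(round(cohens_kappa(a, b), 2))
```

Values near 1 indicate that reviewers identify the same problems; values near 0 indicate agreement no better than chance.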
13

Colbert-Getz, Jorie M., and Steven Baumann. "Changing medical students’ perception of the evaluation culture: Is it possible?" Journal of Educational Evaluation for Health Professions 13 (February 15, 2016): 8. http://dx.doi.org/10.3352/jeehp.2016.13.8.

Abstract:
Student feedback is a critical component of the teacher-learner cycle. However, there is no gold-standard course or clerkship evaluation form, and there is limited research on the impact of changing the evaluation process. Results from a focus group and a pre-implementation feedback survey, coupled with best practices in survey design, were used to improve all course/clerkship evaluations for academic year 2013-2014. In spring 2014 we asked all students at the University of Utah School of Medicine, United States of America, to complete the same feedback survey (post-implementation survey). We assessed the evaluation climate with three measures on the feedback survey: overall satisfaction with the evaluation process, the time students gave effort to the process, and the time students used shortcuts. Scores from these measures were compared between 2013 and 2014 with Mann-Whitney U-tests. Response rates were 79% (254) for 2013 and 52% (179) for 2014. Students' overall satisfaction scores were significantly higher (more positive) post-implementation than pre-implementation (P<0.001). There was no change in the amount of time students gave effort to completing evaluations (P=0.981) and no change in the amount of time they used shortcuts to complete evaluations (P=0.956). We were able to change overall satisfaction with the medical school evaluation culture, but not the effort students gave to completing evaluations or their use of shortcuts. To ensure accurate evaluation results we will need to focus our efforts on the time needed to complete course evaluations across all four years.
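The Mann-Whitney U-test used in this study compares two independent groups of ordinal scores by ranking the pooled values. A minimal pure-Python sketch of the U statistic (p-value computation omitted for brevity); the 5-point satisfaction ratings are hypothetical, not the study's data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for group x.

    Ranks the pooled samples (tied values share their average rank), then
    U_x = R_x - n_x(n_x + 1)/2, where R_x is the rank sum of group x.
    """
    pooled = sorted((v, 0 if i < len(x) else 1)
                    for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):          # positions i..j-1 hold ranks i+1..j
            ranks[k] = (i + 1 + j) / 2
        i = j
    rank_sum_x = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Hypothetical 5-point satisfaction ratings, pre- vs. post-implementation
pre = [2, 3, 3, 2, 4, 3]
post = [4, 4, 5, 3, 4, 5]
print(mann_whitney_u(pre, post))   # a small U: post ratings tend to be higher
```

In practice one would use a library routine (e.g. SciPy's `mannwhitneyu`) to obtain the p-value as well.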
14

Aalto, Leena, Sanna Lappalainen, Heidi Salonen, and Kari Reijula. "Usability evaluation (IEQ survey) in hospital buildings." International Journal of Workplace Health Management 10, no. 3 (June 5, 2017): 265–82. http://dx.doi.org/10.1108/ijwhm-03-2016-0014.

Abstract:
Purpose: As hospital operations are undergoing major changes, comprehensive methods are needed for evaluating the indoor environment quality (IEQ) and usability of workspaces in hospital buildings. The purpose of this paper is to present a framework of the characteristics that have an impact on the usability of work environments for hospital renovations, and to use this framework to illustrate the usability evaluation process in a real environment. Design/methodology/approach: The usability of workspaces in hospital environments was evaluated in two hospitals, as an extension of the IEQ survey. The evaluation method was a usability walk-through. The main aim was to determine the usability characteristics of hospital facility workspaces that support health, safety, good indoor air quality, and work flow. Findings: The facilities and workspaces were evaluated by means of four main themes: orientation, layout solution, working conditions, and spaces for patients. The most significant usability flaws were cramped spaces, noise/acoustic problems, faulty ergonomics, and insufficient ventilation. Because rooms were cramped, all furnishing directly caused functionality and safety problems in these spaces. Originality/value: The paper proposes a framework that links different design characteristics to the usability of hospital workspaces that need renovation.
15

MacDonald, Amanda B., Jan L. Jensen, and Melissa D. Rossiter. "Nutrition and Shiftwork: Evaluation of New Paramedics’ Knowledge and Attitudes." Canadian Journal of Dietetic Practice and Research 74, no. 4 (December 2013): 198–201. http://dx.doi.org/10.3148/74.4.2013.198.

Abstract:
Purpose: The effect of an oral education intervention on nutrition knowledge was evaluated in new paramedic employees. The evaluation involved measuring knowledge of and attitudes toward nutrition and shiftwork before and after the directed intervention. Methods: A convenience sample of 30 new paramedic shiftworkers attended a 15-minute education session focused on nutrition management strategies. This matched cohort study included three self-administered surveys. Survey 1 was completed before education, survey 2 immediately after education, and survey 3 after one month of concurrent post-education and employment experience. Knowledge and attitude scores were analyzed for differences between all surveys. Results: Participants were primary care paramedics, 59% of whom were male. They reported that previously they had not received this type of information or had received only a brief lecture. Mean knowledge scores increased significantly from survey 1 to survey 2; knowledge retention was identified in survey 3. A significant difference was found between surveys 2 and 3 for attitudes toward meal timing; no other significant differences were found between attitude response scores. Conclusions: The education session was successful in improving shiftwork nutrition knowledge among paramedics. Paramedics’ attitudes toward proper nutrition practices were positive before the education intervention.
16

Dickson, John P., Floyd J. Fowler, and Thomas W. Mangione. "Improving Survey Questions: Design and Evaluation." Journal of Marketing Research 34, no. 2 (May 1997): 296. http://dx.doi.org/10.2307/3151868.

17

Kolivand, Hoshang, Mohd Shahrizal Sunar, Samira Y. Kakh, Riyadh Al-Rousan, and Ismahafezi Ismail. "Photorealistic rendering: a survey on evaluation." Multimedia Tools and Applications 77, no. 19 (March 13, 2018): 25983–6008. http://dx.doi.org/10.1007/s11042-018-5834-7.

18

Mueller, Daniel J. "Career Orientation Placement and Evaluation Survey." Measurement and Evaluation in Counseling and Development 18, no. 3 (October 1985): 132–34. http://dx.doi.org/10.1080/07481756.1985.12022802.

19

Turpin, Robin S., Nick L. Smith, and Laurie A. Darcy. "Survey of journals publishing evaluation research." Evaluation Practice 8, no. 2 (May 1987): 10–19. http://dx.doi.org/10.1016/s0886-1633(87)80080-6.

20

B.R.W. "A survey of evaluation practice readers." Evaluation Practice 17, no. 1 (December 1996): 85–90. http://dx.doi.org/10.1016/s0886-1633(96)90043-4.

21

BELL, JOHN R. "Migraine Survey Can Guide Evaluation, Therapy." Internal Medicine News 40, no. 11 (June 2007): 24. http://dx.doi.org/10.1016/s1097-8690(07)70645-5.

22

Shanmugapriya, P., and R. M. Suresh. "Software Architecture Evaluation Methods A Survey." International Journal of Computer Applications 49, no. 16 (July 28, 2012): 19–26. http://dx.doi.org/10.5120/7711-1107.

23

Soner, Shweta, Swapnil Soner, and Maya Yadav. "A Survey on Software Bug Evaluation." International Journal of Computer Applications 129, no. 10 (November 17, 2015): 36–38. http://dx.doi.org/10.5120/ijca2015907015.

24

Lesslie, R. "Wilderness survey and evaluation in Australia." Australian Geographer 22, no. 1 (May 1991): 35–43. http://dx.doi.org/10.1080/00049189108703019.

25

Turpin, R. S., N. L. Smith, and L. A. Darcy. "Survey of Journals Publishing Evaluation Research." American Journal of Evaluation 8, no. 2 (May 1, 1987): 10–19. http://dx.doi.org/10.1177/109821408700800202.

26

Sobhy, Dalia, Rami Bahsoon, Leandro Minku, and Rick Kazman. "Evaluation of Software Architectures under Uncertainty." ACM Transactions on Software Engineering and Methodology 30, no. 4 (July 2021): 1–50. http://dx.doi.org/10.1145/3464305.

Abstract:
Context: Evaluating software architectures in uncertain environments raises new challenges, which require continuous approaches. We define continuous evaluation as multiple evaluations of the software architecture that begin at the early stages of development and are periodically and repeatedly performed throughout the lifetime of the software system. Although numerous approaches have been developed for architecture evaluation over the past years, approaches for continuous evaluation that handle dynamics and uncertainties at run-time are still very few, limited, and lacking in maturity. Objective: This review surveys efforts on architecture evaluation and provides a unified terminology and perspective on the subject. Method: We conducted a systematic literature review to identify and analyse architecture evaluation approaches for uncertainty, both continuous and non-continuous, covering work published between 1990 and 2020. We examined each approach and provide a classification framework for this field. We present an analysis of the results and provide insights regarding open challenges. Major results and conclusions: The survey reveals that most existing architecture evaluation approaches typically lack an explicit linkage between design-time and run-time. Additionally, there is a general lack of systematic approaches on how continuous architecture evaluation can be realised or conducted. To remedy this lack, we present a set of necessary requirements for continuous evaluation and describe some examples.
27

Brandon, Paul R. "State-Level Evaluations of School Programs Funded under the Drug-Free Schools and Communities Act." Journal of Drug Education 22, no. 1 (March 1992): 25–36. http://dx.doi.org/10.2190/fk6n-mgaf-chgu-q2yj.

Abstract:
Although the Drug-Free Schools and Communities Act of 1986 and the 1989 Amendments to the Act require states to evaluate their drug-education programs, no guidelines for conducting these evaluations have been produced, and little has been reported on how the states are conducting such evaluations. In this article, the results of a telephone survey on current state-level efforts to evaluate school programs funded under the Act are reported. Some states report studies of the implementation of the program and some report drug- and alcohol-use surveys. Together, these two types of evaluation efforts form the foundation of an approach for conducting evaluations under the Act. Reasons are presented why experimental and quasi-experimental designs might be inappropriate and impractical for the evaluations, and an evaluation approach linking program implementation findings and drug- and alcohol-use survey results is suggested.
28

Alliston, Deborah, Matthew J. Kelt, Grace Nehme, and Robert Wittler. "Group Evaluations of Individual Faculty Hospitalists." Kansas Journal of Medicine 12, no. 3 (August 21, 2019): 62–64. http://dx.doi.org/10.17161/kjm.v12i3.11794.

Abstract:
Introduction: Faculty evaluations are important tools for improving faculty-to-resident instruction, but residents in our pediatric and internal medicine/pediatrics residency programs would seldom evaluate individual pediatric faculty hospitalists. Our objectives were to: (1) increase the percentage of completed evaluations of individual pediatric hospitalists to greater than 85%; (2) improve the quality of pediatric hospitalist feedback, as measured by resident and faculty satisfaction surveys; and (3) reduce residents' concerns about the lack of anonymity of evaluations. Methods: Members of the resident inpatient team (pediatric and internal medicine/pediatrics residents) completed group-based evaluations of individual pediatric hospitalists. A survey to evaluate this change in process was distributed to the pediatric hospitalists (n = 6) and another to residents, both based on a 5-point Likert-type scale. Surveys were completed before and four months after implementation of the changes. Pre- and post-survey data of resident and hospitalist responses were compared using the Mann-Whitney test and a test of proportions. Results: The percentage of completed evaluations increased from 0% to 86% in one month and to 100% in two months; thereafter it remained at 100% through the end of the data collection period at seven months. Hospitalists (n = 6, 100% participation) reported that their satisfaction with the feedback they received from residents increased significantly for all survey questions. Resident satisfaction (n = 24, 89% participation in post-intervention surveys) increased significantly with regard to the evaluation process. Conclusions: For hospitalists, group-based resident evaluations of individual hospitalists led to an increased percentage of completed evaluations, improved the quality and quantity of feedback to hospitalists, and increased satisfaction with evaluations. For residents, these changes led to increased satisfaction with the evaluation process.
29

Aron, D. N., R. Roberts, J. Stallings, J. Brown, and C. W. Hay. "Evaluation of Positive Contrast Arthrography in Canine Cranial Cruciate Ligament Disease." Veterinary and Comparative Orthopaedics and Traumatology 09, no. 01 (1996): 10–3. http://dx.doi.org/10.1055/s-0038-1632495.

Abstract:
Summary: Arthrographic and intraoperative evaluations of stifles affected with cranial cruciate disease were compared. Arthrography did not appear to be helpful in predicting cranial cruciate ligament pathology. The caudal cruciate ligament was consistently not visualized in the arthrograms and was normal at surgery. The menisci were visualized consistently in the arthrograms, but conclusions could not be made as to the benefit of arthrography in predicting meniscal pathology. Arthrography was not helpful in predicting joint capsule and femoral articular surface pathology. Survey radiographic evaluation was better than arthrography in evaluating joint pathology. When cruciate injury is suspected, survey radiographs taken after the history and physical examination are better than positive contrast arthrograms at supporting the diagnosis. Positive contrast arthrography was evaluated as a diagnostic aid in canine cranial cruciate ligament disease. It did not appear to be useful in predicting joint pathology. With arthrography, both menisci could be visualized and evaluated for abnormalities. Joint effusion and the presence of osteophytes evaluated on survey radiographs were better than arthrography in evaluating joint pathology.
30

Kairys, Steven, Laurence Ricci, and Martin A. Finkel. "Funding of Child Abuse Evaluations: Survey of Child Abuse Evaluation Programs." Child Maltreatment 11, no. 2 (May 2006): 182–88. http://dx.doi.org/10.1177/1077559505285778.

31

Brewer, Sarah E., Elizabeth J. Campagna, and Elaine H. Morrato. "Advancing regulatory science and assessment of FDA REMS programs: A mixed-methods evaluation examining physician survey response." Journal of Clinical and Translational Science 3, no. 4 (August 2019): 199–209. http://dx.doi.org/10.1017/cts.2019.400.

Abstract:
Purpose: The Food and Drug Administration's (FDA) Draft Guidance for Industry on pharmaceutical REMS (Risk Evaluation and Mitigation Strategies) assessment and survey methodology highlights physician knowledge–attitudes–behaviors (KAB) surveys as regulatory science tools. This mixed-methods evaluation advances regulatory science and the assessment of FDA REMS programs when using physician surveys. We: (1) reviewed published physician survey response rates; and (2) assessed response bias in a simulation study of secondary survey data using different accrual cut-off strategies. Methods: A systematic literature review was conducted of US physician surveys (2000–2014) on pharmaceutical use (n = 75). Kruskal–Wallis tests were used to examine the relationships between response rates and survey design characteristics. The simulation was conducted using secondary data from a population-based physician KAB survey on diabetes risk management with antipsychotic use in Missouri Medicaid (n = 973, accrued over 30 weeks). Survey item responses were compared using Pearson's chi-square tests for two faster-completion simulations: Fixed Sample (n = 300) and Fixed Time (8 weeks). Results: Survey response rates ranged from 7% to 100% (median = 48%, IQR = 34%–68%). Surveys of targeted populations and surveys using member lists were associated with higher response rates (p = 0.02). In the simulation, 9 of 20 (45%) KAB items, including diabetes screening advocacy, differed significantly using the smaller Fixed Sample strategy (achieved in 12 days) versus full accrual. Fewer response differences were found using the Fixed Time strategy (2 of 20 [10%] items). Conclusions: Published data on physician surveys report low response rates, most associated with the sample source selected. FDA REMS assessments should include formal evaluation of survey accrual and response bias.
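The Pearson chi-square comparisons described here test whether item response distributions differ between accrual strategies. A minimal sketch of the statistic on a contingency table; the yes/no counts below are illustrative assumptions, not the study's data:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical yes/no counts for one KAB item: early accrual vs. full accrual
table = [[180, 120],   # first 300 respondents: yes, no
         [500, 473]]   # full sample: yes, no
print(round(chi_square(table), 2))
```

Comparing the statistic to a chi-square distribution with (r-1)(c-1) degrees of freedom (e.g. via SciPy) yields the p-value.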
32

Gayer, Christian. "Forecast Evaluation of European Commission Survey Indicators." Journal of Business Cycle Measurement and Analysis 2005, no. 2 (March 23, 2006): 157–83. http://dx.doi.org/10.1787/jbcma-v2005-art2-en.

33

Tanur, Judith M., Ellen J. Wentland, and Kent W. Smith. "Survey Responses: An Evaluation of Their Validity." Contemporary Sociology 24, no. 6 (November 1995): 833. http://dx.doi.org/10.2307/2076732.

34

Xu, Qingyang, and Ning Wang. "A Survey on Ship Collision Risk Evaluation." PROMET - Traffic&Transportation 26, no. 6 (December 30, 2014): 475–86. http://dx.doi.org/10.7307/ptt.v26i6.1386.

Abstract:
Recently, ship collision avoidance has become essential due to the emergence of special vessels such as chemical tankers and VLCCs (very large crude carriers). The information needed for safe navigation is obtained by combining electronic equipment with real-time visual information. However, according to research data, misjudgements and human errors are the major cause of ship collisions, and collision-avoidance decision support systems are an effective way to compensate for them. Collision risk evaluation is one of the most important problems in such decision support systems. A review is presented of different approaches to evaluating collision risk in maritime transportation. In this context, the basic concepts and definitions of collision risk and its evaluation are described. The review focuses on three categories of numerical models of collision risk calculation: methods based on traffic flow theory, on ship domain, and on dCPA and tCPA.
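The dCPA and tCPA quantities mentioned in this abstract (distance at and time to the closest point of approach) follow directly from the ships' relative position and velocity. A minimal sketch with hypothetical own-ship and target tracks (positions in nautical miles, speeds in knots):

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (tCPA, dCPA): time to and distance at closest point of approach."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]   # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]   # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                       # identical velocities: range never changes
        return 0.0, math.hypot(rx, ry)
    t_cpa = -(rx * vx + ry * vy) / v2   # minimises |r + v t| over t
    t_cpa = max(t_cpa, 0.0)             # CPA already passed: use current range
    d_cpa = math.hypot(rx + vx * t_cpa, ry + vy * t_cpa)
    return t_cpa, d_cpa

# Own ship heading east at 10 kn; target 5 nm to the north, heading south at 10 kn
t, d = cpa((0.0, 0.0), (10.0, 0.0), (0.0, 5.0), (0.0, -10.0))
print(round(t, 3), round(d, 3))
```

Risk evaluation methods in the dCPA/tCPA category then map these two numbers (often with range and bearing) onto a collision risk index.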
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Zhaobin, E. Wang, and Ying Zhu. "Image segmentation evaluation: a survey of methods." Artificial Intelligence Review 53, no. 8 (April 18, 2020): 5637–74. http://dx.doi.org/10.1007/s10462-020-09830-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wu, Xiaoying, and Dimitri Theodoratos. "A survey on XML streaming evaluation techniques." VLDB Journal 22, no. 2 (June 10, 2012): 177–202. http://dx.doi.org/10.1007/s00778-012-0281-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Gordon, Alan. "SurveyMonkey.com—Web-Based Survey and Evaluation System." Internet and Higher Education 5, no. 1 (January 2002): 83–87. http://dx.doi.org/10.1016/s1096-7516(02)00061-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Eszergár-Kiss, Domokos, and Bálint Caesar. "User Group Evaluation Based on Survey Data." Transportation Research Procedia 10 (2015): 256–65. http://dx.doi.org/10.1016/j.trpro.2015.09.075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tsen, Lawrence C., Scott Segal, Margaret Pothier, and Angela M. Bader. "Survey of Residency Training in Preoperative Evaluation." Anesthesiology 93, no. 4 (October 1, 2000): 1134–37. http://dx.doi.org/10.1097/00000542-200010000-00039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Osman, Rasha, and William J. Knottenbelt. "Database system performance evaluation models: A survey." Performance Evaluation 69, no. 10 (October 2012): 471–93. http://dx.doi.org/10.1016/j.peva.2012.05.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Crawford, Scott. "Evaluation of Web Survey Data Collection Systems." Field Methods 14, no. 3 (August 2002): 307–21. http://dx.doi.org/10.1177/1525822x0201400304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Church, Kenneth Ward, and Joel Hestness. "A survey of 25 years of evaluation." Natural Language Engineering 25, no. 06 (July 31, 2019): 753–67. http://dx.doi.org/10.1017/s1351324919000275.

Full text
Abstract:
Evaluation was not a thing when the first author was a graduate student in the late 1970s. There was an Artificial Intelligence (AI) boom then, but that boom was quickly followed by a bust and a long AI Winter. Charles Wayne restarted funding in the mid-1980s by emphasizing evaluation. No other sort of program could have been funded at the time, at least in America. His program was so successful that these days, shared tasks and leaderboards have become commonplace in speech and language (and Vision and Machine Learning). It is hard to remember that evaluation was a tough sell 25 years ago. That said, we may be a bit too satisfied with the current state of the art. This paper surveys considerations from other fields, such as reliability and validity from psychology and generalization from systems. There has been a trend for publications to report better and better numbers, but what do these numbers mean? Sometimes the numbers are too good to be true, and sometimes the truth is better than the numbers. It is one thing for an evaluation to fail to find a difference between man and machine, and quite another to pass the Turing Test. As Feynman said, "the first principle is that you must not fool yourself, and you are the easiest person to fool."
APA, Harvard, Vancouver, ISO, and other styles
43

Tong, Hsin-Min, and Allen L. Bures. "Marketing Faculty Evaluation Systems: A National Survey." Journal of Marketing Education 11, no. 1 (April 1989): 10–13. http://dx.doi.org/10.1177/027347538901100103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ozer, Muammer. "A Survey of New Product Evaluation Models." Journal of Product Innovation Management 16, no. 1 (January 1999): 77–94. http://dx.doi.org/10.1111/1540-5885.1610077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Sindhi, Komal, Jaymit Pandya, and Sudhir Vegad. "Quality evaluation of apple fruit: A Survey." International Journal of Computer Applications 136, no. 1 (February 17, 2016): 32–36. http://dx.doi.org/10.5120/ijca2016908340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chakrabarty, Subhra, and Joseph N. Rogé. "An Evaluation of the Organizational Learning Survey." Psychological Reports 91, no. 3_suppl (December 2002): 1255–67. http://dx.doi.org/10.2466/pr0.2002.91.3f.1255.

Full text
Abstract:
A mail survey of a national random sample of 2,000 marketing managers was conducted. The data provided by 221 respondents were analyzed to assess the unidimensionality of the 21-item 1997 Organizational Learning Survey developed by Goh and Richards. A confirmatory factor analysis using LISREL 8.30 did not support the unidimensionality of the 21-item survey; however, unidimensionality was established for most of the dimensions. Managerial implications and directions for research were suggested.
APA, Harvard, Vancouver, ISO, and other styles
47

Tillotson, Joy. "Web site evaluation: a survey of undergraduates." Online Information Review 26, no. 6 (December 2002): 392–403. http://dx.doi.org/10.1108/14684520210452727.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

CHAKRABARTY, SUBHRA. "AN EVALUATION OF THE ORGANIZATIONAL LEARNING SURVEY." Psychological Reports 91, no. 7 (2002): 1255. http://dx.doi.org/10.2466/pr0.91.7.1255-1267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

CHAKRABARTY, SUBHRA. "AN EVALUATION OF THE ORGANIZATIONAL LEARNING SURVEY." Psychological Reports 91, no. 8 (2002): 1255. http://dx.doi.org/10.2466/pr0.91.8.1255-1267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Couper, Mick P. "Usability Evaluation of Computer-Assisted Survey Instruments." Social Science Computer Review 18, no. 4 (November 2000): 384–96. http://dx.doi.org/10.1177/089443930001800402.

Full text
APA, Harvard, Vancouver, ISO, and other styles