
Journal articles on the topic 'Fry Readability Index'

Consult the top 15 journal articles for your research on the topic 'Fry Readability Index.'

1

Kahn, Alice, and Mary Pannbacker. "Readability of Educational Materials for Clients With Cleft Lip/Palate and Their Families." American Journal of Speech-Language Pathology 9, no. 1 (February 2000): 3–9. http://dx.doi.org/10.1044/1058-0360.0901.03.

Abstract:
Educational materials for clients with cleft lip/palate and their families and materials for the general public produced by the American Speech-Language-Hearing Association were analyzed for readability. The SMOG Grading Formula, a simple, fast procedure for predicting grade-level difficulty of written material, and the Fry index of readability, a more lengthy measure of readability, were used to analyze 30 publications. Reading levels were computed, and results ranged from elementary to college level. The majority of materials were written at or above the high school readability level. Results suggest a need for revision of current materials to lower reading levels. Writers should consider the appropriateness of reading level for readers when preparing educational materials.
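
The SMOG grade mentioned above is a published closed-form estimate, so it can be computed directly from two counts. A minimal sketch (the counts in the example are illustrative, not drawn from this study):

```python
import math

def smog_grade(polysyllable_count: int, sentence_count: int) -> float:
    """SMOG grade estimate from counts of polysyllabic words (3+ syllables)
    and sentences; the published formula assumes a 30-sentence sample."""
    return 1.0430 * math.sqrt(polysyllable_count * (30 / sentence_count)) + 3.1291

# Example: 42 polysyllabic words across 30 sentences prints roughly 9.9.
print(round(smog_grade(42, 30), 1))
```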
2

Kakazu, Rafael, Adam Schumaier, Chelsea Minoughan, and Brian Grawe. "Poor Readability of AOSSM Patient Education Resources and Opportunities for Improvement." Orthopaedic Journal of Sports Medicine 6, no. 11 (November 1, 2018): 232596711880538. http://dx.doi.org/10.1177/2325967118805386.

Abstract:
Background: Appropriate education on the disease processes associated with orthopaedic pathology can affect patient expectations and functional outcome. Hypothesis: Patient education resources from the American Orthopaedic Society for Sports Medicine (AOSSM) are too complex for comprehension by the average orthopaedic patient. Study Design: Cross-sectional study. Methods: Patient education resources provided by the AOSSM were analyzed with software that provided 10 readability scores as well as opportunities for improving readability. The readability scores were compared with the recommended eighth-grade reading level. Results: A total of 39 patient education resources were identified and evaluated. The mean ± SD reading grade-level scores were as follows: Coleman-Liau Index, 12.5 ± 1.11; New Dale-Chall Readability Formula, 10.9 ± 1.37; Flesch-Kincaid Grade Level, 9.9 ± 1.06; FORCAST Readability Formula, 11.4 ± 0.51; Fry Readability Formula, 12.8 ± 2.79; Gunning Fog Index, 11.9 ± 1.37; Raygor Readability Index, 13.1 ± 2.37; Simple Measure of Gobbledygook, 12.3 ± 0.90; Automated Readability Index, 11.2 ± 1.18; and New Automated Readability Index, 10.6 ± 1.27. After averaging the reading grade-level scores, only 1 patient education resource was found to be written at an 8th- to 9th-grade level, and 14 (36%) were written above a 12th-grade level. All scores were significantly different from the eighth-grade level (P < .0065). The percentages of complex words and long words were 19.6% ± 2.67% and 41.4% ± 3.18%, respectively. Conclusion: Patient education resources provided by the AOSSM are at a significantly higher reading level than recommended. Simple changes can drastically improve these scores to increase health literacy and possibly improve outcomes.
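
Several of the grade-level formulas averaged in this study are simple functions of basic text counts. A hedged sketch of three of them (Flesch-Kincaid Grade Level, Gunning Fog, Coleman-Liau) and their average, using illustrative counts rather than the study's data:

```python
from statistics import mean

def grade_levels(words, sentences, syllables, complex_words, letters):
    """Grade-level estimates from raw text counts (complex_words = words
    with three or more syllables), using the standard published formulas."""
    fkgl = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    fog  = 0.4 * ((words / sentences) + 100 * (complex_words / words))
    cli  = 0.0588 * (100 * letters / words) - 0.296 * (100 * sentences / words) - 15.8
    return {"FKGL": fkgl, "GunningFog": fog, "ColemanLiau": cli}

# Illustrative counts for a ~250-word passage.
scores = grade_levels(words=250, sentences=14, syllables=420, complex_words=45, letters=1300)
print({k: round(v, 1) for k, v in scores.items()}, "mean:", round(mean(scores.values()), 1))
```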
3

Kapoor, Karan, Praveen George, Matthew C. Evans, Weldon J. Miller, and Stanley S. Liu. "Health Literacy: Readability of ACC/AHA Online Patient Education Material." Cardiology 138, no. 1 (2017): 36–40. http://dx.doi.org/10.1159/000475881.

Abstract:
Objectives: To determine whether the online patient education material offered by the American College of Cardiology (ACC) and the American Heart Association (AHA) is written at a higher level than the 6th-7th grade level recommended by the National Institutes of Health (NIH). Methods: Online patient education material from each website was subjected to reading grade level (RGL) analysis using the Readability Studio Professional Edition. One-sample t testing was used to compare the mean RGLs obtained from 8 formulas to the NIH-recommended 6.5 grade level and 8th grade national mean. Results: In total, 372 articles from the ACC website and 82 from the AHA were studied. Mean (±SD) RGLs for the 454 articles were 9.6 ± 2.1, 11.2 ± 2.1, 11.9 ± 1.6, 10.8 ± 1.6, 9.7 ± 2.1, 10.8 ± 0.8, 10.5 ± 2.6, and 11.7 ± 3.5 according to the Flesch-Kincaid grade level (FKGL), Simple Measure of Gobbledygook (SMOG Index), Coleman-Liau Index (CLI), Gunning-Fog Index (GFI), New Dale-Chall reading level formula (NDC), FORCAST, Raygor Readability Estimate (RRE), and Fry Graph (Fry), respectively. All analyzed articles had significantly higher RGLs than both the NIH-recommended grade level of 6.5 and the national mean grade level of 8 (p < 0.00625). Conclusions: Patient education material provided on the ACC and AHA websites is written above the NIH-recommended 6.5 grade level and 8th grade national mean reading level. Additional studies are required to demonstrate whether lowering the RGL of this material improves outcomes among patients with cardiovascular disease.
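
The one-sample t tests against the 6.5-grade recommendation and the 8th-grade national mean can be sketched with standard tools. The per-article scores below are hypothetical stand-ins, and the 0.00625 threshold mirrors the paper's cut-off (presumably 0.05 spread across 8 formulas):

```python
import numpy as np
from scipy import stats

# Hypothetical article-level FKGL scores; the study's own data are not reproduced here.
fkgl_scores = np.array([9.1, 10.4, 8.7, 11.2, 9.8, 10.1, 9.5, 12.0, 8.9, 10.6])

for benchmark in (6.5, 8.0):  # NIH-recommended level and national mean
    t, p = stats.ttest_1samp(fkgl_scores, popmean=benchmark)
    print(f"vs grade {benchmark}: t={t:.2f}, p={p:.4f}, significant={p < 0.00625}")
```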
4

Eloy, Jean Anderson, Shawn Li, Khushabu Kasabwala, Nitin Agarwal, David R. Hansberry, Soly Baredes, and Michael Setzen. "Readability Assessment of Patient Education Materials on Major Otolaryngology Association Websites." Otolaryngology–Head and Neck Surgery 147, no. 5 (August 3, 2012): 848–54. http://dx.doi.org/10.1177/0194599812456152.

Abstract:
Objective Various otolaryngology associations provide Internet-based patient education material (IPEM) to the general public. However, this information may be written above the fourth- to sixth-grade reading level recommended by the American Medical Association (AMA) and National Institutes of Health (NIH). The purpose of this study was to assess the readability of otolaryngology-related IPEMs on various otolaryngology association websites and to determine whether they are above the recommended reading level for patient education materials. Study Design and Setting Analysis of patient education materials from 9 major otolaryngology association websites. Methods The readability of 262 otolaryngology-related IPEMs was assessed with 8 numerical and 2 graphical readability tools. Averages were evaluated against national recommendations and between each source using analysis of variance (ANOVA) with post hoc Tukey’s honestly significant difference (HSD) analysis. Mean readability scores for each otolaryngology association website were compared. Results Mean website readability scores using Flesch Reading Ease test, Flesch-Kincaid Grade Level, Coleman-Liau Index, SMOG grading, Gunning Fog Index, New Dale-Chall Readability Formula, FORCAST Formula, New Fog Count Test, Raygor Readability Estimate, and the Fry Readability Graph ranged from 20.0 to 57.8, 9.7 to 17.1, 10.7 to 15.9, 11.6 to 18.2, 10.9 to 15.0, 8.6 to 16.0, 10.4 to 12.1, 8.5 to 11.8, 10.5 to 17.0, and 10.0 to 17.0, respectively. ANOVA results indicate a significant difference (P < .05) between the websites for each individual assessment. Conclusion The IPEMs found on all otolaryngology association websites exceed the recommended fourth- to sixth-grade reading level.
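
The ANOVA-plus-Tukey workflow described here is straightforward to sketch with scipy and statsmodels. The groupings and scores below are hypothetical stand-ins for the per-website readability values:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-document FKGL scores grouped by association website.
df = pd.DataFrame({
    "site": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "fkgl": [9.8, 10.2, 11.0, 9.5, 10.4,
             12.1, 13.0, 12.6, 11.8, 12.4,
             15.2, 14.8, 16.0, 15.5, 14.9],
})

groups = [g["fkgl"].values for _, g in df.groupby("site")]
f_stat, p_val = stats.f_oneway(*groups)           # one-way ANOVA across sites
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

print(pairwise_tukeyhsd(df["fkgl"], df["site"]))  # post hoc Tukey HSD pairwise comparisons
```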
5

Gill, Marie E. "Drug court handbooks suitability for programme participants with low literacy." Health Education Journal 77, no. 8 (July 16, 2018): 995–1006. http://dx.doi.org/10.1177/0017896918786531.

Abstract:
Objective: The objective of this study was to describe the suitability of adult drug court handbooks for participants with low literacy. Methods: A convenience sample of seven drug court participant handbooks from urban drug courts in three regions of the USA was assessed for reading and literacy suitability for low-literacy learners using the Fry Index Readability Formula and the Suitability Assessment of Materials (SAM). The Fry Index Readability Formula follows an objective approach using a three-step process of counting three random samples of 100-word passages to calculate grade reading level. The SAM uses a 0–2 range scale to assess six distinct factors of written materials to yield a numerical score rated as superior, adequate or not suitable. Results: All handbooks were written above the reading level for low-literacy learners, with scores at the eighth-grade reading level or higher. A few handbooks scored adequate ratings in some SAM categories, and only two handbooks scored superior ratings in any one category. Overall SAM scores showed all handbooks were not suitable for low-literacy learners. Conclusion: Findings indicate that drug court participant handbooks from this sample are not written for low-literacy learners. Key recommendations are to develop a drug court handbook appropriate for participants with low literacy, assess drug court participants’ literacy for reading grade level and comprehension, and provide multi-modal teaching formats to promote effective learning.
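
The three-step Fry procedure described above (three random 100-word passages, with sentences and syllables counted per passage and then averaged) can be expressed compactly; the final grade is read off the Fry graph, which is not reproduced here. A rough sketch with a deliberately naive syllable counter (real analyses use dictionary-based counters):

```python
import random
import re

def naive_syllables(word: str) -> int:
    """Very rough syllable estimate: count vowel groups (real tools do better)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fry_plot_values(text: str, samples: int = 3, seed: int = 0):
    """Average sentences and syllables per 100-word passage over three random
    passages -- the two coordinates plotted on the Fry graph."""
    words = text.split()
    rng = random.Random(seed)
    sentence_counts, syllable_counts = [], []
    for _ in range(samples):
        start = rng.randrange(0, max(1, len(words) - 100))
        passage = words[start:start + 100]
        sentence_counts.append(len(re.findall(r"[.!?]+", " ".join(passage))) or 1)
        syllable_counts.append(sum(naive_syllables(w) for w in passage))
    return sum(sentence_counts) / samples, sum(syllable_counts) / samples

# Usage (handbook_text is a placeholder for the document being scored):
# sentences_per_100, syllables_per_100 = fry_plot_values(handbook_text)
```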
6

O'Sullivan, Lydia, Prasanth Sukumar, Rachel Crowley, Eilish McAuliffe, and Peter Doran. "Readability and understandability of clinical research patient information leaflets and consent forms in Ireland and the UK: a retrospective quantitative analysis." BMJ Open 10, no. 9 (September 2020): e037994. http://dx.doi.org/10.1136/bmjopen-2020-037994.

Abstract:
Objectives: The first aim of this study was to quantify the difficulty level of clinical research Patient Information Leaflets/Informed Consent Forms (PILs/ICFs) using validated and widely used readability criteria which provide a broad assessment of written communication. The second aim was to compare these findings with best practice guidelines. Design: Retrospective, quantitative analysis of clinical research PILs/ICFs provided by academic institutions, pharmaceutical companies and investigators. Setting: PILs/ICFs which had received Research Ethics Committee approval in the last 5 years were collected from Ireland and the UK. Intervention: Not applicable. Main outcome measures: PILs/ICFs were evaluated against seven validated readability criteria (Flesch Reading Ease, Flesch-Kincaid Grade Level, Simplified Measure of Gobbledegook, Gunning Fog, Fry, Raygor and New Dale Chall). The documents were also scored according to two health literacy-based criteria: the Clear Communication Index (CCI) and the Suitability Assessment of Materials tool. Finally, the documents were assessed for compliance with six best practice metrics from literacy agencies. Results: A total of 176 PILs were collected, of which 154 were evaluable. None of the PILs/ICFs had the mean reading age of <12 years recommended by the American Medical Association. 7.1% of PILs/ICFs were evaluated as ‘Plain English’, 40.3% as ‘Fairly Difficult’, 51.3% as ‘Difficult’ and 1.3% as ‘Very Difficult’. No PILs/ICFs achieved a CCI >90. Only two documents complied with all six best practice literacy metrics. Conclusions: When assessed against both traditional readability criteria and health literacy-based tools, the PILs/ICFs in this study are inappropriately complex. There is also evidence of poor compliance with guidelines produced by literacy agencies. These data clearly evidence the need for improved documentation to underpin the consent process.
7

Kaur, Supreet, Abhishek Kumar, Dhruv Mehta, and Michael Maroules. "So Difficult to Understand: Readability Index Analysis of Online Patient Information on Lymphoma from NCI-Designated Cancer Center." Blood 128, no. 22 (December 2, 2016): 3567. http://dx.doi.org/10.1182/blood.v128.22.3567.3567.

Abstract:
Objectives - In the current era of information technology, there is an abundance of medical information available to patients and their families. The National Institutes of Health (NIH), the American Medical Association, and the Department of Health & Human Services recommend that online patient information (OPI) be written at no greater than a sixth-grade level. Our aim was to assess whether OPI on lymphoma from NCI-Designated Cancer Center (NCIDCC) and various cancer association websites meets the current recommendations on a panel of readability indexes. Methods - OPI from the patient-only sections of NCIDCC and cancer association websites was collected. This text was analyzed with 7 commonly used readability tests: Flesch Reading Ease score (FRE), Gunning Fog (GF), Flesch-Kincaid Grade Level (FKGL), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG) Index, Automated Readability Index (ARI), and Linsear Write Formula (LWF). Text from each article was pasted into Microsoft Word and analyzed using the online software Readability Formulas. Results - The mean FRE score was 55.4 (range 37.4-67.9), which corresponded to a difficult reading level. The mean GF score was 12.9 (range 10.8-15.2), falling between difficult and hard. The mean FKGL score was 9.9 (range 7.2-11.9), corresponding to above a ninth-grade level. The mean CLI score was 11.09 (range 9-12), representing twelfth-grade text. The mean SMOG index was 9.6 (range 8.1-11.1), corresponding to greater than a seventh-grade level. The mean ARI score was 10.5 (range 7.7-12.5), representing readability suitable for readers above the tenth grade, and the mean LWF was 11.29 (range 7.8-14.4), corresponding to tenth-grade text. Conclusion - The currently available OPI on lymphoma did not meet the national recommendations on seven different validated readability indexes. The currently available literature is difficult for the average patient and their family to understand and comprehend. There is a dire need to revise the currently available material for easy comprehension and understanding by the general patient population. Disclosures: No relevant conflicts of interest to declare.
8

Munsour, Emad Eldin, Ahmed Awaisu, Mohamed Azmi Ahmad Hassali, Sara Darwish, and Einas Abdoun. "Readability and Comprehensibility of Patient Information Leaflets for Antidiabetic Medications in Qatar." Journal of Pharmacy Technology 33, no. 4 (April 28, 2017): 128–36. http://dx.doi.org/10.1177/8755122517706978.

Abstract:
Background: The readability and comprehensibility of the patient information leaflets (PILs) provided with antidiabetic medications are of questionable standards; this issue negatively affects adherence to drug therapy, especially in patients with limited literacy skills. Objective: To evaluate the readability and comprehensibility of PILs supplied with medications used for the treatment of type 2 diabetes mellitus in Qatar. Methods: All PILs of the antidiabetic medications in Qatar were evaluated using the Flesch Reading Ease (FRE) score for readability. The Flesch-Kincaid Grade Level, Gunning-Fog Index, and SMOG Grading were used to estimate the comprehensibility of PILs in terms of school grade levels. Results: A total of 45 PILs were evaluated: 32 (71.1%) PILs of brand-name products and 13 (28.9%) for generics. Nine (20%) of the PILs were in English only; 8 (17.8%) were in English, Arabic, and French; and 28 (62.2%) were in English and Arabic. The mean FRE score was 37.71 (±15.85), and the most readable PIL had an FRE score of 62. The mean scores for the comprehensibility evaluations were 10.96 (±2.67), 15.02 (±2.52), and 11.41 (±1.6) for the Flesch-Kincaid Grade Level, Gunning-Fog Index, and SMOG Grading, respectively. The most commonly used antidiabetic medication was metformin, whose PILs contained a mean of 1372.9 (±552.9) words. Conclusion: Only 2.2% of PILs had acceptable readability scores. All PILs required at least an 11th-grade reading level for comprehension, which exceeds the recommended grade level for health-related materials. Approximately 20% of these PILs were in English only and were not readable by most patients.
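
The Flesch Reading Ease score that anchors these results is itself a one-line formula. A small sketch with illustrative counts (not the leaflets' actual statistics); long sentences and many syllables per word push the score into the 'difficult' band reported above:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher = easier. Roughly, 60-70 reads as plain
    English, while scores in the 30s-40s fall in the 'difficult' band."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Illustrative leaflet-like counts: prints roughly 41, i.e. 'difficult'.
print(round(flesch_reading_ease(words=300, sentences=14, syllables=510), 1))
```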
9

Marshall, Rebecca, Eoghan Pomeroy, Catriona McKendry, Michael Gilmartin, Paula McQuail, and Mark Johnson. "Anaesthesia for total hip and knee replacement: A review of patient education materials available online." F1000Research 8 (April 9, 2019): 416. http://dx.doi.org/10.12688/f1000research.18675.1.

Abstract:
Background: Patients frequently consult the internet for health information. Our aim was to perform an Internet-based readability and quality control study using recognised quality scoring systems to assess the patient information available online relating to anaesthesia for total hip and knee replacement surgery. Methods: Online patient information relating to anaesthesia for total hip and knee replacement was identified using Google, Bing and Yahoo with the search terms ‘hip replacement anaesthetic’ and ‘knee replacement anaesthetic’. Readability was assessed using the Flesch Reading Ease (FRE), Flesch-Kincaid grade level (FKGL) and Gunning Fog Index (GFI). Quality was assessed using the DISCERN instrument, the Health On the Net Foundation seal, and the Information Standard mark. Results: 32 websites were analysed; 25% were HONcode certified and 15.6% had the Information Standard. Mean FRE was 55.2±12.8. Mean FKGL was 8.6±1.9. Six websites (18.8%) had the recommended 6th-grade readability level. A mean of 10.4±2.6 years of formal education was required to read the websites. Websites with the Information Standard were easier to read: FKGL (6.2 vs. 9, P < 0.001), GFI (8.8 vs. 10.7, P = 0.04), FRE score (64.2 vs. 9, P = 0.02). The mean DISCERN score was low: 40.3 ± 13. Conclusions: Overall, most websites were of poor quality, with reading levels too high for the target audience. The Information Standard NHS quality mark was associated with improved readability; however, neither it nor HONcode showed a statistically significant correlation with quality. Based on this study, we would encourage healthcare professionals to be judicious in the websites they recommend to patients, and to consider both the readability and quality of the information provided.
10

Ye, Michael J., Mohamedkazim M. Alwani, Jonathan L. Harper, Lauren M. Van Buren, Elhaam H. Bandali, Elisa A. Illing, Taha Z. Shipchandler, and Jonathan Y. Ting. "Readability of Printed Online Education Materials on Pituitary Tumors: Untangling a Web of Complexity." American Journal of Rhinology & Allergy 34, no. 6 (May 27, 2020): 759–66. http://dx.doi.org/10.1177/1945892420927288.

Abstract:
Background Patients are increasingly turning to the internet for health education. Due to the complex pathophysiology, clinic-diagnostic profile, and management spectrum of pituitary tumors, an evaluation of the readability of printed online education materials (POEMs) regarding this entity is warranted. Objective (1) To apply established readability assessment tools to internet search results on the topic of pituitary tumors. (2) To identify sources of complexity in order to guide the creation of POEMs that are in line with the reading level of the target audience. Methodology: After an independent, neutral internet search for the phrase “pituitary tumor,” the first 100 results were subjected to inclusion criteria matching. Analysis was performed using 5 readability assessment tools including Flesch Reading Ease (FRE), Flesch–Kincaid Grade Level (FKGL), Gunning-Fog Score (GFS), Coleman–Liau Index (CLI), and Simple Measure of Gobbledygook (SMOG). Results A total of 82 websites met inclusion criteria. All websites were found to be at a higher reading level (P < .01) than the United States Department of Health and Human Services (USDHHS) recommended sixth-grade level. Mean readability scores were as follows: FRE, 38.79; FKGL, 11.27; GFS, 12.83; CLI, 17.31; SMOG, 12.12. Intergroup comparison between FKGL, GFS, CLI, and SMOG showed that CLI was significantly higher (P < .03). No significant differences in readability were noted between academic and other websites. Conclusion There is a significant misalignment between the reading level of patients and the readability of pituitary tumor POEMs. This may lead patients to misconceive their diagnoses, management options, and prognosis.
11

Marques, Vagner Antônio, Lanna Nogueira Pereira, Idamo Favalessa de Aquino, and Viviane da Costa Freitag. "Has it become more readable? Empirical evidence of key matters in independent audit reports." Revista Contabilidade & Finanças 32, no. 87 (December 2021): 444–60. http://dx.doi.org/10.1590/1808-057x202112990.

Abstract:
The aim of this study was to analyze the effect of the adoption of Brazilian Accounting Standard - Auditing Technique 701 (NBC TA 701, in its Portuguese initialism) over the readability of audit reports. The study fills a gap in the literature by obtaining empirical evidence regarding the effect of NBC TA 701 on the readability and comprehensibility of audit reports. The study is important for verifying whether the disclosure of key audit matters (KAMs) improves the ease of reading and understanding audit reports after the adoption of NBC TA 701. Unlike in the previous literature, it was observed that the effect of KAMs has a non-linear, U-shaped relationship, which suggests additional benefits to readability based on a certain quantity of key matters reported. The data from a sample of 240 listed companies on the B3 S.A. – Brasil, Bolsa, Balcão (B3), in the period from 2013 to 2018, were assessed using content analysis, descriptive statistics, difference of means tests, and panel data correlation and regression analyses. The results showed that the adoption of NBC TA 701 significantly affected the Flesch readability index (FRI) of the independent audit reports. They also confirmed that the quantity of KAMs reported increases the FRI in a non-linear way, and that the types of key matters affect readability differently according to their complexity. The results provide evidence that the new audit report improves the level of readability in a non-linear way, thus contributing to the informational content of the audit report used by the various users for decision making.
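
The U-shaped relationship reported here is typically captured by adding a squared term to the regression. A minimal sketch on synthetic data with hypothetical column names (fri, n_kams); the study's actual panel-data estimators and controls are omitted:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year data: readability index and number of KAMs reported.
rng = np.random.default_rng(1)
n_kams = rng.integers(1, 8, size=200)
fri = 50 - 4 * n_kams + 0.6 * n_kams ** 2 + rng.normal(0, 2, size=200)  # built-in U shape
df = pd.DataFrame({"fri": fri, "n_kams": n_kams})

# OLS with a squared term; a negative coefficient on n_kams together with a
# positive coefficient on n_kams**2 is consistent with a U-shaped relationship.
model = smf.ols("fri ~ n_kams + I(n_kams ** 2)", data=df).fit()
print(model.params)
```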
12

Mac, Olivia A., Amy Thayre, Shumei Tan, and Rachael H. Dodd. "Web-Based Health Information Following the Renewal of the Cervical Screening Program in Australia: Evaluation of Readability, Understandability, and Credibility." Journal of Medical Internet Research 22, no. 6 (June 26, 2020): e16701. http://dx.doi.org/10.2196/16701.

Abstract:
Background Three main changes were implemented in the Australian National Cervical Screening Program (NCSP) in December 2017: an increase in the recommended age to start screening, extended screening intervals, and change from the Papanicolaou (Pap) test to primary human papillomavirus screening (cervical screening test). The internet is a readily accessible source of information to explain the reasons for these changes to the public. It is important that web-based health information about changes to national screening programs is accessible and understandable for the general population. Objective This study aimed to evaluate Australian web-based resources that provide information about the changes to the cervical screening program. Methods The term cervical screening was searched in 3 search engines. The first 10 relevant results across the first 3 pages of each search engine were selected. Overall, 2 authors independently evaluated each website for readability (Flesch Reading Ease [FRE], Flesch-Kincaid Grade Level, and Simple Measure of Gobbledygook [SMOG] index), quality of information (Patient Education Materials Assessment Tool [PEMAT] for printable materials), credibility (Journal of the American Medical Association [JAMA] benchmark criteria and presence of Health on the Net Foundation code of conduct [HONcode] certification), website design, and usability with 5 simulation questions to assess the relevance of information. A descriptive analysis was conducted for the readability measures, PEMAT, and the JAMA benchmark criteria. Results Of the 49 websites identified in the search, 15 were eligible for inclusion. The consumer-focused websites were classed as fairly difficult to read (mean FRE score 51.8, SD 13.3). The highest FRE score (easiest to read) was 70.4 (Cancer Council Australia Cervical Screening Consumer Site), and the lowest FRE score (most difficult to read) was 33.0 (NCSP Clinical Guidelines). A total of 9 consumer-focused websites and 4 health care provider–focused websites met the recommended threshold (sixth to eighth grade; SMOG index) for readability. The mean PEMAT understandability scores were 87.7% (SD 6.0%) for consumer-focused websites and 64.9% (SD 13.8%) for health care provider–focused websites. The mean actionability scores were 58.1% (SD 19.1%) for consumer-focused websites and 36.7% (SD 11.0%) for health care provider–focused websites. Moreover, 9 consumer-focused and 3 health care provider–focused websites scored above 70% for understandability, and 2 consumer-focused websites had an actionability score above 70%. A total of 3 websites met all 4 of the JAMA benchmark criteria, and 2 websites displayed the HONcode. Conclusions It is important for women to have access to information that is at an appropriate reading level to better understand the implications of the changes to the cervical screening program. These findings can help health care providers direct their patients toward websites that provide information on cervical screening that is written at accessible reading levels and has high understandability.
13

Marshall, Ashley N., and Kenneth C. Lam. "READABILITY OF THE DISABLEMENT OF THE PHYSICALLY ACTIVE (DPA) SCALE IN PEDIATRIC ATHLETES." Orthopaedic Journal of Sports Medicine 7, no. 3_suppl (March 1, 2019): 2325967119S0010. http://dx.doi.org/10.1177/2325967119s00108.

Abstract:
Background: The assessment of patient outcomes in pediatric (ie, youth and adolescent) athletes is critical for comprehensive and whole-person healthcare. The Disablement of the Physically Active scale (DPA) is a relatively new generic patient-reported outcome measure (PROM) that was designed specifically for athletic and highly functional patient populations. While the DPA has been used to evaluate health-related quality of life (HRQOL) in adults, little is known about its use in pediatric athletes. The selection of PROMs for pediatric athletes presents unique challenges, particularly regarding the ability of these athletes to understand the instruments they are completing. Therefore, the purpose of this study was to examine the readability of the DPA in pediatric athletes through (1) participant-based and (2) computer-based analyses. Methods: Participant-based analysis was utilized to conduct a preliminary investigation into the subjective readability (ie, a participant’s perceived ability to successfully read the material) of the DPA. Participants were youth athletes (n=13, age=8.7±1.3 years) recruited from a local community athletics league. An investigator administered the San Diego Quick Assessment of Reading Ability to determine each participant’s current reading grade level. Participants were then instructed to read each item of the DPA and circle any words that they did not understand. Frequency counts and percentages were determined for each word identified by participants, within each item of the DPA. Computer-based analysis was utilized to assess the objective readability of the DPA. The Flesch Reading Ease (FRE), Flesch-Kincaid Reading Level (FK), and Gunning Fog Index (FOG) scores were calculated for each item of the DPA. FRE scores range from 0 to 100, with lower scores indicating more difficult reading material. The FRE score is converted to an approximate reading level (FK), ranging from pre-primer (<0) to college (>12), with a higher reading level indicating more difficult reading material. Similar to the FK formula, the FOG formula computes an approximate reading grade level associated with the U.S. education system. Summary statistics (mean ± standard deviation, median, and range) were used to report scores for each DPA item. We also reported the number (%) of items that exceeded the 5th–6th grade reading level, which is the maximum recommended threshold for pediatric patients. Results: The average reading grade level of the participants was 3±1.4. With regard to subjective readability, participants did not understand an average of 22.1% (48.3/219 words) of the entire DPA scale, with a range across items of 3.1% (Pain = 0.15/5 words) to 34.9% (Overall Fitness = 5.2/15 words). There were 40 instances where greater than 50% of the participants did not recognize a word, and seven words throughout the scale that 100% (13/13) of the participants did not understand: endurance, stability, pivoting, coordination, cardiovascular, endurance and colleagues. For objective readability, FRE scores ranged from 5.8 (very confusing) to 119.7 (very easy) across items. The mean and median across all items were 42.4±33.4 (difficult) and 41.4 (difficult), respectively. The FK reading level ranged from -2.8 (pre-primer) to 13.8 (college), with a mean score of 8.8±4.8 and a median score of 8.9. The FOG reading level ranged from 1 (1st grade) to 13.8 (college), with a mean score of 9.3±3.8 and a median score of 10.5. When considering both the FK and FOG scores, 81.8% (9/11) of the items exceeded the 5th–6th grade reading level threshold. Conclusions/Significance: These findings indicate that the overall readability of the DPA may not be appropriate for pediatric athletes. Thus, findings using the DPA in pediatric athletes for clinical or research purposes should be interpreted with caution. Future research is warranted to develop a pediatric version of the DPA, utilizing the results of this study for guidance, as no other generic PROM currently exists for assessing HRQOL specifically in youth and adolescent athletes.
14

Holzer, Lukas. "Assessing the Quality and Content of Accessible Information on the Internet Regarding Hallux Rigidus." Foot & Ankle Orthopaedics 2, no. 3 (September 1, 2017): 2473011417S0001. http://dx.doi.org/10.1177/2473011417s000195.

Abstract:
Category: Midfoot/Forefoot Introduction/Purpose: The internet has grown into one of the biggest resources of easily accessible health information for the general public. Hallux rigidus (HR) is the most common arthritic disease pattern of the foot. There is no mutual consent on the ideal operative intervention for this disease. State of knowledge is essential for a decision to proceed with an HR-specific treatment, since this decision is ultimately a shared one between the patient and the physician. Therefore, many patients comb through the internet for medical information related to this specific topic. The purpose of this study was to assess the quality and integrity of the online available informative content about HR using acknowledged scoring instruments, proven quality markers and a newly designed HR-specific score. Methods: In December 2016 the search term “hallux rigidus” was used to search Google (Google Inc., Mountain View, CA, USA) by three independent reviewers (two experienced orthopedic surgeons and one trained medical student). The content of the first 50 hits was analyzed with specific scores, including the DISCERN score, the Journal of the American Medical Association (JAMA) benchmark criteria and the Health On the Net code (HONcode) as a seal of quality for providing complete and transparent health-related information. The Flesch Reading Ease (FRE), the Flesch-Kincaid Grade Level (FKGL), and the Gunning Fog Index (GFI) were also recorded as indicators of readability. Websites were classified as follows: academic, commercial, government or non-profit organization (NPO), physician or group, and unspecified. An HR-specific content score was developed, consisting of 19 specified items to evaluate the overall aspects of the procedure, management and potential complications. Results: Of the 50 screened websites, 37 (74%) were included for analysis. Of these 37 websites, 20 were from a physician or group, 8 were commercial, 6 were academic, 3 were governmental or NPO and 1 was unspecified. The mean DISCERN score of the websites was 46±10. The highest score was 65 points, whereas the lowest was 21. HONcode certification was present in six websites (16%) and the HON seal in two websites (5%). The FRE, FKGL and GFI were 43.2±14, 8.1±2, and 6.7±2, respectively. The HR-specific content score was 7±3. Conclusion: Quality and content of accessible information on the internet on HR were analyzed by three reviewers using various objective scores. 37 websites were included in the analysis. The majority of websites provided poor information on the management of HR. Patients need to be aware of this fact. In the future, measures need to be taken to improve the quality and content of HR-related websites.
15

Varghese, Jeffrey Alex, Anooj A. Patel, Chitang Joshi, Brendan Alleyne, and Robert D. Galiano. "Which Resources Are Better: Sales or Scholarly? An Assessment on the Readability, Quality, and Technical Features of Online Chemical Peel Websites." Aesthetic Surgery Journal Open Forum 3, no. 1 (January 1, 2021). http://dx.doi.org/10.1093/asjof/ojab008.

Abstract:
Background Chemical peels are an exceedingly popular cosmetic treatment with a wide variety of suppliers, each with its own online health resource describing the procedure. With increasing reliance on the internet for medical information, it is crucial that these resources provide reliable information for patients to make informed decisions. Objectives The aim of this study was to examine popular chemical peel resources and determine if those that offered chemical peel treatments (Sales) had lower readability, quality of information, and technical features compared with those that did not (Scholarly). Methods The term “chemical peel” was searched in July 2020 and the top 50 websites were retrieved for analysis. Each resource’s readability, quality, and technical features were measured through 8 readability formulas, the DISCERN and Health on the Net Code (HONcode), and 2 website performance monitors. Results The 50 websites were analyzed with an average Fry readability score of 13th grade. Scholarly websites displayed higher readability than Sales (Flesch Reading Ease 54.4 > 47.4, P = 0.047 and Coleman-Liau Index 10.6 < 11.7, P = 0.04). Scholarly resources surpassed Sales both in quality (DISCERN 56.4 > 39.7, P < 0.001 and HONcode 11.8 > 9.5, P = 0.032) and technical features (WooRank 76.9 > 68.6, P = 0.0082). Conclusions The average readability of chemical peel resources is too difficult, and their quality must be improved. Scholarly resources exhibited higher readability, quality, and technical features than Sales websites.
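
The Scholarly-versus-Sales comparisons reported here are two-group tests; a minimal sketch of one such comparison using Welch's t-test on hypothetical Flesch Reading Ease scores (not the study's data):

```python
from scipy import stats

# Illustrative Flesch Reading Ease scores for two groups of websites.
scholarly = [58.1, 52.3, 60.0, 49.7, 55.4, 57.2, 51.8]
sales     = [45.0, 49.2, 43.8, 50.1, 46.5, 48.9, 44.3]

# Welch's t-test (no equal-variance assumption); higher FRE = easier to read.
t, p = stats.ttest_ind(scholarly, sales, equal_var=False)
print(f"t={t:.2f}, p={p:.4f}")
```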