Journal articles on the topic 'AI health care planning'

Consult the top 50 journal articles for your research on the topic 'AI health care planning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Nelson, Scott D., Colin G. Walsh, Casey A. Olsen, Andrew J. McLaughlin, Joseph R. LeGrand, Nick Schutz, and Thomas A. Lasko. "Demystifying artificial intelligence in pharmacy." American Journal of Health-System Pharmacy 77, no. 19 (July 4, 2020): 1556–70. http://dx.doi.org/10.1093/ajhp/zxaa218.

Abstract:
Purpose: To provide pharmacists and other clinicians with a basic understanding of the underlying principles and practical applications of artificial intelligence (AI) in the medication-use process. Summary: “Artificial intelligence” is a general term used to describe the theory and development of computer systems to perform tasks that normally would require human cognition, such as perception, language understanding, reasoning, learning, planning, and problem solving. Following the fundamental theorem of informatics, a better term for AI would be “augmented intelligence,” or leveraging the strengths of computers and the strengths of clinicians together to obtain improved outcomes for patients. Understanding the vocabulary of and methods used in AI will help clinicians communicate productively with data scientists and collaborate on developing models that augment patient care. This primer includes discussion of approaches to identifying problems in practice that could benefit from application of AI and those that would not, as well as methods of training, validating, implementing, evaluating, and maintaining AI models. Some key limitations of AI related to the medication-use process are also discussed. Conclusion: As medication-use domain experts, pharmacists play a key role in developing and evaluating AI in healthcare. An understanding of the core concepts of AI is necessary to engage in collaboration with data scientists and to critically evaluate its place in patient care, especially as clinical practice continues to evolve and develop.
2

Schwendicke, F., W. Samek, and J. Krois. "Artificial Intelligence in Dentistry: Chances and Challenges." Journal of Dental Research 99, no. 7 (April 21, 2020): 769–74. http://dx.doi.org/10.1177/0022034520915714.

Abstract:
The term “artificial intelligence” (AI) refers to the idea of machines being capable of performing human tasks. A subdomain of AI is machine learning (ML), which “learns” intrinsic statistical patterns in data to eventually cast predictions on unseen data. Deep learning is an ML technique using multi-layer mathematical operations for learning and inferring on complex data like imagery. This succinct narrative review describes the application, limitations and possible future of AI-based dental diagnostics, treatment planning, and conduct, for example, image analysis, prediction making, record keeping, as well as dental research and discovery. AI-based applications will streamline care, relieving the dental workforce from laborious routine tasks, increasing health at lower costs for a broader population, and eventually facilitating personalized, predictive, preventive, and participatory dentistry. However, AI solutions have not, by and large, entered routine dental practice, mainly due to (1) limited data availability, accessibility, structure, and comprehensiveness; (2) a lack of methodological rigor and standards in their development; and (3) practical questions around the value and usefulness of these solutions, as well as ethics and responsibility. Any AI application in dentistry should demonstrate tangible value by, for example, improving access to and quality of care, increasing efficiency and safety of services, empowering and enabling patients, supporting medical research, or increasing sustainability. Individual privacy, rights, and autonomy need to be put front and center; a shift from centralized to distributed/federated learning may address this while improving scalability and robustness. Lastly, trust in, and the generalizability of, dental AI solutions need to be guaranteed; the implementation of continuous human oversight and standards grounded in evidence-based dentistry should be expected. Methods to visualize, interpret, and explain the logic behind AI solutions will contribute to this (“explainable AI”). Dental education will need to accompany the introduction of clinical AI solutions by fostering digital literacy in the future dental workforce.
3

Fieldhouse, Jon, Vanessa Parmenter, and Alice Hortop. "Vocational rehabilitation in mental health services: evaluating the work of a social and therapeutic horticulture community interest company." Mental Health and Social Inclusion 18, no. 3 (August 5, 2014): 155–63. http://dx.doi.org/10.1108/mhsi-01-2014-0002.

Abstract:
Purpose – The purpose of this paper is to report on an action inquiry (AI) evaluation of the Natureways project, a time-limited collaboration between an NHS Trust Vocational Service and a voluntary sector horticulture-based community interest company (CIC). Design/methodology/approach – Natureways produced positive employment outcomes and an AI process – based on co-operative inquiry with trainees, staff, and managers – explored how these had been achieved. Findings – Natureways’ efficacy was based on features of the setting (its supportiveness, rural location, and workplace authenticity), on its embeddedness (within local care-planning pathways, the horticultural industry, and the local community), and on effective intersectoral working. The inquiry also generated actionable learning about creative leadership and adaptability in the changing landscape of service provision, about the benefits of the CIC's small scale and business ethos, about the links between trainees’ employability, social inclusion and recovery, about horticulture as a training medium, and about the role of AI in service development. Practical implications – The inquiry highlights how an intersectoral CIC can be an effective model for vocational rehabilitation. Social implications – Community-embeddedness is an asset for mental health-orientated CICs, facilitating social inclusion and recovery. Social and therapeutic horticulture settings are seen to be conducive to this. Originality/value – This case study suggests that AI methodology is not only well-suited to many practitioners’ skill sets, but its participatory ethos and focus on experiential knowledge make it suitable for bringing a service user voice to bear on service development.
4

Calders, Toon, Eirini Ntoutsi, Mykola Pechenizkiy, Bodo Rosenhahn, and Salvatore Ruggieri. "Introduction to The Special Section on Bias and Fairness in AI." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 1–3. http://dx.doi.org/10.1145/3468507.3468509.

Abstract:
Fairness in Artificial Intelligence rightfully receives a lot of attention these days. Many life-impacting decisions are being partially automated, including health-care resource planning decisions, insurance and credit risk predictions, recidivism predictions, etc. Much of the work appearing on this topic within the Data Mining, Machine Learning and Artificial Intelligence community is focused on technological aspects. Nevertheless, fairness is much wider than this as it lies at the intersection of philosophy, ethics, legislation, and practical perspectives. Therefore, to fill this gap and bring together scholars of these disciplines working on fairness, the first workshop on Bias and Fairness in AI was held online on September 18, 2020 at the ECML-PKDD 2020 conference. This special section includes six articles presenting different perspectives on bias and fairness from different angles.
5

Blanes-Selva, Vicent, Ascensión Doñate-Martínez, Gordon Linklater, Jorge Garcés-Ferrer, and Juan M. García-Gómez. "Responsive and Minimalist App Based on Explainable AI to Assess Palliative Care Needs during Bedside Consultations on Older Patients." Sustainability 13, no. 17 (September 2, 2021): 9844. http://dx.doi.org/10.3390/su13179844.

Abstract:
Palliative care is an alternative to standard care for gravely ill patients that has demonstrated many clinical benefits in cost-effective interventions. It is expected to grow in demand soon, so it is necessary to detect those patients who may benefit from these programs using a personalised objective criterion at the correct time. Our goal was to develop a responsive and minimalist web application embedding a 1-year mortality explainable predictive model to assess palliative care needs during bedside consultations. A 1-year mortality predictive model was trained. We ranked the input variables and evaluated models with an increasing number of variables, then selected the model with the seven most relevant variables. Finally, we created a responsive, minimalist and explainable app to support bedside decision making for palliative care in older patients. The selected variables are age, medication, Charlson, Barthel, urea, RDW-SD and metastatic tumour. The predictive model achieved an AUC ROC of 0.83 [CI: 0.82, 0.84]. A Shapley value graph was used for explainability. The app identifies patients in need of palliative care using the bad-prognosis criterion and can be a useful, easy and quick tool to help healthcare professionals obtain a fast recommendation and allocate health resources efficiently.
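As a loose illustration of the workflow this abstract outlines (train a 1-year mortality classifier on a handful of routine variables, check discrimination with the AUC ROC, and explain predictions with Shapley values), here is a minimal Python sketch. The feature names echo the abstract, but the synthetic data, the gradient-boosting model, and the use of scikit-learn and the shap package are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: a 7-variable 1-year mortality classifier with AUC and Shapley-value explanation.
# Synthetic data and gradient boosting are stand-ins; the paper's actual model and cohort differ.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import shap  # assumed available; used only to rank per-feature contributions

rng = np.random.default_rng(0)
features = ["age", "medication_count", "charlson", "barthel", "urea", "rdw_sd", "metastatic_tumour"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
y = (X["age"] + X["charlson"] - X["barthel"] + rng.normal(size=500) > 0).astype(int)  # toy outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC ROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Shapley values give the per-feature contribution behind each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=features).sort_values(ascending=False))
```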
6

Comiford, Ashley L., Dorothy A. Rhoades, Justin D. Dvorak, Kai Ding, Toral Mehta, Paul Spicer, Theodore Wagener, and Mark P. Doescher. "Use of Potentially Reduced Exposure Tobacco Products Among American Indian Smokeless Tobacco Users: Associations With Cessation Behaviors and Cotinine Levels." Public Health Reports 135, no. 1 (December 13, 2019): 141–49. http://dx.doi.org/10.1177/0033354919893031.

Abstract:
Objectives: American Indian/Alaska Native (AI/AN) adults use smokeless tobacco products (eg, chewing and dip tobacco) more often than other racial/ethnic groups do. Although US adults increasingly use potentially reduced exposure tobacco products (PREPs), such as electronic cigarettes and snus, no studies have examined the use of PREPs among AI/AN smokeless tobacco users. We examined associations between current PREPs use and smokeless tobacco–related measures, including cessation attempts and cotinine levels, in a sample of American Indian adults who currently use smokeless tobacco. Methods: We collected survey and tobacco biomarker data from 299 adult American Indian smokeless tobacco users at Cherokee Nation health care facilities and events in 2016 and 2017. We used multivariable analyses to determine associations between current PREPs use and smokeless tobacco–related characteristics. Results: Current PREPs users were younger, less likely to be married or living with a partner, less likely to report a chronic medical condition, and more likely to report other tobacco use than PREPs nonusers. Among participants with annual household incomes ≤$30 000, current PREPs users were less likely than PREPs nonusers to report a definite desire to quit smokeless tobacco ( P = .02). PREPs use was not associated with planning to quit smokeless tobacco, past 12-month smokeless tobacco quit attempts, amount of smokeless tobacco used per week, cotinine levels, or scores on the Fagerström Test for Nicotine Dependence–Smokeless Tobacco. Conclusions: Our study suggests that American Indian smokeless tobacco users may not be using PREPs as a smokeless tobacco cessation aid. Future studies should take this finding into consideration when evaluating the role of PREPs use in smokeless tobacco cessation and in total tobacco cessation in this population.
7

Adikari, Achini, Rashmika Nawaratne, Daswin De Silva, Sajani Ranasinghe, Oshadi Alahakoon, and Damminda Alahakoon. "Emotions of COVID-19: Content Analysis of Self-Reported Information Using Artificial Intelligence." Journal of Medical Internet Research 23, no. 4 (April 30, 2021): e27341. http://dx.doi.org/10.2196/27341.

Abstract:
Background The COVID-19 pandemic has disrupted human societies around the world. This public health emergency was followed by a significant loss of human life; the ensuing social restrictions led to loss of employment, lack of interactions, and burgeoning psychological distress. As physical distancing regulations were introduced to manage outbreaks, individuals, groups, and communities engaged extensively on social media to express their thoughts and emotions. This internet-mediated communication of self-reported information encapsulates the emotional health and mental well-being of all individuals impacted by the pandemic. Objective This research aims to investigate the human emotions related to the COVID-19 pandemic expressed on social media over time, using an artificial intelligence (AI) framework. Methods Our study explores emotion classifications, intensities, transitions, and profiles, as well as alignment to key themes and topics, across the four stages of the pandemic: declaration of a global health crisis (ie, prepandemic), the first lockdown, easing of restrictions, and the second lockdown. This study employs an AI framework comprised of natural language processing, word embeddings, Markov models, and the growing self-organizing map algorithm, which are collectively used to investigate social media conversations. The investigation was carried out using 73,000 public Twitter conversations posted by users in Australia from January to September 2020. Results The outcomes of this study enabled us to analyze and visualize different emotions and related concerns that were expressed and reflected on social media during the COVID-19 pandemic, which could be used to gain insights into citizens’ mental health. First, the topic analysis showed the diverse as well as common concerns people had expressed during the four stages of the pandemic. It was noted that personal-level concerns expressed on social media had escalated to broader concerns over time. Second, the emotion intensity and emotion state transitions showed that fear and sadness emotions were more prominently expressed at first; however, emotions transitioned into anger and disgust over time. Negative emotions, except for sadness, were significantly higher (P<.05) in the second lockdown, showing increased frustration. Temporal emotion analysis was conducted by modeling the emotion state changes across the four stages of the pandemic, which demonstrated how different emotions emerged and shifted over time. Third, the concerns expressed by social media users were categorized into profiles, where differences could be seen between the first and second lockdown profiles. Conclusions This study showed that the diverse emotions and concerns that were expressed and recorded on social media during the COVID-19 pandemic reflected the mental health of the general public. While this study established the use of social media to discover informed insights during a time when physical communication was impossible, the outcomes could also contribute toward postpandemic recovery and understanding psychological impact via emotion changes, and they could potentially inform health care decision making. This study exploited AI and social media to enhance our understanding of human behaviors in global emergencies, which could lead to improved planning and policy making for future crises.
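Among the components this abstract names is a Markov model of emotion state transitions. The short sketch below shows, under assumed toy data, how a transition matrix can be estimated from a time-ordered sequence of emotion labels; the labels and sequence are invented and stand in for the study's NLP-derived Twitter annotations.

```python
# Hedged sketch: estimate a Markov transition matrix from a time-ordered sequence of emotion labels.
# The label set and sequence are illustrative; the study derived emotions from Twitter posts with NLP.
import numpy as np

emotions = ["fear", "sadness", "anger", "disgust", "joy"]
sequence = ["fear", "fear", "sadness", "sadness", "anger", "disgust", "anger", "anger", "sadness", "joy"]

index = {e: i for i, e in enumerate(emotions)}
counts = np.zeros((len(emotions), len(emotions)))
for current, nxt in zip(sequence, sequence[1:]):
    counts[index[current], index[nxt]] += 1

# Row-normalise to get P(next emotion | current emotion); rows with no observations stay zero.
row_sums = counts.sum(axis=1, keepdims=True)
transition = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(np.round(transition, 2))
```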
8

Vitianingsih, Anik Vega, Achmad Choiron, Dwi Cahyono, and Suyanto Suyanto. "Representation of Spatial Data Modeling Results Measles Diseases, Case Study in East Java Province." Record and Library Journal 6, no. 1 (April 13, 2020): 14. http://dx.doi.org/10.20473/rlj.v6-i1.2020.14-23.

Abstract:
Background of the study: Measles is a major cause of child death, often resulting from incomplete immunization during infancy. Purpose: This paper describes the results of spatial data modeling of measles and reports the percentage distribution of measles-prone areas in each district based on immunization-status coverage, classified into good, average, fair and poor categories. Method: The work comprised gathering data from the East Java health profile books for 2011-2016 for the measles attributes; a literature study to define the parameter requirements based on immunization-status coverage (infant immunization status, PD3I, epidemic, and nutritional status); and selection of artificial intelligence (AI) methods suited to the data behavior for the spatial data modeling process, formulating alternative preference values with a multi-criteria decision-making approach (multiple-attribute decision making/MADM) using the Simple Additive Weighting (SAW) method. Findings: The alternative preference value Vi produced by the SAW method in the spatial data modeling process can be used as a mathematical model for data series with the same behavior. Conclusion: The resulting spatial and attribute data representation can serve as a reference for planning the development of health care centers in areas with poor immunization-status categories.
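The abstract refers to an alternative preference value Vi computed with the Simple Additive Weighting (SAW) method over multiple immunization-coverage criteria. Below is a minimal, generic SAW sketch; the criteria, weights, and district values are hypothetical and are not taken from the paper.

```python
# Hedged sketch of Simple Additive Weighting (SAW) for multi-criteria scoring of districts.
# Benefit criteria are normalised by column max; cost criteria by column min / value.
import numpy as np

criteria = ["infant_immunization", "pd3i_cases", "epidemic_events", "poor_nutrition"]
is_benefit = np.array([True, False, False, False])   # higher immunization is better; the rest are costs
weights = np.array([0.4, 0.2, 0.2, 0.2])             # hypothetical weights, chosen to sum to 1

# Rows = districts, columns = criteria (toy numbers).
X = np.array([[90.0, 5.0, 1.0, 10.0],
              [70.0, 12.0, 3.0, 25.0],
              [55.0, 20.0, 4.0, 30.0]])

R = np.where(is_benefit, X / X.max(axis=0), X.min(axis=0) / X)  # normalised decision matrix r_ij
V = (weights * R).sum(axis=1)                                    # preference value V_i = sum_j w_j * r_ij
print(np.round(V, 3))  # higher V_i indicates better immunization-coverage status for that district
```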
9

Kerr, Bradley, David Stephens, Daniel Pham, Thomas Ghost Dog, Celena McCray, Colbie Caughlan, Amanda Gaston, et al. "Assessing the Usability, Appeal, and Impact of a Web-Based Training for Adults Responding to Concerning Posts on Social Media: Pilot Suicide Prevention Study." JMIR Mental Health 7, no. 1 (January 20, 2020): e14949. http://dx.doi.org/10.2196/14949.

Abstract:
Background Suicide prevention remains challenging among youth, as many do not disclose suicidal ideation. Nearly one-third of American Indian and Alaska Native (AI and AN, tribal, or native) youth see concerning messages on social media at least weekly. Objective To prepare adults to support AI and AN youth who post or view concerning messages, our team designed an hour-long training: Responding to Concerning Posts on Social Media. This study tested the usability, appeal, and impact of the training. Methods A purposive sample of 70 adults was recruited to participate in the pilot, which included 2 study arms. Arm 1 participants completed a 30-min training video and reviewed accompanying handouts, including the Viewer Care Plan (VCP). The VCP provided a 3-step planning and response tool: (1) Start the Conversation, (2) Listen, Gather Information, and Assess Viewer Experience, and (3) Plan and Act. The intent of the VCP was to support and connect AI and AN youth who either view or post concerning messages on social media to life-saving resources. Those enrolled in arm 2 participated in an additional interactive role-play scenario with a coach that took place after the training, via text message. Participants provided qualitative and quantitative feedback on the training’s relevance, appeal, and utility. Paired t tests were used to assess confidence in addressing concerning posts between pre- and postsurveys. Content analysis of the role-play transcripts was used to assess the quality and completion of the coached role-plays, in relation to the recommended VCP. Results Altogether, 35 participants finished the training and completed pre- and postsurveys; 22 participants completed the 6-month follow-up survey. Pre-post analyses of differences in means found significant improvement across several efficacy measures, including confidence starting a conversation about social media (P=.003), confidence contacting the person who posted something concerning (P<.001), and confidence recommending support services to youth who view (P=.001) or youth who post concerning messages (P<.001). Similarly, pre- to 6-month analyses found significant positive improvement across multiple measures, including confidence contacting the youth who posted (P<.001), confidence starting a conversation about social media with youth (P=.003), and an increase in the number of experiences recommending resources for youth who viewed concerning social media posts (P=.02). Of the 3 steps of the VCP, the least followed step in coached role-plays was sharing tools and resources, which is a part of the third Plan and Act step. Conclusions Findings indicate that the Responding to Concerning Posts on Social Media training is a promising tool to prepare adults to intervene and complete the VCP. Additional evaluation with a larger cohort of participants is needed to determine the unique impact of the role-play scenario and changes in mental health referral rates, behaviors, and skills.
10

Sumit. "AI Health Care Chatbot." International Journal for Modern Trends in Science and Technology 6, no. 12 (December 13, 2020): 219–24. http://dx.doi.org/10.46501/ijmtst061241.

Abstract:
A healthcare bot is a technology that makes interaction between humans and machines possible by using artificial intelligence with the support of dialog flow. Nowadays, people tend to seek health-related knowledge and information from the internet through online healthcare services. Healthcare is essential to leading a good life, yet it can be difficult to obtain a consultation with a doctor for every health issue. The basic aim of this system is to bridge the vocabulary gap between patients and doctors by offering self-diagnosis from the comfort of one's home. The proposed idea is to create a medical chatbot using artificial intelligence that can diagnose a disease and provide basic details about it before the patient consults a doctor. The medical bot is built to reduce healthcare costs and improve access to medical knowledge. Certain bots act as medical reference books, which help patients learn more about their disease and improve their health. Users gain the real benefit of a bot only when it can diagnose all kinds of diseases and provide the necessary information. People will then have an idea about their health and the right protection.
11

Guo, Yuqi, Zhichao Hao, Shichong Zhao, Jiaqi Gong, and Fan Yang. "Artificial Intelligence in Health Care: Bibliometric Analysis." Journal of Medical Internet Research 22, no. 7 (July 29, 2020): e18228. http://dx.doi.org/10.2196/18228.

Abstract:
Background As a critical driving power to promote health care, the health care–related artificial intelligence (AI) literature is growing rapidly. Objective The purpose of this analysis is to provide a dynamic and longitudinal bibliometric analysis of health care–related AI publications. Methods The Web of Science (Clarivate PLC) was searched to retrieve all existing and highly cited AI-related health care research papers published in English up to December 2019. Based on bibliometric indicators, a search strategy was developed to screen the title for eligibility, using the abstract and full text where needed. The growth rate of publications, characteristics of research activities, publication patterns, and research hotspot tendencies were computed using the HistCite software. Results The search identified 5235 hits, of which 1473 publications were included in the analyses. Publication output increased an average of 17.02% per year since 1995, but the growth rate of research papers significantly increased to 45.15% from 2014 to 2019. The major health problems studied in AI research are cancer, depression, Alzheimer disease, heart failure, and diabetes. Artificial neural networks, support vector machines, and convolutional neural networks have the highest impact on health care. Nucleosides, convolutional neural networks, and tumor markers have remained research hotspots through 2019. Conclusions This analysis provides a comprehensive overview of the AI-related research conducted in the field of health care, which helps researchers, policy makers, and practitioners better understand the development of health care–related AI research and possible practice implications. Future AI research should be dedicated to filling in the gaps between AI health care research and clinical applications.
12

Alderton, Gemma. "AI in resource-poor health care systems." Science 366, no. 6468 (November 21, 2019): 966.16–968. http://dx.doi.org/10.1126/science.366.6468.966-p.

13

Coeckelbergh, Mark. "Health Care, Capabilities, and AI Assistive Technologies." Ethical Theory and Moral Practice 13, no. 2 (July 17, 2009): 181–90. http://dx.doi.org/10.1007/s10677-009-9186-2.

14

Babic, Boris, Sara Gerke, Theodoros Evgeniou, and I. Glenn Cohen. "Beware explanations from AI in health care." Science 373, no. 6552 (July 15, 2021): 284–86. http://dx.doi.org/10.1126/science.abg1834.

15

Lillie, Kate M., Lisa G. Dirks, J. Randall Curtis, Carey Candrian, Jean S. Kutner, and Jennifer L. Shaw. "Culturally Adapting an Advance Care Planning Communication Intervention With American Indian and Alaska Native People in Primary Care." Journal of Transcultural Nursing 31, no. 2 (July 1, 2019): 178–87. http://dx.doi.org/10.1177/1043659619859055.

Abstract:
Introduction: Advance care planning (ACP) is a process in which patients, families, and providers discuss and plan for desired treatment goals. American Indian and Alaska Native people (AI/AN) have higher prevalence of many serious, life-limiting illnesses compared with the general population; yet AI/ANs use ACP considerably less than the overall population. Method: We conducted a qualitative study to culturally adapt an existing ACP intervention for AI/ANs in two primary care settings. Results: We found that it is important to incorporate patients’ cultural values and priorities into ACP, determine who the patient wants involved in ACP conversations, and consider the culturally and locally relevant barriers and facilitators when developing an ACP intervention with AI/AN communities. Discussion: At the core, ACP interventions should be clear and understandable across populations and tailored to facilitate culturally appropriate and meaningful patient–provider communication. Our results and methodology of culturally adapting an intervention may be applicable to other underrepresented populations.
16

Liyanage, Harshana, Siaw-Teng Liaw, Jitendra Jonnagaddala, Richard Schreiber, Craig Kuziemsky, Amanda L. Terry, and Simon de Lusignan. "Artificial Intelligence in Primary Health Care: Perceptions, Issues, and Challenges." Yearbook of Medical Informatics 28, no. 01 (April 25, 2019): 041–46. http://dx.doi.org/10.1055/s-0039-1677901.

Abstract:
Background: Artificial intelligence (AI) is heralded as an approach that might augment or substitute for the limited processing power of the human brain of primary health care (PHC) professionals. However, there are concerns that AI-mediated decisions may be hard to validate and challenge, or may result in rogue decisions. Objective: To form consensus about perceptions, issues, and challenges of AI in primary care. Method: A three-round Delphi study was conducted. Round 1 explored experts’ viewpoints on AI in PHC (n=20). Round 2 rated the appropriateness of statements arising from round one (n=12). The third round was an online panel discussion of findings (n=8) with the members of both the International Medical Informatics Association and the European Federation of Medical Informatics Primary Health Care Informatics Working Groups. Results: PHC and informatics experts reported AI has potential to improve managerial and clinical decisions and processes, and this would be facilitated by common data standards. The respondents did not agree that AI applications should learn and adapt to clinician preferences or behaviour and they did not agree on the extent of AI potential for harm to patients. It was more difficult to assess the impact of AI-based applications on continuity and coordination of care. Conclusion: While the use of AI in medicine should enhance healthcare delivery, we need to ensure meticulous design and evaluation of AI applications. The primary care informatics community needs to be proactive and to guide the ethical and rigorous development of AI applications so that they will be safe and effective.
17

Barbour, Andrew B., Jennifer M. Frush, Luke A. Gatta, William C. McManigle, Niobra M. Keah, Lorena Bejarano-Pineda, and Evan M. Guerrero. "Artificial Intelligence in Health Care: Insights From an Educational Forum." Journal of Medical Education and Curricular Development 6 (January 2019): 238212051988934. http://dx.doi.org/10.1177/2382120519889348.

Abstract:
Discussions surrounding the future of artificial intelligence (AI) in healthcare often cause consternation among healthcare professionals. These feelings may stem from a lack of formal education on AI and how to be a leader of AI implementation in medical systems. To address this, our academic medical center hosted an educational summit exploring how to become a leader of AI in healthcare. This article presents three lessons learned from hosting this summit, thus providing guidance for developing medical curricula on the topic of AI in healthcare.
18

Reddy, Sandeep, Sonia Allan, Simon Coghlan, and Paul Cooper. "A governance model for the application of AI in health care." Journal of the American Medical Informatics Association 27, no. 3 (November 4, 2019): 491–97. http://dx.doi.org/10.1093/jamia/ocz192.

Abstract:
As the efficacy of artificial intelligence (AI) in improving aspects of healthcare delivery is increasingly becoming evident, it becomes likely that AI will be incorporated in routine clinical care in the near future. This promise has led to growing focus and investment in AI medical applications both from governmental organizations and technological companies. However, concern has been expressed about the ethical and regulatory aspects of the application of AI in health care. These concerns include the possibility of biases, lack of transparency with certain AI algorithms, privacy concerns with the data used for training AI models, and safety and liability issues with AI application in clinical environments. While there has been extensive discussion about the ethics of AI in health care, there has been little dialogue or recommendations as to how to practically address these concerns in health care. In this article, we propose a governance model that aims to not only address the ethical and regulatory issues that arise out of the application of AI in health care, but also stimulate further discussion about governance of AI in health care.
19

Johnson, Sandra L. J. "AI, Machine Learning, and Ethics in Health Care." Journal of Legal Medicine 39, no. 4 (October 2, 2019): 427–41. http://dx.doi.org/10.1080/01947648.2019.1690604.

20

Rogers, E. "AI and the changing face of health care." IEEE Intelligent Systems and their Applications 13, no. 1 (January 1998): 20–25. http://dx.doi.org/10.1109/mis.1998.653220.

21

Rubin, Rita. "Obstacles to Implementing AI Tools in Health Care." JAMA 325, no. 4 (January 26, 2021): 333. http://dx.doi.org/10.1001/jama.2020.26933.

22

Keane, Pearse A., and Eric J. Topol. "AI-facilitated health care requires education of clinicians." Lancet 397, no. 10281 (April 2021): 1254. http://dx.doi.org/10.1016/s0140-6736(21)00722-4.

23

A.Jabbar, M., Shirina Samreen, and Rajanikanth Aluvalu. "The Future of Health care: Machine Learning." International Journal of Engineering & Technology 7, no. 4.6 (September 25, 2018): 23. http://dx.doi.org/10.14419/ijet.v7i4.6.20226.

Abstract:
Machine learning (ML) is a rising field that aims to find patterns in data automatically and to reason about those data. ML enables personalized care, known as precision medicine. Machine learning methods have made advances in the healthcare domain. This paper discusses the application of machine learning in health care. Machine learning will change health care within a few years. In the future, ML and AI will transform health care, but quality ML and AI decision support systems (DSS) will be required to address the problems faced by patients and physicians in effective diagnosis.
24

Naeem, Jacqueline. "AI Predicts Post-Acute Care Need." Journal of the American Medical Directors Association 21, no. 3 (March 2020): B16. http://dx.doi.org/10.1016/j.jamda.2020.01.050.

25

Asan, Onur, and Avishek Choudhury. "Research Trends in Artificial Intelligence Applications in Human Factors Health Care: Mapping Review." JMIR Human Factors 8, no. 2 (June 18, 2021): e28236. http://dx.doi.org/10.2196/28236.

Abstract:
Background Despite advancements in artificial intelligence (AI) to develop prediction and classification models, little research has been devoted to real-world translations with a user-centered design approach. AI development studies in the health care context have often ignored two critical factors of ecological validity and human cognition, creating challenges at the interface with clinicians and the clinical environment. Objective The aim of this literature review was to investigate the contributions made by major human factors communities in health care AI applications. This review also discusses emerging research gaps, and provides future research directions to facilitate a safer and user-centered integration of AI into the clinical workflow. Methods We performed an extensive mapping review to capture all relevant articles published within the last 10 years in the major human factors journals and conference proceedings listed in the “Human Factors and Ergonomics” category of the Scopus Master List. In each published volume, we searched for studies reporting qualitative or quantitative findings in the context of AI in health care. Studies are discussed based on the key principles such as evaluating workload, usability, trust in technology, perception, and user-centered design. Results Forty-eight articles were included in the final review. Most of the studies emphasized user perception, the usability of AI-based devices or technologies, cognitive workload, and user’s trust in AI. The review revealed a nascent but growing body of literature focusing on augmenting health care AI; however, little effort has been made to ensure ecological validity with user-centered design approaches. Moreover, few studies (n=5 against clinical/baseline standards, n=5 against clinicians) compared their AI models against a standard measure. Conclusions Human factors researchers should actively be part of efforts in AI design and implementation, as well as dynamic assessments of AI systems’ effects on interaction, workflow, and patient outcomes. An AI system is part of a greater sociotechnical system. Investigators with human factors and ergonomics expertise are essential when defining the dynamic interaction of AI within each element, process, and result of the work system.
26

Garbuio, Massimo, and Nidthida Lin. "Artificial Intelligence as a Growth Engine for Health Care Startups: Emerging Business Models." California Management Review 61, no. 2 (November 21, 2018): 59–83. http://dx.doi.org/10.1177/0008125618811931.

Abstract:
The future of health care may change dramatically as entrepreneurs offer solutions that change how we prevent, diagnose, and cure health conditions, using artificial intelligence (AI). This article provides a timely and critical analysis of AI-driven health care startups and identifies emerging business model archetypes that entrepreneurs from around the world are using to bring AI solutions to the marketplace. It identifies areas of value creation for the application of AI in health care and proposes an approach to designing business models for AI health care startups.
27

Gao, Shuqing, Lingnan He, Yue Chen, Dan Li, and Kaisheng Lai. "Public Perception of Artificial Intelligence in Medical Care: Content Analysis of Social Media." Journal of Medical Internet Research 22, no. 7 (July 13, 2020): e16649. http://dx.doi.org/10.2196/16649.

Abstract:
Background High-quality medical resources are in high demand worldwide, and the application of artificial intelligence (AI) in medical care may help alleviate the crisis related to this shortage. The development of the medical AI industry depends to a certain extent on whether industry experts have a comprehensive understanding of the public’s views on medical AI. Currently, the opinions of the general public on this matter remain unclear. Objective The purpose of this study is to explore the public perception of AI in medical care through a content analysis of social media data, including specific topics that the public is concerned about; public attitudes toward AI in medical care and the reasons for them; and public opinion on whether AI can replace human doctors. Methods Through an application programming interface, we collected a data set from the Sina Weibo platform comprising more than 16 million users throughout China by crawling all public posts from January to December 2017. Based on this data set, we identified 2315 posts related to AI in medical care and classified them through content analysis. Results Among the 2315 identified posts, we found three types of AI topics discussed on the platform: (1) technology and application (n=987, 42.63%), (2) industry development (n=706, 30.50%), and (3) impact on society (n=622, 26.87%). Out of 956 posts where public attitudes were expressed, 59.4% (n=568), 34.4% (n=329), and 6.2% (n=59) of the posts expressed positive, neutral, and negative attitudes, respectively. The immaturity of AI technology (27/59, 46%) and a distrust of related companies (n=15, 25%) were the two main reasons for the negative attitudes. Across 200 posts that mentioned public attitudes toward replacing human doctors with AI, 47.5% (n=95) and 32.5% (n=65) of the posts expressed that AI would completely or partially replace human doctors, respectively. In comparison, 20.0% (n=40) of the posts expressed that AI would not replace human doctors. Conclusions Our findings indicate that people are most concerned about AI technology and applications. Generally, the majority of people held positive attitudes and believed that AI doctors would completely or partially replace human ones. Compared with previous studies on medical doctors, the general public has a more positive attitude toward medical AI. Lack of trust in AI and the absence of the humanistic care factor are essential reasons why some people still have a negative attitude toward medical AI. We suggest that practitioners may need to pay more attention to promoting the credibility of technology companies and meeting patients’ emotional needs instead of focusing merely on technical issues.
28

Goldfarb, Avi, Bledi Taska, and Florenta Teodoridis. "Artificial Intelligence in Health Care? Evidence from Online Job Postings." AEA Papers and Proceedings 110 (May 1, 2020): 400–404. http://dx.doi.org/10.1257/pandp.20201006.

Abstract:
This paper documents a puzzle. Despite the numerous popular press discussions of artificial intelligence (AI) in health care, there has been relatively little adoption. Using data from Burning Glass Technologies on millions of online job postings, we find that AI adoption in health care remains substantially lower than in most other industries and that under 3 percent of the hospitals in our data posted any jobs requiring AI skills from 2015-2018. The low adoption rates mean any statistical analysis is limited. Nevertheless, the adoption we do observe shows that larger hospitals, larger counties, and integrated salary model hospitals are more likely to adopt.
29

Morley, Jessica, and Luciano Floridi. "An ethically mindful approach to AI for health care." Lancet 395, no. 10220 (January 2020): 254–55. http://dx.doi.org/10.1016/s0140-6736(19)32975-7.

30

Hague, Douglas C. "Benefits, Pitfalls, and Potential Bias in Health Care AI." North Carolina Medical Journal 80, no. 4 (July 2019): 219–23. http://dx.doi.org/10.18043/ncm.80.4.219.

31

Glauser, Wendy. "AI in health care: Improving outcomes or threatening equity?" Canadian Medical Association Journal 192, no. 1 (January 5, 2020): E21–E22. http://dx.doi.org/10.1503/cmaj.1095838.

32

Nordling, Linda. "A fairer way forward for AI in health care." Nature 573, no. 7775 (September 25, 2019): S103–S105. http://dx.doi.org/10.1038/d41586-019-02872-2.

33

Hoffman, Sharona. "The Emerging Hazard of AI-Related Health Care Discrimination." Hastings Center Report 51, no. 1 (December 14, 2020): 8–9. http://dx.doi.org/10.1002/hast.1203.

34

Kapp, Marshall B. "Advance Health Care Planning." Southern Medical Journal 81, no. 2 (February 1988): 221–25. http://dx.doi.org/10.1097/00007611-198802000-00020.

35

Rhyne, David M., and David Jupp. "Health care requirements planning." Health Care Management Review 13, no. 1 (1988): 17–28. http://dx.doi.org/10.1097/00004010-198824000-00005.

36

Phillips, Alice. "Regional Health Care Planning." Nursing Management (Springhouse) 16, no. 12 (December 1985): 37–41. http://dx.doi.org/10.1097/00006247-198512000-00006.

37

Urman, Alexandra, Ching-Kun Wang, Irene Dankwa-Mullan, Ethan Scheinberg, and Michael J. Young. "Harnessing AI for health equity in oncology research and practice." Journal of Clinical Oncology 36, no. 30_suppl (October 20, 2018): 67. http://dx.doi.org/10.1200/jco.2018.36.30_suppl.67.

Abstract:
67 Background: Recent advances in artificial intelligence (AI) carry underexplored practical and ethical implications for the practice of clinical oncology. As oncologic applications of AI proliferate, a framework for guiding their ethical implementations and equitable distribution will be crucial. Methods: We reviewed the current landscape of AI applications in oncology research and clinical practice by reviewing the current body of evidence in PubMed and Medline. Key ethical challenges and opportunities to address health equity are critically evaluated and highlighted. Ethical implications for patients, clinicians and society at large are delineated, with particular focus on the impact and ramifications of AI with respect to healthcare disparities and equity of oncology care delivery. Results: Growing concerns that AI may widen disparities in oncologic care by virtue of lack of affordability, inconsistent accessibility and biased machine-learning models are addressed. Although there is potential for AI to widen disparities in oncology care, using foresight in application, AI has the potential to (1) democratize access to specialized clinical knowledge, (2) improve the accuracy of predicting cancer susceptibility, recurrence and mortality, (3) prevent diagnostic errors in under-resourced settings, (4) minimize unintended bias and (5) enable access to tailored therapeutic options including clinical trials if appropriately deployed. Separately, AI can be harnessed to identify areas of underserved needs and optimize systems of health-information sharing and reimbursements as blockchain technology converges with AI. As AI advances it will have a larger presence in oncology research and clinical practice. Conclusions: A strategic framework integrating ethical standards and emphasizing equitable implementation can help ensure that the potential of AI to address disparities in oncology are maximally captured and its perils averted. Further work is being done on exploring these challenges and will be submitted as a manuscript.
38

Ellahham, Samer, Nour Ellahham, and Mecit Can Emre Simsekler. "Application of Artificial Intelligence in the Health Care Safety Context: Opportunities and Challenges." American Journal of Medical Quality 35, no. 4 (October 4, 2019): 341–48. http://dx.doi.org/10.1177/1062860619878515.

Abstract:
There is a growing awareness that artificial intelligence (AI) has been used in the analysis of complicated and big data to provide outputs without human input in various health care contexts, such as bioinformatics, genomics, and image analysis. Although this technology can provide opportunities in diagnosis and treatment processes, there still may be challenges and pitfalls related to various safety concerns. To shed light on such opportunities and challenges, this article reviews AI in health care along with its implication for safety. To provide safer technology through AI, this study shows that safe design, safety reserves, safe fail, and procedural safeguards are key strategies, whereas cost, risk, and uncertainty should be identified for all potential technical systems. It is also suggested that clear guidance and protocols should be identified and shared with all stakeholders to develop and adopt safer AI applications in the health care context.
39

Asan, Onur, Alparslan Emrah Bayrak, and Avishek Choudhury. "Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians." Journal of Medical Internet Research 22, no. 6 (June 19, 2020): e15154. http://dx.doi.org/10.2196/15154.

Abstract:
Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable—though imperfect—clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians’ use and adoption of AI. Trust is a psychological mechanism to deal with the uncertainty between what is known and unknown. Several research studies have highlighted the need for improving AI-based systems and enhancing their capabilities to help clinicians. However, assessing the magnitude and impact of human trust on AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.
40

Morley, Jessica, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo, and Luciano Floridi. "The ethics of AI in health care: A mapping review." Social Science & Medicine 260 (September 2020): 113172. http://dx.doi.org/10.1016/j.socscimed.2020.113172.

41

Vogel, Lauren. "Plan needed to capitalize on robots, AI in health care." Canadian Medical Association Journal 189, no. 8 (February 26, 2017): E329–E330. http://dx.doi.org/10.1503/cmaj.1095395.

42

Agarwal, Yashasvi, Mahima Jain, Shuchi Sinha, and Sanjay Dhir. "Delivering high‐tech, AI‐based health care at Apollo Hospitals." Global Business and Organizational Excellence 39, no. 2 (December 3, 2019): 20–30. http://dx.doi.org/10.1002/joe.21981.

43

Mantena, Sreekar, Leo Anthony Celi, Salmaan Keshavjee, and Andrea Beratarrechea. "Improving community health-care screenings with smartphone-based AI technologies." Lancet Digital Health 3, no. 5 (May 2021): e280–e282. http://dx.doi.org/10.1016/s2589-7500(21)00054-6.

44

Smith, Howard L., Neill F. Piland, and Michael J. Funk. "Strategic planning in rural health care organizations." Health Care Management Review 17, no. 3 (1992): 63–80. http://dx.doi.org/10.1097/00004010-199222000-00009.

45

Klumpp, Matthias, Marcus Hintze, Milla Immonen, Francisco Ródenas-Rigla, Francesco Pilati, Fernando Aparicio-Martínez, Dilay Çelebi, et al. "Artificial Intelligence for Hospital Health Care: Application Cases and Answers to Challenges in European Hospitals." Healthcare 9, no. 8 (July 29, 2021): 961. http://dx.doi.org/10.3390/healthcare9080961.

Abstract:
The development and implementation of artificial intelligence (AI) applications in health care contexts is a concurrent research and management question. Especially for hospitals, the expectations regarding improved efficiency and effectiveness by the introduction of novel AI applications are huge. However, experiences with real-life AI use cases are still scarce. As a first step towards structuring and comparing such experiences, this paper is presenting a comparative approach from nine European hospitals and eleven different use cases with possible application areas and benefits of hospital AI technologies. This is structured as a current review and opinion article from a diverse range of researchers and health care professionals. This contributes to important improvement options also for pandemic crises challenges, e.g., the current COVID-19 situation. The expected advantages as well as challenges regarding data protection, privacy, or human acceptance are reported. Altogether, the diversity of application cases is a core characteristic of AI applications in hospitals, and this requires a specific approach for successful implementation in the health care sector. This can include specialized solutions for hospitals regarding human–computer interaction, data management, and communication in AI implementation projects.
46

Wolff, Justus, Josch Pauling, Andreas Keck, and Jan Baumbach. "Systematic Review of Economic Impact Studies of Artificial Intelligence in Health Care." Journal of Medical Internet Research 22, no. 2 (February 14, 2020): e16866. http://dx.doi.org/10.2196/16866.

Abstract:
Background: Positive economic impact is a key decision factor in making the case for or against investing in an artificial intelligence (AI) solution in the health care industry. It is most relevant for the care provider and insurer as well as for the pharmaceutical and medical technology sector. Although the broad economic impact of digital health solutions in general has been assessed many times in literature and the benefit for patients and society has also been analyzed, the specific economic impact of AI in health care has been addressed only sporadically. Objective: This study aimed to systematically review and summarize the cost-effectiveness studies dedicated to AI in health care and to assess whether they meet the established quality criteria. Methods: In a first step, the quality criteria for economic impact studies were defined based on the established and adapted criteria schemes for cost impact assessments. In a second step, a systematic literature review based on qualitative and quantitative inclusion and exclusion criteria was conducted to identify relevant publications for an in-depth analysis of the economic impact assessment. In a final step, the quality of the identified economic impact studies was evaluated based on the defined quality criteria for cost-effectiveness studies. Results: Very few publications have thoroughly addressed the economic impact assessment, and the economic assessment quality of the reviewed publications on AI shows severe methodological deficits. Only 6 out of 66 publications could be included in the second step of the analysis based on the inclusion criteria. Out of these 6 studies, none comprised a methodologically complete cost impact analysis. There are two areas for improvement in future studies. First, the initial investment and operational costs for the AI infrastructure and service need to be included. Second, alternatives to achieve similar impact must be evaluated to provide a comprehensive comparison. Conclusions: This systematic literature analysis proved that the existing impact assessments show methodological deficits and that upcoming evaluations require more comprehensive economic analyses to enable economic decisions for or against implementing AI technology in health care.
47

Maassen, Oliver, Sebastian Fritsch, Julia Palm, Saskia Deffge, Julian Kunze, Gernot Marx, Morris Riedel, Andreas Schuppert, and Johannes Bickenbach. "Future Medical Artificial Intelligence Application Requirements and Expectations of Physicians in German University Hospitals: Web-Based Survey." Journal of Medical Internet Research 23, no. 3 (March 5, 2021): e26646. http://dx.doi.org/10.2196/26646.

Abstract:
Background The increasing development of artificial intelligence (AI) systems in medicine driven by researchers and entrepreneurs goes along with enormous expectations for medical care advancement. AI might change the clinical practice of physicians from almost all medical disciplines and in most areas of health care. While expectations for AI in medicine are high, practical implementations of AI for clinical practice are still scarce in Germany. Moreover, physicians’ requirements and expectations of AI in medicine and their opinion on the usage of anonymized patient data for clinical and biomedical research have not been investigated widely in German university hospitals. Objective This study aimed to evaluate physicians’ requirements and expectations of AI in medicine and their opinion on the secondary usage of patient data for (bio)medical research (eg, for the development of machine learning algorithms) in university hospitals in Germany. Methods A web-based survey was conducted addressing physicians of all medical disciplines in 8 German university hospitals. Answers were given using Likert scales and general demographic responses. Physicians were asked to participate locally via email in the respective hospitals. Results The online survey was completed by 303 physicians (female: 121/303, 39.9%; male: 173/303, 57.1%; no response: 9/303, 3.0%) from a wide range of medical disciplines and work experience levels. Most respondents either had a positive (130/303, 42.9%) or a very positive attitude (82/303, 27.1%) towards AI in medicine. There was a significant association between the personal rating of AI in medicine and the self-reported technical affinity level (H4=48.3, P<.001). A vast majority of physicians expected the future of medicine to be a mix of human and artificial intelligence (273/303, 90.1%) but also requested a scientific evaluation before the routine implementation of AI-based systems (276/303, 91.1%). Physicians were most optimistic that AI applications would identify drug interactions (280/303, 92.4%) to improve patient care substantially but were quite reserved regarding AI-supported diagnosis of psychiatric diseases (62/303, 20.5%). Of the respondents, 82.5% (250/303) agreed that there should be open access to anonymized patient databases for medical and biomedical research. Conclusions Physicians in stationary patient care in German university hospitals show a generally positive attitude towards using most AI applications in medicine. Along with this optimism comes several expectations and hopes that AI will assist physicians in clinical decision making. Especially in fields of medicine where huge amounts of data are processed (eg, imaging procedures in radiology and pathology) or data are collected continuously (eg, cardiology and intensive care medicine), physicians’ expectations of AI to substantially improve future patient care are high. In the study, the greatest potential was seen in the application of AI for the identification of drug interactions, assumedly due to the rising complexity of drug administration to polymorbid, polypharmacy patients. However, for the practical usage of AI in health care, regulatory and organizational challenges still have to be mastered.
APA, Harvard, Vancouver, ISO, and other styles
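The association reported above (H(4)=48.3, P<.001) is consistent with a Kruskal-Wallis test comparing AI ratings across five technical-affinity groups. As a purely illustrative sketch, with synthetic Likert-scale data and group sizes that are hypothetical rather than taken from the study, such a test can be run with SciPy:

```python
# Illustrative only: synthetic Likert-scale data, not the survey's data.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Five self-reported technical-affinity groups; each member rates AI in medicine
# on a 1-5 Likert scale. Higher-affinity groups are simulated with higher ratings.
groups = [
    np.clip(np.round(rng.normal(loc=2.5 + 0.4 * level, scale=0.9, size=60)), 1, 5)
    for level in range(5)
]

h_statistic, p_value = kruskal(*groups)
degrees_of_freedom = len(groups) - 1  # k groups -> k - 1 degrees of freedom
print(f"H({degrees_of_freedom}) = {h_statistic:.1f}, P = {p_value:.3g}")
```

A rank-based test such as Kruskal-Wallis is a reasonable choice here because Likert responses are ordinal rather than interval-scaled.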
48

Chee, Marcel Lucas, Marcus Eng Hock Ong, Fahad Javaid Siddiqui, Zhongheng Zhang, Shir Lynn Lim, Andrew Fu Wah Ho, and Nan Liu. "Artificial Intelligence Applications for COVID-19 in Intensive Care and Emergency Settings: A Systematic Review." International Journal of Environmental Research and Public Health 18, no. 9 (April 29, 2021): 4749. http://dx.doi.org/10.3390/ijerph18094749.

Full text
Abstract:
Background: Little is known about the role of artificial intelligence (AI) as a decisive technology in the clinical management of COVID-19 patients. We aimed to systematically review and critically appraise the current evidence on AI applications for COVID-19 in intensive care and emergency settings. Methods: We systematically searched PubMed, Embase, Scopus, CINAHL, IEEE Xplore, and ACM Digital Library databases from inception to 1 October 2020, without language restrictions. We included peer-reviewed original studies that applied AI for COVID-19 patients, healthcare workers, or health systems in intensive care, emergency, or prehospital settings. We assessed predictive modelling studies and critically appraised the methodology and key findings of all other studies. Results: Of fourteen eligible studies, eleven developed prognostic or diagnostic AI predictive models, all of which were assessed to be at high risk of bias. Common pitfalls included inadequate sample sizes, poor handling of missing data, failure to account for censored participants, and weak validation of models. Conclusions: Current AI applications for COVID-19 are not ready for deployment in acute care settings, given their limited scope and poor quality. Our findings underscore the need for improvements to facilitate safe and effective clinical adoption of AI applications, for and beyond the COVID-19 pandemic.
APA, Harvard, Vancouver, ISO, and other styles
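The methodological pitfalls listed in the review above (poor handling of missing data, weak validation of models) map onto fairly standard safeguards in prognostic modelling. The following is a minimal sketch, not drawn from any of the reviewed studies, using synthetic data and scikit-learn to show imputation fitted inside the cross-validation loop and discrimination estimated by stratified cross-validation rather than a single train/test split:

```python
# Illustrative only: synthetic data, not taken from any reviewed study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for routinely collected clinical features with a class
# imbalance roughly resembling an adverse-outcome prediction task.
X, y = make_classification(n_samples=500, n_features=12, weights=[0.8, 0.2], random_state=0)

# Introduce ~5% missing values to mimic incomplete clinical records.
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan

# Imputation and scaling live inside the pipeline, so they are refit on each
# training fold and never see the held-out fold (avoids data leakage).
model = make_pipeline(
    SimpleImputer(strategy="median"),
    StandardScaler(),
    LogisticRegression(max_iter=1000),
)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} (+/- {auc.std():.2f})")
```

The third pitfall, failure to account for censored participants, would additionally require time-to-event methods (eg, Cox regression) rather than a plain classifier.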
49

Kracht, Chelsea L., Susan B. Sisson, Kelly Kerr, Devon Walker, Lancer Stephens, Julie Seward, Amber Anderson, et al. "Health Care Provider’s Role in Obesity Prevention and Healthy Development of Young American Indian Children." Journal of Transcultural Nursing 30, no. 3 (August 3, 2018): 231–41. http://dx.doi.org/10.1177/1043659618792605.

Full text
Abstract:
Introduction: Health care providers (HCPs) serving American Indian (AI) populations are critical stakeholders in promoting healthy weight-related behaviors in young AI children. The purpose of this study was to develop an understanding of how HCPs perceive their role in the healthy development of young AI children and how they envision working with early care and education teachers and parents to enhance children’s health. Method: Twenty HCPs who serve young AI children in Oklahoma participated in individual interviews. Thematic analysis was conducted on coded transcripts, and three main themes, each with two to four subthemes, were identified. Results: HCPs had limited contact with teachers, felt that family health was equally or more important than child health, and considered parental empowerment and gradual change essential for success. Conclusion: Creating ways to involve HCPs, early care and education teachers, and parents together in multilevel and multisector interventions has the potential to improve the health of young AI children.
APA, Harvard, Vancouver, ISO, and other styles
50

Makin, Vinni, Amy S. Nowacki, and Colleen Y. Colbert. "A Pilot Assessment of Primary Care Providers’ Knowledge of Adrenal Insufficiency Diagnosis and Management." Journal of Primary Care & Community Health 10 (January 2019): 215013271986216. http://dx.doi.org/10.1177/2150132719862163.

Full text
Abstract:
Background and Objectives: Adrenal insufficiency (AI) is one of the most challenging diagnoses in primary care, and misdiagnosis is costly. The aim of this educational needs assessment was to assess primary care physicians’ (PCPs) knowledge of AI diagnosis and management as a preliminary step in developing a professional education module to address knowledge and practice gaps. Methods: We developed a 12-item needs assessment questionnaire and pretested the items prior to use to gather validity evidence. The questionnaire contained 4 AI knowledge items, 4 needs assessment items, and 4 demographic items. It was administered to 100 PCPs across a single integrated health care system over a 6-month period. Results: Fifty-one of 100 questionnaires were returned. The majority of respondents believed their knowledge of AI diagnosis and management was “average” when compared with peers. Responses indicated that PCPs were fairly comfortable diagnosing, but not managing, patients with AI. There was no association between respondents’ clinical knowledge of AI and their roles as clinical instructors (ie, having trainees assigned to them). A total of 54% of respondents said they used online resources to enhance their current knowledge of AI, and 88% said they would use a new AI resource if one were available. When asked to rank preferences for professional development modalities, 26/38 respondents placed UpToDate, 21/38 traditional lecture, and 19/38 case discussion among their top 3 choices. Conclusion: Results of this needs assessment showed that PCPs within our health care system both needed and desired professional development targeting AI diagnosis and management. A faculty development session, which included a short lecture and case scenarios, was developed and delivered to PCPs at participating family health centers. Session materials are now available for use by other institutions to meet professional development needs on this important topic.
APA, Harvard, Vancouver, ISO, and other styles