Journal articles on the topic 'Medical XAI'

Consult the top 50 journal articles for your research on the topic 'Medical XAI.'

1

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery." Diagnostics 12, no. 2 (January 19, 2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract:
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical settings difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decisions and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We searched articles published between 2019 and 2021 in PubMed, IEEE Xplore, the Association for Computing Machinery, and Google Scholar. We included articles that met the selection criteria and then extracted and analyzed the relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis and illustrate how XAI can be applied in medical applications. Finally, we summarize the XAI methods utilized in these applications and the challenges that researchers have met, and discuss future research directions. The survey results indicate that medical XAI is a promising research direction, and this study aims to serve as a reference for medical experts and AI scientists when designing medical XAI applications.
2

Qian, Jinzhao, Hailong Li, Junqi Wang, and Lili He. "Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging." Diagnostics 13, no. 9 (April 27, 2023): 1571. http://dx.doi.org/10.3390/diagnostics13091571.

Abstract:
Advances in artificial intelligence (AI), especially deep learning (DL), have facilitated magnetic resonance imaging (MRI) data analysis, enabling AI-assisted medical image diagnoses and prognoses. However, most of the DL models are considered as “black boxes”. There is an unmet need to demystify DL models so domain experts can trust these high-performance DL models. This has resulted in a sub-domain of AI research called explainable artificial intelligence (XAI). In the last decade, many experts have dedicated their efforts to developing novel XAI methods that are competent at visualizing and explaining the logic behind data-driven DL models. However, XAI techniques are still in their infancy for medical MRI image analysis. This study aims to outline the XAI applications that are able to interpret DL models for MRI data analysis. We first introduce several common MRI data modalities. Then, a brief history of DL models is discussed. Next, we highlight XAI frameworks and elaborate on the principles of multiple popular XAI methods. Moreover, studies on XAI applications in MRI image analysis are reviewed across the tissues/organs of the human body. A quantitative analysis is conducted to reveal the insights of MRI researchers on these XAI techniques. Finally, evaluations of XAI methods are discussed. This survey presents recent advances in the XAI domain for explaining the DL models that have been utilized in MRI applications.
3

Chaddad, Ahmad, Jihao Peng, Jian Xu, and Ahmed Bouridane. "Survey of Explainable AI Techniques in Healthcare." Sensors 23, no. 2 (January 5, 2023): 634. http://dx.doi.org/10.3390/s23020634.

Abstract:
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
4

Ahmed, Saad Bin, Roberto Solis-Oba, and Lucian Ilie. "Explainable-AI in Automated Medical Report Generation Using Chest X-ray Images." Applied Sciences 12, no. 22 (November 18, 2022): 11750. http://dx.doi.org/10.3390/app122211750.

Abstract:
The use of machine learning in healthcare has the potential to revolutionize virtually every aspect of the industry. However, the lack of transparency in AI applications may lead to the problem of trustworthiness and reliability of the information provided by these applications. Medical practitioners rely on such systems for clinical decision making, but without adequate explanations, diagnosis made by these systems cannot be completely trusted. Explainability in Artificial Intelligence (XAI) aims to improve our understanding of why a given output has been produced by an AI system. Automated medical report generation is one area that would benefit greatly from XAI. This survey provides an extensive literature review on XAI techniques used in medical image analysis and automated medical report generation. We present a systematic classification of XAI techniques used in this field, highlighting the most important features of each one that could be used by future research to select the most appropriate XAI technique to create understandable and reliable explanations for decisions made by AI systems. In addition to providing an overview of the state of the art in this area, we identify some of the most important issues that need to be addressed and on which research should be focused.
5

Payrovnaziri, Seyedeh Neelufar, Zhaoyi Chen, Pablo Rengifo-Moreno, Tim Miller, Jiang Bian, Jonathan H. Chen, Xiuwen Liu, and Zhe He. "Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review." Journal of the American Medical Informatics Association 27, no. 7 (May 17, 2020): 1173–85. http://dx.doi.org/10.1093/jamia/ocaa053.

Abstract:
Objective: To conduct a systematic scoping review of explainable artificial intelligence (XAI) models that use real-world electronic health record data, categorize these techniques according to different biomedical applications, identify gaps in current studies, and suggest future research directions. Materials and Methods: We searched MEDLINE, IEEE Xplore, and the Association for Computing Machinery (ACM) Digital Library to identify relevant papers published between January 1, 2009 and May 1, 2019. We summarized these studies based on the year of publication, prediction tasks, machine learning algorithm, dataset(s) used to build the models, and the scope, category, and evaluation of the XAI methods. We further assessed the reproducibility of the studies in terms of the availability of data and code and discussed open issues and challenges. Results: Forty-two articles were included in this review. We reported the research trend and most-studied diseases. We grouped XAI methods into 5 categories: knowledge distillation and rule extraction (N = 13), intrinsically interpretable models (N = 9), data dimensionality reduction (N = 8), attention mechanism (N = 7), and feature interaction and importance (N = 5). Discussion: XAI evaluation is an open issue that requires a deeper focus in the case of medical applications. We also discuss the importance of reproducibility of research work in this field, as well as the challenges and opportunities of XAI from 2 medical professionals’ point of view. Conclusion: Based on our review, we found that XAI evaluation in medicine has not been adequately and formally practiced. Reproducibility remains a critical concern. Ample opportunities exist to advance XAI research in medicine.
6

Wang, Mini Han, Kelvin Kam-lung Chong, Zhiyuan Lin, Xiangrong Yu, and Yi Pan. "An Explainable Artificial Intelligence-Based Robustness Optimization Approach for Age-Related Macular Degeneration Detection Based on Medical IOT Systems." Electronics 12, no. 12 (June 16, 2023): 2697. http://dx.doi.org/10.3390/electronics12122697.

Abstract:
AI-based models have shown promising results in diagnosing eye diseases based on multi-sources of data collected from medical IOT systems. However, there are concerns regarding their generalization and robustness, as these methods are prone to overfitting specific datasets. The development of Explainable Artificial Intelligence (XAI) techniques has addressed the black-box problem of machine learning and deep learning models, which can enhance interpretability and trustworthiness and optimize their performance in the real world. Age-related macular degeneration (AMD) is currently the primary cause of vision loss among elderly individuals. In this study, XAI methods were applied to detect AMD using various ophthalmic imaging modalities collected from medical IOT systems, such as colorful fundus photography (CFP), optical coherence tomography (OCT), ultra-wide fundus (UWF) images, and fluorescein angiography fundus (FAF). An optimized deep learning (DL) model and novel AMD identification systems were proposed based on the insights extracted by XAI. The findings of this study demonstrate that XAI not only has the potential to improve the transparency, reliability, and trustworthiness of AI models for ophthalmic applications, but it also has significant advantages for enhancing the robustness performance of these models. XAI could play a crucial role in promoting intelligent ophthalmology and be one of the most important techniques for evaluating and enhancing ophthalmic AI systems.
7

Antoniadi, Anna Markella, Yuhan Du, Yasmine Guendouz, Lan Wei, Claudia Mazo, Brett A. Becker, and Catherine Mooney. "Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review." Applied Sciences 11, no. 11 (May 31, 2021): 5088. http://dx.doi.org/10.3390/app11115088.

Abstract:
Machine Learning and Artificial Intelligence (AI) more broadly have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI applications has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output. The output can then be interpreted within a given context. One area that is in great need of XAI is that of Clinical Decision Support Systems (CDSSs). These systems support medical practitioners in their clinical decision-making and, in the absence of explainability, may lead to issues of under- or over-reliance. Providing explanations for how recommendations are arrived at will allow practitioners to make more nuanced, and in some cases, life-saving decisions. The need for XAI in CDSSs, and the medical field in general, is amplified by the need for ethical and fair decision-making and the fact that AI trained with historical data can be a reinforcement agent of historical actions and biases that should be uncovered. We performed a systematic literature review of work to date on the application of XAI in CDSSs. XAI-enabled systems for tabular data processing are the most common, while XAI-enabled CDSSs for text analysis are the least common in the literature. Developers show more interest in providing local explanations, while there is an almost even balance between post-hoc and ante-hoc explanations, as well as between model-specific and model-agnostic techniques. Studies reported benefits of the use of XAI, such as enhanced decision confidence for clinicians or the generation of hypotheses about causality, which ultimately leads to increased trustworthiness and acceptability of the system and potential for its incorporation into the clinical workflow. However, we found an overall distinct lack of application of XAI in the context of CDSSs and, in particular, a lack of user studies exploring the needs of clinicians. We propose some guidelines for the implementation of XAI in CDSSs and explore some opportunities, challenges, and future research needs.
8

Petrauskas, Vytautas, Gyte Damuleviciene, Algirdas Dobrovolskis, Juozas Dovydaitis, Audrone Janaviciute, Raimundas Jasinevicius, Egidijus Kazanavicius, Jurgita Knasiene, Vita Lesauskaite, and Agnius Liutke. "XAI-based Medical Decision Support System Model." International Journal of Scientific and Research Publications (IJSRP) 10, no. 12 (December 24, 2020): 598–607. http://dx.doi.org/10.29322/ijsrp.10.12.2020.p10869.

9

Alkhalaf, Salem, Fahad Alturise, Adel Aboud Bahaddad, Bushra M. Elamin Elnaim, Samah Shabana, Sayed Abdel-Khalek, and Romany F. Mansour. "Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging." Cancers 15, no. 5 (February 27, 2023): 1492. http://dx.doi.org/10.3390/cancers15051492.

Abstract:
Explainable Artificial Intelligence (XAI) is a branch of AI that mainly focuses on developing systems that provide understandable and clear explanations for their decisions. In the context of cancer diagnosis on medical imaging, an XAI technology uses advanced image analysis methods like deep learning (DL) to make a diagnosis and analyze medical images, as well as provide a clear explanation for how it arrived at its diagnoses. This includes highlighting specific areas of the image that the system recognized as indicative of cancer while also providing data on the fundamental AI algorithm and decision-making process used. The objective of XAI is to provide patients and doctors with a better understanding of the system’s decision-making process and to increase transparency and trust in the diagnosis method. Therefore, this study develops an Adaptive Aquila Optimizer with Explainable Artificial Intelligence Enabled Cancer Diagnosis (AAOXAI-CD) technique for medical imaging. The proposed AAOXAI-CD technique intends to accomplish effectual colorectal and osteosarcoma cancer classification. To achieve this, the AAOXAI-CD technique initially employs the Faster SqueezeNet model for feature vector generation, and the hyperparameters of the Faster SqueezeNet model are tuned with the AAO algorithm. For cancer classification, a majority weighted voting ensemble of three DL classifiers, namely a recurrent neural network (RNN), a gated recurrent unit (GRU), and a bidirectional long short-term memory (BiLSTM) network, is employed. Furthermore, the AAOXAI-CD technique incorporates the XAI approach LIME for better understanding and explainability of the black-box method for accurate cancer detection. The AAOXAI-CD methodology was evaluated on medical cancer imaging databases, and the outcomes confirmed its promising performance compared with other current approaches.
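
The LIME step named in this abstract can be sketched as follows. This is only an illustration of the call pattern in the `lime` package: `predict_proba` is a hypothetical placeholder, not the paper's RNN/GRU/BiLSTM voting ensemble, and the image is random data.

```python
# Minimal sketch of the LIME image-explanation step; the classifier is a
# hypothetical placeholder, not the AAOXAI-CD ensemble from the paper.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_proba(images):
    """Stand-in for the tuned ensemble: mean brightness as a pseudo class probability."""
    images = np.asarray(images, dtype=float)
    p = images.mean(axis=(1, 2, 3)).reshape(-1, 1)
    return np.hstack([p, 1.0 - p])

image = np.random.rand(128, 128, 3)  # stand-in for a medical image tile

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_proba,
                                         top_labels=2, num_samples=200)

label = explanation.top_labels[0]
overlay, mask = explanation.get_image_and_mask(label, positive_only=True,
                                               num_features=5, hide_rest=False)
highlighted = mark_boundaries(overlay, mask)  # regions LIME credits for the prediction
```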
10

Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.

Abstract:
Introduction. Artificial intelligence (AI) models have been employed to automate decision-making, from commerce to more critical fields directly affecting human lives, including healthcare. Although the vast majority of these proposed AI systems are considered black box models that lack explainability, there is an increasing trend of attempting to create medical explainable Artificial Intelligence (XAI) systems using approaches such as attention mechanisms and surrogate models. An AI system is said to be explainable if humans can tell how the system reached its decision. Various XAI-driven healthcare approaches and their performances are discussed in the current study. The toolkits used in local and global post hoc explainability and the multiple techniques pertaining to rationale, data, and performance explainability are discussed in the current study. Methods. The explainability of the artificial intelligence model in the healthcare domain is implemented through the Local Interpretable Model-Agnostic Explanations and Shapley Additive Explanations for better comprehensibility of the internal working mechanism of the original AI models and the correlation among the feature set that influences the decision of the model. Results. Research findings on the current state-of-the-art XAI-based technologies and on future technologies enabled by XAI are reported across various implementation aspects, including research challenges and limitations of existing models. The role of XAI in the healthcare domain, ranging from the early prediction of future illness to the smart diagnosis of disease, is discussed. The metrics considered in evaluating the model’s explainability are presented, along with various explainability tools. Three case studies about the role of XAI in the healthcare domain with their performances are incorporated for better comprehensibility. Conclusion. The future perspectives of XAI in healthcare will assist in obtaining research insights in this domain.
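
For the local and global post hoc toolkits this review covers, a minimal SHAP sketch on a stand-in tabular model (public breast-cancer data, not any of the surveyed healthcare systems) might look like this:

```python
# A minimal sketch of post hoc SHAP analysis on a stand-in gradient-boosting model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)          # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X_test)    # (n_samples, n_features) attributions

# Global explanation: which features drive predictions across the cohort.
shap.summary_plot(shap_values, X_test, show=False)
# Local explanation: attributions for a single patient (row 0).
patient_contributions = dict(zip(X_test.columns, shap_values[0]))
```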
11

Sarp, Salih, Murat Kuzlu, Emmanuel Wilson, Umit Cali, and Ozgur Guler. "The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification." Electronics 10, no. 12 (June 11, 2021): 1406. http://dx.doi.org/10.3390/electronics10121406.

Abstract:
Artificial Intelligence (AI) has been among the most emerging research and industrial application fields, especially in the healthcare domain, but operated as a black-box model with a limited understanding of its inner working over the past decades. AI algorithms are, in large part, built on weights calculated as a result of large matrix multiplications. It is typically hard to interpret and debug the computationally intensive processes. Explainable Artificial Intelligence (XAI) aims to solve black-box and hard-to-debug approaches through the use of various techniques and tools. In this study, XAI techniques are applied to chronic wound classification. The proposed model classifies chronic wounds through the use of transfer learning and fully connected layers. Classified chronic wound images serve as input to the XAI model for an explanation. Interpretable results can help shed new perspectives to clinicians during the diagnostic phase. The proposed method successfully provides chronic wound classification and its associated explanation to extract additional knowledge that can also be interpreted by non-data-science experts, such as medical scientists and physicians. This hybrid approach is shown to aid with the interpretation and understanding of AI decision-making processes.
12

Pumplun, Luisa, Felix Peters, Joshua F. Gawlitza, and Peter Buxmann. "Bringing Machine Learning Systems into Clinical Practice: A Design Science Approach to Explainable Machine Learning-Based Clinical Decision Support Systems." Journal of the Association for Information Systems 24, no. 4 (2023): 953–79. http://dx.doi.org/10.17705/1jais.00820.

Abstract:
Clinical decision support systems (CDSSs) based on machine learning (ML) hold great promise for improving medical care. Technically, such CDSSs are already feasible but physicians have been skeptical about their application. In particular, their opacity is a major concern, as it may lead physicians to overlook erroneous outputs from ML-based CDSSs, potentially causing serious consequences for patients. Research on explainable AI (XAI) offers methods with the potential to increase the explainability of black-box ML systems. This could significantly accelerate the application of ML-based CDSSs in medicine. However, XAI research to date has mainly been technically driven and largely neglects the needs of end users. To better engage the users of ML-based CDSSs, we applied a design science approach to develop a design for explainable ML-based CDSSs that incorporates insights from XAI literature while simultaneously addressing physicians’ needs. This design comprises five design principles that designers of ML-based CDSSs can apply to implement user-centered explanations, which are instantiated in a prototype of an explainable ML-based CDSS for lung nodule classification. We rooted the design principles and the derived prototype in a body of justificatory knowledge consisting of XAI literature, the concept of usability, and an online survey study involving 57 physicians. We refined the design principles and their instantiation by conducting walk-throughs with six radiologists. A final experiment with 45 radiologists demonstrated that our design resulted in physicians perceiving the ML-based CDSS as more explainable and usable in terms of the required cognitive effort than a system without explanations.
13

Boudanga, Zineb, Siham benhadou, and Hicham Medromi. "An innovative medical waste management system in a smart city using XAI and vehicle routing optimization." F1000Research 12 (August 31, 2023): 1060. http://dx.doi.org/10.12688/f1000research.138867.1.

Abstract:
Background: The management of medical waste is a complex task that necessitates effective strategies to mitigate health risks, comply with regulations, and minimize environmental impact. In this study, a novel approach based on collaboration and technological advancements is proposed. Methods: By utilizing colored bags with identification tags, smart containers with sensors, object recognition sensors, air and soil control sensors, vehicles with Global Positioning System (GPS) and temperature humidity sensors, and outsourced waste treatment, the system optimizes waste sorting, storage, and treatment operations. Additionally, the incorporation of explainable artificial intelligence (XAI) technology, leveraging scikit-learn, xgboost, catboost, lightgbm, and skorch, provides real-time insights and data analytics, facilitating informed decision-making and process optimization. Results: The integration of these cutting-edge technologies forms the foundation of an efficient and intelligent medical waste management system. Furthermore, the article highlights the use of genetic algorithms (GA) to solve vehicle routing models, optimizing waste collection routes and minimizing transportation time to treatment centers. Conclusions: Overall, the combination of advanced technologies, optimization algorithms, and XAI contributes to improved waste management practices, ultimately benefiting both public health and the environment.
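
The genetic-algorithm routing component can be illustrated with a deliberately small sketch: a single-vehicle, distance-only tour over made-up container coordinates, whereas the paper's model also handles fleets, capacities, and time constraints.

```python
# Toy genetic algorithm for ordering waste-collection stops on a single route;
# a simplified stand-in for the vehicle-routing optimization described above.
import random

random.seed(0)
STOPS = [(0, 0), (2, 7), (5, 3), (8, 8), (1, 4), (6, 1), (9, 5), (3, 9)]  # (x, y) of containers

def route_length(order):
    dist = 0.0
    for a, b in zip(order, order[1:] + order[:1]):          # close the loop back to the depot
        (x1, y1), (x2, y2) = STOPS[a], STOPS[b]
        dist += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return dist

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(order, rate=0.1):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

population = [random.sample(range(len(STOPS)), len(STOPS)) for _ in range(50)]
for _ in range(200):                                         # generations
    population.sort(key=route_length)
    parents = population[:10]                                # elitist selection
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(40)]

best = min(population, key=route_length)
print("best visiting order:", best, "length:", round(route_length(best), 2))
```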
14

Holzinger, Andreas. "Explainable AI and Multi-Modal Causability in Medicine." i-com 19, no. 3 (December 1, 2020): 171–79. http://dx.doi.org/10.1515/icom-2020-0024.

Abstract:
Progress in statistical machine learning made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex “black-boxes”, which make it hard to understand why a result has been achieved. The explainable AI (xAI) community develops methods, e.g., to highlight which input parameters are relevant for a result; however, in the medical domain there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI. The key for future human-AI interfaces is to map explainability with causability and to allow a domain expert to ask questions to understand why an AI came up with a result, and also to ask “what-if” questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result. Multi-modal causability is important in the medical domain because different modalities often contribute to a result.
15

Ghnemat, Rawan, Sawsan Alodibat, and Qasem Abu Al-Haija. "Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging Classification." Journal of Imaging 9, no. 9 (August 30, 2023): 177. http://dx.doi.org/10.3390/jimaging9090177.

Abstract:
Recently, deep learning has gained significant attention as a noteworthy division of artificial intelligence (AI) due to its high accuracy and versatile applications. However, one of the major challenges of AI is its lack of interpretability, commonly referred to as the black-box problem. In this study, we introduce an explainable AI model for medical image classification to enhance the interpretability of the decision-making process. Our approach is based on segmenting the images to provide a better understanding of how the AI model arrives at its results. We evaluated our model on five datasets, including the COVID-19 and Pneumonia Chest X-ray dataset, Chest X-ray (COVID-19 and Pneumonia), COVID-19 Image Dataset (COVID-19, Viral Pneumonia, Normal), and COVID-19 Radiography Database. We achieved testing and validation accuracy of 90.6% on a relatively small dataset of 6432 images. Our proposed model improved accuracy and reduced time complexity, making it more practical for medical diagnosis. Our approach offers a more interpretable and transparent AI model that can enhance the accuracy and efficiency of medical diagnosis.
16

Yiğit, Tuncay, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, and Ali Hakan Işık. "Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning." Traitement du Signal 39, no. 3 (June 30, 2022): 863–69. http://dx.doi.org/10.18280/ts.390311.

Abstract:
Artificial intelligence holds great promise in medical imaging, especially histopathological imaging. However, artificial intelligence algorithms cannot fully explain the thought processes during decision-making. This situation has brought the problem of explainability, i.e., the black box problem, of artificial intelligence applications to the agenda: an algorithm simply responds without stating the reasons for the given images. To overcome the problem and improve the explainability, explainable artificial intelligence (XAI) has come to the fore, and piqued the interest of many researchers. Against this backdrop, this study examines a new and original dataset using the deep learning algorithm, and visualizes the output with gradient-weighted class activation mapping (Grad-CAM), one of the XAI applications. Afterwards, a detailed questionnaire survey was conducted with the pathologists on these images. Both the decision-making processes and the explanations were verified, and the accuracy of the output was tested. The research results greatly help pathologists in the diagnosis of paratuberculosis.
17

Lötsch, Jörn, Dario Kringel, and Alfred Ultsch. "Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients." BioMedInformatics 2, no. 1 (December 22, 2021): 1–17. http://dx.doi.org/10.3390/biomedinformatics2010001.

Abstract:
The use of artificial intelligence (AI) systems in biomedical and clinical settings can disrupt the traditional doctor–patient relationship, which is based on trust and transparency in medical advice and therapeutic decisions. When the diagnosis or selection of a therapy is no longer made solely by the physician, but to a significant extent by a machine using algorithms, decisions become nontransparent. Skill learning is the most common application of machine learning algorithms in clinical decision making. These are a class of very general algorithms (artificial neural networks, classifiers, etc.), which are tuned based on examples to optimize the classification of new, unseen cases. It is pointless to ask for an explanation for a decision. A detailed understanding of the mathematical details of an AI algorithm may be possible for experts in statistics or computer science. However, when it comes to the fate of human beings, this “developer’s explanation” is not sufficient. The concept of explainable AI (XAI) as a solution to this problem is attracting increasing scientific and regulatory interest. This review focuses on the requirement that XAIs must be able to explain in detail the decisions made by the AI to the experts in the field.
18

Apostolopoulos, Ioannis D., and Peter P. Groumpos. "Fuzzy Cognitive Maps: Their Role in Explainable Artificial Intelligence." Applied Sciences 13, no. 6 (March 7, 2023): 3412. http://dx.doi.org/10.3390/app13063412.

Abstract:
Currently, artificial intelligence is facing several problems with its practical implementation in various application domains. The explainability of advanced artificial intelligence algorithms is a topic of paramount importance, and many discussions have been held recently. Pioneering and classical machine learning and deep learning models behave as black boxes, constraining the logical interpretations that the end users desire. Artificial intelligence applications in industry, medicine, agriculture, and social sciences require the users’ trust in the systems. Users are always entitled to know why and how each method has made a decision and which factors play a critical role. Otherwise, they will always be wary of using new techniques. This paper discusses the nature of fuzzy cognitive maps (FCMs), a soft computational method to model human knowledge and provide decisions handling uncertainty. Though FCMs are not new to the field, they are evolving and incorporate recent advancements in artificial intelligence, such as learning algorithms and convolutional neural networks. The nature of FCMs reveals their supremacy in transparency, interpretability, transferability, and other aspects of explainable artificial intelligence (XAI) methods. The present study aims to reveal and defend the explainability properties of FCMs and to highlight their successful implementation in many domains. Subsequently, the present study discusses how FCMs cope with XAI directions and presents critical examples from the literature that demonstrate their superiority. The study results demonstrate that FCMs are both in accordance with the XAI directives and have many successful applications in domains such as medical decision-support systems, precision agriculture, energy savings, environmental monitoring, and policy-making for the public sector.
19

Rietberg, Max Tigo, Van Bach Nguyen, Jeroen Geerdink, Onno Vijlbrief, and Christin Seifert. "Accurate and Reliable Classification of Unstructured Reports on Their Diagnostic Goal Using BERT Models." Diagnostics 13, no. 7 (March 27, 2023): 1251. http://dx.doi.org/10.3390/diagnostics13071251.

Abstract:
Understanding the diagnostic goal of medical reports is valuable information for understanding patient flows. This work focuses on extracting the reason for taking an MRI scan of Multiple Sclerosis (MS) patients using the attached free-form reports: Diagnosis, Progression or Monitoring. We investigate the performance of domain-dependent and general state-of-the-art language models and their alignment with domain expertise. To this end, eXplainable Artificial Intelligence (XAI) techniques are used to acquire insight into the inner workings of the model, which are verified on their trustworthiness. The verified XAI explanations are then compared with explanations from a domain expert, to indirectly determine the reliability of the model. BERTje, a Dutch Bidirectional Encoder Representations from Transformers (BERT) model, outperforms RobBERT and MedRoBERTa.nl in both accuracy and reliability. The latter model (MedRoBERTa.nl) is a domain-specific model, while BERTje is a generic model, showing that domain-specific models are not always superior. Our validation of BERTje in a small prospective study shows promising results for the potential uptake of the model in a practical setting.
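
A minimal sketch of the classification setup described here, assuming the public BERTje checkpoint GroNLP/bert-base-dutch-cased; the three labels follow the abstract, the example report is invented, and the classification head is untrained, so fine-tuning on labeled reports is still required.

```python
# Sketch of a BERT encoder with a 3-way head for Diagnosis / Progression / Monitoring.
# The checkpoint name is an assumption (public BERTje model); the head is untrained.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["Diagnosis", "Progression", "Monitoring"]
checkpoint = "GroNLP/bert-base-dutch-cased"   # assumed BERTje checkpoint

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
                                                           num_labels=len(labels))

report = "MRI hersenen ter controle van bekende MS-laesies."  # invented free-text request
inputs = tokenizer(report, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])     # predicted diagnostic goal
```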
20

Khanna, Varada Vivek, Krishnaraj Chadaga, Niranajana Sampathila, Srikanth Prabhu, Venkatesh Bhandage, and Govardhan K. Hegde. "A Distinctive Explainable Machine Learning Framework for Detection of Polycystic Ovary Syndrome." Applied System Innovation 6, no. 2 (February 23, 2023): 32. http://dx.doi.org/10.3390/asi6020032.

Abstract:
Polycystic Ovary Syndrome (PCOS) is a complex disorder predominantly defined by biochemical hyperandrogenism, oligomenorrhea, anovulation, and in some cases, the presence of ovarian microcysts. This endocrinopathy inhibits ovarian follicle development, causing symptoms like obesity, acne, infertility, and hirsutism. Artificial Intelligence (AI) has revolutionized healthcare, contributing remarkably to science and engineering domains. Therefore, we have demonstrated an AI approach using heterogeneous Machine Learning (ML) and Deep Learning (DL) classifiers to predict PCOS among fertile patients. We used an open-source dataset of 541 patients from Kerala, India. Among all the classifiers, the final multi-stack of ML models performed best, with accuracy, precision, recall, and F1-score of 98%, 97%, 98%, and 98%, respectively. Explainable AI (XAI) techniques make model predictions understandable, interpretable, and trustworthy. Hence, we have utilized XAI techniques such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), ELI5, Qlattice, and feature importance with Random Forest for explaining tree-based classifiers. The motivation of this study is to accurately detect PCOS in patients while simultaneously proposing an automated screening architecture with explainable machine learning tools to assist medical professionals in decision-making.
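
The tabular LIME step mentioned above might be sketched as follows; the random-forest model, synthetic data, and feature names are illustrative stand-ins rather than the study's PCOS dataset.

```python
# Minimal sketch of tabular LIME on a stand-in random-forest screening model.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["BMI", "LH_FSH_ratio", "cycle_length", "follicle_count"]  # hypothetical
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no PCOS", "PCOS"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # per-feature contributions for this one patient
```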
21

Kumar, Ambeshwar, Ramachandran Manikandan, Utku Kose, Deepak Gupta, and Suresh C. Satapathy. "Doctor's Dilemma: Evaluating an Explainable Subtractive Spatial Lightweight Convolutional Neural Network for Brain Tumor Diagnosis." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 3s (October 31, 2021): 1–26. http://dx.doi.org/10.1145/3457187.

Abstract:
In Medicine Deep Learning has become an essential tool to achieve outstanding diagnosis on image data. However, one critical problem is that Deep Learning comes with complicated, black-box models so it is not possible to analyze their trust level directly. So, Explainable Artificial Intelligence (XAI) methods are used to build additional interfaces for explaining how the model has reached the outputs by moving from the input data. Of course, that's again another competitive problem to analyze if such methods are successful according to the human view. So, this paper comes with two important research efforts: (1) to build an explainable deep learning model targeting medical image analysis, and (2) to evaluate the trust level of this model via several evaluation works including human contribution. The target problem was selected as the brain tumor classification, which is a remarkable, competitive medical image-based problem for Deep Learning. In the study, MR-based pre-processed brain images were received by the Subtractive Spatial Lightweight Convolutional Neural Network (SSLW-CNN) model, which includes additional operators to reduce the complexity of classification. In order to ensure the explainable background, the model also included Class Activation Mapping (CAM). It is important to evaluate the trust level of a successful model. So, numerical success rates of the SSLW-CNN were evaluated based on the peak signal-to-noise ratio (PSNR), computational time, computational overhead, and brain tumor classification accuracy. The objective of the proposed SSLW-CNN model is to obtain faster and good tumor classification with lesser time. The results illustrate that the SSLW-CNN model provides better performance of PSNR which is enhanced by 8%, classification accuracy is improved by 33%, computation time is reduced by 19%, computation overhead is decreased by 23%, and classification time is minimized by 13%, as compared to state-of-the-art works. Because the model provided good numerical results, it was then evaluated in terms of XAI perspective by including doctor-model based evaluations such as feedback CAM visualizations, usability, expert surveys, comparisons of CAM with other XAI methods, and manual diagnosis comparison. The results show that the SSLW-CNN provides good performance on brain tumor diagnosis and ensures a trustworthy solution for the doctors.
22

Gozzi, Noemi, Edoardo Giacomello, Martina Sollini, Margarita Kirienko, Angela Ammirabile, Pierluca Lanzi, Daniele Loiacono, and Arturo Chiti. "Image Embeddings Extracted from CNNs Outperform Other Transfer Learning Approaches in Classification of Chest Radiographs." Diagnostics 12, no. 9 (August 28, 2022): 2084. http://dx.doi.org/10.3390/diagnostics12092084.

Abstract:
To identify the best transfer learning approach for the identification of the most frequent abnormalities on chest radiographs (CXRs), we used embeddings extracted from pretrained convolutional neural networks (CNNs). An explainable AI (XAI) model was applied to interpret black-box model predictions and assess its performance. Seven CNNs were trained on CheXpert. Three transfer learning approaches were thereafter applied to a local dataset. The classification results were ensembled using simple and entropy-weighted averaging. We applied Grad-CAM (an XAI model) to produce a saliency map. Grad-CAM maps were compared to manually extracted regions of interest, and the training time was recorded. The best transfer learning model was that which used image embeddings and random forest with simple averaging, with an average AUC of 0.856. Grad-CAM maps showed that the models focused on specific features of each CXR. CNNs pretrained on a large public dataset of medical images can be exploited as feature extractors for tasks of interest. The extracted image embeddings contain relevant information that can be used to train an additional classifier with satisfactory performance on an independent dataset, demonstrating it to be the optimal transfer learning strategy and overcoming the need for large private datasets, extensive computational resources, and long training times.
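
The simple and entropy-weighted averaging used to ensemble the classifiers can be sketched with made-up probability vectors; the exact weighting scheme in the paper may differ from the common variant shown here.

```python
import numpy as np

# Rows: class-probability outputs of three different models for the same radiograph (made up).
probs = np.array([
    [0.70, 0.20, 0.10],   # model A
    [0.55, 0.30, 0.15],   # model B
    [0.40, 0.35, 0.25],   # model C
])

simple_avg = probs.mean(axis=0)

# Entropy-weighted averaging: give more weight to lower-entropy (more confident)
# models. This is one common weighting choice; the paper's exact scheme may differ.
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
weights = 1.0 / (entropy + 1e-12)
weights /= weights.sum()
entropy_avg = (weights[:, None] * probs).sum(axis=0)

print("simple:", simple_avg.round(3), "entropy-weighted:", entropy_avg.round(3))
```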
23

Abdelwahab, Youmna, Mohamed Kholief, and Ahmed Ahmed Hesham Sedky. "Justifying Arabic Text Sentiment Analysis Using Explainable AI (XAI): LASIK Surgeries Case Study." Information 13, no. 11 (November 11, 2022): 536. http://dx.doi.org/10.3390/info13110536.

Abstract:
With the increasing use of machine learning across various fields to address several aims and goals, the complexity of the ML and Deep Learning (DL) approaches used to provide solutions has also increased. In the last few years, Explainable AI (XAI) methods to further justify and interpret deep learning models have been introduced across several domains and fields. While most papers have applied XAI to English and other Latin-based languages, this paper aims to explain attention-based long short-term memory (LSTM) results across Arabic Sentiment Analysis (ASA), which is considered an uncharted area in previous research. With the use of Local Interpretable Model-agnostic Explanation (LIME), we intend to further justify and demonstrate how the LSTM leads to the prediction of sentiment polarity within ASA in domain-specific Arabic texts regarding medical insights on LASIK surgery across Twitter users. In our research, the LSTM reached an accuracy of 79.1% on the proposed data set. Throughout the representation of sentiments using LIME, it demonstrated accurate results regarding how specific words contributed to the overall sentiment polarity classification. Furthermore, we compared the word count with the probability weights given across the examples, in order to further validate the LIME results in the context of ASA.
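
A minimal sketch of explaining a single prediction with LIME's text explainer; the TF-IDF plus logistic-regression classifier and the English example texts are stand-ins for the paper's attention-based LSTM and Arabic tweets.

```python
# Sketch of word-level LIME explanations for a sentiment prediction.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["the LASIK surgery went great, vision is perfect",
               "terrible pain after the operation, I regret it",
               "amazing result, highly recommend this clinic",
               "worst decision ever, my eyes are dry and sore"]   # invented examples
train_labels = [1, 0, 1, 0]                                       # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "the surgery result is great but recovery was painful",
    clf.predict_proba, num_features=5, num_samples=500)
print(explanation.as_list())   # word-level contributions to the predicted polarity
```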
24

Lu, Haohui, and Shahadat Uddin. "Explainable Stacking-Based Model for Predicting Hospital Readmission for Diabetic Patients." Information 13, no. 9 (September 15, 2022): 436. http://dx.doi.org/10.3390/info13090436.

Abstract:
Artificial intelligence is changing the practice of healthcare. While it is essential to employ such solutions, making them transparent to medical experts is more critical. Most of the previous work presented disease prediction models, but did not explain them. Many healthcare stakeholders do not have a solid foundation in these models. Treating these models as ‘black box’ diminishes confidence in their predictions. The development of explainable artificial intelligence (XAI) methods has enabled us to change the models into a ‘white box’. XAI allows human users to comprehend the results from machine learning algorithms by making them easy to interpret. For instance, the expenditures of healthcare services associated with unplanned readmissions are enormous. This study proposed a stacking-based model to predict 30-day hospital readmission for diabetic patients. We employed Random Under-Sampling to solve the imbalanced class issue, then utilised SelectFromModel for feature selection and constructed a stacking model with base and meta learners. Compared with the different machine learning models, performance analysis showed that our model can better predict readmission than other existing models. This proposed model is also explainable and interpretable. Based on permutation feature importance, the strong predictors were the number of inpatients, the primary diagnosis, discharge to home with home service, and the number of emergencies. The local interpretable model-agnostic explanations method was also employed to demonstrate explainability at the individual level. The findings for the readmission of diabetic patients could be helpful in medical practice and provide valuable recommendations to stakeholders for minimising readmission and reducing public healthcare costs.
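
The pipeline outlined above (random under-sampling, a stacking classifier, permutation feature importance) can be sketched on synthetic data standing in for the readmission records:

```python
# Sketch of under-sampling, stacking, and permutation importance on synthetic data.
import numpy as np
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)                      # imbalanced, like readmission data
X_res, y_res = RandomUnderSampler(random_state=0).fit_resample(X, y)
X_train, X_test, y_train, y_test = train_test_split(X_res, y_res, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],  # base learners
    final_estimator=LogisticRegression())                        # meta learner
stack.fit(X_train, y_train)

result = permutation_importance(stack, X_test, y_test, n_repeats=10, random_state=0)
print("strongest predictors (by index):", np.argsort(result.importances_mean)[::-1][:4])
```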
25

Madanu, Ravichandra, Maysam F. Abbod, Fu-Jung Hsiao, Wei-Ta Chen, and Jiann-Shing Shieh. "Explainable AI (XAI) Applied in Machine Learning for Pain Modeling: A Review." Technologies 10, no. 3 (June 14, 2022): 74. http://dx.doi.org/10.3390/technologies10030074.

Abstract:
Pain is a complex term that describes various sensations that create discomfort in various ways or types inside the human body. Generally, pain has consequences that range from mild to severe in different organs of the body and will depend on the way it is caused, which could be an injury, illness or medical procedures including testing, surgeries or therapies, etc. With recent advances in artificial-intelligence (AI) systems associated in biomedical and healthcare settings, the contiguity of physician, clinician and patient has shortened. AI, however, has more scope to interpret the pain associated in patients with various conditions by using any physiological or behavioral changes. Facial expressions are considered to give much information that relates with emotions and pain, so clinicians consider these changes with high importance for assessing pain. This has been achieved in recent times with different machine-learning and deep-learning models. To accentuate the future scope and importance of AI in medical field, this study reviews the explainable AI (XAI) as increased attention is given to an automatic assessment of pain. This review discusses how these approaches are applied for different pain types.
26

Ledziński, Łukasz, and Grzegorz Grześk. "Artificial Intelligence Technologies in Cardiology." Journal of Cardiovascular Development and Disease 10, no. 5 (May 6, 2023): 202. http://dx.doi.org/10.3390/jcdd10050202.

Abstract:
As the world produces exabytes of data, there is a growing need to find new methods that are more suitable for dealing with complex datasets. Artificial intelligence (AI) has significant potential to impact the healthcare industry, which is already on the road to change with the digital transformation of vast quantities of information. The implementation of AI has already achieved success in the domains of molecular chemistry and drug discoveries. The reduction in costs and in the time needed for experiments to predict the pharmacological activities of new molecules is a milestone in science. These successful applications of AI algorithms provide hope for a revolution in healthcare systems. A significant part of artificial intelligence is machine learning (ML), of which there are three main types—supervised learning, unsupervised learning, and reinforcement learning. In this review, the full scope of the AI workflow is presented, with explanations of the most-often-used ML algorithms and descriptions of performance metrics for both regression and classification. A brief introduction to explainable artificial intelligence (XAI) is provided, with examples of technologies that have developed for XAI. We review important AI implementations in cardiology for supervised, unsupervised, and reinforcement learning and natural language processing, emphasizing the used algorithm. Finally, we discuss the need to establish legal, ethical, and methodical requirements for the deployment of AI models in medicine.
27

Futia, Giuseppe, and Antonio Vetrò. "On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research." Information 11, no. 2 (February 22, 2020): 122. http://dx.doi.org/10.3390/info11020122.

Abstract:
Deep learning models contributed to reaching unprecedented results in prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights on how a specific result was achieved. In contexts where the impact of AI on human life is relevant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property but is, or in some cases will soon be, a legal requirement. Most of the available approaches to implement eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the recursive mathematical functions in deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are elements of a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while being less flexible and robust to noise compared to deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches existing in the literature, outlining their strengths and limitations, and we propose neural-symbolic integration as a cornerstone to design an AI that is closer to non-insiders' comprehension. Within such a general direction, we identify three specific challenges for future research: knowledge matching, cross-disciplinary explanations, and interactive explanations.
28

Singh, Apoorva, Husanbir Pannu, and Avleen Malhi. "Explainable information retrieval using deep learning for medical images." Computer Science and Information Systems 19, no. 1 (2022): 277–307. http://dx.doi.org/10.2298/csis201030049s.

Abstract:
Image segmentation is useful to extract valuable information for an efficient analysis of the region of interest. Often, the number of images generated in a real-life situation, such as streaming video, is large and not ideal for traditional segmentation with machine learning algorithms. This is due to the following factors: (a) numerous image features, (b) complex distributions of shapes, colors, and textures, (c) imbalanced data ratios of the underlying classes, (d) movements of the camera and objects, and (e) variations in luminance during site capture. We have therefore proposed an efficient deep learning model for image classification, and the proof of concept has been a case study on gastrointestinal images for bleeding detection. The Explainable Artificial Intelligence (XAI) module has been utilised to reverse engineer the test results for the impact of features on a given test dataset. The architecture is generally applicable to other areas of image classification. The proposed method has been compared with the state of the art, including Logistic Regression, Support Vector Machine, Artificial Neural Network, and Random Forest. It reported an F1 score of 0.76 on the real-world streaming dataset, which is comparatively better than traditional methods.
29

Ornek, Ahmet H., and Murat Ceylan. "Explainable Artificial Intelligence (XAI): Classification of Medical Thermal Images of Neonates Using Class Activation Maps." Traitement du Signal 38, no. 5 (October 31, 2021): 1271–79. http://dx.doi.org/10.18280/ts.380502.

30

Taşcı, Burak. "Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet." Diagnostics 13, no. 5 (February 23, 2023): 859. http://dx.doi.org/10.3390/diagnostics13050859.

Abstract:
Artificial intelligence models do not provide information about exactly how their predictions are reached. This lack of transparency is a major drawback. Particularly in medical applications, interest in explainable artificial intelligence (XAI), which helps to develop methods of visualizing, explaining, and analyzing deep learning models, has increased recently. With explainable artificial intelligence, it is possible to understand whether the solutions offered by deep learning techniques are safe. This paper aims to diagnose a fatal disease such as a brain tumor faster and more accurately using XAI methods. In this study, we preferred datasets that are widely used in the literature, such as the four-class Kaggle brain tumor dataset (Dataset I) and the three-class figshare brain tumor dataset (Dataset II). To extract features, a pre-trained deep learning model is chosen. DenseNet201 is used as the feature extractor in this case. The proposed automated brain tumor detection model includes five stages. First, DenseNet201 was trained on the brain MR images and the tumor area was segmented with Grad-CAM. The features were then extracted from DenseNet201 trained using the exemplar method. The extracted features were selected with the iterative neighborhood component analysis (INCA) feature selector. Finally, the selected features were classified using a support vector machine (SVM) with 10-fold cross-validation. Accuracies of 98.65% and 99.97% were obtained for Datasets I and II, respectively. The proposed model obtained higher performance than the state-of-the-art methods and can be used to aid radiologists in their diagnosis.
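
The "pretrained CNN as feature extractor plus SVM" stage can be sketched as follows, with random tensors standing in for the MR images; the exemplar splitting, INCA feature selection, and Grad-CAM steps are omitted.

```python
# Sketch of deep-feature extraction with DenseNet201 followed by an SVM with 10-fold CV.
import numpy as np
import torch
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from torchvision import models

densenet = models.densenet201(weights="IMAGENET1K_V1")
densenet.classifier = torch.nn.Identity()      # keep the 1920-dim pooled feature vector
densenet.eval()

images = torch.rand(30, 3, 224, 224)           # stand-in for preprocessed MR slices
labels = np.repeat([0, 1, 2], 10)              # stand-in tumor classes

with torch.no_grad():
    features = densenet(images).numpy()        # (30, 1920) deep features

svm = SVC(kernel="rbf")
scores = cross_val_score(svm, features, labels, cv=10)   # 10-fold CV as in the paper
print("mean accuracy on the stand-in data:", scores.mean().round(3))
```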
31

Dindorf, Carlo, Oliver Ludwig, Steven Simon, Stephan Becker, and Michael Fröhlich. "Machine Learning and Explainable Artificial Intelligence Using Counterfactual Explanations for Evaluating Posture Parameters." Bioengineering 10, no. 5 (April 24, 2023): 511. http://dx.doi.org/10.3390/bioengineering10050511.

Abstract:
Postural deficits such as hyperlordosis (hollow back) or hyperkyphosis (hunchback) are relevant health issues. Diagnoses depend on the experience of the examiner and are, therefore, often subjective and prone to errors. Machine learning (ML) methods in combination with explainable artificial intelligence (XAI) tools have proven useful for providing an objective, data-based orientation. However, only a few works have considered posture parameters, leaving the potential for more human-friendly XAI interpretations still untouched. Therefore, the present work proposes an objective, data-driven ML system for medical decision support that enables especially human-friendly interpretations using counterfactual explanations (CFs). The posture data for 1151 subjects were recorded by means of stereophotogrammetry. An expert-based classification of the subjects regarding the presence of hyperlordosis or hyperkyphosis was initially performed. Using a Gaussian process classifier, the models were trained and interpreted using CFs. The label errors were flagged and re-evaluated using confident learning. Very good classification performances for both hyperlordosis and hyperkyphosis were found, whereby the re-evaluation and correction of the test labels led to a significant improvement (MPRAUC = 0.97). A statistical evaluation showed that the CFs seemed to be plausible, in general. In the context of personalized medicine, the present study’s approach could be of importance for reducing diagnostic errors and thereby improving the individual adaptation of therapeutic measures. Likewise, it could be a basis for the development of apps for preventive posture assessment.
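
The idea of a counterfactual explanation can be illustrated with a naive search on a toy classifier; the study itself used a Gaussian process classifier and dedicated CF tooling on real stereophotogrammetric features, and the feature names below are hypothetical.

```python
# Naive counterfactual sketch: nudge one posture feature until the predicted class flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["lordotic_angle", "kyphotic_angle", "pelvic_tilt"]  # hypothetical
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0.3).astype(int)         # toy rule: large lordotic angle -> "hyperlordosis"

clf = LogisticRegression().fit(X, y)

subject = np.array([[0.8, 0.1, -0.2]])
assert clf.predict(subject)[0] == 1     # classified as hyperlordosis on the toy rule

counterfactual = subject.copy()
while clf.predict(counterfactual)[0] == 1:
    counterfactual[0, 0] -= 0.05        # reduce the lordotic angle step by step

delta = (counterfactual - subject)[0]
print("smallest change that flips the prediction:",
      dict(zip(feature_names, delta.round(2))))
```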
32

Laios, Alexandros, Evangelos Kalampokis, Racheal Johnson, Sarika Munot, Amudha Thangavelu, Richard Hutson, Tim Broadhead, et al. "Factors Predicting Surgical Effort Using Explainable Artificial Intelligence in Advanced Stage Epithelial Ovarian Cancer." Cancers 14, no. 14 (July 15, 2022): 3447. http://dx.doi.org/10.3390/cancers14143447.

Abstract:
(1) Background: Surgical cytoreduction for epithelial ovarian cancer (EOC) is a complex procedure. Encompassed within the performance skills to achieve surgical precision, intra-operative surgical decision-making remains a core feature. The use of eXplainable Artificial Intelligence (XAI) could potentially interpret the influence of human factors on the surgical effort for the cytoreductive outcome in question; (2) Methods: The retrospective cohort study evaluated 560 consecutive EOC patients who underwent cytoreductive surgery between January 2014 and December 2019 in a single public institution. The eXtreme Gradient Boosting (XGBoost) and Deep Neural Network (DNN) algorithms were employed to develop the predictive model, including patient- and operation-specific features, and novel features reflecting human factors in surgical heuristics. The precision, recall, F1 score, and area under curve (AUC) were compared between both training algorithms. The SHapley Additive exPlanations (SHAP) framework was used to provide global and local explainability for the predictive model; (3) Results: A surgical complexity score (SCS) cut-off value of five was calculated using a Receiver Operator Characteristic (ROC) curve, above which the probability of incomplete cytoreduction was more likely (area under the curve [AUC] = 0.644; 95% confidence interval [CI] = 0.598–0.69; sensitivity and specificity 34.1%, 86.5%, respectively; p = 0.000). The XGBoost outperformed the DNN assessment for the prediction of the above threshold surgical effort outcome (AUC = 0.77; 95% [CI] 0.69–0.85; p < 0.05 vs. AUC 0.739; 95% [CI] 0.655–0.823; p < 0.95). We identified “turning points” that demonstrated a clear preference towards above the given cut-off level of surgical effort; in consultant surgeons with <12 years of experience, age <53 years old, who, when attempting primary cytoreductive surgery, recorded the presence of ascites, an Intraoperative Mapping of Ovarian Cancer score >4, and a Peritoneal Carcinomatosis Index >7, in a surgical environment with the optimization of infrastructural support. (4) Conclusions: Using XAI, we explain how intra-operative decisions may consider human factors during EOC cytoreduction alongside factual knowledge, to maximize the magnitude of the selected trade-off in effort. XAI techniques are critical for a better understanding of Artificial Intelligence frameworks, and to enhance their incorporation in medical applications.
APA, Harvard, Vancouver, ISO, and other styles
33

Mukhtorov, Doniyorjon, Madinakhon Rakhmonova, Shakhnoza Muksimova, and Young-Im Cho. "Endoscopic Image Classification Based on Explainable Deep Learning." Sensors 23, no. 6 (March 16, 2023): 3176. http://dx.doi.org/10.3390/s23063176.

Full text
Abstract:
Deep learning has achieved remarkably positive results and impacts on medical diagnostics in recent years. Across many proposed applications, deep learning has reached sufficient accuracy for practical implementation; however, the algorithms are black boxes that are hard to understand, and model decisions are often made without reason or explanation. To reduce this gap, explainable artificial intelligence (XAI) offers a huge opportunity to receive informed decision support from deep learning models and opens the black box of the method. We developed an explainable deep learning method based on ResNet152 combined with Grad-CAM for endoscopy image classification. We used the open-source KVASIR dataset, which consists of a total of 8000 wireless capsule images. Combining heat maps of the classification results with an efficient augmentation method achieved high performance, with 98.28% training and 93.46% validation accuracy for medical image classification.
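A minimal Grad-CAM over a torchvision ResNet152, in the spirit of the approach described above; this is an illustrative sketch (assuming a recent torchvision/PyTorch), with a random tensor standing in for a preprocessed endoscopy image, not the paper's code.

    # Hand-rolled Grad-CAM on the last convolutional block of ResNet152.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet152(weights=models.ResNet152_Weights.DEFAULT).eval()
    acts, grads = {}, {}
    layer = model.layer4   # last conv block

    layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

    x = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed endoscopy image
    score = model(x)[0].max()            # score of the predicted class
    score.backward()

    weights = grads["v"].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
    cam = F.relu((weights * acts["v"]).sum(dim=1))        # weighted activation sum
    cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
    print(cam.shape)   # heat map to overlay on the input image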
APA, Harvard, Vancouver, ISO, and other styles
34

Obayya, Marwa, Nadhem Nemri, Mohamed K. Nour, Mesfer Al Duhayyim, Heba Mohsen, Mohammed Rizwanullah, Abu Sarwar Zamani, and Abdelwahed Motwakel. "Explainable Artificial Intelligence Enabled TeleOphthalmology for Diabetic Retinopathy Grading and Classification." Applied Sciences 12, no. 17 (August 31, 2022): 8749. http://dx.doi.org/10.3390/app12178749.

Full text
Abstract:
Telehealth connects patients to vital healthcare services via remote monitoring, wireless communications, videoconferencing, and electronic consults. By increasing access to specialists and physicians, telehealth helps ensure that patients receive the proper care at the right time and in the right place. Teleophthalmology is a branch of telemedicine that provides eye-care services using digital medical equipment and telecommunication technologies. Multimedia computing with Explainable Artificial Intelligence (XAI) for telehealth has the potential to revolutionize various aspects of our society, but several technical challenges must be resolved before this potential can be realized. Advances in artificial intelligence methods and tools reduce waste and wait times, provide service efficiency and better insights, and increase speed, accuracy, and productivity in medicine and telehealth. Therefore, this study develops an XAI-enabled teleophthalmology model for diabetic retinopathy grading and classification (XAITO-DRGC). The proposed XAITO-DRGC model utilizes OphthoAI IoMT headsets to enable remote monitoring of diabetic retinopathy (DR). To accomplish this, the XAITO-DRGC model applies median filtering (MF) and contrast enhancement as a pre-processing step. In addition, the XAITO-DRGC model applies U-Net-based image segmentation and a SqueezeNet-based feature extractor. Moreover, the Archimedes optimization algorithm (AOA) with a bidirectional gated recurrent convolutional unit (BGRCU) is exploited for DR detection and classification. The XAITO-DRGC method was experimentally validated using a benchmark dataset, and the outcomes were assessed from distinct perspectives. Extensive comparison studies confirmed the improvements of the XAITO-DRGC model over recent approaches.
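Only the pre-processing stage (median filtering plus contrast enhancement) lends itself to a generic sketch; the snippet below uses OpenCV with a placeholder file name and is not the authors' XAITO-DRGC implementation.

    # Pre-processing sketch: median filtering + CLAHE contrast enhancement.
    import cv2

    img = cv2.imread("fundus_example.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
    denoised = cv2.medianBlur(img, ksize=5)                        # median filtering (MF)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))    # contrast enhancement
    enhanced = clahe.apply(denoised)
    cv2.imwrite("fundus_preprocessed.png", enhanced)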
APA, Harvard, Vancouver, ISO, and other styles
35

Mohammed, K., and G. George. "IDENTIFICATION AND MITIGATION OF BIAS USING EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) FOR BRAIN STROKE PREDICTION." Open Journal of Physical Science (ISSN: 2734-2123) 4, no. 1 (April 21, 2023): 19–33. http://dx.doi.org/10.52417/ojps.v4i1.457.

Full text
Abstract:
Stroke is a time-sensitive illness that, without rapid care and diagnosis, can have detrimental effects on the patient. Caretakers need to enhance patient management by procedurally mining and storing the patient's medical records because of the increasing synergy between technology and medical diagnosis. Therefore, it is essential to explore how these risk variables interconnect with each other in patient health records and to understand how they each individually affect stroke prediction. Using explainable artificial intelligence (XAI) techniques, we exposed the imbalance in the dataset and improved our model’s accuracy. We showed how oversampling improves the model’s performance and used XAI techniques to further investigate its decisions and oversample a specific feature for even better performance. We propose explainable AI as a technique to improve model performance and to serve as a measure of trustworthiness for practitioners, and we used four evaluation metrics: recall, precision, accuracy, and F1 score. The F1 score with the original data was 0% due to imbalanced data, since the non-stroke samples significantly outnumbered the stroke samples; the second model had an F1 score of 81.78%. We then used the XAI techniques Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to analyse how the model came to a decision, which led us to investigate and oversample a specific feature, yielding a new F1 score of 83.34%. We suggest the use of explainable AI as a technique to further investigate a model’s method for decision-making.
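A hedged sketch of the imbalance-handling and LIME steps, using synthetic data and generic feature names rather than the study's dataset:

    # Oversample the minority "stroke" class, then explain one prediction with LIME.
    import numpy as np
    from imblearn.over_sampling import RandomOverSampler
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(2)
    feature_names = ["age", "avg_glucose_level", "bmi", "hypertension"]
    X = rng.normal(size=(1000, 4))
    y = (rng.random(1000) < 0.05).astype(int)      # ~5% stroke -> heavy imbalance

    X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X, y)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)

    explainer = LimeTabularExplainer(X_res, feature_names=feature_names,
                                     class_names=["no stroke", "stroke"],
                                     mode="classification")
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
    print(exp.as_list())   # per-feature contributions for this patient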
APA, Harvard, Vancouver, ISO, and other styles
36

Petrauskas, Vytautas, Raimundas Jasinevicius, Egidijus Kazanavicius, and Zygimantas Meskauskas. "On the Extension of the XAI-based Medical Diagnostics DSS for the Treatment and Care of Patients." International Journal of Scientific and Research Publications (IJSRP) 11, no. 11 (November 12, 2021): 248–53. http://dx.doi.org/10.29322/ijsrp.11.11.2021.p11931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ali, Sikandar, Abdullah, Tagne Poupi Theodore Armand, Ali Athar, Ali Hussain, Maisam Ali, Muhammad Yaseen, Moon-Il Joo, and Hee-Cheol Kim. "Metaverse in Healthcare Integrated with Explainable AI and Blockchain: Enabling Immersiveness, Ensuring Trust, and Providing Patient Data Security." Sensors 23, no. 2 (January 4, 2023): 565. http://dx.doi.org/10.3390/s23020565.

Full text
Abstract:
Digitization and automation have always had an immense impact on healthcare. It embraces every new and advanced technology. Recently the world has witnessed the prominence of the metaverse which is an emerging technology in digital space. The metaverse has huge potential to provide a plethora of health services seamlessly to patients and medical professionals with an immersive experience. This paper proposes the amalgamation of artificial intelligence and blockchain in the metaverse to provide better, faster, and more secure healthcare facilities in digital space with a realistic experience. Our proposed architecture can be summarized as follows. It consists of three environments, namely the doctor’s environment, the patient’s environment, and the metaverse environment. The doctors and patients interact in a metaverse environment assisted by blockchain technology which ensures the safety, security, and privacy of data. The metaverse environment is the main part of our proposed architecture. The doctors, patients, and nurses enter this environment by registering on the blockchain and they are represented by avatars in the metaverse environment. All the consultation activities between the doctor and the patient will be recorded and the data, i.e., images, speech, text, videos, clinical data, etc., will be gathered, transferred, and stored on the blockchain. These data are used for disease prediction and diagnosis by explainable artificial intelligence (XAI) models. The GradCAM and LIME approaches of XAI provide logical reasoning for the prediction of diseases and ensure trust, explainability, interpretability, and transparency regarding the diagnosis and prediction of diseases. Blockchain technology provides data security for patients while enabling transparency, traceability, and immutability regarding their data. These features of blockchain ensure trust among the patients regarding their data. Consequently, this proposed architecture ensures transparency and trust regarding both the diagnosis of diseases and the data security of the patient. We also explored the building block technologies of the metaverse. Furthermore, we also investigated the advantages and challenges of a metaverse in healthcare.
APA, Harvard, Vancouver, ISO, and other styles
38

Abir, Wahidul Hasan, Md Fahim Uddin, Faria Rahman Khanam, Tahia Tazin, Mohammad Monirujjaman Khan, Mehedi Masud, and Sultan Aljahdali. "Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer Learning Method." Computational Intelligence and Neuroscience 2022 (April 27, 2022): 1–14. http://dx.doi.org/10.1155/2022/5140148.

Full text
Abstract:
White blood cells (WBCs) are blood cells that fight infections and diseases as a part of the immune system. They are also known as “defender cells.” But the imbalance in the number of WBCs in the blood can be hazardous. Leukemia is the most common blood cancer caused by an overabundance of WBCs in the immune system. Acute lymphocytic leukemia (ALL) usually occurs when the bone marrow creates many immature WBCs that destroy healthy cells. People of all ages, including children and adolescents, can be affected by ALL. The rapid proliferation of atypical lymphocyte cells can cause a reduction in new blood cells and increase the chances of death in patients. Therefore, early and precise cancer detection can help with better therapy and a higher survival probability in the case of leukemia. However, diagnosing ALL is time-consuming and complicated, and manual analysis is expensive, with subjective and error-prone outcomes. Thus, detecting normal and malignant cells reliably and accurately is crucial. For this reason, automatic detection using computer-aided diagnostic models can help doctors effectively detect early leukemia. The entire approach may be automated using image processing techniques, reducing physicians’ workload and increasing diagnosis accuracy. The impact of deep learning (DL) on medical research has recently proven quite beneficial, offering new avenues and possibilities in the healthcare domain for diagnostic techniques. However, to make that happen soon in DL, the entire community must overcome the explainability limit. Because of the black box operation’s shortcomings in artificial intelligence (AI) models’ decisions, there is a lack of liability and trust in the outcomes. But explainable artificial intelligence (XAI) can solve this problem by interpreting the predictions of AI systems. This study emphasizes leukemia, specifically ALL. The proposed strategy recognizes acute lymphoblastic leukemia as an automated procedure that applies different transfer learning models to classify ALL. Hence, using local interpretable model-agnostic explanations (LIME) to assure validity and reliability, this method also explains the cause of a specific classification. The proposed method achieved 98.38% accuracy with the InceptionV3 model. Experimental results were found between different transfer learning methods, including ResNet101V2, VGG19, and InceptionResNetV2, later verified with the LIME algorithm for XAI, where the proposed method performed the best. The obtained results and their reliability demonstrate that it can be preferred in identifying ALL, which will assist medical examiners.
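The transfer-learning-plus-LIME pipeline can be approximated as follows; the sketch uses an ImageNet-pretrained InceptionV3, a random array standing in for a blood-smear image, and omits actual training, so it illustrates the mechanics only.

    # Transfer-learning classifier + LIME image explanation (illustrative).
    import numpy as np
    import tensorflow as tf
    from lime import lime_image

    base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                             input_shape=(299, 299, 3), pooling="avg")
    model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(train_images, train_labels, epochs=...)   # training data omitted here

    image = np.random.rand(299, 299, 3)   # stand-in blood-smear image
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(image,
                                             lambda imgs: model.predict(np.array(imgs)),
                                             top_labels=1, num_samples=100)
    print(explanation.top_labels)   # superpixel explanation of the top class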
APA, Harvard, Vancouver, ISO, and other styles
39

Žlahtič, Bojan, Jernej Završnik, Helena Blažun Vošner, Peter Kokol, David Šuran, and Tadej Završnik. "Agile Machine Learning Model Development Using Data Canyons in Medicine: A Step towards Explainable Artificial Intelligence and Flexible Expert-Based Model Improvement." Applied Sciences 13, no. 14 (July 19, 2023): 8329. http://dx.doi.org/10.3390/app13148329.

Full text
Abstract:
Over the past few decades, machine learning has emerged as a valuable tool in the field of medicine, driven by the accumulation of vast amounts of medical data and the imperative to harness this data for the betterment of humanity. However, many of the prevailing machine learning algorithms in use today are characterized as black-box models, lacking transparency in their decision-making processes and are often devoid of clear visualization capabilities. The transparency of these machine learning models impedes medical experts from effectively leveraging them due to the high-stakes nature of their decisions. Consequently, the need for explainable artificial intelligence (XAI) that aims to address the demand for transparency in the decision-making mechanisms of black-box algorithms has arisen. Alternatively, employing white-box algorithms can empower medical experts by allowing them to contribute their knowledge to the decision-making process and obtain a clear and transparent output. This approach offers an opportunity to personalize machine learning models through an agile process. A novel white-box machine learning algorithm known as Data canyons was employed as a transparent and robust foundation for the proposed solution. By providing medical experts with a web framework where their expertise is transferred to a machine learning model and enabling the utilization of this process in an agile manner, a symbiotic relationship is fostered between the domains of medical expertise and machine learning. The flexibility to manipulate the output machine learning model and visually validate it, even without expertise in machine learning, establishes a crucial link between these two expert domains.
APA, Harvard, Vancouver, ISO, and other styles
40

Costa, Yandre M. G., Sergio A. Silva, Lucas O. Teixeira, Rodolfo M. Pereira, Diego Bertolini, Alceu S. Britto, Luiz S. Oliveira, and George D. C. Cavalcanti. "COVID-19 Detection on Chest X-ray and CT Scan: A Review of the Top-100 Most Cited Papers." Sensors 22, no. 19 (September 26, 2022): 7303. http://dx.doi.org/10.3390/s22197303.

Full text
Abstract:
Since the beginning of the COVID-19 pandemic, many works have been published proposing solutions to the problems that arose in this scenario. In this vein, one of the topics that attracted the most attention is the development of computer-based strategies to detect COVID-19 from thoracic medical imaging, such as chest X-ray (CXR) and computerized tomography scan (CT scan). By searching for works already published on this theme, we can easily find thousands of them. This is partly explained by the fact that the most severe worldwide pandemic emerged amid the technological advances recently achieved, and also considering the technical facilities to deal with the large amount of data produced in this context. Even though several of these works describe important advances, we cannot overlook the fact that others only use well-known methods and techniques without a more relevant and critical contribution. Hence, differentiating the works with the most relevant contributions is not a trivial task. The number of citations obtained by a paper is probably the most straightforward and intuitive way to verify its impact on the research community. Aiming to help researchers in this scenario, we present a review of the top-100 most cited papers in this field of investigation according to the Google Scholar search engine. We evaluate the distribution of the top-100 papers taking into account some important aspects, such as the type of medical imaging explored, learning settings, segmentation strategy, explainable artificial intelligence (XAI), and finally, the dataset and code availability.
APA, Harvard, Vancouver, ISO, and other styles
41

Hussain, Sardar Mehboob, Domenico Buongiorno, Nicola Altini, Francesco Berloco, Berardino Prencipe, Marco Moschetta, Vitoantonio Bevilacqua, and Antonio Brunetti. "Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence." Applied Sciences 12, no. 12 (June 19, 2022): 6230. http://dx.doi.org/10.3390/app12126230.

Full text
Abstract:
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks including classification and staging of the various diseases. The 3D tomosynthesis imaging technique adds value to the CAD systems in diagnosis and classification of the breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify the lesion shapes to the respective classes using a similar imaging method. However, not only is the black box nature of these CNN models questionable in the healthcare domain, but so is the morphology-based cancer classification, concerning the clinicians. As a result, this study proposes both a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for the tomosynthesis breast lesion images. In this study, the authors exploit eight pretrained CNN architectures for the classification task on the previously extracted regions of interest containing the lesions. Additionally, the study also unleashes the black box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms, Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, i.e., t-SNE and UMAP, are employed to investigate the pretrained models’ behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework by yielding a mean area under the curve of 98.2%. The explainability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of the Grad-CAM and LIME methods, which can provide useful insights towards explainable CAD systems.
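The mathematical-structure-based part of the study (t-SNE and UMAP on pretrained-CNN features) can be sketched as below; the feature matrix and class labels are random stand-ins, not tomosynthesis data.

    # Project CNN features into 2-D with t-SNE and UMAP to inspect class clustering.
    import numpy as np
    from sklearn.manifold import TSNE
    import umap   # from the umap-learn package

    rng = np.random.default_rng(3)
    features = rng.normal(size=(400, 512))   # e.g. penultimate-layer CNN features
    labels = rng.integers(0, 3, 400)         # three hypothetical lesion-shape classes

    tsne_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
    umap_2d = umap.UMAP(n_components=2, random_state=0).fit_transform(features)
    print(tsne_2d.shape, umap_2d.shape)      # (400, 2) each; plot coloured by labels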
APA, Harvard, Vancouver, ISO, and other styles
42

Maqsood, Sarmad, Robertas Damaševičius, and Rytis Maskeliūnas. "Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM." Medicina 58, no. 8 (August 12, 2022): 1090. http://dx.doi.org/10.3390/medicina58081090.

Full text
Abstract:
Background and Objectives: Clinical diagnosis has become very significant in today’s health system. The most serious disease and the leading cause of mortality globally is brain cancer, which is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. As a result, a precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, linear contrast stretching is used to determine the edges in the source image. In the second step, a custom 17-layered deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method is used along with a multiclass support vector machine (M-SVM) for the selection of the best features. In the final step, M-SVM is used for brain tumor classification, which identifies meningioma, glioma, and pituitary images. Results: The proposed method was demonstrated on the BraTS 2018 and Figshare datasets. The experimental study shows that the proposed brain tumor detection and classification method outperforms other methods both visually and quantitatively, obtaining accuracies of 97.47% and 98.92%, respectively. Finally, we adopt an eXplainable Artificial Intelligence (XAI) method to explain the results. Conclusions: Our proposed approach for brain tumor detection and classification has outperformed prior methods. These findings demonstrate that the proposed approach achieved higher performance in both visual and quantitative evaluation, with improved accuracy.
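Steps three and five (MobileNetV2 feature extraction and multiclass SVM classification) can be sketched as follows; the segmentation network and entropy-based feature selection are omitted, and the images and labels below are placeholders, not the BraTS or Figshare data.

    # MobileNetV2 feature extraction followed by a multiclass SVM (illustrative).
    import numpy as np
    import tensorflow as tf
    from sklearn.svm import SVC

    extractor = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                                  input_shape=(224, 224, 3), pooling="avg")
    images = np.random.rand(60, 224, 224, 3).astype("float32")   # stand-in MRI slices
    labels = np.random.randint(0, 3, 60)   # 0=meningioma, 1=glioma, 2=pituitary (toy)

    scaled = tf.keras.applications.mobilenet_v2.preprocess_input(images * 255.0)
    features = extractor.predict(scaled)                 # (60, 1280) feature vectors
    svm = SVC(kernel="rbf", decision_function_shape="ovo").fit(features, labels)
    print(svm.predict(features[:5]))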
APA, Harvard, Vancouver, ISO, and other styles
43

Eder, Matthias, Emanuel Moser, Andreas Holzinger, Claire Jean-Quartier, and Fleur Jeanquartier. "Interpretable Machine Learning with Brain Image and Survival Data." BioMedInformatics 2, no. 3 (September 6, 2022): 492–510. http://dx.doi.org/10.3390/biomedinformatics2030031.

Full text
Abstract:
Recent developments in research on artificial intelligence (AI) in medicine deal with the analysis of image data such as Magnetic Resonance Imaging (MRI) scans to support the decision-making of medical personnel. For this purpose, machine learning (ML) algorithms are often used, which do not explain the internal decision-making process at all. Thus, it is often difficult to validate or interpret the results of the applied AI methods. This manuscript aims to overcome this problem by using methods of explainable AI (XAI) to interpret the decision-making of an ML algorithm in the use case of predicting the survival rate of patients with brain tumors based on MRI scans. Therefore, we explore the analysis of brain images together with survival data to predict survival in gliomas with a focus on improving the interpretability of the results. Using the Brain Tumor Segmentation dataset BraTS 2020, we used a well-validated dataset for evaluation and relied on a convolutional neural network structure to improve the explainability of important features by adding Shapley overlays. The trained network models were used to evaluate SHapley Additive exPlanations (SHAP) directly and were not optimized for accuracy. The resulting overfitting of some network structures is therefore seen as a use case of the presented interpretation method. It is shown that the network structure can be validated by experts using visualizations, thus making the decision-making of the method interpretable. Our study highlights the feasibility of combining explainers with 3D voxels and also the fact that the interpretation of prediction results significantly supports the evaluation of results. The implementation in Python is available on GitLab as “XAIforBrainImgSurv”.
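A minimal sketch of Shapley overlays on a CNN, under simplifying assumptions (2-D slices instead of 3-D voxels, random data, an untrained toy network); SHAP's GradientExplainer stands in here for the explainer used with the trained models, and version compatibility between shap and TensorFlow may vary.

    # SHAP gradient attributions for a small image CNN (illustrative only).
    import numpy as np
    import tensorflow as tf
    import shap

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),   # e.g. short vs. long survival
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    background = np.random.rand(20, 64, 64, 1).astype("float32")   # stand-in MRI slices
    samples = np.random.rand(3, 64, 64, 1).astype("float32")

    explainer = shap.GradientExplainer(model, background)
    shap_values = explainer.shap_values(samples)
    # shap.image_plot(shap_values, samples)   # overlay the attributions on the inputs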
APA, Harvard, Vancouver, ISO, and other styles
44

Chadaga, Krishnaraj, Srikanth Prabhu, Vivekananda Bhat, Niranjana Sampathila, Shashikiran Umakanth, and Rajagopala Chadaga. "A Decision Support System for Diagnosis of COVID-19 from Non-COVID-19 Influenza-like Illness Using Explainable Artificial Intelligence." Bioengineering 10, no. 4 (March 31, 2023): 439. http://dx.doi.org/10.3390/bioengineering10040439.

Full text
Abstract:
The coronavirus pandemic emerged in early 2020 and turned out to be deadly, killing a vast number of people all around the world. Fortunately, vaccines have been discovered, and they seem effectual in controlling the severe prognosis induced by the virus. The reverse transcription-polymerase chain reaction (RT-PCR) test is the current golden standard for diagnosing different infectious diseases, including COVID-19; however, it is not always accurate. Therefore, it is extremely crucial to find an alternative diagnosis method which can support the results of the standard RT-PCR test. Hence, a decision support system has been proposed in this study that uses machine learning and deep learning techniques to predict the COVID-19 diagnosis of a patient using clinical, demographic and blood markers. The patient data used in this research were collected from two Manipal hospitals in India and a custom-made, stacked, multi-level ensemble classifier has been used to predict the COVID-19 diagnosis. Deep learning techniques such as deep neural networks (DNN) and one-dimensional convolutional networks (1D-CNN) have also been utilized. Further, explainable artificial intelligence (XAI) techniques such as Shapley additive values (SHAP), ELI5, local interpretable model explainer (LIME), and QLattice have been used to make the models more precise and understandable. Among all of the algorithms, the multi-level stacked model obtained an excellent accuracy of 96%. The precision, recall, f1-score and AUC obtained were 94%, 95%, 94% and 98%, respectively. The models can be used as a decision support system for the initial screening of coronavirus patients and can also help ease the existing burden on medical infrastructure.
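A hedged approximation of a stacked ensemble explained with a model-agnostic SHAP explainer; the features, labels, and base learners below are illustrative and do not reproduce the authors' multi-level model or clinical data.

    # Two-level stacking classifier + model-agnostic SHAP explanations (sketch).
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, 6))                  # toy clinical/demographic/blood markers
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy COVID-19 vs. non-COVID ILI label

    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                    ("knn", KNeighborsClassifier())],
        final_estimator=LogisticRegression(),
    ).fit(X, y)

    explainer = shap.KernelExplainer(stack.predict_proba, shap.sample(X, 50))
    shap_values = explainer.shap_values(X[:5])     # local explanations for 5 patients
    print(np.array(shap_values).shape)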
APA, Harvard, Vancouver, ISO, and other styles
45

Elbagoury, Bassant M., Luige Vladareanu, Victor Vlădăreanu, Abdel Badeeh Salem, Ana-Maria Travediu, and Mohamed Ismail Roushdy. "A Hybrid Stacked CNN and Residual Feedback GMDH-LSTM Deep Learning Model for Stroke Prediction Applied on Mobile AI Smart Hospital Platform." Sensors 23, no. 7 (March 27, 2023): 3500. http://dx.doi.org/10.3390/s23073500.

Full text
Abstract:
Artificial intelligence (AI) techniques for intelligent mobile computing in healthcare have opened up new opportunities in healthcare systems. Combining AI techniques with the existing Internet of Medical Things (IoMT) will enhance the quality of care that patients receive at home remotely and support the successful establishment of smart living environments. Building a real mobile AI system in an integrated smart hospital environment is a challenging problem due to the complexities of receiving IoT medical sensor data, analysing those data, programming the deep learning algorithms for the mobile AI engine, and handling AI-based cloud computing, especially in real-time AI environments. In this paper, we propose a new mobile AI smart hospital platform architecture for stroke prediction and emergencies. In addition, this research is focused on developing and testing different modules of integrated AI software based on an XAI architecture, for the mobile health app either as an independent expert system or connected with a simulated environment of an AI-cloud-based solution. The novelty lies in the integrated architecture and the results obtained in our previous works, and in this extended research on hybrid GMDH and LSTM deep learning models for the proposed artificial intelligence and IoMT engine for mobile health edge computing technology. Its main goal is to predict heart–stroke disease. Current research is still missing a mobile AI system for heart/brain stroke prediction during patient emergency cases. This research work implements AI algorithms for stroke prediction and diagnosis. The hybrid AI in connected health is based on a stacked CNN and group method of data handling (GMDH) predictive analytics model, enhanced with an LSTM deep learning module for biomedical signal prediction. The techniques developed depend on a dataset of electromyography (EMG) signals, which provides a significant source of information for the identification of normal and abnormal motions in a stroke scenario. The resulting artificial intelligence mHealth app is an innovation beyond the state of the art, and the proposed techniques achieve high accuracy: the stacked CNN reaches almost 98% for stroke diagnosis. The GMDH neural network proves to be a good technique for monitoring the EMG signal of the same patient case, with an average accuracy of 98.60% to an average of 96.68% for signal prediction. Moreover, extending the GMDH model with a hybrid LSTM-and-dense-layers deep learning model significantly improved the prediction results, which reach an average of 99%.
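The LSTM signal-prediction module can be illustrated in isolation as below, using a synthetic signal in place of EMG recordings; the stacked CNN and GMDH components of the hybrid model are not reproduced here.

    # LSTM that predicts the next signal sample from a sliding window (toy sketch).
    import numpy as np
    import tensorflow as tf

    signal = np.sin(np.linspace(0, 100, 2000)) + 0.1 * np.random.randn(2000)  # fake EMG
    window = 50
    X = np.array([signal[i:i + window] for i in range(len(signal) - window)])[..., None]
    y = signal[window:]

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(window, 1)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    print(model.predict(X[:1]))   # next-sample prediction for the first window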
APA, Harvard, Vancouver, ISO, and other styles
46

Hashimi, Saeed M., Guozhong Huang, Anthony Maxwell, and Robert G. Birch. "DNA Gyrase from the Albicidin Producer Xanthomonas albilineans Has Multiple-Antibiotic-Resistance and Unusual Enzymatic Properties." Antimicrobial Agents and Chemotherapy 52, no. 4 (February 11, 2008): 1382–90. http://dx.doi.org/10.1128/aac.01551-07.

Full text
Abstract:
ABSTRACT The sugarcane pathogen Xanthomonas albilineans produces a family of antibiotics and phytotoxins termed albicidins, which inhibit plant and bacterial DNA gyrase supercoiling activity, with a 50% inhibitory concentration (50 nM) comparable to those of coumarins and quinolones. Here we show that X. albilineans has an unusual, antibiotic-resistant DNA gyrase. The X. albilineans gyrA and gyrB genes are not clustered with previously described albicidin biosynthesis and self-protection genes. The GyrA and GyrB products differ from Escherichia coli homologues through several insertions and through changes in several amino acid residues implicated in quinolone and coumarin resistance. Reconstituted X. albilineans DNA gyrase showed 20- to 25-fold-higher resistance than E. coli DNA gyrase to albicidin and ciprofloxacin and 8-fold-higher resistance to novobiocin in the supercoiling assay. The X. albilineans DNA gyrase is unusual in showing a high degree of distributive supercoiling and little DNA relaxation activity. X. albilineans GyrA (XaA) forms a functional gyrase heterotetramer with E. coli GyrB (EcB) and can account for albicidin and quinolone resistance and low levels of relaxation activity. XaB probably contributes to both coumarin resistance and the distributive supercoiling pattern. Although XaB shows fewer apparent changes relative to EcB, the EcA·XaB hybrid relaxed DNA in the presence or absence of ATP and was unable to supercoil. A fuller understanding of structural differences between albicidin-sensitive and -resistant gyrases may provide new clues into features of the enzyme amenable to interference by novel antibiotics.
APA, Harvard, Vancouver, ISO, and other styles
47

Ruiz Torres, Santiago. "El rito romano en la Segovia medieval: catalogación y análisis de unos fragmentos litúrgicos (siglos XII-XVI)." Hispania Sacra 62, no. 126 (October 26, 2010): 407–55. http://dx.doi.org/10.3989/hs.2010.v62.i126.254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Watly, Joanna, Aleksandra Hecel, Paulina Kolkowska, Henryk Kozlowski, and Magdalena Rowinska-Zyrek. "Poly-Xaa Sequences in Proteins - Biological Role and Interactions with Metal Ions: Chemical and Medical Aspects." Current Medicinal Chemistry 25, no. 1 (January 22, 2018): 22–48. http://dx.doi.org/10.2174/0929867324666170428104928.

Full text
Abstract:
Background: The understanding of the bioinorganic and coordination chemistry of metalloproteins containing unusual poly-Xaa sequences, in which a single amino acid is repeated consecutively, is crucial for describing their metal binding-structure-function relationship, and therefore also crucial for understanding their medicinal potential. To the best of our knowledge, this is the first systematic review on metal complexes with polyXaa sequences. Methods: We performed a thorough search of high quality peer reviewed literature on poly-Xaa type of sequences in proteins, focusing on their biological importance and on their interactions with metal ions. Results: 228 papers were included in the review. More than 70% of them discussed the role of metal complexes with the studied types of sequences. In this work, we showed numerous medically important and chemically fascinating examples of possible ‘poly-Xaa' metal binding sequences. Conclusion: Poly-Xaa sequences, in which a single amino acid is repeated consecutively, are often not only tempting binding sites for metal ions, but very often, together with the bound metal, serve as structure determinants for entire proteins. This, in turn, can have consequences for the whole organism. Such sequences in bacterial metal chaperones can be a possible target for novel, antimicrobial therapeutics.
APA, Harvard, Vancouver, ISO, and other styles
49

Singh, Nripendra Vikram, Jyotsana Sharma, Manjushri Dinkar Dongare, Ramakant Gharate, Shivkumar Chinchure, Manjunatha Nanjundappa, Shilpa Parashuram, et al. "In Vitro and In Planta Antagonistic Effect of Endophytic Bacteria on Blight Causing Xanthomonas axonopodis pv. punicae: A Destructive Pathogen of Pomegranate." Microorganisms 11, no. 1 (December 20, 2022): 5. http://dx.doi.org/10.3390/microorganisms11010005.

Full text
Abstract:
Pomegranate bacterial blight caused by Xanthomonas axonopodis pv. punicae (Xap) is a highly destructive disease. In the absence of host resistance to the disease, we aimed to evaluate the biocontrol potential of endophytic bacteria against Xap. Thus, in this study, we isolated endophytes from pomegranate plants, identified them on the basis of 16S rDNA sequencing, tested them against Xap, and estimated the endophyte-mediated host defense response. The population of isolated endophytes ranged from 3 × 10⁶ to 8 × 10⁷ CFU/g tissue. Furthermore, 26 isolates were evaluated for their biocontrol activity against Xap, and all the tested isolates significantly reduced the in vitro growth of Xap (15.65% ± 1.25% to 56.35% ± 2.66%) as compared to control. These isolates could reduce fuscan, an uncharacterized factor of Xap involved in its aggressiveness. Lower blight incidence (11.6%) and severity (6.1%) were recorded in plants sprayed with endophytes 8 days ahead of the Xap spray (Set-III), as compared to control plants which were not exposed to endophytes (77.33% and 50%, respectively) during the in vivo evaluation. Moreover, significantly high phenolic and chlorophyll contents were estimated in endophyte-treated plants as compared to control. The promising isolates mostly belonged to the genera Bacillus, Burkholderia, and Lysinibacillus, and they were deposited in the National Agriculturally Important Microbial Culture Collection, India.
APA, Harvard, Vancouver, ISO, and other styles
50

Ngolet, L. O., M. Moyen Engoba, Innocent Kocko, Alexis Elira Dokekias, Jean-Vivien Mombouli, and Georges Marius Moyen. "Sickle-Cell Disease Healthcare Cost in Africa: Experience of the Congo." Anemia 2016 (2016): 1–5. http://dx.doi.org/10.1155/2016/2046535.

Full text
Abstract:
Background. Lack of medical coverage in Africa leads to inappropriate care that has an impact on the mortality rate. In this study, we aimed to evaluate the cost of severe acute sickle-cell related complications in Brazzaville. Methods. A retrospective study was conducted in 2014 in the Paediatric Intensive Care Unit. It concerned 94 homozygote sickle-cell children who developed severe acute sickle-cell disease related complications (average age 69 months). For each patient, we calculated the cost of care for the complication. Results. The household income was estimated as low (<XAF 90,000/<USD 158.40) in 27.7% of cases. The overall median cost of hospitalization for sickle-cell related acute complications was XAF 65,460/USD 115.21. Costs fluctuated depending on the generating factors of the severe acute complications (p=0.041). They were higher for complications generated by bacterial infections (ranging from XAF 66,765/USD 117.50 to XAF 135,271.50/USD 238.07) and lower for complications associated with malaria (ranging from XAF 28,305/USD 49.82 to XAF 64,891.63/USD 114.21). The mortality rate was 17% and was associated with the cost of the case management (p=0.006). Conclusion. The case management cost of severe acute complications of sickle-cell disease in children is high in Congo.
APA, Harvard, Vancouver, ISO, and other styles