Journal articles on the topic "User-Centered explanations"

Consult the top 50 journal articles for your research on the topic "User-Centered explanations".

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Delaney, Eoin, Arjun Pakrashi, Derek Greene, and Mark T. Keane. "Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ (Abstract Reprint)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22696. http://dx.doi.org/10.1609/aaai.v38i20.30596.

Abstract
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems because people easily understand them, they apply across different problem domains and seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective; for instance, asking people for their counterfactual explanations to determine their perspective on a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images, in two user studies and, then, (ii) compares these human-generated explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.
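As an editorial illustration of the "minimal edit" style of machine-generated counterfactual that the abstract contrasts with human edits, the sketch below applies a Wachter-style objective (prediction loss toward the counterfactual class plus an L1 distance penalty) to a toy logistic-regression classifier; the model, data, and hyperparameters are invented assumptions, not the authors' image pipeline.

```python
# A minimal sketch of a "minimal edit" counterfactual: starting from a
# misclassified input x, take gradient steps that push a simple differentiable
# classifier toward the counterfactual class while an L1 penalty keeps the edit
# small. All numbers and the toy classifier are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), -0.1           # toy binary classifier parameters
x = rng.normal(size=20)                    # the misclassified instance

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def counterfactual(x, target=1.0, lam=0.05, lr=0.1, steps=500):
    """Gradient descent on (p(x') - target)^2 + lam * |x' - x|_1."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        grad_pred = 2 * (p - target) * p * (1 - p) * w   # through the sigmoid
        grad_dist = lam * np.sign(x_cf - x)              # subgradient of the L1 term
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

x_cf = counterfactual(x)
print("original prediction:", round(float(predict_proba(x)), 3))
print("counterfactual prediction:", round(float(predict_proba(x_cf)), 3))
print("L1 size of the edit:", round(float(np.abs(x_cf - x).sum()), 3))
```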
2

Hwang, Jeonghwan, Taeheon Lee, Honggu Lee, and Seonjeong Byun. "A Clinical Decision Support System for Sleep Staging Tasks With Explanations From Artificial Intelligence: User-Centered Design and Evaluation Study". Journal of Medical Internet Research 24, no. 1 (January 19, 2022): e28659. http://dx.doi.org/10.2196/28659.

Abstract
Background Despite the unprecedented performance of deep learning algorithms in clinical domains, full reviews of algorithmic predictions by human experts remain mandatory. Under these circumstances, artificial intelligence (AI) models are primarily designed as clinical decision support systems (CDSSs). However, from the perspective of clinical practitioners, the lack of clinical interpretability and user-centered interfaces hinders the adoption of these AI systems in practice. Objective This study aims to develop an AI-based CDSS for assisting polysomnographic technicians in reviewing AI-predicted sleep staging results. This study proposed and evaluated a CDSS that provides clinically sound explanations for AI predictions in a user-centered manner. Methods Our study is based on a user-centered design framework for developing explanations in a CDSS that identifies why explanations are needed, what information should be contained in explanations, and how explanations can be provided in the CDSS. We conducted user interviews, user observation sessions, and an iterative design process to identify three key aspects for designing explanations in the CDSS. After constructing the CDSS, the tool was evaluated to investigate how the CDSS explanations helped technicians. We measured the accuracy of sleep staging and interrater reliability with macro-F1 and Cohen κ scores to assess quantitative improvements after our tool was adopted. We assessed qualitative improvements through participant interviews that established how participants perceived and used the tool. Results The user study revealed that technicians desire explanations that are relevant to key electroencephalogram (EEG) patterns for sleep staging when assessing the correctness of AI predictions. Here, technicians wanted explanations that could be used to evaluate whether the AI models properly locate and use these patterns during prediction. On the basis of this, information that is closely related to sleep EEG patterns was formulated for the AI models. In the iterative design phase, we developed a different visualization strategy for each pattern based on how technicians interpreted the EEG recordings with these patterns during their workflows. Our evaluation study on 9 polysomnographic technicians quantitatively and qualitatively investigated the helpfulness of the tool. For technicians with <5 years of work experience, their quantitative sleep staging performance improved significantly from 56.75 to 60.59 with a P value of .05. Qualitatively, participants reported that the information provided effectively supported them, and they could develop notable adoption strategies for the tool. Conclusions Our findings indicate that formulating clinical explanations for automated predictions using the information in the AI with a user-centered design process is an effective strategy for developing a CDSS for sleep staging.
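For readers unfamiliar with the two agreement measures named above, the following sketch shows how macro-F1 and Cohen κ are typically computed with scikit-learn; the sleep-stage labels are invented placeholders, not study data.

```python
# A minimal sketch of the reported metrics: macro-F1 for staging accuracy
# against a reference scoring, Cohen's kappa for interrater reliability.
from sklearn.metrics import f1_score, cohen_kappa_score

reference  = ["W", "N1", "N2", "N2", "N3", "REM", "N2", "W"]   # hypothetical gold labels
technician = ["W", "N2", "N2", "N2", "N3", "REM", "N1", "W"]   # hypothetical technician labels

macro_f1 = f1_score(reference, technician, average="macro")
kappa = cohen_kappa_score(reference, technician)
print(f"macro-F1: {macro_f1:.3f}, Cohen's kappa: {kappa:.3f}")
```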
3

Rong, Yao, Peizhu Qian, Vaibhav Unhelkar, and Enkelejda Kasneci. "I-CEE: Tailoring Explanations of Image Classification Models to User Expertise". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21545–53. http://dx.doi.org/10.1609/aaai.v38i19.30152.

Abstract
Effectively explaining decisions of black-box machine learning models is critical to responsible deployment of AI systems that rely on them. Recognizing their importance, the field of explainable AI (XAI) provides several techniques to generate these explanations. Yet, there is relatively little emphasis on the user (the explainee) in this growing body of work and most XAI techniques generate "one-size-fits-all'' explanations. To bridge this gap and achieve a step closer towards human-centered XAI, we present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise. Informed by existing work, I-CEE explains the decisions of image classification models by providing the user with an informative subset of training data (i.e., example images), corresponding local explanations, and model decisions. However, unlike prior work, I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users. We posit that by tailoring the example set to user expertise, I-CEE can better facilitate users' understanding and simulatability of the model. To evaluate our approach, we conduct detailed experiments in both simulation and with human participants (N = 100) on multiple datasets. Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions (simulatability) compared to baselines, providing promising preliminary results. Experiments with human participants demonstrate that our method significantly improves user simulatability accuracy, highlighting the importance of human-centered XAI.
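A brief illustrative sketch of the simulatability measure referenced above, i.e., the share of instances for which a participant correctly predicts the model's decision; the labels below are hypothetical.

```python
# Simulatability accuracy: fraction of cases where the participant's guess
# matches the model's actual decision. Placeholder labels only.
model_decisions  = ["cat", "dog", "dog", "cat", "dog"]   # hypothetical model outputs
user_predictions = ["cat", "dog", "cat", "cat", "dog"]   # hypothetical participant guesses

matches = sum(m == u for m, u in zip(model_decisions, user_predictions))
simulatability = matches / len(model_decisions)
print(f"simulatability accuracy: {simulatability:.2f}")   # 0.80 in this toy example
```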
4

Guesmi, Mouadh, Mohamed Amine Chatti, Shoeb Joarder, Qurat Ul Ain, Clara Siepmann, Hoda Ghanbarzadeh, and Rawaa Alatrash. "Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System". Information 14, no. 7 (July 14, 2023): 401. http://dx.doi.org/10.3390/info14070401.

Abstract
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
5

Morrison, Katelyn, Philipp Spitzer, Violet Turri, Michelle Feng, Niklas Kühl, and Adam Perer. "The Impact of Imperfect XAI on Human-AI Decision-Making". Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 17, 2024): 1–39. http://dx.doi.org/10.1145/3641022.

Abstract
Explainability techniques are rapidly being developed to improve human-AI decision-making across various cooperative work settings. Consequently, previous research has evaluated how decision-makers collaborate with imperfect AI by investigating appropriate reliance and task performance with the aim of designing more human-centered computer-supported collaborative tools. Several human-centered explainable AI (XAI) techniques have been proposed in hopes of improving decision-makers' collaboration with AI; however, these techniques are grounded in findings from previous studies that primarily focus on the impact of incorrect AI advice. Few studies acknowledge the possibility of the explanations being incorrect even if the AI advice is correct. Thus, it is crucial to understand how imperfect XAI affects human-AI decision-making. In this work, we contribute a robust, mixed-methods user study with 136 participants to evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task, taking into account their level of expertise and an explanation's level of assertiveness. Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance. We also discuss how explanations can deceive decision-makers during human-AI collaboration. Hence, we shed light on the impacts of imperfect XAI in the field of computer-supported cooperative work and provide guidelines for designers of human-AI collaboration systems.
6

Shu, Derek, Catherine T. Xu, Somya Pandey, Virginia Walls, Kristen Tenney, Abby Georgilis, Lisa Melink, Danny T. Y. Wu, and Jennifer Rose Molano. "User-centered Design and Formative Evaluation of a Web Application to Collect and Visualize Real-time Clinician Well-being Levels". ACI Open 08, no. 01 (January 2024): e1-e9. http://dx.doi.org/10.1055/s-0044-1779698.

Abstract
Abstract Background Clinician burnout is increasingly prevalent in the health care workplace. Hospital leadership needs an informatics tool to measure clinicians' well-being levels and provide empirical evidence to improve their work environment. Objectives This study aimed to (1) design and implement a web-based application to collect and visualize clinicians' well-being levels and (2) conduct formative usability evaluation. Methods Clinician and staff well-being champions guided the development of the Well-being Check application. User-centered design and Agile principles were used for incremental development of the app. The app included a customizable survey and an interactive visualization. The survey consisted of six standard, two optional, and three additional questions. The interactive visualization included various charts and word clouds with filters for drill-down analysis. The evaluation was done primarily with the rehabilitation (REHAB) team using data-centered approaches through historical survey data and qualitative coding of the free-text explanations and user-centered approaches through the System Usability Scale (SUS). Results The evaluation showed that the app appropriately accommodated historical survey data from the REHAB team, enabling the comparison between self-assessed and perceived team well-being levels, and summarized key drivers based on the qualitative coding of the free-text explanations. Responses from the 23 REHAB team members showed an above-average score (SUS: 80.22), indicating high usability of the app. Conclusion The Well-being Check app was developed in a user-centered manner and evaluated to demonstrate its effectiveness and usability. Future work includes iterative refinement of the app and designing a pre-poststudy using the app to measure the change in clinicians' well-being levels for quality improvement intervention.
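As a point of reference for the SUS figure quoted above (80.22), the sketch below shows the standard System Usability Scale computation from one respondent's ten 1–5 ratings; the example ratings are invented.

```python
# A minimal sketch of the System Usability Scale (SUS) scoring rule:
# odd-numbered (positively worded) items contribute (rating - 1), even-numbered
# items contribute (5 - rating), and the 0-40 raw sum is scaled by 2.5 to 0-100.
def sus_score(ratings):
    """ratings: list of ten 1-5 answers, item 1 first."""
    assert len(ratings) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # 0-based even index = odd-numbered item
        for i, r in enumerate(ratings)
    ]
    return sum(contributions) * 2.5

example = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]      # hypothetical respondent
print(sus_score(example))                      # 85.0 for this example
```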
7

Panchaud, Nadia H., and Lorenz Hurni. "Integrating Cartographic Knowledge Within a Geoportal: Interactions and Feedback in the User Interface". Cartographic Perspectives, no. 89 (April 11, 2018): 5–24. http://dx.doi.org/10.14714/cp89.1402.

Abstract
Custom user maps (also called map mashups) made on geoportals by novice users often lead to poor cartographic results, because cartographic expertise is not part of the mapmaking process. In order to integrate cartographic design functionality within a geoportal, we explored several strategies and design choices. These strategies aimed at integrating explanations about cartographic rules and functions within the mapmaking process. They are defined and implemented based on a review of human-centered design, usability best practices, and previous work on cartographic applications. Cartographic rules and functions were made part of a cartographic wizard, which was evaluated with the help of a usability study. The study results show that the overall user experience with the cartographic functions and the wizard workflow was positive, although implementing functionalities for a diverse target audience proved challenging. Additionally, the results show that offering different ways to access information is welcomed and that explanations pertaining directly to the specific user-generated map are both helpful and preferred. Finally, the results provide guidelines for user interaction design for cartographic functionality on geoportals and other online mapping platforms.
8

Zhang, Zhan, Daniel Citardi, Dakuo Wang, Yegin Genc, Juan Shan, and Xiangmin Fan. "Patients’ perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data". Health Informatics Journal 27, no. 2 (April 2021): 146045822110112. http://dx.doi.org/10.1177/14604582211011215.

Abstract
Results of radiology imaging studies are not typically comprehensible to patients. With the advances in artificial intelligence (AI) technology in recent years, it is expected that AI technology can aid patients’ understanding of radiology imaging data. The aim of this study is to understand patients’ perceptions and acceptance of using AI technology to interpret their radiology reports. We conducted semi-structured interviews with 13 participants to elicit reflections pertaining to the use of AI technology in radiology report interpretation. A thematic analysis approach was employed to analyze the interview data. Participants have a generally positive attitude toward using AI-based systems to comprehend their radiology reports. AI is perceived to be particularly useful in seeking actionable information, confirming the doctor’s opinions, and preparing for the consultation. However, we also found various concerns related to the use of AI in this context, such as cyber-security, accuracy, and lack of empathy. Our results highlight the necessity of providing AI explanations to promote people’s trust and acceptance of AI. Designers of patient-centered AI systems should employ user-centered design approaches to address patients’ concerns. Such systems should also be designed to promote trust and deliver concerning health results in an empathetic manner to optimize the user experience.
9

Pasrija, Vatesh, and Supriya Pasrija. "Demystifying Recommendations: Transparency and Explainability in Recommendation Systems". International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (February 29, 2024): 1376–83. http://dx.doi.org/10.22214/ijraset.2024.58541.

Abstract
Abstract: Recommendation algorithms are widely used, however many consumers want more clarity on why specific goods are recommended to them. The absence of explainability jeopardizes user trust, satisfaction, and potentially privacy. Improving transparency is difficult and involves the need for flexible interfaces, privacy protection, scalability, and customisation. Explainable recommendations provide substantial advantages such as enhancing relevance assessment, bolstering user interactions, facilitating system monitoring, and fostering accountability. Typical methods include giving summaries of the fundamental approach, emphasizing significant data points, and utilizing hybrid recommendation models. Case studies demonstrate that openness has assisted platforms such as YouTube and Spotify in achieving more than a 10% increase in key metrics like click-through rate. Additional research should broaden the methods for providing explanations, increase real-world implementation in other industries, guarantee human-centered supervision of suggestions, and promote consumer trust by following ethical standards. Accurate recommendations are essential. The future involves developing technologies that provide users with information while honoring their autonomy.
10

Robatto Simard, Simon, Michel Gamache, and Philippe Doyon-Poulin. "Development and Usability Evaluation of VulcanH, a CMMS Prototype for Preventive and Predictive Maintenance of Mobile Mining Equipment". Mining 4, no. 2 (May 9, 2024): 326–51. http://dx.doi.org/10.3390/mining4020019.

Abstract
This paper details the design, development, and evaluation of VulcanH, a computerized maintenance management system (CMMS) specialized in preventive maintenance (PM) and predictive maintenance (PdM) management for underground mobile mining equipment. Further, it aims to expand knowledge on trust in automation (TiA) for PdM as well as contribute to the literature on explainability requirements of a PdM-capable artificial intelligence (AI). This study adopted an empirical approach through the execution of user tests with nine maintenance experts from five East-Canadian mines and implemented the User Experience Questionnaire Plus (UEQ+) and the Reliance Intentions Scale (RIS) to evaluate usability and TiA, respectively. It was found that the usability and efficiency of VulcanH were satisfactory for expert users and encouraged the gradual transition from PM to PdM practices. Quantitative and qualitative results documented participants’ willingness to rely on PdM predictions as long as suitable explanations are provided. Graphical explanations covering the full spectrum of the derived data were preferred. Due to the prototypical nature of VulcanH, certain relevant aspects of maintenance planning were not considered. Researchers are encouraged to include these notions in the evaluation of future CMMS proposals. This paper suggests a harmonious integration of both preventive and predictive maintenance practices in the mining industry. It may also guide future research in PdM to select an analytical algorithm capable of supplying adequate and causal justifications for informed decision making. This study fulfills an identified need to adopt a user-centered approach in the development of CMMSs in the mining industry. Hence, both researchers and industry stakeholders may benefit from the findings.
11

Abu-Rasheed, Hasan, Christian Weber, Johannes Zenkert, Mareike Dornhöfer, and Madjid Fathi. "Transferrable Framework Based on Knowledge Graphs for Generating Explainable Results in Domain-Specific, Intelligent Information Retrieval". Informatics 9, no. 1 (January 19, 2022): 6. http://dx.doi.org/10.3390/informatics9010006.

Abstract
In modern industrial systems, collected textual data accumulates over time, offering an important source of information for enhancing present and future industrial practices. Although many AI-based solutions have been developed in the literature for domain-specific information retrieval (IR) from this data, the explainability of these systems was rarely investigated in such domain-specific environments. In addition to considering the domain requirements within an explainable intelligent IR, transferring the explainable IR algorithm to other domains remains an open-ended challenge. This is due to the high costs associated with intensive customization and the knowledge modelling required when developing new explainable solutions for each industrial domain. In this article, we present a transferable framework for generating domain-specific explanations for intelligent IR systems. The aim of our work is to provide a comprehensive approach for constructing explainable IR and recommendation algorithms, which are capable of adapting to domain requirements and are usable in multiple domains at the same time. Our method utilizes knowledge graphs (KG) for modeling the domain knowledge. The KG provides a solid foundation for developing intelligent IR solutions. Utilizing the same KG, we develop graph-based components for generating textual and visual explanations of the retrieved information, taking into account the domain requirements and supporting the transferability to other domain-specific environments through the structured approach. The use of the KG resulted in minimum-to-zero adjustments when creating explanations for multiple intelligent IR algorithms in multiple domains. We test our method within two different use cases, a semiconductor manufacturing centered use case and a job-to-applicant matching one. Our quantitative results show a high capability of our approach to generate high-level explanations for the end users. In addition, the developed explanation components were highly adaptable to both industrial domains without sacrificing the overall accuracy of the intelligent IR algorithm. Furthermore, a qualitative user study was conducted. We recorded a high level of acceptance from the users, who reported an enhanced overall experience with the explainable IR system.
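A minimal, hypothetical sketch of the general idea of backing textual explanations with a knowledge graph, here using networkx and invented job-matching triples; the article does not prescribe this implementation.

```python
# Find a path in a toy knowledge graph that links the query entity to the
# retrieved item and verbalize the edge labels along it as an explanation.
# All entities and relations below are invented placeholders.
import networkx as nx

kg = nx.Graph()
kg.add_edge("applicant_42", "Python", relation="has skill")
kg.add_edge("Python", "data_engineer_posting", relation="is required by")

def explain(graph, source, target):
    path = nx.shortest_path(graph, source, target)
    steps = [f"{a} ({graph.edges[a, b]['relation']}) {b}" for a, b in zip(path, path[1:])]
    return "Recommended because: " + "; ".join(steps)

print(explain(kg, "applicant_42", "data_engineer_posting"))
```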
12

Pratiwi, Putu Yudia, and I. Gede Sudirtha. "Identification of Learning Experience in Online Learning with User Persona Techniques Based on Learner-Centered Design Concepts". SISTEMASI 11, no. 2 (May 21, 2022): 414. http://dx.doi.org/10.32520/stmsi.v11i2.1763.

Abstract
Online learning is no longer a novelty in education. However, with the shift to a learning system carried out entirely online, adjustments are needed so that the quality of learning is maintained. Online learning has made the learning process more convenient because it can be done asynchronously. Understanding the learning experience of students is essential to optimize learning that follows a learner-centered design. A good learning experience also has an impact on student learning outcomes. It is therefore necessary to identify and analyze the needs of students in online learning so that the learning process becomes more effective. Persona techniques can provide more detailed information for exploring students' needs in the online learning process. This research proceeds by identifying problems, observing target users, designing personas, and analyzing them. The results indicate several shortcomings and obstacles faced during online learning, as well as several points to consider for future online learning: holding lectures according to the predetermined schedule, providing more reference materials and more detailed explanations of the material, providing learning videos to support additional understanding of lecture material, offering quizzes and discussion forums in the e-learning platform at the end of each topic, fostering more intensive interaction around the learning materials, and holding online meetings more frequently to confirm and discuss material and to give feedback on each assignment.
13

Børøsund, Elin, Anders Meland, Hege R. Eriksen, Christine M. Rygg, Giske Ursin, and Lise Solberg Nes. "Digital Cognitive Behavioral- and Mindfulness-Based Stress-Management Interventions for Survivors of Breast Cancer: Development Study". JMIR Formative Research 7 (September 19, 2023): e48719. http://dx.doi.org/10.2196/48719.

Abstract
Background Psychosocial stress-management interventions can reduce stress and distress and improve the quality of life for survivors of cancer. As these in-person interventions are not always offered or accessible, evidence-informed digital stress-management interventions may have the potential to improve outreach of psychosocial support for survivors of cancer. Few such digital interventions exist so far, few if any have been developed specifically for survivors of breast cancer, and few if any have attempted to explore more than 1 distinct type of intervention framework. Objective This study aimed to develop 2 digital psychosocial stress-management interventions for survivors of breast cancer; 1 cognitive behavioral therapy-based intervention (CBI), and 1 mindfulness-based intervention (MBI). Methods The development of the CBI and MBI interventions originated from the existing StressProffen program, a digital stress-management intervention program for survivors of cancer, based on a primarily cognitive behavioral therapeutic concept. Development processes entailed a multidisciplinary design approach and were iteratively conducted in close collaboration between key stakeholders, including experts within psychosocial oncology, cancer epidemiology, stress-management, and eHealth as well as survivors of breast cancer and health care providers. Core psychosocial oncology stress-management and cancer epidemiology experts first conducted a series of workshops to identify cognitive behavioral and mindfulness specific StressProffen content, overlapping psychoeducational content, and areas where development and incorporation of new material were needed. Following the program content adaptation and development phase, phases related to user testing of new content and technical, privacy, security, and ethical aspects and adjustments ensued. Intervention content for the distinct CBI and MBI interventions was refined in iterative user-centered design processes and adjusted to electronic format through stakeholder-centered iterations. Results For the CBI version, the mindfulness-based content of the original StressProffen was removed, and for the MBI version, cognitive behavioral content was removed. Varying degrees of new content were created for both versions, using a similar layout as for the original StressProffen program. New content and new exercises in particular were tested by survivors of breast cancer and a project-related editorial team, resulting in subsequent user centered adjustments, including ensuring auditory versions and adequate explanations before less intuitive sections. Other improvements included implementing a standard closing sentence to round off every exercise, and allowing participants to choose the length of some of the mindfulness exercises. A legal disclaimer and a description of data collection, user rights and study contact information were included to meet ethical, privacy, and security requirements. Conclusions This study shows how theory specific (ie, CBI and MBI) digital stress-management interventions for survivors of breast cancer can be developed through extensive collaborations between key stakeholders, including scientists, health care providers, and survivors of breast cancer. Offering a variety of evidence-informed stress-management approaches may potentially increase interest for outreach and impact of psychosocial interventions for survivors of cancer. International Registered Report Identifier (IRRID) RR2-10.2196/47195
14

Pumplun, Luisa, Felix Peters, Joshua F. Gawlitza, and Peter Buxmann. "Bringing Machine Learning Systems into Clinical Practice: A Design Science Approach to Explainable Machine Learning-Based Clinical Decision Support Systems". Journal of the Association for Information Systems 24, no. 4 (2023): 953–79. http://dx.doi.org/10.17705/1jais.00820.

Abstract
Clinical decision support systems (CDSSs) based on machine learning (ML) hold great promise for improving medical care. Technically, such CDSSs are already feasible but physicians have been skeptical about their application. In particular, their opacity is a major concern, as it may lead physicians to overlook erroneous outputs from ML-based CDSSs, potentially causing serious consequences for patients. Research on explainable AI (XAI) offers methods with the potential to increase the explainability of black-box ML systems. This could significantly accelerate the application of ML-based CDSSs in medicine. However, XAI research to date has mainly been technically driven and largely neglects the needs of end users. To better engage the users of ML-based CDSSs, we applied a design science approach to develop a design for explainable ML-based CDSSs that incorporates insights from XAI literature while simultaneously addressing physicians’ needs. This design comprises five design principles that designers of ML-based CDSSs can apply to implement user-centered explanations, which are instantiated in a prototype of an explainable ML-based CDSS for lung nodule classification. We rooted the design principles and the derived prototype in a body of justificatory knowledge consisting of XAI literature, the concept of usability, and an online survey study involving 57 physicians. We refined the design principles and their instantiation by conducting walk-throughs with six radiologists. A final experiment with 45 radiologists demonstrated that our design resulted in physicians perceiving the ML-based CDSS as more explainable and usable in terms of the required cognitive effort than a system without explanations.
15

Brusilovsky, Peter, Marco de Gemmis, Alexander Felfernig, Pasquale Lops, Marco Polignano, Giovanni Semeraro, and Martijn C. Willemsen. "Report on the 10th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems (IntRS 2023) at ACM RecSys 2023". ACM SIGIR Forum 57, no. 2 (December 2023): 1–6. http://dx.doi.org/10.1145/3642979.3642999.

Abstract
The 10th edition of the Joint Workshop on Interfaces and Human Decision Making for Recommender Systems was held as part of the 17th ACM Conference on Recommender Systems (RecSys), the premier international forum for the presentation of new research results, systems and techniques in the broad field of recommender systems. The workshop was organized as a hybrid event: the physical session took place on September 18th at the venue of the main conference, Singapore, with the possibility for authors to present remotely. The IntRS workshop brings together an interdisciplinary community of researchers and practitioners who share research on new recommender systems (informed by psychology), including new design technologies and evaluation methodologies, and aim to identify critical challenges and emerging topics in the field. This year we focused particularly on topics related to Human-Centered AI, Explainability of decision-making models, User-adaptive XAI systems, which are becoming more and more popular in the last years, especially in domains where recommended options might have ethical and legal impacts on users. The integration of XAI with recommender systems is crucial for enhancing their transparency, interpretability, and accountability. This topic attracted a lot of interest from the community, as demonstrated by the fact that several workshop papers proposed methods for recommendation explanations. Date : 18 September 2023. Website : https://intrs2023.wordpress.com.
16

McGonagle, Erin A., Dean J. Karavite, Robert W. Grundmeier, Sarah K. Schmidt, Larissa S. May, Daniel M. Cohen, Andrea T. Cruz et al. "Evaluation of an Antimicrobial Stewardship Decision Support for Pediatric Infections". Applied Clinical Informatics 14, no. 01 (January 2023): 108–18. http://dx.doi.org/10.1055/s-0042-1760082.

Abstract
Abstract Objectives Clinical decision support (CDS) has promise for the implementation of antimicrobial stewardship programs (ASPs) in the emergency department (ED). We sought to assess the usability of a newly developed automated CDS to improve guideline-adherent antibiotic prescribing for pediatric community-acquired pneumonia (CAP) and urinary tract infection (UTI). Methods We conducted comparative usability testing between an automated, prototype CDS-enhanced discharge order set and standard order set, for pediatric CAP and UTI antibiotic prescribing. After an extensive user-centered design process, the prototype CDS was integrated into the electronic health record, used passive activation, and embedded locally adapted prescribing guidelines. Participants were randomized to interact with three simulated ED scenarios of children with CAP or UTI, across both systems. Measures included task completion, decision-making and usability errors, clinical actions (order set use and correct antibiotic selection), as well as objective measures of system usability, utility, and workload using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). The prototype CDS was iteratively refined to optimize usability and workflow. Results Usability testing in 21 ED clinical providers demonstrated that, compared to the standard order sets, providers preferred the prototype CDS, with improvements in domains such as explanations of suggested antibiotic choices (p < 0.001) and provision of additional resources on antibiotic prescription (p < 0.001). Simulated use of the CDS also led to overall improved guideline-adherent prescribing, with a 31% improvement for CAP. A trend was present toward absolute workload reduction. Using the NASA-TLX, workload scores for the current system were median 26, interquartile ranges (IQR): 11 to 41 versus median 25, and IQR: 10.5 to 39.5 for the CDS system (p = 0.117). Conclusion Our CDS-enhanced discharge order set for ED antibiotic prescribing was strongly preferred by users, improved the accuracy of antibiotic prescribing, and trended toward reduced provider workload. The CDS was optimized for impact on guideline-adherent antibiotic prescribing from the ED and end-user acceptability to support future evaluative trials of ED ASPs.
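To make the workload figures above concrete, the sketch below computes a raw NASA-TLX score (commonly taken as the mean of the six 0–100 subscale ratings) per provider and summarizes the scores with the median and interquartile range; all ratings are invented placeholders.

```python
import numpy as np

# columns: mental, physical, temporal, performance, effort, frustration (0-100 each)
ratings_per_provider = np.array([
    [40, 10, 35, 20, 45, 30],   # one provider's six subscale ratings
    [55, 15, 50, 30, 60, 40],
    [20,  5, 25, 10, 30, 15],
])
raw_tlx = ratings_per_provider.mean(axis=1)              # one workload score per provider
q1, median, q3 = np.percentile(raw_tlx, [25, 50, 75])
print(f"median {median:.1f}, IQR {q1:.1f} to {q3:.1f}")
```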
17

Zhang, Zhan, Daniel Citardi, Aiwen Xing, Xiao Luo, Yu Lu, and Zhe He. "Patient Challenges and Needs in Comprehending Laboratory Test Results: Mixed Methods Study". Journal of Medical Internet Research 22, no. 12 (December 7, 2020): e18725. http://dx.doi.org/10.2196/18725.

Abstract
Background Patients are increasingly able to access their laboratory test results via patient portals. However, merely providing access does not guarantee comprehension. Patients could experience confusion when reviewing their test results. Objective The aim of this study is to examine the challenges and needs of patients when comprehending laboratory test results. Methods We conducted a web-based survey with 203 participants and a set of semistructured interviews with 13 participants. We assessed patients’ perceived challenges and needs (both informational and technological needs) when they attempted to comprehend test results, factors associated with patients’ perceptions, and strategies for improving the design of patient portals to communicate laboratory test results more effectively. Descriptive and correlation analysis and thematic analysis were used to analyze the survey and interview data, respectively. Results Patients face a variety of challenges and confusion when reviewing laboratory test results. To better comprehend laboratory results, patients need different types of information, which are grouped into 2 categories—generic information (eg, reference range) and personalized or contextual information (eg, treatment options, prognosis, what to do or ask next). We also found that several intrinsic factors (eg, laboratory result normality, health literacy, and technology proficiency) significantly impact people’s perceptions of using portals to view and interpret laboratory results. The desired enhancements of patient portals include providing timely explanations and educational resources (eg, a health encyclopedia), increasing usability and accessibility, and incorporating artificial intelligence–based technology to provide personalized recommendations. Conclusions Patients face significant challenges in interpreting the meaning of laboratory test results. Designers and developers of patient portals should employ user-centered approaches to improve the design of patient portals to present information in a more meaningful way.
18

Zamakhsyari, Fardan, Achmad Teguh Wibowo, and Mohammad Khusnu Milad. "Enhance User Interface to Deaf E-Learning Based on User Centered Design". MATICS: Jurnal Ilmu Komputer dan Teknologi Informasi (Journal of Computer Science and Information Technology) 14, no. 2 (December 1, 2022): 57–63. http://dx.doi.org/10.18860/mat.v14i2.17703.

Abstract
Abstract—A cognitive learning approach through visual media is characteristic of how deaf students learn, because these students absorb material more quickly this way. However, this approach became an obstacle when the process had to be conducted online as a result of the Covid-19 pandemic. This points to a need for interactive learning media that can help students in the studying process. This research focuses on developing learning media using the User Centered Design (UCD) method, which places the user at the center of system development. We develop a user interface (UI) for deaf students, specifically at the Putra Asih inclusive school in the city of Kediri, Indonesia. An evaluation based on ISO 9241 shows that the effectiveness of using the application reached 87.15%, the effectiveness of the user interface design 80.05%, and user satisfaction 71.18%, with all parameters acceptable based on the users' responses.
19

ten Klooster, Iris, Jobke Wentzel, Floor Sieverink, Gerard Linssen, Robin Wesselink, and Lisette van Gemert-Pijnen. "Personas for Better Targeted eHealth Technologies: User-Centered Design Approach". JMIR Human Factors 9, no. 1 (March 15, 2022): e24172. http://dx.doi.org/10.2196/24172.

Abstract
Background The full potential of eHealth technologies to support self-management and disease management for patients with chronic diseases is not being reached. A possible explanation for these lacking results is that during the development process, insufficient attention is paid to the needs, wishes, and context of the prospective end users. To overcome such issues, the user-centered design practice of creating personas is widely accepted to ensure the fit between a technology and the target group or end users throughout all phases of development. Objective In this study, we integrate several approaches to persona development into the Persona Approach Twente to attain a more holistic and structured approach that aligns with the iterative process of eHealth development. Methods In 3 steps, a secondary analysis was carried out on different parts of the data set using the Partitioning Around Medoids clustering method. First, we used health-related electronic patient record data only. Second, we added person-related data that were gathered through interviews and questionnaires. Third, we added log data. Results In the first step, 2 clusters were found, with average silhouette widths of 0.12 and 0.27. In the second step, again 2 clusters were found, with average silhouette widths of 0.08 and 0.12. In the third step, 3 clusters were identified, with average silhouette widths of 0.09, 0.12, and 0.04. Conclusions The Persona Approach Twente is applicable for mixed types of data and allows alignment of this user-centered design method to the iterative approach of eHealth development. A variety of characteristics can be used that stretches beyond (standardized) medical and demographic measurements. Challenges lie in data quality and fitness for (quantitative) clustering.
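The clustering step described above can be sketched as follows with the Partitioning Around Medoids algorithm and the average silhouette width; the scikit-learn-extra KMedoids implementation and the synthetic feature matrix are assumptions, as the paper does not name a toolkit.

```python
# A minimal sketch: cluster standardized patient features with PAM and report
# the average silhouette width for different numbers of clusters.
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids   # pip install scikit-learn-extra

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 5))                  # placeholder standardized patient features

for k in (2, 3):
    labels = KMedoids(n_clusters=k, metric="euclidean", method="pam",
                      random_state=7).fit_predict(X)
    print(k, "clusters -> average silhouette width:",
          round(float(silhouette_score(X, labels)), 2))
```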
20

Heasly, Christopher C., Lisa A. Dutra, Mark Kirkpatrick, Thomas L. Seamster, and Robert A. Lyons. "A User-Centered Approach to the Design of a Naval Tactical Workstation Interface". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 15 (October 1993): 1030. http://dx.doi.org/10.1177/154193129303701502.

Abstract
The 21st century Navy combatant ship will experience exponential increases in shipboard information to be processed, disseminated and integrated. High Definition System (HDS) technology will provide for the convergence of text, graphics, digital video, imagery, and complex computing to allow for a new range of advanced capabilities that exceed those of currently available workstations. These capabilities could result in unmanageable and overwhelming cognitive workloads for Navy tactical operators in CIC (Combat Information Center). For this reason, a prototype user interface was designed using future combat system requirements, proposed HDS capabilities, and human-computer interface design standards and principles. Usability testing of the prototype user interface was conducted as part of an effort to identify integrated information management technologies which reduce operator workload, increase human performance, and improve combat system effectiveness. This demonstration will focus on explanation and demonstration of future concepts envisioned for the AEGIS operational environment; organization and functionality of the menu structures and window contents; the usability testing methods utilized; results from usability testing; and plans for utilization of the prototype shell in other operational environments.
21

Ewals, Lotte J. S., Lynn J. J. Heesterbeek, Bin Yu, Kasper van der Wulp, Dimitrios Mavroeidis, Mathias Funk, Chris C. P. Snijders, Igor Jacobs, Joost Nederend, and Jon R. Pluyter. "The Impact of Expectation Management and Model Transparency on Radiologists’ Trust and Utilization of AI Recommendations for Lung Nodule Assessment on Computed Tomography: Simulated Use Study". JMIR AI 3 (March 13, 2024): e52211. http://dx.doi.org/10.2196/52211.

Abstract
Background Many promising artificial intelligence (AI) and computer-aided detection and diagnosis systems have been developed, but few have been successfully integrated into clinical practice. This is partially owing to a lack of user-centered design of AI-based computer-aided detection or diagnosis (AI-CAD) systems. Objective We aimed to assess the impact of different onboarding tutorials and levels of AI model explainability on radiologists’ trust in AI and the use of AI recommendations in lung nodule assessment on computed tomography (CT) scans. Methods In total, 20 radiologists from 7 Dutch medical centers performed lung nodule assessment on CT scans under different conditions in a simulated use study as part of a 2×2 repeated-measures quasi-experimental design. Two types of AI onboarding tutorials (reflective vs informative) and 2 levels of AI output (black box vs explainable) were designed. The radiologists first received an onboarding tutorial that was either informative or reflective. Subsequently, each radiologist assessed 7 CT scans, first without AI recommendations. AI recommendations were shown to the radiologist, and they could adjust their initial assessment. Half of the participants received the recommendations via black box AI output and half received explainable AI output. Mental model and psychological trust were measured before onboarding, after onboarding, and after assessing the 7 CT scans. We recorded whether radiologists changed their assessment on found nodules, malignancy prediction, and follow-up advice for each CT assessment. In addition, we analyzed whether radiologists’ trust in their assessments had changed based on the AI recommendations. Results Both variations of onboarding tutorials resulted in a significantly improved mental model of the AI-CAD system (informative P=.01 and reflective P=.01). After using AI-CAD, psychological trust significantly decreased for the group with explainable AI output (P=.02). On the basis of the AI recommendations, radiologists changed the number of reported nodules in 27 of 140 assessments, malignancy prediction in 32 of 140 assessments, and follow-up advice in 12 of 140 assessments. The changes were mostly an increased number of reported nodules, a higher estimated probability of malignancy, and earlier follow-up. The radiologists’ confidence in their found nodules changed in 82 of 140 assessments, in their estimated probability of malignancy in 50 of 140 assessments, and in their follow-up advice in 28 of 140 assessments. These changes were predominantly increases in confidence. The number of changed assessments and radiologists’ confidence did not significantly differ between the groups that received different onboarding tutorials and AI outputs. Conclusions Onboarding tutorials help radiologists gain a better understanding of AI-CAD and facilitate the formation of a correct mental model. If AI explanations do not consistently substantiate the probability of malignancy across patient cases, radiologists’ trust in the AI-CAD system can be impaired. Radiologists’ confidence in their assessments was improved by using the AI recommendations.
22

Walker, Sue, Manjula Halai, Rachel Warner, and Josefina Bravo. "Beat Bad Microbes". Information Design Journal 26, no. 1 (April 28, 2021): 17–32. http://dx.doi.org/10.1075/idj.20023.wal.

Abstract
Abstract Health-related information design has made a difference to people’s lives through clear explanation of procedures, processes, disease prevention and maintenance. This paper provides an example of user-centered design being applied to engage people with the prevention of drug-resistant infection. In particular, we focus on antibiotic resistance in the specific location of a community pharmacy in Rwanda. We describe an information campaign, Beat Bad Microbes, and summarize the challenges and opportunities of working in Rwanda on a cross-disciplinary project in which design research and practice are closely integrated.
23

Attef, Maryam, Catherine Dulude, Chantal Trudel, and Melanie Buba. "Virtual Family-Centered Rounds During the COVID-19 Pandemic – Technology Usability Analysis". Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 11, no. 1 (October 2022): 151–55. http://dx.doi.org/10.1177/2327857922111030.

Abstract
Family-centered rounds (FCR) are multidisciplinary rounds, involving patients and caregivers with the aim of shared decision making in medical care planning. In response to the COVID-19 pandemic, a tertiary care pediatric hospital re-engineered the in-person FCR process used by inpatient Pediatric Medicine teams and implemented virtual family-centered rounds (vFCR). As part of a mixed methods study evaluating vFCR, naturalistic observation was used to evaluate the usability of vFCR technology. Functional and user requirements were assessed and confirmed through observation of interactions with technology intended to support vFCR. The duration of individual patient rounds and transition time between patients was also captured. Technology interactions were assessed in terms of what worked (successful interactions) and what did not work (usability issues and errors). Nielsen and Norman’s (1994) usability heuristics were used to support the evaluation and explanation of findings. While naturalistic observation yielded clear results in terms of effectiveness and efficiency, user satisfaction was not formally examined. The usability requirements and key characteristics for ease of use and adoption of vFCR identified in this study can be used by other hospitals looking to implement or improve inpatient virtual care technology usability.
24

Hu, Tan, and Tieru Wu. "An evaluation method with convolutional white-box models for image attribution explainers". Journal of Physics: Conference Series 2637, no. 1 (November 1, 2023): 012004. http://dx.doi.org/10.1088/1742-6596/2637/1/012004.

Abstract
Abstract Research in the explanation of models has developed rapidly in recent years. However, research in the evaluation of explainers is still limited, with most evaluation strategies centered around perturbation. This approach may produce incorrect evaluations when the relationship between features gets complicated. In this research, we present a ground-truth-based evaluation method for feature attribution explainers on image tasks. We design three perspectives to evaluate. The input perspective evaluates whether the explainers accurately represent the inputs that the model detects. The feature perspective evaluates whether the explainers capture the features important in decisions. The user perspective evaluates the reliability that the user derives from the data. Then, using the traditional white-box model, we extract the ground truth corresponding to the three perspectives and provide an example to demonstrate the procedure. To acquire the results of the image attribution explainer, we also reconstruct the traditional white-box model into the convolutional network white-box model. Our method provides an a priori benchmark that is not affected by the explainer. The experiments show that we may use the evaluation method for different tasks and extend it to natural datasets, which offers a flexible and low-cost evaluation strategy.
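As background, the perturbation-centered evaluation strategy that the abstract says most prior work relies on (and which the authors argue can mislead) can be sketched as follows; the toy model, image, and attribution map are invented placeholders.

```python
# A minimal sketch of a perturbation-style faithfulness check: occlude the
# pixels an explainer ranks as most important and measure how much the model's
# confidence drops. Everything here is a toy stand-in, not the paper's method.
import numpy as np

rng = np.random.default_rng(3)
image = rng.random((8, 8))                      # toy grayscale "image"
weights = rng.normal(size=(8, 8))               # toy linear scorer standing in for a model

def confidence(img):
    return 1.0 / (1.0 + np.exp(-(weights * img).sum()))

attribution = weights * image                   # toy saliency map from the "explainer"

baseline = confidence(image)
top = np.argsort(attribution, axis=None)[-8:]   # flat indices of the 8 most-attributed pixels
perturbed = image.copy()
perturbed.flat[top] = 0.0                       # occlude them
drop = baseline - confidence(perturbed)
print(f"confidence drop after removing top-attributed pixels: {drop:.3f}")
```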
25

Jiménez Cuanalo, Jaime Miguel, Martha Judith Soto Flores, and Salvador León Beltrán. "Emotionality in the Images of Design: A Biological-Evolutionary Theory". Modern Environmental Science and Engineering 8, no. 2 (February 8, 2022): 102–10. http://dx.doi.org/10.15341/mese(2333-2581)/02.08.2022/003.

Abstract
From the start of this century there has been a proliferation not only in the study of neurosciences and the physiology of perception/emotion, but also in its dissemination. This has resulted in countless programs, courses, etc., that pretend to help the designer, be it architectural, advertising, product, packaging or others (Ulrich, R. S. 1999) to appropriately impact the experience of the end user; tendencies such as biophilia, neuromarketing, user centered design, the influence of color in a space (Aseel AL-Ayash, Robert T. Kane, Dianne Smith, Paul Green-Armytage, 2016), or the impact on behavior and the brain when observing works of art (Kendall J. Eskine, Natalie A. Kacinik, Jesse J. Prinz, 2012), are intended to give us answers based on the neurology and physiology of perception/emotion. Nevertheless, it is hard to separate science from pseudoscience, and even to organize into a useful model the copious scientific information available. The objective is to present a theoretical model that incorporates and synthesizes the state of knowledge in this field, to facilitate its application in the diverse art and design disciplines; this will help both the creator-artist to have a better understanding of their process and comprehension of their work, as well as the designer, to be able to predict the impact their projects will have upon the audience they are directed towards. We are talking from photographers and painters, to architects and illustrators, etc. The methodology consists mainly in the exegesis — and organization of the material resulting thereof — of basic text on this field. In this talk, we present the Theory of Emotive Reactions model, that has been developing in Tijuana since the beginning of this century, we give a shallow explanation of its scientific foundations, we point out its correlation with the state of knowledge and we present the basic principles for its application in the diverse artistic and design disciplines. Key words: design, emotion, neurosciences, perception, end user experience
26

Lopes, Bárbara Gabrielle C. O., Liziane Santos Soares, Raquel Oliveira Prates, and Marcos André Gonçalves. "Contrasting Explain-ML with Interpretability Machine Learning Tools in Light of Interactive Machine Learning Principles". Journal on Interactive Systems 13, no. 1 (November 21, 2022): 313–34. http://dx.doi.org/10.5753/jis.2022.2556.

Abstract
The way Complex Machine Learning (ML) models generate their results is not fully understood, including by very knowledgeable users. If users cannot interpret or trust the predictions generated by the model, they will not use them. Furthermore, the human role is often not properly considered in the development of ML systems. In this article, we present the design, implementation and evaluation of Explain-ML, an Interactive Machine Learning (IML) system for Explainable Machine Learning that follows the principles of Human-Centered Machine Learning (HCML). We assess the user experience with the Explain-ML interpretability strategies, contrasting them with the analysis of how other IML tools address the IML principles. To do so, we have conducted an analysis of the results of the evaluation of Explain-ML with potential users in light of principles for IML systems design and a systematic inspection of three other tools – Rulematrix, Explanation Explorer and ATMSeer – using the Semiotic Inspection Method (SIM). Our results generated positive indicators regarding Explain-ML and the process that guided its development. Our analyses also highlighted aspects of the IML principles that are relevant from the users’ perspective. By contrasting the results with Explain-ML and SIM inspections of the other tools we were able to identify common interpretability strategies. We believe that the results reported in this work contribute to the understanding and consolidation of the IML principles, ultimately advancing the knowledge in HCML.
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Bae, Jae Kwon. "Does XAI Technology Improve Innovation Performance of Financial Services?" Academic Society of Global Business Administration 20, n.º 3 (30 de junio de 2023): 194–213. http://dx.doi.org/10.38115/asgba.2023.20.3.194.

Texto completo
Resumen
XAI (eXplainable Artificial Intelligence) is an artificial intelligence technology that analyzes the causal relationships behind AI decision-making, finds appropriate evidence, and explains decision results at the user's level. The customers of an XAI model are the practitioners of financial institutions who operate the model and deliver its results, and the financial consumers who use the results of the AI algorithm. The financial industry generates profits by managing individual and institutional funds, but it also bears responsibility for that management. Although AI algorithms are applied to tasks such as detecting increasingly sophisticated financial crimes and cybercrime and performing repetitive credit evaluations, unexpected algorithm vulnerabilities can cause losses through incorrect judgments. XAI presents the basis for the results of AI algorithms in the financial sector, and financial companies that apply XAI can reduce the possibility of wrong decisions made by AI. This study explores the XAI characteristics and influencing factors in the financial sector that affect the reliability and satisfaction of domestic financial consumers, and examines how these factors affect innovation performance in the sector. DARPA cites user satisfaction, explanation model level, task performance improvement, reliability evaluation, and error correction level as XAI competency evaluation factors (a competency evaluation index). This study draws on DARPA's XAI competency evaluation index, the G20's principles for a human-centered artificial intelligence society, and the Financial Services Commission's artificial intelligence guidelines for the financial sector, among other sources. The innovation performance of financial institutions following the introduction of XAI technology and services was set as the dependent variable. To verify the research model, an online survey was conducted among workers at financial-sector companies introducing XAI. The study found that the XAI characteristics of the financial sector (transparency, explainability, reliability, and result demonstrability) had a significant effect on innovation performance in the financial sector.
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Wang, C. "Usability Evaluation of Public Web Mapping Sites". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4 (23 de abril de 2014): 285–89. http://dx.doi.org/10.5194/isprsarchives-xl-4-285-2014.

Texto completo
Resumen
Web mapping sites are interactive maps accessed via web pages. With the rapid development of the Internet and the Geographic Information System (GIS) field, public web mapping sites have become familiar to most people, and a growing number of maps and related map services are freely available to end users. This growth in users has led to more usability studies. Usability Engineering (UE), for instance, is an approach for analyzing and improving the usability of websites by examining and evaluating an interface. In this research, the UE method was employed to explore usability problems of four public web mapping sites, analyze the problems quantitatively, and provide guidelines for future design based on the test results. Firstly, the development of usability studies is described, and several usability evaluation approaches such as Usability Engineering (UE), User-Centered Design (UCD), and Human-Computer Interaction (HCI) are briefly introduced. The method and procedure of the usability test are then presented in detail. In this evaluation experiment, four public web mapping sites (Google Maps, Bing Maps, MapQuest, Yahoo Maps) were chosen as the test websites, and 42 people with different GIS skills (test users or experts), genders, ages, and nationalities participated in different teams to complete several test tasks. The test comprised three parts: a pretest background questionnaire, several test tasks for quantitative statistics and process analysis, and a posttest questionnaire. The pretest and posttest questionnaires focused on gaining qualitative, verbal explanations of participants' actions, while the test tasks were designed to gather quantitative data on the errors and problems of the websites. The results, mainly from the test tasks, were then analyzed: the success rate for each public web mapping site was calculated, compared, and displayed in diagrams, and the questionnaire answers were classified and organized. Based on this analysis, the paper discusses layout, map visualization, map tools, search logic, and related issues. Finally, the paper closes with guidelines and suggestions for the design of public web mapping sites, and states the limitations of this research.
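For readers who want to reproduce this kind of summary, a minimal sketch of the per-site task success-rate computation described above follows; the counts are invented for illustration and are not the study's data.

```python
# Hedged sketch of a per-site task success-rate summary; counts are invented.
completed = {"Google Maps": 38, "Bing Maps": 35, "MapQuest": 31, "Yahoo Maps": 29}
attempted = 42  # participants who attempted the task on each site

for site, successes in completed.items():
    print(f"{site:12s} success rate: {successes / attempted:.0%}")
```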
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Barda, Amie J., Christopher M. Horvat y Harry Hochheiser. "A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare". BMC Medical Informatics and Decision Making 20, n.º 1 (8 de octubre de 2020). http://dx.doi.org/10.1186/s12911-020-01276-x.

Texto completo
Resumen
Abstract Background There is an increasing interest in clinical prediction tools that can achieve high prediction accuracy and provide explanations of the factors leading to increased risk of adverse outcomes. However, approaches to explaining complex machine learning (ML) models are rarely informed by end-user needs and user evaluations of model interpretability are lacking in the healthcare domain. We used extended revisions of previously-published theoretical frameworks to propose a framework for the design of user-centered displays of explanations. This new framework served as the basis for qualitative inquiries and design review sessions with critical care nurses and physicians that informed the design of a user-centered explanation display for an ML-based prediction tool. Methods We used our framework to propose explanation displays for predictions from a pediatric intensive care unit (PICU) in-hospital mortality risk model. Proposed displays were based on a model-agnostic, instance-level explanation approach based on feature influence, as determined by Shapley values. Focus group sessions solicited critical care provider feedback on the proposed displays, which were then revised accordingly. Results The proposed displays were perceived as useful tools in assessing model predictions. However, specific explanation goals and information needs varied by clinical role and level of predictive modeling knowledge. Providers preferred explanation displays that required less information processing effort and could support the information needs of a variety of users. Providing supporting information to assist in interpretation was seen as critical for fostering provider understanding and acceptance of the predictions and explanations. The user-centered explanation display for the PICU in-hospital mortality risk model incorporated elements from the initial displays along with enhancements suggested by providers. Conclusions We proposed a framework for the design of user-centered displays of explanations for ML models. We used the proposed framework to motivate the design of a user-centered display of an explanation for predictions from a PICU in-hospital mortality risk model. Positive feedback from focus group participants provides preliminary support for the use of model-agnostic, instance-level explanations of feature influence as an approach to understand ML model predictions in healthcare and advances the discussion on how to effectively communicate ML model information to healthcare providers.
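As a rough, self-contained illustration of the instance-level Shapley-value feature influences this display design builds on, the sketch below computes exact Shapley values for one prediction of a small generic classifier; the dataset, model, and mean-background convention are assumptions for illustration, not the authors' PICU tool.

```python
# Hedged sketch: exact Shapley values for one prediction of a small model.
# Any model with predict_proba and a handful of features could be dropped in;
# the data, model, and background convention here are purely illustrative.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X = X[:, :6]                      # keep 6 features so 2^6 coalitions stay cheap
model = RandomForestClassifier(random_state=0).fit(X, y)
background = X.mean(axis=0)       # "absent" features are replaced by their mean
instance = X[0]
n = len(instance)

def value(coalition):
    """Model output when only the features in `coalition` keep their true value."""
    x = background.copy()
    x[list(coalition)] = instance[list(coalition)]
    return model.predict_proba(x.reshape(1, -1))[0, 1]

shapley = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for subset in combinations(others, size):
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            shapley[i] += w * (value(subset + (i,)) - value(subset))

for i, phi in enumerate(shapley):
    print(f"feature {i}: influence {phi:+.4f}")
# The influences sum to the prediction for `instance` minus the all-background baseline.
```

In practice a library approximation is used instead of this exhaustive enumeration, but the small exact version makes the feature-influence idea behind the display concrete.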
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Chari, Shruthi, Oshani Seneviratne, Mohamed Ghalwash, Sola Shirai, Daniel M. Gruen, Pablo Meyer, Prithwish Chakraborty y Deborah L. McGuinness. "Explanation Ontology: A general-purpose, semantic representation for supporting user-centered explanations". Semantic Web, 18 de mayo de 2023, 1–31. http://dx.doi.org/10.3233/sw-233282.

Texto completo
Resumen
In the past decade, trustworthy Artificial Intelligence (AI) has emerged as a focus for the AI community to ensure better adoption of AI models, and explainable AI is a cornerstone in this area. Over the years, the focus has shifted from building transparent AI methods to making recommendations on how to make black-box or opaque machine learning models and their results more understandable by experts and non-expert users. In our previous work, to address the goal of supporting user-centered explanations that make model recommendations more explainable, we developed an Explanation Ontology (EO). The EO is a general-purpose representation that was designed to help system designers connect explanations to their underlying data and knowledge. This paper addresses the apparent need for improved interoperability to support a wider range of use cases. We expand the EO, mainly in the system attributes contributing to explanations, by introducing new classes and properties to support a broader range of state-of-the-art explainer models. We present the expanded ontology model, highlighting the classes and properties that are important to model a larger set of fifteen literature-backed explanation types that are supported within the expanded EO. We build on these explanation type descriptions to show how to utilize the EO model to represent explanations in five use cases spanning the domains of finance, food, and healthcare. We include competency questions that evaluate the EO’s capabilities to provide guidance for system designers on how to apply our ontology to their own use cases. This guidance includes allowing system designers to query the EO directly and providing them exemplar queries to explore content in the EO represented use cases. We have released this significantly expanded version of the Explanation Ontology at https://purl.org/heals/eo and updated our resource website, https://tetherless-world.github.io/explanation-ontology, with supporting documentation. Overall, through the EO model, we aim to help system designers be better informed about explanations and support these explanations that can be composed, given their systems’ outputs from various AI models, including a mix of machine learning, logical and explainer models, and different types of data and knowledge available to their systems.
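A minimal sketch of programmatically exploring an ontology such as the EO with rdflib is shown below; the URL comes from the abstract, while the serialization format and the generic class-listing query are assumptions rather than the authors' exemplar competency questions.

```python
# Hedged sketch: load an ontology and list its named classes with rdflib.
# The URL is taken from the abstract; the serialization it resolves to and the
# classes it exposes are assumptions of this illustration.
from rdflib import Graph
from rdflib.namespace import OWL, RDFS

g = Graph()
g.parse("https://purl.org/heals/eo")  # rdflib attempts to guess the format

query = """
SELECT ?cls ?label WHERE {
    ?cls a owl:Class .
    OPTIONAL { ?cls rdfs:label ?label }
}
"""
for cls, label in g.query(query, initNs={"owl": OWL, "rdfs": RDFS}):
    print(cls, label)
```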
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Panigutti, Cecilia, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, Alan Perotti y Salvatore Rinzivillo. "Co-design of human-centered, explainable AI for clinical decision support". ACM Transactions on Interactive Intelligent Systems, 14 de marzo de 2023. http://dx.doi.org/10.1145/3587271.

Texto completo
Resumen
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique, and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test such a prototype with healthcare providers and collect their feedback, with a two-fold outcome: first, we obtain evidence that explanations increase users’ trust in the XAI system, and second, we obtain useful insights on the perceived deficiencies of their interaction with the system, so that we can re-design a better, more human-centered explanation interface.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Lammert, Olesja, Birte Richter, Christian Schütze, Kirsten Thommes y Britta Wrede. "Humans in XAI: increased reliance in decision-making under uncertainty by using explanation strategies". Frontiers in Behavioral Economics 3 (8 de marzo de 2024). http://dx.doi.org/10.3389/frbhe.2024.1377075.

Texto completo
Resumen
Introduction Although decision support systems (DSS) that rely on artificial intelligence (AI) increasingly provide explanations to computer and data scientists about opaque features of the decision process, especially when it involves uncertainty, there is still only limited attention to making the process transparent to end users. Methods This paper compares four distinct explanation strategies employed by a DSS, represented by the social agent Floka, designed to assist end users in making decisions under uncertainty. Using an economic experiment with 742 participants who make lottery choices according to the Holt and Laury paradigm, we contrast two explanation strategies offering accurate information (transparent vs. guided) with two strategies prioritizing human-centered explanations (emotional vs. authoritarian) and a baseline (no explanation). Results and discussion Our findings indicate that a guided explanation strategy results in higher user reliance than a transparent strategy. Furthermore, our results suggest that user reliance is contingent on the chosen explanation strategy, and, in some instances, the absence of an explanation can also lead to increased user reliance.
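For context on the Holt and Laury paradigm mentioned above, the sketch below lists the ten standard lottery rows and a simplified risk-aversion label derived from the count of safe choices; the payoffs follow Holt and Laury (2002), while the cutoff labels are an illustrative simplification, not the authors' analysis.

```python
# Hedged sketch of the Holt & Laury (2002) lottery menu: ten rows, each a
# choice between a "safe" lottery A and a "risky" lottery B; the number of
# safe choices before switching is a common risk-aversion score.
A_HIGH, A_LOW = 2.00, 1.60   # option A payoffs
B_HIGH, B_LOW = 3.85, 0.10   # option B payoffs

def expected_values(row):          # row = 1..10, P(high payoff) = row/10
    p = row / 10
    ev_a = p * A_HIGH + (1 - p) * A_LOW
    ev_b = p * B_HIGH + (1 - p) * B_LOW
    return ev_a, ev_b

def risk_label(num_safe_choices):
    """Rough classification from the count of safe (option A) choices."""
    if num_safe_choices <= 3:
        return "risk seeking"
    if num_safe_choices == 4:
        return "approximately risk neutral"
    return "risk averse"

for row in range(1, 11):
    ev_a, ev_b = expected_values(row)
    better = "A" if ev_a > ev_b else "B"
    print(f"row {row:2d}: EV(A)={ev_a:.2f}  EV(B)={ev_b:.2f}  {better} maximizes expected value")

print(risk_label(num_safe_choices=6))   # e.g. a participant who chose A six times
```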
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Teze, Juan Carlos L., Jose Nicolas Paredes, Maria Vanina Martinez y Gerardo Ignacio Simari. "Engineering user-centered explanations to query answers in ontology-driven socio-technical systems". Semantic Web, 22 de mayo de 2023, 1–30. http://dx.doi.org/10.3233/sw-233297.

Texto completo
Resumen
The role of explanations in intelligent systems has in the last few years entered the spotlight as AI-based solutions appear in an ever-growing set of applications. Though data-driven (or machine learning) techniques are often used as examples of how opaque (also called black box) approaches can lead to problems such as bias and general lack of explainability and interpretability, in reality these features are difficult to tame in general, even for approaches that are based on tools typically considered to be more amenable, like knowledge-based formalisms. In this paper, we continue a line of research and development towards building tools that facilitate the implementation of explainable and interpretable hybrid intelligent socio-technical systems, focusing on features that users can leverage to build explanations to their queries. In particular, we present the implementation of a recently-proposed application framework (and make available its source code) for developing such systems, and explore user-centered mechanisms for building explanations based both on the kinds of explanations required (such as counterfactual, contextual, etc.) and the inputs used for building them (coming from various sources, such as the knowledge base and lower-level data-driven modules). In order to validate our approach, we develop two use cases, one as a running example for detecting hate speech in social platforms and the other as an extension that also contemplates cyberbullying scenarios.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Rong, Yao, Tobias Leemann, Thai-Trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci y Enkelejda Kasneci. "Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations". IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 1–20. http://dx.doi.org/10.1109/tpami.2023.3331846.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Lee, Benjamin Charles Germain, Doug Downey, Kyle Lo y Daniel S. Weld. "LIMEADE: From AI Explanations to Advice Taking". ACM Transactions on Interactive Intelligent Systems, 28 de marzo de 2023. http://dx.doi.org/10.1145/3589345.

Texto completo
Resumen
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow an AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well-developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This paper introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using high-level vocabulary such as that employed by post-hoc explanations) into an update to an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on seventy real-world models across two broad domains: image classification and text recommendation. We show our method improves accuracy compared to a rigorous baseline on the image classification domains. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
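The high-level vocabulary that LIMEADE-style advice is expressed over comes from post-hoc explainers such as LIME; the sketch below builds a generic LIME-style local surrogate with sklearn to show where that vocabulary comes from. It is an illustrative approximation under assumed data and model choices, not the paper's implementation.

```python
# Hedged sketch of a LIME-style local surrogate around one prediction of an
# opaque model: perturb the instance, query the model, weight by proximity,
# and fit an interpretable linear model. Data and model are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
x0 = X[0]
scale = X.std(axis=0)

Z = x0 + rng.normal(0.0, scale, size=(500, X.shape[1]))   # 1. perturbations
pz = black_box.predict_proba(Z)[:, 1]                      # 2. opaque-model outputs
d = np.linalg.norm((Z - x0) / scale, axis=1)               # 3. proximity weights
w = np.exp(-(d ** 2) / 25.0)
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)   # 4. interpretable fit

top = np.argsort(-np.abs(surrogate.coef_))[:5]
for i in top:
    print(f"feature {i}: local weight {surrogate.coef_[i]:+.4f}")
# Advice of the kind LIMEADE studies would be phrased over terms like these
# locally important features rather than over the opaque model's internals.
```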
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Bovermann, Klaudia y Theo J. Bastiaens. "Towards a motivational design? Connecting gamification user types and online learning activities". Research and Practice in Technology Enhanced Learning 15, n.º 1 (10 de enero de 2020). http://dx.doi.org/10.1186/s41039-019-0121-4.

Texto completo
Resumen
Abstract Motivation is a crucial factor for students’ learning behavior and plays a key role in the concept of gamification to foster students’ motivation through specific gamification mechanics and elements. User types for gamification and associated gamification mechanics can classify students’ interests and learning preferences and provide explanations for their motivational learning behavior. This study investigated how five gamification user types may relate to six mainly used online learning activities in a distance online bachelor’s and master’s class in educational science through the use of a systematic approach. A total of 86 students participated in the questionnaire in a cross-sectional study. The findings showed average agreement shares for all five gamification user types. The correlations revealed that the six online learning activities were at least significantly connected to one of the five gamification user types, and both person-centered and environment-centered perspectives were displayed. Finally, the results were discussed, and implications were derived for a motivational design.
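A minimal sketch of the kind of rank correlation used to relate a user-type score to an online learning activity follows; the user-type name and all numbers are invented for illustration and are not the study's data.

```python
# Hedged sketch: rank correlation between a gamification user-type score and
# usage of one online learning activity; values are invented for illustration.
import numpy as np
from scipy.stats import spearmanr

user_type_score = np.array([4.0, 3.5, 2.0, 4.5, 3.0, 5.0, 2.5, 4.0])  # questionnaire scale
forum_posts     = np.array([12,  9,   2,   15,  6,   20,  3,   10])   # activity usage

rho, p = spearmanr(user_type_score, forum_posts)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```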
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Sovrano, Francesco, Kevin Ashley, Peter Leonid Brusilovsky y Fabio Vitali. "How to Improve the Explanatory Power of an Intelligent Textbook: a Case Study in Legal Writing". International Journal of Artificial Intelligence in Education, 6 de mayo de 2024. http://dx.doi.org/10.1007/s40593-024-00399-w.

Texto completo
Resumen
Abstract Explanatory processes are at the core of scientific investigation, legal reasoning, and education. However, effectively explaining complex or large amounts of information, such as that contained in a textbook or library, in an intuitive, user-centered way is still an open challenge. Indeed, different people may search for and request different types of information, even though texts typically have a predefined exposition and content. With this paper, we investigate how explanatory AI can better exploit the full potential of the vast and rich content library at our disposal. Based on a recent theory of explanations from Ordinary Language Philosophy, which frames the explanation process as illocutionary question-answering, we have developed a new type of interactive and adaptive textbook. Using the latest question-answering technology, our e-book software (YAI4Edu, for short) generates on-demand, expandable explanations that can help readers effectively explore teaching materials in a pedagogically productive way. It does this by extracting a specialized knowledge graph from a collection of books or other resources that helps identify the most relevant questions to be answered for a satisfactory explanation. We tested our technology with excerpts from a textbook that teaches how to write legal memoranda in the U.S. legal system. Then, to see whether YAI4Edu-enhanced textbooks are better than random and existing, general-purpose explanatory tools, we conducted a within-subjects user study with more than 100 English-speaking students. The students rated YAI4Edu’s explanations the highest. According to the students, the explanatory content generated by YAI4Edu is, on average, statistically better than two baseline alternatives (P values below .005).
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Ling, Shihong, Yutong Zhang y Na Du. "More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human–Automation Interaction". Human Factors: The Journal of the Human Factors and Ergonomics Society, 4 de marzo de 2024. http://dx.doi.org/10.1177/00187208241234810.

Texto completo
Resumen
Objective The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations and assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human–automation interaction. Background System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics. Method We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated. Results Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in and preference for the intelligent detector. Moreover, both visual information slowed saccade velocities. Conclusion The study demonstrated that visual explanations could improve performance, trust, and preference in human–automation interaction without confidence visualization partially by changing the search strategies. However, excessive information might cause adverse effects. Application These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human–machine collaboration.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Yang, Yuqing, Boris Joukovsky, José Oramas Mogrovejo, Tinne Tuytelaars y Nikos Deligiannis. "SNIPPET: A Framework for Subjective Evaluation of Visual Explanations Applied to DeepFake Detection". ACM Transactions on Multimedia Computing, Communications, and Applications, 22 de mayo de 2024. http://dx.doi.org/10.1145/3665248.

Texto completo
Resumen
Explainable Artificial Intelligence (XAI) attempts to help humans understand machine learning decisions better and has been identified as a critical component towards increasing the trustworthiness of complex black-box systems, such as deep neural networks (DNNs). In this paper, we propose a generic and comprehensive framework named SNIPPET and create a user interface for the subjective evaluation of visual explanations, focusing on finding human-friendly explanations. SNIPPET considers human-centered evaluation tasks and incorporates the collection of human annotations. These annotations can serve as valuable feedback to validate the qualitative results obtained from the subjective assessment tasks. Moreover, we consider different user background categories during the evaluation process to ensure diverse perspectives and comprehensive evaluation. We demonstrate SNIPPET on a DeepFake face dataset. Distinguishing real from fake faces is a non-trivial task even for humans, that depends on rather subtle features, making it a challenging use case. Using SNIPPET, we evaluate four popular XAI methods which provide visual explanations: Gradient-weighted Class Activation Mapping (GradCAM), Layer-wise Relevance Propagation (LRP), attention rollout (rollout), and Transformer Attribution (TA). Based on our experimental results, we observe preference variations among different user categories. We find that most people are more favorable to the explanations of rollout. Moreover, when it comes to XAI-assisted understanding, those who have no or lack relevant background knowledge often consider that visual explanations are insufficient to help them understand. We open-source our framework for continued data collection and annotation at https://github.com/XAI-SubjEvaluation/SNIPPET.
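Grad-CAM, one of the four visual-explanation methods evaluated with SNIPPET, can be sketched as follows on a generic torchvision classifier; the backbone, target layer, and random input stand in for the paper's DeepFake detector and are assumptions for illustration.

```python
# Hedged Grad-CAM sketch on a generic torchvision backbone; the layer choice
# and random input are stand-ins, not the DeepFake detectors studied here.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()    # load pretrained weights in practice
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output
    # capture the gradient flowing back into this feature map
    output.register_hook(lambda grad: gradients.update(value=grad))

target_layer = model.layer4[-1]          # last convolutional block
target_layer.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed face crop
scores = model(image)
cls = int(scores[0].argmax())
scores[0, cls].backward()                # gradient of the predicted class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # heatmap in [0, 1]
print(cam.shape)                          # (1, 1, 224, 224), overlay on the input
```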
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

You, Yue, Chun-Hua Tsai, Yao Li, Fenglong Ma, Christopher Heron y Xinning Gui. "Beyond Self-diagnosis: How a Chatbot-based Symptom Checker Should Respond". ACM Transactions on Computer-Human Interaction, 31 de marzo de 2023. http://dx.doi.org/10.1145/3589959.

Texto completo
Resumen
Chatbot-based symptom checker (CSC) apps have become increasingly popular in healthcare. These apps engage users in human-like conversations and offer possible medical diagnoses. The conversational design of these apps can significantly impact user perceptions and experiences, and may influence medical decisions users make and the medical care they receive. However, the effects of the conversational design of CSCs remain understudied, and there is a need to investigate and enhance users’ interactions with CSCs. In this paper, we conducted a two-stage exploratory study using a human-centered design methodology. We first conducted a qualitative interview study to identify key user needs in engaging with CSCs. We then performed an experimental study to investigate potential CSC conversational design solutions based on the results from the interview study. We identified that emotional support, explanations of medical information, and efficiency were important factors for users in their interactions with CSCs. We also demonstrated that emotional support and explanations could affect user perceptions and experiences, and they are context-dependent. Based on these findings, we offer design implications for CSC conversations to improve the user experience and health-related decision-making.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Ridley, Michael. "Human‐centered explainable artificial intelligence: An Annual Review of Information Science and Technology (ARIST) paper". Journal of the Association for Information Science and Technology, 24 de marzo de 2024. http://dx.doi.org/10.1002/asi.24889.

Texto completo
Resumen
Abstract Explainability is central to trust and accountability in artificial intelligence (AI) applications. The field of human‐centered explainable AI (HCXAI) arose as a response to mainstream explainable AI (XAI), which was focused on algorithmic perspectives and technical challenges, and less on the needs and contexts of the non‐expert, lay user. HCXAI is characterized by putting humans at the center of AI explainability. Taking a sociotechnical perspective, HCXAI prioritizes user and situational contexts, prefers reflection over acquiescence, and promotes the actionability of explanations. This review identifies the foundational ideas of HCXAI, how those concepts are operationalized in system design, how legislation and regulations might normalize its objectives, and the challenges that HCXAI must address as it matures as a field.
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Smith, William Roth. "On Relationality and Organizationality: Degrees of durability, materiality, and communicatively constituting a fluid social collective". Organization Studies, 29 de julio de 2021, 017084062110354. http://dx.doi.org/10.1177/01708406211035497.

Texto completo
Resumen
Recent organizational theorizing contends that loosely structured fluid social collectives may attain degrees of “organizationality” depending on whether or not they achieve certain organization-like elements. The organizationality approach offers a compelling account for the persistence of fluid social collectives, but the framework could be strengthened by moving beyond language-centered explanations and including into theorizing a plurality of “entities” that differ in ontological status. Based on a case study within the context of a fluid user-built recreation space, this study adopts a relational ontology viewpoint on materiality to show how dynamic aspects of natural elements, expectations, feelings, and the cyclicality of nature can be theorized as material, and thus mattering, to organizing processes. Findings reveal that the degree of durability of these entities is key for understanding interconnected decision-making, identity, and ultimately how the fluid collective achieves or degrades organizationality.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Baniecki, Hubert, Dariusz Parzych y Przemyslaw Biecek. "The grammar of interactive explanatory model analysis". Data Mining and Knowledge Discovery, 14 de febrero de 2023. http://dx.doi.org/10.1007/s10618-023-00924-w.

Texto completo
Resumen
Abstract The growing need for in-depth analysis of predictive models leads to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, leading to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, most methods developed for explainable and responsible machine learning focus on a single aspect of the model behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper proposes how different Explanatory Model Analysis (EMA) methods complement each other and discusses why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in cognitive sciences. We formalize the grammar of IEMA to describe human-model interaction. It is implemented in a widely used human-centered open-source software framework that adopts interactivity, customizability and automation as its main traits. We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model may increase the accuracy and confidence of human decision making.
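In the spirit of the sequential, multi-method analysis that IEMA argues for, the sketch below juxtaposes a global view (permutation importance) with a local what-if profile for one instance using plain sklearn; it illustrates the idea only and is not the authors' framework.

```python
# Hedged sketch of a sequential, two-step explanatory analysis: a global view
# followed by a local what-if profile. Generic sklearn code, illustrative data.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Step 1 (global): which features matter for the model overall?
glob = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = np.argsort(-glob.importances_mean)
print("global ranking (top 3 features):", ranked[:3])

# Step 2 (local): for one instance, how does the prediction respond to the
# top-ranked feature while everything else is held fixed (ceteris paribus)?
instance = X[0].copy()
feat = int(ranked[0])
grid = np.linspace(X[:, feat].min(), X[:, feat].max(), 7)
for v in grid:
    probe = instance.copy()
    probe[feat] = v
    pred = model.predict(probe.reshape(1, -1))[0]
    print(f"feature {feat} = {v:+.3f} -> prediction {pred:.1f}")
# Contrasting the two views is the point: a feature that dominates globally may
# still show a flat or surprising local profile for a particular instance.
```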
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Gombolay, Grace Y., Andrew Silva, Mariah Schrum, Nakul Gopalan, Jamika Hallman‐Cooper, Monideep Dutt y Matthew Gombolay. "Effects of explainable artificial intelligence in neurology decision support". Annals of Clinical and Translational Neurology, 5 de abril de 2024. http://dx.doi.org/10.1002/acn3.52036.

Texto completo
Resumen
Abstract Objective Artificial intelligence (AI)-based decision support systems (DSS) are utilized in medicine but underlying decision-making processes are usually unknown. Explainable AI (xAI) techniques provide insight into DSS, but little is known on how to design xAI for clinicians. Here we investigate the impact of various xAI techniques on a clinician's interaction with an AI-based DSS in decision-making tasks as compared to a general population. Methods We conducted a randomized, blinded study in which members of the Child Neurology Society and American Academy of Neurology were compared to a general population. Participants received recommendations from a DSS via a random assignment of an xAI intervention (decision tree, crowd sourced agreement, case-based reasoning, probability scores, counterfactual reasoning, feature importance, templated language, and no explanations). Primary outcomes included test performance and perceived explainability, trust, and social competence of the DSS. Secondary outcomes included compliance, understandability, and agreement per question. Results We had 81 neurology participants with 284 in the general population. Decision trees were perceived as more explainable by the medical versus the general population (P < 0.01) and as more explainable than probability scores within the medical population (P < 0.001). Increasing neurology experience and perceived explainability degraded performance (P = 0.0214). Performance was not predicted by xAI method but by perceived explainability. Interpretation xAI methods have different impacts on a medical versus general population; thus, xAI is not uniformly beneficial, and there is no one-size-fits-all approach. Further user-centered xAI research targeting clinicians, and the development of personalized DSS for clinicians, is needed.
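For a concrete sense of the decision-tree intervention that participants found most explainable, a generic sklearn sketch follows; the dataset and tree depth are illustrative, not the study's DSS.

```python
# Hedged sketch of a decision-tree style explanation: the printed rules are the
# kind of artifact a participant would be shown. Dataset and depth illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = ["sepal length", "sepal width", "petal length", "petal width"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```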
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Sovrano, Francesco y Fabio Vitali. "Explanatory artificial intelligence (YAI): human-centered explanations of explainable AI and complex data". Data Mining and Knowledge Discovery, 10 de octubre de 2022. http://dx.doi.org/10.1007/s10618-022-00872-x.

Texto completo
Resumen
Abstract In this paper we introduce a new class of software tools engaged in delivering successful explanations of complex processes on top of basic Explainable AI (XAI) software systems. These tools, which we call cumulatively Explanatory AI (YAI) systems, enhance the quality of the basic output of a XAI by adopting a user-centred approach to explanation that can cater to the individual needs of the explainees with measurable improvements in usability. Our approach is based on Achinstein’s theory of explanations, where explaining is an illocutionary (i.e., broad yet pertinent and deliberate) act of pragmatically answering a question. Accordingly, user-centrality enters the equation by considering that the overall amount of information generated by answering all questions can rapidly become overwhelming and that individual users may perceive the need to explore just a few of them. In this paper, we give the theoretical foundations of YAI, formally defining a user-centred explanatory tool and the space of all possible explanations, or explanatory space, generated by it. To this end, we frame the explanatory space as a hypergraph of knowledge and we identify a set of heuristics and properties that can help approximate a decomposition of it into a tree-like representation for efficient and user-centred explanation retrieval. Finally, we provide some old and new empirical results to support our theory, showing that explanations are more than textual or visual presentations of the sole information provided by a XAI.
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

BARKAI, Galia, Moran GADOT, Hadar AMIR, Michal MENASHE, Lilach SHVIMER-ROTHSCHILD y Eyal ZIMLICHMAN. "Patient and clinician experience with a rapidly implemented large-scale video consultation program during COVID-19". International Journal for Quality in Health Care, 14 de diciembre de 2020. http://dx.doi.org/10.1093/intqhc/mzaa165.

Texto completo
Resumen
Abstract Background The coronavirus disease 2019 (COVID-19) pandemic has forced health-care providers to find creative ways to allow continuity of care in times of lockdown. Telemedicine enables provision of care when in-person visits are not possible. Sheba Medical Center made a rapid transition of outpatient clinics to video consultations (VC) during the first wave of COVID-19 in Israel. Objective Results of a survey of patient and clinician user experience with VC are reported. Methods Satisfaction surveys were sent by text messages to patients, clinicians who practice VC (users) and clinicians who do not practice VC (non-users). Questions referred to general satisfaction, ease of use, technical issues and medical and communication quality. Questions and scales were based on surveys used regularly in outpatient clinics of Sheba Medical Center. Results More than 1200 clinicians (physicians, psychologists, nurses, social workers, dietitians, speech therapists, genetic consultants and others) provided VC during the study period. Five hundred and forty patients, 162 clinicians who were users and 50 clinicians who were non-users completed the survey. High level of satisfaction was reported by 89.8% of patients and 37.7% of clinician users. Technical problems were experienced by 21% of patients and 80% of clinician users. Almost 70% of patients but only 23.5% of clinicians found the platform very simple to use. Over 90% of patients were very satisfied with clinician’s courtesy, expressed a high sense of trust, thought that clinician’s explanations and recommendations were clear and estimated that the clinician understood their problems and 86.5% of them would recommend VC to family and friends. Eighty-seven percent of clinician users recognize the benefit of VC for patients during the COVID-19 pandemic but only 68% supported continuation of the service after the pandemic. Conclusion Our study reports high levels of patient satisfaction from outpatient clinics VC during the COVID-19 pandemic. Lower levels of clinician satisfaction can mostly be attributed to technical and administrative challenges related to the newly implemented telemedicine platform. Our findings support the continued future use of VC as a means of providing patient-centered care. Future steps need to be taken to continuously improve the clinical and administrative application of telemedicine services.
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Bercher, Pascal, Felix Richter, Thilo Hörnle, Thomas Geier, Daniel Höller, Gregor Behnke, Florian Nothdurft et al. "A Planning-Based Assistance System for Setting Up a Home Theater". Proceedings of the AAAI Conference on Artificial Intelligence 29, n.º 1 (4 de marzo de 2015). http://dx.doi.org/10.1609/aaai.v29i1.9274.

Texto completo
Resumen
Modern technical devices are often too complex for many users to be able to use them to their full extent. Based on planning technology, we are able to provide advanced user assistance for operating technical devices. We present a system that assists a human user in setting up a complex home theater consisting of several HiFi devices. For a human user, the task is rather challenging due to a large number of different ports of the devices and the variety of available cables. The system supports the user by giving detailed instructions how to assemble the theater. Its performance is based on advanced user-centered planning capabilities including the generation, repair, and explanation of plans.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Schuff, Hendrik, Lindsey Vanderlyn, Heike Adel y Ngoc Thang Vu. "How to do human evaluation: A brief introduction to user studies in NLP". Natural Language Engineering, 6 de febrero de 2023, 1–24. http://dx.doi.org/10.1017/s1351324922000535.

Texto completo
Resumen
Abstract Many research topics in natural language processing (NLP), such as explanation generation, dialog modeling, or machine translation, require evaluation that goes beyond standard metrics like accuracy or F1 score toward a more human-centered approach. Therefore, understanding how to design user studies becomes increasingly important. However, few comprehensive resources exist on planning, conducting, and evaluating user studies for NLP, making it hard to get started for researchers without prior experience in the field of human evaluation. In this paper, we summarize the most important aspects of user studies and their design and evaluation, providing direct links to NLP tasks and NLP-specific challenges where appropriate. We (i) outline general study design, ethical considerations, and factors to consider for crowdsourcing, (ii) discuss the particularities of user studies in NLP, and provide starting points to select questionnaires, experimental designs, and evaluation methods that are tailored to the specific NLP tasks. Additionally, we offer examples with accompanying statistical evaluation code, to bridge the gap between theoretical guidelines and practical applications.
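In the spirit of the paper's statistical-evaluation examples, the sketch below compares Likert ratings of two systems from a between-subjects study with a rank-based test; the ratings are invented for illustration.

```python
# Hedged sketch: comparing two systems' Likert ratings with a non-parametric
# test, as is common in NLP user studies. The data are made up for illustration.
import numpy as np
from scipy.stats import mannwhitneyu

ratings_system_a = np.array([5, 4, 4, 3, 5, 4, 2, 5, 4, 4])
ratings_system_b = np.array([3, 3, 4, 2, 3, 2, 4, 3, 3, 2])

stat, p = mannwhitneyu(ratings_system_a, ratings_system_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
# Ordinal Likert data and small samples are why a rank-based test is often
# preferred over a t-test in this setting.
```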
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Conway, Mike, Howard Burkom y Amy Ising. "Cross Disciplinary Consultancy: Negation Detection Use Case". Online Journal of Public Health Informatics 11, n.º 1 (30 de mayo de 2019). http://dx.doi.org/10.5210/ojphi.v11i1.9698.

Texto completo
Resumen
Objective This abstract describes an ISDS initiative to bring together public health practitioners and analytics solution developers from both academia and industry to define a roadmap for the development of algorithms, tools, and datasets to improve the capabilities of current text processing algorithms to identify negated terms (i.e. negation detection). Introduction Despite considerable effort since the turn of the century to develop Natural Language Processing (NLP) methods and tools for detecting negated terms in chief complaints, few standardised methods have emerged. Those methods that have emerged (e.g. the NegEx algorithm [1]) are confined to local implementations with customised solutions. Important reasons for this lack of progress include (a) limited shareable datasets for developing and testing methods (b) jurisdictional data silos, and (c) the gap between resource-constrained public health practitioners and technical solution developers, typically university researchers and industry developers. To address these three problems ISDS, funded by a grant from the Defense Threat Reduction Agency, organized a consultancy meeting at the University of Utah designed to bring together (a) representatives from public health departments, (b) university researchers focused on the development of computational methods for public health surveillance, (c) members of public health oriented non-governmental organisations, and (d) industry representatives, with the goal of developing a roadmap for the development of validated, standardised and portable resources (methods and data sets) for negation detection in clinical text used for public health surveillance. Methods Free-text chief complaints remain a vital resource for syndromic surveillance. However, the widespread adoption of Electronic Health Records (and federal Meaningful Use requirements) has brought changes to the syndromic surveillance practice ecosystem. These changes have included the widespread use of EHR-generated chief complaint "pick lists" (i.e. pre-defined chief complaints that are selected by the user, rather than text strings input by the user at a keyboard), triage note templated text, and triage note free-text (typically much more comprehensive than traditional chief complaints). A key requirement for a negation detection algorithm is the ability to successfully and accurately process these new and challenging data streams. Preparations for the consultancy included an email thread and a shared website for published articles and data samples leading to a structured pre-consultancy call designed to inform participants regarding the purpose of the consultancy and to align expectations. Then, health department users were requested to provide data samples exemplifying negation issues in the classification process. Presenting developers were asked to explain their underlying ideas, details of method implementation, size and composition of corpora used for evaluation, and classification performance results. Results The consultancy was held on January 19th & 20th 2017 at the University of Utah's Department of Biomedical Informatics, and consisted of 25 participants. Participants were drawn from various different sectors, with representation from ISDS (2), the Defense Threat Reduction Agency (1), universities and research institutes (10), public health departments (5), the Department of Veterans Affairs (4), non-profit organisations (2), and technology firms (1).
Participants were drawn from a variety of different professional backgrounds, including research scientists, software developers, public health executives, epidemiologists, and analysts. Day 1 of the consultancy was devoted to providing an overview of NLP and current trends in negation detection, including a detailed description of widely used algorithms and tools for the negation detection task. Key questions included: Should our focus be chief complaints only, or should we widen our scope to emergency department triage notes? How many other NLP tasks (e.g. reliable concept recognition) is it necessary to address on the road to improved negation detection? With this background established, Day 2 centered on presentations from five different United States local and regional health departments (King County WA, Boston MA, North Carolina, Georgia, and Tennessee) on the various approaches to text processing and negation detection utilized across several jurisdictions. Several key areas of focus emerged as a result of the consultancy discussion. First, there is a clear need for a large, easily accessible corpus of free-text chief complaints that can form a standardised testbed for negation detection algorithm development and evaluation. Annotated data, in this context, consists of chief complaints annotated for concepts (e.g. vomiting, pain in chest) and the negation status of those concepts. It is important that the annotation include both annotated clinical concepts and negation status to allow for the uniform evaluation and performance comparison of candidate negation detection algorithms. Further, the annotated corpus should consist of several thousand (as opposed to several hundred) distinct and representative chief complaints in order to compare algorithms against a sufficient variety and volume of negation patterns. Conclusions The consultancy was stimulating and eye-opening for both public health practitioner and technology developer attendees. Developers unfamiliar with the everyday health-monitoring context gained an appreciation of the difficulty of deriving useful indicators from chief complaints. Also highlighted was the challenge of processing triage notes and other free-text fields that are often unused for surveillance purposes. Practitioners were provided with concise explanations and evaluations of recent NLP approaches applicable to negation processing. The event afforded direct dialogue important for communication across professional cultures. Please note that a journal paper describing the consultancy has recently been published in the Online Journal of Public Health Informatics [2]. References: [1] Chapman W, Bridewell W, Hanbury P, Cooper G, Buchanan B. A simple algorithm for identifying negated findings and diseases in discharge summaries. J Biomed Inform. 2001, 34(5):301-310. [2] Conway M, Mowery D, Ising A, Velupillai S, Doan S, Gunn J, Donovan M, Wiedeman C, Ballester L, Soetebier K, Tong C, Burkom H. Cross-disciplinary consultancy to bridge public health technical needs and analytic developers: negation detection use case. Online Journal of Public Health Informatics. 2018, 10(2)
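A minimal, NegEx-style sketch in the spirit of reference [1] is shown below: a clinical concept is flagged as negated when a trigger phrase precedes it within a short token window. The trigger list, window size, and substring matching are deliberate simplifications for illustration, not a validated implementation.

```python
# Hedged, minimal NegEx-style negation check for chief-complaint text.
# Triggers, window size, and substring matching are simplified for illustration.
import re

NEGATION_TRIGGERS = ["no", "denies", "without", "not", "negative for"]
WINDOW = 5   # tokens allowed between trigger and concept

def is_negated(text, concept):
    tokens = re.findall(r"[a-z']+", text.lower())
    concept_tokens = concept.lower().split()
    for i in range(len(tokens) - len(concept_tokens) + 1):
        if tokens[i:i + len(concept_tokens)] == concept_tokens:
            window = " ".join(tokens[max(0, i - WINDOW):i])
            if any(trigger in window for trigger in NEGATION_TRIGGERS):
                return True
    return False

print(is_negated("pt denies fever or chills", "fever"))           # True
print(is_negated("fever and productive cough x3 days", "fever"))  # False
```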
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Schoeller, Felix, Mark Miller, Roy Salomon y Karl J. Friston. "Trust as Extended Control: Human-Machine Interactions as Active Inference". Frontiers in Systems Neuroscience 15 (13 de octubre de 2021). http://dx.doi.org/10.3389/fnsys.2021.669810.

Texto completo
Resumen
In order to interact seamlessly with robots, users must infer the causes of a robot’s behavior, and be confident about that inference (and its predictions). Hence, trust is a necessary condition for human-robot collaboration (HRC). However, and despite its crucial role, it is still largely unknown how trust emerges, develops, and supports the human relationship to technological systems. In the following paper we review the literature on trust, human-robot interaction, HRC, and human interaction at large. Early models of trust suggest that it is a trade-off between benevolence and competence, while studies of human to human interaction emphasize the role of shared behavior and mutual knowledge in the gradual building of trust. We go on to introduce a model of trust as an agent’s best explanation for reliable sensory exchange with an extended motor plant or partner. This model is based on the cognitive neuroscience of active inference and suggests that, in the context of HRC, trust can be cast in terms of virtual control over an artificial agent. Interactive feedback is a necessary condition for the extension of the trustor’s perception-action cycle. This model has important implications for understanding human-robot interaction and collaboration, as it allows the traditional determinants of human trust, such as the benevolence and competence attributed to the trustee, to be defined in terms of hierarchical active inference, while vulnerability can be described in terms of information exchange and empowerment. Furthermore, this model emphasizes the role of user feedback during HRC and suggests that boredom and surprise may be used in personalized interactions as markers for under- and over-reliance on the system. The description of trust as a sense of virtual control offers a crucial step toward grounding human factors in cognitive neuroscience and improving the design of human-centered technology. Furthermore, we examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration, suggesting important consequences for the acceptability and design of human-robot collaborative systems.
Los estilos APA, Harvard, Vancouver, ISO, etc.