Selection of scholarly literature on the topic "User-Centered explanations"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "User-Centered explanations".

Next to every entry in the list of references you will find the option "Add to bibliography". Use it, and the bibliographic reference for the chosen work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "User-Centered explanations"

1

Delaney, Eoin, Arjun Pakrashi, Derek Greene, and Mark T. Keane. "Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ (Abstract Reprint)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22696. http://dx.doi.org/10.1609/aaai.v38i20.30596.

Full text of the source
Annotation:
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems because people easily understand them, they apply across different problem domains and seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective; for instance, asking people for their counterfactual explanations to determine their perspective on a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images, in two user studies and, then, (ii) compares these human-generated explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.
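The "minimal edit" notion in this abstract can be made concrete with a small sketch. The code below is a hypothetical illustration, not the authors' method: a brute-force search that grows a single-feature edit until a toy linear "black box" flips its prediction, returning that edited instance as the counterfactual.

```python
# Toy sketch of a minimal-edit counterfactual search (hypothetical
# illustration, not the paper's method). The "black box" is a stand-in
# linear classifier; we grow a single-feature edit until the predicted
# class flips, and the edited instance is the counterfactual explanation.

def predict(x, w=(1.0, -1.0), b=0.0):
    """Stand-in black-box model: linear score thresholded at zero."""
    return 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0

def minimal_counterfactual(x, step=0.1, max_steps=100):
    """Smallest single-feature edit (in multiples of `step`) that flips the label."""
    original = predict(x)
    for k in range(1, max_steps + 1):      # growing edit magnitude
        for i in range(len(x)):            # edit one feature at a time
            for sign in (1, -1):
                candidate = list(x)
                candidate[i] += sign * k * step
                if predict(candidate) != original:
                    return candidate       # first flip = smallest edit found
    return None                            # no counterfactual within range

x = (0.5, 1.0)                             # predicted class 0
cf = minimal_counterfactual(x)
print(cf, predict(cf))                     # a small edit that flips the class
```

Human explainers, per the abstract's findings, would instead make larger, prototype-like edits rather than this minimal nudge.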
APA, Harvard, Vancouver, ISO and other citation styles
2

Hwang, Jeonghwan, Taeheon Lee, Honggu Lee, and Seonjeong Byun. "A Clinical Decision Support System for Sleep Staging Tasks With Explanations From Artificial Intelligence: User-Centered Design and Evaluation Study." Journal of Medical Internet Research 24, no. 1 (January 19, 2022): e28659. http://dx.doi.org/10.2196/28659.

Full text of the source
Annotation:
Background: Despite the unprecedented performance of deep learning algorithms in clinical domains, full reviews of algorithmic predictions by human experts remain mandatory. Under these circumstances, artificial intelligence (AI) models are primarily designed as clinical decision support systems (CDSSs). However, from the perspective of clinical practitioners, the lack of clinical interpretability and user-centered interfaces hinders the adoption of these AI systems in practice.
Objective: This study aims to develop an AI-based CDSS for assisting polysomnographic technicians in reviewing AI-predicted sleep staging results. This study proposed and evaluated a CDSS that provides clinically sound explanations for AI predictions in a user-centered manner.
Methods: Our study is based on a user-centered design framework for developing explanations in a CDSS that identifies why explanations are needed, what information should be contained in explanations, and how explanations can be provided in the CDSS. We conducted user interviews, user observation sessions, and an iterative design process to identify three key aspects for designing explanations in the CDSS. After constructing the CDSS, the tool was evaluated to investigate how the CDSS explanations helped technicians. We measured the accuracy of sleep staging and interrater reliability with macro-F1 and Cohen κ scores to assess quantitative improvements after our tool was adopted. We assessed qualitative improvements through participant interviews that established how participants perceived and used the tool.
Results: The user study revealed that technicians desire explanations that are relevant to key electroencephalogram (EEG) patterns for sleep staging when assessing the correctness of AI predictions. Here, technicians wanted explanations that could be used to evaluate whether the AI models properly locate and use these patterns during prediction. On the basis of this, information that is closely related to sleep EEG patterns was formulated for the AI models. In the iterative design phase, we developed a different visualization strategy for each pattern based on how technicians interpreted the EEG recordings with these patterns during their workflows. Our evaluation study on 9 polysomnographic technicians quantitatively and qualitatively investigated the helpfulness of the tool. For technicians with <5 years of work experience, their quantitative sleep staging performance improved significantly from 56.75 to 60.59 with a P value of .05. Qualitatively, participants reported that the information provided effectively supported them, and they could develop notable adoption strategies for the tool.
Conclusions: Our findings indicate that formulating clinical explanations for automated predictions using the information in the AI with a user-centered design process is an effective strategy for developing a CDSS for sleep staging.
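The study's two quantitative metrics, macro-F1 and Cohen's κ, are standard and easy to compute. A minimal pure-Python sketch (the sleep-stage labels below are invented; real analyses would typically use a library such as scikit-learn):

```python
# Hand-rolled macro-F1 and Cohen's kappa, the two metrics the study uses
# for staging accuracy and interrater reliability. Example labels are
# invented; production code would typically call sklearn.metrics.
from collections import Counter

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

def cohen_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

truth = ["Wake", "N1", "N2", "N2"]
scored = ["Wake", "N2", "N2", "N2"]
print(round(macro_f1(truth, scored), 3), round(cohen_kappa(truth, scored), 3))
```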
APA, Harvard, Vancouver, ISO and other citation styles
3

Rong, Yao, Peizhu Qian, Vaibhav Unhelkar, and Enkelejda Kasneci. "I-CEE: Tailoring Explanations of Image Classification Models to User Expertise." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21545–53. http://dx.doi.org/10.1609/aaai.v38i19.30152.

Full text of the source
Annotation:
Effectively explaining decisions of black-box machine learning models is critical to responsible deployment of AI systems that rely on them. Recognizing their importance, the field of explainable AI (XAI) provides several techniques to generate these explanations. Yet, there is relatively little emphasis on the user (the explainee) in this growing body of work and most XAI techniques generate "one-size-fits-all" explanations. To bridge this gap and move a step closer to human-centered XAI, we present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise. Informed by existing work, I-CEE explains the decisions of image classification models by providing the user with an informative subset of training data (i.e., example images), corresponding local explanations, and model decisions. However, unlike prior work, I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users. We posit that by tailoring the example set to user expertise, I-CEE can better facilitate users' understanding and simulatability of the model. To evaluate our approach, we conduct detailed experiments in both simulation and with human participants (N = 100) on multiple datasets. Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions (simulatability) compared to baselines, providing promising preliminary results. Experiments with human participants demonstrate that our method significantly improves user simulatability accuracy, highlighting the importance of human-centered XAI.
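"Simulatability", the outcome measure in this abstract, has a simple operationalization: the fraction of instances on which a user correctly forecasts the model's decision. A sketch with invented labels:

```python
# Sketch of a simulatability measure: the share of instances for which a
# user's guess matches the model's actual decision. Labels are invented
# for illustration; the paper's experimental protocol is richer.

def simulatability(user_guesses, model_decisions):
    """Fraction of instances where the user's guess matches the model."""
    if len(user_guesses) != len(model_decisions):
        raise ValueError("guess/decision lists must align")
    hits = sum(g == m for g, m in zip(user_guesses, model_decisions))
    return hits / len(model_decisions)

guesses = ["cat", "dog", "cat", "bird"]
decisions = ["cat", "dog", "dog", "bird"]
print(simulatability(guesses, decisions))  # 3 of 4 guesses correct
```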
APA, Harvard, Vancouver, ISO and other citation styles
4

Guesmi, Mouadh, Mohamed Amine Chatti, Shoeb Joarder, Qurat Ul Ain, Clara Siepmann, Hoda Ghanbarzadeh, and Rawaa Alatrash. "Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System." Information 14, no. 7 (July 14, 2023): 401. http://dx.doi.org/10.3390/info14070401.

Full text of the source
Annotation:
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
APA, Harvard, Vancouver, ISO and other citation styles
5

Morrison, Katelyn, Philipp Spitzer, Violet Turri, Michelle Feng, Niklas Kühl, and Adam Perer. "The Impact of Imperfect XAI on Human-AI Decision-Making." Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 17, 2024): 1–39. http://dx.doi.org/10.1145/3641022.

Full text of the source
Annotation:
Explainability techniques are rapidly being developed to improve human-AI decision-making across various cooperative work settings. Consequently, previous research has evaluated how decision-makers collaborate with imperfect AI by investigating appropriate reliance and task performance with the aim of designing more human-centered computer-supported collaborative tools. Several human-centered explainable AI (XAI) techniques have been proposed in hopes of improving decision-makers' collaboration with AI; however, these techniques are grounded in findings from previous studies that primarily focus on the impact of incorrect AI advice. Few studies acknowledge the possibility of the explanations being incorrect even if the AI advice is correct. Thus, it is crucial to understand how imperfect XAI affects human-AI decision-making. In this work, we contribute a robust, mixed-methods user study with 136 participants to evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task, taking into account their level of expertise and an explanation's level of assertiveness. Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance. We also discuss how explanations can deceive decision-makers during human-AI collaboration. Hence, we shed light on the impacts of imperfect XAI in the field of computer-supported cooperative work and provide guidelines for designers of human-AI collaboration systems.
APA, Harvard, Vancouver, ISO and other citation styles
6

Shu, Derek, Catherine T. Xu, Somya Pandey, Virginia Walls, Kristen Tenney, Abby Georgilis, Lisa Melink, Danny T. Y. Wu, and Jennifer Rose Molano. "User-centered Design and Formative Evaluation of a Web Application to Collect and Visualize Real-time Clinician Well-being Levels." ACI Open 8, no. 1 (January 2024): e1–e9. http://dx.doi.org/10.1055/s-0044-1779698.

Full text of the source
Annotation:
Background: Clinician burnout is increasingly prevalent in the health care workplace. Hospital leadership needs an informatics tool to measure clinicians' well-being levels and provide empirical evidence to improve their work environment.
Objectives: This study aimed to (1) design and implement a web-based application to collect and visualize clinicians' well-being levels and (2) conduct a formative usability evaluation.
Methods: Clinician and staff well-being champions guided the development of the Well-being Check application. User-centered design and Agile principles were used for incremental development of the app. The app included a customizable survey and an interactive visualization. The survey consisted of six standard, two optional, and three additional questions. The interactive visualization included various charts and word clouds with filters for drill-down analysis. The evaluation was done primarily with the rehabilitation (REHAB) team using data-centered approaches, through historical survey data and qualitative coding of the free-text explanations, and user-centered approaches, through the System Usability Scale (SUS).
Results: The evaluation showed that the app appropriately accommodated historical survey data from the REHAB team, enabling the comparison between self-assessed and perceived team well-being levels, and summarized key drivers based on the qualitative coding of the free-text explanations. Responses from the 23 REHAB team members showed an above-average score (SUS: 80.22), indicating high usability of the app.
Conclusion: The Well-being Check app was developed in a user-centered manner and evaluated to demonstrate its effectiveness and usability. Future work includes iterative refinement of the app and designing a pre-post study using the app to measure the change in clinicians' well-being levels for quality improvement intervention.
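The SUS score of 80.22 reported above follows the standard System Usability Scale formula: ten items rated 1–5, where odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is scaled by 2.5 onto a 0–100 range. A sketch (the example ratings are invented):

```python
# Standard System Usability Scale (SUS) scoring, the instrument behind
# the 80.22 figure in the study. The example ratings are invented.

def sus_score(ratings):
    """Score one respondent's ten SUS item ratings (each 1-5) on a 0-100 scale."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS needs ten ratings in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # index 0 is item 1 (odd)
                for i, r in enumerate(ratings))
    return total * 2.5                               # scale 0-40 onto 0-100

respondent = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]
print(sus_score(respondent))
```

A common rule of thumb reads scores above roughly 68 as above-average usability, which is consistent with how the study interprets its 80.22.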
APA, Harvard, Vancouver, ISO and other citation styles
7

Panchaud, Nadia H., and Lorenz Hurni. "Integrating Cartographic Knowledge Within a Geoportal: Interactions and Feedback in the User Interface." Cartographic Perspectives, no. 89 (April 11, 2018): 5–24. http://dx.doi.org/10.14714/cp89.1402.

Full text of the source
Annotation:
Custom user maps (also called map mashups) made on geoportals by novice users often lead to poor cartographic results, because cartographic expertise is not part of the mapmaking process. In order to integrate cartographic design functionality within a geoportal, we explored several strategies and design choices. These strategies aimed at integrating explanations about cartographic rules and functions within the mapmaking process. They are defined and implemented based on a review of human-centered design, usability best practices, and previous work on cartographic applications. Cartographic rules and functions were made part of a cartographic wizard, which was evaluated with the help of a usability study. The study results show that the overall user experience with the cartographic functions and the wizard workflow was positive, although implementing functionalities for a diverse target audience proved challenging. Additionally, the results show that offering different ways to access information is welcomed and that explanations pertaining directly to the specific user-generated map are both helpful and preferred. Finally, the results provide guidelines for user interaction design for cartographic functionality on geoportals and other online mapping platforms.
APA, Harvard, Vancouver, ISO and other citation styles
8

Zhang, Zhan, Daniel Citardi, Dakuo Wang, Yegin Genc, Juan Shan, and Xiangmin Fan. "Patients’ perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data." Health Informatics Journal 27, no. 2 (April 2021): 146045822110112. http://dx.doi.org/10.1177/14604582211011215.

Full text of the source
Annotation:
Results of radiology imaging studies are not typically comprehensible to patients. With the advances in artificial intelligence (AI) technology in recent years, it is expected that AI technology can aid patients’ understanding of radiology imaging data. The aim of this study is to understand patients’ perceptions and acceptance of using AI technology to interpret their radiology reports. We conducted semi-structured interviews with 13 participants to elicit reflections pertaining to the use of AI technology in radiology report interpretation. A thematic analysis approach was employed to analyze the interview data. Participants have a generally positive attitude toward using AI-based systems to comprehend their radiology reports. AI is perceived to be particularly useful in seeking actionable information, confirming the doctor’s opinions, and preparing for the consultation. However, we also found various concerns related to the use of AI in this context, such as cyber-security, accuracy, and lack of empathy. Our results highlight the necessity of providing AI explanations to promote people’s trust and acceptance of AI. Designers of patient-centered AI systems should employ user-centered design approaches to address patients’ concerns. Such systems should also be designed to promote trust and deliver concerning health results in an empathetic manner to optimize the user experience.
APA, Harvard, Vancouver, ISO and other citation styles
9

Pasrija, Vatesh, and Supriya Pasrija. "Demystifying Recommendations: Transparency and Explainability in Recommendation Systems." International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (February 29, 2024): 1376–83. http://dx.doi.org/10.22214/ijraset.2024.58541.

Full text of the source
Annotation:
Abstract: Recommendation algorithms are widely used, however many consumers want more clarity on why specific goods are recommended to them. The absence of explainability jeopardizes user trust, satisfaction, and potentially privacy. Improving transparency is difficult and involves the need for flexible interfaces, privacy protection, scalability, and customisation. Explainable recommendations provide substantial advantages such as enhancing relevance assessment, bolstering user interactions, facilitating system monitoring, and fostering accountability. Typical methods include giving summaries of the fundamental approach, emphasizing significant data points, and utilizing hybrid recommendation models. Case studies demonstrate that openness has assisted platforms such as YouTube and Spotify in achieving more than a 10% increase in key metrics like click-through rate. Additional research should broaden the methods for providing explanations, increase real-world implementation in other industries, guarantee human-centered supervision of suggestions, and promote consumer trust by following ethical standards. Accurate recommendations are essential. The future involves developing technologies that provide users with information while honoring their autonomy.
APA, Harvard, Vancouver, ISO and other citation styles
10

Robatto Simard, Simon, Michel Gamache, and Philippe Doyon-Poulin. "Development and Usability Evaluation of VulcanH, a CMMS Prototype for Preventive and Predictive Maintenance of Mobile Mining Equipment." Mining 4, no. 2 (May 9, 2024): 326–51. http://dx.doi.org/10.3390/mining4020019.

Full text of the source
Annotation:
This paper details the design, development, and evaluation of VulcanH, a computerized maintenance management system (CMMS) specialized in preventive maintenance (PM) and predictive maintenance (PdM) management for underground mobile mining equipment. Further, it aims to expand knowledge on trust in automation (TiA) for PdM as well as contribute to the literature on explainability requirements of a PdM-capable artificial intelligence (AI). This study adopted an empirical approach through the execution of user tests with nine maintenance experts from five East-Canadian mines and implemented the User Experience Questionnaire Plus (UEQ+) and the Reliance Intentions Scale (RIS) to evaluate usability and TiA, respectively. It was found that the usability and efficiency of VulcanH were satisfactory for expert users and encouraged the gradual transition from PM to PdM practices. Quantitative and qualitative results documented participants’ willingness to rely on PdM predictions as long as suitable explanations are provided. Graphical explanations covering the full spectrum of the derived data were preferred. Due to the prototypical nature of VulcanH, certain relevant aspects of maintenance planning were not considered. Researchers are encouraged to include these notions in the evaluation of future CMMS proposals. This paper suggests a harmonious integration of both preventive and predictive maintenance practices in the mining industry. It may also guide future research in PdM to select an analytical algorithm capable of supplying adequate and causal justifications for informed decision making. This study fulfills an identified need to adopt a user-centered approach in the development of CMMSs in the mining industry. Hence, both researchers and industry stakeholders may benefit from the findings.
APA, Harvard, Vancouver, ISO and other citation styles

Theses on the topic "User-Centered explanations"

1

Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.

Full text of the source
Annotation:
Decision support systems based on combinatorial optimization find application in various professional domains. However, decision-makers who use these systems often lack understanding of their underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance in accepting system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue in the case of the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions.
First, we propose a framework that models the process for explaining solutions to the end-users of a WSRP-solving system while allowing a wide range of topics to be addressed. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario, or counterfactual type. From a mathematical point of view, they essentially amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question type, this leads to the formulation of one or several decision problems and mathematical programs.
Then, we develop a method for generating explanation texts of different types, with a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms for computing and extracting the relevant explanatory information and populating explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users.
Finally, we introduce a system design for structuring the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This system serves as a basis for a graphical-user-interface prototype that aims to demonstrate the practical applicability and potential benefits of our approach as a whole.
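The thesis's core question type, "does a feasible, better solution exist in a neighborhood of the current one?", can be illustrated on a toy assignment instance. Everything below (task and worker names, costs, the one-task-per-worker constraint) is invented for illustration; the real WSRP model is far richer.

```python
# Toy sketch of the thesis's framing: a contrastive question such as
# "why is task t1 assigned to worker B and not A?" becomes a decision
# problem -- does a feasible, cheaper solution exist in the swap
# neighborhood of the current assignment? All data here is invented.
from itertools import combinations

COST = {  # COST[worker][task]: travel/skill cost of the assignment
    "A": {"t1": 2, "t2": 9},
    "B": {"t1": 8, "t2": 3},
}

def total_cost(assignment):                  # assignment: {task: worker}
    return sum(COST[w][t] for t, w in assignment.items())

def feasible(assignment):                    # toy constraint: one task per worker
    workers = list(assignment.values())
    return len(workers) == len(set(workers))

def better_in_swap_neighborhood(assignment):
    """Return an improving feasible neighbor reached by swapping two tasks, or None."""
    tasks = list(assignment)
    for t1, t2 in combinations(tasks, 2):
        neighbor = dict(assignment)
        neighbor[t1], neighbor[t2] = assignment[t2], assignment[t1]
        if feasible(neighbor) and total_cost(neighbor) < total_cost(assignment):
            return neighbor
    return None

current = {"t1": "B", "t2": "A"}             # total cost 8 + 9 = 17
print(better_in_swap_neighborhood(current))  # swapping drops the cost to 5
```

If the search returns None, the explanation text can assert that no better solution exists in that neighborhood; if it returns a neighbor, the difference between the two solutions is the raw material for a contrastive explanation.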
APA, Harvard, Vancouver, ISO and other citation styles

Book chapters on the topic "User-Centered explanations"

1

Chari, Shruthi, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, and Deborah L. McGuinness. "Explanation Ontology: A Model of Explanations for User-Centered AI." In Lecture Notes in Computer Science, 228–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62466-8_15.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
2

Novak, Jasminko, Tina Maljur, and Kalina Drenska. "Transferring AI Explainability to User-Centered Explanations of Complex COVID-19 Information." In Lecture Notes in Computer Science, 441–60. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21707-4_31.

Full text of the source
Annotation:
Abstract: This paper presents a user-centered approach to translating techniques and insights from AI explainability research to developing effective explanations of complex issues in other fields, on the example of COVID-19. We show how the problem of AI explainability and the explainability problem in the COVID-19 pandemic are related: as two specific instances of a more general explainability problem, occurring when people face in-transparent, complex systems and processes whose functioning is not readily observable and understandable to them (“black boxes”). Accordingly, we discuss how we applied an interdisciplinary, user-centered approach based on Design Thinking to develop a prototype of a user-centered explanation for a complex issue regarding people’s perception of COVID-19 vaccine development. The developed prototype demonstrates how AI explainability techniques can be adapted and integrated with methods from communication science, visualization and HCI to be applied to this context. We also discuss results from a first evaluation in a user study with 88 participants and outline future work. The results indicate that it is possible to effectively apply methods and insights from explainable AI to explainability problems in other fields and support the suitability of our conceptual framework to inform that. In addition, we show how the lessons learned in the process provide new insights for informing further work on user-centered approaches to explainable AI itself.
APA, Harvard, Vancouver, ISO and other citation styles
3

Oury, Jacob D., and Frank E. Ritter. "How User-Centered Design Supports Situation Awareness for Complex Interfaces." In Human–Computer Interaction Series, 21–35. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-47775-2_2.

Full text of the source
Annotation:
Abstract: This chapter moves the discussion of how to design an operation center down a level towards implementation. We present user-centered design (UCD) as a distinct design philosophy to replace user experience (UX) when designing systems like the Water Detection System (WDS). Just like any other component (e.g., electrical system, communications networks), the operator has safe operating conditions, expected error rates, and predictable performance, albeit with a more variable range for the associated metrics. However, analyzing the operator’s capabilities, like any other component in a large system, helps developers create reliable, effective systems that mitigate risks of system failure due to human error in integrated human–machine systems (e.g., air traffic control). With UCD as a design philosophy, we argue that situation awareness (SA) is an effective framework for developing successful UCD systems. SA is an established framework that describes operator performance via their ability to create and maintain a mental model of the information necessary to achieve their task. SA describes performance as a function of the operator’s ability to perceive useful information, comprehend its significance, and predict future system states. Alongside detailed explanations of UCD and SA, this chapter presents further guidance and examples demonstrating how to implement these concepts in real systems.
APA, Harvard, Vancouver, ISO and other citation styles
4

Rabethge, Nico, and Franz Kummert. "Developing a Human-centred AI-based System to Assist Sorting Laundry." In Informatik aktuell, 23–35. Wiesbaden: Springer Fachmedien Wiesbaden, 2024. http://dx.doi.org/10.1007/978-3-658-43705-3_3.

Full text of the source
Annotation:
Abstract: This paper presents the development of a human-centred AI system for the classification of laundry according to washing categories such as color and type. The system aims to provide a solution that is both accurate and easy to use for individuals with varying levels of technical expertise. The development process involved a human-centred approach, including user research and testing, to ensure that the system meets the needs and expectations of its users. The system uses a combination of computer vision techniques and machine learning algorithms to analyze images of dirty laundry and provide recommendations for the appropriate washing category. In addition to the development of the system itself, this paper also focuses on the explanation of the AI. The aim is to increase transparency and promote understanding of how the system makes its decisions. This is achieved through the use of visualizations and explanations that make the inner workings of the AI more accessible to users. The results of testing demonstrate that the system is effective in accurately classifying dirty laundry. The explanation component has yet to receive broader user feedback on whether it increases trust in the system and is easy to use. The development of a human-centered AI system for laundry classification has the potential to improve the efficiency and accuracy of laundry sorting while also promoting understanding and trust in AI systems.

Abstract (translated from the German): This paper presents the development of a human-centred AI system for classifying laundry into washing categories such as color and type. The system aims to offer a solution that is both easy to operate and as accurate as possible for people with varying levels of technical expertise. It uses a combination of computer vision techniques and deep learning algorithms to analyze images of dirty laundry and recommend the correct washing category. Besides the development of the system itself, this paper also covers the explanation of the AI and active learning. The goal is to increase transparency and foster understanding of how the system reaches its decisions. This is achieved through visualizations and explanations that bring the workings of the AI closer to users. Active learning reduces the effort of annotating the data, which would otherwise have to be repeated for each laundry facility because of differing needs. The test results show that the system can reliably classify certain attributes of dirty laundry. User studies are still needed to verify whether the system actually strengthens users' trust and is easy to operate. The development of a human-centred AI system for laundry classification has the potential to improve the efficiency and accuracy of laundry sorting while also fostering understanding of and trust in AI systems.
5

Zhao, Yan, Yaohua Chen and Yiyu Yao. „User-Centered Interactive Data Mining“. In Data Warehousing and Mining, 2051–66. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-951-9.ch122.

Abstract:
While many data mining models concentrate on automation and efficiency, interactive data mining models focus on adaptive and effective communication between human users and computer systems. User requirements and preferences play the most important roles in human-machine interaction and guide the selection of target knowledge representations, operations, and measurements. In practice, user requirements and preferences also determine strategies for handling abnormal situations and for explaining mined patterns. In this article, we discuss these fundamental issues based on a user-centered three-layer framework of interactive data mining.
6

Ang, Chee S., and Panayiotis Zaphiris. „Developing Enjoyable Second Language Learning Software Tools“. In User-Centered Computer Aided Language Learning, 1–21. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-750-8.ch001.

Abstract:
This chapter attempts to examine computer game theories (ludology and narratology) that explain computer games as play activities and storytelling media. Founded on this theoretical explanation, a game model that incorporates gameplay and narratives is presented. From the model, two aspects of learning in the game environment are identified: gameplay-oriented and narrative-oriented. It is believed that playing computer games involves at least one of these types of learning; thus, this nature of games can be used in designing engaging educational software. In addition, based on Malone’s theoretical framework on motivational heuristics, there are two methods of applying computer games in language learning: extrinsic and intrinsic, depending on the integration of game designs and learning materials. Then, two cases of language-learning games are scrutinized, using the game model, in order to demonstrate the use of computer games in language learning.
7

Lee, Sangwon. „AI as an explanation agent and user-centered explanation interfaces for trust in AI-based systems“. In Human-Centered Artificial Intelligence, 91–102. Elsevier, 2022. http://dx.doi.org/10.1016/b978-0-323-85648-5.00014-1.

8

Reis, Rosa, and Paula Escudeiro. „The Role of Virtual Worlds for Enhancing Student-Student Interaction in MOOCs“. In User-Centered Design Strategies for Massive Open Online Courses (MOOCs), 208–21. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9743-0.ch013.

Abstract:
This theoretical chapter attempts to clarify the role of interaction in Massive Open Online Courses (MOOCs) and the increased emphasis on utilizing virtual worlds as tools in a constructive process in which the learner should be actively involved. An overview of the core concepts of MOOCs and virtual worlds is provided, along with an explanation of how these environments can be used to help create more authentic learning activities. The chapter presents an interaction model based on collaboration, so as to elucidate the major design differences. In conclusion, we explore the changing role of formal learning in an era of open education, where Massive Open Online Courses can allow access, in many cases completely free of cost to the learner.
9

Mehrotra, Siddharth, Carolina Centeio Jorge, Catholijn M. Jonker and Myrthe L. Tielman. „Building Appropriate Trust in AI: The Significance of Integrity-Centered Explanations“. In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230121.

Abstract:
Establishing an appropriate level of trust between people and AI systems is crucial to avoid the misuse, disuse, or abuse of AI. Understanding how AI systems can generate appropriate levels of trust among users is necessary to achieve this goal. This study focuses on the impact of displaying integrity, which is one of the factors that influence trust. The study analyzes how different integrity-based explanations provided by an AI agent affect a human’s appropriate level of trust in the agent. To explore this, we conducted a between-subject user study involving 160 participants who collaborated with an AI agent to estimate calories on a food plate, with the AI agent expressing its integrity in different ways through explanations. The preliminary results demonstrate that an AI agent that explicitly acknowledges honesty in its decision-making process elicits higher subjective trust than one that is transparent about its decision-making process or fair about biases. These findings can aid in designing agent-based AI systems that foster appropriate trust from humans.
10

Xia, Ziqing, Cherng En Lee, Chun-Hsien Chen, Jo-Yu Kuo and Kendrik Yan Hong Lim. „Mental States and Cognitive Performance Monitoring for User-Centered e-Learning System: A Case Study“. In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220697.

Abstract:
The unprecedented long-term online learning caused by COVID-19 has increased stress symptoms among students. E-learning systems reduce communication between teachers and students, making it difficult to observe students’ mental issues and learning performance. This study aims to develop a non-intrusive method that can simultaneously monitor the stress states and cognitive performance of students in the scenario of online education. Forty-three participants were recruited to perform a computer-based reading task under stressful and non-stressful conditions, and their eye-movement data were recorded. A tree-ensemble machine learning model, LightGBM (Light Gradient Boosting Machine), was used to predict the stress states and reading performance of students with accuracies of 0.825 and 0.793, respectively. An interpretable model, SHAP (SHapley Additive exPlanations), was used to identify the most important eye-movement indicators and their effects on stress and reading performance. The proposed model can serve as a foundation for further development of user-centred services in e-learning systems.

Conference papers on the topic "User-Centered explanations"

1

Brunotte, Wasja, Jakob Droste and Kurt Schneider. „Context, Content, Consent - How to Design User-Centered Privacy Explanations (S)“. In The 35th International Conference on Software Engineering and Knowledge Engineering. KSI Research Inc., 2023. http://dx.doi.org/10.18293/seke2023-032.

2

Simões-Marques, Mário, and Isabel L. Nunes. „Application of a User-Centered Design Approach to the Definition of a Knowledge Base Development Tool“. In Applied Human Factors and Ergonomics Conference (2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001259.

Abstract:
Knowledge Bases (KB) are used to store structured and unstructured information about a specific subject. KBs are a key element of Expert Systems, a type of Computer-Based Information System designed to analyze and offer recommendations and explanations about a specific problem domain, providing support when human experts are not available or helping experts deal with very demanding and critical problems, usually because of the problem’s complexity, the volume of information processed, and the pressure for short response times. Developing a KB is a difficult task: one needs to identify and map, among other things, the knowledge elements, their organization, context of use, composition and representation, relations, and importance, as well as the reasoning processes that feed the inference process, combining inputs from real-world data with such knowledge in order to produce the desired outputs. In this paper we address the context and issues involved in defining the requirements for designing a KB development tool that supports cooperative and participatory processes of knowledge elicitation which, despite the eventual complexity of the problem at hand, are intuitive and easy to implement. This calls for an approach that carefully considers the principles and methodologies proposed by User-Centered Design.
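As a minimal illustration of the Expert System pattern the abstract describes (a knowledge base combined with an inference step that matches real-world inputs against stored knowledge and returns both a recommendation and an explanation), consider the following sketch. The rules, fact names, and thresholds are invented for illustration and are not from the paper.

```python
# A knowledge base here is an ordered list of rules; each rule pairs a
# condition over real-world facts with a conclusion and a human-readable
# explanation, so the system can justify its recommendation.
knowledge_base = [
    (lambda f: f["wind_speed_kt"] > 25, "delay_operation",
     "wind speed exceeds the 25 kt safety limit"),
    (lambda f: f["visibility_km"] < 1.0, "delay_operation",
     "visibility is below 1 km"),
    (lambda f: True, "proceed", "no blocking condition found"),
]

def infer(facts):
    """Return the first matching recommendation together with its explanation."""
    for condition, conclusion, explanation in knowledge_base:
        if condition(facts):
            return conclusion, explanation

rec, why = infer({"wind_speed_kt": 30, "visibility_km": 5.0})
```

Real KB development tools must elicit many such rules, their relations, and their relative importance from domain experts, which is exactly the cooperative process the paper argues should follow User-Centered Design.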
3

Silva, Ítallo, Leandro Marinho, Alan Said and Martijn C. Willemsen. „Leveraging ChatGPT for Automated Human-centered Explanations in Recommender Systems“. In IUI '24: 29th International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3640543.3645171.

4

Wang, Xinru. „Human-Centered Evaluation of Explanations in AI-Assisted Decision-Making“. In IUI '24: 29th International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3640544.3645239.

5

Polignano, Marco, Giuseppe Colavito, Cataldo Musto, Marco de Gemmis and Giovanni Semeraro. „Lexicon Enriched Hybrid Hate Speech Detection with Human-Centered Explanations“. In UMAP '22: 30th ACM Conference on User Modeling, Adaptation and Personalization. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3511047.3537688.

6

Negrete Rojas, David, J. Carlos Rodriguez-Tenorio, Adrielly Nahomeé Ramos Álvarez, Alejandro C. Ramirez-Reivich, Ma Pilar Corona-Lira, Vicente Borja and Francisca Irene Soler Anguiano. „Enhancing Access to Water in Mexico City and Its Peri-Urban Area Through User Centered Design“. In ASME 2021 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/imece2021-72090.

Abstract:
Lack of water in Mexico City and its peri-urban area has been an increasingly worrying matter in the last few years and has been addressed through multiple government and NGO initiatives, such as supplemental delivery via water trucks, rainwater harvesting, intermittent water service, and pipeline network maintenance. However, the rapid and unstructured development of the urban area has exacerbated the problem, rendering these solutions ineffective. Due to the complexity of the problem, solutions may be more impactful if they focus on fixing or creating effective interactions among the elements of the system, interactions in which the main element is the user. This paper presents a design proposal created from observations made in Mexico City. Design Thinking was applied to understand the users’ needs and the interactions between the agents through three main phases: approach and definition of the problem, prototyping, and explanation of the final proposal. The proposal and research results could be used and extrapolated to other cities and communities around the world, as water availability and quality are a global challenge and can be tackled in similar ways.
7

Cretu, Ioana, and Anca Cristina Colibaba. „EQUAL CHANCES THROUGH UNEQUAL OPPORTUNITIES: FACILITATING LANGUAGE LEARNING AMONG STUDENTS IN MEDICINE, NURSING AND NUTRITION THROUGH ELEARNING“. In eLSE 2012. Editura Universitara, 2012. http://dx.doi.org/10.12753/2066-026x-12-076.

Abstract:
The paper explores the potential benefits of using Blended Learning (face-to-face and online) to teach languages to students at university level by analyzing the experience gained at "Gr. T. Popa" University of Medicine and Pharmacy Iasi in partnership with EuroEd Foundation Iasi, within the wider context summarized below. While many Romanian students today begin their bachelor studies with a relatively high level of competence in at least one foreign language (most commonly English), this is not always the case. In fact, students may feel at a disadvantage compared to their colleagues and objectively have fewer chances to access scholarships, specifically because they have not had the same opportunities to learn a foreign language such as English in the past. Therefore, in order to provide all medical students with equal chances at academic and professional success, some may require additional opportunities in transversal areas such as language learning and ICT. For example, medical universities in Romania attempt to provide all their students with compulsory language instruction in their first year(s), making it optional later on. However, putting together groups of students with similar language levels and needs often proves to be an impossible administrative mission, the typical outcome being mixed-level groups of students who more or less want to study the same language. In our case, a solution was found to provide adequate additional support to students whose entry language level was below B2 (independent user) according to the Common European Framework of Reference. For the past two academic years, the face-to-face language instruction of junior students in Medicine, Nursing and Nutrition according to the core curriculum has been supplemented with optional activities using the ELSTI language training package online.
The ELSTI platform (http://www.eurobusinesslanguageskills.net) is the main result of a series of EU-funded projects involving the EuroEd Foundation in Iasi and, as it stands today, provides courses of English, French, German, Italian and Spanish for levels A2 and B1. All the courses, sub-units, explanations, situations, tasks, tests and self-assessment tools are calibrated to fit the CEFR descriptors while serving real-life communicative functions set in a business context promoting cultural awareness. In addition, they are accompanied by personal development and motivational modules. While students were recommended the content, instruction and activities related to the language they were studying in class, all students had free access to all the other online language courses as well. The online work was student-centered in the sense that, once logged on, students could decide which units and exercises to solve in which order, the entire process being driven by the students’ own goals, interests and preferences. As it turned out, this form of increased flexibility and controlled freedom deliberately embedded in the courses added significantly to some students’ motivation to continue well beyond the set requirements, as well as to their overall enjoyment, autonomy and empowerment. Concurrently, the classroom experience could be targeted more clearly towards teaching and learning English for medical purposes. The statistical analysis takes into consideration attempts, times and scores for using grammar and vocabulary support independently, solving reading and dialogue-based tasks, playing games and simulations, etc., by each user, thus providing insight into how the students chose to engage with the different e-contents and instructions within and beyond the language they were studying in class. The quantitative data to which we refer in this paper has been collected from over 500 students at UMF Iasi and indicates how popular, difficult, motivating, etc., the various types of online language exercises are among students for whom language is not their main interest but rather a vehicle. Nevertheless, the online activity reports contain ample evidence of how these students have valued this opportunity to gain skills in transversal areas such as foreign languages, but also cultural awareness, personal development and ICT.