Academic literature on the topic "Explicability"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the topical lists of articles, books, dissertations, conference proceedings and other academic sources on the topic "Explicability".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Explicability"
Nathan, N. M. L. "Explicability and the unpreventable". Analysis 48, no. 1 (January 1, 1988): 36–40. http://dx.doi.org/10.1093/analys/48.1.36.
Robbins, Scott. "A Misdirected Principle with a Catch: Explicability for AI". Minds and Machines 29, no. 4 (October 15, 2019): 495–514. http://dx.doi.org/10.1007/s11023-019-09509-3.
Lee, Hanseul, and Hyundeuk Cheon. "The Principle of Explicability in AI Ethics". Study of Humanities 35 (June 30, 2021): 37–63. http://dx.doi.org/10.31323/sh.2021.06.35.02.
Herzog, Christian. "On the risk of confusing interpretability with explicability". AI and Ethics 2, no. 1 (December 9, 2021): 219–25. http://dx.doi.org/10.1007/s43681-021-00121-9.
Araújo, Alexandre de Souza. "Principle of explicability: regulatory challenges on artificial intelligence". Concilium 24, no. 3 (February 26, 2024): 273–96. http://dx.doi.org/10.53660/clm-2722-24a22.
Sreedharan, Sarath, Tathagata Chakraborti, Christian Muise, and Subbarao Kambhampati. "Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior Explanations". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2518–26. http://dx.doi.org/10.1609/aaai.v34i03.5634.
Smith, Dominic. "Making Automation Explicable: A Challenge for Philosophy of Technology". New Formations 98, no. 98 (July 1, 2019): 68–84. http://dx.doi.org/10.3898/newf:98.05.2019.
LaFleur, William R. "Suicide off the Edge of Explicability: Awe in Ozu and Kore'eda". Film History: An International Journal 14, no. 2 (June 2002): 158–65. http://dx.doi.org/10.2979/fil.2002.14.2.158.
Chakraborti, Tathagata, Anagha Kulkarni, Sarath Sreedharan, David E. Smith, and Subbarao Kambhampati. "Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior". Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 86–96. http://dx.doi.org/10.1609/icaps.v29i1.3463.
Benjamin, William J. “Joe”. "The “explicability” of cylinder axis and power in refractions over toric soft lenses". International Contact Lens Clinic 25, no. 3 (May 1998): 89–92. http://dx.doi.org/10.1016/s0892-8967(98)00024-8.
Texto completoTesis sobre el tema "Explicability"
Bettinger, Alexandre. "Influence indépendante et explicabilité de l’exploration et de l’exploitation dans les métaheuristiques". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0190.
Recommendation is the act of filtering information to target items (resources) that may be of interest to one or more users. In the context of digital textbooks, items are educational resources (lessons, exercises, chapters, videos and others). This task can be seen as processing a large search space that represents the set of possible recommendations. Depending on the context, a recommendation can take different forms, such as items, itemsets or item sequences. Note that recommender environments can be subject to various sources of randomness and to recommendation constraints. In this thesis, we are interested in the recommendation of itemsets (also called vectors or solutions) by metaheuristics. The thesis focuses on the influence of exploration and exploitation, on data reduction, and on the explicability of exploration and exploitation.
Risser-Maroix, Olivier. "Similarité visuelle et apprentissage de représentations". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7327.
The objective of this CIFRE thesis is to develop an image search engine, based on computer vision, to assist customs officers. Indeed, we observe, paradoxically, an increase in security threats (terrorism, trafficking, etc.) coupled with a decrease in the number of customs officers. The images of cargoes acquired by X-ray scanners already allow the inspection of a load without requiring its opening and complete search. By automatically proposing similar images, such a search engine would help the customs officer in his decision making when faced with infrequent or suspicious visual signatures of products. Thanks to the development of modern artificial intelligence (AI) techniques, our era is undergoing great changes: AI is transforming all sectors of the economy. Some see this advent of "robotization" as the dehumanization of the workforce, or even its replacement. However, reducing the use of AI to the simple search for productivity gains would be reductive. In reality, AI could increase the work capacity of humans rather than compete with them in order to replace them. It is in this context, the birth of Augmented Intelligence, that this thesis takes place. This manuscript, devoted to the question of visual similarity, is divided into two parts, presenting two practical cases where the collaboration between humans and AI is beneficial. In the first part, the problem of learning representations for the retrieval of similar images is investigated. After implementing a first system similar to those proposed by the state of the art, one of its main limitations is pointed out: the semantic bias. Indeed, the main contemporary methods use image datasets coupled with semantic labels only; the literature considers that two images are similar if they share the same label. This vision of the notion of similarity, however fundamental in AI, is reductive. It is therefore questioned in the light of work in cognitive psychology in order to propose an improvement: taking visual similarity into account. This new definition allows a better synergy between the customs officer and the machine. This work is the subject of scientific publications and a patent. In the second part, after identifying the key components that improve the performance of the previously proposed system, an approach mixing empirical and theoretical research is proposed. This second case, augmented intelligence, is inspired by recent developments in mathematics and physics. First applied to the understanding of an important hyperparameter (temperature), then to a larger task (classification), the proposed method provides an intuition on the importance and role of factors correlated with the studied variable (e.g. hyperparameter, score, etc.). The processing chain thus set up has demonstrated its efficiency by providing a highly explainable solution in line with decades of research in machine learning. These findings will allow the improvement of previously developed solutions.
Bourgeade, Tom. "Interprétabilité a priori et explicabilité a posteriori dans le traitement automatique des langues". Thesis, Toulouse 3, 2022. http://www.theses.fr/2022TOU30063.
With the advent of Transformer architectures in Natural Language Processing a few years ago, we have observed unprecedented progress in various text classification and generation tasks. However, the explosion in the number of parameters and the complexity of these state-of-the-art black-box models is making ever more apparent the now urgent need for transparency in machine learning approaches. The ability to explain, interpret, and understand algorithmic decisions will become paramount as computer models become more and more present in our everyday lives. Using eXplainable AI (XAI) methods, we can for example diagnose dataset biases and spurious correlations which can ultimately taint the training process of models, leading them to learn undesirable shortcuts, which could lead to unfair, incomprehensible, or even risky algorithmic decisions. These failure modes of AI may ultimately erode the trust humans may have otherwise placed in beneficial applications. In this work, we more specifically explore two major aspects of XAI in the context of Natural Language Processing tasks and models. In the first part, we approach the subject of intrinsic interpretability, which encompasses all methods which are inherently easy to produce explanations for. In particular, we focus on word embedding representations, which are an essential component of practically all NLP architectures, allowing these mathematical models to process human language in a more semantically rich way. Unfortunately, many of the models which generate these representations produce them in a way which is not interpretable by humans. To address this problem, we experiment with the construction and usage of Interpretable Word Embedding models, which attempt to correct this issue by using constraints which enforce interpretability on these representations. We then make use of these, in a simple but effective novel setup, to attempt to detect lexical correlations, spurious or otherwise, in some popular NLP datasets. In the second part, we explore post-hoc explainability methods, which can target already trained models and attempt to extract various forms of explanations of their decisions. These can range from diagnosing which parts of an input were the most relevant to a particular decision, to generating adversarial examples, which are carefully crafted to help reveal weaknesses in a model. We explore a novel type of approach, in part enabled by the highly performant but opaque recent Transformer architectures: instead of using a separate method to produce explanations of a model's decisions, we design and fine-tune an architecture which jointly learns to both perform its task and produce free-form Natural Language Explanations of its own outputs. We evaluate our approach on a large-scale dataset annotated with human explanations, and qualitatively judge some of our approach's machine-generated explanations.
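As a minimal, hypothetical illustration of the "which parts of an input were the most relevant" kind of post-hoc explanation mentioned in this abstract (not the thesis's own method; a toy bag-of-words classifier stands in for its Transformer models), word occlusion scores each token by how much removing it changes the prediction:

```python
# Hedged sketch: post-hoc relevance via word occlusion on a toy classifier.
# The corpus, labels and pipeline below are illustrative assumptions only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie, loved it", "terrible plot, boring film",
         "loved the acting", "boring and terrible"] * 25
labels = [1, 0, 1, 0] * 25

model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(texts, labels)

sentence = "loved the movie but the plot was boring"
tokens = sentence.split()
base = model.predict_proba([sentence])[0, 1]  # P(positive) for the full input

# Occlusion: drop one word at a time and measure the drop in P(positive).
for i, word in enumerate(tokens):
    occluded = " ".join(w for j, w in enumerate(tokens) if j != i)
    delta = base - model.predict_proba([occluded])[0, 1]
    print(f"{word:>7}: relevance {delta:+.3f}")
```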
Raizonville, Adrien. "Regulation and competition policy of the digital economy : essays in industrial organization". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT028.
This thesis addresses two issues facing regulators in the digital economy: the informational challenge generated by the use of new artificial intelligence technologies and the problem of the market power of large digital platforms. The first chapter of this thesis explores the implementation of a (costly and imperfect) audit system by a regulator seeking to limit the risk of damage generated by artificial intelligence technologies as well as its cost of regulation. Firms may invest in explainability to better understand their technologies and, thus, reduce their cost of compliance. When audit efficacy is not affected by explainability, firms invest voluntarily in explainability. Technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator's detection of misconduct, a firm may hide its misconduct behind algorithmic opacity. Regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed. The second chapter studies the effects of implementing a coopetition strategy between two two-sided platforms on the subscription prices of their users, in a growing market (i.e., in which new users can join the platform) and in a mature market. More specifically, the platforms cooperatively set the subscription prices of one group of users (e.g., sellers) and non-cooperatively set the prices of the other group (e.g., buyers). By cooperating on the subscription price of sellers, each platform internalizes the negative externality it exerts on the other platform when it reduces its price. This leads the platforms to increase the subscription price for sellers relative to the competitive situation. At the same time, as the economic value of sellers increases and as buyers exert a positive cross-network effect on sellers, competition between platforms to attract buyers intensifies, leading to a lower subscription price for buyers. The increase in total surplus only occurs when new buyers can join the market. Finally, the third chapter examines interoperability between an incumbent platform and a new entrant as a regulatory tool to improve market contestability and limit the market power of the incumbent platform. Interoperability allows network effects to be shared between the two platforms, thereby reducing the importance of network effects in users' choice of subscription to a platform. The preference to interact with exclusive users of the other platform leads to multihoming when interoperability is not perfect. Interoperability leads to a reduction in demand for the incumbent platform, which reduces its subscription price. In contrast, for relatively low levels of interoperability, demand for the entrant platform increases, as does its price and profit, before decreasing for higher levels of interoperability. Users always benefit from the introduction of interoperability.
Parekh, Jayneel. "A Flexible Framework for Interpretable Machine Learning : application to image and audio classification". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT032.
Machine learning systems, and especially neural networks, have rapidly grown in their ability to address complex learning problems. Consequently, they are being integrated into society with an ever-rising influence on all levels of human experience. This has resulted in a need to gain human-understandable insights into their decision-making process to ensure that decisions are being made ethically and reliably. The study and development of methods which can generate such insights broadly constitutes the field of interpretable machine learning. This thesis aims to develop a novel framework that can tackle two major problem settings in this field, post-hoc and by-design interpretation. Post-hoc interpretability devises methods to interpret decisions of a pre-trained predictive model, while by-design interpretability targets learning a single model capable of both prediction and interpretation. To this end, we extend the traditional supervised learning formulation to include interpretation as an additional task besides prediction, each addressed by separate but related models, a predictor and an interpreter. Crucially, the interpreter is dependent on the predictor through its hidden layers and utilizes a dictionary of concepts as its representation for interpretation, with the capacity to generate local and global interpretations. The framework is separately instantiated to address interpretability problems in the context of image and audio classification. Both systems are extensively evaluated for their interpretations on multiple publicly available datasets. We demonstrate high predictive performance and fidelity of interpretations in both cases. Despite adhering to the same underlying structure, the two systems are designed differently for interpretations. The image interpretability system advances the pipeline for discovering learnt concepts for improved understandability, which is qualitatively evaluated. The audio interpretability system is instead designed with a novel representation based on non-negative matrix factorization to facilitate listenable interpretations while modeling the audio objects composing a scene.
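As a hedged sketch of the non-negative matrix factorization representation this abstract mentions (a random nonnegative matrix stands in for a real magnitude spectrogram, and this is not the thesis's interpreter module), NMF splits a time-frequency matrix into spectral components and their activations over time, which is what makes component-level, listenable interpretations possible:

```python
# Illustrative sketch only: NMF decomposition of a spectrogram-like matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
spectrogram = np.abs(rng.normal(size=(257, 400)))  # stand-in for |STFT| of an audio clip

nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(spectrogram)   # spectral patterns (frequency bins x components)
H = nmf.components_                  # activations over time (components x frames)

# Reconstructing from a single component isolates one candidate "audio object".
component_0 = np.outer(W[:, 0], H[0, :])
print("component 0 shape:", component_0.shape)
print("overall reconstruction error:", nmf.reconstruction_err_)
```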
Fauvel, Kevin. "Enhancing performance and explainability of multivariate time series machine learning methods : applications for social impact in dairy resource monitoring and earthquake early warning". Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S043.
The prevalent deployment and usage of sensors in a wide range of sectors generate an abundance of multivariate data which has proven to be instrumental for research, businesses and policies. More specifically, multivariate data which integrates temporal evolution, i.e. Multivariate Time Series (MTS), has received significant interest in recent years, driven by high resolution monitoring applications (e.g. healthcare, mobility) and machine learning. However, for many applications, the adoption of machine learning methods cannot rely solely on their prediction performance. For example, the European Union’s General Data Protection Regulation, which became enforceable on 25 May 2018, introduces a right to explanation for all individuals so that they can obtain “meaningful explanations of the logic involved” when automated decision-making has “legal effects” on individuals or similarly “significantly affecting” them. The current best performing state-of-the-art MTS machine learning methods are “black-box” models, i.e. complicated-to-understand models, which rely on explainability methods providing explanations from any machine learning model to support their predictions (post-hoc model-agnostic). The main line of work in post-hoc model-agnostic explainability methods approximates the decision surface of a model using an explainable surrogate model. However, the explanations from the surrogate models cannot be perfectly faithful with respect to the original model, which is a prerequisite for numerous applications. Faithfulness is critical as it corresponds to the level of trust an end-user can have in the explanations of model predictions, i.e. the level of relatedness of the explanations to what the model actually computes. This thesis introduces new approaches to enhance both performance and explainability of MTS machine learning methods, and derives insights from the new methods about two real-world applications.
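As an illustration of the post-hoc surrogate idea this abstract discusses (a sketch under assumed toy data and models, not Fauvel's method for multivariate time series), a global surrogate can be fitted to a black-box classifier's predictions and its faithfulness measured as the rate of agreement on held-out data:

```python
# Hypothetical sketch of a post-hoc surrogate explanation with a fidelity check.
# The black box and the surrogate below are placeholders chosen for brevity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The surrogate is trained on the black box's predictions, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Faithfulness (fidelity): how often the surrogate agrees with the black box.
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"Surrogate fidelity to the black box: {fidelity:.3f}")
print(export_text(surrogate))  # the human-readable rule-based explanation
```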
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources that we have at hand today allow us to train very complex AI models to solve different problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), that aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic model-agnostic post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
Bennetot, Adrien. "A Neural-Symbolic learning framework to produce interpretable predictions for image classification". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS418.
Artificial Intelligence has been developing exponentially over the last decade. Its evolution is mainly linked to progress in graphics processing hardware, which accelerates the computation of learning algorithms, and to access to massive volumes of data. This progress has been principally driven by the search for high-quality prediction models, making them extremely accurate but opaque. Their large-scale adoption is hampered by their lack of transparency, which has caused the emergence of eXplainable Artificial Intelligence (XAI). This new line of research aims at fostering the use of learning models based on mass data by providing methods and concepts to obtain explanatory elements concerning their functioning. However, the youth of this field causes a lack of consensus and cohesion around its key definitions and objectives. This thesis contributes to the field from two perspectives, one theoretical, on what XAI is and how to achieve it, and one practical. The first is based on a thorough review of the literature, resulting in two contributions: 1) the proposal of a new definition of Explainable Artificial Intelligence and 2) the creation of a new taxonomy of existing explainability methods. The practical contribution consists of two learning frameworks, both based on a paradigm aiming at linking the connectionist and symbolic paradigms.
Bove, Clara. "Conception et évaluation d’interfaces utilisateur explicatives pour systèmes complexes en apprentissage automatique". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS247.pdf.
This thesis focuses on human-centered eXplainable AI (XAI) and more specifically on the intelligibility of Machine Learning (ML) explanations for non-expert users. The technical context is as follows: on one side, an opaque classifier or regressor provides a prediction, with an XAI post-hoc approach that generates pieces of information as explanations; on the other side, the user receives both the prediction and the explanations. Within this XAI technical context, several issues might lessen the quality of explanations. The ones we focus on are: the lack of contextual information in ML explanations, the unguided design of functionalities or of the user's exploration, as well as the confusion that can be caused by delivering too much information. To address these issues, we develop an experimental procedure to design XAI functional interfaces and evaluate the intelligibility of ML explanations by non-expert users. In doing so, we investigate the XAI enhancements provided by two types of local explanation components: feature importance and counterfactual examples. We propose generic XAI principles for contextualizing and allowing exploration of feature importance, and for guiding users in their comparative analysis of counterfactual explanations with plural examples. We propose an implementation of these principles in two distinct explanation-based user interfaces, for an insurance and a financial scenario respectively. Finally, we use the enhanced interfaces to conduct user studies in lab settings and to measure two dimensions of intelligibility, namely objective understanding and subjective satisfaction. For local feature importance, we demonstrate that contextualization and exploration improve the intelligibility of such explanations. Similarly, for counterfactual examples, we demonstrate that the plural condition improves intelligibility as well, and that comparative analysis appears to be a promising tool for users' satisfaction. At a more fundamental level, we consider the issue of inconsistency within ML explanations from a theoretical point of view. In the explanation process considered in this thesis, the quality of an explanation relies both on the ability of the Machine Learning system to generate a coherent explanation and on the ability of the end user to make a correct interpretation of these explanations. Thus, there can be limitations: on one side, as reported in the literature, technical limitations of ML systems might produce potentially inconsistent explanations; on the other side, human inferences can be inaccurate, even if users are presented with consistent explanations. Investigating such inconsistencies, we propose an ontology to structure the most common ones from the literature. We argue that such an ontology can be useful for understanding current XAI limitations and avoiding explanation pitfalls.
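As a rough, hypothetical sketch of the two local explanation components named in this abstract, feature importance and counterfactual examples, here computed for a toy linear model rather than the insurance and financial scenarios studied in the thesis:

```python
# Hedged sketch: per-instance feature contributions and a naive counterfactual
# search on a toy logistic regression (illustrative data and model only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=1)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
pred = clf.predict(x.reshape(1, -1))[0]

# Local feature importance for a linear model: coefficient times feature value.
for i, c in enumerate(clf.coef_[0] * x):
    print(f"feature {i}: contribution {c:+.3f}")

# Naive counterfactual: nudge the most influential feature until the class flips.
cf, target = x.copy(), 1 - pred
j = int(np.argmax(np.abs(clf.coef_[0])))
step = 0.1 * np.sign(clf.coef_[0][j]) * (1 if target == 1 else -1)
for _ in range(500):
    if clf.predict(cf.reshape(1, -1))[0] == target:
        break
    cf[j] += step
print(f"class changes from {pred} to {target} when feature {j} moves from {x[j]:.2f} to {cf[j]:.2f}")
```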
Faille, Juliette. "Data-Based Natural Language Generation : Evaluation and Explainability". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0305.
Recent Natural Language Generation (NLG) models achieve very high average performance. Their output texts are generally grammatically and syntactically correct, which makes them sound natural. Though the semantics of the texts are right in most cases, even state-of-the-art NLG models still produce texts with partially incorrect meanings. In this thesis, we propose evaluating and analyzing content-related issues of models used in the NLG tasks of Resource Description Framework (RDF) graph verbalization and conversational question generation. First, we focus on the task of RDF verbalization and on omissions and hallucinations of RDF entities, i.e. when an automatically generated text does not mention all the input RDF entities or mentions entities other than those in the input. We evaluate 25 RDF verbalization models on the WebNLG dataset. We develop a method to automatically detect omissions and hallucinations of RDF entities in the outputs of these models. We propose a metric based on omission and hallucination counts to quantify the semantic adequacy of NLG models. We find that this metric correlates well with what human annotators consider to be semantically correct and show that even state-of-the-art models are subject to omissions and hallucinations. Following this observation about the tendency of RDF verbalization models to generate texts with content-related issues, we propose to analyze the encoders of two such state-of-the-art models, BART and T5. We use the probing explainability method and introduce two probing classifiers (one parametric and one non-parametric) to detect omissions and distortions of RDF input entities in the embeddings of the encoder-decoder models. We find that such probing classifiers are able to detect these mistakes in the encodings, suggesting that the encoder of the models is responsible for some loss of information about omitted and distorted entities. Finally, we propose a T5-based conversational question generation model that, in addition to taking an input RDF graph and a conversational context, generates both a question and its corresponding RDF triples. This setting allows us to introduce a fine-grained evaluation procedure that automatically assesses coherence with the conversation context and semantic adequacy with respect to the input RDF. Our contributions belong to the fields of NLG evaluation and explainability and use techniques and methodologies from these two research fields in order to work towards providing more reliable NLG models.
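As a minimal sketch of the probing idea described above (random vectors stand in for BART/T5 encoder states and the omission labels are simulated, so this is an assumption-laden illustration rather than the thesis's setup), a linear probe tests whether omissions of input RDF entities are recoverable from the encodings:

```python
# Hypothetical probing sketch: can a simple classifier read an "entity omitted"
# signal out of sentence encodings? Placeholder vectors replace real encoder states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_examples, hidden_size = 1000, 768
encodings = rng.normal(size=(n_examples, hidden_size))   # stand-in for encoder outputs
omitted = rng.integers(0, 2, size=n_examples)            # stand-in for omission labels

probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, encodings, omitted, cv=5)

# Accuracy well above chance would suggest the encodings retain information about
# omitted entities; with these random stand-ins it stays around 0.5.
print("probe accuracy:", scores.mean())
```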
Books on the topic "Explicability"
Renz, Ursula. The Explicability of Experience: Realism and Subjectivity in Spinoza's Theory of the Human Mind. Oxford University Press, 2018.
Book chapters on the topic "Explicability"
Kannetzky, Frank. "Expressibility, Explicability, and Taxonomy". In Speech Acts, Mind, and Social Reality, 65–82. Dordrecht: Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-010-0589-0_5.
Hickey, James M., Pietro G. Di Stefano, and Vlasios Vasileiou. "Fairness by Explicability and Adversarial SHAP Learning". In Machine Learning and Knowledge Discovery in Databases, 174–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67664-3_11.
García-Marzá, Domingo, and Patrici Calvo. "Dialogic Digital Ethics: From Explicability to Participation". In Algorithmic Democracy, 191–205. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-53015-9_10.
Carman, Mary, and Benjamin Rosman. "Applying a Principle of Explicability to AI Research in Africa: Should We Do It?" In Conversations on African Philosophy of Mind, Consciousness and Artificial Intelligence, 183–201. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-36163-0_13.
Kousa, Päivi, and Hannele Niemi. "Artificial Intelligence Ethics from the Perspective of Educational Technology Companies and Schools". In AI in Learning: Designing the Future, 283–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09687-7_17.
Kisselburgh, Lorraine, and Jonathan Beever. "The Ethics of Privacy in Research and Design: Principles, Practices, and Potential". In Modern Socio-Technical Perspectives on Privacy, 395–426. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-82786-1_17.
Cherry, Chris. "Explicability, psychoanalysis and the paranormal". In Psychoanalysis and the Paranormal, 73–103. Routledge, 2018. http://dx.doi.org/10.4324/9780429478802-4.
Chilson, Kendra. "An Epistemic Approach to Cultivating Appropriate Trust in Autonomous Vehicles". In Autonomous Vehicle Ethics, 229—C14.P86. New York: Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780197639191.003.0014.
Iversen, Nicolai, and Dylan Cawthorne. "Ethics in Action: Envisioning Human Values in the Early Stages of Drone Development". In Social Robots in Social Institutions. IOS Press, 2023. http://dx.doi.org/10.3233/faia220645.
Petersson, Lena, Kalista Vincent, Petra Svedberg, Jens M. Nygren, and Ingrid Larsson. "Ethical Perspectives on Implementing AI to Predict Mortality Risk in Emergency Department Patients: A Qualitative Study". In Caring is Sharing – Exploiting the Value in Data for Health and Innovation. IOS Press, 2023. http://dx.doi.org/10.3233/shti230234.
Texto completoActas de conferencias sobre el tema "Explicability"
Zakershahrak, Mehrdad, Akshay Sonawane, Ze Gong y Yu Zhang. "Interactive Plan Explicability in Human-Robot Teaming". En 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 2018. http://dx.doi.org/10.1109/roman.2018.8525540.
Texto completoSoni, Tanishq, Deepali Gupta, Mudita Uppal y Sapna Juneja. "Explicability of Artificial Intelligence in Healthcare 5.0". En 2023 International Conference on Artificial Intelligence and Smart Communication (AISC). IEEE, 2023. http://dx.doi.org/10.1109/aisc56616.2023.10085222.
Texto completoZhang, Yu, Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, Hankz Hankui Zhuo y Subbarao Kambhampati. "Plan explicability and predictability for robot task planning". En 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989155.
Texto completoChakraborti, Tathagata, Sarath Sreedharan y Subbarao Kambhampati. "Balancing Explicability and Explanations in Human-Aware Planning". En Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/185.
Texto completoRaison, Adrien, Pascal Bourdon, Christophe Habas y David Helbert. "Explicability in resting-state fMRI for gender classification". En 2021 Sixth International Conference on Advances in Biomedical Engineering (ICABME). IEEE, 2021. http://dx.doi.org/10.1109/icabme53305.2021.9604842.
Texto completoBrown, B., S. McArthur, B. Stephen, G. West y A. Young. "Improved Explicability for Pump Diagnostics in Nuclear Power Plants". En Tranactions - 2019 Winter Meeting. AMNS, 2019. http://dx.doi.org/10.13182/t31091.
Texto completoSreedharan, Sarath, Anagha Kulkarni, David Smith y Subbarao Kambhampati. "A Unifying Bayesian Formulation of Measures of Interpretability in Human-AI Interaction". En Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/625.