Academic literature on the topic 'Explicabilité des algorithmes'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Explicabilité des algorithmes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Explicabilité des algorithmes":

1

Robbins, Scott. "A Misdirected Principle with a Catch: Explicability for AI." Minds and Machines 29, no. 4 (October 15, 2019): 495–514. http://dx.doi.org/10.1007/s11023-019-09509-3.

Abstract:
There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.
2

Краснов, Федор Владимирович, and Ирина Сергеевна Смазневич. "The explicability factor of the algorithm in the problems of searching for the similarity of text documents." Вычислительные технологии, no. 5(25) (October 28, 2020): 107–23. http://dx.doi.org/10.25743/ict.2020.25.5.009.

Abstract:
The problem of providing a comprehensible explanation to any user of why an applied intelligent information system flags certain texts as similar in meaning imposes significant requirements on the intelligent algorithms used. The article covers the entire set of technologies involved in solving the text clustering problem, and several conclusions are drawn. Matrix decomposition aimed at reducing the dimension of the vector representation of a corpus does not provide a clear explanation of the algorithmic principles to a user. Ranking with the TF-IDF function and its modifications finds only a few documents that are similar in meaning, but this method is the easiest for users to comprehend, since algorithms of this type detect specific matching words in the compared texts. Topic modeling methods (LSI, LDA, ARTM) assign large similarity values to texts that share few matching words, while a person can easily tell that the general subject of the texts is the same; explaining how topic modeling works, however, requires additional effort to interpret the detected topics. This interpretation becomes easier as the model quality grows, and the quality can be optimized via its average coherence. The experiment demonstrated that the absolute value of document similarity is not invariant across intelligent algorithms, so the optimal similarity threshold must be set separately for each problem to be solved. The results can be used to assess which of the various methods developed to detect meaning similarity in texts can be effectively implemented in applied information systems, and to determine the optimal model parameters based on the solution explicability requirements.
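For readers who want a concrete feel for the kind of similarity pipeline this abstract discusses, here is a minimal sketch using TF-IDF and cosine similarity; the toy corpus and threshold value are illustrative assumptions, not the authors' data or settings.

```python
# Minimal sketch: TF-IDF cosine similarity between documents, with a task-specific
# threshold. The corpus and THRESHOLD are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "contract termination notice period",
    "notice period required to terminate the contract",
    "quarterly financial report of the company",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)   # sparse document-term matrix
sims = cosine_similarity(X)            # pairwise similarity matrix

# The absolute similarity value is not comparable across methods (TF-IDF, LSI, LDA, ...),
# so the threshold below must be tuned separately for each task and collection.
THRESHOLD = 0.2
for i in range(len(corpus)):
    for j in range(i + 1, len(corpus)):
        if sims[i, j] >= THRESHOLD:
            # Matching terms make this kind of decision easy to explain to a user.
            print(f"doc {i} ~ doc {j}: similarity {sims[i, j]:.2f}")
```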
3

van Bruxvoort, Xadya, and Maurice van Keulen. "Framework for Assessing Ethical Aspects of Algorithms and Their Encompassing Socio-Technical System." Applied Sciences 11, no. 23 (November 25, 2021): 11187. http://dx.doi.org/10.3390/app112311187.

Abstract:
In the transition to a data-driven society, organizations have introduced data-driven algorithms that often apply artificial intelligence. In this research, an ethical framework was developed to ensure robustness and completeness and to avoid and mitigate potential public uproar. We take a socio-technical perspective, i.e., view the algorithm embedded in an organization with infrastructure, rules, and procedures as one to-be-designed system. The framework consists of five ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. It can be used during the design for identification of relevant concerns. The framework has been validated by applying it to real-world fraud detection cases: Systeem Risico Indicatie (SyRI) of the Dutch government and the algorithm of the municipality of Amersfoort. The former is a controversial country-wide algorithm that was ultimately prohibited by court. The latter is an algorithm in development. In both cases, it proved effective in identifying all ethical risks. For SyRI, all concerns found in the media were also identified by the framework, mainly focused on transparency of the entire socio-technical system. For the municipality of Amersfoort, the framework highlighted risks regarding the amount of sensitive data and communication to and with the public, presenting a more thorough overview compared to the risks the media raised.
4

Kalyanpur, Aditya, Tom Breloff, and David A. Ferrucci. "Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10867–74. http://dx.doi.org/10.1609/aaai.v36i10.21333.

Abstract:
Traditional symbolic reasoning engines, while attractive for their precision and explicability, have a few major drawbacks: the use of brittle inference procedures that rely on exact matching (unification) of logical terms, an inability to deal with uncertainty, and the need for a precompiled rule-base of knowledge (the “knowledge acquisition” problem). To address these issues, we devise a novel logical reasoner called Braid, that supports probabilistic rules, and uses the notion of custom unification functions and dynamic rule generation to overcome the brittle matching and knowledge-gap problem prevalent in traditional reasoners. In this paper, we describe the reasoning algorithms used in Braid, and their implementation in a distributed task-based framework that builds proof/explanation graphs for an input query. We use a simple QA example from a children’s story to motivate Braid’s design and explain how the various components work together to produce a coherent logical explanation. Finally, we evaluate Braid on the ROC Story Cloze test and achieve close to state-of-the-art results while providing frame-based explanations.
5

Niemi, Hannele. "AI in learning." Journal of Pacific Rim Psychology 15 (January 2021): 183449092110381. http://dx.doi.org/10.1177/18344909211038105.

Abstract:
This special issue raises two thematic questions: (1) How will AI change learning in the future, and what role will human beings play in the interaction with machine learning? (2) What can we learn from the articles in this special issue for future research? These questions are reflected in the frame of the recent discussion of human and machine learning. AI for learning provides many applications and multimodal channels for supporting people in cognitive and non-cognitive task domains. The articles in this special issue evidence that agency, engagement, self-efficacy, and collaboration are needed in learning and working with intelligent tools and environments. The importance of social elements is also clear in the articles. The articles also point out that the teacher’s role in digital pedagogy primarily involves facilitating and coaching. AI in learning has a high potential, but it also has many limitations. Many worries are linked with ethical issues, such as biases in algorithms, privacy, transparency, and data ownership. This special issue also highlights the concepts of explainability and explicability in the context of human learning. We need much more research and research-based discussion for making AI more trustworthy for users in learning environments and to prevent misconceptions.
6

Antonio, Nuno, Ana de Almeida, and Luis Nunes. "Big Data in Hotel Revenue Management: Exploring Cancellation Drivers to Gain Insights Into Booking Cancellation Behavior." Cornell Hospitality Quarterly 60, no. 4 (May 29, 2019): 298–319. http://dx.doi.org/10.1177/1938965519851466.

Abstract:
In the hospitality industry, demand forecast accuracy is highly impacted by booking cancellations, which makes demand-management decisions difficult and risky. In attempting to minimize losses, hotels tend to implement restrictive cancellation policies and employ overbooking tactics, which, in turn, reduce the number of bookings and reduce revenue. To tackle the uncertainty arising from booking cancellations, we combined the data from eight hotels’ property management systems with data from several sources (weather, holidays, events, social reputation, and online prices/inventory) and machine learning interpretable algorithms to develop booking cancellation prediction models for the hotels. In a real production environment, improvement of the forecast accuracy due to the use of these models could enable hoteliers to decrease the number of cancellations, thus, increasing confidence in demand-management decisions. Moreover, this work shows that improvement of the demand forecast would allow hoteliers to better understand their net demand, that is, current demand minus predicted cancellations. Simultaneously, by focusing not only on forecast accuracy but also on its explicability, this work illustrates one other advantage of the application of these types of techniques in forecasting: the interpretation of the predictions of the model. By exposing cancellation drivers, models help hoteliers to better understand booking cancellation patterns and enable the adjustment of a hotel’s cancellation policies and overbooking tactics according to the characteristics of its bookings.
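As a rough illustration of the interpretable-forecasting idea described above (not the authors' pipeline), the sketch below trains a tree-based classifier on synthetic booking features and ranks cancellation drivers by feature importance; the feature names and data are hypothetical placeholders.

```python
# Minimal sketch, assuming synthetic data: predict cancellations and expose the
# main drivers through feature importances. Not the paper's models or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["lead_time", "adr", "deposit", "prev_cancellations", "online_price_gap"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Ranking features by importance gives a first, explainable view of cancellation drivers.
for name, imp in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.3f}")
```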
7

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.

Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains intended by attackers to perform illegal activities aiming to target governments, violating privacy of organizations, and even manipulating everyday users. Therefore, detecting these harmful domains is necessary to combat the growing network attacks. Machine Learning (ML) models have shown significant outcomes towards the detection of malicious domains. However, the “black box” nature of the complex ML models obstructs their wide-ranging acceptance in some of the fields. The emergence of Explainable Artificial Intelligence (XAI) has successfully incorporated the interpretability and explicability in the complex models. Furthermore, the post hoc XAI model has enabled the interpretability without affecting the performance of the models. This study aimed to propose an Explainable Artificial Intelligence (XAI) model to detect malicious domains on a recent dataset containing 45,000 samples of malicious and non-malicious domains. In the current study, initially several interpretable ML models, such as Decision Tree (DT) and Naïve Bayes (NB), and black box ensemble models, such as Random Forest (RF), Extreme Gradient Boosting (XGB), AdaBoost (AB), and Cat Boost (CB) algorithms, were implemented and found that XGB outperformed the other classifiers. Furthermore, the post hoc XAI global surrogate model (Shapley additive explanations) and local surrogate LIME were used to generate the explanation of the XGB prediction. Two sets of experiments were performed; initially the model was executed using a preprocessed dataset and later with selected features using the Sequential Forward Feature selection algorithm. The results demonstrate that ML algorithms were able to distinguish benign and malicious domains with overall accuracy ranging from 0.8479 to 0.9856. The ensemble classifier XGB achieved the highest result, with an AUC and accuracy of 0.9991 and 0.9856, respectively, before the feature selection algorithm, while there was an AUC of 0.999 and accuracy of 0.9818 after the feature selection algorithm. The proposed model outperformed the benchmark study.
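The sketch below illustrates the general post-hoc XAI pattern the abstract describes: a gradient boosting classifier explained with SHAP. The feature names and synthetic data are assumptions, not the paper's 45,000-sample domain dataset.

```python
# Minimal sketch of post-hoc explanation of a boosted-tree classifier with SHAP.
# Feature names and data are hypothetical; this is not the paper's pipeline.
import numpy as np
from xgboost import XGBClassifier
import shap

rng = np.random.default_rng(42)
feature_names = ["domain_length", "entropy", "num_digits", "ttl", "num_subdomains"]
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 1] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

explainer = shap.TreeExplainer(model)   # post-hoc explainer for tree ensembles
shap_values = explainer.shap_values(X)  # per-feature contributions, per sample

# Global view: mean absolute SHAP value per feature.
for name, val in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name:16s} {val:.3f}")
```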
8

Hübner, Ursula H., Nicole Egbert, and Georg Schulte. "Clinical Information Systems – Seen through the Ethics Lens." Yearbook of Medical Informatics 29, no. 01 (August 2020): 104–14. http://dx.doi.org/10.1055/s-0040-1701996.

Abstract:
Objective: The more people there are who use clinical information systems (CIS) beyond their traditional intramural confines, the more promising the benefits are, and the more daunting the risks will be. This review thus explores the areas of ethical debates prompted by CIS conceptualized as smart systems reaching out to patients and citizens. Furthermore, it investigates the ethical competencies and education needed to use these systems appropriately. Methods: A literature review covering ethics topics in combination with clinical and health information systems, clinical decision support, health information exchange, and various mobile devices and media was performed searching the MEDLINE database for articles from 2016 to 2019 with a focus on 2018 and 2019. A second search combined these keywords with education. Results: By far, most of the discourses were dominated by privacy, confidentiality, and informed consent issues. Intertwined with confidentiality and clear boundaries, the provider-patient relationship has gained much attention. The opacity of algorithms and the lack of explicability of the results pose a further challenge. The necessity of sociotechnical ethics education was underpinned in many studies including advocating education for providers and patients alike. However, only a few publications expanded on ethical competencies. In the publications found, empirical research designs were employed to capture the stakeholders’ attitudes, but not to evaluate specific implementations. Conclusion: Despite the broad discourses, ethical values have not yet found their firm place in empirically rigorous health technology evaluation studies. Similarly, sociotechnical ethics competencies obviously need detailed specifications. These two gaps set the stage for further research at the junction of clinical information systems and ethics.
9

Krasnov, Fedor, Irina Smaznevich, and Elena Baskakova. "Optimization approach to the choice of explicable methods for detecting anomalies in homogeneous text collections." Informatics and Automation 20, no. 4 (August 3, 2021): 869–904. http://dx.doi.org/10.15622/ia.20.4.5.

Abstract:
The problem of detecting anomalous documents in text collections is considered. Existing anomaly detection methods are not universal and do not show stable results across different data sets: the accuracy of the results depends on the choice of parameters at each step of the algorithm, and different collections call for different optimal parameter sets. Moreover, not all existing anomaly detection algorithms work effectively with text data, whose vector representation is characterized by high dimensionality and strong sparsity. The problem of finding anomalies is considered in the following formulation: a new document uploaded to an applied intelligent information system must be checked for congruence with the homogeneous collection of documents stored in it. In such systems, which process legal documents, the following requirements are imposed on anomaly detection methods: high accuracy, computational efficiency, reproducibility of results, and explicability of the solution. Methods satisfying these conditions are investigated. The paper examines the possibility of evaluating text documents on an anomaly scale by deliberately introducing a foreign document into the collection. A strategy for detecting the novelty of a document with respect to the collection is proposed, which assumes a reasoned selection of methods and parameters. It is shown how the accuracy of the solution is affected by the choice of vectorization options, tokenization principles, dimensionality reduction methods, and the parameters of the novelty detection algorithms. The experiment was conducted on two homogeneous collections of documents containing technical norms: standards in the field of information technology and in railways. The following approaches were used: calculation of an anomaly index as the Hellinger distance between the distributions of document distances to the center of the collection and to the foreign document, and optimization of the novelty detection algorithms depending on the vectorization and dimensionality reduction methods. The vector space was constructed using the TF-IDF transformation and ARTM topic modeling. The following algorithms were tested: Isolation Forest, Local Outlier Factor, and One-Class SVM (based on the Support Vector Machine). The experiment confirmed the effectiveness of the proposed optimization strategy for determining the appropriate anomaly detection method for a given text collection. When searching for an anomaly in the context of topic clustering of legal documents, the Isolation Forest method proved effective. When vectorizing documents using TF-IDF, it is advisable to choose optimal dictionary parameters and use the One-Class SVM method with a suitable feature space transformation function.
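A minimal sketch of the kind of setup described above, assuming placeholder documents: TF-IDF vectorization, dimensionality reduction, and two of the tested anomaly detectors scoring a deliberately foreign document. It is an illustration, not the authors' code.

```python
# Minimal sketch, assuming a toy collection: vectorize with TF-IDF, reduce dimensionality,
# and score a deliberately foreign document with two anomaly detectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

collection = [
    "requirements for railway signalling equipment",
    "railway track maintenance safety standard",
    "signalling equipment inspection procedure",
    "standard for railway rolling stock braking systems",
]
new_doc = ["recipe for chocolate cake"]   # deliberately foreign document

vec = TfidfVectorizer().fit(collection)
X = vec.transform(collection)
X_new = vec.transform(new_doc)

svd = TruncatedSVD(n_components=2, random_state=0).fit(X)   # dimensionality reduction
Xr, Xr_new = svd.transform(X), svd.transform(X_new)

for model in (IsolationForest(random_state=0), OneClassSVM(kernel="rbf", gamma="scale")):
    model.fit(Xr)
    # score_samples: higher means more "normal"; absolute values are not comparable
    # across methods, which is why the choice of method and threshold is tuned per collection.
    print(type(model).__name__, model.score_samples(Xr_new))
```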
10

Coppi, Giulio, Rebeca Moreno Jimenez, and Sofia Kyriazi. "Explicability of humanitarian AI: a matter of principles." Journal of International Humanitarian Action 6, no. 1 (October 6, 2021). http://dx.doi.org/10.1186/s41018-021-00096-6.

Abstract:
In the debate on how to improve efficiencies in the humanitarian sector and better meet people’s needs, the argument for the use of artificial intelligence (AI) and automated decision-making (ADMs) systems has gained significant traction and ignited controversy for its ethical and human rights-related implications. Setting aside the implications of introducing unmanned and automated systems in warfare, we focus instead on the impact of the adoption of AI-based ADMs in humanitarian response. In order to maintain the status and protection conferred by the humanitarian mandate, aid organizations are called to abide by a broad set of rules condensed in the humanitarian principles and notably the principles of humanity, neutrality, impartiality, and independence. But how do these principles operate when decision-making is automated? This article opens with an overview of AI and ADMs in the humanitarian sector, with special attention to the concept of algorithmic opacity. It then explores the transformative potential of these systems on the complex power dynamics between humanitarians, principled assistance, and affected communities during acute crises. Our research confirms that the existing flaws in accountability and epistemic processes can be also found in the mathematical and statistical formulas and in the algorithms used for automation, artificial intelligence, predictive analytics, and other efficiency-gaining-related processes. In doing so, our analysis highlights the potential harm to people resulting from algorithmic opacity, either through removal or obfuscation of the causal connection between triggering events and humanitarian services through the so-called black box effect (algorithms are often described as black boxes, as their complexity and technical opacity hide and obfuscate their inner workings (Diakopoulos, Tow Center for Digital Journ, 2017)). Recognizing the need for a humanitarian ethics dimension in the analysis of automation, AI, and ADMs used in humanitarian action, we endorse the concept of “explicability” as developed within the ethical framework of machine learning and human-computer interaction, together with a set of proxy metrics. Finally, we stress the need for developing auditable standards, as well as transparent guidelines and frameworks to rein in the risks of what has been defined as humanitarian experimentation (Sandvik, Jacobsen, and McDonald, Int. Rev. Red Cross 99(904), 319–344, 2017). This article concludes that accountability mechanisms for AI-based systems and ADMs used to respond to the needs of populations in situation of vulnerability should be an essential feature by default, in order to preserve the respect of the do no harm principle even in the digital dimension of aid. In conclusion, while we confirm existing concerns related to the adoption of AI-based systems and ADMs in humanitarian action, we also advocate for a roadmap towards humanitarian AI for the sector and introduce a tentative ethics framework as basis for future research.

Dissertations / Theses on the topic "Explicabilité des algorithmes":

1

Raizonville, Adrien. "Regulation and competition policy of the digital economy : essays in industrial organization." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT028.

Abstract:
This thesis addresses two issues facing regulators in the digital economy: the informational challenge generated by the use of new artificial intelligence technologies and the problem of the market power of large digital platforms. The first chapter explores the implementation of a (costly and imperfect) audit system by a regulator seeking to limit the risk of damage generated by artificial intelligence technologies as well as its cost of regulation. Firms may invest in the explainability of their technologies to better understand their algorithms and thus reduce their cost of regulatory compliance. When audit efficacy is not affected by explainability, firms invest voluntarily in explainability, and an audit policy that takes the technology's level of explainability into account induces greater investment in explainability and stronger compliance than an explainability-neutral policy. If, instead, explainability facilitates the regulator's detection of misconduct, firms may engage in a strategy of making their technology more opaque, and regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed. The second chapter studies the effects of coopetition between two two-sided platforms on users' subscription prices, in a growing market (in which new users can join the platforms) and in a mature market. More specifically, the platforms set the subscription prices of one group of users (e.g., sellers) cooperatively and the prices of the other group (e.g., buyers) non-cooperatively. By cooperating on the sellers' subscription price, each platform internalizes the negative externality it exerts on the other platform when it reduces its price, which leads the platforms to raise the sellers' subscription price relative to the competitive outcome. At the same time, as the economic value of sellers increases and buyers exert a positive cross-network effect on sellers, competition between platforms to attract buyers intensifies, leading to a lower subscription price for buyers. Total surplus increases only in the growing-market case, when new buyers can join the market. Finally, the third chapter examines interoperability between an incumbent platform and a new entrant as a regulatory tool to improve market contestability and limit the incumbent's market power. Interoperability allows network effects to be shared between the two platforms, thereby reducing their importance in users' choice of which platform to join; when interoperability is not perfect, the preference to interact with the other platform's exclusive users leads to multihoming. Introducing interoperability reduces demand for the incumbent platform, which lowers its subscription price. In contrast, demand for the entrant platform (along with its price and profit) increases at relatively low levels of interoperability and then decreases at higher levels. In all cases, users benefit from the introduction of interoperability.
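As a purely stylized illustration of how interoperability shares network effects between platforms (not the model used in the thesis), one can write the utility of a user joining platform i as follows, with an assumed interoperability degree lambda.

```latex
% Stylized illustration (not the thesis's model): utility of joining platform i,
% with stand-alone value v_i, network benefit \alpha, interoperability degree
% \lambda \in [0,1], platform sizes n_i, n_j, and subscription price p_i.
u_i = v_i + \alpha \left( n_i + \lambda \, n_j \right) - p_i
% When \lambda = 1, network effects are fully shared and no longer differentiate
% the platforms; as \lambda decreases, each platform's exclusive user base matters more.
```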
2

Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.

Abstract:
Current state-of-the-art Artificial Intelligence (AI) models have proven very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources at our disposal today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, explainable Artificial Intelligence (xAI), which aims to provide approaches for interpreting complex AI models and explaining their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both are deterministic, model-agnostic, post-hoc approaches, meaning they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
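To make the post-hoc, model-agnostic surrogate idea concrete, here is a minimal sketch of the general pattern only (it does not reproduce STACI or BELLA): an interpretable tree is fitted to the predictions of a black-box model and its fidelity to that black box is measured.

```python
# Minimal sketch of a post-hoc global surrogate: fit an interpretable model to the
# predictions of a trained black box. Dataset and model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs, so it explains the model,
# not the original labels, and leaves the black box's performance untouched.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = surrogate.score(X, black_box.predict(X))   # agreement with the black box
print(f"surrogate fidelity: {fidelity:.3f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```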
3

Li, Honghao. "Interpretable biological network reconstruction from observational data." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5207.

Abstract:
This thesis is focused on constraint-based methods, one of the basic types of causal structure learning algorithms. We use the PC algorithm as a representative, for which we propose a simple and general modification that is applicable to any PC-derived method. The modification ensures that all separating sets used during the skeleton reconstruction step to remove edges between conditionally independent variables remain consistent with respect to the final graph. It consists in iterating the structure learning algorithm while restricting the search for separating sets to those that are consistent with respect to the graph obtained at the end of the previous iteration. The restriction can be achieved with limited computational complexity with the help of a block-cut tree decomposition of the graph skeleton. Enforcing separating set consistency is found to increase the recall of constraint-based methods at the cost of precision, while keeping similar or better overall performance. It also improves the interpretability and explainability of the obtained graphical model. We then introduce the recently developed constraint-based method MIIC, which adopts ideas from the maximum likelihood framework to improve the robustness and overall performance of the obtained graph. We discuss the characteristics and limitations of MIIC and propose several modifications that emphasize the interpretability of the obtained graph and the scalability of the algorithm. In particular, we implement the iterative approach to enforce separating set consistency, opt for a conservative orientation rule, and exploit the orientation probability feature of MIIC to extend the edge notation in the final graph to illustrate different causal implications. The MIIC algorithm is applied to a dataset of about 400,000 breast cancer records from the SEER database as a large-scale real-life benchmark.
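The sketch below gives a simplified, runnable picture of the PC skeleton phase on synthetic Gaussian data, with a comment marking where the separating-set consistency restriction described above would apply; it is an illustration under these assumptions, not the thesis's implementation (no orientation phase, basic partial-correlation test).

```python
# Minimal sketch of the PC skeleton phase on synthetic Gaussian data.
import itertools
import numpy as np
from scipy import stats

def ci_test(data, i, j, cond, alpha=0.05):
    """Test X_i independent of X_j given X_cond via partial correlation (Fisher z)."""
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, idx], rowvar=False))
    r = np.clip(-prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1]), -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(data.shape[0] - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z))) > alpha   # True -> independent

rng = np.random.default_rng(0)
n = 2000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)          # chain x0 -> x1 -> x2
data = np.column_stack([x0, x1, x2])

edges = {frozenset(e) for e in itertools.combinations(range(3), 2)}
sepsets = {}
for size in range(0, 2):
    for i, j in [tuple(sorted(e)) for e in list(edges)]:
        others = [k for k in range(3) if k not in (i, j)]
        # Consistency hook: the thesis restricts candidate separating sets to those
        # compatible with the graph from the previous iteration (here: all subsets).
        for cond in itertools.combinations(others, size):
            if frozenset((i, j)) in edges and ci_test(data, i, j, cond):
                edges.discard(frozenset((i, j)))
                sepsets[(i, j)] = cond

print("skeleton edges:", sorted(tuple(sorted(e)) for e in edges))
print("separating sets:", sepsets)
```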
4

Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.

Abstract:
This thesis is in the field of explainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point of interest. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, with the aim of improving the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The question of aggregating classical quality constraints with knowledge-compatibility constraints is also studied, and we propose to use Gödel's integral as the aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.

Book chapters on the topic "Explicabilité des algorithmes":

1

García-Marzá, Domingo, and Patrici Calvo. "Dialogic Digital Ethics: From Explicability to Participation." In Algorithmic Democracy, 191–205. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-53015-9_10.

2

Kisselburgh, Lorraine, and Jonathan Beever. "The Ethics of Privacy in Research and Design: Principles, Practices, and Potential." In Modern Socio-Technical Perspectives on Privacy, 395–426. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-82786-1_17.

Abstract:
The contexts of sociotechnical privacy have evolved significantly in 50 years, with correlate shifts in the norms, values, and ethical concerns in research and design. We examine these eras of privacy from an ethics perspective, arguing that as contexts expand from the individual, to internet, interdependence, intelligences, and artificiality, they also reframe the audience or stakeholder roles present and broaden the field of ethical concerns. We discuss these ethical issues and introduce a principlist framework to guide ethical decision-making, articulating a strategy by which principles are reflexively applied in the decision-making process, informed by the rich interface of epistemic and ethical values. Next, we discuss specific challenges to privacy presented by emerging technologies such as biometric identification systems, autonomous vehicles, predictive algorithms, deepfake technologies, and public health surveillance and examine these challenges around five ethical principles: autonomy, justice, non-maleficence, beneficence, and explicability. Finally, we connect the theoretical and applied to the practical to briefly identify law, regulation, and soft law resources—including technical standards, codes of conduct, curricular programs, and statements of principles—that can provide actionable guidance and rules for professional conduct and technological development, codifying the reasoning outcomes of ethics.
