Ready-made bibliography on the topic "Explicabilité des algorithmes"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Explicabilité des algorithmes".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, when those details are available in the work's metadata.
Journal articles on the topic "Explicabilité des algorithmes"
Robbins, Scott. "A Misdirected Principle with a Catch: Explicability for AI". Minds and Machines 29, no. 4 (October 15, 2019): 495–514. http://dx.doi.org/10.1007/s11023-019-09509-3.
Краснов, Федор Владимирович, and Ирина Сергеевна Смазневич. "The explicability factor of the algorithm in the problems of searching for the similarity of text documents". Вычислительные технологии, no. 5(25) (October 28, 2020): 107–23. http://dx.doi.org/10.25743/ict.2020.25.5.009.
van Bruxvoort, Xadya, and Maurice van Keulen. "Framework for Assessing Ethical Aspects of Algorithms and Their Encompassing Socio-Technical System". Applied Sciences 11, no. 23 (November 25, 2021): 11187. http://dx.doi.org/10.3390/app112311187.
Kalyanpur, Aditya, Tom Breloff, and David A. Ferrucci. "Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10867–74. http://dx.doi.org/10.1609/aaai.v36i10.21333.
Niemi, Hannele. "AI in learning". Journal of Pacific Rim Psychology 15 (January 2021): 183449092110381. http://dx.doi.org/10.1177/18344909211038105.
Antonio, Nuno, Ana de Almeida, and Luis Nunes. "Big Data in Hotel Revenue Management: Exploring Cancellation Drivers to Gain Insights Into Booking Cancellation Behavior". Cornell Hospitality Quarterly 60, no. 4 (May 29, 2019): 298–319. http://dx.doi.org/10.1177/1938965519851466.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)". Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.
Hübner, Ursula H., Nicole Egbert, and Georg Schulte. "Clinical Information Systems – Seen through the Ethics Lens". Yearbook of Medical Informatics 29, no. 01 (August 2020): 104–14. http://dx.doi.org/10.1055/s-0040-1701996.
Krasnov, Fedor, Irina Smaznevich, and Elena Baskakova. "Optimization approach to the choice of explicable methods for detecting anomalies in homogeneous text collections". Informatics and Automation 20, no. 4 (August 3, 2021): 869–904. http://dx.doi.org/10.15622/ia.20.4.5.
Coppi, Giulio, Rebeca Moreno Jimenez, and Sofia Kyriazi. "Explicability of humanitarian AI: a matter of principles". Journal of International Humanitarian Action 6, no. 1 (October 6, 2021). http://dx.doi.org/10.1186/s41018-021-00096-6.
Pełny tekst źródłaRozprawy doktorskie na temat "Explicabilité des algorithmes"
Raizonville, Adrien. "Regulation and competition policy of the digital economy : essays in industrial organization". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT028.
This thesis addresses two issues facing regulators in the digital economy: the informational challenge generated by the use of new artificial intelligence technologies, and the market power of large digital platforms. The first chapter explores the implementation of a (costly and imperfect) audit system by a regulator seeking to limit both the risk of damage generated by artificial intelligence technologies and its cost of regulation. Firms may invest in explainability to better understand their technologies and thus reduce their cost of compliance. When audit efficacy is not affected by explainability, firms invest voluntarily in explainability. Technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator's detection of misconduct, a firm may hide its misconduct behind algorithmic opacity. Regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed. The second chapter studies the effects of a coopetition strategy between two two-sided platforms on the subscription prices of their users, in a growing market (i.e., one in which new users can join the platforms) and in a mature market. More specifically, the platforms set the subscription prices of one group of users (e.g., sellers) cooperatively and the prices of the other group (e.g., buyers) non-cooperatively. By cooperating on the subscription price of sellers, each platform internalizes the negative externality it exerts on the other platform when it reduces its price. This leads the platforms to increase the subscription price for sellers relative to the competitive situation. At the same time, as the economic value of sellers increases and as buyers exert a positive cross-network effect on sellers, competition between platforms to attract buyers intensifies, leading to a lower subscription price for buyers. Total surplus increases only when new buyers can join the market. Finally, the third chapter examines interoperability between an incumbent platform and a new entrant as a regulatory tool to improve market contestability and limit the market power of the incumbent. Interoperability allows network effects to be shared between the two platforms, thereby reducing their weight in users' choice of which platform to join. The preference to interact with exclusive users of the other platform leads to multihoming when interoperability is not perfect. Interoperability reduces demand for the incumbent platform, which lowers its subscription price. In contrast, for relatively low levels of interoperability, demand for the entrant platform increases, as do its price and profit, before decreasing at higher levels of interoperability. Users always benefit from the introduction of interoperability.
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven very successful at solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources at our hands today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. As complex as they are today, these AI models are all but impossible for humans to interpret and understand. In this thesis, we focus on a specific research area, Explainable Artificial Intelligence (XAI), which aims to provide approaches for interpreting complex AI models and explaining their decisions. We present two approaches, STACI and BELLA, which address classification and regression tasks, respectively, for tabular data. Both are deterministic, model-agnostic, post-hoc approaches, meaning they can be applied to any black-box model after its creation. In this way, interpretability is an added value that requires no compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
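The STACI and BELLA algorithms themselves are not given in this abstract. As a minimal illustration of the post-hoc, model-agnostic surrogate idea they build on, the sketch below fits a one-feature threshold rule to a hypothetical black box and measures its fidelity; the decision function and all names here are assumptions for the example, not the thesis's actual methods.

```python
import random

# Hypothetical stand-in for an opaque trained model (e.g. a neural net).
def black_box(x):
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 0.5 else 0

def fit_surrogate(X, y):
    """Fit the single-feature threshold rule (a depth-1 'stump') that
    best mimics the black box's labels y on the sampled inputs X."""
    best_rule, best_fid = (0, 0.0), -1.0
    for feat in (0, 1):
        for t in {x[feat] for x in X}:
            preds = [int(x[feat] > t) for x in X]
            fid = sum(p == yi for p, yi in zip(preds, y)) / len(y)
            if fid > best_fid:
                best_rule, best_fid = (feat, t), fid
    return best_rule, best_fid

random.seed(0)
X = [(random.random(), random.random()) for _ in range(500)]
y = [black_box(x) for x in X]        # query the black box, not ground truth
(rule_feat, rule_thr), fidelity = fit_surrogate(X, y)
# The rule "x[rule_feat] > rule_thr" is the explanation; fidelity measures
# how faithfully it reproduces the black box's behaviour on the sample.
print(f"surrogate: x[{rule_feat}] > {rule_thr:.3f}  (fidelity {fidelity:.2f})")
```

Note that the surrogate is trained against the model's outputs rather than the true labels: its quality is judged by fidelity to the black box, which is what makes the explanation post-hoc and model-agnostic.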
Li, Honghao. "Interpretable biological network reconstruction from observational data". Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5207.
This thesis focuses on constraint-based methods, one of the basic families of causal structure learning algorithms. We use the PC algorithm as a representative, for which we propose a simple and general modification applicable to any PC-derived method. The modification ensures that all separating sets used during the skeleton reconstruction step to remove edges between conditionally independent variables remain consistent with respect to the final graph. It consists of iterating the structure learning algorithm while restricting the search for separating sets to those consistent with the graph obtained at the end of the previous iteration. The restriction can be achieved with limited computational complexity with the help of a block-cut tree decomposition of the graph skeleton. Enforcing separating-set consistency is found to increase the recall of constraint-based methods at the cost of precision, while keeping similar or better overall performance. It also improves the interpretability and explainability of the obtained graphical model. We then introduce the recently developed constraint-based method MIIC, which adopts ideas from the maximum-likelihood framework to improve the robustness and overall quality of the obtained graph. We discuss the characteristics and limitations of MIIC and propose several modifications that emphasize the interpretability of the obtained graph and the scalability of the algorithm. In particular, we implement the iterative approach to enforce separating-set consistency, opt for a conservative orientation rule, and exploit MIIC's orientation probabilities to extend the edge notation of the final graph so that it illustrates different causal implications. The MIIC algorithm is applied to a dataset of about 400 000 breast cancer records from the SEER database, as a large-scale real-life benchmark.
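The skeleton step the abstract refers to, in which an edge is removed as soon as some separating set renders its endpoints conditionally independent, and that set is recorded, can be sketched as follows. This is a bare-bones illustration, not the thesis's modified algorithm: the independence oracle and the toy chain A → B → C are assumptions, and real implementations use statistical tests and the PC-stable edge ordering.

```python
from itertools import combinations

def pc_skeleton(nodes, indep):
    """Skeleton phase of the PC algorithm: start from a complete graph and
    remove the edge X-Y whenever some conditioning set Z, drawn from the
    current neighbours of X, makes X and Y conditionally independent.
    `indep(x, y, z)` is a conditional-independence oracle.
    Returns the remaining edges and the recorded separating sets."""
    adj = {v: set(nodes) - {v} for v in nodes}
    sepset = {}
    level = 0  # size of the conditioning sets tried at this pass
    while any(len(adj[x] - {y}) >= level for x in nodes for y in adj[x]):
        for x in nodes:
            for y in list(adj[x]):
                for z in combinations(sorted(adj[x] - {y}), level):
                    if indep(x, y, set(z)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(z)
                        break
        level += 1
        if level > len(nodes):  # safety bound
            break
    edges = {frozenset((x, y)) for x in nodes for y in adj[x]}
    return edges, sepset

# Toy chain A -> B -> C: A and C are independent given {B}, nothing else.
def oracle(x, y, z):
    return {x, y} == {"A", "C"} and "B" in z

edges, sepset = pc_skeleton(["A", "B", "C"], oracle)
print(sorted(tuple(sorted(e)) for e in edges))  # [('A', 'B'), ('B', 'C')]
print(sepset[frozenset(("A", "C"))])            # {'B'}
```

The thesis's consistency modification would, on a second iteration, restrict the sets `z` tried above to those compatible with the graph produced by the first pass; here a single pass over the toy oracle already yields a consistent result.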
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction that a trained decision model makes for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, aiming to improve their understandability by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, in particular counterfactual examples, leading to several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed as rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints with knowledge-compatibility constraints is also studied, and we propose the Gödel integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users, and the notion of diversity in explanations.
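The central idea of the abstract, adding a knowledge-compatibility criterion to the usual proximity objective when searching for a counterfactual example, can be illustrated with a toy sketch. Everything below is hypothetical (the classifier, the user constraint, and the neighbourhood search are assumptions for illustration, not the KICE algorithm itself).

```python
def black_box(x):
    """Hypothetical binary classifier to be explained."""
    return int(x[0] + x[1] > 1.0)

def counterfactual(x0, predict, knowledge_ok, step=0.05, max_r=2.0):
    """Search growing neighbourhoods of x0 for the closest point whose
    prediction differs from predict(x0) AND which satisfies the user's
    knowledge constraint -- a toy version of adding knowledge
    compatibility to the usual proximity objective."""
    target = 1 - predict(x0)
    r = step
    while r <= max_r:
        candidates = [(x0[0] + dx, x0[1] + dy)
                      for dx in (-r, 0, r) for dy in (-r, 0, r)
                      if (dx, dy) != (0, 0)]
        feasible = [c for c in candidates
                    if predict(c) == target and knowledge_ok(c)]
        if feasible:
            # closest feasible candidate (squared Euclidean distance)
            return min(feasible,
                       key=lambda c: (c[0] - x0[0])**2 + (c[1] - x0[1])**2)
        r += step
    return None

# User knowledge (hypothetical): feature 0 may not be pushed above 0.6.
ok = lambda c: c[0] <= 0.6
cf = counterfactual((0.4, 0.4), black_box, ok)
print(cf)  # a nearby point classified 1 that respects the constraint
```

Without the `knowledge_ok` filter, the search would be free to return counterfactuals that a given user considers implausible or infeasible; the extra criterion is what makes the explanation personalized.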
Book chapters on the topic "Explicabilité des algorithmes"
García-Marzá, Domingo, and Patrici Calvo. "Dialogic Digital Ethics: From Explicability to Participation". In Algorithmic Democracy, 191–205. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-53015-9_10.
Kisselburgh, Lorraine, and Jonathan Beever. "The Ethics of Privacy in Research and Design: Principles, Practices, and Potential". In Modern Socio-Technical Perspectives on Privacy, 395–426. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-82786-1_17.