To view other types of publications on this topic, follow the link: Explainable artificial intelligence.

Dissertations on the topic "Explainable artificial intelligence"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Explainable artificial intelligence".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Nilsson, Linus. "Explainable Artificial Intelligence for Reinforcement Learning Agents." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294162.

Full text of the source
Abstract:
Following the success that machine learning has enjoyed over the last decade, reinforcement learning has become a prime research area for automation and solving complex tasks. Ranging from playing video games at a professional level to robots collaborating in picking goods in warehouses, the applications of reinforcement learning are numerous. The systems are, however, very complex, and why reinforcement learning agents solve the tasks given to them in certain ways is still largely unknown to the human observer. This makes the actual use of the agents limited to non-critical…
APA, Harvard, Vancouver, ISO, and other styles
2

Elguendouze, Sofiane. "Explainable Artificial Intelligence approaches for Image Captioning." Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1003.

Full text of the source
Abstract:
The rapid evolution of image captioning models, driven by the integration of deep learning techniques combining the image and text modalities, has led to increasingly complex systems. However, these models often operate as black boxes, incapable of providing transparent explanations of their decisions. This thesis addresses the explainability of image captioning systems based on Encoder-Attention-Decoder architectures, through four aspects. First, it explores the concept of latent space, moving away from approaches…
APA, Harvard, Vancouver, ISO, and other styles
3

El, Qadi El Haouari Ayoub. "An EXplainable Artificial Intelligence Credit Rating System." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS486.

Full text of the source
Abstract:
In recent years, the trade finance gap has reached the alarming figure of $1.5 trillion, underscoring a growing crisis in global trade. This gap is particularly damaging to small and medium-sized enterprises (SMEs), which often struggle to access trade finance. Traditional credit rating systems, which form the backbone of trade finance, are not always suited to properly assessing the creditworthiness of SMEs. The term "credit scoring" refers to the methods and techniques used…
APA, Harvard, Vancouver, ISO, and other styles
4

Vincenzi, Leonardo. "eXplainable Artificial Intelligence User Experience: contesto e stato dell’arte." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23338/.

Full text of the source
Abstract:
The great development of the world of Artificial Intelligence, combined with its very wide application in many fields in recent years, has led to an ever-growing demand for explainability of Machine Learning systems. Following this need, the field of eXplainable Artificial Intelligence has taken important steps toward creating systems and methods to make intelligent systems ever more transparent; and in the near future, in order to guarantee ever greater fairness and safety in decisions made by AI, increasingly strict regulation of its…
APA, Harvard, Vancouver, ISO, and other styles
5

PICCHIOTTI, NICOLA. "Explainable Artificial Intelligence: an application to complex genetic diseases." Doctoral thesis, Università degli studi di Pavia, 2021. http://hdl.handle.net/11571/1447637.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

ABUKMEIL, MOHANAD. "UNSUPERVISED GENERATIVE MODELS FOR DATA ANALYSIS AND EXPLAINABLE ARTIFICIAL INTELLIGENCE." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/889159.

Full text of the source
Abstract:
For more than a century, the methods of representation learning and the exploration of the intrinsic structures of data have developed remarkably and currently include supervised, semi-supervised, and unsupervised methods. However, recent years have witnessed the flourishing of big data, where typical dataset dimensions are high, and the data can come in messy, missing, incomplete, unlabeled, or corrupted forms. Consequently, discovering and learning the hidden structure buried inside such data becomes highly challenging. From this perspective, latent data analysis and dimensionality reduction…
APA, Harvard, Vancouver, ISO, and other styles
7

Rouget, Thierry. "Learning explainable concepts in the presence of a qualitative model." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9762.

Full text of the source
Abstract:
This thesis addresses the problem of learning concept descriptions that are interpretable, or explainable. Explainability is understood as the ability to justify the learned concept in terms of the existing background knowledge. The starting point for the work was an existing system that would induce only fully explainable rules. The system performed well when the model used during induction was complete and correct. In practice, however, models are likely to be imperfect, i.e. incomplete and incorrect. We report here a new approach that achieves explainability with imperfect models. The basis…
APA, Harvard, Vancouver, ISO, and other styles
8

Gjeka, Mario. "Uno strumento per le spiegazioni di sistemi di Explainable Artificial Intelligence." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find the full text of the source
Abstract:
The goal of this thesis is to show the importance of explanations in an intelligent system. The need for explainable and transparent artificial intelligence is growing considerably, a need highlighted by companies' efforts to develop transparent and explainable intelligent computer systems.
APA, Harvard, Vancouver, ISO, and other styles
9

Ketata, Firas. "Risk prediction of endocrine diseases using data science and explainable artificial intelligence." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCD022.

Full text of the source
Abstract:
The goal of this thesis is to predict the risk of endocrine diseases using data science and machine learning. The idea is to leverage this risk identification to help physicians manage financial resources and personalize the treatment of glucose metabolism abnormalities in patients with beta-thalassemia major, as well as to screen for metabolic syndrome in adolescents. An explainability study of the predictions was carried out in this thesis to assess the reliability of the prediction of glucose abnormalities and to…
APA, Harvard, Vancouver, ISO, and other styles
10

Amarasinghe, Kasun. "Explainable Neural Networks based Anomaly Detection for Cyber-Physical Systems." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/6091.

Full text of the source
Abstract:
Cyber-Physical Systems (CPSs) are the core of modern critical infrastructure (e.g. power grids), and securing them is of paramount importance. Anomaly detection in data is crucial for CPS security. While Artificial Neural Networks (ANNs) are strong candidates for the task, they are seldom deployed in safety-critical domains due to the perception that ANNs are black boxes. Therefore, to leverage ANNs in CPSs, cracking open the black box through explanation is essential. The main objective of this dissertation is developing explainable ANN-based Anomaly Detection Systems for Cyber-Physical Systems…
APA, Harvard, Vancouver, ISO, and other styles
11

PANIGUTTI, Cecilia. "eXplainable AI for trustworthy healthcare applications." Doctoral thesis, Scuola Normale Superiore, 2022. https://hdl.handle.net/11384/125202.

Full text of the source
Abstract:
Acknowledging that AI will inevitably become a central element of clinical practice, this thesis investigates the role of eXplainable AI (XAI) techniques in developing trustworthy AI applications in healthcare. The first part of this thesis focuses on the societal, ethical, and legal aspects of the use of AI in healthcare. It first compares the different approaches to AI ethics worldwide and then focuses on the practical implications of the European ethical and legal guidelines for AI applications in healthcare. The second part of the thesis explores how XAI techniques can help meet…
APA, Harvard, Vancouver, ISO, and other styles
12

Keneni, Blen M. Keneni. "Evolving Rule Based Explainable Artificial Intelligence for Decision Support System of Unmanned Aerial Vehicles." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1525094091882295.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Hammarström, Tobias. "Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424779.

Full text of the source
Abstract:
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying…
APA, Harvard, Vancouver, ISO, and other styles
14

Houzé, Étienne. "Explainable Artificial Intelligence for the Smart Home : Enabling Relevant Dialogue between Users and Autonomous Systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252705.

Full text of the source
Abstract:
Smart home technologies aim at providing users control over their house, including but not limited to temperature, light, security, and energy consumption. Despite offering the possibility of lowering power consumption (and thus the energy bill), smart home systems still struggle to convince a broad public, often being regarded as intrusive or not trustworthy. Gaining users' sympathy would require a solution that provides relevant explanations without relying on distant computing (which would imply sending private data online). We therefore propose an architecture where the autonomic controller…
APA, Harvard, Vancouver, ISO, and other styles
15

Pierrard, Régis. "Explainable Classification and Annotation through Relation Learning and Reasoning." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST008.

Full text of the source
Abstract:
With the recent successes of deep learning and the ever more numerous interactions between humans and artificial intelligences, explainability has become a major concern. Indeed, it is difficult to understand the behaviour of deep neural networks, which makes them unsuitable for use in critical systems. In this thesis, we propose an approach for classifying or annotating signals while explaining the results obtained. It is based on the use of a transparent model, whose reasoning is clear, and of…
APA, Harvard, Vancouver, ISO, and other styles
16

Hedström, Anna. "Explainable Artificial Intelligence : How to Evaluate Explanations of Deep Neural Network Predictions using the Continuity Test." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281279.

Full text of the source
Abstract:
With a surging appetite to leverage deep learning models as a means to enhance decision-making, new requirements for interpretability are set. Renewed research interest has thus been found within the machine learning community to develop explainability methods that can estimate the influence of a given input feature on the prediction made by a model. Current explainability methods for deep neural networks have nonetheless been shown to be far from foolproof, and the question of how to properly evaluate explanations has largely remained unsolved. In this thesis work, we have taken a deep look into how…
APA, Harvard, Vancouver, ISO, and other styles
17

Jeanneret, Sanmiguel Guillaume. "Towards explainable and interpretable deep neural networks." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.

Full text of the source
Abstract:
Deep neural architectures have demonstrated remarkable results in various computer vision tasks. However, their extraordinary performance comes at the cost of interpretability. As a result, the field of explainable AI has emerged to truly understand what these models learn and to uncover their sources of error. This thesis explores explainable algorithms in order to reveal the biases and variables used by these black-box models in the context of image classification. We therefore divide this thesis into four parts…
APA, Harvard, Vancouver, ISO, and other styles
18

BITETTO, ALESSANDRO. "The role of Explainable Artificial Intelligence in risk assessment: a study on the economic and epidemiologic impact." Doctoral thesis, Università degli studi di Pavia, 2022. http://hdl.handle.net/11571/1452624.

Full text of the source
Abstract:
The growing application of black-box Artificial Intelligence algorithms in many real-world applications is raising the importance of understanding how the models make their decisions. The research field that aims to "open" the black box and make the predictions more interpretable is referred to as eXplainable Artificial Intelligence (XAI). Another important field of research, strictly related to XAI, is the compression of information, also referred to as dimensionality reduction. Having a synthetic set of few variables that captures the behaviour and the relationships of many more variables can be…
APA, Harvard, Vancouver, ISO, and other styles
19

Ounissi, Mehdi. "Decoding the Black Box : Enhancing Interpretability and Trust in Artificial Intelligence for Biomedical Imaging - a Step Toward Responsible Artificial Intelligence." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS237.

Full text of the source
Abstract:
In an era dominated by AI, its opaque decision-making process, known as the "black box" problem, poses significant challenges, particularly in critical fields such as biomedical imaging, where precision and trust are essential. Our research focuses on improving the interpretability of AI in biomedical applications. We developed a framework for biomedical image analysis that quantifies phagocytosis in neurodegenerative diseases using time-lapse phase-contrast video microscopy. Traditional methods…
APA, Harvard, Vancouver, ISO, and other styles
20

Verenich, Ilya. "Explainable predictive monitoring of temporal measures of business processes." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/124037/1/Ilya_Verenich_Thesis.pdf.

Full text of the source
Abstract:
This thesis explores data-driven, predictive approaches to monitor business process performance. These approaches allow process stakeholders to prevent or mitigate potential performance issues or compliance violations in real time, as early as possible. To help users understand the rationale for the predictions and build trust in them, the thesis proposes two techniques for explainable predictive process monitoring: one based on deep learning, the other driven by process models. This is achieved by decomposing a prediction into its elementary components. The techniques are compared against…
APA, Harvard, Vancouver, ISO, and other styles
21

Bracchi, Luca. "I-eXplainer: applicazione web per spiegazioni interattive." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20424/.

Full text of the source
Abstract:
The purpose of this thesis is to show that it is possible to transform a static explanation of the output of an explainable algorithm into an interactive explanation with a greater degree of effectiveness. The explanation sought will be human-readable and explorable. To demonstrate this, I will use the explanation of one of the explainable algorithms in the AIX360 toolkit and generate a static version, which I will call the base explanation. This will then be expanded and enriched through a data structure built to hold formatted, interconnected pieces of information. This information will be requested…
APA, Harvard, Vancouver, ISO, and other styles
22

Costa, Bueno Vicente. "Fuzzy Horn clauses in artificial intelligence: a study of free models, and applications in art painting style categorization." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/673374.

Full text of the source
Abstract:
This doctoral thesis contributes to the study of Horn clauses in fuzzy logics, as well as to their use in fuzzy knowledge representation applied to the design of an algorithm for classifying paintings according to their artistic style. In the first part of the work we focus on some notions relevant to logic programming, such as free models and Herbrand structures in fuzzy mathematical logic. We prove the existence of free models in fuzzy universal Horn classes, and we show that every universal fuzzy Horn theory without equality…
APA, Harvard, Vancouver, ISO, and other styles
23

Varela, Rial Alejandro 1993. "In silico modeling of protein-ligand binding." Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2022. http://hdl.handle.net/10803/673579.

Full text of the source
Abstract:
The affinity of a drug to its target protein is one of the key properties of a drug. Although there are experimental methods to measure the binding affinity, they are expensive and relatively slow. Hence, accurately predicting this property with software tools would be very beneficial to drug discovery. In this thesis, several applications have been developed to model and predict the binding mode of a ligand to a protein, to evaluate the feasibility of that prediction, and to perform model interpretability in deep neural networks trained on protein-ligand complexes.
APA, Harvard, Vancouver, ISO, and other styles
24

Sabbatini, Federico. "Interpretable Prediction of Galactic Cosmic-Ray Short-Term Variations with Artificial Neural Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/22147/.

Full text of the source
Abstract:
Monitoring galactic cosmic-ray flux variations is a crucial issue for all those space missions for which cosmic rays constitute a limitation on the performance of the on-board instruments. If it is not possible to study galactic cosmic-ray short-term fluctuations on board, it is necessary to rely on models that are able to predict these flux modulations. Artificial neural networks are nowadays the most widely used tools for solving a wide range of problems in various disciplines, including medicine, technology, business, and many others. All artificial neural networks are black boxes…
APA, Harvard, Vancouver, ISO, and other styles
25

Palmisano, Enzo Pio. "A First Study of Transferable and Explainable Deep Learning Models for HPC Systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20099/.

Full text of the source
Abstract:
The work described in this thesis is based on the study of Deep Learning models applied to anomaly detection, for identifying anomalous states in HPC (High-Performance Computing) systems. In particular, the goal is to study the transferability of a model and to explain the results it produces through techniques from Explainable Artificial Intelligence. HPC systems are equipped with numerous sensors capable of monitoring correct operation in real time. However, due to the high degree of complexity of these systems, it becomes necessary to use innovative techniques and…
APA, Harvard, Vancouver, ISO, and other styles
26

Saulières, Léo. "Explication de l'apprentissage par renforcement." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES224.

Full text of the source
Abstract:
In recent years, models based on Artificial Intelligence (AI) have made impressive progress, both in the accuracy of their results and in the breadth of their applications. This progress is partly explained by the use of neural networks that efficiently solve various tasks from a set of data. The various advances in predictive AI (as opposed to analytical AI, which is concerned with knowledge representation and the formalization of reasoning) have been put to use in fields as varied as agriculture…
APA, Harvard, Vancouver, ISO, and other styles
27

Mualla, Yazan. "Explaining the Behavior of Remote Robots to Humans : An Agent-based Approach." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA023.

Full text of the source
Abstract:
With the emergence and spread of artificial intelligence systems, understanding the behaviour of artificial agents, or intelligent robots, is becoming essential to ensure smooth collaboration between humans and these agents. Indeed, it is not easy for humans to understand the processes that led to the agents' decisions. Recent studies in the field of explainable artificial intelligence, particularly on goal-driven models, have confirmed that explaining an agent's behaviour to a human promotes the comprehensibility of the agent…
APA, Harvard, Vancouver, ISO, and other styles
28

Ankaräng, Marcus, and Jakob Kristiansson. "Comparison of Logistic Regression and an Explained Random Forest in the Domain of Creditworthiness Assessment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301907.

Full text of the source
Abstract:
As the use of AI in society develops, the requirement for explainable algorithms has increased. A challenge with many modern machine learning algorithms is that, due to their often complex structures, they lack the ability to produce human-interpretable explanations. Research within explainable AI has resulted in methods that can be applied on top of non-interpretable models to motivate their decision bases. The aim of this thesis is to compare an unexplained machine learning model used in combination with an explanatory method, and a model that is explainable through its inherent structure…
APA, Harvard, Vancouver, ISO, and other styles
29

Matz, Filip, and Yuxiang Luo. "Explaining Automated Decisions in Practice : Insights from the Swedish Credit Scoring Industry." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300897.

Full text of the source
Abstract:
The field of explainable artificial intelligence (XAI) has gained momentum in recent years following the increased use of AI systems across industries, leading to bias, discrimination, and data security concerns. Several conceptual frameworks for how to reach AI systems that are fair, transparent, and understandable have been proposed, as well as a number of technical solutions improving some of these aspects in a research context. However, there is still a lack of studies examining the implementation of these concepts and techniques in practice. This research aims to bridge the gap between…
APA, Harvard, Vancouver, ISO, and other styles
30

Leoni, Cristian. "Interpretation of Dimensionality Reduction with Supervised Proxies of User-defined Labels." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105622.

Full text of the source
Abstract:
Research on Machine Learning (ML) explainability has received a lot of attention in recent times. The interest, however, has mostly focused on supervised models, while other ML fields have not had the same level of attention. Despite its usefulness in a variety of fields, unsupervised learning explainability is still an open issue. In this paper, we present a Visual Analytics framework based on eXplainable AI (XAI) methods to support the interpretation of dimensionality reduction methods. The framework provides the user with an interactive and iterative process to investigate and explain…
APA, Harvard, Vancouver, ISO, and other styles
31

Hazarika, Subhashis. "Statistical and Machine Learning Approaches For Visualizing and Analyzing Large-Scale Simulation Data." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1574692702479196.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
32

Saluja, Rohit. "Interpreting Multivariate Time Series for an Organization Health Platform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289465.

Full text of the source
Abstract:
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning, to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems, just making predictions using machine learning algorithms only solves the…
APA, Harvard, Vancouver, ISO, and other styles
33

Ciorna, Vasile. "VIANA : visualisation analytique au service de la conception des pneumatiques." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0500.

Full text of the source
Abstract:
With the acceleration of technological breakthroughs in recent decades, more and more means are available to support companies in their product development. Although this is not new, a clear increase in the use of machine learning models within companies has recently been observed. However, the way these models are used considerably influences the risk-reward balance as well as their adoption. This project presents reflections on the design of visualizations aimed at effectively supporting the use…
APA, Harvard, Vancouver, ISO, and other styles
34

Mita, Graziano. "Toward interpretable machine learning, with applications to large-scale industrial systems data." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS112.

Full text of the source
Abstract:
The contributions presented in this thesis are twofold. We first provide a general overview of interpretable machine learning, drawing links with different fields and introducing a taxonomy of explainability approaches. We then focus on rule learning and propose a new classification approach, LIBRE, based on monotone Boolean function synthesis. LIBRE is an ensemble method that combines the candidate rules learned by several bottom-up weak learners with a simple union, in order to obtain a set…
APA, Harvard, Vancouver, ISO, and other styles
35

VENTURA, FRANCESCO. "Explaining black-box deep neural models' predictions, behaviors, and performances through the unsupervised mining of their inner knowledge." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2912972.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
36

Baaj, Ismaïl. "Explainability of possibilistic and fuzzy rule-based systems." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS021.

Full text of the source
Abstract:
Today, advances in Artificial Intelligence (AI) have led to the emergence of systems capable of automating complex processes, using models that can be difficult for humans to understand. When humans use these AI systems, it is well known that they want to understand their behaviours and actions, as they place more trust in systems that can explain their choices, assumptions, and reasoning. The ability of AI systems to explain themselves has become a user requirement, particularly in…
APA, Harvard, Vancouver, ISO, and other styles
37

Tseng, Wei-Chih. "Impact of flow rate in extrusion-based bioprinting to improve printing quality." Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG021.

Full text of the source
Abstract:
Extrusion-based bioprinting is an effective, simple, and economical technique in the field of bioprinting. It enables the fabrication of complex, porous, three-dimensional scaffold structures that are widely used in tissue repair and regeneration. However, most biomaterial inks are non-Newtonian fluids, and the rheological and thermal properties of these inks vary considerably. The diversity and complexity of these fluids frequently lead to the adoption of empirical approaches to…
APA, Harvard, Vancouver, ISO, and other styles
38

Giuliani, Luca. "Extending the Moving Targets Method for Injecting Constraints in Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23885/.

Full text of the source
Abstract:
Informed Machine Learning is an umbrella term that comprises a set of methodologies in which domain knowledge is injected into a data-driven system in order to improve its level of accuracy, satisfy some external constraint, and in general serve the purposes of explainability and reliability. This topic has been widely explored in the literature by means of many different techniques. Moving Targets is one such technique, particularly focused on constraint satisfaction: it is based on decomposition and bi-level optimization and proceeds by iteratively refining the target labels through a…
APA, Harvard, Vancouver, ISO, and other styles
39

Ben, Gamra Siwar. "Contribution à la mise en place de réseaux profonds pour l'étude de tableaux par le biais de l'explicabilité : Application au style Tenebrisme ("clair-obscur")." Electronic Thesis or Diss., Littoral, 2023. http://www.theses.fr/2023DUNK0695.

Full text source
Abstract:
Face detection in chiaroscuro (clair-obscur) paintings is of growing interest to art historians and researchers, as it makes it possible to estimate the location of the illuminant and thereby answer several technical questions. Deep learning is attracting increasing attention thanks to its excellent performance. An optimized Faster Region-based Convolutional Neural Network has demonstrated its ability to meet these challenges effectively and to deliver promising face-detection results on chiaroscuro images. However,
Styles: APA, Harvard, Vancouver, ISO, etc.
40

cruciani, federica. "EXplainable Artificial Intelligence: enabling AI in neurosciences and beyond." Doctoral thesis, 2023. https://hdl.handle.net/11562/1085066.

Full text source
Abstract:
The adoption of AI models in medicine and neurosciences has the potential to play a significant role not only in bringing scientific advancements but also in clinical decision-making. However, concerns mount over the potential biases such models could carry, which could have far-reaching consequences in a critical field like biomedicine. Achieving usable intelligence is challenging: it is fundamental not only to learn from prior data, extract knowledge, and guarantee generalization capabilities, but also to disentangle the underlying explanatory factors in order to deeply un
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Pereira, Filipe Inácio da Costa. "Explainable artificial intelligence - learning decision sets with sat." Master's thesis, 2018. http://hdl.handle.net/10451/34903.

Full text source
Abstract:
Master's thesis, Informatics Engineering (Interaction and Knowledge), Universidade de Lisboa, Faculdade de Ciências, 2018. Artificial Intelligence is a core research topic of key significance for technological growth. With the increase of data, we have more efficient models that, in a few seconds, will inform us of their prediction on a given input set. The most complex techniques with the best results nowadays are black-box models. Unfortunately, these cannot provide an explanation behind their predictions, which is a major drawback for us humans. Explainable Artificial Intelligence, whose objecti
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Coke, Ricardo Miguel Branco. "Organization of information in neural networks." Master's thesis, 2021. http://hdl.handle.net/10348/11093.

Full text source
Abstract:
Dissertation submitted to the UNIVERSIDADE DE TRAS-OS-MONTES E ALTO DOURO for the degree of MASTER in Electrical and Computer Engineering. The world is evolving, and technology as we know it is crossing barriers we once thought could not be broken. Artificial Intelligence has been one of the areas achieving a level of success that raises the highest expectations in many tasks across all fields of work. In the domain of Machine Learning, neural networks have been one of the concepts that has stood out, resulting in extremely
Styles: APA, Harvard, Vancouver, ISO, etc.
43

GUPTA, PALAK. "APPLICATION OF EXPLAINABLE ARTIFICIAL INTELLIGENCE IN THE IDENTIFICATION OF NON-SMALL CELL LUNG CANCER BIOMARKERS." Thesis, 2023. http://dspace.dtu.ac.in:8080/jspui/handle/repository/20022.

Full text source
Abstract:
Worldwide, lung cancer is the second most commonly diagnosed cancer. NSCLC is the most common type of lung cancer in the United States, accounting for 85% of all lung cancer diagnoses. The purpose of this study was to find potential diagnostic biomarkers for NSCLC by applying eXplainable Artificial Intelligence (XAI) to XGBoost machine learning (ML) models trained on binary classification datasets comprising the expression data of 60 non-small cell lung cancer tissue samples and 60 normal healthy tissue samples. After successfully incorporating SHAP values into the ML models,
Styles: APA, Harvard, Vancouver, ISO, etc.
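The SHAP attributions this entry relies on are grounded in Shapley values from cooperative game theory. As a self-contained illustration of the underlying idea (not the thesis's actual XGBoost pipeline, which is not reproduced here; the scoring function below is hypothetical), the exact Shapley value of each feature can be computed by enumerating feature coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating feature coalitions.

    Features outside a coalition are replaced by their baseline value.
    This brute force is exponential in the number of features; libraries
    such as SHAP use model-specific shortcuts (e.g. TreeSHAP for trees).
    """
    n = len(x)
    phi = [0.0] * n

    def value(coalition):
        z = [x[j] if j in coalition else baseline[j] for j in range(n)]
        return model(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for s in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(s) | {i}) - value(set(s)))
    return phi

# Hypothetical "expression score": two additive terms plus an interaction.
model = lambda z: 2.0 * z[0] + 1.0 * z[1] + 0.5 * z[0] * z[2]
x = [1.0, 1.0, 1.0]          # sample to explain
baseline = [0.0, 0.0, 0.0]   # reference input

phi = shapley_values(model, x, baseline)
# Efficiency property: attributions sum to model(x) - model(baseline) = 3.5.
assert abs(sum(phi) - 3.5) < 1e-9
```

Note how the 0.5 interaction term is split evenly between features 0 and 2, while the purely additive terms are attributed entirely to their own feature.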
44

"Towards Building an Intelligent Tutor for Gestural Languages using Concept Level Explainable AI." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.57347.

Full text source
Abstract:
Languages, especially gestural and sign languages, are best learned in immersive environments with rich feedback. Computer-Aided Language Learning (CALL) solutions for spoken languages have successfully incorporated some feedback mechanisms, but no such solution exists for signed languages. Computer-Aided Sign Language Learning (CASLL) is a recent and promising field of research made feasible by advances in Computer Vision and Sign Language Recognition (SLR). Leveraging existing SLR systems for feedback-based learning is not feasible because their decision processes are not
Styles: APA, Harvard, Vancouver, ISO, etc.
45

"Explainable AI in Workflow Development and Verification Using Pi-Calculus." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.55566.

Full text source
Abstract:
Computer science education is an increasingly vital area of study with various challenges that raise the difficulty level for new students, resulting in higher attrition rates. As part of an effort to resolve this issue, a new visual programming language environment was developed for this research: the Visual IoT and Robotics Programming Language Environment (VIPLE). VIPLE is based on computational thinking and flowcharts, which reduces the need to memorize the detailed syntax of text-based programming languages. VIPLE has been used at Arizona State University (ASU) in multiple
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems." Master's thesis, 2021. http://hdl.handle.net/10362/126699.

Full text source
Abstract:
Cardiovascular diseases are the leading global cause of death. Their treatment and prevention rely on electrocardiogram interpretation, which is subject to the physician's variability. Subjectiveness is intrinsic to electrocardiogram interpretation and hence prone to errors. To assist physicians in making precise and thoughtful decisions, artificial intelligence is being deployed to develop models that can interpret extensive datasets and provide accurate decisions. However, the lack of interpretability of most machine learning models stands as one of the drawbacks of their deployment, part
Styles: APA, Harvard, Vancouver, ISO, etc.
47

"Foundations of Human-Aware Planning -- A Tale of Three Models." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.51791.

Full text source
Abstract:
A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Palesi, Luciano Alessandro Ipsaro. "Human Centered Big Data Analytics for Smart Applications." Doctoral thesis, 2022. http://hdl.handle.net/2158/1282883.

Full text source
Abstract:
This thesis is concerned with smart applications: applications that incorporate data-driven, actionable insights in the user experience and that, in different contexts, allow users to complete actions or make decisions efficiently. The main differences between smart applications and traditional applications are that the former are dynamic, evolving on the basis of intuition, user feedback, or new data, and that they are data-driven and linked to the context of use. There are several aspects to be considered in the development of intelligent a
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Brito, João Pedro da Cruz. "Deep Adversarial Frameworks for Visually Explainable Periocular Recognition." Master's thesis, 2021. http://hdl.handle.net/10400.6/11850.

Full text source
Abstract:
Machine Learning (ML) models have pushed state-of-the-art performance closer to (and even beyond) human level. However, the core of such algorithms is usually latent and hardly understandable. Thus, the field of Explainability focuses on researching and adopting techniques that can explain the reasons that support a model's predictions. Such explanations of the decision-making process would help to build trust between said model and the human(s) using it. An explainable system also allows for better debugging during the training phase, and fixing upon deployment. But why should a developer d
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Ribeiro, Manuel António de Melo Chinopa de Sousa. "Neural and Symbolic AI - mind the gap! Aligning Artificial Neural Networks and Ontologies." Master's thesis, 2020. http://hdl.handle.net/10362/113651.

Full text source
Abstract:
Artificial neural networks have been the key to solving a variety of different problems. However, neural network models are still essentially regarded as black boxes, since they do not provide any human-interpretable evidence as to why they output a certain result. In this dissertation, we address this issue by leveraging ontologies and building small classifiers that map a neural network's internal representations to concepts from an ontology, enabling the generation of symbolic justifications for the output of neural networks. Using two image classification problems as testing ground,
Styles: APA, Harvard, Vancouver, ISO, etc.
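The mapping this entry describes — small classifiers that read a network's internal representations and predict ontology concepts — is essentially concept probing. A minimal sketch follows, with synthetic activations standing in for a real network's hidden layer; the dissertation's actual models and ontology are not reproduced, and every name here (the "has_wings" concept, the layer layout) is illustrative:

```python
import math
import random

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_probe(acts, labels, lr=0.5, epochs=200):
    """Logistic-regression probe: hidden activations -> concept label."""
    dim = len(acts[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for a, y in zip(acts, labels):
            p = sigmoid(sum(wi * ai for wi, ai in zip(w, a)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * ai for wi, ai in zip(w, a)]
            b -= lr * g
    return w, b

def probe_predict(w, b, a):
    return sigmoid(sum(wi * ai for wi, ai in zip(w, a)) + b)

# Synthetic setting: unit 0 of a (hypothetical) hidden layer fires when the
# concept "has_wings" is present; the other two units carry noise.
random.seed(0)
labels = [1, 0] * 20
acts = [[random.gauss(2.0 if y else -2.0, 0.5),
         random.gauss(0.0, 1.0),
         random.gauss(0.0, 1.0)] for y in labels]

w, b = train_probe(acts, labels)
acc = sum((probe_predict(w, b, a) > 0.5) == bool(y)
          for a, y in zip(acts, labels)) / len(labels)
assert acc > 0.9  # the probe recovers the concept from the representation
```

A high probe accuracy is what licenses the symbolic justification "this unit pattern encodes concept C"; a probe near chance level would mean the concept is not linearly readable from that layer.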