Ready bibliography on the topic "Trustable AI"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Browse lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Trustable AI".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever the relevant metadata are available.

Journal articles on the topic "Trustable AI"

1

Srivastava, B., and F. Rossi. "Rating AI systems for bias to promote trustable applications." IBM Journal of Research and Development 63, no. 4/5 (July 1, 2019): 5:1–5:9. http://dx.doi.org/10.1147/jrd.2019.2935966.
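
As a deliberately simplified illustration of what "rating a system for bias" can involve, the sketch below computes one common group-fairness statistic in Python. It is not taken from the paper; the column names and toy data are hypothetical.

```python
# Illustrative sketch only: one simple group-fairness metric of the kind a
# bias-rating pipeline might report. Column names and data are hypothetical.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  pred_col: str) -> float:
    """Absolute gap in positive-prediction rates between the extreme groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return abs(rates.max() - rates.min())

if __name__ == "__main__":
    data = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "prediction": [1,   0,   1,   0,   0,   1],
    })
    # Group A receives positive predictions 2/3 of the time, group B 1/3.
    print(demographic_parity_difference(data, "group", "prediction"))  # ~0.333
```
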
2

Calegari, Roberta, Giovanni Ciatto, and Andrea Omicini. "On the integration of symbolic and sub-symbolic techniques for XAI: A survey." Intelligenza Artificiale 14, no. 1 (September 17, 2020): 7–32. http://dx.doi.org/10.3233/ia-190036.

Abstract:
The more intelligent systems based on sub-symbolic techniques pervade our everyday lives, the less humans can understand them. This is why symbolic approaches are receiving more and more attention in the general effort to make AI interpretable, explainable, and trustable. Understanding the current state of the art of AI techniques that integrate symbolic and sub-symbolic approaches is therefore of paramount importance, particularly from the XAI perspective. Accordingly, this paper provides an overview of the main symbolic/sub-symbolic integration techniques, focusing in particular on those targeting explainable AI systems.
3

Bagnato, Alessandra, Antonio Cicchetti, Luca Berardinelli, Hugo Bruneliere, and Romina Eramo. "AI-augmented Model-Based Capabilities in the AIDOaRt Project." ACM SIGAda Ada Letters 42, no. 2 (April 5, 2023): 99–103. http://dx.doi.org/10.1145/3591335.3591349.

Abstract:
The paper presents the AIDOaRt project, a three-year H2020-ECSEL European project involving 32 organizations, grouped in clusters, from 7 different countries, which focuses on AI-augmented automation supporting modeling, coding, testing, monitoring, and continuous development in Cyber-Physical Systems (CPS). To this end, the project proposes to combine Model-Driven Engineering principles and techniques with AI-enhanced methods and tools for engineering more trustable and reliable CPSs. This paper introduces the AIDOaRt project, its overall objectives, and the requirements-engineering methodology used. On that basis, it also describes the current plan for a set of tools intended to cover the project's model-based capability requirements.
4

Wadnere, Dhanashree G., Gopal A. Wadnere, Suvarana Somvanshi, and Pranali Bhusare. "Recent Progress on the Convergence of the Internet of Things and Artificial Intelligence." International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (December 31, 2023): 1286–89. http://dx.doi.org/10.22214/ijraset.2023.57576.

Abstract:
The Artificial Intelligence of Things (AIoT) is the natural outgrowth of both Artificial Intelligence (AI) and the Internet of Things (IoT), as the two are mutually beneficial. AI raises the value of the IoT through Machine Learning, by transforming data into useful information, while the IoT increases the value of AI through connectivity and data exchange. InSecTT (Intelligent Secure Trustable Things), a pan-European effort with 52 key partners from 12 countries (EU and Turkey), delivers intelligent, secure, and trustworthy systems for industrial purposes. This results in globally cost-efficient solutions for intelligent, end-to-end secure, authentic connectivity and interoperability that bring the Internet of Things and Artificial Intelligence together. InSecTT aims to create trust in AI-based intelligent systems and solutions as a major part of the AIoT. This paper provides an overview of the concept and ideas behind InSecTT and introduces the InSecTT Reference Architecture for organizing the infrastructure of AIoT use cases.
5

Huang, Xuanxiang, Yacine Izza, and Joao Marques-Silva. "Solving Explainability Queries with Quantification: The Case of Feature Relevancy." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 3996–4006. http://dx.doi.org/10.1609/aaai.v37i4.25514.

Abstract:
Trustable explanations of machine learning (ML) models are vital in high-risk uses of artificial intelligence (AI). Apart from the computation of trustable explanations, a number of explainability queries have been identified and studied in recent work. Some of these queries involve solving quantification problems, either in propositional or in more expressive logics. This paper investigates one such quantification problem, namely the feature relevancy problem (FRP), i.e., deciding whether a (possibly sensitive) feature can occur in some explanation of a prediction. In contrast with earlier work, which studied FRP for specific classifiers, this paper proposes a novel algorithm for the FRP quantification problem that is applicable to any ML classifier meeting minor requirements. Furthermore, the paper shows that the novel algorithm is efficient in practice. The experimental results, obtained using random forests (RFs) induced from well-known publicly available datasets, demonstrate that the proposed solution outperforms existing state-of-the-art solvers for Quantified Boolean Formulas (QBF) by orders of magnitude. Finally, the paper also identifies a novel family of formulas that are challenging for current state-of-the-art QBF solvers.
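
To make the feature relevancy problem concrete, here is a brute-force Python sketch on a toy Boolean classifier: a feature is relevant iff it appears in some subset-minimal set of fixed features that suffices to guarantee the prediction. The classifier, the instance, and the exhaustive enumeration are hypothetical illustrations; the paper's actual contribution is a quantification-based algorithm that avoids such enumeration.

```python
# Brute-force illustration of feature relevancy (FRP) on a toy Boolean classifier.
# Everything here is a hypothetical example, not the paper's algorithm.
from itertools import combinations, product

def classifier(x):
    # Toy model over three Boolean features: predict 1 iff (x0 AND x1) OR x2.
    return int((x[0] and x[1]) or x[2])

def is_sufficient(instance, subset):
    """Does fixing the features in `subset` to their instance values force the
    prediction, no matter how the remaining features are set?"""
    target = classifier(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if classifier(x) != target:
            return False
    return True

def feature_is_relevant(instance, feature):
    """A feature is relevant iff it occurs in some subset-minimal sufficient
    subset of features, i.e., in some abductive explanation of the prediction."""
    n = len(instance)
    for size in range(n + 1):
        for subset in map(set, combinations(range(n), size)):
            if is_sufficient(instance, subset) and \
               not any(is_sufficient(instance, subset - {i}) for i in subset):
                if feature in subset:
                    return True
    return False

instance = (1, 1, 0)  # predicted 1 because x0 and x1 both hold
print([feature_is_relevant(instance, f) for f in range(3)])  # [True, True, False]
```
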
6

González-Alday, Raquel, Esteban García-Cuesta, Casimir A. Kulikowski, and Victor Maojo. "A Scoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine." Applied Sciences 13, no. 19 (September 28, 2023): 10778. http://dx.doi.org/10.3390/app131910778.

Abstract:
Due to the success of artificial intelligence (AI) applications in the medical field over the past decade, concerns about the explainability of these systems have increased. The reliability requirements of black-box algorithms for making decisions affecting patients pose a challenge even beyond their accuracy. Recent advances in AI increasingly emphasize the necessity of integrating explainability into these systems. While most traditional AI methods and expert systems are inherently interpretable, the recent literature has focused primarily on explainability techniques for more complex models such as deep learning. This scoping review critically analyzes the existing literature on the explainability and interpretability of AI methods within the clinical domain. It offers a comprehensive overview of past and current research trends with the objective of identifying limitations that hinder the advancement of Explainable Artificial Intelligence (XAI) in the field of medicine. Such constraints encompass the diverse requirements of key stakeholders, including clinicians, patients, and developers; cognitive barriers to knowledge acquisition; the absence of standardised evaluation criteria; the risk of mistaking explanations for causal relationships; and the apparent trade-off between model accuracy and interpretability. Furthermore, this review discusses possible research directions aimed at surmounting these challenges, including alternative approaches to leveraging medical expertise to enhance interpretability within clinical settings, such as data fusion techniques and interdisciplinary assessments throughout the development process, emphasizing the importance of taking the needs of end users into account when designing trustable explainability methods.
7

Khaire, Sneha A., Vedang Shahane, Prathamesh Borse, Ashish Jundhare, and Arvind Tatu. "Doctor-Bot: AI Powered Conversational Chatbot for Delivering E-Health." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 2461–64. http://dx.doi.org/10.22214/ijraset.2022.41856.

Abstract:
Nowadays, making time for even the smallest things has become difficult, as everyone wants to save time, and health suffers the most as a result. Because of this shortage of time, people have developed the habit of seeing a doctor and having a proper check-up only when it is absolutely necessary and cannot be postponed. Sometimes people are also too nervous to visit their nearest medical clinic; especially during COVID, when medical assistance was scarce, a service that could provide information about a specific medical condition and recommend solutions and medication without the need to go out would have been of great value. This project takes up that issue and aims to provide an easy and accessible way to help people deal with their specific medical conditions through an AI- and machine-learning-powered health-care bot. A service that can provide solutions without requiring a visit to a doctor for minor issues not only saves time but also gives users the freedom and flexibility to choose any suitable moment to use it. The aim is to help people have their minor medical problems resolved from the comfort of their homes; in the case of something serious that needs expert assistance, the bot shows nearby medical facilities that the patient can reach, together with the appropriate information. All information is scraped from trustable sources, and the results are further refined with AI modules to obtain the best possible answer. Keywords: AI chatbot, Conversational bot, Digital health, Machine Learning, Natural Language Processing
8

Chua, Tat-Seng. "Towards Generative Search and Recommendation: A keynote at RecSys 2023." ACM SIGIR Forum 57, no. 2 (December 2023): 1–14. http://dx.doi.org/10.1145/3642979.3642986.

Abstract:
The emergence of large language models (LLMs), especially ChatGPT, has for the first time made AI known to almost everyone and has affected every facet of our society. LLMs have the potential to revolutionize the ways we seek and consume information. This has spurred recent efforts in both academia and industry to develop LLM-based generative AI systems for various applications with enhanced capabilities. One such system is the generative search and recommender system, which is capable of performing content retrieval, content repurposing, and content creation, and of integrating them to meet users' information needs. However, before such systems can be widely used and accepted, several challenges need to be addressed. The primary challenge is trust and safety in the generated content, as LLMs are expected to make mistakes through hallucination, since the data used for their training is often erroneous and biased. Other challenges in the search and recommendation domain include how to teach the system to be proactive in anticipating users' needs and in steering the conversation in a fruitful direction, as well as how to integrate retrieved and generated content. The keynote presented a generative information-seeking paradigm and discussed key research towards a trustable generative system for search and recommendation. Date: 21 September 2023.
9

Chhibber, Nalin, Joslin Goh, and Edith Law. "Teachable Conversational Agents for Crowdwork: Effects on Performance and Trust." Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–21. http://dx.doi.org/10.1145/3555223.

Abstract:
Traditional crowdsourcing has mostly been viewed as a requester-worker interaction in which requesters publish tasks to solicit input from human crowdworkers. While most of this research area caters to the interests of requesters, we view this workflow as a teacher-learner interaction scenario in which one or more human teachers solve Human Intelligence Tasks to train machine learners. In this work, we explore how teachable machine learners can affect their human teachers, and whether they form a trustable relationship that can be relied upon for task delegation in the context of crowdsourcing. Specifically, we focus on teachable agents that learn to classify news articles while also guiding the teaching process through conversational interventions. In a two-part study in which several crowd workers individually teach the agent, we investigate whether this learning-by-teaching approach benefits human-machine collaboration, and whether it leads to trustworthy AI agents that crowd workers would delegate tasks to. The results demonstrate the benefits of the learning-by-teaching approach, in terms of perceived usefulness for crowdworkers, and the dynamics of trust built through the teacher-learner interaction.
10

Chavan, Shardul Sanjay, Sanket Tukaram Dhake, Shubham Virendra Jadhav, and Johnson Mathew. "Drowning Detection System using LRCN Approach." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 2980–85. http://dx.doi.org/10.22214/ijraset.2022.41996.

Abstract:
This project presents a real-time video surveillance system capable of automatically detecting drowning incidents in a swimming pool. Drowning is the third leading cause of unintentional death, which is why trustable safety mechanisms are needed. Currently, most swimming-pool safety measures consist of CCTV surveillance and lifeguards who respond to drowning situations, but this is not enough for very large pools such as those in amusement parks. Some security systems already use AI for drowning detection, with cameras placed underwater at fixed locations or with floating boards that have a camera mounted on the underside to capture the underwater view. The main problem with these systems arises when the pool is crowded and the cameras' view is blocked by people. In this project, rather than using underwater cameras, we use cameras mounted above the swimming pool to obtain a top view, so that the entire pool is under surveillance at all times. Keywords: Computer vision, Convolutional neural network, ConvLSTM2D, LRCN, UCF50, OpenCV.
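
For readers unfamiliar with the LRCN architecture named in the keywords, a minimal Keras sketch (assuming TensorFlow 2.x) follows; the frame size, clip length, layer sizes, and class count are placeholder assumptions, not values from the paper.

```python
# Minimal, illustrative LRCN (Long-term Recurrent Convolutional Network) sketch:
# a small CNN applied per frame via TimeDistributed, followed by an LSTM over
# the resulting frame features. All sizes below are hypothetical placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, HEIGHT, WIDTH, CHANNELS = 20, 64, 64, 3  # frames per clip, frame size
NUM_CLASSES = 2                                    # e.g. drowning vs. normal swimming

def build_lrcn() -> tf.keras.Model:
    model = models.Sequential([
        # CNN applied to every frame independently.
        layers.TimeDistributed(layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
                               input_shape=(SEQ_LEN, HEIGHT, WIDTH, CHANNELS)),
        layers.TimeDistributed(layers.MaxPooling2D((4, 4))),
        layers.TimeDistributed(layers.Conv2D(32, (3, 3), activation="relu", padding="same")),
        layers.TimeDistributed(layers.MaxPooling2D((4, 4))),
        layers.TimeDistributed(layers.Flatten()),
        # LSTM aggregates the per-frame features over time.
        layers.LSTM(32),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_lrcn()
model.summary()
```
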

Doctoral dissertations on the topic "Trustable AI"

1

Bresson, Roman. "Neural learning and validation of hierarchical multi-criteria decision aiding models with interacting criteria." Electronic thesis or dissertation, Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG008.

Abstract:
Multicriteria Decision Aiding (MCDA) is a field that aims at assisting expert decision makers (DMs) in problems such as selecting, ranking, or classifying alternatives defined on several interacting attributes. Such models do not make the decision but assist the DM, who takes the final decision. It is thus crucial for the model to offer ways for the DM to maintain operational awareness, in particular in safety-critical contexts where errors can have dire consequences. It is therefore a prerequisite for MCDA models to be intelligible and interpretable, and to have a behaviour that is highly constrained by information stemming from domain knowledge. Such models are usually built hand in hand with a field expert, obtaining information through a Q&A procedure and eliciting the model through methods rooted in operations research. On the other hand, Machine Learning (ML), and more precisely Preference Learning (PL), bases its approach on learning the optimal model from fitting data. This field usually focuses on model performance, tuning complex black boxes to obtain a statistically low error on new example cases. While this is adapted to many settings, it is out of the question for decision-aiding settings, as neither constrainedness nor intelligibility is available. This thesis bridges both fields. We focus on a certain class of MCDA models called utilitaristic hierarchical Choquet integrals (UHCIs). Our first contribution, which is theoretical, is to show the identifiability (uniqueness of the parameterization) of UHCIs. This result motivates our second contribution: the Neur-HCI framework, an architecture of neural-network modules which can learn the parameters of a UHCI. In particular, all Neur-HCI models are guaranteed to be formally valid, fitting the constraints that befit such models (monotonicity, normalization), and remain interpretable. We show empirically that Neur-HCI models perform well on both artificial and real datasets and that they exhibit remarkable stability, making them a relevant tool for alleviating the model elicitation effort when data is readily available, and a suitable analysis tool for identifying patterns in the data.
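
Since the thesis centres on hierarchical Choquet integral models, a small self-contained Python sketch of the plain (non-hierarchical, non-neural) discrete Choquet integral may help fix ideas; the two-criteria capacity below is an invented toy, not one elicited in the thesis.

```python
# Illustrative sketch (not the Neur-HCI implementation): the discrete Choquet
# integral that underlies UHCI-style aggregation, on a toy two-criteria capacity.
def choquet(x, capacity):
    """x: dict criterion -> score in [0, 1]; capacity: dict frozenset -> weight,
    with capacity[frozenset()] = 0, capacity[all criteria] = 1, monotone in set
    inclusion. Returns the Choquet integral of x with respect to the capacity."""
    crits = sorted(x, key=x.get)           # criteria ordered by increasing score
    total, prev = 0.0, 0.0
    for k, c in enumerate(crits):
        coalition = frozenset(crits[k:])   # criteria scoring at least x[c]
        total += (x[c] - prev) * capacity[coalition]
        prev = x[c]
    return total

# Toy capacity over {"price", "quality"} expressing a positive interaction
# (the pair is worth more than the sum of its singletons).
mu = {
    frozenset(): 0.0,
    frozenset({"price"}): 0.3,
    frozenset({"quality"}): 0.4,
    frozenset({"price", "quality"}): 1.0,
}
print(choquet({"price": 0.6, "quality": 0.9}, mu))  # 0.6*1.0 + 0.3*0.4 = 0.72
```
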

Books on the topic "Trustable AI"

1

Séroussi, Brigitte, Patrick Weber, Ferdinand Dhombres, Cyril Grouin, Jan-David Liebe, Sylvia Pelayo, Andrea Pinna, et al., eds. Challenges of Trustable AI and Added-Value on Health. IOS Press, 2022. http://dx.doi.org/10.3233/shti294.
2

Séroussi, B., F. Dhombres, and P. Weber. Challenges of Trustable AI and Added-Value on Health: Proceedings of MIE 2022. IOS Press, Incorporated, 2022.
3

Séroussi, B., F. Dhombres, and P. Weber. Challenges of Trustable AI and Added-Value on Health: Proceedings of MIE 2022. IOS Press, Incorporated, 2022.

Book chapters on the topic "Trustable AI"

1

Bousquet, Cedric, and Diva Beltramin. "Machine Learning in Medicine: To Explain, or Not to Explain, That Is the Question." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220407.

Abstract:
In 2022, the Medical Informatics Europe conference created a special topic called "Challenges of trustable AI and added-value on health", centered on the theme of eXplainable Artificial Intelligence. Unfortunately, two opposing views remain for biomedical applications of machine learning: accepting the use of reliable but opaque models, versus requiring models to be explainable. In this contribution we discuss these two opposing approaches and illustrate the differences between them with examples.
2

Wong, Lori, Feliciano Yu, Sudeepa Bhattacharyya, and Melody L. Greer. "Covid-19 Positivity Differences Among Patients of a Rural, Southern US State Hospital System Based on Population Density, Rural-Urban Classification, and Area Deprivation Index." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220560.

Abstract:
In this study we examined the correlation of COVID-19 positivity with the area deprivation index (ADI), social determinants of health (SDOH) factors based on consumer and electronic medical record (EMR) data, and population density in a patient population from a tertiary healthcare system in Arkansas. COVID-19 positivity was significantly associated with population density, age, race, and household size. Understanding health disparities and SDOH data can add value to health and to the creation of trustable AI.
3

N., Ambika. "An Augmented Edge Architecture for AI-IoT Services Deployment in the Modern Era." In Advances in Information Security, Privacy, and Ethics, 286–302. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-5250-9.ch015.

Abstract:
The previous proposal gains prognostic and regulatory examination. It uses boundary-based AI procedures to accomplish its task. It analyzes the received transmissions using a set of amenities, verifies the data packets, and detects inconsistencies in them. It also encompasses choosing the appropriate procedure to evaluate the data stored in the cloud. Kubernetes pods handle Docker images dynamically. The dominant point has a trustable and stable credential supply. The system aims to manage the information of various groups. The leading device has a control component that supervises the well-being of the other instruments. The replica set maintains the anticipated replica count. The endpoints component spots and watches modifications to the approaches in the service. The proposal is reported to increase reliability by 4.37%, availability by 2.74%, and speed by 3.28%.
4

Mehrjerd, Ameneh, Hassan Rezaei, Saeid Eslami, and Nayyere Khadem Ghaebi. "Determination of Cut Off for Endometrial Thickness in Couples with Unexplained Infertility: Trustable AI." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220450.

Abstract:
Endometrial thickness is one of the essential factors in the success of pregnancy in assisted reproductive techniques. Despite extensive studies on endometrial thickness prediction, further research is still needed. We aimed to analyze the impact of endometrial thickness on the ongoing pregnancy rate in couples with unexplained infertility. A total of 729 couples with unexplained infertility were included in this study. A random forest model (RFM) and a logistic regression model (LRM) were used to predict pregnancy. The performance of the RFM and LRM was evaluated using classification criteria, the ROC curve, and the odds ratio for ongoing pregnancy by categorized EMT. The results showed that the RFM outperformed the LRM for both IVF/ICSI and IUI treatments, obtaining the highest accuracy. We obtained a cut-off point of 7.7 mm for IUI and 9.99 mm for IVF/ICSI treatment. The results show that machine learning is a valuable tool for predicting ongoing pregnancy and is trustable across multicenter data for the two treatments. In addition, endometrial thickness did not differ statistically significantly with respect to CPR and FHR in either treatment.
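
The following is an illustrative Python sketch (synthetic data, hypothetical variable names, not the chapter's dataset or code) of the general workflow the abstract describes: comparing a random forest with logistic regression and reading a cut-off from the ROC curve.

```python
# Illustrative sketch only: RF vs. LR comparison and a ROC-based cut-off on
# synthetic data. Feature names and effect sizes are invented assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 729
emt = rng.normal(9.0, 2.0, n)          # synthetic endometrial thickness (mm)
age = rng.normal(32.0, 5.0, n)         # synthetic maternal age (years)
logit = 0.4 * (emt - 9.0) - 0.1 * (age - 32.0)
pregnant = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([emt, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, pregnant, test_size=0.3, random_state=0)

for name, model in [("RFM", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("LRM", LogisticRegression())]:
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    print(name, "AUC:", round(roc_auc_score(y_te, scores), 3))

# One simple cut-off choice: the EMT value whose ROC point maximizes Youden's J.
fpr, tpr, thresholds = roc_curve(pregnant, emt)
print("EMT cut-off (Youden's J):", round(thresholds[np.argmax(tpr - fpr)], 2), "mm")
```
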
5

Gautam, Abhishek Kumar, and Nitin Nitin. "Use of Smart Contracts and Distributed Ledger for Automation." In Research Anthology on Cross-Disciplinary Designs and Applications of Automation, 645–77. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3694-3.ch033.

Abstract:
Blockchain as a service has evolved significantly since its start as the underlying technology of the Bitcoin cryptocurrency, introduced in 2008. Realization of the immense opportunities this technology holds encouraged the development of several other Blockchain solutions, such as Ethereum, which focus more on unique capabilities that go well beyond digital currency. In this chapter, the authors provide insights into the unmatched capability of Blockchain to withstand cyber-attacks, which can give a much-needed push to the scalable operation of autonomous vehicles by providing a safer and more trustable ecosystem through smart contracts. The chapter also discusses the integration of the Ethereum Blockchain with the Confidential Consortium Framework (CCF) to overcome the shortcomings of Blockchain in terms of speed and volume. Towards the end, the authors discuss some modern technologies, such as IoT and AI, that can benefit from Blockchain.
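
The chapter's argument rests on the tamper-evidence of a distributed ledger. As a loose illustration of that single property (and nothing more: no consensus, no smart contracts, no Ethereum or CCF specifics), here is a toy hash-chained log in Python; all record fields are invented.

```python
# Minimal, illustrative hash-chained ledger showing tamper-evidence only.
# Not the chapter's implementation; omits consensus, contracts, and networking.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def chain_is_valid(chain: list) -> bool:
    """Each block must reference the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, {"event": "vehicle_registered", "id": "AV-001"})
append_block(ledger, {"event": "trip_logged", "km": 12.4})
print(chain_is_valid(ledger))          # True
ledger[0]["data"]["id"] = "AV-999"     # tamper with an earlier record...
print(chain_is_valid(ledger))          # ...and validation fails: False
```
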
6

Gautam, Abhishek Kumar, and Nitin Nitin. "Use of Smart Contracts and Distributed Ledger for Automation." In Advances in Data Mining and Database Management, 245–77. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3295-9.ch014.

Abstract:
Blockchain as a service has evolved significantly since its start as the underlying technology of the Bitcoin cryptocurrency, introduced in 2008. Realization of the immense opportunities this technology holds encouraged the development of several other Blockchain solutions, such as Ethereum, which focus more on unique capabilities that go well beyond digital currency. In this chapter, the authors provide insights into the unmatched capability of Blockchain to withstand cyber-attacks, which can give a much-needed push to the scalable operation of autonomous vehicles by providing a safer and more trustable ecosystem through smart contracts. The chapter also discusses the integration of the Ethereum Blockchain with the Confidential Consortium Framework (CCF) to overcome the shortcomings of Blockchain in terms of speed and volume. Towards the end, the authors discuss some modern technologies, such as IoT and AI, that can benefit from Blockchain.

Conference papers on the topic "Trustable AI"

1

Ignatiev, Alexey. "Towards Trustable Explainable AI." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/726.

Abstract:
Explainable artificial intelligence (XAI) arguably represents one of the most crucial challenges currently facing the field of AI. Although the majority of approaches to XAI are heuristic in nature, recent work has proposed the use of abductive reasoning for computing provably correct explanations of machine learning (ML) predictions. This rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It has also been applied to uncover a close relationship between XAI and the verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
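
As a complement to the feature-relevancy sketch given under the Huang et al. entry above, the following toy Python sketch illustrates the deletion-based computation of a single abductive explanation, the kind of provably correct explanation this overview refers to; the Boolean classifier and exhaustive entailment check are illustrative assumptions, whereas logic-based implementations rely on SAT/SMT oracles instead.

```python
# Toy deletion-based computation of one abductive explanation (AXp): start from
# all features and drop any feature whose removal still leaves the prediction
# entailed. Hypothetical example, not the approach's actual implementation.
from itertools import product

def classifier(x):
    return int(x[0] and (x[1] or x[2]))  # toy Boolean model

def entails_prediction(instance, fixed):
    """Does fixing the features in `fixed` to their instance values force the
    classifier's prediction on every completion of the remaining features?"""
    target = classifier(instance)
    free = [i for i in range(len(instance)) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if classifier(x) != target:
            return False
    return True

def abductive_explanation(instance):
    """Returns a subset-minimal set of features whose values alone entail the
    prediction (deletion-based linear scan)."""
    explanation = set(range(len(instance)))
    for f in range(len(instance)):
        if entails_prediction(instance, explanation - {f}):
            explanation.remove(f)
    return explanation

print(abductive_explanation((1, 1, 0)))  # {0, 1}: x0=1 and x1=1 entail prediction 1
```
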
2

Al-Tirawi, Anas, and Robert G. Reynolds. "How to Design a Trustable Cultural Algorithm Using Common Value Auctions." In 2021 Third International Conference on Transdisciplinary AI (TransAI). IEEE, 2021. http://dx.doi.org/10.1109/transai51903.2021.00022.
3

Bycroft, Benjamen P., Nicholas A. Oune, Daniel Thomlinson, Alonzo Lopez, Pamela S. Wood, Max Spolaor, Michael J. Durst, and Scott A. Turner. "Capabilities Toward Trustable AI/ML Pose Estimation for Satellite-to-Satellite Imagery." In 2024 IEEE Aerospace Conference. IEEE, 2024. http://dx.doi.org/10.1109/aero58975.2024.10521110.
4

Tasneem, Sumaiya, and Kazi Aminul Islam. "Development of Trustable Deep Learning Model in Remote Sensing through Explainable-AI Method Selection." In 2023 IEEE 14th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). IEEE, 2023. http://dx.doi.org/10.1109/uemcon59035.2023.10316012.
