Ready-made bibliography on the topic "Trustable AI"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Trustable AI".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read the work's abstract online, if the relevant parameters are available in the metadata.
Journal articles on the topic "Trustable AI"
Srivastava, B., and F. Rossi. "Rating AI systems for bias to promote trustable applications". IBM Journal of Research and Development 63, no. 4/5 (July 1, 2019): 5:1–5:9. http://dx.doi.org/10.1147/jrd.2019.2935966.
Calegari, Roberta, Giovanni Ciatto, and Andrea Omicini. "On the integration of symbolic and sub-symbolic techniques for XAI: A survey". Intelligenza Artificiale 14, no. 1 (September 17, 2020): 7–32. http://dx.doi.org/10.3233/ia-190036.
Bagnato, Alessandra, Antonio Cicchetti, Luca Berardinelli, Hugo Bruneliere, and Romina Eramo. "AI-augmented Model-Based Capabilities in the AIDOaRt Project". ACM SIGAda Ada Letters 42, no. 2 (April 5, 2023): 99–103. http://dx.doi.org/10.1145/3591335.3591349.
Wadnere, Prof. Dhanashree G., Prof. Gopal A. Wadnere, Prof. Suvarana Somvanshi, and Prof. Pranali Bhusare. "Recent Progress on the Convergence of the Internet of Things and Artificial Intelligence". International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (December 31, 2023): 1286–89. http://dx.doi.org/10.22214/ijraset.2023.57576.
Huang, Xuanxiang, Yacine Izza, and Joao Marques-Silva. "Solving Explainability Queries with Quantification: The Case of Feature Relevancy". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 3996–4006. http://dx.doi.org/10.1609/aaai.v37i4.25514.
González-Alday, Raquel, Esteban García-Cuesta, Casimir A. Kulikowski, and Victor Maojo. "A Scoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine". Applied Sciences 13, no. 19 (September 28, 2023): 10778. http://dx.doi.org/10.3390/app131910778.
Khaire, Prof. Sneha A., Vedang Shahane, Prathamesh Borse, Ashish Jundhare, and Arvind Tatu. "Doctor-Bot: AI Powered Conversational Chatbot for Delivering E-Health". International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 2461–64. http://dx.doi.org/10.22214/ijraset.2022.41856.
Chua, Tat-Seng. "Towards Generative Search and Recommendation: A keynote at RecSys 2023". ACM SIGIR Forum 57, no. 2 (December 2023): 1–14. http://dx.doi.org/10.1145/3642979.3642986.
Chhibber, Nalin, Joslin Goh, and Edith Law. "Teachable Conversational Agents for Crowdwork: Effects on Performance and Trust". Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–21. http://dx.doi.org/10.1145/3555223.
Chavan, Shardul Sanjay, Sanket Tukaram Dhake, Shubham Virendra Jadhav, and Prof. Johnson Mathew. "Drowning Detection System using LRCN Approach". International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 2980–85. http://dx.doi.org/10.22214/ijraset.2022.41996.
Doctoral dissertations on the topic "Trustable AI"
Bresson, Roman. "Neural learning and validation of hierarchical multi-criteria decision aiding models with interacting criteria". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG008.
Multicriteria Decision Aiding (MCDA) is a field that aims at assisting expert decision makers (DM) in problems such as selecting, ranking, or classifying alternatives defined on several interacting attributes. Such models do not make the decision, but assist the DM, who takes the final decision. It is thus crucial for the model to offer ways for the DM to maintain operational awareness, in particular in safety-critical contexts where errors can have dire consequences. It is thus a prerequisite of MCDA models to be intelligible, interpretable, and to have a behaviour that is highly constrained by information stemming from in-domain knowledge. Such models are usually built hand in hand with a field expert, obtaining information through a Q&A procedure, and eliciting the model through methods rooted in operations research. On the other hand, Machine Learning (ML), and more precisely Preference Learning (PL), bases its approach on learning the optimal model from fitting data. This field usually focuses on model performance, tuning complex black-boxes to obtain a statistically low error on new example cases. While this is adapted to many settings, it is out of the question for decision aiding settings, as neither constrainedness nor intelligibility are available. This thesis bridges both fields. We focus on a certain class of MCDA models, called utilitaristic hierarchical Choquet integrals (UHCI). Our first contribution, which is theoretical, is to show the identifiability (or unicity of the parameterization) of UHCIs. This result motivates our second contribution: the Neur-HCI framework, an architecture of neural network modules which can learn the parameters of a UHCI. In particular, all Neur-HCI models are guaranteed to be formally valid, fitting the constraints that befit such a model, and remain interpretable.
We show empirically that Neur-HCI models perform well on both artificial and real datasets, and that they exhibit remarkable stability. This makes Neur-HCI a relevant tool for alleviating the model elicitation effort when data is readily available, as well as a suitable analysis tool for identifying patterns in the data.
Books on the topic "Trustable AI"
Séroussi, Brigitte, Patrick Weber, Ferdinand Dhombres, Cyril Grouin, Jan-David Liebe, Sylvia Pelayo, Andrea Pinna, et al., eds. Challenges of Trustable AI and Added-Value on Health. IOS Press, 2022. http://dx.doi.org/10.3233/shti294.
Séroussi, B., F. Dhombres, and P. Weber. Challenges of Trustable AI and Added-Value on Health: Proceedings of MIE 2022. IOS Press, Incorporated, 2022.
Book chapters on the topic "Trustable AI"
Bousquet, Cedric, and Diva Beltramin. "Machine Learning in Medicine: To Explain, or Not to Explain, That Is the Question". In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220407.
Wong, Lori, Feliciano Yu, Sudeepa Bhattacharyya, and Melody L. Greer. "Covid-19 Positivity Differences Among Patients of a Rural, Southern US State Hospital System Based on Population Density, Rural-Urban Classification, and Area Deprivation Index". In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220560.
N., Ambika. "An Augmented Edge Architecture for AI-IoT Services Deployment in the Modern Era". In Advances in Information Security, Privacy, and Ethics, 286–302. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-5250-9.ch015.
Mehrjerd, Ameneh, Hassan Rezaei, Saeid Eslami, and Nayyere Khadem Ghaebi. "Determination of Cut Off for Endometrial Thickness in Couples with Unexplained Infertility: Trustable AI". In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220450.
Gautam, Abhishek Kumar, and Nitin Nitin. "Use of Smart Contracts and Distributed Ledger for Automation". In Research Anthology on Cross-Disciplinary Designs and Applications of Automation, 645–77. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3694-3.ch033.
Gautam, Abhishek Kumar, and Nitin Nitin. "Use of Smart Contracts and Distributed Ledger for Automation". In Advances in Data Mining and Database Management, 245–77. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3295-9.ch014.
Conference abstracts on the topic "Trustable AI"
Ignatiev, Alexey. "Towards Trustable Explainable AI". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/726.
Al-Tirawi, Anas, and Robert G. Reynolds. "How to Design a Trustable Cultural Algorithm Using Common Value Auctions". In 2021 Third International Conference on Transdisciplinary AI (TransAI). IEEE, 2021. http://dx.doi.org/10.1109/transai51903.2021.00022.
Bycroft, Benjamen P., Nicholas A. Oune, Daniel Thomlinson, Alonzo Lopez, Pamela S. Wood, Max Spolaor, Michael J. Durst, and Scott A. Turner. "Capabilities Toward Trustable AI/ML Pose Estimation for Satellite-to-Satellite Imagery". In 2024 IEEE Aerospace Conference. IEEE, 2024. http://dx.doi.org/10.1109/aero58975.2024.10521110.
Tasneem, Sumaiya, and Kazi Aminul Islam. "Development of Trustable Deep Learning Model in Remote Sensing through Explainable-AI Method Selection". In 2023 IEEE 14th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). IEEE, 2023. http://dx.doi.org/10.1109/uemcon59035.2023.10316012.