Journal articles on the topic "Trustable AI"

Follow this link to see other types of publications on this topic: Trustable AI.

Create a correct citation in APA, MLA, Chicago, Harvard, and many other styles


Check out the 22 best journal articles on the topic "Trustable AI".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Srivastava, B., and F. Rossi. "Rating AI systems for bias to promote trustable applications". IBM Journal of Research and Development 63, no. 4/5 (July 1, 2019): 5:1–5:9. http://dx.doi.org/10.1147/jrd.2019.2935966.

2

Calegari, Roberta, Giovanni Ciatto and Andrea Omicini. "On the integration of symbolic and sub-symbolic techniques for XAI: A survey". Intelligenza Artificiale 14, no. 1 (September 17, 2020): 7–32. http://dx.doi.org/10.3233/ia-190036.

Abstract:
The more intelligent systems based on sub-symbolic techniques pervade our everyday lives, the less humans can understand them. This is why symbolic approaches are getting more and more attention in the general effort to make AI interpretable, explainable, and trustable. Understanding the current state of the art of AI techniques integrating symbolic and sub-symbolic approaches is therefore of paramount importance, in particular from the XAI perspective. This is why this paper provides an overview of the main symbolic/sub-symbolic integration techniques, focussing in particular on those targeting explainable AI systems.
3

Bagnato, Alessandra, Antonio Cicchetti, Luca Berardinelli, Hugo Bruneliere and Romina Eramo. "AI-augmented Model-Based Capabilities in the AIDOaRt Project". ACM SIGAda Ada Letters 42, no. 2 (April 5, 2023): 99–103. http://dx.doi.org/10.1145/3591335.3591349.

Abstract:
The paper presents the AIDOaRt project, a three-year H2020-ECSEL European project involving 32 organizations, grouped in clusters from 7 different countries, focusing on AI-augmented automation supporting modeling, coding, testing, monitoring, and continuous development in Cyber-Physical Systems (CPS). To this end, the project proposes to combine Model Driven Engineering principles and techniques with AI-enhanced methods and tools for engineering more trustable and reliable CPSs. This paper introduces the AIDOaRt project, its overall objectives, and the requirements engineering methodology used. Based on that, it also describes the current plan regarding a set of tools intended to cover the model-based capability requirements of the project.
4

Wadnere, Prof Dhanashree G., Prof Gopal A. Wadnere, Prof Suvarana Somvanshi and Prof Pranali Bhusare. "Recent Progress on the Convergence of the Internet of Things and Artificial Intelligence". International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (December 31, 2023): 1286–89. http://dx.doi.org/10.22214/ijraset.2023.57576.

Abstract:
Artificial Intelligence of Things (AIoT) is the natural evolution of both Artificial Intelligence (AI) and the Internet of Things (IoT), as they are mutually beneficial. AI raises the value of the IoT through Machine Learning by transforming data into useful information, while the IoT increases the value of AI through connectivity and data exchange. Hence, InSecTT – Intelligent Secure Trustable Things, a pan-European effort with 52 key partners from 12 countries (EU and Turkey), provides intelligent, secure and trustworthy systems for industrial purposes. This results in globally cost-efficient solutions of intelligent, end-to-end secure, authentic connectivity and interoperability to bring the Internet of Things and Artificial Intelligence in sync. InSecTT aims at creating trust in AI-based intelligent systems and solutions as a major part of the AIoT. This paper provides an overview of the concept and ideas behind InSecTT and introduces the InSecTT Reference Architecture for the infrastructure organization of AIoT use cases.
5

Huang, Xuanxiang, Yacine Izza and Joao Marques-Silva. "Solving Explainability Queries with Quantification: The Case of Feature Relevancy". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 3996–4006. http://dx.doi.org/10.1609/aaai.v37i4.25514.

Abstract:
Trustable explanations of machine learning (ML) models are vital in high-risk uses of artificial intelligence (AI). Apart from the computation of trustable explanations, a number of explainability queries have been identified and studied in recent work. Some of these queries involve solving quantification problems, either in propositional or in more expressive logics. This paper investigates one of these quantification problems, namely the feature relevancy problem (FRP), i.e., deciding whether a (possibly sensitive) feature can occur in some explanation of a prediction. In contrast with earlier work, which studied FRP for specific classifiers, this paper proposes a novel algorithm for the FRP quantification problem which is applicable to any ML classifier that meets minor requirements. Furthermore, the paper shows that the novel algorithm is efficient in practice. The experimental results, obtained using random forests (RFs) induced from well-known publicly available datasets, demonstrate that the proposed solution outperforms existing state-of-the-art solvers for Quantified Boolean Formulas (QBF) by orders of magnitude. Finally, the paper also identifies a novel family of formulas that are challenging for current state-of-the-art QBF solvers.
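To make the feature relevancy problem concrete, here is a minimal brute-force sketch (not the authors' algorithm, which scales to real classifiers): it enumerates subset-minimal abductive explanations of a hypothetical three-feature boolean classifier and checks whether a given feature occurs in any of them.

```python
from itertools import combinations

# Toy binary classifier over three boolean features (hypothetical example).
def predict(x):
    # x = (x0, x1, x2); predicts 1 iff x0 and (x1 or x2)
    return int(x[0] and (x[1] or x[2]))

def is_sufficient(instance, subset):
    """A feature subset is sufficient if fixing those features to their
    values in `instance` forces the prediction, whatever the rest are."""
    target = predict(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    for bits in range(2 ** len(free)):
        candidate = list(instance)
        for k, i in enumerate(free):
            candidate[i] = (bits >> k) & 1
        if predict(tuple(candidate)) != target:
            return False
    return True

def minimal_explanations(instance):
    """Enumerate subset-minimal sufficient feature sets (abductive explanations)."""
    n = len(instance)
    minimal = []
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            if any(m <= s for m in minimal):
                continue  # a smaller explanation is already contained in s
            if is_sufficient(instance, s):
                minimal.append(s)
    return minimal

def is_relevant(instance, feature):
    """FRP: does `feature` occur in at least one minimal explanation?"""
    return any(feature in m for m in minimal_explanations(instance))

x = (1, 1, 0)
print(minimal_explanations(x))   # [{0, 1}] for this toy classifier
print(is_relevant(x, 2))         # False: feature 2 never appears in an explanation
```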
6

González-Alday, Raquel, Esteban García-Cuesta, Casimir A. Kulikowski and Victor Maojo. "A Scoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine". Applied Sciences 13, no. 19 (September 28, 2023): 10778. http://dx.doi.org/10.3390/app131910778.

Abstract:
Due to the success of artificial intelligence (AI) applications in the medical field over the past decade, concerns about the explainability of these systems have increased. The reliability requirements of black-box algorithms for making decisions affecting patients pose a challenge even beyond their accuracy. Recent advances in AI increasingly emphasize the necessity of integrating explainability into these systems. While most traditional AI methods and expert systems are inherently interpretable, the recent literature has focused primarily on explainability techniques for more complex models such as deep learning. This scoping review critically analyzes the existing literature regarding the explainability and interpretability of AI methods within the clinical domain. It offers a comprehensive overview of past and current research trends with the objective of identifying limitations that hinder the advancement of Explainable Artificial Intelligence (XAI) in the field of medicine. Such constraints encompass the diverse requirements of key stakeholders, including clinicians, patients, and developers, as well as cognitive barriers to knowledge acquisition, the absence of standardised evaluation criteria, the potential for mistaking explanations for causal relationships, and the apparent trade-off between model accuracy and interpretability. Furthermore, this review discusses possible research directions aimed at surmounting these challenges. These include alternative approaches to leveraging medical expertise to enhance interpretability within clinical settings, such as data fusion techniques and interdisciplinary assessments throughout the development process, emphasizing the relevance of taking into account the needs of final users to design trustable explainability methods.
7

Khaire, Prof Sneha A., Vedang Shahane, Prathamesh Borse, Ashish Jundhare and Arvind Tatu. "Doctor-Bot: AI Powered Conversational Chatbot for Delivering E-Health". International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 2461–64. http://dx.doi.org/10.22214/ijraset.2022.41856.

Abstract:
Nowadays, making time for even the smallest of things has become quite difficult, as everyone wants to save their time. Health suffers the most due to this. Because of the shortage of time, people have developed a habit of seeing a doctor and having a proper checkup only when it is extremely important and cannot be postponed. Sometimes people are also too nervous to visit their nearest medical clinic; especially in the times of COVID, when medical assistance was massively scarce, something that could provide information about a specific medical condition and recommend solutions and medication without the need to go out could be of great help. This project takes this issue into consideration and aims to provide an easy and accessible solution to help people deal with their specific medical condition with the help of an AI- and machine-learning-powered Health Care Bot. Having a service that can give you solutions without reaching out to a doctor for minor issues not only saves time but also gives the freedom and flexibility to use the service at any suitable time. The aim is to help people have their minor medical problems sorted from the comfort of their home. In case of something extremely serious that needs expert assistance, it shows nearby medical facilities that the patient can reach out to, with appropriate information. All the information is scraped from trustable sources and further refined by AI modules to obtain the best possible solution. Keywords: AI chatbot, Conversational bot, Digital health, Machine Learning, Natural Language Processing
8

Chua, Tat-Seng. "Towards Generative Search and Recommendation: A keynote at RecSys 2023". ACM SIGIR Forum 57, no. 2 (December 2023): 1–14. http://dx.doi.org/10.1145/3642979.3642986.

Abstract:
The emergence of large language models (LLMs), especially ChatGPT, has for the first time made AI known to almost everyone and affected every facet of our society. LLMs have the potential to revolutionize the ways we seek and consume information. This has spurred recent trends in both academia and industry to develop LLM-based generative AI systems for various applications with enhanced capabilities. One such system is the generative search and recommender system, which is capable of performing content retrieval, content repurposing, content creation and their integration to meet users' information needs. However, before such systems can be widely used and accepted, we need to address several challenges. The primary challenge is trust and safety in the generated content, as LLMs are prone to mistakes and hallucination, in part because the data used for their training is often erroneous and biased. The other challenges in the search and recommendation domain include: how to teach the system to be pro-active in anticipating the needs of users and in directing the conversation towards a fruitful direction, as well as the integration of retrieved and generated content. This keynote presented a generative information-seeking paradigm and discussed key research towards a trustable generative system for search and recommendation. Date: 21 September 2023.
9

Chhibber, Nalin, Joslin Goh and Edith Law. "Teachable Conversational Agents for Crowdwork: Effects on Performance and Trust". Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–21. http://dx.doi.org/10.1145/3555223.

Abstract:
Traditional crowdsourcing has mostly been viewed as requester-worker interaction where requesters publish tasks to solicit input from human crowdworkers. While most of this research area is catered towards the interest of requesters, we view this workflow as a teacher-learner interaction scenario where one or more human-teachers solve Human Intelligence Tasks to train machine learners. In this work, we explore how teachable machine learners can impact their human-teachers, and whether they form a trustable relation that can be relied upon for task delegation in the context of crowdsourcing. Specifically, we focus our work on teachable agents that learn to classify news articles while also guiding the teaching process through conversational interventions. In a two-part study, where several crowd workers individually teach the agent, we investigate whether this learning by teaching approach benefits human-machine collaboration, and whether it leads to trustworthy AI agents that crowd workers would delegate tasks to. Results demonstrate the benefits of the learning by teaching approach, in terms of perceived usefulness for crowdworkers, and the dynamics of trust built through the teacher-learner interaction.
10

Chavan, Shardul Sanjay, Sanket Tukaram Dhake, Shubham Virendra Jadhav and Prof Johnson Mathew. "Drowning Detection System using LRCN Approach". International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 2980–85. http://dx.doi.org/10.22214/ijraset.2022.41996.

Abstract:
This project provides insights into a real-time video surveillance system capable of automatically detecting drowning incidents in a swimming pool. Drowning is the third leading cause of unintentional death, which is why it is necessary to create trustable security mechanisms. Currently, most swimming pool security mechanisms rely on CCTV surveillance and lifeguards to help in drowning situations. But this is not enough for huge swimming pools such as those in amusement parks. Nowadays, some security systems use AI for drowning detection with cameras fixed underwater, or with floating boards that have a camera mounted on the underside to capture an underwater view. The main problems with these systems arise when the pool is crowded and the cameras' view is blocked by people. In this project, rather than using underwater cameras, we use cameras placed above the swimming pool to obtain a top view, so that the entire swimming pool is under surveillance at all times. Keywords: Computer vision, Convolutional neural network, Convlstm2D, LRCN, UCF50, OpenCV.
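For context on the LRCN approach named in the title, the sketch below shows the general Long-term Recurrent Convolutional Network pattern in Keras (a per-frame CNN wrapped in TimeDistributed, followed by an LSTM over the frame sequence); the frame size, layer widths, and two-class output are illustrative assumptions, not the authors' published model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative input: clips of 20 frames, 64x64 RGB (assumed, not the paper's setup).
SEQ_LEN, H, W, C = 20, 64, 64, 3

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, C)),
    # Apply the same small CNN to every frame in the clip.
    layers.TimeDistributed(layers.Conv2D(16, (3, 3), activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D((2, 2))),
    layers.TimeDistributed(layers.Conv2D(32, (3, 3), activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D((2, 2))),
    layers.TimeDistributed(layers.Flatten()),
    # Model temporal dynamics across the frame sequence.
    layers.LSTM(64),
    # Two illustrative classes: normal swimming vs. possible drowning.
    layers.Dense(2, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```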
11

Sámano-Robles, Ramiro, Tomas Nordström, Kristina Kunert, Salvador Santonja-Climent, Mikko Himanka, Markus Liuska, Michael Karner and Eduardo Tovar. "The DEWI High-Level Architecture: Wireless Sensor Networks in Industrial Applications". Technologies 9, no. 4 (December 9, 2021): 99. http://dx.doi.org/10.3390/technologies9040099.

Abstract:
This paper presents the High-Level Architecture (HLA) of the European research project DEWI (Dependable Embedded Wireless Infrastructure). The objective of this HLA is to serve as a reference framework for the development of industrial Wireless Sensor and Actuator Networks (WSANs) based on the concept of the DEWI Bubble. The DEWI Bubble constitutes a set of architecture design rules and recommendations that can be used to integrate legacy industrial sensor networks with a modern, interoperable and flexible IoT (Internet-of-Things) infrastructure. The DEWI Bubble can be regarded as a high-level abstraction of an industrial WSAN with enhanced interoperability (via standardized interfaces), dependability, technology reusability and cross-domain development. The DEWI Bubble aims to resolve the issue on how to integrate commercial WSAN technology to match the dependability, interoperability and high criticality needs of industrial domains. This paper details the criteria used to design the HLA and the organization of the infrastructure internal and external to the DEWI Bubble. The description includes the different perspectives, models, or views of the architecture: the entity model, the layered perspective of the entity model and the functional model. This includes an overview of software and hardware interfaces. The DEWI HLA constitutes an extension of the ISO/IEC 29182 SNRA (Sensor Network Reference Architecture) towards the support of wireless industrial applications in different domains: aeronautics, automotive, railway and building. To improve interoperability with existing approaches, the DEWI HLA also reuses some features from other standardized technologies and architectures. The DEWI HLA and the concept of Bubble allow networks with different industrial sensor technologies to exchange information between them or with external clients via standard interfaces, thus providing consolidated access to sensor information of different industrial domains. This is an important aspect for smart city applications, Big Data, Industry 4.0 and the Internet-of-Things (IoT). The paper includes a non-exhaustive review of the state of the art of the different interfaces, protocols and standards of this architecture. The HLA has also been proposed as the basis of the European projects SCOTT (Secure Connected Trustable Things) for enhanced security and privacy in the IoT and InSecTT (Intelligent Secure Trustable Things) for the convergence of artificial intelligence (AI) and the IoT.
12

Rovira-Más, Francisco, Verónica Saiz-Rubio and Andrés Cuenca-Cuenca. "Sensing Architecture for Terrestrial Crop Monitoring: Harvesting Data as an Asset". Sensors 21, no. 9 (April 30, 2021): 3114. http://dx.doi.org/10.3390/s21093114.

Abstract:
Very often, the root of problems found in producing food sustainably, as well as the origin of many environmental issues, derives from making decisions with unreliable or nonexistent data. Data-driven agriculture has emerged as a way to palliate the lack of meaningful information when taking critical steps in the field. However, many decisive parameters still require manual measurements and proximity to the target, which results in the typical undersampling that impedes statistical significance and the application of AI techniques that rely on massive data. To invert this trend, and simultaneously combine crop proximity with massive sampling, a sensing architecture for automating crop scouting from ground vehicles is proposed. At present, there are no clear guidelines on how monitoring vehicles must be configured for optimally tracking crop parameters at high resolution. This paper structures the architecture for such vehicles in four subsystems, examines the most common components for each subsystem, and delves into their interactions for an efficient delivery of high-density field data from initial acquisition to final recommendation. Its main advantages rest on the real-time generation of crop maps that blend the global positioning of canopy locations, some of their agronomical traits, and the precise monitoring of the ambient conditions surrounding such canopies. As a use case, the envisioned architecture was embodied in an autonomous robot to automatically sort two harvesting zones of a commercial vineyard to produce two wines of dissimilar characteristics. The information contained in the maps delivered by the robot may help growers systematically apply differential harvesting, evidencing the suitability of the proposed architecture for massive monitoring and subsequent data-driven actuation. While many crop parameters still cannot be measured non-invasively, the availability of novel sensors is continually growing; to benefit from them, an efficient and trustable sensing architecture becomes indispensable.
13

Jamshidi, Mohammad (Behdad), Sobhan Roshani, Fatemeh Daneshfar, Ali Lalbakhsh, Saeed Roshani, Fariborz Parandin, Zahra Malek, et al. "Hybrid Deep Learning Techniques for Predicting Complex Phenomena: A Review on COVID-19". AI 3, no. 2 (May 6, 2022): 416–33. http://dx.doi.org/10.3390/ai3020025.

Abstract:
Complex phenomena have some common characteristics, such as nonlinearity, complexity, and uncertainty. In these phenomena, components typically interact with each other, and one part of the system may affect other parts or vice versa. Accordingly, the human brain, the Earth's global climate, the spreading of viruses, economic organizations, and some engineering systems such as transportation systems and power grids can be categorized among these phenomena. Since both analytical approaches and AI methods have their own specific characteristics in solving complex problems, a combination of these techniques can lead to new hybrid methods with considerable performance. This is why several studies have recently been conducted to benefit from these combinations to predict the spreading of COVID-19 and its dynamic behavior. In this review, 80 peer-reviewed articles, book chapters, conference proceedings, and preprints with a focus on employing hybrid methods for forecasting the spreading of COVID-19, published in 2020, have been aggregated and reviewed. These documents have been extracted from Google Scholar, and many of them are indexed in the Web of Science. Since there were many publications on this topic, the most relevant and effective techniques, including statistical models and deep learning (DL) or machine learning (ML) approaches, have been surveyed in this research. The main aim of this research is to describe, summarize, and categorize these effective techniques, considering their restrictions, to serve as trustable references for scientists, researchers, and readers to make an intelligent choice of the best possible method for their academic needs. Nevertheless, considering the fact that many of these techniques have been used for the first time and need more evaluation, we recommend none of them as an ideal method for direct use in a project. Our study has shown that these methods can retain the robustness and reliability of statistical methods and the computational power of DL ones.
14

Jiang, Pei, Takashi Obi and Yoshikazu Nakajima. "Integrating prior knowledge to build transformer models". International Journal of Information Technology, January 2, 2024. http://dx.doi.org/10.1007/s41870-023-01635-7.

Abstract:
Large Artificial General Intelligence models are currently inspiring hot topics. The black-box problems of Artificial Intelligence (AI) models still exist and need to be solved urgently, especially in the medical area. Therefore, transparent and reliable AI models built on small data are also urgently needed. To build a trustable AI model with small data, we proposed a prior knowledge-integrated transformer model. We first acquired prior knowledge using Shapley Additive exPlanations (SHAP) from various pre-trained machine learning models. Then, we used the prior knowledge to construct the transformer models and compared our proposed models with the Feature Tokenization Transformer model and other classification models. We tested our proposed model on three open datasets and one non-open public dataset in Japan to confirm the feasibility of our proposed methodology. Our results confirmed that knowledge-integrated transformer models perform better (by 1%) than general transformer models. Meanwhile, our methodology showed that the self-attention over factors in our proposed transformer models is nearly the same, which needs to be explored in future work. Moreover, our research inspires future endeavors in exploring transparent small AI models.
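As background for the prior-knowledge step described above, the snippet below illustrates one common way to derive per-feature importance weights with SHAP from a pre-trained tree model; it is a generic sketch using the public shap and scikit-learn APIs, with an assumed dataset and model choice, and is not the authors' pipeline.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in "pre-trained" model (any tabular classifier would do).
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain the model with SHAP and aggregate per-feature importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Depending on the shap version, this is a list of per-class arrays or a 3-D array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
importance = np.abs(vals).mean(axis=0)  # mean |SHAP| per feature, positive class

# Normalize into a prior-knowledge vector that a downstream model
# (e.g., a feature-tokenization transformer) could consume as feature weights.
prior = importance / importance.sum()
for name, w in sorted(zip(data.feature_names, prior), key=lambda t: -t[1])[:5]:
    print(f"{name}: {w:.3f}")
```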
15

De Paoli, Federica, Silvia Berardelli, Ivan Limongelli, Ettore Rizzo and Susanna Zucca. "VarChat: the generative AI assistant for the interpretation of human genomic variations". Bioinformatics, April 5, 2024. http://dx.doi.org/10.1093/bioinformatics/btae183.

Abstract:
Motivation: In the modern era of genomic research, the scientific community is witnessing an explosive growth in the volume of published findings. While this abundance of data offers invaluable insights, it also places a pressing responsibility on genetic professionals and researchers to stay informed about the latest findings and their clinical significance. Genomic variant interpretation currently faces the challenge of identifying the most up-to-date and relevant scientific papers, while also extracting meaningful information to accelerate the process from clinical assessment to reporting. Computer-aided literature search and summarization can play a pivotal role in this context. By synthesizing complex genomic findings into concise, interpretable summaries, this approach facilitates the translation of extensive genomic datasets into clinically relevant insights. Results: To bridge this gap, we present VarChat (varchat.engenome.com), an innovative tool based on generative AI, developed to find and summarize the fragmented scientific literature associated with genomic variants into brief yet informative texts. VarChat provides users with a concise description of specific genetic variants, detailing their impact on related proteins and possible effects on human health. Additionally, VarChat offers direct links to related trustable scientific sources and encourages deeper research. Availability: varchat.engenome.com
16

Al-Tirawi, Anas, and Robert G. Reynolds. "Cultural Algorithms as a Framework for the Design of Trustable Evolutionary Algorithms". International Journal of Semantic Computing, April 8, 2022, 1–28. http://dx.doi.org/10.1142/s1793351x22400062.

Abstract:
The design of trustworthy algorithms will be one of the major challenges facing Artificial Intelligence for years to come. Cultural Algorithms (CAs) are viewed as one framework that can be employed to produce a trustable evolutionary algorithm. They contain features to support both sustainable and explainable computation that satisfy the requirements for trustworthy algorithms proposed by Cox [Nine experts on the single biggest obstacle facing AI and algorithms in the next five years, Emerging Tech Brew, January 22, 2021]. Here, two different configurations of CAs are described and compared in terms of their ability to support sustainable solutions over the complete range of dynamic environments, from static to linear to nonlinear and finally chaotic. The Wisdom of the Crowds (WM) method was selected for one configuration since it has been observed to work in both simple and complex environments and requires little long-term memory. The Common Value Auction (CVA) configuration was selected to represent mechanisms that are more data-centric and require more long-term memory content. Both approaches were found to provide sustainable performance across all the dynamic environments tested, from static to chaotic. Based upon the information collected in the Belief Space, they produced this behavior in different ways. First, the topologies that they employed differed in terms of the "in-degree" for different complexities. The CVA approach tended to favor a reduced in-degree/out-degree, while the WM exhibited a higher in-degree/out-degree in the best topology for a given environment. These differences reflect the fact that the CVA had more information available to the agents about the network in the Belief Space, whereas the agents in the WM had access to less knowledge and therefore needed to spread the knowledge they had more widely throughout the population.
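For readers new to the framework, here is a generic, textbook-style Cultural Algorithm skeleton: a population space and a belief space linked by acceptance and influence functions. The objective function, parameter values, and the simple normative-range belief space are illustrative assumptions; this is not the paper's CVA or Wisdom-of-the-Crowds configuration.

```python
import random

# Generic Cultural Algorithm skeleton: a population evolves candidate solutions
# while a belief space accumulates knowledge from the best individuals and
# biases the next generation.

def fitness(x):
    # Hypothetical objective: maximize a simple unimodal function on [-10, 10].
    return -(x - 3.0) ** 2

def cultural_algorithm(pop_size=30, generations=50, accept_ratio=0.2):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    belief = {"lo": -10.0, "hi": 10.0}  # normative knowledge: promising range

    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)

        # Acceptance function: the top individuals update the belief space.
        elites = ranked[: max(1, int(accept_ratio * pop_size))]
        belief["lo"], belief["hi"] = min(elites), max(elites)

        # Influence function: new individuals are sampled within the
        # normative range stored in the belief space, with small mutation.
        population = [
            random.uniform(belief["lo"], belief["hi"]) + random.gauss(0, 0.1)
            for _ in range(pop_size)
        ]

    return max(population, key=fitness)

print(round(cultural_algorithm(), 2))  # should approach 3.0 for this toy objective
```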
17

Kondylakis, Haridimos, Varvara Kalokyri, Stelios Sfakianakis, Kostas Marias, Manolis Tsiknakis, Ana Jimenez-Pastor, Eduardo Camacho-Ramos, et al. "Data infrastructures for AI in medical imaging: a report on the experiences of five EU projects". European Radiology Experimental 7, no. 1 (May 8, 2023). http://dx.doi.org/10.1186/s41747-023-00336-x.

Abstract:
Artificial intelligence (AI) is transforming the field of medical imaging and has the potential to bring medicine from the era of 'sick-care' to the era of healthcare and prevention. The development of AI requires access to large, complete, and harmonized real-world datasets, representative of the population and disease diversity. However, to date, efforts are fragmented, based on single-institution, size-limited, and annotation-limited datasets. Available public datasets (e.g., The Cancer Imaging Archive, TCIA, USA) are limited in scope, making model generalizability very difficult. In this direction, five European Union projects are currently working on the development of big data infrastructures that will enable European, ethically and General Data Protection Regulation-compliant, quality-controlled, cancer-related, medical imaging platforms, in which both large-scale data and AI algorithms will coexist. The vision is to create sustainable AI cloud-based platforms for the development, implementation, verification, and validation of trustable, usable, and reliable AI models for addressing specific unmet needs regarding cancer care provision. In this paper, we present an overview of the development efforts, highlighting the challenges and the approaches selected, providing valuable feedback to future attempts in the area.
Key points:
• Artificial intelligence models for health imaging require access to large amounts of harmonized imaging data and metadata.
• The main infrastructures adopted either collect centrally anonymized data or enable access to pseudonymized distributed data.
• Developing a common data model for storing all relevant information is a challenge.
• Trust of data providers in data sharing initiatives is essential.
• An online European Union meta-tool repository is a necessity to minimize effort duplication across the various projects in the area.
18

Momota, Mst Moriom R., and Bashir I. Morshed. "ML algorithms to estimate data reliability metric of ECG from inter-patient data for trustable AI-based cardiac monitors". Smart Health, October 2022, 100350. http://dx.doi.org/10.1016/j.smhl.2022.100350.

19

Cabitza, Federico, Andrea Campagner and Luca Maria Sconfienza. "As if sand were stone. New concepts and metrics to probe the ground on which to build trustable AI". BMC Medical Informatics and Decision Making 20, no. 1 (September 11, 2020). http://dx.doi.org/10.1186/s12911-020-01224-9.

Abstract:
Background: We focus on the importance of interpreting the quality of the labeling used as the input of predictive models to understand the reliability of their output in support of human decision-making, especially in critical domains, such as medicine. Methods: Accordingly, we propose a framework distinguishing the reference labeling (or Gold Standard) from the set of annotations from which it is usually derived (the Diamond Standard). We define a set of quality dimensions and related metrics: representativeness (are the available data representative of their reference population?); reliability (do the raters agree with each other in their ratings?); and accuracy (are the raters' annotations a true representation?). The metrics for these dimensions are, respectively, the degree of correspondence, Ψ, the degree of weighted concordance, ϱ, and the degree of fineness, Φ. We apply and evaluate these metrics in a diagnostic user study involving 13 radiologists. Results: We evaluate Ψ against hypothesis-testing techniques, highlighting that our metrics can better evaluate distribution similarity in high-dimensional spaces. We discuss how Ψ could be used to assess the reliability of new predictions or for train-test selection. We report the value of ϱ for our case study and compare it with traditional reliability metrics, highlighting both their theoretical properties and the reasons that they differ. Then, we report the degree of fineness as an estimate of the accuracy of the collected annotations and discuss the relationship between this latter degree and the degree of weighted concordance, which we find to be moderately but significantly correlated. Finally, we discuss the implications of the proposed dimensions and metrics with respect to the context of Explainable Artificial Intelligence (XAI). Conclusion: We propose different dimensions and related metrics to assess the quality of the datasets used to build predictive models and Medical Artificial Intelligence (MAI). We argue that the proposed metrics are feasible for application in real-world settings for the continuous development of trustable and interpretable MAI systems.
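To ground the reliability dimension ("do the raters agree with each other?"), here is a deliberately simple stand-in: average pairwise percent agreement over a hypothetical annotation matrix. It only illustrates the kind of quantity a reliability metric summarizes; it is not the paper's weighted concordance ϱ.

```python
import numpy as np
from itertools import combinations

# Rows: cases; columns: raters; values: assigned labels (hypothetical data).
annotations = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 1],
    [0, 0, 0],
])

def pairwise_agreement(labels):
    """Average, over all rater pairs, of the fraction of cases they label identically."""
    n_raters = labels.shape[1]
    scores = [
        np.mean(labels[:, a] == labels[:, b])
        for a, b in combinations(range(n_raters), 2)
    ]
    return float(np.mean(scores))

print(pairwise_agreement(annotations))  # about 0.67 for the toy data above
```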
20

Shae, Zon-Yin, and Jeffrey J. P. Tsai. "A Clinical Kidney Intelligence Platform Based on Big Data, Artificial Intelligence, and Blockchain Technology". International Journal on Artificial Intelligence Tools 31, no. 03 (January 2022). http://dx.doi.org/10.1142/s021821302241007x.

Abstract:
The high prevalence and incidence of end-stage renal disease (ESRD) and the difficulty of predicting acute kidney injury (AKI) events early highlight the limits of the existing kidney care model, particularly its fragmented care and fractured data. The era of medical big data and artificial intelligence (AI) opens an opportunity to fill these knowledge and practice gaps. To obtain a multi-dimensional view of the profiles of patients receiving dialysis, we propose to provide coherent care services and to actively collect patients' multi-faceted information from home and hospital (e.g., photos of diets, sleep duration, or dermatologic manifestations). Furthermore, we introduce blockchain into the infrastructure to enable trustable medical data exchange, effectively creating a large distributed medical data lake across the participating hospitals. We introduce the medical coin, a virtual token, to vitalize digital services within the blockchain and create common interests among data generators, data vendors, and data users. We aim to create business models on top of its therapeutic effectiveness and unlock the academic and commercial value of the medical data ecosystem.
21

Duggal, Gaurav, Tejas Gaikwad and Bhupendra Sinha. "Dependable modulation classifier explainer with measurable explainability". Frontiers in Big Data 5 (January 9, 2023). http://dx.doi.org/10.3389/fdata.2022.1081872.

Abstract:
The Internet of Things (IoT) plays a significant role in building smart cities worldwide. Smart cities use IoT devices to collect and analyze data to provide better services and solutions. These IoT devices are heavily dependent on the network for communication. These new-age networks use artificial intelligence (AI) that plays a crucial role in reducing network roll-out and operation costs, improving entire system performance, enhancing customer services, and generating possibilities to embed a wide range of telecom services and applications. For IoT devices, it is essential to have a robust and trustable network for reliable communication among devices and service points. The signals sent between the devices or service points use modulation to send a password over a bandpass frequency range. Our study focuses on modulation classification performed using deep learning method(s), adaptive modulation classification (AMC), which has now become an integral part of a communication system. We propose a dependable modulation classifier explainer (DMCE) that focuses on the explainability of modulation classification. Our study demonstrates how we can visualize and understand a particular prediction made by seeing highlighted data points crucial for modulation class prediction. We also demonstrate a numeric explainability measurable metric (EMM) to interpret the prediction. In the end, we present a comparative analysis with existing state-of-the-art methods.
22

Ahmad, Khubab, Muhammad Shahbaz Khan, Fawad Ahmed, Maha Driss, Wadii Boulila, Abdulwahab Alazeb, Mohammad Alsulami, Mohammed S. Alshehri, Yazeed Yasin Ghadi and Jawad Ahmad. "FireXnet: an explainable AI-based tailored deep learning model for wildfire detection on resource-constrained devices". Fire Ecology 19, no. 1 (September 20, 2023). http://dx.doi.org/10.1186/s42408-023-00216-0.

Abstract:
Background: Forests cover nearly one-third of the Earth's land and are some of our most biodiverse ecosystems. Due to climate change, these essential habitats are endangered by increasing wildfires. Wildfires are not just a risk to the environment, but they also pose public health risks. Given these issues, there is an indispensable need for efficient and early detection methods. Conventional detection approaches fall short due to spatial limitations and manual feature engineering, which calls for the exploration and development of data-driven deep learning solutions. In this regard, this paper proposes 'FireXnet', a tailored deep learning model designed for improved efficiency and accuracy in wildfire detection. FireXnet has a lightweight architecture that exhibits high accuracy with significantly less training and testing time. It contains considerably reduced trainable and non-trainable parameters, which makes it suitable for resource-constrained devices. To make the FireXnet model visually explainable and trustable, a powerful explainable artificial intelligence (AI) tool, SHAP (SHapley Additive exPlanations), has been incorporated. It interprets FireXnet's decisions by computing the contribution of each feature to the prediction. Furthermore, the performance of FireXnet is compared against five pre-trained models (VGG16, InceptionResNetV2, InceptionV3, DenseNet201, and MobileNetV2) to benchmark its efficiency. For a fair comparison, transfer learning and fine-tuning have been applied to retrain these models on our dataset. Results: The test accuracy of the proposed FireXnet model is 98.42%, which is greater than that of all other models used for comparison. Furthermore, the reliability parameters confirm the model's reliability: a confidence interval of [0.97, 1.00] validates the certainty of the proposed model's estimates, and a Cohen's kappa coefficient of 0.98 shows that the decisions of FireXnet are in considerable accordance with the given data. Conclusion: The integration of the robust feature extraction of FireXnet with the transparency of explainable AI using SHAP enhances the model's interpretability and allows for the identification of key characteristics triggering wildfire detections. Extensive experimentation reveals that, in addition to being accurate, FireXnet has reduced computational complexity due to considerably fewer trainable and non-trainable parameters, and significantly shorter training and testing times.
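As a point of reference for the transfer-learning baselines listed above (not the FireXnet architecture itself, which the abstract does not specify), the sketch below shows the standard Keras pattern for fine-tuning a pre-trained MobileNetV2 on a binary fire/no-fire dataset; the directory layout, image size, and hyperparameters are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)

# Hypothetical dataset layout: data/train/{fire,nofire}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

# Load MobileNetV2 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone for the transfer-learning stage

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),     # fire vs. no-fire
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Fine-tuning stage: unfreeze the backbone and continue at a lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```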
