Journal articles on the topic 'Explicabilité des algorithmes'

To see the other types of publications on this topic, follow the link: Explicabilité des algorithmes.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 15 journal articles for your research on the topic 'Explicabilité des algorithmes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Robbins, Scott. "A Misdirected Principle with a Catch: Explicability for AI." Minds and Machines 29, no. 4 (October 15, 2019): 495–514. http://dx.doi.org/10.1007/s11023-019-09509-3.

Abstract:
Abstract There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.
2

Krasnov, Fedor Vladimirovich, and Irina Sergeevna Smaznevich. "The explicability factor of the algorithm in the problems of searching for the similarity of text documents." Вычислительные технологии (Computational Technologies), no. 5(25) (October 28, 2020): 107–23. http://dx.doi.org/10.25743/ict.2020.25.5.009.

Abstract:
As increasingly sophisticated methods of automatic text analysis are developed, it becomes ever more important to explain to the user why an applied intelligent information system identifies certain texts as similar in meaning. The paper considers the constraints that this problem statement imposes on the intelligent algorithms used. The authors' experiment showed that the absolute value of document similarity is not universal with respect to the intelligent algorithm, so the optimal similarity threshold must be set separately for each problem being solved. The results can be used to assess the applicability of various methods for establishing semantic similarity between documents in applied information systems, as well as to choose optimal model parameters subject to the explicability requirements of the solution. The problem of providing a comprehensive explanation to any user of why the applied intelligent information system suggests meaning similarity in certain texts imposes significant requirements on the intelligent algorithms. The article covers the entire set of technologies involved in the solution of the text clustering problem, and several conclusions are stated. Matrix decomposition aimed at reducing the dimension of the vector representation of a corpus does not provide a clear explanation of the algorithmic principles to a user. Ranking using the TF-IDF function and its modifications finds only a few documents that are similar in meaning; however, this method is the easiest for users to comprehend, since algorithms of this type detect specific matching words in the compared texts. Topic modeling methods (LSI, LDA, ARTM) assign large similarity values to texts despite few matching words, while a person can easily tell that the general subject of the texts is the same; yet explaining how topic modeling works requires additional effort to interpret the detected topics. This interpretation gets easier as the model quality grows, and the quality can be optimized by its average coherence. The experiment demonstrated that the absolute value of document similarity is not invariant across intelligent algorithms, so the optimal threshold value of similarity must be set separately for each problem to be solved. The results of the work can further be used to assess which of the various methods developed to detect meaning similarity in texts can be effectively implemented in applied information systems and to determine the optimal model parameters based on the solution explicability requirements.
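As a rough illustration of the most user-explicable route the abstract describes, the sketch below scores document pairs with TF-IDF cosine similarity and reports the matching terms behind each score. The toy corpus, the threshold value, and the reporting format are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: TF-IDF cosine similarity with a per-task threshold and a
# word-overlap "explanation" for each pair of documents scored as similar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the standard defines procedures for data exchange over the network",
    "network data exchange procedures are defined by this standard",
    "safety rules for freight transport on railways",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)
terms = vectorizer.get_feature_names_out()
similarity = cosine_similarity(tfidf)

# The paper stresses that this threshold is not universal: it has to be tuned
# separately for each collection and each intelligent algorithm.
THRESHOLD = 0.3

for i in range(len(corpus)):
    for j in range(i + 1, len(corpus)):
        score = similarity[i, j]
        if score >= THRESHOLD:
            # The explanation shown to the user: terms weighted in both documents.
            shared = [t for k, t in enumerate(terms)
                      if tfidf[i, k] > 0 and tfidf[j, k] > 0]
            print(f"docs {i} and {j}: similarity {score:.2f}, shared terms {shared}")
```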
3

van Bruxvoort, Xadya, and Maurice van Keulen. "Framework for Assessing Ethical Aspects of Algorithms and Their Encompassing Socio-Technical System." Applied Sciences 11, no. 23 (November 25, 2021): 11187. http://dx.doi.org/10.3390/app112311187.

Abstract:
In the transition to a data-driven society, organizations have introduced data-driven algorithms that often apply artificial intelligence. In this research, an ethical framework was developed to ensure robustness and completeness and to avoid and mitigate potential public uproar. We take a socio-technical perspective, i.e., view the algorithm embedded in an organization with infrastructure, rules, and procedures as one to-be-designed system. The framework consists of five ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. It can be used during the design for identification of relevant concerns. The framework has been validated by applying it to real-world fraud detection cases: Systeem Risico Indicatie (SyRI) of the Dutch government and the algorithm of the municipality of Amersfoort. The former is a controversial country-wide algorithm that was ultimately prohibited by court. The latter is an algorithm in development. In both cases, it proved effective in identifying all ethical risks. For SyRI, all concerns found in the media were also identified by the framework, mainly focused on transparency of the entire socio-technical system. For the municipality of Amersfoort, the framework highlighted risks regarding the amount of sensitive data and communication to and with the public, presenting a more thorough overview compared to the risks the media raised.
4

Kalyanpur, Aditya, Tom Breloff, and David A. Ferrucci. "Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10867–74. http://dx.doi.org/10.1609/aaai.v36i10.21333.

Abstract:
Traditional symbolic reasoning engines, while attractive for their precision and explicability, have a few major drawbacks: the use of brittle inference procedures that rely on exact matching (unification) of logical terms, an inability to deal with uncertainty, and the need for a precompiled rule-base of knowledge (the “knowledge acquisition” problem). To address these issues, we devise a novel logical reasoner called Braid, that supports probabilistic rules, and uses the notion of custom unification functions and dynamic rule generation to overcome the brittle matching and knowledge-gap problem prevalent in traditional reasoners. In this paper, we describe the reasoning algorithms used in Braid, and their implementation in a distributed task-based framework that builds proof/explanation graphs for an input query. We use a simple QA example from a children’s story to motivate Braid’s design and explain how the various components work together to produce a coherent logical explanation. Finally, we evaluate Braid on the ROC Story Cloze test and achieve close to state-of-the-art results while providing frame-based explanations.
5

Niemi, Hannele. "AI in learning." Journal of Pacific Rim Psychology 15 (January 2021): 183449092110381. http://dx.doi.org/10.1177/18344909211038105.

Abstract:
This special issue raises two thematic questions: (1) how will AI change learning in the future, and what role will human beings play in the interaction with machine learning, and (2) what can we learn from the articles in this special issue for future research? These questions are reflected in the frame of the recent discussion of human and machine learning. AI for learning provides many applications and multimodal channels for supporting people in cognitive and non-cognitive task domains. The articles in this special issue evidence that agency, engagement, self-efficacy, and collaboration are needed in learning and working with intelligent tools and environments. The importance of social elements is also clear in the articles. The articles also point out that the teacher’s role in digital pedagogy primarily involves facilitating and coaching. AI in learning has a high potential, but it also has many limitations. Many worries are linked with ethical issues, such as biases in algorithms, privacy, transparency, and data ownership. This special issue also highlights the concepts of explainability and explicability in the context of human learning. We need much more research and research-based discussion to make AI more trustworthy for users in learning environments and to prevent misconceptions.
6

Antonio, Nuno, Ana de Almeida, and Luis Nunes. "Big Data in Hotel Revenue Management: Exploring Cancellation Drivers to Gain Insights Into Booking Cancellation Behavior." Cornell Hospitality Quarterly 60, no. 4 (May 29, 2019): 298–319. http://dx.doi.org/10.1177/1938965519851466.

Abstract:
In the hospitality industry, demand forecast accuracy is highly impacted by booking cancellations, which makes demand-management decisions difficult and risky. In attempting to minimize losses, hotels tend to implement restrictive cancellation policies and employ overbooking tactics, which, in turn, reduce the number of bookings and reduce revenue. To tackle the uncertainty arising from booking cancellations, we combined the data from eight hotels’ property management systems with data from several sources (weather, holidays, events, social reputation, and online prices/inventory) and machine learning interpretable algorithms to develop booking cancellation prediction models for the hotels. In a real production environment, improvement of the forecast accuracy due to the use of these models could enable hoteliers to decrease the number of cancellations, thus, increasing confidence in demand-management decisions. Moreover, this work shows that improvement of the demand forecast would allow hoteliers to better understand their net demand, that is, current demand minus predicted cancellations. Simultaneously, by focusing not only on forecast accuracy but also on its explicability, this work illustrates one other advantage of the application of these types of techniques in forecasting: the interpretation of the predictions of the model. By exposing cancellation drivers, models help hoteliers to better understand booking cancellation patterns and enable the adjustment of a hotel’s cancellation policies and overbooking tactics according to the characteristics of its bookings.
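As a loose illustration of the pattern this abstract describes, the sketch below trains a classifier on synthetic booking records, reads off the cancellation drivers via feature importances, and derives net demand as bookings minus predicted cancellations. The feature names, data, and model choice are assumptions for illustration, not the authors' models.

```python
# Hypothetical sketch: predict booking cancellations, expose the drivers, and
# estimate net demand. All data and feature names below are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
bookings = pd.DataFrame({
    "lead_time_days": rng.integers(0, 365, n),
    "is_refundable": rng.integers(0, 2, n),
    "online_price_change": rng.normal(0, 1, n),
    "social_reputation": rng.uniform(3, 5, n),
})
# Toy generative rule: long lead times on refundable rates cancel more often.
logit = 0.01 * bookings["lead_time_days"] + bookings["is_refundable"] - 2.5
cancelled = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(bookings, cancelled, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# "Cancellation drivers": rank features by their contribution to the model.
drivers = pd.Series(model.feature_importances_, index=bookings.columns)
print(drivers.sort_values(ascending=False))

# Net demand on the test set: bookings minus predicted cancellations.
print("net demand:", len(X_test) - int(model.predict(X_test).sum()))
```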
7

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.

Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains used by attackers to perform illegal activities that target governments, violate the privacy of organizations, and even manipulate everyday users. Therefore, detecting these harmful domains is necessary to combat growing network attacks. Machine Learning (ML) models have shown significant results in the detection of malicious domains. However, the “black box” nature of complex ML models obstructs their wide-ranging acceptance in some fields. The emergence of Explainable Artificial Intelligence (XAI) has brought interpretability and explicability to these complex models. Furthermore, post hoc XAI methods enable interpretability without affecting the performance of the models. This study proposes an XAI model to detect malicious domains on a recent dataset containing 45,000 samples of malicious and non-malicious domains. Initially, several interpretable ML models, such as Decision Tree (DT) and Naïve Bayes (NB), and black-box ensemble models, such as Random Forest (RF), Extreme Gradient Boosting (XGB), AdaBoost (AB), and CatBoost (CB), were implemented, and XGB was found to outperform the other classifiers. Furthermore, the post hoc XAI global surrogate model (Shapley additive explanations) and the local surrogate LIME were used to generate explanations of the XGB predictions. Two sets of experiments were performed: first with the preprocessed dataset and then with features selected using the Sequential Forward Feature Selection algorithm. The results demonstrate that the ML algorithms were able to distinguish benign and malicious domains with overall accuracy ranging from 0.8479 to 0.9856. The ensemble classifier XGB achieved the highest result, with an AUC and accuracy of 0.9991 and 0.9856, respectively, before feature selection, and an AUC of 0.999 and accuracy of 0.9818 after feature selection. The proposed model outperformed the benchmark study.
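To make the post hoc pattern described above concrete, the sketch below trains a boosted-tree classifier on synthetic stand-ins for lexical domain features and explains it afterwards with SHAP values. The features, data, and parameters are assumptions for illustration, not the study's pipeline.

```python
# Hypothetical sketch: an XGBoost "black box" classifier explained post hoc
# with SHAP. The three features are toy stand-ins for lexical domain features
# (e.g., domain length, digit ratio, character entropy).
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 3))
y = (0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.random(1000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=100, eval_metric="logloss")
model.fit(X_train, y_train)

# Per-row SHAP values explain individual predictions; their mean absolute
# value per feature gives a global ranking of what the model relies on.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
print("test accuracy:", model.score(X_test, y_test))
```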
8

Hübner, Ursula H., Nicole Egbert, and Georg Schulte. "Clinical Information Systems – Seen through the Ethics Lens." Yearbook of Medical Informatics 29, no. 01 (August 2020): 104–14. http://dx.doi.org/10.1055/s-0040-1701996.

Abstract:
Objective: The more people there are who use clinical information systems (CIS) beyond their traditional intramural confines, the more promising the benefits are, and the more daunting the risks will be. This review thus explores the areas of ethical debates prompted by CIS conceptualized as smart systems reaching out to patients and citizens. Furthermore, it investigates the ethical competencies and education needed to use these systems appropriately. Methods: A literature review covering ethics topics in combination with clinical and health information systems, clinical decision support, health information exchange, and various mobile devices and media was performed searching the MEDLINE database for articles from 2016 to 2019 with a focus on 2018 and 2019. A second search combined these keywords with education. Results: By far, most of the discourses were dominated by privacy, confidentiality, and informed consent issues. Intertwined with confidentiality and clear boundaries, the provider-patient relationship has gained much attention. The opacity of algorithms and the lack of explicability of the results pose a further challenge. The necessity of sociotechnical ethics education was underpinned in many studies including advocating education for providers and patients alike. However, only a few publications expanded on ethical competencies. In the publications found, empirical research designs were employed to capture the stakeholders’ attitudes, but not to evaluate specific implementations. Conclusion: Despite the broad discourses, ethical values have not yet found their firm place in empirically rigorous health technology evaluation studies. Similarly, sociotechnical ethics competencies obviously need detailed specifications. These two gaps set the stage for further research at the junction of clinical information systems and ethics.
9

Krasnov, Fedor, Irina Smaznevich, and Elena Baskakova. "Optimization approach to the choice of explicable methods for detecting anomalies in homogeneous text collections." Informatics and Automation 20, no. 4 (August 3, 2021): 869–904. http://dx.doi.org/10.15622/ia.20.4.5.

Abstract:
The problem of detecting anomalous documents in text collections is considered. Existing anomaly detection methods are not universal and do not show stable results on different data sets. The accuracy of the results depends on the choice of parameters at each step of the algorithm, and different sets of parameters are optimal for different collections. Not all existing anomaly detection algorithms work effectively with text data, whose vector representation is characterized by high dimensionality and strong sparsity. The problem of finding anomalies is considered in the following formulation: a new document uploaded to an applied intelligent information system must be checked for congruence with the homogeneous collection of documents stored in it. In such systems, which process legal documents, the following requirements are imposed on anomaly detection methods: high accuracy, computational efficiency, reproducibility of results, and explicability of the solution. Methods satisfying these conditions are investigated. The paper examines the possibility of scoring text documents on an anomaly scale by deliberately introducing a foreign document into the collection. A strategy for detecting the novelty of a document in relation to the collection is proposed, which assumes a reasoned selection of methods and parameters. It is shown how the accuracy of the solution is affected by the choice of vectorization options, tokenization principles, dimensionality reduction methods, and the parameters of novelty detection algorithms. The experiment was conducted on two homogeneous collections of documents containing technical norms: standards in the fields of information technology and railways. The following approaches were used: calculation of the anomaly index as the Hellinger distance between the distributions of the distances of documents to the center of the collection and to the foreign document; and optimization of the novelty detection algorithms depending on the methods of vectorization and dimensionality reduction. The vector space was constructed using the TF-IDF transformation and ARTM topic modeling. The following algorithms were tested: Isolation Forest, Local Outlier Factor, and One-Class SVM (based on the Support Vector Machine). The experiment confirmed the effectiveness of the proposed optimization strategy for determining the appropriate anomaly detection method for a given text collection. When searching for an anomaly in the context of topic clustering of legal documents, the Isolation Forest method proved to be effective. When vectorizing documents using TF-IDF, it is advisable to choose optimal dictionary parameters and use the One-Class SVM method with an appropriate feature space transformation function.
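A minimal sketch of the pipeline shape the abstract evaluates, under illustrative assumptions (a toy corpus, TruncatedSVD standing in for the paper's dimensionality-reduction and topic-modeling steps, default detector parameters): vectorize a homogeneous collection, reduce the dimensionality, fit Isolation Forest on the collection, and score a newly uploaded document.

```python
# Hypothetical sketch: score a new document against a homogeneous collection
# using TF-IDF vectorization, dimensionality reduction, and Isolation Forest.
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer

collection = [
    "standard for data interchange formats in information technology",
    "standard for software documentation of information systems",
    "requirements for information security management systems",
    "guidelines for testing information technology products",
]
new_document = "recipe for apple pie with cinnamon and sugar"  # deliberately foreign

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(collection + [new_document])

# Reduce the sparse, high-dimensional TF-IDF space before anomaly detection.
X_reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

detector = IsolationForest(random_state=0).fit(X_reduced[:-1])  # fit on the collection only
score = detector.decision_function(X_reduced[-1:])              # score the new document
print("anomaly score (lower means more anomalous):", float(score[0]))
```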
10

Coppi, Giulio, Rebeca Moreno Jimenez, and Sofia Kyriazi. "Explicability of humanitarian AI: a matter of principles." Journal of International Humanitarian Action 6, no. 1 (October 6, 2021). http://dx.doi.org/10.1186/s41018-021-00096-6.

Abstract:
Abstract In the debate on how to improve efficiencies in the humanitarian sector and better meet people’s needs, the argument for the use of artificial intelligence (AI) and automated decision-making (ADMs) systems has gained significant traction and ignited controversy for its ethical and human rights-related implications. Setting aside the implications of introducing unmanned and automated systems in warfare, we focus instead on the impact of the adoption of AI-based ADMs in humanitarian response. In order to maintain the status and protection conferred by the humanitarian mandate, aid organizations are called to abide by a broad set of rules condensed in the humanitarian principles and notably the principles of humanity, neutrality, impartiality, and independence. But how do these principles operate when decision-making is automated? This article opens with an overview of AI and ADMs in the humanitarian sector, with special attention to the concept of algorithmic opacity. It then explores the transformative potential of these systems on the complex power dynamics between humanitarians, principled assistance, and affected communities during acute crises. Our research confirms that the existing flaws in accountability and epistemic processes can also be found in the mathematical and statistical formulas and in the algorithms used for automation, artificial intelligence, predictive analytics, and other efficiency-gaining-related processes. In doing so, our analysis highlights the potential harm to people resulting from algorithmic opacity, either through removal or obfuscation of the causal connection between triggering events and humanitarian services through the so-called black box effect (algorithms are often described as black boxes, as their complexity and technical opacity hide and obfuscate their inner workings (Diakopoulos, Tow Center for Digital Journ, 2017)). Recognizing the need for a humanitarian ethics dimension in the analysis of automation, AI, and ADMs used in humanitarian action, we endorse the concept of “explicability” as developed within the ethical framework of machine learning and human-computer interaction, together with a set of proxy metrics. Finally, we stress the need for developing auditable standards, as well as transparent guidelines and frameworks to rein in the risks of what has been defined as humanitarian experimentation (Sandvik, Jacobsen, and McDonald, Int. Rev. Red Cross 99(904), 319–344, 2017). This article concludes that accountability mechanisms for AI-based systems and ADMs used to respond to the needs of populations in situations of vulnerability should be an essential feature by default, in order to preserve respect of the do no harm principle even in the digital dimension of aid. In conclusion, while we confirm existing concerns related to the adoption of AI-based systems and ADMs in humanitarian action, we also advocate for a roadmap towards humanitarian AI for the sector and introduce a tentative ethics framework as a basis for future research.
11

Herzog, Christian. "On the Ethical and Epistemological Utility of Explicable AI in Medicine." Philosophy & Technology 35, no. 2 (May 30, 2022). http://dx.doi.org/10.1007/s13347-022-00546-y.

Abstract:
Abstract In this article, I will argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence (AI)-based medical technology. I will build on the notion of “explicability” due to Floridi, which considers both the intelligibility and accountability of AI systems to be important for truly delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in terms of direct patient–physician contact, as well as on a longer-term epistemological level by facilitating scientific progress that is informed through practice. With this article, I will therefore attempt to counter arguments against demands for explicable AI in medicine that are based on a notion of “whatever heals is right.” I will elaborate on the positive aspects of explicable AI in medicine, as well as point out the risks of non-explicable AI.
12

Penso, Marco, Sarah Solbiati, Sara Moccia, and Enrico G. Caiani. "Decision Support Systems in HF based on Deep Learning Technologies." Current Heart Failure Reports, February 10, 2022. http://dx.doi.org/10.1007/s11897-022-00540-7.

Abstract:
Abstract Purpose of Review: The application of deep learning (DL) has grown in recent years, especially in the healthcare domain. This review presents the current state of DL techniques applied to structured electronic health record data, physiological signals, and imaging modalities for the management of heart failure (HF), focusing in particular on diagnosis, prognosis, and re-hospitalization risk, to explore the level of maturity of DL in this field. Recent Findings: DL allows a better integration of different data sources to distill more accurate outcomes in HF patients, thus resulting in better performance when compared to conventional evaluation methods. While applications in image and signal processing for HF diagnosis have reached very high performance, the application of DL to electronic health records and their multisource data for prediction could still be improved, despite the already promising results. Summary: Embracing the current big data era, DL can improve performance compared to conventional techniques and machine learning approaches. DL algorithms have the potential to provide more efficient care and improve outcomes of HF patients, although further investigations are needed to overcome current limitations, including the generalizability of results and the transparency and explicability of the evidence supporting the process.
13

Kumar, Sowmya Ramesh, and Samarth Ramesh Kedilaya. "Navigating Complexity: Harnessing AI for Multivariate Time Series Forecasting in Dynamic Environments." Journal of Engineering and Applied Sciences Technology, December 31, 2023, 1–8. http://dx.doi.org/10.47363/jeast/2023(5)219.

Abstract:
Multivariate Time Series Analysis (MTSA) plays a pivotal role in forecasting within diverse domains by addressing the complexities arising from interdependencies among multiple variables. This exploration delves into the fundamentals, methodologies, and applications of MTSA, elucidating its role in enhancing predictive capabilities. The key concepts in MTSA, including Vector Autoregression, Cointegration, Error Correction Models, and Granger Causality, form the foundation for understanding dynamic relationships among variables. The methodology section outlines the critical steps in MTSA, such as model specification, estimation, diagnostics, and forecasting. Additionally, the abstract explores the capabilities of Artificial Intelligence (AI) in time-series forecasting, emphasizing improved accuracy, long-term trend recognition, dynamic pattern recognition, and the handling of seasonality and anomalies. Specific AI models, such as Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), Echo State Networks (ESNs), and Online Learning Algorithms, are discussed in detail, along with practical implementation examples. Furthermore, the abstract introduces the benefits and challenges associated with MTSA. The benefits include comprehensive insights, improved forecast accuracy, and real-world relevance, while challenges encompass data and model complexity, explicability, and the validity of assumptions. The discussion emphasizes the need for innovative approaches to explain the predictions of complex models and highlights ongoing research in developing explainability frameworks.
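To ground one of the MTSA building blocks named in the abstract, the sketch below fits a Vector Autoregression to two interdependent synthetic series and produces a multi-step forecast; the data, lag selection, and horizon are illustrative assumptions rather than anything from the paper.

```python
# Hypothetical sketch: a small VAR model on two interdependent synthetic series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
x = np.cumsum(rng.normal(size=n))             # driver series (random walk)
y = 0.5 * np.roll(x, 1) + rng.normal(size=n)  # a series that lags behind x

data = pd.DataFrame({"x": x, "y": y}).iloc[1:]  # drop the np.roll wrap-around row

model = VAR(data)
results = model.fit(maxlags=5, ic="aic")        # lag order chosen by AIC

# Forecast 10 steps ahead from the last observed lags.
forecast = results.forecast(data.values[-results.k_ar:], steps=10)
print("selected lag order:", results.k_ar)
print("10-step forecast:\n", forecast)
```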
14

Ettouri, Sara, Daghouj Ghizlane, Reda Benchekroun, Kawtar El Hadi, Arab Lamiaa, Loubna el Maaloum, Bouchra Allali, and Asmaa El Kettani. "What artificial intelligence can bring us in the diagnosis and progression of glaucoma." Acta Ophthalmologica 102, S279 (January 2024). http://dx.doi.org/10.1111/aos.16231.

Abstract:
Aims/Purpose: Our goal is to focus on the contribution of artificial intelligence (AI) to glaucoma, from diagnosis to progression. Methods: Through a literature review, we survey work using AI in the field of glaucoma, whether for screening, diagnosis, or detection of progression. Results: For diagnosis, two papers were recently published reporting results of DL algorithms that diagnose glaucoma from visual field data. Several authors have evaluated AI-based fundus photograph analysis for its utility in detecting glaucoma, such as the study of Liu et al., which grouped 241,032 images of 68,013 patients with a sensitivity of 96.2% and a specificity of 87.2%, and also the analysis of OCT imaging data from peripapillary RNFL thickness maps and the macular ganglion cell complex for discriminating between glaucomatous and normal eyes, with AROC values ranging from 0.69 to 0.99. Prediction of IOP trends from previous data and medications would be a useful and plausible use of AI. For the progression of glaucoma, multiclass support vector machines (SVMs) have been used to simultaneously discriminate between normal, non-progressing, and progressing eyes through the analysis of confocal scanning laser ophthalmoscopy (CSLO) images, with a correct classification rate of 88%. Conclusions: Many AI strategies have shown promising results for glaucoma detection using fundus photography, optical coherence tomography, or automated perimetry. The combination of these imaging modalities increases the performance of AI algorithms, with results comparable to those of humans. Research in the coming years will need to address unavoidable questions regarding the clinical significance of such results and the explicability of the predictions. References: 1. Mursch-Edlmayr AS, Ng WS, Diniz-Filho A, Sousa DC, Arnold L, Schlenker MB, et al. Artificial Intelligence Algorithms to Diagnose Glaucoma and Detect Glaucoma Progression: Translation to Clinical Practice. Transl Vis Sci Technol. October 15, 2020. 2. Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology. August 2018.
15

Prem, Erich. "From ethical AI frameworks to tools: a review of approaches." AI and Ethics, February 9, 2023. http://dx.doi.org/10.1007/s43681-023-00258-9.

Abstract:
Abstract In reaction to concerns about a broad range of potential ethical issues, dozens of proposals for addressing ethical aspects of artificial intelligence (AI) have been published. However, many of them are too abstract to be easily translated into concrete designs for AI systems. The various proposed ethical frameworks can be considered an instance of principlism similar to that found in medical ethics. Given their general nature, principles do not say how they should be applied in a particular context. Hence, a broad range of approaches, methods, and tools have been proposed for addressing ethical concerns of AI systems. This paper presents a systematic analysis of more than 100 frameworks, process models, and proposed remedies and tools for helping to make the necessary shift from principles to implementation, expanding on the work of Morley and colleagues. This analysis confirms a strong focus of proposed approaches on only a few ethical issues such as explicability, fairness, privacy, and accountability. These issues are often addressed with proposals for software and algorithms. Other, more general ethical issues are mainly addressed with conceptual frameworks, guidelines, or process models. This paper develops a structured list and definitions of approaches, presents a refined segmentation of the AI development process, and suggests areas that will require more attention from researchers and developers.
