Journal articles on the topic 'Explicability'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Explicability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Nathan, N. M. L. "Explicability and the unpreventable." Analysis 48, no. 1 (January 1, 1988): 36–40. http://dx.doi.org/10.1093/analys/48.1.36.

2

Robbins, Scott. "A Misdirected Principle with a Catch: Explicability for AI." Minds and Machines 29, no. 4 (October 15, 2019): 495–514. http://dx.doi.org/10.1007/s11023-019-09509-3.

Abstract:
There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.
3

Lee, Hanseul, and Hyundeuk Cheon. "The Principle of Explicability in AI Ethics." Study of Humanities 35 (June 30, 2021): 37–63. http://dx.doi.org/10.31323/sh.2021.06.35.02.

4

Herzog, Christian. "On the risk of confusing interpretability with explicability." AI and Ethics 2, no. 1 (December 9, 2021): 219–25. http://dx.doi.org/10.1007/s43681-021-00121-9.

Abstract:
This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable to facilitate responsible use.
5

Araújo, Alexandre de Souza. "Principle of explicability: regulatory challenges on artificial intelligence." Concilium 24, no. 3 (February 26, 2024): 273–96. http://dx.doi.org/10.53660/clm-2722-24a22.

Abstract:
This essay aims to examine topics related to artificial intelligence, explaining how it works and especially studying aspects of the regulatory framework. The paper approaches the principle of explicability of AI decisions, performing an analysis of the Brazilian parliamentary discussion and the Brazilian data protection act. The hermeneutic method was adopted, as well as a literature review.
6

Sreedharan, Sarath, Tathagata Chakraborti, Christian Muise, and Subbarao Kambhampati. "Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2518–26. http://dx.doi.org/10.1609/aaai.v34i03.5634.

Abstract:
In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop where the human's expectations about an agent may differ from the agent's own model. We show how this formulation allows agents to not only leverage existing strategies for handling model differences like explanations (Chakraborti et al. 2017) and explicability (Kulkarni et al. 2019), but can also exhibit novel behaviors that are generated through the combination of these different strategies. Our formulation also reveals a deep connection to existing approaches in epistemic planning. Specifically, we show how we can leverage classical planning compilations for epistemic planning to solve Expectation-Aware planning problems. To the best of our knowledge, the proposed formulation is the first complete solution to planning with diverging user expectations that is amenable to a classical planning compilation while successfully combining previous works on explanation and explicability. We empirically show how our approach provides a computational advantage over our earlier approaches that rely on search in the space of models.
7

Smith, Dominic. "Making Automation Explicable: A Challenge for Philosophy of Technology." New Formations 98, no. 98 (July 1, 2019): 68–84. http://dx.doi.org/10.3898/newf:98.05.2019.

Abstract:
This article argues for an expanded conception of automation's 'explicability'. When it comes to topics as topical and shot through with multifarious anxieties as automation, it is, I argue, insufficient to rely on a conception of explicability as 'explanation' or 'simplification'. Instead, automation is the kind of topic that is challenging us to develop a more dynamic conception of explicability as explication. By this, I mean that automation is challenging us to develop epistemic strategies that are better capable of implicating people and their anxieties about automation in the topic, and, counterintuitively, of complicating how the topic is interfaced with. The article comprises an introduction followed by four main parts. While the introduction provides general context, each of the four subsequent parts seeks to demonstrate how diverse epistemic strategies might have a role to play in developing the process just described. Together, the parts are intended to build a cumulative case. This does not mean that the strategies they discuss are intended to be definitive, however – other strategies for making automation explicable may be possible and more desirable. Part one historicises automation as a concept. It does this through a focus on a famous passage from Descartes' Second Meditation, where he asks the reader to imagine automata glimpsed through a window. The aim here is to rehearse the presuppositions of a familiar 'modernist' epistemological model, and to outline how a contemporary understanding of automation as a wicked socio-economic problem challenges it. Parts two and three are then framed through concepts emerging from recent psychology: 'automation bias' and 'automation complacency'. The aim here is to consider recent developments in philosophy of technology in terms of these concepts, and to dramatically explicate key presuppositions at stake in the form of reasoning by analogy implied. While part two explicates an analogy between automation bias and philosophical engagements with technologies that involve a 'transcendental' tendency to reify automation, part three explicates an analogy between automation complacency and an opposed 'empirical turn' tendency in philosophy of technology to privilege nuanced description of case studies. Part four then concludes by arguing that anxieties concerning automation might usefully be redirected towards a different sense of the scope and purpose of philosophy of technology today: not as a movement to be 'turned' in one direction at the expense of others ('empirical' vs 'transcendental', for instance) but as a multidimensional 'problem space' to be explicated in many different directions at once. Through reference to Kierkegaard and Simondon, I show how different approaches to exemplification, indirection and indeterminacy can be consistent with this, and with the approach to explicability recommended above.
8

LaFleur, William R. "Suicide off the Edge of Explicability: Awe in Ozu and Kore'eda." Film History: An International Journal 14, no. 2 (June 2002): 158–65. http://dx.doi.org/10.2979/fil.2002.14.2.158.

9

Chakraborti, Tathagata, Anagha Kulkarni, Sarath Sreedharan, David E. Smith, and Subbarao Kambhampati. "Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior." Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 86–96. http://dx.doi.org/10.1609/icaps.v29i1.3463.

Abstract:
There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop. However, the work in this area has typically lacked coherence on the topic, with proposed solutions for “explicable”, “legible”, “predictable” and “transparent” planning with overlapping, and sometimes conflicting, semantics all aimed at some notion of understanding what intentions the observer will ascribe to an agent by observing its behavior. This is also true for the recent works on “security” and “privacy” of plans which are also trying to answer the same question, but from the opposite point of view – i.e. when the agent is trying to hide instead of reveal its intentions. This paper attempts to provide a workable taxonomy of relevant concepts in this exciting and emerging field of inquiry.
10

Benjamin, William J. “Joe.” "The “explicability” of cylinder axis and power in refractions over toric soft lenses." International Contact Lens Clinic 25, no. 3 (May 1998): 89–92. http://dx.doi.org/10.1016/s0892-8967(98)00024-8.

11

Arikpo, Abam, Omoogun Ajayi, and Orim Richard. "When, where, how and why does learners’ autonomy increase unemployment among school graduates? A peer tutor’s perspective." Journal of Public Administration and Governance 3, no. 1 (April 2, 2013): 115. http://dx.doi.org/10.5296/jpag.v3i1.3268.

Abstract:
Ordinarily, peer tutoring, which is the ability of a class or school mate to assess and respond, by explanation and application, to the difficulty of another class or school mate or mates over a given subject matter vis-à-vis an implicit job situation, could be seen as a solution to unemployment. There has also been an incidence of unemployment arising from the explicability and generalizability of the peer tutor's access, equity, improvement in attitude, skill, and knowledge quality, relevance, and diversification of delivery methods. The paper is a discussion of the causes of graduate unemployment that could emanate from peer tutoring, and of its possible resolution.
12

Bauer, Yehuda. "Is the Holocaust Explicable?" Holocaust and Genocide Studies 5, no. 2 (January 1, 1990): 145–55. http://dx.doi.org/10.1093/hgs/5.2.145.

Abstract:
In the discussions about the Holocaust, an increasing number of commentators — theologians, writers, as well as historians — argue that ultimately the Holocaust is a mystery, an inexplicable event in human history. Various expressions are used, such as ‘tremendum’, with its theological connotations, or ‘abyss’, and many others. They all indicate a measure of final incomprehension that such a horrible event could have occurred in the midst of a supposedly civilized European society. Nazi atrocities are usually referred to as ‘beastly’, or ‘bestiality’, and very commonly as ‘inhuman’. The present paper tries to evaluate the explicability of the Holocaust from the standpoint of an historian.
13

Anagnostopoulos, Andreas. "Aristotle’s Parmenidean Dilemma." Archiv für Geschichte der Philosophie 95, no. 3 (September 2013): 245–74. http://dx.doi.org/10.1515/agph-2013-0011.

Abstract:
Aristotle’s treatment, in Physics 1.8, of a dilemma purporting to show that change is impossible, aims in the first instance to defend not the existence of change, but the explicability of change, a presupposition of his natural science. The opponent fails to recognize that causal explanation is sensitive to the differences between merely coinciding beings. This formal principle of explanation is implicit in Aristotle’s theory that change involves a third, ‘underlying’ principle, in addition to the two opposites, form and privation, and it allows him to avoid the two horns of the dilemma. Aristotle’s treatment of the dilemma does not address the issues of persistence through change or generation ex nihilo, as is often thought.
14

Nešpor, Jan. "Automated Administrative Decision-Making: What is the Black Box Hiding?" AUC IURIDICA 70, no. 2 (May 23, 2024): 69–83. http://dx.doi.org/10.14712/23366478.2024.23.

Abstract:
The exploration of the “black box” phenomenon underscores opacity challenges in automated administrative decision-making systems, prompting a discussion on the paradox of transparency. Advocating for the concept of “qualified transparency”, the article aims to navigate the delicate balance between understanding and safeguarding sensitive information. Ethical imperatives, including respect for human autonomy, harm prevention, fairness, and explicability, are considered, culminating in recommendations for human participation, ethicality or accountability by design considerations, and the implementation of regulatory sandboxes to test such models prior to broad integration. Ultimately, the article advocates for a comprehensive discourse on transitioning from a human-centric to an automated public administration model, acknowledging the complexity and potential risks involved.
15

Peláez-Rodríguez, César, Cosmin M. Marina, Jorge Pérez-Aracil, Carlos Casanova-Mateo, and Sancho Salcedo-Sanz. "Extreme Low-Visibility Events Prediction Based on Inductive and Evolutionary Decision Rules: An Explicability-Based Approach." Atmosphere 14, no. 3 (March 12, 2023): 542. http://dx.doi.org/10.3390/atmos14030542.

Abstract:
In this paper, we propose different explicable forecasting approaches, based on inductive and evolutionary decision rules, for extreme low-visibility events prediction. Explicability of the processes given by the rules is at the core of the proposal. We propose two different methodologies: first, we apply the PRIM algorithm and evolution to obtain induced and evolved rules, and subsequently these rules and boxes of rules are used as a possible simpler alternative to ML/DL classifiers. Second, we propose to integrate the information provided by the induced/evolved rules into the ML/DL techniques, as extra inputs, in order to enrich the complex ML/DL models. Experiments in the prediction of extreme low-visibility events in Northern Spain due to orographic fog show the good performance of the proposed approaches.
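
A rough illustration of the second methodology (a minimal sketch with synthetic weather data and a hypothetical induced rule, not the paper's actual PRIM output): the rule's firing is encoded as an extra binary input that enriches a standard classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
humidity = rng.uniform(0, 100, 1000)   # relative humidity, %
wind = rng.uniform(0, 20, 1000)        # wind speed, m/s

# hypothetical induced rule: IF humidity > 90 AND wind < 4 THEN low visibility
rule_fires = ((humidity > 90) & (wind < 4)).astype(float)

# synthetic target: the rule's regime plus noise, for illustration only
low_vis = (rule_fires.astype(bool) | (rng.random(1000) < 0.05)).astype(int)

# feed raw features plus the rule output as an extra input
X = np.column_stack([humidity, wind, rule_fires])
clf = LogisticRegression(max_iter=1000).fit(X, low_vis)
print("coefficient on the rule feature:", round(clf.coef_[0][2], 2))
```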
16

Sovrano, Francesco, Salvatore Sapienza, Monica Palmirani, and Fabio Vitali. "Metrics, Explainability and the European AI Act Proposal." J 5, no. 1 (February 18, 2022): 126–38. http://dx.doi.org/10.3390/j5010010.

Abstract:
On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act considers not only machine learning, but also expert systems and statistical models long in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms to ensure quality at launch and throughout the whole life cycle of AI-based systems, thus ensuring legal certainty that encourages innovation and investment in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that are compliant with the forthcoming Act, and explainability metrics play a significant role. Specifically, the AI Act sets some new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as “high-risk” in Annex III. These requirements include a plethora of technical explanations capable of covering the right amount of information in a meaningful way. This paper aims to investigate how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, with this paper we propose an analysis of the AI Act, aiming to understand (1) what specific explicability obligations are set and who shall comply with them, and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in a practical way. More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. Therefore, we discuss the extent to which these requirements are met by the metrics currently under discussion.
17

Краснов, Федор Владимирович, and Ирина Сергеевна Смазневич. "The explicability factor of the algorithm in the problems of searching for the similarity of text documents." Вычислительные технологии, no. 5(25) (October 28, 2020): 107–23. http://dx.doi.org/10.25743/ict.2020.25.5.009.

Abstract:
The problem of providing a comprehensible explanation to any user of why an applied intelligent information system identifies certain texts as similar in meaning imposes significant requirements on the intelligent algorithms. The article covers the entire set of technologies involved in the solution of the text clustering problem, and several conclusions are stated thereof. Matrix decomposition aimed at reducing the dimension of the vector representation of a corpus does not provide a clear explanation of the algorithmic principles to a user. Ranking using the TF-IDF function and its modifications finds only a few documents that are similar in meaning; however, this method is the easiest for users to comprehend, since algorithms of this type detect specific matching words in the compared texts. Topic modeling methods (LSI, LDA, ARTM) assign large similarity values to texts despite few matching words, while a person can easily tell that the general subject of the texts is the same. Yet the explanation of how topic modeling works requires additional effort for the interpretation of the detected topics. This interpretation gets easier as the model quality grows, and the quality can be optimized via its average coherence. The experiment demonstrated that the absolute value of document similarity is not invariant across different intelligent algorithms, so the optimal threshold value of similarity must be set separately for each problem to be solved. The results of this work can be further used to assess which of the various methods developed to detect meaning similarity in texts can be effectively implemented in applied information systems, and to determine the optimal model parameters based on the solution's explicability requirements.
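
As an illustration of the contrast drawn in this abstract, here is a minimal sketch (toy documents and an arbitrary threshold, not the authors' code) of TF-IDF similarity, where the explanation shown to a user can simply be the matching words, and the similarity threshold is tuned per task:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the bank approved the loan application",
    "the loan application was approved by the bank",
    "rainfall in northern regions increased this year",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)
sims = cosine_similarity(tfidf)

THRESHOLD = 0.3  # illustrative; per the experiment, set separately for each task

for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        if sims[i, j] >= THRESHOLD:
            # the explanation: show the overlapping terms that drove the score
            terms_i = set(vectorizer.inverse_transform(tfidf[i])[0])
            terms_j = set(vectorizer.inverse_transform(tfidf[j])[0])
            print(i, j, round(sims[i, j], 2), sorted(terms_i & terms_j))
```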
18

Ruokolainen, Kalle, Ari Linna, and Hanna Tuomisto. "Use of Melastomataceae and pteridophytes for revealing phytogeographical patterns in Amazonian rain forests." Journal of Tropical Ecology 13, no. 2 (March 1997): 243–56. http://dx.doi.org/10.1017/s0266467400010439.

Abstract:
Similarities and differences among eight upland rain forest sites in Peruvian Amazonia were measured separately by using Melastomataceae, pteridophyte and tree species compositions and edaphic characteristics of the sites. All three plant groups showed a similar pattern among the sites, and this pattern could be explained by edaphic differences but not by geographical distances among the sites. The explicability of site-specific edaphic characteristics on the basis of geological history is discussed. The results suggest that both pteridophytes and Melastomataceae can be used as indicators of floristically different rain forest types that are edaphically defined. Distribution patterns of these plant groups can be studied much more rapidly than the patterns of trees and therefore both Melastomataceae and pteridophytes may be used in large scale phytogeographical studies that are urgently needed in the face of rapidly advancing deforestation.
19

Möllmann, Nicholas RJ, Milad Mirbabaie, and Stefan Stieglitz. "Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations." Health Informatics Journal 27, no. 4 (October 2021): 146045822110523. http://dx.doi.org/10.1177/14604582211052391.

Abstract:
The application of artificial intelligence (AI) not only yields advantages for healthcare but also raises several ethical questions. Extant research on ethical considerations of AI in digital health is quite sparse and a holistic overview is lacking. A systematic literature review searching across 853 peer-reviewed journals and conferences yielded 50 relevant articles, categorized under five major ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. The ethical landscape of AI in digital health is portrayed, including a snapshot guiding future development. The status quo highlights potential areas with little empirical but required research. Less explored areas with remaining ethical questions are validated and guide scholars' efforts by outlining an overview of the addressed ethical principles and the intensity of studies, including correlations. Practitioners come to understand the novel questions AI raises, eventually leading to properly regulated implementations, and further comprehend that society is on its way from supporting technologies to autonomous decision-making systems.
20

Cawthorne, Dylan, and Aimee Robbins-van Wynsberghe. "An Ethical Framework for the Design, Development, Implementation, and Assessment of Drones Used in Public Healthcare." Science and Engineering Ethics 26, no. 5 (June 23, 2020): 2867–91. http://dx.doi.org/10.1007/s11948-020-00233-1.

Abstract:
The use of drones in public healthcare is suggested as a means to improve efficiency under constrained resources and personnel. This paper begins by framing drones in healthcare as a social experiment where ethical guidelines are needed to protect those impacted while fully realizing the benefits the technology offers. Then we propose an ethical framework to facilitate the design, development, implementation, and assessment of drones used in public healthcare. Given the healthcare context, we structure the framework according to the four bioethics principles: beneficence, non-maleficence, autonomy, and justice, plus a fifth principle from artificial intelligence ethics: explicability. These principles are abstract which makes operationalization a challenge; therefore, we suggest an approach of translation according to a values hierarchy whereby the top-level ethical principles are translated into relevant human values within the domain. The resulting framework is an applied ethics tool that facilitates awareness of relevant ethical issues during the design, development, implementation, and assessment of drones in public healthcare.
21

Евстифеева, Е. А., С. И. Филиппченкова, and Е. В. Балакшина. "PSYCHOLOGICAL DETERMINANTS OF QUALITY OF LIFE OF UNIVERSITY STUDENTS." Вестник Тверского государственного университета. Серия: Педагогика и психология, no. 4(61) (December 23, 2022): 5–13. http://dx.doi.org/10.26456/vtpsyped/2022.4.005.

Abstract:
The article analyzes the problem of the quality of life of student youth. A summary of the approaches of domestic and foreign researchers to this topic is given. Indicators of the level of a person's quality of life are given, depending on the significance of objective and subjective characteristics of human life. The specifics of diagnosing the explicability of quality of life are described. The results of a psychodiagnostic examination of university students are presented for the main components of the phenomenon under consideration (subjective control factors, life-meaning orientations, decision-making features, and quality of life parameters).
22

Wu, Yijun, and Yuzhuo Xi. "Analysis of Multifactor Fundamentals Stock Selection Based on Backtesting." Advances in Economics, Management and Political Sciences 34, no. 1 (November 10, 2023): 93–99. http://dx.doi.org/10.54254/2754-1169/34/20231680.

Abstract:
In recent years, quantitative finance has become a major trend in investing, as it brings stable returns with controllable risks. Among the various quantitative strategies, the multifactorial stock selection strategy based on fundamental data (e.g., financial statements, macro- and micro-economic data) is one of the most widely investigated. On this basis, this study chooses Chinese listed companies to verify the feasibility and effectiveness of the stock selection strategy. To be specific, the Ricequant platform is utilized to perform the backtesting as well as the data retrieval in order to estimate and evaluate the performance of the strategies. According to the analysis, several indicators show great ability to gain extra returns relative to systematic risks and market performance. In other words, the feasibility and explicability of the quantitative strategy based on the multifactorial model are verified in the Chinese market. Overall, these results shed light on guiding further exploration of fundamental analysis of different underlying assets based on multifactorial analysis.
23

Kalyanpur, Aditya, Tom Breloff, and David A. Ferrucci. "Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10867–74. http://dx.doi.org/10.1609/aaai.v36i10.21333.

Abstract:
Traditional symbolic reasoning engines, while attractive for their precision and explicability, have a few major drawbacks: the use of brittle inference procedures that rely on exact matching (unification) of logical terms, an inability to deal with uncertainty, and the need for a precompiled rule-base of knowledge (the “knowledge acquisition” problem). To address these issues, we devise a novel logical reasoner called Braid, that supports probabilistic rules, and uses the notion of custom unification functions and dynamic rule generation to overcome the brittle matching and knowledge-gap problem prevalent in traditional reasoners. In this paper, we describe the reasoning algorithms used in Braid, and their implementation in a distributed task-based framework that builds proof/explanation graphs for an input query. We use a simple QA example from a children’s story to motivate Braid’s design and explain how the various components work together to produce a coherent logical explanation. Finally, we evaluate Braid on the ROC Story Cloze test and achieve close to state-of-the-art results while providing frame-based explanations.
24

Schneider, Karsten. "Gauging “Ultra-Vires”: The Good Parts." German Law Journal 21, no. 5 (July 2020): 968–78. http://dx.doi.org/10.1017/glj.2020.61.

Abstract:
The Federal Constitutional Court’s ultra-vires case law—especially its most recent iterations—has more than its fair share of bad parts. It went from non-existence to global prominence in an alarmingly short period of time. Fortunately, the doctrine contains some extraordinarily good parts. Within the case law, there are three beautiful, elegant, and highly expressive elements that are buried under a massive tower of good intentions and hard luck: First, the principle of distinction between the two concepts of responsibility and accountability, second, the ban on transferring blanket empowerments, and third, the idea of a “program of integration” as a good medium for expressing vague ideas. In combination, these elements constitute a constitutional mechanism that does not play hell with European law, but truly complements any union based on multi-level cooperation. Focusing on the good parts—and avoiding some bad parts—might help prospective ultra-vires reviews to steer clear of wreaking havoc. The subset of good parts can serve to shift the constitutional case law towards reliability, readability, robustness, foreseeability, and, if nothing else, explicability.
25

Boulif, Abir, Bouchra Ananou, Mustapha Ouladsine, and Stéphane Delliaux. "A Literature Review: ECG-Based Models for Arrhythmia Diagnosis Using Artificial Intelligence Techniques." Bioinformatics and Biology Insights 17 (January 2023): 117793222211496. http://dx.doi.org/10.1177/11779322221149600.

Abstract:
In the health care and medical domain, it has proven challenging to correctly diagnose many diseases with complicated and interferential symptoms, including arrhythmia. However, with the evolution of artificial intelligence (AI) techniques, the diagnosis and prognosis of arrhythmia have become easier for physicians and practitioners using only an electrocardiogram (ECG) examination. This review presents a synthesis of the studies conducted in the last 12 years to predict arrhythmia’s occurrence by automatically classifying different heartbeat rhythms. From a variety of academic research databases, 40 studies were selected for analysis, among which 29 applied deep learning methods (72.5%), 9 addressed the problem with machine learning methods (22.5%), and 2 combined both deep learning and machine learning to predict arrhythmia (5%). Indeed, the use of AI for arrhythmia diagnosis is emerging in the literature, although there are some challenging issues, such as the explicability of deep learning methods and the computational resources needed to achieve high performance. However, with the continuous development of cloud platforms and quantum computing for AI, we can achieve a breakthrough in arrhythmia diagnosis.
26

Umbrello, Steven, and Ibo van de Poel. "Mapping value sensitive design onto AI for social good principles." AI and Ethics 1, no. 3 (February 1, 2021): 283–96. http://dx.doi.org/10.1007/s43681-021-00038-3.

Abstract:
Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good, and (3) extending the VSD process to encompass the whole life cycle of an AI technology to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app.
27

Strauß, Stefan. "Deep Automation Bias: How to Tackle a Wicked Problem of AI?" Big Data and Cognitive Computing 5, no. 2 (April 20, 2021): 18. http://dx.doi.org/10.3390/bdcc5020018.

Abstract:
The increasing use of AI in different societal contexts intensified the debate on risks, ethical problems and bias. Accordingly, promising research activities focus on debiasing to strengthen fairness, accountability and transparency in machine learning. There is, though, a tendency to fix societal and ethical issues with technical solutions that may cause additional, wicked problems. Alternative analytical approaches are thus needed to avoid this and to comprehend how societal and ethical issues occur in AI systems. Despite various forms of bias, ultimately, risks result from eventual rule conflicts between the AI system behavior due to feature complexity and user practices with limited options for scrutiny. Hence, although different forms of bias can occur, automation is their common ground. The paper highlights the role of automation and explains why deep automation bias (DAB) is a metarisk of AI. Based on former work it elaborates the main influencing factors and develops a heuristic model for assessing DAB-related risks in AI systems. This model aims at raising problem awareness and training on the sociotechnical risks resulting from AI-based automation and contributes to improving the general explicability of AI systems beyond technical issues.
28

Li, Xiangyu, Gobi Krishna Sinniah, Ruiwei Li, and Xiaoqing Li. "Correlation between Land Use Pattern and Urban Rail Ridership Based on Bicycle-Sharing Trajectory." ISPRS International Journal of Geo-Information 11, no. 12 (November 24, 2022): 589. http://dx.doi.org/10.3390/ijgi11120589.

Abstract:
As a form of rapid mass transportation, urban rail systems have always been widely used to alleviate urban traffic congestion and reconstruct urban structures. Land use characteristics are indispensable to this system and correlate with urban ridership. Dock-less bicycle-sharing expands the station service coverage range because it integrates public transportation with an urban rail system to create a convenient travel model. Consequently, the land use pattern with dock-less bicycle-sharing is associated with urban rail ridership. This paper measures the correlation between land use and urban rail ridership based on the trajectory of dock-less bicycle-sharing, which precisely reflects the travel behavior of passengers along the trip chain. The specific relationship has been determined using the random forest model. This paper found that the land use pattern could better explain the egress ridership during morning peak hours. In particular, it could explain 48.46% of the urban rail ridership in terms of egress, but the explicability for the ingress ridership slightly decreased to 36.88%. This suggests that the land use pattern is related to urban rail ridership. However, the impact situation varies, so we should understand this relationship with greater care.
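
A minimal sketch of this kind of analysis (synthetic data with hypothetical land-use shares, not the study's dataset): a random forest is fit on land-use features around stations, and the share of ridership variance it explains is read off, mirroring the paper's 48.46%/36.88% comparison.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical features: shares of residential, commercial, and office land
# within a station's bicycle-sharing catchment
X = rng.random((500, 3))
# synthetic egress ridership, driven mostly by commercial and office shares
y = 3.0 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0.0, 0.5, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# share of ridership variance explained by the land-use pattern
print("R^2:", round(r2_score(y_test, model.predict(X_test)), 2))
print("importances:", model.feature_importances_.round(2))
```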
29

Dhiman, Pummy, Anupam Bonkra, Amandeep Kaur, Yonis Gulzar, Yasir Hamid, Mohammad Shuaib Mir, Arjumand Bano Soomro, and Osman Elwasila. "Healthcare Trust Evolution with Explainable Artificial Intelligence: Bibliometric Analysis." Information 14, no. 10 (October 3, 2023): 541. http://dx.doi.org/10.3390/info14100541.

Abstract:
Recent developments in IoT, big data, fog and edge networks, and AI technologies have had a profound impact on a number of industries, including the medical industry. The use of AI for therapeutic purposes has been hampered by its inexplicability. Explainable Artificial Intelligence (XAI), a revolutionary movement, has arisen to solve this constraint. By using decision-making and prediction outputs, XAI seeks to improve the explicability of standard AI models. In this study, we examined global developments in empirical XAI research in the medical field. The bibliometric analysis tools VOSviewer and Biblioshiny were used to examine 171 open access publications from the Scopus database (2019–2022). Our findings point to several prospects for growth in this area, notably in areas of medicine like diagnostic imaging. With 109 research articles using XAI for healthcare classification, prediction, and diagnosis, the USA leads the world in research output. With 88 citations, IEEE Access has the greatest number of publications of all the journals. Our extensive survey covers a range of XAI applications in healthcare, such as diagnosis, therapy, prevention, and palliation, and offers helpful insights for researchers who are interested in this field. This report provides a direction for future healthcare industry research endeavors.
30

van Bruxvoort, Xadya, and Maurice van Keulen. "Framework for Assessing Ethical Aspects of Algorithms and Their Encompassing Socio-Technical System." Applied Sciences 11, no. 23 (November 25, 2021): 11187. http://dx.doi.org/10.3390/app112311187.

Abstract:
In the transition to a data-driven society, organizations have introduced data-driven algorithms that often apply artificial intelligence. In this research, an ethical framework was developed to ensure robustness and completeness and to avoid and mitigate potential public uproar. We take a socio-technical perspective, i.e., we view the algorithm embedded in an organization, with its infrastructure, rules, and procedures, as one to-be-designed system. The framework consists of five ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. It can be used during design for the identification of relevant concerns. The framework has been validated by applying it to real-world fraud detection cases: Systeem Risico Indicatie (SyRI) of the Dutch government and the algorithm of the municipality of Amersfoort. The former is a controversial country-wide algorithm that was ultimately prohibited by the court. The latter is an algorithm in development. In both cases, the framework proved effective in identifying all ethical risks. For SyRI, all concerns found in the media were also identified by the framework, mainly focused on transparency of the entire socio-technical system. For the municipality of Amersfoort, the framework highlighted risks regarding the amount of sensitive data and communication to and with the public, presenting a more thorough overview compared to the risks the media raised.
31

Niemi, Hannele. "AI in learning." Journal of Pacific Rim Psychology 15 (January 2021): 183449092110381. http://dx.doi.org/10.1177/18344909211038105.

Abstract:
This special issue raises two thematic questions: (1) How will AI change learning in the future, and what role will human beings play in the interaction with machine learning? (2) What can we learn from the articles in this special issue for future research? These questions are reflected in the frame of the recent discussion of human and machine learning. AI for learning provides many applications and multimodal channels for supporting people in cognitive and non-cognitive task domains. The articles in this special issue evidence that agency, engagement, self-efficacy, and collaboration are needed in learning and working with intelligent tools and environments. The importance of social elements is also clear in the articles. The articles also point out that the teacher’s role in digital pedagogy primarily involves facilitating and coaching. AI in learning has high potential, but it also has many limitations. Many worries are linked with ethical issues, such as biases in algorithms, privacy, transparency, and data ownership. This special issue also highlights the concepts of explainability and explicability in the context of human learning. We need much more research and research-based discussion to make AI more trustworthy for users in learning environments and to prevent misconceptions.
32

Alharbi, Abdulrahman, Ivan Petrunin, and Dimitrios Panagiotakopoulos. "Assuring Safe and Efficient Operation of UAV Using Explainable Machine Learning." Drones 7, no. 5 (May 19, 2023): 327. http://dx.doi.org/10.3390/drones7050327.

Abstract:
The accurate estimation of airspace capacity in unmanned traffic management (UTM) operations is critical for a safe, efficient, and equitable allocation of airspace system resources. While conventional approaches for assessing airspace complexity certainly exist, these methods fail to capture true airspace capacity, since they fail to address several important variables (such as weather). Meanwhile, existing AI-based decision-support systems evince opacity and inexplicability, and this restricts their practical application. With these challenges in mind, the authors propose a tailored solution to the needs of demand and capacity management (DCM) services. This solution, by deploying a synthesized fuzzy rule-based model and deep learning, will address the trade-off between explicability and performance. In doing so, it will generate an intelligent system that will be explicable and reasonably comprehensible. The results show that this advisory system will be able to indicate the most appropriate regions for unmanned aerial vehicle (UAV) operation, and it will also increase UTM airspace availability by more than 23%. Moreover, the proposed system demonstrates a maximum capacity gain of 65% and a minimum safety gain of 35%, while possessing an explainability attribute of 70%. This will assist UTM authorities through more effective airspace capacity estimation and the formulation of new operational regulations and performance requirements.
33

Nesteruk, A. V., and A. V. Soldatov. "Historico-Philosophical Aspects of Mathematisation of Nature in Modern Science." Discourse 5, no. 4 (October 29, 2019): 5–17. http://dx.doi.org/10.32603/2412-8562-2019-5-4-5-17.

Abstract:
Introduction. The paper deals with the philosophical problems of the mathematisation of nature in modern science. Methodology and sources. The analysis is based on the issues in modern science viewed through the prism of the critique of the mathematisation of nature in phenomenological philosophy. Results and discussion. It is argued that the radical mathematisation of nature, devoid of any references to the source of its contingent facticity in humanity, leads to the diminution of humanity and concealment of the primary medium of existence, that is, the life-world. Phenomenology shows how the life-world can be articulated through contrasting it to “nature” constructed scientifically. It is the analysis of the scientific universe as a mental creation that can help to uncover the life-world, when “nature” (as scientifically constructed and abstracted from the life-world) is itself subjected to a kind of deconstruction which leads us back to the life-world of the next, so to speak, reflected order. Conclusion. The life-world is articulated under the conditions that there exists a scientific explicability of nature. The life-world as it is articulated in theology does require some alternative explication of nature, but, contrary to science, it never leads to the concealment of the life-world. What is common to these life-worlds is exactly that which cannot be explicated by science, namely the underlying personhood.
34

Prathomwong, Piyanat, and Pagorn Singsuriya. "Ethical Framework of Digital Technology, Artificial Intelligence, and Health Equity." Asia Social Issues 15, no. 5 (June 6, 2022): 252136. http://dx.doi.org/10.48048/asi.2022.252136.

Abstract:
The extensive use of digital technology and artificial intelligence (AI) is evident in healthcare. Although one aim of technological development and application is to promote health equity, it can at the same time increase health disparities. An ethical framework is needed to analyze issues arising in the effort to promote health equity through digital technology and AI. Based on an analysis of ethical principles for the promotion of health equity, this research article aims to synthesize an ethical framework for analyzing issues related to the promotion of health equity through digital technology and AI. Results of the study showed a synthesized framework that comprises two main groups of ethical principles: general principles and principles of management. The latter are meant to serve the implementation of the former. The general principles comprise four core principles: Human Dignity, Justice, Non-maleficence, and Beneficence, covering major and minor principles. For example, the core principle of Human Dignity includes three major principles (Non-humanization, Privacy, and Autonomy) and two minor principles (Explicability and Transparency). Other core principles have their relevant major and minor principles. The principles of management can be categorized according to the goals they serve under the different core principles. An illustration of applying the ethical framework is offered through the analysis and categorization of issues solicited from experts in multidisciplinary workshops on digital technology, AI, and health equity.
35

Fornasier, Mateus de Oliveira. "ARTIFICIAL INTELLIGENCE AND LABOR." Revista da Faculdade Mineira de Direito 24, no. 47 (June 21, 2021): 396–421. http://dx.doi.org/10.5752/p.2318-7999.2021v24n47p396-421.

Abstract:
This article studies the impacts that the new economy resulting from automation has on law. Its hypothesis is that a new social security system, based on a universal basic income funded by taxing the use of automation tools, should replace systems based mainly on the employment relationship, and that principles related to transparency, explicability, and non-discrimination should create obligations for developers and users of AI-powered worker selection tools. Methodology: hypothetical-deductive procedure method, with a transdisciplinary and qualitative approach, and a bibliographic review research technique. Results: i) Labor regulation must be planned beyond substitution, focusing on a new economy in which formal jobs, inserted in a paradigm of social and economic protection, are eroding, and the great challenge will be to protect decent work standards while enlarging dignity for non-employed workers; ii) A universal basic income funded by taxes on automation would be interesting, but problematic from the point of view of solidarity in its funding, since the stress on national economies can cause fear among governments; and globally, such taxation, if adopted unevenly, could cause great tax competition between countries; iii) Transparency obligations are necessary to mitigate bias in hiring tools, but they are not sufficient on their own, because bias is complex, mainly due to the multiplicity of discriminatory factors and the opacity of the logic in machine learning.
36

Antonio, Nuno, Ana de Almeida, and Luis Nunes. "Big Data in Hotel Revenue Management: Exploring Cancellation Drivers to Gain Insights Into Booking Cancellation Behavior." Cornell Hospitality Quarterly 60, no. 4 (May 29, 2019): 298–319. http://dx.doi.org/10.1177/1938965519851466.

Abstract:
In the hospitality industry, demand forecast accuracy is highly impacted by booking cancellations, which makes demand-management decisions difficult and risky. In attempting to minimize losses, hotels tend to implement restrictive cancellation policies and employ overbooking tactics, which, in turn, reduce the number of bookings and reduce revenue. To tackle the uncertainty arising from booking cancellations, we combined the data from eight hotels’ property management systems with data from several sources (weather, holidays, events, social reputation, and online prices/inventory) and machine learning interpretable algorithms to develop booking cancellation prediction models for the hotels. In a real production environment, improvement of the forecast accuracy due to the use of these models could enable hoteliers to decrease the number of cancellations, thus, increasing confidence in demand-management decisions. Moreover, this work shows that improvement of the demand forecast would allow hoteliers to better understand their net demand, that is, current demand minus predicted cancellations. Simultaneously, by focusing not only on forecast accuracy but also on its explicability, this work illustrates one other advantage of the application of these types of techniques in forecasting: the interpretation of the predictions of the model. By exposing cancellation drivers, models help hoteliers to better understand booking cancellation patterns and enable the adjustment of a hotel’s cancellation policies and overbooking tactics according to the characteristics of its bookings.
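
The interpretability point can be sketched as follows (synthetic bookings and hypothetical features, not the hotels' property-management data): an interpretable model whose coefficients expose the cancellation drivers the authors describe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
lead_time = rng.uniform(0, 365, n)   # days between booking and arrival
price_drop = rng.uniform(0, 1, n)    # online price drop observed since booking

# synthetic behavior: longer lead times and larger price drops
# make cancellation more likely
logit = 0.01 * lead_time + 2.0 * price_drop - 4.0
cancelled = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([lead_time, price_drop])
model = LogisticRegression(max_iter=1000).fit(X, cancelled)
for name, coef in zip(["lead_time", "price_drop"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")  # sign and magnitude expose each driver
```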
37

Beggar, Omar El, Mohammed Ramdani, and Mohamed Kissi. "Design and development of a fuzzy explainable expert system for a diagnostic robot of COVID-19." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6940. http://dx.doi.org/10.11591/ijece.v13i6.pp6940-6951.

Abstract:
Expert systems have been widely used in medicine to diagnose different diseases. However, these rule-based systems only explain why and how their outcomes are reached. The rules leading to those outcomes are also expressed in a machine language and confronted with the familiar problems of coverage and specificity. This fact prevents procuring expert systems with fully human-understandable explanations. Furthermore, early diagnosis involves a high degree of uncertainty and vagueness, which constitutes another challenge to overcome in this study. This paper aims to design and develop a fuzzy explainable expert system for coronavirus disease-2019 (COVID-19) diagnosis that could be incorporated into medical robots. The proposed medical robotic application deduces the likelihood level of contracting COVID-19 from the entered symptoms, the personal information, and the patient's activities. The proposal integrates fuzzy logic to deal with uncertainty and vagueness in diagnosis. Besides, it adopts a hybrid explainable artificial intelligence (XAI) technique to provide different explanation forms. In particular, the textual explanations are generated as rules expressed in a natural language while avoiding coverage and specificity problems. Therefore, the proposal could help overwhelmed hospitals during the epidemic propagation and avoid contamination using a solution with a high level of explicability.
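
A toy sketch of the fuzzy-diagnosis idea (hypothetical membership functions, thresholds, and rule, not the system described in the paper): symptoms are fuzzified and combined by a min-based rule whose firing degree doubles as a natural-language explanation.

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def covid_likelihood(temp_c, cough_level, contact_level):
    # fuzzify the inputs (all membership parameters are made up)
    fever = triangular(temp_c, 37.0, 39.0, 41.0)
    cough = triangular(cough_level, 0.2, 0.6, 1.0)
    contact = triangular(contact_level, 0.1, 0.5, 1.0)
    # hypothetical rule: IF fever AND cough AND contact THEN likelihood is high
    degree = min(fever, cough, contact)
    # textual explanation in natural language, as the paper advocates
    print(f"fever={fever:.2f}, cough={cough:.2f}, contact={contact:.2f} "
          f"-> likelihood of contracting COVID-19: {degree:.2f}")
    return degree

covid_likelihood(38.5, 0.7, 0.8)  # prints an explanation and returns 0.40
```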
38

Ursin, Frank, Cristian Timmermann, and Florian Steger. "Ethical Implications of Alzheimer’s Disease Prediction in Asymptomatic Individuals through Artificial Intelligence." Diagnostics 11, no. 3 (March 4, 2021): 440. http://dx.doi.org/10.3390/diagnostics11030440.

Abstract:
Biomarker-based predictive tests for subjectively asymptomatic Alzheimer’s disease (AD) are utilized in research today. Novel applications of artificial intelligence (AI) promise to predict the onset of AD several years in advance without determining biomarker thresholds. Until now, little attention has been paid to the new ethical challenges that AI brings to early diagnosis in asymptomatic individuals, beyond contributing to research purposes, when we still lack adequate treatment. The aim of this paper is to explore the ethical arguments put forward for AI-aided AD prediction in subjectively asymptomatic individuals and their ethical implications. The ethical assessment is based on a systematic literature search. A thematic analysis of the 18 included publications was conducted inductively. The ethical framework includes the principles of autonomy, beneficence, non-maleficence, and justice. Reasons for offering predictive tests to asymptomatic individuals are the right to know, a positive balance in the risk-benefit assessment, and the opportunity for future planning. Reasons against are the lack of disease-modifying treatment, the accuracy and explicability of AI-aided prediction, the right not to know, and threats to social rights. We conclude that there are serious ethical concerns in offering early diagnosis to asymptomatic individuals, and the issues raised by the application of AI add to the already known issues. Nevertheless, pre-symptomatic testing should only be offered on request to avoid inflicted harm. We recommend developing training for physicians in communicating AI-aided prediction.
39

Hickman, Eleanore, and Martin Petrin. "Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective." European Business Organization Law Review 22, no. 4 (October 6, 2021): 593–625. http://dx.doi.org/10.1007/s40804-021-00224-0.

Abstract:
AI will change many aspects of the world we live in, including the way corporations are governed. Many efficiencies and improvements are likely, but there are also potential dangers, including the threat of harmful impacts on third parties, discriminatory practices, data and privacy breaches, fraudulent practices and even ‘rogue AI’. To address these dangers, the EU published ‘The Expert Group’s Policy and Investment Recommendations for Trustworthy AI’ (the Guidelines). The Guidelines produce seven principles from its four foundational pillars of respect for human autonomy, prevention of harm, fairness, and explicability. If implemented by business, the impact on corporate governance will be substantial. Fundamental questions at the intersection of ethics and law are considered, but because the Guidelines only address the former without (much) reference to the latter, their practical application is challenging for business. Further, while they promote many positive corporate governance principles—including a stakeholder-oriented (‘human-centric’) corporate purpose and diversity, non-discrimination, and fairness—it is clear that their general nature leaves many questions and concerns unanswered. In this paper we examine the potential significance and impact of the Guidelines on selected corporate law and governance issues. We conclude that more specificity is needed in relation to how the principles therein will harmonise with company law rules and governance principles. However, despite their imperfections, until harder legislative instruments emerge, the Guidelines provide a useful starting point for directing businesses towards establishing trustworthy AI.
APA, Harvard, Vancouver, ISO, and other styles
40

Morley, Jessica, Luciano Floridi, Libby Kinsey, and Anat Elhalal. "From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices." Science and Engineering Ethics 26, no. 4 (December 11, 2019): 2141–68. http://dx.doi.org/10.1007/s11948-019-00165-5.

Full text
Abstract:
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960. 10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). In recent years, however, symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. That debate has primarily focused on principles, the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability), rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to help close the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results may be readily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and provides a summary of future research needs.
APA, Harvard, Vancouver, ISO, and other styles
41

Hà, Tiên-Dung, and Peter A. Chow-White. "The cancer multiple: Producing and translating genomic big data into oncology care." Big Data & Society 8, no. 1 (January 2021): 205395172097899. http://dx.doi.org/10.1177/2053951720978991.

Full text
Abstract:
This article provides an ethnographic account of how Big Data biology is produced, interpreted, debated, and translated in a Big Data-driven cancer clinical trial, entitled “Personalized OncoGenomics,” in Vancouver, Canada. We delve into the epistemological differences between clinical judgment, pathological assessment, and bioinformatic analysis of cancer. To unpack these differences, we analyze the set of gazes required to produce Big Data biology in cancer care: the clinical gaze, the molecular gaze, and the informational gaze. We are concerned with how these gazes interact and depend on one another to produce Big Data biology and translate it into clinical knowledge. To that end, our central research questions ask: How do medical practitioners and data scientists interact, contest, and collaborate to produce and translate Big Data into clinical knowledge? What counts as actionable and reliable data in cancer decision-making? How does the explicability or translatability of genomic Big Data come to redefine or contradict medical practice? The article contributes to current debates on whether Big Data engenders new questions and approaches to biology, or whether Big Data biology is merely an extension of early modern natural history and biology. This ethnographic account highlights how genomic Big Data, which underpins the mechanism of personalized medicine, allows oncologists to understand and diagnose cancer in a different light, yet does not revolutionize or disrupt medical oncology at an institutional level. Rather, personalized medicine depends on different styles of (medical) thought, gaze, and practice to be produced and made intelligible.
APA, Harvard, Vancouver, ISO, and other styles
42

Hübner, Ursula H., Nicole Egbert, and Georg Schulte. "Clinical Information Systems – Seen through the Ethics Lens." Yearbook of Medical Informatics 29, no. 01 (August 2020): 104–14. http://dx.doi.org/10.1055/s-0040-1701996.

Full text
Abstract:
Objective: The more people use clinical information systems (CIS) beyond their traditional intramural confines, the more promising the benefits, and the more daunting the risks. This review therefore explores the areas of ethical debate prompted by CIS conceptualized as smart systems reaching out to patients and citizens. Furthermore, it investigates the ethical competencies and education needed to use these systems appropriately. Methods: A literature review covering ethics topics in combination with clinical and health information systems, clinical decision support, health information exchange, and various mobile devices and media was performed, searching the MEDLINE database for articles from 2016 to 2019 with a focus on 2018 and 2019. A second search combined these keywords with education. Results: By far, most of the discourses were dominated by privacy, confidentiality, and informed consent issues. Intertwined with confidentiality and clear boundaries, the provider-patient relationship has gained much attention. The opacity of algorithms and the lack of explicability of their results pose a further challenge. The necessity of sociotechnical ethics education was underlined in many studies, with several advocating education for providers and patients alike; however, only a few publications expanded on ethical competencies. In the publications found, empirical research designs were employed to capture stakeholders’ attitudes, but not to evaluate specific implementations. Conclusion: Despite the broad discourses, ethical values have not yet found their firm place in empirically rigorous health technology evaluation studies. Similarly, sociotechnical ethics competencies evidently need detailed specification. These two gaps set the stage for further research at the junction of clinical information systems and ethics.
APA, Harvard, Vancouver, ISO, and other styles
43

Kempf, Olivier, and Eloïse Berthier. "IA, explicabilité et défense." Revue Défense Nationale N° 820, no. 5 (May 1, 2019): 65–73. http://dx.doi.org/10.3917/rdna.820.0065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Osti, João Alexandre Saviolo, Claudinei José Rodrigues, Clovis Ferreira do Carmo, Ana Carolina Peixoto, Sergio Henrique Canello Schalch, Adriana Sacioto Marcantonio, Fernanda Menezes França, and Cacilda Thais Janson Mercante. "Artificial floating islands as a tool for the water quality improvement of fishponds." Ambiente e Agua - An Interdisciplinary Journal of Applied Science 16, no. 6 (December 14, 2021): 1–16. http://dx.doi.org/10.4136/ambi-agua.2734.

Full text
Abstract:
In this study, artificial floating islands (AFIs), an ecotechnology colonized here by Eichhornia crassipes, were tested as a tool for improving the water quality of fishponds. The experiment was carried out under semi-intensive production during the grow-out period of Nile tilapia, comprising one production cycle. The design was completely randomized, with two treatments (with and without AFIs) and three replications. Temperature, dissolved oxygen, conductivity, pH, turbidity, total dissolved solids (TDS), transparency (Secchi depth) and concentrations of chlorophyll a (CL a), total nitrogen (TN), total ammonia nitrogen (TAN), total phosphorus (TP) and orthophosphate (PO4³⁻-P) were analyzed fortnightly in the fishponds. Principal Component Analysis (70.68% of explicability, i.e., variance explained) separated the fishponds into two groups based on their environmental characteristics. The fishponds with AFIs were associated with higher Secchi values and lower values of pH, turbidity, TDS and nutrient concentrations, whereas the fishponds without AFIs showed the highest values of these variables, except for Secchi depth. Within 30 days, the AFI ponds showed the lowest concentrations of TP and PO4³⁻-P; for CL a, TN and TAN, differences were recorded after 90 days. The use of AFIs demonstrated potential to conserve water quality in fishponds, notably for biologically assimilable elements (PO4³⁻-P and TAN) and for those directly related to eutrophication (P and N). Artificial floating islands should be encouraged for small and medium-sized farmers as a tool to improve water quality in fishponds. However, other AFI coverage rates must be evaluated, as well as the control of hydraulic retention rates. Keywords: aquaculture, ecotechnology, free-floating aquatic macrophytes.
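The 70.68% “explicability” reported above is the share of total variance captured by the retained principal components. Below is a minimal sketch of how such a figure is obtained, assuming standardized water-quality variables and synthetic stand-in data; the authors’ actual dataset and software are not shown.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical fortnightly water-quality measurements:
# rows = sampling events, columns = variables (pH, turbidity, TDS, Secchi, TP, TAN, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(24, 8))

# Standardize the variables, then fit a two-component PCA
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))

# Share of total variance captured by the two retained components;
# the study reports 70.68% for its ordination
print(f"Variance explained by PC1+PC2: {pca.explained_variance_ratio_.sum():.2%}")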
APA, Harvard, Vancouver, ISO, and other styles
45

Saporta, Gilbert. "Équité, explicabilité, paradoxes et biais." Statistique et société, no. 10 | 3 (December 1, 2022): 13–23. http://dx.doi.org/10.4000/statsoc.539.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Samraj, Tennyson. "Metaphysics: Intelligible Questions and the Explicable World of Intentionality." Athens Journal of Philosophy 1, no. 4 (November 30, 2022): 221–38. http://dx.doi.org/10.30958/ajphil.1-4-3.

Full text
Abstract:
Metaphysics deals with the intelligible world of questions and the explicable world of intentionality. Metaphysics is explicable, and its explicability is connected to questions about what there is to know about the nature of reality. While physics deals with what is and what else there is, metaphysics deals with the nature of reality and what else there is to know about it. If the content of metaphysics is considered as "answers" to questions related to cosmology and consciousness, then metaphysical claims must be understood in the context of the questions that necessitate such claims; for without understanding the relevance of the questions, we cannot establish the truth or falsity of metaphysical claims. The relevance of the questions is the basis for establishing the veracity of metaphysical distinctions. Hence, all metaphysical distinctions are non-reductive explanations of what is considered reductive. The content of consciousness or intentionality involves the following metaphysical distinctions: matter/mind, essence/existence, space/time, concrete/abstract, particular/universal, and contingent/necessary. These distinctions are made possible by the questions raised by the intelligent mind. Two questions connect physics and metaphysics: what is there, and what is the nature of what is there. Two further questions sustain our interest in physics and metaphysics: what else is there to know, and what else is there to know about the nature of reality. Reality and the nature of reality are the same; however, because the mind makes this distinction, we can state that what is physical is an empirical given, and what is metaphysical is a phenomenological or existential given. Keywords: metaphysics, intentionality, subjectivity, creativity, freedom and time
APA, Harvard, Vancouver, ISO, and other styles
47

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.

Full text
Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains, set up by attackers to perform illegal activities aimed at targeting governments, violating the privacy of organizations, and even manipulating everyday users. Detecting these harmful domains is therefore necessary to combat the growing number of network attacks. Machine Learning (ML) models have shown significant results in the detection of malicious domains. However, the “black box” nature of complex ML models obstructs their wide-ranging acceptance in some fields. The emergence of Explainable Artificial Intelligence (XAI) has successfully brought interpretability and explicability to complex models; moreover, post hoc XAI methods enable interpretability without affecting model performance. This study proposes an Explainable Artificial Intelligence (XAI) model to detect malicious domains on a recent dataset containing 45,000 samples of malicious and non-malicious domains. Initially, several interpretable ML models, such as Decision Tree (DT) and Naïve Bayes (NB), and black-box ensemble models, such as Random Forest (RF), Extreme Gradient Boosting (XGB), AdaBoost (AB), and CatBoost (CB), were implemented, and XGB was found to outperform the other classifiers. The post hoc XAI global surrogate model (Shapley additive explanations) and the local surrogate LIME were then used to generate explanations of the XGB predictions. Two sets of experiments were performed: first with a preprocessed dataset, and then with features selected by the Sequential Forward Feature Selection algorithm. The results demonstrate that the ML algorithms were able to distinguish benign and malicious domains with overall accuracies ranging from 0.8479 to 0.9856. The ensemble classifier XGB achieved the highest result, with an AUC of 0.9991 and accuracy of 0.9856 before feature selection, and an AUC of 0.999 and accuracy of 0.9818 after feature selection. The proposed model outperformed the benchmark study.
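A minimal sketch of the pipeline this abstract describes, assuming synthetic tabular features in place of the authors’ 45,000-domain dataset: train an XGBoost classifier, score it by AUC, and derive post hoc SHAP attributions (LIME would supply the analogous local explanations). This is illustrative only, not the authors’ code.

import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical features for benign (0) vs. malicious (1) domains
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Post hoc attributions with SHAP; TreeExplainer supports XGBoost models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))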
APA, Harvard, Vancouver, ISO, and other styles
48

García Marzá, Domingo. "Ética digital discursiva: de la explicabilidad a la participación." Daimon, no. 90 (September 1, 2023): 99–114. http://dx.doi.org/10.6018/daimon.562821.

Full text
Abstract:
This article presents a proposal for a dialogic digital ethics based on a critical reading of Ethics Guidelines for Trustworthy AI (2019), the document prepared for the European Commission by its independent high-level expert group. This is a digital ethics whose normative horizon for action and criterion of justice lie in dialogue and possible agreement among all agents involved in and affected by the digital reality. The aim is to show that the participation of all parties involved is not merely advisable but morally required as part of the Commission’s effort to generate a common European will and governance to deal with the ongoing fourth industrial revolution. The acknowledgement of equal dignity implied by people-centric artificial intelligence (AI) is unthinkable without the possibility of equal participation; without it, trust can be neither generated nor guaranteed. As we intend to show, the new principle of explicability plays a decisive role in this objective, as a principle with moral and not merely instrumental value.
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Li, Nannan Zhang, Tongyang Zhao, Hao Zhang, Jinyu Chang, Jintao Tao, and Yujin Chi. "Lithium-Bearing Pegmatite Identification, Based on Spectral Analysis and Machine Learning: A Case Study of the Dahongliutan Area, NW China." Remote Sensing 15, no. 2 (January 13, 2023): 493. http://dx.doi.org/10.3390/rs15020493.

Full text
Abstract:
Lithium (Li) resources are widely used in many strategic emerging fields; recently, several large-scale to super-large-scale pegmatite-type lithium deposits have been discovered in Dahongliutan, NW China. However, the natural environmental conditions in the Dahongliutan area are extremely harsh, making manned field exploration difficult. Efficient and rapid methods for identifying Li-rich pegmatites based on hyperspectral remote sensing technology therefore have great potential to promote the discovery of lithium resources. Ground spectral research is the cornerstone of regional hyperspectral imaging (HSI) for geological mapping. Direct observation and analysis by the naked eye depend mainly on the experience and knowledge of experts, whereas machine learning (ML) technology has the advantages of automatic feature extraction and relationship characterization. Identifying the spectral features of Li-rich pegmatite via ML can therefore accurately and efficiently distinguish the spectral characteristics of Li-rich and Li-poor pegmatites, help identify the strongest predictors of Li-pegmatite, and lay a foundation for the accurate extraction of Li-rich pegmatites in the West Kunlun region using HSI. The spectral characteristics of pegmatite in the visible near-infrared and shortwave infrared (VNIR–SWIR) spectra were observed and analyzed, and Li-rich pegmatite was identified based on diagnostic spectral waveform parameters of a local wavelength range. The results demonstrate that the ML recognition model based on spectral parameters of the local wavelength range has good model explicability, with an area under the curve (AUC) of 0.843. A recognition model based on the full-range spectrum achieved higher precision, with an AUC of up to 0.977. Evaluation of the Gini coefficient identified the strongest predictors, which were used to map the lithological spatial distribution based on GF-5 imagery in Akesayi and the 509 mines, producing encouraging lithological mapping results (Kappa > 0.9, OA > 94%).
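A hedged sketch of the kind of workflow the abstract outlines, with synthetic spectra standing in for the VNIR–SWIR measurements and a random forest standing in for the (unspecified) classifier: train on band features, score by AUC, and rank bands by Gini importance. The authors’ actual models and data differ.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical spectra: rows = samples, columns = wavelength bands
rng = np.random.default_rng(7)
spectra = rng.normal(size=(300, 50))
labels = (spectra[:, 20] - spectra[:, 35] > 0).astype(int)  # toy absorption contrast

X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Gini-based importances highlight the strongest predictor bands
top_bands = np.argsort(clf.feature_importances_)[::-1][:5]
print("Most informative bands (indices):", top_bands)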
APA, Harvard, Vancouver, ISO, and other styles
50

Minati, Gianfranco. "On Theoretical Incomprehensibility." Philosophies 4, no. 3 (August 15, 2019): 49. http://dx.doi.org/10.3390/philosophies4030049.

Full text
Abstract:
This contribution tentatively outlines the presumed conceptual duality between the issues of incompleteness and incomprehensibility, the first being more formal in nature, declinable in various ways, and specified in the literature as theoretical incompleteness: incompleteness that is theoretical rather than temporary (the latter being admissible, with completion pursuable). As considered in the literature, theoretical incompleteness refers to uncertainty principles in physics, incompleteness in mathematics, oracles for the Turing Machine, logical openness as the multiplicity of models focusing on coherence rather than optimum selection, fuzziness, and quasiness, e.g., quasi-crystals, quasi-systems, and quasi-periodicity, intended as the space of equivalences that allows for coherent processes of emergence. The issue of incomprehensibility cannot be considered without reference to an agent endowed with cognitive abilities. In this article, incomprehensibility is understood as what is not scientifically explicable with the available knowledge; such incomprehensibility may be temporary, pending theoretical and technological advances, or deemed absolute, coinciding with definitive theoretical non-explicability. We consider theoretical incomprehensibility mainly in three ways: as the inexhaustibility of the multiplicity of constructivist reality, as given by the theoretically incomprehensible endless loop of incomprehensible–comprehensible, and as given by existential questions. Moreover, theoretical incomprehensibility is intended as evidence of the logical openness both of the world and of understanding itself. The role of theoretical incomprehensibility is intended as a source of theoretical research issues such as paradoxes and paradigm shifts, where it is a matter of having cognitive strategies and approaches to look for, cohabit with, combine, and use comprehensibility and (theoretical) incomprehensibility. The usefulness of imaginary numbers comes to mind. Can we support such research into local, temporary, and theoretical incomprehensibility with suitable approaches, for instance software tools that simulate the logical frameworks of incomprehensibility? Is this a step toward a kind of artificial creativity leading to paradigm shifts? The most significant novelty of the article lies in its focus on the concept of theoretical incomprehensibility, distinguishing it from incomprehensibility in general and considering different forms of understanding. It is a matter of identifying strategies to act and coexist with the theoretically incomprehensible, and to represent and use it, for example when dealing with imaginary numbers and quantum contexts where classical comprehensibility is theoretically impossible. Can we think of forms of non-classical understanding? In this article, these topics are developed in conceptual and philosophical ways.
APA, Harvard, Vancouver, ISO, and other styles