Journal articles on the topic "AI DECISIONS"

To see the other types of publications on this topic, follow the link: AI DECISIONS.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "AI DECISIONS".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, whenever such details are available in the publication's metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Li, Zhuoyan, Zhuoran Lu, and Ming Yin. "Modeling Human Trust and Reliance in AI-Assisted Decision Making: A Markovian Approach." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 6056–64. http://dx.doi.org/10.1609/aaai.v37i5.25748.

Abstract:
The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by emotions like humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying the human-AI interaction in AI-assisted decision making, by characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on their trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors like decision stakes and their interaction experiences.
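The modeling idea in the abstract above is easy to illustrate. Below is a minimal sketch in Python, not the authors' implementation: a two-state hidden Markov model whose latent state is the decision maker's trust in the AI and whose observations are binary reliance decisions. All parameter values and state labels are illustrative assumptions.

```python
import numpy as np

# Hidden states: 0 = low trust, 1 = high trust (labels assumed for illustration).
pi = np.array([0.5, 0.5])        # initial trust distribution (assumed)
A = np.array([[0.8, 0.2],        # P(next trust | current trust):
              [0.1, 0.9]])       # trust tends to persist between decisions
B = np.array([[0.7, 0.3],        # P(override, rely | low trust)
              [0.2, 0.8]])       # P(override, rely | high trust)

def forward_filter(obs):
    """Scaled forward pass: P(trust | history) and the sequence log-likelihood."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return alpha, loglik

def predict_next_reliance(obs):
    """P(the next decision relies on the AI), marginalizing over latent trust."""
    alpha, _ = forward_filter(obs)
    return (alpha @ A) @ B[:, 1]

# Example: a user who mostly relied on the AI, then overrode it twice in a row.
history = [1, 1, 1, 0, 0]        # 1 = relied on the AI recommendation
print(predict_next_reliance(history))  # ~0.51, below the ~0.63 long-run rate
```

Fitting such a model to experiment logs (e.g., with Baum-Welch) would recover the trust dynamics the paper studies; the point of the sketch is only the structure: trust evolves as a latent state, and reliance is its noisy emission.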
2

Zhang, Angie, Olympia Walker, Kaci Nguyen, Jiajun Dai, Anqing Chen, and Min Kyung Lee. "Deliberating with AI: Improving Decision-Making for the Future through Participatory AI Design and Stakeholder Deliberation." Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (April 14, 2023): 1–32. http://dx.doi.org/10.1145/3579601.

Abstract:
Research exploring how to support decision-making has often used machine learning to automate or assist human decisions. We take an alternative approach for improving decision-making, using machine learning to help stakeholders surface ways to improve and make fairer decision-making processes. We created "Deliberating with AI", a web tool that enables people to create and evaluate ML models in order to examine strengths and shortcomings of past decision-making and deliberate on how to improve future decisions. We apply this tool to a people-selection context, having stakeholders, decision makers (faculty) and decision subjects (students), use the tool to improve graduate school admission decisions. Through our case study, we demonstrate how the stakeholders used the web tool to create ML models that they used as boundary objects to deliberate over organizational decision-making practices. We share insights from our study to inform future research on stakeholder-centered participatory AI design and technology for organizational decision-making.
3

Shrestha, Yash Raj, Shiko M. Ben-Menahem, and Georg von Krogh. "Organizational Decision-Making Structures in the Age of Artificial Intelligence." California Management Review 61, no. 4 (July 13, 2019): 66–83. http://dx.doi.org/10.1177/0008125619862257.

Abstract:
How does organizational decision-making change with the advent of artificial intelligence (AI)-based decision-making algorithms? This article identifies the idiosyncrasies of human and AI-based decision making along five key contingency factors: specificity of the decision search space, interpretability of the decision-making process and outcome, size of the alternative set, decision-making speed, and replicability. Based on a comparison of human and AI-based decision making along these dimensions, the article builds a novel framework outlining how both modes of decision making may be combined to optimally benefit the quality of organizational decision making. The framework presents three structural categories in which decisions of organizational members can be combined with AI-based decisions: full human to AI delegation; hybrid—human-to-AI and AI-to-human—sequential decision making; and aggregated human–AI decision making.
4

Wang, Yinying. "Artificial intelligence in educational leadership: a symbiotic role of human-artificial intelligence decision-making." Journal of Educational Administration 59, no. 3 (February 17, 2021): 256–70. http://dx.doi.org/10.1108/jea-10-2020-0216.

Abstract:
Purpose. Artificial intelligence (AI) refers to a type of algorithms or computerized systems that resemble human mental processes of decision-making. This position paper looks beyond the sensational hyperbole of AI in teaching and learning; instead, it aims to explore the role of AI in educational leadership. Design/methodology/approach. To explore the role of AI in educational leadership, I synthesized the literature that intersects AI, decision-making, and educational leadership from multiple disciplines, such as computer science, educational leadership, administrative science, judgment and decision-making, and neuroscience. Grounded in the intellectual interrelationships between AI and educational leadership since the 1950s, this paper starts by conceptualizing decision-making, including both individual and organizational decision-making, as the foundation of educational leadership. Next, I elaborate on the symbiotic role of human-AI decision-making. Findings. With its efficiency in collecting, processing, and analyzing data and providing real-time or near real-time results, AI can bring analytical efficiency to assist educational leaders in making data-driven, evidence-informed decisions. However, AI-assisted data-driven decision-making may run against value-based moral decision-making. Taken together, both leaders' individual decision-making and organizational decision-making are best handled by using a blend of data-driven, evidence-informed decision-making and value-based moral decision-making. AI can function as an extended brain in making data-driven, evidence-informed decisions, and the shortcomings of AI-assisted data-driven decision-making can be overcome by human judgment guided by moral values. Practical implications. The paper concludes with two recommendations for educational leadership practitioners' decision-making and future scholarly inquiry: keeping a watchful eye on biases and minding ethically compromised decisions. Originality/value. This paper brings together two fields, educational leadership and AI, that have been growing up together since the 1950s and mostly growing apart till the late 2010s. To explore the role of AI in educational leadership, the paper starts with the foundation of leadership, decision-making, covering both leaders' individual decisions and collective organizational decisions, and then synthesizes the literature that intersects AI, decision-making, and educational leadership from multiple disciplines to delineate the role of AI in educational leadership.
5

Stone, Merlin, Eleni Aravopoulou, Yuksel Ekinci, Geraint Evans, Matt Hobbs, Ashraf Labib, Paul Laughlin, Jon Machtynger, and Liz Machtynger. "Artificial intelligence (AI) in strategic marketing decision-making: a research agenda." Bottom Line 33, no. 2 (April 13, 2020): 183–200. http://dx.doi.org/10.1108/bl-03-2020-0022.

Abstract:
Purpose. The purpose of this paper is to review the literature on applications of artificial intelligence (AI) in strategic situations and identify the research needed in the area of applying AI to strategic marketing decisions. Design/methodology/approach. The approach was to carry out a literature review and to consult marketing experts, who were invited to contribute to the paper. Findings. There is little research into applying AI to strategic marketing decision-making. This research is needed, as the frontier of AI application to decision-making is moving in many management areas from operational to strategic. Given the competitive nature of such decisions and the insights from applying AI to defence and similar areas, it is time to focus on applying AI to strategic marketing decisions. Research limitations/implications. The application of AI to strategic marketing decision-making is known to be taking place, but as it is commercially sensitive, data is not available to the authors. Practical implications. There are strong implications for all businesses, particularly large businesses in competitive industries, where failure to deploy AI in the face of competition from firms that have deployed it could be dangerous. Social implications. The public sector is a very important marketing decision maker. Although in most cases it does not operate competitively, it must make decisions about making different services available to different citizens and identify the risks of not providing services to certain citizens; so this paper is relevant to the public sector. Originality/value. To the best of the authors' knowledge, this is one of the first papers to probe the deployment of AI in strategic marketing decision-making.
6

Longoni, Chiara, Andrea Bonezzi, and Carey K. Morewedge. "Resistance to medical artificial intelligence is an attribute in a compensatory decision process: response to Pezzo and Beckstead (2020)." Judgment and Decision Making 15, no. 3 (May 2020): 446–48. http://dx.doi.org/10.1017/s1930297500007233.

Abstract:
In Longoni et al. (2019), we examine how algorithm aversion influences utilization of healthcare delivered by human and artificial intelligence providers. Pezzo and Beckstead's (2020) commentary asks whether resistance to medical AI takes the form of a noncompensatory decision strategy, in which a single attribute determines provider choice, or whether resistance to medical AI is one of several attributes considered in a compensatory decision strategy. We clarify that our paper both claims and finds that, all else equal, resistance to medical AI is one of several attributes (e.g., cost and performance) influencing healthcare utilization decisions. In other words, resistance to medical AI is a consequential input to compensatory decisions regarding healthcare utilization and provider choice decisions, not a noncompensatory decision strategy. People do not always reject healthcare provided by AI, and our article makes no claim that they do.
7

Senyk, Svitlana, Hanna Churpita, Iryna Borovska, Tetiana Kucher, and Andrii Petrovskyi. "The problems of defining the legal nature of the court judgement." Revista Amazonia Investiga 11, no. 56 (October 18, 2022): 48–55. http://dx.doi.org/10.34069/ai/2022.56.08.5.

Abstract:
Description: The purpose of the article is to consider the procedural legislation on the functioning of court rulings as one of the types of court decisions. The subject of the study is court rulings in the Civil Procedure of Ukraine. The scientific study of rulings in civil proceedings was conducted on the basis of the complex use of general scientific and special methods of scientific knowledge, namely: dialectical, formal-dogmatic, system analysis, system-structural, hermeneutic, comparative-legal, legal modeling, and theoretical generalization. Results of the research. The formation and development of the doctrine of court decisions is analyzed. The notions of a court decision and a court ruling are defined, and the provisions of normative legal acts on this issue are considered. The features inherent in court decisions, and court rulings in particular, as well as the rules for issuing court decisions, are considered. Practical meaning. A clear system of requirements for a court decision as a procedural document and law enforcement act is established. Value / originality. Emphasis is placed on the need for further research to reveal the essence of the court decision as one of the elements of the mechanism for regulating legal relations.
8

Kortz, Mason, Jessica Fjeld, Hannah Hilligoss, and Adam Nagy. "Is Lawful AI Ethical AI?" Morals & Machines 2, no. 1 (2022): 60–65. http://dx.doi.org/10.5771/2747-5174-2022-1-60.

Abstract:
Attempts to impose moral constraints on autonomous, artificial decision-making systems range from “human in the loop” requirements to specialized languages for machine-readable moral rules. Regardless of the approach, though, such proposals all face the challenge that moral standards are not universal. It is tempting to use lawfulness as a proxy for morality; unlike moral rules, laws are usually explicitly defined and recorded – and they are usually at least roughly compatible with local moral norms. However, lawfulness is a highly abstracted and, thus, imperfect substitute for morality, and it should be relied on only with appropriate caution. In this paper, we argue that law-abiding AI systems are a more achievable goal than moral ones. At the same time, we argue that it’s important to understand the multiple layers of abstraction, legal and algorithmic, that underlie even the simplest AI-enabled decisions. The ultimate output of such a system may be far removed from the original intention and may not comport with the moral principles to which it was meant to adhere. Therefore, caution is required lest we develop AI systems that are technically law-abiding but still enable amoral or immoral conduct.
9

Tsai, Yun-Cheng, Fu-Min Szu, Jun-Hao Chen, and Samuel Yen-Chi Chen. "Financial Vision-Based Reinforcement Learning Trading Strategy." Analytics 1, no. 1 (August 9, 2022): 35–53. http://dx.doi.org/10.3390/analytics1010004.

Abstract:
Recent advances in artificial intelligence (AI) for quantitative trading have produced notable, sometimes superhuman, trading performance. However, if we use AI without proper supervision, it can lead to wrong choices and huge losses. Therefore, we need to ask why and how AI makes decisions so that people can trust it. By understanding the decision process, people can correct errors, so this need for explainability highlights the challenge of building intelligent trading technology that can explain its decisions. This research focuses on financial vision, an explainable approach, and the link to its programmatic implementation. We hope our paper can serve as a reference on superhuman performance and on the reasons for decisions in trading systems.
10

Singh, Surya Partap, Amitesh Srivastava, Suryansh Dwivedi, and Anil Kumar Pandey. "AI Based Recruitment Tool." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 2815–19. http://dx.doi.org/10.22214/ijraset.2023.52193.

Abstract:
In this study, the researchers narrowed their focus to the application of algorithmic decision-making in ranking job applicants. Instead of comparing algorithms to human decision-makers, the study examined participants' perceptions of different types of algorithms. The researchers varied the complexity and transparency of the algorithm to understand how these factors influenced participants' perceptions. The study explored participants' trust in the algorithm's decision-making abilities, fairness of the decisions, and emotional responses to the situation. Unlike previous work, the study emphasized the impact of algorithm design and presentation on perceptions. The findings are important for algorithm designers, especially employers subject to public scrutiny for their hiring practices.
11

Hah, Hyeyoung, and Deana Goldin. "Moving toward AI-assisted decision-making: Observation on clinicians’ management of multimedia patient information in synchronous and asynchronous telehealth contexts." Health Informatics Journal 28, no. 1 (January 2022): 146045822210770. http://dx.doi.org/10.1177/14604582221077049.

Abstract:
Background. Artificial intelligence (AI) intends to support clinicians' patient diagnosis decisions by processing and identifying insights from multimedia patient information. Objective. We explored clinicians' current decision-making patterns using multimedia patient information (MPI) provided by AI algorithms and identified areas where AI can support clinicians in diagnostic decision-making. Design. We recruited 87 advanced practice nursing (APN) students who had experience making diagnostic decisions using AI algorithms under various care contexts, including telehealth and other healthcare modalities. The participants described their diagnostic decision-making experiences using video-, image-, and audio-based MPI. Results. Clinicians processed multimedia patient information differentially, such that their focus, selection, and utilization of MPI influenced diagnosis and satisfaction levels. Conclusions and implications. To streamline collaboration between AI and clinicians across healthcare contexts, AI should understand clinicians' patterns of MPI processing under various care environments and provide them with interpretable analytic results. Furthermore, clinicians must be trained on the interface and contents of AI technology and analytic assistance.
12

de Fine Licht, Karl, and Jenny de Fine Licht. "Artificial intelligence, transparency, and public decision-making." AI & SOCIETY 35, no. 4 (March 19, 2020): 917–26. http://dx.doi.org/10.1007/s00146-020-00960-w.

Abstract:
Abstract The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency when it comes to how the general public come to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers and produce a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.
13

Stefan, Radu, and George Carutasu. "A Validation Model for Ethical Decisions in Artificial Intelligence Systems using Personal Data." MATEC Web of Conferences 343 (2021): 07016. http://dx.doi.org/10.1051/matecconf/202134307016.

Abstract:
Decision making, a fundamental human process, has been increasingly supported by computer systems since the second part of the last century. In the 21st century, intelligent decision support systems utilize Artificial Intelligence (AI) techniques to enhance and improve support for decision makers. Often the decisions suggested by an AI system are based on personal data, such as credit scoring in financial institutions or purchase behavior in online shops and the like. Beyond the protection of personal data under the General Data Protection Regulation (GDPR), developers and operators of decisional AI systems need to ensure that ethical standards are met. With respect to individuals, arguably the most relevant ethical aspect is the fairness principle, which ensures individuals are treated fairly. In this paper we present an evaluation model for the decision ethicality of AI systems with respect to the fairness principle. The presented model treats any AI system as a "black box". It separates sensitive from general attributes in the input matrix and measures the distance between predicted values when the inputs for sensitive attributes are altered. The variance of the outputs is interpreted as individual fairness, that is, treating similar individuals similarly. In addition, the model also reports on group fairness. The validation model helps to determine to what extent an AI system decides fairly for individuals and groups, and can thus be used as a test tool in the development and operation of AI systems using personal data.
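Because the model described above treats the AI system as a black box, its core check is straightforward to sketch. The Python below is a minimal rendition under assumptions of our own (the attribute names and the toy scoring rule are invented for illustration): it alters only the sensitive attributes of one individual, holds the general attributes fixed, and reads the spread of the predictions as an individual-fairness gap.

```python
import itertools

def individual_fairness_gap(predict, x, sensitive_values):
    """Max prediction spread when only sensitive attributes are varied.

    predict:          black-box scoring function, feature dict -> float
    x:                dict of attribute name -> value for one individual
    sensitive_values: dict of sensitive attribute name -> candidate values
    """
    names = list(sensitive_values)
    preds = []
    for combo in itertools.product(*(sensitive_values[n] for n in names)):
        variant = dict(x)                    # copy; general attributes stay fixed
        variant.update(zip(names, combo))    # alter only the sensitive attributes
        preds.append(predict(variant))
    return max(preds) - min(preds)           # 0.0 means treated identically

# Illustrative use with a toy scoring rule (assumed, not from the paper):
score = lambda p: 0.6 * p["income"] / 1e5 + (0.1 if p["gender"] == "m" else 0.0)
gap = individual_fairness_gap(score, {"income": 50_000, "gender": "f"},
                              {"gender": ["f", "m"]})
print(f"individual fairness gap: {gap:.2f}")  # 0.10 -> gender shifts the score
```

Averaging such gaps over a sample of individuals, or comparing average predictions between groups, gives the group-fairness reading the abstract also mentions.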
14

Duran, Sergi Gálvez. "Opening the Black-Box in Private-Law Employment Relationships: A Critical Review of the Newly Implemented Spanish Workers’ Council’s Right to Access Algorithms." Global Privacy Law Review 4, Issue 1 (February 1, 2023): 17–30. http://dx.doi.org/10.54648/gplr2023003.

Abstract:
Article 22 of the General Data Protection Regulation (GDPR) provides individuals with the right not to be subject to automated decisions. In this article, the author questions the extent to which the legal framework for automated decision-making in the GDPR is attuned to the employment context. More specifically, the author argues that an individual's right may not be the most appropriate approach to contesting artificial intelligence (AI) based decisions in situations involving dependency contracts, such as employment relationships. Furthermore, Article 22 GDPR derogations rarely apply in the employment context, which puts organizations on the wrong track when deploying AI systems to make decisions about hiring, performance, and termination. In this scenario, emerging initiatives are calling for a shift from an individual rights perspective to a collective governance approach over data as a way to leverage collective bargaining power. Taking inspiration from these different initiatives, I propose 'algorithmic co-governance' to address the lack of accountability and transparency in AI-based employment decisions. Algorithmic co-governance implies giving third parties (ideally, the workforce's legal representatives) the power to negotiate, correct, and overturn AI-based employment decision tools. In this context, Spain has implemented a law reform requiring that Workers' Councils be informed about the 'parameters, rules, and instructions' on which algorithmic decision-making is based, becoming the first law in the European Union requiring employers to share information about AI-based decisions with Workers' Councils. I use this reform to evaluate a potential algorithmic co-governance model in the workplace, highlighting some shortcomings that may undermine its quality and effectiveness. Keywords: Algorithms, Artificial Intelligence, AI Systems, Automated Decision-Making, Algorithmic Co-governance, Algorithmic Management, Data Protection, Privacy, GDPR, Employment Decisions, Right To Access Algorithms, Workers' Council.
15

Byelov, D., and M. Bielova. "Artificial intelligence in judicial proceedings and court decisions, potential and risks." Uzhhorod National University Herald. Series: Law 2, no. 78 (August 31, 2023): 315–20. http://dx.doi.org/10.24144/2307-3322.2023.78.2.50.

Abstract:
This article traces the role of artificial intelligence (AI) in the judiciary and its impact on judicial decision-making processes. It explores the potential of AI in the field of justice and also reveals the potential risks associated with its use. The article examines various applications of AI in the judicial system, including automated processing of legal information, analysis of large volumes of data, prediction of court decisions, and the use of assistant robots to support judges in decision-making. The use of AI can speed up judicial processes, improve access to justice, and reduce the influence of the human factor on judicial decisions. However, the article also draws attention to the potential risks of using AI in the judiciary. These risks include the possibility of algorithmic unfairness, lack of transparency of algorithms, breaches of data confidentiality and privacy, and liability issues for errors that may be made by AI. The authors recommend considering these risks when implementing AI in the judiciary and developing ethical standards and legal frameworks for its use. The general goal of the article is a balanced coverage of the potential and risks of AI in the judiciary, which helps readers get an objective picture of innovations in the judicial system and their impact on the process of judicial decision-making. The article also explores the use of AI in the judiciary and its impact on judicial decisions in global practice. It reviews current trends, problems, and prospects related to the use of AI in the legal system, and analyzes the legal, ethical, and social aspects of involving AI in court procedures. In addition, the article offers conclusions and recommendations for the further development of this technology in legal practice.
16

Bellaby, Ross W. "Can AI Weapons Make Ethical Decisions?" Criminal Justice Ethics 40, no. 2 (May 4, 2021): 86–107. http://dx.doi.org/10.1080/0731129x.2021.1951459.

17

Munyaka, Imani, Zahra Ashktorab, Casey Dugan, J. Johnson, and Qian Pan. "Decision Making Strategies and Team Efficacy in Human-AI Teams." Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (April 14, 2023): 1–24. http://dx.doi.org/10.1145/3579476.

Abstract:
Human-AI teams are increasingly prevalent in various domains. We investigate how the decision-making of a team member in a human-AI team impacts the outcome of the collaboration and perceived team-efficacy. In a large-scale study on Mechanical Turk (n=125), we find significant differences across decision-making styles and AI identity disclosure in an AI-driven collaborative game. We find that autocratic decision-making negatively impacts team-efficacy in human-AI teams, similar to its effects on human-only teams. We find that decision-making style and AI identity disclosure impact how individuals make decisions in a collaborative context. We discuss our findings on the differences in collaborative behavior between human-human-AI teams and human-AI-AI teams.
18

Murray, Daragh. "Using Human Rights Law to Inform States' Decisions to Deploy AI." AJIL Unbound 114 (2020): 158–62. http://dx.doi.org/10.1017/aju.2020.30.

Abstract:
States are investing heavily in artificial intelligence (AI) technology, and are actively incorporating AI tools across the full spectrum of their decision-making processes. However, AI tools are currently deployed without a full understanding of their impact on individuals or society, and in the absence of effective domestic or international regulatory frameworks. Although this haste to deploy is understandable given AI's significant potential, it is unsatisfactory. The inappropriate deployment of AI technologies risks litigation, public backlash, and harm to human rights. In turn, this is likely to delay or frustrate beneficial AI deployments. This essay suggests that human rights law offers a solution. It provides an organizing framework that states should draw on to guide their decisions to deploy AI (or not), and can facilitate the clear and transparent justification of those decisions.
19

Kraus, Sarit, Amos Azaria, Jelena Fiosina, Maike Greve, Noam Hazon, Lutz Kolbe, Tim-Benjamin Lembcke, Jörg P. Müller, Sören Schleibaum, and Mark Vollrath. "AI for Explaining Decisions in Multi-Agent Environments." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 09 (April 3, 2020): 13534–38. http://dx.doi.org/10.1609/aaai.v34i09.7077.

Abstract:
Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments where the human does not know the systems' goals since they may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings and properties such as fairness, envy and privacy. Generating explanations that will increase user satisfaction is very challenging; to this end, we propose a new research direction: Explainable decisions in Multi-Agent Environments (xMASE). We then review the state of the art and discuss research directions towards efficient methodologies and algorithms for generating explanations that will increase users' satisfaction from AI systems' decisions in multi-agent environments.
20

Klaus, Phil, and Judy Zaichkowsky. "AI voice bots: a services marketing research agenda." Journal of Services Marketing 34, no. 3 (April 10, 2020): 389–98. http://dx.doi.org/10.1108/jsm-01-2019-0043.

Abstract:
Purpose. This paper aims to document how AI has changed the way consumers make decisions and to propose how that change impacts services marketing, service research, and service management. Design/methodology/approach. A review of the literature and documentation of sales and customer service experiences support the evolution of bot-driven consumer decision-making, proposing the bot-driven service platform as a key component of the service experience. Findings. Today the focus is on convenience: the less time and effort, the better. The authors propose that AI has taken convenience to a new level for consumers. By using bots as their service of choice, consumers outsource their decisions to algorithms and hence give little attention to traditional consumer decision-making models and brand emphasis. At the moment this is especially true for low-involvement decisions, but high-involvement decisions are on the cusp of being delegated to AI. Therefore, management needs to change how it views consumers' decision-making processes and how services are being managed. Research limitations/implications. In an AI-convenience-driven service economy, the emphasis needs to be on search ranking or warehouse stock rather than on traditional drivers of brand value such as service quality. Customer experience management will shift from interaction with products and services toward interactions with new service platforms such as AI bots. Hence, service marketing as the authors know it might be in decline, replaced by an efficient complex-attribute computer decision-making model. Originality/value. The change in consumer behavior leads to a change in the service marketing approach needed in the world of AI. The bot, the new service platform, is now in charge of search and choice for many purchase situations.
21

Perveen, Nasira, Ashfaq Ahmad, Muhammad Usman, and Faiza Liaqat. "Study of Investment Decisions and Personal Characteristics through Risk Tolerance: Moderating Role of Investment Experience." Revista Amazonia Investiga 9, no. 34 (November 23, 2020): 57–68. http://dx.doi.org/10.34069/ai/2020.34.10.6.

Abstract:
Investment decisions can be affected by behavioral biases associated with personal characteristics. This study empirically investigates the effect of personal characteristics on investors' investment decisions through risk tolerance. Furthermore, investment experience moderates the nexus between personal characteristics and risk tolerance. A 24-item scale related to the selected constructs and variables was used. Data was collected from 175 individual investors on the Pakistan Stock Exchange. PLS-SEM was used for the statistical analysis. The findings indicate that extraversion has a substantial positive impact on investment decisions. Moreover, risk tolerance partially mediates the relationship between extraversion and investment decisions. The relationship between introversion and investment decisions is negative, and risk tolerance partially mediates this relationship as well. Furthermore, it is statistically proved that investment experience substantially moderates the association between extraversion and risk tolerance. However, investment experience does not play any conditional role in the association between introversion and risk tolerance. This study can be helpful for financial advisors in providing the best consultancy to their clients (investors) while considering their personal characteristics.
22

Fernández-Loría, Carlos, Foster Provost, and Xintian Han. "Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach." MIS Quarterly 45, no. 3 (September 1, 2022): 1635–60. http://dx.doi.org/10.25300/misq/2022/16749.

Abstract:
We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., Shapley additive explanations [SHAP], local interpretable model-agnostic explanations [LIME]) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well suited to explain system decisions. Specifically, we show that (1) features with a large importance weight for a model prediction may not affect the corresponding decision, and (2) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
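The counterfactual definition above (a set of inputs that causally drives the decision and is irreducible) can be made concrete with a short sketch. The Python below is our illustration, not the authors' procedure: it greedily prunes features while the decision still flips, returning a set whose change flips the decision and where dropping any single remaining feature breaks the flip. decide(), the applicant, and the reference values are assumed toy inputs.

```python
def counterfactual_explanation(decide, x, reference):
    """Greedy search for an irreducible feature set whose change flips decide(x)."""
    original = decide(x)

    def flips(feature_set):
        variant = dict(x)
        variant.update({f: reference[f] for f in feature_set})
        return decide(variant) != original

    explanation = set(x)
    if not flips(explanation):
        return None                    # changing everything still keeps the decision
    for f in sorted(explanation):      # try to drop each feature in turn
        if flips(explanation - {f}):
            explanation.remove(f)      # f was not needed for the flip
    return explanation                 # irreducible w.r.t. single-feature removals

# Toy loan rule (our illustration, not from the paper):
decide = lambda p: p["income"] > 40_000 and p["debt"] < 10_000
applicant = {"income": 30_000, "debt": 2_000, "age": 41}
reference = {"income": 80_000, "debt": 0, "age": 35}
print(counterfactual_explanation(decide, applicant, reference))  # {'income'}
```

Note the contrast the paper draws: an importance-weighting method might assign "age" a nonzero weight for the score, yet changing age never affects this decision, and that gap is exactly what the counterfactual set exposes.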
23

Zaman, Khansa. "Transformation of Marketing Decisions through Artificial Intelligence and Digital Marketing." Journal of Marketing Strategies 4, no. 2 (May 30, 2022): 353–64. http://dx.doi.org/10.52633/jms.v4i2.210.

Abstract:
Artificial Intelligence (AI) is instrumental to the strategic decisions of consumers and, given its competitive nature, has rapidly transformed the dynamics of the emerging digital world. The evolution of predictive marketing has increased the understanding of consumer decision-making. Moreover, AI has enabled many businesses to mine big consumer data to predict and fulfill customer expectations and provide customized products and services. AI's role has increased in operational marketing, such as the design and selection of ads, customer targeting, and customer analysis. Nevertheless, its role in strategic decision-making, employing machine learning techniques, knowledge representation, and computational intelligence, improves efficacy. This article aims to provide a comprehensive understanding of the role of AI in digital marketing so that businesses can understand their target audience better. Secondly, it accentuates the role of AI and predictive marketing in understanding complex consumer behavior by highlighting several solutions to predict the expectations of consumers. Moreover, the contribution of AI to managing customer relationships, with an active role for managers, is also one of the study's aims. The study also discusses the future of AI in marketing and managers' role in utilizing disruptive technology. The managerial implications of this paper are pertinent because deploying AI in competitive businesses is key to improving decision-making.
24

McEvoy, Fiona J. "Political Machines: Ethical Governance in the Age of AI." Moral Philosophy and Politics 6, no. 2 (November 18, 2019): 337–56. http://dx.doi.org/10.1515/mopp-2019-0004.

Abstract:
Policymakers are responsible for key decisions about political governance. Usually, they are selected or elected based on experience and then supported in their decision-making by the additional counsel of subject experts. Those satisfied with this system believe these individuals – generally speaking – will have the right intuitions about the best types of action. This is important because political decisions have ethical implications; they affect how we all live in society. Nevertheless, there is a wealth of research that cautions against trusting human judgment as it can be severely flawed. This paper will look at the root causes of the most common errors of human judgment before arguing – contra the instincts of many – that future AI systems could take a range of political decisions more reliably. I will argue that, if/when engineers establish ethically robust systems, governments will have a moral obligation to refer to them as a part of decision-making.
25

Lavigne, Maxime, Fatima Mussa, Maria I. Creatore, Steven J. Hoffman, and David L. Buckeridge. "A population health perspective on artificial intelligence." Healthcare Management Forum 32, no. 4 (May 19, 2019): 173–77. http://dx.doi.org/10.1177/0840470419848428.

Abstract:
The burgeoning field of Artificial Intelligence (AI) has the potential to profoundly impact the public’s health. Yet, to make the most of this opportunity, decision-makers must understand AI concepts. In this article, we describe approaches and fields within AI and illustrate through examples how they can contribute to informed decisions, with a focus on population health applications. We first introduce core concepts needed to understand modern uses of AI and then describe its sub-fields. Finally, we examine four sub-fields of AI most relevant to population health along with examples of available tools and frameworks. Artificial intelligence is a broad and complex field, but the tools that enable the use of AI techniques are becoming more accessible, less expensive, and easier to use than ever before. Applications of AI have the potential to assist clinicians, health system managers, policy-makers, and public health practitioners in making more precise, and potentially more effective, decisions.
26

Robbins, Scott. "A Misdirected Principle with a Catch: Explicability for AI." Minds and Machines 29, no. 4 (October 15, 2019): 495–514. http://dx.doi.org/10.1007/s11023-019-09509-3.

Abstract:
There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.
27

Åhs, Fredrik, Peter Mozelius, and Majen Espvall. "Preparing Psychologists and Social Workers for the Daily Use of AI." European Conference on the Impact of Artificial Intelligence and Robotics 4, no. 1 (November 17, 2022): 1–5. http://dx.doi.org/10.34190/eciair.4.1.771.

Abstract:
The daily use of Artificial Intelligence (AI) is becoming a fact in many fields today, two of which are psychology and social work. At the same time as AI systems are used for predicting psychological treatments and for decisions in social welfare, higher education has few AI courses for these professions. Moreover, there are several examples in these fields where AI can make unethical decisions that need to be corrected by humans. To better understand the possibilities and challenges of AI in psychology and social work, professional users of AI services need a tailored education on how the underlying technology works. The aim of this paper is to present a project concept for the design and evaluation of a novel course in AI for professional development in psychology and social work. For the design and development of the course, the guiding research question should be: What are the strengths and challenges of contemporary AI techniques regarding prediction, adaptivity, and decision systems? The suggested AI course should be given as technology-enhanced online training to enable the idea of anytime and anywhere for full-time working participants. Course content and activities are divided into four sections: 1) the history of AI, structured around the 'three waves of AI', with a focus on the current third wave; 2) a section focusing on AI techniques for prediction and adaptivity, in which underlying techniques such as machine learning, neural networks, and deep learning are described and discussed conceptually, but not at a detailed level; 3) an elaborated discussion of relevance, usefulness, and trust, and of the difference between AI-based decision systems and AI-based decision support systems; and 4) the ethical aspects of AI, discussing transparency and Explainable AI. An innovative approach of the project is to use a neuroscientific assessment of the education to understand how it changes brain function relevant to evaluating AI-based decisions. This should be complemented with a qualitative evaluation based on semi-structured interviews.
28

Khan, Muhammad Salar, Mehdi Nayebpour, Meng-Hao Li, Hadi El-Amine, Naoru Koizumi, and James L. Olds. "Explainable AI: A Neurally-Inspired Decision Stack Framework." Biomimetics 7, no. 3 (September 9, 2022): 127. http://dx.doi.org/10.3390/biomimetics7030127.

Abstract:
European law now requires AI to be explainable in the context of adverse decisions affecting the European Union (EU) citizens. At the same time, we expect increasing instances of AI failure as it operates on imperfect data. This paper puts forward a neurally inspired theoretical framework called “decision stacks” that can provide a way forward in research to develop Explainable Artificial Intelligence (X-AI). By leveraging findings from the finest memory systems in biological brains, the decision stack framework operationalizes the definition of explainability. It then proposes a test that can potentially reveal how a given AI decision was made.
29

Yampolskiy, Roman V. "Unexplainability and Incomprehensibility of AI." Journal of Artificial Intelligence and Consciousness 07, no. 02 (July 17, 2020): 277–91. http://dx.doi.org/10.1142/s2705078520500150.

Abstract:
Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and for the decisions they could explain people would not understand some of those explanations.
30

Kartal, Elif. "A Comprehensive Study on Bias in Artificial Intelligence Systems." International Journal of Intelligent Information Technologies 18, no. 1 (January 1, 2022): 1–23. http://dx.doi.org/10.4018/ijiit.309582.

Abstract:
Humans are social beings. Emotions, like their thoughts, play an essential role in decision-making. Today, artificial intelligence (AI) raises expectations for faster, more accurate, more rational, and fairer decisions with technological advancements. As a result, AI systems have often been seen as an ideal decision-making mechanism. But what if these systems decide against you based on gender, race, or other characteristics? Biased or unbiased AI, that's the question! The motivation of this study is to raise awareness among researchers about bias in AI and contribute to the advancement of AI studies and systems. As the primary purpose of this study is to examine bias in the decision-making process of AI systems, this paper focused on (1) bias in humans and AI, (2) the factors that lead to bias in AI systems, (3) current examples of bias in AI systems, and (4) various methods and recommendations to mitigate bias in AI systems.
31

Hulsen, Tim. "Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare." AI 4, no. 3 (August 10, 2023): 652–66. http://dx.doi.org/10.3390/ai4030034.

Abstract:
Artificial Intelligence (AI) describes computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Examples of AI techniques are machine learning, neural networks, and deep learning. AI can be applied in many different areas, such as econometrics, biometry, e-commerce, and the automotive industry. In recent years, AI has found its way into healthcare as well, helping doctors make better decisions (“clinical decision support”), localizing tumors in magnetic resonance images, reading and analyzing reports written by radiologists and pathologists, and much more. However, AI has one big risk: it can be perceived as a “black box”, limiting trust in its reliability, which is a very big issue in an area in which a decision can mean life or death. As a result, the term Explainable Artificial Intelligence (XAI) has been gaining momentum. XAI tries to ensure that AI algorithms (and the resulting decisions) can be understood by humans. In this narrative review, we will have a look at some central concepts in XAI, describe several challenges around XAI in healthcare, and discuss whether it can really help healthcare to advance, for example, by increasing understanding and trust. Finally, alternatives to increase trust in AI are discussed, as well as future research possibilities in the area of XAI.
32

Lötsch, Jörn, Dario Kringel, and Alfred Ultsch. "Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients." BioMedInformatics 2, no. 1 (December 22, 2021): 1–17. http://dx.doi.org/10.3390/biomedinformatics2010001.

Abstract:
The use of artificial intelligence (AI) systems in biomedical and clinical settings can disrupt the traditional doctor–patient relationship, which is based on trust and transparency in medical advice and therapeutic decisions. When the diagnosis or selection of a therapy is no longer made solely by the physician, but to a significant extent by a machine using algorithms, decisions become nontransparent. Skill learning is the most common application of machine learning algorithms in clinical decision making. These are a class of very general algorithms (artificial neural networks, classifiers, etc.) that are tuned based on examples to optimize the classification of new, unseen cases; it is pointless to ask such an algorithm for an explanation of its decision. A detailed understanding of the mathematical details of an AI algorithm may be possible for experts in statistics or computer science. However, when it comes to the fate of human beings, this “developer’s explanation” is not sufficient. The concept of explainable AI (XAI) as a solution to this problem is attracting increasing scientific and regulatory interest. This review focuses on the requirement that XAIs must be able to explain in detail the decisions made by the AI to experts in the field.
33

Jin, Weina, Xiaoxiao Li, and Ghassan Hamarneh. "Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements?" Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11945–53. http://dx.doi.org/10.1609/aaai.v36i11.21452.

Abstract:
Being able to explain a prediction to clinical end-users is a necessity for leveraging the power of artificial intelligence (AI) models for clinical decision support. For medical images, a feature attribution map, or heatmap, is the most common form of explanation; it highlights the features important to an AI model's prediction. However, it is unknown how well heatmaps explain decisions on multi-modal medical images, where each image modality or channel visualizes distinct clinical information about the same underlying biomedical phenomenon. Understanding such modality-dependent features is essential for clinical users' interpretation of AI decisions. To tackle this clinically important but technically ignored problem, we propose the modality-specific feature importance (MSFI) metric. It encodes the clinical image and explanation interpretation patterns of modality prioritization and modality-specific feature localization. We conduct a clinical requirement-grounded, systematic evaluation using computational methods and a clinician user study. Results show that the 16 examined heatmap algorithms failed to fulfill clinical requirements to correctly indicate the AI model's decision process or decision quality. The evaluation and the MSFI metric can guide the design and selection of explainable AI algorithms to meet clinical requirements for multi-modal explanation.
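The abstract does not spell out the MSFI formula, so the Python below is only a hypothetical illustration of the kind of quantity such a metric could aggregate: for each imaging modality, how much of a heatmap's mass falls inside that modality's clinically important region, weighted by an assumed clinician priority for the modality. All names, shapes, and weights here are our assumptions, not the published metric.

```python
import numpy as np

def modality_feature_scores(heatmaps, roi_masks, modality_weights):
    """heatmaps / roi_masks: dicts of modality name -> 2-D arrays of equal shape.
    modality_weights: assumed clinician priority per modality, summing to 1."""
    scores = {}
    for m, hm in heatmaps.items():
        hm = np.abs(hm)                          # attribution magnitude
        inside = (hm * roi_masks[m]).sum()       # saliency mass inside the ROI
        scores[m] = modality_weights[m] * inside / max(hm.sum(), 1e-12)
    return scores  # higher: saliency concentrated on the right modality and region

# Toy example: two MRI modalities with 4x4 maps (illustrative only).
rng = np.random.default_rng(0)
heatmaps = {"T1": rng.random((4, 4)), "FLAIR": rng.random((4, 4))}
rois = {"T1": np.zeros((4, 4)), "FLAIR": np.zeros((4, 4))}
rois["FLAIR"][1:3, 1:3] = 1.0                    # lesion mainly visible on FLAIR
print(modality_feature_scores(heatmaps, rois, {"T1": 0.3, "FLAIR": 0.7}))
```

Under this kind of check, a heatmap algorithm that piles attribution onto the wrong modality, or outside the lesion, scores low, which mirrors the failure mode the evaluation above reports for the 16 algorithms.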
34

Paliukas, Vytautas, and Asta Savanevičienė. "Harmonization of rational and creative decisions in quality management using AI technologies." Economics and Business 32, no. 1 (November 1, 2018): 195–208. http://dx.doi.org/10.2478/eb-2018-0016.

Abstract:
Artificial Intelligence (AI) systems are rapidly evolving and becoming more common in management. Managers in business institutions face decision-making challenges and large amounts of data to be processed, combining and harmonizing rational data with creative human experience in decision making. The aim of the study is to reveal the main obstacles to harmonizing creative and rational decision making in quality management using AI technologies in the Quality Management System (QMS). The first section presents a literature review of approaches and trends related to the use of AI technology in organisations for data processing and creative-rational decision making, rational and creative quality management decision making, and paradigms in decision harmonization. The Main Results section presents practical analysis and testing experience of an automated AI quality management system developed at a higher education institution; during the analysis, an interview method was applied to identify specific system implementation issues. In the last section, the main analysis results and further development possibilities are discussed. The main findings and conclusions disclose two problematic areas that act as obstacles to rational and creative management decisions in quality management. The first is the clear distribution and assignment of responsibility between data inputters and experience interpreters. The second is duplicated qualitative data that the AI system is not capable of rationalizing at its present development stage: the speech and language processing techniques used cannot cope with dual data processing, because in practice the system interprets and rationalizes only one category of data, either quantitative (based on rationally defined indicators) or qualitative (based on language recognition and speech-related data interpretation). Managers' experience in harmonizing creative human experience in the organisation's quality management was evaluated as positive. Data processed by the tested AI system allows creative experience to be rationalized with ready quantitative data output from the QMS and supports final harmonized strategic quality management decisions.
35

Drozd, Oleksii, Yuliia Dorokhina, Yuliia Leheza, Mykhailo Smokovych, and Natalia Zadyraka. "Cassation filters in administrative judicial procedure: a step in a chasm or a novel that ukrainian society expected?" Revista Amazonia Investiga 10, no. 40 (May 31, 2021): 222–32. http://dx.doi.org/10.34069/ai/2021.40.04.22.

Abstract:
The purpose of the article is to characterize the grounds for the use of "cassation administrative filters" as part of the mechanism for exercising an individual's right to cassation appeal against a court decision in a public law dispute. The subject of the research is the peculiarities of cassation review of decisions in administrative proceedings. Methodology: The methodological basis of the article comprises general and special methods of legal science, in particular: the method of dialectical analysis, the method of prognostic modeling, formal-logical, normative-dogmatic, and sociological methods. The results of the study: The current regulations on the right of an individual to cassation appeal against court decisions in administrative proceedings are analyzed by characterizing the existing procedural filters. Practical implications: Based on a study of the case law, the types of administrative cassation filters applied by the courts when reviewing decisions are identified. Value / originality: It is proved that achieving effectiveness in the application of cassation administrative filters requires a high level of professionalism, which ensures the proper realization of an individual's right to file a cassation appeal, and the development of a unified approach to the use of assessment categories.
36

Han, Yi, Jinhao Chen, Meitao Dou, Jiahong Wang, and Kangxiao Feng. "The Impact of Artificial Intelligence on the Financial Services Industry." Academic Journal of Management and Social Sciences 2, no. 3 (May 20, 2023): 83–85. http://dx.doi.org/10.54097/ajmss.v2i3.8741.

Abstract:
With the rapid development of artificial intelligence (AI) technology, the financial services sector is beginning to make wide use of these advanced technologies to improve efficiency, optimize decision making, and ultimately improve customer satisfaction. However, despite the enormous potential that AI brings, its application also raises a host of questions about data privacy, security, and ethics. This paper explores the application of AI in financial services and its possible impact. AI already plays an important role in many financial services, including investment management, risk assessment, fraud detection, and customer service. For example, AI can help financial institutions make more accurate investment decisions through pattern recognition and predictive analytics. In risk assessment, AI can analyze large amounts of data to identify patterns that could lead to loan defaults or credit risk. In addition, AI chatbots and virtual assistants are changing the way customer service is delivered, providing 24/7 service and improving the customer experience. However, the widespread adoption of AI also brings new challenges. First, data privacy and security are major concerns, as AI often needs to process large amounts of personal and sensitive data. Second, the transparency and explainability of AI decisions are also a significant problem: because of the "black box" nature of some AI models, such as deep learning, the decision-making process can be difficult to understand, which can lead to public distrust of AI decision making. Finally, AI could lead to the disappearance of jobs, especially low-skill jobs that can be automated. Therefore, to make the most of the opportunities AI brings and to address its challenges effectively, deep reflection and discussion are needed at multiple levels, including technology, policy, and ethics. Future research could explore in more depth the specific applications of AI in financial services and how to design and implement effective strategies for managing its use so that the benefits outweigh the potential risks.
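To make the risk-assessment example above concrete, the following minimal sketch shows how default-risk patterns might be learned from loan data. It is an illustration under stated assumptions, not the paper's method: the borrower features, the synthetic labels, and the logistic-regression model are all invented.

```python
# Minimal sketch: learning default-risk patterns from (synthetic) loan data.
# All features, data, and the model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),  # hypothetical annual income
    rng.uniform(0.0, 0.8, n),       # hypothetical debt-to-income ratio
    rng.poisson(1.0, n),            # hypothetical count of late payments
])
# Synthetic default labels loosely driven by the features above.
logits = -2.0 + 3.0 * X[:, 1] + 0.5 * X[:, 2] - 1e-5 * X[:, 0]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```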
37

Olena, Olena. "Political and Legal Implications of the Use of Artificial Intelligence." Yearly journal of scientific articles “Pravova derzhava”, no. 34 (August 1, 2023): 684–93. http://dx.doi.org/10.33663/1563-3349-2023-34-684-693.

Abstract:
Increasingly, applications with AI elements are being used not only in the technical field, to improve the efficiency of services provided by the private and public sectors, but also to make decisions that directly affect the lives of citizens. However, like any technological solution, AI has both positive and negative effects, which social scientists are only beginning to understand. The purpose of the article is to identify the political and legal consequences of AI application and to analyze the legal mechanisms for ensuring its safe use, drawing on the experience of foreign countries. As AI systems prove increasingly useful in the real world, their scope of application expands, which increases the risk of abuse. The consequences of losing effective control over them are of growing concern. Automated decision making can lead to distorted results that repeat and reinforce existing biases. There is an aura of neutrality and impartiality associated with AI decision making, so these systems are accepted as objective even though they may be the product of biased historical decisions or even outright discrimination. Without transparency about the data or the AI algorithms that interpret it, the public may be left in the dark about how decisions that significantly affect their lives are made. Awareness of the dangers of uncontrolled AI use has led a number of countries to seek legal instruments to minimize the negative consequences of its use. The European Union is the closest to introducing basic standards for AI regulation: a draft Artificial Intelligence Act, published in 2021, classifies the risks of using AI into four categories: unacceptable, high-risk, limited, and minimal. Once adopted, the AI Act will be the first horizontal legislative act in the EU to regulate AI systems, introducing rules for the safe and secure placement of AI-enabled products on the EU market. Taking the European experience and Ukrainian specifics into account in domestic legislation on the use of digital technologies should both facilitate adaptation to the European legal space and promote the development of the technology sector in the country. Key words: artificial intelligence, algorithms, discrimination, disinformation, democracy.
38

Lukashova-Sanz, Olga, Martin Dechant, and Siegfried Wahl. "The Influence of Disclosing the AI Potential Error to the User on the Efficiency of User–AI Collaboration." Applied Sciences 13, no. 6 (March 10, 2023): 3572. http://dx.doi.org/10.3390/app13063572.

Abstract:
User–AI collaboration is an increasingly common paradigm in assistive technologies. However, designers of such systems do not know whether communicating the AI’s accuracy is beneficial. Disclosing the accuracy could lead to more informed decision making or reduced trust in the AI. In the context of assistive technologies, understanding how design decisions affect User–AI collaboration is critical because less efficient User–AI collaboration may drastically lower the quality of life. To address this knowledge gap, we conducted a VR study in which a simulated AI predicted the user’s intended action in a selection task. Fifteen participants had to either intervene or delegate the decision to the AI. We compared participants’ behaviors with and without the disclosure of details on the AI’s accuracy prior to the system’s deployment while also varying the risk level in terms of decision consequences. The results showed that communicating potential errors shortened the decision-making time and allowed the users to develop a more efficient strategy for intervening in the decision. This work enables more effective designs of the interfaces for assistive technologies using AI.
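The intervene-or-delegate trade-off that the study manipulates can be framed as a simple expected-cost comparison. The sketch below is a hypothetical model, not the study's design: the accuracy, error-cost, and intervention-cost numbers are invented, and serve only to show how higher decision stakes can flip the rational choice even when the disclosed accuracy stays the same.

```python
# Hypothetical expected-cost framing of the intervene-or-delegate choice.
# All numbers are invented for illustration; this is not the study's model.
def should_delegate(ai_accuracy: float, error_cost: float, intervention_cost: float) -> bool:
    """Delegate when the expected cost of an AI error is below the cost of intervening."""
    return (1.0 - ai_accuracy) * error_cost < intervention_cost

# Low-risk task: expected AI error cost 0.5 < 1.0, so delegating is rational.
print(should_delegate(ai_accuracy=0.95, error_cost=10.0, intervention_cost=1.0))   # True
# High-risk task, same disclosed accuracy: 5.0 > 1.0, so intervening is rational.
print(should_delegate(ai_accuracy=0.95, error_cost=100.0, intervention_cost=1.0))  # False
```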
39

Sahadevan, Sivasubramaniyan. "Project Management in the Era of Artificial Intelligence." European Journal of Theoretical and Applied Sciences 1, no. 3 (June 8, 2023): 349–59. http://dx.doi.org/10.59324/ejtas.2023.1(3).35.

Abstract:
This study discusses the advantages of AI integration in project management, specifically in areas such as resource allocation, decision making, risk management, and planning. By interpreting vast amounts of data from various sources, AI provides project managers with valuable insights for making better decisions. Although some tasks can be automated, human intervention remains necessary for accuracy and efficacy; AI should therefore complement human skills, not replace them. Project managers need analytics skills and must stay up to date on AI technology to integrate it effectively. Ultimately, this study highlights that AI integration can enhance productivity and enable efficient project delivery.
40

Fejes, Erzsébet, and Iván Futó. "Artificial Intelligence in Public Administration – Supporting Administrative Decisions." Pénzügyi Szemle = Public Finance Quarterly 66, Special edition 2021/1 (2021): 23–51. http://dx.doi.org/10.35551/pfq_2021_s_1_2.

Abstract:
Artificial intelligence (AI) is an increasingly popular concept, although it is often used only as a marketing label for activities that are very far from AI. The purpose of this article is to show what AI tools - expert systems - can actually be used for in administrative decision making in public administration. The final administrative decision must be justified in detail according to the legal regulations, and expert systems can do this. The other large group of AI tools, solutions based on machine learning, act as black boxes that map input data to output data, so the reasoning behind the result is unknown. Therefore, these tools are not suitable for direct administrative decision making, but together with expert systems they can support office work. In this article, we present the operation of expert systems through examples.
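The contrast the authors draw between justifiable expert systems and black-box ML can be illustrated with a toy rule base. The rules and case fields below are invented; the point is that every outcome carries the chain of rules that produced it, which is what a legally justified administrative decision requires.

```python
# Toy rule-based "expert system": each decision carries its justification.
# Rules and case fields are invented for illustration.
RULES = [
    ("R1: applicant must be of legal age",
     lambda case: case["age"] >= 18),
    ("R2: application must be filed within 30 days",
     lambda case: case["days_since_event"] <= 30),
    ("R3: all required documents must be attached",
     lambda case: case["documents_complete"]),
]

def decide(case):
    """Apply the rules in order, collecting a justification trace."""
    trace = []
    for name, predicate in RULES:
        satisfied = predicate(case)
        trace.append(f"{name}: {'satisfied' if satisfied else 'violated'}")
        if not satisfied:
            return "REJECTED", trace  # the first violated rule justifies rejection
    return "APPROVED", trace

decision, justification = decide(
    {"age": 42, "days_since_event": 12, "documents_complete": True}
)
print(decision)             # APPROVED
for step in justification:  # the detailed justification the law requires
    print(" -", step)
```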
41

Li, Fan, and Yuan Lu. "ENGAGING END USERS IN AN AI-ENABLED SMART SERVICE DESIGN - THE APPLICATION OF THE SMART SERVICE BLUEPRINT SCAPE (SSBS) FRAMEWORK." Proceedings of the Design Society 1 (July 27, 2021): 1363–72. http://dx.doi.org/10.1017/pds.2021.136.

Abstract:
Artificial Intelligence (AI) has expanded into diverse contexts; it infiltrates our social lives and is a critical part of algorithmic decision making. Adoption of AI technology, and of AI-enabled design in particular, by end users who are not AI experts is still limited. Incomprehensible, untransparent decision making and the difficulty of using AI are obstacles that prevent these end users from adopting the technology. How to design the user experience (UX) of AI-based technologies is therefore an interesting topic to explore. This paper investigates how end users who are not AI experts can be engaged in the design process of an AI-enabled application by using a framework called Smart Service Blueprint Scape (SSBS), which aims to build a bridge between UX and AI systems by mapping and translating AI decisions based on UX. A Dutch mobility service called 'stUmobiel' was taken as a design case study, with the goal of designing a reservation platform with stUmobiel end users. Co-creating with case users and ensuring that they understand the decision making and service-provision process of the AI-enabled design is crucial to promoting users' adoption. Furthermore, concerns about AI ethics also arise in the design process and should be discussed in a broader sense.
42

Qureshi, Asma, and Jeff Stevens. "Gulf Shores Company." South Asian Journal of Business and Management Cases 2, no. 1 (June 2013): 115–23. http://dx.doi.org/10.1177/2277977913480653.

Abstract:
Business intelligence (BI) has been successful in replacing the traditional decision support systems at Gulf Shores Company (GSC), improving efficiency and effectiveness in service delivery and eliminating human errors. Organizations improved in this way are better prepared to respond quickly to threats and opportunities. Artificial intelligence (AI) supports an organization's BI processes by simplifying them and making them more cost-effective so that, under certain conditions, automated decisions and alerts can be used. Introducing AI into GSC's processes would also give the company the capability of making decisions in real time. GSC could implement the BI/AI combination in complex settings to address a wide range of industry-specific risks. The case study describes the logic for implementing AI in the petroleum industry, based on an intelligent system that helps offshore platforms start up, and explains how it can be applied in other industries such as medical billing.
43

Birch, Jonathan, Kathleen A. Creel, Abhinav K. Jha, and Anya Plutynski. "Clinical decisions using AI must consider patient values." Nature Medicine 28, no. 2 (January 31, 2022): 229–32. http://dx.doi.org/10.1038/s41591-021-01624-y.

44

Alnsour, Yazan, Marina Johnson, Abdullah Albizri, and Antoine Harfouche. "Predicting Patient Length of Stay Using Artificial Intelligence to Assist Healthcare Professionals in Resource Planning and Scheduling Decisions." Journal of Global Information Management 31, no. 1 (May 12, 2023): 1–14. http://dx.doi.org/10.4018/jgim.323059.

Abstract:
Artificial intelligence (AI) significantly revolutionizes and transforms the global healthcare industry by improving outcomes, increasing efficiency, and enhancing resource utilization. The applications of AI impact every aspect of healthcare operation, particularly resource allocation and capacity planning. This study proposes a multi-step AI-based framework and applies it to a real dataset to predict the length of stay (LOS) for hospitalized patients. The results show that the proposed framework can predict the LOS categories with an AUC of 0.85 and their actual LOS with a mean absolute error of 0.85 days. This framework can support decision-makers in healthcare facilities providing inpatient care to make better front-end operational decisions, such as resource capacity planning and scheduling decisions. Predicting LOS is pivotal in today's healthcare supply chain (HSC) systems where resources are scarce, and demand is abundant due to various global crises and pandemics. Thus, this research's findings have practical and theoretical implications in AI and HSC management.
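The two results the abstract reports correspond to a classification task (LOS categories, scored by AUC) and a regression task (LOS in days, scored by mean absolute error) evaluated side by side. The sketch below reproduces that evaluation pattern on synthetic data; the features, models, and the three-day category threshold are assumptions, not the paper's framework.

```python
# Side-by-side LOS evaluation pattern on synthetic data: a classifier scored
# by AUC and a regressor scored by MAE. Features and models are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))  # stand-ins for admission features
los_days = np.clip(2.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n), 0, None)
long_stay = (los_days > 3.0).astype(int)  # hypothetical "long stay" category

X_tr, X_te, d_tr, d_te, c_tr, c_te = train_test_split(
    X, los_days, long_stay, random_state=1
)
clf = GradientBoostingClassifier().fit(X_tr, c_tr)
reg = GradientBoostingRegressor().fit(X_tr, d_tr)
print("category AUC:", roc_auc_score(c_te, clf.predict_proba(X_te)[:, 1]))
print("LOS MAE (days):", mean_absolute_error(d_te, reg.predict(X_te)))
```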
45

Nordström, Maria. "AI under great uncertainty: implications and decision strategies for public policy." AI & SOCIETY, September 7, 2021. http://dx.doi.org/10.1007/s00146-021-01263-4.

Abstract:
Decisions where there is not enough information for a well-informed decision due to unidentified consequences, options, or undetermined demarcation of the decision problem are called decisions under great uncertainty. This paper argues that public policy decisions on how and if to implement decision-making processes based on machine learning and AI for public use are such decisions. Decisions on public policy on AI are uncertain due to three features specific to the current landscape of AI, namely (i) the vagueness of the definition of AI, (ii) uncertain outcomes of AI implementations and (iii) pacing problems. Given that many potential applications of AI in the public sector concern functions central to the public sphere, decisions on the implementation of such applications are particularly sensitive. Therefore, it is suggested that public policy-makers and decision-makers in the public sector can adopt strategies from the argumentative approach in decision theory to mitigate the established great uncertainty. In particular, the notions of framing and temporal strategies are considered.
46

Shin, Minkyu, Jin Kim, Bas van Opheusden, and Thomas L. Griffiths. "Superhuman artificial intelligence can improve human decision-making by increasing novelty." Proceedings of the National Academy of Sciences 120, no. 12 (March 13, 2023). http://dx.doi.org/10.1073/pnas.2214840120.

Abstract:
How will superhuman artificial intelligence (AI) affect human decision-making? And what will be the mechanisms behind this effect? We address these questions in a domain where AI already exceeds human performance, analyzing more than 5.8 million move decisions made by professional Go players over the past 71 y (1950 to 2021). To address the first question, we use a superhuman AI program to estimate the quality of human decisions across time, generating 58 billion counterfactual game patterns and comparing the win rates of actual human decisions with those of counterfactual AI decisions. We find that humans began to make significantly better decisions following the advent of superhuman AI. We then examine human players’ strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI. Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.
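The counterfactual comparison at the heart of this analysis can be stated compactly: a move's quality is the gap between its estimated win rate and the win rate of the engine's preferred move in the same position. The sketch below uses a made-up evaluation table in place of a real Go engine.

```python
# Decision quality as a win-rate gap against the engine's preferred move.
# The evaluation table is made up; a real study would query a Go engine.
def decision_quality(position, human_move, engine_eval, legal_moves):
    """Win rate of the human move minus win rate of the engine's best move."""
    best = max(legal_moves(position), key=lambda m: engine_eval(position, m))
    return engine_eval(position, human_move) - engine_eval(position, best)

evals = {("p1", 0): 0.48, ("p1", 1): 0.55, ("p1", 2): 0.51}
gap = decision_quality(
    "p1",
    human_move=2,
    engine_eval=lambda p, m: evals[(p, m)],
    legal_moves=lambda p: [0, 1, 2],
)
print(f"quality gap: {gap:+.2f}")  # 0.51 - 0.55 = -0.04
```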
47

Bankins, Sarah, Paul Formosa, Yannick Griep, and Deborah Richards. "AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context." Information Systems Frontiers, February 2, 2022. http://dx.doi.org/10.1007/s10796-021-10223-8.

Abstract:
Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises the questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and the decision valence (positive or negative) to determine their impact on individuals' experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness. In terms of decision makers, human decision makers generally elicited better perceptions of respectful treatment than AIs. In terms of decision valence, positive decisions generally elicited better perceptions of respectful treatment than negative ones. Where these two factors conflicted, on some indicators people preferred positive AI decisions over negative human decisions. Qualitative responses show how people identify justice concerns with both AI and human decision making. We outline implications for theory, practice, and future research.
48

Janssen, Marijn, Martijn Hartog, Ricardo Matheus, Aaron Yi Ding, and George Kuk. "Will Algorithms Blind People? The Effect of Explainable AI and Decision-Makers’ Experience on AI-supported Decision-Making in Government." Social Science Computer Review, December 28, 2020, 089443932098011. http://dx.doi.org/10.1177/0894439320980118.

Abstract:
Computational artificial intelligence (AI) algorithms are increasingly used to support decision making by governments. Yet algorithms often remain opaque to decision makers and devoid of clear explanations for the decisions made. In this study, we used an experimental approach to compare decision making in three situations: humans making decisions (1) without any algorithmic support, (2) supported by business rules (BR), and (3) supported by machine learning (ML). Participants were asked to make the correct decisions in various scenarios, while the BR and ML algorithms could provide correct or incorrect suggestions to the decision maker. This enabled us to evaluate whether the participants were able to understand the limitations of BR and ML. The experiment shows that algorithms help decision makers make more correct decisions. The findings suggest that explainable AI combined with experience helps decision makers detect incorrect suggestions made by algorithms; however, even experienced persons were not able to identify all mistakes. Ensuring the ability to understand and trace back decisions is not sufficient to avoid incorrect decisions. The findings imply that algorithms should be adopted with care and that selecting appropriate algorithms for decision support and training decision makers are key factors in increasing accountability and transparency.
49

Claudy, Marius C., Karl Aquino, and Maja Graso. "Artificial Intelligence Can’t Be Charmed: The Effects of Impartiality on Laypeople’s Algorithmic Preferences." Frontiers in Psychology 13 (June 29, 2022). http://dx.doi.org/10.3389/fpsyg.2022.898027.

Abstract:
Over the coming years, AI could increasingly replace humans for making complex decisions because of the promise it holds for standardizing and debiasing decision-making procedures. Despite intense debates regarding algorithmic fairness, little research has examined how laypeople react when resource-allocation decisions are turned over to AI. We address this question by examining the role of perceived impartiality as a factor that can influence the acceptance of AI as a replacement for human decision-makers. We posit that laypeople attribute greater impartiality to AI than human decision-makers. Our investigation shows that people value impartiality in decision procedures that concern the allocation of scarce resources and that people perceive AI as more capable of impartiality than humans. Yet, paradoxically, laypeople prefer human decision-makers in allocation decisions. This preference reverses when potential human biases are made salient. The findings highlight the importance of impartiality in AI and thus hold implications for the design of policy measures.
50

Tiwari, Rudra. "Explainable AI (XAI) and its Applications in Building Trust and Understanding in AI Decision Making." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 01 (January 27, 2023). http://dx.doi.org/10.55041/ijsrem17592.

Abstract:
In recent years, there has been a growing need for Explainable AI (XAI) to build trust and understanding in AI decision making. XAI is a field of AI research that focuses on developing algorithms and models that can be easily understood and interpreted by humans. The goal of XAI is to make the inner workings of AI systems transparent and explainable, which can help people to understand the reasoning behind the decisions made by AI and make better decisions. In this paper, we will explore the various applications of XAI in different domains such as healthcare, finance, autonomous vehicles, and legal and government decisions. We will also discuss the different techniques used in XAI such as feature importance analysis, model interpretability, and natural language explanations. Finally, we will examine the challenges and future directions of XAI research. This paper aims to provide an overview of the current state of XAI research and its potential impact on building trust and understanding in AI decision making.
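Of the XAI techniques the paper surveys, feature importance analysis is the most direct to demonstrate. The sketch below uses permutation importance from scikit-learn on a synthetic dataset; the dataset and model are illustrative assumptions, not drawn from the paper.

```python
# Feature importance analysis via permutation importance on synthetic data.
# Dataset and model are illustrative; the technique is the point.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure the drop in held-out accuracy:
# a large drop means the model's decisions relied on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```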