A selection of scientific literature on the topic "AI DECISIONS"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Choose the source type:

Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "AI DECISIONS".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, if the corresponding details are available in the source's metadata.

Journal articles on the topic "AI DECISIONS"

1

Li, Zhuoyan, Zhuoran Lu, and Ming Yin. "Modeling Human Trust and Reliance in AI-Assisted Decision Making: A Markovian Approach." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 6056–64. http://dx.doi.org/10.1609/aaai.v37i5.25748.

Abstract:
The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that the reliance decisions can often be driven by emotions like humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying the human-AI interaction in AI-assisted decision making, by characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on their trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making is influenced by contextual factors like decision stakes and their interaction experiences.
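To make the modeling idea in the abstract above more concrete, here is a minimal sketch (Python/NumPy) of a two-state hidden Markov model in which a latent trust state drives reliance on AI recommendations. It is an illustration only, not the authors' model: the states, transition matrix, and emission probabilities are invented for the example.

```python
# Illustrative sketch (not the paper's code): a two-state HMM of decision-maker
# trust, where the hidden trust state drives the probability of relying on the AI.
# All parameter values below are assumptions made for the example.
import numpy as np

pi = np.array([0.6, 0.4])            # initial distribution over [low_trust, high_trust]
A = np.array([[0.8, 0.2],            # trust transition probabilities
              [0.3, 0.7]])
p_rely = np.array([0.2, 0.9])        # P(rely on AI | trust state)

def likelihood(reliance_seq):
    """Forward algorithm: probability of an observed 0/1 reliance sequence."""
    emit = lambda r: np.where(r == 1, p_rely, 1.0 - p_rely)
    alpha = pi * emit(reliance_seq[0])
    for r in reliance_seq[1:]:
        alpha = (alpha @ A) * emit(r)
    return alpha.sum()

def simulate(n_rounds, rng=np.random.default_rng(0)):
    """Generate a synthetic reliance trajectory from the model."""
    s = rng.choice(2, p=pi)
    seq = []
    for _ in range(n_rounds):
        seq.append(int(rng.random() < p_rely[s]))   # reliance decision given trust
        s = rng.choice(2, p=A[s])                   # trust evolves over time
    return np.array(seq)

obs = simulate(10)
print("reliance sequence:", obs, "likelihood:", likelihood(obs))
```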
2

Zhang, Angie, Olympia Walker, Kaci Nguyen, Jiajun Dai, Anqing Chen, and Min Kyung Lee. "Deliberating with AI: Improving Decision-Making for the Future through Participatory AI Design and Stakeholder Deliberation." Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (April 14, 2023): 1–32. http://dx.doi.org/10.1145/3579601.

Abstract:
Research exploring how to support decision-making has often used machine learning to automate or assist human decisions. We take an alternative approach for improving decision-making, using machine learning to help stakeholders surface ways to improve and make fairer decision-making processes. We created "Deliberating with AI", a web tool that enables people to create and evaluate ML models in order to examine strengths and shortcomings of past decision-making and deliberate on how to improve future decisions. We apply this tool to a context of people selection, having stakeholders---decision makers (faculty) and decision subjects (students)---use the tool to improve graduate school admission decisions. Through our case study, we demonstrate how the stakeholders used the web tool to create ML models that they used as boundary objects to deliberate over organization decision-making practices. We share insights from our study to inform future research on stakeholder-centered participatory AI design and technology for organizational decision-making.
3

Shrestha, Yash Raj, Shiko M. Ben-Menahem, and Georg von Krogh. "Organizational Decision-Making Structures in the Age of Artificial Intelligence." California Management Review 61, no. 4 (July 13, 2019): 66–83. http://dx.doi.org/10.1177/0008125619862257.

Abstract:
How does organizational decision-making change with the advent of artificial intelligence (AI)-based decision-making algorithms? This article identifies the idiosyncrasies of human and AI-based decision making along five key contingency factors: specificity of the decision search space, interpretability of the decision-making process and outcome, size of the alternative set, decision-making speed, and replicability. Based on a comparison of human and AI-based decision making along these dimensions, the article builds a novel framework outlining how both modes of decision making may be combined to optimally benefit the quality of organizational decision making. The framework presents three structural categories in which decisions of organizational members can be combined with AI-based decisions: full human to AI delegation; hybrid—human-to-AI and AI-to-human—sequential decision making; and aggregated human–AI decision making.
4

Wang, Yinying. "Artificial intelligence in educational leadership: a symbiotic role of human-artificial intelligence decision-making." Journal of Educational Administration 59, no. 3 (February 17, 2021): 256–70. http://dx.doi.org/10.1108/jea-10-2020-0216.

Abstract:
Purpose: Artificial intelligence (AI) refers to a type of algorithm or computerized system that resembles human mental processes of decision-making. This position paper looks beyond the sensational hyperbole of AI in teaching and learning. Instead, this paper aims to explore the role of AI in educational leadership. Design/methodology/approach: To explore the role of AI in educational leadership, I synthesized the literature that intersects AI, decision-making, and educational leadership from multiple disciplines such as computer science, educational leadership, administrative science, judgment and decision-making, and neuroscience. Grounded in the intellectual interrelationships between AI and educational leadership since the 1950s, this paper starts with conceptualizing decision-making, including both individual and organizational decision-making, as the foundation of educational leadership. Next, I elaborate on the symbiotic role of human-AI decision-making. Findings: With its efficiency in collecting, processing, and analyzing data and providing real-time or near real-time results, AI can bring analytical efficiency to assist educational leaders in making data-driven, evidence-informed decisions. However, AI-assisted data-driven decision-making may run against value-based moral decision-making. Taken together, both leaders' individual decision-making and organizational decision-making are best handled by using a blend of data-driven, evidence-informed decision-making and value-based moral decision-making. AI can function as an extended brain in making data-driven, evidence-informed decisions, and the shortcomings of AI-assisted data-driven decision-making can be overcome by human judgment guided by moral values. Practical implications: The paper concludes with two recommendations for educational leadership practitioners' decision-making and future scholarly inquiry: keeping a watchful eye on biases and minding ethically compromised decisions. Originality/value: This paper brings together two fields, educational leadership and AI, that have been growing up together since the 1950s and mostly growing apart until the late 2010s. To explore the role of AI in educational leadership, this paper starts with the foundation of leadership, decision-making, covering both leaders' individual decisions and collective organizational decisions. It then synthesizes the literature that intersects AI, decision-making, and educational leadership from multiple disciplines to delineate the role of AI in educational leadership.
5

Stone, Merlin, Eleni Aravopoulou, Yuksel Ekinci, Geraint Evans, Matt Hobbs, Ashraf Labib, Paul Laughlin, Jon Machtynger, and Liz Machtynger. "Artificial intelligence (AI) in strategic marketing decision-making: a research agenda." Bottom Line 33, no. 2 (April 13, 2020): 183–200. http://dx.doi.org/10.1108/bl-03-2020-0022.

Abstract:
Purpose The purpose of this paper is to review literature about the applications of artificial intelligence (AI) in strategic situations and identify the research that is needed in the area of applying AI to strategic marketing decisions. Design/methodology/approach The approach was to carry out a literature review and to consult with marketing experts who were invited to contribute to the paper. Findings There is little research into applying AI to strategic marketing decision-making. This research is needed, as the frontier of AI application to decision-making is moving in many management areas from operational to strategic. Given the competitive nature of such decisions and the insights from applying AI to defence and similar areas, it is time to focus on applying AI to strategic marketing decisions. Research limitations/implications The application of AI to strategic marketing decision-making is known to be taking place, but as it is commercially sensitive, data is not available to the authors. Practical implications There are strong implications for all businesses, particularly large businesses in competitive industries, where failure to deploy AI in the face of competition from firms, who have deployed AI to improve their decision-making could be dangerous. Social implications The public sector is a very important marketing decision maker. Although in most cases it does not operate competitively, it must make decisions about making different services available to different citizens and identify the risks of not providing services to certain citizens; so, this paper is relevant to the public sector. Originality/value To the best of the authors’ knowledge, this is one of the first papers to probe deployment of AI in strategic marketing decision-making.
6

Longoni, Chiara, Andrea Bonezzi, and Carey K. Morewedge. "Resistance to medical artificial intelligence is an attribute in a compensatory decision process: response to Pezzo and Beckstead (2020)." Judgment and Decision Making 15, no. 3 (May 2020): 446–48. http://dx.doi.org/10.1017/s1930297500007233.

Abstract:
In Longoni et al. (2019), we examine how algorithm aversion influences utilization of healthcare delivered by human and artificial intelligence providers. Pezzo and Beckstead’s (2020) commentary asks whether resistance to medical AI takes the form of a noncompensatory decision strategy, in which a single attribute determines provider choice, or whether resistance to medical AI is one of several attributes considered in a compensatory decision strategy. We clarify that our paper both claims and finds that, all else equal, resistance to medical AI is one of several attributes (e.g., cost and performance) influencing healthcare utilization decisions. In other words, resistance to medical AI is a consequential input to compensatory decisions regarding healthcare utilization and provider choice decisions, not a noncompensatory decision strategy. People do not always reject healthcare provided by AI, and our article makes no claim that they do.
7

Senyk, Svitlana, Hanna Churpita, Iryna Borovska, Tetiana Kucher, and Andrii Petrovskyi. "The problems of defining the legal nature of the court judgement." Revista Amazonia Investiga 11, no. 56 (October 18, 2022): 48–55. http://dx.doi.org/10.34069/ai/2022.56.08.5.

Abstract:
The purpose of the article is to examine the procedural legislation governing court rulings as one of the types of court decisions. The subject of the study is court rulings in the civil procedure of Ukraine. The scientific study of rulings in civil proceedings was conducted on the basis of the complex use of general scientific and special methods of scientific knowledge, namely: dialectical, formal-dogmatic, system analysis, system-structural, hermeneutic, comparative legal, legal modeling, and theoretical generalization. Results of the research: the formation and development of the doctrine of court decisions is analyzed; the notions of a court decision and a court ruling are defined, and the provisions of normative legal acts on this issue are considered; the features inherent in court decisions, and in court rulings in particular, as well as the rules for issuing court rulings, are examined. Practical meaning: a clear system of requirements for a court ruling as a procedural document and law-enforcement act is established. Value/originality: emphasis is placed on the need for further research to reveal the essence of the court ruling as one of the elements of the mechanism for regulating legal relations.
8

Kortz, Mason, Jessica Fjeld, Hannah Hilligoss, and Adam Nagy. "Is Lawful AI Ethical AI?" Morals & Machines 2, no. 1 (2022): 60–65. http://dx.doi.org/10.5771/2747-5174-2022-1-60.

Abstract:
Attempts to impose moral constraints on autonomous, artificial decision-making systems range from “human in the loop” requirements to specialized languages for machine-readable moral rules. Regardless of the approach, though, such proposals all face the challenge that moral standards are not universal. It is tempting to use lawfulness as a proxy for morality; unlike moral rules, laws are usually explicitly defined and recorded – and they are usually at least roughly compatible with local moral norms. However, lawfulness is a highly abstracted and, thus, imperfect substitute for morality, and it should be relied on only with appropriate caution. In this paper, we argue that law-abiding AI systems are a more achievable goal than moral ones. At the same time, we argue that it’s important to understand the multiple layers of abstraction, legal and algorithmic, that underlie even the simplest AI-enabled decisions. The ultimate output of such a system may be far removed from the original intention and may not comport with the moral principles to which it was meant to adhere. Therefore, caution is required lest we develop AI systems that are technically law-abiding but still enable amoral or immoral conduct.
9

Tsai, Yun-Cheng, Fu-Min Szu, Jun-Hao Chen, and Samuel Yen-Chi Chen. "Financial Vision-Based Reinforcement Learning Trading Strategy." Analytics 1, no. 1 (August 9, 2022): 35–53. http://dx.doi.org/10.3390/analytics1010004.

Abstract:
Recent advances in artificial intelligence (AI) for quantitative trading have produced notable, often superhuman, trading performance. However, using AI without proper supervision can lead to wrong choices and heavy losses. We therefore need to ask why and how an AI makes its decisions so that people can trust it. By understanding the decision process, people can correct errors, so the need for explainability highlights the challenge of making intelligent trading technology explainable. This research focuses on financial vision, an explainable approach, and links it to a programmatic implementation. We hope our paper can serve as a reference on both superhuman performance and the reasons behind decisions in trading systems.
10

Singh, Surya Partap, Amitesh Srivastava, Suryansh Dwivedi, and Mr Anil Kumar Pandey. "AI Based Recruitment Tool." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 2815–19. http://dx.doi.org/10.22214/ijraset.2023.52193.

Abstract:
Abstract: In this study, the researchers narrowed their focus to the application of algorithmic decision-making in ranking job applicants. Instead of comparing algorithms to human decision-makers, the study examined participants' perceptions of different types of algorithms. The researchers varied the complexity and transparency of the algorithm to understand how these factors influenced participants' perceptions. The study explored participants' trust in the algorithm's decision-making abilities, fairness of the decisions, and emotional responses to the situation. Unlike previous work, the study emphasized the impact of algorithm design and presentation on perceptions. The findings are important for algorithm designers, especially employers subject to public scrutiny for their hiring practices.

Dissertations on the topic "AI DECISIONS"

1

Blandford, Ann. "Design, decisions and dialogue." Thesis, Open University, 1991. http://oro.open.ac.uk/57316/.

Abstract:
This thesis presents a design for an Intelligent Educational System to support the teaching of design evaluation in engineering. The design consists of a simple computer-based tool (or 'learning environment') for displaying and manipulating information used in the course of problem solving, with a separate dialogue component capable of discussing aspects of the problem and of the problem solving strategy with the user. Many of the novel features of the design have been incorporated in a prototype system called WOMBAT. The main focus of this research has been on the design of the dialogue component. The design of the dialogue component is based on ideas taken from recent work on rational agency. The dialogue component has expertise in engaging in dialogues which support collaborative problem solving (involving system and user) in domains characterised as justified beliefs. It is capable of negotiating about what to do next and about what beliefs to take into account in problem solving. The system acquires problem-related beliefs by applying a simple plausible reasoning mechanism to a database of possible beliefs. The dialogue proceeds by turn-taking in which the current speaker constructs their chosen utterance (which may consist of several propositions and questions) and explicitly indicates when they have finished. When it is the system's turn to make an utterance, it decides what to say based on its beliefs about the current situation and on the likely utility of the various possible responses which it considers appropriate in the circumstances. Two aspects of the problem solving have been fully implemented. These are the discussion about what criteria a decision should be based on and the discussion about what decision step should be taken next. The system's contributions to the interaction are opportunistic, in the sense that at a dialogue level the system does not try to plan beyond the current utterance, and at a problem solving level it does not plan beyond the next action. The results of a formative evaluation of WOMBAT, in which it was exposed to a number of engineering educators, indicate that it is capable of engaging in a coherent dialogue, and that the dialogue is seen to have a pedagogical purpose. Although the approach of reasoning about the next action opportunistically has not proved adequate at a problem solving level, at a dialogue level it yields good results.
2

Houtsma, Meile Jacob. "Perceived AI Performance and Intended Future Use in AI-based Applications." Thesis, Uppsala universitet, Institutionen för informatik och media, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414835.

Abstract:
This case study explored perceived artificial intelligence (AI) performance and intended future use (IFU) in users of AI-based applications. Users could become less motivated to use these applications if AIs do not clearly communicate their actions. A prototype, a user test, and a structured interview were iteratively developed. Eight students participated in the final iteration, which was thematically analyzed. The results indicate that an AI-based application that shows recommendations can positively affect perceived AI performance and IFU. Possibly, the recommendations increased users’ understanding of AI decisions, as well as their satisfaction. Therefore, recommendations could be a potential design element for increasing perceived AI performance and IFU. Finally, time-saving functionality is a design element that could lead to higher IFU in AI-based applications, possibly only for other tasks than examining recommendations. Further research needs to test these findings under different circumstances.
3

Ali, Kashan, and Kim Freimann. "Applying the Technology Acceptance Model to AI decisions in the Swedish Telecom Industry." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21825.

Abstract:
· Purpose: Artificial intelligence is one of the trending areas in research and has been applied successfully in many different contexts, including the telecom sector. The purpose of this study is to replicate a study on the application of AI in the medical sector in order to understand similar challenges of using AI in the telecom sector. · Design/methodology/approach: An online questionnaire-based empirical study was used, and 190 responses were collected. The authors first compare the general Technology Acceptance Model (TAM) framework used in the medical sector against non-AI users. The study then proposes an improved TAM model that best fits the telecom sector and uses it to compare AI and non-AI users in order to understand the acceptance of AI-technology tools in the telecom sector. · Findings: Confirmatory factor analysis revealed that the general TAM model fit is adequate and applicable in the medical sector as well as in the telecom sector. Hypothesis testing using SEM concluded that the generally supported paths between the constructs and variables related to PU, PEU, SN, ATU, and BI in the medical sector are not the same as in the telecom sector. · Research limitations: Results are based on a limited dataset from one of the larger companies in the telecom sector, which could lead to inherent biases. The authors are not sure whether the term "AI-technology tools" in the questions was understood in the same way by all respondents. · Results: The TAM model cannot be generalized across sectors. An improved model was developed for the telecom sector to analyze users' behavior and acceptance of AI technology, and an extended model is proposed as a continuation of this study. Keywords: Medical, Telecom, Artificial Intelligence, Network Intelligence, Technology acceptance model (TAM), Confirmatory Factor analysis (CFA), Structural equation modeling (SEM), Perceived usefulness (PU), Perceived Ease of Use (PEU), Subjective Norms (SN), Attitude Towards AI Use (ATU), Behavioural Intention (BI).
4

Alabdallah, Abdallah. "Human Understandable Interpretation of Deep Neural Networks Decisions Using Generative Models." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-41035.

Abstract:
Deep Neural Networks have long been considered black box systems, where their interpretability is a concern when applied in safety critical systems. In this work, a novel approach of interpreting the decisions of DNNs is proposed. The approach depends on exploiting generative models and the interpretability of their latent space. Three methods for ranking features are explored, two of which depend on sensitivity analysis, and the third one depends on Random Forest model. The Random Forest model was the most successful to rank the features, given its accuracy and inherent interpretability.
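As a small illustration of the third ranking method mentioned in the abstract above, the sketch below ranks input features by a Random Forest's impurity-based importances using scikit-learn. The data are synthetic and the code is not taken from the thesis.

```python
# Hypothetical sketch: rank features by Random Forest importance scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for the latent-space features discussed in the thesis.
X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

ranking = np.argsort(forest.feature_importances_)[::-1]   # most to least important
for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. feature_{idx}: importance={forest.feature_importances_[idx]:.3f}")
```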
5

Erhard, Annalena. "The Cost of Algorithmic decisions : A Systematic Literature Review." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85526.

Abstract:
Decisions have been automated since the early days. Ever since the rise of AI, ML, and data analytics, algorithmic decision-making has experienced a boom. Nowadays, using AI within a company is said to be critical to its success. Considering that it can be quite costly to develop AI/ML and integrate it into decision-making, it is striking how little research has so far been put into the identification and analysis of its cost drivers. This thesis is a contribution to raising awareness of possible cost drivers of algorithmic decisions. The topic was divided into two subgroups: purely algorithmic decision-making and hybrid decision-making. A systematic literature review was conducted to create a theoretical base for further research. For algorithms that make decisions without human interaction, the identified cost drivers relate to data storage (including initial, floor rent, energy, service, disposal, and environmental costs), data processing, transferring, and migrating. Additionally, social costs, costs related to fairness, and costs related to the algorithms themselves (implementation and design, execution, and maintenance) were found. Business intelligence used for decision-making raises costs in data quality, update delays of cloud systems, personnel and personnel training, hardware, software, maintenance, and data storage. Moreover, the recurrence of some costs was detected. Further research should address the applicability of the theoretical costs in practice.
6

MIGLIERINA, ENRICO CARLO. "UN APPROCCIO DINAMICO AI PROBLEMI DI OTTIMIZZAZIONE VETTORIALE." Doctoral thesis, Università degli studi di Trieste, 2002. http://thesis2.sba.units.it/store/handle/item/13246.

7

Orefors, Emil, and Nouri Issaki. "AI IN CONTEXT BASED STATISTICS IN CLINICAL DECISION SUPPORT." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39923.

Abstract:
Some treatments may cause unwanted effects and may make it difficult to reach an optimal personalized decision for a specific patient. Decision support systems in healthcare are a topic that is getting much attention today. The purpose of using such a system is to enhance treatment quality and to make it easier for clinicians to process and obtain information by having access to the patient's electronic health record and past experience. In this thesis, we developed a clinical decision support system (CDSS) that helps clinicians identify similar patients and extract relevant experience. The vision is to enable clinicians to make more informed decisions when choosing a suitable treatment for a patient's condition. We focus on a generic approach using case-based reasoning (CBR) and clustering in order to enable context-based statistics for wider usage of CDSS in healthcare. We test our framework on a specific register that covers patients with cerebral palsy and their ability to walk. In addition, the solution in our framework measures how much the range of motion of the foot changes (increases or decreases) before and after an operation. During this work, an interview was conducted with a clinical expert to collect requirements for developing such systems. The main function of the system is to check whether a patient is similar to any previous patients so that the clinician can get relevant information for choosing a better treatment for the patient. The clinician involved in the project was convinced that our approach could become a valuable tool in clinical decision-making situations.
8

Elbegzaya, Temuulen <1991>. "Application AI in Traditional Supply Chain Management Decision-Making." Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/17733.

Abstract:
In an era of demand uncertainty and complex markets, the ability to fully integrate and orchestrate the entire supply chain spectrum of end-to-end processes, from acquiring materials to converting and delivering to final customers, is highly desired by many organizations. While sourcing, managing, and manipulating data are becoming core advantages in business, a number of leading-edge organizations have been studying and exploring the limits of machine learning and artificial intelligence (AI) to enrich excellence. The common usage of AI refers to extensive computational modelling for reasoning, recognizing patterns, calculating endless possibilities, and learning and understanding from experience to meet one's needs. Especially in demand planning and forecasting, AI and/or machine learning is being used to guide effective planning of future demand with industrial precision of about 85%, but it lacks full implementation in other supply chain (SC) sub-applications such as MRP, MPS, predictive maintenance, and learning from experience instantly. One area of AI's potential application that has not been fully explored is the emerging management philosophy of SCM, which requires the comprehension of complex interactions, real-time joint problem solving, and interrelated decision-making processes. This absence of competency in AI is due to the difficulty of replicating the information inputs on practical implications, technical merits, problem scopes, complex heuristics, and long-term analysis that the human brain can perform. With this obstacle in mind, this paper concentrates on the following objectives: identification of sub-application problems in SCM that can be solved through AI and machine learning algorithms; exploring other literature and exploratory work on AI development and design in SCM; summarizing modern SCM models that can be addressed and replicated in AI application areas, problem scopes, and methodology; discussing and developing problem-solving for the traditional manager's decision-making process in SCM using AI/ML techniques; examining and synthesizing the SC data inputs required to enhance technical integrity and joint problem-solving in AI; and reviewing the future outlook on the multitude of applications of AI and machine learning in SCM.
9

Frank, Michael Patrick. "Advances in decision-theoretic AI : limited rationality and abstract search." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/34070.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 153-165).
by Michael Patrick Frank.
M.S.
10

Latora, Antonio Giuseppe. "Metodologie Analytic Hierarchy Process ibride per applicazioni di Multiple Criteria Decision Analysis ai processi di Procurement." Doctoral thesis, Università di Catania, 2012. http://hdl.handle.net/10761/1040.

Abstract:
Costs and benefits connected to the decisions that characterize human life every day stem from the multiple and often conflicting points of view, or criteria, used in decision-making, an activity that has become the subject of in-depth recent studies belonging to a true mathematical discipline known as Multiple Criteria Decision Analysis (MCDA). A decision is not merely an act of choice but the result of a decision-making process of understanding and modeling the problem, that is, a series of activities that allow the transformation of the problem into a solution, producing in the decision maker a psychological transition from a perception of dissatisfaction to one of satisfaction. Studies carried out in the MCDA field have given rise to two different schools of thought: the French school, based mainly on the concept of outranking, and the American school of Multi-Attribute Utility and Value Theories; the latter includes the Analytic Hierarchy Process (AHP), an MCDA methodology developed by Thomas Lorie Saaty in order to rank a defined or undefined number of alternatives. Pairwise comparison, that is, the measurement of the relative importance between potential actions or alternatives according to a higher-level criterion or point of view, allows the determination of a priority scale for intangible entities, which by definition lack measurement scales, but also for tangible entities that can be evaluated in metric topology on suitable measurement scales. The objective of the doctoral thesis was the study, aimed at modification and application, of the AHP MCDA methodology for procurement, the management engineering process dedicated to the acquisition of goods and services within a generic value chain. E-procurement is a consolidated reality in business-to-consumer, business-to-business, and business-to-government contexts, but the model of the generic procurement process, given today's state of the art of ICT applied to procurement, cannot be considered a reference standard, because it was developed in an era characterized by information systems undoubtedly different from the current ones in terms of technologies and conceptual schemes. The Hybrid Analytic Hierarchy Process methodology investigated here originates from Saaty's observation that a number has no meaning other than the one assigned to it by whoever is called to interpret it. In a context characterized by both qualitative and quantitative measures, the hybrid H-AHP-R and H-AHP-A methodologies allow the parallel and serial ranking of a defined or undefined number of alternatives through lean MCDA, that is, through direct non-AHP calculation of the ideal rating for quantitative criteria and through AHP-R and AHP-A calculation of the ideal rating for qualitative criteria. By designing a lean scouting process based on the H-AHP-A MCDA methodology, the study, with reference to specific types of purchases, partially re-engineered the generic procurement process, with the aim of making the choice of the procurement alternative, among the solutions identified on the web, rational, efficient, effective, and compliant with the identified evaluation criteria, in order to maximize the value of the purchase and minimize selection time.

Books on the topic "AI DECISIONS"

1

Phillips-Wren, Gloria, Nikhil Ichalkaranje, and Lakhmi C. Jain, eds. Intelligent Decision Making: An AI-Based Approach. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-76829-6.

2

Cox, Louis Anthony. AI-ML for Decision and Risk Analysis. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-32013-2.

3

A, Sexton George, and Langley Research Center, eds. "Diverter" AI based decision aid: Phases I & II. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1989.

4

International Conference on Systems Research, Informatics, and Cybernetics (17th 2005 Baden-Baden, Germany). Cognitive, emotive, and ethical aspects of decision making in humans and in AI. Windsor, Ont: International Institute for Advanced Studies in Systems Research and Cybernetics, 2005.

5

Zhaoxia, Guo, and Leung Yung-sun, eds. Optimizing decision making in the apparel supply chain using artificial intelligence (AI): From production to retail. Cambridge: Woodhead Publishing Ltd, 2013.

6

Sifeng, Liu, and Lin Yi 1959-, eds. Hybrid rough sets and applications in uncertain decision-making. Boca Raton: Auerbach Publications, 2010.

7

Mendel, Jerry M. Perceptual computing: Aiding people in making subjective judgments. Hoboken, N.J: John Wiley & Sons, 2010.

8

Russell, Stuart J. Do the right thing: Studies in limited rationality. Cambridge, Mass: MIT Press, 1991.

9

Lewis, Carroll. Ai-li-si meng you qi jing. Hong Kong: The Sunbeam Pub., 1986.

10

Lewis, Carroll. Ai-li-si man you qi jing. Hong Kong: Da Gueng Cu Ban She, 1986.


Book chapters on the topic "AI DECISIONS"

1

Schmees, Johannes, and Stephan Dreyer. "Legal Requirements for AI Decisions in Administration and Justice." In Work and AI 2030, 115–22. Wiesbaden: Springer Fachmedien Wiesbaden, 2023. http://dx.doi.org/10.1007/978-3-658-40232-7_13.

2

Ullman, D. G., and D. Herling. "Computer Support for Design Team Decisions." In AI System Support for Conceptual Design, 349–61. London: Springer London, 1996. http://dx.doi.org/10.1007/978-1-4471-1475-8_21.

3

Kolek, Stefan, Duc Anh Nguyen, Ron Levie, Joan Bruna, and Gitta Kutyniok. "A Rate-Distortion Framework for Explaining Black-Box Model Decisions." In xxAI - Beyond Explainable AI, 91–115. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_6.

Abstract:
We present the Rate-Distortion Explanation (RDE) framework, a mathematically well-founded method for explaining black-box model decisions. The framework is based on perturbations of the target input signal and applies to any differentiable pre-trained model such as neural networks. Our experiments demonstrate the framework’s adaptability to diverse data modalities, particularly images, audio, and physical simulations of urban environments.
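The following is a loose, hypothetical illustration of the perturbation idea behind rate-distortion-style explanations: estimate, per input component, how much the model output is distorted when that component is replaced by noise. The toy model, input, and noise distribution are invented and do not come from the chapter.

```python
# Toy perturbation-based relevance sketch (not the RDE implementation).
import numpy as np

rng = np.random.default_rng(0)
model = lambda x: np.tanh(x @ np.array([2.0, -1.0, 0.0, 0.5]))  # stand-in differentiable model

x = np.array([1.0, -0.5, 0.3, 0.8])          # input to be explained
baseline = model(x)

def expected_distortion(i, n_samples=1000):
    """Mean squared output change when component i is resampled from noise."""
    xs = np.tile(x, (n_samples, 1))
    xs[:, i] = rng.normal(size=n_samples)     # perturb only component i
    return np.mean((model(xs) - baseline) ** 2)

scores = np.array([expected_distortion(i) for i in range(x.size)])
print("relevance order (most to least):", np.argsort(scores)[::-1])
```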
4

Cabitza, Federico. "Biases Affecting Human Decision Making in AI-Supported Second Opinion Settings." In Modeling Decisions for Artificial Intelligence, 283–94. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26773-5_25.

5

Kieseberg, Peter, Edgar Weippl, A. Min Tjoa, Federico Cabitza, Andrea Campagner, and Andreas Holzinger. "Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?" In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_1.

Abstract:
AbstractThe release of ChatGPT to the general public has sparked discussions about the dangers of artificial intelligence (AI) among the public. The European Commission’s draft of the AI Act has further fueled these discussions, particularly in relation to the definition of AI and the assignment of risk levels to different technologies. Security concerns in AI systems arise from the need to protect against potential adversaries and to safeguard individuals from AI decisions that may harm their well-being. However, ensuring secure and trustworthy AI systems is challenging, especially with deep learning models that lack explainability. This paper proposes the concept of Controllable AI as an alternative to Trustworthy AI and explores the major differences between the two. The aim is to initiate discussions on securing complex AI systems without sacrificing practical capabilities or transparency. The paper provides an overview of techniques that can be employed to achieve Controllable AI. It discusses the background definitions of explainability, Trustworthy AI, and the AI Act. The principles and techniques of Controllable AI are detailed, including detecting and managing control loss, implementing transparent AI decisions, and addressing intentional bias or backdoors. The paper concludes by discussing the potential applications of Controllable AI and its implications for real-world scenarios.
6

Palvai, Ravi Ram Reddy, and Arshinder Kaur. "Steel Price Forecasting for Better Procurement Decisions: Comparing Tree-Based Decision Learning Methods." In Applications of Emerging Technologies and AI/ML Algorithms, 139–47. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1019-9_14.

7

Baier, Christel, Maria Christakis, Timo P. Gros, David Groß, Stefan Gumhold, Holger Hermanns, Jörg Hoffmann, and Michaela Klauck. "Lab Conditions for Research on Explainable Automated Decisions." In Trustworthy AI - Integrating Learning, Optimization and Reasoning, 83–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73959-1_8.

8

Miller, Gloria J. "Artificial Intelligence Project Success Factors—Beyond the Ethical Principles." In Lecture Notes in Business Information Processing, 65–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98997-2_4.

Abstract:
The algorithms implemented through artificial intelligence (AI) and big data projects are used in life-and-death situations. Despite research that addresses varying aspects of moral decision-making based upon algorithms, the definition of project success is less clear. Nevertheless, researchers place the burden of responsibility for ethical decisions on the developers of AI systems. This study used a systematic literature review to identify five categories of AI project success factors in 17 groups related to moral decision-making with algorithms. It translates AI ethical principles into practical project deliverables and actions that underpin the success of AI projects. It considers success over time by investigating the development, usage, and consequences of moral decision-making by algorithmic systems. Moreover, the review reveals and defines AI success factors within the project management literature. Project managers and sponsors can use the results during project planning and execution.
9

Mamalakis, Antonios, Imme Ebert-Uphoff, and Elizabeth A. Barnes. "Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science." In xxAI - Beyond Explainable AI, 315–39. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_16.

Abstract:
In recent years, artificial intelligence and specifically artificial neural networks (NNs) have shown great success in solving complex, nonlinear problems in earth sciences. Despite their success, the strategies upon which NNs make decisions are hard to decipher, which prevents scientists from interpreting and building trust in the NN predictions; a highly desired and necessary condition for the further use and exploitation of NNs’ potential. Thus, a variety of methods have been recently introduced with the aim of attributing the NN predictions to specific features in the input space and explaining their strategy. The so-called eXplainable Artificial Intelligence (XAI) is already seeing great application in a plethora of fields, offering promising results and insights about the decision strategies of NNs. Here, we provide an overview of the most recent work from our group, applying XAI to meteorology and climate science. Specifically, we present results from satellite applications that include weather phenomena identification and image to image translation, applications to climate prediction at subseasonal to decadal timescales, and detection of forced climatic changes and anthropogenic footprint. We also summarize a recently introduced synthetic benchmark dataset that can be used to improve our understanding of different XAI methods and introduce objectivity into the assessment of their fidelity. With this overview, we aim to illustrate how gaining accurate insights about the NN decision strategy can help climate scientists and meteorologists improve practices in fine-tuning model architectures, calibrating trust in climate and weather prediction and attribution, and learning new science.
10

Jha, Susmit, Vasumathi Raman, Alessandro Pinto, Tuhin Sahai, and Michael Francis. "On Learning Sparse Boolean Formulae for Explaining AI Decisions." In Lecture Notes in Computer Science, 99–114. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57288-8_7.


Conference papers on the topic "AI DECISIONS"

1

Wang, Xinru, Chen Liang, and Ming Yin. "The Effects of AI Biases and Explanations on Human Decision Fairness: A Case Study of Bidding in Rental Housing Markets." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/343.

Abstract:
The use of AI-based decision aids in diverse domains has inspired many empirical investigations into how AI models’ decision recommendations impact humans’ decision accuracy in AI-assisted decision making, while explorations on the impacts on humans’ decision fairness are largely lacking despite their clear importance. In this paper, using a real-world business decision making scenario—bidding in rental housing markets—as our testbed, we present an experimental study on understanding how the bias level of the AI-based decision aid as well as the provision of AI explanations affect the fairness level of humans’ decisions, both during and after their usage of the decision aid. Our results suggest that when people are assisted by an AI-based decision aid, both the higher level of racial biases the decision aid exhibits and surprisingly, the presence of AI explanations, result in more unfair human decisions across racial groups. Moreover, these impacts are partly made through triggering humans’ “disparate interactions” with AI. However, regardless of the AI bias level and the presence of AI explanations, when people return to make independent decisions after their usage of the AI-based decision aid, their decisions no longer exhibit significant unfairness across racial groups.
2

Lin, Patrick. "AI Decisions, Risk, and Ethics." In AIES '18: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3278721.3278806.

3

Mcguire, Mollie, and Miroslav Bernkopf. "Stress and Motivation on Reliance Decisions with Automation." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003579.

Abstract:
The decision to rely on automation is crucial in high-stress environments where there is an element of uncertainty. It is equally vital in human-automation partnership that the human’s expectations of automation reliability are appropriately calibrated. Therefore, it is important to better understand reliance decisions under varying automation reliability. The current study examined the effects of stress and motivation on the decision to rely on autonomous partners. Participants were randomly assigned to a stress and motivation condition, using the Trier Social Stress Test (TSST) for stress induction and a monetary incentive for motivation. The main task was an iterative pattern-learning task in which one of two AI partners, one with high reliability and one with low reliability, gave advice at every iteration; the AI partner alternated every ten iterations. While motivation had a stronger effect than stress, both motivation and stress affected reliance decisions with the high-reliability AI. Reliance on the low-reliability AI was affected to a lesser degree, if at all. Overall, the decision not to rely on the AI partner, especially the one higher in reliability, was slower than the decision to rely on it, with the slowest decision times occurring in the high-stress condition with motivated participants, suggesting that more deliberate processing was used when deciding against the advice of the more reliable AI.
4

Alexandrov, Natalia. "Explainable AI Decisions for Human-Autonomy Interactions." In 17th AIAA Aviation Technology, Integration, and Operations Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2017. http://dx.doi.org/10.2514/6.2017-3991.

5

Lee Bower, Linda. "AI Decision Making for Allocating Government Grant Funds." In Human Interaction and Emerging Technologies (IHIET-AI 2022) Artificial Intelligence and Future Applications. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100869.

Abstract:
This paper discusses the use of Artificial Intelligence in government decision making with a case study on the use of Artificial Intelligence to distribute government grant funds. Artificial Intelligence enables autonomous systems and decision support aids. A formal process is very important when designing a system to make decisions autonomously with Artificial Intelligence. The Office of Justice Programs, an agency of the U.S. Department of Justice, focuses on crime prevention; it provides research and development assistance to state, local, and tribal criminal justice agencies. OJP’s public safety grants involve about $2 billion distributed to some 2,000 grantees. In the past, the agency had no standard approach for determining who received grants. Then, about 2011, OJP began introducing objective measures into the grant review process and automated the process. With AI, the new system resulted in increased accuracy and consistency of decisions, as well as a more efficient review process.
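As a purely illustrative sketch of what "objective measures" in an automated grant review could look like, the code below scores applications on weighted criteria and ranks them. The criteria, weights, and scores are invented and do not reflect OJP's actual process.

```python
# Hypothetical weighted-scoring sketch for ranking grant applications.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    need: float        # 0-10 score for demonstrated need (assumed criterion)
    plan: float        # 0-10 score for quality of project plan (assumed criterion)
    capacity: float    # 0-10 score for organizational capacity (assumed criterion)

WEIGHTS = {"need": 0.4, "plan": 0.4, "capacity": 0.2}   # invented weights

def score(app: Application) -> float:
    return (WEIGHTS["need"] * app.need
            + WEIGHTS["plan"] * app.plan
            + WEIGHTS["capacity"] * app.capacity)

apps = [Application("A", 8, 6, 7), Application("B", 5, 9, 8), Application("C", 7, 7, 4)]
for app in sorted(apps, key=score, reverse=True):
    print(f"{app.name}: {score(app):.2f}")
```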
6

Sidorova, Anna, and Kashif Saeed. "Incorporating Stakeholder enfranchisement, Risks, Gains, and AI decisions in AI Governance Framework." In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2022. http://dx.doi.org/10.24251/hicss.2022.722.

7

Alyaev, Sergey, Andrew Holsaeter, Reidar Brumer Bratvold, Sofija Ivanova, and Morten Bendiksen. "Systematic Decisions Under Uncertainty: An Experiment Towards Better Geosteering Operations." In SPE/IADC International Drilling Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/204133-ms.

Abstract:
Abstract Geosteering workflows are increasingly based on updated quantifications of subsurface uncertainties during real-time operations. These workflows give tremendous amounts of information that a human brain cannot make sense of. To advance value creation from geosteering, the industry should develop and adopt decision support systems (DSSs). DSSs might provide either expert tools which inform decisions under uncertainty or optimization-based recommendations. In both cases the adoption of a DSS would require new skillsets to dynamically and systematically interpret uncertainties and parameters required for operational decision making. The aim of this work is to identify the relevant skills and ways to aid good geosteering decisions. We present an experiment where 54 geosteering experts took part in performing steering decisions under uncertainty in a controlled environment using an online competition platform. In the experiment we compare the decisions of the experts with an AI bot that had the same information at its disposal. Two of the participants beat the AI bot. A survey was conducted to reveal their winning strategies. The survey shows that both of the winners had extensive prior geosteering experience. That, together with luck, allowed them to beat the AI bot. At the same time neither of the winners utilized the full potential of uncertainty tools in the platform. While geosteering experts possess insights due to prior experience, the information in the real-time data will still be overwhelming, sometimes resulting in inconsistent and unreliable geosteering choices. The AI bot guarantees reliable and consistent decisions by optimization based on systematic uncertainty analysis. Further development of DSSs, and their use as training-simulators for experts, should lead to improved well placements through adopting well-established principles for high-quality decision-making.
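To illustrate the kind of systematic decision under uncertainty discussed above, here is a toy Monte Carlo sketch that compares two steering choices by expected value over samples of an uncertain reservoir-top depth. All numbers and the payoff function are invented; this is not the paper's decision support system or AI bot.

```python
# Toy expected-value comparison of two geosteering actions under depth uncertainty.
import numpy as np

rng = np.random.default_rng(1)
reservoir_top = rng.normal(loc=2000.0, scale=5.0, size=10_000)  # uncertain depth samples, metres

def value(action_depth, top):
    """Payoff: reward landing the well just below the reservoir top, penalize a miss."""
    miss = action_depth - top
    return np.where((miss > 0) & (miss < 3.0), 1.0, -0.2)

candidates = {"steer_to_2001m": 2001.0, "steer_to_2004m": 2004.0}
expected = {name: value(d, reservoir_top).mean() for name, d in candidates.items()}
best = max(expected, key=expected.get)
print(expected, "-> choose", best)
```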
8

Loreggia, Andrea, Nicholas Mattei, Taher Rahgooy, Francesca Rossi, Biplav Srivastava, and Kristen Brent Venable. "Making Human-Like Moral Decisions." In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3534174.

9

Gao, Ruijiang, Maytal Saar-Tsechansky, Maria De-Arteaga, Ligong Han, Min Kyung Lee, and Matthew Lease. "Human-AI Collaboration with Bandit Feedback." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/237.

Abstract:
Human-machine complementarity is important when neither the algorithm nor the human yield dominant performance across all instances in a given domain. Most research on algorithmic decision-making solely centers on the algorithm's performance, while recent work that explores human-machine collaboration has framed the decision-making problems as classification tasks. In this paper, we first propose and then develop a solution for a novel human-machine collaboration problem in a bandit feedback setting. Our solution aims to exploit the human-machine complementarity to maximize decision rewards. We then extend our approach to settings with multiple human decision makers. We demonstrate the effectiveness of our proposed methods using both synthetic and real human responses, and find that our methods outperform both the algorithm and the human when they each make decisions on their own. We also show how personalized routing in the presence of multiple human decision-makers can further improve the human-machine team performance.
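A simplified, hypothetical sketch of the routing idea described above: under bandit feedback, an epsilon-greedy policy learns whether to send each instance to the human or the machine based on observed rewards. The reward probabilities are invented, and this is not the authors' method or code.

```python
# Epsilon-greedy routing between a human and a machine decision-maker under bandit feedback.
import numpy as np

rng = np.random.default_rng(0)
true_reward_prob = {"human": 0.65, "machine": 0.75}   # assumed, unknown to the router
counts = {"human": 0, "machine": 0}
value = {"human": 0.0, "machine": 0.0}
epsilon = 0.1

for t in range(5000):
    if rng.random() < epsilon:                          # explore
        arm = rng.choice(["human", "machine"])
    else:                                               # exploit current estimate
        arm = max(value, key=value.get)
    reward = float(rng.random() < true_reward_prob[arm])  # only the chosen arm's outcome is observed
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]   # incremental mean update

print("estimated values:", value, "routing counts:", counts)
```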
10

Kramer, Max F., Jana Schaich Borg, Vincent Conitzer, and Walter Sinnott-Armstrong. "When Do People Want AI to Make Decisions?" In AIES '18: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3278721.3278752.


Reports of organizations on the topic "AI DECISIONS"

1

Ruvinsky, Alicia, Timothy Garton, Daniel Chausse, Rajeev Agrawal, Harland Yu, and Ernest Miller. Accelerating the tactical decision process with High-Performance Computing (HPC) on the edge : motivation, framework, and use cases. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42169.

Abstract:
Managing the ever-growing volume and velocity of data across the battlefield is a critical problem for warfighters. Solving this problem will require a fundamental change in how battlefield analyses are performed. A new approach to making decisions on the battlefield will eliminate data transport delays by moving the analytical capabilities closer to data sources. Decision cycles depend on the speed at which data can be captured and converted to actionable information for decision making. Real-time situational awareness is achieved by locating computational assets at the tactical edge. Accelerating the tactical decision process leverages capabilities in three technology areas: (1) High-Performance Computing (HPC), (2) Machine Learning (ML), and (3) Internet of Things (IoT). Exploiting these areas can reduce network traffic and shorten the time required to transform data into actionable information. Faster decision cycles may revolutionize battlefield operations. Presented is an overview of an artificial intelligence (AI) system design for near-real-time analytics in a tactical operational environment executing on co-located, mobile HPC hardware. The report contains the following sections, (1) an introduction describing motivation, background, and state of technology, (2) descriptions of tactical decision process leveraging HPC problem definition and use case, and (3) HPC tactical data analytics framework design enabling data to decisions.
2

Sulaiman, Muhammad. AI-enabled proactive mHealth with automated decision-making. Peeref, March 2023. http://dx.doi.org/10.54985/peeref.2303p9916446.

3

Hoffman, Wyatt, and Heeu Millie Kim. Reducing the Risks of Artificial Intelligence for Military Decision Advantage. Center for Security and Emerging Technology, March 2023. http://dx.doi.org/10.51593/2021ca008.

Abstract:
Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty in the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood, and containing the consequences of, AI failures.
4

Buchanan, Ben. The AI Triad and What It Means for National Security Strategy. Center for Security and Emerging Technology, August 2020. http://dx.doi.org/10.51593/20200021.

Abstract:
One sentence summarizes the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. This AI triad of computing power, algorithms, and data offers a framework for decision-making in national security policy.
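The triad can be made concrete with a toy example that is not taken from the brief: the data are a few synthetically labeled points, the algorithm is logistic regression trained by gradient descent, and computing power is whatever hardware executes the loop.

```python
import math
import random

random.seed(0)

# DATA: labeled examples (here, synthetic points labeled by a simple rule).
data = [((x1, x2), 1.0 if x1 + x2 > 1.0 else 0.0)
        for x1, x2 in [(random.random(), random.random()) for _ in range(200)]]

# ALGORITHM: logistic regression, learned from the data by gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5

# COMPUTING POWER: whatever hardware executes this training loop.
for _ in range(200):
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))  # predicted probability
        err = p - y                                # gradient of the log loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

correct = sum(
    ((1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))) > 0.5) == (y == 1.0)
    for (x1, x2), y in data
)
print(f"weights={[round(v, 2) for v in w]}, bias={b:.2f}, accuracy={correct / len(data):.0%}")
```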
5

Konaev, Margarita, Husanjot Chahal, Ryan Fedasiuk, Tina Huang, and Ilya Rahkovsky. U.S. Military Investments in Autonomy and AI: A Budgetary Assessment. Center for Security and Emerging Technology, October 2020. http://dx.doi.org/10.51593/20200069.

Abstract:
The Pentagon has a wide range of research and development programs using autonomy and AI in unmanned vehicles and systems, information processing, decision support, targeting functions, and other areas. This policy brief delves into the details of DOD’s science and technology program to assess trends in funding, key areas of focus, and gaps in investment that could stymie the development and fielding of AI systems in operational settings.
6

Saini, Ravinder, AbdulKhaliq Alshadid, and Lujain Aldosari. Investigation on the application of artificial intelligence in prosthodontics. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, December 2022. http://dx.doi.org/10.37766/inplasy2022.12.0096.

Abstract:
Review question / Objective: 1. Which artificial intelligence techniques are practiced in dentistry? 2. How is AI improving the diagnosis, clinical decision making, and outcomes of dental treatment? 3. What are the current clinical applications and diagnostic performance of AI in the field of prosthodontics? Condition being studied: Procedures for desktop design and fabrication, computer-aided design and manufacturing (CAD/CAM) in particular, have made their way into routine healthcare and laboratory practice. Based on flat imagery, artificial intelligence may also be utilized to forecast the debonding of dental repairs. Dental arches in removable prosthodontics may be categorized using convolutional neural networks (CNNs). By properly positioning the teeth, machine learning in CAD/CAM software can reestablish healthy inter-maxillary relationships. AI may assist with accurate color matching in challenging cosmetic scenarios that involve a single central incisor or several front teeth. Intraoral scanners can identify implant positions in implant prosthodontics and instantly input them into CAD software. The design and execution of dental implants could potentially be improved by utilizing AI.
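As a rough illustration of one technique this protocol mentions (categorizing dental arches with a CNN), the sketch below shows a minimal PyTorch convolutional classifier; it is not drawn from any reviewed study, and the input size and class count are invented for the example.

```python
import torch
from torch import nn

NUM_ARCH_CLASSES = 4  # hypothetical dental-arch categories

class ArchClassifier(nn.Module):
    """Minimal illustrative CNN; not a model from the reviewed literature."""

    def __init__(self, num_classes: int = NUM_ARCH_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 grayscale input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

if __name__ == "__main__":
    model = ArchClassifier()
    dummy_batch = torch.randn(8, 1, 64, 64)  # stand-in for real intraoral images
    logits = model(dummy_batch)
    print(logits.shape)  # torch.Size([8, 4])
```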
7

Rudd, Ian. Leveraging Artificial Intelligence and Robotics to Improve Mental Health. Intellectual Archive, July 2022. http://dx.doi.org/10.32370/iaj.2710.

Abstract:
Artificial Intelligence (AI) is one of the oldest fields of computer science, concerned with building systems that resemble human beings in their ability to think, learn, solve problems, and make decisions (Jovanovic et al., 2021). AI technologies and techniques have been applied in many areas to help solve problems and perform tasks more reliably, efficiently, and effectively than would be possible without them. These technologies have also been reshaping the health sector, particularly through digital tools and medical robotics (Dantas & Nogaroli, 2021). This new reality has become feasible because of the exponential growth in patient health data collected globally; the different technological approaches are turning the medical sciences into data-intensive sciences (Dantas & Nogaroli, 2021). Notably, with the digitization of medical records supported by growing cloud storage, the health sector has created a vast and potentially immeasurable volume of biomedical data necessary for implementing robotics and AI. Despite the notable use of AI in healthcare fields such as dermatology and radiology, its use in psychological healthcare has been limited. Considering the increased mortality and morbidity among patients with psychiatric illnesses and the debilitating shortage of psychological healthcare workers, there is a vital need for AI and robotics to help identify high-risk persons and to provide measures that avert and treat mental disorders (Lee et al., 2021). This discussion is focused on understanding how AI and robotics could be employed to improve mental health in the human community. The continued success of this technology in other healthcare fields demonstrates that it could also be used to redefine mental illnesses objectively, identify them at a prodromal phase, personalize treatments, and empower patients in their care programs.
8

Goodwin, Sarah, Yigal Attali, Geoffrey LaFlair, Yena Park, Andrew Runge, Alina von Davier, and Kevin Yancey. Duolingo English Test - Writing Construct. Duolingo, March 2023. http://dx.doi.org/10.46999/arxn5612.

Abstract:
Assessments, especially those used for high-stakes decision making, draw on evidence-based frameworks. Such frameworks inform every aspect of the testing process, from development to results reporting. The frameworks that language assessment professionals use draw on theory in language learning, assessment design, and measurement and psychometrics in order to provide underpinnings for the evaluation of language skills including speaking, writing, reading, and listening. This paper focuses on the construct, or underlying trait, of writing ability. The paper conceptualizes the writing construct for the Duolingo English Test, a digital-first assessment. “Digital-first” includes technology such as artificial intelligence (AI) and machine learning, with human expert involvement, throughout all item development, test scoring, and security processes. This work is situated in the Burstein et al. (2022) theoretical ecosystem for digital-first assessment, the first representation of its kind that incorporates design, validation/measurement, and security all situated directly in assessment practices that are digital first. The paper first provides background information about the Duolingo English Test and then defines the writing construct, including the purposes for writing. It also introduces principles underpinning the design of writing items and illustrates sample items that assess the writing construct.
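Purely as an illustration of machine scoring of writing (the Duolingo English Test's actual scoring models are not described in this abstract), the toy sketch below derives a score from shallow text features with hand-picked weights; a real system would learn such weights from human-rated responses and keep expert reviewers in the loop.

```python
import math
from dataclasses import dataclass

@dataclass
class WritingFeatures:
    n_words: int
    avg_word_len: float
    n_sentences: int

def extract_features(text: str) -> WritingFeatures:
    """Shallow, illustrative features only; real systems use far richer evidence."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    return WritingFeatures(len(words), avg_len, max(len(sentences), 1))

def toy_score(f: WritingFeatures) -> float:
    """Hand-picked, hypothetical weights on an invented 0-10 scale."""
    raw = 0.02 * f.n_words + 0.5 * f.avg_word_len + 0.3 * math.log(f.n_sentences + 1)
    return min(10.0, raw)

if __name__ == "__main__":
    essay = "Assessment frameworks guide test design. They also support score interpretation."
    print(f"toy writing score: {toy_score(extract_features(essay)):.2f}")
```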
9

Sarafian, Iliana. Considerazioni chiave: affrontare le discriminazioni strutturali e le barriere al vaccino COVID-19 per le comunità rom in Italia. SSHAP, May 2022. http://dx.doi.org/10.19088/sshap.2022.024.

Abstract:
This report highlights how structural discrimination and social exclusion shape perceptions of and attitudes towards the COVID-19 vaccine among Roma communities in Italy. One of its aims is to bring out the role that public authorities and communities can play in supporting vaccine uptake and in countering wider processes of social exclusion. The contradictory responses of the Italian state during the COVID-19 pandemic, together with pre-existing forms of exclusion, have increased Roma communities' distrust of state initiatives, which has also affected participation in the vaccination campaign. The brief is intended to support and inform local administrations and public health institutions involved in assisting and including Roma communities in Italy. It is based on research conducted in person and remotely between November 2021 and January 2022 in Italy with Roma and Sinti communities in Milan, Rome, and Catania. Although these communities differ in their histories and in their linguistic, geographic, and religious identities, similarities were identified in how they experienced the COVID-19 pandemic and in their decisions about vaccination. The brief was developed for SSHAP by Iliana Sarafian (LSE) with contributions and reviews from Elizabeth Storer (LSE), Tabitha Hrynick (IDS), Marco Solimene (University of Iceland), Dijana Pavlovic (Upre Roma), and Olivia Tulloch (Anthrologica). The research was funded by the British Academy COVID-19 Recovery: G7 Fund (COVG7210058) and carried out at the Firoz Lalji Institute for Africa, London School of Economics. Responsibility for the summary lies with SSHAP.
10

Artificial intelligence (AI) in decision making. Committee on Publication Ethics, September 2021. http://dx.doi.org/10.24318/9kvagrnj.
