Journal articles on the topic "Artificial Social Agents"

To view other types of publications on this topic, follow the link: Artificial Social Agents.

Cite your source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Artificial Social Agents".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the publication's metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Holtgraves, T. M., S. J. Ross, C. R. Weywadt, and T. L. Han. "Perceiving artificial social agents." Computers in Human Behavior 23, no. 5 (September 2007): 2163–74. http://dx.doi.org/10.1016/j.chb.2006.02.017.

2

Wiese, Eva, Tyler Shaw, Daniel Lofaro, and Carryl Baldwin. "Designing Artificial Agents as Social Companions." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1604–8. http://dx.doi.org/10.1177/1541931213601764.

Abstract:
When we interact with others, we make inferences about their internal states (i.e., intentions, emotions) and use this information to understand and predict their behavior. Reasoning about the internal states of others is referred to as mentalizing, and presupposes that our social partners are believed to have a mind. Seeing mind in others increases trust, prosocial behaviors and feelings of social connection, and leads to improved joint performance. However, while human agents trigger mind perception by default, artificial agents are not automatically treated as intentional entities but need to be designed to do so. The panel addresses this issue by discussing how mind attribution to robots and other automated agents can be elicited by design, what the effects of mind perception are on attitudes and performance in human-robot and human-machine interaction and what behavioral and neuroscientific paradigms can be used to investigate these questions. Application areas covered include social robotics, automation, driver-vehicle interfaces, and others.
3

Chaminade, Thierry, and Jessica K. Hodgins. "Artificial agents in social cognitive sciences." Interaction Studies 7, no. 3 (November 13, 2006): 347–53. http://dx.doi.org/10.1075/is.7.3.07cha.

4

Wykowska, Agnieszka, Thierry Chaminade, and Gordon Cheng. "Embodied artificial agents for understanding human social cognition." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1693 (May 5, 2016): 20150375. http://dx.doi.org/10.1098/rstb.2015.0375.

Abstract:
In this paper, we propose that experimental protocols involving artificial agents, in particular embodied humanoid robots, provide insightful information regarding social cognitive mechanisms in the human brain. Using artificial agents allows for manipulation and control of various parameters of behaviour, appearance and expressiveness in one of the interaction partners (the artificial agent), and for examining the effect of these parameters on the other interaction partner (the human). At the same time, using artificial agents means introducing the presence of artificial, yet human-like, systems into the human social sphere. This allows for testing, in a controlled but ecologically valid manner, fundamental human mechanisms of social cognition both at the behavioural and at the neural level. This paper will review existing literature that reports studies in which artificial embodied agents have been used to study social cognition and will address the question of whether various mechanisms of social cognition (ranging from lower- to higher-order cognitive processes) are evoked by artificial agents to the same extent as by natural agents, humans in particular. Increasing the understanding of how behavioural and neural mechanisms of social cognition respond to artificial anthropomorphic agents provides empirical answers to the conundrum ‘What is a social agent?'
5

Nakanishi, Hideyuki, Shinya Shimizu, and Katherine Isbister. "SENSITIZING SOCIAL AGENTS FOR VIRTUAL TRAINING." Applied Artificial Intelligence 19, no. 3-4 (March 9, 2005): 341–61. http://dx.doi.org/10.1080/08839510590910192.

6

Xia, Zheng You, and Chen Ling Gu. "The Role of Belief in the Emergence of Social Conventions in Artificial Social System." Advanced Materials Research 159 (December 2010): 210–15. http://dx.doi.org/10.4028/www.scientific.net/amr.159.210.

Abstract:
The emergence of social conventions in multi-agent systems has been analyzed mainly by considering a group of homogeneous autonomous agents that can reach a global agreement using locally available information. We take a novel viewpoint and consider that the process through which agents coordinate their behaviors to reduce conflict is also the process agents use to evaluate trust relations with their neighbors during local interactions. In this paper, we propose using the belief update rule called Instances of Satisfying and Dissatisfying (ISD) to study the evolution of agents' beliefs during local interactions. We also define an action selection rule called “highest cumulative belief” (HCB) to coordinate agents' behavior and reduce conflicts in MAS (multi-agent systems). We find that the HCB can cause a group of agents to achieve the emergence of social conventions. Furthermore, we discover that if a group of agents can achieve the emergence of social conventions through ISD and HCB rules in an artificial social system, after a number of iterations this group of agents can enter the harmony state wherein each agent fully believes its neighbors.
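
To make the action-selection idea concrete, here is a minimal Python sketch of a "highest cumulative belief" style rule in a population of randomly paired agents. The agent count, the two-action setting, and the simple ±1 belief updates are illustrative assumptions, not the paper's exact ISD formulation.

    import random

    N_AGENTS, N_ACTIONS, N_ROUNDS = 50, 2, 2000

    # Each agent keeps a cumulative belief score per action (names and update
    # sizes are illustrative; the paper's ISD rule is more elaborate).
    beliefs = [[0.0] * N_ACTIONS for _ in range(N_AGENTS)]

    def choose(agent):
        # "Highest cumulative belief" action selection with random tie-breaking.
        best = max(beliefs[agent])
        return random.choice([a for a, b in enumerate(beliefs[agent]) if b == best])

    for _ in range(N_ROUNDS):
        i, j = random.sample(range(N_AGENTS), 2)   # random local interaction
        ai, aj = choose(i), choose(j)
        if ai == aj:                               # satisfying instance: reinforce
            beliefs[i][ai] += 1
            beliefs[j][aj] += 1
        else:                                      # dissatisfying instance: weaken
            beliefs[i][ai] -= 1
            beliefs[j][aj] -= 1

    choices = [choose(a) for a in range(N_AGENTS)]
    convention = max(set(choices), key=choices.count)
    print(f"{choices.count(convention)}/{N_AGENTS} agents follow action {convention}")

Running the sketch typically shows most agents converging on one action, which is the kind of convention emergence the paper studies.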
7

Pana, Laura. "Artificial Ethics." International Journal of Technoethics 3, no. 3 (July 2012): 1–20. http://dx.doi.org/10.4018/jte.2012070101.

Abstract:
A new morality is generated in the present scientific and technical environment, and a new ethics is needed, an ethics which may found both individual and social morality, guiding a moral evolution of different cultural fields and which has chances to keep alive the moral culture itself. Pointed out are the scientific, technical, and philosophical premises of artificial ethics. Specifically, the status and role of artificial ethics are described and detailed by selecting ethical decision procedures, norms, principles and values that are suitable to be applied both by human and artificial moral agents. Moral intelligence as a kind of practical intelligence is studied and its role in human and artificial moral conduct is evaluated. A set of ethical values that may be shared and applied by both human and artificial moral agents is presented. Common features of human and artificial moral agents as well as specific properties of artificial moral agents are analyzed. Artificial ethics is presented and integrated in the multi-set of artificial cognition, discovery, activity, organization and evolution forms. Experiments and the results of this article are explored further in the article.
8

Payr, Sabine. "SOCIAL ENGAGEMENT WITH ROBOTS AND AGENTS: INTRODUCTION." Applied Artificial Intelligence 25, no. 6 (July 2011): 441–44. http://dx.doi.org/10.1080/08839514.2011.586616.

9

Coman, Alexandra, and David W. Aha. "AI Rebel Agents." AI Magazine 39, no. 3 (September 28, 2018): 16–26. http://dx.doi.org/10.1609/aimag.v39i3.2762.

Abstract:
The ability to say "no" in a variety of ways and contexts is an essential part of being socio-cognitively human. Through a variety of examples, we show that, despite ominous portrayals in science fiction, AI agents with human-inspired noncompliance abilities have many potential benefits. Rebel agents are intelligent agents that can oppose goals or plans assigned to them, or the general attitudes or behavior of other agents. They can serve purposes such as ethics, safety, and task execution correctness, and provide or support diverse points of view. We present a framework to help categorize and design rebel agents, discuss their social and ethical implications, and assess their potential benefits and the risks they may pose. In recognition of the fact that, in human psychology, non-compliance has profound socio-cognitive implications, we also explore socio-cognitive dimensions of AI rebellion: social awareness and counternarrative intelligence. This latter term refers to an agent's ability to produce and use alternative narratives that support, express, or justify rebellion, either sincerely or deceptively. We encourage further conversation about AI rebellion within the AI community and beyond, given the inherent interdisciplinarity of the topic.
10

Tennenholtz, Moshe. "On Social Constraints for Rational Agents." Computational Intelligence 15, no. 4 (November 1999): 367–83. http://dx.doi.org/10.1111/0824-7935.00098.

11

Acerbi, Alberto, Davide Marocco, and Paul Vogt. "Social learning in embodied agents." Connection Science 20, no. 2-3 (September 2008): 69–72. http://dx.doi.org/10.1080/09540090802091867.

12

Nir, Ronen, Alexander Shleyfman, and Erez Karpas. "Automated Synthesis of Social Laws in STRIPS." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 9941–48. http://dx.doi.org/10.1609/aaai.v34i06.6549.

Abstract:
Agents operating in a multi-agent environment must consider not just their actions, but also those of the other agents in the system. Artificial social systems are a well-known means for coordinating a set of agents, without requiring centralized planning or online negotiation between agents. Artificial social systems enact a social law which restricts the agents from performing some actions under some circumstances. A robust social law prevents the agents from interfering with each other, but does not prevent them from achieving their goals. Previous work has addressed how to check if a given social law, formulated in a variant of MA-STRIPS, is robust, via compilation to planning. However, the social law was manually specified. In this paper, we address the problem of automatically synthesizing a robust social law for a given multi-agent environment. We treat the problem of social law synthesis as a search through the space of possible social laws, relying on the robustness verification procedure as a goal test. We also show how to exploit additional information produced by the robustness verification procedure to guide the search.
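
As a rough illustration of treating social law synthesis as search with robustness verification as the goal test, the Python sketch below enumerates candidate restriction sets from weakest to strongest. The restriction labels and the is_robust stub are invented placeholders for the MA-STRIPS model and the compilation-based verification described in the paper.

    from itertools import combinations

    # Candidate restrictions a social law may impose: (agent, action) pairs that
    # become forbidden. These labels are purely illustrative.
    CANDIDATE_RESTRICTIONS = [("a1", "enter_corridor"), ("a2", "enter_corridor"),
                              ("a1", "take_tool"), ("a2", "take_tool")]

    def is_robust(social_law):
        """Placeholder goal test. The paper verifies robustness by compiling the
        restricted multi-agent problem to classical planning; here we stub it."""
        return ("a2", "enter_corridor") in social_law   # hypothetical criterion

    def synthesize_social_law():
        # Search the space of social laws from weakest to strongest, returning
        # the first (hence smallest) set of restrictions that passes the test.
        for size in range(len(CANDIDATE_RESTRICTIONS) + 1):
            for law in combinations(CANDIDATE_RESTRICTIONS, size):
                if is_robust(set(law)):
                    return set(law)
        return None

    print(synthesize_social_law())

The real system replaces is_robust with the planning-based verifier and uses its output to prune and guide the search rather than enumerating blindly.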
13

Wallkötter, Sebastian, Silvia Tulli, Ginevra Castellano, Ana Paiva, and Mohamed Chetouani. "Explainable Embodied Agents Through Social Cues." ACM Transactions on Human-Robot Interaction 10, no. 3 (July 2021): 1–24. http://dx.doi.org/10.1145/3457188.

Abstract:
The issue of how to make embodied agents explainable has experienced a surge of interest over the past 3 years, and there are many terms that refer to this concept, such as transparency and legibility. One reason for this high variance in terminology is the unique array of social cues that embodied agents can access in contrast to that accessed by non-embodied agents. Another reason is that different authors use these terms in different ways. Hence, we review the existing literature on explainability and organize it by (1) providing an overview of existing definitions, (2) showing how explainability is implemented and how it exploits different social cues, and (3) showing how the impact of explainability is measured. Additionally, we present a list of open questions and challenges that highlight areas that require further investigation by the community. This provides the interested reader with an overview of the current state of the art.
14

Karpas, Erez, Alexander Shleyfman, and Moshe Tennenholtz. "Automated Verification of Social Law Robustness in STRIPS." Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 163–71. http://dx.doi.org/10.1609/icaps.v27i1.13817.

Abstract:
Agents operating in a multi-agent environment must consider not just their own actions, but also those of the other agents in the system. Artificial social systems are a well-known means for coordinating a set of agents, without requiring centralized planning or online negotiation between agents. Artificial social systems enact a social law which restricts the agents from performing some actions under some circumstances. A good social law prevents the agents from interfering with each other, but does not prevent them from achieving their goals. However, designing good social laws, or even checking whether a proposed social law is good, are hard questions. In this paper, we take a first step towards automating these processes, by formulating criteria for good social laws in a multi-agent planning framework. We then describe an automated technique for verifying if a proposed social law meets these criteria, based on a compilation to classical planning.
15

Chan, Chi-Kong, Jianye Hao, and Ho-Fung Leung. "Reciprocal Social Strategy in Social Repeated Games and Emergence of Social Norms." International Journal on Artificial Intelligence Tools 26, no. 01 (February 2017): 1760007. http://dx.doi.org/10.1142/s0218213017600077.

Abstract:
In an artificial society where agents repeatedly interact with one another, effective coordination among agents is generally a challenge. This is especially true when the participating agents are self-interested, there is no central authority to coordinate them, and direct communication or negotiation is not possible. Recently, the problem was studied in a paper by Hao and Leung, where a new repeated game mechanism for modeling multi-agent interactions as well as a new reinforcement learning based agent learning method were proposed. In particular, the game mechanism differs from traditional repeated games in that the agents are anonymous, and the agents interact with randomly chosen opponents during each iteration. Their learning mechanism allows agents to coordinate without negotiations. The initial results were promising. However, extended simulation also reveals that the outcomes are not stable in the long run in some cases, as the high level of cooperation is eventually not sustainable. In this work, we revisit the problem and propose a new learning mechanism as follows. First, we propose an enhanced Q-learning-based framework that allows the agents to better capture both the individual and social utilities that they have learned through observations. Second, we propose a new concept of "social attitude" for determining the action of the agents throughout the game. Simulation results reveal that this approach can achieve higher social utility, including close-to-optimal results in some scenarios, and more importantly, the results are sustainable with social norms emerging.
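
The following toy Python sketch illustrates the general idea of blending individual and social utility through a "social attitude" weight inside a Q-learning update. The two-agent anti-coordination game, the payoff values, and the parameter names (ATTITUDE, ALPHA, EPS) are assumptions made for illustration; the paper's anonymous random-matching mechanism and learning rule are richer.

    import random

    ACTIONS = [0, 1]                  # e.g., 0 = yield, 1 = insist
    ATTITUDE = 0.6                    # weight on social utility (illustrative)
    ALPHA, EPS, ROUNDS = 0.1, 0.1, 5000

    def payoff(a, b):
        # A tiny anti-coordination game: conflict when both insist.
        if a == 1 and b == 1:
            return 0, 0
        if a == 0 and b == 0:
            return 2, 2
        return (1, 3) if a == 0 else (3, 1)

    q = [[0.0, 0.0], [0.0, 0.0]]      # one Q-table per agent (stateless game)

    def act(agent):
        if random.random() < EPS:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q[agent][a])

    for _ in range(ROUNDS):
        a0, a1 = act(0), act(1)
        r0, r1 = payoff(a0, a1)
        social = (r0 + r1) / 2.0      # observed social utility
        # Blend individual and social utility via the "social attitude" weight.
        q[0][a0] += ALPHA * ((1 - ATTITUDE) * r0 + ATTITUDE * social - q[0][a0])
        q[1][a1] += ALPHA * ((1 - ATTITUDE) * r1 + ATTITUDE * social - q[1][a1])

    print("agent 0 prefers", max(ACTIONS, key=lambda a: q[0][a]))
    print("agent 1 prefers", max(ACTIONS, key=lambda a: q[1][a]))

With a larger ATTITUDE the learned preferences tend toward the mutually beneficial outcome, which is the intuition behind letting agents weigh social utility alongside their own.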
16

Hofstede, Gert Jan. "GRASP agents: social first, intelligent later." AI & SOCIETY 34, no. 3 (December 26, 2017): 535–43. http://dx.doi.org/10.1007/s00146-017-0783-7.

17

Mao, Wenji, and Jonathan Gratch. "Modeling social inference in virtual agents." AI & SOCIETY 24, no. 1 (February 24, 2009): 5–11. http://dx.doi.org/10.1007/s00146-009-0195-4.

18

Hoey, Jesse, Tobias Schröder, Jonathan Morgan, Kimberly B. Rogers, Deepak Rishi, and Meiyappan Nagappan. "Artificial Intelligence and Social Simulation: Studying Group Dynamics on a Massive Scale." Small Group Research 49, no. 6 (October 3, 2018): 647–83. http://dx.doi.org/10.1177/1046496418802362.

Abstract:
Recent advances in artificial intelligence and computer science can be used by social scientists in their study of groups and teams. Here, we explain how developments in machine learning and simulations with artificially intelligent agents can help group and team scholars to overcome two major problems they face when studying group dynamics. First, because empirical research on groups relies on manual coding, it is hard to study groups in large numbers (the scaling problem). Second, conventional statistical methods in behavioral science often fail to capture the nonlinear interaction dynamics occurring in small groups (the dynamics problem). Machine learning helps to address the scaling problem, as massive computing power can be harnessed to multiply manual codings of group interactions. Computer simulations with artificially intelligent agents help to address the dynamics problem by implementing social psychological theory in data-generating algorithms that allow for sophisticated statements and tests of theory. We describe an ongoing research project aimed at computational analysis of virtual software development teams.
19

Cross, Emily S., Ruud Hortensius, and Agnieszka Wykowska. "From social brains to social robots: applying neurocognitive insights to human–robot interaction." Philosophical Transactions of the Royal Society B: Biological Sciences 374, no. 1771 (March 11, 2019): 20180024. http://dx.doi.org/10.1098/rstb.2018.0024.

Abstract:
Amidst the fourth industrial revolution, social robots are resolutely moving from fiction to reality. With sophisticated artificial agents becoming ever more ubiquitous in daily life, researchers across different fields are grappling with the questions concerning how humans perceive and interact with these agents and the extent to which the human brain incorporates intelligent machines into our social milieu. This theme issue surveys and discusses the latest findings, current challenges and future directions in neuroscience- and psychology-inspired human–robot interaction (HRI). Critical questions are explored from a transdisciplinary perspective centred around four core topics in HRI: technical solutions for HRI, development and learning for HRI, robots as a tool to study social cognition, and moral and ethical implications of HRI. Integrating findings from diverse but complementary research fields, including social and cognitive neurosciences, psychology, artificial intelligence and robotics, the contributions showcase ways in which research from disciplines spanning biological sciences, social sciences and technology deepen our understanding of the potential and limits of robotic agents in human social life. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
20

Johnson, Deborah G., and Merel Noorman. "Recommendations for Future Development of Artificial Agents [Commentary]." IEEE Technology and Society Magazine 33, no. 4 (2014): 22–28. http://dx.doi.org/10.1109/mts.2014.2363978.

21

Laban, Guy, Jean-Noël George, Val Morrison, and Emily S. Cross. "Tell me more! Assessing interactions with social robots from speech." Paladyn, Journal of Behavioral Robotics 12, no. 1 (December 12, 2020): 136–59. http://dx.doi.org/10.1515/pjbr-2021-0011.

Abstract:
As social robots are increasingly introduced into health interventions, one potential area where they might prove valuable is in supporting people's psychological health through conversation. Given the importance of self-disclosure for psychological health, this study assessed the viability of using social robots for eliciting rich disclosures that identify needs and emotional states in human interaction partners. Three within-subject experiments were conducted with participants interacting with another person, a humanoid social robot, and a disembodied conversational agent (voice assistant). We performed a number of objective evaluations of disclosures to these three agents via speech content and voice analyses and also probed participants' subjective evaluations of their disclosures to three agents. Our findings suggest that participants overall disclose more to humans than artificial agents, that agents' embodiment influences disclosure quantity and quality, and that people are generally aware of differences in their personal disclosures to three agents studied here. Together, the findings set the stage for further investigation into the psychological underpinnings of self-disclosures to artificial agents and their potential role in eliciting disclosures as part of mental and physical health interventions.
22

Anshelevich, Elliot, and John Postl. "Randomized Social Choice Functions Under Metric Preferences." Journal of Artificial Intelligence Research 58 (April 13, 2017): 797–827. http://dx.doi.org/10.1613/jair.5340.

Abstract:
We determine the quality of randomized social choice algorithms in a setting in which the agents have metric preferences: every agent has a cost for each alternative, and these costs form a metric. We assume that these costs are unknown to the algorithms (and possibly even to the agents themselves), which means we cannot simply select the optimal alternative, i.e. the alternative that minimizes the total agent cost (or median agent cost). However, we do assume that the agents know their ordinal preferences that are induced by the metric space. We examine randomized social choice functions that require only this ordinal information and select an alternative that is good in expectation with respect to the costs from the metric. To quantify how good a randomized social choice function is, we bound the distortion, which is the worst-case ratio between the expected cost of the alternative selected and the cost of the optimal alternative. We provide new distortion bounds for a variety of randomized algorithms, for both general metrics and for important special cases. Our results show a sizable improvement in distortion over deterministic algorithms.
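The distortion measure itself is easy to compute on a concrete instance. The Python sketch below evaluates the randomized "random dictatorship" rule (pick a uniformly random agent and output its favourite alternative) on a small, made-up metric cost matrix; the paper's results concern worst-case bounds over all instances and a range of rules, not a single example.

    # costs[i][x] = metric cost of alternative x for agent i (known here only to
    # compute the benchmark; the rules themselves see only ordinal rankings).
    costs = [
        [0.0, 1.0, 4.0],
        [1.0, 0.0, 3.0],
        [4.0, 3.0, 0.0],
        [4.0, 3.0, 0.0],
    ]
    n_agents, n_alts = len(costs), len(costs[0])

    def social_cost(x):
        return sum(costs[i][x] for i in range(n_agents))

    opt = min(social_cost(x) for x in range(n_alts))

    # Random dictatorship: each agent's top choice is selected with equal
    # probability, so only ordinal (favourite) information is needed.
    favourites = [min(range(n_alts), key=lambda x: costs[i][x]) for i in range(n_agents)]
    expected = sum(social_cost(f) for f in favourites) / n_agents

    print("distortion of random dictatorship on this instance:", expected / opt)

The printed ratio (expected cost of the randomized outcome over the optimal social cost) is exactly the quantity whose worst case the paper bounds.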
23

Ye, Zi Qing, and Xiao Yi Yu. "Repeated-Evaluation Genetic Algorithm for Simulating Social Security in the Artificial Society." Applied Mechanics and Materials 103 (September 2011): 513–17. http://dx.doi.org/10.4028/www.scientific.net/amm.103.513.

Abstract:
In order to find an optimal policy to govern the agents' society, artificial agents have been deployed in simulating social or economic phenomena. However, with an increase of the complexity of agents' internal behaviors as well as their social interactions, modeling social behaviors and tracking down optimal policies in mathematical form become intractable. In this paper, the repeated-evaluation genetic algorithm is used to find optimal solutions to deter criminals in order to reduce the social cost caused by the crimes in the artificial society. The society is characterized by multiple equilibria and noisy parameters. Sampling evaluation is used to evaluate every candidate. The results of experiments show that genetic algorithms can quickly find the optimal solutions.
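
A minimal Python sketch of the repeated-evaluation idea, assuming a toy one-dimensional policy and a synthetic noisy cost function in place of the crime-simulation model: each candidate's fitness is the average of several noisy evaluations before selection.

    import random

    POP, GENS, SAMPLES, MUT = 30, 40, 10, 0.1

    def noisy_social_cost(policy):
        # Stand-in for one run of the artificial society under `policy`:
        # the true optimum of this toy objective is policy = 0.7, seen through noise.
        return (policy - 0.7) ** 2 + random.gauss(0, 0.05)

    def fitness(policy):
        # Repeated (sampling) evaluation: average several noisy runs per candidate.
        return -sum(noisy_social_cost(policy) for _ in range(SAMPLES)) / SAMPLES

    population = [random.random() for _ in range(POP)]
    for _ in range(GENS):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:POP // 2]                      # truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0, MUT)   # crossover + mutation
            children.append(min(max(child, 0.0), 1.0))
        population = parents + children

    print("best policy found:", max(population, key=fitness))

Averaging over SAMPLES runs is what makes selection reliable despite the noisy, multi-equilibrium evaluation, which is the core point of the repeated-evaluation variant.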
24

Buechner, Jeff. "A Revision of the Buechner–Tavani Model of Digital Trust and a Philosophical Problem It Raises for Social Robotics." Information 11, no. 1 (January 16, 2020): 48. http://dx.doi.org/10.3390/info11010048.

Abstract:
In this paper the Buechner–Tavani model of digital trust is revised—new conditions for self-trust are incorporated into the model. These new conditions raise several philosophical problems concerning the idea of a substantial self for social robotics, which are closely examined. I conclude that reductionism about the self is incompatible with, while the idea of a substantial self is compatible with, trust relations between human agents, between human agents and artificial agents, and between artificial agents.
25

Nalepka, Patrick, Maurice Lamb, Rachel W. Kallen, Kevin Shockley, Anthony Chemero, Elliot Saltzman, and Michael J. Richardson. "Human social motor solutions for human–machine interaction in dynamical task contexts." Proceedings of the National Academy of Sciences 116, no. 4 (January 7, 2019): 1437–46. http://dx.doi.org/10.1073/pnas.1813164116.

Abstract:
Multiagent activity is commonplace in everyday life and can improve the behavioral efficiency of task performance and learning. Thus, augmenting social contexts with the use of interactive virtual and robotic agents is of great interest across health, sport, and industry domains. However, the effectiveness of human–machine interaction (HMI) to effectively train humans for future social encounters depends on the ability of artificial agents to respond to human coactors in a natural, human-like manner. One way to achieve effective HMI is by developing dynamical models utilizing dynamical motor primitives (DMPs) of human multiagent coordination that not only capture the behavioral dynamics of successful human performance but also, provide a tractable control architecture for computerized agents. Previous research has demonstrated how DMPs can successfully capture human-like dynamics of simple nonsocial, single-actor movements. However, it is unclear whether DMPs can be used to model more complex multiagent task scenarios. This study tested this human-centered approach to HMI using a complex dyadic shepherding task, in which pairs of coacting agents had to work together to corral and contain small herds of virtual sheep. Human–human and human–artificial agent dyads were tested across two different task contexts. The results revealed (i) that the performance of human–human dyads was equivalent to those composed of a human and the artificial agent and (ii) that, using a “Turing-like” methodology, most participants in the HMI condition were unaware that they were working alongside an artificial agent, further validating the isomorphism of human and artificial agent behavior.
26

Ramchurn, S. D., C. Mezzetti, A. Giovannucci, J. A. Rodriguez-Aguilar, R. K. Dash, and N. R. Jennings. "Trust-Based Mechanisms for Robust and Efficient Task Allocation in the Presence of Execution Uncertainty." Journal of Artificial Intelligence Research 35 (June 16, 2009): 119–59. http://dx.doi.org/10.1613/jair.2751.

Abstract:
Vickrey-Clarke-Groves (VCG) mechanisms are often used to allocate tasks to selfish and rational agents. VCG mechanisms are incentive compatible, direct mechanisms that are efficient (i.e., maximise social utility) and individually rational (i.e., agents prefer to join rather than opt out). However, an important assumption of these mechanisms is that the agents will "always" successfully complete their allocated tasks. Clearly, this assumption is unrealistic in many real-world applications, where agents can, and often do, fail in their endeavours. Moreover, whether an agent is deemed to have failed may be perceived differently by different agents. Such subjective perceptions about an agent's probability of succeeding at a given task are often captured and reasoned about using the notion of "trust". Given this background, in this paper we investigate the design of novel mechanisms that take into account the trust between agents when allocating tasks. Specifically, we develop a new class of mechanisms, called "trust-based mechanisms", that can take into account multiple subjective measures of the probability of an agent succeeding at a given task and produce allocations that maximise social utility, whilst ensuring that no agent obtains a negative utility. We then show that such mechanisms pose a challenging new combinatorial optimisation problem (that is NP-complete), devise a novel representation for solving the problem, and develop an effective integer programming solution (that can solve instances with about 2x10^5 possible allocations in 40 seconds).
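
To illustrate the objective such trust-based mechanisms optimize, the Python sketch below exhaustively searches task-to-agent allocations and scores them by trust-weighted expected social utility. The task values, costs, and trust estimates are made-up numbers, and the payment (VCG) side of the mechanism is omitted; the paper solves much larger instances with an integer-programming formulation.

    from itertools import product

    tasks = ["t1", "t2"]
    agents = ["a1", "a2"]
    value = {"t1": 10.0, "t2": 6.0}            # social value of a completed task
    cost = {("a1", "t1"): 3.0, ("a1", "t2"): 2.0,
            ("a2", "t1"): 4.0, ("a2", "t2"): 1.0}
    # Trust: estimated probability that an agent completes a given task.
    trust = {("a1", "t1"): 0.9, ("a1", "t2"): 0.5,
             ("a2", "t1"): 0.6, ("a2", "t2"): 0.95}

    def expected_social_utility(allocation):
        # allocation maps each task to the agent asked to perform it.
        return sum(trust[(allocation[t], t)] * value[t] - cost[(allocation[t], t)]
                   for t in tasks)

    # Exhaustive search over all allocations (fine for a toy instance).
    best = max((dict(zip(tasks, combo)) for combo in product(agents, repeat=len(tasks))),
               key=expected_social_utility)
    print(best, expected_social_utility(best))

The point of the example is only that success probabilities (trust) enter the objective multiplicatively, so an unreliable but cheap agent can lose a task to a more trusted one.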
27

Scheutz, Matthias. "The Case for Explicit Ethical Agents." AI Magazine 38, no. 4 (December 28, 2017): 57–64. http://dx.doi.org/10.1609/aimag.v38i4.2746.

Abstract:
Morality is a fundamentally human trait which permeates all levels of human society, from basic etiquette and normative expectations of social groups, to formalized legal principles upheld by societies. Hence, future interactive AI systems, in particular, cognitive systems on robots deployed in human settings, will have to meet human normative expectations, for otherwise these systems risk causing harm. While the interest in “machine ethics” has increased rapidly in recent years, there are only very few current efforts in the cognitive systems community to investigate moral and ethical reasoning. And there is currently no cognitive architecture that has even rudimentary moral or ethical competence, i.e., the ability to judge situations based on moral principles such as norms and values and make morally and ethically sound decisions. We hence argue for the urgent need to instill moral and ethical competence in all cognitive systems intended to be employed in human social contexts.
28

von Ungern-Sternberg, Antje. "Artificial Agents and General Principles of Law." German Yearbook of International Law 60, no. 1 (January 1, 2018): 239–66. http://dx.doi.org/10.3790/gyil.60.1.239.

Abstract:
Artificial agents – from autonomous cars and weapon systems to social bots, from profiling and tracking programmes to risk assessment software predicting criminal recidivism or voting behaviour – challenge general principles of national and international law. This article addresses three of these principles: responsibility, explainability, and autonomy. Responsibility requires that actors be held accountable for their actions, including damages and breaches of law. Responsibility for actions and decisions taken by artificial agents can be secured by resorting to strict or objective liability schemes, which do not require human fault and other human factors, or by relocating human fault, i.e. by holding programmers, supervisors, or standard setters accountable. ‘Explainability’ is a term used to characterise that even if artificial agents produce useful and reliable results, it must be explainable how these results are generated. Lawyers have to define those areas of law that require an explanation for artificial agents’ activities, ranging from human rights interferences to, possibly, any form of automated decision-making that affects an individual. Finally, the many uses of artificial agents also raise questions regarding several aspects of autonomy, including privacy and data protection, individuality, and freedom from manipulation. Yet, artificial agents do not only challenge existing principles of law, they can also strengthen responsibility, explainability, and autonomy.
29

Anastassacos, Nicolas, Stephen Hailes, and Mirco Musolesi. "Partner Selection for the Emergence of Cooperation in Multi-Agent Systems Using Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7047–54. http://dx.doi.org/10.1609/aaai.v34i05.6190.

Abstract:
Social dilemmas have been widely studied to explain how humans are able to cooperate in society. Considerable effort has been invested in designing artificial agents for social dilemmas that incorporate explicit agent motivations that are chosen to favor coordinated or cooperative responses. The prevalence of this general approach points towards the importance of achieving an understanding of both an agent's internal design and external environment dynamics that facilitate cooperative behavior. In this paper, we investigate how partner selection can promote cooperative behavior between agents who are trained to maximize a purely selfish objective function. Our experiments reveal that agents trained with this dynamic learn a strategy that retaliates against defectors while promoting cooperation with other agents resulting in a prosocial society.
30

Caballero, Alberto, Juan Botía, and Antonio Gómez-Skarmeta. "Using cognitive agents in social simulations." Engineering Applications of Artificial Intelligence 24, no. 7 (October 2011): 1098–109. http://dx.doi.org/10.1016/j.engappai.2011.06.006.

31

De Jong, S., S. Uyttendaele, and K. Tuyls. "Learning to Reach Agreement in a Continuous Ultimatum Game." Journal of Artificial Intelligence Research 33 (December 20, 2008): 551–74. http://dx.doi.org/10.1613/jair.2685.

Abstract:
It is well-known that acting in an individually rational manner, according to the principles of classical game theory, may lead to sub-optimal solutions in a class of problems named social dilemmas. In contrast, humans generally do not have much difficulty with social dilemmas, as they are able to balance personal benefit and group benefit. As agents in multi-agent systems are regularly confronted with social dilemmas, for instance in tasks such as resource allocation, these agents may benefit from the inclusion of mechanisms thought to facilitate human fairness. Although many of such mechanisms have already been implemented in a multi-agent systems context, their application is usually limited to rather abstract social dilemmas with a discrete set of available strategies (usually two). Given that many real-world examples of social dilemmas are actually continuous in nature, we extend this previous work to more general dilemmas, in which agents operate in a continuous strategy space. The social dilemma under study here is the well-known Ultimatum Game, in which an optimal solution is achieved if agents agree on a common strategy. We investigate whether a scale-free interaction network facilitates agents to reach agreement, especially in the presence of fixed-strategy agents that represent a desired (e.g. human) outcome. Moreover, we study the influence of rewiring in the interaction network. The agents are equipped with continuous-action learning automata and play a large number of random pairwise games in order to establish a common strategy. From our experiments, we may conclude that results obtained in discrete-strategy games can be generalized to continuous-strategy games to a certain extent: a scale-free interaction network structure allows agents to achieve agreement on a common strategy, and rewiring in the interaction network greatly enhances the agents' ability to reach agreement. However, it also becomes clear that some alternative mechanisms, such as reputation and volunteering, have many subtleties involved and do not have convincing beneficial effects in the continuous case.
32

Karpov, Valery. "On moral aspects of adaptive behavior of artificial agents." Artificial societies 16, no. 2 (2021): 0. http://dx.doi.org/10.18254/s207751800014740-3.

Abstract:
This article describes a model of a social agent, whose behavior can be stated in terms of basic moral mechanisms and norms. Morality is considered here as a flexible adaptive mechanism that allows agents to vary behavior depending on the environment conditions. The control system of the social agent is based on the "emotion-requirement" architecture. Together with the mechanisms of imitative behavior and the identification of other observable agents with the subjective "Me" concept, this architecture allows to interpret the agent's behavior in terms of empathy, sympathy, and friend-foe relationships. Experiments with this model are described, the main variable parameter of which was the tendency to sympathy. The objective of the experiments was to determine the dependence of the group "well-being" indicators on their altruism. The results obtained are quite consistent with the well-known sociological conclusions, which made it possible to say that the proposed behavioral models and architecture of agents are adequate to intuitive ideas about the role and essence of morality. Thus, the possibility of transition in this area from abstract humanitarian reasoning to constructive schemes and models of adaptive behavior of artificial agents was demonstrated.
33

Kim, Youngsang, and Hoonsik Yoo. "Cross-cultural comparison of preferences for the external appearance of artificial intelligence agents." Social Behavior and Personality: an international journal 49, no. 11 (November 3, 2021): 1–23. http://dx.doi.org/10.2224/sbp.10824.

Abstract:
We analyzed international differences in preferences related to the two dimensional (2D) versus three dimensional (3D) and male versus female external appearance of artificial intelligence (AI) agents for use in self-driving automobiles. We recruited 823 participants in five countries (South Korea, United States, China, Russia, and Brazil), who completed a survey. South Korean, Chinese, and North American respondents preferred a 2D appearance of the AI agent, which appears to result from the religious or philosophical views held in countries with a large or growing number of Christians, whereas Brazilian and Russian respondents preferred a 3D appearance. Brazilian respondents' high rate of functional illiteracy may be the reason for this finding; however, there were difficulties in identifying the reason for the Russian preference. Furthermore, men in all five countries preferred female AI agents, whereas South Korean, Chinese, and Russian women preferred female agents, but in the United States and Brazil women preferred male agents. These findings may offer valuable guidelines for design of personalized AI agent appearance, taking into account differences in preferences between countries and by gender.
34

Dar, Sanobar, and Ulysses Bernardet. "When Agents Become Partners: A Review of the Role the Implicit Plays in the Interaction with Artificial Social Agents." Multimodal Technologies and Interaction 4, no. 4 (November 22, 2020): 81. http://dx.doi.org/10.3390/mti4040081.

Abstract:
The way we interact with computers has significantly changed over recent decades. However, interaction with computers still falls behind human to human interaction in terms of seamlessness, effortlessness, and satisfaction. We argue that simultaneously using verbal, nonverbal, explicit, implicit, intentional, and unintentional communication channels addresses these three aspects of the interaction process. To better understand what has been done in the field of Human Computer Interaction (HCI) in terms of incorporating the types of channels mentioned above, we reviewed the literature on implicit nonverbal interaction with a specific emphasis on the interaction between humans on the one side, and robots and virtual humans on the other side. These Artificial Social Agents (ASA) are increasingly used as advanced tools for solving not only physical but also social tasks. In the literature review, we identify domains of interaction between humans and artificial social agents that have shown exponential growth over the years. The review highlights the value of incorporating implicit interaction capabilities in Human Agent Interaction (HAI) which we believe will lead to satisfying human and artificial social agent team performance. We conclude the article by presenting a case study of a system that harnesses subtle nonverbal, implicit interaction to increase the state of relaxation in users. This “Virtual Human Breathing Relaxation System” works on the principle of physiological synchronisation between a human and a virtual, computer-generated human. The active entrainment concept behind the relaxation system is generic and can be applied to other human agent interaction domains of implicit physiology-based interaction.
35

Petcu, A., B. Faltings, and D. C. Parkes. "M-DPOP: Faithful Distributed Implementation of Efficient Social Choice Problems." Journal of Artificial Intelligence Research 32 (July 31, 2008): 705–55. http://dx.doi.org/10.1613/jair.2500.

Abstract:
In the efficient social choice problem, the goal is to assign values, subject to side constraints, to a set of variables to maximize the total utility across a population of agents, where each agent has private information about its utility function. In this paper we model the social choice problem as a distributed constraint optimization problem (DCOP), in which each agent can communicate with other agents that share an interest in one or more variables. Whereas existing DCOP algorithms can be easily manipulated by an agent, either by misreporting private information or deviating from the algorithm, we introduce M-DPOP, the first DCOP algorithm that provides a faithful distributed implementation for efficient social choice. This provides a concrete example of how the methods of mechanism design can be unified with those of distributed optimization. Faithfulness ensures that no agent can benefit by unilaterally deviating from any aspect of the protocol, neither information-revelation, computation, nor communication, and whatever the private information of other agents. We allow for payments by agents to a central bank, which is the only central authority that we require. To achieve faithfulness, we carefully integrate the Vickrey-Clarke-Groves (VCG) mechanism with the DPOP algorithm, such that each agent is only asked to perform computation, report information, and send messages that are in its own best interest. Determining agent i's payment requires solving the social choice problem without agent i. Here, we present a method to reuse computation performed in solving the main problem in a way that is robust against manipulation by the excluded agent. Experimental results on structured problems show that as much as 87% of the computation required for solving the marginal problems can be avoided by re-use, providing very good scalability in the number of agents. On unstructured problems, we observe a sensitivity of M-DPOP to the density of the problem, and we show that reusability decreases from almost 100% for very sparse problems to around 20% for highly connected problems. We close with a discussion of the features of DCOP that enable faithful implementations in this problem, the challenge of reusing computation from the main problem to marginal problems in other algorithms such as ADOPT and OptAPO, and the prospect of methods to avoid the welfare loss that can occur because of the transfer of payments to the bank.
36

Conte, Rosaria. "The necessity of intelligent agents in social simulation." Advances in Complex Systems 03, no. 01n04 (January 2000): 19–38. http://dx.doi.org/10.1142/s0219525900000030.

Abstract:
The social simulation field is here argued to show a history of growing complexity, especially at the agent level. The simulation of the emergence of Macro-social phenomena has required heterogeneous and dynamic agents, at least in the sense of agents moving in a physical and social space. In turn, the simulation of learning and evolutionary algorithms allowed for a two-way account of the Micro-Macro link. In this presentation, a third step is envisaged in this process, and a 3-layer representation is outlined: the Micro, Mental, and Macro layers. This complex representation is shown to be necessary for understanding the functioning of social institutions. The 3-layer model is briefly discussed, and specific cognitive structures, which evolved to cooperate with emerging Macro-social systems and institutions, are analysed. Finally, social intelligence is argued to receive growing attention in several fields and applications of the sciences of the artificial, with which social simulation is interfaced or soon will be.
37

Ta, Vivian, Caroline Griffith, Carolynn Boatfield, Xinyu Wang, Maria Civitello, Haley Bader, Esther DeCero, and Alexia Loggarakis. "User Experiences of Social Support From Companion Chatbots in Everyday Contexts: Thematic Analysis." Journal of Medical Internet Research 22, no. 3 (March 6, 2020): e16235. http://dx.doi.org/10.2196/16235.

Abstract:
Background: Previous research suggests that artificial agents may be a promising source of social support for humans. However, the bulk of this research has been conducted in the context of social support interventions that specifically address stressful situations or health improvements. Little research has examined social support received from artificial agents in everyday contexts. Objective: Considering that social support manifests in not only crises but also everyday situations and that everyday social support forms the basis of support received during more stressful events, we aimed to investigate the types of everyday social support that can be received from artificial agents. Methods: In Study 1, we examined publicly available user reviews (N=1854) of Replika, a popular companion chatbot. In Study 2, a sample (n=66) of Replika users provided detailed open-ended responses regarding their experiences of using Replika. We conducted thematic analysis on both datasets to gain insight into the kind of everyday social support that users receive through interactions with Replika. Results: Replika provides some level of companionship that can help curtail loneliness, provide a “safe space” in which users can discuss any topic without the fear of judgment or retaliation, increase positive affect through uplifting and nurturing messages, and provide helpful information/advice when normal sources of informational support are not available. Conclusions: Artificial agents may be a promising source of everyday social support, particularly companionship, emotional, informational, and appraisal support, but not as tangible support. Future studies are needed to determine who might benefit from these types of everyday social support the most and why. These results could potentially be used to help address global health issues or other crises early on in everyday situations before they potentially manifest into larger issues.
38

Bryndin, Evgeny. "About Formation of International Ethical Digital Environment with Smart Artificial Intelligence." International Journal of Information Technology, Control and Automation 11, no. 1 (January 31, 2021): 1–9. http://dx.doi.org/10.5121/ijitca.2021.11101.

Abstract:
Intellectual agent ensembles allow you to create a digital environment by professional images with language, behavioral and active communications, when images and communications are implemented by agents with smart artificial intelligence. Through language, behavioral and active communications, intellectual agents implement collective activities. The ethical standard through intelligent agents allows you to regulate the safe use of ensembles made of robots and digital doubles with creative communication artificial intelligence in the social sphere, industry and other professional fields. The use of intelligent agents with smart artificial intelligence requires responsibility from the developer and owner for harming others. If harm to others occurred due to the mistakes of the developer, then he bears responsibility and costs. If the damage to others occurred due to the fault of the owner due to non-compliance with the terms of use, then he bears responsibility and costs. Ethical standard and legal regulation help intellectual agents with intelligent artificial intelligence become professional members of society. Ensembles of intelligent agents with smart artificial intelligence will be able to safely work with society as professional images with skills, knowledge and competencies, implemented in the form of retrained digital twins and cognitive robots that interact through language, behavioral and active ethical communications. Cognitive robots and digital doubles through self-developing ensembles of intelligent agents with synergistic interaction and intelligent artificial intelligence can master various high-tech professions and competencies. Their use in the industry increases labor productivity and economic efficiency of production. Their application in the social sphere improves the quality of life of a person and society. Their widespread application requires compliance with an ethical standard so that their use does not cause harm. The introduction and use of an ethical standard for the use of cognitive robots and digital doubles with smart artificial intelligence increases the safety of their use. Ethical relationships between individuals and intellectual agents will also be governed by an ethical standard.
39

Georges, Christophre, and John C. Wallace. "LEARNING DYNAMICS AND NONLINEAR MISSPECIFICATION IN AN ARTIFICIAL FINANCIAL MARKET." Macroeconomic Dynamics 13, no. 5 (October 30, 2009): 625–55. http://dx.doi.org/10.1017/s1365100509080262.

Abstract:
In this paper, we explore the consequence of learning to forecast in a very simple environment. Agents have bounded memory and incorrectly believe that there is nonlinear structure underlying the aggregate time series dynamics. Under social learning with finite memory, agents may be unable to learn the true structure of the economy and rather may chase spurious trends, destabilizing the actual aggregate dynamics. We explore the degree to which agents' forecasts are drawn toward a minimal state variable learning equilibrium as well as a weaker long-run consistency condition.
40

Harmon, Sarah. "An Expressive Dilemma Generation Model for Players and Artificial Agents." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 12, no. 1 (June 25, 2021): 176–82. http://dx.doi.org/10.1609/aiide.v12i1.12879.

Abstract:
Dilemma scenarios and knowledge of social values support engaging and believable narrative gameplay. Previous work in dilemma generation has explored scenarios involving utility payoffs between two to three people. However, these models primarily require character relationships as preconditions, and do not extend to more complex choices that relate only to causes and values. This paper builds upon past work to create an expressive model of dilemma categorization for player and non-player characters.
41

Fox, John. "Artificial cognitive systems: Where does argumentation fit in?" Behavioral and Brain Sciences 34, no. 2 (March 29, 2011): 78–79. http://dx.doi.org/10.1017/s0140525x10002839.

Abstract:
Mercier and Sperber (M&S) suggest that human reasoning is reflective and has evolved to support social interaction. Cognitive agents benefit from being able to reflect on their beliefs whether they are acting alone or socially. A formal framework for argumentation that has emerged from research on artificial cognitive systems that parallels M&S's proposals may shed light on mental processes that underpin social interactions.
42

de Melo, Celso M., Stacy Marsella, and Jonathan Gratch. "Social decisions and fairness change when people’s interests are represented by autonomous agents." Autonomous Agents and Multi-Agent Systems 32, no. 1 (July 19, 2017): 163–87. http://dx.doi.org/10.1007/s10458-017-9376-6.

43

Demiris, G. "USE OF ARTIFICIAL INTELLIGENCE FOR SOCIAL ENGAGEMENT: THE CASE OF EMBODIED CONVERSATIONAL AGENTS." Innovation in Aging 2, suppl_1 (November 1, 2018): 53. http://dx.doi.org/10.1093/geroni/igy023.196.

44

Stuart, Michael T., and Markus Kneer. "Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–27. http://dx.doi.org/10.1145/3479507.

Abstract:
While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for robot blame, namely the folk's willingness to ascribe inculpating mental states or "mens rea" to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question - also explored in the experiment - whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot "knew" rather than really knew). However, (iii), according to our data people were unwilling to downgrade to mens rea in a merely metaphorical sense when given the chance. Finally, (iv), we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: People were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.
45

Grimaldo, Francisco, Miguel Lozano, Fernando Barber, and Guillermo Vigueras. "Simulating socially intelligent agents in semantic virtual environments." Knowledge Engineering Review 23, no. 4 (December 2008): 369–88. http://dx.doi.org/10.1017/s026988890800009x.

Abstract:
The simulation of synthetic humans inhabiting virtual environments is a current research topic with a great number of behavioral problems to be tackled. Semantical virtual environments (SVEs) have recently been proposed not only to ease world modeling but also to enhance the agent–object and agent–agent interaction. Thus, we propose the use of ontologies to define the world's knowledge base and to introduce semantic levels of detail that help the sensorization of complex scenes—containing lots of interactive objects. The object taxonomy also helps to create general and reusable operativity for autonomous characters—for example, liquids can be poured from containers such as bottles. On the other hand, we use the ontology to define social relations among agents within an artificial society. These relations must be taken into account in order to display socially acceptable decisions. Therefore, we have implemented a market-based social model that reaches coordination and sociability by means of task exchanges. This paper presents a multi-agent framework oriented to simulate socially intelligent characters in SVEs. The framework has been successfully tested in three-dimensional (3D) dynamic scenarios while simulating a virtual university bar, where groups of waiters and customers interact with both the objects in the scene and the other virtual agents, finally displaying complex social behaviors.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
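The market-based coordination summarized in the Grimaldo et al. abstract above can be illustrated with a toy auction in which virtual waiters bid for serving tasks. This is a minimal sketch under assumed names and a made-up cost function (distance plus current workload); it is not the authors' actual framework, which relies on ontologies and semantic virtual environments.

# Illustrative sketch only: a first-price auction for task exchange among waiter agents.
import random

class Waiter:
    def __init__(self, name, position):
        self.name = name
        self.position = position      # 1-D position along the bar, for simplicity
        self.tasks = []               # tasks currently assigned to this agent

    def cost(self, task):
        """Bid for a task: distance to the table plus current workload (assumed cost model)."""
        return abs(self.position - task["table"]) + len(self.tasks)

def auction(waiters, task):
    """Assign the task to the waiter with the lowest bid."""
    winner = min(waiters, key=lambda w: w.cost(task))
    winner.tasks.append(task)
    return winner

if __name__ == "__main__":
    waiters = [Waiter("w1", 0), Waiter("w2", 5), Waiter("w3", 10)]
    for _ in range(9):
        auction(waiters, {"table": random.randint(0, 10)})
    for w in waiters:
        print(w.name, "serves", len(w.tasks), "tables")

Because each new task goes to the agent for whom it is currently cheapest, workload tends to balance across the group, which is the basic intuition behind coordination by task exchange.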
46

Tomiltseva, D. A., and A. S. Zheleznov. "Inevitable Third: Ethical and Political Aspects of Interactions with Artificial Agents." Journal of Political Theory, Political Philosophy and Sociology of Politics Politeia 4, no. 99 (December 12, 2020): 90–107. http://dx.doi.org/10.30570/2078-5089-2020-99-4-90-107.

Full text of the source
Abstract:
Artificial agents, i.e., man-made technical devices and software capable of taking meaningful actions and making independent decisions, permeate almost all spheres of human life today. As new political actants, they transform the nature of human interactions, which gives rise to the problem of the ethical and political regulation of their activities. The appearance of such agents therefore triggers a global philosophical reflection that goes beyond technical or practical issues and makes researchers return to the fundamental problems of ethics. The article identifies three main aspects that call for a philosophical understanding of the existence of artificial agents. First, artificial agents reveal the true contradiction between declared moral and political values and real social practices. Learning from data on assessments and conclusions that have already taken place, artificial agents make decisions that correspond to prevailing behavioral patterns rather than to the moral principles of their creators or consumers. Second, the specificity of the creation and functioning of artificial agents brings the problem of responsibility for their actions to the forefront, which, in turn, requires a new approach to the political regulation of the activities not only of developers, customers and users, but also of the agents themselves. Third, the current forms of activity of artificial agents shift the traditional boundaries of the human and raise the question of redefining the humanitarian. Having carefully analyzed the selected aspects, the authors reveal their logic and outline the field for further discussion.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
47

Crokidakis, Nuno, and Jorge S. Sá Martins. "Can honesty survive in a corrupt parliament?" International Journal of Modern Physics C 29, no. 10 (October 2018): 1850094. http://dx.doi.org/10.1142/s0129183118500948.

Full text of the source
Abstract:
In this work, we study a simple model of social contagion that aims to represent the dynamics of social influence among politicians in an artificial corrupt parliament. We consider an agent-based model with three distinct types of artificial individuals (deputies), namely honest deputies, corrupt deputies and inflexible corrupt deputies. These last agents are committed to corruption and never change their state. The other two classes of agents are susceptible deputies that can change state under the social pressure of other agents. We analyze the dynamic and stationary properties of the model as functions of the frozen density of inflexible corrupt individuals and two other parameters related to the strength of the social influences. We show that the honest individuals can disappear in the steady state, and that this disappearance is related to an active-absorbing nonequilibrium phase transition that appears to be in the directed percolation universality class. We also determine the conditions leading to the survival of honesty in the long-time evolution of the system, and the regions of parameters for which the honest deputies can be either the majority or the minority in the artificial parliament.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
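Readers who want to experiment with the kind of three-state contagion dynamics described in the Crokidakis and Sá Martins abstract above can start from the following minimal, self-contained sketch. The transition probabilities p and q, the initial composition and the fully mixed (well-stirred) interaction scheme are illustrative assumptions, not the exact rates or topology used in the paper.

# Illustrative sketch only: honest (H), corrupt (C) and inflexible corrupt (I) deputies.
import random

def simulate(n=1000, frac_inflexible=0.05, p=0.3, q=0.2, steps=200_000, seed=1):
    """p: prob. an honest deputy turns corrupt after meeting a corrupt one (assumed);
    q: prob. a susceptible corrupt deputy turns honest after meeting an honest one (assumed)."""
    rng = random.Random(seed)
    n_inflex = int(frac_inflexible * n)
    n_rest = n - n_inflex
    # Inflexible corrupt deputies are frozen; the rest start half corrupt, half honest.
    agents = ['I'] * n_inflex + ['C'] * (n_rest // 2) + ['H'] * (n_rest - n_rest // 2)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        a, b = agents[i], agents[j]
        if a == 'H' and b in ('C', 'I') and rng.random() < p:
            agents[i] = 'C'   # honest deputy corrupted by social pressure
        elif a == 'C' and b == 'H' and rng.random() < q:
            agents[i] = 'H'   # susceptible corrupt deputy returns to honesty
    return agents.count('H') / n

if __name__ == "__main__":
    print("final fraction of honest deputies:", simulate())

Sweeping frac_inflexible, p and q in such a sketch reproduces the qualitative picture of the abstract: beyond some parameter region the honest deputies die out, i.e., the system falls into an absorbing all-corrupt state.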
48

Mavrodiev, Pavlin, and Frank Schweitzer. "Enhanced or distorted wisdom of crowds? An agent-based model of opinion formation under social influence." Swarm Intelligence 15, no. 1-2 (May 7, 2021): 31–46. http://dx.doi.org/10.1007/s11721-021-00189-3.

Full text of the source
Abstract:
We propose an agent-based model of collective opinion formation to study the wisdom of crowds under social influence. The opinion of an agent is a continuous positive value, denoting its subjective answer to a factual question. The wisdom of crowds states that the average of all opinions is close to the truth, i.e., the correct answer. But if agents have the chance to adjust their opinion in response to the opinions of others, this effect can be destroyed. Our model investigates this scenario by evaluating two competing effects: (1) agents tend to keep their own opinion (individual conviction), (2) they tend to adjust their opinion if they have information about the opinions of others (social influence). For the latter, two different regimes (full information vs. aggregated information) are compared. Our simulations show that social influence only in rare cases enhances the wisdom of crowds. Most often, we find that agents converge to a collective opinion that is even farther away from the true answer. Therefore, under social influence the wisdom of crowds can be systematically wrong.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
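A toy version of the opinion-formation dynamics summarized in the Mavrodiev and Schweitzer abstract above can be written in a few lines. The update rule, the heterogeneous-conviction assumption and all parameter values below are illustrative; the authors' own model and its two information regimes are more detailed.

# Illustrative sketch only: social influence can move the crowd average away from the truth.
import random
import statistics

def wisdom_of_crowds(n=100, truth=100.0, rounds=20, seed=42):
    rng = random.Random(seed)
    # Initial opinions: positive, noisy estimates of the true answer.
    opinions = [abs(rng.gauss(truth, truth * 0.4)) for _ in range(n)]
    # Assumed heterogeneity: agents holding higher estimates are more stubborn,
    # so repeated averaging drags the collective opinion upward.
    conviction = [min(0.95, 0.3 + 0.5 * x / (2 * truth)) for x in opinions]
    initial_error = abs(statistics.mean(opinions) - truth)
    for _ in range(rounds):
        mean_opinion = statistics.mean(opinions)   # "full information" regime (assumed)
        opinions = [c * x + (1 - c) * mean_opinion
                    for c, x in zip(conviction, opinions)]
    final_error = abs(statistics.mean(opinions) - truth)
    return initial_error, final_error

if __name__ == "__main__":
    before, after = wisdom_of_crowds()
    print(f"crowd-average error: before social influence {before:.2f}, after {after:.2f}")

Under these assumptions the crowd's average drifts toward the more stubborn agents and its error grows, which mirrors the abstract's conclusion that social influence can make the wisdom of crowds systematically wrong.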
49

van Wynsberghe, Aimee, and Scott Robbins. "Critiquing the Reasons for Making Artificial Moral Agents." Science and Engineering Ethics 25, no. 3 (February 19, 2018): 719–35. http://dx.doi.org/10.1007/s11948-018-0030-8.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
50

Tigard, Daniel W. "Artificial Agents in Natural Moral Communities: A Brief Clarification." Cambridge Quarterly of Healthcare Ethics 30, no. 3 (June 10, 2021): 455–58. http://dx.doi.org/10.1017/s0963180120001000.

Full text of the source
Abstract:
What exactly is it that makes one morally responsible? Is it a set of facts which can be objectively discerned, or is it something more subjective, a reaction to the agent or context-sensitive interaction? This debate gets raised anew when we encounter newfound examples of potentially marginal agency. Accordingly, the emergence of artificial intelligence (AI) and the idea of “novel beings” represent exciting opportunities to revisit inquiries into the nature of moral responsibility. This paper expands upon my article “Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible” and clarifies my reliance upon two competing views of responsibility. Although AI and novel beings are not close enough to us in kind to be considered candidates for the same sorts of responsibility we ascribe to our fellow human beings, contemporary theories show us the priority and adaptability of our moral attitudes and practices. This allows us to take seriously the social ontology of relationships that tie us together. In other words, moral responsibility is to be found primarily in the natural moral community, even if we admit that those communities now contain artificial agents.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
