Journal articles on the topic 'Computational Social Choice Theory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Computational Social Choice Theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Nipkow, Tobias. "Social Choice Theory in HOL." Journal of Automated Reasoning 43, no. 3 (August 1, 2009): 289–304. http://dx.doi.org/10.1007/s10817-009-9147-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Endriss, Ulle. "The 1st International Workshop on Computational Social Choice." Knowledge Engineering Review 23, no. 2 (June 2008): 213–15. http://dx.doi.org/10.1017/s0269888908001343.

Abstract:
Computational social choice is a new discipline currently emerging at the interface of social choice theory and computer science. It is concerned with the application of computational techniques to the study of social choice mechanisms, and with the integration of social choice paradigms into computing. The first international workshop specifically dedicated to this topic took place in December 2006 in Amsterdam, attracting a mix of computer scientists, people working in artificial intelligence and multiagent systems, economists, game and social choice theorists, logicians, mathematicians, philosophers, and psychologists as participants.
3

Chatterjee, Siddharth, and Arunava Sen. "Automated Reasoning in Social Choice Theory: Some Remarks." Mathematics in Computer Science 8, no. 1 (March 2014): 5–10. http://dx.doi.org/10.1007/s11786-014-0177-x.

4

Serrano, Roberto. "The Theory of Implementation of Social Choice Rules." SIAM Review 46, no. 3 (January 2004): 377–414. http://dx.doi.org/10.1137/s0036144503435945.

5

Mikhailov, I. F. "Computational approach to social knowledge." Philosophy of Science and Technology 26, no. 2 (2021): 23–37. http://dx.doi.org/10.21146/2413-9084-2021-26-1-23-37.

Abstract:
Social and cognitive sciences have always faced a choice: either to meet the methodological standards set by the successful natural sciences or to rely on their own. As regards the conversion of knowledge into technology, the second way has not brought great success. The first way implies two alternative opportunities: reductionism or the discovery of proprietary general laws. Neither of these chances has been realized with any satisfactory results either. Methodological analysis shows that, to achieve significant progress in the social sciences, what is missing is not new facts or definitions but new conceptual schemes. The reason, the author supposes, is the nomothetic approach being applied to systems with a high degree of complexity and hierarchy. If we assume that social structures and processes are built upon cognitive psychological structures and processes, the former inherit the distributed computational architecture of the latter. The paper analyzes various conceptions of computation in order to determine their relevance to the task of building a computational social science. The author offers a “generic” definition of computation as a process carried out by a computational system if the latter is understood as a mechanism of some representation. According to the author, the computationalization of social science implies the “naturalization” of computations. This requires a theory that would explain the mechanism of the growing complexity and hierarchy of natural (in particular, social) computational systems. As a method for constructing such a science, a kind of reverse engineering is proposed: recreating the computational algorithmic scheme of the social tissue by determining and recombining “social primitives” – elementary operations of social interaction.
6

Manzo, Gianluca. "Is rational choice theory still a rational choice of theory? A response to Opp." Social Science Information 52, no. 3 (August 5, 2013): 361–82. http://dx.doi.org/10.1177/0539018413488477.

Abstract:
Authoritative rational choice theorists continue to argue that wide variants of rational choice theory should be regarded as the best starting-point to formulate theoretical hypotheses on the micro foundations of complex macro-level social dynamics. Building on recent writings on neo-classical rational choice theory, on behavioral economics and on cognitive psychology, the present article challenges this view and argues that: (1) neo-classical rational choice theory is an astonishingly malleable and powerful analytical device whose descriptive accuracy is nevertheless limited to a very specific class of choice settings; (2) the ‘wide’ sociological rational choice theory does not add anything original to the neo-classical framework on a conceptual level and it is also methodologically weaker; (3) at least four alternative action-oriented approaches that reject portrayal of actors as computational devices operating over probability distributions can be used to design sociological explanations that are descriptively accurate at the micro level.
7

Chevaleyre, Yann, Ulle Endriss, Jérôme Lang, and Nicolas Maudet. "Preference Handling in Combinatorial Domains: From AI to Social Choice." AI Magazine 29, no. 4 (December 28, 2008): 37. http://dx.doi.org/10.1609/aimag.v29i4.2201.

Abstract:
In both individual and collective decision making, the space of alternatives from which the agent (or the group of agents) has to choose often has a combinatorial (or multi-attribute) structure. We give an introduction to preference handling in combinatorial domains in the context of collective decision making, and show that the considerable body of work on preference representation and elicitation that AI researchers have been working on for several years is particularly relevant. After giving an overview of languages for compact representation of preferences, we discuss problems in voting in combinatorial domains, and then focus on multiagent resource allocation and fair division. These issues belong to a larger field, known as computational social choice, that brings together ideas from AI and social choice theory, to investigate mechanisms for collective decision making from a computational point of view. We conclude by briefly describing some of the other research topics studied in computational social choice.
8

Tanaka, Yasuhito. "A topological proof of Eliaz’s unified theorem of social choice theory." Applied Mathematics and Computation 176, no. 1 (May 2006): 83–90. http://dx.doi.org/10.1016/j.amc.2005.09.055.

9

Neath, Andrew A., Joseph E. Cavanaugh, and Adam G. Weyhaupt. "Model evaluation, discrepancy function estimation, and social choice theory." Computational Statistics 30, no. 1 (September 27, 2014): 231–49. http://dx.doi.org/10.1007/s00180-014-0532-z.

10

Nurmi, Hannu, and Janusz Kacprzyk. "Political Representation: Perspective from Fuzzy Systems Theory." New Mathematics and Natural Computation 03, no. 02 (July 2007): 153–63. http://dx.doi.org/10.1142/s1793005707000690.

Abstract:
The theory of fuzzy sets has been applied to social choice primarily in the context where one is given a set of individual fuzzy preference relations and the aim is to find a non-fuzzy choice set of winners or best alternatives. In this article, we discuss the problem of composing multi-member deliberative bodies starting again from a set of individual fuzzy preference relations. We outline methods of aggregating these relations into a measure of how well each candidate represents each voter in terms of the latter's preferences. Our main goal is to show how the considerations discussed in the context of individual non-fuzzy complete and transitive preference relations can be extended into the domain of fuzzy preference relations.
11

Darmann, Andreas. "Popular Spanning Trees." International Journal of Foundations of Computer Science 24, no. 05 (August 2013): 655–77. http://dx.doi.org/10.1142/s0129054113500226.

Abstract:
Combinatorial Optimization is combined with Social Choice Theory when the goal is to decide on the quality of a spanning tree of an undirected graph. Given individual preferences over the edges of the graph, spanning trees are compared by means of a Condorcet criterion. The comparisons are based on scoring functions used in classic voting rules such as approval voting and Borda voting. In this work, we investigate the computational complexity involved in deciding on the quality of a spanning tree with respect to the different voting rules adapted. In particular, we draw the sharp separation line between polynomially solvable and computationally intractable instances.
12

Tanaka, Yasuhito. "The Gibbard–Satterthwaite theorem of social choice theory in an infinite society and LPO (limited principle of omniscience)." Applied Mathematics and Computation 193, no. 2 (November 2007): 475–81. http://dx.doi.org/10.1016/j.amc.2007.04.001.

13

Mikhailov, Igor. "Natural Computations and Artificial Intelligence." Chelovek 33, no. 2 (2022): 65. http://dx.doi.org/10.31857/s023620070019511-9.

Abstract:
The research program focused on the analysis of computational approaches to natural and artificial intelligence is one of four accepted for implementation at the Center for the Philosophy of Consciousness and Cognitive Sciences of the Institute of Philosophy, Russian Academy of Sciences. Presumably, it should become a direction of interdisciplinary research at the crossroads of philosophy, cognitive psychology, cognitive and social neuroscience, and artificial intelligence. The working hypothesis proposed for discussion with the relevant specialists is as follows: if an acceptable computational theory of mind appears, we will be able to restrict our research to a simple scientific ontology describing only parts of a physical implementation of computational algorithms, adding a relevant version of computational mathematics thereto. Another hypothesis proposed is that there is an essential ontological intersection between the mechanisms underlying human cognitive abilities and their social organization, both of which serve as an implementation medium for complex distributed cognitive computations. In particular, those associated with social organization are responsible for logical and verbal (“rational”) cognitive abilities. As a result of some previous research, an ontology of nested distributed computational systems was generally formulated, which, as expected, can demonstrate significant heuristic potential if supplemented with an adequate mathematical apparatus. Since only individuals with certain cognitive abilities can be social agents, a philosophical problem arises: are cognitive abilities necessary or sufficient to involve their carriers in stable social interactions? In the first case, we have a weak thesis about the cognitive determination of sociality, in the second — the strong one. The choice between these positions is also a subject of future research.
14

Luqman, Anam, Muhammad Akram, and Florentin Smarandache. "Complex Neutrosophic Hypergraphs: New Social Network Models." Algorithms 12, no. 11 (November 6, 2019): 234. http://dx.doi.org/10.3390/a12110234.

Abstract:
A complex neutrosophic set is a useful model to handle indeterminate situations with a periodic nature. This is characterized by truth, indeterminacy, and falsity degrees which are the combination of real-valued amplitude terms and complex-valued phase terms. Hypergraphs are objects that enable us to dig out invisible connections between the underlying structures of complex systems such as those leading to sustainable development. In this paper, we apply the most fruitful concept of complex neutrosophic sets to the theory of hypergraphs. We define complex neutrosophic hypergraphs and discuss their certain properties including lower truncation, upper truncation, and transition levels. Furthermore, we define T-related complex neutrosophic hypergraphs and properties of minimal transversals of complex neutrosophic hypergraphs. Finally, we represent the modeling of certain social networks with intersecting communities through the score functions and choice values of complex neutrosophic hypergraphs. We also give a brief comparison of our proposed model with other existing models.
15

Hassan, Samer, Mauricio Salgado, and Juan Pavón. "Friendship Dynamics: Modelling Social Relationships through a Fuzzy Agent-Based Simulation." Discrete Dynamics in Nature and Society 2011 (2011): 1–19. http://dx.doi.org/10.1155/2011/765640.

Abstract:
Social relationships such as friendship and partner choice are ruled by the proximity principle, which states that the more similar two individuals are, the more likely they will become friends. However, proximity, similarity, and friendship are concepts with blurred edges and imprecise grades of membership. This study shows how to simulate these friendship dynamics in an agent-based model that applies fuzzy sets theory to implement agent attributes, rules, and social relationships, explaining the process in detail. Although in principle it may be thought that the use of fuzzy sets theory makes agent-based modelling more elaborate, in practice it saves the modeller from taking some arbitrary decisions on how to use crisp values for representing properties that are inherently fuzzy. The consequences of applying fuzzy sets and operations to define a fuzzy friendship relationship are compared with a simpler implementation, with crisp values. By integrating agent computational models and fuzzy set theory, this paper provides useful insights for scholars and practitioners to tackle the uncertainty inherent to social relationships in a systematic way.
16

Attri, Vikas. "Comparative study of Existing Models for Online Social Network." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 11, 2021): 483–90. http://dx.doi.org/10.17762/turcomat.v12i2.856.

Abstract:
Today, Online Social Networks have become the first choice for businesses to broadcast their campaigns for branding, publicity, strategy, advertising, marketing, social influence and many other purposes. A Social Network is a platform for communicating with social actors, and Social Media is used by companies for broadcasting information. Businesses use Online Social Networks for a number of purposes, but the primary concern is building new social connections that help target the largest audiences for successful campaigns. In OSN sites, social objects are represented by nodes, and the term edge is used for a connection between nodes under graph theory. Social Network sites have grown explosively compared to traditional sites because of the impact of influence models over traditional models. Popular OSN websites such as MySpace, Facebook, Flickr, YouTube, Google Video, Orkut, LinkedIn, LiveJournal and BlogSpot have a great impact on customers when targeting the sales marketing funnel for businesses. Adjacent users, sometimes called engaged users, tend to have a higher trust level than random pairs of users on social media sites. Much research already exists on calculating the trust factor using influence modeling, so influence models play a vital role in predicting customer behavior, which helps to fulfill business goals. The key contribution of this work is a study of online social networking models.
17

Maly, Jan, and Johannes P. Wallner. "Ranking Sets of Defeasible Elements in Preferential Approaches to Structured Argumentation: Postulates, Relations, and Characterizations." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 6435–43. http://dx.doi.org/10.1609/aaai.v35i7.16798.

Abstract:
Preferences play a key role in computational argumentation in AI, as they reflect various notions of argument strength vital for the representation of argumentation. Within central formal approaches to structured argumentation, preferential approaches are applied by lifting preferences over defeasible elements to rankings over sets of defeasible elements, in order to be able to compare the relative strength of two arguments and their respective defeasible constituents. To overcome the current gap in the scientific landscape, we give in this paper a general study of the critical component of lifting operators in structured argumentation. We survey existing lifting operators scattered in the literature of argumentation theory, social choice, and utility theory, and show fundamental relations and properties of these operators. Extending existing works from argumentation and social choice, we propose a list of postulates for lifting operations, and give a complete picture of (non-)satisfaction for the considered operators. Based on our postulates, we present impossibility results, stating for which sets of postulates there is no hope of satisfaction, and for two main lifting operators presented in structured argumentation, Elitist and Democratic, we give a full characterization in terms of our postulates.
18

Confalonieri, Roberto, and Oliver Kutz. "Blending under deconstruction." Annals of Mathematics and Artificial Intelligence 88, no. 5-6 (July 25, 2019): 479–516. http://dx.doi.org/10.1007/s10472-019-09654-6.

Abstract:
The cognitive-linguistic theory of conceptual blending was introduced by Fauconnier and Turner in the late 90s to provide a descriptive model and foundational approach for the (almost uniquely) human ability to invent new concepts. Whilst blending is often described as ‘fluid’ and ‘effortless’ when ascribed to humans, it becomes a highly complex, multi-paradigm problem in Artificial Intelligence. This paper aims at presenting a coherent computational narrative, focusing on how one may derive a formal reconstruction of conceptual blending from a deconstruction of the human ability of concept invention into some of its core components. It thus focuses on presenting the key facets that a computational framework for concept invention should possess. A central theme in our narrative is the notion of refinement, understood as ways of specialising or generalising concepts, an idea that can be seen as providing conceptual uniformity to a number of theoretical constructs as well as implementation efforts underlying computational versions of conceptual blending. Particular elements underlying our reconstruction effort include ontologies and ontology-based reasoning, image schema theory, spatio-temporal reasoning, abstract specification, social choice theory, and axiom pinpointing. We overview and analyse adopted solutions and then focus on open perspectives that address two core problems in computational approaches to conceptual blending: searching for the shared semantic structure between concepts—the so-called generic space in conceptual blending—and concept evaluation, i.e., to determine the value of newly found blends.
19

Rico, Noelia, Camino R. Vela, Raúl Pérez-Fernández, and Irene Díaz. "Reducing the Computational Time for the Kemeny Method by Exploiting Condorcet Properties." Mathematics 9, no. 12 (June 15, 2021): 1380. http://dx.doi.org/10.3390/math9121380.

Abstract:
Preference aggregation and in particular ranking aggregation are mainly studied by the field of social choice theory but extensively applied in a variety of contexts. Among the most prominent methods for ranking aggregation, the Kemeny method has been proved to be the only one that satisfies some desirable properties such as neutrality, consistency and the Condorcet condition at the same time. Unfortunately, the problem of finding a Kemeny ranking is NP-hard, which prevents practitioners from using it in real-life problems. The state of the art of exact algorithms for the computation of the Kemeny ranking experienced a major boost last year with the presentation of an algorithm that provides a search-time guarantee for up to 13 alternatives. In this work, we propose an enhanced version of this algorithm based on pruning the search space when some Condorcet properties hold. This enhanced version greatly improves performance in terms of runtime.
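The exhaustive search that makes the Kemeny method intractable can be sketched in a few lines. The following is a toy illustration only, not the pruned algorithm the paper proposes; all function and variable names are ours. It scores every permutation of the alternatives by its total Kendall tau distance to the voters' rankings and keeps the cheapest.

```python
from itertools import permutations

def kendall_tau(ranking, ballot):
    """Number of pairs of alternatives on which `ranking` and `ballot` disagree."""
    pos = {alt: i for i, alt in enumerate(ballot)}
    disagreements = 0
    for i in range(len(ranking)):
        for j in range(i + 1, len(ranking)):
            # `ranking` places ranking[i] above ranking[j]; count a
            # disagreement if the ballot orders this pair the other way
            if pos[ranking[i]] > pos[ranking[j]]:
                disagreements += 1
    return disagreements

def kemeny(ballots):
    """Brute-force Kemeny: try all m! orderings, so feasible only for small m."""
    best, best_cost = None, float("inf")
    for candidate in permutations(ballots[0]):
        cost = sum(kendall_tau(candidate, b) for b in ballots)
        if cost < best_cost:
            best, best_cost = list(candidate), cost
    return best, best_cost
```

Even this toy version makes the complexity barrier concrete: the loop visits m! permutations, which is why exact solvers depend on pruning conditions such as the Condorcet-based ones the paper exploits.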
20

Holland, Paul W., and Dorothy T. Thayer. "Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions." Journal of Educational and Behavioral Statistics 25, no. 2 (June 2000): 133–83. http://dx.doi.org/10.3102/10769986025002133.

Abstract:
The well-developed theory of exponential families of distributions is applied to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. These models are powerful tools for many forms of parametric data smoothing and are particularly well-suited to problems in which there is little or no theory to guide a choice of probability models, e.g., smoothing a distribution to eliminate roughness and zero frequencies in order to equate scores from different tests. Attention is given to efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and to computationally efficient methods for obtaining the asymptotic standard errors of the fitted frequencies and proportions. We discuss tools that can be used to diagnose the quality of the fitted frequencies for both the univariate and the bivariate cases. Five examples, using real data, are used to illustrate the methods of this paper.
21

Conitzer, Vincent. "Designing Preferences, Beliefs, and Identities for Artificial Intelligence." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9755–59. http://dx.doi.org/10.1609/aaai.v33i01.33019755.

Abstract:
Research in artificial intelligence, as well as in economics and other related fields, generally proceeds from the premise that each agent has a well-defined identity, well-defined preferences over outcomes, and well-defined beliefs about the world. However, as we design AI systems, we in fact need to specify where the boundaries between one agent and another in the system lie, what objective functions these agents aim to maximize, and to some extent even what belief formation processes they use. The premise of this paper is that as AI is being broadly deployed in the world, we need well-founded theories of, and methodologies and algorithms for, how to design preferences, identities, and beliefs. This paper lays out an approach to address these problems from a rigorous foundation in decision theory, game theory, social choice theory, and the algorithmic and computational aspects of these fields.
22

Kagan, Pavel. "Theory and practice of organizational and technological design in construction." E3S Web of Conferences 389 (2023): 06013. http://dx.doi.org/10.1051/e3sconf/202338906013.

Abstract:
To achieve the effectiveness of the integrated process of organizational and technological design in construction, it is necessary to use the latest achievements of science in this area, taking into account the experience of their implementation. The main function of organizational and technological design is to model the implementation of construction processes (taking into account the identification and analysis of previous work), determine the options for connections and their priority, and, ultimately, select the most rational organizational and technological solutions that ensure the readiness of facilities and of the construction organization for the implementation of works. This choice is carried out with the help of a multi-criteria, multi-level assessment of such decisions based on computational and information modeling of construction processes, the choice of organization methods and technology for the production of works, the creation of an information base to provide construction with all types of resources using information and software, and the development of organizational and technological documentation. In Russia, in recent years, project management in the construction industry has become increasingly widespread. This is due to the modern processes of globalization, development and the emergence of common properties of the organizational structures of Russian and foreign construction companies to achieve the effectiveness of the functioning of such organizations. The unification of the target functions of construction organizations makes it possible to obtain a comprehensive, global management structure for efficient construction enterprises, which leads to the development of project management uniformity and the necessary development of organizational and technological design as a necessary stage in the construction of residential, social, cultural or other facilities.
23

Holehouse, James, and Hector Pollitt. "Non-equilibrium time-dependent solution to discrete choice with social interactions." PLOS ONE 17, no. 5 (May 26, 2022): e0267083. http://dx.doi.org/10.1371/journal.pone.0267083.

Abstract:
We solve the binary decision model of Brock and Durlauf (2001) in time using a method reliant on the resolvent of the master operator of the stochastic process. Our solution is valid when not at equilibrium and can be used to exemplify path-dependent behaviours of the binary decision model. The solution is computationally fast and is indistinguishable from Monte Carlo simulation. Well-known metastable effects are observed in regions of the model’s parameter space where agent rationality is above a critical value, and we calculate the time scale at which equilibrium is reached using a highly accurate method based on first passage time theory. In addition to considering selfish agents, who only care to maximise their own utility, we consider altruistic agents who make decisions on the basis of maximising global utility. Curiously, we find that although altruistic agents coalesce more strongly on a particular decision, thereby increasing their utility in the short-term, they are also more prone to being subject to non-optimal metastable regimes as compared to selfish agents. The method used for this solution can be easily extended to other binary decision models, including Kirman’s model of ant recruitment Kirman (1993), and under reinterpretation also provides a time-dependent solution to the mean-field Ising model. Finally, we use our time-dependent solution to construct a likelihood function that can be used on non-equilibrium data for model calibration. This is a rare finding, since often calibration in economic agent based models must be done without an explicit likelihood function. From simulated data, we show that even with a well-defined likelihood function, model calibration is difficult unless one has access to data representative of the underlying model.
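For orientation, the equilibrium that the paper's time-dependent solution relaxes toward can be sketched as a mean-field self-consistency iteration. This is a toy sketch of the Brock–Durlauf binary choice setup only, not the resolvent-based solution of the paper; the parameter names (`beta` for agent rationality, `h` for private utility bias, `j` for interaction strength) are our labels.

```python
import math

def mean_field_choice(beta, h, j, m0=0.0, steps=50):
    """Iterate the self-consistency map m <- tanh(beta * (h + j * m))
    for the mean binary choice in a Brock-Durlauf-style model."""
    m = m0
    for _ in range(steps):
        m = math.tanh(beta * (h + j * m))
    return m
```

Below a critical rationality (beta * j < 1 with h = 0) the iteration collapses to the neutral mixture m = 0; above it, it settles on one of two polarized equilibria, which is the regime where the metastable effects described in the abstract appear.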
24

Lockwood, Patricia L., Matthew A. J. Apps, Vincent Valton, Essi Viding, and Jonathan P. Roiser. "Neurocomputational mechanisms of prosocial learning and links to empathy." Proceedings of the National Academy of Sciences 113, no. 35 (August 15, 2016): 9763–68. http://dx.doi.org/10.1073/pnas.1603198113.

Abstract:
Reinforcement learning theory powerfully characterizes how we learn to benefit ourselves. In this theory, prediction errors—the difference between a predicted and actual outcome of a choice—drive learning. However, we do not operate in a social vacuum. To behave prosocially we must learn the consequences of our actions for other people. Empathy, the ability to vicariously experience and understand the affect of others, is hypothesized to be a critical facilitator of prosocial behaviors, but the link between empathy and prosocial behavior is still unclear. During functional magnetic resonance imaging (fMRI) participants chose between different stimuli that were probabilistically associated with rewards for themselves (self), another person (prosocial), or no one (control). Using computational modeling, we show that people can learn to obtain rewards for others but do so more slowly than when learning to obtain rewards for themselves. fMRI revealed that activity in a posterior portion of the subgenual anterior cingulate cortex/basal forebrain (sgACC) drives learning only when we are acting in a prosocial context and signals a prosocial prediction error conforming to classical principles of reinforcement learning theory. However, there is also substantial variability in the neural and behavioral efficiency of prosocial learning, which is predicted by trait empathy. More empathic people learn more quickly when benefitting others, and their sgACC response is the most selective for prosocial learning. We thus reveal a computational mechanism driving prosocial learning in humans. This framework could provide insights into atypical prosocial behavior in those with disorders of social cognition.
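The prediction-error mechanism at the heart of this account can be sketched with a standard Rescorla–Wagner update. This is a generic illustration, not the authors' fitted model; using a smaller learning rate for the prosocial condition is our assumption, standing in for the paper's finding that learning for others is slower.

```python
def learn(outcomes, alpha):
    """Track an expected value that moves toward each observed outcome
    by a learning-rate-scaled prediction error (Rescorla-Wagner)."""
    v, trace = 0.0, []
    for r in outcomes:
        prediction_error = r - v   # actual minus predicted outcome
        v += alpha * prediction_error
        trace.append(v)
    return trace

# Hypothetical learning rates: faster when earning rewards for oneself,
# slower when earning rewards for another person.
self_values = learn([1, 1, 1, 1, 1], alpha=0.3)
prosocial_values = learn([1, 1, 1, 1, 1], alpha=0.15)
```

After the same rewarded trials, the prosocial learner's value estimate lags behind the self learner's, mirroring the slower acquisition reported in the paper.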
25

Heller, Jürgen, and Claudia Repitsch. "Exploiting Prior Information in Stochastic Knowledge Assessment." Methodology 8, no. 1 (August 1, 2012): 12–22. http://dx.doi.org/10.1027/1614-2241/a000035.

Abstract:
Various adaptive procedures for efficiently assessing the knowledge state of an individual have been developed within the theory of knowledge structures. These procedures set out to draw a detailed picture of an individual’s knowledge in a certain field by posing a minimal number of questions. While research so far mostly emphasized theoretical issues, the present paper focuses on an empirical evaluation of probabilistic assessment. It reports on simulation data showing that both efficiency and accuracy of the assessment exhibit considerable sensitivity to the choice of parameters and prior information as captured by the initial likelihood of the knowledge states. In order to deal with problems that arise from incorrect prior information, an extension of the probabilistic assessment is proposed. Systematic simulations provide evidence for the efficiency and robustness of the proposed extension, as well as its feasibility in terms of computational costs.
26

Gulhane, Monali, and T. Sajana. "Human Behavior Prediction and Analysis Using Machine Learning-A Review." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 870–76. http://dx.doi.org/10.17762/turcomat.v12i5.1499.

Full text
Abstract:
Many current trends in medicine aim to predict and analyse patient behaviour, but cost-efficient prediction of user behaviour remains a technical challenge that the reviewed methodologies seek to overcome. A user's mental health supports a strong immune system, which matters in the COVID-19 pandemic. After a detailed study of different human health disease classification techniques, machine learning techniques are found to be reliable for extracting and analysing different human parameters, with CNNs the most suitable choice for disease classification: feature extraction and feature selection are handled automatically by the CNN layers, which reduces training time. Sensor-based feature extraction techniques such as EEG and ECG will be further explored with machine learning algorithms for early detection of diseases from human behavior on different platforms. Social behavior and eating habits play a vital role in disease detection, so a system is needed that combines such a wide variety of features with effective classification techniques at each stage. This paper contributes a review of human behavior analysis through different body parameters, food habits, and social media influences together with a person's social behavior, with the main objective of analysing these parameters to predict early signs of disease.
APA, Harvard, Vancouver, ISO, and other styles
27

Dhamal, Swapnil, and Y. Narahari. "Scalable Preference Aggregation in Social Networks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 42–50. http://dx.doi.org/10.1609/hcomp.v1i1.13074.

Full text
Abstract:
In social choice theory, preference aggregation refers to computing an aggregate preference over a set of alternatives given individual preferences of all the agents. In real-world scenarios, it may not be feasible to gather preferences from all the agents. Moreover, determining the aggregate preference is computationally intensive. In this paper, we show that the aggregate preference of the agents in a social network can be computed efficiently and with sufficient accuracy using preferences elicited from a small subset of critical nodes in the network. Our methodology uses a model developed based on real-world data obtained using a survey on human subjects, and exploits network structure and homophily of relationships. Our approach guarantees good performance for aggregation rules that satisfy a property which we call expected weak insensitivity. We demonstrate empirically that many practically relevant aggregation rules satisfy this property. We also show that two natural objective functions in this context satisfy certain properties, which makes our methodology attractive for scalable preference aggregation over large scale social networks. We conclude that our approach is superior to random polling while aggregating preferences related to individualistic metrics, whereas random polling is acceptable in the case of social metrics.
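The kind of positional aggregation rule the paper evaluates can be illustrated with a plain Borda count; the hand-picked subset of nodes below is only a stand-in for the paper's critical-node selection, which this sketch does not implement.

```python
# A plain Borda count as one concrete preference aggregation rule; the
# "subset" variable is a hypothetical stand-in for elicited critical nodes.
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate strict rankings (best first) into a single Borda order."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += m - 1 - position   # Borda points
    return sorted(scores, key=lambda a: -scores[a])

all_prefs = {
    "n1": ["a", "b", "c"], "n2": ["a", "c", "b"],
    "n3": ["b", "a", "c"], "n4": ["a", "b", "c"],
}
subset = ["n1", "n2", "n3"]   # hypothetical elicited nodes

print(borda_aggregate(all_prefs.values()))             # full-population order
print(borda_aggregate([all_prefs[n] for n in subset])) # subset approximation
```

Comparing the two printed orders is the spirit of the paper's question: how closely an aggregate computed from a small elicited subset tracks the aggregate over all agents.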
APA, Harvard, Vancouver, ISO, and other styles
28

CHEN, SHU-HENG. "GRAPHS, NETWORKS AND ACE." New Mathematics and Natural Computation 02, no. 03 (November 2006): 299–314. http://dx.doi.org/10.1142/s1793005706000531.

Full text
Abstract:
Following the standard probabilistic approach, we explicitly show the relation between a microscopic and a macroscopic view of an economy in the context of a discrete choice model. Two crucial issues concerning graphical applications to the network economy are addressed: the first concerns the representational richness of the graph, whereas the second concerns the formation and evolution of the graph when it is applied to social networks. This study can be a starting point for seeing the relevance of agent-based computational economics (ACE) to the network economy, in particular after bringing in the interaction mechanism associated with a network topology.
APA, Harvard, Vancouver, ISO, and other styles
29

Valsan, Calin. "B is for Bias." International Journal of Applied Behavioral Economics 3, no. 2 (April 2014): 35–47. http://dx.doi.org/10.4018/ijabe.2014040103.

Full text
Abstract:
Standard economic theory assumes rational agents. Individuals are expected to have rational expectations and constantly optimize their choices. Modern economic and financial theory is built on the assumption of rationality. There is plenty of evidence from psychology, however, that individuals are biased and rely heavily on heuristics in order to make decisions. Yet this is not a mere fluke or behavioral oddity. Because the social and economic environment in which individuals evolve is complex, behavioral biases represent evolutionary adaptations allowing economic agents to deal with undecidability and computational irreducibility.
APA, Harvard, Vancouver, ISO, and other styles
30

Ramsay, James O., and Marie Wiberg. "A Strategy for Replacing Sum Scoring." Journal of Educational and Behavioral Statistics 42, no. 3 (December 22, 2016): 282–307. http://dx.doi.org/10.3102/1076998616680841.

Full text
Abstract:
This article promotes the use of modern test theory in testing situations where sum scores for binary responses are now used. It directly compares the efficiencies and biases of classical and modern test analyses and finds an improvement in the root mean squared error of ability estimates of about 5% for two designed multiple-choice tests and about 12% for a classroom test. A new parametric density function for ability estimates, the tilted scaled β, is used to resolve the nonidentifiability of the univariate test theory model. Item characteristic curves (ICCs) are represented as basis function expansions of their log-odds transforms. A parameter cascading method along with roughness penalties is used to estimate the corresponding log odds of the ICCs and is demonstrated to be sufficiently computationally efficient that it can support the analysis of large data sets.
APA, Harvard, Vancouver, ISO, and other styles
31

Dwivedi, Priya, et al. "Maslow Theory Revisited-Covid-19 - Lockdown Impact on Consumer Behaviour." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 10, 2021): 2445–50. http://dx.doi.org/10.17762/turcomat.v12i2.2072.

Full text
Abstract:
The current study attempts to determine, with reference to Maslow's hierarchy, where consumers were placed before the arrival of Covid-19 and during the lockdown period. Consumer behavior consists of the cognitive, emotional, or physical activities in which people pick, purchase, consume, and dispose of products and services to satisfy their choices and expectations. Abraham Maslow defined a hierarchy of needs comprising physiological, safety, social, esteem, and self-actualization needs. A multiplicity of competing factors influences human behaviour and thereby needs and requirements. Recognition of needs is essential as the initial step for market participants in the supply chain. At the same time, recognising where the needs of consumers will alter is equally significant for the smooth functioning of market processes and for securing profitability while capturing the trend. In the present study, with the help of a primary survey, need recognition, and any variation therein before and during the Covid-19 lockdown period, is traced within the conceptual framework of Maslow's hierarchy of needs theory.
APA, Harvard, Vancouver, ISO, and other styles
32

Surya, R., et al. "Fanfiction as an Academic Tool for Advanced Language Fluency: A Study." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 4 (April 11, 2021): 364–69. http://dx.doi.org/10.17762/turcomat.v12i4.515.

Full text
Abstract:
In this globalized world, a thorough grasp of the English language has mushroomed into an inexorable necessity rather than an obligation. Traditional language learning often turns out to be an involuntary process, alienating learners and thereby posing bigger challenges to second language teaching. The ongoing, diversified technological revolution has created an informal, user-friendly ambience, making learning an uncomplicated and stress-free exercise. Digital platforms aid language learning in several ways, such as online language courses and special-purpose mobile applications. Exposure to the language is vital in the learning process, and social media can be of great help here. There is no better choice of practice ground than social media and its associated forms. Fanfiction forums are among the most popular reading and writing communities on the Internet. This paper attempts to throw light on how fanfiction can be useful in the task-based language teaching method for the attainment of advanced fluency in reading and writing skills. A looming literary sensation and a source of entertainment, fanfictions of prominent literary works and visual arts are widely read and accepted by the masses. This fictional writing can be incorporated into a higher-level language classroom as a learning tool, under the guidance of teachers who are accustomed to this form of writing and are digitally literate. A sample survey was conducted among fanfiction groups to highlight and justify the efficacy of fanfiction in promoting English language learning.
APA, Harvard, Vancouver, ISO, and other styles
33

Boixel, Arthur, and Ronald De Haan. "On the Complexity of Finding Justifications for Collective Decisions." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 5194–201. http://dx.doi.org/10.1609/aaai.v35i6.16656.

Full text
Abstract:
In a collective decision-making process, having the possibility to provide non-expert agents with a justification for why a target outcome is a good compromise given their individual preferences is an appealing idea. Such questions have recently been addressed in the computational social choice community at large, whether to explain the outcomes of a specific rule in voting theory or to seek transparency and accountability in multi-criteria decision making. Ultimately, the development of real-life applications based on these notions depends on their practical feasibility and on the scalability of the approach taken. In this paper, we provide computational complexity results that address the problem of finding and verifying justifications for collective decisions. In particular, we focus on the recent development of a general notion of justification for outcomes in voting theory. Such a justification consists of a step-by-step explanation, grounded in a normative basis, showing how the selection of the target outcome follows from the normative principles considered. We consider a language in which normative principles can be encoded, either as an explicit list of instances of the principles (by means of quantifier-free sentences) or in a succinct fashion (using quantifiers). We then analyse the computational complexity of identifying and checking justifications. For the case where the normative principles are given in the form of a list of instances, verifying the correctness of a justification is DP-complete and deciding on the existence of such a justification is Σ₂ᵖ-complete. For the case where the normative principles are given succinctly, deciding whether a justification is correct is in NEXP ∧ coNEXP and NEXP-hard, and deciding whether a justification exists is in EXP with access to an NP oracle and is NEXP-hard.
APA, Harvard, Vancouver, ISO, and other styles
34

Yoganandham, Dr G. "Technological Transformation And Progress Of Agricultural Development In Gudiyattam Taluk – An Assessment." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 11, 2021): 971–80. http://dx.doi.org/10.17762/turcomat.v12i6.2376.

Full text
Abstract:
Present-day farming routinely employs modern technology such as automation, heat and moisture sensors, aerial imagery, and GPS. These advanced devices, together with precision farming and mechanized systems, allow agricultural businesses to be more profitable, efficient, safe, and environmentally friendly. Contemporary farming knowledge is used to improve the wide range of production methods employed by farmers and is the basis of technological transformation. Advocacy of technology transfer should consider the various kinds of social capital as a policy alternative to the existing top-down approach in order to improve smallholder livelihoods. The key technological innovations in this space include indoor vertical farming, automation and robotics, livestock technology, modern greenhouse practices, precision agriculture and artificial intelligence, and blockchain. Contemporary agricultural practices use mechanized tools for irrigation, tilling, and harvesting, along with hybrid seeds. In India, traditional cultivation techniques are labour intensive, whereas contemporary agricultural equipment is mostly capital intensive. Against this background, the researcher concentrates on the technological transformation and progress of agricultural development in Gudiyattam Taluk of Vellore district, Tamil Nadu, from an analytical perspective.
APA, Harvard, Vancouver, ISO, and other styles
35

Wallner, Johannes P., Andreas Niskanen, and Matti Järvisalo. "Complexity Results and Algorithms for Extension Enforcement in Abstract Argumentation." Journal of Artificial Intelligence Research 60 (September 13, 2017): 1–40. http://dx.doi.org/10.1613/jair.5415.

Full text
Abstract:
Argumentation is an active area of modern artificial intelligence (AI) research, with connections to a range of fields, from computational complexity theory and knowledge representation and reasoning to philosophy and social sciences, as well as application-oriented work in domains such as legal reasoning, multi-agent systems, and decision support. Argumentation frameworks (AFs) of abstract argumentation have become the graph-based formal model of choice for many approaches to argumentation in AI, with semantics defining sets of jointly acceptable arguments, i.e., extensions. Understanding the dynamics of AFs has been recently recognized as an important topic in the study of argumentation in AI. In this work, we focus on the so-called extension enforcement problem in abstract argumentation as a recently proposed form of argumentation dynamics. We provide a nearly complete computational complexity map of argument-fixed extension enforcement under various major AF semantics, with results ranging from polynomial-time algorithms to completeness for the second level of the polynomial hierarchy. Complementing the complexity results, we propose algorithms for NP-hard extension enforcement based on constraint optimization under the maximum satisfiability (MaxSAT) paradigm. Going beyond NP, we propose novel MaxSAT-based counterexample-guided abstraction refinement procedures for the second-level complete problems and present empirical results on a prototype system constituting the first approach to extension enforcement in its generality.
APA, Harvard, Vancouver, ISO, and other styles
36

Panizza, Folco, Alexander Vostroknutov, and Giorgio Coricelli. "How conformity can lead to polarised social behaviour." PLOS Computational Biology 17, no. 10 (October 20, 2021): e1009530. http://dx.doi.org/10.1371/journal.pcbi.1009530.

Full text
Abstract:
Learning social behaviour of others strongly influences one’s own social attitudes. We compare several distinct explanations of this phenomenon, testing their predictions using computational modelling across four experimental conditions. In the experiment, participants chose repeatedly whether to pay for increasing (prosocial) or decreasing (antisocial) the earnings of an unknown other. Halfway through the task, participants predicted the choices of an extremely prosocial or antisocial agent (either a computer, a single participant, or a group of participants). Our analyses indicate that participants polarise their social attitude mainly due to normative expectations. Specifically, most participants conform to presumed demands by the authority (vertical influence), or because they learn that the observed human agents follow the norm very closely (horizontal influence).
APA, Harvard, Vancouver, ISO, and other styles
37

Viljoen, Salomé, Jake Goldenfein, and Lee McGuigan. "Design choices: Mechanism design and platform capitalism." Big Data & Society 8, no. 2 (July 2021): 205395172110343. http://dx.doi.org/10.1177/20539517211034312.

Full text
Abstract:
Mechanism design is a form of optimization developed in economic theory. It casts economists as institutional engineers, choosing an outcome and then arranging a set of market rules and conditions to achieve it. The toolkit from mechanism design is widely used in economics, policymaking, and now in building and managing online environments. Mechanism design has become one of the most pervasive yet inconspicuous influences on the digital mediation of social life. Its optimizing schemes structure online advertising markets and other multi-sided platform businesses. Whatever normative rationales mechanism design might draw on in its economic origins, as its influence has grown and its applications have become more computational, we suggest those justifications for using mechanism design to orchestrate and optimize human interaction are losing traction. In this article, we ask what ideological work mechanism design is doing in economics, computer science, and its applications to the governance of digital platforms. Observing mechanism design in action in algorithmic environments, we argue it has become a tool for producing information domination, distributing social costs in ways that benefit designers, and controlling and coordinating participants in multi-sided platforms.
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Zhongtian, and Shamima Haque. "Corporate social responsibility employment narratives: a linguistic analysis." Accounting, Auditing & Accountability Journal 32, no. 6 (August 23, 2019): 1690–713. http://dx.doi.org/10.1108/aaaj-10-2016-2753.

Full text
Abstract:
Purpose The purpose of this paper is twofold: first, it investigates whether and to what extent “linguistic hedging”, an impression management form of linguistic expression that conveys an ambiguous level of commitment, is used in corporate social responsibility (CSR) employment narratives; and second, it explores whether there is any difference in the use of linguistic hedging between written and spoken corporate forms of language. It mobilises these objectives by examining employee-related narratives made by electronic manufacturing services (EMS) providers domiciled in Taiwan, in the context of labour malpractice incidents. Design/methodology/approach Two groups of data are examined: corporate responsibility reports (written language) and interviews and speeches of corporate founders and senior executives (spoken language). The research sample is ten Taiwanese EMS firms, all of which attracted public criticism and scrutiny due to a series of employee-related incidents. The sample period is between 2009 and 2013. Linguistic hedging is measured quantitatively by the relative word frequency of hedges, using the concordance software ANTCONC, with findings interpreted through the lens of legitimacy theory and impression management. Findings The study found that hedging was evident in CSR narratives. The EMS providers in Taiwan appeared to use hedging in employee-related disclosures to manage legitimacy challenges due to employee-related incidents that had happened in their assembly plants. The adjustments in employee-related disclosures made by the EMS firms as a legitimation strategy can be seen as a rhetoric device of impression management or a form of symbolic legitimation to persuade society to restore their legitimacy status. Further, overall hedging was more frequently used in spoken than written language, which suggests that rhetorical hedging in written narratives is more likely to be a deliberate choice of tactics to influence stakeholder perceptions and thereby manage corporate legitimacy. Originality/value The study introduces a new analytical technique, linguistic hedging, into the CSR literature. This enriches research methods used in this field, providing more compelling insights into the relationship between the use of language and CSR narratives in the process of corporate legitimation of employee-related practices. This study thus provides a platform for future computational-linguistics studies in the field of CSR.
APA, Harvard, Vancouver, ISO, and other styles
39

Jiang, Ping, and Xiangbin Yan. "Stability analysis and control models for rumor spreading in online social networks." International Journal of Modern Physics C 28, no. 05 (March 9, 2017): 1750061. http://dx.doi.org/10.1142/s0129183117500619.

Full text
Abstract:
This paper establishes a novel Susceptible-Infected-Removed (SIR) rumor spreading model for online social networks (OSNs). The model utilizes the node degree to describe the dynamic changes of the number of rumor spreaders and it can be regarded as an extension of the traditional SIR model. Stability analysis of the model reveals that the spreader in social networks has a basic reproduction number. If the basic reproduction number is less than 1, then rumors will disappear. Otherwise, rumors will persist. According to this result, we can predict the trend of rumor spreading. Then we propose an immune-structure SIR model to explore the control method of rumor spreading. Stability analysis and numerical simulation of the model indicate that immunizing susceptible individual is an effective method to control rumors. Further, the immune-structure model explains that the network structure decides the choice of immune methods. Our findings offer some new insights to control the spread of rumors on OSNs.
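The threshold behaviour described above can be sketched with a forward-Euler integration of the classical mean-field SIR equations; the parameters are illustrative, and the paper's degree-dependent extension and immune-structure model are not reproduced here.

```python
# Mean-field SIR sketch of the rumor threshold: when the basic
# reproduction number R0 = beta/gamma is below 1 the rumor dies out,
# and above 1 it takes off. Parameters are illustrative only.

def simulate_sir(beta, gamma, s0=0.99, i0=0.01, steps=2000, dt=0.05):
    """Forward-Euler integration of the classical SIR equations."""
    s, i = s0, i0
    for _ in range(steps):
        new_spreading = beta * s * i * dt   # susceptible users become spreaders
        stifling = gamma * i * dt           # spreaders are removed
        s -= new_spreading
        i += new_spreading - stifling
    return s  # fraction of users never reached by the rumor

# R0 = 0.5: almost no one is ever reached; R0 = 2.0: most users are.
print(simulate_sir(beta=0.5, gamma=1.0))   # stays above 0.9
print(simulate_sir(beta=2.0, gamma=1.0))   # drops well below 0.5
```

Immunizing susceptible individuals, as the immune-structure model proposes, amounts to shrinking the effective susceptible pool and hence pushing the system below this threshold.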
APA, Harvard, Vancouver, ISO, and other styles
40

Sufi, Fahim. "Algorithms in Low-Code-No-Code for Research Applications: A Practical Review." Algorithms 16, no. 2 (February 13, 2023): 108. http://dx.doi.org/10.3390/a16020108.

Full text
Abstract:
Algorithms have evolved from machine code to low-code-no-code (LCNC) in the past 20 years. Observing the growth of LCNC-based algorithm development, the CEO of GitHub mentioned that the future of coding is no coding at all. This paper systematically reviewed several of the recent studies using mainstream LCNC platforms to understand the area of research, the LCNC platforms used within these studies, and the features of LCNC used for solving individual research questions. We identified 23 research works using LCNC platforms, such as SetXRM, the vf-OS platform, Aure-BPM, CRISP-DM, and Microsoft Power Platform (MPP). About 61% of these existing studies resorted to MPP as their primary choice. The critical research problems solved by these research works were within the area of global news analysis, social media analysis, landslides, tornadoes, COVID-19, digitization of process, manufacturing, logistics, and software/app development. The main reasons identified for solving research problems with LCNC algorithms were as follows: (1) obtaining research data from multiple sources in complete automation; (2) generating artificial intelligence-driven insights without having to manually code them. In the course of describing this review, this paper also demonstrates a practical approach to implement a cyber-attack monitoring algorithm with the most popular LCNC platform.
APA, Harvard, Vancouver, ISO, and other styles
41

Ochoa, Xavier, Simon Knight, and Alyssa Friend Wise. "Learning Analytics Impact: Critical Conversations on Relevance and Social Responsibility." Journal of Learning Analytics 7, no. 3 (December 17, 2020): 1–5. http://dx.doi.org/10.18608/jla.2020.73.1.

Full text
Abstract:
Our 2019 editorial opened a dialogue about what is needed to foster an impactful field of learning analytics (Knight, Wise, & Ochoa, 2019). As we head toward the close of a tumultuous year that has raised profound questions about the structure and processes of formal education and its role in society, this conversation is more relevant than ever. That editorial, and a recent online community event, focused on one component of the impact: standards for scientific rigour and the criteria by which knowledge claims in an interdisciplinary, multi-methodology field should be judged. These initial conversations revealed important commonalities across statistical, computational, and qualitative approaches in terms of a need for greater explanation and justification of choices in using appropriate data, models, or other methodological approaches, as well as the many micro-decisions made in applying specific methodologies to specific studies. The conversations also emphasize the need to perform different checks (for overfitting, for bias, for replicability, for the contextual bounds of applicability, for disconfirming cases) and the importance of learning analytics research being relevant by situating itself within a set of educational values, making tighter connections to theory, and considering its practical mobilization to affect learning. These ideas will serve as the starting point for a series of detailed follow-up conversations across the community, with the goal of generating updated standards and guidance for JLA articles.
APA, Harvard, Vancouver, ISO, and other styles
42

Foley, Michael, Rory Smead, Patrick Forber, and Christoph Riedl. "Avoiding the bullies: The resilience of cooperation among unequals." PLOS Computational Biology 17, no. 4 (April 7, 2021): e1008847. http://dx.doi.org/10.1371/journal.pcbi.1008847.

Full text
Abstract:
Can egalitarian norms or conventions survive the presence of dominant individuals who are ensured of victory in conflicts? We investigate the interaction of power asymmetry and partner choice in games of conflict over a contested resource. Previous models of cooperation do not include both power inequality and partner choice. Furthermore, models that do include power inequalities assume a static game where a bully’s advantage does not change. They have therefore not attempted to model complex and realistic properties of social interaction. Here, we introduce three models to study the emergence and resilience of cooperation among unequals when interaction is random, when individuals can choose their partners, and where power asymmetries dynamically depend on accumulated payoffs. We find that the ability to avoid bullies with higher competitive ability afforded by partner choice mostly restores cooperative conventions and that the competitive hierarchy never forms. Partner choice counteracts the hyper dominance of bullies who are isolated in the network and eliminates the need for others to coordinate in a coalition. When competitive ability dynamically depends on cumulative payoffs, complex cycles of coupled network-strategy-rank changes emerge. Effective collaborators gain popularity (and thus power), adopt aggressive behavior, get isolated, and ultimately lose power. Neither the network nor behavior converge to a stable equilibrium. Despite the instability of power dynamics, the cooperative convention in the population remains stable overall and long-term inequality is completely eliminated. The interaction between partner choice and dynamic power asymmetry is crucial for these results: without partner choice, bullies cannot be isolated, and without dynamic power asymmetry, bullies do not lose their power even when isolated. We analytically identify a single critical point that marks a phase transition in all three iterations of our models. 
This critical point is where the first individual breaks from the convention and cycles start to emerge.
APA, Harvard, Vancouver, ISO, and other styles
43

Xu, Yanhui, and Dan Wang. "Privacy Concerns and Antecedents of Building Data Sharing: A Privacy Calculus Perspective." ACM SIGEnergy Energy Informatics Review 3, no. 2 (June 2023): 3–18. http://dx.doi.org/10.1145/3607114.3607116.

Full text
Abstract:
In recent years, machine learning (ML) based building analytics have been developed for diverse building services. Yet the widespread sharing of building data, which underpins the establishment of ML models, is not a common practice in the buildings industry today. Clearly, there are privacy concerns. There are studies on protecting building data, e.g., to k-anonymize building data; yet these studies are computational methods. The root causes of why building operators are or are not willing to share data are unclear. In this paper, we study the problem of willingness to share building data. First, we justify our study by investigating the field to show that data sharing is indeed limited. Second, we examine the issue of willingness to share building data from the perspective of a social science study. We observe that the intention to disclose (i.e., decision making on data sharing) is based not only on perceived risks but also on perceived benefits. We leverage privacy calculus theory and present a systematic study. We develop hypotheses, design a questionnaire, conduct a survey involving 95 building operators and service providers around the world, and analyze the results, wherein we quantify how various factors influence the willingness to share building data. We further enhance our results with a small-scale interview. Third, we use trust, an important factor in the intention to disclose, to develop a trust model with differentiable trust levels. Such a model provides building operators with a mechanism for sharing data beyond a 0-and-1 choice. We present a case study in which we enhance an existing building data anonymization platform, PAD, with the trust model. We show that the enhanced PAD has a substantially smaller computation workload.
APA, Harvard, Vancouver, ISO, and other styles
44

Iwanaga, Saori, and Akira Namatame. "Hub Agents Determine Collective Behavior." New Mathematics and Natural Computation 11, no. 02 (May 20, 2015): 165–81. http://dx.doi.org/10.1142/s1793005715400049.

Full text
Abstract:
There is growing interest in studying collective behavior, including the dynamics of markets, the emergence of social norms and conventions, and collective phenomena in daily life such as traffic congestion. In our previous work [Iwanaga and Namatame, Collective behavior and diverse social network, International Journal of Advancements in Computing Technology 4(22) (2012) 321–320], we showed that collective behavior in cooperative relationships is affected by the structure of the social network, the initial collective behavior, and the diversity of the payoff parameters. In this paper, we focus on scale-free networks and investigate the effect of the number of interactions on collective behavior. We find that the choices of hub agents determine collective behavior.
APA, Harvard, Vancouver, ISO, and other styles
45

NIE, DA-CHENG, MING-JING DING, YAN FU, JUN-LIN ZHOU, and ZI-KE ZHANG. "SOCIAL INTEREST FOR USER SELECTING ITEMS IN RECOMMENDER SYSTEMS." International Journal of Modern Physics C 24, no. 04 (April 2013): 1350022. http://dx.doi.org/10.1142/s0129183113500228.

Full text
Abstract:
Recommender systems have developed rapidly and successfully. Such systems aim to help users find relevant items from a potentially overwhelming set of choices. However, most existing recommender algorithms focus on traditional user-item similarity computation rather than incorporating social interest into the recommendation process. Each user has their own field of preference and may influence their friends' preferences within that field, as reflected in the social interest users take in the items their friends collect. In order to model this social interest, in this paper we propose a simple method to compute users' social interest in specific items in recommender systems, and then integrate this social interest with similarity-based preference. Experimental results on two real-world datasets, Epinions and Friendfeed, show that this method can significantly improve not only the algorithm's precision-accuracy but also its diversity-accuracy.
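The integration step the abstract describes might be sketched as a convex blend of a similarity score with a social-interest term; the friend-fraction scoring and the mixing weight `w` are assumptions of this sketch, not the authors' formulas.

```python
# Hedged sketch: blend a user-item similarity score with "social
# interest", here taken to be the fraction of the user's friends who
# collected the item. The weight w is an assumed tuning parameter.

def recommend_score(similarity, friends_who_collected, n_friends, w=0.5):
    """Convex combination of similarity preference and social interest."""
    social_interest = friends_who_collected / n_friends if n_friends else 0.0
    return (1 - w) * similarity + w * social_interest

# An item with moderate similarity that half of a user's friends collected:
print(recommend_score(0.4, 2, 4))  # prints 0.45
```

Items collected by many of a user's friends get a boost even when plain similarity is modest, which is one plausible route to the precision and diversity gains the paper reports.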
APA, Harvard, Vancouver, ISO, and other styles
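A minimal sketch of the blended scoring the abstract above describes might look as follows; the data structures, the mixing weight `alpha`, and the scoring rule are illustrative assumptions rather than the authors' actual formulation.

```python
# Hypothetical sketch: blend a user-item similarity score with a
# social-interest score derived from friends' item collections.
def social_interest(user, item, friends, collections):
    """Fraction of the user's friends who have collected the item."""
    if not friends.get(user):
        return 0.0
    n = sum(1 for f in friends[user] if item in collections.get(f, set()))
    return n / len(friends[user])

def recommend(user, candidates, sim_score, friends, collections, alpha=0.5):
    """Rank candidate items by a convex blend of similarity and social interest."""
    scored = {
        item: (1 - alpha) * sim_score[item]
              + alpha * social_interest(user, item, friends, collections)
        for item in candidates
    }
    return sorted(scored, key=scored.get, reverse=True)
```

With `alpha = 0` this degenerates to pure similarity ranking; raising `alpha` lets items popular among a user's friends overtake items that are merely similar.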
46

Salahshour, Mohammad. "Freedom to choose between public resources promotes cooperation." PLOS Computational Biology 17, no. 2 (February 8, 2021): e1008703. http://dx.doi.org/10.1371/journal.pcbi.1008703.

Full text
Abstract:
As cooperation incurs a cost to the cooperator for others to benefit, its evolution seems to contradict natural selection. How evolution has resolved this obstacle has been among the most intensely studied questions in evolutionary theory in recent decades. Here, we show that having a choice between different public resources provides a simple mechanism for cooperation to flourish. Such a mechanism can be at work in many biological or social contexts where individuals can form different groups or join different institutions to perform a collective action task, or when they can choose between collective actions with different profitability. As a simple evolutionary model suggests, defectors tend to join the highest quality resource in such a context. This allows cooperators to survive and out-compete defectors by sheltering in a lower quality resource. Cooperation is maximized, however, when the qualities of the two highest quality resources are similar, and thus, they are almost interchangeable.
APA, Harvard, Vancouver, ISO, and other styles
47

Maynard-Zhang, P., and D. Lehmann. "Representing and Aggregating Conflicting Beliefs." Journal of Artificial Intelligence Research 19 (September 1, 2003): 155–203. http://dx.doi.org/10.1613/jair.1206.

Full text
Abstract:
We consider the two-fold problem of representing collective beliefs and aggregating these beliefs. We propose a novel representation for collective beliefs that uses modular, transitive relations over possible worlds. They allow us to represent conflicting opinions and they have a clear semantics, thus improving upon the quasi-transitive relations often used in social choice. We then describe a way to construct the belief state of an agent informed by a set of sources of varying degrees of reliability. This construction circumvents Arrow's Impossibility Theorem in a satisfactory manner by accounting for the explicitly encoded conflicts. We give a simple set-theory-based operator for combining the information of multiple agents. We show that this operator satisfies the desirable invariants of idempotence, commutativity, and associativity, and, thus, is well-behaved when iterated, and we describe a computationally effective way of computing the resulting belief state. Finally, we extend our framework to incorporate voting.
APA, Harvard, Vancouver, ISO, and other styles
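The paper's combination operator is set-theoretic; its actual construction over modular transitive relations is richer, but plain set union already exhibits the three invariants the abstract highlights (idempotence, commutativity, associativity), as this deliberately simplified sketch shows.

```python
# Hypothetical simplification: represent each agent's belief state as a
# set of ranked-world pairs and combine agents by set union. Union is
# idempotent, commutative, and associative, so iterating the
# combination is well-behaved, as the abstract requires.
def combine(*belief_states):
    merged = frozenset()
    for b in belief_states:
        merged |= b
    return merged

a = frozenset({("w1", "w2")})   # agent A: world w1 at least as plausible as w2
b = frozenset({("w2", "w1")})   # agent B: the opposite, an explicit conflict

pooled = combine(a, b)          # both pairs survive: the conflict stays encoded
```

Keeping both conflicting pairs, rather than forcing a tie-break, mirrors the abstract's point that explicitly encoded conflicts are what let the construction sidestep Arrow-style impossibility.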
48

Procaccia, Ariel D. "Computational social choice." XRDS: Crossroads, The ACM Magazine for Students 18, no. 2 (December 2011): 31–34. http://dx.doi.org/10.1145/2043236.2043249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Rodríguez, R. Comas, J. M. D. Oca Sánchez, and V. Lucero Salcedo. "Evaluation of Social Projects Using Neutrosophic AHP." International Journal of Neutrosophic Science 19, no. 1 (2022): 280–88. http://dx.doi.org/10.54216/ijns.190124.

Full text
Abstract:
Recently, industrialization has led to a worldwide rise in energy usage. Consequently, satisfying rising energy demands has assumed greater significance. Fuel, gasoline, and natural gas are all finite resources, making it all the more important to discover sustainable energy alternatives. Renewable resources play a significant role in fulfilling the current need for energy. Therefore, energy decisions and government policy are of paramount importance for nations. Energy policy and decision-making challenges, such as the appraisal of energy projects, the choice among fuel sources, the location of power plants, and the determination of energy policy, are solved using a variety of technical, financial, ecological, and social factors. Multi-criteria decision-making (MCDM) methodologies may be used to assess energy policy decisions, one of the important challenges for governments. Challenges associated with making energy-related decisions include choosing between various energy sources, assessing the relative merits of various energy supply techniques, and formulating and implementing an energy strategy. Various fuel sources are taken into account in the extensive research that has been conducted on energy decision-making. Because they take into account several, sometimes competing, criteria when assessing potential solutions, MCDM techniques have proven useful in resolving energy-related decision-making issues. By combining MCDM with neutrosophic set theory (NST), which captures the inherent ambiguity of human judgment, we may obtain more nuanced, tangible, and practical outcomes. This work provides a thorough analysis of the methodology and applications of neutrosophic MCDM in the power industry, synthesizing the current literature and the most recent breakthroughs to guide researchers in this area. The neutrosophic Analytic Hierarchy Process (AHP) method is used to compute the weights of each energy criterion in a social project. This research shows that neutrosophic AHP, either on its own or in combination with another MCDM approach, is the most often used MCDM technique.
APA, Harvard, Vancouver, ISO, and other styles
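The AHP weight computation mentioned at the end of the abstract can be sketched as follows. The geometric-mean approximation of the priority vector, the score function used to crispify neutrosophic `(T, I, F)` judgments, and the example comparison matrix are common textbook choices assumed here, not taken from the paper itself.

```python
import math

def score(t, i, f):
    """One common crispification of a neutrosophic judgment (T, I, F)."""
    return (2 + t - i - f) / 3

def ahp_weights(matrix):
    """Geometric-mean approximation of AHP priority weights:
    normalise the geometric mean of each row of the pairwise
    comparison matrix so the weights sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative crisp pairwise-comparison matrix for three criteria
# (values invented for the example; a neutrosophic study would obtain
# them by applying `score` to expert (T, I, F) judgments first).
pcm = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(pcm)   # sums to 1; criterion 1 gets the largest weight
```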
50

Petrov, Tatjana, Matej Hajnal, Julia Klein, David Šafránek, and Morgane Nouvian. "Extracting individual characteristics from population data reveals a negative social effect during honeybee defence." PLOS Computational Biology 18, no. 9 (September 15, 2022): e1010305. http://dx.doi.org/10.1371/journal.pcbi.1010305.

Full text
Abstract:
Honeybees protect their colony against vertebrates by mass stinging, and they coordinate their actions during this crucial event thanks to an alarm pheromone carried directly on the stinger, which is therefore released upon stinging. The pheromone then recruits nearby bees so that more and more bees participate in the defence. However, a quantitative understanding of how an individual bee adapts its stinging response during the course of an attack is still a challenge: typically, only the group behaviour is effectively measurable in experiments; further, linking the observed group behaviour with individual responses requires a probabilistic model enumerating a combinatorial number of possible group contexts during the defence; finally, extracting individual characteristics from group observations requires novel methods for parameter inference. We first experimentally observed the behaviour of groups of bees confronted with a fake predator inside an arena and quantified their defensive reaction by counting the number of stingers embedded in the dummy at the end of a trial. We propose a biologically plausible model of this phenomenon, which transparently links the choice of each individual bee to sting or not to its group context at the time of the decision. We then propose an efficient method for inferring the parameters of the model from the experimental data. Finally, we use this methodology to investigate the effect of group size on stinging initiation and alarm pheromone recruitment. Our findings shed light on how the social context influences stinging behaviour, by quantifying how the alarm pheromone concentration level affects the decision of each bee to sting or not in a given group size. We show that recruitment is curbed as group size grows, suggesting that the presence of nestmates is integrated as a negative cue by individual bees. Moreover, the unique integration of exact and statistical methods provides a quantitative characterisation of the uncertainty associated with each of the inferred parameters.
APA, Harvard, Vancouver, ISO, and other styles