Journal articles on the topic 'Fairness constraints'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Fairness constraints.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Detassis, Fabrizio, Michele Lombardi, and Michela Milano. "Teaching the Old Dog New Tricks: Supervised Learning with Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 3742–49. http://dx.doi.org/10.1609/aaai.v35i5.16491.

Abstract:
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems, such as safety and fairness. Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output. Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver: this enables taking advantage of decades of research on constrained optimization with limited effort. In practice, we use a decomposition scheme alternating master steps (in charge of enforcing the constraints) and learner steps (where any supervised ML model and training algorithm can be employed). The process leads to approximate constraint satisfaction in general, and convergence properties are difficult to establish; despite this fact, we found empirically that even a naive setup of our approach performs well on ML tasks with fairness constraints, and on classical datasets with synthetic constraints.
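The alternating master/learner decomposition described here lends itself to a compact illustration. The sketch below is a minimal toy version, not the authors' implementation: the master step is a closed-form projection that equalizes group mean predictions (standing in for a real constraint solver, which the paper uses), and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def master_step(y_pred, group):
    """Toy 'master': project predictions onto targets with equal group means.

    A real master step would call a constraint solver, as in the paper.
    """
    z = y_pred.copy()
    gap = z[group == 0].mean() - z[group == 1].mean()
    z[group == 0] -= gap / 2   # shift both groups halfway toward
    z[group == 1] += gap / 2   # a common mean
    return z

def teach_constraints(X, y, group, iters=10):
    model = LinearRegression()
    targets = y.astype(float).copy()
    for _ in range(iters):
        model.fit(X, targets)                           # learner step
        targets = master_step(model.predict(X), group)  # master step
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
group = rng.integers(0, 2, size=200)
y = X @ np.array([1.0, -2.0, 0.5]) + group   # labels correlated with group
model = teach_constraints(X, y, group)
```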
2

Ben-Porat, Omer, Fedor Sandomirskiy, and Moshe Tennenholtz. "Protecting the Protected Group: Circumventing Harmful Fairness." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 5176–84. http://dx.doi.org/10.1609/aaai.v35i6.16654.

Abstract:
The recent literature on fair Machine Learning manifests that the choice of fairness constraints must be driven by the utilities of the population. However, virtually all previous work makes the unrealistic assumption that the exact underlying utilities of the population (representing private tastes of individuals) are known to the regulator that imposes the fairness constraint. In this paper we initiate the discussion of the mismatch, the unavoidable difference between the underlying utilities of the population and the utilities assumed by the regulator. We demonstrate that the mismatch can make the disadvantaged protected group worse off after imposing the fairness constraint and provide tools to design fairness constraints that help the disadvantaged group despite the mismatch.
3

Li, Fengjiao, Jia Liu, and Bo Ji. "Combinatorial Sleeping Bandits With Fairness Constraints." IEEE Transactions on Network Science and Engineering 7, no. 3 (July 1, 2020): 1799–813. http://dx.doi.org/10.1109/tnse.2019.2954310.

4

Pi, Jiancai. "Fairness compatibility constraints and collective actions." Frontiers of Economics in China 2, no. 4 (October 2007): 644–52. http://dx.doi.org/10.1007/s11459-007-0033-x.

5

Vukadinović, Vladimir, and Gunnar Karlsson. "Multicast scheduling with resource fairness constraints." Wireless Networks 15, no. 5 (December 19, 2007): 571–83. http://dx.doi.org/10.1007/s11276-007-0085-y.

6

Wang, Xiao Fei, Xi Zhang, Yue Bing Chen, Lei Zhang, and Chao Jing Tang. "Spectrum Assignment Algorithm Based on Clonal Selection in Cognitive Radio Networks." Advanced Materials Research 457-458 (January 2012): 931–39. http://dx.doi.org/10.4028/www.scientific.net/amr.457-458.931.

Abstract:
An improved-immune-clonal-selection-based spectrum assignment algorithm (IICSA) for cognitive radio networks is proposed, combining graph theory and immune optimization. It uses constraint satisfaction operations to make the encoded antibody population satisfy the constraints and achieves global optimization. A random-constraint satisfaction operator and a fair-constraint satisfaction operator are designed to guarantee efficiency and fairness, respectively. Simulations compare the performance of IICSA with the color-sensitive graph coloring algorithm. The results indicate that the proposed algorithm increases network utilization and efficiently improves fairness.
7

Piron, Robert, and Luis Fernandez. "Are fairness constraints on profit-seeking important?" Journal of Economic Psychology 16, no. 1 (March 1995): 73–96. http://dx.doi.org/10.1016/0167-4870(94)00037-b.

8

Zheng, Jiping, Yuan Ma, Wei Ma, Yanhao Wang, and Xiaoyang Wang. "Happiness maximizing sets under group fairness constraints." Proceedings of the VLDB Endowment 16, no. 2 (October 2022): 291–303. http://dx.doi.org/10.14778/3565816.3565830.

Abstract:
Finding a happiness maximizing set (HMS) from a database, i.e., selecting a small subset of tuples that preserves the best score with respect to any nonnegative linear utility function, is an important problem in multi-criteria decision-making. When an HMS is extracted from a set of individuals to assist data-driven algorithmic decisions such as hiring and admission, it is crucial to ensure that the HMS can fairly represent different groups of candidates without bias and discrimination. However, although the HMS problem was extensively studied in the database community, existing algorithms do not take group fairness into account and may provide solutions that under-represent some groups. In this paper, we propose and investigate a fair variant of HMS (FairHMS) that not only maximizes the minimum happiness ratio but also guarantees that the number of tuples chosen from each group falls within predefined lower and upper bounds. Similar to the vanilla HMS problem, we show that FairHMS is NP-hard in three and higher dimensions. Therefore, we first propose an exact interval cover-based algorithm called IntCov for FairHMS on two-dimensional databases. Then, we propose a bicriteria approximation algorithm called BiGreedy for FairHMS on multi-dimensional databases by transforming it into a submodular maximization problem under a matroid constraint. We also design an adaptive sampling strategy to improve the practical efficiency of BiGreedy. Extensive experiments on real-world and synthetic datasets confirm the efficacy and efficiency of our proposal.
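The combination of greedy subset selection with per-group lower and upper bounds described above can be sketched generically. This toy greedy (assumed names; it is not the paper's IntCov or BiGreedy) preserves feasibility of the group bounds while picking the highest marginal-gain item each round.

```python
def fair_greedy(items, groups, k, lower, upper, gain):
    """Greedy selection of k items with per-group count bounds.

    groups[i]: group of item i; lower/upper: group -> min/max picks;
    gain(S, i): marginal gain of adding item i to selection S.
    """
    S = []
    count = {g: 0 for g in set(groups.values())}
    while len(S) < k:
        deficit = {g for g in count if count[g] < lower.get(g, 0)}
        need = sum(lower.get(g, 0) - count[g] for g in deficit)
        cands = [i for i in items
                 if i not in S
                 and count[groups[i]] < upper.get(groups[i], k)
                 # once all remaining slots are owed to deficit groups,
                 # only candidates from those groups stay feasible
                 and (groups[i] in deficit or need < k - len(S))]
        if not cands:
            break
        best = max(cands, key=lambda i: gain(S, i))
        S.append(best)
        count[groups[best]] += 1
    return S

# Toy usage: a score-based gain.
items = list(range(6))
groups = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
score = [9, 8, 7, 3, 2, 1]
picked = fair_greedy(items, groups, k=3, lower={"b": 1}, upper={"a": 2},
                     gain=lambda S, i: score[i])
print(picked)  # [0, 1, 3]: two best from "a", one forced pick from "b"
```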
9

Heaton, Stephen. "FINALITY OR FAIRNESS?" Cambridge Law Journal 73, no. 3 (November 2014): 477–80. http://dx.doi.org/10.1017/s0008197314000919.

Abstract:
THE finality of proceedings, resource constraints, a presumption of guilt, and the existence of the Criminal Cases Review Commission (“CCRC”) all combine to outweigh the principle of fairness for a convicted individual. Such was the stark conclusion of the Supreme Court in dismissing Kevin Nunn's application to force prosecution authorities to grant access to material which he believed would help him get his conviction quashed: R. (Nunn) v Chief Constable of Suffolk Constabulary [2014] UKSC 37, [2014] 3 W.L.R. 77.
10

Tan, Xianghua, Shasha Wang, Weili Zeng, and Zhibin Quan. "A Collaborative Optimization Method of Flight Slots Considering Fairness Among Airports." Mathematical Problems in Engineering 2022 (September 10, 2022): 1–18. http://dx.doi.org/10.1155/2022/1418911.

Abstract:
With the rapid development of civil aviation transportation, an increasing number of airport groups are being formed. However, the existing literature on fairness mostly focuses on fairness among airlines; there is no research on achieving scheduling fairness among airports with overlapping resources within an airport group. The goal of this paper is to comprehensively consider efficiency and fairness in slot scheduling, where fairness includes both interairline and interairport fairness. We develop a collaborative optimization model for the airport group that takes these three objectives into account and solve it with the ε-constraint method. In addition to considering the basic operational constraints of the airports, the model sets different adjustment boundaries to achieve the scheduling priorities specified by IATA. Applying the model to the Yangtze River Delta airport group, the results show that improved flight schedules can significantly reduce flight congestion, and a relatively fair scheduling result can be obtained by weighing airlines' fairness against airports' fairness. The model can improve the transparency of slot scheduling decisions and can serve as an auxiliary tool for making them.
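The ε-constraint method named above scalarizes a multi-objective problem by optimizing one objective while bounding the others. A minimal generic sketch with SciPy, using stand-in objectives rather than the paper's slot-scheduling model:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objectives: f1 is the cost to minimize (e.g., total displacement),
# f2 is an unfairness measure to be capped at a user-chosen epsilon.
f1 = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
f2 = lambda x: (x[0] - x[1]) ** 2

def eps_constraint(eps):
    # "ineq" constraints in SciPy require fun(x) >= 0, i.e. f2(x) <= eps.
    return minimize(f1, x0=np.zeros(2),
                    constraints=[{"type": "ineq",
                                  "fun": lambda x: eps - f2(x)}])

# Sweeping epsilon traces an efficiency-fairness Pareto front.
front = [(eps, round(eps_constraint(eps).fun, 3)) for eps in (0.0, 0.5, 1.0)]
print(front)
```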
11

Goto, Masahiro, Fuhito Kojima, Ryoji Kurata, Akihisa Tamura, and Makoto Yokoo. "Designing Matching Mechanisms under General Distributional Constraints." American Economic Journal: Microeconomics 9, no. 2 (May 1, 2017): 226–62. http://dx.doi.org/10.1257/mic.20160124.

Abstract:
To handle various applications, we study matching under constraints. The only requirement on the constraints is heredity: given a feasible matching, any matching with fewer students at each school is also feasible. Heredity subsumes existing constraints such as regional maximum quotas and diversity constraints. With constraints, there may not exist a matching that satisfies fairness and nonwastefulness (i.e., stability). We demonstrate that our new mechanism, the Adaptive Deferred Acceptance mechanism (ADA), satisfies strategy-proofness for students, nonwastefulness, and a weaker fairness property. We also offer a technique to apply ADA even if heredity is violated (e.g., minimum quotas). (JEL C78, D47, D63, D82, I20)
12

Rezaei, Ashkan, Rizal Fathony, Omid Memarrast, and Brian Ziebart. "Fairness for Robust Log Loss Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5511–18. http://dx.doi.org/10.1609/aaai.v34i04.6002.

Abstract:
Developing classification methods with high accuracy that also avoid unfair treatment of different groups has become increasingly important for data-driven decision making in social applications. Many existing methods enforce fairness constraints on a selected classifier (e.g., logistic regression) by directly forming constrained optimizations. We instead re-derive a new classifier from the first principles of distributional robustness that incorporates fairness criteria into a worst-case logarithmic loss minimization. This construction takes the form of a minimax game and produces a parametric exponential family conditional distribution that resembles truncated logistic regression. We present the theoretical benefits of our approach in terms of its convexity and asymptotic convergence. We then demonstrate the practical advantages of our approach on three benchmark fairness datasets.
13

Schoeffer, Jakob, Alexander Ritchie, Keziah Naggita, Faidra Monachou, Jessica Finocchiaro, and Marc Juarez. "Online Platforms and the Fair Exposure Problem under Homophily." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11899–908. http://dx.doi.org/10.1609/aaai.v37i10.26404.

Abstract:
In the wake of increasing political extremism, online platforms have been criticized for contributing to polarization. One line of criticism has focused on echo chambers and the recommended content served to users by these platforms. In this work, we introduce the fair exposure problem: given limited intervention power of the platform, the goal is to enforce balance in the spread of content (e.g., news articles) among two groups of users through constraints similar to those imposed by the Fairness Doctrine in the United States in the past. Groups are characterized by different affiliations (e.g., political views) and have different preferences for content. We develop a stylized framework that models intra- and inter-group content propagation under homophily, and we formulate the platform's decision as an optimization problem that aims at maximizing user engagement, potentially under fairness constraints. Our main notion of fairness requires that each group see a mixture of their preferred and non-preferred content, encouraging information diversity. Promoting such information diversity is often viewed as desirable and a potential means for breaking out of harmful echo chambers. We study the solutions to both the fairness-agnostic and fairness-aware problems. We prove that a fairness-agnostic approach inevitably leads to group-homogeneous targeting by the platform. This is only partially mitigated by imposing fairness constraints: we show that there exist optimal fairness-aware solutions which target one group with different types of content and the other group with only one type that is not necessarily the group's most preferred. Finally, using simulations with real-world data, we study the system dynamics and quantify the price of fairness.
14

Zhai, Zhou, Lei Luo, Heng Huang, and Bin Gu. "Faster Fair Machine via Transferring Fairness Constraints to Virtual Samples." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11918–25. http://dx.doi.org/10.1609/aaai.v37i10.26406.

Abstract:
Fair classification is an emerging and important research topic in the machine learning community. Existing methods usually formulate fairness metrics as additional inequality constraints and embed them into the original objective. This prevents fair classification problems from being tackled effectively by solvers designed for unconstrained optimization. Although many tailored algorithms have been designed to overcome this limitation, they often add computational burden and cannot cope with all types of fairness metrics. To address these challenges, we propose a novel method for fair classification. Specifically, we theoretically demonstrate that all types of fairness with linear and non-linear covariance functions can be transferred to two virtual samples, which makes existing state-of-the-art classification solvers applicable to these cases. Meanwhile, we generalize the proposed method to multiple fairness constraints. We take SVM as an example to show the effectiveness of our new idea. Empirically, we test the proposed method on real-world datasets, and all results confirm its excellent performance.
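Fairness metrics with linear covariance functions, as discussed above, typically bound the covariance between the sensitive attribute and the classifier's decision values. The sketch below only evaluates such a constraint (the paper's contribution, transferring it onto two virtual samples, is not reproduced here); all names are illustrative.

```python
import numpy as np

def covariance_fairness(scores, sensitive, c=0.0):
    """Linear covariance fairness constraint: |cov(z, f(x))| <= c.

    scores: decision values f(x_i); sensitive: 0/1 attribute z_i.
    Returns the empirical covariance and whether the constraint holds.
    """
    z = sensitive - sensitive.mean()      # center the sensitive attribute
    cov = float(np.mean(z * scores))
    return cov, abs(cov) <= c

rng = np.random.default_rng(1)
scores = rng.normal(size=500)
sensitive = rng.integers(0, 2, size=500).astype(float)
cov, ok = covariance_fairness(scores, sensitive, c=0.05)
print(cov, ok)
```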
15

Motchoulski, Alexander, and Phil Smolenski. "Principles of Collective Choice and Constraints of Fairness." Journal of Philosophy 116, no. 12 (2019): 678–90. http://dx.doi.org/10.5840/jphil20191161243.

Abstract:
In “The Difference Principle Would Not Be Chosen behind the Veil of Ignorance,” Johan E. Gustafsson argues that the parties in the Original Position (OP) would not choose the Difference Principle to regulate their society’s basic structure. In reply to this internal critique, we provide two arguments. First, his choice models do not serve as a counterexample to the choice of the difference principle, as the models must assume that individual rationality scales to collective contexts in a way that begs the question in favor of utilitarianism. Second, the choice models he develops are incompatible with the constraints of fairness that apply in the OP, which by design subordinates claims of rationality to claims of impartiality. When the OP is modeled correctly the difference principle is indeed entailed by the conditions of the OP.
16

Echenique, Federico, Antonio Miralles, and Jun Zhang. "Fairness and efficiency for allocations with participation constraints." Journal of Economic Theory 195 (July 2021): 105274. http://dx.doi.org/10.1016/j.jet.2021.105274.

17

Balzano, Walter, Marco Lapegna, Silvia Stranieri, and Fabio Vitale. "Competitive-blockchain-based parking system with fairness constraints." Soft Computing 26, no. 9 (March 1, 2022): 4151–62. http://dx.doi.org/10.1007/s00500-022-06888-1.

18

Kryger, Esben Masotti. "Pension Fund Design under Long-term Fairness Constraints." Geneva Risk and Insurance Review 35, no. 2 (March 23, 2010): 130–59. http://dx.doi.org/10.1057/grir.2009.10.

19

Roy, Rita, and Giduturi Appa Rao. "A Framework for an Efficient Recommendation System Using Time and Fairness Constraint Based Web Usage Mining Technique." Ingénierie des systèmes d'information 27, no. 3 (June 30, 2022): 425–31. http://dx.doi.org/10.18280/isi.270308.

Abstract:
Users interact with various websites such as Facebook, Gmail, and YouTube. Based on the data gathered and analyzed, a system can predict users' future navigation patterns and serve the pages they are likely to request. To track users' navigational sessions, the web access logs created at a specific website are processed. The user session data are then grouped into clusters so that intra-cluster similarity is maximized while inter-cluster similarity is minimized. Recent clustering and fairness analysis research has focused on centric-based methods such as k-median and k-means clustering. We propose improved constraint-based clustering (ICBC), a fair algorithm for Hierarchical Agglomerative Clustering (HAC) that applies fairness constraints regardless of the distance-linkage parameters, simplifying fair-clustering trials for HAC across various protected groups compared with vanilla HAC techniques. ICBC can also be used to select an algorithm whose inherent bias matches a specific problem and to adjust the optimization criterion of any given algorithm under interpretation constraints, improving clustering efficiency. We show that our proposed algorithm finds fairer clusterings in an evaluation on the NASA dataset by balancing the constraints of the problem.
20

Sun, Rui, Fengwei Zhou, Zhenhua Dong, Chuanlong Xie, Lanqing Hong, Jiawei Li, Rui Zhang, Zhen Li, and Zhenguo Li. "Fair-CDA: Continuous and Directional Augmentation for Group Fairness." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9918–26. http://dx.doi.org/10.1609/aaai.v37i8.26183.

Abstract:
In this work, we propose Fair-CDA, a fine-grained data augmentation strategy for imposing fairness constraints. We use a feature disentanglement method to extract the features highly related to the sensitive attributes. Then we show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups. By adjusting the perturbation strength in the direction of the paths, our proposed augmentation is controllable and auditable. To alleviate the accuracy degradation caused by fairness constraints, we further introduce a calibrated model to impute labels for the augmented data. Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness. Experimental results show that Fair-CDA consistently outperforms state-of-the-art methods on widely-used benchmarks, e.g., Adult, CelebA and MovieLens. Especially, Fair-CDA obtains an 86.3% relative improvement for fairness while maintaining the accuracy on the Adult dataset. Moreover, we evaluate Fair-CDA in an online recommendation system to demonstrate the effectiveness of our method in terms of accuracy and fairness.
21

Li, Yunyi, Maria De-Arteaga, and Maytal Saar-Tsechansky. "When More Data Lead Us Astray: Active Data Acquisition in the Presence of Label Bias." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 10, no. 1 (October 14, 2022): 133–46. http://dx.doi.org/10.1609/hcomp.v10i1.21994.

Abstract:
An increased awareness concerning risks of algorithmic bias has driven a surge of efforts around bias mitigation strategies. A vast majority of the proposed approaches fall under one of two categories: (1) imposing algorithmic fairness constraints on predictive models, and (2) collecting additional training samples. Most recently and at the intersection of these two categories, methods that propose active learning under fairness constraints have been developed. However, proposed bias mitigation strategies typically overlook the bias presented in the observed labels. In this work, we study fairness considerations of active data collection strategies in the presence of label bias. We first present an overview of different types of label bias in the context of supervised learning systems. We then empirically show that, when overlooking label bias, collecting more data can aggravate bias, and imposing fairness constraints that rely on the observed labels in the data collection process may not address the problem. Our results illustrate the unintended consequences of deploying a model that attempts to mitigate a single type of bias while neglecting others, emphasizing the importance of explicitly differentiating between the types of bias that fairness-aware algorithms aim to address, and highlighting the risks of neglecting label bias during data collection.
22

Islam, Md Mouinul, Dong Wei, Baruch Schieber, and Senjuti Basu Roy. "Satisfying complex top-k fairness constraints by preference substitutions." Proceedings of the VLDB Endowment 16, no. 2 (October 2022): 317–29. http://dx.doi.org/10.14778/3565816.3565832.

Abstract:
Given m users (voters), where each user casts her preference for a single item (candidate) over n items (candidates) as a ballot, the preference aggregation problem returns the k items (candidates) that have the k highest numbers of preferences (votes). Our work studies this problem considering complex fairness constraints that have to be satisfied via proportionate representations of different values of the group protected attribute(s) in the top-k results. Precisely, we study the margin finding problem under single ballot substitutions, where a single substitution amounts to removing a vote from candidate i and assigning it to candidate j, and the goal is to minimize the number of single ballot substitutions needed to guarantee that the top-k results satisfy the fairness constraints. We study several variants of this problem considering how top-k fairness constraints are defined: (i) MFBinaryS and MFMultiS are defined when the fairness (proportionate representation) is defined over a single, binary or multivalued, protected attribute, respectively; (ii) MF-Multi2 is studied when top-k fairness is defined over two different protected attributes; (iii) MFMulti3+ investigates the margin finding problem considering 3 or more protected attributes. We study these problems theoretically and present a suite of algorithms with provable guarantees. We conduct rigorous large-scale experiments involving multiple real-world datasets by appropriately adapting multiple state-of-the-art solutions to demonstrate the effectiveness and scalability of our proposed methods.
23

Yin, Yingqi, Fengye Hu, Ling Cen, Yu Du, and Lu Wang. "Balancing Long Lifetime and Satisfying Fairness in WBAN Using a Constrained Markov Decision Process." International Journal of Antennas and Propagation 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/657854.

Abstract:
As an important part of the Internet of Things (IoT) and a special case of device-to-device (D2D) communication, the wireless body area network (WBAN) has gradually become a focus of attention. Since a WBAN is a body-centered network, the energy of sensor nodes is strictly limited, as they are supplied by batteries with limited power. In each data collection, only one sensor node is scheduled to transmit its measurements directly to the access point (AP) through the fading channel. We formulate the problem of dynamically choosing which sensor should communicate with the AP to maximize network lifetime under a fairness constraint as a constrained Markov decision process (CMDP). The optimal lifetime and optimal policy are obtained via the Bellman equation in dynamic programming. The proposed algorithm characterizes the limiting performance of WBAN lifetime under different degrees of fairness constraints. Because acquiring global channel state information (CSI) incurs a large implementation overhead, we put forward a distributed scheduling algorithm that uses local CSI, which saves network overhead and simplifies the algorithm. Simulations demonstrate that this scheduling algorithm can allocate time slots reasonably under different channel conditions to balance network lifetime and fairness.
24

Liao, Yiqiao, and Parinaz Naghizadeh. "Social Bias Meets Data Bias: The Impacts of Labeling and Measurement Errors on Fairness Criteria." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8764–72. http://dx.doi.org/10.1609/aaai.v37i7.26054.

Abstract:
Although many fairness criteria have been proposed to ensure that machine learning algorithms do not exhibit or amplify our existing social biases, these algorithms are trained on datasets that can themselves be statistically biased. In this paper, we investigate the robustness of existing (demographic) fairness criteria when the algorithm is trained on biased data. We consider two forms of dataset bias: errors by prior decision makers in the labeling process, and errors in the measurement of the features of disadvantaged individuals. We analytically show that some constraints (such as Demographic Parity) can remain robust when facing certain statistical biases, while others (such as Equalized Odds) are significantly violated if trained on biased data. We provide numerical experiments based on three real-world datasets (the FICO, Adult, and German credit score datasets) supporting our analytical findings. While fairness criteria are primarily chosen under normative considerations in practice, our results show that naively applying a fairness constraint can lead to not only a loss in utility for the decision maker, but more severe unfairness when data bias exists. Thus, understanding how fairness criteria react to different forms of data bias presents a critical guideline for choosing among existing fairness criteria, or for proposing new criteria, when available datasets may be biased.
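The two families of criteria compared above can be measured directly from predictions. A minimal sketch with illustrative helper names; note that when y is the observed, possibly mislabeled outcome, the equalized odds estimate inherits that label bias, which is the paper's point.

```python
import numpy as np

def dp_gap(y_hat, a):
    """Demographic parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|."""
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def eo_gap(y_hat, y, a):
    """Equalized odds gap: worst group difference in TPR (y=1) and FPR (y=0).

    If y is a biased observed label, this estimate is biased too.
    """
    gaps = []
    for label in (0, 1):
        g0 = y_hat[(a == 0) & (y == label)].mean()
        g1 = y_hat[(a == 1) & (y == label)].mean()
        gaps.append(abs(g0 - g1))
    return max(gaps)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
a = rng.integers(0, 2, 1000)
y_hat = (rng.random(1000) < 0.5 + 0.2 * a).astype(int)  # favors group a=1
print(dp_gap(y_hat, a), eo_gap(y_hat, y, a))
```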
25

Suksompong, Warut. "Constraints in fair division." ACM SIGecom Exchanges 19, no. 2 (November 2021): 46–61. http://dx.doi.org/10.1145/3505156.3505162.

Abstract:
The fair allocation of resources to interested agents is a fundamental problem in society. While the majority of the fair division literature assumes that all allocations are feasible, in practice there are often constraints on the allocation that can be chosen. In this survey, we discuss fairness guarantees for both divisible (cake cutting) and indivisible resources under several common types of constraints, including connectivity, cardinality, matroid, geometric, separation, budget, and conflict constraints. We also outline a number of open questions and directions.
26

Hu, Yaowei, and Lu Zhang. "Achieving Long-Term Fairness in Sequential Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9549–57. http://dx.doi.org/10.1609/aaai.v36i9.21188.

Abstract:
In this paper, we propose a framework for achieving long-term fair sequential decision making. By conducting both the hard and soft interventions, we propose to take path-specific effects on the time-lagged causal graph as a quantitative tool for measuring long-term fairness. The problem of fair sequential decision making is then formulated as a constrained optimization problem with the utility as the objective and the long-term and short-term fairness as constraints. We show that such an optimization problem can be converted to a performative risk optimization. Finally, repeated risk minimization (RRM) is used for model training, and the convergence of RRM is theoretically analyzed. The empirical evaluation shows the effectiveness of the proposed algorithm on synthetic and semi-synthetic temporal datasets.
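Repeated risk minimization, used above for model training, alternates between refitting the model and observing the data distribution its deployment induces, until a stable point is reached. A schematic sketch with toy fit and response functions (all of these are assumptions for illustration, not the paper's model):

```python
import numpy as np

def repeated_risk_minimization(fit, respond, data, iters=50, tol=1e-6):
    """fit: data -> params; respond: params -> data induced by deployment."""
    params = fit(data)
    for _ in range(iters):
        data = respond(params)            # population reacts to the model
        new_params = fit(data)            # retrain on the induced distribution
        if np.linalg.norm(new_params - params) < tol:
            break                         # performatively stable point
        params = new_params
    return params

base = np.array([1.0, 2.0, 3.0])
fit = lambda d: np.array([d.mean()])      # toy "training"
respond = lambda p: base + 0.5 * p[0]     # toy induced shift
print(repeated_risk_minimization(fit, respond, base))  # converges to 4.0
```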
27

Becker, Ruben, Gianlorenzo D'Angelo, and Sajjad Ghobadi. "On the Cost of Demographic Parity in Influence Maximization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14110–18. http://dx.doi.org/10.1609/aaai.v37i12.26651.

Abstract:
Modeling and shaping how information spreads through a network is a major research topic in network analysis. While initially the focus was mostly on efficiency, fairness criteria have recently been taken into account in this setting. Most work, however, has focused on maximin criteria, under which different groups can still receive very different shares of information. In this work we propose to consider fairness as a notion to be guaranteed by an algorithm rather than as a criterion to be maximized. To this end, we propose three optimization problems that aim at maximizing the overall spread while enforcing strict levels of demographic parity fairness via constraints (either ex-post or ex-ante). The level of fairness hence becomes a user choice rather than a property to be observed upon output. We study this setting from various perspectives. First, we prove that the cost of introducing demographic parity can be high in terms of both overall spread and computational complexity, i.e., the price of fairness may be unbounded for all three problems, and optimal solutions are hard to compute, in some cases even approximately or when fairness constraints may be violated. For one of our problems, we still design an algorithm with both a constant approximation factor and a bounded fairness violation. We also give two heuristics that allow the user to choose the tolerated fairness violation. By means of an extensive experimental study, we show that our algorithms perform well in practice, that is, they achieve the best demographic parity fairness values. For certain instances we additionally even obtain an overall spread comparable to the most efficient algorithms that come without any fairness guarantee, indicating that the empirical price of fairness may actually be small when using our algorithms.
28

Busard, Simon, Charles Pecheur, Hongyang Qu, and Franco Raimondi. "Reasoning about Strategies under Partial Observability and Fairness Constraints." Electronic Proceedings in Theoretical Computer Science 112 (March 1, 2013): 71–79. http://dx.doi.org/10.4204/eptcs.112.12.

29

Li, G., and H. Liu. "Resource Allocation for OFDMA Relay Networks With Fairness Constraints." IEEE Journal on Selected Areas in Communications 24, no. 11 (November 2006): 2061–69. http://dx.doi.org/10.1109/jsac.2006.881627.

30

Zhang, Honghai, Qiqian Zhang, and Lei Yang. "A User Equilibrium Assignment Flow Model for Multiairport Open Network System." Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/631428.

Abstract:
To reduce flight delays and promote fairness in air traffic management, we study the imbalance between supply and demand in an airport network system from the perspectives of both the system and the users. First, we establish an open multiairport network flow system that captures the correlation between arrivals and departures at capacity-constrained airports, as well as the relevance between multiairport united flights. Then, based on the efficiency rationing principle, we propose an optimization model to reassign flow under user equilibrium constraints. These constraints include the Gini coefficient, system capacity, and united flights. The model minimizes the total flight delay cost of capacity-constrained airports in the network system. We also introduce evaluation indexes to quantitatively analyze fairness among airlines. Finally, the model is verified on an open multiairport network system with actual flight data from China. Test results show that the model can be used to coordinate and optimize the matching of flow and capacity in the multiairport system, make full use of airport capacity, and minimize system delays. The findings provide a useful reference for air traffic controllers developing scientific and rational assignment strategies.
31

Biswas, Arpita, and Siddharth Barman. "Matroid Constrained Fair Allocation Problem." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9921–22. http://dx.doi.org/10.1609/aaai.v33i01.33019921.

Abstract:
We consider the problem of fairly allocating a set of indivisible goods among a group of homogeneous agents under matroid constraints and additive valuations. We propose a novel algorithm that computes a fair allocation for instances with additive and identical valuations, even under matroid constraints. Our result provides a computational anchor to the existential result for the fairness notion EF1 (envy-freeness up to one good) by Biswas and Barman in this setting. We further provide examples to show that fairness notions stronger than EF1 do not always exist in this setting.
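For intuition, cardinality constraints (a special case of the matroid constraints above) can be handled by a round-robin that skips goods whose category is exhausted for an agent. This toy sketch only illustrates the setting under identical valuations; it is not the authors' algorithm and carries no EF1 guarantee by itself.

```python
def constrained_round_robin(goods, values, agents, category, cap):
    """Allocate goods to agents in value order, respecting per-category caps.

    values: good -> common value; category: good -> category id;
    cap: per-agent limit on goods from each category.
    """
    bundles = {a: [] for a in agents}
    used = {(a, c): 0 for a in agents for c in set(category.values())}
    turn = 0
    for g in sorted(goods, key=values.get, reverse=True):
        for i in range(len(agents)):           # next agent with room, if any
            a = agents[(turn + i) % len(agents)]
            if used[(a, category[g])] < cap:
                bundles[a].append(g)
                used[(a, category[g])] += 1
                turn = (turn + i + 1) % len(agents)
                break                          # goods with no room are skipped
    return bundles

goods = ["g1", "g2", "g3", "g4"]
values = {"g1": 4, "g2": 3, "g3": 2, "g4": 1}
category = {"g1": "c1", "g2": "c1", "g3": "c2", "g4": "c2"}
print(constrained_round_robin(goods, values, ["alice", "bob"], category, cap=1))
```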
32

Ma, Wen Min, Hai Jun Zhang, Xiang Ming Wen, Wei Zheng, and Zhao Ming Lu. "A Novel QoS Guaranteed Cross-Layer Scheduling Scheme for Downlink Multiuser OFDM Systems." Applied Mechanics and Materials 182-183 (June 2012): 1352–57. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1352.

Abstract:
To provide quality-of-service (QoS) differentiation, guarantee user fairness, and achieve efficient power and spectrum utilization in downlink multiuser orthogonal frequency-division multiplexing (MU-OFDM) systems, a novel QoS-guaranteed cross-layer (QGCL) scheduling scheme is proposed in this paper. The scheme formulates scheduling as an optimization problem of overall system utility under the system constraints. Moreover, we propose a simple and efficient binary constrained particle swarm optimization (PSO) method to solve the scheduling problem more effectively. Compared with classical methods, simulation results show that the proposed QGCL scheduling scheme can significantly increase system throughput while guaranteeing the fulfillment of QoS requirements as well as fairness.
33

Zhang, Xueru, Mohammad Mahdi Khalili, and Mingyan Liu. "Long-Term Impacts of Fair Machine Learning." Ergonomics in Design: The Quarterly of Human Factors Applications 28, no. 3 (October 25, 2019): 7–11. http://dx.doi.org/10.1177/1064804619884160.

Abstract:
Machine learning models developed from real-world data can inherit potential, preexisting bias in the dataset. When these models are used to inform decisions involving human beings, fairness concerns inevitably arise. Imposing certain fairness constraints in the training of models can be effective only if appropriate criteria are applied. However, a fairness criterion can be defined and assessed only when the interaction between the decisions and the underlying population is well understood. We introduce two feedback models describing how people react when receiving machine-aided decisions and illustrate that some commonly used fairness criteria can lead to undesirable consequences while reinforcing discrimination.
34

Lee, Joshua, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory W. Wornell, Leonid Karlinsky, and Rogerio Schmidt Feris. "A Maximal Correlation Framework for Fair Machine Learning." Entropy 24, no. 4 (March 26, 2022): 461. http://dx.doi.org/10.3390/e24040461.

Abstract:
As machine learning algorithms grow in popularity and diversify across many industries, ethical and legal concerns regarding their fairness have become increasingly relevant. We explore the problem of algorithmic fairness from an information-theoretic view. The maximal correlation framework is introduced for expressing fairness constraints and is shown to yield regularizers that enforce independence- and separation-based fairness criteria, admitting optimization algorithms for both discrete and continuous variables that are more computationally efficient than existing ones. We show that these algorithms provide smooth performance-fairness tradeoff curves and perform competitively with state-of-the-art methods on both discrete datasets (COMPAS, Adult) and continuous datasets (Communities and Crime).
35

Nguyen, Bich-Ngan T., Phuong N. H. Pham, Van-Vang Le, and Václav Snášel. "Influence Maximization under Fairness Budget Distribution in Online Social Networks." Mathematics 10, no. 22 (November 9, 2022): 4185. http://dx.doi.org/10.3390/math10224185.

Abstract:
In social influence analysis, viral marketing, and other fields, the influence maximization problem is fundamental, has critical applications, and has attracted many researchers over the last decades. This problem asks to find a seed set of size k with the largest expected influence spread. Our paper studies the problem of fairness budget distribution in influence maximization, aiming to find a seed set of size k fairly distributed over target communities. Each community has certain lower and upper budget bounds, and the number of its elements selected into the seed set must respect these bounds. Nevertheless, resolving this problem encounters two main challenges: strongly influential seed sets might not adhere to the fairness constraint, and the problem is NP-hard. To address these shortcomings, we propose three algorithms (FBIM1, FBIM2, and FBIM3). These algorithms combine an improved greedy strategy for selecting seeds, ensuring maximum coverage under the fairness constraints, with sampling through a Reverse Influence Sampling framework. Our algorithms provide a (1/2 − ε)-approximation of the optimal solution and require O(kT log((8 + 2ε)n(ln(2/δ) + ln C(n, k))/ε²)), O(kT log n/(ε²k)), and O((T/ε) log(k/ε) · log n/(ε²k)) complexity, respectively, where C(n, k) denotes the binomial coefficient. We conducted experiments on real social networks. The results show that our proposed algorithms are highly scalable while satisfying theoretical assurances, and that the coverage ratios with respect to the target communities are larger than those of the state-of-the-art alternatives; in some cases our algorithms even reach 100% coverage of the target communities. In addition, our algorithms are feasible and effective even on big data, and their results guarantee the fairness constraints.
36

Li, Xuran, Peng Wu, and Jing Su. "Accurate Fairness: Improving Individual Fairness without Trading Accuracy." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14312–20. http://dx.doi.org/10.1609/aaai.v37i12.26674.

Abstract:
Accuracy and individual fairness are both crucial for trustworthy machine learning, but these two aspects are often incompatible with each other so that enhancing one aspect may sacrifice the other inevitably with side effects of true bias or false fairness. We propose in this paper a new fairness criterion, accurate fairness, to align individual fairness with accuracy. Informally, it requires the treatments of an individual and the individual's similar counterparts to conform to a uniform target, i.e., the ground truth of the individual. We prove that accurate fairness also implies typical group fairness criteria over a union of similar sub-populations. We then present a Siamese fairness in-processing approach to minimize the accuracy and fairness losses of a machine learning model under the accurate fairness constraints. To the best of our knowledge, this is the first time that a Siamese approach is adapted for bias mitigation. We also propose fairness confusion matrix-based metrics, fair-precision, fair-recall, and fair-F1 score, to quantify a trade-off between accuracy and individual fairness. Comparative case studies with popular fairness datasets show that our Siamese fairness approach can achieve on average 1.02%-8.78% higher individual fairness (in terms of fairness through awareness) and 8.38%-13.69% higher accuracy, as well as 10.09%-20.57% higher true fair rate, and 5.43%-10.01% higher fair-F1 score, than the state-of-the-art bias mitigation techniques. This demonstrates that our Siamese fairness approach can indeed improve individual fairness without trading accuracy. Finally, the accurate fairness criterion and Siamese fairness approach are applied to mitigate the possible service discrimination with a real Ctrip dataset, by on average fairly serving 112.33% more customers (specifically, 81.29% more customers in an accurately fair way) than baseline models.
37

Wang, Depei, Lianglun Cheng, and Tao Wang. "Fairness-aware genetic-algorithm-based few-shot classification." Mathematical Biosciences and Engineering 20, no. 2 (2022): 3624–37. http://dx.doi.org/10.3934/mbe.2023169.

Abstract:
Artificial-intelligence-assisted decision-making is appearing increasingly more frequently in our daily lives; however, it has been shown that biased data can cause unfairness in decision-making. In light of this, computational techniques are needed to limit the inequities in algorithmic decision-making. In this letter, we present a framework that joins fair feature selection and fair meta-learning to do few-shot classification, which contains three parts: (1) a pre-processing component acts as an intermediate bridge between the fair genetic algorithm (FairGA) and fair few-shot (FairFS) parts to generate the feature pool; (2) the FairGA module treats the presence or absence of words as gene expression and filters out key features using a fairness clustering genetic algorithm; (3) the FairFS part carries out the task of representation and fairness-constrained classification. Meanwhile, we propose a combinatorial loss function to cope with fairness constraints and hard samples. Experiments show that the proposed method achieves strong competitive performance on three public benchmarks.
38

Steinmann, Sarina, and Ralph Winkler. "Sharing a River with Downstream Externalities." Games 10, no. 2 (May 15, 2019): 23. http://dx.doi.org/10.3390/g10020023.

Abstract:
We consider the problem of efficient emission abatement in a multi-polluter setting, where agents are located along a river in which net emissions accumulate and induce negative externalities on downstream riparians. Assuming a cooperative transferable-utility game, we seek welfare distributions that satisfy all agents' participation constraints and, in addition, a fairness constraint implying that no coalition of agents should be better off than it would be if all non-members of the coalition did not pollute the river at all. We show that the downstream incremental distribution, as introduced by Ambec and Sprumont (2002), is the only welfare distribution satisfying both constraints. In addition, we show that this result holds true for numerous extensions of our model.
39

Van Bulck, David, and Dries Goossens. "Handling fairness issues in time-relaxed tournaments with availability constraints." Computers & Operations Research 115 (March 2020): 104856. http://dx.doi.org/10.1016/j.cor.2019.104856.

40

Schulz, Andreas S., and Nicolás E. Stier-Moses. "Efficiency and fairness of system-optimal routing with user constraints." Networks 48, no. 4 (2006): 223–34. http://dx.doi.org/10.1002/net.20133.

41

Kügelgen, Julius von, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, and Bernhard Schölkopf. "On the Fairness of Causal Algorithmic Recourse." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9584–94. http://dx.doi.org/10.1609/aaai.v36i9.21192.

Abstract:
Algorithmic fairness is typically studied from the perspective of predictions. Instead, here we investigate fairness from the perspective of recourse actions suggested to individuals to remedy an unfavourable classification. We propose two new fairness criteria at the group and individual level which, unlike prior work on equalising the average group-wise distance from the decision boundary, explicitly account for causal relationships between features, thereby capturing downstream effects of recourse actions performed in the physical world. We explore how our criteria relate to others, such as counterfactual fairness, and show that fairness of recourse is complementary to fairness of prediction. We study theoretically and empirically how to enforce fair causal recourse by altering the classifier and perform a case study on the Adult dataset. Finally, we discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions as opposed to constraints on the classifier.
42

Kaleta, Mariusz. "Price of Fairness on Networked Auctions." Journal of Applied Mathematics 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/860747.

Abstract:
We consider an auction design problem under network flow constraints. We focus on pricing mechanisms that provide fair solutions, where fairness is defined in absolute and relative terms. Absolute fairness is equivalent to a "no individual losses" assumption. Relative fairness can be verbalized as follows: no agent can be treated worse than any other in similar circumstances. Ensuring the fairness conditions means that only part of the social welfare available in the auction can be distributed by pure market rules. The rest of the welfare must be distributed without market rules and constitutes the so-called price of fairness. We prove that there exists a minimum of the price of fairness and that it is achieved when the uniform unconstrained market price is used as the base price. The price of fairness takes into account the costs of forced offers and compensations for lost profits. The final payments can differ from locational marginal pricing. This means that the widely applied locational marginal pricing mechanism does not, in general, minimize the price of fairness.
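The price of fairness can be illustrated on any finite outcome space as the welfare gap between the unconstrained optimum and the best fair outcome. A toy computation with made-up welfare and fairness functions (not the paper's networked-auction model):

```python
def price_of_fairness(outcomes, welfare, is_fair):
    """outcomes: candidate allocations; welfare: outcome -> total welfare;
    is_fair: outcome -> bool. Returns the welfare lost to fairness."""
    best = max(welfare(o) for o in outcomes)
    best_fair = max(welfare(o) for o in outcomes if is_fair(o))
    return best - best_fair

# Example: two agents split 10 units; fairness = neither gets fewer than 3.
outcomes = [(x, 10 - x) for x in range(11)]
welfare = lambda o: 2 * o[0] + o[1]        # agent 0 creates more value per unit
fair = lambda o: min(o) >= 3
print(price_of_fairness(outcomes, welfare, fair))  # 20 - 17 = 3
```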
43

Hao, Jingjing, Xinquan Liu, Xiaojing Shen, and Nana Feng. "Bilevel Programming Model of Urban Public Transport Network under Fairness Constraints." Discrete Dynamics in Nature and Society 2019 (March 12, 2019): 1–10. http://dx.doi.org/10.1155/2019/2930502.

Abstract:
In this paper, a bilevel programming model of the public transport network is established, considering factors such as the per capita occupancy area and the travel costs of different groups, in order to alleviate urban transportation inequity and optimize the urban public transport network under fairness constraints. The upper level minimizes the travel cost deprivation coefficient and the road area Gini coefficient as its objective, solving for a public transport network optimization scheme under fairness constraints; the lower level is a stochastic equilibrium traffic assignment model with multiple modes and multiple user classes, describing how different groups choose among traffic modes under the bus optimization scheme given by the upper level. The model is solved with the nondominated sorting genetic algorithm II and validated on a simple network. The results show that (1) the travel cost deprivation coefficient of the three groups declined from 33.42 to 26.51, a decrease of 20.68%, and the road area Gini coefficient declined from 0.248 to 0.030, a decrease of 87.76%, so both the perceived transportation equity of low-income groups and the objective resource allocation improved significantly; (2) before the optimization, the sharing rates of cars, buses, and bicycles were 42%, 47%, and 11%, respectively; after the optimization, they were 7%, 82%, and 11%, respectively, with some high- and middle-income car owners shifting to public transportation. The overall travel time of the optimized public transport network is reduced, enhancing its attraction to various travel groups. The model effectively improves the fairness of the urban public transport system while ensuring residents' travel demand. It provides a theoretical basis and model foundation for the optimization of public transit networks and is a new attempt to improve the fairness of traffic planning schemes.
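The road area Gini coefficient in the upper-level objective is the standard Gini measure of inequality; a minimal sketch of its usual computation:

```python
import numpy as np

def gini(x):
    """Gini coefficient of nonnegative values via the sorted-index formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with i = 1..n over sorted x.
    i = np.arange(1, n + 1)
    return float(np.sum((2 * i - n - 1) * x) / (n * x.sum()))

print(round(gini([1, 1, 1, 1]), 3))  # 0.0: perfectly equal
print(round(gini([0, 0, 0, 4]), 3))  # 0.75: highly unequal
```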
44

Pinzón, Carlos, Catuscia Palamidessi, Pablo Piantanida, and Frank Valencia. "On the Impossibility of Non-trivial Accuracy in Presence of Fairness Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7993–8000. http://dx.doi.org/10.1609/aaai.v36i7.20770.

Abstract:
One of the main concerns about fairness in machine learning (ML) is that, in order to achieve it, one may have to trade off some accuracy. To overcome this issue, Hardt et al. proposed the notion of equality of opportunity (EO), which is compatible with maximal accuracy when the target label is deterministic with respect to the input features. In the probabilistic case, however, the issue is more complicated: It has been shown that under differential privacy constraints, there are data sources for which EO can only be achieved at the total detriment of accuracy, in the sense that a classifier that satisfies EO cannot be more accurate than a trivial (random guessing) classifier. In our paper we strengthen this result by removing the privacy constraint. Namely, we show that for certain data sources, the most accurate classifier that satisfies EO is a trivial classifier. Furthermore, we study the trade-off between accuracy and EO loss (opportunity difference), and provide a sufficient condition on the data source under which EO and non-trivial accuracy are compatible.
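The quantities traded off above are straightforward to compute empirically: accuracy and the opportunity difference (EO loss). The sketch below also shows why a trivial constant classifier vacuously satisfies EO (illustrative data, not the paper's data sources):

```python
import numpy as np

def opportunity_difference(y_hat, y, a):
    """EO loss: |P(yhat=1 | y=1, a=0) - P(yhat=1 | y=1, a=1)|."""
    m0, m1 = (y == 1) & (a == 0), (y == 1) & (a == 1)
    return abs(y_hat[m0].mean() - y_hat[m1].mean())

def accuracy(y_hat, y):
    return (y_hat == y).mean()

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 1000)
a = rng.integers(0, 2, 1000)
trivial = np.ones(1000, dtype=int)   # always predicts the positive class
# The trivial classifier has opportunity difference 0 by construction.
print(accuracy(trivial, y), opportunity_difference(trivial, y, a))
```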
45

Alechina, Natasha, Wiebe van der Hoek, and Brian Logan. "Fair decomposition of group obligations." Journal of Logic and Computation 27, no. 7 (March 24, 2017): 2043–62. http://dx.doi.org/10.1093/logcom/exx012.

Abstract:
We consider the problem of decomposing a group norm into a set of individual obligations for the agents comprising the group, such that if the individual obligations are fulfilled, the group obligation is fulfilled. Such an assignment of tasks to agents is often subject to additional social or organizational norms that specify permissible ways in which tasks can be assigned. An important role of social norms is that they can be used to impose 'fairness constraints', which seek to distribute individual responsibility for discharging the group norm in a 'fair' or 'equitable' way. We propose a simple language for this kind of fairness constraints and analyse the problem of computing a fair decomposition of a group obligation, both for non-repeating and for repeating group obligations.
46

Choi, YooJung, Golnoosh Farnadi, Behrouz Babaki, and Guy Van den Broeck. "Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10077–84. http://dx.doi.org/10.1609/aaai.v34i06.6565.

Abstract:
As machine learning is increasingly used to make real-world decisions, recent research efforts aim to define and ensure fairness in algorithmic decision making. Existing methods often assume a fixed set of observable features to define individuals, but lack a discussion of certain features not being observed at test time. In this paper, we study fairness of naive Bayes classifiers, which allow partial observations. In particular, we introduce the notion of a discrimination pattern, which refers to an individual receiving different classifications depending on whether some sensitive attributes were observed. Then a model is considered fair if it has no such pattern. We propose an algorithm to discover and mine for discrimination patterns in a naive Bayes classifier, and show how to learn maximum-likelihood parameters subject to these fairness constraints. Our approach iteratively discovers and eliminates discrimination patterns until a fair model is learned. An empirical evaluation on three real-world datasets demonstrates that we can remove exponentially many discrimination patterns by only adding a small fraction of them as constraints.
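A discrimination pattern, as defined above, is an individual whose classification shifts once sensitive attributes are observed. A sketch of that check on a tiny hand-made naive Bayes model (all probabilities are made up; the paper's pattern-mining and parameter-learning algorithms are not reproduced):

```python
import numpy as np

# Toy naive Bayes: class prior P(y) and per-feature likelihoods P(f | y).
p_y = np.array([0.6, 0.4])                  # P(y=0), P(y=1)
p_s_given_y = np.array([[0.7, 0.3],         # sensitive attribute S
                        [0.2, 0.8]])        # rows: y; cols: S value
p_x_given_y = np.array([[0.5, 0.5],         # non-sensitive feature X
                        [0.1, 0.9]])

def posterior(y_val, s=None, x=None):
    """P(y=y_val | evidence), marginalizing unobserved features."""
    joint = p_y.copy()
    if s is not None:
        joint = joint * p_s_given_y[:, s]
    if x is not None:
        joint = joint * p_x_given_y[:, x]
    return joint[y_val] / joint.sum()

# Pattern check: does observing S shift the decision beyond a threshold delta?
delta = 0.05
gap = abs(posterior(1, s=1, x=1) - posterior(1, x=1))
print(gap, gap > delta)   # a discrimination pattern exists if gap > delta
```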
47

Dodevska, Zorica, Sandro Radovanović, Andrija Petrović, and Boris Delibašić. "When Fairness Meets Consistency in AHP Pairwise Comparisons." Mathematics 11, no. 3 (January 25, 2023): 604. http://dx.doi.org/10.3390/math11030604.

Abstract:
We propose introducing fairness constraints to one of the most famous multi-criteria decision-making methods, the analytic hierarchy process (AHP). We offer a solution that guarantees consistency while respecting legally binding fairness constraints in AHP pairwise comparison matrices. Through a synthetic experiment, we generate comparison matrices of different sizes and ranges/levels of the initial parameters (i.e., consistency ratio and disparate impact). We optimize disparate impact for various combinations of these initial parameters and matrix sizes while respecting an acceptable level of consistency and minimizing deviations of the pairwise comparison matrices (or their upper triangles) before and after optimization. We use a metaheuristic genetic algorithm to pose this doubly motivated problem and run a discrete optimization procedure (in connection with Saaty's 9-point scale). The results confirm the initial hypothesis (with 99.5% validity across 2800 optimization runs) that achieving a fair ranking while respecting consistency in AHP pairwise comparison matrices (when comparing alternatives with respect to a given criterion) is possible, thus meeting two challenging goals simultaneously. This research contributes to initiatives directed toward unbiased decision-making, whether automated or algorithm-assisted (the case covered by this research).
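Consistency of an AHP pairwise comparison matrix is conventionally judged by Saaty's consistency ratio, computed from the matrix's principal eigenvalue; a standard sketch:

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
      7: 1.32, 8: 1.41, 9: 1.45}   # Saaty's random indices (n >= 3)

def consistency_ratio(A):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)  # principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(consistency_ratio(A) < 0.1)   # CR below 0.1 is conventionally acceptable
```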
48

Persico, Nicola. "Racial Profiling, Fairness, and Effectiveness of Policing." American Economic Review 92, no. 5 (November 1, 2002): 1472–97. http://dx.doi.org/10.1257/000282802762024593.

Abstract:
Citizens of two groups may engage in crime, depending on their legal earning opportunities and on the probability of being audited. Police audit citizens. Police behavior is fair if both groups are policed with the same intensity. We provide exact conditions under which forcing the police to behave more fairly reduces the total amount of crime. These conditions are expressed as constraints on the quantile-quantile plot of the distributions of legal earning opportunities in the two groups. We also investigate the definition of fairness when the cost of being searched reflects the stigma of being singled out by police.
49

Zhao, Cuiru, Youming Li, Bin Chen, Zhao Wang, and Jiongtao Wang. "Resource Allocation for OFDMA-MIMO Relay Systems with Proportional Fairness Constraints." Communications and Network 05, no. 03 (2013): 303–7. http://dx.doi.org/10.4236/cn.2013.53b2056.

50

Collins, Brian J., and Fujun Lai. "Examining Affective Constraints of Fairness on OCB: A 3-way Interaction." Academy of Management Proceedings 2013, no. 1 (January 2013): 13004. http://dx.doi.org/10.5465/ambpp.2013.13004abstract.
