Academic literature on the topic 'Fair Machine Learning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fair Machine Learning.'
Journal articles on the topic "Fair Machine Learning"
Basu Roy Chowdhury, Somnath, and Snigdha Chaturvedi. "Sustaining Fairness via Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6797–805. http://dx.doi.org/10.1609/aaai.v37i6.25833.
Perello, Nick, and Przemyslaw Grabowicz. "Fair Machine Learning Post Affirmative Action." ACM SIGCAS Computers and Society 52, no. 2 (September 2023): 22. http://dx.doi.org/10.1145/3656021.3656029.
Oneto, Luca. "Learning fair models and representations." Intelligenza Artificiale 14, no. 1 (September 17, 2020): 151–78. http://dx.doi.org/10.3233/ia-190034.
Kim, Yun-Myung. "Data and Fair use." Korea Copyright Commission 141 (March 30, 2023): 5–53. http://dx.doi.org/10.30582/kdps.2023.36.1.5.
Zhang, Xueru, Mohammad Mahdi Khalili, and Mingyan Liu. "Long-Term Impacts of Fair Machine Learning." Ergonomics in Design: The Quarterly of Human Factors Applications 28, no. 3 (October 25, 2019): 7–11. http://dx.doi.org/10.1177/1064804619884160.
Zhu, Yunlan. "The Comparative Analysis of Fair Use of Works in Machine Learning." SHS Web of Conferences 178 (2023): 01015. http://dx.doi.org/10.1051/shsconf/202317801015.
Redko, Ievgen, and Charlotte Laclau. "On Fair Cost Sharing Games in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4790–97. http://dx.doi.org/10.1609/aaai.v33i01.33014790.
Lee, Joshua, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory W. Wornell, Leonid Karlinsky, and Rogerio Schmidt Feris. "A Maximal Correlation Framework for Fair Machine Learning." Entropy 24, no. 4 (March 26, 2022): 461. http://dx.doi.org/10.3390/e24040461.
van Berkel, Niels, Jorge Goncalves, Danula Hettiachchi, Senuri Wijenayake, Ryan M. Kelly, and Vassilis Kostakos. "Crowdsourcing Perceptions of Fair Predictors for Machine Learning." Proceedings of the ACM on Human-Computer Interaction 3, CSCW (November 7, 2019): 1–21. http://dx.doi.org/10.1145/3359130.
Dissertations / Theses on the topic "Fair Machine Learning"
Schildt, Alexandra, and Jenny Luo. "Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279659.
AI has quickly grown from a vague concept into a new technology that many companies want to implement or are in the process of implementing. Researchers and organizations agree that AI and advances in machine learning offer enormous potential benefits. At the same time, there is growing concern that the design and implementation of AI systems do not take the ethical risks into account. This has sparked a debate about which principles and values should guide AI in its development and use. There is no consensus on which values and principles should guide AI development, nor on which practical tools should be used to put these principles into practice. Although researchers, organizations, and public authorities have proposed tools and strategies for working with ethical AI within organizations, a holistic perspective that ties together the tools and strategies proposed in the ethical, technical, and organizational discourses is lacking. This report aims to bridge that gap with the following purpose: to explore and present the tools and methods that companies and organizations should have in order to build machine learning applications in a fair and transparent way. The study is qualitative in nature, and data collection was carried out through a literature review and interviews with subject-matter experts from academia and industry. Our results present a number of tools and methods for increasing fairness and transparency in machine learning systems. The results also show that companies should work with a combination of tools and methods, both outside and inside the development process, and at different stages of the development process. Tools outside the development process, such as ethical guidelines, designated roles, workshops, and training, have positive effects on engagement and knowledge while providing valuable opportunities for improvement. Furthermore, the results indicate that it is critical that high-level principles are translated into measurable requirement specifications. We propose a number of pre-model, in-model, and post-model tools that companies and organizations can implement to increase fairness and transparency in their machine learning systems.
Dalgren, Anton, and Ylva Lundegård. "GreenML : A methodology for fair evaluation of machine learning algorithms with respect to resource consumption." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159837.
Gordaliza, Pastor Paula. "Fair learning : une approche basée sur le transport optimale." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30084.
The aim of this thesis is two-fold. On the one hand, optimal transportation methods are studied for statistical inference purposes. On the other hand, the recent problem of fair learning is addressed through the prism of optimal transport theory. The generalization of applications based on machine learning models in everyday life and the professional world has been accompanied by concerns about the ethical issues that may arise from the adoption of these technologies. In the first part of the thesis, we motivate the fairness problem by presenting some comprehensive results from the study of the statistical parity criterion through the analysis of the disparate impact index on the real and well-known Adult Income dataset. Importantly, we show that trying to make machine learning models fair may be a particularly challenging task, especially when the training observations contain bias. Then a review of the mathematics of fairness in machine learning is given in a general setting, with some novel contributions in the analysis of the price for fairness in regression and classification. In the latter, we finish this first part by recasting the links between fairness and predictability in terms of probability metrics. We analyze repair methods based on mapping conditional distributions to the Wasserstein barycenter. Finally, we propose a random repair which yields a tradeoff between minimal information loss and a certain amount of fairness. The second part is devoted to the asymptotic theory of the empirical transportation cost. We provide a central limit theorem for the Monge-Kantorovich distance between two empirical distributions with different sizes n and m, W_p(P_n, Q_m), p ≥ 1, for observations on R. In the case p > 1 our assumptions are sharp in terms of moments and smoothness. We prove results dealing with the choice of centering constants. We provide a consistent estimate of the asymptotic variance, which enables us to build two-sample tests and confidence intervals to certify the similarity between two distributions. These are then used to assess a new criterion of data set fairness in classification. Additionally, we provide a moderate deviation principle for the empirical transportation cost in general dimension. Finally, Wasserstein barycenters and a variance-like criterion using the Wasserstein distance are used in many problems to analyze the homogeneity of collections of distributions and structural relationships between the observations. We propose the estimation of the quantiles of the empirical process of the Wasserstein variation using a bootstrap procedure. Then we use these results for statistical inference on a distribution registration model for general deformation functions. The tests are based on the variance of the distributions with respect to their Wasserstein barycenters, for which we prove central limit theorems, including bootstrap versions.
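For orientation, the two quantities this abstract refers to have standard formulations in the fairness literature; the following is a notational sketch under those common definitions (the symbols Ŷ for the prediction, S for the sensitive attribute, and Π(P_n, Q_m) for the set of couplings are our choices, not necessarily the thesis's own notation):

\[
\mathrm{DI} = \frac{\Pr(\hat{Y} = 1 \mid S = 0)}{\Pr(\hat{Y} = 1 \mid S = 1)},
\qquad
W_p(P_n, Q_m) = \Bigl( \inf_{\pi \in \Pi(P_n, Q_m)} \int \lVert x - y \rVert^p \, d\pi(x, y) \Bigr)^{1/p}.
\]

Statistical parity corresponds to DI = 1 (the "80% rule" flags DI < 0.8 as disparate impact), and W_p is the Monge-Kantorovich (Wasserstein) distance between the empirical distributions P_n and Q_m appearing in the central limit theorem mentioned above.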
Grari, Vincent. "Adversarial mitigation to reduce unwanted biases in machine learning." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS096.
The past few years have seen a dramatic rise of academic and societal interest in fair machine learning. As a result, significant work has been done to include fairness constraints in the training objective of machine learning algorithms. Their primary purpose is to ensure that model predictions do not depend on any sensitive attribute, such as gender or race. Although this notion of independence is incontestable in a general context, it can theoretically be defined in many different ways depending on how one sees fairness. As a result, many recent papers tackle this challenge by using their "own" objectives and notions of fairness. These objectives can be categorized into two families: individual and group fairness. This thesis gives an overview of the methodologies applied in these different families in order to encourage good practices. We then identify and fill gaps by presenting new metrics and new Fair-ML algorithms that are more appropriate for specific contexts.
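To illustrate the adversarial mitigation idea named in the thesis title, here is a minimal, self-contained sketch in PyTorch. It is not the thesis's code: the two-network setup, the synthetic data, and the penalty weight lam are illustrative assumptions, in the spirit of standard adversarial debiasing, where a predictor is trained to solve the task while an adversary tries to recover the sensitive attribute from the predictor's output.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic toy data: X features, y task labels, s a binary sensitive attribute.
n, d = 512, 8
X = torch.randn(n, d)
s = (torch.rand(n, 1) > 0.5).float()
y = ((X[:, :1] + 0.5 * s + 0.1 * torch.randn(n, 1)) > 0).float()

# Predictor solves the task; adversary tries to recover s from the predictor's output.
predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # weight of the adversarial fairness penalty -- an illustrative choice

for step in range(200):
    # (1) Adversary update: predict s from the (detached) predictor output.
    adv_loss = bce(adversary(predictor(X).detach()), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # (2) Predictor update: fit the labels while making the output
    #     uninformative about s (i.e., maximizing the adversary's loss).
    out = predictor(X)
    pred_loss = bce(out, y) - lam * bce(adversary(out), s)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()
```

Raising lam trades task accuracy for independence between the model's output and s; the alternating updates are the usual way such min-max objectives are trained.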
Berisha, Visar. "AI as a Threat to Democracy: Towards an Empirically Grounded Theory." Thesis, Uppsala universitet, Statsvetenskapliga institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-340733.
Sitruk, Jonathan. "Fais Ce Qu'il Te Plaît... Mais Fais Le Comme Je L'aime : Amélioration des performances en crowdfunding par l’utilisation des catégories et des récits." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR0018.
Full textThis dissertation aims to provide entrepreneurs with a better understanding of how to improve their performance when raising funds from investors. Entrepreneurs have difficulty accessing financial resources and capital because they suffer from a liability of newness. This inherent condition is due to their lack of legitimacy in their target market and leads investors to see them as inherently risky. The traditional means of financing new venture ideas have been through personal savings, family and friends, banks, or professional investors. Crowdfunding has emerged as an alternative to these and scholars in the field of management and entrepreneurship have taken great interest in understanding its multiple facets. Most research in crowdfunding has focused on quantifiable elements that investors use in order to determine the quality of an entrepreneur’s venture. The higher the perceived quality, the higher the likelihood investors have of investing in it. However, orthogonal to these elements of quality, and not addressed in current research, are those qualitative elements that allow projects to become clearer in the eyes of potential funders and transmit valuable information about the venture in a coherent fashion regarding the medium they are raising funds from. This dissertation aims to explore strategies entrepreneurs can use to increase their performance in crowdfunding by understanding how investors make sense of projects and how they evaluate them given the nature of the platform used by the entrepreneur. This thesis contributes to the literature on crowdfunding, categorization, and platforms. The thesis first explores how entrepreneurs can use categories and narrative strategies as strategic levers to improve their performance by lowering the level of ambiguity of their offer while aligning their narrative strategies to the expectations of the platform they use. On a second level, the dissertation provides a deeper understanding of the relation that exists between category spanning, ambiguity, and creativity by addressing this relatively unexplored path. Categorization theory is further enriched through a closer examination of the importance of semantic networks and visuals in the sense making process by using a novel empirical approach. Visuals are of particular interest given they were of seminal importance at the foundation of categorization theory, are processed by different cognitive means than words, and are of vital importance in today’s world. Finally, the dissertation explores the relation between platforms and narratives by theorizing that the former are particular types of organizations whose identity is forged by their internal and external stakeholders. Platform identities are vulnerable to change such as exogenous shocks. Entrepreneurs need to learn how to identify these identities and potential changes in order to tailor their narrative strategies in the hopes of increasing their performance
Muriithi, Paul Mutuanyingi. "A case for memory enhancement: ethical, social, legal, and policy implications for enhancing the memory." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/a-case-for-memory-enhancement-ethical-social-legal-and-policy-implications-for-enhancing-the-memory(bf11d09d-6326-49d2-8ef3-a40340471acf).html.
Azami, Sajjad. "Exploring fair machine learning in sequential prediction and supervised learning." Thesis, 2020. http://hdl.handle.net/1828/12098.
Full textGraduate
Allabadi, Swati. "Algorithms for Fair Clustering." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5709.
Full textBooks on the topic "Fair Machine Learning"
Practicing Trustworthy Machine Learning: Consistent, Transparent, and Fair AI Pipelines. O'Reilly Media, Incorporated, 2022.
Vallor, Shannon, and George A. Bekey. Artificial Intelligence and the Ethics of Self-Learning Robots. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190652951.003.0022.
Book chapters on the topic "Fair Machine Learning"
Pérez-Suay, Adrián, Valero Laparra, Gonzalo Mateo-García, Jordi Muñoz-Marí, Luis Gómez-Chova, and Gustau Camps-Valls. "Fair Kernel Learning." In Machine Learning and Knowledge Discovery in Databases, 339–55. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71249-9_21.
Freitas, Alex, and James Brookhouse. "Evolutionary Algorithms for Fair Machine Learning." In Handbook of Evolutionary Machine Learning, 507–31. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3814-8_17.
Van, Minh-Hao, Wei Du, Xintao Wu, and Aidong Lu. "Poisoning Attacks on Fair Machine Learning." In Database Systems for Advanced Applications, 370–86. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00123-9_30.
Wu, Yongkai, Lu Zhang, and Xintao Wu. "Fair Machine Learning Through the Lens of Causality." In Machine Learning for Causal Inference, 103–35. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-35051-1_6.
Abdollahi, Behnoush, and Olfa Nasraoui. "Transparency in Fair Machine Learning: the Case of Explainable Recommender Systems." In Human and Machine Learning, 21–35. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90403-0_2.
Lappas, Theodoros, and Evimaria Terzi. "Toward a Fair Review-Management System." In Machine Learning and Knowledge Discovery in Databases, 293–309. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23783-6_19.
Zhang, Mingwu, Xiao Chen, Gang Shen, and Yong Ding. "A Fair and Efficient Secret Sharing Scheme Based on Cloud Assisting." In Machine Learning for Cyber Security, 348–60. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30619-9_25.
Rančić, Sanja, Sandro Radovanović, and Boris Delibašić. "Investigating Oversampling Techniques for Fair Machine Learning Models." In Lecture Notes in Business Information Processing, 110–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73976-8_9.
Wu, Zhou, and Mingxiang Guan. "Research on Fair Scheduling Algorithm of 5G Intelligent Wireless System Based on Machine Learning." In Machine Learning and Intelligent Communications, 53–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66785-6_6.
Conference papers on the topic "Fair Machine Learning"
Perrier, Elija. "Quantum Fair Machine Learning." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462611.
Kearns, Michael. "Fair Algorithms for Machine Learning." In EC '17: ACM Conference on Economics and Computation. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3033274.3084096.
Dai, Jessica, Sina Fazelpour, and Zachary Lipton. "Fair Machine Learning Under Partial Compliance." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462521.
Liu, Lydia T., Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. "Delayed Impact of Fair Machine Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/862.
Wang, Haoyu, Hanyu Hu, Mingrui Zhuang, and Jiayi Shen. "Integrating Machine Learning into Fair Inference." In The International Conference on New Media Development and Modernized Education. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0011908000003613.
Jorgensen, Mackenzie, Hannah Richert, Elizabeth Black, Natalia Criado, and Jose Such. "Not So Fair: The Impact of Presumably Fair Machine Learning Models." In AIES '23: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3600211.3604699.
Sahlgren, Otto. "What's (Not) Ideal about Fair Machine Learning?" In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3539543.
Hu, Shengyuan, Zhiwei Steven Wu, and Virginia Smith. "Fair Federated Learning via Bounded Group Loss." In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2024. http://dx.doi.org/10.1109/satml59370.2024.00015.
Belitz, Clara, Lan Jiang, and Nigel Bosch. "Automating Procedurally Fair Feature Selection in Machine Learning." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462585.
Shimao, Hajime, Warut Khern-am-nuai, Karthik Kannan, and Maxime C. Cohen. "Strategic Best Response Fairness in Fair Machine Learning." In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3534194.
Reports on the topic "Fair Machine Learning"
Nickerson, Jeffrey, Kalle Lyytinen, and John L. King. Automated Vehicles: A Human/Machine Co-learning Perspective. SAE International, April 2022. http://dx.doi.org/10.4271/epr2022009.
Adegoke, Damilola, Natasha Chilambo, Adeoti Dipeolu, Ibrahim Machina, Ade Obafemi-Olopade, and Dolapo Yusuf. Public discourses and Engagement on Governance of Covid-19 in Ekiti State, Nigeria. African Leadership Center, King's College London, December 2021. http://dx.doi.org/10.47697/lab.202101.