Academic literature on the topic "Fair Machine Learning"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference papers, and other academic sources on the topic "Fair Machine Learning".
Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in whichever citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Fair Machine Learning"
Basu Roy Chowdhury, Somnath, and Snigdha Chaturvedi. "Sustaining Fairness via Incremental Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6797–805. http://dx.doi.org/10.1609/aaai.v37i6.25833.
Perello, Nick, and Przemyslaw Grabowicz. "Fair Machine Learning Post Affirmative Action". ACM SIGCAS Computers and Society 52, no. 2 (September 2023): 22. http://dx.doi.org/10.1145/3656021.3656029.
Oneto, Luca. "Learning fair models and representations". Intelligenza Artificiale 14, no. 1 (September 17, 2020): 151–78. http://dx.doi.org/10.3233/ia-190034.
Kim, Yun-Myung. "Data and Fair use". Korea Copyright Commission 141 (March 30, 2023): 5–53. http://dx.doi.org/10.30582/kdps.2023.36.1.5.
Zhang, Xueru, Mohammad Mahdi Khalili, and Mingyan Liu. "Long-Term Impacts of Fair Machine Learning". Ergonomics in Design: The Quarterly of Human Factors Applications 28, no. 3 (October 25, 2019): 7–11. http://dx.doi.org/10.1177/1064804619884160.
Zhu, Yunlan. "The Comparative Analysis of Fair Use of Works in Machine Learning". SHS Web of Conferences 178 (2023): 01015. http://dx.doi.org/10.1051/shsconf/202317801015.
Redko, Ievgen, and Charlotte Laclau. "On Fair Cost Sharing Games in Machine Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4790–97. http://dx.doi.org/10.1609/aaai.v33i01.33014790.
Lee, Joshua, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory W. Wornell, Leonid Karlinsky, and Rogerio Schmidt Feris. "A Maximal Correlation Framework for Fair Machine Learning". Entropy 24, no. 4 (March 26, 2022): 461. http://dx.doi.org/10.3390/e24040461.
van Berkel, Niels, Jorge Goncalves, Danula Hettiachchi, Senuri Wijenayake, Ryan M. Kelly, and Vassilis Kostakos. "Crowdsourcing Perceptions of Fair Predictors for Machine Learning". Proceedings of the ACM on Human-Computer Interaction 3, CSCW (November 7, 2019): 1–21. http://dx.doi.org/10.1145/3359130.
Theses on the topic "Fair Machine Learning"
Schildt, Alexandra, and Jenny Luo. "Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279659.
AI has rapidly grown from a vague concept into a new technology that many companies want to, or are in the process of, implementing. Researchers and organizations agree that AI and advances in machine learning hold enormous potential benefits. At the same time, there is growing concern that the design and implementation of AI systems do not take the ethical risks into account. This has triggered a debate about which principles and values should guide AI in its development and use. There is no consensus on which values and principles should guide AI development, nor on which practical tools should be used to put these principles into practice. Although researchers, organizations, and authorities have proposed tools and strategies for working with ethical AI within organizations, a holistic perspective that ties together the tools and strategies proposed in the ethical, technical, and organizational discourses is missing. This report aims to bridge that gap with the following purpose: to explore and present the tools and methods that companies and organizations should have in order to build machine learning applications in a fair and transparent way. The study is qualitative in nature, and data collection was carried out through a literature review and interviews with subject-matter experts from research and industry. Our results present a number of tools and methods for increasing fairness and transparency in machine learning systems. They also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as at different stages of that process. Tools outside the development process, such as ethical guidelines, designated roles, workshops, and training, have positive effects on engagement and knowledge while providing valuable opportunities for improvement. In addition, the results indicate that it is critical that high-level principles be translated into measurable requirement specifications. We propose a number of pre-model, in-model, and post-model tools that companies and organizations can implement to increase fairness and transparency in their machine learning systems.
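To give a flavour of what "translating high-level principles into measurable requirement specifications" can look like in practice, here is a minimal post-model sketch in Python. It is illustrative only and not taken from the thesis: the metric choices, the 0.05 thresholds, and the `check_fairness_requirements` helper are assumptions.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (0/1 coded)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_pred, y_true, group):
    """Absolute gap in true-positive rates between two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

def check_fairness_requirements(y_pred, y_true, group,
                                max_dp_diff=0.05, max_eo_diff=0.05):
    """Post-model requirement check: fail loudly if the measurable
    fairness specifications are violated (thresholds are illustrative)."""
    dp = demographic_parity_diff(y_pred, group)
    eo = equal_opportunity_diff(y_pred, y_true, group)
    assert dp <= max_dp_diff, f"Demographic parity difference {dp:.3f} exceeds {max_dp_diff}"
    assert eo <= max_eo_diff, f"Equal opportunity difference {eo:.3f} exceeds {max_eo_diff}"
    return {"demographic_parity_diff": dp, "equal_opportunity_diff": eo}
```

A check of this kind could run as an automated gate in a test suite, e.g. `check_fairness_requirements(model_predictions, y_test, sensitive_test)` (hypothetical variable names), so that a high-level fairness principle becomes a concrete, verifiable requirement.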
Dalgren, Anton, and Ylva Lundegård. "GreenML : A methodology for fair evaluation of machine learning algorithms with respect to resource consumption". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159837.
Gordaliza, Pastor Paula. "Fair learning : une approche basée sur le transport optimale". Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30084.
The aim of this thesis is two-fold. On the one hand, optimal transportation methods are studied for statistical inference purposes. On the other hand, the recent problem of fair learning is addressed through the prism of optimal transport theory. The generalization of applications based on machine learning models in everyday life and the professional world has been accompanied by concerns about the ethical issues that may arise from the adoption of these technologies. In the first part of the thesis, we motivate the fairness problem by presenting some comprehensive results from the study of the statistical parity criterion through the analysis of the disparate impact index on the real and well-known Adult Income dataset. Importantly, we show that trying to make fair machine learning models may be a particularly challenging task, especially when the training observations contain bias. Then a review of mathematics for fairness in machine learning is given in a general setting, with some novel contributions in the analysis of the price for fairness in regression and classification. In the latter, we finish this first part by recasting the links between fairness and predictability in terms of probability metrics. We analyze repair methods based on mapping conditional distributions to the Wasserstein barycenter. Finally, we propose a random repair which yields a tradeoff between minimal information loss and a certain amount of fairness. The second part is devoted to the asymptotic theory of the empirical transportation cost. We provide a central limit theorem for the Monge-Kantorovich distance between two empirical distributions with different sizes n and m, Wp(Pn, Qm) with p ≥ 1, for observations on the real line. In the case p > 1 our assumptions are sharp in terms of moments and smoothness. We prove results dealing with the choice of centering constants. We provide a consistent estimate of the asymptotic variance which enables the construction of two-sample tests and confidence intervals to certify the similarity between two distributions. These are then used to assess a new criterion of dataset fairness in classification. Additionally, we provide a moderate deviation principle for the empirical transportation cost in general dimension. Finally, Wasserstein barycenters and a variance-like criterion using the Wasserstein distance are used in many problems to analyze the homogeneity of collections of distributions and structural relationships between the observations. We propose the estimation of the quantiles of the empirical process of the Wasserstein variation using a bootstrap procedure. Then we use these results for statistical inference on a distribution registration model for general deformation functions. The tests are based on the variance of the distributions with respect to their Wasserstein barycenters, for which we prove central limit theorems, including bootstrap versions.
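As a rough illustration of the repair idea described in this abstract, mapping group-conditional score distributions toward their Wasserstein barycenter, here is a minimal one-dimensional sketch in Python. It is not the thesis's implementation: the quantile-grid size, the two-group setting, and the `amount` parameter used as a crude stand-in for the random-repair trade-off are all assumptions.

```python
import numpy as np

def wasserstein_repair_1d(scores, group, amount=1.0, n_quantiles=101, seed=0):
    """Map each group's score distribution toward the 1-D Wasserstein
    barycenter, obtained here by averaging the group quantile functions.

    amount=1.0 performs total repair; amount < 1.0 repairs each point
    with probability `amount`, trading residual unfairness for less
    information loss (a simplified stand-in for random repair).
    """
    rng = np.random.default_rng(seed)
    qs = np.linspace(0.0, 1.0, n_quantiles)
    groups = np.unique(group)
    group_quantiles = {g: np.quantile(scores[group == g], qs) for g in groups}
    barycenter_q = np.mean([group_quantiles[g] for g in groups], axis=0)

    repaired = scores.astype(float).copy()
    for g in groups:
        idx = np.flatnonzero(group == g)
        # Empirical CDF rank of each score within its own group
        ranks = np.searchsorted(np.sort(scores[idx]), scores[idx], side="right") / idx.size
        mapped = np.interp(ranks, qs, barycenter_q)
        repair_mask = rng.random(idx.size) < amount
        repaired[idx[repair_mask]] = mapped[repair_mask]
    return repaired
```

With `amount=1.0` the repaired scores have (up to the quantile grid) the same distribution in every group, so a classifier built on them cannot exhibit disparate impact along that attribute; smaller values keep more of the original signal at the cost of residual group information.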
Grari, Vincent. "Adversarial mitigation to reduce unwanted biases in machine learning". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS096.
The past few years have seen a dramatic rise of academic and societal interest in fair machine learning. As a result, significant work has been done to include fairness constraints in the training objectives of machine learning algorithms. The primary purpose is to ensure that model predictions do not depend on any sensitive attribute, such as gender or race. Although this notion of independence is incontestable in a general context, it can theoretically be defined in many different ways depending on how one sees fairness. As a result, many recent papers tackle this challenge by using their "own" objectives and notions of fairness. These objectives can be categorized into two families: individual and group fairness. This thesis gives an overview of the methodologies applied in these different families in order to encourage good practices. Then, we identify and fill gaps by presenting new metrics and new Fair-ML algorithms that are more appropriate for specific contexts.
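To make the adversarial-mitigation idea concrete, here is a minimal sketch of one common formulation, not necessarily the one developed in the thesis: an adversary tries to recover the sensitive attribute from the predictor's output, and the predictor is penalized by the adversary's success. The network sizes, the single fairness weight `lambda_fair`, and the alternating update scheme are assumptions.

```python
import torch
import torch.nn as nn

# x: [N, 10] float features; y and s: [N, 1] floats in {0, 1}
predictor = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lambda_fair = 1.0  # strength of the fairness penalty (assumed hyperparameter)

def training_step(x, y, s):
    # 1) Update the adversary: predict the sensitive attribute s
    #    from the (detached) predictor output.
    with torch.no_grad():
        y_hat = predictor(x)
    adv_loss = bce(adversary(y_hat), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Update the predictor: fit the label y while fooling the adversary,
    #    i.e. maximise the adversary's loss on the sensitive attribute.
    y_hat = predictor(x)
    pred_loss = bce(y_hat, y) - lambda_fair * bce(adversary(y_hat), s)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()
    return pred_loss.item(), adv_loss.item()
```

Increasing `lambda_fair` pushes the predictor's outputs toward carrying no usable information about the sensitive attribute, at some cost in predictive accuracy; this is the basic tension that work in this family explores.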
Berisha, Visar. "AI as a Threat to Democracy : Towards an Empirically Grounded Theory". Thesis, Uppsala universitet, Statsvetenskapliga institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-340733.
Sitruk, Jonathan. "Fais Ce Qu'il Te Plaît... Mais Fais Le Comme Je L'aime : Amélioration des performances en crowdfunding par l’utilisation des catégories et des récits". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR0018.
This dissertation aims to provide entrepreneurs with a better understanding of how to improve their performance when raising funds from investors. Entrepreneurs have difficulty accessing financial resources and capital because they suffer from a liability of newness. This inherent condition is due to their lack of legitimacy in their target market and leads investors to see them as inherently risky. The traditional means of financing new venture ideas have been personal savings, family and friends, banks, or professional investors. Crowdfunding has emerged as an alternative to these, and scholars in the fields of management and entrepreneurship have taken great interest in understanding its multiple facets. Most research on crowdfunding has focused on quantifiable elements that investors use in order to determine the quality of an entrepreneur's venture: the higher the perceived quality, the higher the likelihood that investors will invest. However, orthogonal to these elements of quality, and not addressed in current research, are the qualitative elements that make projects clearer in the eyes of potential funders and transmit valuable information about the venture in a way that fits the medium through which funds are being raised. This dissertation explores strategies entrepreneurs can use to increase their crowdfunding performance by understanding how investors make sense of projects and how they evaluate them given the nature of the platform used by the entrepreneur. The thesis contributes to the literature on crowdfunding, categorization, and platforms. It first explores how entrepreneurs can use categories and narrative strategies as strategic levers to improve their performance by lowering the ambiguity of their offer while aligning their narrative strategies with the expectations of the platform they use. On a second level, the dissertation provides a deeper understanding of the relation between category spanning, ambiguity, and creativity by addressing this relatively unexplored path. Categorization theory is further enriched through a closer examination of the importance of semantic networks and visuals in the sensemaking process, using a novel empirical approach. Visuals are of particular interest given that they were of seminal importance at the foundation of categorization theory, are processed by different cognitive means than words, and are of vital importance in today's world. Finally, the dissertation explores the relation between platforms and narratives by theorizing that the former are particular types of organizations whose identity is forged by their internal and external stakeholders. Platform identities are vulnerable to changes such as exogenous shocks. Entrepreneurs need to learn how to identify these identities and potential changes in order to tailor their narrative strategies in the hope of increasing their performance.
Muriithi, Paul Mutuanyingi. "A case for memory enhancement : ethical, social, legal, and policy implications for enhancing the memory". Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/a-case-for-memory-enhancement-ethical-social-legal-and-policy-implications-for-enhancing-the-memory(bf11d09d-6326-49d2-8ef3-a40340471acf).html.
Azami, Sajjad. "Exploring fair machine learning in sequential prediction and supervised learning". Thesis, 2020. http://hdl.handle.net/1828/12098.
Texto completoGraduate
Allabadi, Swati. "Algorithms for Fair Clustering". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5709.
Books on the topic "Fair Machine Learning"
Practicing Trustworthy Machine Learning: Consistent, Transparent, and Fair AI Pipelines. O'Reilly Media, Incorporated, 2022.
Vallor, Shannon, and George A. Bekey. Artificial Intelligence and the Ethics of Self-Learning Robots. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190652951.003.0022.
Book chapters on the topic "Fair Machine Learning"
Pérez-Suay, Adrián, Valero Laparra, Gonzalo Mateo-García, Jordi Muñoz-Marí, Luis Gómez-Chova, and Gustau Camps-Valls. "Fair Kernel Learning". In Machine Learning and Knowledge Discovery in Databases, 339–55. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71249-9_21.
Freitas, Alex, and James Brookhouse. "Evolutionary Algorithms for Fair Machine Learning". In Handbook of Evolutionary Machine Learning, 507–31. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3814-8_17.
Van, Minh-Hao, Wei Du, Xintao Wu, and Aidong Lu. "Poisoning Attacks on Fair Machine Learning". In Database Systems for Advanced Applications, 370–86. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00123-9_30.
Wu, Yongkai, Lu Zhang, and Xintao Wu. "Fair Machine Learning Through the Lens of Causality". In Machine Learning for Causal Inference, 103–35. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-35051-1_6.
Abdollahi, Behnoush, and Olfa Nasraoui. "Transparency in Fair Machine Learning: the Case of Explainable Recommender Systems". In Human and Machine Learning, 21–35. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90403-0_2.
Lappas, Theodoros, and Evimaria Terzi. "Toward a Fair Review-Management System". In Machine Learning and Knowledge Discovery in Databases, 293–309. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23783-6_19.
Zhang, Mingwu, Xiao Chen, Gang Shen, and Yong Ding. "A Fair and Efficient Secret Sharing Scheme Based on Cloud Assisting". In Machine Learning for Cyber Security, 348–60. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30619-9_25.
Rančić, Sanja, Sandro Radovanović, and Boris Delibašić. "Investigating Oversampling Techniques for Fair Machine Learning Models". In Lecture Notes in Business Information Processing, 110–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73976-8_9.
Wu, Zhou, and Mingxiang Guan. "Research on Fair Scheduling Algorithm of 5G Intelligent Wireless System Based on Machine Learning". In Machine Learning and Intelligent Communications, 53–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66785-6_6.
Conference papers on the topic "Fair Machine Learning"
Perrier, Elija. "Quantum Fair Machine Learning". In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462611.
Kearns, Michael. "Fair Algorithms for Machine Learning". In EC '17: ACM Conference on Economics and Computation. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3033274.3084096.
Dai, Jessica, Sina Fazelpour, and Zachary Lipton. "Fair Machine Learning Under Partial Compliance". In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462521.
Liu, Lydia T., Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. "Delayed Impact of Fair Machine Learning". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/862.
Wang, Haoyu, Hanyu Hu, Mingrui Zhuang, and Jiayi Shen. "Integrating Machine Learning into Fair Inference". In The International Conference on New Media Development and Modernized Education. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0011908000003613.
Jorgensen, Mackenzie, Hannah Richert, Elizabeth Black, Natalia Criado, and Jose Such. "Not So Fair: The Impact of Presumably Fair Machine Learning Models". In AIES '23: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3600211.3604699.
Sahlgren, Otto. "What's (Not) Ideal about Fair Machine Learning?" In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3539543.
Hu, Shengyuan, Zhiwei Steven Wu, and Virginia Smith. "Fair Federated Learning via Bounded Group Loss". In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2024. http://dx.doi.org/10.1109/satml59370.2024.00015.
Belitz, Clara, Lan Jiang, and Nigel Bosch. "Automating Procedurally Fair Feature Selection in Machine Learning". In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462585.
Shimao, Hajime, Warut Khern-am-nuai, Karthik Kannan, and Maxime C. Cohen. "Strategic Best Response Fairness in Fair Machine Learning". In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3534194.
Reports on the topic "Fair Machine Learning"
Nickerson, Jeffrey, Kalle Lyytinen, and John L. King. Automated Vehicles: A Human/Machine Co-learning Perspective. SAE International, April 2022. http://dx.doi.org/10.4271/epr2022009.
Adegoke, Damilola, Natasha Chilambo, Adeoti Dipeolu, Ibrahim Machina, Ade Obafemi-Olopade, and Dolapo Yusuf. Public discourses and Engagement on Governance of Covid-19 in Ekiti State, Nigeria. African Leadership Center, King's College London, December 2021. http://dx.doi.org/10.47697/lab.202101.