Follow this link to see other types of publications on the topic: Fair Machine Learning.

Journal articles on the topic "Fair Machine Learning"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 journal articles for your research on the topic "Fair Machine Learning."

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Basu Roy Chowdhury, Somnath, and Snigdha Chaturvedi. "Sustaining Fairness via Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 6797–805. http://dx.doi.org/10.1609/aaai.v37i6.25833.

Abstract
Machine learning systems are often deployed for making critical decisions like credit lending, hiring, etc. While making decisions, such systems often encode the user's demographic information (like gender, age) in their intermediate representations. This can lead to decisions that are biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair with changes in the task or demographic distribution. To ensure fairness in the wild, it is important for a system to adapt to such change
2

Perello, Nick, and Przemyslaw Grabowicz. "Fair Machine Learning Post Affirmative Action." ACM SIGCAS Computers and Society 52, no. 2 (2023): 22. http://dx.doi.org/10.1145/3656021.3656029.

Abstract
The U.S. Supreme Court, in a 6-3 decision on June 29, effectively ended the use of race in college admissions [1]. Indeed, national polls found that a plurality of Americans - 42%, according to a poll conducted by the University of Massachusetts [2] - agree that the policy should be discontinued, while 33% support its continued use in admissions decisions. As scholars of fair machine learning, we ponder how the Supreme Court decision shifts points of focus in the field. The most popular fair machine learning methods aim to achieve some form of "impact parity" by diminishing or removing the cor
3

Rance, Joseph, and Filip Svoboda. "Can Private Machine Learning Be Fair?" Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 19 (2025): 20121–29. https://doi.org/10.1609/aaai.v39i19.34216.

Abstract
We show that current SOTA methods for privately and fairly training models are unreliable in many practical scenarios. Specifically, we (1) introduce a new type of adversarial attack that seeks to introduce unfairness into private model training, and (2) demonstrate that the use of methods for training on private data that are robust to adversarial attacks often leads to unfair models, regardless of the use of fairness-enhancing training methods. This leads to a dilemma when attempting to train fair models on private data: either (A) we use a robust training method which may introduce unfairne
4

Oneto, Luca. "Learning fair models and representations." Intelligenza Artificiale 14, no. 1 (2020): 151–78. http://dx.doi.org/10.3233/ia-190034.

Abstract
Machine learning based systems and products are reaching society at large in many aspects of everyday life, including financial lending, online advertising, pretrial and immigration detention, child maltreatment screening, health care, social services, and education. This phenomenon has been accompanied by an increase in concern about the ethical issues that may rise from the adoption of these technologies. In response to this concern, a new area of machine learning has recently emerged that studies how to address disparate treatment caused by algorithmic errors and bias in the data. The centr
5

Kim, Yun-Myung. "Data and Fair use." Korea Copyright Commission 141 (March 30, 2023): 5–53. http://dx.doi.org/10.30582/kdps.2023.36.1.5.

Abstract
Data collection and use are the beginning and end of machine learning. Looking at ChatGPT, data is making machines comparable to human capabilities. Commercial purposes are not naturally rejected in the judgment of fair use of the process of producing or securing data for system learning. The UK, Germany, and the EU are also introducing copyright restrictions for data mining for non-profit purposes such as research studies, and Japan is more active. Japan’s active legislation is the reason why there are no comprehensive fair use regulations like Korea and the United States, but it shows its wi
7

Zhang, Xueru, Mohammad Mahdi Khalili, and Mingyan Liu. "Long-Term Impacts of Fair Machine Learning." Ergonomics in Design: The Quarterly of Human Factors Applications 28, no. 3 (2019): 7–11. http://dx.doi.org/10.1177/1064804619884160.

Abstract
Machine learning models developed from real-world data can inherit potential, preexisting bias in the dataset. When these models are used to inform decisions involving human beings, fairness concerns inevitably arise. Imposing certain fairness constraints in the training of models can be effective only if appropriate criteria are applied. However, a fairness criterion can be defined/assessed only when the interaction between the decisions and the underlying population is well understood. We introduce two feedback models describing how people react when receiving machine-aided decisions and ill
8

Zhu, Yunlan. "The Comparative Analysis of Fair Use of Works in Machine Learning." SHS Web of Conferences 178 (2023): 01015. http://dx.doi.org/10.1051/shsconf/202317801015.

Abstract
Before generative AI outputs the content, it copies a large amount of text content. This process is machine learning. For the development of artificial intelligence technology and cultural prosperity, many countries have included machine learning within the scope of fair use. However, China’s copyright law currently does not legislate the fair use of machine learning works. This paper will construct a Chinese model of fair use of machine learning works through comparative analysis of the legislation of other countries. This is a fair use model that balances the flexibility of the United States
9

Jeong, Jin Keun. "Will the U.S. Court Judge TDM for Artificial Intelligence Machine Learning as Fair Use?" Korea Copyright Commission 144 (December 31, 2023): 215–50. http://dx.doi.org/10.30582/kdps.2023.36.4.215.

Abstract
A representative debate is whether TDM (Text and Data Mining) in the machine learning process, which occurs when AI uses other people’s copyrighted works by unauthorized means such as copying, is in accordance with the fair use principle or not. The issue is whether one can be exempted from copyright infringement.
 In this regard, Korean scholar’s attitude starts from the optimistic perspective that U.S. courts will view AI TDM or AI machine learning as fair use based on the fair use principle.
 Nevertheless, there is no direct basis for the claim that US courts will exempt AI TDM or
10

Redko, Ievgen, and Charlotte Laclau. "On Fair Cost Sharing Games in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4790–97. http://dx.doi.org/10.1609/aaai.v33i01.33014790.

Abstract
Machine learning and game theory are known to exhibit a very strong link as they mutually provide each other with solutions and models allowing to study and analyze the optimal behaviour of a set of agents. In this paper, we take a closer look at a special class of games, known as fair cost sharing games, from a machine learning perspective. We show that this particular kind of games, where agents can choose between selfish behaviour and cooperation with shared costs, has a natural link to several machine learning scenarios including collaborative learning with homogeneous and heterogeneous so
11

Lee, Joshua, Yuheng Bu, Prasanna Sattigeri, et al. "A Maximal Correlation Framework for Fair Machine Learning." Entropy 24, no. 4 (2022): 461. http://dx.doi.org/10.3390/e24040461.

Abstract
As machine learning algorithms grow in popularity and diversify to many industries, ethical and legal concerns regarding their fairness have become increasingly relevant. We explore the problem of algorithmic fairness, taking an information–theoretic view. The maximal correlation framework is introduced for expressing fairness constraints and is shown to be capable of being used to derive regularizers that enforce independence and separation-based fairness criteria, which admit optimization algorithms for both discrete and continuous variables that are more computationally efficient than exist
12

van Berkel, Niels, Jorge Goncalves, Danula Hettiachchi, Senuri Wijenayake, Ryan M. Kelly, and Vassilis Kostakos. "Crowdsourcing Perceptions of Fair Predictors for Machine Learning." Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019): 1–21. http://dx.doi.org/10.1145/3359130.

13

Zhao, Han. "Fair and optimal prediction via post‐processing." AI Magazine 45, no. 3 (2024): 411–18. http://dx.doi.org/10.1002/aaai.12191.

Abstract
With the development of machine learning algorithms and the increasing computational resources available, artificial intelligence has achieved great success in many application domains. However, the success of machine learning has also raised concerns about the fairness of the learned models. For instance, the learned models can perpetuate and even exacerbate the potential bias and discrimination in the training data. This issue has become a major obstacle to the deployment of machine learning systems in high-stakes domains, for example, criminal judgment, medical testing, online adver
14

Fahimi, Miriam, Mayra Russo, Kristen M. Scott, Maria-Esther Vidal, Bettina Berendt, and Katharina Kinder-Kurlanda. "Articulation Work and Tinkering for Fairness in Machine Learning." Proceedings of the ACM on Human-Computer Interaction 8, CSCW2 (2024): 1–23. http://dx.doi.org/10.1145/3686973.

Abstract
The field of fair AI aims to counter biased algorithms through computational modelling. However, it faces increasing criticism for perpetuating the use of overly technical and reductionist methods. As a result, novel approaches appear in the field to address more socially-oriented and interdisciplinary (SOI) perspectives on fair AI. In this paper, we take this dynamic as the starting point to study the tension between computer science (CS) and SOI research. By drawing on STS and CSCW theory, we position fair AI research as a matter of 'organizational alignment': what makes research 'doable' is
15

Edwards, Chris. "AI Struggles with Fair Use." New Electronics 56, no. 9 (2023): 40–41. http://dx.doi.org/10.12968/s0047-9624(24)60063-5.

16

Jang, Taeuk, Feng Zheng, and Xiaoqian Wang. "Constructing a Fair Classifier with Generated Fair Data." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 7908–16. http://dx.doi.org/10.1609/aaai.v35i9.16965.

Abstract
Fairness in machine learning is getting rising attention as it is directly related to real-world applications and social problems. Recent methods have been explored to alleviate the discrimination between certain demographic groups that are characterized by sensitive attributes (such as race, age, or gender). Some studies have found that the data itself is biased, so training directly on the data causes unfair decision making. Models directly trained on raw data can replicate or even exacerbate bias in the prediction between demographic groups. This leads to vastly different prediction perform
17

Eponeshnikov, Alexander, Natalia Bakhtadze, Gulnara Smirnova, Rustem Sabitov, and Shamil Sabitov. "Differentially Private and Fair Machine Learning: A Benchmark Study." IFAC-PapersOnLine 58, no. 19 (2024): 277–82. http://dx.doi.org/10.1016/j.ifacol.2024.09.192.

18

Blumenröhr, Nicolas, Thomas Jejkal, Andreas Pfeil, and Rainer Stotzka. "FAIR Digital Object Application Case for Composing Machine Learning Training Data." Research Ideas and Outcomes 8 (October 12, 2022): e94113. https://doi.org/10.3897/rio.8.e94113.

Abstract
The application case for implementing and using the FAIR Digital Object (FAIR DO) concept (Schultes and Wittenburg 2019) aims to simplify the access to label information for composing Machine Learning (ML) (Awad and Khanna 2015) training data. Data sets curated by different domain experts usually have non-identical label terms. This prevents images with similar labels from being easily assigned to the same category. Therefore, using them collectively for application as training data in ML comes with the cost of laborious relabeling. The data needs to be machine-interpretable and -actionable to
19

Chandra, Rushil, Karun Sanjaya, AR Aravind, Ahmed Radie Abbas, Ruzieva Gulrukh, and T. S. Senthil kumar. "Algorithmic Fairness and Bias in Machine Learning Systems." E3S Web of Conferences 399 (2023): 04036. http://dx.doi.org/10.1051/e3sconf/202339904036.

Abstract
In recent years, research into and concern over algorithmic fairness and bias in machine learning systems has grown significantly. It is vital to make sure that these systems are fair, impartial, and do not support discrimination or social injustices since machine learning algorithms are becoming more and more prevalent in decision-making processes across a variety of disciplines. This abstract gives a general explanation of the idea of algorithmic fairness, the difficulties posed by bias in machine learning systems, and different solutions to these problems. Algorithmic bias and fairness in m
20

Brotcke, Liming. "Time to Assess Bias in Machine Learning Models for Credit Decisions." Journal of Risk and Financial Management 15, no. 4 (2022): 165. http://dx.doi.org/10.3390/jrfm15040165.

Abstract
Focus on fair lending has become more intensified recently as bank and non-bank lenders apply artificial-intelligence (AI)-based credit determination approaches. The data analytics technique behind AI and machine learning (ML) has proven to be powerful in many application areas. However, ML can be less transparent and explainable than traditional regression models, which may raise unique questions about its compliance with fair lending laws. ML may also reduce potential for discrimination, by reducing discretionary and judgmental decisions. As financial institutions continue to explore ML appl
21

Tarasiuk, Anton. "Legal basis for using copyright objects in machine learning." Theory and Practice of Intellectual Property, no. 2 (June 4, 2024): 73–83. https://doi.org/10.33731/22024.305506.

Abstract
In this article, I characterize the legal basis for using copyright objects in machine learning. In this regard, the definition of machine learning from the perspective of copyright law is provided. In the work, legal relations that occur in the context of machine learning using copyright objects are studied. The two key legal grounds for the usage of copyright objects to train models/neural networks are defined: license agreement, fair dealing, and fair use doctrine. Specific terms of the license agreement on copyright objects for machine learning are defined. In the analysis of the doctrine of fair
22

Blumenröhr, Nicolas, and Rossella Aversa. "From implementation to application: FAIR digital objects for training data composition." Research Ideas and Outcomes 9 (August 22, 2023): e108706. https://doi.org/10.3897/rio.9.e108706.

Abstract
Composing training data for Machine Learning applications can be laborious and time-consuming when done manually. The use of FAIR Digital Objects, in which the data is machine-interpretable and -actionable, makes it possible to automate and simplify this task. As an application case, we represented labeled Scanning Electron Microscopy images from different sources as FAIR Digital Objects to compose a training data set. In addition to some existing services included in our implementation (the Typed-PID Maker, the Handle Registry, and the ePIC Data Type Registry), we developed a Python client to
23

Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making." Mathematics 11, no. 8 (2023): 1771. http://dx.doi.org/10.3390/math11081771.

Abstract
Most research on fairness in Machine Learning assumes the relationship between fairness and accuracy to be a trade-off, with an increase in fairness leading to an unavoidable loss of accuracy. In this study, several approaches for fair Machine Learning are studied to experimentally analyze the relationship between accuracy and group fairness. The results indicated that group fairness and accuracy may even benefit each other, which emphasizes the importance of selecting appropriate measures for performance evaluation. This work provides a foundation for further studies on the adequate objective
24

Sun, Shao Chao, and Dao Huang. "A Novel Robust Smooth Support Vector Machine." Applied Mechanics and Materials 148-149 (December 2011): 1438–41. http://dx.doi.org/10.4028/www.scientific.net/amm.148-149.1438.

Abstract
In this paper, we propose a new type of ε-insensitive loss function, called the ε-insensitive Fair estimator. With this loss function we can obtain better robustness and sparseness. To enhance the learning speed, we apply the smoothing techniques that have been used for solving the support vector machine for classification, to replace the ε-insensitive Fair estimator by an accurate smooth approximation. This will allow us to solve ε-SFSVR as an unconstrained minimization problem directly. Based on the simulation results, the proposed approach has fast learning speed and better generalization pe
25

Tian, Xiao, Rachael Hwee Ling Sim, Jue Fan, and Bryan Kian Hsiang Low. "DeRDaVa: Deletion-Robust Data Valuation for Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 15373–81. http://dx.doi.org/10.1609/aaai.v38i14.29462.

Abstract
Data valuation is concerned with determining a fair valuation of data from data sources to compensate them or to identify training examples that are the most or least useful for predictions. With the rising interest in personal data ownership and data protection regulations, model owners will likely have to fulfil more data deletion requests. This raises issues that have not been addressed by existing works: Are the data valuation scores still fair with deletions? Must the scores be expensively recomputed? The answer is no. To avoid recomputations, we propose using our data valuation framework
26

Davis, Jenny L., Apryl Williams, and Michael W. Yang. "Algorithmic reparation." Big Data & Society 8, no. 2 (2021): 205395172110448. http://dx.doi.org/10.1177/20539517211044808.

Abstract
Machine learning algorithms pervade contemporary society. They are integral to social institutions, inform processes of governance, and animate the mundane technologies of daily life. Consistently, the outcomes of machine learning reflect, reproduce, and amplify structural inequalities. The field of fair machine learning has emerged in response, developing mathematical techniques that increase fairness based on anti-classification, classification parity, and calibration standards. In practice, these computational correctives invariably fall short, operating from an algorithmic idealism that do
28

Dhabliya, Dharmesh, Sukhvinder Singh Dari, Anishkumar Dhablia, N. Akhila, Renu Kachhoria, and Vinit Khetani. "Addressing Bias in Machine Learning Algorithms: Promoting Fairness and Ethical Design." E3S Web of Conferences 491 (2024): 02040. http://dx.doi.org/10.1051/e3sconf/202449102040.

Abstract
Machine learning algorithms have quickly risen to the top of several fields' decision-making processes in recent years. However, it is simple for these algorithms to confirm already present prejudices in data, leading to biased and unfair choices. In this work, we examine bias in machine learning in great detail and offer strategies for promoting fair and moral algorithm design. The paper then emphasises the value of fairness-aware machine learning algorithms, which aim to lessen bias by including fairness constraints into the training and evaluation procedures. Reweighting, adversarial traini
29

Hardy, Ian. "[Re] An Implementation of Fair Robust Learning." ReScience C 8, no. 2 (2022): #16. https://doi.org/10.5281/zenodo.6574657.

30

Firestone, Chaz. "Performance vs. competence in human–machine comparisons." Proceedings of the National Academy of Sciences 117, no. 43 (2020): 26562–71. http://dx.doi.org/10.1073/pnas.1905334117.

Abstract
Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. when such failures are only superficial or
31

Raftopoulos, George, Gregory Davrazos, and Sotiris Kotsiantis. "Fair and Transparent Student Admission Prediction Using Machine Learning Models." Algorithms 17, no. 12 (2024): 572. https://doi.org/10.3390/a17120572.

Abstract
Student admission prediction is a crucial aspect of academic planning, offering insights into enrollment trends, resource allocation, and institutional growth. However, traditional methods often lack the ability to address fairness and transparency, leading to potential biases and inequities in the decision-making process. This paper explores the development and evaluation of machine learning models designed to predict student admissions while prioritizing fairness and interpretability. We employ a diverse set of algorithms, including Logistic Regression, Decision Trees, and ensemble methods,
32

Vinay Kumar, Kotte, Santosh N.C, and Narasimha Reddy Soor. "Data Analysis and Fair Price Prediction Using Machine Learning Algorithms." Journal of Computer Allied Intelligence 2, no. 1 (2024): 26–45. http://dx.doi.org/10.69996/jcai.2024004.

Abstract
Data Analysis is the main important subject in recent times as the ongoing demand for it is growing accordingly with the huge amounts of data we get from many sources, All the huge data we get should be properly analysed so that the information will be used accordingly to its needs. In this paper, the main objective is to analyse the data that is taken from uber related data from a csv file which is already available in the outside world. In addition to the analysing of data we are also including two features to our project, from which the first one is fare price detection and the second one g
33

Plečko, Drago, and Elias Bareinboim. "Causal Fairness Analysis: A Causal Toolkit for Fair Machine Learning." Foundations and Trends® in Machine Learning 17, no. 3 (2024): 304–589. http://dx.doi.org/10.1561/2200000106.

34

Hoche, Marine, Olga Mineeva, Gunnar Rätsch, Effy Vayena, and Alessandro Blasimme. "What makes clinical machine learning fair? A practical ethics framework." PLOS Digital Health 4, no. 3 (2025): e0000728. https://doi.org/10.1371/journal.pdig.0000728.

Abstract
Machine learning (ML) can offer a tremendous contribution to medicine by streamlining decision-making, reducing mistakes, improving clinical accuracy and ensuring better patient outcomes. The prospects of a widespread and rapid integration of machine learning in clinical workflow have attracted considerable attention including due to complex ethical implications–algorithmic bias being among the most frequently discussed ML models. Here we introduce and discuss a practical ethics framework inductively-generated via normative analysis of the practical challenges in developing an actual clinical
35

Do, Hyungrok, Jesse Persily, Judy Zhong, Yassamin Neshatvar, Katie Murray, and Madhur Nayan. "MITIGATING DISPARITIES IN PROSTATE CANCER THROUGH FAIR MACHINE LEARNING MODELS." Urologic Oncology: Seminars and Original Investigations 43, no. 3 (2025): 80. https://doi.org/10.1016/j.urolonc.2024.12.201.

36

Taylor, Greg. "Risks Special Issue on “Granular Models and Machine Learning Models”." Risks 8, no. 1 (2019): 1. http://dx.doi.org/10.3390/risks8010001.

37

Guo, Peng, Yanqing Yang, Wei Guo, and Yanping Shen. "A Fair Contribution Measurement Method for Federated Learning." Sensors 24, no. 15 (2024): 4967. http://dx.doi.org/10.3390/s24154967.

Abstract
Federated learning is an effective approach for preserving data privacy and security, enabling machine learning to occur in a distributed environment and promoting its development. However, an urgent problem that needs to be addressed is how to encourage active client participation in federated learning. The Shapley value, a classical concept in cooperative game theory, has been utilized for data valuation in machine learning services. Nevertheless, existing numerical evaluation schemes based on the Shapley value are impractical, as they necessitate additional model training, leading to increa
38

Chowdhury, Somnath Basu Roy, and Snigdha Chaturvedi. "Learning Fair Representations via Rate-Distortion Maximization." Transactions of the Association for Computational Linguistics 10 (2022): 1159–74. http://dx.doi.org/10.1162/tacl_a_00512.

Abstract
Text representations learned by machine learning models often encode undesirable demographic information of the user. Predictive models based on these representations can rely on such information, resulting in biased decisions. We present a novel debiasing technique, Fairness-aware Rate Maximization (FaRM), that removes protected information by making representations of instances belonging to the same protected attribute class uncorrelated, using the rate-distortion function. FaRM is able to debias representations with or without a target task at hand. FaRM can also be adapted to remo
39

Primawati, Primawati, Fitrah Qalbina, Mulyanti Mulyanti, et al. "Predictive Maintenance of Old Grinding Machines Using Machine Learning Techniques." Journal of Applied Engineering and Technological Science (JAETS) 6, no. 2 (2025): 874–88. https://doi.org/10.37385/jaets.v6i2.6417.

Abstract
This study aims to develop a predictive maintenance system for an aging vertical grinding machine, operational since 1978, by integrating machine learning techniques, vibration analysis, and fuzzy logic. The research addresses the challenges of increased wear and unexpected failures in older machinery, which can lead to costly downtime and reduced operational efficiency. Vibration and temperature data were collected over 12 days using an MPU-9250 accelerometer, with conditions categorized as good, fair, and faulty. Various machine learning models, including logistic regression, k-nearest neigh
40

Ahire, Pritam, Atish Agale, and Mayur Augad. "Machine Learning for Forecasting Promotions." International Journal of Science and Healthcare Research 8, no. 2 (2023): 329–33. http://dx.doi.org/10.52403/ijshr.20230242.

Abstract
Employee promotion is an important aspect of an employee's career growth and job satisfaction. Organizations need to ensure that the promotion process is fair and unbiased. However, the promotion process can be complicated, and many factors need to be considered before deciding on a promotion. The use of data analytics and machine learning algorithms has become increasingly popular in recent years, and organizations can leverage these tools to predict employee promotion. In this paper, we present a web-based application for employee promotion prediction that uses the Naive Bayes algorithm. Our
41

Heidrich, Louisa, Emanuel Slany, Stephan Scheele, and Ute Schmid. "FairCaipi: A Combination of Explanatory Interactive and Fair Machine Learning for Human and Machine Bias Reduction." Machine Learning and Knowledge Extraction 5, no. 4 (2023): 1519–38. http://dx.doi.org/10.3390/make5040076.

Full text
Abstract
The rise of machine-learning applications in domains with critical end-user impact has led to a growing concern about the fairness of learned models, with the goal of avoiding biases that negatively impact specific demographic groups. Most existing bias-mitigation strategies adapt the importance of data instances during pre-processing. Since fairness is a contextual concept, we advocate for an interactive machine-learning approach that enables users to provide iterative feedback for model adaptation. Specifically, we propose to adapt the explanatory interactive machine-learning approach Caipi
42

Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits." Proceedings of the VLDB Endowment 17, no. 5 (2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.

Full text
Abstract
Biased data can lead to unfair machine learning models, highlighting the importance of embedding fairness at the beginning of data analysis, particularly during dataset curation and labeling. In response, we propose Falcon, a scalable fair active learning framework. Falcon adopts a data-centric approach that improves machine learning model fairness via strategic sample selection. Given a user-specified group fairness measure, Falcon identifies samples from "target groups" (e.g., (attribute=female, label=positive)) that are the most informative for improving fairness. However, a challenge arise…
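The target-group selection idea in this abstract can be sketched in a few lines. This is a hedged, simplified illustration: Falcon itself uses multi-armed bandits and a user-specified fairness measure, whereas the sketch below hard-codes a demographic-parity-style gap and a naive batch selection; all data and group names are invented.

```python
def positive_rate(examples, group):
    """Fraction of positive labels within one group of (group, label) pairs."""
    g = [y for a, y in examples if a == group]
    return sum(g) / len(g) if g else 0.0

def pick_target_group(labeled, groups):
    """Target the group with the lowest positive-label rate
    (a demographic-parity-style gap)."""
    return min(groups, key=lambda g: positive_rate(labeled, g))

def select_batch(unlabeled, target, k):
    """Choose up to k unlabeled samples from the target group for labeling."""
    return [x for x in unlabeled if x[0] == target][:k]

labeled = [("female", 0), ("female", 0), ("female", 1),
           ("male", 1), ("male", 1), ("male", 0)]
unlabeled = [("male", None), ("female", None), ("female", None)]

target = pick_target_group(labeled, ["female", "male"])
batch = select_batch(unlabeled, target, k=2)
print(target, len(batch))  # prints: female 2
```

The paper's contribution lies precisely in what this sketch omits: deciding, via bandit policies, when labeling a target-group sample actually helps the chosen fairness measure.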
43

Pandey, Divya, Zohaib Hasan, Pradeep Soni, and Sujeet Padit. "Achieving Equity in Machine Learning: Technical Solutions and Societal Implications." International Journal of Innovative Research in Computer and Communication Engineering 10, no. 12 (2023): 8690–96. http://dx.doi.org/10.15680/ijircce.2022.1012034.

Full text
Abstract
The rapid advancement and widespread adoption of machine learning (ML) technologies have transformed numerous industries, including healthcare and finance. While these innovations have introduced significant benefits and efficiencies, they have also raised critical ethical and fairness concerns. As machine learning models increasingly influence decision-making processes, ensuring these models operate in a fair and unbiased manner has become an essential aspect of their deployment. Ethical issues in machine learning primarily revolve around the potential for biased outcomes, lack of transparenc…
44

Fitzsimons, Jack, AbdulRahman Al Ali, Michael Osborne, and Stephen Roberts. "A General Framework for Fair Regression." Entropy 21, no. 8 (2019): 741. http://dx.doi.org/10.3390/e21080741.

Full text
Abstract
Fairness, through its many forms and definitions, has become an important issue facing the machine learning community. In this work, we consider how to incorporate group fairness constraints into kernel regression methods, applicable to Gaussian processes, support vector machines, neural network regression and decision tree regression. Further, we focus on examining the effect of incorporating these constraints in decision tree regression, with direct applications to random forests and boosted trees amongst other widespread popular inference techniques. We show that the order of complexity of…
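The constraint idea in this abstract, stated for kernel methods, can be illustrated with the simplest possible regressor. The sketch below is a hedged stand-in: it applies a soft penalty on the gap between two groups' mean predictions to plain linear regression via gradient descent; the penalty weight `lam` and the toy data are invented, not taken from the paper.

```python
def fit_fair_linreg(X, y, groups, lam=5.0, lr=0.01, steps=3000):
    """1-D linear regression with a soft group-fairness penalty
    lam * (mean prediction of group A - mean prediction of group B)^2."""
    w, b = 0.0, 0.0
    n = len(X)
    xa = [x for x, g in zip(X, groups) if g == "A"]
    xb = [x for x, g in zip(X, groups) if g == "B"]
    mxa, mxb = sum(xa) / len(xa), sum(xb) / len(xb)
    for _ in range(steps):
        preds = [w * x + b for x in X]
        # gradient of the mean squared error
        gw = sum(2 * (p - t) * x for p, t, x in zip(preds, y, X)) / n
        gb = sum(2 * (p - t) for p, t in zip(preds, y)) / n
        # gradient of the fairness penalty: the intercept cancels in the gap
        gap = w * (mxa - mxb)
        gw += lam * 2 * gap * (mxa - mxb)
        w -= lr * gw
        b -= lr * gb
    return w, b

X = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.0, 3.0, 4.0]          # target tracks x exactly
groups = ["A", "A", "B", "B"]     # group membership correlates with x
w_plain, _ = fit_fair_linreg(X, y, groups, lam=0.0)
w_fair, _ = fit_fair_linreg(X, y, groups, lam=5.0)
print(round(w_plain, 2), round(w_fair, 2))
```

With the penalty active, the slope shrinks toward zero because any slope creates a gap between the groups' mean predictions; the paper's contribution is showing how the analogous constraint enters kernel regression without raising the order of complexity.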
45

Gaikar, Asha, Dr Uttara Gogate, and Amar Panchal. "Review on Evaluation of Stroke Prediction Using Machine Learning Methods." International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (2023): 1011–17. http://dx.doi.org/10.22214/ijraset.2023.50262.

Full text
Abstract
This research proposes early prediction of stroke disease using different machine learning approaches, namely the Logistic Regression, Decision Tree, Support Vector Machine, and Random Forest classifiers. By calculating the accuracy of each algorithm, we propose a fair comparison of different stroke prediction algorithms using the same dataset with the same number of features. This will help researchers predict stroke using the best machine learning algorithm.
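The evaluation protocol described here, fitting several models on the same split of the same features and comparing accuracy, can be sketched with stand-in classifiers. The two toy models below (a majority-class baseline and 1-nearest-neighbor) substitute for the LR/DT/SVM/RF models the paper actually compares, and the data is invented.

```python
def majority_clf(train_X, train_y):
    """Baseline: always predict the most frequent training label."""
    pred = max(sorted(set(train_y)), key=train_y.count)
    return lambda x: pred

def one_nn_clf(train_X, train_y):
    """1-nearest-neighbor under squared Euclidean distance."""
    def predict(x):
        dists = [(sum((a - b) ** 2 for a, b in zip(x, tx)), ty)
                 for tx, ty in zip(train_X, train_y)]
        return min(dists)[1]
    return predict

def accuracy(clf, X, y):
    return sum(clf(x) == t for x, t in zip(X, y)) / len(y)

# Same dataset, same features, same split for every model under test.
train_X = [(0, 0), (0, 1), (1, 0), (1, 1)]
train_y = [0, 0, 0, 1]
test_X = [(0, 0.2), (1, 0.8)]
test_y = [0, 1]

scores = {name: accuracy(fit(train_X, train_y), test_X, test_y)
          for name, fit in [("majority", majority_clf), ("1-NN", one_nn_clf)]}
print(scores)
```

Holding the dataset, feature set, and split fixed is what makes the accuracy numbers comparable; swapping in scikit-learn's LogisticRegression, DecisionTreeClassifier, SVC, and RandomForestClassifier for the toy models would reproduce the paper's protocol.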
46

Feder, Toni. "Research facilities strive for fair and efficient time allocation." Physics Today 77, no. 9 (2024): 20–22. http://dx.doi.org/10.1063/pt.jvgy.emrz.

Full text
47

Khan, Shahid, Viktor Klochkov, Olha Lavoryk, et al. "Machine Learning Application for Λ Hyperon Reconstruction in CBM at FAIR." EPJ Web of Conferences 259 (2022): 13008. http://dx.doi.org/10.1051/epjconf/202225913008.

Full text
Abstract
The Compressed Baryonic Matter experiment at FAIR will investigate the QCD phase diagram in the region of high net-baryon densities. Enhanced production of strange baryons, such as the most abundantly produced Λ hyperons, can signal a transition to a new phase of QCD matter. In this work, the CBM performance for reconstruction of the Λ hyperon via its decay to a proton and a π− is presented. Decay topology reconstruction is implemented in the Particle-Finder Simple (PFSimple) package, with machine learning algorithms providing efficient selection of the decays and a high signal-to-background ratio.
48

Karim, Rizwan, and Muhammad Imran Asjad. "A Fair Approach to Heart Disease Prediction: Leveraging Machine Learning Model." Systems Assessment and Engineering Management 2 (December 1, 2024): 23–32. https://doi.org/10.61356/j.saem.2024.2438.

Full text
Abstract
This study focuses on the tasks of diagnosing and predicting diseases, which are crucial for cardiologists to accurately classify and treat them. Leveraging the growing use of machine learning for pattern recognition from data, this research introduces a specialized model that aims to predict cardiovascular diseases. The main objectives of this model are to reduce misdiagnosis rates and minimize fatalities. To achieve these goals, the proposed approach combines Logistic Regression with a fairness component. The model is trained using a real-world dataset consisting of 70,…
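The abstract does not specify which fairness component is combined with logistic regression, so the sketch below shows one common choice as an assumption: reweighing in the style of Kamiran and Calders, which weights each (group, label) cell so that group membership and label appear statistically independent before the classifier is fit. The groups and labels are invented toy data.

```python
def reweigh(groups, labels):
    """Kamiran-Calders-style reweighing: weight each instance by
    P(group) * P(label) / P(group, label)."""
    n = len(labels)
    weights = []
    for g, y in zip(groups, labels):
        p_g = groups.count(g) / n
        p_y = labels.count(y) / n
        p_gy = sum(1 for a, b in zip(groups, labels) if (a, b) == (g, y)) / n
        weights.append(p_g * p_y / p_gy)
    return weights

groups = ["m", "m", "m", "f", "f", "f", "f", "f"]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
weights = reweigh(groups, labels)

def weighted_positive_rate(g):
    num = sum(w for w, gg, y in zip(weights, groups, labels) if gg == g and y == 1)
    den = sum(w for w, gg in zip(weights, groups) if gg == g)
    return num / den

# After reweighing, the weighted positive rate is equal across groups.
print(round(weighted_positive_rate("m"), 3), round(weighted_positive_rate("f"), 3))
```

These weights would then be passed as `sample_weight` to a logistic regression fit, leaving the classifier itself unchanged, which is one plausible reading of "Logistic Regression with a fairness component".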
49

Ning, Yilin, Siqi Li, Yih Yng Ng, et al. "Variable importance analysis with interpretable machine learning for fair risk prediction." PLOS Digital Health 3, no. 7 (2024): e0000542. http://dx.doi.org/10.1371/journal.pdig.0000542.

Full text
Abstract
Machine learning (ML) methods are increasingly used to assess variable importance, but such black-box models lack stability when sample sizes are limited and do not formally indicate non-important factors. The Shapley variable importance cloud (ShapleyVIC) addresses these limitations by assessing variable importance from an ensemble of regression models, which enhances robustness while maintaining interpretability, and estimates the uncertainty of overall importance to formally test its significance. In a clinical study, ShapleyVIC reasonably identified important variables when the random forest a…
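The core idea, averaging variable importance over an ensemble of similarly good models rather than trusting a single fit, can be sketched simply. This is a loose stand-in: ShapleyVIC uses Shapley-based importance over a cloud of nearly optimal regressions, whereas the sketch uses coefficient magnitudes over bootstrap refits of a two-feature least-squares model on synthetic data.

```python
import random
import statistics

random.seed(0)

def fit2(X, y):
    """Least squares for 2 features (no intercept) via 2x2 normal equations."""
    a11 = sum(x[0] * x[0] for x in X)
    a12 = sum(x[0] * x[1] for x in X)
    a22 = sum(x[1] * x[1] for x in X)
    b1 = sum(x[0] * t for x, t in zip(X, y))
    b2 = sum(x[1] * t for x, t in zip(X, y))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Synthetic data: feature 0 drives the outcome, feature 1 is pure noise.
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
y = [2 * x0 + random.gauss(0, 0.5) for x0, _ in X]

# Ensemble of near-optimal models from bootstrap resamples.
importances = []
for _ in range(30):
    idx = [random.randrange(len(X)) for _ in range(len(X))]
    w = fit2([X[i] for i in idx], [y[i] for i in idx])
    importances.append((abs(w[0]), abs(w[1])))

mean0 = statistics.mean(i[0] for i in importances)
mean1 = statistics.mean(i[1] for i in importances)
spread0 = statistics.stdev(i[0] for i in importances)
print(round(mean0, 2), round(mean1, 2), round(spread0, 3))
```

The ensemble mean separates the informative feature from the noise feature, and the spread across refits gives the uncertainty estimate that lets ShapleyVIC formally test whether an importance is significant.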
50

Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.

Full text
Abstract
Developing efficient processes for building machine learning (ML) applications is an emerging topic for research. One of the well-known frameworks for organizing, developing, and deploying predictive machine learning models is the Cross-Industry Standard Process for Data Mining (CRISP-DM). However, the framework does not provide any guidelines for detecting and mitigating different types of fairness-related biases in the development of ML applications. The study of these biases is a relatively recent stream of research. To address this significant theoretical and practical gap, we propose a new…