
Journal articles on the topic "Fair Machine Learning"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Fair Machine Learning".

Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in whichever citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles from a wide variety of disciplines and organize your bibliography correctly.

1

Basu Roy Chowdhury, Somnath, and Snigdha Chaturvedi. "Sustaining Fairness via Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 6797–805. http://dx.doi.org/10.1609/aaai.v37i6.25833.

Full text
Abstract
Machine learning systems are often deployed for making critical decisions like credit lending, hiring, etc. While making decisions, such systems often encode the user's demographic information (like gender, age) in their intermediate representations. This can lead to decisions that are biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair with changes in the task or demographic distribution. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data in an incremental fashion. In this work, we propose to address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL is able to achieve fairness and learn new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines.
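The abstract says FaIRL sustains fairness by controlling the rate-distortion function of the learned representations. As a rough illustration of the kind of quantity involved (not the authors' exact objective; the function name and toy data below are ours), a coding-rate measure for a batch of representations can be computed like this:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z, eps) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z): a rate-distortion style
    measure of how much volume a set of n d-dimensional representations (rows of Z)
    occupies; debiasing objectives in this line of work trade off the overall rate
    against the rate computed within each protected group."""
    n, d = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z.T @ Z)
    return 0.5 * logdet

# Toy comparison: overall coding rate vs. the group-weighted within-group rate.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))
groups = rng.integers(0, 2, size=100)
within = sum(coding_rate(Z[groups == g]) * (groups == g).mean() for g in (0, 1))
print(coding_rate(Z), within)
```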
2

Perello, Nick, and Przemyslaw Grabowicz. "Fair Machine Learning Post Affirmative Action." ACM SIGCAS Computers and Society 52, no. 2 (2023): 22. http://dx.doi.org/10.1145/3656021.3656029.

Full text
Abstract
The U.S. Supreme Court, in a 6-3 decision on June 29, effectively ended the use of race in college admissions [1]. Indeed, national polls found that a plurality of Americans - 42%, according to a poll conducted by the University of Massachusetts [2] - agree that the policy should be discontinued, while 33% support its continued use in admissions decisions. As scholars of fair machine learning, we ponder how the Supreme Court decision shifts points of focus in the field. The most popular fair machine learning methods aim to achieve some form of "impact parity" by diminishing or removing the correlation between decisions and protected attributes, such as race or gender, similarly to the 80% rule of thumb of the Equal Employment Opportunity Commission. Impact parity can be achieved by reversing historical discrimination, which corresponds to affirmative actions, or by diminishing or removing the influence of the attributes correlated with the protected attributes, which is impractical as it severely undermines model accuracy. Besides, impact disparity is not necessarily a bad thing, e.g., African-American patients suffer from a higher rate of chronic illnesses than White patients and, hence, it may be justified to admit them to care programs at a proportionally higher rate [3]. The U.S. burden-shifting framework under Title VII offers solutions alternative to impact parity. To determine employment discrimination, U.S. courts rely on the McDonnell-Douglas burden-shifting framework where the explanations, justifications, and comparisons of employment practices play a central role. Can similar methods be applied in machine learning?
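For context, the "80% rule of thumb" referenced here compares positive-decision rates across protected groups; a minimal sketch of that check (the function name and toy data are illustrative, not taken from the paper):

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-decision rates between the least- and most-favored group;
    the EEOC's four-fifths rule flags values below 0.8."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return min(rates.values()) / max(rates.values())

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])            # toy decisions
group  = np.array(['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b'])
print(disparate_impact_ratio(y_pred, group))  # ~0.67, below the 0.8 threshold
```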
3

Rance, Joseph, and Filip Svoboda. "Can Private Machine Learning Be Fair?" Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 19 (2025): 20121–29. https://doi.org/10.1609/aaai.v39i19.34216.

Full text
Abstract
We show that current SOTA methods for privately and fairly training models are unreliable in many practical scenarios. Specifically, we (1) introduce a new type of adversarial attack that seeks to introduce unfairness into private model training, and (2) demonstrate that the use of methods for training on private data that are robust to adversarial attacks often leads to unfair models, regardless of the use of fairness-enhancing training methods. This leads to a dilemma when attempting to train fair models on private data: either (A) we use a robust training method which may introduce unfairness to the model itself, or (B) we train models which are vulnerable to adversarial attacks that introduce unfairness. This paper highlights flaws in robust learning methods when training fair models, yielding a new perspective for the design of robust and private learning systems.
4

Oneto, Luca. "Learning fair models and representations." Intelligenza Artificiale 14, no. 1 (2020): 151–78. http://dx.doi.org/10.3233/ia-190034.

Full text
Abstract
Machine learning based systems and products are reaching society at large in many aspects of everyday life, including financial lending, online advertising, pretrial and immigration detention, child maltreatment screening, health care, social services, and education. This phenomenon has been accompanied by an increase in concern about the ethical issues that may arise from the adoption of these technologies. In response to this concern, a new area of machine learning has recently emerged that studies how to address disparate treatment caused by algorithmic errors and bias in the data. The central question is how to ensure that the learned model does not treat subgroups in the population unfairly. While the design of solutions to this issue requires an interdisciplinary effort, fundamental progress can only be achieved through a radical change in the machine learning paradigm. In this work, we will describe the state of the art on algorithmic fairness using statistical learning theory, machine learning, and deep learning approaches that are able to learn fair models and data representation.
5

Kim, Yun-Myung. "Data and Fair use." Korea Copyright Commission 141 (March 30, 2023): 5–53. http://dx.doi.org/10.30582/kdps.2023.36.1.5.

Full text
Abstract
Data collection and use are the beginning and end of machine learning. As ChatGPT shows, data is making machines comparable to human capabilities. A commercial purpose does not by itself defeat a finding of fair use for the process of producing or securing data for system training. The UK, Germany, and the EU are introducing copyright exceptions for data mining for non-profit purposes such as research, and Japan is even more active. Japan legislated actively because, unlike Korea and the United States, it has no comprehensive fair use provision, and this signals its willingness to lead the artificial intelligence industry. In 2020, a revision to the Copyright Act was proposed in Korea to introduce an exception for information analysis, which would increase predictability for operators. However, the amendment is expected to be opposed by rights holders and may take time. The paper therefore examines whether machine learning activities such as data crawling and text and data mining (TDM) qualify as fair use under the current copyright law. It concludes that they may, on the ground that machine use differs from human use of works. Even so, it is questionable whether it is reasonable to let the business operator capture all of the exclusive gains from using the works of others under fair use, so a compensation scheme for the profits that operators earn from works generated through TDM or machine learning cannot be ruled out, given the potential for serious consequences for a fair competitive environment.
6

Zhang, Xueru, Mohammad Mahdi Khalili, and Mingyan Liu. "Long-Term Impacts of Fair Machine Learning." Ergonomics in Design: The Quarterly of Human Factors Applications 28, no. 3 (2019): 7–11. http://dx.doi.org/10.1177/1064804619884160.

Full text
Abstract
Machine learning models developed from real-world data can inherit potential, preexisting bias in the dataset. When these models are used to inform decisions involving human beings, fairness concerns inevitably arise. Imposing certain fairness constraints in the training of models can be effective only if appropriate criteria are applied. However, a fairness criterion can be defined/assessed only when the interaction between the decisions and the underlying population is well understood. We introduce two feedback models describing how people react when receiving machine-aided decisions and illustrate that some commonly used fairness criteria can end with undesirable consequences while reinforcing discrimination.
7

Zhu, Yunlan. "The Comparative Analysis of Fair Use of Works in Machine Learning." SHS Web of Conferences 178 (2023): 01015. http://dx.doi.org/10.1051/shsconf/202317801015.

Full text
Abstract
Before generative AI outputs the content, it copies a large amount of text content. This process is machine learning. For the development of artificial intelligence technology and cultural prosperity, many countries have included machine learning within the scope of fair use. However, China’s copyright law currently does not legislate the fair use of machine learning works. This paper will construct a Chinese model of fair use of machine learning works through comparative analysis of the legislation of other countries. This is a fair use model that balances the flexibility of the United States with the rigor of the European Union.
8

JEONG, JIN KEUN. "Will the U.S. Court Judge TDM for Artificial Intelligence Machine Learning as Fair Use?" Korea Copyright Commission 144 (December 31, 2023): 215–50. http://dx.doi.org/10.30582/kdps.2023.36.4.215.

Full text
Abstract
A representative debate is whether TDM (Text and Data Mining) in the machine learning process, which occurs when AI uses other people's copyrighted works by unauthorized means such as copying, is in accordance with the fair use principle or not, i.e., whether one can be exempted from copyright infringement.
In this regard, Korean scholars' attitude starts from the optimistic perspective that U.S. courts will view AI TDM or AI machine learning as fair use based on the fair use principle.
Nevertheless, there is no direct basis for the claim that U.S. courts will treat AI TDM or AI machine learning as fair use, because no U.S. court has recognized fair use in a case squarely concerning AI TDM or AI machine learning. Meanwhile, in the Internet Archive and Andy Warhol cases, the courts have been hesitant to expand the fair use principle, giving rise to pessimistic views on whether the use of other people's copyrighted works in the AI TDM or AI machine learning process will be considered fair use.
Taking that into consideration, American scholars are also developing the argument that the use of copyrighted works for AI machine learning or AI TDM should be considered fair use.
Therefore, the positive stance on the possibility of a TDM exemption under Article 35-5 of the Korean Copyright Act needs to be carefully reexamined.
9

Redko, Ievgen, and Charlotte Laclau. "On Fair Cost Sharing Games in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4790–97. http://dx.doi.org/10.1609/aaai.v33i01.33014790.

Full text
Abstract
Machine learning and game theory are known to exhibit a very strong link as they mutually provide each other with solutions and models allowing to study and analyze the optimal behaviour of a set of agents. In this paper, we take a closer look at a special class of games, known as fair cost sharing games, from a machine learning perspective. We show that this particular kind of games, where agents can choose between selfish behaviour and cooperation with shared costs, has a natural link to several machine learning scenarios including collaborative learning with homogeneous and heterogeneous sources of data. We further demonstrate how the game-theoretical results bounding the ratio between the best Nash equilibrium (or its approximate counterpart) and the optimal solution of a given game can be used to provide the upper bound of the gain achievable by the collaborative learning expressed as the expected risk and the sample complexity for homogeneous and heterogeneous cases, respectively. We believe that the established link can spur many possible future implications for other learning scenarios as well, with privacy-aware learning being among the most noticeable examples.
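For context, the classical game-theoretic bound the abstract appears to allude to (the price of stability of fair cost-sharing games, due to Anshelevich et al.) can be stated as:

```latex
% Best Nash equilibrium vs. social optimum in a fair (Shapley) cost-sharing game
% with n players: the gap is at most the n-th harmonic number.
\[
  \mathrm{PoS}(n) \;=\; \frac{\text{cost of the best Nash equilibrium}}{\text{cost of the social optimum}}
  \;\le\; H_n \;=\; \sum_{k=1}^{n} \frac{1}{k} \;=\; O(\log n).
\]
```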
10

Lee, Joshua, Yuheng Bu, Prasanna Sattigeri, et al. "A Maximal Correlation Framework for Fair Machine Learning." Entropy 24, no. 4 (2022): 461. http://dx.doi.org/10.3390/e24040461.

Full text
Abstract
As machine learning algorithms grow in popularity and diversify to many industries, ethical and legal concerns regarding their fairness have become increasingly relevant. We explore the problem of algorithmic fairness, taking an information–theoretic view. The maximal correlation framework is introduced for expressing fairness constraints and is shown to be capable of being used to derive regularizers that enforce independence and separation-based fairness criteria, which admit optimization algorithms for both discrete and continuous variables that are more computationally efficient than existing algorithms. We show that these algorithms provide smooth performance–fairness tradeoff curves and perform competitively with state-of-the-art methods on both discrete datasets (COMPAS, Adult) and continuous datasets (Communities and Crimes).
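As a much-simplified sketch of an independence-style fairness regularizer, a plain Pearson-correlation penalty can stand in for the maximal-correlation machinery the paper develops; the function below is illustrative only, not the paper's algorithm:

```python
import numpy as np

def correlation_penalty(scores, sensitive):
    """Squared Pearson correlation between model scores and a (binary) sensitive
    attribute; driving it toward zero encourages independence-style fairness.
    A crude stand-in for the maximal-correlation (HGR) regularizers in the paper."""
    s = (scores - scores.mean()) / (scores.std() + 1e-12)
    a = (sensitive - sensitive.mean()) / (sensitive.std() + 1e-12)
    return float(np.mean(s * a) ** 2)

# Typical use: total_loss = task_loss + lam * correlation_penalty(scores, attr)
```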
11

van Berkel, Niels, Jorge Goncalves, Danula Hettiachchi, Senuri Wijenayake, Ryan M. Kelly, and Vassilis Kostakos. "Crowdsourcing Perceptions of Fair Predictors for Machine Learning." Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019): 1–21. http://dx.doi.org/10.1145/3359130.

Full text
12

Fahimi, Miriam, Mayra Russo, Kristen M. Scott, Maria-Esther Vidal, Bettina Berendt, and Katharina Kinder-Kurlanda. "Articulation Work and Tinkering for Fairness in Machine Learning." Proceedings of the ACM on Human-Computer Interaction 8, CSCW2 (2024): 1–23. http://dx.doi.org/10.1145/3686973.

Full text
Abstract
The field of fair AI aims to counter biased algorithms through computational modelling. However, it faces increasing criticism for perpetuating the use of overly technical and reductionist methods. As a result, novel approaches appear in the field to address more socially-oriented and interdisciplinary (SOI) perspectives on fair AI. In this paper, we take this dynamic as the starting point to study the tension between computer science (CS) and SOI research. By drawing on STS and CSCW theory, we position fair AI research as a matter of 'organizational alignment': what makes research 'doable' is the successful alignment of three levels of work organization (the social world, the laboratory, and the experiment). Based on qualitative interviews with CS researchers, we analyze the tasks, resources, and actors required for doable research in the case of fair AI. We find that CS researchers engage with SOI research to some extent, but organizational conditions, articulation work, and ambiguities of the social world constrain the doability of SOI research for them. Based on our findings, we identify and discuss problems for aligning CS and SOI as fair AI continues to evolve.
13

Zhao, Han. "Fair and optimal prediction via post‐processing." AI Magazine 45, no. 3 (2024): 411–18. http://dx.doi.org/10.1002/aaai.12191.

Full text
Abstract
With the development of machine learning algorithms and the increasing computational resources available, artificial intelligence has achieved great success in many application domains. However, the success of machine learning has also raised concerns about the fairness of the learned models. For instance, the learned models can perpetuate and even exacerbate the potential bias and discrimination in the training data. This issue has become a major obstacle to the deployment of machine learning systems in high-stakes domains, for example, criminal judgment, medical testing, online advertising, hiring process, and so forth. To mitigate the potential bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but it often comes at the expense of model performance. Understanding such tradeoffs, therefore, is crucial to the design of optimal and fair algorithms. My research focuses on characterizing the inherent tradeoff between fairness and accuracy in machine learning, and developing algorithms that can achieve both fairness and optimality. In this article, I will discuss our recent work on designing post-processing algorithms for fair classification, which can be applied to a wide range of fairness criteria, including statistical parity, equal opportunity, and equalized odds, under both attribute-aware and attribute-blind settings, and is particularly suited to large-scale foundation models where retraining is expensive or even infeasible. I will also discuss the connections between our work and other related research on trustworthy machine learning, including the connections between algorithmic fairness and differential privacy as well as adversarial robustness.
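Post-processing of the kind described here leaves the trained scorer untouched and only adjusts decisions afterwards. A minimal attribute-aware sketch that picks one threshold per group so that acceptance rates match a target (statistical parity); this is our illustration, not the article's algorithm:

```python
import numpy as np

def group_thresholds_for_parity(scores, group, target_rate):
    """One score threshold per group such that each group accepts roughly
    target_rate of its members -- a simple post-processing step for statistical parity."""
    return {g: np.quantile(scores[group == g], 1.0 - target_rate) for g in np.unique(group)}

def apply_thresholds(scores, group, thresholds):
    # Decisions after post-processing: accept when the score clears the group's threshold.
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])
```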
14

Edwards, Chris. "AI Struggles with Fair Use." New Electronics 56, no. 9 (2023): 40–41. http://dx.doi.org/10.12968/s0047-9624(24)60063-5.

Full text
15

Jang, Taeuk, Feng Zheng, and Xiaoqian Wang. "Constructing a Fair Classifier with Generated Fair Data." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 7908–16. http://dx.doi.org/10.1609/aaai.v35i9.16965.

Full text
Abstract
Fairness in machine learning is getting rising attention as it is directly related to real-world applications and social problems. Recent methods have been explored to alleviate the discrimination between certain demographic groups that are characterized by sensitive attributes (such as race, age, or gender). Some studies have found that the data itself is biased, so training directly on the data causes unfair decision making. Models directly trained on raw data can replicate or even exacerbate bias in the prediction between demographic groups. This leads to vastly different prediction performance in different demographic groups. In order to address this issue, we propose a new approach to improve machine learning fairness by generating fair data. We introduce a generative model to generate cross-domain samples w.r.t. multiple sensitive attributes. This ensures that we can generate an infinite number of samples that are balanced w.r.t. both target label and sensitive attributes to enhance fair prediction. By training the classifier solely with the synthetic data and then transferring the model to real data, we can overcome the under-representation problem, which is non-trivial since collecting real data is extremely time- and resource-consuming. We provide empirical evidence to demonstrate the benefit of our model with respect to both fairness and accuracy.
16

Eponeshnikov, Alexander, Natalia Bakhtadze, Gulnara Smirnova, Rustem Sabitov, and Shamil Sabitov. "Differentially Private and Fair Machine Learning: A Benchmark Study." IFAC-PapersOnLine 58, no. 19 (2024): 277–82. http://dx.doi.org/10.1016/j.ifacol.2024.09.192.

Full text
17

Blumenröhr, Nicolas, Thomas Jejkal, Andreas Pfeil, and Rainer Stotzka. "FAIR Digital Object Application Case for Composing Machine Learning Training Data." Research Ideas and Outcomes 8 (October 12, 2022): e94113. https://doi.org/10.3897/rio.8.e94113.

Full text
Abstract
The application case for implementing and using the FAIR Digital Object (FAIR DO) concept (Schultes and Wittenburg 2019) aims to simplify access to label information for composing Machine Learning (ML) (Awad and Khanna 2015) training data.

Data sets curated by different domain experts usually have non-identical label terms. This prevents images with similar labels from being easily assigned to the same category. Therefore, using them collectively for application as training data in ML comes with the cost of laborious relabeling. The data needs to be machine-interpretable and -actionable to automate this process. This is enabled by applying the FAIR DO concept. A FAIR DO is a representation of scientific data and requires at least a globally unique Persistent Identifier (PID) (Schultes and Wittenburg 2019), mandatory metadata, and a digital object type.

Storing typed information in the PID record demands a prior selection of that information. This includes mandatory metadata and a digital object type to enable machine interpretability and subsequent actionability. The information provided in the PID record refers to its PID Kernel Information Profile (PIDKIP), defined or selected by the creator of the FAIR DO. A PIDKIP is a standard that facilitates the definition and validation of the mandatory metadata attributes in the PID record. This information acts as a basis for a machine to decide if the digital object is reusable for a particular application. Part of that is also the digital object type, which enables a machine to work with the data represented by the FAIR DO. If more information is required, the data itself or other associated FAIR DOs need to be accessed through references in the PID record.

Specifying the granularity of the data representation, and the granularity of the metadata in the information record, is not a fixed task but depends on the objective. Here, the FAIR DO concept is used for representing image data sets with their label metadata. Each data set contains multiple images, which refer to the same label term. One data set associated with a particular label is represented as one FAIR DO. A type that provides information about this entity covers the packaged format of the images and the image format itself. Further information about the label term and other metadata associated with the data set is provided or accessed through references in the PID record. For the PIDKIP, the Helmholtz KIP was chosen, following the RDA Working Group recommendations on PID Kernel Information (RDA 2013). This profile includes mandatory metadata attributes, used for machine-actionable decisions required for relabeling. Information about the data labels is not directly provided in its PID record, but in another PID record of an associated image label FAIR DO. This one represents a metadata document, containing label information about the data set. Its PID record is based on the same PIDKIP, i.e. the Helmholtz KIP. Both FAIR DOs point to each other. Thus, the image label FAIR DO is accessed via the reference in the PID record of the data set FAIR DO and vice versa. Its PID record contains information about the labels, which are relevant to the relabeling task. Accessing data label information that way means the user does not have to look up each data set, analyze its content and search for its labels (Fig. 1).

The automated procedure for relabeling then looks as follows: A specialized client that can work with PIDs resolves the PID of a FAIR DO which represents an image data set, and fetches its record. Analyzing its type, the client validates the data usability for composing an ML training data set. Furthermore, the referenced PID of the image label FAIR DO in the record is resolved the same way. By analyzing its PID record, the client identifies that it is relevant for getting information about the labels. The document represented by the image label FAIR DO is accessed via its location path provided in the PID record. To work with its content, a specialized tool is required that is compatible with its format and schema, i.e. its type. This tool identifies and analyzes the label term of the data set for mapping it to corresponding label terms of other image data sets.

This specification of FAIR DOs enables the relabeling of entire image data sets for application in ML. However, the current granularity of data representation is insufficient for other machine-based decisions and actions on single images. Another aspect in this regard is to increase the information in the PID record to enable more machine-actionable decisions. This requires reconsideration of the granularity of metadata in the PID record and needs to be balanced with the aim of fast record processing. Changing the content of the PID record also leads to deriving a new PIDKIP, or extending existing ones. Metadata tools applied in conjunction with the FAIR DO concept that use the label information in the document of the metadata FAIR DOs need further specification. One requirement for their implementation is a standardized data description for the metadata document, using schemas and vocabularies.

Using the machine actionability of FAIR DOs described above enables automation for relabeling data sets. This leaves more time for the ML user to concentrate on model training and optimization. Software development of FAIR DO-specific clients and metadata mapping tools is the subject of current research. The next step is to implement such software, for carrying out the proposed concept on a large scale.

This work has been supported by the research program 'Engineering Digital Futures' of the Helmholtz Association of German Research Centers and the Helmholtz Metadata Collaboration Platform (Helmholtz-Gemeinschaft Deutscher Forschungszentren 1995).
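To make the record structure concrete, a hypothetical PID record for a data-set FAIR DO might carry a few typed attributes plus a cross-reference to its image label FAIR DO. The field names and identifiers below are invented for illustration; they are not the actual Helmholtz KIP attributes:

```python
# Hypothetical sketch only: invented field names and handles, not the Helmholtz KIP.
dataset_pid_record = {
    "pid": "21.EXAMPLE/dataset-0001",                 # invented handle
    "kernelInformationProfile": "21.EXAMPLE/profile-0001",
    "digitalObjectType": "packaged-tiff-image-set",   # packaging + image format
    "dateCreated": "2022-10-12",
    "references": {
        "imageLabelObject": "21.EXAMPLE/labels-0001", # associated image label FAIR DO
    },
}

# A client would resolve "imageLabelObject", read the label term from that record's
# document, and map it onto the label vocabulary shared by the other data sets.
```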
18

Chandra, Rushil, Karun Sanjaya, AR Aravind, Ahmed Radie Abbas, Ruzieva Gulrukh, and T. S. Senthil kumar. "Algorithmic Fairness and Bias in Machine Learning Systems." E3S Web of Conferences 399 (2023): 04036. http://dx.doi.org/10.1051/e3sconf/202339904036.

Full text
Abstract
In recent years, research into and concern over algorithmic fairness and bias in machine learning systems has grown significantly. It is vital to make sure that these systems are fair, impartial, and do not support discrimination or social injustices since machine learning algorithms are becoming more and more prevalent in decision-making processes across a variety of disciplines. This abstract gives a general explanation of the idea of algorithmic fairness, the difficulties posed by bias in machine learning systems, and different solutions to these problems. Algorithmic bias and fairness in machine learning systems are crucial issues in this regard that demand the attention of academics, practitioners, and policymakers. Building fair and unbiased machine learning systems that uphold equality and prevent discrimination requires addressing biases in training data, creating fairness-aware algorithms, encouraging transparency and interpretability, and encouraging diversity and inclusivity.
19

Brotcke, Liming. "Time to Assess Bias in Machine Learning Models for Credit Decisions." Journal of Risk and Financial Management 15, no. 4 (2022): 165. http://dx.doi.org/10.3390/jrfm15040165.

Full text
Abstract
Focus on fair lending has become more intensified recently as bank and non-bank lenders apply artificial-intelligence (AI)-based credit determination approaches. The data analytics technique behind AI and machine learning (ML) has proven to be powerful in many application areas. However, ML can be less transparent and explainable than traditional regression models, which may raise unique questions about its compliance with fair lending laws. ML may also reduce potential for discrimination, by reducing discretionary and judgmental decisions. As financial institutions continue to explore ML applications in loan underwriting and pricing, the fair lending assessments typically led by compliance and legal functions will likely continue to evolve. In this paper, the author discusses unique considerations around ML in the existing fair lending risk assessment practice for underwriting and pricing models and proposes consideration of additional evaluations to be added in the present practice.
20

Tarasiuk, Anton. "Legal basis for using copyright objects in machine learning." Theory and Practice of Intellectual Property, no. 2 (June 4, 2024): 73–83. https://doi.org/10.33731/22024.305506.

Full text
Abstract
In this article, I characterize the legal basis for using copyright objects in machine learning. In this regard, the definition of machine learning from the perspective of copyright law is provided. In the work, legal relations that occur in the context of machine learning using copyright objects are studied. The two key legal grounds for the usage of copyright objects to train models/neural networks are defined: license agreement, fair dealing, and fair use doctrine. Specific terms of the license agreement on copyright objects for machine learning are defined. In the analysis of the doctrine of fair use as a legal ground to use copyright objects without permission/license of the author, four actual and ongoing disputes in the USA regarding possible unlawful usage of copyright objects for machine learning without getting a license or providing payment to the authors are examined. As a result of this study, the author concludes that in order to determine whether it is possible to use the fair use doctrine as a legal basis, there must be a separate examination for each data used, and the analysis should be complex, including the way the new product based on the respective trained neural network will be used and the effect it may have on the market. As a result, key indicators of the pros and cons of the possibilities of applying the fair use doctrine are defined. Pro indicators: (a) obtaining information from objects about the essence, structure, and ideas embedded in these objects; (b) a transformational and innovative goal; (c) lack of reproduction of the full/partial expression of objects for the end user/the possibility of creating modifications of such objects; (d) lack of competition with authors. Con indicators: (a) the use of extremely valuable/highest quality copies of copyright objects in the industry that are a fundamental asset of a particular company (an organized collection of photographs, etc.), while it was possible to get objects in the public domain for the stated goals; (b) a real impact on the market of authors due to the creation of competitive end products that cause real harm to them (real/hypothetical); (c) partial/full reproduction of original copyright objects for end users of the final product; (d) causing reputational damage to authors.
21

Blumenröhr, Nicolas, and Rossella Aversa. "From implementation to application: FAIR digital objects for training data composition." Research Ideas and Outcomes 9 (August 22, 2023): e108706. https://doi.org/10.3897/rio.9.e108706.

Full text
Abstract
Composing training data for Machine Learning applications can be laborious and time-consuming when done manually. The use of FAIR Digital Objects, in which the data is machine-interpretable and -actionable, makes it possible to automate and simplify this task. As an application case, we represented labeled Scanning Electron Microscopy images from different sources as FAIR Digital Objects to compose a training data set. In addition to some existing services included in our implementation (the Typed-PID Maker, the Handle Registry, and the ePIC Data Type Registry), we developed a Python client to automate the relabeling task. Our work provides a Proof-of-Concept validation for the usefulness of FAIR Digital Objects on a specific task, facilitating further developments and future extensions to other machine learning applications.
22

Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making." Mathematics 11, no. 8 (2023): 1771. http://dx.doi.org/10.3390/math11081771.

Full text
Abstract
Most research on fairness in Machine Learning assumes the relationship between fairness and accuracy to be a trade-off, with an increase in fairness leading to an unavoidable loss of accuracy. In this study, several approaches for fair Machine Learning are studied to experimentally analyze the relationship between accuracy and group fairness. The results indicated that group fairness and accuracy may even benefit each other, which emphasizes the importance of selecting appropriate measures for performance evaluation. This work provides a foundation for further studies on the adequate objectives of Machine Learning in the context of fair automated decision making.
23

Sun, Shao Chao, and Dao Huang. "A Novel Robust Smooth Support Vector Machine." Applied Mechanics and Materials 148-149 (December 2011): 1438–41. http://dx.doi.org/10.4028/www.scientific.net/amm.148-149.1438.

Full text
Abstract
In this paper, we propose a new type of ε-insensitive loss function, called the ε-insensitive Fair estimator. With this loss function we can obtain better robustness and sparseness. To enhance the learning speed, we apply the smoothing techniques that have been used for solving the support vector machine for classification, to replace the ε-insensitive Fair estimator by an accurate smooth approximation. This will allow us to solve ε-SFSVR as an unconstrained minimization problem directly. Based on the simulation results, the proposed approach has fast learning speed and better generalization performance whether outliers exist or not.
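For reference, the classical Fair robust loss is ρ(r) = c²(|r|/c − ln(1 + |r|/c)); an ε-insensitive variant can be read as applying it only to the part of the residual outside the ε-tube. The sketch below encodes that reading and is our interpretation, not the authors' smooth formulation:

```python
import numpy as np

def eps_insensitive_fair_loss(residual, eps=0.1, c=1.0):
    """Fair robust loss rho(r) = c^2 * (|r|/c - log(1 + |r|/c)) applied to the part
    of the residual outside the eps-insensitive tube; it grows roughly linearly for
    large residuals, which is what gives robustness to outliers."""
    r = np.maximum(np.abs(residual) - eps, 0.0)
    return c**2 * (r / c - np.log1p(r / c))
```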
24

Tian, Xiao, Rachael Hwee Ling Sim, Jue Fan, and Bryan Kian Hsiang Low. "DeRDaVa: Deletion-Robust Data Valuation for Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 15373–81. http://dx.doi.org/10.1609/aaai.v38i14.29462.

Full text
Abstract
Data valuation is concerned with determining a fair valuation of data from data sources to compensate them or to identify training examples that are the most or least useful for predictions. With the rising interest in personal data ownership and data protection regulations, model owners will likely have to fulfil more data deletion requests. This raises issues that have not been addressed by existing works: Are the data valuation scores still fair with deletions? Must the scores be expensively recomputed? The answer is no. To avoid recomputations, we propose using our data valuation framework DeRDaVa upfront for valuing each data source's contribution to preserving robust model performance after anticipated data deletions. DeRDaVa can be efficiently approximated and will assign higher values to data that are more useful or less likely to be deleted. We further generalize DeRDaVa to Risk-DeRDaVa to cater to risk-averse/seeking model owners who are concerned with the worst/best-cases model utility. We also empirically demonstrate the practicality of our solutions.
25

Davis, Jenny L., Apryl Williams, and Michael W. Yang. "Algorithmic reparation." Big Data & Society 8, no. 2 (2021): 205395172110448. http://dx.doi.org/10.1177/20539517211044808.

Full text
Abstract
Machine learning algorithms pervade contemporary society. They are integral to social institutions, inform processes of governance, and animate the mundane technologies of daily life. Consistently, the outcomes of machine learning reflect, reproduce, and amplify structural inequalities. The field of fair machine learning has emerged in response, developing mathematical techniques that increase fairness based on anti-classification, classification parity, and calibration standards. In practice, these computational correctives invariably fall short, operating from an algorithmic idealism that does not, and cannot, address systemic, Intersectional stratifications. Taking present fair machine learning methods as our point of departure, we suggest instead the notion and practice of algorithmic reparation. Rooted in theories of Intersectionality, reparative algorithms name, unmask, and undo allocative and representational harms as they materialize in sociotechnical form. We propose algorithmic reparation as a foundation for building, evaluating, adjusting, and when necessary, omitting and eradicating machine learning systems.
26

Dhabliya, Dharmesh, Sukhvinder Singh Dari, Anishkumar Dhablia, N. Akhila, Renu Kachhoria, and Vinit Khetani. "Addressing Bias in Machine Learning Algorithms: Promoting Fairness and Ethical Design." E3S Web of Conferences 491 (2024): 02040. http://dx.doi.org/10.1051/e3sconf/202449102040.

Full text
Abstract
Machine learning algorithms have quickly risen to the top of several fields' decision-making processes in recent years. However, it is easy for these algorithms to reinforce prejudices already present in the data, leading to biased and unfair choices. In this work, we examine bias in machine learning in great detail and offer strategies for promoting fair and moral algorithm design. The paper then emphasises the value of fairness-aware machine learning algorithms, which aim to lessen bias by incorporating fairness constraints into the training and evaluation procedures. Reweighting, adversarial training, and resampling are a few strategies that could be used to overcome prejudice. Machine learning systems that better serve society and respect ethical ideals can be developed by promoting justice, transparency, and inclusivity. This paper lays the groundwork for researchers, practitioners, and policymakers to forward the cause of ethical and fair machine learning through concerted effort.
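Of the strategies listed, reweighting is the simplest to sketch: in the Kamiran-Calders style, each (group, label) cell is weighted by the ratio of its expected to its observed frequency, so that the label looks statistically independent of the protected attribute in the reweighted data. A minimal version (our sketch, not this paper's procedure):

```python
import numpy as np

def reweighing_weights(y, a):
    """Instance weights w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y), in the style of
    Kamiran and Calders; under these weights Y is decorrelated from A."""
    weights = np.empty(len(y), dtype=float)
    for av in np.unique(a):
        for yv in np.unique(y):
            cell = (a == av) & (y == yv)
            if cell.any():
                weights[cell] = ((a == av).mean() * (y == yv).mean()) / cell.mean()
    return weights

# Most learners accept these directly, e.g. model.fit(X, y, sample_weight=reweighing_weights(y, a)).
```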
27

Hardy, Ian. "[Re] An Implementation of Fair Robust Learning." ReScience C 8, no. 2 (2022): #16. https://doi.org/10.5281/zenodo.6574657.

Full text
28

Firestone, Chaz. "Performance vs. competence in human–machine comparisons." Proceedings of the National Academy of Sciences 117, no. 43 (2020): 26562–71. http://dx.doi.org/10.1073/pnas.1905334117.

Full text
Abstract
Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. when such failures are only superficial or peripheral? This article draws on a foundational insight from cognitive science—the distinction between performance and competence—to encourage “species-fair” comparisons between humans and machines. The performance/competence distinction urges us to consider whether the failure of a system to behave as ideally hypothesized, or the failure of one creature to behave like another, arises not because the system lacks the relevant knowledge or internal capacities (“competence”), but instead because of superficial constraints on demonstrating that knowledge (“performance”). I argue that this distinction has been neglected by research comparing human and machine behavior, and that it should be essential to any such comparison. Focusing on the domain of image classification, I identify three factors contributing to the species-fairness of human–machine comparisons, extracted from recent work that equates such constraints. Species-fair comparisons level the playing field between natural and artificial intelligence, so that we can separate more superficial differences from those that may be deep and enduring.
29

Raftopoulos, George, Gregory Davrazos, and Sotiris Kotsiantis. "Fair and Transparent Student Admission Prediction Using Machine Learning Models." Algorithms 17, no. 12 (2024): 572. https://doi.org/10.3390/a17120572.

Full text
Abstract
Student admission prediction is a crucial aspect of academic planning, offering insights into enrollment trends, resource allocation, and institutional growth. However, traditional methods often lack the ability to address fairness and transparency, leading to potential biases and inequities in the decision-making process. This paper explores the development and evaluation of machine learning models designed to predict student admissions while prioritizing fairness and interpretability. We employ a diverse set of algorithms, including Logistic Regression, Decision Trees, and ensemble methods, to forecast admission outcomes based on academic, demographic, and extracurricular features. Experimental results on real-world datasets highlight the effectiveness of the proposed models in achieving competitive predictive performance while adhering to fairness metrics such as demographic parity and equalized odds. Our findings demonstrate that machine learning can not only enhance the accuracy of admission predictions but also support equitable access to education by promoting transparency and accountability in automated systems.
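The fairness metrics named here have simple empirical forms; a small sketch for a binary classifier with a binary protected attribute (illustrative helpers, not the authors' code):

```python
import numpy as np

def demographic_parity_gap(y_pred, a):
    """|P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)| for binary predictions and attribute."""
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def equalized_odds_gap(y_true, y_pred, a):
    """Largest between-group gap in P(Y_hat=1 | A, Y=y), taken over y in {0, 1}
    (i.e., the larger of the TPR gap and the FPR gap)."""
    gaps = []
    for yv in (0, 1):
        r0 = y_pred[(a == 0) & (y_true == yv)].mean()
        r1 = y_pred[(a == 1) & (y_true == yv)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)
```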
30

Vinay Kumar, Kotte, Santosh N.C, and Narasimha reddy soor. "Data Analysis and Fair Price Prediction Using Machine Learning Algorithms." Journal of Computer Allied Intelligence 2, no. 1 (2024): 26–45. http://dx.doi.org/10.69996/jcai.2024004.

Full text
Abstract
Data analysis has become a central concern in recent times, as demand for it grows with the huge amounts of data collected from many sources; all of this data must be properly analysed so that the resulting information can be put to use. In this paper, the main objective is to analyse a publicly available CSV dataset of Uber ride data. In addition to the data analysis, the project includes two features: fare price prediction and optimal cab allotment, each using appropriate machine learning algorithms. We used k-means clustering and DBSCAN for optimal cab allotment, and Linear Regression, Logistic Regression, Random Forest, and Decision Tree algorithms for fare price prediction. A further goal of the project is to identify the algorithm that yields the highest accuracy for each task. As an additional feature, an Android app developed by the authors collects the required data from users and runs the two operations on it to obtain the best result.
31

Plečko, Drago, and Elias Bareinboim. "Causal Fairness Analysis: A Causal Toolkit for Fair Machine Learning." Foundations and Trends® in Machine Learning 17, no. 3 (2024): 304–589. http://dx.doi.org/10.1561/2200000106.

Full text
32

Hoche, Marine, Olga Mineeva, Gunnar Rätsch, Effy Vayena, and Alessandro Blasimme. "What makes clinical machine learning fair? A practical ethics framework." PLOS Digital Health 4, no. 3 (2025): e0000728. https://doi.org/10.1371/journal.pdig.0000728.

Full text
Abstract
Machine learning (ML) can offer a tremendous contribution to medicine by streamlining decision-making, reducing mistakes, improving clinical accuracy and ensuring better patient outcomes. The prospects of a widespread and rapid integration of machine learning into clinical workflows have attracted considerable attention, in part because of complex ethical implications, with algorithmic bias being among the most frequently discussed concerns for clinical ML models. Here we introduce and discuss a practical ethics framework inductively generated via normative analysis of the practical challenges in developing an actual clinical ML model (see case study). The framework can be used to identify, measure and address bias in clinical machine learning models, thus improving fairness as to both model performance and health outcomes. We detail a proportionate approach to ML bias by defining the demands of fair ML in light of what is ethically justifiable and, at the same time, technically feasible in light of inevitable trade-offs. Our framework enables ethically robust and transparent decision-making both in the design and the context-dependent aspects of ML bias mitigation, thus improving accountability for both developers and clinical users.
33

Do, Hyungrok, Jesse Persily, Judy Zhong, Yassamin Neshatvar, Katie Murray, and Madhur Nayan. "Mitigating Disparities in Prostate Cancer through Fair Machine Learning Models." Urologic Oncology: Seminars and Original Investigations 43, no. 3 (2025): 80. https://doi.org/10.1016/j.urolonc.2024.12.201.

Full text
34

Taylor, Greg. "Risks Special Issue on “Granular Models and Machine Learning Models”." Risks 8, no. 1 (2019): 1. http://dx.doi.org/10.3390/risks8010001.

Full text
35

Guo, Peng, Yanqing Yang, Wei Guo, and Yanping Shen. "A Fair Contribution Measurement Method for Federated Learning." Sensors 24, no. 15 (2024): 4967. http://dx.doi.org/10.3390/s24154967.

Full text
Abstract
Federated learning is an effective approach for preserving data privacy and security, enabling machine learning to occur in a distributed environment and promoting its development. However, an urgent problem that needs to be addressed is how to encourage active client participation in federated learning. The Shapley value, a classical concept in cooperative game theory, has been utilized for data valuation in machine learning services. Nevertheless, existing numerical evaluation schemes based on the Shapley value are impractical, as they necessitate additional model training, leading to increased communication overhead. Moreover, participants' data may exhibit Non-IID characteristics, posing a significant challenge to evaluating participant contributions. Non-IID data greatly affect the accuracy of the global model, weaken the marginal effect of individual participants, and lead to underestimated contribution measurements. Current work often overlooks the impact of heterogeneity on model aggregation. This paper presents a fair federated learning contribution measurement scheme that avoids the need for additional model computations. By introducing a novel aggregation weight, it enhances the accuracy of the contribution measurement. Experiments on the MNIST and Fashion MNIST datasets show that the proposed method can accurately compute the contributions of participants. Compared to existing baseline algorithms, the model accuracy is significantly improved, with a similar time cost.
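The Shapley value mentioned here averages a participant's marginal contribution over orderings of the other participants; because exact computation is exponential in the number of participants, Monte Carlo permutation sampling is a common approximation. A generic sketch (the utility callback is a placeholder, not the paper's aggregation-weighted scheme):

```python
import random

def monte_carlo_shapley(players, utility, num_samples=200):
    """Approximate Shapley values by averaging each player's marginal contribution
    over random permutations. `players` is a list and `utility` maps a set of
    players to a score, e.g. the accuracy of a model aggregated from their updates."""
    values = {p: 0.0 for p in players}
    for _ in range(num_samples):
        order = random.sample(players, len(players))
        coalition, prev = set(), utility(set())
        for p in order:
            coalition.add(p)
            curr = utility(coalition)
            values[p] += curr - prev
            prev = curr
    return {p: v / num_samples for p, v in values.items()}
```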
36

Chowdhury, Somnath Basu Roy, and Snigdha Chaturvedi. "Learning Fair Representations via Rate-Distortion Maximization." Transactions of the Association for Computational Linguistics 10 (2022): 1159–74. http://dx.doi.org/10.1162/tacl_a_00512.

Full text
Abstract
Text representations learned by machine learning models often encode undesirable demographic information of the user. Predictive models based on these representations can rely on such information, resulting in biased decisions. We present a novel debiasing technique, Fairness-aware Rate Maximization (FaRM), that removes protected information by making representations of instances belonging to the same protected attribute class uncorrelated, using the rate-distortion function. FaRM is able to debias representations with or without a target task at hand. FaRM can also be adapted to remove information about multiple protected attributes simultaneously. Empirical evaluations show that FaRM achieves state-of-the-art performance on several datasets, and learned representations leak significantly less protected attribute information against an attack by a non-linear probing network.
37

Primawati, Primawati, Fitrah Qalbina, Mulyanti Mulyanti, et al. "Predictive Maintenance of Old Grinding Machines Using Machine Learning Techniques." Journal of Applied Engineering and Technological Science (JAETS) 6, no. 2 (2025): 874–88. https://doi.org/10.37385/jaets.v6i2.6417.

Full text
Abstract
This study aims to develop a predictive maintenance system for an aging vertical grinding machine, operational since 1978, by integrating machine learning techniques, vibration analysis, and fuzzy logic. The research addresses the challenges of increased wear and unexpected failures in older machinery, which can lead to costly downtime and reduced operational efficiency. Vibration and temperature data were collected over 12 days using an MPU-9250 accelerometer, with conditions categorized as good, fair, and faulty. Various machine learning models, including logistic regression, k-nearest neighbors, support vector machines, decision trees, random forest, and Naive Bayes, were trained to classify bearing states. The random forest model achieved the highest accuracy of 94.59%, demonstrating its effectiveness in predicting machine failures. The results highlight the potential of combining multi-dimensional sensor data with advanced analytics to enable early fault detection, minimize downtime, and improve operational efficiency. This approach provides a cost-effective solution for maintaining aging machinery and contributes to both theoretical advancements in machine learning applications and practical improvements in industrial maintenance practices. The study’s findings offer scalable insights for industries reliant on legacy equipment, promoting sustainable manufacturing through optimized resource use and enhanced reliability.
38

Ahire, Pritam, Atish Agale, and Mayur Augad. "Machine Learning for Forecasting Promotions." International Journal of Science and Healthcare Research 8, no. 2 (2023): 329–33. http://dx.doi.org/10.52403/ijshr.20230242.

Full text
Abstract
Employee promotion is an important aspect of an employee's career growth and job satisfaction. Organizations need to ensure that the promotion process is fair and unbiased. However, the promotion process can be complicated, and many factors need to be considered before deciding on a promotion. The use of data analytics and machine learning algorithms has become increasingly popular in recent years, and organizations can leverage these tools to predict employee promotion. In this paper, we present a web-based application for employee promotion prediction that uses the Naive Bayes algorithm. Our application uses data from employees and trains a Naive Bayes model to predict employee promotion. We use Python libraries for data analysis and machine learning, working in the Spyder IDE, and an SQLite database for login and data storage. See reference [1] for a more detailed theoretical explanation of this project. Keywords: classification, machine learning, prediction, confusion matrix, Naive Bayes algorithm, attributes.
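Since the abstract names the stack (Python, Naive Bayes, a small tabular dataset), a minimal sketch of the modelling step; the column names and toy data are invented for illustration, and GaussianNB stands in for "the Naive Bayes algorithm":

```python
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Invented toy data standing in for the employee records described in the abstract.
df = pd.DataFrame({
    "years_of_service":    [2, 7, 4, 10, 1, 6, 3, 8],
    "performance_score":   [3, 5, 4, 5, 2, 4, 3, 5],
    "trainings_completed": [1, 4, 2, 5, 0, 3, 1, 4],
    "promoted":            [0, 1, 0, 1, 0, 1, 0, 1],
})
X, y = df.drop(columns="promoted"), df["promoted"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
print(confusion_matrix(y_te, model.predict(X_te)))
```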
39

Heidrich, Louisa, Emanuel Slany, Stephan Scheele, and Ute Schmid. "FairCaipi: A Combination of Explanatory Interactive and Fair Machine Learning for Human and Machine Bias Reduction." Machine Learning and Knowledge Extraction 5, no. 4 (2023): 1519–38. http://dx.doi.org/10.3390/make5040076.

Full text
Abstract
The rise of machine-learning applications in domains with critical end-user impact has led to a growing concern about the fairness of learned models, with the goal of avoiding biases that negatively impact specific demographic groups. Most existing bias-mitigation strategies adapt the importance of data instances during pre-processing. Since fairness is a contextual concept, we advocate for an interactive machine-learning approach that enables users to provide iterative feedback for model adaptation. Specifically, we propose to adapt the explanatory interactive machine-learning approach Caipi for fair machine learning. FairCaipi incorporates human feedback in the loop on predictions and explanations to improve the fairness of the model. Experimental results demonstrate that FairCaipi outperforms a state-of-the-art pre-processing bias mitigation strategy in terms of the fairness and the predictive performance of the resulting machine-learning model. We show that FairCaipi can both uncover and reduce bias in machine-learning models and allows us to detect human bias.
APA, Harvard, Vancouver, ISO, and other citation styles
42

Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits." Proceedings of the VLDB Endowment 17, no. 5 (2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.

Full text
Abstract
Biased data can lead to unfair machine learning models, highlighting the importance of embedding fairness at the beginning of data analysis, particularly during dataset curation and labeling. In response, we propose Falcon, a scalable fair active learning framework. Falcon adopts a data-centric approach that improves machine learning model fairness via strategic sample selection. Given a user-specified group fairness measure, Falcon identifies samples from "target groups" (e.g., (attribute=female, label=positive)) that are the most informative for improving fairness. However, a challenge arises since these target groups are defined using ground truth labels that are not available during sample selection. To handle this, we propose a novel trial-and-error method, where we postpone using a sample if the predicted label is different from the expected one and falls outside the target group. We also observe the trade-off that selecting more informative samples results in higher likelihood of postponing due to undesired label prediction, and the optimal balance varies per dataset. We capture the trade-off between informativeness and postpone rate as policies and propose to automatically select the best policy using adversarial multi-armed bandit methods, given their computational efficiency and theoretical guarantees. Experiments show that Falcon significantly outperforms existing fair active learning approaches in terms of fairness and accuracy and is more efficient. In particular, only Falcon supports a proper trade-off between accuracy and fairness where its maximum fairness score is 1.8--4.5x higher than the second-best results.
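The abstract describes selecting among labeling policies with adversarial multi-armed bandits. The sketch below is a simplified, self-contained illustration of that idea using the standard EXP3 algorithm, where each arm is a candidate sample-selection policy and the reward is the observed fairness gain after labeling a batch; it is not the authors' Falcon implementation, and the function names and reward scaling are assumptions.

# Simplified EXP3 sketch for choosing among candidate labeling policies.
# Each arm is a policy; run_policy(policy) must return a reward in [0, 1]
# (e.g., the fairness improvement measured after labeling one batch).
import numpy as np

def exp3_policy_selection(policies, run_policy, rounds=50, gamma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    k = len(policies)
    weights = np.ones(k)
    for _ in range(rounds):
        probs = (1 - gamma) * weights / weights.sum() + gamma / k
        arm = rng.choice(k, p=probs)
        reward = run_policy(policies[arm])       # label a batch, measure fairness gain
        estimate = reward / probs[arm]           # importance-weighted reward estimate
        weights[arm] *= np.exp(gamma * estimate / k)
    return policies[int(np.argmax(weights))]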
APA, Harvard, Vancouver, ISO, and other citation styles
43

Pandey, Divya, Zohaib Hasan, Pradeep Soni, and Sujeet Padit. "Achieving Equity in Machine Learning: Technical Solutions and Societal Implications." International Journal of Innovative Research in Computer and Communication Engineering 10, no. 12 (2023): 8690–96. http://dx.doi.org/10.15680/ijircce.2022.1012034.

Full text
Abstract
The rapid advancement and widespread adoption of machine learning (ML) technologies have transformed numerous industries, including healthcare and finance. While these innovations have introduced significant benefits and efficiencies, they have also raised critical ethical and fairness concerns. As machine learning models increasingly influence decision-making processes, ensuring these models operate in a fair and unbiased manner has become an essential aspect of their deployment. Ethical issues in machine learning primarily revolve around the potential for biased outcomes, lack of transparency, and the inadvertent reinforcement of societal inequalities. This paper explores the current state of ethical and fairness solutions in machine learning, highlighting key methodologies and frameworks addressing these pressing issues. The proposed method demonstrates a high level of performance, with an accuracy of 97.6%, a mean absolute error (MAE) of 0.403, and a root mean square error (RMSE) of 0.203. By examining both the technical advancements and the broader ethical considerations, this study seeks to provide a holistic view of the efforts being made to ensure that machine learning technologies are deployed in a manner that is both fair and ethical.
APA, Harvard, Vancouver, ISO, and other citation styles
44

Fitzsimons, Jack, AbdulRahman Al Ali, Michael Osborne, and Stephen Roberts. "A General Framework for Fair Regression." Entropy 21, no. 8 (2019): 741. http://dx.doi.org/10.3390/e21080741.

Full text
Abstract
Fairness, through its many forms and definitions, has become an important issue facing the machine learning community. In this work, we consider how to incorporate group fairness constraints into kernel regression methods, applicable to Gaussian processes, support vector machines, neural network regression, and decision tree regression. Further, we focus on examining the effect of incorporating these constraints in decision tree regression, with direct applications to random forests and boosted trees, amongst other widespread inference techniques. We show that the order of complexity of memory and computation is preserved for such models, and we tightly bound the expected perturbation to the model in terms of the number of leaves of the trees. Importantly, the approach works on trained models, so it can easily be applied to models in current use, and group labels are only required on the training data.
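To make the notion of group fairness in regression concrete, the following is a minimal, much-simplified stand-in: a mean-parity post-processing applied to the predictions of an already-trained regressor. It is not the constrained kernel and tree formulation developed in the paper; the function and its parity criterion are illustrative assumptions.

# Minimal illustration of one group-fairness idea for regression:
# shift each group's predictions so all group means match the overall mean.
# This is a simplified stand-in, not the paper's constrained formulation.
import numpy as np

def mean_parity_adjust(model, X, group):
    """model: any fitted regressor (e.g., a DecisionTreeRegressor);
    group: array of protected-group labels aligned with the rows of X."""
    preds = model.predict(X)
    overall = preds.mean()
    adjusted = preds.copy()
    for g in np.unique(group):
        mask = (group == g)
        adjusted[mask] += overall - preds[mask].mean()
    return adjusted

# usage (hypothetical): fair_preds = mean_parity_adjust(trained_tree, X_test, group_test)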
APA, Harvard, Vancouver, ISO, and other citation styles
45

Gaikar, Asha, Dr Uttara Gogate, and Amar Panchal. "Review on Evaluation of Stroke Prediction Using Machine Learning Methods." International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (2023): 1011–17. http://dx.doi.org/10.22214/ijraset.2023.50262.

Full text
Abstract
This research proposes early prediction of stroke using several machine learning approaches: a logistic regression classifier, a decision tree classifier, a support vector machine, and a random forest classifier. By training each algorithm on the same dataset with the same number of features and computing its accuracy, the paper offers a fair comparison of different stroke prediction algorithms and helps researchers identify the best-performing machine learning algorithm for stroke prediction.
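A short sketch of the comparison set-up described above: evaluate the four classifiers on the same dataset and feature set with cross-validated accuracy. The data file is hypothetical and the features are assumed to be numeric; neither detail comes from the review.

# Sketch: compare classifiers on the same dataset/features via cross-validation.
# The CSV file and column names are assumptions; features assumed numeric.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("stroke.csv")                   # hypothetical stroke dataset
X, y = df.drop(columns=["stroke"]), df["stroke"]

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f}")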
APA, Harvard, Vancouver, ISO, and other citation styles
46

Feder, Toni. "Research facilities strive for fair and efficient time allocation." Physics Today 77, no. 9 (2024): 20–22. http://dx.doi.org/10.1063/pt.jvgy.emrz.

Full text
APA, Harvard, Vancouver, ISO, and other citation styles
47

Khan, Shahid, Viktor Klochkov, Olha Lavoryk, et al. "Machine Learning Application for Λ Hyperon Reconstruction in CBM at FAIR." EPJ Web of Conferences 259 (2022): 13008. http://dx.doi.org/10.1051/epjconf/202225913008.

Full text
Abstract
The Compressed Baryonic Matter (CBM) experiment at FAIR will investigate the QCD phase diagram in the region of high net-baryon densities. Enhanced production of strange baryons, such as the most abundantly produced Λ hyperons, can signal a transition to a new phase of QCD matter. In this work, the CBM performance for reconstruction of the Λ hyperon via its decay to a proton and a π− is presented. Decay topology reconstruction is implemented in the Particle-Finder Simple (PFSimple) package, with machine learning algorithms providing efficient selection of the decays and a high signal-to-background ratio.
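As a rough illustration of this kind of decay-candidate selection, the sketch below trains a gradient-boosted classifier to separate Λ → p + π− signal candidates from combinatorial background using typical topological variables. The feature names and data file are generic assumptions, not the actual PFSimple configuration or variables used by the authors.

# Illustrative sketch: signal/background selection of decay candidates from
# topological variables with a gradient-boosted classifier.
# The candidate table and feature names are assumptions, not from the paper.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("lambda_candidates.csv")        # hypothetical candidate table
features = ["decay_length", "dca_daughters", "pointing_angle", "chi2_topo"]
X, y = df[features], df["is_signal"]             # 1 = true Lambda, 0 = background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))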
APA, Harvard, Vancouver, ISO, and other citation styles
48

Karim, Rizwan, and Muhammad Imran Asjad. "A Fair Approach to Heart Disease Prediction: Leveraging Machine Learning Model." Systems Assessment and Engineering Management 2 (December 1, 2024): 23–32. https://doi.org/10.61356/j.saem.2024.2438.

Full text
Abstract
This study addresses the tasks of diagnosing and predicting cardiovascular disease, which are crucial for cardiologists to classify and treat patients accurately. Building on the growing use of machine learning for pattern recognition from data, the research introduces a model for predicting cardiovascular disease whose main objectives are to reduce misdiagnosis rates and minimize fatalities. The proposed approach combines logistic regression with a fairness component. The model is trained on a real-world dataset of 70,000 instances obtained from Kaggle, split into 70% for training and 30% for testing, and is evaluated on both accuracy and fairness metrics. Applying reweighing techniques yields improvements in both accuracy and fairness. In conclusion, the research suggests that fairness-aware machine learning models can perform well, achieving an accuracy of 72% with a fairness value of 0.009.
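The sketch below shows the standard Kamiran-Calders-style reweighing scheme that this kind of pipeline typically pairs with logistic regression: each training instance is weighted by P(A=a)P(Y=y) / P(A=a, Y=y) so that the protected attribute and the label decouple in the weighted data. It is an illustration of the general technique under assumed variable names, not the authors' exact pipeline.

# Reweighing sketch: compute group/label weights and fit a weighted logistic regression.
# Variable names (A_train, X_train, y_train) are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(a, y):
    """a: protected-attribute values; y: binary labels. Returns per-instance weights."""
    a, y = np.asarray(a), np.asarray(y)
    w = np.ones(len(y))
    for ai in np.unique(a):
        for yi in np.unique(y):
            mask = (a == ai) & (y == yi)
            expected = (a == ai).mean() * (y == yi).mean()   # P(A=a) * P(Y=y)
            observed = mask.mean()                           # P(A=a, Y=y)
            if observed > 0:
                w[mask] = expected / observed
    return w

# usage (hypothetical cardiovascular dataset):
# w = reweighing_weights(A_train, y_train)
# model = LogisticRegression(max_iter=1000).fit(X_train, y_train, sample_weight=w)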
APA, Harvard, Vancouver, ISO, and other citation styles
49

Ning, Yilin, Siqi Li, Yih Yng Ng, et al. "Variable importance analysis with interpretable machine learning for fair risk prediction." PLOS Digital Health 3, no. 7 (2024): e0000542. http://dx.doi.org/10.1371/journal.pdig.0000542.

Full text
Abstract
Machine learning (ML) methods are increasingly used to assess variable importance, but such black box models lack stability when limited in sample sizes, and do not formally indicate non-important factors. The Shapley variable importance cloud (ShapleyVIC) addresses these limitations by assessing variable importance from an ensemble of regression models, which enhances robustness while maintaining interpretability, and estimates uncertainty of overall importance to formally test its significance. In a clinical study, ShapleyVIC reasonably identified important variables when the random forest and XGBoost failed to, and generally reproduced the findings from smaller subsamples (n = 2500 and 500) when statistical power of the logistic regression became attenuated. Moreover, ShapleyVIC reasonably estimated non-significant importance of race to justify its exclusion from the final prediction model, as opposed to the race-dependent model from the conventional stepwise model building. Hence, ShapleyVIC is robust and interpretable for variable importance assessment, with potential contribution to fairer clinical risk prediction.
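To give a flavor of the core idea (variable importance assessed over an ensemble of plausible models, with an uncertainty estimate, rather than from a single fit), the following is a much-simplified illustration using permutation importance over bootstrap refits of a logistic regression. It is not the ShapleyVIC algorithm itself, and all parameter choices are assumptions.

# Much-simplified ensemble-importance sketch (not ShapleyVIC): average permutation
# importance over bootstrap refits of a logistic regression and report its spread.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.utils import resample

def ensemble_importance(X, y, n_models=30, random_state=0):
    rng = np.random.RandomState(random_state)
    scores = []
    for _ in range(n_models):
        Xb, yb = resample(X, y, random_state=rng)            # bootstrap sample
        model = LogisticRegression(max_iter=1000).fit(Xb, yb)
        result = permutation_importance(model, X, y, n_repeats=5, random_state=rng)
        scores.append(result.importances_mean)
    scores = np.array(scores)
    return scores.mean(axis=0), scores.std(axis=0)           # importance and its spread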
APA, Harvard, Vancouver, ISO, and other citation styles
50

Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.

Full text
Abstract
Developing efficient processes for building machine learning (ML) applications is an emerging research topic. One of the well-known frameworks for organizing, developing, and deploying predictive machine learning models is the cross-industry standard process for data mining (CRISP-DM). However, the framework does not provide any guidelines for detecting and mitigating different types of fairness-related biases in the development of ML applications, and the study of these biases is a relatively recent stream of research. To address this significant theoretical and practical gap, we propose a new framework, Fair CRISP-DM, which groups and maps these biases to each phase of ML application development. Through this study, we contribute to the literature on ML development and fairness. We present recommendations to ML researchers on including fairness as part of the ML evaluation process. Further, ML practitioners can use our framework to identify and mitigate fairness-related biases in each phase of an ML project's development. Finally, we discuss emerging technologies that can help developers detect and mitigate biases at different stages of ML application development.
APA, Harvard, Vancouver, ISO, and other citation styles
