Journal articles on the topic "Fair Machine Learning"

See the top 50 journal articles for studies on the topic "Fair Machine Learning".

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Basu Roy Chowdhury, Somnath, and Snigdha Chaturvedi. "Sustaining Fairness via Incremental Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6797–805. http://dx.doi.org/10.1609/aaai.v37i6.25833.

Abstract:
Machine learning systems are often deployed for making critical decisions like credit lending, hiring, etc. While making decisions, such systems often encode the user's demographic information (like gender, age) in their intermediate representations. This can lead to decisions that are biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair with changes in the task or demographic distribution. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data in an incremental fashion. In this work, we propose to address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL is able to achieve fairness and learn new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines.

2

Perello, Nick, and Przemyslaw Grabowicz. "Fair Machine Learning Post Affirmative Action". ACM SIGCAS Computers and Society 52, no. 2 (September 2023): 22. http://dx.doi.org/10.1145/3656021.3656029.

Abstract:
The U.S. Supreme Court, in a 6-3 decision on June 29, effectively ended the use of race in college admissions [1]. Indeed, national polls found that a plurality of Americans - 42%, according to a poll conducted by the University of Massachusetts [2] - agree that the policy should be discontinued, while 33% support its continued use in admissions decisions. As scholars of fair machine learning, we ponder how the Supreme Court decision shifts points of focus in the field. The most popular fair machine learning methods aim to achieve some form of "impact parity" by diminishing or removing the correlation between decisions and protected attributes, such as race or gender, similarly to the 80% rule of thumb of the Equal Employment Opportunity Commission. Impact parity can be achieved by reversing historical discrimination, which corresponds to affirmative action, or by diminishing or removing the influence of the attributes correlated with the protected attributes, which is impractical as it severely undermines model accuracy. Besides, impact disparity is not necessarily a bad thing, e.g., African-American patients suffer from a higher rate of chronic illnesses than White patients and, hence, it may be justified to admit them to care programs at a proportionally higher rate [3]. The U.S. burden-shifting framework under Title VII offers solutions alternative to impact parity. To determine employment discrimination, U.S. courts rely on the McDonnell-Douglas burden-shifting framework where the explanations, justifications, and comparisons of employment practices play a central role. Can similar methods be applied in machine learning?
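
As a concrete illustration of the 80% rule of thumb mentioned in the abstract, the sketch below computes the disparate impact ratio of binary decisions and checks it against the EEOC four-fifths threshold; the data arrays are hypothetical.

```python
import numpy as np

def disparate_impact_ratio(decisions, group):
    # Ratio of the lowest to the highest positive-decision rate
    # across groups; the EEOC "four-fifths" rule of thumb flags
    # ratios below 0.8.
    decisions, group = np.asarray(decisions, float), np.asarray(group)
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical hiring decisions (1 = hired) for two groups.
rng = np.random.default_rng(0)
group = rng.choice(["a", "b"], size=1000)
decisions = rng.binomial(1, np.where(group == "a", 0.30, 0.22))

ratio = disparate_impact_ratio(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}, passes 80% rule: {ratio >= 0.8}")
```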

3

Oneto, Luca. "Learning fair models and representations". Intelligenza Artificiale 14, no. 1 (September 17, 2020): 151–78. http://dx.doi.org/10.3233/ia-190034.

Abstract:
Machine learning based systems and products are reaching society at large in many aspects of everyday life, including financial lending, online advertising, pretrial and immigration detention, child maltreatment screening, health care, social services, and education. This phenomenon has been accompanied by an increase in concern about the ethical issues that may arise from the adoption of these technologies. In response to this concern, a new area of machine learning has recently emerged that studies how to address disparate treatment caused by algorithmic errors and bias in the data. The central question is how to ensure that the learned model does not treat subgroups in the population unfairly. While the design of solutions to this issue requires an interdisciplinary effort, fundamental progress can only be achieved through a radical change in the machine learning paradigm. In this work, we will describe the state of the art on algorithmic fairness using statistical learning theory, machine learning, and deep learning approaches that are able to learn fair models and data representation.

4

Kim, Yun-Myung. "Data and Fair use". Korea Copyright Commission 141 (March 30, 2023): 5–53. http://dx.doi.org/10.30582/kdps.2023.36.1.5.

Abstract:
Data collection and use are the beginning and end of machine learning. As ChatGPT shows, data is making machines comparable to human capabilities. Commercial purposes are not automatically rejected when judging whether producing or securing data for system training is fair use. The UK, Germany, and the EU are introducing copyright exceptions for data mining for non-profit purposes such as research, and Japan is even more active. Japan legislated actively because it lacks a comprehensive fair use provision like those of Korea and the United States, and its legislation signals a willingness to lead the artificial intelligence industry. In 2020, a revision to the Copyright Act was proposed in Korea to introduce an exception for information analysis, which would increase predictability for operators. However, the amendment is expected to face opposition from rights holders and may take time. This article therefore examines whether machine learning practices such as data crawling and TDM qualify as fair use under the current copyright law. It concludes that they may, on the ground that they differ from human uses of works. Even so, it is questionable whether it is reasonable to let business operators capture all the profits from using others' works under fair use; a compensation scheme for the profits operators earn from works generated through TDM or machine learning should not be ruled out, given the possible serious consequences for a fair competitive environment.

5

Zhang, Xueru, Mohammad Mahdi Khalili, and Mingyan Liu. "Long-Term Impacts of Fair Machine Learning". Ergonomics in Design: The Quarterly of Human Factors Applications 28, no. 3 (October 25, 2019): 7–11. http://dx.doi.org/10.1177/1064804619884160.

Abstract:
Machine learning models developed from real-world data can inherit potential, preexisting bias in the dataset. When these models are used to inform decisions involving human beings, fairness concerns inevitably arise. Imposing certain fairness constraints in the training of models can be effective only if appropriate criteria are applied. However, a fairness criterion can be defined/assessed only when the interaction between the decisions and the underlying population is well understood. We introduce two feedback models describing how people react when receiving machine-aided decisions and illustrate that some commonly used fairness criteria can end with undesirable consequences while reinforcing discrimination.

6

Zhu, Yunlan. "The Comparative Analysis of Fair Use of Works in Machine Learning". SHS Web of Conferences 178 (2023): 01015. http://dx.doi.org/10.1051/shsconf/202317801015.

Abstract:
Before generative AI outputs the content, it copies a large amount of text content. This process is machine learning. For the development of artificial intelligence technology and cultural prosperity, many countries have included machine learning within the scope of fair use. However, China’s copyright law currently does not legislate the fair use of machine learning works. This paper will construct a Chinese model of fair use of machine learning works through comparative analysis of the legislation of other countries. This is a fair use model that balances the flexibility of the United States with the rigor of the European Union.

7

Redko, Ievgen, and Charlotte Laclau. "On Fair Cost Sharing Games in Machine Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4790–97. http://dx.doi.org/10.1609/aaai.v33i01.33014790.

Abstract:
Machine learning and game theory are known to exhibit a very strong link as they mutually provide each other with solutions and models allowing to study and analyze the optimal behaviour of a set of agents. In this paper, we take a closer look at a special class of games, known as fair cost sharing games, from a machine learning perspective. We show that this particular kind of games, where agents can choose between selfish behaviour and cooperation with shared costs, has a natural link to several machine learning scenarios including collaborative learning with homogeneous and heterogeneous sources of data. We further demonstrate how the game-theoretical results bounding the ratio between the best Nash equilibrium (or its approximate counterpart) and the optimal solution of a given game can be used to provide the upper bound of the gain achievable by the collaborative learning expressed as the expected risk and the sample complexity for homogeneous and heterogeneous cases, respectively. We believe that the established link can spur many possible future implications for other learning scenarios as well, with privacy-aware learning being among the most noticeable examples.

8

Lee, Joshua, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory W. Wornell, Leonid Karlinsky, and Rogerio Schmidt Feris. "A Maximal Correlation Framework for Fair Machine Learning". Entropy 24, no. 4 (March 26, 2022): 461. http://dx.doi.org/10.3390/e24040461.

Abstract:
As machine learning algorithms grow in popularity and diversify to many industries, ethical and legal concerns regarding their fairness have become increasingly relevant. We explore the problem of algorithmic fairness, taking an information–theoretic view. The maximal correlation framework is introduced for expressing fairness constraints and is shown to be capable of being used to derive regularizers that enforce independence and separation-based fairness criteria, which admit optimization algorithms for both discrete and continuous variables that are more computationally efficient than existing algorithms. We show that these algorithms provide smooth performance–fairness tradeoff curves and perform competitively with state-of-the-art methods on both discrete datasets (COMPAS, Adult) and continuous datasets (Communities and Crimes).
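
To make the idea of a dependence-penalizing fairness regularizer concrete (this is a linear surrogate, not the authors' maximal-correlation construction), the sketch below penalizes the squared Pearson correlation between model scores and a binary protected attribute during training; PyTorch and all shapes are assumptions.

```python
import torch

def corr_penalty(scores, s):
    # Squared Pearson correlation between scores and the protected
    # attribute: a crude linear stand-in for the maximal-correlation
    # (HGR) dependence measure discussed in the paper.
    scores, s = scores - scores.mean(), s - s.mean()
    corr = (scores * s).mean() / (scores.std() * s.std() + 1e-8)
    return corr ** 2

torch.manual_seed(0)
X = torch.randn(512, 8)                      # hypothetical features
y = torch.randint(0, 2, (512,)).float()      # labels
s = torch.randint(0, 2, (512,)).float()      # protected attribute

model = torch.nn.Linear(8, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = torch.nn.BCEWithLogitsLoss()
lam = 5.0                                    # fairness/accuracy knob

for _ in range(200):
    opt.zero_grad()
    logits = model(X).squeeze(-1)
    loss = bce(logits, y) + lam * corr_penalty(torch.sigmoid(logits), s)
    loss.backward()
    opt.step()
```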

9

van Berkel, Niels, Jorge Goncalves, Danula Hettiachchi, Senuri Wijenayake, Ryan M. Kelly, and Vassilis Kostakos. "Crowdsourcing Perceptions of Fair Predictors for Machine Learning". Proceedings of the ACM on Human-Computer Interaction 3, CSCW (November 7, 2019): 1–21. http://dx.doi.org/10.1145/3359130.

10

JEONG, JIN KEUN. "Will the U.S. Court Judge TDM for Artificial Intelligence Machine Learning as Fair Use?" Korea Copyright Commission 144 (December 31, 2023): 215–50. http://dx.doi.org/10.30582/kdps.2023.36.4.215.

Abstract:
A representative debate is whether TDM (text and data mining) in the machine learning process, which occurs when AI uses other people's copyrighted works through unauthorized means such as copying, accords with the fair use principle. The issue is whether such use can be exempted from copyright infringement. Korean scholars tend to start from the optimistic perspective that U.S. courts will view AI TDM or AI machine learning as fair use under the fair use principle. Nevertheless, there is no direct basis for the claim that U.S. courts will treat AI TDM or AI machine learning as fair use, because no U.S. court has yet recognized fair use in a case squarely involving AI TDM or AI machine learning. Meanwhile, the Internet Archive case and the Andy Warhol case show courts hesitant to expand the fair use principle, giving rise to pessimistic views on whether the use of other people's copyrighted works in the AI TDM or AI machine learning process will be considered fair use. Taking this into consideration, American scholars are also developing the argument that the use of copyrighted works for AI machine learning or AI TDM should be considered fair use. Therefore, the positive stance on the possibility of a TDM exemption under Article 35-5 of the Korean Copyright Act needs to be carefully reexamined.

11

Edwards, Chris. "AI Struggles with Fair Use". New Electronics 56, no. 9 (September 2023): 40–41. http://dx.doi.org/10.12968/s0047-9624(24)60063-5.

12

Jang, Taeuk, Feng Zheng, and Xiaoqian Wang. "Constructing a Fair Classifier with Generated Fair Data". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7908–16. http://dx.doi.org/10.1609/aaai.v35i9.16965.

Abstract:
Fairness in machine learning is getting rising attention as it is directly related to real-world applications and social problems. Recent methods have been explored to alleviate the discrimination between certain demographic groups that are characterized by sensitive attributes (such as race, age, or gender). Some studies have found that the data itself is biased, so training directly on the data causes unfair decision making. Models directly trained on raw data can replicate or even exacerbate bias in the prediction between demographic groups. This leads to vastly different prediction performance in different demographic groups. In order to address this issue, we propose a new approach to improve machine learning fairness by generating fair data. We introduce a generative model to generate cross-domain samples w.r.t. multiple sensitive attributes. This ensures that we can generate an infinite number of samples that are balanced w.r.t. both target label and sensitive attributes to enhance fair prediction. By training the classifier solely on the synthetic data and then transferring the model to real data, we can overcome the under-representation problem, which is non-trivial since collecting real data is extremely time- and resource-consuming. We provide empirical evidence to demonstrate the benefit of our model with respect to both fairness and accuracy.
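
The generative model itself is beyond a short sketch, but the balancing property it provides can be shown with plain resampling: equalize every (label, sensitive attribute) cell before training. The code below is a crude, non-generative stand-in with hypothetical column names.

```python
import numpy as np
import pandas as pd

def balance_cells(df, label_col, attr_col, seed=0):
    # Oversample so that every (label, attribute) cell has the same
    # size -- the balance the paper obtains by sampling from a
    # trained generative model.
    cells = df.groupby([label_col, attr_col])
    target = cells.size().max()
    parts = [g.sample(target, replace=True, random_state=seed)
             for _, g in cells]
    return pd.concat(parts).sample(frac=1.0, random_state=seed)

# Hypothetical skewed dataset.
df = pd.DataFrame({
    "y":   [1] * 80 + [0] * 20 + [1] * 30 + [0] * 70,
    "sex": ["m"] * 100 + ["f"] * 100,
    "x":   np.random.default_rng(0).normal(size=200),
})
print(balance_cells(df, "y", "sex").groupby(["y", "sex"]).size())
```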

13

Chandra, Rushil, Karun Sanjaya, AR Aravind, Ahmed Radie Abbas, Ruzieva Gulrukh, and T. S. Senthil kumar. "Algorithmic Fairness and Bias in Machine Learning Systems". E3S Web of Conferences 399 (2023): 04036. http://dx.doi.org/10.1051/e3sconf/202339904036.

Abstract:
In recent years, research into and concern over algorithmic fairness and bias in machine learning systems has grown significantly. It is vital to make sure that these systems are fair, impartial, and do not support discrimination or social injustices since machine learning algorithms are becoming more and more prevalent in decision-making processes across a variety of disciplines. This abstract gives a general explanation of the idea of algorithmic fairness, the difficulties posed by bias in machine learning systems, and different solutions to these problems. Algorithmic bias and fairness in machine learning systems are crucial issues in this regard that demand the attention of academics, practitioners, and policymakers. Building fair and unbiased machine learning systems that uphold equality and prevent discrimination requires addressing biases in training data, creating fairness-aware algorithms, encouraging transparency and interpretability, and encouraging diversity and inclusivity.

14

Brotcke, Liming. "Time to Assess Bias in Machine Learning Models for Credit Decisions". Journal of Risk and Financial Management 15, no. 4 (April 5, 2022): 165. http://dx.doi.org/10.3390/jrfm15040165.

Abstract:
Focus on fair lending has become more intensified recently as bank and non-bank lenders apply artificial-intelligence (AI)-based credit determination approaches. The data analytics technique behind AI and machine learning (ML) has proven to be powerful in many application areas. However, ML can be less transparent and explainable than traditional regression models, which may raise unique questions about its compliance with fair lending laws. ML may also reduce potential for discrimination, by reducing discretionary and judgmental decisions. As financial institutions continue to explore ML applications in loan underwriting and pricing, the fair lending assessments typically led by compliance and legal functions will likely continue to evolve. In this paper, the author discusses unique considerations around ML in the existing fair lending risk assessment practice for underwriting and pricing models and proposes consideration of additional evaluations to be added in the present practice.

15

Tian, Xiao, Rachael Hwee Ling Sim, Jue Fan, and Bryan Kian Hsiang Low. "DeRDaVa: Deletion-Robust Data Valuation for Machine Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15373–81. http://dx.doi.org/10.1609/aaai.v38i14.29462.

Abstract:
Data valuation is concerned with determining a fair valuation of data from data sources to compensate them or to identify training examples that are the most or least useful for predictions. With the rising interest in personal data ownership and data protection regulations, model owners will likely have to fulfil more data deletion requests. This raises issues that have not been addressed by existing works: Are the data valuation scores still fair with deletions? Must the scores be expensively recomputed? The answer is no. To avoid recomputations, we propose using our data valuation framework DeRDaVa upfront for valuing each data source's contribution to preserving robust model performance after anticipated data deletions. DeRDaVa can be efficiently approximated and will assign higher values to data that are more useful or less likely to be deleted. We further generalize DeRDaVa to Risk-DeRDaVa to cater to risk-averse/seeking model owners who are concerned with the worst/best-cases model utility. We also empirically demonstrate the practicality of our solutions.

16

Plečko, Drago, and Elias Bareinboim. "Causal Fairness Analysis: A Causal Toolkit for Fair Machine Learning". Foundations and Trends® in Machine Learning 17, no. 3 (2024): 304–589. http://dx.doi.org/10.1561/2200000106.

17

Sun, Shao Chao, and Dao Huang. "A Novel Robust Smooth Support Vector Machine". Applied Mechanics and Materials 148-149 (December 2011): 1438–41. http://dx.doi.org/10.4028/www.scientific.net/amm.148-149.1438.

Abstract:
In this paper, we propose a new type of ε-insensitive loss function, called the ε-insensitive Fair estimator, with which we can obtain better robustness and sparseness. To enhance the learning speed, we apply the smoothing techniques that have been used for solving the support vector machine for classification to replace the ε-insensitive Fair estimator with an accurate smooth approximation. This allows us to solve ε-SFSVR directly as an unconstrained minimization problem. Based on the simulation results, the proposed approach has fast learning speed and better generalization performance whether or not outliers exist.

18

Firestone, Chaz. "Performance vs. competence in human–machine comparisons". Proceedings of the National Academy of Sciences 117, no. 43 (October 13, 2020): 26562–71. http://dx.doi.org/10.1073/pnas.1905334117.

Abstract:
Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. when such failures are only superficial or peripheral? This article draws on a foundational insight from cognitive science—the distinction between performance and competence—to encourage “species-fair” comparisons between humans and machines. The performance/competence distinction urges us to consider whether the failure of a system to behave as ideally hypothesized, or the failure of one creature to behave like another, arises not because the system lacks the relevant knowledge or internal capacities (“competence”), but instead because of superficial constraints on demonstrating that knowledge (“performance”). I argue that this distinction has been neglected by research comparing human and machine behavior, and that it should be essential to any such comparison. Focusing on the domain of image classification, I identify three factors contributing to the species-fairness of human–machine comparisons, extracted from recent work that equates such constraints. Species-fair comparisons level the playing field between natural and artificial intelligence, so that we can separate more superficial differences from those that may be deep and enduring.

19

Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making". Mathematics 11, no. 8 (April 7, 2023): 1771. http://dx.doi.org/10.3390/math11081771.

Abstract:
Most research on fairness in Machine Learning assumes the relationship between fairness and accuracy to be a trade-off, with an increase in fairness leading to an unavoidable loss of accuracy. In this study, several approaches for fair Machine Learning are studied to experimentally analyze the relationship between accuracy and group fairness. The results indicated that group fairness and accuracy may even benefit each other, which emphasizes the importance of selecting appropriate measures for performance evaluation. This work provides a foundation for further studies on the adequate objectives of Machine Learning in the context of fair automated decision making.
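
The quantities involved are easy to compute side by side; a minimal sketch using standard metric definitions (the arrays are hypothetical):

```python
import numpy as np

def group_metrics(y_true, y_pred, group):
    # Accuracy plus two standard group-fairness gaps: demographic
    # parity (selection-rate gap) and equal opportunity (TPR gap).
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    acc = (y_true == y_pred).mean()
    sel, tpr = [], []
    for g in np.unique(group):
        m = group == g
        sel.append(y_pred[m].mean())
        pos = m & (y_true == 1)
        tpr.append(y_pred[pos].mean())
    return acc, max(sel) - min(sel), max(tpr) - min(tpr)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(group_metrics(y_true, y_pred, group))
```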

20

Taylor, Greg. "Risks Special Issue on "Granular Models and Machine Learning Models"". Risks 8, no. 1 (December 30, 2019): 1. http://dx.doi.org/10.3390/risks8010001.

21

Davis, Jenny L., Apryl Williams, and Michael W. Yang. "Algorithmic reparation". Big Data & Society 8, no. 2 (July 2021): 205395172110448. http://dx.doi.org/10.1177/20539517211044808.

Abstract:
Machine learning algorithms pervade contemporary society. They are integral to social institutions, inform processes of governance, and animate the mundane technologies of daily life. Consistently, the outcomes of machine learning reflect, reproduce, and amplify structural inequalities. The field of fair machine learning has emerged in response, developing mathematical techniques that increase fairness based on anti-classification, classification parity, and calibration standards. In practice, these computational correctives invariably fall short, operating from an algorithmic idealism that does not, and cannot, address systemic, Intersectional stratifications. Taking present fair machine learning methods as our point of departure, we suggest instead the notion and practice of algorithmic reparation. Rooted in theories of Intersectionality, reparative algorithms name, unmask, and undo allocative and representational harms as they materialize in sociotechnical form. We propose algorithmic reparation as a foundation for building, evaluating, adjusting, and when necessary, omitting and eradicating machine learning systems.

22

Dhabliya, Dharmesh, Sukhvinder Singh Dari, Anishkumar Dhablia, N. Akhila, Renu Kachhoria, and Vinit Khetani. "Addressing Bias in Machine Learning Algorithms: Promoting Fairness and Ethical Design". E3S Web of Conferences 491 (2024): 02040. http://dx.doi.org/10.1051/e3sconf/202449102040.

Abstract:
Machine learning algorithms have quickly risen to the top of several fields' decision-making processes in recent years. However, these algorithms can easily reinforce prejudices already present in the data, leading to biased and unfair choices. In this work, we examine bias in machine learning in great detail and offer strategies for promoting fair and ethical algorithm design. The paper then emphasizes the value of fairness-aware machine learning algorithms, which aim to lessen bias by including fairness constraints in the training and evaluation procedures. Reweighting, adversarial training, and resampling are a few strategies that can be used to counter bias. Machine learning systems that better serve society and respect ethical ideals can be developed by promoting justice, transparency, and inclusivity. This paper lays the groundwork for researchers, practitioners, and policymakers to advance the cause of ethical and fair machine learning through concerted effort.
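
Of the strategies named above, reweighting has the simplest classical form, the Kamiran-Calders reweighing scheme; the sketch below (hypothetical arrays) computes weights w(s, y) = P(s)P(y) / P(s, y) that make the sensitive attribute and the label statistically independent in the weighted data.

```python
import numpy as np

def reweighing_weights(s, y):
    # Kamiran-Calders reweighing: weight each instance by
    # P(s) * P(y) / P(s, y).
    s, y = np.asarray(s), np.asarray(y)
    w = np.ones(len(y), dtype=float)
    for sv in np.unique(s):
        for yv in np.unique(y):
            cell = (s == sv) & (y == yv)
            if cell.any():
                w[cell] = (s == sv).mean() * (y == yv).mean() / cell.mean()
    return w

s = np.array([0, 0, 0, 1, 1, 1, 1, 1])   # hypothetical sensitive attribute
y = np.array([1, 1, 0, 1, 0, 0, 0, 0])   # hypothetical labels
weights = reweighing_weights(s, y)        # pass as sample_weight to
print(weights)                            # most scikit-learn fit() calls
```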

23

Chowdhury, Somnath Basu Roy, and Snigdha Chaturvedi. "Learning Fair Representations via Rate-Distortion Maximization". Transactions of the Association for Computational Linguistics 10 (2022): 1159–74. http://dx.doi.org/10.1162/tacl_a_00512.

Abstract:
Text representations learned by machine learning models often encode undesirable demographic information of the user. Predictive models based on these representations can rely on such information, resulting in biased decisions. We present a novel debiasing technique, Fairness-aware Rate Maximization (FaRM), that removes protected information by making representations of instances belonging to the same protected attribute class uncorrelated, using the rate-distortion function. FaRM is able to debias representations with or without a target task at hand. FaRM can also be adapted to remove information about multiple protected attributes simultaneously. Empirical evaluations show that FaRM achieves state-of-the-art performance on several datasets, and learned representations leak significantly less protected attribute information against an attack by a non-linear probing network.
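
A rate-distortion (coding rate) function of the kind this line of work builds on can be written compactly; the sketch below uses the common form from the coding-rate (MCR²-style) literature, with the shape convention and ε being assumptions rather than details stated in the abstract.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z, eps) = 1/2 * logdet(I + d / (n * eps^2) * Z @ Z.T)
    # for representations Z of shape (d, n): roughly, the number of
    # bits needed to encode Z up to distortion eps.
    d, n = Z.shape
    M = np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T
    return 0.5 * np.linalg.slogdet(M)[1]

# Debiasing in this spirit *maximizes* the rate of representations
# within each protected-attribute class, decorrelating them.
Z = np.random.default_rng(0).normal(size=(16, 128))
print(coding_rate(Z))
```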

24

Ahire, Pritam, Atish Agale, and Mayur Augad. "Machine Learning for Forecasting Promotions". International Journal of Science and Healthcare Research 8, no. 2 (May 25, 2023): 329–33. http://dx.doi.org/10.52403/ijshr.20230242.

Abstract:
Employee promotion is an important aspect of an employee's career growth and job satisfaction. Organizations need to ensure that the promotion process is fair and unbiased. However, the promotion process can be complicated, and many factors need to be considered before deciding on a promotion. The use of data analytics and machine learning algorithms has become increasingly popular in recent years, and organizations can leverage these tools to predict employee promotion. In this paper, we present a web-based application for employee promotion prediction that uses the Naive Bayes algorithm. Our application uses data from employees and trains a Naive Bayes algorithm to predict employee promotion. We use Python libraries in Spyder for data analysis and machine learning and an SQLite database for login and data storage. Refer to paper [1] in the references for a fuller theoretical explanation of this project. Keywords: classification, machine learning, prediction, confusion matrix, Naive Bayes algorithm, attributes.
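
A minimal version of such a pipeline is a few lines with scikit-learn; the file name and feature columns below are hypothetical, not taken from the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, confusion_matrix

df = pd.read_csv("employees.csv")              # hypothetical file
X = df[["kpi_score", "trainings", "awards", "tenure_years"]]
y = df["promoted"]                             # hypothetical columns

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)             # Naive Bayes classifier

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print(confusion_matrix(y_te, pred))            # per the paper's keywords
```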

25

Heidrich, Louisa, Emanuel Slany, Stephan Scheele, and Ute Schmid. "FairCaipi: A Combination of Explanatory Interactive and Fair Machine Learning for Human and Machine Bias Reduction". Machine Learning and Knowledge Extraction 5, no. 4 (October 18, 2023): 1519–38. http://dx.doi.org/10.3390/make5040076.

Abstract:
The rise of machine-learning applications in domains with critical end-user impact has led to a growing concern about the fairness of learned models, with the goal of avoiding biases that negatively impact specific demographic groups. Most existing bias-mitigation strategies adapt the importance of data instances during pre-processing. Since fairness is a contextual concept, we advocate for an interactive machine-learning approach that enables users to provide iterative feedback for model adaptation. Specifically, we propose to adapt the explanatory interactive machine-learning approach Caipi for fair machine learning. FairCaipi incorporates human feedback in the loop on predictions and explanations to improve the fairness of the model. Experimental results demonstrate that FairCaipi outperforms a state-of-the-art pre-processing bias mitigation strategy in terms of the fairness and the predictive performance of the resulting machine-learning model. We show that FairCaipi can both uncover and reduce bias in machine-learning models and allows us to detect human bias.

26

Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits". Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.

Abstract:
Biased data can lead to unfair machine learning models, highlighting the importance of embedding fairness at the beginning of data analysis, particularly during dataset curation and labeling. In response, we propose Falcon, a scalable fair active learning framework. Falcon adopts a data-centric approach that improves machine learning model fairness via strategic sample selection. Given a user-specified group fairness measure, Falcon identifies samples from "target groups" (e.g., (attribute=female, label=positive)) that are the most informative for improving fairness. However, a challenge arises since these target groups are defined using ground truth labels that are not available during sample selection. To handle this, we propose a novel trial-and-error method, where we postpone using a sample if the predicted label is different from the expected one and falls outside the target group. We also observe the trade-off that selecting more informative samples results in higher likelihood of postponing due to undesired label prediction, and the optimal balance varies per dataset. We capture the trade-off between informativeness and postpone rate as policies and propose to automatically select the best policy using adversarial multi-armed bandit methods, given their computational efficiency and theoretical guarantees. Experiments show that Falcon significantly outperforms existing fair active learning approaches in terms of fairness and accuracy and is more efficient. In particular, only Falcon supports a proper trade-off between accuracy and fairness where its maximum fairness score is 1.8--4.5x higher than the second-best results.
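
The policy-selection step can be illustrated with a generic EXP3 adversarial bandit (the textbook algorithm, not Falcon's implementation; the reward function below is hypothetical):

```python
import numpy as np

def exp3(n_arms, n_rounds, reward_fn, gamma=0.1, seed=0):
    # Textbook EXP3: each arm is a candidate labeling policy and the
    # observed reward (assumed in [0, 1]) is, e.g., the fairness
    # improvement achieved by one labeling round.
    rng = np.random.default_rng(seed)
    w = np.ones(n_arms)
    for _ in range(n_rounds):
        p = (1 - gamma) * w / w.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=p)
        r = reward_fn(arm, rng)                 # play the chosen policy
        w[arm] *= np.exp(gamma * r / (p[arm] * n_arms))
    return w / w.sum()

# Hypothetical rewards: policy 2 helps fairness most on average.
means = [0.3, 0.5, 0.9, 0.4]
print(exp3(4, 500, lambda a, rng: rng.uniform(0, means[a])))
```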

27

Fitzsimons, Jack, AbdulRahman Al Ali, Michael Osborne, and Stephen Roberts. "A General Framework for Fair Regression". Entropy 21, no. 8 (July 29, 2019): 741. http://dx.doi.org/10.3390/e21080741.

Abstract:
Fairness, through its many forms and definitions, has become an important issue facing the machine learning community. In this work, we consider how to incorporate group fairness constraints into kernel regression methods, applicable to Gaussian processes, support vector machines, neural network regression and decision tree regression. Further, we focus on examining the effect of incorporating these constraints in decision tree regression, with direct applications to random forests and boosted trees amongst other widespread popular inference techniques. We show that the order of complexity of memory and computation is preserved for such models and tightly binds the expected perturbations to the model in terms of the number of leaves of the trees. Importantly, the approach works on trained models and hence can be easily applied to models in current use and group labels are only required on training data.

28

Khan, Shahid, Viktor Klochkov, Olha Lavoryk, Oleksii Lubynets, Ali Imdad Khan, Andrea Dubla, and Ilya Selyuzhenkov. "Machine Learning Application for Λ Hyperon Reconstruction in CBM at FAIR". EPJ Web of Conferences 259 (2022): 13008. http://dx.doi.org/10.1051/epjconf/202225913008.

Abstract:
The Compressed Baryonic Matter experiment at FAIR will investigate the QCD phase diagram in the region of high net-baryon densities. Enhanced production of strange baryons, such as the most abundantly produced Λ hyperons, can signal a transition to a new phase of the QCD matter. In this work, the CBM performance for reconstruction of the Λ hyperon via its decay to a proton and a π− is presented. Decay topology reconstruction is implemented in the Particle-Finder Simple (PFSimple) package, with machine learning algorithms providing efficient selection of the decays and a high signal-to-background ratio.

29

Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM". e-Service Journal 14, no. 2 (December 2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.

Abstract:
Developing efficient processes for building machine learning (ML) applications is an emerging topic for research. One of the well-known frameworks for organizing, developing, and deploying predictive machine learning models is the cross-industry standard process for data mining (CRISP-DM). However, the framework does not provide any guidelines for detecting and mitigating different types of fairness-related biases in the development of ML applications. The study of these biases is a relatively recent stream of research. To address this significant theoretical and practical gap, we propose a new framework, Fair CRISP-DM, which groups and maps these biases corresponding to each phase of an ML application development. Through this study, we contribute to the literature on ML development and fairness. We present recommendations to ML researchers on including fairness as part of the ML evaluation process. Further, ML practitioners can use our framework to identify and mitigate fairness-related biases in each phase of an ML project development. Finally, we also discuss emerging technologies which can help developers to detect and mitigate biases in different stages of ML application development.

30

Wei, Jingrui, and Paul M. Voyles. "Foundry-ML: a Platform for FAIR Machine Learning in Materials Science". Microscopy and Microanalysis 29, Supplement_1 (July 22, 2023): 720. http://dx.doi.org/10.1093/micmic/ozad067.355.

31

Gaikar, Asha, Dr Uttara Gogate, and Amar Panchal. "Review on Evaluation of Stroke Prediction Using Machine Learning Methods". International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (April 30, 2023): 1011–17. http://dx.doi.org/10.22214/ijraset.2023.50262.

Abstract:
This research proposes early prediction of stroke using different machine learning approaches such as the logistic regression classifier, decision tree classifier, support vector machine, and random forest classifier. We apply these machine learning algorithms to the same dataset with the same number of features and, by calculating their accuracy, propose a fair comparison of the different stroke prediction algorithms. This will help researchers predict stroke using the best machine learning algorithm.
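
The comparison described here, the same dataset and the same features for every model, takes only a few lines with scikit-learn; the code below uses synthetic stand-in data rather than the study's stroke dataset.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: every model sees the same dataset with the
# same number of features, as the review requires.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```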

32

Guo, Zhihao, Shengyuan Chen, Xiao Huang, Zhiqiang Qian, Chunsing Yu, Yan Xu, and Fang Ding. "Fair Benchmark for Unsupervised Node Representation Learning". Algorithms 15, no. 10 (October 17, 2022): 379. http://dx.doi.org/10.3390/a15100379.

Abstract:
Most machine-learning algorithms assume that instances are independent of each other. This does not hold for networked data. Node representation learning (NRL) aims to learn low-dimensional vectors to represent nodes in a network, such that all actionable patterns in topological structures and side information can be preserved. The widespread availability of networked data, e.g., social media, biological networks, and traffic networks, along with plentiful applications, facilitate the development of NRL. However, it has become challenging for researchers and practitioners to track the state-of-the-art NRL algorithms, given that they were evaluated using different experimental settings and datasets. To this end, in this paper, we focus on unsupervised NRL and propose a fair and comprehensive evaluation framework to systematically evaluate state-of-the-art unsupervised NRL algorithms. We comprehensively evaluate each algorithm by applying it to three evaluation tasks, i.e., classification fine tuned via a validation set, link prediction fine-tuned in the first run, and classification fine tuned via link prediction. In each task and each dataset, all NRL algorithms were fine-tuned using a random search within a fixed amount of time. Based on the results for three tasks and eight datasets, we evaluate and rank thirteen unsupervised NRL algorithms.

33

Ampountolas, Apostolos, Titus Nyarko Nde, Paresh Date, and Corina Constantinescu. "A Machine Learning Approach for Micro-Credit Scoring". Risks 9, no. 3 (March 9, 2021): 50. http://dx.doi.org/10.3390/risks9030050.

Abstract:
In micro-lending markets, lack of recorded credit history is a significant impediment to assessing individual borrowers' creditworthiness and therefore deciding fair interest rates. This research compares various machine learning algorithms on real micro-lending data to test their efficacy at classifying borrowers into various credit categories. We demonstrate that off-the-shelf multi-class classifiers such as random forest algorithms can perform this task very well, using readily available data about customers (such as age, occupation, and location). This provides micro-lending institutions across the developing world with an inexpensive and reliable means to assess creditworthiness in the absence of credit history or central credit databases.

34

Zhang, Yixuan, Boyu Li, Zenan Ling, and Feng Zhou. "Mitigating Label Bias in Machine Learning: Fairness through Confident Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16917–25. http://dx.doi.org/10.1609/aaai.v38i15.29634.

Abstract:
Discrimination can occur when the underlying unbiased labels are overwritten by an agent with potential bias, resulting in biased datasets that unfairly harm specific groups and cause classifiers to inherit these biases. In this paper, we demonstrate that despite only having access to the biased labels, it is possible to eliminate bias by filtering the fairest instances within the framework of confident learning. In the context of confident learning, low self-confidence usually indicates potential label errors; however, this is not always the case. Instances, particularly those from underrepresented groups, might exhibit low confidence scores for reasons other than labeling errors. To address this limitation, our approach employs truncation of the confidence score and extends the confidence interval of the probabilistic threshold. Additionally, we incorporate the co-teaching paradigm to provide a more robust and reliable selection of fair instances and to effectively mitigate the adverse effects of biased labels. Through extensive experimentation and evaluation of various datasets, we demonstrate the efficacy of our approach in promoting fairness and reducing the impact of label bias in machine learning models.
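
A simplified confident-learning-style filter conveys the flavor of the selection step (this is the vanilla per-class self-confidence rule, without the paper's truncation or co-teaching; labels are assumed to be integers 0..K-1):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def flag_suspect_labels(X, y):
    # Out-of-sample probability each instance assigns to its *given*
    # label; an instance is flagged as suspect when that probability
    # falls below the average self-confidence of its label's class.
    proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")
    self_conf = proba[np.arange(len(y)), y]
    thresholds = np.array([self_conf[y == k].mean() for k in np.unique(y)])
    return self_conf < thresholds[y]
```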

35

Menezes, Andreia Duarte, Edilberto Pereira Teixeira, Jose Roberto Delalibera Finzer, and Rafael Bonacin de Oliveira. "Machine learning-driven development of niobium-containing optical glasses". Research, Society and Development 11, no. 9 (July 5, 2022): e13811931290. http://dx.doi.org/10.33448/rsd-v11i9.31290.

Abstract:
High refractive index glasses are essential for old and new optical systems, such as microscopes, telescopes and novel augmented reality lenses and micro projectors. However, a fair portion of these glasses use toxic components, such as PbO, BaO, As2O3, and TeO2, which lead to high refractive indexes and facilitate the melting operation, but are harmful for human beings and the environment. On the other hand, it is known that niobium significantly increases the refractive index and is a non-toxic element. The objective of this paper was to develop new optical glass compositions containing Nb2O5 with a relatively high refractive index (nd > 1.65), intermediate Abbe number (35 < Vd < 55) and fair glass transition temperature, Tg. To this end, we used a machine learning algorithm titled GLAS, which was recently developed at DEMA-UFSCar to produce new optical glasses composition. After running the algorithm 13 times, two of the most promising compositions were chosen and tested for their glass forming ability and other properties. The best composition was analyzed in respect to the refractive index, glass transition temperature and chemical durability. A comparison between the laboratory results and predictions of the artificial neural network indicates that the GLAS algorithm provides adequate formulations and can be immediately used for accelerating the design of new glasses, substantially reducing the laboratory testing effort. Also, the results indicate that niobium glasses might offer some advantages over its main competitor (La2O3).

36

Asher, Nicholas, Lucas De Lara, Soumya Paul, and Chris Russell. "Counterfactual Models for Fair and Adequate Explanations". Machine Learning and Knowledge Extraction 4, no. 2 (March 31, 2022): 316–49. http://dx.doi.org/10.3390/make4020014.

Abstract:
Recent efforts have uncovered various methods for providing explanations that can help interpret the behavior of machine learning programs. Exact explanations with a rigorous logical foundation provide valid and complete explanations, but they have an epistemological problem: they are often too complex for humans to understand and too expensive to compute even with automated reasoning methods. Interpretability requires good explanations that humans can grasp and can compute. We take an important step toward specifying what good explanations are by analyzing the epistemically accessible and pragmatic aspects of explanations. We characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and what we call the conundra of the explainee, the agent that requested the explanation. We provide a correspondence between logical and mathematical formulations for counterfactuals to examine the partiality of counterfactual explanations that can hide biases; we define fair and adequate explanations in such a setting. We provide formal results about the algorithmic complexity of fair and adequate explanations. We then detail two sophisticated counterfactual models, one based on causal graphs, and one based on transport theories. We show transport based models have several theoretical advantages over the competition as explanation frameworks for machine learning algorithms.

37

Mohsin, Farhad, Ao Liu, Pin-Yu Chen, Francesca Rossi, and Lirong Xia. "Learning to Design Fair and Private Voting Rules". Journal of Artificial Intelligence Research 75 (November 30, 2022): 1139–76. http://dx.doi.org/10.1613/jair.1.13734.

Abstract:
Voting is used widely to identify a collective decision for a group of agents, based on their preferences. In this paper, we focus on evaluating and designing voting rules that support both the privacy of the voting agents and a notion of fairness over such agents. To do this, we introduce a novel notion of group fairness and adopt the existing notion of local differential privacy. We then evaluate the level of group fairness in several existing voting rules, as well as the trade-offs between fairness and privacy, showing that it is not possible to always obtain maximal economic efficiency with high fairness or high privacy levels. Then, we present both a machine learning and a constrained optimization approach to design new voting rules that are fair while maintaining a high level of economic efficiency. Finally, we empirically examine the effect of adding noise to create local differentially private voting rules and discuss the three-way trade-off between economic efficiency, fairness, and privacy. This paper appears in the special track on AI & Society.
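
For the privacy leg of that trade-off, local differential privacy for plurality voting can be obtained with k-ary randomized response; a minimal sketch with hypothetical vote counts (not the paper's learned rules):

```python
import numpy as np

def randomized_response(votes, k, eps, rng):
    # k-ary randomized response: report the true choice with
    # probability e^eps / (e^eps + k - 1), otherwise report a
    # uniformly random *other* candidate (eps-local DP per voter).
    p_true = np.exp(eps) / (np.exp(eps) + k - 1)
    out = votes.copy()
    flip = rng.random(len(votes)) >= p_true
    out[flip] = (votes[flip] + rng.integers(1, k, flip.sum())) % k
    return out

rng = np.random.default_rng(0)
votes = rng.choice(3, size=10_000, p=[0.45, 0.35, 0.20])  # hypothetical
noisy = randomized_response(votes, k=3, eps=1.0, rng=rng)
print("true winner:", np.bincount(votes).argmax(),
      "| private winner:", np.bincount(noisy).argmax())
```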

38

Goretzko, David, and Laura Sophia Finja Israel. "Pitfalls of Machine Learning-Based Personnel Selection". Journal of Personnel Psychology 21, no. 1 (January 2022): 37–47. http://dx.doi.org/10.1027/1866-5888/a000287.

Abstract:
In recent years, machine learning (ML) modeling (often referred to as artificial intelligence) has become increasingly popular for personnel selection purposes. Numerous organizations use ML-based procedures for screening large candidate pools, while some companies try to automate the hiring process as far as possible. Since ML models can handle large sets of predictor variables and are therefore able to incorporate many different data sources (often more than common procedures can consider), they promise a higher predictive accuracy and objectivity in selecting the best candidate than traditional personnel selection processes. However, there are some pitfalls and challenges that have to be taken into account when using ML for an issue as sensitive as personnel selection. In this paper, we address these major challenges, namely the definition of a valid criterion, transparency regarding collected data and decision mechanisms, algorithmic fairness, changing data conditions, and adequate performance evaluation, and discuss some recommendations for implementing fair, transparent, and accurate ML-based selection algorithms.

39

Yugam Bajaj and Shallu Bashambu. "Traffic Signs Detection Using Machine Learning Algorithms". International Journal for Modern Trends in Science and Technology 6, no. 11 (November 23, 2020): 109–12. http://dx.doi.org/10.46501/ijmtst061119.

Abstract:
With the rapid advancements and developments in the automobile industry, the day is not far when each of us will own an autonomous vehicle. Manufacturing a foolproof autonomous vehicle, however, has its own fair share of challenges. The main challenge before us is incorporating the latest technologies and advancements into the conventional vehicles we already have. This paper discusses one such technology that can be incorporated in a vehicle to steer a conventional vehicle toward becoming an autonomous vehicle in the future. The user would be able to classify traffic signs on the road, which would help him/her understand what each sign signifies, i.e., what rules the driver must follow while driving on that particular road. We use machine learning classification algorithms such as k-nearest neighbors, random forest, and support vector machine on our dataset and compare the best accuracies they achieve.

40

Zhao, Yanqi, Yong Yu, Yannan Li, Gang Han, and Xiaojiang Du. "Machine learning based privacy-preserving fair data trading in big data market". Information Sciences 478 (April 2019): 449–60. http://dx.doi.org/10.1016/j.ins.2018.11.028.

41

Luo, Xi, Ran Yan, Shuaian Wang, and Lu Zhen. "A fair evaluation of the potential of machine learning in maritime transportation". Electronic Research Archive 31, no. 8 (2023): 4753–72. http://dx.doi.org/10.3934/era.2023243.

Abstract:
Machine learning (ML) techniques are extensively applied to practical maritime transportation issues. Due to the difficulty and high cost of collecting large volumes of data in the maritime industry, in many maritime studies, ML models are trained with small training datasets. The relative predictive performances of these trained ML models are then compared with each other and with the conventional model using the same test set. The ML model that performs the best out of the ML models and better than the conventional model on the test set is regarded as the most effective in terms of this prediction task. However, in scenarios with small datasets, this common process may lead to an unfair comparison between the ML and the conventional model. Therefore, we propose a novel process to fairly compare multiple ML models and the conventional model. We first select the best ML model in terms of predictive performance for the validation set. Then, we combine the training and the validation sets to retrain the best ML model and compare it with the conventional model on the same test set. Based on historical port state control (PSC) inspection data, we examine both the common process and the novel process in terms of their ability to fairly compare ML models and the conventional model. The results show that the novel process is more effective at fairly comparing the ML models with the conventional model on different test sets. Therefore, the novel process enables a fair assessment of ML models' ability to predict key performance indicators in the context of limited data availability in the maritime industry, such as predicting the ship fuel consumption and port traffic volume, thereby enhancing their reliability for real-world applications.
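
The proposed process is mechanical enough to sketch directly: pick the best ML model on the validation set, retrain it on training plus validation data, then make a single comparison against the conventional model on the untouched test set. The data and candidate models below are hypothetical stand-ins.

```python
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)                 # hypothetical small dataset
X = rng.normal(size=(300, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=300)

X_tmp, X_te, y_tmp, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25,
                                            random_state=0)

# Step 1: choose the best ML model on the validation set only.
candidates = [RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)]
best = min(candidates, key=lambda m: mean_squared_error(
    y_val, m.fit(X_tr, y_tr).predict(X_val)))

# Step 2: retrain the winner on train + validation, then compare it
# with the conventional model on the same held-out test set.
X_full, y_full = np.vstack([X_tr, X_val]), np.concatenate([y_tr, y_val])
ml = clone(best).fit(X_full, y_full)
conventional = LinearRegression().fit(X_full, y_full)
print("ML test MSE:          ", mean_squared_error(y_te, ml.predict(X_te)))
print("conventional test MSE:", mean_squared_error(y_te, conventional.predict(X_te)))
```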

42

Mudarakola, Lakshmi Prasad, D. Shabda Prakash, K. L. N. Shashidhar, and D. Yaswanth. "Car Price Prediction Using Machine Learning". International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (May 31, 2024): 81–87. http://dx.doi.org/10.22214/ijraset.2024.61441.

Abstract:
The objective of the "Car Price Prediction Using Machine Learning" project is to anticipate car prices based on pertinent variables by utilizing predictive modelling and advanced data analytics. This initiative addresses the rising need for precise and adaptive pricing systems in the automobile sector. The system evaluates past vehicle data, including brand, model, manufacturing year, mileage, fuel type, and other details, by utilizing machine learning techniques. To produce specific predictions, the proposed model is trained on an extensive dataset, capturing patterns and relationships within the data. The research focuses on using regression techniques, such as linear regression or ensemble approaches, to create a reliable prediction model. Metrics such as mean absolute error and the R-squared coefficient will be used to evaluate the predictive accuracy of the system and determine its usefulness. If deployed successfully, this tool will have a significant impact on the automobile business for buyers and sellers alike, giving them a way to evaluate fair market prices and helping them make decisions. This study contributes to the body of knowledge in price analysis using machine learning and lays the foundation for future improvements in forecasting dynamic market trends in the automobile sector.
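
A baseline version of such a pipeline, with the two metrics the abstract names, fits in a few lines; the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cars.csv")                    # hypothetical file
X = pd.get_dummies(df[["brand", "model_year", "mileage_km", "fuel_type"]])
y = df["price"]                                 # hypothetical columns

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
reg = RandomForestRegressor(n_estimators=300, random_state=0)  # ensemble
reg.fit(X_tr, y_tr)

pred = reg.predict(X_te)
print("MAE:", mean_absolute_error(y_te, pred))  # metrics named in abstract
print("R^2:", r2_score(y_te, pred))
```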
Styles: ABNT, Harvard, Vancouver, APA, etc.
45

Buijs, Maria Magdalena, Mohammed Hossain Ramezani, Jürgen Herp, Rasmus Kroijer, Morten Kobaek-Larsen, Gunnar Baatrup, and Esmaeil S. Nadimi. "Assessment of bowel cleansing quality in colon capsule endoscopy using machine learning: a pilot study". Endoscopy International Open 06, no. 08 (August 2018): E1044–E1050. http://dx.doi.org/10.1055/a-0627-7136.

Full text source
Summary:
Abstract: Background and study aims: The aim of this study was to develop a machine learning-based model to classify bowel cleansing quality and to test this model in comparison to a pixel analysis model and assessments by four colon capsule endoscopy (CCE) readers. Methods: A pixel analysis model and a machine learning-based model with four cleanliness classes (unacceptable, poor, fair and good) were developed to classify CCE videos. Cleansing assessments by four CCE readers in 41 videos from a previous study were compared to the results both models yielded in this pilot study. Results: The machine learning-based model classified 47% of the videos in agreement with the averaged classification by CCE readers, compared to 32% by the pixel analysis model. A difference of more than one class was detected in 12% of the videos by the machine learning-based model and in 32% by the pixel analysis model, as the latter tended to overestimate cleansing quality. A specific analysis of unacceptable videos found that the pixel analysis model classified almost all of them as fair or good, whereas the machine learning-based model identified five out of 11 videos, in agreement with at least one CCE reader, as unacceptable. Conclusions: The machine learning-based model was superior to the pixel analysis model in classifying bowel cleansing quality, due to a higher sensitivity to unacceptable and poor cleansing quality. The machine learning-based model can be further improved by coming to a consensus, by means of an expert panel, on how to classify the cleanliness of a complete CCE video.
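The two agreement figures quoted above (the share of videos matching the averaged reader classification, and the share off by more than one class) can be computed as below. This is a minimal sketch on synthetic labels, assuming the four-class scale is treated as ordinal; it is not the study's code.

```python
# Illustrative computation of exact agreement with the (rounded) average
# reader score, and of the share of videos off by more than one class.
# Labels are synthetic, not the study's data.
import numpy as np

CLASSES = ["unacceptable", "poor", "fair", "good"]   # ordinal scale, index 0..3
rng = np.random.default_rng(1)
reader_scores = rng.integers(0, 4, size=(41, 4))     # 41 videos, 4 CCE readers
model_scores = rng.integers(0, 4, size=41)           # model's class per video

reference = np.rint(reader_scores.mean(axis=1)).astype(int)  # averaged readers
diff = np.abs(model_scores - reference)
print("exact agreement:", round(float(np.mean(diff == 0)), 2))
print("off by >1 class:", round(float(np.mean(diff > 1)), 2))
```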
Styles: ABNT, Harvard, Vancouver, APA, etc.
46

Covaci, Florina. "Machine Learning Empowered Insights into Rental Market Behavior". Journal of Economics, Finance and Accounting Studies 6, no. 2 (April 23, 2024): 143–55. http://dx.doi.org/10.32996/jefas.2024.6.2.11.

Full text source
Summary:
The aim of the current study is to determine which models are most suited for forecasting a property's rental price given a variety of provided characteristics and to develop a predictive model using machine learning techniques to estimate the rental prices of apartments in Cluj-Napoca, Romania, in relation to market dynamics. Given the absence of a comprehensive dataset tailored for this specific purpose, a primary focus was placed on data acquisition, cleaning, and transformation processes. By leveraging this dataset, the model aims to provide accurate predictions of fair rental prices within the Cluj-Napoca real estate market. Additionally, the research explores the factors influencing rental prices and evaluates the model's performance against real-world data to assess its practical utility and effectiveness in aiding rental market stakeholders.
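Since the study emphasises data acquisition, cleaning, and transformation, the following is a minimal, assumed example of that stage: raw scraped listing fields are coerced to numeric types, implausible rents are filtered out, and a derived feature is added. All field names and thresholds are hypothetical, not the study's actual dataset.

```python
# Assumed cleaning/transformation step for scraped rental listings.
import pandas as pd

raw = pd.DataFrame({
    "rent_eur": ["450", "600", "n/a", "12000", "550"],
    "area_m2": ["52", "70", "45", "60", "abc"],
    "rooms": [2, 3, 1, 2, 2],
})
df = raw.apply(pd.to_numeric, errors="coerce").dropna()   # fix types, drop bad rows
df = df[df["rent_eur"].between(100, 5000)]                # crude outlier filter
df["eur_per_m2"] = df["rent_eur"] / df["area_m2"]         # derived feature
print(df)
```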
Styles: ABNT, Harvard, Vancouver, APA, etc.
47

Pemmaraju Satya Prem. "Machine learning in employee performance evaluation: A HRM perspective". International Journal of Science and Research Archive 11, no. 1 (February 28, 2024): 1573–85. http://dx.doi.org/10.30574/ijsra.2024.11.1.0193.

Full text source
Summary:
This research explores how machine learning, the tech whiz kid, is shaking up the traditional HR world of employee reviews. Companies are hungry for better ways to assess their workforce, and machine learning comes to the table with a buffet of data analysis tools. Productivity metrics, project wins, even that qualitative feedback, all get crunched by these algorithms to dish up comprehensive and objective performance pictures. This research dives into both the pros and cons of bringing this tech titan into HR, aiming to make evaluations not just accurate, but also fair and impactful. It's about finding the sweet spot where technology and human understanding join forces to power up performance reviews for the modern workplace. In the ever-evolving dance of the modern workplace, where performance reigns supreme, the old waltz of subjective evaluations stumbles toward obsolescence. This study steps into the spotlight, exploring the transformative potential of machine learning as the new lead partner in HRM, specifically for crafting nuanced and objective performance assessments. Imagine algorithms like virtuoso musicians, weaving together diverse data melodies – productivity's staccato riffs, project outcomes' triumphant crescendos, even the subtle whispers of qualitative feedback – to paint a vibrant portrait of individual contributions. But this data-driven tango isn't without its tricky steps. This research scrutinizes both the grace and potential missteps of machine learning in HRM, aiming to illuminate a path toward optimized performance evaluations that are not only accurate but also fair and effective. By bridging the gap between technology and human understanding, this study offers a roadmap for organizations waltzing towards a future where performance appraisals unlock the full potential of their workforce.
Styles: ABNT, Harvard, Vancouver, APA, etc.
48

Lakshmi, Metta Dhana, Jani Revathi, Chichula Sravani, Maddila Adarsa Suhas, and Balagam Umesh. "Comparative Analysis of Ride-On-Demand Services for Fair Price Detection Using Machine Learning". International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 2557–66. http://dx.doi.org/10.22214/ijraset.2024.60337.

Full text source
Summary:
Abstract: The project titled "Comparative Analysis of Ride-On-Demand Services for Fair Price Detection Using Machine Learning" aims to investigate and evaluate the methodologies employed by different ride-on-demand platforms to determine equitable pricing through the application of machine learning algorithms. The primary focus of this research is to assess the effectiveness, transparency, and adaptability of pricing mechanisms in the context of dynamic factors such as geographical location, time of day, cab type, source, destination, and weather conditions. The project involves a comprehensive comparative analysis of various ride-on-demand services, exploring the diversity of machine learning models utilized for fair price detection. The study will delve into the accuracy of price predictions, considering real-time demand fluctuations and the adaptability of algorithms to dynamic operational environments. Transparency in pricing decisions will be a key parameter for evaluation, as clear and understandable explanations are crucial for establishing user trust. The research methodology includes data collection from multiple ride-on-demand platforms such as Uber, Ola, Rapido, and Indrive, analysis of pricing algorithms, and the development of performance metrics to assess the fairness and efficacy of each service. The project aims to provide insights into best practices for implementing machine learning in ride-on-demand services, with the ultimate goal of enhancing user experience and fostering trust within the user community. The findings of this comparative analysis will contribute valuable knowledge to the field of transportation technology and assist in shaping future advancements in fair price detection mechanisms.
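A hedged sketch of the comparative setup described here appears below: the same fare model is fitted per platform and the test error compared. The platform names come from the abstract; the features, data generator, and model choice are fabricated placeholders rather than the project's methodology.

```python
# Illustrative per-platform fare-model comparison on synthetic ride data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def fake_rides(n=400):
    # Fabricated feature set loosely matching the abstract's factors.
    return pd.DataFrame({
        "distance_km": rng.uniform(1, 25, n),
        "hour": rng.integers(0, 24, n),
        "is_raining": rng.integers(0, 2, n),
        "cab_type": rng.integers(0, 3, n),   # e.g. bike / auto / sedan, encoded
    }).assign(fare=lambda d: 30 + 12 * d.distance_km
              + 15 * d.is_raining + rng.normal(0, 10, n))

for platform in ["Uber", "Ola", "Rapido", "Indrive"]:
    data = fake_rides()
    X, y = data.drop(columns="fare"), data["fare"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(platform, "MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```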
Styles: ABNT, Harvard, Vancouver, APA, etc.
49

Chakraborty, Pratic. "Embedded Machine Learning and Embedded Systems in the Industry." International Journal for Research in Applied Science and Engineering Technology 9, no. 11 (November 30, 2021): 1872–75. http://dx.doi.org/10.22214/ijraset.2021.39067.

Full text source
Summary:
Abstract: Machine learning is the buzzword right now. With machine learning algorithms, one can make a computer differentiate between a human and a cow, detect objects, predict different parameters, and process our native languages. But all these algorithms require a fair amount of processing power to be trained and fitted as a model. Thankfully, with current improvements in technology, the processing power of computers has significantly increased. There are, however, limits to the power consumption and deployability of a server computer. This is where "tinyML" helps the industry out. Machine learning has never been so easy to access before!
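To make the tinyML point concrete, the sketch below shows one common workflow, assuming TensorFlow is available: a tiny Keras model is trained, then converted to a quantized TFLite flatbuffer small enough to fit in a microcontroller's flash. The model, data, and task are illustrative, not the paper's example.

```python
# Assumed tinyML workflow: train a tiny model, shrink it for deployment.
import numpy as np
import tensorflow as tf

X = np.random.rand(500, 4).astype("float32")
y = (X.sum(axis=1) > 2).astype("float32")          # toy binary target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
tflite_bytes = converter.convert()
print("model size on flash:", len(tflite_bytes), "bytes")
```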
Styles: ABNT, Harvard, Vancouver, APA, etc.
50

Fazelpour, Sina, and Maria De-Arteaga. "Diversity in sociotechnical machine learning systems". Big Data & Society 9, no. 1 (January 2022): 205395172210820. http://dx.doi.org/10.1177/20539517221082027.

Full text source
Summary:
There has been a surge of recent interest in sociocultural diversity in machine learning research. Currently, however, there is a gap between discussions of measures and benefits of diversity in machine learning, on the one hand, and the broader research on the underlying concepts of diversity and the precise mechanisms of its functional benefits, on the other. This gap is problematic because diversity is not a monolithic concept. Rather, different concepts of diversity are based on distinct rationales that should inform how we measure diversity in a given context. Similarly, the lack of specificity about the precise mechanisms underpinning diversity’s potential benefits can result in uninformative generalities, invalid experimental designs, and illicit interpretations of findings. In this work, we draw on research in philosophy, psychology, and social and organizational sciences to make three contributions: First, we introduce a taxonomy of different diversity concepts from philosophy of science, and explicate the distinct epistemic and political rationales underlying these concepts. Second, we provide an overview of mechanisms by which diversity can benefit group performance. Third, we situate these taxonomies of concepts and mechanisms in the lifecycle of sociotechnical machine learning systems and make a case for their usefulness in fair and accountable machine learning. We do so by illustrating how they clarify the discourse around diversity in the context of machine learning systems, promote the formulation of more precise research questions about diversity’s impact, and provide conceptual tools to further advance research and practice.
Styles: ABNT, Harvard, Vancouver, APA, etc.